
Integrate Together AI for Multi-Model Inference

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: Strategic Market Expansion Agent Team with OpenAI Agents SDK and Multi-Model Inference


Prompt source

Original prompt text with formatting preserved for inspection.

Refine the `qualitative_analysis_tool` and `quantitative_modeling_tool` functions to make actual API calls to Together AI, specifying Claude 4 Sonnet and Gemini 3 Flash respectively. Ensure proper API key handling and error management. Show the Python code for these tool implementations, demonstrating the Together AI API calls.

```python
import os

from together import Together

# Read the API key from the environment rather than hard-coding it in source.
client = Together(api_key=os.environ["TOGETHER_API_KEY"])


def call_together_api(model: str, prompt: str) -> str:
    """Send a single-turn chat request to Together AI and return the reply text."""
    try:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            max_tokens=500,
        )
        return response.choices[0].message.content
    except Exception as exc:
        # Surface API/network failures with enough context for the caller.
        raise RuntimeError(f"Together AI call failed for model {model}: {exc}") from exc


def qualitative_analysis_tool(topic: str) -> str:
    prompt = (
        f"Perform a qualitative sentiment analysis on recent news and trends "
        f"concerning {topic}. What are the key strategic implications?"
    )
    return call_together_api("claude-4-sonnet", prompt)  # Adjust model ID as per Together AI list


def quantitative_modeling_tool(data: str) -> str:
    prompt = (
        f"Given the following financial data: {data}, project key performance "
        f"indicators for the next two quarters and identify potential risks."
    )
    return call_together_api("gemini-3-flash", prompt)  # Adjust model ID as per Together AI list
```
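The prompt asks for "error management" but leaves the policy open. One minimal sketch, assuming transient failures (rate limits, timeouts) are worth retrying with exponential backoff: the `transport` callable here is a hypothetical stand-in for the actual Together AI call, injected so the retry logic stays independent of any one SDK.

```python
import time
from typing import Callable


def call_with_retries(
    transport: Callable[[str, str], str],
    model: str,
    prompt: str,
    max_attempts: int = 3,
    base_delay: float = 1.0,
) -> str:
    """Retry a model call with exponential backoff, re-raising after the final attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            return transport(model, prompt)
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: let the caller see the real error
            # Back off 1x, 2x, 4x, ... the base delay before retrying.
            time.sleep(base_delay * 2 ** (attempt - 1))
    raise RuntimeError("unreachable")  # loop always returns or raises
```

Each tool function could then route through `call_with_retries(call_together_api, model, prompt)` instead of calling the API directly.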

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Hold the task contract and output shape stable so generated implementations remain comparable.

Tune next

Update libraries, interfaces, and environment assumptions to match the stack you actually run.

Verify after

Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
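The failure-handling check above can run without live API calls if the tool accepts its API caller as a parameter. A hypothetical harness, assuming a dependency-injected variant of `qualitative_analysis_tool` (the real implementation would default this parameter to the Together AI call):

```python
from typing import Callable


def qualitative_analysis_tool(topic: str, call_api: Callable[[str, str], str]) -> str:
    """Variant of the tool that takes the API caller, so tests can inject a fake."""
    prompt = (
        f"Perform a qualitative sentiment analysis on recent news and trends "
        f"concerning {topic}. What are the key strategic implications?"
    )
    return call_api("claude-4-sonnet", prompt)


def fake_success(model: str, prompt: str) -> str:
    # Echo the model ID so the test can confirm routing.
    return f"[{model}] analysis delivered"


def fake_failure(model: str, prompt: str) -> str:
    raise TimeoutError("simulated Together AI outage")


# Happy path: the fake response comes back tagged with the expected model.
result = qualitative_analysis_tool("EV batteries", fake_success)

# Failure path: transport errors should propagate, not vanish silently.
outage_seen = False
try:
    qualitative_analysis_tool("EV batteries", fake_failure)
except TimeoutError:
    outage_seen = True
```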