Integrate GPT-5 Pro and Gemini 3 Flash via AI21 Studio/Together AI
Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.
Linked challenge: Editorial Compliance & Content Neutrality System Agent
Prompt source
Original prompt text with formatting preserved for inspection.
Implement the 'Bias Analyzer' node using GPT-5 Pro and the 'Fact Checker' node using Gemini 3 Flash within your LangGraph setup. For GPT-5 Pro, focus on its ability to identify subtle linguistic cues, rhetorical strategies, and omitted information that indicate bias. For Gemini 3 Flash, create a `Tool` that it can use to query a mock external fact database (e.g., a dictionary of verified claims or a simple API for public data). Utilize AI21 Studio for deploying GPT-5 Pro inference and Together AI for serving Gemini 3 Flash models, ensuring high performance and availability. Ensure API keys and endpoints are configured securely for both models.
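The prompt assumes an `AgentState` schema and an `extract_claims_from_content` helper carried over from Prompt 1, neither of which is shown here. A minimal sketch of what those could look like (the field names and the sentence-splitting heuristic are illustrative assumptions inferred from how the node code uses them, not part of the original):

```python
from typing import List, TypedDict


class AgentState(TypedDict, total=False):
    """Shared graph state; field names are assumptions based on the node code."""

    content_to_review: str
    bias_analysis: str
    fact_check_results: List[dict]
    has_issues: bool


def extract_claims_from_content(content: str) -> List[str]:
    """Naive placeholder: treat each sentence as a separately checkable claim."""
    return [sentence.strip() for sentence in content.split(".") if sentence.strip()]
```

A real implementation would likely extract claims with an LLM call rather than sentence splitting; this stub only makes the fragment self-contained.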
```python
from __future__ import annotations

import os

from langchain_core.tools import tool
from langchain_google_genai import ChatGoogleGenerativeAI  # For Gemini 3 Flash
from langchain_openai import ChatOpenAI  # For GPT-5 Pro (assuming an OpenAI-compatible endpoint)

# AgentState, extract_claims_from_content, and graph_builder are defined in Prompt 1.

# --- GPT-5 Pro via AI21 Studio (conceptual integration) ---
# AI21 Studio models may ship their own LangChain integration or expose an
# OpenAI-compatible API; here we assume the latter.
gpt5_pro_llm = ChatOpenAI(
    model="gpt-5-pro",  # Placeholder; use the actual model ID from AI21 Studio
    openai_api_base=os.getenv("AI21_STUDIO_API_BASE_URL", "https://api.ai21.com/v1"),
    api_key=os.getenv("AI21_STUDIO_API_KEY"),  # Your AI21 Studio API key
    temperature=0.3,
)


def bias_analyzer_node(state: AgentState) -> AgentState:
    """Bias Analyzer node backed by gpt5_pro_llm."""
    analysis_prompt = (
        f"Analyze the following text for bias and provide reasoning: {state['content_to_review']}"
    )
    response = gpt5_pro_llm.invoke(analysis_prompt)
    state["bias_analysis"] = response.content
    state["has_issues"] = "biased" in response.content.lower()  # Simple check
    return state


# --- Gemini 3 Flash via Together AI (conceptual integration) ---
gemini3_flash_llm = ChatGoogleGenerativeAI(
    model="gemini-3-flash",  # Model name as provided by Together AI for Gemini 3 Flash
    base_url=os.getenv("TOGETHER_AI_API_BASE_URL", "https://api.together.ai/v1"),  # Together AI endpoint
    api_key=os.getenv("TOGETHER_AI_API_KEY"),  # Your Together AI API key
    temperature=0.1,
)


# Custom tool for fact-checking with Gemini 3 Flash.
@tool
def search_fact_database(query: str) -> str:
    """Searches a mock fact database for information related to the query.

    Returns 'Verified' or 'Unverified' with context.
    """
    # Simulate a database lookup.
    if "economy will be destroyed" in query.lower():
        return "Unverified. Economic forecasts vary; no consensus on 'destruction'."
    return "Verified. General statement."  # Mock response


def fact_checker_node(state: AgentState) -> AgentState:
    """Fact Checker node backed by gemini3_flash_llm and search_fact_database."""
    claims_to_check = extract_claims_from_content(state["content_to_review"])  # Custom function
    checked_facts = []
    for claim in claims_to_check:
        tool_response = search_fact_database.invoke({"query": claim})
        checked_facts.append({"claim": claim, "status": tool_response})
    state["fact_check_results"] = checked_facts
    state["has_issues"] = state["has_issues"] or any(
        "Unverified" in f["status"] for f in checked_facts
    )
    return state


# Register the node functions on the graph_builder defined in Prompt 1.
graph_builder.add_node("bias_analyzer", bias_analyzer_node)
graph_builder.add_node("fact_checker", fact_checker_node)
```

Adaptation plan
Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.
Keep stable
Hold the task contract and output shape stable so generated implementations remain comparable.
Tune next
Update libraries, interfaces, and environment assumptions to match the stack you actually run.
Verify after
Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
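For the "verify after" step, the fact-checking logic can be exercised without model calls, API keys, or LangChain. A self-contained sketch that mirrors the mock database logic from the code block above (plain functions standing in for the `@tool`-wrapped version and the node loop):

```python
def search_fact_database(query: str) -> str:
    """Mirror of the mock fact lookup above, without the @tool wrapper."""
    if "economy will be destroyed" in query.lower():
        return "Unverified. Economic forecasts vary; no consensus on 'destruction'."
    return "Verified. General statement."


def fact_check(claims: list) -> tuple:
    """Run each claim through the mock database and flag any unverified result."""
    results = [{"claim": c, "status": search_fact_database(c)} for c in claims]
    return results, any("Unverified" in r["status"] for r in results)


results, has_issues = fact_check([
    "The economy will be destroyed by this policy.",
    "The report cites public employment data.",
])
print(has_issues)  # → True: the first claim trips the mock 'Unverified' branch
```

Checks like this pin down the edge-case behavior (which substrings flip `has_issues`) before any hidden context or secrets enter the loop.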