
Optimize Routing with Ray Tune and Integrate Ellipsis

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: Google ADK Multi-Model Inference Routing with DeepSeek R1 for Cerebras/Trainium Optimization

Format: Code-aware
Lines: 16
Sections: 1

Prompt source

Original prompt text with formatting preserved for inspection.

16 lines
1 section
No variables
1 code block
Set up an experimentation pipeline using Ray Tune to optimize the parameters for your inference routing agent (e.g., thresholds for switching between DeepSeek R1 and simulated Trainium based on prompt length, complexity, or user priority). Concurrently, integrate Ellipsis as a monitoring and control interface, allowing real-time adjustments to routing policies and viewing of experiment results. Provide code snippets for defining a Ray Tune experiment and for Ellipsis interaction.

```python
import ray
from ray import tune

# Placeholder for your inference routing function that Ray Tune will optimize
def train_router_policy(config):
    # Simulate routing decisions and measure metrics
    latency_penalty = config["latency_weight"] * 0.1
    cost_penalty = config["cost_weight"] * 0.01
    # In a real scenario, this would call your ADK agent and run inferences
    # Return a metric for Ray Tune to optimize, e.g., 'combined_score'
    return {"combined_score": -(latency_penalty + cost_penalty)}

# Configure Ray Tune experiment
# tune.run(
#     train_router_policy,
#     config={
#         "latency_weight": tune.uniform(0.1, 1.0),
#         "cost_weight": tune.uniform(0.1, 1.0),
#     },
#     num_samples=10,
# )

# Ellipsis integration (conceptual - assumes Ellipsis API or SDK for messaging)
class EllipsisMonitor:
    def send_alert(self, message: str):
        print(f"Ellipsis Alert: {message}")

    def get_user_command(self) -> str:
        # Simulate getting command from Ellipsis interface
        return ""

# Example usage in the agent system
# monitor = EllipsisMonitor()
# monitor.send_alert("New optimal routing policy identified by Ray Tune!")
```
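To make the "thresholds for switching" concrete, here is a minimal sketch of the routing policy those tuned parameters would feed into. `RoutingConfig`, `route_request`, and the backend labels are illustrative assumptions, not part of Google ADK or any real SDK; the threshold values stand in for whatever a Ray Tune run would select.

```python
from dataclasses import dataclass

@dataclass
class RoutingConfig:
    # Hypothetical tunable thresholds; Ray Tune would search over these.
    max_prompt_length: int = 512   # prompts longer than this go to DeepSeek R1
    priority_cutoff: int = 2       # user priority at or above this forces R1

def route_request(prompt: str, user_priority: int, config: RoutingConfig) -> str:
    """Return the backend for this request: 'deepseek-r1' or 'trainium-sim'."""
    if user_priority >= config.priority_cutoff:
        return "deepseek-r1"
    if len(prompt) > config.max_prompt_length:
        return "deepseek-r1"
    return "trainium-sim"

# Example: a short, low-priority prompt stays on the simulated Trainium path.
print(route_request("Summarize this.", user_priority=0, config=RoutingConfig()))
# → trainium-sim
```

Wrapping the thresholds in a config object keeps the policy a pure function of its parameters, which is exactly the shape Ray Tune's `config` dict expects to plug into.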

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Preserve the rubric, target behavior, and pass-fail criteria as the baseline for evaluation.

Tune next

Adjust fixtures, mocks, and thresholds to the system under test instead of weakening the assertions.

Verify after

Make sure the prompt catches regressions instead of just mirroring the happy-path examples.
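A regression check of this kind can be sketched as a small self-contained test. The `route` policy and its thresholds below are illustrative stand-ins for whatever the tuned agent produces; the point is that the assertions probe boundary cases rather than mirroring the happy path.

```python
def route(prompt_length: int, priority: int) -> str:
    # Tuned thresholds would come from the Ray Tune run; fixed here for the test.
    if priority >= 2 or prompt_length > 512:
        return "deepseek-r1"
    return "trainium-sim"

def test_routing_regressions():
    # Exactly at the length threshold stays on the cheap path...
    assert route(512, 0) == "trainium-sim"
    # ...one token past it switches backends...
    assert route(513, 0) == "deepseek-r1"
    # ...and priority overrides a short prompt.
    assert route(10, 2) == "deepseek-r1"

test_routing_regressions()
print("all routing regression checks passed")
```

If a future prompt revision weakens the threshold logic, the boundary assertions fail; adjust the fixtures (the threshold constants) to the system under test rather than loosening the assertions themselves.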