Multimodal RAG Implementation Prompt

Implementation Challenge

Prompt Content

Implement the multimodal RAG pipeline using LlamaIndex. Focus on handling various document types (text, PDF, and simulated image descriptions). How will you generate and store embeddings for these diverse inputs, and how will your RAG pipeline retrieve the context most relevant to a given query, potentially spanning multiple modalities?
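
One way a response might approach this is sketched below: image inputs are represented as plain-text descriptions wrapped in Document objects and tagged with a modality field, so a single text-embedding model can index text, PDF, and image-derived content in one vector store, and retrieved results can span modalities via that metadata. This is a minimal sketch, not a definitive implementation; it assumes llama-index >= 0.10 (the llama_index.core namespace), a default embedding model configured via an API key in the environment, and pypdf installed for PDF parsing. The data/ folder and the image_descriptions dictionary are hypothetical placeholders.

```python
# Minimal multimodal-by-metadata RAG sketch with LlamaIndex.
# Assumes: llama-index >= 0.10, an embedding API key in the environment,
# pypdf for PDF parsing. Paths and sample data below are hypothetical.
from llama_index.core import Document, SimpleDirectoryReader, VectorStoreIndex

# 1. Load text and PDF files from a local folder and tag their modality.
docs = SimpleDirectoryReader(input_dir="data/").load_data()
for d in docs:
    d.metadata["modality"] = "text"

# 2. Represent images as simulated text descriptions, tagged as "image",
#    so the same text-embedding model can index every input type.
image_descriptions = {
    "chart_q3.png": "Bar chart of Q3 revenue by region; APAC leads at 42%.",
}
docs += [
    Document(text=desc, metadata={"modality": "image", "source": name})
    for name, desc in image_descriptions.items()
]

# 3. Embed and store everything in a single (in-memory) vector index.
index = VectorStoreIndex.from_documents(docs)

# 4. Retrieve the top-k chunks for a query; hits may span modalities,
#    and the metadata shows which modality each retrieved node came from.
query_engine = index.as_query_engine(similarity_top_k=5)
response = query_engine.query("Which region drove Q3 revenue growth?")
print(response)
for hit in response.source_nodes:
    print(hit.node.metadata.get("modality"), "-", hit.score)
```

An answer to the prompt could extend this baseline with a true multimodal embedding model or a separate image index, but tagging every input with a modality and embedding it as text keeps the storage and retrieval path uniform for a first version.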


Usage Tips

Copy the prompt and paste it into your preferred AI tool (Claude, ChatGPT, or Gemini).

Customize placeholder values with your specific requirements and context.

For best results, provide clear examples and test different variations of the prompt.