The AI RAG Engine
Unlock the power of cross-chapter intelligence. Learn how Portl AI remembers your entire build history.
The biggest problem with AI-assisted coding is Context Amnesia: you build a beautiful Auth system in one prompt, but by the time you move to the next file, the AI "forgets" what you just built. Portl fixes this with a custom RAG (Retrieval-Augmented Generation) pipeline.
"Context shouldn't be a luxury. It should be the foundation."
How it Works
Behind every Narrative project, Portl's backend maintains a lightweight Index of Intent. When you ask a question in the **Ask Portl** panel, the system performs a three-step lookup:
- Local Context: It reads the code of the chapter you're currently viewing.
- Meta-Retrieval: It fetches the AI-generated summaries (prose) of every other chapter in that same project.
- Narrative Synthesis: It combines these into a single context-rich prompt for the model.
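The three-step lookup above can be sketched roughly as follows. This is a minimal illustration of the idea, not Portl's actual implementation; the `Chapter` structure, the `build_prompt` helper, and the prompt layout are all assumptions for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class Chapter:
    title: str
    code: str      # the chapter's source code
    summary: str   # AI-generated prose summary of the chapter

def build_prompt(project: list[Chapter], current_index: int, question: str) -> str:
    """Hypothetical sketch of the three-step lookup described above."""
    current = project[current_index]

    # 1. Local Context: the full code of the chapter being viewed.
    local_context = f"### Current chapter: {current.title}\n{current.code}"

    # 2. Meta-Retrieval: summaries (not code) of every other chapter.
    summaries = "\n".join(
        f"- {ch.title}: {ch.summary}"
        for i, ch in enumerate(project)
        if i != current_index
    )

    # 3. Narrative Synthesis: combine into one context-rich prompt for the model.
    return (
        f"{local_context}\n\n"
        f"### Other chapters in this project:\n{summaries}\n\n"
        f"### Question:\n{question}"
    )
```

Note the asymmetry: only the current chapter contributes raw code, while every other chapter contributes a compact summary, which is what keeps the combined prompt small enough to fit a model's context window.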
Best Practices for Questions
To get the most out of Portl AI, structure your questions to reference multiple chapters. This triggers our cross-chapter retrieval logic more effectively.
```
// GOOD QUESTION:
"Based on the Auth flow we built in Chapter 1, how should I link the user profile here in Chapter 4?"

// EVEN BETTER:
"Create a logout button that uses the exact Firebase logic defined in our Auth chapter."
```
The "Ask One, Know All" Model
Unlike standard Claude Artifacts, where every chat is a new, isolated thread, Portl threads are Narrative-aware. You can ask "How is the state managed in this project?" and the AI will synthesize an answer from the Research, PRD, and Code chapters together.
Scalability
Currently, Portl AI can cross-reference up to 25 chapters simultaneously. For massive builds, we recommend splitting your Narrative into "Volumes" (sub-projects) to keep context dense and efficient.
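Since cross-referencing tops out at 25 chapters, splitting a long Narrative into Volumes amounts to chunking the chapter list. A hypothetical helper (the function name and the idea of doing this client-side are assumptions, not part of Portl's API):

```python
def split_into_volumes(chapters: list, max_chapters: int = 25) -> list[list]:
    """Split a long Narrative into Volumes of at most `max_chapters` chapters.

    Hypothetical sketch: chunks the chapter list in order, so each Volume
    stays under the 25-chapter cross-reference limit mentioned above.
    """
    return [
        chapters[i:i + max_chapters]
        for i in range(0, len(chapters), max_chapters)
    ]
```

For example, a 60-chapter build would split into three Volumes of 25, 25, and 10 chapters, each small enough for full cross-referencing.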