AI Context with Librams
Librams' most powerful feature is precise control over which Vellums an AI sees when you use your knowledge base as context.
Why AI Context Matters
When you paste your notes into an AI chat, you often include too much: drafts, tangents, and irrelevant material that dilute the AI's focus. Librams solve this by letting you curate exactly what the AI receives. Every Vellum in a Libram has an "AI enabled" flag that you control individually.
The AI Toggle
Each Vellum in a Libram has an AI inclusion toggle. When enabled, the Vellum's content is considered part of the AI context for that Libram. When disabled, the Vellum is still in the Libram (visible in the list) but excluded from the AI context bundle.
For AI (enabled)
The Vellum's content will be included when this Libram is used as AI context. Shown with an emerald green badge.
Off (excluded)
The Vellum is in the Libram but won't be sent to the AI. Useful for keeping reference material visible without polluting context.
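Conceptually, the toggle is a per-membership flag, and the AI context bundle is just the concatenation of the enabled Vellums. Here is a minimal sketch of that model; the names (`Vellum`, `Membership`, `build_ai_context`) are illustrative, not the actual Librams API:

```python
from dataclasses import dataclass, field


@dataclass
class Vellum:
    title: str
    content: str


@dataclass
class Membership:
    """A Vellum's place in a Libram, with its "For AI / Off" toggle."""
    vellum: Vellum
    ai_enabled: bool = True


@dataclass
class Libram:
    name: str
    memberships: list = field(default_factory=list)

    def build_ai_context(self) -> str:
        """Bundle only the AI-enabled Vellums; disabled ones stay in the
        Libram but never reach the AI."""
        parts = [
            f"# {m.vellum.title}\n{m.vellum.content}"
            for m in self.memberships
            if m.ai_enabled
        ]
        return "\n\n".join(parts)
```

Note that disabling a Vellum only filters it out of `build_ai_context`; its membership in the Libram is untouched, which is why it remains visible in the list.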
How to Toggle AI Inclusion
There are two places to control AI inclusion for a Vellum:
On the Libram detail page
Hover over a Vellum row to reveal the action buttons. Click the "For AI / Off" toggle button on the right.
From the Vellum editor
Open the ⋮ menu → Librams section. For any Libram the Vellum already belongs to, an AI pill is shown. Click it to flip the toggle.
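Both controls flip the same underlying flag. A self-contained sketch of that operation, using a plain-dict representation with hypothetical field names (`vellums`, `ai_enabled`):

```python
def toggle_ai_inclusion(libram: dict, vellum_title: str) -> bool:
    """Flip the AI-inclusion flag for the named Vellum in this Libram.

    The Vellum stays in the Libram either way; only whether its content
    is sent as AI context changes. Returns the new flag value.
    """
    for entry in libram["vellums"]:
        if entry["title"] == vellum_title:
            entry["ai_enabled"] = not entry["ai_enabled"]
            return entry["ai_enabled"]
    # The toggle only exists for Vellums already in the Libram.
    raise KeyError(f"{vellum_title!r} is not in this Libram")
```

Because the flag lives on the Libram membership rather than on the Vellum itself, the same Vellum can be AI-enabled in one Libram and excluded in another.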
Reading the AI Summary
Best Practices for AI Context
Getting the most from Librams as AI context bundles:
Keep context tight
Include only the Vellums directly relevant to the task. Aim for 3–10 focused Vellums rather than dumping everything.
Exclude drafts and stubs
Vellums that are incomplete or tangential should be toggled off. They're visible in the Libram for reference but won't confuse the AI.
One Libram per project
Create a dedicated Libram per project or topic. This makes it easy to hand the AI a coherent, consistent context bundle.
Update as you write
Add new Vellums to the Libram as you write them. The AI context grows naturally with your knowledge.