Viewert as Your AI Memory Layer
Your AI models are only as good as the context you give them. Librams turn Viewert into a persistent, reusable memory layer for every AI tool you use — ChatGPT, Claude, Gemini, Grok, Llama, and more.
The Problem with AI Today
Every time you start a new conversation with an AI, it knows nothing about you. You re-explain your project, re-paste your notes, re-describe your codebase. Hours of context-building, repeated over and over. Viewert eliminates this.
What Is the Viewert Memory Layer?
It is your Librams, served as context. A Libram is a curated bundle of Vellums that any connected AI tool can load on demand, over the Viewert MCP server or a context URL. You maintain the knowledge once in Viewert, and every AI you use reads the same, always-current copy.
How It Works
The flow is simple and takes about two minutes to set up:
Write in Viewert
Create Vellums for anything: your project's architecture, research summaries, personal bios, code conventions, product specs, meeting notes — whatever context you keep re-pasting into AI chats.
Organise into a Libram
Group related Vellums into a Libram. Toggle each Vellum's "For AI" switch to control exactly what the AI sees. One Libram might be "My React Project" with 8 Vellums about architecture decisions and coding style.
Generate an API key
Go to Settings → API Keys and create a key. It starts with vwt_ and is shown only once.
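The key is your AI tools' credential for reading Librams, so copy it somewhere safe immediately. One sensible convention (the variable name below is our own, not something Viewert mandates) is to keep it in an environment variable rather than hard-coding it into configs:

```bash
# Hypothetical variable name; any name works as long as your client config matches.
export VIEWERT_API_KEY="vwt_your_key_here"
```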
Connect your AI tool
Add the Viewert MCP server to your AI client config (Claude Desktop, Cursor, Windsurf, etc.) — a one-time JSON edit. Or copy the context URL and paste it directly into ChatGPT, Gemini, or Grok.
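For MCP clients, the edit usually looks something like this sketch for Claude Desktop's claude_desktop_config.json. The server entry name, the @viewert/mcp-server package, and the VIEWERT_API_KEY variable are illustrative assumptions; the MCP Setup guide (next article) has the exact values for each tool:

```json
{
  "mcpServers": {
    "viewert": {
      "command": "npx",
      "args": ["-y", "@viewert/mcp-server"],
      "env": { "VIEWERT_API_KEY": "vwt_your_key_here" }
    }
  }
}
```

For tools without MCP support, the context URL is just a plain HTTPS link you paste into the chat. Its exact shape comes from your dashboard; the address below is purely hypothetical:

```
https://viewert.app/librams/<libram-id>/context?key=vwt_...
```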
The AI loads your memory
In any conversation, the AI can call list_librams and get_libram_context to pull your curated knowledge — formatted as clean Markdown, ready to reason over.
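The payload is ordinary Markdown with each enabled Vellum as its own section, so any model can read it without special parsing. For the "My React Project" Libram above, it might look roughly like this (contents invented for illustration):

```markdown
# My React Project

## Architecture Decisions
Next.js with the App Router. Data fetching happens in server
components; client components are for interactivity only.

## Coding Style
Named exports only. One component per file under src/components,
with co-located tests.
```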
Real Examples
Here is what this looks like in practice across different AI tools:
Software developer using Claude Desktop
"Load my Backend Architecture libram" — Claude instantly reads 12 Vellums covering database schema, API conventions, auth decisions, and deployment config. No more copy-pasting README files.
Researcher using ChatGPT
"Here is my Literature Review libram" — pastes the context URL. ChatGPT reads 20 annotated paper summaries and immediately pinpoints gaps in the research, without the user re-explaining anything.
Freelance writer using Gemini
"Use my Client Brand Voice libram as context" — Gemini reads tone guidelines, sample copy, audience personas, and vocabulary preferences from 6 Vellums. Every draft it writes is on-brand from the first word.
Product manager using Cursor
Cursor automatically loads the "Product Requirements" Libram at the start of every coding session. The AI understands user stories, acceptance criteria, and design constraints without being told.
Student using Grok
"Load my Exam Prep libram" — Grok reads Vellums on thermodynamics, mechanics, and problem-solving frameworks. The study session starts immediately with full subject context.
Running Llama locally
Point a local Llama server at the Viewert context API endpoint. Your private notes, served privately, to your private model. Full control, zero cloud exposure.
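A minimal sketch of that loop in Python, assuming a hypothetical Viewert context endpoint with Bearer auth and a local model behind an OpenAI-compatible server (llama.cpp's llama-server, Ollama, and similar tools all expose one):

```python
import os

import requests

# Both Viewert values below are assumptions for illustration: check your
# dashboard for the real context endpoint and auth scheme.
VIEWERT_URL = "https://viewert.app/api/librams/my-notes/context"  # hypothetical
API_KEY = os.environ["VIEWERT_API_KEY"]  # the vwt_... key from Settings → API Keys

# 1. Pull the Libram's "For AI" Vellums as one Markdown document.
context = requests.get(
    VIEWERT_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},  # Bearer auth assumed
    timeout=30,
).text

# 2. Hand the Markdown to a local Llama behind an OpenAI-compatible server
#    as the system message, so every turn starts with your notes loaded.
reply = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "llama-3.1-8b-instruct",
        "messages": [
            {"role": "system", "content": context},
            {"role": "user", "content": "Summarise the open questions in my notes."},
        ],
    },
    timeout=120,
).json()

print(reply["choices"][0]["message"]["content"])
```

Swap the port and model name for whatever your local server actually runs; nothing here leaves your machine except the one call to Viewert.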
Why This Is Different from System Prompts
System prompts are flat text blocks that you manually maintain and copy around. The Viewert memory layer is structured, versioned, and live:
Structured
Each piece of knowledge is a separate Vellum. You edit one Vellum without touching the rest. The Libram always reflects your latest thinking.
Selective
The For AI toggle lets you include or exclude individual Vellums per Libram. Fine-grained control over what each AI session receives — not a monolithic paste.
Reusable across tools
The same Libram works in Claude, Cursor, Windsurf, and any MCP-compatible tool simultaneously. Write once, use everywhere.
Shareable (optionally)
Make a Libram public to share a curated knowledge bundle with a team, community, or audience. Others can load your Libram as context without accessing your private Vellums.
Getting Started in 2 Minutes
If you already have Vellums, you can have your first AI memory layer live today:
Create a Libram
Go to Librams → New Libram. Give it a name that describes the context bundle, e.g. "My Coding Style" or "Research Notes Q1".
Add your key Vellums
Click "Add Vellum" and add 3–10 Vellums that contain the context you most frequently re-paste into AI chats.
Toggle them all For AI
Make sure each Vellum's For AI toggle is enabled (green). You can refine this later.
Get your API key
Settings → API Keys → Create Key. Copy it immediately.
Set up your AI client
Follow the MCP Setup guide (next article) for your specific AI tool. Most setups take under 2 minutes.