MCP Setup & AI Client Guides
Step-by-step instructions for connecting Viewert to ChatGPT, Claude, Gemini, Grok, Cursor, Windsurf, Llama, and any other AI tool — with real example prompts for each.
Step 1: Create an API Key
Every integration method requires a Viewert API key. Generate one in Settings → API Keys.
Open Settings
Click your avatar → Settings, then scroll to the API Keys section.
Click "Create Key"
Give the key a recognizable name like "Claude Desktop" or "Cursor".
Copy it immediately
The full key (starting with vwt_) is shown only once. Store it in a password manager or your shell's environment file.
You can have up to 10 keys
Create one per client so you can revoke individual integrations without affecting others.
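One convenient pattern is to export the key as an environment variable from your shell profile, so CLI snippets later in this guide can read it instead of hard-coding it. A minimal sketch; the key value is a placeholder:

```shell
# Add this line to your shell profile (~/.zshrc or ~/.bashrc) so clients
# can read the key from the environment. The value here is a placeholder.
export VIEWERT_API_KEY="vwt_your_key_here"

# Sanity-check without printing the full secret:
echo "${VIEWERT_API_KEY:0:4}..."   # prints "vwt_..."
```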
Step 2: Find Your Libram ID
You only need a Libram ID for the URL-based methods below (ChatGPT, Gemini, Grok, and local models). MCP clients such as Claude Desktop, Cursor, and Windsurf can list your Librams by name, so no ID is required there.
Claude Desktop (MCP — Recommended)
Claude Desktop supports MCP natively. This is the richest integration — Claude can list and load any of your Librams without you specifying an ID.
Run the setup wizard (easiest — no prerequisites)
Open your terminal and run:

npx --package=@viewert/mcp viewert-mcp-setup

The wizard will install the package, verify your API key, detect Claude Desktop, and write the config automatically. No manual editing required.
Or configure manually
If you prefer to edit the config yourself:

1. Install globally: npm install -g @viewert/mcp
2. Find the binary path: which viewert-mcp
3. Open ~/Library/Application Support/Claude/claude_desktop_config.json
4. Add:

{
  "mcpServers": {
    "viewert": {
      "command": "/usr/local/bin/viewert-mcp",
      "args": [],
      "env": { "VIEWERT_API_KEY": "vwt_your_key_here" }
    }
  }
}

Replace the command path with the output of `which viewert-mcp`.
Fully quit and reopen Claude Desktop
Right-click the Claude icon in the menu bar and choose Quit — closing the window is not enough. Reopen Claude. A small hammer icon appears in the toolbar when MCP tools are active.
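If the hammer icon still does not appear, a malformed config file is the most common cause. One quick check, assuming jq is installed (the path is the macOS location from the manual step above):

```shell
# Validate the Claude Desktop config JSON; jq exits non-zero on a
# syntax error or a missing file.
CONFIG="$HOME/Library/Application Support/Claude/claude_desktop_config.json"
if jq empty "$CONFIG" 2>/dev/null; then
  echo "config OK"
else
  echo "invalid JSON or file missing"
fi
```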
Try it
Type: "Load my [Libram name] libram and summarise the key points." Claude will call list_librams and get_libram_context automatically.
Cursor & Windsurf (MCP)
Cursor and Windsurf both support MCP via a project-level or global config file. The Viewert MCP server gives your AI coding assistant full access to your knowledge base.
Run the setup wizard (easiest — no prerequisites)
npx --package=@viewert/mcp viewert-mcp-setup

The wizard detects Cursor and Windsurf automatically and writes the config for whichever you choose.
Or configure manually
Install globally: npm install -g @viewert/mcp

Then create .cursor/mcp.json (project-level) or ~/.cursor/mcp.json (global):

{
  "mcpServers": {
    "viewert": {
      "command": "/usr/local/bin/viewert-mcp",
      "args": [],
      "env": { "VIEWERT_API_KEY": "vwt_your_key_here" }
    }
  }
}

Replace the command path with the output of `which viewert-mcp`.
Reload the window
Run the "Reload Window" command in the command palette (Cmd+Shift+P). The Viewert tools load automatically.
Example coding prompt
Type: "Load my Backend Architecture libram and then help me refactor the auth middleware to match our conventions." The AI reads your docs and codes accordingly.
ChatGPT (Context URL Method)
ChatGPT does not support MCP yet, but you can paste your Libram's context URL directly into any conversation to give it your knowledge.
Get your context URL
Your Libram context endpoint is:

https://www.viewert.com/api/librams/YOUR_LIBRAM_ID/context

For private Librams, pass your API key in an Authorization header; for public Librams, the URL works with no authentication.
For private Librams, use the markdown fetch
Paste this prompt at the start of your ChatGPT conversation:

"Fetch the following URL and use the content as your context for this conversation: https://www.viewert.com/api/librams/YOUR_LIBRAM_ID/context
Authorization: Bearer vwt_your_key_here"
For public Librams, even simpler
If your Libram is public, no key needed. Just paste the URL and ask ChatGPT to read it: "Please read the content at https://www.viewert.com/api/librams/LIBRAM_ID/context and use it as context."
Example prompt
"Read my research notes from the URL above. Based on those notes, what are the three strongest arguments for my thesis and where are the gaps?"
Gemini (Google AI Studio & Gemini App)
Gemini can fetch and reason over URLs directly. Use your Libram's public context URL or paste the content.
Make your Libram public (simplest)
In the Libram settings, toggle Visibility to Public. This generates a shareable context URL with no authentication required.
Paste the URL into Gemini
Start your Gemini conversation with: "Use the content at https://www.viewert.com/api/librams/LIBRAM_ID/context as your knowledge base for this session."
For private Librams via Google AI Studio
In AI Studio, use the system instructions field to paste your Libram's full markdown content (fetched via the API with your key). This loads it as persistent context across the session.
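One way to grab that markdown for pasting, sketched with a placeholder key and Libram ID:

```shell
# Fetch the Libram markdown to a local file; -f makes curl fail loudly
# on an auth error instead of saving an error page.
curl -sf -H "Authorization: Bearer vwt_your_key_here" \
  "https://www.viewert.com/api/librams/LIBRAM_ID/context" \
  > libram.md || echo "fetch failed: check the key and Libram ID" >&2

# On macOS, append '| pbcopy' instead of redirecting to a file to copy
# the markdown straight to the clipboard.
```

Open libram.md, copy its contents, and paste them into the system instructions field.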
Example prompt
"I've given you my Brand Voice libram. Write a 500-word blog post about our new product launch. Match the tone, vocabulary, and style from the guidelines exactly."
Grok (xAI)
Grok supports URL reading and large context windows. Feed it your Libram context directly.
Use a public Libram URL
For public Librams, paste the URL and ask Grok to read it. Grok can fetch and reason over the content in the same message.
Or paste the markdown directly
Fetch your Libram's markdown via the API (using your key), then paste it into Grok's context window. Grok's context window handles large knowledge bundles well.
Example prompt
"I'm going to paste my Physics Exam Prep notes below. Read them, identify the topics I've covered, then quiz me with 10 exam-style questions starting with the weakest areas. [paste Libram markdown here]"
Local Models (Ollama, LM Studio, Llama.cpp)
Running Llama, Mistral, Qwen, or any model locally? Viewert works perfectly. Fetch your context with a simple curl call and pipe it into your prompt.
Fetch your Libram as markdown
curl -H "Authorization: Bearer vwt_your_key_here" \
  "https://www.viewert.com/api/librams/LIBRAM_ID/context" \
  > context.md
Include it in your system prompt
Prepend the contents of context.md to your system prompt when calling the local API. Most local frontends (Open WebUI, LM Studio) have a system prompt field where you can paste it.
Automate with a shell alias
Add a shell function that fetches your most-used Libram and pipes it into your local model. Your context is always fresh and one command away.
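A minimal sketch of such a function, assuming VIEWERT_API_KEY is exported, ollama is installed, and LIBRAM_ID is replaced with a real ID. The names viewert_ctx and ask are made up for this example:

```shell
# Fetch the Libram markdown (fresh on every call).
viewert_ctx() {
  curl -s -H "Authorization: Bearer $VIEWERT_API_KEY" \
    "https://www.viewert.com/api/librams/LIBRAM_ID/context"
}

# ask "question" -> answer from the local model, grounded in the Libram.
ask() {
  ollama run llama3 "Context:
$(viewert_ctx)

Question: $*"
}
```

Drop both functions into your shell profile, then run something like: ask "What are our naming conventions for API routes?"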
Example curl + ollama flow
CONTEXT=$(curl -s -H "Authorization: Bearer vwt_your_key" \
  "https://www.viewert.com/api/librams/LIBRAM_ID/context")

ollama run llama3 "Using this context: $CONTEXT

Now answer: What are the key design decisions in this project?"
Any OpenAI-Compatible API
Any tool that accepts an OpenAI-compatible system prompt can use your Libram as context. Fetch it, inject it.
Fetch the Libram markdown
GET https://www.viewert.com/api/librams/LIBRAM_ID/context
Authorization: Bearer vwt_your_key_here

Returns clean Markdown — headings, bullets, code blocks, all properly formatted.
Inject as system message
Add the Libram content as the first message in your messages array:

{ "role": "system", "content": "[your libram markdown]" }
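A sketch of building that request body safely with jq, so newlines and quotes in the markdown are escaped correctly. The model name, user message, and context string are placeholders:

```shell
# Stand-in for the fetched Libram markdown.
CONTEXT='# Backend Architecture
All services use dependency injection.'

# jq --arg handles JSON escaping of the multi-line markdown for us.
PAYLOAD=$(jq -n --arg ctx "$CONTEXT" '{
  model: "gpt-4o",
  messages: [
    { role: "system", content: $ctx },
    { role: "user",   content: "Summarise our conventions." }
  ]
}')

echo "$PAYLOAD" | jq -r '.messages[0].role'   # prints "system"
```

Send PAYLOAD as the request body (curl -d "$PAYLOAD") to any OpenAI-compatible chat completions endpoint.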
Use the JSON format for structured access
Add ?format=json to the URL for a structured response with individual vellum IDs, titles, and markdown content — useful for programmatic workflows.
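The documented JSON shape is easy to pick apart with jq. Here the response is a hard-coded sample in that shape rather than a live fetch:

```shell
# A sample response in the documented shape:
# { libram_id, libram_name, vellums: [{ id, title, markdown }] }
RESPONSE='{"libram_id":"lib_123","libram_name":"Demo","vellums":[{"id":"v1","title":"Intro","markdown":"# Intro"},{"id":"v2","title":"Setup","markdown":"# Setup"}]}'

# List every vellum title:
echo "$RESPONSE" | jq -r '.vellums[].title'
# prints:
#   Intro
#   Setup
```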
API Reference
The context endpoint is simple and stateless. No SDKs required.
GET /api/librams/:id/context
Returns all AI-enabled Vellums in the Libram as a single Markdown document. Public Librams need no auth. Private Librams require Authorization: Bearer vwt_your_key.
GET /api/librams/:id/context?format=json
Returns structured JSON: { libram_id, libram_name, vellums: [{ id, title, markdown }] }. Useful for building integrations, custom UIs, or automation.
GET /api/librams (authenticated)
Returns all your Librams as JSON. Used by the MCP server's list_librams tool. Requires your API key.
Security & Privacy
Your knowledge is yours. Here is exactly how access control works:
Private Librams
Only accessible with your API key. No one else can read your private Libram's context — not even if they know the Libram ID.
Private Vellums in public Librams
If a Libram is public but contains private Vellums, those private Vellums are automatically excluded from the context served to non-owners. Only your public Vellums are exposed.
API key scope
Each API key is scoped to your account. It can only read your own Librams. There is no write access via API key.
Revocation
Delete any key from Settings → API Keys at any time. Revocation is instant — the key stops working immediately.