Connecting AI Clients (OAuth & MCP)
Step-by-step instructions for connecting Viewert to Claude Desktop, Cursor, Windsurf, ChatGPT, Gemini, Grok, Llama, and more — OAuth is the recommended method, with API key and local npx as fallbacks.
Three Ways to Connect
Viewert supports three connection methods in order of recommendation:
1. OAuth (recommended)
Add the remote MCP URL https://www.viewert.com/api/mcp to your AI client. On first connection, a consent screen opens — approve once and your account is linked automatically. No API key to copy or store. Works in Claude Desktop, Cursor, Windsurf, and any client that supports remote MCP with OAuth.
2. API Key + Remote MCP (fallback)
Generate a vwt_ key in Settings → API Keys and configure it in your client's remote MCP settings with an Authorization header. Use this if your client supports remote MCP but not OAuth.
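If your client accepts per-server headers in its MCP config, the entry might look like the sketch below. Note that the exact field name for headers varies by client ("headers" here is an assumption), so check your client's MCP documentation:

```json
{
  "mcpServers": {
    "viewert": {
      "url": "https://www.viewert.com/api/mcp",
      "headers": { "Authorization": "Bearer vwt_your_key_here" }
    }
  }
}
```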
3. API Key + Local npx (legacy fallback)
Run npx @viewert/mcp locally with your API key. This spawns a local MCP server via stdio. Use this only if your client does not support remote MCP at all.
Claude Desktop — OAuth (Recommended)
Claude Desktop supports remote MCP with OAuth natively. This is the richest and easiest integration — no API key needed.
Open Claude Desktop Settings
Click the Claude menu → Settings → Developer.
Add a new MCP server
Click "Add MCP Server". Set the type to "Remote" and enter the URL: https://www.viewert.com/api/mcp
Approve the OAuth consent screen
Claude will open a browser window showing the Viewert consent screen. Review the permissions (read your Librams) and click "Allow access". Your account is now linked; you will not be asked to approve again unless you revoke access from Settings → OAuth Apps.
Fully quit and reopen Claude Desktop
macOS: Right-click the Claude icon in the menu bar → Quit. Windows: Right-click the Claude tray icon → Quit. Reopen Claude. A small hammer icon in the toolbar confirms MCP tools are active.
Try it
Type: "Load my [Libram name] libram and summarize the key points." Claude will call list_librams and get_libram_context automatically.
Cursor & Windsurf — OAuth (Recommended)
Cursor and Windsurf both support remote MCP servers. Add the Viewert URL and authenticate once via OAuth.
Open MCP settings
Cursor: Cmd+Shift+P → "Open MCP Settings" (or edit ~/.cursor/mcp.json).
Windsurf: Cmd+Shift+P → "Open MCP Config" (or edit ~/.codeium/windsurf/mcp_config.json).
Add the remote MCP server
Add the following entry:
{
  "mcpServers": {
    "viewert": {
      "url": "https://www.viewert.com/api/mcp"
    }
  }
}
Authenticate via OAuth
On the next tool call, your editor will open the Viewert OAuth consent screen in a browser. Click "Allow access" to link your account. The OAuth token is stored and refreshed automatically.
Reload the window
Run "Reload Window" from the command palette (Cmd+Shift+P). The Viewert tools load automatically.
Example coding prompt
"Load my Backend Architecture libram and help me refactor the auth middleware to match our conventions." The AI reads your docs and codes accordingly.
Fallback: API Key + Local npx
If your AI client does not support remote MCP or OAuth, use the local stdio server with an API key. Requires Node.js 18+ (nodejs.org).
Create an API key
Go to Settings → API Keys → Create Key. Give it a recognizable name. Copy the vwt_ key immediately — it is shown only once.
Run the setup wizard (easiest)
Open your terminal and run:
npx -y @viewert/mcp
The wizard verifies your API key, detects Claude Desktop, Cursor, and Windsurf, and writes the config automatically.
Or configure manually — macOS / Linux (Claude Desktop)
Config file: ~/Library/Application Support/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "viewert": {
      "command": "npx",
      "args": ["-y", "@viewert/mcp"],
      "env": { "VIEWERT_API_KEY": "vwt_your_key_here" }
    }
  }
}
Or configure manually — macOS / Linux (Cursor)
Config file: ~/.cursor/mcp.json
{
  "mcpServers": {
    "viewert": {
      "command": "npx",
      "args": ["-y", "@viewert/mcp"],
      "env": { "VIEWERT_API_KEY": "vwt_your_key_here" }
    }
  }
}
Or configure manually — Windows
Claude: %APPDATA%\Claude\claude_desktop_config.json
Cursor: %USERPROFILE%\.cursor\mcp.json
Windsurf: %USERPROFILE%\.codeium\windsurf\mcp_config.json
Same JSON structure as above.
Restart your client
Fully quit and reopen the app. The hammer icon (Claude) or MCP indicator (Cursor/Windsurf) confirms the tools are active.
ChatGPT (Paste Method)
ChatGPT does not support MCP. The most reliable method is to fetch your Libram content and paste it directly.
Fetch your Libram content
Run this in your terminal:
curl -H "Authorization: Bearer vwt_your_key_here" \
  "https://www.viewert.com/api/librams/YOUR_LIBRAM_ID/context"
Windows (PowerShell):
Invoke-RestMethod -Headers @{Authorization="Bearer vwt_your_key_here"} "https://www.viewert.com/api/librams/YOUR_LIBRAM_ID/context"
Paste into ChatGPT as context
"Here are my research notes: [paste markdown here] Based on these notes, what are the three strongest arguments for my thesis and where are the gaps?"
For public Librams — URL method (ChatGPT with browsing)
If your Libram is public and you have ChatGPT Plus with browsing enabled: "Please read https://www.viewert.com/api/librams/LIBRAM_ID/context and use it as context." Note: Only works with public Librams and a browsing-enabled model.
Gemini (Google AI Studio & Gemini App)
Gemini 1.5+ can read public URLs in some contexts. For private Librams, paste the content directly into AI Studio system instructions.
Most reliable: paste content directly
Fetch your Libram markdown (curl command above) and paste it into your Gemini conversation or AI Studio system instructions field.
For public Librams in Gemini app (1.5+ models)
"Please read https://www.viewert.com/api/librams/LIBRAM_ID/context and use it as your context for this session." Note: URL fetching is available on 1.5 Pro and later but not guaranteed on all plans.
Best experience: Google AI Studio with system instructions
Fetch your Libram content and paste it into the System Instructions field in AI Studio (aistudio.google.com). This loads it as persistent context for the entire session.
Grok (xAI)
Grok has a large context window and handles pasted markdown well. The paste method is most reliable.
Paste the markdown directly
Fetch your Libram content (curl command above) and paste it into Grok's context window.
Via the xAI API (developers)
Inject your Libram content as a system message in the OpenAI-compatible API (see the API section below).
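A minimal sketch of what that system-message injection might look like; this only builds the OpenAI-compatible payload, and the model name "grok-3" plus the api.x.ai endpoint noted in the comment are assumptions to verify against xAI's current docs:

```python
def build_chat_request(libram_context: str, question: str, model: str = "grok-3") -> dict:
    """Build an OpenAI-compatible chat payload with Libram content as the system message."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": f"Use this knowledge base as context:\n\n{libram_context}"},
            {"role": "user", "content": question},
        ],
    }

payload = build_chat_request("...markdown from /context...", "Summarize the key decisions.")
# POST this payload as JSON to https://api.x.ai/v1/chat/completions with your xAI Bearer token.
```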
Local Models (Ollama, LM Studio, Llama.cpp)
Running Llama, Mistral, Qwen, or any model locally? Fetch your context with curl and inject it into your system prompt.
Fetch your Libram as markdown
curl -H "Authorization: Bearer vwt_your_key_here" \
  "https://www.viewert.com/api/librams/LIBRAM_ID/context" \
  > context.md
Include it in your system prompt
Prepend context.md to your system prompt when calling the local API. Most local frontends (Open WebUI, LM Studio) have a system prompt field.
Example curl + ollama flow
CONTEXT=$(curl -s -H "Authorization: Bearer vwt_your_key" \
  "https://www.viewert.com/api/librams/LIBRAM_ID/context")
ollama run llama3 "Using this context: $CONTEXT

What are the key design decisions in this project?"
Note: the shell does not expand \n inside double quotes, so the prompt uses literal blank lines.
MCP Tools Reference
The Viewert MCP server exposes two tools to any MCP-compatible client (Claude Desktop, Cursor, Windsurf, etc.).
list_librams
Lists all Librams in your account.
Inputs: none
Returns: array of { id, name, description, vellum_count, visibility, updated_at }
Example prompt: "What librams do I have?" — Claude calls this automatically.
get_libram_context
Fetches the full context for a Libram — all AI-enabled Vellums concatenated as Markdown.
Inputs:
• libram_id (string, required)
• libram_name (string, optional)
Returns: Markdown string of all AI-enabled Vellum content.
Example: "Load my Backend Architecture libram and review this function."
OAuth tokens (remote MCP)
When connecting via OAuth, the client stores and auto-refreshes your access token. Tokens expire after 1 hour and are refreshed automatically using the refresh token (30-day lifetime). No manual action needed.
API key (local npx fallback)
Set VIEWERT_API_KEY in the env block of your MCP client config. Must be a valid vwt_ prefixed key from Settings → API Keys.
REST API Reference
The context endpoints are stateless and require no SDK. Use them to fetch Libram content for any tool that accepts a system prompt or pasted text.
GET /api/librams/:id/context
Returns all AI-enabled Vellums in the Libram as a single Markdown document.
Auth: none for public Librams; Authorization: Bearer vwt_your_key or an OAuth Bearer token for private Librams.
Query params: format=json — returns structured JSON instead of Markdown.
GET /api/librams/:id/context?format=json
Structured JSON response:
{
  "libram_id": "01J...",
  "libram_name": "My Research",
  "vellums": [
    { "id": "01J...", "title": "Section Title", "markdown": "..." }
  ]
}
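To turn the JSON form back into a single prompt-ready document, a small sketch (the heading scheme used for joining is our own choice, not something the API prescribes):

```python
def flatten_context(data: dict) -> str:
    """Join each vellum's title and markdown into one Markdown document."""
    parts = [f"# {data['libram_name']}"]
    for vellum in data["vellums"]:
        parts.append(f"## {vellum['title']}\n\n{vellum['markdown']}")
    return "\n\n".join(parts)

# Illustrative response shaped like the documented format=json structure.
sample = {
    "libram_id": "01J...",
    "libram_name": "My Research",
    "vellums": [{"id": "01J...", "title": "Section Title", "markdown": "Body text."}],
}
print(flatten_context(sample))
```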
GET /api/librams (authenticated)
Returns all Librams in your account.
Auth: Bearer token required (OAuth or API key).
Response: array of { id, name, description, vellum_count, visibility, updated_at, created_at }
Rate limits & errors
401 Unauthorized — missing or invalid token.
403 Forbidden — valid token but no access to this resource.
404 Not Found — Libram does not exist or is private.
429 Too Many Requests — rate limit hit.
All error responses have the shape { "error": "message" }.
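A small sketch of defensive handling for these statuses; the retry hint on 429 is our own suggestion, not documented server behavior:

```python
import json

def describe_error(status: int, body: str) -> str:
    """Map a Viewert API error response to a human-readable message."""
    hints = {
        401: "missing or invalid token",
        403: "valid token but no access to this resource",
        404: "Libram does not exist or is private",
        429: "rate limit hit; retry after a short delay",
    }
    try:
        # Error bodies are documented as {"error": "message"}.
        message = json.loads(body).get("error", "")
    except json.JSONDecodeError:
        message = body
    return f"{status}: {hints.get(status, 'unexpected error')} ({message})"

print(describe_error(404, '{"error": "not found"}'))
```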
Security & Privacy
Your knowledge is yours. Here is exactly how access control works:
OAuth tokens
OAuth access tokens are short-lived (1 hour) and scoped to read your Librams only. Revoke access anytime from Settings → OAuth Apps.
API keys
Each API key is scoped to your account and grants read-only access to your Librams. Revoke instantly from Settings → API Keys.
Private Librams
Only accessible with your token. No one else can read your private Libram's context — not even if they know the Libram ID.
Private Vellums in public Librams
If a Libram is public but contains private Vellums, those private Vellums are automatically excluded from context served to non-owners.