VVVKernel Docs
Venice AI substrate · Base 8453 · Bankrbot
What is VVVKernel
VVVKernel is a character-driven launch & operations kernel for meme/culture/token projects on Base, deployed via Bankrbot.
Under the hood it runs on the Venice AI substrate — a decentralized, privacy-preserving inference network. The kernel exposes 7 specialized expert roles you can talk to via the web terminal, a CLI, or any MCP-compatible client.
Use the kernel to:
- Plan a token launch end-to-end (timeline, phases, metrics, risks)
- Workshop brand, narrative, audit, or tier mechanics with focused experts
- Wire the kernel into Claude Desktop, Cursor, or your own AI agent stack via MCP
Quickstart
Three paths — pick whichever matches your tooling:
1. Web terminal
Open vvvkernel.com. Type help, then experts. Slash commands: /stop, /clear, /export, /help.
2. CLI
# install once, globally
npm install -g vvvkernel-cli

# talk to a specific expert
vvvkernel chat "day-one launch checklist" --expert="Brand Expert"

# or run without installing
npx vvvkernel-cli plan "7-day launch runway" --expert="Growth Expert"
3. MCP (Claude Desktop / Cursor)
Add this entry to your MCP client config (see the Claude Desktop or Cursor sections for the full config file path):
"vvvkernel": { "command": "npx", "args": ["vvvkernel-cli", "mcp"] }
Or connect to the remote MCP endpoint at https://vvvkernel.com/mcp with HTTP transport — see HTTP Transport.
Pre-launch posture
The token has not launched yet. When it does, the parameters will be:
- Venue: Bankrbot — a Base-native bonding curve launchpad
- Chain: Base (8453)
- Symbol: VVV (provisional)
- Source of truth: only the official site and X account. Anything else is a scam.
7 Expert Roles
Each kernel call can target a specific expert via expert_role (HTTP / MCP) or --expert (CLI). The expert shapes vocabulary, evaluation lens, and output emphasis.
| ID | Name | Focus |
|---|---|---|
| onchain | Onchain Expert | Smart contracts, token launch, liquidity, on-chain mechanics |
| brand | Brand Expert | Visual identity, positioning, messaging, narrative aesthetics |
| growth | Growth Expert | Acquisition, holder mechanics, retention loops, funnel design |
| community | Community Expert | Discord/Telegram engagement, moderation systems, culture |
| audit | Audit Expert | Contract security, invariants, slashing vectors, economic exploits |
| narrative | Narrative Expert | Lore, meta-narrative, character arcs, cultural positioning |
| tier-design | Tier Design Expert | Holder tiers, token-gated benefits, incentive ladders |
Kernel modes
Three execution modes share the same substrate but differ in output discipline. Pick the mode that matches your task.
| Mode | Endpoint | Use it for | Style |
|---|---|---|---|
| Chat | POST /api/chat (SSE) | Open conversation, brainstorming, expert lookup | Streaming, conversational |
| Agent | POST /api/agent/chat | Single-turn answer with tool grounding | Concise, structured |
| Plan | POST /api/agent/plan | Decompose a goal into ordered steps | Numbered list, risks called out |
The kernel never blends modes: a plan request always returns a numbered plan, and a chat request is never collapsed into a list unless you ask for one.
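Client-side, mode selection reduces to picking an endpoint and payload shape. A minimal sketch, with endpoints from the table above and payload keys from the API sections of these docs:

```python
# Map each kernel mode to its endpoint and top-level payload key (per these docs).
MODES = {
    "chat":  {"endpoint": "/api/chat",       "key": "messages"},
    "agent": {"endpoint": "/api/agent/chat", "key": "input"},
    "plan":  {"endpoint": "/api/agent/plan", "key": "goal"},
}

def build_request(mode: str, text: str) -> tuple[str, dict]:
    """Return (endpoint, JSON body) for a mode; chat wraps text in a message list."""
    spec = MODES[mode]
    if mode == "chat":
        return spec["endpoint"], {"messages": [{"role": "user", "content": text}]}
    return spec["endpoint"], {spec["key"]: text}
```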
Holder tiers
Tiers are pre-launch; final thresholds are set at TGE. The kernel advises against promising any unverified perk.
| Tier | Threshold (TBD) | Benefits |
|---|---|---|
| Initiate | any holder | Public terminal, /docs, Discord read |
| Operator | top 50% | Discord write, expert chat unlocked |
| Architect | top 10% | Plan mode, custom skills upload, governance signal |
| Substrate | top 1% | Direct kernel ops, MCP server seat, beta features |
CLI · Install
Published to npm as vvvkernel-cli. Requires Node 18+.
npm install -g vvvkernel-cli
vvvkernel --version
# 1.0.1
Or one-shot via npx:
npx vvvkernel-cli ask "what is the kernel?"
CLI · Commands
| Command | Purpose |
|---|---|
| vvvkernel ask "<prompt>" | Single-turn chat against the kernel |
| vvvkernel plan "<goal>" | Generate an ordered plan with risks |
| vvvkernel experts | List the 7 experts with focus areas |
| vvvkernel mcp | Run as a stdio MCP server (for Claude Desktop / Cursor) |
| vvvkernel skills list | List local + remote skills |
| vvvkernel config | Print effective configuration |
All commands respect --json for machine-readable output.
CLI · Custom skills
Drop a markdown file in ~/.vvvkernel/skills/. The CLI loads it as a callable skill on the next run.
# ~/.vvvkernel/skills/audit-checklist.md
---
name: audit-checklist
description: Run a security checklist on a contract address
---
You are an audit lead. Given a contract address:
1. Check ownership and proxy status
2. Verify source on Etherscan
3. List all privileged functions
4. Flag honeypot signals
5. Output a one-page report
Skills are local-first. Anything you add stays on your machine unless you opt in to publishing.
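A loader for a skill file like the one above could split the frontmatter from the prompt body. This `parse_skill` helper is an illustrative sketch, not the CLI's actual loader:

```python
# Sketch: split the frontmatter (between --- fences) from the prompt body.
def parse_skill(text: str) -> dict:
    _, frontmatter, body = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return {
        "name": meta.get("name"),
        "description": meta.get("description"),
        "prompt": body.strip(),
    }

skill = parse_skill("""---
name: audit-checklist
description: Run a security checklist on a contract address
---
You are an audit lead. Given a contract address, output a one-page report.""")
```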
CLI · Config
Configuration is read from ~/.vvvkernel/config.json with env overrides taking precedence.
{
"endpoint": "https://vvvkernel.com",
"model": "venice-substrate",
"stream": true,
"timeout_ms": 60000
}
Env vars: VVVKERNEL_ENDPOINT, VVVKERNEL_MODEL, VVVKERNEL_TOKEN.
MCP · Overview
VVVKernel speaks the Model Context Protocol. Any MCP-aware client (Claude Desktop, Cursor, Continue, custom) can mount the kernel as a tool surface.
Two transports are supported:
- stdio — local CLI process spawned by the host. Lowest latency, zero network setup.
- http — remote JSON-RPC over HTTP. Use this for shared servers, multi-seat teams, or when the host doesn't support stdio.
The protocol exposes three core methods: initialize, tools/list, and tools/call. The kernel registers tools for chat, plan, expert routing, and skill execution.
MCP · Claude Desktop
Edit claude_desktop_config.json:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"vvvkernel": {
"command": "npx",
"args": ["-y", "vvvkernel-cli", "mcp"],
"env": {
"VVVKERNEL_ENDPOINT": "https://vvvkernel.com"
}
}
}
}
Restart Claude Desktop. The kernel appears in the tool list. Try: "Ask vvvkernel to draft a launch plan for a tier-1 release."
MCP · Cursor
Open Cursor → Settings → MCP → Add new server. Use the same shape as Claude Desktop:
{
"vvvkernel": {
"command": "npx",
"args": ["-y", "vvvkernel-cli", "mcp"]
}
}
Tools become callable from the agent panel. Useful inside Cursor: plan for refactor sequencing, experts to route a code question to the Audit Expert.
MCP · HTTP transport
Skip the CLI and connect directly:
curl -X POST https://vvvkernel.com/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{}}}'
List tools:
curl -X POST https://vvvkernel.com/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":2,"method":"tools/list"}'
Call a tool:
curl -X POST https://vvvkernel.com/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"plan","arguments":{"goal":"ship phase-1 audit"}}}'
MCP · Tools reference
| Tool | Args | Returns |
|---|---|---|
| chat | { message: string, expert?: string } | Single-turn reply, kernel-voiced |
| plan | { goal: string, horizon?: "day" \| "week" \| "quarter" } | Numbered plan with risks |
| experts | {} | List of 7 experts with focus areas |
| route | { question: string } | Best-fit expert + reasoning |
| skill | { name: string, input: object } | Output of the named skill |
API · Health
GET /health
200 OK
{"ok":true,"version":"1.0.1","uptime_s":12345}
API · Chat (SSE stream)
POST /api/chat
Content-Type: application/json
{"messages":[{"role":"user","content":"what is the kernel?"}]}
Response is text/event-stream. Each event is a JSON delta:
data: {"type":"delta","text":"VVVKernel is "}
data: {"type":"delta","text":"a Venice AI substrate..."}
data: {"type":"done"}
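The events above can be consumed by concatenating delta text until the done event. A minimal sketch of that client-side loop:

```python
import json

def collect_stream(lines) -> str:
    """Accumulate SSE delta events into the full reply text; stop on done."""
    text = []
    for line in lines:
        if not line.startswith("data:"):
            continue  # ignore comments / keep-alives
        event = json.loads(line[len("data:"):].strip())
        if event["type"] == "delta":
            text.append(event["text"])
        elif event["type"] == "done":
            break
    return "".join(text)

events = [
    'data: {"type":"delta","text":"VVVKernel is "}',
    'data: {"type":"delta","text":"a Venice AI substrate..."}',
    'data: {"type":"done"}',
]
# collect_stream(events) -> "VVVKernel is a Venice AI substrate..."
```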
API · Agent · chat & plan
Single-turn structured answers. Use these when you don't need streaming.
POST /api/agent/chat
{"input":"route this question to the right expert: tokenomics for a new chain"}
POST /api/agent/plan
{"goal":"launch phase-1 community in 30 days","horizon":"week"}
Both return {"output": string, "meta": {...}}.
API · Manifest
Machine-readable description of the kernel for client discovery:
GET /api/manifest
{
"name": "vvvkernel",
"version": "1.0.1",
"experts": [...],
"modes": ["chat","agent","plan"],
"mcp": {"stdio":true,"http":true}
}
Troubleshooting
| Symptom | Likely cause | Fix |
|---|---|---|
| CLI hangs on first call | Cold container start | Retry once. Subsequent calls are warm. |
| vvvkernel mcp exits immediately | Host not piping stdin | Only run via Claude Desktop / Cursor config, not standalone. |
| SSE stream cuts off | Proxy buffering | Set X-Accel-Buffering: no on intermediate proxies. |
| MCP tool not appearing | Config not reloaded | Fully quit Claude Desktop / Cursor, reopen. |
| experts returns garbage | CLI < 1.0.1 | Upgrade: npm i -g vvvkernel-cli@latest |
| HTTPS errors from CLI | Corporate TLS interception | Set NODE_EXTRA_CA_CERTS to your root CA. |
Still stuck? Open an issue with the request id from the response header (x-vvv-rid).
Changelog
1.0.1 — current
- Fix: vvvkernel experts now formats name + focus instead of [object Object].
- Docs: full /docs page with MCP integration, HTTP API reference, troubleshooting.
- Kernel: enriched system prompts (chat / agent / plan) with substrate context and 7-expert focus.
1.0.0
- Initial public release on npm.
- stdio MCP server, HTTP transport, plan + chat + experts commands.
- Custom skills loader from ~/.vvvkernel/skills/.