
VVVKernel Docs

Venice AI substrate · Base 8453 · Bankrbot

What is VVVKernel

VVVKernel is a character-driven launch & operations kernel for meme/culture/token projects on Base, deployed via Bankrbot.

Under the hood it runs on the Venice AI substrate — a decentralized, privacy-preserving inference network. The kernel exposes 7 specialized expert roles you can talk to via the web terminal, a CLI, or any MCP-compatible client.

Use the kernel to:

  • Plan a token launch end-to-end (timeline, phases, metrics, risks)
  • Workshop brand, narrative, audit, or tier mechanics with focused experts
  • Wire the kernel into Claude Desktop, Cursor, or your own AI agent stack via MCP

Quickstart

Three paths — pick whichever matches your tooling:

1. Web terminal

Open vvvkernel.com. Type help, then experts. Slash commands: /stop, /clear, /export, /help.

2. CLI

# install once, globally
npm install -g vvvkernel-cli

# talk to a specific expert
vvvkernel chat "day-one launch checklist" --expert="Brand Expert"

# or run without installing
npx vvvkernel-cli plan "7-day launch runway" --expert="Growth Expert"

3. MCP (Claude Desktop / Cursor)

Add this entry to your MCP client config; see the Claude Desktop and Cursor sections below for the full file path:

"vvvkernel": {
  "command": "npx",
  "args": ["vvvkernel-cli", "mcp"]
}

Or connect to the remote MCP endpoint at https://vvvkernel.com/mcp with HTTP transport — see HTTP Transport.

Pre-launch posture

Status: VVVKernel has not launched its token yet. Do not trust any contract address, ticker, or price quoted to you anywhere other than @VeniceKernel.

When the launch happens, it will be:

  • Venue: Bankrbot — a Base-native bonding curve launchpad
  • Chain: Base (8453)
  • Symbol: VVV (provisional)
  • Source of truth: only the official site and X account. Anything else is a scam.

7 Expert Roles

Each kernel call can target a specific expert via expert_role (HTTP / MCP) or --expert (CLI). The expert shapes vocabulary, evaluation lens, and output emphasis.

ID           Name                Focus
onchain      Onchain Expert      Smart contracts, token launch, liquidity, on-chain mechanics
brand        Brand Expert        Visual identity, positioning, messaging, narrative aesthetics
growth       Growth Expert       Acquisition, holder mechanics, retention loops, funnel design
community    Community Expert    Discord/Telegram engagement, moderation systems, culture
audit        Audit Expert        Contract security, invariants, slashing vectors, economic exploits
narrative    Narrative Expert    Lore, meta-narrative, character arcs, cultural positioning
tier-design  Tier Design Expert  Holder tiers, token-gated benefits, incentive ladders
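As an illustration, here is how an HTTP request body targeting one of these experts could be built. This is a hypothetical sketch: the expert_role field name comes from this page, but the exact request schema for /api/chat may differ.

```python
import json

# Hypothetical body for POST /api/chat routed to the audit expert.
# "expert_role" is the parameter name documented for HTTP / MCP calls;
# the surrounding message schema is assumed, not confirmed.
payload = {
    "messages": [{"role": "user", "content": "review this mint function"}],
    "expert_role": "audit",
}
body = json.dumps(payload)
```

The CLI equivalent is the --expert flag shown in the Quickstart.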

Kernel modes

Three execution modes share the same substrate but differ in output discipline. Pick the mode that matches your task.

Mode   Endpoint              Use it for                                       Style
Chat   POST /api/chat (SSE)  Open conversation, brainstorming, expert lookup  Streaming, conversational
Agent  POST /api/agent/chat  Single-turn answer with tool grounding           Concise, structured
Plan   POST /api/agent/plan  Decompose a goal into ordered steps              Numbered list, risks called out

The kernel never blends modes: a plan request always returns a numbered plan, and a chat response is never collapsed into a list unless you ask for one.

Holder tiers

Tiers are pre-launch; final thresholds are set at TGE. The kernel advises against promising any unverified perk.

Tier       Threshold (TBD)  Benefits
Initiate   any holder       Public terminal, /docs, Discord read
Operator   top 50%          Discord write, expert chat unlocked
Architect  top 10%          Plan mode, custom skills upload, governance signal
Substrate  top 1%           Direct kernel ops, MCP server seat, beta features

Thresholds and exact perks are subject to change. Treat this table as design intent, not a commitment.
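For intuition only, the tier ladder above can be read as a percentile lookup. This is an illustrative sketch using the draft thresholds from the table; it is not how the kernel actually assigns tiers, and the numbers are placeholders.

```python
# Draft tiers from the table above, highest percentile floor first.
# "top 1%" is read as a balance-percentile rank >= 99, and so on.
TIERS = [
    ("Substrate", 99.0),
    ("Architect", 90.0),
    ("Operator", 50.0),
    ("Initiate", 0.0),
]

def tier_for_percentile(p: float) -> str:
    """p is the holder's balance percentile, 0 (smallest) to 100 (largest)."""
    for name, floor in TIERS:
        if p >= floor:
            return name
    return "Initiate"  # any holder qualifies for the base tier
```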

CLI · Install

Published to npm as vvvkernel-cli. Requires Node 18+.

npm install -g vvvkernel-cli
vvvkernel --version
# 1.0.1

Or one-shot via npx:

npx vvvkernel-cli ask "what is the kernel?"

CLI · Commands

Command                   Purpose
vvvkernel ask "<prompt>"  Single-turn chat against the kernel
vvvkernel plan "<goal>"   Generate an ordered plan with risks
vvvkernel experts         List the 7 experts with focus areas
vvvkernel mcp             Run as a stdio MCP server (for Claude Desktop / Cursor)
vvvkernel skills list     List local + remote skills
vvvkernel config          Print effective configuration

All commands respect --json for machine-readable output.

CLI · Custom skills

Drop a markdown file in ~/.vvvkernel/skills/. The CLI loads it as a callable skill on the next run.

# ~/.vvvkernel/skills/audit-checklist.md
---
name: audit-checklist
description: Run a security checklist on a contract address
---

You are an audit lead. Given a contract address:
1. Check ownership and proxy status
2. Verify source on Etherscan
3. List all privileged functions
4. Flag honeypot signals
5. Output a one-page report

Skills are local-first. Anything you add stays on your machine unless you opt in to publishing.
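The skill file format above (frontmatter delimited by --- followed by a prompt body) is simple enough to parse by hand. The sketch below shows one way a loader could split it; the CLI's actual parser may differ.

```python
def parse_skill(text: str) -> tuple[dict, str]:
    """Split '---'-delimited frontmatter from the prompt body.

    Assumes the file starts with '---' and frontmatter is simple
    'key: value' lines, as in the audit-checklist example.
    """
    _, frontmatter, body = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

skill_file = """---
name: audit-checklist
description: Run a security checklist on a contract address
---

You are an audit lead."""

meta, prompt = parse_skill(skill_file)
```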

CLI · Config

Configuration is read from ~/.vvvkernel/config.json with env overrides taking precedence.

{
  "endpoint": "https://vvvkernel.com",
  "model": "venice-substrate",
  "stream": true,
  "timeout_ms": 60000
}

Env vars: VVVKERNEL_ENDPOINT, VVVKERNEL_MODEL, VVVKERNEL_TOKEN.
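The precedence rule (env vars override config.json, which overrides defaults) can be sketched as a simple merge. The env var names come from this page; the default values mirror the sample config above. This is an illustration of the documented behavior, not the CLI's actual implementation.

```python
# Defaults mirror the sample ~/.vvvkernel/config.json above.
DEFAULTS = {"endpoint": "https://vvvkernel.com", "model": "venice-substrate"}

def effective_config(file_cfg: dict, env: dict) -> dict:
    """Merge defaults < config file < environment, per the docs."""
    cfg = {**DEFAULTS, **file_cfg}
    if env.get("VVVKERNEL_ENDPOINT"):
        cfg["endpoint"] = env["VVVKERNEL_ENDPOINT"]
    if env.get("VVVKERNEL_MODEL"):
        cfg["model"] = env["VVVKERNEL_MODEL"]
    return cfg
```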

MCP · Overview

VVVKernel speaks the Model Context Protocol. Any MCP-aware client (Claude Desktop, Cursor, Continue, custom) can mount the kernel as a tool surface.

Two transports are supported:

  • stdio — local CLI process spawned by the host. Lowest latency, zero network setup.
  • http — remote JSON-RPC over HTTP. Use this for shared servers, multi-seat teams, or when the host doesn't support stdio.

The protocol exposes three core methods: initialize, tools/list, and tools/call. The kernel registers tools for chat, plan, expert routing, and skill execution.

MCP · Claude Desktop

Edit claude_desktop_config.json:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "vvvkernel": {
      "command": "npx",
      "args": ["-y", "vvvkernel-cli", "mcp"],
      "env": {
        "VVVKERNEL_ENDPOINT": "https://vvvkernel.com"
      }
    }
  }
}

Restart Claude Desktop. The kernel appears in the tool list. Try: "Ask vvvkernel to draft a launch plan for a tier-1 release."

MCP · Cursor

Open Cursor → Settings → MCP → Add new server. Use the same shape as Claude Desktop:

{
  "vvvkernel": {
    "command": "npx",
    "args": ["-y", "vvvkernel-cli", "mcp"]
  }
}

Tools become callable from the agent panel. Useful inside Cursor: plan for refactor sequencing, experts to route a code question to the audit lead.

MCP · HTTP transport

Skip the CLI and connect directly:

curl -X POST https://vvvkernel.com/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{}}}'

List tools:

curl -X POST https://vvvkernel.com/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/list"}'

Call a tool:

curl -X POST https://vvvkernel.com/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"plan","arguments":{"goal":"ship phase-1 audit"}}}'

MCP · Tools reference

Tool     Args                                                Returns
chat     { message: string, expert?: string }                Single-turn reply, kernel-voiced
plan     { goal: string, horizon?: "day"|"week"|"quarter" }  Numbered plan with risks
experts  {}                                                  List of 7 experts with focus areas
route    { question: string }                                Best-fit expert + reasoning
skill    { name: string, input: object }                     Output of the named skill

API · Health

GET /health
200 OK
{"ok":true,"version":"1.0.1","uptime_s":12345}

API · Chat (SSE stream)

POST /api/chat
Content-Type: application/json

{"messages":[{"role":"user","content":"what is the kernel?"}]}

Response is text/event-stream. Each event is a JSON delta:

data: {"type":"delta","text":"VVVKernel is "}
data: {"type":"delta","text":"a Venice AI substrate..."}
data: {"type":"done"}
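A client reassembles the reply by concatenating delta texts until the done event. A minimal sketch of that loop, assuming the exact event shapes shown above (real clients should also tolerate blank keep-alive lines and unknown event types):

```python
import json

def collect_stream(lines: list[str]) -> str:
    """Join 'delta' texts from SSE 'data:' lines until a 'done' event."""
    out = []
    for line in lines:
        if not line.startswith("data:"):
            continue  # ignore comments, blank keep-alives, other fields
        event = json.loads(line[len("data:"):])
        if event["type"] == "done":
            break
        if event["type"] == "delta":
            out.append(event["text"])
    return "".join(out)

stream = [
    'data: {"type":"delta","text":"VVVKernel is "}',
    'data: {"type":"delta","text":"a Venice AI substrate..."}',
    'data: {"type":"done"}',
]
```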

API · Agent · chat & plan

Single-turn structured answers. Use these when you don't need streaming.

POST /api/agent/chat
{"input":"route this question to the right expert: tokenomics for a new chain"}

POST /api/agent/plan
{"goal":"launch phase-1 community in 30 days","horizon":"week"}

Both return {"output": string, "meta": {...}}.

API · Manifest

Machine-readable description of the kernel for client discovery:

GET /api/manifest
{
  "name": "vvvkernel",
  "version": "1.0.1",
  "experts": [...],
  "modes": ["chat","agent","plan"],
  "mcp": {"stdio":true,"http":true}
}
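One use of the manifest is transport selection: a client can inspect the mcp capabilities and fall back from stdio to HTTP. A sketch under the manifest shape shown above:

```python
def pick_transport(manifest: dict, prefer_local: bool = True) -> str:
    """Choose an MCP transport from the manifest's advertised support."""
    mcp = manifest.get("mcp", {})
    if prefer_local and mcp.get("stdio"):
        return "stdio"
    if mcp.get("http"):
        return "http"
    raise RuntimeError("no supported MCP transport advertised")

manifest = {"name": "vvvkernel", "mcp": {"stdio": True, "http": True}}
```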

Troubleshooting

Symptom                          Likely cause                Fix
CLI hangs on first call          Cold container start        Retry once. Subsequent calls are warm.
vvvkernel mcp exits immediately  Host not piping stdin       Only run via Claude Desktop / Cursor config, not standalone.
SSE stream cuts off              Proxy buffering             Set X-Accel-Buffering: no on intermediate proxies.
MCP tool not appearing           Config not reloaded         Fully quit Claude Desktop / Cursor, reopen.
experts returns garbage          CLI < 1.0.1                 Upgrade: npm i -g vvvkernel-cli@latest
HTTPS errors from CLI            Corporate TLS interception  Set NODE_EXTRA_CA_CERTS to your root CA.

Still stuck? Open an issue with the request id from the response header (x-vvv-rid).

Changelog

1.0.1 — current

  • Fix: vvvkernel experts now formats name + focus instead of [object Object].
  • Docs: full /docs page with MCP integration, HTTP API reference, troubleshooting.
  • Kernel: enriched system prompts (chat / agent / plan) with substrate context and 7-expert focus.

1.0.0

  • Initial public release on npm.
  • stdio MCP server, HTTP transport, plan + chat + experts commands.
  • Custom skills loader from ~/.vvvkernel/skills/.