Agent API
VVVKernel talks to other agents over a small JSON-over-HTTP surface.
What this is
VVVKernel is a Venice AI substrate terminal. This API lets other AI agents — not just humans — pull the kernel's character profile, send prompts, and receive structured launch plans. Use it to make VVVKernel a peer in a multi-agent swarm: one agent plans, VVVKernel reviews, another executes onchain.
- No SDK to install — plain HTTP + JSON.
- No auth required on local. In production, add a bearer token in the Authorization header.
- Identify yourself with X-Agent-Id so VVVKernel can mention you in its reply.
- CORS is open (*) so browser-based agents can call it directly.
GET /api/agent/manifest
Returns the public character profile: identity, personality, capabilities, expert roles, endpoint map. Other agents call this once to learn how to talk to VVVKernel.
# request
curl http://localhost:8080/api/agent/manifest

# response
{
  "ok": true,
  "manifest": {
    "name": "VVVKernel",
    "substrate": "Venice AI",
    "chain": "base-8453",
    "expert_roles": ["Growth Expert", "Brand Expert", ...],
    "endpoints": { ... }
  }
}
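A client typically fetches the manifest once at startup and caches the roles and endpoint map. A minimal parsing sketch, assuming the field names shown in the response above (the HTTP fetch itself is left to whatever client you use):

```python
import json

def parse_manifest(raw: str) -> tuple[list[str], dict]:
    """Extract expert roles and the endpoint map from a manifest response."""
    body = json.loads(raw)
    if not body.get("ok"):
        raise RuntimeError("manifest request failed")
    manifest = body["manifest"]
    return manifest["expert_roles"], manifest["endpoints"]

# Example with an abbreviated manifest body (values are illustrative):
sample = (
    '{"ok": true, "manifest": {"name": "VVVKernel", '
    '"expert_roles": ["Growth Expert", "Brand Expert"], '
    '"endpoints": {"chat": "/api/agent/chat"}}}'
)
roles, endpoints = parse_manifest(sample)
```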
POST /api/agent/chat
Send a free-form prompt and an expert role. Returns a structured reply: summary, plan steps, risks, next step, metric to watch.
# request
curl -X POST http://localhost:8080/api/agent/chat \
  -H "Content-Type: application/json" \
  -H "X-Agent-Id: orchestrator-7" \
  -d '{
    "prompt": "How do I tune Bankrbot liquidity for the first 24h?",
    "expert_role": "Onchain Expert",
    "context": { "tier": "1k+ VVV", "chain": "base" }
  }'

# response
{
  "ok": true,
  "agent": "VVVKernel",
  "agent_id_caller": "orchestrator-7",
  "mode": "chat",
  "result": {
    "summary": "[Onchain Expert] On 'Bankrbot launch mechanics on Base': ...",
    "plan": ["Frame: ...", "Context: ...", "Identify ...", "Return: ..."],
    "risks": ["Liquidity depth in the first 24h post-launch."],
    "next_step": "Identify the highest-leverage next action ...",
    "metric": "Track first 24h: holder count, ..."
  }
}
POST /api/agent/plan
Same request shape as /chat, but signals that you want a fuller plan. Use it when you need a structured launch plan rather than a conversational reply.
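Because /chat and /plan share one body shape, a client can use a single request builder and swap the path. A stdlib-only sketch (the base URL is the local default from the examples above; `agent_post` is a hypothetical helper, not part of the API):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # local default; point at your deployment

def build_request(path: str, body: dict, agent_id: str) -> urllib.request.Request:
    """Assemble a POST request for /api/agent/chat or /api/agent/plan."""
    return urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json", "X-Agent-Id": agent_id},
        method="POST",
    )

def agent_post(path: str, body: dict, agent_id: str) -> dict:
    """Send the request and decode the JSON reply (requires a running kernel)."""
    with urllib.request.urlopen(build_request(path, body, agent_id)) as resp:
        return json.loads(resp.read())

# Same body, different endpoint:
# agent_post("/api/agent/plan", {"prompt": "...", "expert_role": "Growth Expert"}, "planner-1")
```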
Multi-agent recipe
1. Orchestrator calls /api/agent/manifest to learn VVVKernel's expert roles.
2. Planner agent posts a draft to /api/agent/plan with expert_role: "Growth Expert".
3. Reviewer agent re-posts the same prompt with expert_role: "Onchain Expert" for a risk pass.
4. Executor agent reads the merged plan array and triggers Bankrbot / Twitter / Telegram actions.
VVVKernel stays a stateless responder — the swarm holds the memory. That keeps it cheap to embed in any agent stack.
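Because VVVKernel keeps no state between calls, merging the planner's and reviewer's plan arrays is the swarm's job. A sketch of one way to do it before handing the result to the executor (the order-preserving dedup strategy is an assumption, not part of the API):

```python
def merge_plans(*plans: list[str]) -> list[str]:
    """Concatenate plan arrays from several expert passes,
    dropping exact-duplicate steps while preserving order."""
    merged: list[str] = []
    seen: set[str] = set()
    for plan in plans:
        for step in plan:
            if step not in seen:
                seen.add(step)
                merged.append(step)
    return merged

# growth_pass and risk_pass would come from two /api/agent/plan calls:
growth_pass = ["Frame: launch narrative", "Identify: first 100 holders"]
risk_pass = ["Identify: first 100 holders", "Return: liquidity risk checklist"]
combined = merge_plans(growth_pass, risk_pass)
```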