Open Source MCP Server

Stop AI Hallucinations Before They Start

Run models from OpenAI, Google, Anthropic, xAI, Perplexity, and OpenRouter in parallel. They check each other's work, debate solutions, and catch errors before you see them.

Works best with Claude Code MCP integration
6 Providers
Multi-Model Verification
31 Tools
Fully Configurable
3-200 Rounds
Debate & Refinement
Stop Trusting. Start Verifying.

Core Capabilities

Built for developers who need reliable AI reasoning

Parallel Verification

Run multiple models simultaneously from OpenAI, Google, Anthropic, xAI, Perplexity, and OpenRouter. They vote on answers and cross-check each other's work in parallel.

simultaneous execution

Live Fact-Checking

Perplexity and Grok search the web in real time for the latest information, with recency filters (e.g., the past week). Get verified answers with real sources, not hallucinated citations.

real-time search

Multi-Round Debates

Make models argue for 3-200 rounds to refine solutions. Competitive mode (challenge each other), collaborative mode (build together), or debate mode (structured discussion).

3-200 rounds
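As a sketch, a debate step can be expressed in the same YAML workflow format shown later on this page (the `focus` tool and its `mode`/`rounds` parameters appear in the swarm-think example; the specific mode values here mirror the three modes described above and are assumptions, not documented API):

```yaml
# Hypothetical sketch: one debate step using the focus tool.
# Mode values follow the three modes described above (assumed).
steps:
  - tool: focus
    params:
      mode: "debate"   # or "competitive" / "collaborative"
      rounds: 20       # anywhere from 3 to 200
    output: debated
```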

Adversarial Challenge

Built-in challenger tool finds logical flaws, pokes holes in reasoning, and prevents echo chambers. Bad answers get caught before you see them.

error prevention

Configurable Tools

Turn tools on/off in tools.config.json. Need just one tool? That's ~400 tokens of overhead. Want all 31 tools? ~13k tokens. You control costs and context usage.

400-13k tokens

Custom Workflows

Chain unlimited steps with YAML/JSON config. Pass outputs between steps, run models in parallel, branch conditionally. Works best with Claude Code MCP integration.

unlimited steps

How It Reduces Hallucinations

Run multiple AI models on the same question. They check each other's answers, debate solutions, and catch mistakes before you see them.

Based on peer-reviewed research (arXiv:2406.04692)

The Problem

A single AI model can confidently give you wrong answers. It doesn't know when it's making things up.

No way to verify if the answer is correct
You have to manually fact-check everything
Mistakes only show up after you've used the answer

What TachiBot Does

TachiBot runs your question through multiple models at once. They generate answers independently, then review each other's work to catch mistakes.

Step 1
4-6 models answer your question separately
Step 2
Each model reviews the others' answers and points out errors
Step 3
Final answer combines the best parts and removes mistakes
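The three steps above can be sketched in the same YAML workflow format TachiBot uses for custom workflows (the tool names below all appear elsewhere on this page; the exact parameters and step wiring are illustrative assumptions, not the shipped pipeline):

```yaml
# Hypothetical sketch of the three steps above (assumed wiring).
steps:
  # Step 1: models answer independently, in parallel
  - tool: openai_brainstorm
    output: answer_a
  - tool: gemini_brainstorm
    output: answer_b
  - tool: perplexity_ask
    output: answer_c

  # Step 2: cross-review to surface errors
  - tool: challenger
    output: review

  # Step 3: synthesize the best parts into a final answer
  - tool: think
    params:
      thought: "Merge the strongest points, drop flagged mistakes"
    output: final
```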

Why This Works

Researchers tested this approach and published the results. When models check each other, hallucinations drop significantly.

30-40% Fewer Mistakes
Published research shows measurable reduction
More Accurate Answers
Models catch errors other models would miss
Open Research
Published on arXiv, code on GitHub - verify it yourself
Real Workflow Output

The AI your AI calls for help.

Real GPT-5 to GPT-5.1 migration analysis with 5 AI models

5 steps
~3 minutes
5 AI models
QUERY
"I'm using GPT-5 in production. Should I migrate to GPT-5.1? What are the differences, breaking changes, and migration steps?"
Two Modes: GPT-5.1 Instant (2x faster) and GPT-5.1 Thinking (deeper reasoning) vs GPT-5's single tier
8 New Personalities: Customizable tone and style - addresses GPT-5's 'too neutral' feedback
Performance: 20-30% faster inference, 25% better at coding tasks, 15% improved factuality
Context: 2x longer session retention without drift - better for complex conversations
Real-time Integration: Dynamic web updates for current information

Real Example: Deep Research with Verification

Stop Trusting. Start Verifying.

Single Model

"What breaking changes are in React 19?"
Generic advice, might miss recent updates
No sources or official documentation links
Could confuse React 18 vs 19 features
Unreliable, unverified answers

TachiBot

"What breaking changes are in React 19?"
1. Run in Parallel
OpenAI · Google · Perplexity · OpenRouter
2. Debate & Refine

Models challenge each other (3-200 rounds)

3. Verify Facts
Perplexity · Grok

Search live sources with recency filters

Accurate list with official documentation

Why You Need This

AI makes stuff up. One model gives you confident wrong answers. You waste hours debugging hallucinations.

Token costs eat your budget. Every tool loaded costs tokens. 31 tools = thousands of tokens per request before you even start.

You're stuck with rigid workflows. Want to verify an API with 3 different models? Build a custom 40-step process? Too bad.

One model isn't enough. Complex problems need multiple perspectives. But coordinating models manually is painful.

What You Get

AI models check each other. Perplexity researches, Grok verifies, Challenger pokes holes. Bad answers get caught before you see them.

You control token costs. Need one tool? ~400 tokens. Need all 31? ~13k. Turn tools on/off in one config file.

Build any workflow you want. YAML/JSON config. Chain unlimited steps. Customize parameters. Make models debate for 200 rounds if you want. Have fun.

Models work together. Multiple AI models brainstorm, build on ideas, and synthesize better solutions than any single model can produce.

Works With Leading AI Providers

Always using the latest models from each provider

OpenAI
GPT models
Google
Gemini models
Perplexity
Sonar search
xAI
Grok models
OpenRouter
Qwen Coder & more
Anthropic
Claude models

Works best with Claude Code MCP integration

Full Control. Zero Lock-In.

Customize Everything

Control token costs and build custom workflows with simple config files

Profile System

Toggle tools on/off to control token usage

Choose a preset profile or create your own. Toggle individual tools on/off to control exactly which capabilities load and how many tokens you use.

tools.config.json (JSON)
{
  "customProfile": {
    "enabled": true,  // ← Use custom profile
    "tools": {
      // Research tools
      "perplexity_ask": true,    // ✓ ON
      "scout": true,             // ✓ ON

      // Reasoning tools
      "grok_reason": true,       // ✓ ON
      "challenger": true,        // ✓ ON
      "verifier": true,          // ✓ ON

      // Creative tools
      "openai_brainstorm": true, // ✓ ON
      "gemini_analyze_code": false, // ✗ OFF
      "qwen_coder": false        // ✗ OFF
    }
  }
}
1 tool enabled: ~400 tokens
All 31 tools enabled: ~13k tokens

Optimize your context. Tools take token space. Load only what you need. Switch profiles anytime.

Custom Workflows

Write your own multi-step AI workflows

Define custom workflows in YAML or JSON. Chain any tools together, pass outputs between steps, run models in parallel. This example runs 4 models simultaneously, synchronizes their perspectives, then debates to refine the solution.

swarm-think.yaml (YAML)
# Real workflow from TachiBot
steps:
  # Step 1: 4 models run in parallel
  - tool: gemini_brainstorm
    output: creative_view
  - tool: openai_brainstorm
    output: systematic_view
  - tool: perplexity_ask
    output: research_facts
  - tool: qwen_coder
    output: technical_view

  # Step 2: Synchronize perspectives
  - tool: think
    params:
      thought: "Combine all perspectives"
    output: sync

  # Step 3: Debate to refine
  - tool: focus
    params:
      mode: "deep-reasoning"
      rounds: 5
    output: refined

Build your own workflows. Create unlimited variations. Save as .yaml or .json files. Run with workflow(name: "swarm-think")
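Under the hood, an MCP client invokes that workflow as a standard MCP tools/call request. A minimal sketch, assuming the tool is named `workflow` and accepts a `query` argument (the argument names beyond `name` are assumptions):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "workflow",
    "arguments": {
      "name": "swarm-think",
      "query": "What breaking changes are in React 19?"
    }
  }
}
```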

Get Started in Minutes

Add to Claude Desktop or any MCP client

Installation
# 1. Install via npm
npm install -g tachibot-mcp

# 2. Add to Claude Desktop config
# ~/.config/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "tachibot": {
      "command": "tachibot-mcp",
      "args": ["--config", "~/.tachibot/config.json"]
    }
  }
}

# 3. Configure API keys (optional)
{
  "apiKeys": {
    "openai": "sk-...",
    "gemini": "...",
    "perplexity": "..."
  }
}

# 4. Start using!
tachibot workflow run swarm-think "Your query here"