Open Source MCP Server

Stop Switching Tabs. Command Every LLM From One Prompt.

Seven models, seven training sets, seven perspectives. Perplexity searches, Grok reasons, Gemini critiques, Kimi thinks step-by-step — different minds, one answer.

Gateway: one OpenRouter key
BYOB: your own provider keys
Perplexity: always needs its own key
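The Claude Desktop snippet later on this page points at `~/.tachibot/config.json`; a hedged sketch of what the two key modes could look like there. All key names here are illustrative assumptions, not the documented schema:

```json
{
  "keys": {
    "OPENROUTER_API_KEY": "sk-or-...",
    "PERPLEXITY_API_KEY": "pplx-..."
  }
}
```

In BYOB mode you would list each provider's key (e.g. `OPENAI_API_KEY`, `GEMINI_API_KEY`, `XAI_API_KEY`) instead of the single OpenRouter key; Perplexity keeps its own key in either mode.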
"Have Grok check Twitter for that error message"
"Ask Perplexity what changed in React 19 this week"
"Get Gemini to brainstorm, then have Kimi K2.5 and GPT-5.2 both analyze it"
7
Providers
35+
Tools
22
Prompt Techniques

Stop Trusting. Start Verifying.

Same question. Different approach. Better answer.

"What breaking changes are in React 19?"

One Model

Generic advice, might miss recent updates

No sources or documentation links

Could confuse React 18 vs 19 features

No way to verify accuracy

Unverified, possibly outdated

TachiBot

1. Run in Parallel
OpenAI · Google · Perplexity · OpenRouter
2. Cross-Verify

Models challenge each other, 3-200 rounds

3. Fact-Check
Perplexity · Grok

Live sources with recency filters

Verified, with official sources
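The three stages above map naturally onto a workflow file. A sketch using tool names shown elsewhere on this page; the `parallel` and `input` keys and the `{{...}}` interpolation syntax are assumptions about the workflow schema, shown only to illustrate the flow:

```yaml
steps:
  - tool: perplexity_ask        # 1. Run in Parallel: search across providers
    parallel: true
    output: search
  - tool: grok_reason           # 2. Cross-Verify: challenge the search results
    input: "Challenge these findings: {{search}}"
    output: critique
  - tool: perplexity_ask        # 3. Fact-Check: live sources with recency filters
    input: "Verify the surviving claims with recent sources: {{critique}}"
    output: verified
```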
Building Blocks for Thinking

Mix and Match Your Reasoning Pipeline

Mix and match tools, prompt techniques, and workflows to build your reasoning pipeline.

New

Multi-Model Planner

Create bulletproof implementation plans with a council of six AI models: Grok searches for ground truth, Qwen analyzes code, Kimi K2.5 reasons step-by-step and decomposes the work into dependency-ordered subtasks, GPT-5.2 critiques gaps, Qwen 235B drafts the synthesis, and Gemini judges the final plan with quality scores and verification checkpoints.

planner_maker + planner_runner

Multi-Model Council

Run your question through 4-6 AI models simultaneously. Each analyzes from its unique perspective. A final judge synthesizes the best insights, scores confidence, and resolves conflicts.

council + judge

7 AI Providers

GPT-5.2, Gemini, Grok, Perplexity, Kimi K2.5, Qwen, and MiniMax — each with unique strengths. Gateway mode with one OpenRouter key, or BYOB with your own provider keys.

98% Qwen 235B · HMMT | 76.8% Kimi K2.5 · SWE | 88.4% GPT-5.2 · GPQA | 95% Gemini 3 · AIME | 72.5% MiniMax · SWE
gateway + BYOB
New

22 Prompt Techniques

Research-backed patterns like first_principles, tree_of_thoughts, and council_of_experts. Preview the enhanced prompt before executing. Apply any technique to any model.

preview → execute
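Attaching a technique to a step might look like this; the `technique` and `preview` key names are assumptions, shown only to illustrate the preview → execute flow with techniques named on this page:

```yaml
steps:
  - tool: grok_reason
    technique: tree_of_thoughts   # assumed key: applies the prompt pattern
    preview: true                 # assumed key: show the enhanced prompt before executing
    output: draft
```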

YAML Workflows

Chain unlimited steps. Variable interpolation, step dependencies, auto-distillation, comparison tables, and optional AI judge. PingPong debates, research pipelines, code reviews.

workflow + planner
Decision-Making Framework

How Seven Minds Think Together

A structured reasoning pipeline — ground in data, decompose, explore alternatives, stress test for holes, then judge.

01 · Ground Truth

Search real-time data from 4 providers. No thinking starts without facts.

Perplexity · Grok Search · Gemini Search · OpenAI Search
02 · Break Down

Decompose into atomic parts. Map dependencies, constraints, and execution order.

Kimi K2.5 · Qwen 235B · Qwen Algo
03 · Explore Paths

Generate alternative approaches from different training data and perspectives.

GPT-5.2 · Gemini · Grok · MiniMax
04 · Stress Test

Attack assumptions. Find holes, blind spots, and failure modes in every path.

GPT-5.2 Critic · Qwen Reason
05 · Judge

Synthesize the best elements from every model. Resolve conflicts. Score everything. Not 10? Here's why — and how to fix it.

Gemini 3 Pro · Kimi K2.5
Confidence: 91%
Code Quality: 8.4/10
-1.6 — Cyclomatic complexity >10 in handler
Fix: Extract validation to utility class
Security: 7/10
-3 — No rate limiting on auth endpoints
Fix: Add express-rate-limit, 5 req/min on /login
Performance: 9/10
-1 — O(n²) nested loop on unindexed array
Fix: Use HashMap for O(1) lookup
Every deduction comes with a reason and a fix. No mystery scores.
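The five stages above can be chained into a single workflow. A sketch using only tool names that appear elsewhere on this page; the `input` key and `{{...}}` interpolation syntax are assumptions about the workflow schema:

```yaml
steps:
  - tool: perplexity_ask        # 01 Ground Truth: no thinking starts without facts
    output: facts
  - tool: qwen_coder            # 02 Break Down: decompose into atomic parts
    input: "Decompose into dependency-ordered subtasks: {{facts}}"
    output: subtasks
  - tool: openai_brainstorm     # 03 Explore Paths: generate alternatives
    input: "Alternative approaches for: {{subtasks}}"
    output: options
  - tool: grok_reason           # 04 Stress Test: attack assumptions, find failure modes
    input: "Find holes and blind spots in: {{options}}"
    output: critique
  - tool: gemini_analyze_text   # 05 Judge: synthesize, resolve conflicts, score
    input: "Judge and score these paths: {{options}} given {{critique}}"
    output: verdict
```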
Why these models

Each model was chosen for a specific strength. Different training data, different benchmarks, different blind spots.

98%
Qwen 235B
HMMT — Harvard-MIT math tournament. Proof-based, multi-step olympiad problems that PhD students struggle with.
95%
Gemini 3 Pro
AIME — American Invitational Math Exam. Top math competition, one level below the International Math Olympiad.
88.4%
GPT-5.2
GPQA Diamond — PhD-level science questions written by domain experts. Tests deep reasoning, not memorization.
76.8%
Kimi K2.5
SWE-Bench Verified — resolves real GitHub issues from open-source repos. Tops Gemini 3 Pro (76.2%). Open-weights.
72.5%
MiniMax M2.1
SWE-Bench — agentic code fixes. Best value at $0.27/M input tokens. 10x cheaper than GPT-5.2 for routine tasks.
1261
QwQ-32B
CodeElo — competitive programming rating. Solves Codeforces-style algorithm problems. Efficient at 32B parameters.
Real Workflow Output

5 Models. One Answer.

Real GPT-5 → GPT-5.2 migration analysis, ~3 minutes

Query

"Should I migrate from GPT-5 to GPT-5.2? Differences, breaking changes, migration steps."

1
Version Discovery · Perplexity

GPT-5.2 released Nov 12 — automatic migration, backward compatible

2
Feature Comparison · Grok

2 modes, 8 personalities, 25% better coding, 15% better factuality

3
Migration Impact · GPT-5.2

No breaking changes — backward compatible until Q1 2026

4
Performance Analysis · Gemini

30% latency reduction, 10% fewer hallucinations

5
Final Recommendation · Kimi K2.5

Low-risk upgrade — enable in staging, test 1 week, then production

Verdict: Safe to upgrade
No breaking changes, backward compatible, easy rollback
Confidence: 92%

Customize Everything

Toggle tools, write workflows, control exactly what loads

Profile System

Toggle tools on/off per project

tools.config.json
{
  "customProfile": {
    "tools": {
      "perplexity_ask": true,
      "grok_reason": true,
      "qwen_coder": false
    }
  }
}
1 tool ≈ 400 tokens
All 35+ tools loaded dynamically

YAML Workflows

Chain models into multi-step pipelines

general-council.yaml
steps:
  - tool: grok_reason
    output: reasoning
  - tool: perplexity_ask
    output: research
  - tool: gemini_analyze_text
    output: patterns
  - tool: openai_brainstorm
    output: final

Pass outputs between steps, run in parallel, add variables. Save as .yaml or .json.
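Variables might look like this; the `variables` key name is an assumption about the schema, shown only to illustrate interpolation:

```yaml
variables:
  topic: "React 19 breaking changes"
steps:
  - tool: perplexity_ask
    input: "Latest updates on {{topic}}"
    output: research
  - tool: grok_reason
    input: "Cross-check these findings: {{research}}"
    output: final
```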

Works With Leading AI Providers

Always using the latest models from each provider

OpenAI
GPT-5.2 series
Google
Gemini 3 Pro
Perplexity
Sonar Pro search
xAI
Grok 4.1
OpenRouter
Qwen3-235B, Kimi K2.5
MiniMax
M2.1 agentic model
Anthropic
Claude models

Works best with Claude Code MCP integration

Get Started in Minutes

Add to Claude Desktop or any MCP client

1Install via npm
npm install -g tachibot-mcp
2Add to Claude Desktop config
{
  "mcpServers": {
    "tachibot": {
      "command": "tachibot-mcp",
      "args": ["--config", "~/.tachibot/config.json"]
    }
  }
}

~/.config/Claude/claude_desktop_config.json

3Start using
tachibot workflow run general-council "Your query here"

Built in public. Backed by stars.

TachiBot is open source and actively maintained. If it helps your workflow, a star helps us keep going.

GitHub Stars · npm downloads · Last commit