# OpenSwarm
## What it is

Autonomous AI agent orchestrator supporting Claude, GPT, Codex, Ollama, LM Studio, and llama.cpp. It picks up Linear issues, runs them through a Worker/Reviewer pair pipeline, reports progress to Discord, and retains long-term memory via LanceDB.
## What we considered keeping

- Linear/Notion as task source. We considered this. Rejected: the user explicitly ruled out external task-source integration.
- LanceDB vector memory. Considered for cross-mission semantic search. Rejected: markdown files are fine until they aren’t, and we’re not at “aren’t” scale.
- Worker/Reviewer pair. We have an analogous Worker/Validator pair, plus crosscheck and ui-qa specialists. Same idea, finer-grained.
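The Worker/Validator pairing described above can be sketched roughly as a retry loop: the worker produces output, the validator judges it, and rejected work goes back around with feedback. This is a minimal illustrative sketch, not the harness's actual code; `runAgent`, the role names, and the `REJECT` convention are all assumptions invented for the example.

```typescript
// Hypothetical Worker/Validator loop. In the real harness the agent call
// would hit the Claude API; here runAgent is a stand-in placeholder.

type StepResult = { output: string; approved: boolean };

async function runAgent(role: string, input: string): Promise<string> {
  // Placeholder for a real model call with a role-specific prompt.
  return `[${role}] processed: ${input}`;
}

async function workerValidatorStep(
  task: string,
  maxRetries = 2
): Promise<StepResult> {
  let output = "";
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    output = await runAgent("worker", task);
    const verdict = await runAgent("validator", output);
    // Assumed convention: the validator emits "REJECT" to send work back.
    if (!verdict.includes("REJECT")) {
      return { output, approved: true };
    }
    // Feed the validator's feedback into the next worker attempt.
    task = `${task}\nValidator feedback: ${verdict}`;
  }
  return { output, approved: false };
}
```

The crosscheck and ui-qa specialists would slot in as additional validator-style roles over the same loop, which is what "same idea, finer-grained" means in practice.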
## What we dropped

- Discord control surface. It coupled the harness to a chat UI we don't run.
- Multi-model support. Claude-only.
- TUI as primary surface. We have a Next.js dashboard.
## Differences in philosophy

| OpenSwarm | Papercup |
|---|---|
| Linear-driven | features.json-driven |
| Discord-controlled | HTTP API + Next.js UI |
| Vector DB memory | Markdown files |
| Worker/Reviewer pair | Worker/Validator + 11 other roles |
| Multi-model (Claude/GPT/Ollama) | Claude-only |
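For concreteness, "features.json-driven" might look something like the sketch below. The field names and schema are hypothetical, invented for illustration; the real file's shape is defined by the harness and not shown in this document.

```typescript
// Hypothetical shape of a features.json entry (illustrative only).
interface FeatureEntry {
  id: string;
  title: string;
  status: "pending" | "in_progress" | "done";
  assignedRole?: string; // e.g. "worker", "ui-qa" (assumed role names)
}

// A task source is just a checked-in array: no tracker, no webhook.
const example: FeatureEntry[] = [
  { id: "feat-001", title: "Add login page", status: "pending" },
];
```

The design trade is visible here: a flat file is diffable and lives in the repo, where Linear gives you assignment, comments, and notifications for free.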
## Honest take

OpenSwarm is the closest spiritual cousin to our harness in the "autonomous dev team" category. The differences are tooling choices, not architectural ones. Linear-driven is a totally reasonable design; it just doesn't fit our constraint (no external tracker).
If you have a small team using Linear and want AI to drain the backlog, OpenSwarm is the reference implementation.