
Ruflo — Agent Swarms for Claude Code

Spawn entire swarms of specialized agents (Architect, Coder, Tester) that work on a task in parallel instead of one lonely Claude.

What Ruflo is

Ruflo is an agent orchestration layer for Claude Code. You spawn entire swarms of specialized agents (Architect, Coder, Tester, Researcher, …) that work on a task in parallel instead of one Claude grinding through everything alone. Built for solo founders, builders, and anyone shipping real stuff with Claude Code who wants to be done faster. Drop in a goal, Ruflo splits the work, you review at the end. Free, open source, runs locally on your Claude Code install.

Installation

Option 1 — One-liner in the terminal:

curl -fsSL https://cdn.jsdelivr.net/gh/ruvnet/ruflo@main/scripts/install.sh | bash

Option 2 — Paste into Claude Code:

Install Ruflo via "npx ruflo@latest init --wizard" and walk me through the setup. Then show me how to spawn my first swarm.

Liking this?

Inside the community I show you how I run all of this day-to-day — live sessions, direct feedback on your setup, and my full configs.

Use cases

1. Ship a landing page in one evening

Problem: You want to validate a new idea, but just the basic landing (hero, features, CTA, pricing, FAQ, footer) eats a whole day.

Swarm setup:

  • Topology: hierarchical
  • maxAgents: 6
  • Agents: system-architect, coder (×2), ui-designer, tester, reviewer

Prompt:

Spawn a hierarchical swarm with 6 agents (system-architect, 2x coder, ui-designer, tester, reviewer) and build a landing page for [PRODUCT IN 1 SENTENCE].

Stack: Next.js 14 App Router, TailwindCSS, shadcn/ui, deployed on Vercel.
Sections: Hero with email capture, 3 feature blocks, social proof slot, pricing (3 tiers), FAQ, footer.
Colors: [COLORS], tone: direct and concrete, no buzzwords.

Architect sets the structure, designer sets the styling system, coders build the sections in parallel, tester checks responsiveness + Lighthouse scores, reviewer goes over it and lands a PR-ready commit at the end.

What you get: A complete repo with a deploy-ready, responsive landing page, email capture wired up. A few hours instead of a full day, and you can do something else while the swarm runs.


2. Understand a foreign repo fast (onboarding)

Problem: New client, new project, or you're taking over something from someone else. Two days of reading before you can touch anything.

Swarm setup:

  • Topology: mesh
  • maxAgents: 5
  • Agents: researcher (×3), code-analyzer, documenter

Prompt:

Analyze this repo like a new dev starting Monday. 3 researchers in parallel:
- researcher-1: Architecture — which frameworks, which modules, data flow, external services
- researcher-2: Entry points — how do you start it, how do you deploy it, which ENV vars are required
- researcher-3: Domain logic — what does the app do from a user's view, which business rules are in the code

code-analyzer finds the hotspots (files with the most dependencies, most complex functions, test-coverage gaps).

documenter consolidates everything into an ONBOARDING.md with: quick start, architecture diagram as Mermaid, "where is what," the 10 files you must know, known issues.

What you get: An ONBOARDING.md you can read in 15 minutes and know where to start. Keep it around; even the you of three months from now will thank you.
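The code-analyzer's "files with the most dependencies" pass can be approximated with a plain script: count import statements per file as a rough hotspot signal. A minimal sketch using Python's `ast` module (the `import_counts` helper and the sample repo are illustrative, not part of Ruflo):

```python
import ast

def import_counts(sources: dict[str, str]) -> list[tuple[str, int]]:
    """Rank files by number of import statements (a rough dependency signal)."""
    counts = {}
    for name, src in sources.items():
        tree = ast.parse(src)
        counts[name] = sum(
            isinstance(node, (ast.Import, ast.ImportFrom))
            for node in ast.walk(tree)
        )
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

# Toy repo: app.py pulls in the most modules, so it ranks first
repo = {
    "app.py": "import os\nimport db\nfrom api import routes\n",
    "db.py": "import sqlite3\n",
    "util.py": "x = 1\n",
}
print(import_counts(repo))
```

Real dependency analysis also weighs call graphs and complexity, but even this crude count usually points at the files worth reading first.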


3. Multi-platform content pipeline

Problem: You have a solid idea, a transcript, or a blog article and you want to pull a Twitter thread, LinkedIn post, reel script, and newsletter out of it. Doing it alone is zero fun.

Swarm setup:

  • Topology: hierarchical
  • maxAgents: 6
  • Agents: researcher, content-writer (×3), editor, seo-optimizer

Prompt:

Input: [PASTE TRANSCRIPT / ARTICLE / IDEA].

Goal: 1 core idea, 5 formats.

- researcher pulls the 3 strongest takes (hot takes, no truisms)
- content-writer-1: Twitter thread, 8-12 tweets, hook in tweet 1, CTA at the end
- content-writer-2: LinkedIn post, 1200-1500 chars, PAS framework, personal opener
- content-writer-3: Reel/TikTok script, 30-45 sec, hook in the first 2 sec, 3 beats, loop CTA
- seo-optimizer: Newsletter version, 400-600 words with A/B subject-line variant
- editor goes over it, kills fluff, checks that every output stands on its own

Tone: direct, concrete, second-person, no buzzwords, no emojis except where they actually fit.

What you get: 5 finished drafts in one file, each tuned per platform. 30 minutes of work instead of half a day.
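The editor pass can be backed by a mechanical check before the human read: verify each draft respects the platform constraints from the prompt (8-12 tweets, 1200-1500 chars for LinkedIn, 400-600 words for the newsletter). A sketch; the `check_drafts` helper is illustrative, the limits come straight from the prompt above:

```python
def check_drafts(thread: list[str], linkedin: str, newsletter: str) -> list[str]:
    """Return a list of constraint violations; empty means all drafts pass."""
    problems = []
    if not 8 <= len(thread) <= 12:
        problems.append(f"thread has {len(thread)} tweets, want 8-12")
    if not 1200 <= len(linkedin) <= 1500:
        problems.append(f"LinkedIn post is {len(linkedin)} chars, want 1200-1500")
    words = len(newsletter.split())
    if not 400 <= words <= 600:
        problems.append(f"newsletter is {words} words, want 400-600")
    return problems

# Example: a 5-tweet thread gets flagged, the other two drafts pass
print(check_drafts(["tweet"] * 5, "x" * 1300, "word " * 450))
```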


4. Competitor teardown

Problem: You want to build a feature, landing, or pricing model and not start from zero. Scrolling through 5 competitors is tedious and unstructured.

Swarm setup:

  • Topology: mesh
  • maxAgents: 6
  • Agents: researcher (×4), analyst, reviewer

Prompt:

Competitor teardown for [YOUR NICHE / PRODUCT].

4 researchers in parallel, each takes one competitor:
- [URL 1]
- [URL 2]
- [URL 3]
- [URL 4]

Per competitor pull:
- Positioning (1 sentence from their hero)
- Price tiers and what's in each
- Main features (list)
- Strongest copy elements (hooks, CTAs, social proof)
- Weaknesses / gaps
- Target customer (who it speaks to)

analyst builds a comparison matrix as a markdown table and derives 5 angles of attack: where everyone's weak, where I can differentiate.

reviewer checks for nonsense (no hallucinations — if it's not on the page, cut it).

What you get: A comparison table + attack plan. Priceless as the base for your own positioning.
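The analyst's comparison matrix is just a markdown table built from the researchers' findings. A minimal sketch of that consolidation step (function name and column set are illustrative):

```python
def comparison_matrix(competitors: dict[str, dict[str, str]]) -> str:
    """Render per-competitor findings as a markdown comparison table."""
    names = sorted(competitors)
    fields = ["Positioning", "Pricing", "Weakness"]  # illustrative columns
    lines = [
        "| Field | " + " | ".join(names) + " |",
        "|---" * (len(names) + 1) + "|",
    ]
    for field in fields:
        cells = [competitors[n].get(field, "-") for n in names]
        lines.append(f"| {field} | " + " | ".join(cells) + " |")
    return "\n".join(lines)

data = {
    "CompA": {"Positioning": "all-in-one", "Pricing": "$29+", "Weakness": "slow"},
    "CompB": {"Positioning": "cheapest", "Pricing": "$9", "Weakness": "no API"},
}
print(comparison_matrix(data))
```

The reviewer's "if it's not on the page, cut it" rule applies to the cell contents, not the table shape: missing findings render as "-" instead of being guessed.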


5. Data analysis pipeline from CSV/API

Problem: You have data (a Stripe export, a CSV from some tool, an API response) and want insights. An Excel pivot isn't enough, and hiring a data scientist is overkill.

Swarm setup:

  • Topology: hierarchical
  • maxAgents: 5
  • Agents: data-engineer, analyst (×2), coder, reviewer

Prompt:

Input: [PATH TO CSV / API ENDPOINT].

Goal: analysis script in Python + report.

1. data-engineer: read the CSV, detect schema, report nulls/duplicates/outliers, clean DataFrame
2. analyst-1: descriptive stats (counts, distributions, top-N), time series if there's a date column
3. analyst-2: segmentation — who are the top 10% users/customers/transactions, what do they have in common
4. coder: Matplotlib/Plotly charts for the 5 most important findings
5. reviewer: Markdown report with findings at the top, charts in the middle, "what should I ask next" at the bottom

Give me the 3 non-obvious insights at the end.

What you get: analyze.py + report.md + PNG charts. Reproducible: rerunning it on new data is one command.
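The data-engineer step (detect schema issues: nulls, duplicates, outliers) boils down to a few lines. A stdlib-only sketch of that pass, using the 1.5×IQR rule for outliers (the `profile` helper and sample data are illustrative):

```python
import csv
import io
import statistics

def profile(csv_text: str, numeric_col: str) -> dict:
    """Report nulls, duplicate rows, and IQR outliers for one numeric column."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    nulls = sum(1 for r in rows if any(v == "" for v in r.values()))
    dupes = len(rows) - len({tuple(r.values()) for r in rows})
    values = [float(r[numeric_col]) for r in rows if r[numeric_col] != ""]
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    outliers = [v for v in values if v < q1 - 1.5 * iqr or v > q3 + 1.5 * iqr]
    return {"rows": len(rows), "nulls": nulls, "dupes": dupes, "outliers": outliers}

# One missing amount, one duplicate row, one wildly large value
data = "name,amount\na,10\nb,12\nc,11\nd,13\na,10\ne,\nf,9000\n"
print(profile(data, "amount"))
```

In the real pipeline this would run on a pandas DataFrame, but the checks are the same; the point is that the cleaning report is computed, not eyeballed.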


Pro tip

Start with topology: "hierarchical" and maxAgents: 6-8. Mesh sounds cooler, but the more agents talking in parallel, the faster they drift from the goal. Hierarchical means one queen stays on course and the rest deliver. Only scale up once you know how it feels.
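As a config object, that starting point looks something like this. The key names mirror the "Swarm setup" blocks above; treat the exact shape as an assumption, not Ruflo's documented API:

```python
# Hypothetical swarm config; keys mirror the "Swarm setup" blocks above,
# the exact schema is an assumption, not Ruflo's documented API.
swarm_config = {
    "topology": "hierarchical",  # one queen coordinates, workers report back
    "maxAgents": 6,              # start at 6-8; scale only once it feels stable
    "agents": [
        "system-architect",
        "coder", "coder",
        "ui-designer",
        "tester",
        "reviewer",
    ],
}
print(swarm_config["topology"], swarm_config["maxAgents"])
```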

Want to jump straight in?

Here's the tool.

Did this help you out? Inside the Vine community I drop new workflows every week that never make it here.