March 15, 2026 · 7 min read

The Agentic Pipeline Pattern: One Architecture, Endless Business Problems

One architecture, many problems

When I built our autonomous outreach system, I kept noticing something: the architecture had very little to do with outreach specifically. The orchestrator didn't care about landing pages. The agents didn't know they were targeting plumbers. The state layer didn't know anything about emails.

What I'd actually built was a general-purpose pattern for autonomous business automation. The specific use case — find businesses, build their sites, send emails — was just one configuration of a more fundamental structure.

That structure is what this post is about.

The pattern

Every instance of this architecture has four stages.

Prospect. Autonomously discover candidates from an external data source. Score and filter them using an AI model. Feed approved candidates into a queue.

Research. For each candidate, run an agentic research loop. Use real tools — web search, scraping, API lookups — to build a rich profile. The output is structured data, not a summary.

Act. Use the research profile to do something: generate content, produce a document, prepare a message, create an asset. This step transforms research data into a deliverable.

Monitor. After the action, watch for a response or outcome. Log it. Decide whether to follow up. Feed the result back into the system.

The pipeline runs in a loop, self-seeding when the queue is empty, self-healing on failures. The orchestrator is generic. Only the agents are domain-specific.
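The four stages and the generic loop can be sketched roughly as follows. All type names and interfaces here are illustrative assumptions, not the original system's API; the point is that the orchestrator only sees the four stage contracts, never the domain.

```typescript
// Illustrative stage contracts for the four-stage pipeline. The real
// orchestrator's types are not shown in this post; these are assumptions.
type Candidate = { id: string; score: number };
type Profile = { candidateId: string; summary: string };
type Deliverable = { candidateId: string; body: string };

interface Agents {
  prospect(): Promise<Candidate[]>;          // discover, score, filter
  research(c: Candidate): Promise<Profile>;  // agentic research loop
  act(p: Profile): Promise<Deliverable>;     // turn research into an asset
  monitor(d: Deliverable): Promise<void>;    // watch for the outcome
}

// The orchestrator knows nothing about the domain, only the four stages.
async function runPipeline(agents: Agents, queue: Candidate[]): Promise<number> {
  if (queue.length === 0) {
    queue.push(...(await agents.prospect())); // self-seed when empty
  }
  let processed = 0;
  while (queue.length > 0) {
    const candidate = queue.shift()!;
    try {
      const profile = await agents.research(candidate);
      const deliverable = await agents.act(profile);
      await agents.monitor(deliverable);
      processed++;
    } catch (err) {
      // self-healing in the simplest sense: log the failure and move on
      console.error(`candidate ${candidate.id} failed:`, err);
    }
  }
  return processed;
}
```

Swapping in a new domain means supplying a different `Agents` implementation; `runPipeline` itself never changes.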

Recruiting outreach

The problem: identifying qualified candidates, personalizing outreach at scale, and tracking responses.

The prospector scans job sites, GitHub, or LinkedIn profiles and scores each one against a role definition — skills match, seniority, location, recent activity. The research agent goes deeper: reads their GitHub contributions, portfolio, and public work history. It builds a profile that includes what they've worked on recently and the best angle for a personalized message.

The outreach agent drafts a message that references specific work — not a template. It can also host a role-specific landing page at [candidate-name].careers.yourcompany.com, tailored to what the research found about their interests. The monitor tracks opens, replies, and non-responses, and automatically follows up after configurable intervals.

The only things you need to change from the outreach system: the prospector's scoring criteria, the research agent's system prompt, the act agent's output format, and the monitoring thresholds.
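Those four knobs can be captured in a single per-domain config object. This shape is a sketch of my own, not the system's actual schema; the field names and values are assumptions.

```typescript
// Hypothetical per-domain configuration: the four things that change
// between use cases. Shape and values are illustrative only.
interface DomainConfig {
  scoringCriteria: string;       // prospector: what makes a good candidate
  researchSystemPrompt: string;  // research agent behavior
  actOutputFormat: "email" | "landing-page" | "report" | "brief";
  followUpDays: number[];        // monitoring thresholds
}

const recruitingConfig: DomainConfig = {
  scoringCriteria: "skills match, seniority, location, recent activity",
  researchSystemPrompt:
    "Read the candidate's public work and find the best personal angle.",
  actOutputFormat: "email",
  followUpDays: [3, 7, 14], // follow up after 3, 7, and 14 days of silence
};
```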

B2B lead generation

The problem: identifying companies that fit your ICP, qualifying them, and getting to the right person with a message that doesn't sound like a mass email.

The prospector queries a data source — LinkedIn, Crunchbase, industry directories — and scores each company against your ICP. Revenue range, headcount, tech stack, recent funding, industry signals. The research agent investigates each company: scrapes the careers page for hiring signals, reads recent press releases, checks review sites, and finds the right contact within the org.

The outreach agent generates a personalized first touch that references something specific — a recent product launch, a job posting that signals a relevant pain point, a Glassdoor theme. No merge field templates. The difference between this and a generic email sequence tool: the research step makes the message feel like it came from someone who actually looked you up.
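Because scoring is a judgment call, the prospector delegates it to the model rather than a rule table. A minimal sketch of that step, with a stub standing in for the real LLM call (in production this would be something like the Anthropic messages API; the prompt shape and JSON schema here are my assumptions):

```typescript
// Model-backed ICP scoring. `callModel` stands in for a real LLM call;
// the stub keeps this sketch runnable. Field names are assumptions.
type Scorer = (prompt: string) => Promise<string>;

async function scoreCompany(
  company: { name: string; notes: string },
  icp: string,
  callModel: Scorer
): Promise<number> {
  const prompt =
    `ICP: ${icp}\n` +
    `Company: ${company.name}\n` +
    `Signals: ${company.notes}\n` +
    `Reply with JSON only: {"score": <0-100>, "reason": "..."}`;
  const raw = await callModel(prompt);
  const parsed = JSON.parse(raw) as { score: number; reason: string };
  return Math.max(0, Math.min(100, parsed.score)); // clamp defensively
}
```

The threshold for entering the queue then lives in the prospector's config, not in the model.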

Content intelligence and publishing

The problem: staying on top of relevant topics and producing consistent content without a full-time content team.

The prospector monitors RSS feeds, Hacker News, Reddit, and keyword alerts. Claude scores each item: is this trending in our space, does our audience care, do we have a credible angle? The research agent reads the primary sources, finds contrarian takes, looks for data, and produces a structured brief — key arguments, interesting angles, data points to cite, things that haven't been said yet.

The act agent generates a draft based on the brief, targeting a specific audience and argument — not just "write a post about X." The monitor tracks publish metrics and engagement, feeding high-performing patterns back into the prospector's scoring logic over time.
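The handoff between research and act works because the brief is structured, not prose. One plausible shape for it, with a trivial gate the act agent might apply (field names are mine, not the system's):

```typescript
// Illustrative shape of the structured brief the research agent emits.
// Field names are assumptions based on the description in the text.
interface ContentBrief {
  topic: string;
  keyArguments: string[];  // the core points to make
  angles: string[];        // interesting or contrarian framings
  dataPoints: string[];    // figures and sources worth citing
  unsaidTakes: string[];   // things that haven't been said yet
}

// A brief with no arguments or no angle isn't worth drafting from.
function isUsableBrief(b: ContentBrief): boolean {
  return b.keyArguments.length > 0 && b.angles.length > 0;
}
```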

This works for newsletters, LinkedIn posts, technical documentation, or any recurring content format where you want depth and consistency without proportional manual effort.

Competitive intelligence

The problem: tracking what competitors are doing across dozens of signals and surfacing what actually matters.

The prospector maintains a watchlist of competitors and adjacent signals: job postings, GitHub activity, press releases, review site changes, social media, SEO movements. It scores each new signal for relevance. The research agent investigates material signals and cross-references them. A new enterprise job posting might indicate a new market segment. A pricing change is more significant if it follows a pattern of feature additions.

The act agent generates a brief or alert for the relevant stakeholder — product, sales, marketing, or executive — with the signal, the context, and the implication. The monitor tracks which alerts led to action and tunes scoring thresholds over time to reduce noise.
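One way that tuning loop could work, sketched with arbitrary numbers: if too few alerts lead to action, raise the relevance bar; if nearly all do, you may be filtering too aggressively. The rates and step size below are illustrative, not from the system.

```typescript
// Sketch of feedback-driven threshold tuning for the alert scorer.
// All constants (0.2, 0.8, 0.05 step) are arbitrary illustrations.
function tuneThreshold(current: number, actedOn: number, ignored: number): number {
  const total = actedOn + ignored;
  if (total === 0) return current; // no feedback yet, leave it alone
  const hitRate = actedOn / total;
  if (hitRate < 0.2) return Math.min(0.95, current + 0.05); // too noisy: raise bar
  if (hitRate > 0.8) return Math.max(0.05, current - 0.05); // too quiet: lower bar
  return current;
}
```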

This is the kind of intelligence function that large companies have entire teams for. The agentic pattern makes it accessible at any scale.

Client reporting and account management

The problem: producing regular reports for clients is time-consuming and often delayed because someone has to pull data, interpret it, and write it up.

In this version, the prospector is simple: a scheduled trigger queues every account with a report due. The research agent pulls data from connected systems — analytics, ads platforms, CRM — and Claude interprets it: what's up, what's down, what's anomalous, what needs attention.
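A prospector this simple is essentially a due-date filter. Something like the following, assuming monthly reports keyed to a day of month (the account shape is hypothetical):

```typescript
// The reporting "prospector": queue every account whose report is due.
// Account shape is a hypothetical; real systems would pull this from a DB.
interface Account {
  id: string;
  reportDayOfMonth: number; // e.g. 1 = report due on the 1st of each month
}

function accountsDueToday(accounts: Account[], today: Date): Account[] {
  return accounts.filter((a) => a.reportDayOfMonth === today.getDate());
}
```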

The act agent generates a report that explains what happened, why it matters, and what the recommendation is — written in the client's language, referencing their specific goals. Not a raw data dump. The monitor tracks whether the report was opened and whether follow-up questions came in.

The value here isn't that AI writes better reports than humans. It's that it writes consistent, complete, timely reports every time, without the bottleneck of someone finding hours in their schedule.

The common infrastructure

Every one of these use cases runs on the same underlying infrastructure: a Node.js orchestrator with a self-driving pipeline loop, SQLite for task queue and state, Claude for the reasoning-heavy steps, a wildcard subdomain pattern for hosting per-entity assets, PM2 for process management, and Ansible for repeatable deployment.

The only meaningful differences are the prompts inside each agent and the external APIs each agent calls. The orchestrator, state layer, retry logic, deployment playbook — all identical.
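The shared retry logic, for instance, can be as small as a wrapper like this. The real system presumably persists attempt counts in SQLite; this in-memory sketch with exponential backoff is an assumption about the shape, not the actual implementation.

```typescript
// Minimal retry wrapper illustrating shared, domain-agnostic retry logic.
// Backoff constants are arbitrary; the real system's policy may differ.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // exponential backoff between attempts (no sleep after the last one)
      if (attempt < maxAttempts) {
        await new Promise((r) => setTimeout(r, 2 ** attempt * 100));
      }
    }
  }
  throw lastErr;
}
```

Because it takes any async function, the same wrapper serves every agent in every domain.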

This is what makes the pattern worth investing in. You build the infrastructure once, then adapt it to each new problem by writing new agent modules and plugging them in.

What makes this different from traditional automation

Most automation tools — Zapier, Make, n8n — connect systems using predefined triggers and fixed action sequences. They're excellent for deterministic workflows: when X happens in system A, do Y in system B.

Agentic pipelines handle a different class of problem: workflows that require judgment. Scoring whether a business is a good prospect requires reading a website and making a call. Researching a company requires deciding what to look for next based on what you've found. Writing a personalized email requires understanding what's actually interesting about the recipient.

These steps don't fit into a conditional logic flowchart. They require a model that can reason, adapt, and make decisions with incomplete information. The agent model is what makes that possible — and ambiguity is where most of the interesting business workflows actually live.

Where to start

If you're evaluating whether an agentic pipeline makes sense for a workflow, these are the questions worth asking: Does the workflow involve a repeating pattern with many instances? Is there a qualification step that currently requires a human to make a judgment call? Is the output something that takes significant time to produce but follows a consistent structure?

If the answer to all three is yes, the pattern is probably a fit.


At 10xDev, we build agentic systems like this for businesses. If you have a workflow that fits this pattern, we'd love to hear about it.