I Built a Self-Driving Sales Machine on a $10/Month VPS Using AI Agents
What it does
Last month I shipped a fully autonomous outreach system that runs 24/7 on a cheap VPS. It finds local businesses with bad websites, builds them a better one using AI, hosts it live at a custom URL, and emails the owner a link — all without any human involvement after the first npm start.
This post is a technical walkthrough of how it works, how it's deployed, and the specific patterns that make it feel genuinely autonomous rather than just automated.
The big picture
The system has one job: find prospective web-development clients and put a fully customized landing page in front of each one. That breaks down into five distinct steps, each requiring different APIs, different reasoning, and different failure modes.
The pipeline is: prospect → research → build → outreach → monitor.
Each step is handled by a dedicated agent module. A thin orchestrator drives them in sequence, self-healing on failures and self-seeding when the queue runs dry.
Queue empty
→ Prospector queries Google Places
→ Scores each candidate with Claude
→ Approved businesses enter the queue
For each queued business:
→ Research agent scrapes the web, finds contact email
→ Build agent generates a custom HTML landing page
→ Page goes live immediately at {slug}.10xdev.io
→ Outreach agent sends a personalized email with the link
In parallel (every hour):
→ Monitor agent checks inbox for replies
The whole thing runs as a single Node.js process on an Ubuntu VPS, managed by PM2.
The prospector: autonomous targeting
The prospector is what makes this system truly self-driving. It doesn't pick businesses from a static list — it chooses its own targets.
Every time the queue runs empty, the orchestrator calls prospectorAgent.run(). The first thing it does is ask Claude to pick a category and city:
const system = `You pick a category and US city for a business prospecting system.
Pick a local service business category and a US city. Favor small/mid-size cities and
industries where businesses commonly have outdated or no websites.
Do not repeat these previous picks if provided.
Return JSON only: { "category": "...", "location": "...", "reasoning": "..." }`;
It passes Claude the list of previous picks from the database so it doesn't repeat itself. The result is a model that wanders organically across the country, hitting different industries in different cities over time — without any hardcoded configuration.
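Two small helpers make that call robust. This is a sketch (the function names and prompt shape are my assumptions): buildPickPrompt embeds the previous picks so the model avoids repeats, and parsePick validates whatever JSON comes back:

```javascript
// Builds the user prompt, listing previous picks so the model avoids repeats.
function buildPickPrompt(previousPicks) {
  const history = previousPicks
    .map((p) => `- ${p.category} in ${p.location}`)
    .join('\n');
  return `Previous picks:\n${history || '(none)'}\nPick a new category and city.`;
}

// Validates the model's JSON reply; models sometimes wrap JSON in prose,
// so grab the first {...} span before parsing.
function parsePick(text) {
  const match = text.match(/\{[\s\S]*\}/);
  if (!match) throw new Error('no JSON in model output');
  const pick = JSON.parse(match[0]);
  for (const key of ['category', 'location', 'reasoning']) {
    if (typeof pick[key] !== 'string') throw new Error(`missing ${key}`);
  }
  return pick;
}
```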
Once it has a target, it queries the Google Places API for matching businesses, then scores each one:
const system = `You are evaluating whether a local business is a good candidate for a new landing page.
Consider: outdated design, missing mobile responsiveness, poor copy, lack of clear CTA, or no website at all.
Return JSON only: { "verdict": "approved" | "rejected", "reason": "..." }`;
Approved candidates are inserted into the database as prospects with a research task queued. Rejected ones are logged with a reason.
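The scoring pass itself reduces to a loop over candidates with the Claude call injected. In this sketch, scoreFn is a stand-in for the real model call, not the actual implementation:

```javascript
// Partitions Places results into approved/rejected, keeping the model's reason.
async function scoreCandidates(candidates, scoreFn) {
  const approved = [];
  const rejected = [];
  for (const biz of candidates) {
    const { verdict, reason } = await scoreFn(biz);
    (verdict === 'approved' ? approved : rejected).push({ ...biz, reason });
  }
  return { approved, rejected };
}
```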
The research agent: agentic web research
Rather than just scraping a homepage, the research agent runs a full agentic loop using Claude's tool use API. It gives Claude two tools — search_web and scrape_url — and lets Claude decide what to look up, in what order, until it has enough to build a useful profile.
The agent gets up to 15 turns to work. In practice it uses 6–10, searching for reviews, checking the contact page, cross-referencing directories, and so on. The output is a structured JSON profile including services, tone, target audience, contact info, and gaps in the current site.
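Stripped of API details, the loop has a simple shape. Here is a minimal sketch with the model call injected as a plain function (callModel and the reply shape are my assumptions, not the actual Claude SDK types):

```javascript
// Generic tool-use loop: ask the model, run whichever tool it requests,
// feed the result back, repeat until it emits a final profile or runs out of turns.
async function agenticLoop(callModel, tools, maxTurns = 15) {
  const messages = [];
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = await callModel(messages);
    if (reply.type === 'final') return reply.profile; // structured JSON profile
    const result = await tools[reply.tool](reply.input);
    messages.push({ tool: reply.tool, result });
  }
  throw new Error('ran out of turns before producing a profile');
}
```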
After Claude finishes, the agent cross-checks the contact email with Hunter.io. If Hunter has a higher-confidence match, it overrides Claude's result. This two-source approach meaningfully improves email deliverability.
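The reconciliation step can be sketched as a pure function. The field names and the 80-point threshold here are my assumptions, assuming Hunter reports a 0-100 confidence score:

```javascript
// Keep Claude's find unless Hunter has a confident answer of its own.
function pickEmail(claudeEmail, hunterResult, threshold = 80) {
  if (hunterResult && hunterResult.email && hunterResult.confidence >= threshold) {
    return { email: hunterResult.email, source: 'hunter' };
  }
  return { email: claudeEmail, source: 'claude' };
}
```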
The build agent: AI-generated landing pages
The build agent takes the research JSON and calls Claude to generate a complete, self-contained HTML landing page — inline CSS, no external dependencies, mobile responsive:
const system = `You are an expert web designer. Generate a complete, modern, single-file HTML landing page.
Requirements:
- Fully self-contained (inline CSS, no external dependencies)
- Mobile-responsive with clean typography
- Clear call-to-action
- Return ONLY the HTML`;
Claude generates up to 32,000 tokens of HTML. The agent then writes the output to /sites/{slug}/index.html. That's it. No deploy step, no build process. The moment the file is written, the page is live.
The subdomain pattern: how live URLs work
Every generated page gets its own unique subdomain built from the business name — https://bug-out.10xdev.io, https://go-forth-pest-control.10xdev.io, and so on. Three pieces make this work:
Wildcard DNS. A single A record in Cloudflare maps *.10xdev.io to the VPS IP. Any subdomain resolves instantly with no per-business DNS changes.
Wildcard nginx config. The server block uses a regex to capture the subdomain and use it as the directory name:
server {
    listen 443 ssl;
    server_name ~^(?&lt;slug&gt;.+)\.10xdev\.io$;
    ssl_certificate /etc/letsencrypt/live/10xdev.io/fullchain.pem;
    root /sites/$slug;
    index index.html;
}
When nginx receives a request for banks-septic.10xdev.io, it looks for /sites/banks-septic/index.html. If the build agent has already written that file, it serves it. No configuration change needed per site.
Wildcard SSL. A single cert covers *.10xdev.io via a Cloudflare DNS challenge, provisioned by Certbot. Every new subdomain gets HTTPS automatically.
The result: the build agent writes one file and a live HTTPS URL exists at a custom subdomain seconds later.
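For completeness, the slug itself might be derived from the business name with logic like this (assumed, not the actual implementation):

```javascript
// "Go Forth Pest Control" -> "go-forth-pest-control"
function slugify(name) {
  return name
    .toLowerCase()
    .normalize('NFKD')
    .replace(/[\u0300-\u036f]/g, '') // strip accents left over from NFKD
    .replace(/&/g, ' and ')
    .replace(/[^a-z0-9]+/g, '-')     // runs of non-alphanumerics become hyphens
    .replace(/^-+|-+$/g, '');        // trim stray leading/trailing hyphens
}
```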
The orchestrator: a self-driving loop
The orchestrator is intentionally thin — all business logic lives inside the agents. Its job is to drive the pipeline, handle retries, and enforce guardrails:
async function pipelineLoop() {
  while (true) {
    const task = getNextPendingTask();
    if (!task) {
      await prospectorAgent.run(); // self-seeding
    } else {
      await runWithRetry(agentMap[task.type], task);
    }
    await sleep(config.cycleDelayMs);
  }
}
The retry logic uses exponential backoff — 2 seconds on the first retry, 4 on the second. After three failed attempts, the task is marked as failed, an alert email goes to my inbox, and the pipeline halts for inspection. This keeps the system honest: it won't silently skip tasks or spin indefinitely on broken state.
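A minimal runWithRetry matching that description might look like this (a sketch; the real signature may differ):

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Three attempts with exponential backoff: waits 2s after the first
// failure, 4s after the second; the third failure propagates so the
// orchestrator can alert and halt.
async function runWithRetry(agent, task, attempts = 3, baseDelayMs = 2000) {
  for (let i = 1; i <= attempts; i++) {
    try {
      return await agent.run(task);
    } catch (err) {
      if (i === attempts) throw err;
      await sleep(baseDelayMs * 2 ** (i - 1)); // 2s, then 4s
    }
  }
}
```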
The monitor agent: running in parallel
While the pipeline loop processes one business at a time, the monitor agent runs independently on a setInterval. Every hour it authenticates to Gmail via OAuth2 and scans the inbox for replies to sent outreach emails.
It matches replies to specific businesses using the Message-ID header stored in the database when each email was sent. When a reply comes in, it logs the response and marks the outreach status accordingly.
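The matching step can be sketched as a lookup keyed on the stored Message-ID (field and function names are my assumptions):

```javascript
// An inbound reply's In-Reply-To / References headers point back at the
// Message-ID of the email we sent; look each one up in our records.
function matchReply(reply, outreachByMessageId) {
  const refs = [reply.inReplyTo, ...(reply.references || [])].filter(Boolean);
  for (const id of refs) {
    const record = outreachByMessageId.get(id);
    if (record) return record; // the outreach record this reply belongs to
  }
  return null; // unrelated email
}
```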
State: SQLite
All runtime state lives in a single SQLite database. The schema covers prospects, tasks, research results, outreach records, email delivery events, and a general audit log. The database lives on the host filesystem and gets backed up daily by a cron job with 7-day retention.
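The schema might look roughly like this (a sketch; the table and column names are my assumptions, not the actual schema):

```sql
-- Assumed sketch of the core tables; the real schema may differ.
CREATE TABLE prospects (
  id        INTEGER PRIMARY KEY,
  name      TEXT NOT NULL,
  slug      TEXT UNIQUE,
  category  TEXT,
  location  TEXT,
  status    TEXT DEFAULT 'new'
);

CREATE TABLE tasks (
  id          INTEGER PRIMARY KEY,
  prospect_id INTEGER REFERENCES prospects(id),
  type        TEXT NOT NULL,          -- research | build | outreach
  status      TEXT DEFAULT 'pending',
  attempts    INTEGER DEFAULT 0
);

CREATE TABLE outreach (
  id          INTEGER PRIMARY KEY,
  prospect_id INTEGER REFERENCES prospects(id),
  message_id  TEXT,                   -- used to match inbox replies
  status      TEXT DEFAULT 'sent'
);
```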
Deployment: Ansible from scratch
The entire infrastructure is codified in an Ansible playbook. Running ./deploy.sh on a fresh Ubuntu VPS installs Node, PM2, nginx, and Certbot; clones the app repo; writes the .env file from an encrypted secrets file; deploys the wildcard nginx config; provisions the wildcard SSL cert; and starts the worker with PM2.
The secrets file is encrypted with ansible-vault, which means the entire codebase can live on GitHub without leaking credentials. Redeployment is idempotent.
Guardrails
An autonomous system that sends emails without human review needs hard limits.
Daily cap. A configurable maxPipelineCyclesPerDay limits how many full pipeline runs happen per day.
Complaint circuit breaker. The Resend API reports delivery events. If the complaint rate exceeds 10% across a minimum sample of 20 emails, the pipeline halts entirely and sends an alert.
Retry with backoff. Every agent task gets three attempts with exponential backoff before failing and triggering an alert.
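The circuit breaker reduces to a small pure function over delivery counts (a sketch using the thresholds above; the input shape is assumed):

```javascript
// Halt only when there's enough data to judge AND the complaint rate
// exceeds the cap: >10% complaints over a minimum sample of 20 emails.
function shouldHaltPipeline({ sent, complaints }, { minSample = 20, maxRate = 0.10 } = {}) {
  if (sent < minSample) return false; // not enough data to judge
  return complaints / sent > maxRate;
}
```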
What's next
The system works. It's currently targeting local service businesses — plumbers, roofers, septic companies — in cities across the US. Replies are coming in.
Five agents, one orchestrator, a single SQLite database, and a wildcard nginx config. That's the whole thing.
Building agentic solutions for businesses is what we do at 10xDev. If you want something like this built for your workflow, reach out.

