Author: Matt

  • Lessons From a Decade of Programmatic SEO

    This is the final post in a three-part series on programmatic SEO. Part one covered what it is and whether it’s worth your time. Part two walked through the simplest way to get started. This post is the retrospective — what I’ve learned from building programmatic SEO projects since 2014, what actually works, and what’s coming next.

    Lesson 1: Google Always Catches Up

    In 2014, my Automatic Blog Machine product was making money. Article spinning worked. Keyword stuffing worked. Building a hundred sites with rotated content and pointing links between them worked. For about six months.

    Then Google’s Panda update got smarter, and everything I’d built evaporated. Rankings disappeared overnight. Revenue went to zero. The sites were worthless.

    Every generation of programmatic SEO has its version of this story. Somebody finds a technique that games the algorithm, it works for a while, and then Google closes the loophole. Article spinning died. Exact-match domain networks died. Private blog networks died. Thin template pages with swapped city names and nothing else — those died too.

    The lesson isn’t that Google is unbeatable. It’s that any approach built on fooling the algorithm has an expiration date. The only programmatic SEO that survives long-term is the kind that would still make sense if Google didn’t exist — pages that people actually want to read.

    Lesson 2: The Quality Bar Keeps Rising

    What counted as “good enough” in 2014 would get you penalized today. And what’s acceptable today will probably look thin in three years.

    In the article spinning era, uniqueness was the bar. If the text didn’t trigger a duplicate content check, it was “good enough.” Nobody was reading these pages — they existed to rank, not to serve readers.

    In the template era, usefulness was the bar. If the page had real data — actual business listings, real product specs, genuine local information — it could rank even with a formulaic template. The information was valuable even if the presentation was boring.

    Now, in the AI era, the bar is comprehensive quality. The page needs real data, good writing, proper formatting, useful structure, internal links, and a design that doesn’t scream “this was generated.” Readers expect the same quality from a programmatic page that they’d expect from a hand-written one.

    This isn’t Google being arbitrary. It’s reflecting what users actually want. Every time people complain about search quality — and they complain a lot — Google tightens the screws. The sites that survive each tightening are the ones that were already over-delivering on quality.

    The practical takeaway: build to a quality standard that’s higher than what currently ranks. If the top results for your target query are mediocre, don’t match them — beat them. That margin is your insurance against the next algorithm update.

    Lesson 3: Small Sites Can Win Specific Niches

    The biggest misconception about programmatic SEO is that you need to be Yelp or Zapier to succeed. You don’t. Those companies succeed because they operate at massive scale across broad categories. But scale and breadth aren’t the only ways to win.

    Small, focused sites win by going deeper than the big players bother to. A mega-site might have a page for “plumbing in Austin” but it won’t have a page about Austin’s specific water hardness regulations and what they mean for residential plumbing maintenance. That level of specificity is where the opportunity lives.

    The best small-site programmatic SEO projects share three traits:

    Deep niche expertise. The creator knows the subject well enough to spot what’s missing from existing content. They’re not just generating pages — they’re filling genuine information gaps.

    Specificity that big sites can’t match. A large directory has breadth but not depth. It can’t afford to write 2,000-word deep dives for every long-tail variation. You can — especially with AI handling the research and drafting.

    Willingness to maintain and update. Most programmatic sites get published and abandoned. The ones that win long-term keep their data fresh. If your competitors’ pages reference 2023 pricing, update yours to 2026 pricing. If a local regulation changed, update your city page. This sounds obvious, but almost nobody does it.

    Lesson 4: Internal Linking Is the Multiplier

    I underestimated internal linking for years. Then I saw the data.

    A set of programmatic pages with no links between them behaves like a hundred isolated blog posts. Google crawls them independently, doesn’t understand the relationship between them, and treats each page as a standalone piece of content competing on its own merits.

    The same set of pages with intentional internal linking becomes a content hub. Google understands the topical relationship. Authority flows between pages. When one page ranks well, it lifts the others. The whole is genuinely greater than the sum of its parts.

    For programmatic SEO specifically, the linking structure should be systematic (a sketch of generating it in code follows the list):

    • Every page links to the hub — the main topic page that anchors the entire collection
    • Related pages link to each other — city pages in the same state, comparison pages in the same category, FAQ pages on related topics
    • The hub links to its best-performing spokes — as you learn which pages rank, link from your strongest page to support the weaker ones
    • External content links in too — your blog posts, your about page, your other site content should all link to relevant programmatic pages
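    To make this concrete, here is a minimal sketch of how those rules can be computed during generation. It assumes the page set lives in a CSV with hypothetical state and slug columns; the hub URL and the cap on sibling links are illustrative choices, not a prescription:

    ```python
    import csv
    from collections import defaultdict

    HUB_URL = "/plumbing/"  # hypothetical hub page anchoring the collection

    # Load the data set: one row per page, with assumed "state" and "slug" columns.
    with open("pages.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    by_state = defaultdict(list)
    for row in rows:
        by_state[row["state"]].append(row)

    for row in rows:
        links = [HUB_URL]  # rule 1: every page links to the hub
        # rule 2: related pages link to each other (same-state siblings here),
        # capped so no page turns into a link dump
        siblings = [r for r in by_state[row["state"]] if r["slug"] != row["slug"]]
        links += [f"/plumbing/{r['slug']}/" for r in siblings[:5]]
        row["internal_links"] = links  # handed to the page template at render time
    ```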

    When I added systematic internal linking to a set of pages I’d published months earlier, some of them jumped from page 3 to page 1 within weeks. The content hadn’t changed. The links made Google understand what it was looking at.

    Lesson 5: Failures Teach More Than Successes

    I want to be honest about the projects that didn’t work, because the failure modes are instructive.

    The 10,000-page experiment (2024). After writing about programmatic SEO as a concept, I decided to test it at scale. Build a large site, publish thousands of pages, see what happens. The content was AI-generated with some data enrichment, but the quality was inconsistent. Some pages were genuinely useful. Many were thin. Google’s March 2024 core update hit the site hard. Traffic dropped 70% in a week. The lesson: volume without consistent quality is a liability, not an asset.

    The comparison site (2023). I built a site with product comparison pages using early ChatGPT-generated content. The information was plausible but not always accurate. Some product features were hallucinated. Some pricing was wrong. Readers complained in comments. Google noticed the bounce rates. The site never gained traction. The lesson: AI content without real data sourcing produces pages that look right but aren’t. Readers can tell.

    The directory that worked (2025). On the other hand, a small directory project — fewer than 100 pages — that aggregated genuinely hard-to-find local information performed well from day one. Each page took longer to produce because the data required real research. But because the information wasn’t available elsewhere in a consolidated format, the pages ranked quickly and stayed ranked. The lesson: less content, more value per page, wins.

    The pattern across every failure was the same: I prioritized quantity over quality. Every success came from the opposite decision.

    Lesson 6: The Maintenance Problem Is Real

    Here’s something nobody talks about in programmatic SEO guides: what happens after you publish?

    Content decays. Prices change. Businesses close. Regulations update. Links break. Data goes stale. A page that was accurate when you published it becomes misleading six months later — and misleading content eventually gets outranked by something fresher.

    For hand-written blog posts, this is manageable. You have 50 posts, you review them periodically, you update what’s outdated. For 500 programmatic pages, the maintenance burden is significant.

    The solutions I’ve found:

    Build refresh into the pipeline. If your data comes from scrapeable sources, schedule regular re-scrapes. Have the AI compare new data to old data and flag pages that need updates. Automate the parts that can be automated.
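    As a sketch of that compare-and-flag step, assuming each scheduled re-scrape lands as a JSON snapshot keyed by page slug (the file layout and names are illustrative):

    ```python
    import json

    def load(path):
        with open(path) as f:
            return json.load(f)

    old = load("snapshots/2025-12.json")  # previous scrape, keyed by page slug
    new = load("snapshots/2026-01.json")  # latest scrape

    # Pages whose underlying data changed get re-rendered; pages whose
    # source disappeared get reviewed or removed (see the last point below).
    stale = [slug for slug, data in new.items() if old.get(slug) != data]
    orphaned = [slug for slug in old if slug not in new]

    print(f"{len(stale)} pages need a refresh, {len(orphaned)} lost their data source")
    ```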

    Prioritize maintenance by traffic. Not every page needs to be updated on the same schedule. Your top 20% of pages by traffic deserve monthly reviews. The rest can be quarterly or annual. Focus your attention where it has the most impact.

    Design for easy updates. If your page template separates structured data from narrative content, updating the data is easy — just refresh the numbers. If every fact is buried in flowing prose, updating requires rewriting paragraphs. Think about maintainability when you design your template.
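    One way to get that separation, sketched with the Jinja templating library (the fields and template are illustrative): the numbers live in a dict that a refresh script can overwrite, while the narrative stays untouched.

    ```python
    from jinja2 import Template

    # Structured data lives apart from the narrative, so a data refresh
    # only touches this dict and never rewrites the prose.
    data = {"city": "Austin", "median_price": "$540,000", "updated": "2026-01"}

    page = Template(
        "<h1>Home Prices in {{ city }}</h1>\n"
        "<p>Median sale price: {{ median_price }} (data updated {{ updated }}).</p>\n"
        "{{ narrative }}"
    ).render(**data, narrative="<p>The hand-written or AI-drafted analysis goes here.</p>")
    print(page)
    ```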

    Remove pages that can’t be maintained. If a category of pages depends on data you can no longer source reliably, it’s better to remove those pages than to let them go stale. A smaller, accurate site outperforms a larger, unreliable one.

    Lesson 7: AI Changed Everything (But Not How You Think)

    The biggest shift in programmatic SEO isn’t that AI can write content. It’s that AI can do research.

    Content generation was always the easy part. Even before AI, you could spin articles, fill templates, generate text. The hard part was getting accurate, specific, useful information for each page. That required actual research — visiting sources, extracting data, cross-referencing facts, understanding context.

    What’s different now is that AI agents can do that research at scale. Claude Code can browse the web, read source documents, extract specific data points, and compile them into structured content — for every row in your spreadsheet. That’s not just faster writing. That’s faster research, which was always the bottleneck.

    This changes the economics completely. A project that would have required weeks of manual research to populate with real data can now be researched in hours. The constraint shifts from “can I gather enough information?” to “is this information worth publishing?”

    But here’s the nuance: AI research still needs human judgment. The AI doesn’t know which sources are trustworthy for your niche. It doesn’t know when a fact is technically accurate but misleading in context. It doesn’t know the difference between a useful page and a page that merely looks useful. That judgment is still yours — and it’s what separates programmatic SEO that works from programmatic SEO that gets penalized.

    Where This Is All Heading

    Three trends are shaping the future of programmatic SEO:

    AI search is changing the game. Google’s AI Overviews, ChatGPT’s search, Perplexity — these tools synthesize information from across the web and present it directly to the user. If an AI can answer the query by reading your page and summarizing it, the user might never visit your site. This means programmatic pages need to offer something beyond summarizable facts — interactive tools, downloadable resources, visual comparisons, or depth that can’t be condensed into a snippet.

    E-E-A-T matters more than ever. Google’s emphasis on Experience, Expertise, Authoritativeness, and Trustworthiness is a direct response to the flood of AI-generated content. Sites with a real author, real expertise, and real experience behind them get preferential treatment. For programmatic SEO, this means connecting your template pages to your broader brand — author bios, links to your other work, evidence that a real person stands behind the content.

    The bar for “unique value” keeps climbing. Aggregating publicly available information into a cleaner format used to be enough. Increasingly, the winning programmatic sites add something genuinely new — original analysis, proprietary data, interactive tools, expert commentary layered on top of the aggregated data. The template is just the delivery mechanism. The unique value is what gets the page ranked.

    The Only Rule That Never Changes

    After a decade of building, failing, rebuilding, and occasionally succeeding at programmatic SEO, one principle has held constant through every algorithm update, every technology shift, and every competitive wave:

    If the page helps the reader, it will eventually rank. If it doesn’t, it eventually won’t.

    Every technical decision — the template structure, the data sources, the publishing pace, the internal linking, the AI tooling — is in service of that one question: would a real person find this page useful?

    Build for that standard, and the algorithm updates become opportunities instead of threats. The sites that survive Google’s crackdowns are always the ones that were building for readers, not for robots.

    The tools have never been better. AI can research, write, and publish at a scale that was unimaginable even two years ago. But the strategic question is the same one it’s always been: are you creating something of value, or are you just creating more noise?

    If you’ve read all three posts in this series, you have everything you need to answer that question for yourself. Start with the concept. Build with the simplest approach that works. And keep the long view in mind — because the sites that win in programmatic SEO are the ones that are still useful five years from now.

    For more on building AI-powered content workflows, check out how I use AI to write and publish blog posts. And if you want to see the original post that started this whole series, that’s here.

  • The Simplest Programmatic SEO You Can Build Today

    In the last post, I explained what programmatic SEO is and when it’s worth pursuing. The short version: it’s creating web pages using templates and data instead of writing every page by hand.

    But knowing what it is and actually building it are different things. Most guides jump straight to complex tech stacks — custom databases, headless CMS platforms, expensive plugins — and lose 90% of readers before they publish a single page.

    The reality in 2026 is that AI has collapsed most of those steps. You don’t need to manually copy-paste pages from a spreadsheet. You don’t need to learn a page builder plugin. You can start with an AI assistant, a WordPress site, and a clear idea of what pages you want to create.

    Step 1: Let AI Build Your Data Set

    Every programmatic SEO project starts with a list of pages. The old advice was to sit down with a spreadsheet and fill in rows by hand. That still works — but why would you?

    Instead, start by telling an AI assistant what you’re trying to build. Be specific about your niche and what kind of pages you want. For example:

    “Give me a list of 50 cities in Texas with populations over 50,000, along with their county, population, and top three industries.”

    Or: “Research and list every competitor in the meal prep delivery space, with their pricing, delivery areas, and key differentiators.”

    Or: “What are the 30 most common questions people ask about home solar installation, organized by stage of the buying process?”

    The AI generates your seed data in seconds. Export it to a Google Sheet or CSV file, and you’ve got the skeleton of your project. Each row is a potential page. Each column is a variable that changes between pages.

    Here’s where the multiplication happens. Say you have 20 cities and 5 services. That’s 100 potential pages — “[service] in [city]” — generated from two simple lists. Add industries, and you’ve got another dimension. The data set grows fast.
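    That multiplication is literally a cross product, which is worth seeing in code because it shows how little structure you need to get started. The lists here are shortened stand-ins:

    ```python
    from itertools import product

    cities = ["Austin", "Dallas", "Houston"]    # 20 in the real data set
    services = ["plumbing", "roofing", "hvac"]  # 5 in the real data set

    # Every (service, city) pair becomes one page row.
    pages = [
        {"title": f"{service.title()} in {city}",
         "slug": f"{service}-{city.lower()}"}
        for service, city in product(services, cities)
    ]
    print(len(pages))  # 3 x 3 = 9 here; 20 x 5 = 100 with the full lists
    ```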

    Keep a local copy of everything. Download your research, cache your data sources, save reference material to your computer. You don’t want to re-fetch the same information every time you work on the project. A local folder with your spreadsheets, source documents, and reference data becomes your project’s knowledge base.

    Step 2: Design Your Template

    Before you generate a single page, you need to know what a good page looks like. This is the most important step, and it’s worth spending real time on.

    Pick one row from your data set — one city, one product, one question — and build the best possible page for it. Not blindly with AI. By hand. Think about what someone searching for that query actually wants to know, and make sure the page delivers it. Your pages need to be good enough that people stay and read.

    This manual page becomes your template. Study it:

    • What headings did you use?
    • What data points appear on every page versus what’s unique?
    • How long does it need to be to genuinely answer the question?
    • What internal links connect it to related pages in your set?

    Once you’re happy with the template, describe it clearly — the structure, the sections, the tone, what goes where. This description becomes your prompt for generating every other page.

    Step 3: Establish Your Brand Guide Early

    This is something most programmatic SEO guides skip entirely, and it’s why so many pSEO sites feel like they were stamped out of a factory.

    Before you generate content at scale, decide on your brand voice and visual identity. Write it down. These decisions are hard to change later, and consistency is what separates a site that feels trustworthy from one that feels like spam.

    For writing voice, decide:

    • First person or third person?
    • Authoritative and expert, or friendly and conversational?
    • Technical language or plain English?
    • What phrases or patterns does your brand use? What does it avoid?

    Feed this brand guide to your AI as context for every page it generates. The difference between “write a page about solar installation in Austin” and “write a page about solar installation in Austin using this voice guide” is enormous. Without it, every page will sound like generic AI output. With it, they’ll sound like they came from the same knowledgeable author.

    For visual identity, decide:

    • What style of images will you use? AI-generated, stock photos, custom graphics?
    • Pick a specific image style and dial in the prompt so it’s consistent across all pages
    • Choose a color palette and typography that carries through the site
    • Decide on a layout template before you start publishing

    Spend an afternoon getting your image generation prompt right. Test it on 5-10 variations and make sure the results feel cohesive. A site where every hero image looks like it belongs to the same brand signals quality. A site where every image looks randomly generated signals the opposite.

    Step 4: Generate and Publish With AI

    Here’s where modern tools change the game entirely. You don’t need to manually create pages one by one, and you don’t need an expensive import plugin to do it for you.

    An AI coding assistant like Claude Code can take your spreadsheet, your template, and your brand guide and do the heavy lifting:

    1. Research each row — For every entry in your data set, the AI can search the web, pull real information from multiple sources, and compile facts that are specific to that page. A page about “plumbing services in Austin” shouldn’t contain generic plumbing advice — it should reference Austin’s actual building codes, local licensing requirements, and water quality specifics.
    2. Write the content — Using your template structure and brand voice, the AI drafts each page. Because it’s working from real research rather than generating from memory, the content is grounded in verifiable facts.
    3. Publish directly — Tools like the WordPress REST API let AI publish pages directly to your site, complete with formatting, categories, tags, and featured images. No copying and pasting between tools. (A minimal sketch of this step follows the list.)
    4. Review each page — And this is the step you never skip. Read every page before it goes live, especially in the beginning. Check that the facts are accurate, the voice is consistent, and the page would pass the quality test from the last post: would a real person feel their time was respected?
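    For step 3, here is a minimal sketch of publishing through the WordPress REST API with an application password (the site URL, credentials, and page fields are placeholders). Posting as a draft keeps step 4, the human review, in the loop:

    ```python
    import requests

    WP = "https://example.com/wp-json/wp/v2"  # placeholder site
    AUTH = ("matt", "xxxx xxxx xxxx xxxx")    # WordPress application password

    def publish(page):
        """Create one post from a generated page dict (field names are assumptions)."""
        resp = requests.post(
            f"{WP}/posts",
            auth=AUTH,
            json={
                "title": page["title"],
                "content": page["html"],    # the AI-drafted body
                "status": "draft",          # flip to "publish" only after review
                "categories": page.get("category_ids", []),
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["link"]
    ```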

    For the first 10-20 pages, review every single one. As you get confident that your template and prompts produce reliable output, you can shift to reviewing a sample — but never stop reviewing entirely.

    Start Slow, Accelerate Later

    There’s a temptation to use these tools to publish hundreds of pages in a weekend. Resist it.

    When a new site suddenly appears with 500 pages, Google notices. And not in a good way. A brand-new domain with a flood of content looks exactly like the kind of spam site that Google’s algorithms are designed to catch — regardless of how good the content actually is.

    The better approach is to start with a handful of pages and grow steadily:

    Week 1-2: Publish 5-10 of your best pages. Obsess over quality. Make sure every fact is right, every image looks good, every internal link works.

    Week 3-6: Add 3-5 pages per week. Monitor which pages get indexed and start appearing in search. Pay attention to what Google seems to like.

    Month 2-3: If pages are getting indexed and attracting some traffic, increase your pace. Maybe 10 pages per week. Keep reviewing quality.

    Month 3+: If the signal is positive, you can ramp up further. But always tie the pace to the quality you can maintain.

    This gradual approach does two things. It gives Google time to build trust in your domain. And it gives you time to learn what’s working — which page structures perform best, which topics attract traffic, and which ones fall flat. That feedback loop is worth more than a thousand pages published blind.

    Picking Your First Project

    The hardest part isn’t the technology. It’s choosing what to build.

    Here are five proven patterns that work well for a first project, ordered from simplest to most ambitious:

    1. FAQ pages for your niche. Take the 20-30 most-asked questions in your field and create a dedicated page for each one. Have AI research the best current answer for each, pulling from authoritative sources. This is the lowest-risk starting point because each page targets a specific long-tail query with clear search intent.

    2. Comparison pages. “[Product A] vs [Product B]” for every meaningful combination in your space. AI can research current pricing, features, and reviews for each product. The data changes, so keep local copies and plan to refresh these periodically.

    3. Location + service pages. “[Service] in [city]” combinations. This is the classic multiplication approach — 10 services across 20 cities gives you 200 pages. AI can research city-specific details (regulations, demographics, local competitors) to make each page genuinely useful rather than just swapping the city name.

    4. Tool or resource directories. Curate every tool, service, or resource in a specific category. AI can research pricing, features, and user reviews from across the web, then present it in a consistent format. The value is in the consolidation — saving the reader from visiting 30 different websites.

    5. Data-driven analysis pages. Turn public datasets into readable insights. Government databases, industry reports, and public APIs contain enormous amounts of information that nobody has bothered to make accessible. AI can process raw data and present it in plain language for specific audiences.

    Pick one. Build 10 pages. See what happens.

    Common Mistakes to Avoid

    Having tried (and failed at) programmatic SEO more than once, here are the mistakes that kill projects:

    Starting too big. Don’t plan 1,000 pages before you’ve proven 10 work. Build the smallest possible version, see if it gets traffic, then scale what works.

    Skipping the brand guide. Without a consistent voice and visual identity, your site will feel like a content farm even if the information is good. Invest the time upfront.

    No quality review. Publishing AI-generated pages without reading them is how sites get penalized. Review every page early on. Spot-check as you scale. Never publish blind.

    Thin content. If your template produces pages with 200 words of generic text and a data table, that’s not enough. Each page needs to genuinely answer the searcher’s question. If you can’t make a page useful, don’t create it.

    Ignoring internal linking. A hundred orphan pages with no links between them won’t perform. Every page should link to related pages in your set, and your set should link back to your main site content. Build the web of connections from day one.

    Sloppy images. Inconsistent or obviously AI-generated images with different styles on every page undermine trust. Pick one style, refine the prompt, and stick with it across the entire site.

    Going too fast on a new domain. Publishing hundreds of pages on a fresh domain in your first week is a red flag to Google. Start slow, build trust, accelerate when you see positive signals.

    What to Do This Week

    If this approach sounds interesting, here’s a concrete starting point:

    1. Pick a pattern from the five options above that fits your expertise or business
    2. Ask an AI assistant to generate your seed data — cities, competitors, questions, whatever your pattern requires
    3. Build one perfect page by hand — this becomes your template and quality benchmark
    4. Write your brand guide — voice, tone, image style, what to avoid
    5. Search for your target queries and compare your template page to what’s already ranking

    If your page is better than what’s currently out there, you’ve found your project. The tools to scale it are available right now — and most of them are free or close to it.

    In the next post, I’ll share lessons from a decade of building programmatic SEO projects — what actually works long-term, what gets penalized, and where this is all heading as AI gets more capable. For more on how AI fits into content workflows, check out my AI-assisted content strategy. And if you’re a builder looking for the technical deep dive, growth engineering with Claude Code covers the pipeline side in detail.

    But start with the pattern and the brand guide. Everything else follows from those two decisions.

  • What Is Programmatic SEO (And Is It Worth Your Time?)

    A decade ago, I launched a product called Automatic Blog Machine. The idea was simple: use natural language processing to find synonyms and rotate sentence structures so that scraped content wouldn’t get flagged as duplicate text. Spin a paragraph enough times and Google’s algorithms couldn’t tell it was the same article published across a hundred different sites.

    It worked — for about six months. Then Google got smarter, the rankings disappeared, and I learned an expensive lesson about building on a foundation of trickery.

    That was my introduction to programmatic SEO. And while the tools have changed dramatically since then, the core question hasn’t: can you create content at scale without it being garbage?

    What Programmatic SEO Actually Is

    Programmatic SEO is creating web pages using templates and data instead of writing every page by hand. That’s it. No magic, no dark art.

    Think about it this way. A real estate site with a page for every neighborhood in a city — those pages aren’t hand-written. They pull from a database: median home price, school ratings, walkability score, recent sales. The template is the same, but the data makes each page unique and useful.

    That’s programmatic SEO at its simplest. You define a pattern, plug in data, and generate pages that target specific search queries.

    Some real-world examples that are probably already in your life:

    • Yelp has a page for every “best [restaurant type] in [city]” combination
    • Zapier has integration pages for every app pairing — thousands of them
    • NerdWallet has comparison pages for financial products across every category
    • Tripadvisor has pages for every hotel, restaurant, and attraction in every city on Earth

    These aren’t hand-crafted blog posts. They’re templates filled with structured data, and they drive millions of organic search visits every month.

    The Spectrum of Complexity

    Here’s where people get intimidated. They hear “programmatic SEO” and picture a team of engineers building complex data pipelines. But the spectrum is much wider than that.

    The simple end: A Google Sheet with 50 rows of FAQ questions, turned into individual pages on a Wix or WordPress site. Each page targets a specific long-tail search query. No code required.

    The middle: A WordPress site with a template that pulls in data from a spreadsheet or simple database. Maybe you’re building city-specific landing pages for a local service, or comparison pages for products in your niche.

    The advanced end: A full pipeline that scrapes data sources, enriches it with AI, generates unique content for each page, and publishes automatically. This is where tools like Claude Code come in — but you don’t need to start here.

    The point is that programmatic SEO isn’t binary. You don’t need a sophisticated tech stack to benefit from the approach. You need a repeatable pattern and data to fill it.

    A Decade of Cat and Mouse

    My Automatic Blog Machine story isn’t unique. The history of programmatic SEO is really the history of people trying to create content at scale and Google trying to separate the valuable from the worthless.

    The early era (2010-2015): Article spinning, keyword stuffing, link farms. Content was generated to game algorithms, not to help readers. Google’s Panda and Penguin updates torched most of it. My product was part of this wave, and it deserved to get squashed.

    The template era (2016-2022): Smarter operators moved to database-driven templates. If you had genuinely useful structured data — business listings, product specs, local information — you could build pages that actually served a purpose. This worked better because there was real information behind each page, even if the presentation was formulaic.

    The early AI era (2023-2024): ChatGPT arrived, and suddenly everyone could generate “unique” text at scale. But that early generation of model output had obvious problems. Hallucinations were rampant. There was no practical way to connect the model to real data sources, so it would confidently make up facts, invent statistics, and fabricate references. If you read enough AI-generated content from that period, you developed a sixth sense for it — the same vague structure, the same filler phrases, the same lack of specificity.

    Some people tried to work around this. I experimented with using web search APIs to pull real content, then feeding it to ChatGPT to create summaries and rephrase things in a more natural way. It was better than pure hallucination, but still produced that unmistakable AI voice. And Google was getting better at detecting it.

    Where we are now (2025-2026): This is where things genuinely changed. The current generation of AI tools — particularly agent-based systems like Claude Code — can do something the earlier models couldn’t: go out on the internet, find ten real references for every claim, consolidate and synthesize that information, and present it in a way that actually helps the reader.

    That’s a fundamentally different value proposition than spinning synonyms or generating hallucinated text.

    The Real Turning Point

    Here’s the thing that changed my mind about programmatic SEO after years of skepticism.

    When you can connect AI to real data sources — web scraping, APIs, databases, live search results — you’re not faking content anymore. You’re doing genuine research at scale. The AI becomes a research assistant that can:

    • Pull together information from dozens of sources for a single page
    • Take complicated language (legal documents, scientific papers, technical specs) and rephrase it for different audiences
    • Cross-reference facts across multiple sources to reduce hallucination
    • Tie together related concepts in ways that would take a human researcher hours

    Could someone get this information by doing a Google search themselves? Maybe. Could they have a conversation with an AI chatbot and get similar answers? Possibly. But if the value you’re providing involves pulling together many sources, consolidating scattered information, and presenting it in a clear format — that’s real work, even if a machine is doing it.

    Think about a directory site that aggregates local business information from public records, review sites, and social media — then presents it in a clean, searchable format with plain-language summaries. That’s providing genuine value. The information exists on the internet already, but it’s scattered across dozens of sites in inconsistent formats. Consolidating it is the service.

    Or consider taking dense regulatory documents and creating simple, city-specific guides for small business owners. The source material is public, but it’s written in legal language that most people can’t easily parse. Making it accessible is the value.

    When Programmatic SEO Is Worth It

    Not every site or business benefits from this approach. Here’s an honest framework for deciding.

    It’s probably worth exploring if:

    • You can identify a clear pattern of search queries (like “[thing] in [place]” or “[thing A] vs [thing B]”)
    • Structured data exists that could populate those pages (public databases, APIs, scraped information)
    • Each generated page would genuinely answer someone’s question
    • You’re willing to invest upfront in building the pipeline, knowing the payoff is gradual
    • You have some technical comfort, even if it’s just spreadsheets and a basic website builder

    It’s probably not worth it if:

    • Your topic requires deep original thought or personal experience on every page
    • The search queries you’d target are already dominated by massive sites with real authority
    • You can’t identify a repeatable template that works across many variations
    • You’re only interested in tricking Google rather than helping readers
    • You need results next week (programmatic SEO is a long game)

    The honest truth: Most people who attempt programmatic SEO either give up before publishing enough pages to see results, or they cut corners on quality and get penalized. The sweet spot is finding a niche where you can provide genuine value at scale — and that niche is more specific than you think.

    The Quality Test

    Before I invest time building programmatic pages for any topic, I apply a simple test:

    If a real person landed on this page from a Google search, would they feel like their time was respected?

    Not “would they click around the site.” Not “would Google’s algorithm reward it.” Would an actual human being read this page and think, “Good, that’s what I needed to know”?

    If the answer is yes, the approach is sound regardless of how the content was created — by hand, by template, by AI, or by some combination. If the answer is no, no amount of technical sophistication will save it. Google is remarkably good at figuring out when people are disappointed by what they find.

    This is the real shift in programmatic SEO. It’s no longer about creating content that fools algorithms into thinking you’re providing value when you’re not. It’s about actually providing value — and using automation to do it at a scale that would be impossible manually.

    Where to Start

    If you’re curious about programmatic SEO but don’t want to build a complex pipeline on day one, start here:

    1. Find your pattern. What questions do people search for in your space that follow a repeatable format? Use Google’s autocomplete, “People also ask” boxes, or a tool like AlsoAsked to spot templates.
    2. Check the competition. Search for a few variations of your pattern. If the top results are from massive sites with huge authority, pick a more specific niche. If the results are thin or unhelpful, you’ve found an opportunity.
    3. Build one page by hand. Before automating anything, manually create the best possible version of one page in your template. This becomes your quality benchmark.
    4. Then scale gradually. Start with 10-20 pages, not 1,000. See how they perform. Adjust your template based on what works. Only then consider building automation.

    The tools available today — from simple no-code builders to full AI agent pipelines — make the scaling part easier than ever. But the strategic thinking that goes into choosing what to build? That’s still on you.

    I’ve written more about the technical side of building these pipelines in my post on programmatic SEO, and if you’re interested in how AI fits into a broader content workflow, take a look at how I use AI to write and publish blog posts. For the growth-minded builders, growth engineering with Claude Code gets into the deeper technical possibilities.

    But honestly? Start with the pattern. Everything else follows from that.

  • I Built an AI Agent That Monitors Your Competitors While You Sleep

    Most e-commerce brands are flying blind on competitive intelligence. They rely on a team member manually checking a few competitor sites once a week — if they remember. A competitor drops prices on a Friday afternoon. The team doesn’t notice until Monday. That’s an entire weekend of lost sales to an alert you never got.

    The manual approach doesn’t scale. It doesn’t run on weekends. And it can’t simultaneously watch pricing pages, Amazon listings, product catalogs, review trends, and ad activity across five competitors at once.

    That’s the problem this project set out to solve.


    The Build Story

    The Competitor Tracker Agent started as a personal frustration. Running a brand means constantly asking: what are competitors doing right now? Are they running a sale? Did they just launch something new? Are their reviews tanking — and is that an opening to capture market share?

    The only honest answer used to be: “I don’t know, and finding out takes too long to be worth it.”

    Here’s the thing — the data isn’t hidden. Competitor pricing is public. Amazon reviews are public. New product launches on Shopify stores are detectable. Google Ads transparency data is accessible. The problem isn’t access to the data. The problem is that gathering it, comparing it to what you saw last week, and then reasoning about what it means — that’s a full-time job.

    So the question became: what if an AI agent could do all of that automatically?

    Building small AI automations has been a recurring theme in this workflow — the insight from working on mini AI automations was that the highest-leverage moves are rarely the complex ones. You chain a few reliable steps together, automate the repetitive parts, and let the AI handle the reasoning layer. That’s exactly the architecture here.


    What the Agent Actually Does

    The Competitor Tracker Agent runs on a 6-hour scan cycle, 24 hours a day. It monitors four intelligence pillars:

    Price Monitoring

    Tracks competitor pricing across DTC websites and Amazon ASINs. Configurable thresholds mean you only get alerted when it actually matters (say, a change greater than 5%), not every minor fluctuation. It catches flash sales, coupon activity, and Buy Box changes.
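    The threshold logic itself is tiny. A sketch, with the 5% figure above as the default:

    ```python
    def is_significant(old_price: float, new_price: float, threshold: float = 0.05) -> bool:
        """True when a price move exceeds the alert threshold (default 5%)."""
        return abs(new_price - old_price) / old_price > threshold

    is_significant(34.99, 27.99)  # True: a 20% drop clears the 5% bar easily
    ```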

    Product Intelligence

    Detects new product launches before they’re announced publicly. Shopify stores expose their full product catalog via a public endpoint — a new SKU showing up there at 11pm on a Thursday gets flagged immediately. Discontinuations, variant expansions, and positioning copy changes are all tracked.
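    That public endpoint is commonly reachable at /products.json on the storefront, though some stores disable it. A sketch of the new-SKU check, with a placeholder store URL:

    ```python
    import json
    import requests

    STORE = "https://competitor.example.com"  # placeholder Shopify storefront

    def live_products():
        """Fetch the public catalog as {product_id: title}."""
        catalog = requests.get(f"{STORE}/products.json?limit=250", timeout=30).json()
        return {p["id"]: p["title"] for p in catalog["products"]}

    def new_launches(state_file="seen_products.json"):
        current = live_products()
        try:
            with open(state_file) as f:
                seen = set(json.load(f))
        except FileNotFoundError:
            seen = set(current)  # first run just records a baseline, no alerts
        launches = {pid: title for pid, title in current.items() if pid not in seen}
        with open(state_file, "w") as f:
            json.dump(list(current), f)
        return launches  # a new id appearing between scans -> fire the alert
    ```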

    Review and Sentiment Analysis

    Monitors Amazon review counts and star ratings over time. When a competitor’s ratings start declining — say, dropping from 4.3 to 4.0 over 30 days — that’s a signal. It means customers are unhappy, and if you’re selling in the same category, that’s an opening. The agent surfaces these trends before they show up in your own sales data.

    Ad and Campaign Monitoring

    Tracks competitor advertising activity via Google Ads Transparency Center and Amazon Sponsored placements. When a competitor pivots their messaging or launches a new campaign targeting terms they’ve never used before, that signals a strategic shift worth knowing about.


    The Tech Behind It

    The agent is built in Python with Claude AI as the reasoning layer. Here’s the stack:

    • Web scraping layer — Custom scrapers for competitor DTC sites, Shopify catalog endpoints, and Amazon product pages. Rotating request intervals to stay within reasonable limits.
    • Amazon monitoring — ASIN-level tracking for pricing, review counts, BSR, and ad placements via public data and optional SP-API integration.
    • Ad intelligence — SerpAPI for Google Shopping and Ads Transparency Center data; Amazon Sponsored Brands detection from search result pages.
    • Claude AI for analysis — Raw data gets fed into Claude with context about what changed since the last scan. Claude reasons about whether a change is significant, what it likely means strategically, and what action to take. This is the part that makes it genuinely useful rather than just another data dump (a sketch of this step follows the list).
    • Slack integration — Alerts fire within minutes of a significant change being detected. The daily briefing is a structured report generated every weekday at 8am.
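    Here is a minimal sketch of that analysis-plus-alert step, using the Anthropic Python SDK and a Slack incoming webhook. The model id and webhook URL are placeholders, and the prompt is my compressed stand-in for the production one:

    ```python
    import anthropic
    import requests

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    SLACK_WEBHOOK = "https://hooks.slack.com/services/..."  # placeholder

    def analyze_and_alert(competitor, old_snapshot, new_snapshot):
        msg = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder model id
            max_tokens=500,
            system="You are a competitive-intelligence analyst for an e-commerce brand.",
            messages=[{
                "role": "user",
                "content": f"Competitor: {competitor}\n"
                           f"Previous scan: {old_snapshot}\n"
                           f"Latest scan: {new_snapshot}\n"
                           "Is anything here strategically significant? If yes, explain "
                           "what it likely means and recommend one action. "
                           "If not, reply exactly: NO ALERT.",
            }],
        )
        analysis = msg.content[0].text
        if "NO ALERT" not in analysis:
            requests.post(SLACK_WEBHOOK, json={"text": analysis}, timeout=10)
    ```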

    The agent also maintains persistent memory across scans — tracking trends over weeks and months, not just comparing today against yesterday. That historical context is what lets it say things like “Acme’s prices are at a 6-month low” rather than just “price changed.”

    This fits into a broader pattern of thinking about AI as infrastructure rather than as a one-off tool. The post on growth engineering with Claude Code explored this — when you treat AI as the reasoning engine inside a persistent automated system, you get compounding returns that a prompt-and-response workflow never will.


    What the Morning Briefing Looks Like

    Every weekday at 8am, a structured report lands in a dedicated Slack channel. Here’s what a typical Friday briefing looks like:

    Price Intel: Acme Corp dropped Widget Pro from $34.99 to $27.99 (-20%). Flash sale, likely ends Sunday. Their lowest price in 6 months.

    Product Intel: BrandX quietly added “Pro Max Bundle” to their Shopify store. Not announced publicly. $89 price point — a new premium tier.

    Review Intel: No major rating changes. BrandX trending slightly down: 4.1 to 4.0 stars over 30 days.

    Ad Intel: Acme Corp added 3 new Google Shopping ads this week targeting “budget widget” and “affordable widget 2026” — consistent with their price drop strategy.

    Recommended Actions:

    1. Consider a targeted counter-promotion this weekend while Acme’s prices are low — capture price-sensitive shoppers before they return to normal pricing.
    2. Investigate BrandX’s Pro Max Bundle. If it gains traction, it could pressure mid-tier SKUs.
    3. BrandX’s review decline is an opening — consider increasing PPC bids on their branded terms.

    The key distinction is the recommendations section. Raw data is noise. The agent uses Claude to reason about what the data means in context and what to do about it. That’s the difference between a monitoring tool and actual intelligence.


    The DIY Competitive Advantage

    There’s a strong argument for building tools like this rather than buying off-the-shelf software. Enterprise competitive intelligence platforms like Crayon and Klue exist — but they’re built for B2B SaaS companies, start at $15,000+ per year, and track PR and content rather than pricing and Amazon reviews. They’re solving a different problem.

    The do-it-yourself advantage is that custom-built systems can be tuned exactly to the competitive landscape at hand. Which competitors matter. Which price changes actually warrant a response. Which product categories to watch. That specificity is what turns monitoring into actionable intelligence.


    What This Becomes

    Competitive intelligence at this level of depth and automation wasn’t accessible to small and mid-size e-commerce brands before. It required a dedicated analyst, an expensive platform, or a lot of manual work that was never consistent enough to be reliable.

    The agent changes that calculus. Tell us who your competitors are, and we install a monitoring system tailored to your market in under two weeks. Scans run every 6 hours. Alerts arrive in real-time. The morning briefing is waiting before the team starts their day.

    The parallel to building AI agent systems that handle complex, multi-step reasoning tasks is clear: the value isn’t in any single AI call, it’s in the architecture that chains intelligence together into something that runs continuously without human intervention.


    Full Details and Demo

    The full service page — including pricing, the complete feature breakdown, and a sample Slack report — is at mattwarren.co/competitive-intelligence.

    If this is a problem your brand is dealing with, book a free 30-minute competitor audit. Walk away with a competitive landscape snapshot whether you buy or not.

  • I Couldn’t Afford an Executive Coach, So I Built One

    Over the weekend I was talking with a high-level executive coach. Smart person. Real deal. Halfway through the conversation, they offered me a spot in their group program — a dozen people, regular group sessions, accountability framework, the whole package.

    I passed.

    Not because it wasn’t valuable. It clearly was. But the price point was higher than I wanted to commit to, and the weekly time requirement was more than my schedule could absorb right now. So I said thanks, thought about it for about 20 minutes, and then opened Claude Code.

    Here’s what I built instead.

    The idea that sparked it

    The coaching conversation had been happening over text. Just back-and-forth messages, advice trickling in throughout the day. What struck me about that format wasn’t the content — it was the delivery mechanism.

    You don’t go get it. It comes to you.

    That’s a fundamentally different experience than opening ChatGPT and typing a question. When a message shows up on your phone unprompted, your brain processes it differently. There’s a social reflex that kicks in. Someone reached out. Someone is thinking about you and your goals. Even if intellectually you know it’s a bot, the messenger app context does something to the accountability equation that a browser tab simply doesn’t.

    So the question became: could an AI agent replicate that dynamic? Not just a chatbot that answers questions, but something that runs persistently, thinks about your situation in the background, and reaches out to you when it has something worth saying?

    The build: research first

    The first thing I did was ask Claude Code to go do research. Not write code — just go learn things. I sent it out to find scientific papers, behavioral research, and business frameworks on what actually makes executive coaching effective. What questions do good coaches ask? How do they maintain accountability? What cadences work? How do you help someone stay focused on priorities without becoming a nag?

    It ran for about 20 minutes, pulling from multiple sources, organizing findings into a structured research document. The output was genuinely useful — not just “here are some coaching tips” but a breakdown of the behavioral psychology behind why coaching works, what distinguishes great coaches from mediocre ones, and the specific techniques that show up consistently in the research.

    That document became the knowledge base for everything that followed.

    From research to software plan

    Once the research was solid, I asked it to turn those findings into a software plan. Here’s what that plan centered on:

    A Telegram bot as the interface. Not a web app. Not a new chat window you have to go find. A bot that lives in your existing messaging app, alongside your other conversations, and behaves like a contact in your phone. This was non-negotiable from the start — the whole point was that the interface creates accountability, and that only works if it’s somewhere you already check.

    Proactive scheduling. The research consistently highlighted a morning check-in as one of the highest-leverage interventions in any coaching relationship. What are your top three things to accomplish today? Simple question, but when asked by a person (or something that feels like a person), it creates a kind of micro-commitment that the end of the day will test. The bot would send this every morning, unprompted.

    Evening accountability. Paired with the morning check-in is an end-of-day follow-up. Did you accomplish those three things? If not, what got in the way? This is where accountability becomes real. It’s easy to type your priorities and then ignore them. It’s harder when something is going to ask you about them later.

    A memory system. This was the piece that made everything else worth building. A good coach remembers what you told them last week. They notice patterns. They connect what you said in January to something you’re struggling with in March. Without memory, a coaching bot is just a fancy prompt. With it, the conversations compound. I asked for a SQLite database and a system that would pull relevant context into each interaction — what goals had been discussed, what came up in recent check-ins, what had been going well or poorly.
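    A minimal sketch of such a memory layer follows; the schema and helpers are my guesses at the shape, not the project’s actual code:

    ```python
    import sqlite3

    db = sqlite3.connect("coach.db")
    db.execute("""CREATE TABLE IF NOT EXISTS notes (
        ts   TEXT DEFAULT CURRENT_TIMESTAMP,
        role TEXT,  -- 'user' or 'coach'
        text TEXT
    )""")

    def remember(role, text):
        """Persist one message so future check-ins can reference it."""
        db.execute("INSERT INTO notes (role, text) VALUES (?, ?)", (role, text))
        db.commit()

    def recent_context(days=7):
        """Pull the last week of conversation to prepend to the next prompt."""
        rows = db.execute(
            "SELECT ts, role, text FROM notes WHERE ts >= datetime('now', ?) ORDER BY ts",
            (f"-{days} days",),
        ).fetchall()
        return "\n".join(f"[{ts}] {role}: {text}" for ts, role, text in rows)
    ```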

    What it does, concretely

    The resulting application is straightforward in structure but surprisingly capable in practice. It runs as a persistent background process on my computer. When it starts up (configured to launch on login), it connects to Telegram and starts listening. A scheduler runs alongside it.

    Every morning, before I’ve opened a browser, there’s a message waiting:

    Good morning. What are your three most important things today?

    I type back. The conversation might go a few exchanges. The bot has context from previous days, so it might note that one of the things I mentioned connects to a goal we discussed earlier in the week, or ask whether the thing I said was blocking me last Thursday got resolved.

    At the end of the day, there’s a follow-up. At the end of the week, a slightly longer check-in. The system also has a lightweight internal process that evaluates whether anything in my recent history is worth proactively surfacing — something that’s been flagged multiple times, a deadline approaching, a thread that went quiet. Most days it decides there’s nothing urgent to interrupt with. Occasionally it sends something.

    That’s the whole system. It’s not complicated. But the experience of using it is substantially different from anything I’ve tried before.

    Why this actually works

    Here’s the thing about accountability: it’s a social phenomenon. The research makes this clear. People keep commitments better when they feel that someone is tracking. It doesn’t matter much whether the tracker is a person, a journal, or a piece of software — what matters is the sense that your stated intentions have a witness.

    Web apps and chatbots fail at this because they require you to initiate. You have to go there, open it, decide to engage with your goals. That friction is small in theory and enormous in practice. The days you need accountability most are the days you’re least likely to open the accountability app.

    A Telegram bot sidesteps this entirely. It comes to you. The interface is indistinguishable from a message from a real person. On some level your brain doesn’t fully process the distinction, and that’s exactly the point.

    After a few days of using this, the morning question has started to feel like a real thing I need to respond to. The end-of-day check-in has made me more honest with myself about what actually got done. I’ve written before about the challenge of maintaining focus — this is the most practical solution I’ve found.

    The bigger shift

    I keep coming back to one observation from this project: at the end of it, I felt like I’d hired a coach. Not built an app. Hired someone.

    That reframe is worth sitting with. There’s a long history of software built around the idea of helping people with productivity, goal setting, and accountability. Most of it has failed to change behavior in any meaningful way because it treats the problem as an organizational challenge. Here’s a system to track your goals. Here’s a dashboard. Here’s a way to categorize your tasks.

    What actually changes behavior is a relationship. Someone who asks how it’s going, remembers what you said, and expects you to show up. Software has historically been incapable of this because it’s reactive — it waits for you.

    The combination of a persistent agent, a messenger interface, and a memory system produces something that isn’t quite software and isn’t quite a relationship. It’s something new. And it works because it targets the actual mechanism of accountability rather than building another dashboard nobody opens.

    Building something like this

    If this sounds useful, the pattern is reproducible. You’re not building anything exotic. The components are:

    • A Telegram bot (the interface — the python-telegram-bot library handles this in about 50 lines)
    • A scheduler (APScheduler or a simple cron-like structure) for proactive messages
    • A memory layer (SQLite is more than sufficient — just store conversations and let the agent summarize and retrieve them)
    • A knowledge base (the research Claude Code collected became the system prompt that shapes every interaction)
    • A persistent process (a simple Python script set to run on startup, or a systemd service if you want something more robust)
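    Wired together, the skeleton looks roughly like this. It’s a sketch with a placeholder token and chat id; python-telegram-bot’s built-in JobQueue (APScheduler under the hood, installed via the python-telegram-bot[job-queue] extra) covers the scheduler:

    ```python
    import datetime
    from telegram.ext import Application, ContextTypes, MessageHandler, filters

    TOKEN = "..."        # bot token from @BotFather (placeholder)
    CHAT_ID = 123456789  # your own chat id (placeholder)

    async def morning_checkin(context: ContextTypes.DEFAULT_TYPE):
        await context.bot.send_message(
            chat_id=CHAT_ID,
            text="Good morning. What are your three most important things today?",
        )

    async def on_reply(update, context):
        # Hand the message to the memory layer and the model here,
        # then send back the coach's response.
        await update.message.reply_text("Noted. I'll ask how it went tonight.")

    app = Application.builder().token(TOKEN).build()
    app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, on_reply))
    app.job_queue.run_daily(morning_checkin, time=datetime.time(hour=7, minute=30))
    app.run_polling()
    ```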

    The whole thing lives in a folder on your computer. No hosting required. No subscriptions. Accessible anywhere through Telegram.

    Claude Code handled the research, the architecture, and the implementation in a single session. The approach I use for building with AI — research first, then architecture, then implementation — works especially well for projects like this because the research phase directly shapes the software design.

    Software that runs forever

    There’s something that becomes obvious once you build one of these persistent agents and live with it for a few days: this is a fundamentally different category of software.

    We know command-line tools, web apps, desktop apps, and mobile apps. These are all things you go get when you need them. They sit in a menu or a browser tab or an app drawer, waiting to be opened.

    Persistent agents are different. They run in the background. They monitor things. They decide when something requires your attention. They interrupt you only when warranted. The interface is a chat — a format your brain associates with people, not programs.

    This is where more software is going. Not apps you download — agents you deploy. Processes that run on your hardware (or in the cloud), maintain memory across months of interaction, and have access to tools that let them actually do things on your behalf. The executive coach is a simple example. The same architecture could monitor your business metrics and alert you when something looks off. It could track your health data and notice patterns before you do. It could manage a process — customer follow-ups, content scheduling, financial reporting — and only surface the items that genuinely need your judgment.

    The paradigm shift isn’t about AI getting smarter. It’s about software becoming proactive. That transition is happening fast enough that most people haven’t noticed it yet, but once you’ve lived with a persistent agent for a week, you’ll wonder why all your software was passive.

    I passed on the coaching program. A few hours later I had a coach. That’s the most interesting part of all of this — not the technology, but what becomes possible when the barrier to building drops to nearly zero.

  • How to Use AI Agent Teams to Optimize Your Product Pages

    Most product pages are built once and forgotten. Someone writes a description, uploads photos, sets a price, and moves on. Months later, the page is still converting at 1% and nobody’s touched it because “it’s fine.”

    The problem is that a good product page isn’t one skill. It’s copywriting, conversion rate optimization, visual design, and brand consistency all at once. No single AI prompt holds all of those disciplines in focus simultaneously.

    I’ve written about the adversarial agent approach before — assembling specialized AI agents into a team, giving each one a scoring rubric, and iterating until they all agree the work is good. I recently applied this to a real Shopify product page with a four-agent team: a copywriter, a CRO specialist, a branding expert, and a visual designer. The conversion rate doubled in seven days.

    Here’s how to adapt this for your own pages.

    Score First, Then Build a Task List

    The key adaptation for product pages is turning agent feedback into a concrete task list you can work through.

    Point your agent team at the current page and have each specialist score it out of ten against their rubric. You’ll get feedback like: “6/10 — Add to Cart button blends into the background, social proof is buried below three scrolls” from the CRO agent, and “5/10 — product descriptions are feature lists, not benefit statements” from the copywriter.

    Combine all of their recommendations into a single prioritized list. This is your improvement backlog. The types of changes that consistently surface across e-commerce pages:

    • Primary action prominence — more contrast, higher placement on mobile, larger touch target for the CTA. Almost always the highest-impact change.
    • Mobile layout — product images eating too much vertical space, pushing price and CTA below the fold.
    • Benefit-oriented copy — shifting descriptions from “what this is” to “what this does for you.”
    • Social proof repositioning — moving reviews and trust signals closer to the point of purchase decision.
    • FAQ expansion — every unanswered objection is a reason to leave the page.
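
    To make the merge concrete, here's a minimal sketch of turning raw specialist feedback into that prioritized backlog. The Rec structure and the impact scores are illustrative, not a real library.

    ```python
    # Sketch: merge feedback from multiple specialist agents into one
    # backlog, ordered by estimated impact. All values are made up.
    from dataclasses import dataclass

    @dataclass
    class Rec:
        agent: str   # which specialist raised it
        impact: int  # 1-10, that agent's estimate of conversion impact
        note: str    # the actual recommendation

    feedback = [
        Rec("cro", 9, "Raise Add to Cart contrast; move it above the fold on mobile"),
        Rec("copy", 7, "Rewrite descriptions as benefit statements"),
        Rec("brand", 4, "Swap the stock photo for an on-brand lifestyle shot"),
    ]

    # The improvement backlog: everything, highest-impact first.
    backlog = sorted(feedback, key=lambda r: r.impact, reverse=True)
    for rec in backlog:
        print(f"[{rec.agent}] ({rec.impact}/10) {rec.note}")
    ```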

    Work through the list with yourself in the loop. Don’t hand everything to the AI and walk away. Agents occasionally recommend changes that score well on their rubric but don’t fit your broader context — aggressive urgency tactics that feel off-brand, or rewrites of sections you’ve crafted for a specific reason.

    After each batch of changes, re-score. You’ll see numbers climb, and you’ll see new issues surface that weren’t visible before. If you’re not familiar with the challenges of split testing, this iterative approach with agent scoring is a practical alternative — you get structured feedback without needing statistical significance on every change.

    Build Features Instead of Buying Apps

    One thing that came out of this process: AI agents can build small features that would normally cost $10 to $20 a month as a Shopify app.

    The CRO agent suggested social proof notifications — the little popups showing recent purchases. Instead of installing an app, an AI agent wrote a script that pulls real order data from the Shopify API, stores it in metafields, and displays it with a Liquid snippet. Twenty minutes of agent time, no monthly fee, no bloated JavaScript, no third-party tracking.

    This works for a surprising number of app store features. Countdown timers, stock warnings, cross-sell blocks, announcement bars. If the feature is simple enough to describe, an agent can build a lightweight version that does exactly what you need. This is the same growth engineering approach I’ve been using across my marketing stack — treating your code editor as the platform instead of buying SaaS for everything.
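
    For flavor, here's a hedged sketch of the data half of such a script, using the Shopify REST Admin API via requests. The store domain, token, and metafield namespace are placeholders, and this isn't the script the agent actually wrote.

    ```python
    # Sketch: pull recent orders and store them as a shop-level JSON
    # metafield that a Liquid snippet can read back. Placeholders only.
    import json
    import requests

    SHOP = "your-store.myshopify.com"  # placeholder
    TOKEN = "shpat_..."                # placeholder Admin API token
    BASE = f"https://{SHOP}/admin/api/2024-01"
    HEADERS = {"X-Shopify-Access-Token": TOKEN, "Content-Type": "application/json"}

    # Pull the most recent orders (requires the read_orders scope).
    orders = requests.get(
        f"{BASE}/orders.json",
        headers=HEADERS,
        params={"limit": 10, "status": "any",
                "fields": "created_at,shipping_address,line_items"},
    ).json()["orders"]

    recent = [
        {"city": (o.get("shipping_address") or {}).get("city"),
         "product": o["line_items"][0]["title"],
         "when": o["created_at"]}
        for o in orders if o.get("line_items")
    ]

    # Store as a shop-level JSON metafield; in Liquid this is roughly
    # shop.metafields.social_proof.recent_orders.value.
    requests.post(
        f"{BASE}/metafields.json",
        headers=HEADERS,
        json={"metafield": {"namespace": "social_proof", "key": "recent_orders",
                            "type": "json", "value": json.dumps(recent)}},
    )
    ```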

    Then Work on the Economics

    A better-converting page is only half the equation. If margins are thin and average order value is low, you can’t scale paid advertising profitably.

    Once conversion improvements stabilize, shift the agent team to pricing structure. Have them model bundle configurations, free shipping thresholds, COGS at different quantities, pick and pack costs, and shipping rates across weight breaks. The goal is maximizing contribution margin per order while maintaining conversion rates.
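
    A toy version of that modeling, just to show the shape of it. Every number below is made up; plug in your real COGS, pick and pack costs, and shipping rates.

    ```python
    # Toy contribution-margin model of the kind the agent team iterates
    # on. All numbers are invented for illustration.
    def contribution(price: float, qty: int, unit_cogs: float,
                     pick_pack: float, shipping: float) -> float:
        revenue = price * qty
        costs = unit_cogs * qty + pick_pack + shipping
        return revenue - costs

    # Compare bundle configurations: (quantity, bundle price).
    for qty, price in [(1, 29.0), (2, 49.0), (3, 65.0)]:
        margin = contribution(price, qty, unit_cogs=8.0,
                              pick_pack=2.5, shipping=6.0)
        print(f"{qty}x bundle: ${margin:.2f} contribution per order")
    ```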

    The pricing structure that came out of this was more aggressive than anything I would have tested on my own. The AI ran the numbers without the emotional anchoring that comes from having set the original prices yourself. No bias. Just math.

    The structural changes worth considering:

    • Bundle incentives inside the cart — present options the moment someone adds a product, not on a separate page.
    • Tiered thresholds — make each additional item feel like an obvious deal. Free shipping at one level, a percentage off at the next.
    • Higher price points — if your page is now doing its job with strong copy and visible social proof, customers may tolerate more than you assume.

    Measure Patiently

    Page layout changes show results fast. My conversion improvements were clear within the first week.

    Avoid changing too much at once; otherwise it's hard to isolate which changes were improvements and which were duds.

    Give it a shot on your site – let me know how it goes.

  • What You’re Really Avoiding Isn’t the Work

    What You’re Really Avoiding Isn’t the Work

    Everyone has a version of this. A category of work that sits on the to-do list for weeks, then months, slowly accumulating guilt. For some founders it’s legal. For others it’s HR, compliance, or investor reporting. For me, it’s always been accounting.

    Not because I can’t do math. Because every time I opened QuickBooks, I’d feel the weight of everything I didn’t understand, and I’d close the tab. There’s always something more urgent than confronting what you don’t know.

    This week I finally sat down and did all of it. Reverse-engineered spreadsheets. Audited our QuickBooks accounts. Found missing payables. Fixed miscategorized transactions. Worked through international currency adjustments. Even handled an off-the-books equity correction I’d been dreading for longer than I’d like to admit.

    And here’s the part I didn’t expect: it was actually kind of fun.

    The difference wasn’t discipline. It was having AI as a collaborator. And the reason that mattered has nothing to do with accounting specifically.

    The real barrier is shame

    Think about the task you’ve been avoiding. Now think about why.

    It’s probably not because the task itself is impossibly hard. It’s because there’s a gap between what you know and what you’d need to know to do it confidently, and closing that gap feels expensive. You’d have to ask someone. That someone is busy, or expensive, or both. And the questions you need to ask feel like they should be obvious.

    That was my relationship with accounting for years. Accountants always seem busy. When I’d get on a call with mine, I’d feel the clock ticking. Every question felt like it should be obvious. Do I really need to ask what a trial balance is? Can I admit I don’t understand why this line item is negative? Is it okay to not know the difference between cash-basis and accrual?

    So you nod along, say “makes sense,” and leave the call having learned nothing. Then you avoid the whole topic for another month.

    This is the shame barrier. It’s not a knowledge problem. It’s a help-access problem. The help exists, but the social cost of accessing it is high enough that you just… don’t.

    What happens when the shame disappears

    When I sat down with Claude Code this week and started working through our financials, I could ask anything. Literally anything.

    “What does this column mean?” No judgment. “Why is this number negative when we received money?” Clear explanation. “Walk me through how this journal entry should work.” Step by step, as many times as I needed.

    I went deep on things I’d been skating past for years. The nuances of our P&L statement. How the balance sheet connects to the trial balance. Why certain transactions were showing up in the wrong categories. What our cash flow statement was actually telling me versus what I assumed it was telling me.

    Each question led to a better question. And because I wasn’t worried about wasting someone’s time or looking dumb, I kept going. I’d ask a follow-up, then another, then branch into something related. It was the first time accounting felt like learning instead of an exam I was failing.

    If you’ve ever had a mentor who made you feel safe asking the dumb questions, you know how much faster you learn in that environment. AI gives you that dynamic on demand, in any domain, at any hour.

    The concrete results

    This wasn’t a vague learning exercise. I worked through real problems in our actual books:

    Reverse-engineered inherited spreadsheets. We had several financial spreadsheets maintained by different people over time. I fed them to Claude and asked it to explain what each one was tracking, how the formulas worked, and where there were inconsistencies. It found things that had been wrong for months. If you’ve ever inherited a spreadsheet from someone who left the company and spent hours trying to figure out what it was supposed to do, AI turns that from hours to minutes.

    Audited QuickBooks categories. Transactions miscategorized across multiple accounts. Expenses in the wrong cost centers. Payables missing entirely. Claude walked me through each one, explained what the correct category should be and why, and helped me make the corrections.

    Handled the stuff I’d been avoiding. International currency adjustments. An equity correction I didn’t fully understand the accounting treatment for. Reconciliation of accounts that hadn’t been reconciled in too long. These are the kinds of things where I’d normally email the accountant, wait three days, get an answer I half-understood, and still feel uncertain about whether it was done right.

    Thought through the strategic questions. Beyond the bookkeeping, I used the conversation to think through bigger questions. I’ve thought about managing cash flow before, but this was different. What are our actual options right now? What interest rate is expensive versus reasonable for our situation? What are the trade-offs between different funding approaches? These aren’t strictly accounting questions, but they live in the same “financial stuff I’m uncomfortable with” bucket, and having a patient conversation partner made them approachable.

    The pattern worth noticing

    Here’s what I want you to take from this. It’s not “use AI for accounting,” although you should.

    Every business owner has domains they understand well and domains where they're faking it. For me, product development, marketing, and technical infrastructure are comfortable territory. Finance has always been the thing I know I should understand better but never prioritize learning. It's a version of the fear of the unfamiliar that I think most founders carry around quietly.

    AI doesn’t replace the expert. I still need a CPA for tax strategy and compliance. But it fills the gap between “I know nothing” and “I know enough to have a productive conversation with my accountant.” That middle layer of competence is what most people skip, and it’s exactly where AI excels.

    Before this week, my accounting approach was “send everything to the accountant and hope for the best.” Now I actually understand what’s in our books. I can read a P&L and know what I’m looking at. I can spot when something looks wrong. That upgrade happened because the learning barrier dropped to zero.

    Apply this to your thing

    This keeps happening. Tasks I’ve been dreading turn out to be approachable, even enjoyable, once I have a collaborator that’s patient, knowledgeable, and available whenever I’m ready to work. It happened with growth engineering. It happened with the small automations that add up. Now it’s happened with accounting.

    The common thread is that the barrier was never ability. It was the friction of getting help. AI removes that friction, and suddenly the things you’ve been avoiding become the things you’re making progress on.

    So here’s my challenge to you: think about the task that’s been sitting on your list the longest. The one you keep bumping to next week. Ask yourself whether the problem is really that the task is hard, or whether the problem is that you don’t have a safe, low-cost way to close your knowledge gap.

    If it’s the second one, you might be surprised at what happens when you just start asking questions.

  • Let’s Talk About the Openclaw in the Room

    Let’s Talk About the Openclaw in the Room

    Everyone’s talking about Openclaw this week. If you haven’t seen it: it takes a Claude model, strips off the guardrails, wraps it in some extra tooling, and lets it run autonomously. People are impressed. I ran it. And I have thoughts.

    What Openclaw actually does

    There are really three things going on:

    First, it runs in what they call dangerous mode. No safety rails, full access to your machine. The agent will scour your computer for API keys hidden in config files, environment variables, wherever. It may use them. It may publish them. You don’t know. This is why the security-conscious crowd runs it on dedicated cloud hardware with nothing on it they didn’t explicitly provision. That’s the right instinct.

    Second, it has a built-in cron that lets the agent schedule its own work. This is the part that matters most. Tell it to manage your X account and it will keep posting all day without stopping. It doesn’t run to completion and then wait for you to kick it again. It stays alive.

    Third, it shifts the interface to be chat-centric through existing messaging channels. The win here is portability. You can talk to it while commuting, ask questions from your phone, and it has the full context of your projects, your files, your authentication. That’s something you don’t get when you open a fresh conversation in ChatGPT.

    My take: too big a leap

    I’ve been running agents hard for weeks. I’ve built multi-agent teams with Claude Code and pushed the current tooling about as far as it goes. And my honest reaction to Openclaw is that it jumped too far.

    The user interface introduces a huge number of configuration options. There are a lot of moving parts to set up. It’s not an incremental lift from the interfaces people are already comfortable with. It’s a full departure. And I think that matters more than the community is acknowledging right now.

    There may also be some architectural choices that are going to be hard to walk back from. When you build a foundation that’s too complex from day one, you end up having to simplify later, and simplifying is always harder than starting simple.

    The real insight: agents need a heartbeat

    Strip away the configuration and the dangerous mode and the chat interface. What’s the core idea that makes Openclaw feel alive?

    It’s a loop.

    Current AI models just turn off. They don’t compute any signals between conversations. There’s no input, no processing, nothing running. They’re not awake unless someone talks to them or they’re working through a task. They have no self-start feature. When they reach the end of a prompt, they effectively pass out and don’t wake up until someone asks them another question.

    That’s wildly different from a human brain, which keeps running between conversations. You finish talking to someone and you keep thinking about what they said. You notice things. You have ideas at 2am.

    The insight, whether it came from the RALPH loop concept or from Openclaw's cron, is the same: give the agent a heartbeat. A daemon process that periodically checks in and says “is there anything new to do?” That's what keeps a little bit of life in these things.

    What a heartbeat enables

    With just a simple startup hook, every time your agent wakes up it checks:

    • Are there new blog posts or news to process?
    • Did anybody post something on a website I’m monitoring?
    • What time is it, and should I adjust the smart home lights?
    • Are there new GitHub issues or error logs on the server?
    • Is there anything left in the PRD that needs building?
    • Can I rerun the unit tests to make sure everything still passes?

    Each check is an opportunity for the agent to take a bigger action. That action might be posting to Twitter, writing a marketing report, continuing development on a project, or flagging something that needs human attention.

    This is a different thing entirely from scheduled tasks in ChatGPT. Those run a prompt on a timer, sure. But they don't spin off and create new things. They don't continuously work through a multi-step project. A local agent with a heartbeat can pick up where it left off, assess the state of a project, and keep going. I've been using this kind of persistent agent approach for growth engineering with Claude Code, and the difference is night and day.
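
    If you want to see how small that loop really is, here's a minimal sketch. It assumes Claude Code's headless mode (claude -p) as the agent invocation; swap in whatever you actually use, and treat the check list as illustrative.

    ```python
    # Minimal heartbeat: wake up, run each check through the agent,
    # sleep, repeat. Assumes Claude Code's headless mode (`claude -p`).
    import subprocess
    import time

    CHECKS = [
        "Any new posts or news in the feeds I monitor?",
        "Any new GitHub issues or fresh error logs on the server?",
        "Anything left in the PRD that needs building?",
    ]

    def run_agent(prompt: str) -> str:
        result = subprocess.run(
            ["claude", "-p", prompt], capture_output=True, text=True
        )
        return result.stdout

    while True:
        for check in CHECKS:
            run_agent(check)    # each wake-up is a chance to act
        time.sleep(30 * 60)     # the heartbeat: every 30 minutes
    ```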

    A simpler path

    I’ve been building this into the Culture framework. There’s a daemon that auto-updates itself when the core code changes and pings each agent on a schedule. Anything the agent wants to do gets triggered on that heartbeat. Check a website, generate some content, participate in a larger process.

    Right now it’s basic. But the direction is clear. Delayed jobs: “check this in 30 minutes” and the agent schedules itself to wake up in 30 minutes. Recurring tasks on a cron: run this report every two days, check inventory every morning, post a thread every afternoon. These patterns are well established in SaaS operations. Work queues, background jobs, scheduled tasks. Every serious web application runs on them. The difference is that now the worker picking up the job is an AI agent instead of a function.
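
    Here's a sketch of what the delayed-job half could look like, assuming a SQLite table the agent itself writes to. The table and helper names are made up, not the Culture framework's actual schema.

    ```python
    # Sketch of a self-scheduling table: the agent calls schedule() to say
    # "check this in 30 minutes," and the heartbeat drains whatever is due.
    import sqlite3
    import time

    db = sqlite3.connect("jobs.db")
    db.execute("CREATE TABLE IF NOT EXISTS jobs (run_at REAL, prompt TEXT)")

    def schedule(prompt: str, delay_minutes: float) -> None:
        db.execute(
            "INSERT INTO jobs VALUES (?, ?)",
            (time.time() + delay_minutes * 60, prompt),
        )
        db.commit()

    def due_jobs() -> list[str]:
        now = time.time()
        rows = db.execute(
            "SELECT prompt FROM jobs WHERE run_at <= ?", (now,)
        ).fetchall()
        db.execute("DELETE FROM jobs WHERE run_at <= ?", (now,))
        db.commit()
        return [prompt for (prompt,) in rows]

    schedule("Re-check that website for the restock", delay_minutes=30)
    ```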

    And the whole thing sits on top of Claude Code. No new interface to learn. No massive configuration surface. Just a daemon and a skill file, extending the tools people are already using.

    Incremental beats revolutionary

    Openclaw might get there. They might simplify the interface and solidify the architecture. But right now it feels like it skipped a few steps.

    I think the safer bet is incremental. Add one thing at a time to the tools people already know. The daemon is the single most valuable addition: it turns a stateless prompt-response tool into something that behaves like a persistent agent. Combine that with skill files for context and you have most of what makes Openclaw exciting without the complexity tax. It’s the same philosophy behind mini AI automations: small additions, compounding returns.

    If you want to try this approach, join the Culture at join-the-culture.com. It’s early, but the idea is simple: give your agents a heartbeat and see what they do with it.

  • Adversarial Agents: How AI Teams Build Better Creative Work

    Adversarial Agents: How AI Teams Build Better Creative Work

    In software engineering, tests and code exist in tension. Unit tests verify the program is correct. The program, in turn, validates that the tests make sense. They reinforce each other. Neither is complete without the other.

    I’ve been applying this same adversarial principle to creative work with AI, and it’s producing noticeably better results than single-agent prompting.

    The single-agent problem

    A single AI agent thinks linearly, one token at a time. Ask it to build a landing page, and it’ll produce something reasonable. But a good landing page isn’t just one skill. It’s copywriting, web design, conversion rate optimization, brand compliance, marketing strategy, and sometimes legal considerations, all at once.

    No single pass through a context window can hold all of those disciplines in focus simultaneously. The agent will nail the copy but forget the CRO fundamentals, or get the design right but drift off brand voice. Something always slips.

    Setting up the adversarial team

    The fix is to stop asking one agent to do everything and instead assemble a team where each member brings a deep specialization.

    I have a main orchestrator agent spawn sub-agents (or use Claude Code’s team features), and each team member pulls in a dedicated skill file loaded with context for their domain. A copywriting agent might have 500 examples from top copywriters, excerpts from books, your favorite and least favorite examples. A web design agent has example pages, layout patterns, accessibility standards. A branding agent carries your full brand guidelines, voice documentation, and imagery specs.

    These skill files can be massive and detailed. That’s the point. You’re front-loading each agent’s short-term memory with deep expertise before it ever looks at your work. I touched on this idea of building AI-operable systems in a previous post, and the skill file approach takes it even further.

    The rubric and scoring loop

    Each specialized agent receives the current draft of whatever you’re building and evaluates it through its own lens. The CRO agent, for example, might score against a rubric like:

    • Is the value proposition clear above the fold?
    • Are CTAs bold with action-oriented copy?
    • Are social proof elements (ratings, testimonials) visible?
    • Where is the pricing positioned?
    • Is there urgency (countdown timer, limited availability)?
    • Is the page scannable with clear visual hierarchy?

    It scores each dimension, produces an overall rating out of 10, and returns the score along with its top recommendations for improvement.

    Every agent does this independently, through its own lens. The copywriter scores the writing. The designer scores the layout and visuals. The brand agent checks voice and visual consistency. Each one comes back with a number and a list of suggestions.

    Convergence through conflict

    This is where it gets interesting. These agents don’t naturally agree. Good copywriting might clash with brand voice. Bold CRO tactics might conflict with clean design sensibility. Compliance requirements can undercut persuasive copy.

    They’re in genuine tension, just like real team members with different expertise.

    The orchestrator’s job is to synthesize:

    1. Collect all scores. If any agent scores below 9 out of 10, another iteration is needed.
    2. Read the feedback from all agents and identify the most impactful changes.
    3. Revise the deliverable, balancing competing recommendations.
    4. Send it back out for another round of scoring.

    Each cycle tightens the work. The copy gets sharper and the design gets more intentional. Objections get handled. Details that a single-pass agent would miss get caught by one specialist or another.
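
    As a rough sketch of that cycle in code: score() and revise() below are hypothetical stand-ins for spawning sub-agents, with dummy bodies included so the control flow runs end to end.

    ```python
    # Sketch of the convergence loop: score with every specialist,
    # revise against the combined feedback, repeat until all score 9+.
    SPECIALISTS = ["copywriter", "cro", "brand", "designer"]
    THRESHOLD = 9   # iterate until every specialist scores 9 or above
    MAX_ROUNDS = 6  # safety valve so the loop can't run forever

    def score(specialist: str, draft: str) -> tuple[int, list[str]]:
        # In practice: spawn a sub-agent primed with this specialist's
        # skill file, apply its rubric, parse score plus suggestions.
        return 9, []  # dummy value so the sketch runs

    def revise(draft: str, feedback: list[str]) -> str:
        # In practice: the orchestrator balances competing
        # recommendations and rewrites the deliverable.
        return draft

    draft = "<current landing page draft>"
    for round_num in range(MAX_ROUNDS):
        reviews = {name: score(name, draft) for name in SPECIALISTS}
        if all(s >= THRESHOLD for s, _ in reviews.values()):
            break  # every specialist is satisfied
        feedback = [note for _, notes in reviews.values() for note in notes]
        draft = revise(draft, feedback)
    ```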

    The GAN connection

    This plays on one of my favorite concepts in AI: the generative adversarial network. In a classic GAN, one model generates images while a second model tries to determine if each image is real or AI-generated. They train against each other. The generator improves because the discriminator keeps catching it, and the discriminator improves because the generator keeps getting better at fooling it.

    What makes GANs clever is that they create a self-improving feedback loop without needing manually labeled training data. The adversarial structure itself is the training signal.

    What I’m describing with agent teams operates at a higher level: LLMs in role-based scenarios providing structured feedback to each other. But the principle is the same: tension between evaluators and creators drives quality upward through iteration.

    What this actually looks like

    Over the past couple weeks, I’ve used this pattern for:

    • Landing pages for my business. Multiple sales pages where CRO, copywriting, brand, and design agents each scored and refined the work through several iteration cycles.
    • A full blog redesign pulling in SEO, marketing strategy, brand identity, and web design as separate evaluation lenses. I’ve been using this kind of growth engineering approach with Claude Code across a lot of my marketing work.
    • A short playbook on using AI for business, where editorial, subject matter, and audience-fit agents each had their say.
    • Software where domain expertise agents (say, one that understands CPG accounting) worked alongside a coding agent to build something neither could have built alone.

    In each case, the final product had a completeness that single-pass generation just doesn’t produce. You notice it. Fewer holes, fewer “oh we forgot about that” moments.

    The cost

    Let’s be honest about the trade-offs. This approach burns through tokens. A landing page might take 30 to 40 minutes of agent runtime with multiple research phases, iteration loops, browser screenshots for visual verification, and re-scoring cycles.

    That’s a lot compared to a single prompt that returns something in 30 seconds. But 30 minutes for a landing page that’s been reviewed by the equivalent of five specialists? I’ll take that trade every time.

    You’re trading tokens for quality assurance. The same way a real team costs time and money to review each other’s work, the agent team costs compute. But the output is closer to what a real team would produce.

    Same brain, different books

    I keep coming back to this thought. These agents are fundamentally the same model. Claude is Claude, whether it’s playing the copywriter or the CRO specialist. The difference is what you loaded into its context window before it started working.

    It’s like having the same person walk into the room, but each time they’ve just finished reading five different books. The copywriting agent just absorbed every example and principle you could fit in. The brand agent just re-read your entire brand bible. They bring different perspectives because they’re primed with different information, not because they’re different intelligences.

    That framing is why I think this works so well. You’re giving the same capable reasoner different source material to reason from, and the disagreements that emerge are real, not manufactured.

    Running a company of agents

    Working this way is starting to feel less like programming and more like management. You delegate work, wait for feedback, reconcile conflicting opinions, make a call, and send it back for another round. I wrote about the early stages of this shift in The AI CEO, and it’s accelerating faster than I expected.

    In some of these cases, you’re delegating to a team. It won’t be long before you’re delegating to departments. Fully AI departments with dozens or hundreds of agents that have been sub-delegated to operate on specific pieces of a larger project.

    I’m already routinely running five to ten agents against the same deliverable. Scale that up and you start to see the shape of something that looks a lot like an org chart, except every box is an AI agent with a specialized skill set.

    Try it

    If your AI tool of choice supports agents and sub-agents, try this. Even a rough version works:

    1. Pick a deliverable: a landing page, a blog post, a piece of code.
    2. Identify three or four disciplines that matter for quality. Copy, design, SEO, whatever fits.
    3. Create a skill prompt for each discipline, as detailed as you can make it.
    4. Have each specialist score the work on a 1 to 10 rubric with specific recommendations.
    5. Iterate until every specialist scores a 9 or above.

    You’ll burn more tokens and it’ll take longer. But I haven’t gone back to single-pass generation for anything that matters. Once you’ve seen what a team of agents produces compared to one agent winging it, the difference is hard to unsee. The same idea that makes software testing indispensable, that adversarial pressure produces better results, turns out to work just as well when the thing being tested is a creative deliverable instead of a codebase.

  • Give People What They Want: Entertainment

    Give People What They Want: Entertainment

    I work in the sports industry. We sell tickets, sponsorships, media rights. But what we’re actually creating is entertainment. That’s the core product. Everything else is a derivative.

    Most content creators forget this.

    They produce tips and tricks. How-tos. Educational content. And there’s a place for that (you’re reading one right now). But scroll through your feed. How much of what you’re actually consuming is educational? How much of it is making you feel something in the moment?

    People don’t open TikTok to learn. They open it to feel.

    The Hey Al Experiment

    Yesterday, I rebooted an old concept I’d been sitting on for years. A short-form video series called “Hey Al.”

    The premise: I have conversations with an AI assistant named Al (voiced by a cheerful feminine AI), and things go sideways. Al takes instructions literally. Al lacks the context that makes human requests make sense. Al is helpful to a fault, which is exactly what makes it funny.

    It’s fictional comedy. Not a tutorial. Not tips. Not “5 ways to use AI better.”

    The first episode (about having a productive day) went out yesterday and performed better than anything educational I’ve posted in months. Not because the production was better. Because people wanted to watch it. They wanted to see what Al would do next.

    That’s entertainment.

    The Content Creator Trap

    Most of us creating content online default to education mode. It feels safer. It feels valuable. You’re giving people information they can use.

    Businesses creating content tend to create announcements and ads – boring!

    Information is abundant. Entertainment is scarce.

    Scroll your own feed. Most of what stops you isn’t a tutorial. It’s something that made you feel curious or surprised. The educational content you actually consume is usually wrapped in entertainment. The YouTuber who makes you laugh while teaching. The thread that opens with a story before the lesson.

    Give people what they want. They’re holding a device that used to be called a television. They want to be entertained.

    AI-Assisted Production

    The irony isn’t lost on me: I’m using AI to produce entertainment about AI.

    For Hey Al, Claude Code helped me manage the production pipeline: script development, extracting audio from video files, converting my voice recording to Al’s voice character, organizing the batch filming schedule.

    These aren’t creative decisions. They’re boilerplate labor. The automation frees me to focus on what actually matters: making the joke land.
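
    To give a flavor of that boilerplate, here's one step sketched out: pulling the audio track out of each filmed clip with ffmpeg. It assumes ffmpeg is installed and clips sit in a ./clips folder; it's not the actual pipeline.

    ```python
    # Sketch: extract the audio from every filmed clip so it can be
    # run through voice conversion. Assumes ffmpeg is on the PATH.
    import pathlib
    import subprocess

    for clip in pathlib.Path("clips").glob("*.mp4"):
        wav = clip.with_suffix(".wav")
        subprocess.run(
            ["ffmpeg", "-y", "-i", str(clip), "-vn", str(wav)],
            check=True,  # -vn drops the video stream, leaving audio only
        )
    ```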

    The ideal state is producing multiple episodes per day, batched and scheduled. We’re not there yet. But the direction is clear.

    Quality vs. Quantity Is a False Dichotomy

    The world is flooded with content. You’ve heard the advice: focus on quality, not quantity. Or: volume wins, ship more.

    But it’s not actually a seesaw where you trade one for the other. Better tools give you better trade-offs on both.

    Everything we produce today is higher quality than what was possible in the 1980s. Obviously. But it’s also faster to produce. Both lines went up, because the tools improved.

    The bar is always rising. The low bar of yesterday is buried. But if you’re using modern tools, you’re not giving up quality for speed. You’re getting both.

    The game isn’t quality or quantity. It’s using the right tools to stay ahead of the rising floor.

    The Job

    If you’re creating content, you’re in the entertainment business. Whether you like it or not. Whether you’re selling sports tickets or SaaS products or your own personal brand.

    Education is just the payload. Entertainment is the wrapper, and the wrapper matters.

    Give people what they want. They want to feel something. They want to be entertained.

    That’s the job.