Category: AI & Automation

AI tools, Claude Code, automation, and AI-assisted workflows

  • From a Week to Four Hours: Building Chrome Extensions with AI

    From a Week to Four Hours: Building Chrome Extensions with AI

    A year ago, I built my first Chrome extension. It took the better part of a week.

    A few days ago, I built my second Chrome extension. It took four hours.

    Same developer. Similar complexity. Almost no retained knowledge about Chrome extension development between the two projects. The difference was the AI.

    The First Extension

    The first project was a scraper for Amazon Seller Central—pulling data out of the seller dashboard and generating reports. I built it with one of the ChatGPT 4.x models, whichever was current at the time.

    It was painful. But impressive at the time.

    Not because Chrome extensions are impossibly hard, but because I’d never built one before and the AI couldn’t quite get me there cleanly. Every step involved back-and-forth. I’d describe what I wanted, get code that didn’t work, debug it, explain the error, get a fix that broke something else, repeat.

    The manifest file alone took multiple attempts to get right. Permissions, content scripts, background workers—each concept required me to learn enough to understand why the AI’s suggestions weren’t working, then nudge it toward a solution.

    By the end of the week I had a working extension, but I’d earned it through iteration and frustration.

    The Second Extension

    Fast forward to last week. I needed another Chrome extension—this one scrapes recipe information from web pages and submits it to a backend API. Different purpose, but similar complexity to the first project.

    I opened Claude Code and described what I wanted.

    One prompt later, I had a working prototype running locally.

    Not a starting point. Not scaffolding that needed extensive modification. A working extension that did the core job. From there, it was small iterations—mostly around authentication with my backend. But the foundation was solid from the first response.

    What Changed

    The moments that stood out weren’t dramatic. They were just… easy in a way that felt wrong.

    The manifest: Chrome extensions require a manifest.json file that defines permissions, scripts, icons, and metadata. Last year, this was a source of misunderstandings and rejections. This time, Claude one-shot it. Correct permissions, proper structure, sensible defaults. I didn’t have to understand why it worked—it just did.
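    For readers who haven't built one, here is roughly what a minimal Manifest V3 file for an extension like this looks like. Every value below (name, permissions, host URL) is a placeholder for illustration, not the actual extension's manifest:

    ```json
    {
      "manifest_version": 3,
      "name": "Recipe Scraper (example)",
      "version": "1.0",
      "description": "Scrapes recipe data from the current page.",
      "permissions": ["activeTab", "scripting", "storage"],
      "host_permissions": ["https://api.example.com/*"],
      "background": { "service_worker": "background.js" },
      "content_scripts": [
        { "matches": ["<all_urls>"], "js": ["content.js"] }
      ],
      "action": { "default_popup": "popup.html" }
    }
    ```

    Getting these fields right, especially the permissions, was exactly where the first project kept stalling.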

    The submission process: I’d completely forgotten how to submit an extension to the Chrome Web Store. Claude walked me through it—descriptions, screenshots, privacy policy requirements, the review process. Not generic advice, but specific guidance tailored to what I’d built.

    Performance and security: After the core functionality worked, I prompted my way through improvements. “Make this more efficient.” “Are there any security concerns?” Each time, I got specific changes to the code. I did a cursory review to make sure nothing looked insane, but I didn’t have to dive deep into the implementation to fix anything myself.

    Four hours from start to ready-for-submission.

    The Gap Is Closing

    I’m not a better developer than I was a year ago—at least not at Chrome extensions. I’d forgotten almost everything I learned during that first project. But the AI got dramatically better.

    ChatGPT 4.x was helpful but unreliable. It got me part of the way there, then I had to fight through the gaps. Claude Code with Opus 4.5 understood what I was trying to build and just… built it.

    The difference isn’t subtle. It’s not 20% faster or “somewhat easier.” It’s the difference between a week of grinding and an afternoon of iterating.

    What This Means

    I think about this when people ask whether AI is actually useful for development, or if it’s just hype. The answer depends entirely on when you last tried it.

    If your experience with AI coding assistants was ChatGPT circa 2024, you probably remember the frustration—code that almost worked, endless debugging, the feeling that you could’ve done it faster yourself. That was real.

    But the tools from six months ago aren’t the tools from today. The gap between “AI assistant that helps” and “AI that builds” is closing fast. For a task I’d done exactly once before, with knowledge I’d completely lost, I went from a week to four hours.

    That’s not incremental improvement. That’s a phase change.


    Both extensions are in production. One took a week of frustration. One took an afternoon.

  • How I Use AI to Write and Publish Blog Posts

    How I Use AI to Write and Publish Blog Posts

    This post is a bit meta. I’m using the exact workflow I’m about to describe to write and publish this very article.

    Here’s the setup: I speak my ideas out loud, an AI turns them into polished prose, another AI generates the hero image, and a set of scripts I built with AI assistance handles the publishing. The whole thing lives in a GitHub repository that you can clone and use yourself.

    Let me walk you through how it works.

    The Problem With Writing

    I have ideas. Lots of them. The bottleneck has never been coming up with things to write about—it’s the friction between having a thought and getting it published.

    Traditional blogging requires you to:

    1. Sit down and type out your thoughts
    2. Edit and format the content
    3. Find or create images
    4. Log into WordPress
    5. Copy-paste everything into the editor
    6. Set featured images, categories, meta descriptions
    7. Preview, fix issues, publish

    Each step is a context switch. Each context switch is an opportunity to abandon the post entirely. My drafts folder is a graveyard of half-finished ideas.

    Voice First

    The breakthrough was realizing I don’t need to type. I use Wispr Flow for voice-to-text dictation. It runs locally on my Mac and transcribes speech with surprisingly good accuracy.

    Now when I have an idea for a post, I just… talk. I ramble through my thoughts, explain the concept as if I’m telling a friend, and let the words flow without worrying about structure or polish.

    The output is messy. It’s conversational, full of “um”s and tangents. But it captures the core ideas in a way that staring at a blank page never did.

    AI as Editor

    This is where Claude Code comes in. I take my raw dictation and ask Claude to transform it into a structured blog post. Not just grammar cleanup—actual restructuring, adding headers, tightening the prose, finding the narrative thread in my stream of consciousness.

    The key is that I stay in control. Claude produces a markdown draft, and I review it. I keep what works, rewrite what doesn’t, add details Claude couldn’t know. The AI handles the tedious transformation from spoken word to written word. I handle the judgment calls about what’s actually worth saying.

    The Publishing Pipeline

    Here’s where it gets interesting. I built a set of CLI tools that Claude Code can use to handle the entire publishing workflow.

    When I’m ready to publish, I have a conversation like this:

    Me: "Generate a cyberpunk-style hero image for this post about AI blogging workflows,
    crop it to 16:9, and publish to WordPress with the featured image attached."
    
    Claude: [Generates image with Gemini] → [Crops and converts to JPG] →
            [Uploads to WordPress] → [Converts markdown to Gutenberg blocks] →
            [Creates post with featured image] → Done. Here's your URL.

    One conversation. Full pipeline. No clicking through WordPress admin panels.

    How the Tools Work

    The publishing toolkit includes:

    Voice capture – Wispr Flow transcribes my dictation to text

    Content transformation – Claude Code converts raw transcription to structured markdown

    Image generation – The Nano Banana Pro plugin generates hero images using Google’s Gemini model

    Image processing – A Python script crops images to 16:9 and converts to web-optimized JPG

    WordPress publishing – Another Python script handles media uploads, post creation, and metadata via the WordPress REST API

    File organization – Each post lives in its own dated folder with the markdown source, images, and a metadata JSON file for future edits
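    As an illustration of the image-processing step, here is a minimal sketch of how a centered 16:9 crop can be computed. This is not the repo's actual script; the function name is mine, and the Pillow calls at the bottom are just one way to apply the result:

    ```python
    def crop_box_16x9(width, height):
        """Return (left, top, right, bottom) of the largest centered 16:9 crop."""
        if width * 9 > height * 16:
            # Too wide: keep the full height and trim the sides equally.
            new_w = height * 16 // 9
            left = (width - new_w) // 2
            return (left, 0, left + new_w, height)
        # Too tall (or already 16:9): keep the full width, trim top and bottom.
        new_h = width * 9 // 16
        top = (height - new_h) // 2
        return (0, top, width, top + new_h)


    if __name__ == "__main__":
        # Apply the crop with Pillow (third-party) and save a web-friendly JPG.
        from PIL import Image

        img = Image.open("hero.png").convert("RGB")
        img = img.crop(crop_box_16x9(*img.size))
        img.save("featured.jpg", "JPEG", quality=85)
    ```

    Integer arithmetic keeps the crop exact for already-16:9 images instead of drifting by a pixel from floating-point rounding.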

    The WordPress MCP server that ships with Claude Code can create posts, but it can’t upload media or set featured images. So I built CLI tools to fill those gaps. Claude Code runs them as needed during the publishing conversation.
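    To give a sense of what those gap-filling tools do, here is a hedged sketch of the two WordPress REST API calls involved: uploading media, then attaching it as the featured image. The endpoint paths are the standard WordPress REST routes; the helper names and the use of the requests library are illustrative, not the repo's actual code:

    ```python
    import base64


    def wp_auth_header(username, app_password):
        """WordPress application passwords use HTTP Basic auth."""
        token = base64.b64encode(f"{username}:{app_password}".encode()).decode()
        return {"Authorization": f"Basic {token}"}


    def upload_and_feature(site, auth, jpg_path, post_id):
        """Upload an image and set it as a post's featured image (sketch only;
        assumes the third-party requests library and a reachable site)."""
        import requests

        with open(jpg_path, "rb") as f:
            media = requests.post(
                f"{site}/wp-json/wp/v2/media",
                headers={
                    **auth,
                    "Content-Disposition": 'attachment; filename="featured.jpg"',
                    "Content-Type": "image/jpeg",
                },
                data=f.read(),
            ).json()
        # The featured image is just a field on the post resource.
        requests.post(
            f"{site}/wp-json/wp/v2/posts/{post_id}",
            headers=auth,
            json={"featured_media": media["id"]},
        ).raise_for_status()
    ```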

    Everything in Git

    The entire setup lives in a GitHub repository. Each blog post is a folder:

    posts/
    ├── 2026-01-13-ai-powered-blog-workflow/
    │   ├── content.md          # This post
    │   ├── featured.jpg        # Hero image
    │   ├── hero.png            # Original generated image
    │   └── meta.json           # WordPress post ID, dates, SEO fields

    Version control for blog posts. If I need to update something, I know exactly where to find it. The meta.json file stores the WordPress post ID so I can push updates to the live site.

    The Meta Part

    Here’s what’s happening right now:

    1. I dictated the concept for this post using Wispr Flow
    2. I asked Claude Code to turn my rambling into a structured article
    3. I reviewed and edited the markdown
    4. I’ll ask Claude to generate a hero image
    5. Claude will crop it, upload it to WordPress, and publish

    The workflow I’m describing is the workflow producing this description. It’s turtles all the way down.

    Try It Yourself

    The publishing toolkit is open source: github.com/mfwarren/personal-brand

    You’ll need:

    • A WordPress site with REST API access
    • An application password for authentication
    • Claude Code with the Nano Banana Pro plugin for image generation
    • Wispr Flow (or any voice-to-text tool) for dictation

    Clone the repo, configure your credentials, and start talking. The gap between having an idea and publishing it has never been smaller.


    Written by dictation, edited by AI, published by CLI. The future of blogging is conversational.

  • How a Holiday Tech Support Call Turned Into a Full-Stack AI Project

    How a Holiday Tech Support Call Turned Into a Full-Stack AI Project

    Like many eldest sons, I have a standing role as family tech support. This holiday season, that role led me somewhere unexpected: launching a new product.

    The Call

    I was visiting my parents over the holidays when they asked for help with a recipe app called MasterCook. They’d been using it for years, but the service was being decommissioned. Could I help them migrate their recipes somewhere else?

    I looked at the recommended migration path. Then I looked at the replacement applications. They were… not great. Clunky interfaces, limited features, the kind of software that feels abandoned even when it’s technically still maintained.

    I had a week of vacation left. I thought: I can build something better than this.

    One Week Later

    That thought became save.cooking – and it’s grown far beyond what I originally imagined.

    What started as a simple tool to import MasterCook recipe files has evolved into a fully-featured AI-enhanced meal planning platform:

    Core Features:

    • Import recipes from MasterCook (.mxp, .mx2) and other formats
    • AI-powered recipe parsing that actually understands ingredients and instructions
    • Vector embeddings that map recipe similarity – find dishes related to ones you love
    • Automatic shopping list generation synced to your weekly meal plan
    • Public recipe sharing with user profiles
    • Full meal plan sharing – not just individual recipes
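    The similarity feature rests on a standard idea: embed each recipe as a vector, then rank neighbours by cosine similarity. Here is a dependency-free sketch of the ranking step; the embedding model itself is out of scope, and these function names are illustrative, not save.cooking's actual code:

    ```python
    import math


    def cosine_similarity(a, b):
        """Cosine of the angle between two equal-length vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm


    def most_similar(query_vec, recipes, top_n=3):
        """recipes: list of (name, embedding) pairs; returns names ranked by similarity."""
        ranked = sorted(recipes, key=lambda r: cosine_similarity(query_vec, r[1]),
                        reverse=True)
        return [name for name, _ in ranked[:top_n]]
    ```

    At scale this would typically run against a vector index rather than a linear scan, but the ranking criterion is the same.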

    Technical Details I Never Would Have Tackled Alone:

    • JSON-LD structured data for Google Recipe rich results
    • Pinterest-optimized images and metadata
    • Open Graph tags specifically tuned for recipe content
    • Responsive Next.js frontend (not my usual stack)
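    For context on the JSON-LD item: Google's Recipe rich results are driven by a schema.org Recipe object embedded in the page. A minimal example, with every value invented for illustration:

    ```json
    {
      "@context": "https://schema.org",
      "@type": "Recipe",
      "name": "Weeknight Tomato Soup",
      "image": ["https://example.com/images/tomato-soup.jpg"],
      "author": { "@type": "Person", "name": "Example Author" },
      "prepTime": "PT10M",
      "cookTime": "PT25M",
      "recipeYield": "4 servings",
      "recipeIngredient": ["2 lbs tomatoes", "1 onion, diced"],
      "recipeInstructions": [
        { "@type": "HowToStep", "text": "Saute the onion until soft." },
        { "@type": "HowToStep", "text": "Add the tomatoes and simmer 25 minutes." }
      ]
    }
    ```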

    The site already has over 300 public recipes in its database, and that number grows daily.

    The AI Difference

    Here’s the thing: I’m not a Next.js developer. I’ve built backends, APIs, CLIs – but modern React frontends aren’t my wheelhouse. A year ago, this project would have taken months and looked significantly worse.

    With Claude Code handling the heavy lifting, I could focus on product decisions while the AI handled implementation details. Need Pinterest meta tags? Claude knew the exact format. Want vector similarity search? Claude set up the embeddings pipeline. Struggling with a responsive layout? Claude fixed the CSS.

    This isn’t about AI writing code for me. It’s about AI expanding what I can realistically build. The cognitive load of learning a new framework while also designing features while also handling deployment – that’s usually where side projects die. AI agents absorbed that load.

    The Graveyard Problem

    Recipe websites are a graveyard. AllRecipes feels like it hasn’t been updated since 2010. Food blogs are drowning in ads and life stories before you get to the actual recipe. Apps come and go, taking your data with them.

    People have stopped expecting good software in this space. They’ve accepted that finding a recipe means scrolling past someone’s childhood memories and closing seventeen popups.

    I think we can do better. I think we should do better. Cooking is fundamental – it’s one of the few things that genuinely brings people together. The software around it shouldn’t be an obstacle.

    What’s Next

    save.cooking is live and growing. I’m using it daily for my own meal planning. Features are shipping weekly:

    • Ingredient substitution suggestions
    • Nutritional analysis
    • Collaborative meal planning for households
    • Recipe scaling that actually works
    • Smarter shopping list organization by store section

    If you’ve got recipes trapped in old software, or you’re just tired of the current options, come check it out at save.cooking.

    And if you’re a developer wondering what you could build in a week with AI assistance – the answer might surprise you. The constraint isn’t technical capability anymore. It’s just deciding what’s worth building.


    Built with Claude Code over a holiday week. The family tech support call that actually paid off.

  • Claude Code First Development: Building AI-Operable Systems

    Claude Code First Development: Building AI-Operable Systems

    Most developers think about AI coding assistants as tools that help you write code faster. But there’s a more interesting question: how do you architect your systems so an AI can operate them?

    I’ve been running production applications for years. The traditional approach is to build admin dashboards – React UIs, Django admin, custom internal tools. You click around, run queries, check metrics, send emails to users. It works, but it’s slow and requires constant context-switching.

    Here’s the insight: Claude Code is a command-line interface. It can run shell commands, read output, and take action based on what it sees. If you build your admin tooling as CLI commands and APIs instead of web UIs, Claude Code becomes your admin interface.

    Instead of clicking through dashboards to debug a production issue, you tell Claude: “Find all users who signed up in the last 24 hours but haven’t verified their email, and show me their signup source.” It runs the commands, parses the output, and gives you the answer.

    This is Claude Code First Development – designing your production infrastructure to be AI-operable.

    The Architecture

    There are three layers to this:

    1. Admin API Layer

    Your application exposes authenticated API endpoints for admin operations. Not public APIs – internal endpoints that require admin credentials. These give you programmatic access to:

    • User data (lookups, activity, state)
    • System metrics (signups, WAU, churn, error rates)
    • Operations (send emails, trigger jobs, toggle features, issue refunds)

    2. CLI Tooling

    Command-line tools that wrap those APIs. Claude Code can invoke these directly:

    ./admin users search --email "foo@example.com"
    ./admin metrics signups --since "7 days ago"
    ./admin jobs trigger welcome-sequence --user-id 12345
    ./admin logs errors --service api --last 1h

    3. Credential Management

    The CLI tools handle authentication – reading tokens from environment variables or config files. Claude Code doesn’t need to know how auth works; it just runs commands.

    Building the CLI Tools

    The great thing about AI Developer Agents is that you don’t need to code these tools yourself.

    Based on the data models in this application, build a command line CLI tool and Claude Code skill to
    use it. The CLI tool should authenticate with admin-only scoped API endpoints to be able to execute basic CRUD
    capabilities, report on activity metrics, generate reports, and provide insights that help control the application
    in the production environment without relying on an administrator dashboard.
    Build authentication into the cli tool to save credentials securely.
    examples:
    ./admin-cli users list
    ./admin-cli users add user@example.com --sent-invite
    ./admin-cli reports DAU
    ./admin-cli error-log

    Level Up

    Here are prompts you can give Claude Code to build out this infrastructure for your specific application:

    Initial CLI Scaffold

    Create a Python CLI tool using Click for admin operations on my [Django/FastAPI/Express]
    application. The CLI should:
    - Read API credentials from environment variables (ADMIN_API_URL, ADMIN_API_TOKEN)
    - Have command groups for: users, metrics, logs, jobs
    - Output JSON by default with an option for table format
    - Include proper error handling for API failures
    
    Start with the scaffold and user search command.
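    For a rough picture of what that prompt produces, here is a compressed sketch of the resulting shape: command groups, injected credentials, JSON output. It uses stdlib argparse to stay dependency-free (the prompt above asks for Click), and every name here is illustrative:

    ```python
    import argparse
    import json


    def build_parser():
        """One group and one command, e.g. `admin users search --email ...`."""
        parser = argparse.ArgumentParser(prog="admin")
        groups = parser.add_subparsers(dest="group", required=True)
        users = groups.add_parser("users").add_subparsers(dest="cmd", required=True)
        search = users.add_parser("search")
        search.add_argument("--email")
        return parser


    def run(argv, api_request):
        """Dispatch to an injected API caller. A real tool would build
        `api_request` from ADMIN_API_URL and ADMIN_API_TOKEN in the
        environment and send an Authorization: Bearer header."""
        args = build_parser().parse_args(argv)
        if args.group == "users" and args.cmd == "search":
            result = api_request("GET", "/admin/users/search", {"email": args.email})
            return json.dumps(result, indent=2)
    ```

    Injecting `api_request` keeps the dispatch logic testable without a live API.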

    Adding User Management

    Add these user management commands to my admin CLI:
    
    1. users search - find users by email, name, or ID
    2. users get <id> - get full user profile including subscription status
    3. users recent - list signups from last N hours/days with filters for source and verification status
    4. users activity <id> - show recent actions for a user
    
    Each command should have sensible defaults and output JSON.

    Adding Metrics Commands

    Add metrics commands to my admin CLI that query our analytics:
    
    1. metrics signups - signup counts grouped by day/week with source breakdown
    2. metrics wau - weekly active users over time
    3. metrics churn - churn rate and churned user counts
    4. metrics health - overall system health (error rates, response times, queue depths)
    5. metrics revenue - MRR, new revenue, churned revenue (if applicable)
    
    Include --since flags for time windows and sensible output formatting.

    Adding Log Access

    Add log viewing commands to my admin CLI:
    
    1. logs errors - recent errors across services with filtering
    2. logs user <id> - all log entries related to a specific user
    3. logs request <id> - trace a specific request through the system
    4. logs search --pattern "..." - search logs by pattern
    
    Format output for terminal readability - timestamps, service names, messages on separate lines.

    Adding Actions/Jobs

    Add commands to trigger admin actions:
    
    1. jobs list - show available background jobs
    2. jobs trigger <name> - trigger a job with optional parameters
    3. jobs status <id> - check job status
    4. email send <user_id> <template> - send a specific email
    5. email templates - list available templates
    
    Include --dry-run flags where destructive or user-facing operations are involved.

    Building the API Endpoints

    Create admin API endpoints for my [framework] application to support the admin CLI:
    
    1. GET /admin/users/search?email=&id=
    2. GET /admin/users/<id>
    3. GET /admin/users/<id>/activity
    4. GET /admin/users/recent?since=&source=&verified=
    5. GET /admin/metrics/signups?since=&group_by=
    6. GET /admin/metrics/wau
    7. GET /admin/logs?service=&level=&since=
    8. POST /admin/jobs/trigger
    
    All endpoints should require Bearer token authentication. Use our existing User and
    Activity models. Return JSON responses.
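    Whatever framework serves those endpoints, they all share one step: validating the Bearer token before doing anything else. A framework-agnostic sketch of that check (the function name and behaviour are illustrative):

    ```python
    import hmac


    def check_bearer(auth_header, expected_token):
        """Validate an `Authorization: Bearer <token>` header value.

        Uses a constant-time comparison so the check doesn't leak timing info.
        """
        if not auth_header or not auth_header.startswith("Bearer "):
            return False
        supplied = auth_header[len("Bearer "):]
        return hmac.compare_digest(supplied, expected_token)
    ```

    Each admin route runs this check before touching data and returns 401 on failure.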

    Making Tools Work Well With Claude Code

    Claude Code reads text output. The better your tools format their output, the more effectively Claude can interpret and act on the results.

    Principle 1: JSON for Data, Text for Logs

    Return structured data as JSON – Claude parses it accurately:

    $ ./admin users get 12345
    {
      "id": 12345,
      "email": "user@example.com",
      "created_at": "2024-01-15T10:30:00Z",
      "subscription": "pro",
      "verified": true
    }

    But format logs for human readability – Claude understands context better:

    $ ./admin logs errors --last 1h
    [2024-01-15 10:45:23] api: Failed to process payment for user 12345: card_declined
    [2024-01-15 10:47:01] worker: Job send_welcome_email failed: SMTP timeout
    [2024-01-15 10:52:18] api: Rate limit exceeded for IP 192.168.1.1

    Principle 2: Include Context in Output

    When something fails, include enough context for Claude to suggest fixes:

    $ ./admin jobs trigger welcome-email --user-id 99999
    {
      "error": "user_not_found",
      "message": "No user with ID 99999",
      "suggestion": "Use 'admin users search' to find the correct user ID"
    }

    Principle 3: Support Filtering at the Source

    Don’t make Claude grep through huge outputs. Add filters to your commands:

    # Bad - returns everything, Claude has to parse
    $ ./admin logs errors --last 24h
    
    # Good - filtered at the API level
    $ ./admin logs errors --last 24h --service api --level error --limit 20

    Principle 4: Dry Run Everything Destructive

    Any command that modifies state should support --dry-run:

    $ ./admin email send 12345 password-reset --dry-run
    {
      "would_send": true,
      "recipient": "user@example.com",
      "template": "password-reset",
      "subject": "Reset your password",
      "preview_url": "https://admin.yourapp.com/email/preview/abc123"
    }

    This lets Claude verify actions before executing them, and lets you review what it’s about to do.

    Principle 5: Exit Codes Matter

    Use proper exit codes so Claude knows when commands fail:

    import click
    import requests

    # `api_request` and `output` are the CLI's shared helpers for calling
    # the admin API and printing JSON results.

    @users.command()
    @click.argument("user_id")
    def get(user_id: str):
        try:
            result = api_request("GET", f"/admin/users/{user_id}")
            output(result)
        except requests.HTTPError as e:
            if e.response.status_code == 404:
                click.echo(f"User {user_id} not found", err=True)
                raise SystemExit(1)  # non-zero exit signals failure to Claude
            raise

    Note: when a command crashes out with a non-zero exit code, Claude notices immediately and can jump straight to fixing the problem.

    Integrating With Claude Code Skills

    Claude Code supports Skills – custom commands that extend its capabilities. You can create a Skill that wraps your admin CLI and provides context about your specific system.

    Just tell Claude Code to document your new CLI into a skill:

    Create a claude code skill to document how to use admin-cli, then give me examples of what I can do with this new skill.

    Now Claude Code has context about your admin tools and can use them appropriately.

    MCP Tool Integration

    For deeper integration, you can expose your admin API as an MCP (Model Context Protocol) server. This lets Claude call your admin functions directly as tools rather than shelling out to CLI commands, and it opens the tooling up to people beyond terminal-centric administrators.

    Security Considerations

    A few things to keep in mind:

    1. Scope your tokens. Create admin API tokens with the minimum permissions needed. Don’t use your personal account credentials.

    2. Audit everything. Log every admin API call with the token used, timestamp, and parameters. You want a trail of what Claude (or you) did.

    3. Require confirmation for destructive operations. Either use --dry-run flags or build confirmation into your workflow. Claude should show you what it’s about to do before doing it.

    4. Separate read and write access. Consider having read-only tokens for investigation and separate tokens for operations that modify state.

    5. Don’t expose production databases directly. Always go through your application layer. This gives you validation, audit logging, and the ability to revoke access.

    Putting It Together

    Here’s what a typical debugging session looks like with this setup:

    You: "Users are complaining about not receiving welcome emails. Investigate."
    
    Claude: Let me check recent signups and email job status.
    
    $ ./admin users recent --since 24h --verified
    [shows recent verified users]
    
    $ ./admin logs errors --service email --last 24h
    [2024-01-15 08:15:00] email: SMTP connection timeout to smtp.sendgrid.net
    [2024-01-15 08:15:01] email: Failed to send welcome-email to user 12340
    [2024-01-15 08:15:01] email: Failed to send welcome-email to user 12341
    ...
    
    Looks like the email service had SMTP connection issues starting at 8:15 AM.
    Let me check how many users were affected:
    
    $ ./admin metrics health
    {
      "email_queue_depth": 847,
      "email_error_rate": 0.94,
      ...
    }
    
    There are 847 emails stuck in the queue with a 94% error rate. The issue appears
    to be SMTP connectivity. Once that's resolved, you may want to replay the failed
    jobs. I can show you the affected users if you'd like to manually trigger their
    welcome emails.

    This is the power of Claude Code First Development. You’re not clicking through dashboards or writing one-off SQL queries. You’re having a conversation with an AI that has direct access to your systems through well-designed tooling.

    Build the CLI tools. Expose the APIs. Give Claude the access it needs to help you operate your systems. That’s the future of production debugging.

  • Starting Daily Founder Fuel

    Starting Daily Founder Fuel

    This past week, I had an idea for an app. This idea came from an impulse to jot down some thoughts about my business challenges and how they needed to be written out, journaled, thought through, and developed further. I wanted to incorporate this into a daily practice, recognizing the value of writing. Everyone knows that through writing, ideas become more concrete, real, and memorable, as well as easier to share. So, writing was on my mind as I considered how to approach this.

    I also wanted to maintain a balanced approach to my thought process. Some days are for strategic thinking, others for sales processes, numbers, finance, long-term growth, or professional development. As a founder or entrepreneur, it’s crucial not to fall into old patterns of focusing only on preferred areas but to address all necessary aspects that might otherwise be neglected. Having a structured approach ensures a balanced distribution of thoughts and developing ideas, preventing a single-minded focus and fostering a holistic view of the business.

    I began searching for journaling apps tailored to entrepreneurs, addressing their specific concerns and questions to improve their business and life. Most journals available are generic, catering to a wide audience with personal goals and life thoughts. I wanted something more specific to business ideas. While I enjoy writing on paper, a physical book can be easily forgotten. To counter this, I decided to create an email newsletter that would appear daily, ensuring it remains visible and part of my routine. This way, it consistently prompts daily reflection without being easily hidden or forgotten.

    After collecting journaling prompt ideas and quotes, I realized many turned into homework-like tasks, which, while interesting and fun, also served as valuable exercises. Questions about handling team conflicts, delegation, sales tactics, personal skill development, team motivation, and defining unique sales propositions kept me engaged. These prompts helped test my clarity of thought and understanding of various business aspects, ensuring I stayed sharp and well-rounded in my approach.

    Seeing a need for such a resource, I launched a website, dailyfounderfuel.com, and created a newsletter signup so everyone could try it. I populated it with prompts scheduled out for the next several months. This experiment required minimal effort and low cost—around $40 initially and $10 monthly for email service, domain, and hosting. This small investment sets up an ongoing experiment to see if there’s interest in such a resource. If successful, I’ll have a valuable list of entrepreneurs and founders. If not, it’s a learning exercise. Either way, it’s a worthwhile endeavor. If this sounds interesting, check out dailyfounderfuel.com and sign up.

  • Mini AI Automations

    Mini AI Automations

    I’ve been reading the book “Buy Back Your Time” by Dan Martell. It gave me a new perspective on how to think about hiring people. Well worth the read if this is in your wheelhouse.

    One of the core ideas in the book is to find the time-sucking tasks and remove those first, by hiring people to do these (usually) simpler jobs. It’s also a great use case for applying AI.

    So for the past few weeks I’ve been auditing my time to find the 30 minutes here and there that could be automated away.

    It got me down the rabbit hole of no-code automation systems and Make.com. With Make, it’s easy to deal with the reality that work now runs on a dozen disconnected services:

    • Gmail for email
    • Google Sheets for spreadsheets
    • Notion for business documentation
    • Slack for team communication
    • E-commerce platforms – Shopify, Amazon
    • …and countless others

    Writing code to connect to and authenticate with all of these, manage all the keys, and understand each API well enough to do something quickly is the kind of boring boilerplate work that:

    • can often be implemented in Make or Zapier with drag and drop
    • can be written by AI

    With either approach, a number of simple automations can be built quickly, each saving you hours of time. This week I created one to organize ad creatives into Google Drive, and another to cross-post blog posts.

    Interestingly, these tools have spawned a new industry of automation consultants who drop into your business to find various processes that can be automated.

    Even without a consultant, there are thousands of pre-built templates you can tweak to match your needs. These tools are designed so that they don’t require much technical experience to use.

    Whatever path you choose, these tools can whittle away the minor annoying tasks that suck up your time. Buy back some of it with a little automation this weekend.

  • The AI CEO

    The AI CEO

    It was just one year ago that a user on Twitter (@jacksonfall – now deactivated) went viral. This was shortly after the launch of GPT-4, a model that was a massive leap over the previous generation. He proposed a challenge: the AI would act as the CEO of a startup, directing the finances, developing the strategy, and commanding people to do the work.

    The AI was dubbed HustleGPT, and it quickly became a sensation, attracting thousands of dollars of investment to see the experiment through.

    It quickly directed the launch of an ecommerce shop to sell green gadgets. And initiated some advertising to promote the website.

    However, things fell apart a couple weeks later.

The operation of the venture was ostensibly transferred to a community that formed around it on Discord. From there, very little progress was made. The website stagnated.

    Just two months after starting, the project was shuttered and @jacksonfall went quiet.

    After a year of working with GPT-4, it’s become more apparent that despite its capabilities, there are limitations that make it challenging to manage something as complex as a startup.

In the meantime, a year of progress has happened. Models from OpenAI, Google, Anthropic, xAI, and many more have been trained and improved. Context windows have expanded and logical reasoning has improved. Agent systems (multiple AIs working together) are starting to show some promising results. See Devin, the AI software engineer:

    Are we ready to try this AI CEO experiment again?

    This past week I started to explore this. The result has been a revamp of my website: mattwarren.co

    Will this have a similar fate to HustleGPT?

  • My Early AI Business Failure

    My Early AI Business Failure

    The year was 2011, I was deep into affiliate marketing and AdSense with content websites. On the side I had created and launched a dozen different WordPress blogs.

Things were getting unwieldy. The backups, updates, monitoring, designing, and writing content for all these websites was a lot of work, and it was easy to miss things that broke.

I knew that keeping the content fresh and updated was key to ranking on Google. Even back in 2011, I had already been blogging for over a decade.

    I wanted to scratch my own itch and build some tools that could automate the management of these WordPress installations. As with all things that get automated I wondered about how big of a scale this could get. Would it be possible to manage 100 blogs? 1000?

At a certain scale, writing blog posts for these websites becomes an impossibility, so I started looking into an approach called content spinning. This used an earlier family of AI techniques, Natural Language Processing (NLP), to re-word and re-arrange existing written content so that it appeared unique in the eyes of Google.

    I built it and it worked!

    This platform could crawl the internet and compile the latest news, and interesting data into dozens of fresh blog posts every day.
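At its simplest, the "spinning" step is a synonym swap. Here's a toy sketch of the idea — real spinners of that era used much larger NLP-derived dictionaries and phrase-level rewrites; this synonym table is purely illustrative:

```python
import random
import re

# Toy synonym table -- real content spinners used far larger dictionaries.
SYNONYMS = {
    "fast": ["quick", "speedy"],
    "big": ["large", "huge"],
    "good": ["great", "solid"],
}

def spin(text: str, rng: random.Random) -> str:
    """Re-word text by swapping known words for randomly chosen synonyms,
    leaving everything else untouched."""
    def replace(match: re.Match) -> str:
        word = match.group(0)
        options = SYNONYMS.get(word.lower())
        return rng.choice(options) if options else word
    return re.sub(r"[A-Za-z]+", replace, text)
```

Run the same source article through this a few times with different random seeds and you get multiple "unique" variants — exactly the pattern Google later got very good at detecting.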

    This was the first SaaS business that I built and launched with paying customers. I was pumped.

    A customer could simply load in a domain name, a list of keywords to target for the blog content and select one of several available themes. The system would then install WordPress, configure it with users, themes and plugins and it would start the processes to crawl the internet for related content that could be re-purposed.

    It was almost entirely hands off.

    I used it to launch and run over 60 websites.

The launch went well, and I had a good initial base of users to build from.

    And then it happened.

The same month that I launched Automatic Blog Machine, Google rolled out a major overhaul of its search engine, and it was insanely good at finding websites with the kind of content my NLP approach generated. It immediately flagged and de-ranked all of these websites.

With no better approach to automated content generation, and better AI still a decade away, the users slowly churned. The defeat was demoralizing, and instead of pivoting this into what could have become a decent WordPress management and hosting service, I lost the motivation to keep it going.

    At the time I learned the wrong lessons from this experience:

Google can stomp you out of business in an instant

    Timing is just part of the random chance involved – I lost.

    However, with more experience under my belt I can say that it should have taught me some different lessons:

    Persevere in the face of challenges – there are always challenges.

    Pivot if necessary.

    Success is a mental game as much as it is about execution.

    What seems like bad timing might just be the natural course of competition and innovation required to stay ahead.

    Hopefully you found this story entertaining. If so, let me know. Thanks for reading.

  • Copy My AI Assisted Content Strategy

    Copy My AI Assisted Content Strategy

Getting and growing attention is the core of any marketing strategy. But standing out is harder than ever when everyone is equipped with high-quality cameras, microphones, and great software.

Today I’m going to tell you how I’m leveraging AI to accelerate the production of short-form video content that I’m cross-posting to TikTok, Instagram, Facebook and YouTube.

    When thinking about producing short-form video here’s what I consider important:

Develop a format and style that can repeat. This reduces the number of decisions that need to be made and streamlines production. It also makes the videos more binge-worthy, since people who like one are highly likely to enjoy the others.

The structure should have a strong hook – most videos on these platforms have 1 second to grab your attention. If you manage to keep 50% of people past the 3rd second, you’re doing well.

Make the content valuable – entertainment value or educational value. High-value content is more likely to get shared.

    Don’t spend too much time/money on producing a short video. These things have a short life-span. Going super viral is a lot of luck so embrace “internet ugly”.

Always be testing – use the short lifespan to your advantage – re-edit and re-post often. Remember, 90% of people who saw the video didn’t watch the whole thing, and the remaining 10% will have forgotten about it by next week.

So, here’s the strategy I’ve started using to develop a personal brand presence. I developed a structure for the videos that includes (in my case) 5 stages:

    Opening/hook

    Conflict

    Escalation

    Resolution or cliffhanger

    Closing

    Then I add some additional constraints:

    Must be easily recorded with just myself and a phone camera

Very few, if any, props or setting changes

No special effects required

With the help of ChatGPT, I asked for help developing the first 10 video ideas that fit these criteria, and BOOM! There’s a list of concepts.
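That ask is easy to templatize so you can regenerate ideas for any topic. Here's a sketch of how the 5-stage structure and constraints above assemble into a reusable prompt — the wording is illustrative, not my verbatim prompt:

```python
# The 5-stage structure and recording constraints described above.
STAGES = [
    "Opening/hook",
    "Conflict",
    "Escalation",
    "Resolution or cliffhanger",
    "Closing",
]
CONSTRAINTS = [
    "Must be easily recorded with just one person and a phone camera",
    "Very few, if any, props or setting changes",
    "No special effects required",
]

def build_prompt(topic: str, n_ideas: int = 10) -> str:
    """Assemble a ChatGPT prompt asking for short-form video ideas
    that follow the fixed structure and constraints."""
    stages = "\n".join(f"{i}. {s}" for i, s in enumerate(STAGES, 1))
    constraints = "\n".join(f"- {c}" for c in CONSTRAINTS)
    return (
        f"Give me {n_ideas} short-form video ideas about {topic}.\n"
        f"Each idea must follow this 5-stage structure:\n{stages}\n"
        f"Constraints:\n{constraints}"
    )
```

Paste the output of `build_prompt("your niche")` into ChatGPT, and the fixed structure keeps every batch of ideas recordable in the same repeatable format.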

A little bit of workshopping turns these ideas into short 8–10 line scripts.

In my case, I decided to have an AI character in my scripts. This adds to the complexity of editing, but it’s kind of fun, so here’s what I did for that:

Use the voices from elevenlabs.io to generate the audio files. An interesting note here – the speech-to-speech AI option can match your tone and cadence but with another character’s voice, which helps with telling jokes.

Use the CapCut video editor – it’s significantly easier to use than Adobe’s professional tools. It layers the video with the extra audio track. A short video can be edited in less than 10 minutes.

Take advantage of automated AI caption generation – the captions are usually 95% correct and the timing is aligned for you. People often watch with the sound off, so captions are important.

SEO is part of the process when publishing videos. I use ChatGPT to help write a video title and description that match the video and provide enough textual content for indexing.

Putting this all together, with a bit of practice it’s possible to script, record, edit, and publish a decent short video in as little as 15 minutes.

    If you want to see some of the results – subscribe to my YouTube channel

  • AI Video Personalization

    AI Video Personalization

    Let’s explore how AI continues to transform personalization and what that means for your business. This week, we’re focusing on personalized video.

    AI Tool of the Week: Maverick

This week’s featured AI tool is Maverick, an interesting video personalization solution that enables sending a unique video to each person. Ideal for e-commerce DTC businesses (but perhaps also useful for sales teams), it offers an improvement in email engagement and an increase in ROI as a result.

    How-To:

    It’s surprisingly easy to implement Maverick’s custom videos into an e-commerce business. 

Record the video – I think something low budget will feel more authentic. Leave a spot so the first word is the customer’s name: that’s the hook that gets them to watch the whole video.

Record an audio script. It’s used to train the model to match your voice.

    Integrate with your email automation/flow.

    Case Study Highlight: Dr. Squatch

Dr. Squatch implemented Maverick and reported a big improvement in their email engagement. Watch the video:

    AI News Roundup

    Video was big this week

OpenAI’s Sora text-to-video model shocked everyone with a massive leap forward: https://www.youtube.com/watch?v=HK6y8DAPN_0

EMO, out of the Alibaba group, demoed some fascinating progress in animating a photo with lip syncing that matches the emotion of an audio clip: https://www.youtube.com/watch?v=f_d-8BGIzPI

    Have questions or insights of your own? Reply to this email! I’d love to hear from you.

    That wraps up this week’s journey through the world of AI for businesses. Remember, integrating AI into your business strategies is not just about staying competitive; it’s about setting new standards of excellence and innovation.

    Share AI Commerce with your colleagues or network, and help build a community of AI-savvy professionals. Got feedback or want to see a specific topic covered? Let me know!

    Until next week,
    Matt Warren