Author: Matt

  • Give People What They Want: Entertainment

    Give People What They Want: Entertainment

    I work in the sports industry. We sell tickets, sponsorships, media rights. But what we’re actually creating is entertainment. That’s the core product. Everything else is a derivative.

    Most content creators forget this.

    They produce tips and tricks. How-tos. Educational content. And there’s a place for that (you’re reading one right now). But scroll through your feed. How much of what you’re actually consuming is educational? How much of it is making you feel something in the moment?

    People don’t open TikTok to learn. They open it to feel.

    The Hey Al Experiment

    Yesterday, I rebooted an old concept I’d been sitting on for years. A short-form video series called “Hey Al.”

    The premise: I have conversations with an AI assistant named Al (voiced by a cheerful feminine AI), and things go sideways. Al takes instructions literally. Al lacks the context that makes human requests make sense. Al is helpful to a fault, which is exactly what makes it funny.

    It’s fictional comedy. Not a tutorial. Not tips. Not “5 ways to use AI better.”

    The first episode (about having a productive day) went out yesterday and performed better than anything educational I’ve posted in months. Not because the production was better. Because people wanted to watch it. They wanted to see what Al would do next.

    That’s entertainment.

    The Content Creator Trap

    Most of us creating content online default to education mode. It feels safer. It feels valuable. You’re giving people information they can use.

    Businesses are even worse: their content defaults to announcements and ads. Boring!

    Information is abundant. Entertainment is scarce.

    Scroll your own feed. Most of what stops you isn’t a tutorial. It’s something that made you feel curious or surprised. The educational content you actually consume is usually wrapped in entertainment. The YouTuber who makes you laugh while teaching. The thread that opens with a story before the lesson.

    Give people what they want. They’re holding a device that used to be called a television. They want to be entertained.

    AI-Assisted Production

    The irony isn’t lost on me: I’m using AI to produce entertainment about AI.

    For Hey Al, Claude Code helped me manage the production pipeline. Script development, extracting audio from video files, converting my voice recording to Al’s voice character, organizing the batch filming schedule.

    These aren’t creative decisions. They’re boilerplate labor. The automation frees me to focus on what actually matters: making the joke land.

    The ideal state is producing multiple episodes per day, batched and scheduled. We’re not there yet. But the direction is clear.

    Quality vs. Quantity Is a False Dichotomy

    The world is flooded with content. You’ve heard the advice: focus on quality, not quantity. Or: volume wins, ship more.

    But it’s not actually a seesaw where you trade one for the other. Better tools give you better trade-offs on both.

    Everything we produce today is higher quality than what was possible in the 1980s. Obviously. But it’s also faster to produce. Both lines went up, because the tools improved.

    The bar is always rising. The low bar of yesterday is buried. But if you’re using modern tools, you’re not giving up quality for speed. You’re getting both.

    The game isn’t quality or quantity. It’s using the right tools to stay ahead of the rising floor.

    The Job

    If you’re creating content, you’re in the entertainment business. Whether you like it or not. Whether you’re selling sports tickets or SaaS products or your own personal brand.

    Education is a delivery mechanism. The wrapper matters.

    Give people what they want. They want to feel something. They want to be entertained.

    That’s the job.

  • Building a Personal Knowledge Base: How I Created a Semantic Search Engine Over Everything I’ve Ever Made

    Building a Personal Knowledge Base: How I Created a Semantic Search Engine Over Everything I’ve Ever Made

    I’ve been creating content for years. YouTube videos, blog posts, tweets, podcast appearances, internal docs for my company. Thousands of pieces scattered across platforms and folders.

    Here’s the problem: I can’t remember what I’ve said.

    Not in a concerning way. In a “did I already share that framework?” or “what was that thing I said about distribution vs product?” way. My past content exists, but I can’t access it when I need it. When I sit down to write something new, I’m starting from scratch instead of building on foundations I’ve already laid.

    The Inspiration

    I was listening to a podcast where Caleb Ralston (a personal branding creator on YouTube) mentioned that his team had built an “AI database” of all his historical content. They transcribed every video he’d ever appeared in and turned it into something searchable. It let them understand his existing talking points, find frameworks he’d already developed, and maintain consistency across content.

    The concept stuck with me. What would it look like to build something similar for myself?

    What I Built

    A local semantic search engine that can answer questions about my own content. The entire system runs on my laptop. No cloud services, no API costs after setup, complete privacy.

    The stack is surprisingly simple:

    • ChromaDB for vector storage
    • Ollama for local embeddings (nomic-embed-text model)
    • Python script to ingest and query
    • Markdown as the universal format

    Total setup: maybe 200 lines of code.

    How It Works

    1. Collect content – YouTube transcripts (downloaded via yt-dlp), blog posts, docs, anything in text form
    2. Chunk it – Split documents into ~500-word segments with overlap (sketched below)
    3. Embed it – Convert each chunk to a vector using Ollama locally
    4. Store it – ChromaDB persists everything to disk
    5. Query it – Semantic search returns relevant chunks for any question
    # Ingest all content
    uv run build-kb.py --ingest
    
    # Ask questions
    uv run build-kb.py --query "What have I said about content systems?"
    uv run build-kb.py --query "My thoughts on distribution vs product"

    The “semantic” part matters. I’m not doing keyword matching. When I ask about “content systems,” it returns chunks that discuss workflows, automation, and publishing pipelines—even if those exact words aren’t used. The embedding model understands meaning, not just strings.
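
    That chunking step is the only real design decision in the pipeline. Here's a minimal sketch of the overlap logic, assuming simple word-based splitting (the function name is mine, not necessarily what build-kb.py uses):

    def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
        """Split text into ~size-word chunks, repeating `overlap` words
        across boundaries so ideas that straddle a split survive."""
        words = text.split()
        step = size - overlap
        chunks = []
        for start in range(0, len(words), step):
            chunks.append(" ".join(words[start:start + size]))
        return chunks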

    The Obsidian Connection

    Here’s where it gets interesting.

    My entire working directory is a folder of markdown files. Blog posts, notes, drafts, transcripts—all .md files in a structured hierarchy. That folder is also an Obsidian vault.

    Obsidian gives me:

    • Visual browsing – Navigate content through a nice UI
    • Linking – Connect related ideas with [[wiki-style links]]
    • Graph view – See how concepts cluster together
    • Search – Quick full-text search when I know what I’m looking for

    The knowledge base adds:

    • Semantic search – Find content by meaning, not keywords
    • Cross-reference discovery – “What else have I said that’s similar to this?”
    • Topic clustering – Analyze patterns in what I talk about most

    They complement each other. Obsidian for browsing and organizing. The knowledge base for querying and discovering.

    What I Discovered

    After ingesting ~400 chunks from my content, I ran an analysis to find topic clusters. The results were illuminating:

    Topic                            Frequency
    Claude Code / AI automation      86 mentions
    Content systems & workflows      75 mentions
    Marketing & business             106 mentions
    Founder productivity / goals     62 mentions

    The phrase “claude code” appeared 38 times in my personal brand content. “Content” appeared 131 times. These are the themes I return to constantly.

    More useful than the raw counts were the semantic clusters. When I queried “What have I said about content systems?”, I got back chunks from:

    • A blog post about growth engineering with Claude Code
    • A YouTube video called “Creating a Content System”
    • Internal documentation about creative direction

    Content I’d forgotten I made. Ideas I’d already articulated that I can now build on instead of recreating.

    The Broader Pattern

    This is part of something I’ve been calling “growth engineering”—treating marketing infrastructure like software infrastructure. The knowledge base is one component.

    The full system looks like this:

    Working Directory (Obsidian Vault)
    ├── posts/           # Blog content
    ├── content/         # Thought leadership drafts
    ├── knowledge-base/  # Vector DB + scripts
    │   ├── youtube-transcripts/
    │   ├── chroma-db/
    │   └── build-kb.py
    └── products/        # Product pages and docs

    Everything is markdown. Everything is version controlled. Everything is queryable.

    When I want to write something new:

    1. Query the knowledge base: “What have I said about [topic]?”
    2. Review existing content in Obsidian
    3. Build on what exists instead of starting fresh
    4. Publish through the same markdown → WordPress pipeline

    The AI isn’t writing my content. It’s helping me remember and organize what I’ve already created. The knowledge base becomes institutional memory for a one-person operation.

    How to Build Your Own

    If you want to try this, here’s the minimal setup:

    1. Install Ollama

    brew install ollama
    ollama serve
    ollama pull nomic-embed-text

    2. Create the ingestion script

    The core is maybe 100 lines. Collect documents, chunk them, embed them, store them in ChromaDB. The full script is in my knowledge-base repo.
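
    For a sense of what those lines contain, here's a hedged sketch of the ingest loop using the chromadb and ollama Python packages. The shape matches the steps described earlier, but names, paths, and details are illustrative rather than copied from the real script:

    import pathlib

    import chromadb
    import ollama

    client = chromadb.PersistentClient(path="chroma-db")
    collection = client.get_or_create_collection("knowledge-base")

    for path in pathlib.Path("content").rglob("*.md"):
        text = path.read_text(encoding="utf-8")
        for i, piece in enumerate(chunk(text)):  # chunk() as sketched earlier
            embedding = ollama.embeddings(
                model="nomic-embed-text", prompt=piece)["embedding"]
            collection.add(
                ids=[f"{path}:{i}"],
                documents=[piece],
                metadatas=[{"source": str(path)}],
                embeddings=[embedding],
            )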

    3. Point it at your content

    YouTube transcripts are easy:

    yt-dlp --write-auto-sub --sub-lang en --skip-download \
      "https://www.youtube.com/@your-channel"

    Markdown files just need to be in a folder. The script recursively finds them.

    4. Query away

    uv run build-kb.py --query "your question here" -n 10

    The embedding model runs locally. No API keys needed after you pull the model. Completely private—your content never leaves your machine.
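
    The query side is even smaller. Again, a sketch rather than the real script: embed the question with the same model, then let ChromaDB return the nearest chunks by vector similarity.

    def query(question: str, n: int = 5) -> None:
        q_emb = ollama.embeddings(
            model="nomic-embed-text", prompt=question)["embedding"]
        results = collection.query(query_embeddings=[q_emb], n_results=n)
        # ChromaDB returns parallel lists per query
        for doc, meta in zip(results["documents"][0], results["metadatas"][0]):
            print(f"[{meta['source']}] {doc[:200]}...")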

    The Meta Layer

    There’s something recursive about using AI to build the system that helps me leverage AI.

    Claude Code helped me write the ingestion script. It helped me debug the VTT parsing for YouTube transcripts. It helped me analyze the topic clusters. Now the knowledge base feeds context back into Claude Code when I’m working on new content.

    The tools build the tools that improve the tools.

    That’s the pattern I keep returning to. Not “AI writes my content” but “AI amplifies my ability to create and connect my own content.” The knowledge base doesn’t have opinions. It has receipts—everything I’ve said, searchable by meaning.

    For someone building a personal brand, that’s the foundation. Know what you’ve said. Build on it. Be consistent, even repetitive, but in ways that feel fresh each time. Let the system remember so you can focus on what’s new.

  • Growth Engineering with Claude Code: Why Your Next Marketing Platform is a Code Editor

    Growth Engineering with Claude Code: Why Your Next Marketing Platform is a Code Editor

    Claude Code was built for software engineers. It’s a CLI tool that helps developers write, debug, and ship code faster with AI assistance.

    I’m using it to run the entire marketing operation for Psychedelic Water.

    Not the coding parts—though there’s some of that. I’m using it to create content, coordinate campaigns, maintain brand voice across six channels, and build a self-improving system where analytics feed back into strategy. The file system is the CMS. Markdown files are the content. CLAUDE.md files are the strategy documents. And AI is the executor.

    Here’s why I think this is where growth engineering is headed.

    The Problem with Marketing Tools

    Modern marketing requires presence everywhere: Instagram, Twitter, TikTok, YouTube, email, blog, third-party publications. Each platform has its own dashboard, its own analytics, its own content format.

    The result is fragmentation. Your Instagram strategy lives in one place. Your email campaigns live in another. Your content calendar is a spreadsheet that’s always out of date. And maintaining consistent brand voice across all of it? Good luck.

    Most teams solve this by hiring more people. A social media manager, a content writer, an email specialist, someone to pull analytics together. Each person becomes the keeper of their channel, and coordination happens through meetings, Slack, and hope.

    What if the coordination layer was built into the system itself?

    The File System as Marketing Infrastructure

    At Psychedelic Water, I’ve built a folder structure that serves as the entire marketing operation:

    psychedelic-marketing/
    ├── CLAUDE.md                    # High-level strategy and goals
    ├── products/                    # Product info, photography, specs
    ├── brand/                       # Voice guidelines, visual assets
    ├── channels/
    │   ├── instagram/
    │   │   ├── CLAUDE.md            # Instagram-specific strategy
    │   │   ├── scripts/             # Posting, analytics, scheduling
    │   │   └── drafts/              # Content in progress
    │   ├── twitter/
    │   │   ├── CLAUDE.md
    │   │   ├── scripts/
    │   │   └── drafts/
    │   ├── email/
    │   │   ├── CLAUDE.md
    │   │   ├── scripts/             # Klaviyo integration
    │   │   └── campaigns/
    │   ├── blog/
    │   │   ├── CLAUDE.md
    │   │   ├── scripts/             # Shopify publishing, analytics
    │   │   └── posts/
    │   └── ...
    ├── analytics/                   # Performance data, reports
    └── campaigns/                   # Cross-channel coordinated efforts
        └── 2026-01-functional-focus/
            ├── strategy.md
            ├── instagram/
            ├── twitter/
            └── email/

    Every channel has its own CLAUDE.md file that defines the strategy for that platform. When I work in the Instagram folder, Claude understands the Instagram strategy. When I work in email, it understands the email strategy. The context is built into the structure.

    Strategy as Code

    Here’s what a channel-level CLAUDE.md might contain:

    • Audience: Who we’re talking to on this platform
    • Voice adjustments: How brand voice adapts for this channel
    • Content types: What performs well here
    • Posting cadence: Frequency and timing
    • Scripts available: What automation exists
    • Success metrics: What we’re optimizing for

    When I ask Claude to draft an Instagram caption, it doesn’t start from zero. It reads the strategy document, understands the voice, knows what’s worked before. The strategic context is embedded in the file system.
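
    For illustration, a condensed (and entirely invented) channels/instagram/CLAUDE.md might read:

    # Instagram Strategy

    Audience: 25-40, wellness-curious, discovery-driven.
    Voice: playful and visual; shorter sentences than the blog.
    Content types: product-in-context photos, behind-the-scenes reels.
    Cadence: 4-5 posts per week, mornings.
    Scripts: scripts/post.py (publish), scripts/metrics.py (pull insights).
    Optimize for: saves and shares over likes.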

    The top-level CLAUDE.md contains the overarching marketing goals—what we’re focusing on this month, what story we’re telling, what campaigns are active. This creates consistency. If the focus is on functional ingredients this month, every channel knows it. Instagram, Twitter, email, blog—they’re all telling the same story in their own way.

    Scripts as Integrations

    Each channel folder contains scripts that handle the platform-specific work:

    Blog scripts connect to Shopify to publish content and pull performance data. I can ask Claude to check how last week’s post performed relative to historical averages, and it runs the analytics script, interprets the results, and incorporates that into future recommendations.

    Email scripts integrate with Klaviyo to schedule campaigns and pull engagement metrics.

    Image generation scripts use AI to create visuals that match the brand aesthetic, then resize them appropriately for each platform.

    These aren’t complex applications. They’re small, focused tools—often just a few dozen lines of Python—that bridge Claude Code to the platforms where content lives. The AI orchestrates them; the scripts do the platform-specific work.
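
    To make "small, focused" concrete, here's roughly what the Shopify blog-publishing bridge could look like against the Admin REST API. This is a sketch, not the production script; the shop domain, blog ID, and API version are placeholders:

    import os

    import requests

    SHOP = "your-store.myshopify.com"          # placeholder
    TOKEN = os.environ["SHOPIFY_ADMIN_TOKEN"]  # Admin API access token
    BLOG_ID = 123456789                        # placeholder blog ID

    def publish_post(title: str, body_html: str, tags: str = "") -> dict:
        """Create an article on the Shopify blog and return the API response."""
        r = requests.post(
            f"https://{SHOP}/admin/api/2024-01/blogs/{BLOG_ID}/articles.json",
            headers={"X-Shopify-Access-Token": TOKEN},
            json={"article": {"title": title, "body_html": body_html, "tags": tags}},
        )
        r.raise_for_status()
        return r.json()["article"]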

    Content as Dated Folders

    Every piece of content lives in a dated folder:

    channels/instagram/drafts/
    ├── 2026-01-20-functional-energy/
    │   ├── caption.md
    │   ├── image.jpg
    │   ├── notes.md
    │   └── analytics.json
    ├── 2026-01-21-behind-the-scenes/
    │   ├── caption.md
    │   ├── images/
    │   └── notes.md

    This creates a natural archive. I can look back at what we posted, see what performed, understand what we were thinking at the time. When analytics data comes in, it gets saved alongside the content. The system learns from itself.

    Cross-Channel Coordination

    The hardest part of multi-channel marketing is consistency. You want the same story told everywhere, adapted for each platform’s format and audience.

    The campaigns/ folder solves this. A campaign is a coordinated effort across channels:

    campaigns/2026-01-functional-focus/
    ├── strategy.md          # The core message and goals
    ├── instagram/           # Instagram-specific executions
    ├── twitter/             # Twitter-specific executions
    ├── email/               # Email-specific executions
    └── results.md           # What happened

    The strategy.md defines what we’re saying and why. Each channel folder contains the platform-specific adaptations. Claude understands that these are connected—if I’m working on the Instagram content, it knows the overarching strategy and can ensure the messaging aligns.

    If someone misses the Instagram post, they might catch it on Twitter. If they’re not on social media, they’ll get the email. The story reaches them somewhere.

    Why This Works

    Claude Code wasn’t designed for this. It was built to help developers write software. But the core patterns translate perfectly:

    File system as memory: Just like code lives in files, content lives in files. The structure is the organization.

    Markdown as content: Developers write documentation in markdown. Marketers can write content in markdown. It’s portable, version-controlled, and AI-friendly.

    Scripts as integrations: Instead of API calls to deploy code, scripts make API calls to publish content or pull analytics.

    AI as executor: Instead of writing code, the AI writes content, following the strategic guidelines embedded in the folder structure.

    The gap between “AI coding assistant” and “AI marketing operations platform” is smaller than it looks.

    What’s Missing

    This system isn’t fully automated. Some platforms don’t have good APIs for posting. Some content needs human review before it goes out. The analytics integrations are still being built.

    But the bones are there. The organizational structure exists. The strategy is embedded. The feedback loops are forming.

    Right now, I work alongside Claude in this system—reviewing drafts, approving posts, adjusting strategy based on what the data says. But the system is designed to become more autonomous over time. As the AI gets better, as the integrations get more complete, the human involvement shifts from execution to oversight.

    The Future of Growth Engineering

    I think this is where marketing operations is headed. Not more dashboards. Not more point solutions. Not more people managing more tools.

    Instead: AI-native systems where the file system is the source of truth, strategy is embedded in the structure, and AI handles the execution across every channel.

    Claude Code is a code editor. But it turns out that growth engineering looks a lot like software engineering—just with different outputs. Instead of shipping code, you’re shipping content. Instead of deploying to production, you’re publishing to platforms. Instead of monitoring systems, you’re tracking engagement.

    The tools built for one translate surprisingly well to the other.


    I’m building this system for Psychedelic Water, where I’m President and Co-Founder. If you’re thinking about AI-native marketing operations, I’d be interested to hear what you’re building.

  • From a Week to Four Hours: Building Chrome Extensions with AI

    From a Week to Four Hours: Building Chrome Extensions with AI

    A year ago, I built my first Chrome extension. It took the better part of a week.

    A few days ago, I built my second Chrome extension. It took four hours.

    Same developer. Similar complexity. Almost no retained knowledge about Chrome extension development between the two projects. The difference was the AI.

    The First Extension

    The first project was a scraper for Amazon Seller Central—pulling data out of the seller dashboard and generating reports. I built it with one of the ChatGPT 4.x models, whichever was current at the time.

    It was painful. But impressive at the time.

    Not because Chrome extensions are impossibly hard, but because I’d never built one before and the AI couldn’t quite get me there cleanly. Every step involved back-and-forth. I’d describe what I wanted, get code that didn’t work, debug it, explain the error, get a fix that broke something else, repeat.

    The manifest file alone took multiple attempts to get right. Permissions, content scripts, background workers—each concept required me to learn enough to understand why the AI’s suggestions weren’t working, then nudge it toward a solution.

    By the end of the week I had a working extension, but I’d earned it through iteration and frustration.

    The Second Extension

    Fast forward to last week. I needed another Chrome extension—this one scrapes recipe information from web pages and submits it to a backend API. Different purpose, but similar complexity to the first project.

    I opened Claude Code and described what I wanted.

    One prompt later, I had a working prototype running locally.

    Not a starting point. Not scaffolding that needed extensive modification. A working extension that did the core job. From there, it was small iterations—mostly around authentication with my backend. But the foundation was solid from the first response.

    What Changed

    The moments that stood out weren’t dramatic. They were just… easy in a way that felt wrong.

    The manifest: Chrome extensions require a manifest.json file that defines permissions, scripts, icons, and metadata. Last year, this was a source of misunderstandings and rejections. This time, Claude one-shot it. Correct permissions, proper structure, sensible defaults. I didn’t have to understand why it worked—it just did.
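
    For reference, a minimal Manifest V3 file for a page-scraping extension looks something like this (the name and file layout are illustrative, not my exact manifest):

    {
      "manifest_version": 3,
      "name": "Recipe Clipper",
      "version": "1.0.0",
      "description": "Scrape recipe data from the current page.",
      "permissions": ["activeTab", "storage"],
      "host_permissions": ["https://*/*"],
      "background": { "service_worker": "background.js" },
      "content_scripts": [
        { "matches": ["https://*/*"], "js": ["content.js"] }
      ],
      "action": { "default_popup": "popup.html" }
    }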

    The submission process: I’d completely forgotten how to submit an extension to the Chrome Web Store. Claude walked me through it—descriptions, screenshots, privacy policy requirements, the review process. Not generic advice, but specific guidance tailored to what I’d built.

    Performance and security: After the core functionality worked, I prompted my way through improvements. “Make this more efficient.” “Are there any security concerns?” Each time, I got specific changes to the code. I did a cursory review to make sure nothing looked insane, but I didn’t have to dive deep into the implementation to fix anything myself.

    Four hours from start to ready-for-submission.

    The Gap Is Closing

    I’m not a better developer than I was a year ago—at least not at Chrome extensions. I’d forgotten almost everything I learned during that first project. But the AI got dramatically better.

    ChatGPT 4.x was helpful but unreliable. It got me part of the way there, then I had to fight through the gaps. Claude Code with Opus 4.5 understood what I was trying to build and just… built it.

    The difference isn’t subtle. It’s not 20% faster or “somewhat easier.” It’s the difference between a week of grinding and an afternoon of iterating.

    What This Means

    I think about this when people ask whether AI is actually useful for development, or if it’s just hype. The answer depends entirely on when you last tried it.

    If your experience with AI coding assistants was ChatGPT circa 2024, you probably remember the frustration—code that almost worked, endless debugging, the feeling that you could’ve done it faster yourself. That was real.

    But the tools from six months ago aren’t the tools from today. The gap between “AI assistant that helps” and “AI that builds” is closing fast. For a task I’d done exactly once before, with knowledge I’d completely lost, I went from a week to four hours.

    That’s not incremental improvement. That’s a phase change.


    Both extensions are in production. One took a week of frustration. One took an afternoon.

  • My 2026 Resolution: Rebuild the Morning

    My 2026 Resolution: Rebuild the Morning

    I’m not big on New Year’s resolutions. Most of them are wishful thinking dressed up as commitment. But this year I’m trying something different—I’m not adding new habits, I’m recovering old ones. Not setting goals, just simple routines.

    These are habits I’ve had before. Habits I know work for me. Habits I let slip during a busy year and need to bring back.

    The Foundation: Sleep

    Everything starts with sleep. When I sleep well, everything else gets easier. When I don’t, willpower becomes the only thing holding my day together—and willpower is a finite resource.

    The habits I’m rebuilding aren’t complicated. They’re foundational. The kind of boring, unsexy practices that don’t make for good social media content but actually move the needle on quality of life.

    Here’s the thing: I’m not trying to optimize my sleep with gadgets and sleep scores. I’m trying to remove the friction that prevents good sleep from happening naturally.

    Phone Out of the Bedroom

    This is the single biggest change I’ve made to my sleep quality in years. And I keep forgetting it.

    When my phone is on my nightstand, I check it before sleep. I scroll when I should be winding down. Then in the morning, I lie in bed catching up on notifications instead of getting up.

    The fix is absurdly simple: the phone stays on the charger in my office. If I’m in bed, I’m either sleeping or I’m bored enough to get up. There’s no third option where I’m doom-scrolling at midnight or hitting snooze until I’ve wasted half the morning.

    The 10-Minute Walk

    I’m adding a morning walk to the routine. Just 10 minutes outside before I start work.

    I’ll be honest—I’m skeptical I’ll stick to this one. It’s January in Canada. The appeal of stepping outside into -15°C weather is… limited.

    But the evidence is hard to ignore. Morning sunlight resets your circadian rhythm. Movement wakes up your body. Cold air (unfortunately) wakes up your brain. Everyone I know who does this swears by it.

    So I’m committing to the experiment. If I can build the habit during the hardest months, it should be easy to maintain when spring arrives.

    Morning Ketones

    Here’s where things get a bit more experimental.

    I don’t eat a big breakfast. Never have. My brain needs to work in the morning, but my body doesn’t need a carb-heavy meal.

    I need fuel for the challenging cognitive work I do early in the day. The solution I’m testing: ketones.

    Ketones are an alternative fuel source for your brain. When you’re in ketosis—either from fasting or a low-carb diet—your body produces them naturally. But you can also supplement them directly.

    I’m starting with powdered MCT oil, which is a precursor to ketones. Your liver converts medium-chain triglycerides into ketones relatively quickly. The powdered form mixes easily into coffee and doesn’t cause the digestive… adventures… that liquid MCT oil is famous for.

    If that doesn’t give me the mental clarity I’m looking for, I’ll try ketone salt powders next. And if those don’t work, there are direct ketone ester shots—expensive and allegedly terrible-tasting, but effective.

    The goal isn’t ketosis for weight loss or any of the other popular reasons people try it. The goal is giving my brain high-octane fuel first thing in the morning without spiking my blood sugar.

    Why Recovery, Not Optimization

    I’ve noticed a pattern in my life. I discover habits that work. I practice them consistently. Things improve. Then life gets busy, the habits slip, and I spend months wondering why I feel worse.

    The answer is usually obvious in retrospect: I stopped doing the things that were working.

    This year’s resolution isn’t about finding new hacks or optimizing my stack. It’s about acknowledging that I already know what works for me—I just need to do it again.

    Phone out of the bedroom. Walk in the morning. Fuel the brain without crashing the blood sugar. Go to bed at a reasonable hour.

    Nothing revolutionary. Just the basics, rebuilt.


    Here’s to a year of doing the boring stuff that actually works.

  • How I Use AI to Write and Publish Blog Posts

    How I Use AI to Write and Publish Blog Posts

    This post is a bit meta. I’m using the exact workflow I’m about to describe to write and publish this very article.

    Here’s the setup: I speak my ideas out loud, an AI turns them into polished prose, another AI generates the hero image, and a set of scripts I built with AI assistance handles the publishing. The whole thing lives in a GitHub repository that you can clone and use yourself.

    Let me walk you through how it works.

    The Problem With Writing

    I have ideas. Lots of them. The bottleneck has never been coming up with things to write about—it’s the friction between having a thought and getting it published.

    Traditional blogging requires you to:

    1. Sit down and type out your thoughts
    2. Edit and format the content
    3. Find or create images
    4. Log into WordPress
    5. Copy-paste everything into the editor
    6. Set featured images, categories, meta descriptions
    7. Preview, fix issues, publish

    Each step is a context switch. Each context switch is an opportunity to abandon the post entirely. My drafts folder is a graveyard of half-finished ideas.

    Voice First

    The breakthrough was realizing I don’t need to type. I use Wispr Flow for voice-to-text dictation. It runs locally on my Mac and transcribes speech with surprisingly good accuracy.

    Now when I have an idea for a post, I just… talk. I ramble through my thoughts, explain the concept as if I’m telling a friend, and let the words flow without worrying about structure or polish.

    The output is messy. It’s conversational, full of “um”s and tangents. But it captures the core ideas in a way that staring at a blank page never did.

    AI as Editor

    This is where Claude Code comes in. I take my raw dictation and ask Claude to transform it into a structured blog post. Not just grammar cleanup—actual restructuring, adding headers, tightening the prose, finding the narrative thread in my stream of consciousness.

    The key is that I stay in control. Claude produces a markdown draft, and I review it. I keep what works, rewrite what doesn’t, add details Claude couldn’t know. The AI handles the tedious transformation from spoken word to written word. I handle the judgment calls about what’s actually worth saying.

    The Publishing Pipeline

    Here’s where it gets interesting. I built a set of CLI tools that Claude Code can use to handle the entire publishing workflow.

    When I’m ready to publish, I have a conversation like this:

    Me: "Generate a cyberpunk-style hero image for this post about AI blogging workflows,
    crop it to 16:9, and publish to WordPress with the featured image attached."
    
    Claude: [Generates image with Gemini] → [Crops and converts to JPG] →
            [Uploads to WordPress] → [Converts markdown to Gutenberg blocks] →
            [Creates post with featured image] → Done. Here's your URL.

    One conversation. Full pipeline. No clicking through WordPress admin panels.

    How the Tools Work

    The publishing toolkit includes:

    Voice capture – Wispr Flow transcribes my dictation to text

    Content transformation – Claude Code converts raw transcription to structured markdown

    Image generation – The Nano Banana Pro plugin generates hero images using Google’s Gemini model

    Image processing – A Python script crops images to 16:9 and converts to web-optimized JPG (sketched below)

    WordPress publishing – Another Python script handles media uploads, post creation, and metadata via the WordPress REST API

    File organization – Each post lives in its own dated folder with the markdown source, images, and a metadata JSON file for future edits
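
    The image-processing step is the most mechanical of these. A minimal sketch with Pillow, assuming a center crop (the function name is mine):

    from PIL import Image

    def crop_to_16_9(src: str, dest: str, quality: int = 85) -> None:
        """Center-crop an image to 16:9 and save as a web-friendly JPEG."""
        img = Image.open(src).convert("RGB")
        w, h = img.size
        target_h = int(w * 9 / 16)
        if target_h <= h:
            top = (h - target_h) // 2   # taller than 16:9: trim top/bottom
            img = img.crop((0, top, w, top + target_h))
        else:
            target_w = int(h * 16 / 9)
            left = (w - target_w) // 2  # wider than 16:9: trim sides
            img = img.crop((left, 0, left + target_w, h))
        img.save(dest, "JPEG", quality=quality)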

    The WordPress MCP server that ships with Claude Code can create posts, but it can’t upload media or set featured images. So I built CLI tools to fill those gaps. Claude Code runs them as needed during the publishing conversation.
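
    Those gap-filling tools come down to a couple of REST calls. A sketch of the two the MCP server couldn't handle, authenticated with a WordPress application password (the URL and credentials are placeholders):

    import os

    import requests

    WP = "https://example.com/wp-json/wp/v2"  # placeholder site
    AUTH = ("matt", "app-password-here")      # application password

    def upload_media(path: str) -> int:
        """Upload an image to the WordPress media library, return its ID."""
        filename = os.path.basename(path)
        with open(path, "rb") as f:
            r = requests.post(
                f"{WP}/media",
                auth=AUTH,
                headers={
                    "Content-Disposition": f'attachment; filename="{filename}"',
                    "Content-Type": "image/jpeg",
                },
                data=f.read(),
            )
        r.raise_for_status()
        return r.json()["id"]

    def publish(title: str, content: str, media_id: int) -> str:
        """Create the post with the uploaded image as featured media."""
        r = requests.post(
            f"{WP}/posts",
            auth=AUTH,
            json={"title": title, "content": content,
                  "status": "publish", "featured_media": media_id},
        )
        r.raise_for_status()
        return r.json()["link"]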

    Everything in Git

    The entire setup lives in a GitHub repository. Each blog post is a folder:

    posts/
    ├── 2026-01-13-ai-powered-blog-workflow/
    │   ├── content.md          # This post
    │   ├── featured.jpg        # Hero image
    │   ├── hero.png            # Original generated image
    │   └── meta.json           # WordPress post ID, dates, SEO fields

    Version control for blog posts. If I need to update something, I know exactly where to find it. The meta.json file stores the WordPress post ID so I can push updates to the live site.

    The Meta Part

    Here’s what’s happening right now:

    1. I dictated the concept for this post using Wispr Flow
    2. I asked Claude Code to turn my rambling into a structured article
    3. I reviewed and edited the markdown
    4. I’ll ask Claude to generate a hero image
    5. Claude will crop it, upload it to WordPress, and publish

    The workflow I’m describing is the workflow producing this description. It’s turtles all the way down.

    Try It Yourself

    The publishing toolkit is open source: github.com/mfwarren/personal-brand

    You’ll need:

    • A WordPress site with REST API access
    • An application password for authentication
    • Claude Code with the Nano Banana Pro plugin for image generation
    • Wispr Flow (or any voice-to-text tool) for dictation

    Clone the repo, configure your credentials, and start talking. The gap between having an idea and publishing it has never been smaller.


    Written by dictation, edited by AI, published by CLI. The future of blogging is conversational.

  • How a Holiday Tech Support Call Turned Into a Full-Stack AI Project

    How a Holiday Tech Support Call Turned Into a Full-Stack AI Project

    Like many eldest sons, I have a standing role as family tech support. This holiday season, that role led me somewhere unexpected: launching a new product.

    The Call

    I was visiting my parents over the holidays when they asked for help with a recipe app called MasterCook. They’d been using it for years, but the service was being decommissioned. Could I help them migrate their recipes somewhere else?

    I looked at the recommended migration path. Then I looked at the replacement applications. They were… not great. Clunky interfaces, limited features, the kind of software that feels abandoned even when it’s technically still maintained.

    I had a week of vacation left. I thought: I can build something better than this.

    One Week Later

    That thought became save.cooking – and it’s grown far beyond what I originally imagined.

    What started as a simple tool to import MasterCook recipe files has evolved into a fully-featured AI-enhanced meal planning platform:

    Core Features:

    • Import recipes from MasterCook (.mxp, .mx2) and other formats
    • AI-powered recipe parsing that actually understands ingredients and instructions
    • Vector embeddings that map recipe similarity – find dishes related to ones you love
    • Automatic shopping list generation synced to your weekly meal plan
    • Public recipe sharing with user profiles
    • Full meal plan sharing – not just individual recipes

    Technical Details I Never Would Have Tackled Alone:

    • JSON-LD structured data for Google Recipe rich results (example below)
    • Pinterest-optimized images and metadata
    • Open Graph tags specifically tuned for recipe content
    • Responsive Next.js frontend (not my usual stack)
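
    The JSON-LD piece deserves a quick look, because it's what earns recipe rich results in Google. Each recipe page embeds a script tag shaped roughly like this (the values are illustrative, not a real save.cooking recipe):

    {
      "@context": "https://schema.org",
      "@type": "Recipe",
      "name": "Weeknight Pad Thai",
      "image": ["https://save.cooking/images/pad-thai.jpg"],
      "recipeYield": "4 servings",
      "recipeIngredient": ["8 oz rice noodles", "2 eggs", "3 tbsp fish sauce"],
      "recipeInstructions": [
        { "@type": "HowToStep", "text": "Soak the noodles in warm water." },
        { "@type": "HowToStep", "text": "Stir-fry, sauce, and combine." }
      ]
    }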

    The site already has over 300 public recipes in its database, and that number grows daily.

    The AI Difference

    Here’s the thing: I’m not a Next.js developer. I’ve built backends, APIs, CLIs – but modern React frontends aren’t my wheelhouse. A year ago, this project would have taken months and looked significantly worse.

    With Claude Code handling the heavy lifting, I could focus on product decisions while the AI handled implementation details. Need Pinterest meta tags? Claude knew the exact format. Want vector similarity search? Claude set up the embeddings pipeline. Struggling with a responsive layout? Claude fixed the CSS.

    This isn’t about AI writing code for me. It’s about AI expanding what I can realistically build. The cognitive load of learning a new framework while also designing features while also handling deployment – that’s usually where side projects die. AI agents absorbed that load.

    The Graveyard Problem

    Recipe websites are a graveyard. AllRecipes feels like it hasn’t been updated since 2010. Food blogs are drowning in ads and life stories before you get to the actual recipe. Apps come and go, taking your data with them.

    People have stopped expecting good software in this space. They’ve accepted that finding a recipe means scrolling past someone’s childhood memories and closing seventeen popups.

    I think we can do better. I think we should do better. Cooking is fundamental – it’s one of the few things that genuinely brings people together. The software around it shouldn’t be an obstacle.

    What’s Next

    save.cooking is live and growing. I’m using it daily for my own meal planning. Features are shipping weekly:

    • Ingredient substitution suggestions
    • Nutritional analysis
    • Collaborative meal planning for households
    • Recipe scaling that actually works
    • Smarter shopping list organization by store section

    If you’ve got recipes trapped in old software, or you’re just tired of the current options, come check it out at save.cooking.

    And if you’re a developer wondering what you could build in a week with AI assistance – the answer might surprise you. The constraint isn’t technical capability anymore. It’s just deciding what’s worth building.


    Built with Claude Code over a holiday week. The family tech support call that actually paid off.

  • Claude Code First Development: Building AI-Operable Systems

    Claude Code First Development: Building AI-Operable Systems

    Most developers think about AI coding assistants as tools that help you write code faster. But there’s a more interesting question: how do you architect your systems so an AI can operate them?

    I’ve been running production applications for years. The traditional approach is to build admin dashboards – React UIs, Django admin, custom internal tools. You click around, run queries, check metrics, send emails to users. It works, but it’s slow and requires constant context-switching.

    Here’s the insight: Claude Code is a command-line interface. It can run shell commands, read output, and take action based on what it sees. If you build your admin tooling as CLI commands and APIs instead of web UIs, Claude Code becomes your admin interface.

    Instead of clicking through dashboards to debug a production issue, you tell Claude: “Find all users who signed up in the last 24 hours but haven’t verified their email, and show me their signup source.” It runs the commands, parses the output, and gives you the answer.

    This is Claude Code First Development – designing your production infrastructure to be AI-operable.

    The Architecture

    There are three layers to this:

    1. Admin API Layer

    Your application exposes authenticated API endpoints for admin operations. Not public APIs – internal endpoints that require admin credentials. These give you programmatic access to:

    • User data (lookups, activity, state)
    • System metrics (signups, WAU, churn, error rates)
    • Operations (send emails, trigger jobs, toggle features, issue refunds)

    2. CLI Tooling

    Command-line tools that wrap those APIs. Claude Code can invoke these directly:

    ./admin users search --email "foo@example.com"
    ./admin metrics signups --since "7 days ago"
    ./admin jobs trigger welcome-sequence --user-id 12345
    ./admin logs errors --service api --last 1h

    3. Credential Management

    The CLI tools handle authentication – reading tokens from environment variables or config files. Claude Code doesn’t need to know how auth works; it just runs commands.
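
    A sketch of what that can look like in practice. These helper names are hypothetical, and they're the same ones the snippet further down assumes:

    import json
    import os

    import click
    import requests

    def api_request(method: str, path: str) -> dict:
        """Call the admin API with credentials from the environment."""
        r = requests.request(
            method,
            os.environ["ADMIN_API_URL"] + path,
            headers={"Authorization": f"Bearer {os.environ['ADMIN_API_TOKEN']}"},
            timeout=30,
        )
        r.raise_for_status()  # surface HTTP errors to the caller
        return r.json()

    def output(data: dict) -> None:
        """JSON by default -- structured output Claude parses reliably."""
        click.echo(json.dumps(data, indent=2))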

    Building the CLI Tools

    The great thing about AI Developer Agents is that you don’t need to code these tools yourself.

    Based on the data models in this application, build a command-line CLI tool and a Claude Code skill to
    use it. The CLI tool should authenticate with admin-only scoped API endpoints to execute basic CRUD
    operations, report on activity metrics, generate reports, and provide insights that help control the
    application in production without relying on an administrator dashboard.
    Build authentication into the CLI tool so it saves credentials securely.
    Examples:
    ./admin-cli users list
    ./admin-cli users add user@example.com --sent-invite
    ./admin-cli reports DAU
    ./admin-cli error-log

    Level Up

    Here are prompts you can give Claude Code to build out this infrastructure for your specific application:

    Initial CLI Scaffold

    Create a Python CLI tool using Click for admin operations on my [Django/FastAPI/Express]
    application. The CLI should:
    - Read API credentials from environment variables (ADMIN_API_URL, ADMIN_API_TOKEN)
    - Have command groups for: users, metrics, logs, jobs
    - Output JSON by default with an option for table format
    - Include proper error handling for API failures
    
    Start with the scaffold and user search command.

    Adding User Management

    Add these user management commands to my admin CLI:
    
    1. users search - find users by email, name, or ID
    2. users get <id> - get full user profile including subscription status
    3. users recent - list signups from last N hours/days with filters for source and verification status
    4. users activity <id> - show recent actions for a user
    
    Each command should have sensible defaults and output JSON.

    Adding Metrics Commands

    Add metrics commands to my admin CLI that query our analytics:
    
    1. metrics signups - signup counts grouped by day/week with source breakdown
    2. metrics wau - weekly active users over time
    3. metrics churn - churn rate and churned user counts
    4. metrics health - overall system health (error rates, response times, queue depths)
    5. metrics revenue - MRR, new revenue, churned revenue (if applicable)
    
    Include --since flags for time windows and sensible output formatting.

    Adding Log Access

    Add log viewing commands to my admin CLI:
    
    1. logs errors - recent errors across services with filtering
    2. logs user <id> - all log entries related to a specific user
    3. logs request <id> - trace a specific request through the system
    4. logs search --pattern "..." - search logs by pattern
    
    Format output for terminal readability - timestamps, service names, messages on separate lines.

    Adding Actions/Jobs

    Add commands to trigger admin actions:
    
    1. jobs list - show available background jobs
    2. jobs trigger <name> - trigger a job with optional parameters
    3. jobs status <id> - check job status
    4. email send <user_id> <template> - send a specific email
    5. email templates - list available templates
    
    Include --dry-run flags where destructive or user-facing operations are involved.

    Building the API Endpoints

    Create admin API endpoints for my [framework] application to support the admin CLI:
    
    1. GET /admin/users/search?email=&id=
    2. GET /admin/users/<id>
    3. GET /admin/users/<id>/activity
    4. GET /admin/users/recent?since=&source=&verified=
    5. GET /admin/metrics/signups?since=&group_by=
    6. GET /admin/metrics/wau
    7. GET /admin/logs?service=&level=&since=
    8. POST /admin/jobs/trigger
    
    All endpoints should require Bearer token authentication. Use our existing User and
    Activity models. Return JSON responses.

    Making Tools Work Well With Claude Code

    Claude Code reads text output. The better your tools format their output, the more effectively Claude can interpret and act on the results.

    Principle 1: JSON for Data, Text for Logs

    Return structured data as JSON – Claude parses it accurately:

    $ ./admin users get 12345
    {
      "id": 12345,
      "email": "user@example.com",
      "created_at": "2024-01-15T10:30:00Z",
      "subscription": "pro",
      "verified": true
    }

    But format logs for human readability – Claude understands context better:

    $ ./admin logs errors --last 1h
    [2024-01-15 10:45:23] api: Failed to process payment for user 12345: card_declined
    [2024-01-15 10:47:01] worker: Job send_welcome_email failed: SMTP timeout
    [2024-01-15 10:52:18] api: Rate limit exceeded for IP 192.168.1.1

    Principle 2: Include Context in Output

    When something fails, include enough context for Claude to suggest fixes:

    $ ./admin jobs trigger welcome-email --user-id 99999
    {
      "error": "user_not_found",
      "message": "No user with ID 99999",
      "suggestion": "Use 'admin users search' to find the correct user ID"
    }

    Principle 3: Support Filtering at the Source

    Don’t make Claude grep through huge outputs. Add filters to your commands:

    # Bad - returns everything, Claude has to parse
    $ ./admin logs errors --last 24h
    
    # Good - filtered at the API level
    $ ./admin logs errors --last 24h --service api --level error --limit 20

    Principle 4: Dry Run Everything Destructive

    Any command that modifies state should support --dry-run:

    $ ./admin email send 12345 password-reset --dry-run
    {
      "would_send": true,
      "recipient": "user@example.com",
      "template": "password-reset",
      "subject": "Reset your password",
      "preview_url": "https://admin.yourapp.com/email/preview/abc123"
    }

    This lets Claude verify actions before executing them, and lets you review what it’s about to do.
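
    In Click, the pattern is one flag and an early return. A sketch, assuming hypothetical preview_email/send_email helpers:

    @email.command("send")
    @click.argument("user_id")
    @click.argument("template")
    @click.option("--dry-run", is_flag=True, help="Preview without sending.")
    def send(user_id: str, template: str, dry_run: bool):
        if dry_run:
            # Show what would happen; change nothing
            output(preview_email(user_id, template))
            return
        output(send_email(user_id, template))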

    Principle 5: Exit Codes Matter

    Use proper exit codes so Claude knows when commands fail:

    @users.command()
    @click.argument("user_id")  # Click arguments must be declared explicitly
    def get(user_id: str):
        """Fetch one user; exit non-zero on failure so Claude sees it."""
        try:
            # api_request and output are the CLI's shared helpers
            result = api_request("GET", f"/admin/users/{user_id}")
            output(result)
        except requests.HTTPError as e:
            if e.response.status_code == 404:
                click.echo(f"User {user_id} not found", err=True)
                raise SystemExit(1)  # non-zero exit signals failure
            raise

    Note: when a command fails loudly like this, Claude sees the error and can often fix the problem immediately.

    Integrating With Claude Code Skills

    Claude Code supports Skills – custom commands that extend its capabilities. You can create a Skill that wraps your admin CLI and provides context about your specific system.

    Just tell Claude Code to document your new CLI into a skill:

    Create a claude code skill to document how to use admin-cli, then give me examples of what I can do with this new skill.

    Now Claude Code has context about your admin tools and can use them appropriately.

    MCP Tool Integration

    For deeper integration, you can expose your admin API as an MCP (Model Context Protocol) server. This lets Claude call your admin functions directly as tools rather than shelling out to CLI commands, and it opens the tooling up to people beyond terminal-centric administrators.
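
    With the official Python SDK this layer is thin. A hedged sketch that wraps the same hypothetical api_request helper from earlier as MCP tools:

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("admin-tools")

    @mcp.tool()
    def get_user(user_id: str) -> dict:
        """Fetch a user's admin profile by ID."""
        return api_request("GET", f"/admin/users/{user_id}")

    @mcp.tool()
    def recent_signups(hours: int = 24) -> dict:
        """List signups from the last N hours."""
        return api_request("GET", f"/admin/users/recent?since={hours}h")

    if __name__ == "__main__":
        mcp.run()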

    Security Considerations

    A few things to keep in mind:

    1. Scope your tokens. Create admin API tokens with the minimum permissions needed. Don’t use your personal account credentials.

    2. Audit everything. Log every admin API call with the token used, timestamp, and parameters. You want a trail of what Claude (or you) did.

    3. Require confirmation for destructive operations. Either use --dry-run flags or build confirmation into your workflow. Claude should show you what it’s about to do before doing it.

    4. Separate read and write access. Consider having read-only tokens for investigation and separate tokens for operations that modify state.

    5. Don’t expose production databases directly. Always go through your application layer. This gives you validation, audit logging, and the ability to revoke access.

    Putting It Together

    Here’s what a typical debugging session looks like with this setup:

    You: "Users are complaining about not receiving welcome emails. Investigate."
    
    Claude: Let me check recent signups and email job status.
    
    $ ./admin users recent --since 24h --verified
    [shows recent verified users]
    
    $ ./admin logs errors --service email --last 24h
    [2024-01-15 08:15:00] email: SMTP connection timeout to smtp.sendgrid.net
    [2024-01-15 08:15:01] email: Failed to send welcome-email to user 12340
    [2024-01-15 08:15:01] email: Failed to send welcome-email to user 12341
    ...
    
    Looks like the email service had SMTP connection issues starting at 8:15 AM.
    Let me check how many users were affected:
    
    $ ./admin metrics health
    {
      "email_queue_depth": 847,
      "email_error_rate": 0.94,
      ...
    }
    
    There are 847 emails stuck in the queue with a 94% error rate. The issue appears
    to be SMTP connectivity. Once that's resolved, you may want to replay the failed
    jobs. I can show you the affected users if you'd like to manually trigger their
    welcome emails.

    This is the power of Claude Code First Development. You’re not clicking through dashboards or writing one-off SQL queries. You’re having a conversation with an AI that has direct access to your systems through well-designed tooling.

    Build the CLI tools. Expose the APIs. Give Claude the access it needs to help you operate your systems. That’s the future of production debugging.

  • The ACTUALLY Free QR Code Generator

    The ACTUALLY Free QR Code Generator

    No tricks here. You won’t get duped into signing up for a paid service like on those other sites, and you don’t have to log in. I’m not middle-manning your link so I can hold you hostage two weeks from now with an expiring image. And I’m not adding any watermarks or branding.

    HOW TO USE:

    1. Put in your link URL.
    2. Select a color.
    3. Test the QR code with your phone.
    4. Download it.
    5. Use your favorite image editor as you wish.


  • Why Doing It Yourself Is the Ultimate Competitive Advantage

    Why Doing It Yourself Is the Ultimate Competitive Advantage

    In a world where AI is accelerating everything—and the barriers to learning are lower than ever—being a DIY generalist isn’t just a personality quirk. It’s a superpower.

    Here’s why mastering many skills and doing things yourself can set you apart.

    1. You Learn Faster Than You Delegate

    Hiring someone to do something sounds efficient—until you realize you don’t understand what you’re asking for. Learning a skill yourself first gives you context, vocabulary, and a feel for what’s hard vs. easy, expensive vs. cheap.

    When you know how something works, you communicate better, negotiate smarter, and make better decisions.

    2. You Don’t Have to Wait on Anyone

    Speed matters. Especially in the early stages of a project or business. When you can jump in and do it yourself, you avoid delays, blockers, and the endless back-and-forth of delegation.

    DIY lets you ship faster. Period.

    3. You Attract More Opportunities

    People notice when you can actually do stuff. The more skills you develop, the more likely someone is to say, “Hey, can you help with this?” That’s how doors open.

    Being seen as “useful” makes you opportunity-rich.

    4. You Go From Idea to Execution Without Friction

    Most projects die between inspiration and execution. Why? Because there are a hundred tiny skills required to get to the finish line. Writing, editing, designing, coding, publishing…

    The fewer skills you lack, the fewer excuses you have.

    5. You Avoid the Paralysis of Complexity

    When you need others to execute every step, you introduce friction: sourcing talent, communicating needs, aligning timelines, budgeting. That can kill momentum.

    The more you can do yourself, the simpler the project becomes.

    6. You Future-Proof Yourself Against Disruption

    Specialists are increasingly vulnerable to automation. When one tool can replace a tightly defined role, that role disappears. Generalists thrive by adapting, connecting ideas, and solving a wider range of problems.

    In an AI world, adaptability beats specialization.

    7. You Build Confidence and Clarity

    There’s nothing more empowering than getting something across the finish line yourself. That confidence compounds. You don’t wonder if you can do something—you know you can.

    DIY doesn’t just get things done. It makes you unstoppable.

    Final Thought: The DIY Ethos Isn’t About Doing Everything—Forever

    It’s about learning enough to start, to understand, and to execute when you need to. Later, you can delegate—but from a position of strength, not ignorance.