Category: Entrepreneurship

Founder journey, startup lessons, and business strategy

  • From BYOD to BYOA: The New Workplace Shift Nobody’s Naming Yet


    Work has been offloading its infrastructure onto workers for years.

    First the commute. Then the device. Then the office.

    Now the next shift is starting to emerge: bring your own agent.

    Ten years ago, bring your own device was a workplace trend. Employers increasingly expected people to have their own phone, their own laptop, and their own hardware wrapped into the company’s workflow.

    Then remote work pushed the idea further. For a lot of people, it effectively became bring your own office. Your internet. Your desk. Your extra monitor. Your spare bedroom. Your heat. Your coffee. The company still got the output, but more of the working environment moved onto the employee.

    If you go back even further, you can find older versions of the same pattern. In some industries, even getting to work used to be part of the system. Over time that became your car, your gas, your commute, your problem.

    That is why bring your own AI matters.

    Not because it is a catchy acronym, but because it fits a long-running pattern: productive assets keep moving outward from the company and into the hands of the worker.

    And unlike a laptop or a phone, an agent stack is not just a tool. It is accumulated capability.

    This is more than “use ChatGPT at work”

    A lot of people still think AI adoption means opening a chatbot and asking it a few questions.

    That is the beginner version.

    The real edge starts when someone builds a private operating system around their work:

    • prompt libraries refined over months
    • little scripts that clean data, generate reports, or move work between tools
    • retrieval systems and notes that give the model better context
    • review workflows for checking accuracy, tone, and quality
    • persistent agents that can wake up, monitor things, and keep moving
    • multi-agent setups where different models play different roles

    That stack compounds.

    I’ve written before about how I use AI to write and publish blog posts and about building AI-operable systems instead of isolated prompts. The same pattern keeps showing up: the value is rarely in one prompt. The value is in the system around it.

    When somebody builds that system on their own time, on their own machine, with their own habits and history baked into it, they are not just bringing labor to a company anymore.

    They are bringing infrastructure.

    The moat is not the model. It is the context.

    This is where bring your own agent gets much more interesting than bring your own software.

    Software licenses are easy to understand. A company can buy a seat and hand it to anyone.

    An agent stack is different because the most valuable part is often personal.

    The memory lives in your account. The prompt files live in your folders. The judgment about how to scope a task, which tools to call, what good output looks like, and how to audit the result lives in a thousand small decisions you have already made.

    Even the context itself becomes an asset.

    A personal AI system gets better when it has access to your notes, your past work, your frameworks, your examples, your definitions of quality, and the patterns you have trained yourself to follow. That is part of why I built a personal knowledge base over everything I’ve made. The context is not a side detail. It is the advantage.

    That creates a strange boundary.

    If an employee becomes dramatically more productive because of a personal agent stack, how much of that should transfer to the employer? Should the company expect access to the whole system? The prompt library? The memory? The scripts? The evaluation harnesses? The accumulated context?

    That is not a normal software procurement question. It starts to look more like asking someone to show up with their own miniature company attached.

    In software, this is already happening

    The clearest example is coding.

    A growing number of AI-assisted developers are no longer staring at code in the old way all day. They are orchestrating systems that can:

    • write code
    • explain code
    • edit code across multiple files
    • run tests and interpret failures
    • audit for security, style, and performance
    • generate documentation
    • compare different implementation paths
    • review each other and challenge each other

    I’ve written about persistent agents needing a heartbeat and about adversarial agents improving the quality of creative and analytical work. Once you start using these systems seriously, it stops feeling like one person with one tool and starts feeling like one person directing a small team.

    That matters.

    Because when a company hires that person, it is not only hiring judgment and taste. It is hiring the ability to mobilize an entire stack of capability on demand.

    And this is not going to stay inside software.

    Marketing teams will bring campaign-generation systems. Salespeople will bring prospecting and follow-up agents. Operators will bring reporting workflows. Researchers will bring literature-review agents. Writers will bring editorial pipelines. Scientists will bring experiment design and analysis harnesses.

    Whatever the domain is, the pattern is the same.

    The worker who knows how to build and run agents does not arrive alone.

    Better systems create an awkward compensation problem

    From the worker’s side, this is obviously powerful.

    If one person can produce the output of five or ten people because they have better systems, that is a real hiring advantage. It creates independence. It creates negotiating power. It changes what one person can realistically promise to deliver.

    But from the employer’s side, it creates a compensation problem.

    If an employee brings 10x output but gets paid on a normal salary band, most of that upside is captured by the company.

    And in many cases the worker is paying part of the bill.

    They may be covering model subscriptions. They may be covering API costs. They may have spent hundreds of hours building the prompts, scripts, notes, and workflows that make the system useful. They may even be floating the cost for a while and getting reimbursed later, imperfectly, or not at all.

    That is what makes BYOA different from an ordinary productivity tip.

    What looks like a simple efficiency story is also a story about ownership.

    Who paid to build the system? Who owns the context? Who keeps the prompts? Who captures the gains?

    BYOA fits freelancing better than salaried work

    This is why I think bring your own agent will push more people toward freelancing, consulting, and one-person businesses.

    If your real moat is a personal stack of AI systems, then selling outcomes starts to make more sense than selling hours.

    A freelancer can say: here is the result, here is the speed, here is the quality, and here is the price.

    That framing fits AI-powered work much better than a salary band does.

    It also gives the worker a cleaner way to protect the asset.

    Instead of donating their entire operating system into an employer’s workflow, they can keep the system private and sell the output. They can price in the tooling costs. They can improve the stack over time and keep more of the upside for themselves.

    This does not mean normal jobs disappear overnight. But it does mean the center of gravity shifts.

    If companies are trying to hire fewer people and get more output from each one, and if high-performing workers are building private agent systems that dramatically raise what they can do, the natural meeting point is not always full-time employment. Often it is some form of entrepreneurial freelancing.

    That may end up being one of the most important second-order effects of AI at work.

    Companies should get ahead of this now

    Most businesses are still treating AI adoption like a tooling question.

    Should we buy seats? Which model should we use? What policy should we write?

    Those questions matter, but they are not the whole thing.

    The deeper questions are organizational:

    • What should be company-owned versus worker-owned?
    • Are employees expected to use personal agent stacks?
    • If so, who pays for them?
    • If someone builds a workflow that makes them radically more productive, how should that show up in compensation?
    • Should critical workflows live in personal accounts and private folders at all?
    • What happens when the most productive person on the team leaves with the entire system in their backpack?

    Those questions are going to get louder.

    Because BYOA is not just a work habit. It is a form of capital formation at the edge of the company.

    The employee is accumulating productive assets outside the business, then deciding how much of that power to rent back in.

    The shift nobody is naming yet

    Bring your own device felt normal. Then bring your own office started to feel normal. Bring your own agent sounds strange today, but probably not for long.

    The people who will create outsized value over the next few years will not just be good at AI.

    They will know how to build agents, manage context, collect tools, define evaluation loops, and orchestrate systems that keep getting better.

    In other words, they will have built a private factory for thought work.

    That is an amazing opportunity for workers.

    It is also a warning sign.

    Because if people are expected to show up with their own devices, their own office, and now their own agent infrastructure, the obvious next question is this:

    Why rent all of that capability to an employer at a discount?

    The real question is not whether people will bring their own agents to work.

    It is who pays for them, who owns them, and who captures the upside when they do.

  • I Built an AI Agent That Monitors Your Competitors While You Sleep


    Most e-commerce brands are flying blind on competitive intelligence. They rely on a team member manually checking a few competitor sites once a week — if they remember. A competitor drops prices on a Friday afternoon. The team doesn’t notice until Monday. That’s an entire weekend of lost sales to an alert you never got.

    The manual approach doesn’t scale. It doesn’t run on weekends. And it can’t watch pricing pages, Amazon listings, product catalogs, review trends, and ad activity across five competitors at once.

    That’s the problem this project set out to solve.


    The Build Story

    The Competitor Tracker Agent started as a personal frustration. Running a brand means constantly asking: what are competitors doing right now? Are they running a sale? Did they just launch something new? Are their reviews tanking — and is that an opening to capture market share?

    The only honest answer used to be: “I don’t know, and finding out takes too long to be worth it.”

    Here’s the thing — the data isn’t hidden. Competitor pricing is public. Amazon reviews are public. New product launches on Shopify stores are detectable. Google Ads transparency data is accessible. The problem isn’t access to the data. The problem is that gathering it, comparing it to what you saw last week, and then reasoning about what it means — that’s a full-time job.

    So the question became: what if an AI agent could do all of that automatically?

    Building small AI automations has been a recurring theme in this work — the insight from those mini-automation projects was that the highest-leverage moves are rarely the complex ones. You chain a few reliable steps together, automate the repetitive parts, and let the AI handle the reasoning layer. That’s exactly the architecture here.


    What the Agent Actually Does

    The Competitor Tracker Agent runs on a 6-hour scan cycle, around the clock. It monitors four intelligence pillars:

    Price Monitoring

    Tracks competitor pricing across DTC websites and Amazon ASINs. Configurable thresholds mean you only get alerted when it actually matters (say, a change greater than 5%), not every minor fluctuation. It catches flash sales, coupon activity, and Buy Box changes.
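    The threshold idea reduces to a small comparison step. A minimal sketch of that logic — function and field names here are illustrative, not the agent’s actual code:

```python
# Flag a price move only when it exceeds a percentage threshold,
# so minor fluctuations never trigger an alert.

def significant_price_changes(old_prices, new_prices, threshold_pct=5.0):
    """Compare two {sku: price} snapshots; return changes above threshold."""
    alerts = []
    for sku, new_price in new_prices.items():
        old_price = old_prices.get(sku)
        if old_price is None or old_price == 0:
            continue  # new SKU or bad data; handled by a different check
        change_pct = (new_price - old_price) / old_price * 100
        if abs(change_pct) >= threshold_pct:
            alerts.append({"sku": sku, "old": old_price,
                           "new": new_price, "change_pct": round(change_pct, 1)})
    return alerts
```

    Everything below the threshold stays silent, which is what keeps the alert channel trustworthy.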

    Product Intelligence

    Detects new product launches before they’re announced publicly. Shopify stores expose their full product catalog via a public endpoint — a new SKU showing up there at 11pm on a Thursday gets flagged immediately. Discontinuations, variant expansions, and positioning copy changes are all tracked.
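    The launch-detection step can be sketched as a catalog diff. Many Shopify stores serve their catalog at `/products.json` (not all stores leave this enabled); the helper names below are illustrative, and only the diff logic is shown in full:

```python
# Detect new SKUs by diffing the current catalog scan against the last one.
import json
from urllib.request import urlopen

def fetch_catalog(store_url):
    """Download the first page of a store's public product catalog."""
    with urlopen(f"{store_url}/products.json?limit=250") as resp:
        return json.load(resp)["products"]

def new_products(previous, current):
    """Return products present in the current scan but not the last one."""
    seen = {p["handle"] for p in previous}
    return [p for p in current if p["handle"] not in seen]
```

    A SKU that appears in the current scan but not the previous one is a launch candidate, whether or not it has been announced.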

    Review and Sentiment Analysis

    Monitors Amazon review counts and star ratings over time. When a competitor’s ratings start declining — say, dropping from 4.3 to 4.0 over 30 days — that’s a signal. It means customers are unhappy, and if you’re selling in the same category, that’s an opening. The agent surfaces these trends before they show up in your own sales data.
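    A decline like 4.3 to 4.0 over 30 days is just a windowed comparison over the rating history. An illustrative sketch (the agent’s real trend logic is richer than this):

```python
# Compare the average rating in the most recent window against the
# window before it, and flag sustained declines.

def rating_decline(daily_ratings, window=30, min_drop=0.2):
    """daily_ratings: chronological list of average star ratings, one per day.
    Returns (is_declining, drop) comparing the last `window` days to the
    `window` days before that."""
    if len(daily_ratings) < 2 * window:
        return (False, 0.0)
    prev = sum(daily_ratings[-2 * window:-window]) / window
    recent = sum(daily_ratings[-window:]) / window
    drop = round(prev - recent, 2)
    return (drop >= min_drop, drop)
```

    Averaging over windows rather than comparing single days filters out one-off review bombs.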

    Ad and Campaign Monitoring

    Tracks competitor advertising activity via Google Ads Transparency Center and Amazon Sponsored placements. When a competitor pivots their messaging or launches a new campaign targeting terms they’ve never used before, that signals a strategic shift worth knowing about.


    The Tech Behind It

    The agent is built in Python with Claude AI as the reasoning layer. Here’s the stack:

    • Web scraping layer — Custom scrapers for competitor DTC sites, Shopify catalog endpoints, and Amazon product pages. Rotating request intervals to stay within reasonable limits.
    • Amazon monitoring — ASIN-level tracking for pricing, review counts, BSR, and ad placements via public data and optional SP-API integration.
    • Ad intelligence — SerpAPI for Google Shopping and Ads Transparency Center data; Amazon Sponsored Brands detection from search result pages.
    • Claude AI for analysis — Raw data gets fed into Claude with context about what changed since the last scan. Claude reasons about whether a change is significant, what it likely means strategically, and what action to take. This is the part that makes it genuinely useful rather than just another data dump.
    • Slack integration — Alerts fire within minutes of a significant change being detected. The daily briefing is a structured report generated every weekday at 8am.

    The agent also maintains persistent memory across scans — tracking trends over weeks and months, not just comparing today against yesterday. That historical context is what lets it say things like “Acme’s prices are at a 6-month low” rather than just “price changed.”
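    The “6-month low” claim is only possible because every scan is kept, not overwritten. A minimal sketch of that persistent-memory idea — schema and function names are illustrative:

```python
# Keep every scan in SQLite so analysis can ask historical questions
# ("lowest price in 6 months?") instead of only diffing against yesterday.
import sqlite3

conn = sqlite3.connect(":memory:")  # the real agent persists to a file on disk
conn.execute("""CREATE TABLE price_scans (
    competitor TEXT, sku TEXT, price REAL, scanned_at TEXT)""")

def record_scan(competitor, sku, price, scanned_at):
    conn.execute("INSERT INTO price_scans VALUES (?, ?, ?, ?)",
                 (competitor, sku, price, scanned_at))

def is_lowest_in_window(competitor, sku, price, since):
    """True if `price` is at or below every recorded scan since `since`."""
    row = conn.execute(
        """SELECT MIN(price) FROM price_scans
           WHERE competitor = ? AND sku = ? AND scanned_at >= ?""",
        (competitor, sku, since)).fetchone()
    return row[0] is None or price <= row[0]
```

    With that table in place, the reasoning layer can be handed statements like “this is the minimum over the window” rather than raw numbers.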

    This fits into a broader pattern of thinking about AI as infrastructure rather than as a one-off tool. The post on growth engineering with Claude Code explored this — when you treat AI as the reasoning engine inside a persistent automated system, you get compounding returns that a prompt-and-response workflow never will.


    What the Morning Briefing Looks Like

    Every weekday at 8am, a structured report lands in a dedicated Slack channel. Here’s what a typical Friday briefing looks like:

    Price Intel: Acme Corp dropped Widget Pro from $34.99 to $27.99 (-20%). Flash sale, likely ends Sunday. Their lowest price in 6 months.

    Product Intel: BrandX quietly added “Pro Max Bundle” to their Shopify store. Not announced publicly. $89 price point — a new premium tier.

    Review Intel: No major rating changes. BrandX trending slightly down: 4.1 to 4.0 stars over 30 days.

    Ad Intel: Acme Corp added 3 new Google Shopping ads this week targeting “budget widget” and “affordable widget 2026” — consistent with their price drop strategy.

    Recommended Actions:

    1. Consider a targeted counter-promotion this weekend while Acme’s prices are low — capture price-sensitive shoppers before they return to normal pricing.
    2. Investigate BrandX’s Pro Max Bundle. If it gains traction, it could pressure mid-tier SKUs.
    3. BrandX’s review decline is an opening — consider increasing PPC bids on their branded terms.

    The key distinction is the recommendations section. Raw data is noise. The agent uses Claude to reason about what the data means in context and what to do about it. That’s the difference between a monitoring tool and actual intelligence.


    The DIY Competitive Advantage

    There’s a strong argument for building tools like this rather than buying off-the-shelf software. Enterprise competitive intelligence platforms like Crayon and Klue exist — but they’re built for B2B SaaS companies, start at $15,000+ per year, and track PR and content rather than pricing and Amazon reviews. They’re solving a different problem.

    The do-it-yourself advantage is that a custom-built system can be tuned exactly to the competitive landscape at hand. Which competitors matter. Which price changes actually warrant a response. Which product categories to watch. That specificity is what turns monitoring into actionable intelligence.


    What This Becomes

    Competitive intelligence at this level of depth and automation wasn’t accessible to small and mid-size e-commerce brands before. It required a dedicated analyst, an expensive platform, or a lot of manual work that was never consistent enough to be reliable.

    The agent changes that calculus. Tell us who your competitors are, and we install a monitoring system tailored to your market in under two weeks. Scans run every 6 hours. Alerts arrive in real time. The morning briefing is waiting before the team starts their day.

    The parallel to building AI agent systems that handle complex, multi-step reasoning tasks is clear: the value isn’t in any single AI call, it’s in the architecture that chains intelligence together into something that runs continuously without human intervention.


    Full Details and Demo

    The full service page — including pricing, the complete feature breakdown, and a sample Slack report — is at mattwarren.co/competitive-intelligence.

    If this is a problem your brand is dealing with, book a free 30-minute competitor audit. Walk away with a competitive landscape snapshot whether you buy or not.

  • I Couldn’t Afford an Executive Coach, So I Built One


    Over the weekend I was talking with a high-level executive coach. Smart person. Real deal. Halfway through the conversation, they offered me a spot in their group program — a dozen people, regular group sessions, accountability framework, the whole package.

    I passed.

    Not because it wasn’t valuable. It clearly was. But the price point was higher than I wanted to commit to, and the weekly time requirement was more than my schedule could absorb right now. So I said thanks, thought about it for about 20 minutes, and then opened Claude Code.

    Here’s what I built instead.

    The idea that sparked it

    The coaching conversation had been happening over text. Just back-and-forth messages, advice trickling in throughout the day. What struck me about that format wasn’t the content — it was the delivery mechanism.

    You don’t go get it. It comes to you.

    That’s a fundamentally different experience than opening ChatGPT and typing a question. When a message shows up on your phone unprompted, your brain processes it differently. There’s a social reflex that kicks in. Someone reached out. Someone is thinking about you and your goals. Even if intellectually you know it’s a bot, the messenger app context does something to the accountability equation that a browser tab simply doesn’t.

    So the question became: could an AI agent replicate that dynamic? Not just a chatbot that answers questions, but something that runs persistently, thinks about your situation in the background, and reaches out to you when it has something worth saying?

    The build: research first

    The first thing I did was ask Claude Code to go do research. Not write code — just go learn things. I sent it out to find scientific papers, behavioral research, and business frameworks on what actually makes executive coaching effective. What questions do good coaches ask? How do they maintain accountability? What cadences work? How do you help someone stay focused on priorities without becoming a nag?

    It ran for about 20 minutes, pulling from multiple sources, organizing findings into a structured research document. The output was genuinely useful — not just “here are some coaching tips” but a breakdown of the behavioral psychology behind why coaching works, what distinguishes great coaches from mediocre ones, and the specific techniques that show up consistently in the research.

    That document became the knowledge base for everything that followed.

    From research to software plan

    Once the research was solid, I asked it to turn those findings into a software plan. Here’s what that plan centered on:

    A Telegram bot as the interface. Not a web app. Not a new chat window you have to go find. A bot that lives in your existing messaging app, alongside your other conversations, and behaves like a contact in your phone. This was non-negotiable from the start — the whole point was that the interface creates accountability, and that only works if it’s somewhere you already check.

    Proactive scheduling. The research consistently highlighted a morning check-in as one of the highest-leverage interventions in any coaching relationship. What are your top three things to accomplish today? Simple question, but when asked by a person (or something that feels like a person), it creates a kind of micro-commitment that the end of the day will test. The bot would send this every morning, unprompted.

    Evening accountability. Paired with the morning check-in is an end-of-day follow-up. Did you accomplish those three things? If not, what got in the way? This is where accountability becomes real. It’s easy to type your priorities and then ignore them. It’s harder when something is going to ask you about them later.

    A memory system. This was the piece that made everything else worth building. A good coach remembers what you told them last week. They notice patterns. They connect what you said in January to something you’re struggling with in March. Without memory, a coaching bot is just a fancy prompt. With it, the conversations compound. I asked for a SQLite database and a system that would pull relevant context into each interaction — what goals had been discussed, what came up in recent check-ins, what had been going well or poorly.
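    The memory layer described above can be sketched in a few lines: log every exchange to SQLite, then pull recent context into each new interaction. Table and function names here are illustrative, not the actual implementation:

```python
# Log every coach/user exchange and retrieve recent context for the prompt.
import sqlite3

db = sqlite3.connect(":memory:")  # the real bot persists to a file
db.execute("CREATE TABLE checkins (ts TEXT, role TEXT, message TEXT)")

def log(ts, role, message):
    db.execute("INSERT INTO checkins VALUES (?, ?, ?)", (ts, role, message))

def recent_context(limit=10):
    """Most recent exchanges, oldest first, ready to prepend to a prompt."""
    rows = db.execute(
        "SELECT ts, role, message FROM checkins ORDER BY ts DESC LIMIT ?",
        (limit,)).fetchall()
    return list(reversed(rows))
```

    Everything beyond this — summarizing old threads, surfacing patterns — is the agent reading from and writing to this store.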

    What it does, concretely

    The resulting application is straightforward in structure but surprisingly capable in practice. It runs as a persistent background process on my computer. When it starts up (configured to launch on login), it connects to Telegram and starts listening. A scheduler runs alongside it.

    Every morning, before I’ve opened a browser, there’s a message waiting:

    Good morning. What are your three most important things today?

    I type back. The conversation might go a few exchanges. The bot has context from previous days, so it might note that one of the things I mentioned connects to a goal we discussed earlier in the week, or ask whether the thing I said was blocking me last Thursday got resolved.

    At the end of the day, there’s a follow-up. At the end of the week, a slightly longer check-in. The system also has a lightweight internal process that evaluates whether anything in my recent history is worth proactively surfacing — something that’s been flagged multiple times, a deadline approaching, a thread that went quiet. Most days it decides there’s nothing urgent to interrupt with. Occasionally it sends something.

    That’s the whole system. It’s not complicated. But the experience of using it is substantially different from anything I’ve tried before.
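    The cadence above — a morning question, an evening follow-up — comes down to simple scheduling logic. A stdlib-only sketch of the “when does the next check-in fire” step (the actual bot delegates this to a scheduler; the function name is illustrative):

```python
# Given "now", compute when the next daily morning check-in should fire.
from datetime import datetime, time, timedelta

def next_checkin(now, at=time(8, 0)):
    """Next occurrence of the daily check-in time strictly after `now`."""
    candidate = datetime.combine(now.date(), at)
    if candidate <= now:
        candidate += timedelta(days=1)
    return candidate
```

    The scheduler sleeps until that timestamp, sends the message, and recomputes.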

    Why this actually works

    Here’s the thing about accountability: it’s a social phenomenon. The research makes this clear. People keep commitments better when they feel that someone is tracking. It doesn’t matter much whether the tracker is a person, a journal, or a piece of software — what matters is the sense that your stated intentions have a witness.

    Web apps and chatbots fail at this because they require you to initiate. You have to go there, open it, decide to engage with your goals. That friction is small in theory and enormous in practice. The days you need accountability most are the days you’re least likely to open the accountability app.

    A Telegram bot sidesteps this entirely. It comes to you. The interface is indistinguishable from a message from a real person. On some level your brain doesn’t fully process the distinction, and that’s exactly the point.

    After a few days of using this, the morning question has started to feel like a real thing I need to respond to. The end-of-day check-in has made me more honest with myself about what actually got done. I’ve written before about the challenge of maintaining focus — this is the most practical solution I’ve found.

    The bigger shift

    I keep coming back to one observation from this project: at the end of it, I felt like I’d hired a coach. Not built an app. Hired someone.

    That reframe is worth sitting with. There’s a long history of software built around the idea of helping people with productivity, goal setting, and accountability. Most of it has failed to change behavior in any meaningful way because it treats the problem as an organizational challenge. Here’s a system to track your goals. Here’s a dashboard. Here’s a way to categorize your tasks.

    What actually changes behavior is a relationship. Someone who asks how it’s going, remembers what you said, and expects you to show up. Software has historically been incapable of this because it’s reactive — it waits for you.

    The combination of a persistent agent, a messenger interface, and a memory system produces something that isn’t quite software and isn’t quite a relationship. It’s something new. And it works because it targets the actual mechanism of accountability rather than building another dashboard nobody opens.

    Building something like this

    If this sounds useful, the pattern is reproducible. You’re not building anything exotic. The components are:

    • A Telegram bot — the interface; the python-telegram-bot library handles this in about 50 lines.
    • A scheduler — APScheduler or a simple cron-like structure for proactive messages.
    • A memory layer — SQLite is more than sufficient; just store conversations and let the agent summarize and retrieve them.
    • A knowledge base — the research Claude Code collected became the system prompt that shapes every interaction.
    • A persistent process — a simple Python script set to run on startup, or a systemd service if you want something more robust.

    The whole thing lives in a folder on your computer. No hosting required. No subscriptions. Accessible anywhere through Telegram.

    Claude Code handled the research, the architecture, and the implementation in a single session. The approach I use for building with AI — research first, then architecture, then implementation — works especially well for projects like this because the research phase directly shapes the software design.

    Software that runs forever

    There’s something that becomes obvious once you build one of these persistent agents and live with it for a few days: this is a fundamentally different category of software.

    We know command-line tools, web apps, desktop apps, and mobile apps. These are all things you go get when you need them. They sit in a menu or a browser tab or an app drawer, waiting to be opened.

    Persistent agents are different. They run in the background. They monitor things. They decide when something requires your attention. They interrupt you only when warranted. The interface is a chat — a format your brain associates with people, not programs.

    This is where more software is going. Not apps you download — agents you deploy. Processes that run on your hardware (or in the cloud), maintain memory across months of interaction, and have access to tools that let them actually do things on your behalf. The executive coach is a simple example. The same architecture could monitor your business metrics and alert you when something looks off. It could track your health data and notice patterns before you do. It could manage a process — customer follow-ups, content scheduling, financial reporting — and only surface the items that genuinely need your judgment.

    The paradigm shift isn’t about AI getting smarter. It’s about software becoming proactive. That transition is happening fast enough that most people haven’t noticed it yet, but once you’ve lived with a persistent agent for a week, you’ll wonder why all your software was passive.

    I passed on the coaching program. A few hours later I had a coach. That’s the most interesting part of all of this — not the technology, but what becomes possible when the barrier to building drops to nearly zero.

  • What You’re Really Avoiding Isn’t the Work


    Everyone has a version of this. A category of work that sits on the to-do list for weeks, then months, slowly accumulating guilt. For some founders it’s legal. For others it’s HR, compliance, or investor reporting. For me, it’s always been accounting.

    Not because I can’t do math. Because every time I opened QuickBooks, I’d feel the weight of everything I didn’t understand, and I’d close the tab. There’s always something more urgent than confronting what you don’t know.

    This week I finally sat down and did all of it. Reverse-engineered spreadsheets. Audited our QuickBooks accounts. Found missing payables. Fixed miscategorized transactions. Worked through international currency adjustments. Even handled an off-the-books equity correction I’d been dreading for longer than I’d like to admit.

    And here’s the part I didn’t expect: it was actually kind of fun.

    The difference wasn’t discipline. It was having AI as a collaborator. And the reason that mattered has nothing to do with accounting specifically.

    The real barrier is shame

    Think about the task you’ve been avoiding. Now think about why.

    It’s probably not because the task itself is impossibly hard. It’s because there’s a gap between what you know and what you’d need to know to do it confidently, and closing that gap feels expensive. You’d have to ask someone. That someone is busy, or expensive, or both. And the questions you need to ask feel like they should be obvious.

    That was my relationship with accounting for years. Accountants always seem busy. When I’d get on a call with mine, I’d feel the clock ticking. Every question felt like it should be obvious. Do I really need to ask what a trial balance is? Can I admit I don’t understand why this line item is negative? Is it okay to not know the difference between cash-basis and accrual?

    So you nod along, say “makes sense,” and leave the call having learned nothing. Then you avoid the whole topic for another month.

    This is the shame barrier. It’s not a knowledge problem. It’s a help-access problem. The help exists, but the social cost of accessing it is high enough that you just… don’t.

    What happens when the shame disappears

    When I sat down with Claude Code this week and started working through our financials, I could ask anything. Literally anything.

    “What does this column mean?” No judgment. “Why is this number negative when we received money?” Clear explanation. “Walk me through how this journal entry should work.” Step by step, as many times as I needed.

    I went deep on things I’d been skating past for years. The nuances of our P&L statement. How the balance sheet connects to the trial balance. Why certain transactions were showing up in the wrong categories. What our cash flow statement was actually telling me versus what I assumed it was telling me.

    Each question led to a better question. And because I wasn’t worried about wasting someone’s time or looking dumb, I kept going. I’d ask a follow-up, then another, then branch into something related. It was the first time accounting felt like learning instead of an exam I was failing.

    If you’ve ever had a mentor who made you feel safe asking the dumb questions, you know how much faster you learn in that environment. AI gives you that dynamic on demand, in any domain, at any hour.

    The concrete results

    This wasn’t a vague learning exercise. I worked through real problems in our actual books:

    Reverse-engineered inherited spreadsheets. We had several financial spreadsheets maintained by different people over time. I fed them to Claude and asked it to explain what each one was tracking, how the formulas worked, and where there were inconsistencies. It found things that had been wrong for months. If you’ve ever inherited a spreadsheet from someone who left the company and spent hours trying to figure out what it was supposed to do, AI turns that from hours to minutes.

    Audited QuickBooks categories. Transactions miscategorized across multiple accounts. Expenses in the wrong cost centers. Payables missing entirely. Claude walked me through each one, explained what the correct category should be and why, and helped me make the corrections.

    Handled the stuff I’d been avoiding. International currency adjustments. An equity correction I didn’t fully understand the accounting treatment for. Reconciliation of accounts that hadn’t been reconciled in too long. These are the kinds of things where I’d normally email the accountant, wait three days, get an answer I half-understood, and still feel uncertain about whether it was done right.

    Thought through the strategic questions. Beyond the bookkeeping, I used the conversation to think through bigger questions. I’ve thought about managing cash flow before, but this was different. What are our actual options right now? What interest rate is expensive versus reasonable for our situation? What are the trade-offs between different funding approaches? These aren’t strictly accounting questions, but they live in the same “financial stuff I’m uncomfortable with” bucket, and having a patient conversation partner made them approachable.

    The pattern worth noticing

    Here’s what I want you to take from this. It’s not “use AI for accounting,” although you should.

    Every business owner has domains they understand well and domains where they’re faking it. For me, product development, marketing, and technical infrastructure are comfortable territory. Finance has always been the thing I know I should understand better but never prioritize learning. It’s a version of the fear of the unfamiliar that I think most founders carry around quietly.

    AI doesn’t replace the expert. I still need a CPA for tax strategy and compliance. But it fills the gap between “I know nothing” and “I know enough to have a productive conversation with my accountant.” That middle layer of competence is what most people skip, and it’s exactly where AI excels.

    Before this week, my accounting approach was “send everything to the accountant and hope for the best.” Now I actually understand what’s in our books. I can read a P&L and know what I’m looking at. I can spot when something looks wrong. That upgrade happened because the learning barrier dropped to zero.

    Apply this to your thing

    This keeps happening. Tasks I’ve been dreading turn out to be approachable, even enjoyable, once I have a collaborator that’s patient, knowledgeable, and available whenever I’m ready to work. It happened with growth engineering. It happened with the small automations that add up. Now it’s happened with accounting.

    The common thread is that the barrier was never ability. It was the friction of getting help. AI removes that friction, and suddenly the things you’ve been avoiding become the things you’re making progress on.

    So here’s my challenge to you: think about the task that’s been sitting on your list the longest. The one you keep bumping to next week. Ask yourself whether the problem is really that the task is hard, or whether the problem is that you don’t have a safe, low-cost way to close your knowledge gap.

    If it’s the second one, you might be surprised at what happens when you just start asking questions.

  • Building a Personal Knowledge Base: How I Created a Semantic Search Engine Over Everything I’ve Ever Made

    Building a Personal Knowledge Base: How I Created a Semantic Search Engine Over Everything I’ve Ever Made

    I’ve been creating content for years. YouTube videos, blog posts, tweets, podcast appearances, internal docs for my company. Thousands of pieces scattered across platforms and folders.

    Here’s the problem: I can’t remember what I’ve said.

    Not in a concerning way. In a “did I already share that framework?” or “what was that thing I said about distribution vs product?” way. My past content exists, but I can’t access it when I need it. When I sit down to write something new, I’m starting from scratch instead of building on foundations I’ve already laid.

    The Inspiration

    I was listening to a podcast where Caleb Ralston (a personal branding creator on YouTube) mentioned that his team had built an “AI database” of all his historical content. They transcribed every video he’d ever appeared in and turned it into something searchable. It let them understand his existing talking points, find frameworks he’d already developed, and maintain consistency across content.

    The concept stuck with me. What would it look like to build something similar for myself?

    What I Built

    A local semantic search engine that can answer questions about my own content. The entire system runs on my laptop. No cloud services, no API costs after setup, complete privacy.

    The stack is surprisingly simple:

    • ChromaDB for vector storage
    • Ollama for local embeddings (nomic-embed-text model)
    • Python script to ingest and query
    • Markdown as the universal format

    Total setup: maybe 200 lines of code.

    How It Works

    1. Collect content – YouTube transcripts (downloaded via yt-dlp), blog posts, docs, anything in text form
    2. Chunk it – Split documents into ~500 word segments with overlap
    3. Embed it – Convert each chunk to a vector using Ollama locally
    4. Store it – ChromaDB persists everything to disk
    5. Query it – Semantic search returns relevant chunks for any question
    # Ingest all content
    uv run build-kb.py --ingest
    
    # Ask questions
    uv run build-kb.py --query "What have I said about content systems?"
    uv run build-kb.py --query "My thoughts on distribution vs product"
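    Step 2 is where most of the real logic lives. Here’s a minimal sketch of word-based chunking with overlap — the ~500-word size comes from the post, but the 50-word overlap and the function name are my own choices, not the actual build-kb.py code:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into ~chunk_size-word segments, each sharing `overlap`
    words with the previous chunk so ideas that straddle a boundary
    still appear intact in at least one chunk."""
    words = text.split()
    if len(words) <= chunk_size:
        return [text] if words else []
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks
```

    The overlap matters more than it looks: without it, a sentence split across a chunk boundary would be invisible to any single query result.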

    The “semantic” part matters. I’m not doing keyword matching. When I ask about “content systems,” it returns chunks that discuss workflows, automation, and publishing pipelines—even if those exact words aren’t used. The embedding model understands meaning, not just strings.
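    Under the hood, “meaning” is geometry: each chunk and each query becomes a vector, and results are ranked by cosine similarity between them. A toy illustration with made-up 3-dimensional vectors (real nomic-embed-text embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-d "embeddings" for illustration only.
query = [0.9, 0.1, 0.0]           # "content systems"
workflow_chunk = [0.8, 0.2, 0.1]  # discusses publishing pipelines
recipe_chunk = [0.0, 0.1, 0.9]    # unrelated topic

# The workflow chunk points in nearly the same direction as the query,
# so it ranks higher even though it never uses the words "content systems".
assert cosine_similarity(query, workflow_chunk) > cosine_similarity(query, recipe_chunk)
```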

    The Obsidian Connection

    Here’s where it gets interesting.

    My entire working directory is a folder of markdown files. Blog posts, notes, drafts, transcripts—all .md files in a structured hierarchy. That folder is also an Obsidian vault.

    Obsidian gives me:

    • Visual browsing – Navigate content through a nice UI
    • Linking – Connect related ideas with [[wiki-style links]]
    • Graph view – See how concepts cluster together
    • Search – Quick full-text search when I know what I’m looking for

    The knowledge base adds:

    • Semantic search – Find content by meaning, not keywords
    • Cross-reference discovery – “What else have I said that’s similar to this?”
    • Topic clustering – Analyze patterns in what I talk about most

    They complement each other. Obsidian for browsing and organizing. The knowledge base for querying and discovering.

    What I Discovered

    After ingesting ~400 chunks from my content, I ran an analysis to find topic clusters. The results were illuminating:

    | Topic | Frequency |
    | --- | --- |
    | Claude Code / AI automation | 86 mentions |
    | Content systems & workflows | 75 mentions |
    | Marketing & business | 106 mentions |
    | Founder productivity / goals | 62 mentions |

    The phrase “claude code” appeared 38 times in my personal brand content. “Content” appeared 131 times. These are the themes I return to constantly.

    More useful than the raw counts were the semantic clusters. When I queried “What have I said about content systems?”, I got back chunks from:

    • A blog post about growth engineering with Claude Code
    • A YouTube video called “Creating a Content System”
    • Internal documentation about creative direction

    Content I’d forgotten I made. Ideas I’d already articulated that I can now build on instead of recreating.

    The Broader Pattern

    This is part of something I’ve been calling “growth engineering”—treating marketing infrastructure like software infrastructure. The knowledge base is one component.

    The full system looks like this:

    Working Directory (Obsidian Vault)
    ├── posts/           # Blog content
    ├── content/         # Thought leadership drafts
    ├── knowledge-base/  # Vector DB + scripts
    │   ├── youtube-transcripts/
    │   ├── chroma-db/
    │   └── build-kb.py
    └── products/        # Product pages and docs

    Everything is markdown. Everything is version controlled. Everything is queryable.

    When I want to write something new:

    1. Query the knowledge base: “What have I said about [topic]?”
    2. Review existing content in Obsidian
    3. Build on what exists instead of starting fresh
    4. Publish through the same markdown → WordPress pipeline

    The AI isn’t writing my content. It’s helping me remember and organize what I’ve already created. The knowledge base becomes institutional memory for a one-person operation.

    How to Build Your Own

    If you want to try this, here’s the minimal setup:

    1. Install Ollama

    brew install ollama
    ollama serve
    ollama pull nomic-embed-text

    2. Create the ingestion script

    The core is maybe 100 lines. Collect documents, chunk them, embed them, store them in ChromaDB. The full script is in my knowledge-base repo.

    3. Point it at your content

    YouTube transcripts are easy:

    yt-dlp --write-auto-sub --sub-lang en --skip-download \
      "https://www.youtube.com/@your-channel"
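    yt-dlp writes those auto-subs as .vtt files full of timing cues and duplicated caption lines, so they need cleanup before ingestion. A minimal sketch of the idea — the function name is mine, and real auto-generated captions have more quirks than this handles:

```python
import re

def vtt_to_text(vtt: str) -> str:
    """Strip a WebVTT transcript down to plain text: drop the header,
    cue timing lines, inline tags, and consecutive duplicate lines
    (auto-captions repeat each line as it scrolls)."""
    lines = []
    for raw in vtt.splitlines():
        line = raw.strip()
        if not line or line.startswith(("WEBVTT", "Kind:", "Language:", "NOTE")):
            continue
        if "-->" in line:  # cue timing, e.g. 00:00:01.000 --> 00:00:04.000
            continue
        line = re.sub(r"<[^>]+>", "", line)  # inline tags like <c>
        if lines and lines[-1] == line:
            continue  # consecutive duplicate from scrolling captions
        lines.append(line)
    return " ".join(lines)
```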

    Markdown files just need to be in a folder. The script recursively finds them.
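    That recursive discovery step can be a one-liner with pathlib. This is a sketch, not the actual script — the function name is mine, and skipping hidden directories (so Obsidian’s own .obsidian folder doesn’t get ingested) is an assumption:

```python
from pathlib import Path

def find_markdown(root: str) -> list[Path]:
    """Recursively collect every .md file under `root`,
    skipping hidden directories like .obsidian or .git."""
    root_path = Path(root)
    return sorted(
        p for p in root_path.rglob("*.md")
        if not any(part.startswith(".") for part in p.relative_to(root_path).parts)
    )
```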

    4. Query away

    uv run build-kb.py --query "your question here" -n 10

    The embedding model runs locally. No API keys needed after you pull the model. Completely private—your content never leaves your machine.

    The Meta Layer

    There’s something recursive about using AI to build the system that helps me leverage AI.

    Claude Code helped me write the ingestion script. It helped me debug the VTT parsing for YouTube transcripts. It helped me analyze the topic clusters. Now the knowledge base feeds context back into Claude Code when I’m working on new content.

    The tools build the tools that improve the tools.

    That’s the pattern I keep returning to. Not “AI writes my content” but “AI amplifies my ability to create and connect my own content.” The knowledge base doesn’t have opinions. It has receipts—everything I’ve said, searchable by meaning.

    For someone building a personal brand, that’s the foundation. Know what you’ve said. Build on it. Be consistent AND repetitive but in unique ways. Let the system remember so you can focus on what’s new.

  • Merchant Cash Advance to Annual Interest Calculator

    Merchant Cash Advance to Annual Interest Calculator

    In eCommerce, merchant cash advances are common financing offerings. Both Shopify and Amazon have integrated financial products built on your sales data. The offers are often quite appealing: borrow $10,000 and pay back $11,000 over 6 months.

    At face value that looks like a fair market rate, roughly 10% in this case. However, it’s a bit deceiving.

    If you had instead borrowed with a term loan, the interest charges would shrink as you paid down the principal.

    So with a 6-month term loan at 10% APR and monthly payments, borrowing $10,000 would cost only about $10,294 in total. Not $11,000. That’s a big difference!

    Here’s a calculator to find the equivalent annualized interest rate for a merchant cash advance. Use it to compare the cost of credit more fairly and decide whether or not you’d be better off with credit cards.
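    For the curious, here’s roughly the math such a calculator runs. This is a sketch of the approach, not the calculator’s actual code: it assumes the advance is repaid in equal monthly installments (total payback divided by months) and solves, by bisection, for the monthly rate at which a term loan’s payments would have the same present value.

```python
def mca_equivalent_apr(advance: float, payback: float, months: int) -> float:
    """Find the annualized rate (nominal APR, monthly rate x 12) of a
    term loan whose equal monthly payments cost the same as the advance."""
    payment = payback / months

    def present_value(monthly_rate: float) -> float:
        # Present value of `months` equal payments at the given monthly rate
        return payment * (1 - (1 + monthly_rate) ** -months) / monthly_rate

    # Bisect on the monthly rate: a higher rate means a lower present value.
    lo, hi = 1e-9, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if present_value(mid) > advance:
            lo = mid
        else:
            hi = mid
    return 12 * (lo + hi) / 2

# The $10,000 -> $11,000 over 6 months example works out to roughly 34% APR.
```

    That’s the deception in a nutshell: a “10% fee” repaid over six months prices out like a loan at roughly 34% annualized.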

    MCA to Annual Interest Calculator (Monthly Payments)

    Enter your merchant cash advance details below:

  • Founder Fuel: How do you Manage your Business Finances and Cash Flow?

    Founder Fuel: How do you Manage your Business Finances and Cash Flow?

    Sometimes a business is in growth mode and it makes sense to spend money to buy revenue; other times cash flow is king and the focus is on accounts receivable and payable. This stuff is hard and involves a bunch of things I rarely hear people talk about.

    If you have your fingers in the accounting system like I do, there are some tools that make the bookkeeping just a little less tedious and give you visibility into where the money is coming from and going.

    Here’s the current software in my finance stack:

    1. Dext – automatically parses invoices and receipts, categorizes them, and sends them into QuickBooks
    2. QuickBooks Online – love it or hate it, it’s a de facto standard that’s hard to avoid for profit and loss statements and balance sheets, and it’s the double-entry accounting final resting place for all the numbers
    3. A2X – tears open the payouts sent from Amazon and Shopify into the more granular transactions that make them up. These import into QBO, where it’s then possible to see revenue, payment processing fees, returns, and other transactions split out
    4. Custom software – there are a few things A2X can’t handle, and for those we have some custom tools (nice to have a software dev in-house). They help get COGS numbers across all sales channels and display nicer reports
    5. Google Sheets – the most versatile tool for ad hoc models and tracking things. The flexibility and shareability are hard to beat
    6. Notion – the home of all documented SOPs. These how-to guides are incredibly helpful for making sure recurring tasks are understood and done consistently each time
    7. Banks with virtual cards – virtual cards make it easier to give employees a card for paying expenses, and lower the risk that one compromised card means a bunch of things need to be updated. I’m using Relay in the US and Vault in Canada

    Is there anything I wish could be done better? Yes. The custom software is kind of a pain, mostly because many of the systems it interacts with don’t have APIs, so we have to rely on fragile ways to get the data.

    And that’s it for this Daily Founder Fuel Journal entry.

  • Founder Fuel: How do you measure the success of your marketing efforts?

    Founder Fuel: How do you measure the success of your marketing efforts?

    At different times in a business, success means different things. Sometimes it’s measured in likes and views, other times in ROAS or ACoS, other times in brand recall. But as with most things in business, if you don’t measure it, you aren’t in control of it. So KPIs for your marketing efforts are an important part of understanding whether things are working as intended.

    I have been drawn to sales and direct marketing approaches in the past because the metrics are easy to collect, and I think most small companies should start there. Direct testing enables quicker learning, which can then form the backbone of a brand, whereas jumping straight into brand marketing usually requires a lot of assumptions.

    With one-on-one sales you get immediate, clear anecdotal feedback about what customers want. Then expand with direct marketing ads to scale that messaging to 10,000+ people, further refining the targeting and testing the market. Finally, layer in the branding in a way that applies everything you’ve learned so far.

    Easier said than done. People feel compelled to start with a logo.

    As you get more sophisticated, something I have not yet explored is Marketing Mix Modeling to measure the effectiveness of marketing. With a complex set of marketing channels competing for spend, it’s probably the best way to assess how they are all affecting sales.

    That’s how I think about measuring the success of marketing. This journaling prompt came from Daily Founder Fuel, a very short daily newsletter that contains a journaling prompt for founders, entrepreneurs, and business owners.

  • Founder Fuel: How do you motivate and inspire your team?

    Founder Fuel: How do you motivate and inspire your team?

    Leaders can lead in different ways, but some of the best leaders have been truly inspirational. They may inspire people to invent new things, or produce better work, or just provide enough motivation to put in 5 extra minutes. The Big Hairy Audacious Goals (BHAGs) can work, but can also backfire if people don’t believe they’re possible.

    It’s been really enlightening to see that clear vision and inspiration can break through walls, and that once those walls are broken, others quickly follow behind. It happens in sport: once the four-minute mile was broken, more and more runners did what was previously thought impossible. It happens all the time in technology too: now that SpaceX has landed rockets, Chinese companies are copying it.

    Inspiration is required to get the first people to unlock these barriers.

    How do I inspire? I hope to inspire by setting a good example. By working hard. By being informed enough to provide good reasons for decisions. But it is hard. Inspiring is not a skill learned in high school, though perhaps inspirational writing should be taught right after persuasive writing.

    I believe that more often than not, inspiration comes from good storytelling. The story you tell yourself, or the one you tell others.

    It’s not something that I feel confident I have excelled at in practice. But through writing down these thoughts I have a little bit more clarity on how I might learn and improve.

    For more inspirational journal prompts subscribe to Daily Founder Fuel

  • Founder Fuel: What new markets or customer segments could you explore?

    Founder Fuel: What new markets or customer segments could you explore?

    The best markets and segments to explore are the ones adjacent to what you already have. The further afield you go, the more you have to start from scratch to find the audience, and the less overlap you have with your existing marketing assets.

    It’s much more preferable to push into new markets by expanding at the borders.

    This land-and-expand strategy could be geographic, demographic, or interest based.

    Now the question of how to be deliberate about this comes to mind. How do you align people and keep the focus on markets and customer segments that can be efficiently won? Tie bonuses to winning certain types of customers over others? Create visualizations of the data?