Tag: productivity

  • How to Make AI Watch Your Most Important Business Numbers

    Most businesses don’t have a data problem.

    They have an attention problem.

    The numbers are already somewhere — Shopify, Triple Whale, Looker, a spreadsheet somebody updates on Fridays, a finance model only one person fully understands. The issue is not access. It’s whether anyone is still looking at the right number often enough to matter.

    That’s where AI can be useful.

    Not as a replacement for judgment. Not as some magic strategy layer. Just as a way to keep one important business number visible every day, without relying on memory or good intentions.

    That sounds small.

    It isn’t.

    In an operating business, the difference between “we noticed it early” and “we noticed it too late” can be expensive.

    The real problem is drift

    Here’s what usually happens.

    When a company is small, the important numbers are close enough to the surface that you can feel them.

    Spend goes up. Sales move. Repeat orders change. Margins tighten. You can usually tell when something is off.

    Then the company gets more complex.

    More channels. More campaigns. More SKUs. More meetings. More people touching the numbers. More noise.

    The KPI doesn’t disappear. It just gets crowded out.

    That’s when drift becomes costly.

    CAC creeps up for a few weeks before anyone reacts. Retention softens, but revenue still looks fine. Margins compress in a way that seems temporary until it isn’t.

    Usually it’s not one dramatic mistake.

    It’s a series of ordinary misses that compound because nobody stayed close enough to the basics.

    That’s the opportunity here: use AI to make the important number harder to ignore.

    A concrete example: CAC-to-90-day-LTV at Psychedelic Water

    At Psychedelic Water, one useful workflow is a daily Slack report on one relationship:

    CAC to 90-day LTV

    That number tells you whether growth is healthy or just getting more expensive.

    If CAC rises while 90-day LTV stays flat, the business is becoming less efficient. If LTV improves while CAC stays stable, you have room to push. If both move the wrong way, you want to know immediately.

    So instead of relying on someone to remember to check it, we automated the update.

    AI pulls the relevant numbers, formats a short summary, and posts it in Slack. It follows the same logic behind mini AI automations: automate the repetitive part, then make the output easy for a human to use.

    Not a dashboard with ten charts. Not a memo nobody reads. Not a raw data dump.

    Just the metric, the comparison, and a plain-English note about what changed.

    That’s the point.

    AI isn’t “running the business” here. It’s protecting the operating rhythm around one number that matters. It is really an example of building AI-operable systems instead of relying on isolated prompts.

    Why this works better than another dashboard

    Dashboards are passive.

    They wait for someone to remember to check them.

    A daily AI report is active.

    It shows up on its own.

    That small difference changes behavior.

    A metric buried in a dashboard competes with everything else on someone’s list. A metric that lands in Slack becomes part of the daily environment. It stays visible. It stays discussable. It has a better chance of shaping decisions while there’s still time to do something about it.

    Most businesses don’t fail from a lack of information.

    They fail because the right information never becomes part of the operating cadence.

    The best system is usually the one people actually see, trust, and use.

    For one team, that might be Slack. For another, email, a text summary, a Notion page, or a morning note in a leadership channel.

    The channel matters less than the habit.

    Start with one KPI, not a reporting empire

    If you want to build something like this, don’t start by monitoring everything.

    Start with one KPI that genuinely matters.

    A good test is simple:

    If this number moved against you for two weeks and nobody noticed, would that create a real business problem?

    If the answer is yes, you’ve got a candidate.

    Depending on the business, that KPI might be:

    • CAC
    • 90-day LTV
    • Churn
    • Gross margin
    • Fill rate
    • Conversion rate
    • Inventory weeks on hand
    • Average order value
    • Contribution margin by channel

    The right KPI is not the one that sounds smartest in a meeting.

    It’s the one that changes your decisions.

    That’s the number worth putting in front of the team every day.

    What the report should actually include

    A useful daily AI report should be short enough to read in under a minute.

    At minimum, it should answer three questions:

    1. What happened?
      Show the current number.

    2. How does it compare?
      Show yesterday, last week, or the relevant baseline.

    3. Why does it matter?
      Add one line of plain-English context.

    For example:

    CAC-to-90-day-LTV today: 2.8x
    7-day average: 3.1x
    Driver: higher paid social CAC while repeat purchase rate held flat
    Action: watch closely if this trend continues

    That’s enough.

    The goal is not a polished memo.

    The goal is to reduce friction, keep the number visible, and catch drift early.

    The hidden value is discipline

    The obvious benefit of this kind of system is speed.

    The less obvious benefit is discipline.

    Once the report exists, the business has a daily moment of truth.

Nobody has to remember to pull the numbers manually. Nobody has to stitch together an update from four tabs. Nobody gets to say, “I hadn’t looked at that in a while.” I wrote recently in What You’re Really Avoiding Isn’t the Work about how visibility lowers the friction around hard operational work. The same thing happens here.

    That sounds boring. It is boring.

    But boring is underrated.

    A lot of expensive business problems start small:

    • a metric slips a little
    • the slip gets rationalized
    • the team waits for more data
    • the delay becomes normal
    • the habit becomes a miss

    A daily AI report interrupts that sequence.

    And in an operating business, earlier is usually cheaper.

    The part people skip: the basics

    This is where a lot of AI projects go sideways.

    People get excited about prompts, agents, and automation before they’ve handled the operating basics.

    Those basics matter more than the tooling:

    • Is the KPI defined clearly?
    • Is there one trusted source of truth?
    • Does the report arrive at the same time every day?
    • Is it short enough that people will read it?
    • Is there a clear owner when the number moves the wrong way?
    • Is there a threshold that triggers action?

    If those basics are weak, AI doesn’t fix the process.

    It scales the mistake.

    A broken reporting process with AI attached can feel sophisticated while making the business slower and sloppier. The number gets delivered every day, but it’s the wrong number, the wrong definition, or the wrong interpretation.

    That’s worse than no automation.

    AI should strengthen a clear operating system, not cover up a messy one. That is also why making the right context easy to surface matters so much: retrieval only helps when the underlying source of truth is clear.

    A simple setup any operator can copy

    If you want to build this, keep it simple.

    1. Choose one KPI

    Pick the number that matters most right now.

    2. Define the source of truth

    Make sure the report pulls from one reliable place, not three competing versions of reality.

    3. Decide the comparison window

Use yesterday, a 7-day average, last week, or a target. Pick the benchmark that helps people make better decisions.

    4. Keep the output tight

    One metric. One comparison. One short explanation. One action note if needed.

    5. Deliver it where the team already works

    Slack is great if that’s where attention lives. If not, use the place people already check.

    6. Add an action rule

    If the KPI crosses a threshold, who gets pulled in? What gets reviewed? What decision gets made?

    That’s the system.

    You do not need a giant AI initiative to make this useful.

    You need a reliable loop around one important business number.

    The broader takeaway

    The best AI workflows in an operating business are usually not the flashy ones.

    They are the ones that quietly keep the company close to reality.

    They make it harder to miss the obvious. They reduce the lag between signal and response. They protect attention around the basics.

    And the basics matter more than people want to admit.

    Most businesses don’t lose because they lacked advanced tools.

    They lose because they stopped watching the number that would have told them something important was changing.

    So the useful question is not:

    How can AI help with everything?

    It’s this:

    What is the one number this business cannot afford to stop watching?

    Start there.

    Then use AI to make forgetting it much harder.

    Reader exercise

    Take 10 minutes and write down:

    • the one KPI that matters most in your business right now
    • where that number currently lives
    • how often it is actually checked
    • who needs to see it
    • what should happen if it moves the wrong way

    Then answer one final question:

    What is the simplest daily AI report that would make this number hard to ignore?

    If you can answer that clearly, you’re probably closer to a useful AI workflow than you think.

  • From BYOD to BYOA: The New Workplace Shift Nobody’s Naming Yet

    Work has been offloading its infrastructure onto workers for years.

    First the commute. Then the device. Then the office.

    Now the next shift is starting to emerge: bring your own agent.

    Ten years ago, bring your own device was a workplace trend. Employers increasingly expected people to have their own phone, their own laptop, and their own hardware wrapped into the company’s workflow.

    Then remote work pushed the idea further. For a lot of people, it effectively became bring your own office. Your internet. Your desk. Your extra monitor. Your spare bedroom. Your heat. Your coffee. The company still got the output, but more of the working environment moved onto the employee.

    If you go back even further, you can find older versions of the same pattern. In some industries, even getting to work used to be part of the system. Over time that became your car, your gas, your commute, your problem.

    That is why bring your own AI matters.

    Not because it is a catchy acronym, but because it fits a long-running pattern: productive assets keep moving outward from the company and into the hands of the worker.

    And unlike a laptop or a phone, an agent stack is not just a tool. It is accumulated capability.

    This is more than “use ChatGPT at work”

    A lot of people still think AI adoption means opening a chatbot and asking it a few questions.

    That is the beginner version.

    The real edge starts when someone builds a private operating system around their work:

    • prompt libraries refined over months
    • little scripts that clean data, generate reports, or move work between tools
    • retrieval systems and notes that give the model better context
    • review workflows for checking accuracy, tone, and quality
    • persistent agents that can wake up, monitor things, and keep moving
    • multi-agent setups where different models play different roles

    That stack compounds.

    I’ve written before about how I use AI to write and publish blog posts and about building AI-operable systems instead of isolated prompts. The same pattern keeps showing up: the value is rarely in one prompt. The value is in the system around it.

    When somebody builds that system on their own time, on their own machine, with their own habits and history baked into it, they are not just bringing labor to a company anymore.

    They are bringing infrastructure.

    The moat is not the model. It is the context.

    This is where bring your own agent gets much more interesting than bring your own software.

    Software licenses are easy to understand. A company can buy a seat and hand it to anyone.

    An agent stack is different because the most valuable part is often personal.

    The memory lives in your account. The prompt files live in your folders. The judgment about how to scope a task, which tools to call, what good output looks like, and how to audit the result lives in a thousand small decisions you have already made.

    Even the context itself becomes an asset.

    A personal AI system gets better when it has access to your notes, your past work, your frameworks, your examples, your definitions of quality, and the patterns you have trained yourself to follow. That is part of why I built a personal knowledge base over everything I’ve made. The context is not a side detail. It is the advantage.

    That creates a strange boundary.

    If an employee becomes dramatically more productive because of a personal agent stack, how much of that should transfer to the employer? Should the company expect access to the whole system? The prompt library? The memory? The scripts? The evaluation harnesses? The accumulated context?

    That is not a normal software procurement question. It starts to look more like asking someone to show up with their own miniature company attached.

    In software, this is already happening

    The clearest example is coding.

    A growing number of AI-assisted developers are no longer staring at code in the old way all day. They are orchestrating systems that can:

    • write code
    • explain code
    • edit code across multiple files
    • run tests and interpret failures
    • audit for security, style, and performance
    • generate documentation
    • compare different implementation paths
    • review each other and challenge each other

    I’ve written about persistent agents needing a heartbeat and about adversarial agents improving the quality of creative and analytical work. Once you start using these systems seriously, it stops feeling like one person with one tool and starts feeling like one person directing a small team.

    That matters.

    Because when a company hires that person, it is not only hiring judgment and taste. It is hiring the ability to mobilize an entire stack of capability on demand.

    And this is not going to stay inside software.

    Marketing teams will bring campaign-generation systems. Salespeople will bring prospecting and follow-up agents. Operators will bring reporting workflows. Researchers will bring literature-review agents. Writers will bring editorial pipelines. Scientists will bring experiment design and analysis harnesses.

    Whatever the domain is, the pattern is the same.

    The worker who knows how to build and run agents does not arrive alone.

    Better systems create an awkward compensation problem

    From the worker’s side, this is obviously powerful.

    If one person can produce the output of five or ten people because they have better systems, that is a real hiring advantage. It creates independence. It creates negotiating power. It changes what one person can realistically promise to deliver.

    But from the employer’s side, it creates a compensation problem.

    If an employee brings 10x output but gets paid on a normal salary band, most of that upside is captured by the company.

    And in many cases the worker is paying part of the bill.

    They may be covering model subscriptions. They may be covering API costs. They may have spent hundreds of hours building the prompts, scripts, notes, and workflows that make the system useful. They may even be floating the cost for a while and getting reimbursed later, imperfectly, or not at all.

    That is what makes BYOA different from an ordinary productivity tip.

    What looks like a simple efficiency story is also a story about ownership.

    Who paid to build the system? Who owns the context? Who keeps the prompts? Who captures the gains?

    BYOA fits freelancing better than salaried work

    This is why I think bring your own agent will push more people toward freelancing, consulting, and one-person businesses.

    If your real moat is a personal stack of AI systems, then selling outcomes starts to make more sense than selling hours.

    A freelancer can say: here is the result, here is the speed, here is the quality, and here is the price.

    That framing fits AI-powered work much better than a salary band does.

    It also gives the worker a cleaner way to protect the asset.

    Instead of donating their entire operating system into an employer’s workflow, they can keep the system private and sell the output. They can price in the tooling costs. They can improve the stack over time and keep more of the upside for themselves.

    This does not mean normal jobs disappear overnight. But it does mean the center of gravity shifts.

    If companies are trying to hire fewer people and get more output from each one, and if high-performing workers are building private agent systems that dramatically raise what they can do, the natural meeting point is not always full-time employment. Often it is some form of entrepreneurial freelancing.

    That may end up being one of the most important second-order effects of AI at work.

    Companies should get ahead of this now

    Most businesses are still treating AI adoption like a tooling question.

    Should we buy seats? Which model should we use? What policy should we write?

    Those questions matter, but they are not the whole thing.

    The deeper questions are organizational:

    • What should be company-owned versus worker-owned?
    • Are employees expected to use personal agent stacks?
    • If so, who pays for them?
    • If someone builds a workflow that makes them radically more productive, how should that show up in compensation?
    • Should critical workflows live in personal accounts and private folders at all?
    • What happens when the most productive person on the team leaves with the entire system in their backpack?

    Those questions are going to get louder.

    Because BYOA is not just a work habit. It is a form of capital formation at the edge of the company.

    The employee is accumulating productive assets outside the business, then deciding how much of that power to rent back in.

    The shift nobody is naming yet

    Bring your own device felt normal. Then bring your own office started to feel normal. Bring your own agent sounds strange today, but probably not for long.

    The people who will create outsized value over the next few years will not just be good at AI.

    They will know how to build agents, manage context, collect tools, define evaluation loops, and orchestrate systems that keep getting better.

    In other words, they will have built a private factory for thought work.

    That is an amazing opportunity for workers.

    It is also a warning sign.

    Because if people are expected to show up with their own devices, their own office, and now their own agent infrastructure, the obvious next question is this:

    Why rent all of that capability to an employer at a discount?

    The real question is not whether people will bring their own agents to work.

    It is who pays for them, who owns them, and who captures the upside when they do.

  • Executive Coaching Is Expensive. Daily Accountability Doesn’t Have to Be

    One of the most useful things for personal productivity isn’t a to-do app.

    It isn’t a new notebook, a better calendar, or a more elaborate morning routine either.

    It’s having someone ask good questions on a regular basis.

    That’s the real value of executive coaching. A good coach helps you decide what matters, pushes back when your priorities drift, notices your patterns, and creates enough accountability that you actually follow through.

    The problem is that real executive coaching is expensive. Really expensive.

    For a lot of founders, operators, and ambitious people working on their own stuff, it’s hard to justify spending thousands of dollars for occasional calls, even if the upside is obvious.

    So I started with a simple question: what if AI could deliver even 20% of the value?

    Not by pretending to be a perfect human executive coach. Just by handling some of the repeatable parts well enough to matter.

    What I actually wanted from a coach

    I wasn’t looking for motivational speeches.

    I wanted help with the things that quietly break productivity over time:

    • picking the wrong priorities for the day
    • letting uncomfortable tasks slide for too long
    • spending time on interesting work instead of important work
    • losing sight of quarterly goals in the chaos of a normal week
    • repeating the same self-defeating habits without noticing

    A strong coach is useful because they create a rhythm around all of this. There’s a cadence. A check-in. A follow-up. A little bit of pressure. A little bit of perspective.

    That seemed reproducible.

    The first step was research, not code

    Before building anything, I asked AI to go research executive coaching properly.

    Not the vague internet version. The actual practice.

    I had it pull together material on:

    • coaching best practices
    • common question frameworks
    • behavioral science and accountability research
    • how executive coaches structure sessions and follow-up
    • the difference between good coaching and generic advice

    What came back was a much more structured picture than I expected. The useful parts of coaching are not mysterious. A lot of it comes down to repeatable practices:

    • daily check-ins
    • honest prioritization
    • regular self-scoring and reflection
    • end-of-day accountability
    • weekly and quarterly reviews
    • pattern recognition over time

    That became the foundation for the system.

    What made it work was not the intelligence. It was the design.

    The breakthrough wasn’t simply “build a chatbot.”

    Plenty of chatbots are smart enough to answer questions. That is not the hard part.

    The hard part is creating the conditions where accountability feels real.

    Three design choices mattered a lot.

    1. It had to live in chat

    I already knew from building other systems that a normal chat interface has a very different feel from opening a blank browser tab.

    If something lives in Telegram, it comes with you. It’s on your phone. It’s in the same place as real conversations. You don’t have to remember to open the app that is supposed to help you. It shows up where you already are.

    That sounds minor. It isn’t.

    A lot of self-improvement software fails because it depends on you having enough discipline to go use it at exactly the moment you’re least likely to want to. If you’re avoiding something, you’re not going to voluntarily open the accountability dashboard.

    A message in chat changes that dynamic.

    2. It had to be proactive

    This was the second big insight. The system couldn’t just wait for me to ask a question.

    It needed a heartbeat.

    I’ve written before about how persistent agents become more interesting when they can wake themselves up and check in rather than sitting dormant between prompts. That’s the same idea I covered in Let’s Talk About the Open CLAW in the Room. The value isn’t just intelligence. It’s continuity.

    So I built the coach around proactive outreach:

    • a morning stand-up
    • occasional midday follow-up when something time-sensitive was mentioned
    • an end-of-day recap
    • scoring and reflection
    • longer review cycles over time

    That one change made the whole thing feel less like software and more like a process.

    3. It had to remember

    Without memory, a coaching bot is just a clever prompt.

    With memory, it starts to become useful.

    A real coach remembers what you said last week. They remember the thing you promised to do and didn’t do. They notice when the same excuse keeps showing up in a different form.

    That memory layer ended up being one of the most important parts of the whole system. It let the coach connect today’s priorities to older conversations, recurring friction, and longer-term goals.

    That’s when the pushback started getting good.

    What the conversations actually look like

    Most mornings start with a simple stand-up:

    What are the three most important things today?

    That’s not a revolutionary question. But it becomes powerful when something is going to ask you about it later.

    Sometimes the coach just captures the plan. Sometimes it pushes back.

    If I list something that should obviously be delegated, it asks why I’m still doing it myself.

    If I fill the day with low-value tasks, it asks whether any of them are actually connected to revenue or the highest-leverage goal.

    If I keep postponing something important, it notices.

    And the memory makes the confrontation sharper than I expected.

    It can say things like:

    • this is the second time you’ve pushed off writing that marketing email
    • you keep making room for side projects when the main project still needs attention
    • you said this meeting was important yesterday, so what changed?

    That kind of feedback is useful because it cuts through the story you tell yourself in the moment.

    I’ve written before that goals work better when they turn into measurable daily actions. This system effectively enforces that translation every day. Big intentions have to become concrete commitments.

    The surprising part: even AI can create accountability

    This is the part that surprised me most.

    On paper, it sounds silly. It’s just software. It’s not a real human being. Why should it create any accountability at all?

    But accountability is not only about authority. It’s also about having a witness.

    Once the coach lives in a real chat, checks in proactively, follows up later, and complains a little when you ignore it, the interaction starts to create social pressure. Not the same pressure as a great human coach, obviously, but enough to change behavior.

    That matters.

    Because a lot of productivity problems are not really knowledge problems. They’re avoidance problems. They’re friction problems. They’re “I know what I should do, but nobody is making me face it” problems.

    That’s very similar to the pattern I wrote about in What You’re Really Avoiding Isn’t the Work. The obstacle is often not inability. It’s the gap between knowing and doing.

    A coaching loop helps close that gap.

    It also taught me something about my own habits

    The strongest value wasn’t just that the coach reminded me to do things.

    It showed me my patterns.

    The same weak spots kept coming up:

    • a tendency to drift toward side projects
    • reluctance to delegate certain work
    • the habit of postponing tasks that feel important but uncomfortable
    • confusing activity with meaningful progress

    That sort of pattern recognition is useful because it turns vague guilt into something concrete.

    Once a behavior gets named, it becomes easier to interrupt.

    That is where this starts to overlap a little bit with therapy or journaling. Not because the bot is a therapist, but because repeated reflection makes your own habits harder to ignore.

    And if you are trying to build structure into your work, that kind of reflection compounds over time. I’ve written about the importance of creating structure and the need for small daily wins to maintain momentum. This system is basically a machine for both.

    From a pile of scripts to a real product

    I ran the first version as a bundle of scripts on my own computer for several weeks.

    It was rough, but it worked.

    Under the hood it combined three things:

    • coaching research and prompting
    • a memory system
    • proactive messaging throughout the day

    That was enough to prove the concept.

    Once I saw the benefit personally, it became obvious that it should turn into a real application. Part of the reason is practical: if a coaching system is going to be proactive, something has to stay running. There needs to be a process alive in the background checking time, tracking context, and deciding when to reach out.

    So I rebuilt it as an installable desktop app.

    That turned into its own fun little experiment. At one point I had AI migrate the application into Rust in basically one shot. I don’t know Rust, which made that entertaining, but the result is that the app now compiles cleanly into native desktop software and lives in the taskbar like a normal application.

    It runs on Mac and Windows. No server required on my side. Users bring their own API key, which keeps the economics simple and avoids the usual problem of somebody burning through shared credits.

    Where I think the value actually is

    I don’t think this replaces a great human executive coach.

    A great coach can read nuance better, challenge you more deeply, and bring lived experience that software cannot fully match.

    But that’s not the standard that matters.

    The real question is whether a persistent AI coach can deliver enough value to justify existing.

    I think the answer is clearly yes.

    If a human coach costs hundreds of dollars an hour and maybe shows up once a week, there is a large middle ground between “nothing” and “premium executive coaching.” A system that asks strong questions every morning, follows up in the afternoon, remembers your patterns, and keeps your priorities honest can be enormously valuable even if it only captures part of the full experience.

    Personally, I think I’m going to get far more than $49 worth of value out of it just from better prioritization and fewer days lost to drift.

    If this sounds useful, it’s available now

    After running it for weeks, I decided to make it available as a real product.

    It’s called AI Executive Coach, and it’s available here:

    Read the full AI Executive Coach page here.

    For the first 100 users, it’s a one-time purchase of $49.

    That’s intentionally simple. No server dependency on my end. No complicated subscription decision upfront. Just install it, add your own API key, and use it.

    If you’re the kind of person who knows what to do but still benefits from having someone, or something, force a little honesty into the day, you’ll probably get it immediately.

    Final thought

    The biggest thing executive coaching provides is not advice.

    It’s cadence.

    Someone asks what matters. Someone checks whether it happened. Someone notices the pattern when it doesn’t.

    That loop is expensive in human form. It doesn’t have to be expensive in software.

    And for a lot of people, that may be enough to make the difference between a day that felt busy and a day that actually moved something forward.

  • What You’re Really Avoiding Isn’t the Work

    Everyone has a version of this. A category of work that sits on the to-do list for weeks, then months, slowly accumulating guilt. For some founders it’s legal. For others it’s HR, compliance, or investor reporting. For me, it’s always been accounting.

    Not because I can’t do math. Because every time I opened QuickBooks, I’d feel the weight of everything I didn’t understand, and I’d close the tab. There’s always something more urgent than confronting what you don’t know.

    This week I finally sat down and did all of it. Reverse-engineered spreadsheets. Audited our QuickBooks accounts. Found missing payables. Fixed miscategorized transactions. Worked through international currency adjustments. Even handled an off-the-books equity correction I’d been dreading for longer than I’d like to admit.

    And here’s the part I didn’t expect: it was actually kind of fun.

    The difference wasn’t discipline. It was having AI as a collaborator. And the reason that mattered has nothing to do with accounting specifically.

    The real barrier is shame

    Think about the task you’ve been avoiding. Now think about why.

    It’s probably not because the task itself is impossibly hard. It’s because there’s a gap between what you know and what you’d need to know to do it confidently, and closing that gap feels expensive. You’d have to ask someone. That someone is busy, or expensive, or both. And the questions you need to ask feel like they should be obvious.

    That was my relationship with accounting for years. Accountants always seem busy. When I’d get on a call with mine, I’d feel the clock ticking. Every question felt like it should be obvious. Do I really need to ask what a trial balance is? Can I admit I don’t understand why this line item is negative? Is it okay to not know the difference between cash-basis and accrual?

    So you nod along, say “makes sense,” and leave the call having learned nothing. Then you avoid the whole topic for another month.

    This is the shame barrier. It’s not a knowledge problem. It’s a help-access problem. The help exists, but the social cost of accessing it is high enough that you just… don’t.

    What happens when the shame disappears

    When I sat down with Claude Code this week and started working through our financials, I could ask anything. Literally anything.

    “What does this column mean?” No judgment. “Why is this number negative when we received money?” Clear explanation. “Walk me through how this journal entry should work.” Step by step, as many times as I needed.

    I went deep on things I’d been skating past for years. The nuances of our P&L statement. How the balance sheet connects to the trial balance. Why certain transactions were showing up in the wrong categories. What our cash flow statement was actually telling me versus what I assumed it was telling me.

    Each question led to a better question. And because I wasn’t worried about wasting someone’s time or looking dumb, I kept going. I’d ask a follow-up, then another, then branch into something related. It was the first time accounting felt like learning instead of an exam I was failing.

    If you’ve ever had a mentor who made you feel safe asking the dumb questions, you know how much faster you learn in that environment. AI gives you that dynamic on demand, in any domain, at any hour.

    The concrete results

    This wasn’t a vague learning exercise. I worked through real problems in our actual books:

    Reverse-engineered inherited spreadsheets. We had several financial spreadsheets maintained by different people over time. I fed them to Claude and asked it to explain what each one was tracking, how the formulas worked, and where there were inconsistencies. It found things that had been wrong for months. If you’ve ever inherited a spreadsheet from someone who left the company and spent hours trying to figure out what it was supposed to do, AI turns that from hours to minutes.

    Audited QuickBooks categories. Transactions miscategorized across multiple accounts. Expenses in the wrong cost centers. Payables missing entirely. Claude walked me through each one, explained what the correct category should be and why, and helped me make the corrections.

    Handled the stuff I’d been avoiding. International currency adjustments. An equity correction I didn’t fully understand the accounting treatment for. Reconciliation of accounts that hadn’t been reconciled in too long. These are the kinds of things where I’d normally email the accountant, wait three days, get an answer I half-understood, and still feel uncertain about whether it was done right.

    Thought through the strategic questions. Beyond the bookkeeping, I used the conversation to think through bigger questions. I’ve thought about managing cash flow before, but this was different. What are our actual options right now? What interest rate is expensive versus reasonable for our situation? What are the trade-offs between different funding approaches? These aren’t strictly accounting questions, but they live in the same “financial stuff I’m uncomfortable with” bucket, and having a patient conversation partner made them approachable.
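Parts of the audit above can even be mechanized before the conversation starts. As a rough sketch (the column names and sample data here are hypothetical, not my actual books), a few lines of Python over a QuickBooks transaction export can surface the obvious candidates for review: transactions with no category, and vendors whose spending is split across multiple categories.

```python
import pandas as pd

# In practice this would be a QuickBooks CSV export, e.g.:
#   txns = pd.read_csv("transactions.csv")
# A tiny inline sample stands in for it here; column names are hypothetical.
txns = pd.DataFrame(
    {
        "vendor": ["Acme Hosting", "Acme Hosting", "FedEx", "FedEx", "Stripe"],
        "category": ["Software", "Office Supplies", "Shipping", "Shipping", ""],
        "amount": [120.0, 120.0, 45.50, 30.00, -210.0],
    }
)

# 1. Transactions with no category at all.
uncategorized = txns[txns["category"].str.strip() == ""]

# 2. Vendors whose transactions land in more than one category --
#    a common symptom of miscategorization worth reviewing by hand.
categorized = txns[txns["category"].str.strip() != ""]
categories_per_vendor = categorized.groupby("vendor")["category"].nunique()
inconsistent_vendors = categories_per_vendor[categories_per_vendor > 1].index.tolist()

print(f"{len(uncategorized)} uncategorized transaction(s)")
print("Vendors with inconsistent categories:", inconsistent_vendors)
```

This doesn’t replace the AI conversation; it just gives you a shortlist to bring to it, which is exactly the kind of script an assistant like Claude can write for you in one pass.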

    The pattern worth noticing

    Here’s what I want you to take from this. It’s not “use AI for accounting,” although you should.

Every business owner has domains they understand well and domains where they’re faking it. For me, product development, marketing, and technical infrastructure are comfortable territory. Finance has always been the thing I know I should understand better but never prioritize learning. It’s a version of the fear of the unfamiliar that I think most founders carry around quietly.

    AI doesn’t replace the expert. I still need a CPA for tax strategy and compliance. But it fills the gap between “I know nothing” and “I know enough to have a productive conversation with my accountant.” That middle layer of competence is what most people skip, and it’s exactly where AI excels.

    Before this week, my accounting approach was “send everything to the accountant and hope for the best.” Now I actually understand what’s in our books. I can read a P&L and know what I’m looking at. I can spot when something looks wrong. That upgrade happened because the learning barrier dropped to zero.

    Apply this to your thing

    This keeps happening. Tasks I’ve been dreading turn out to be approachable, even enjoyable, once I have a collaborator that’s patient, knowledgeable, and available whenever I’m ready to work. It happened with growth engineering. It happened with the small automations that add up. Now it’s happened with accounting.

    The common thread is that the barrier was never ability. It was the friction of getting help. AI removes that friction, and suddenly the things you’ve been avoiding become the things you’re making progress on.

    So here’s my challenge to you: think about the task that’s been sitting on your list the longest. The one you keep bumping to next week. Ask yourself whether the problem is really that the task is hard, or whether the problem is that you don’t have a safe, low-cost way to close your knowledge gap.

    If it’s the second one, you might be surprised at what happens when you just start asking questions.