You Are Behind in Using AI. Here’s How to Catch Up

If you feel behind on AI, you probably are. The tools are improving faster than most teams are changing how they work. I see this daily in my coaching of individuals and teams in industries spanning medtech, space, consumer electronics, and even AI.

The good news is that catching up doesn’t require a grand strategy, a transformation program, or perfect clarity. It requires a different posture, and a willingness to act before you feel ready.

What follows is the advice I’ve been giving leaders and senior engineers repeatedly, in real conversations, about what actually works. This advice is not just me regurgitating what I’ve read; I’ve spent almost all of my free time (neglecting car projects) for the past 9+ months using AI “in anger” to get Terminal.Gui to Beta. Terminal.Gui is a beast of complexity, and while it’s mostly for fun, it has serious engineering, legacy code, and existing customers behind it.


1. Lead With Curiosity → Doing (Not Reading → Doing)

The teams that make progress don’t start with confidence; they start with curiosity.

Specifically: curiosity about a real problem they already own.

What I see stalling teams is not ignorance, but the desire to be right before they act. They want a complete mental model of AI, a clear strategy, and confidence they’re “doing it correctly.” That mindset guarantees delay.

The better pattern is:

  • Be curious about a concrete problem in front of you.
  • Try AI on that problem immediately.
  • Treat whatever happens as information, not success or failure.

Curiosity creates permission to experiment without certainty. Action creates the understanding you thought you needed first.

Reading and watching others can support curiosity; it cannot substitute for it. Curiosity that never turns into action is just procrastination with better branding.


2. Prototype Aggressively; Be Careful Later

Most organizations default to caution: evaluate thoroughly, compare tools, run pilots, socialize decisions. That instinct used to be reasonable. Right now, it’s counterproductive and, frankly, stupid given how rapidly AI capabilities are accelerating.

The AIs are evolving weekly. Three weeks ago, Copilot’s CLI sucked. Now it’s surpassing Claude Code in some ways. And Claude Code has added capabilities just this week that make Copilot look dumb. Any evaluation process measured in months is obsolete before it finishes.

The guidance I keep repeating is simple:

  • Prototype fast.
  • Expect throwaway results.
  • Optimize for learning speed, not correctness.

There is almost no downside to trying something quickly on a real problem, other than spending some time. The upside is insight you cannot get any other way.

You can be careful later after you’ve learned something worth being careful about.


3. Use AI on Real Work, Not Toy Problems

You don’t learn much about AI by playing with trivial examples. You learn by pointing it at messy, production-adjacent work and seeing where it breaks, surprises you, or helps more than expected.

That means:

  • Real codebases.
  • Real constraints.
  • Real edge cases.

Even when the output isn’t production-ready, and often it won’t be, the speed at which you can explore ideas, validate approaches, or prototype alternatives changes how you think about the work.

Toy problems produce toy conclusions.


4. The Skill Has Shifted: From Prompting to Orchestrating

A year ago, the differentiating skill was “prompt engineering.” That mattered. It still does—but it’s no longer the frontier.

The emerging skill is orchestration: breaking a large problem into parts, delegating those parts to multiple AI agents, coordinating dependencies, and intervening when things go sideways.

I describe it as herding cats. Remember Tamagotchi? Orchestrating AI agents is a bit like trying to keep a bunch of Tamagotchis happy and healthy.

Two things are consistently true:

  • This is a very different skill than traditional software engineering.
  • You cannot do it well unless you already understand the technology deeply.

Actionable move:

  • Identify someone on your team who’s curious and motivated.
  • Make it an explicit goal for them to become excellent at managing AI agents—ideally on work you actually own, not side demos.

5. Principle‑Driven Prompting: Use Amazon PE Standards Explicitly

One of the highest‑leverage techniques I’ve seen is principle‑driven prompting.

Instead of asking AI “what should I do,” define who it should be and what standards it must uphold.

For example: prompting the AI to act as an Amazon Principal Engineer, explicitly grounding it in principles like pragmatic judgment, long‑term maintainability, empathy for operators, and raising the engineering bar.

The difference is stark:

  • With PE principles: you get targeted, realistic refactoring guidance.
  • Without them: you often get elegant but impractical greenfield rewrites.

This reveals something important: AI amplifies the values and judgment you give it. It does not supply them for free.

Here’s a prompt you might use:

“You are a Principal Engineer who raises the bar for the Amazon Principal Engineering Community Tenets. Your role is to be a design reviewer for new engineering plans for my team. As you review and guide my engineers on their plans, you will continue to raise the bar for these tenets.”
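If you’re calling a model programmatically rather than through a chat UI, the same technique maps to a system message. Here’s a minimal sketch using the OpenAI Python SDK; the model name and the API choice are illustrative assumptions, and the same pattern works with any chat-style API:

```python
# Principle-driven prompting: pin the model's role and standards in a
# system message, then pass the artifact to review as the user message.

PE_SYSTEM_PROMPT = (
    "You are a Principal Engineer who raises the bar for the Amazon "
    "Principal Engineering Community Tenets. Your role is to be a design "
    "reviewer for new engineering plans for my team. As you review and "
    "guide my engineers on their plans, you will continue to raise the "
    "bar for these tenets."
)

def build_review_request(design_doc: str) -> list[dict]:
    """Pair the principle-driven system prompt with the plan to review."""
    return [
        {"role": "system", "content": PE_SYSTEM_PROMPT},
        {"role": "user", "content": f"Review this design plan:\n\n{design_doc}"},
    ]

# Usage with the OpenAI SDK (model name is a placeholder — use your own):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4o",
#     messages=build_review_request(open("design.md").read()),
# )
```

The point of the system/user split is that the values live in the system message, so every review request is automatically grounded in the same standards.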


6. Set Goals That Force Usage, Not Interest

“Use AI more” is not a goal; it’s a wish.

The goals that actually change behavior are usage‑forcing. Examples I’ve been recommending:

  • Use AI so much that leadership asks why token costs are spiking.
  • Designate a go‑to person for deciding which model to use when.
  • Increase, month over month, the number of issues where AI does the first pass.
  • Apply AI to processes—not just code.

If your goals don’t force different behavior, they won’t produce different results.
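The “AI does the first pass” goal is easy to instrument. A minimal sketch, assuming you tag such issues in your tracker (the label name and the date list are illustrative; pull them from whatever tracker you use):

```python
from datetime import date

def ai_first_pass_by_month(issue_dates: list[date]) -> dict:
    """Count issues where AI did the first pass, grouped by (year, month).
    The dates would come from issues tagged e.g. 'ai-first-pass' in your
    tracker — the label name is an assumption, not a standard."""
    counts: dict = {}
    for d in issue_dates:
        key = (d.year, d.month)
        counts[key] = counts.get(key, 0) + 1
    return dict(sorted(counts.items()))

def is_increasing_month_over_month(counts: dict) -> bool:
    """True if each month strictly beats the previous one."""
    values = list(counts.values())
    return all(later > earlier for earlier, later in zip(values, values[1:]))
```

Crude, but that’s the point: a goal you can chart month over month is a goal that forces behavior.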


7. Apply AI to Processes Before Code

Writing code faster is obvious. Improving the system around the code is often higher leverage and lower risk.

Good starting points:

  • Log analysis.
  • Bug triage.
  • Updating stale documentation and code comments.
  • Reviewing designs and requirements for clarity and modularity. See the Amazon PE prompt above.

These uses build trust, save time immediately, and help teams develop intuition about where AI helps and where it doesn’t.
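As a sketch of the log-analysis starting point: the leverage often comes from preprocessing, distilling a huge log down to the lines worth showing a model. Everything here is illustrative; the error patterns and prompt wording are assumptions to adapt:

```python
import re

# Illustrative patterns — tune these to your own log format.
ERROR_PATTERN = re.compile(r"\b(ERROR|FATAL|Exception|Traceback)\b")

def extract_errors(log_text: str, context: int = 2) -> str:
    """Keep only error lines plus a little surrounding context,
    so the prompt stays small enough for a model to digest."""
    lines = log_text.splitlines()
    keep: set[int] = set()
    for i, line in enumerate(lines):
        if ERROR_PATTERN.search(line):
            keep.update(range(max(0, i - context), min(len(lines), i + context + 1)))
    return "\n".join(lines[i] for i in sorted(keep))

def build_triage_prompt(log_text: str) -> str:
    """Wrap the distilled log in a triage instruction for the model."""
    return (
        "Triage the following log excerpt. Group related failures, "
        "rank by likely severity, and suggest where to look next:\n\n"
        + extract_errors(log_text)
    )
```

The same distill-then-ask shape works for bug triage and stale-doc review: do the mechanical filtering in code, and spend the model’s attention on judgment.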


8. Intentionality Beats Permission

The teams making real progress aren’t waiting for an official AI strategy. They’re acting with intent.

That means:

  • Making exploration explicit, not “when you have time.”
  • Assigning ownership.
  • Setting concrete goals.
  • Sharing learnings early, even when results are imperfect.

Waiting for permission is just another way to fall further behind.


The Bottom Line

If you’re behind, acknowledge it and move. You don’t catch up by being careful, exhaustive, or perfectly informed.

You catch up by being curious, acting quickly on real work, and learning faster than your comfort level would prefer.

AI doesn’t reward hesitation. It rewards momentum.

I’m happy to chat about this with anyone curious to learn more. My office hours are free and open.
