The JIT Learning Trap: How Google Analytics Exposed a Bigger AI Problem

Why “Just Enough” Learning Is Good—Until It Isn’t

I set up Google Analytics (GA4) the way most people do:

✅ Copy the script → ✅ Paste it into my site → ✅ See traffic appear.

Mission accomplished, right?

Except… something felt off.

At first, I didn’t care. I just wanted to see something light up—to know GA4 was running.

But once I started actually looking at the reports, I noticed a problem:

  • Localhost traffic was mixed with real visitors
  • I had zero idea where users were coming from
  • Clicks, engagement, and actual user behavior? Not tracked.
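The first gap, at least, has a known fix: GA4 can filter out internal and developer traffic in its admin settings. The rule itself is simple; here is a minimal Python sketch of the idea (the function name and hostname list are my illustration, not a GA4 API):

```python
# GA4 handles this as an "internal traffic" filter in its admin UI,
# not in code; this sketch only spells out the rule I had skipped.
INTERNAL_HOSTS = {"localhost", "127.0.0.1", "0.0.0.0"}

def is_real_visitor(hostname: str) -> bool:
    """Return True only for hostnames that should count in reports."""
    return hostname.lower() not in INTERNAL_HOSTS
```

In practice you configure this inside GA4 rather than in code, but writing the rule out makes the gap obvious: I had never told GA4 what "internal" meant.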

I had learned just enough to get GA4 working, but not enough to make it useful. As a previous boss used to say: just enough to be dangerous.

This was JIT (Just-In-Time) Learning in action—I learned only what I needed at the moment.

And that worked… until I needed better insights.


JIT Learning vs. Intentional Learning

🔹 JIT Learning (Just-In-Time) is efficient but reactive.

🔹 Intentional Learning is slower up front but future-proof.

JIT learning was fine when I just wanted to see a number on the screen.

But when I wanted to measure the impact of each blog post, I hit a wall.

The UTM Tag Epiphany

I realized I needed to track:
✅ Which blog posts were driving traffic
✅ Which channels were sending people to my site
✅ Which visitors were actually engaged, not just passing through

That’s when I discovered UTM tags—the tracking parameters that tell GA4 where traffic actually comes from.

It was a game changer.

I added UTMs to a few links:

  • One blog post shared on LinkedIn
  • Another shared directly in a message

And suddenly, I could see the difference in how people arrived at my site.
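Mechanically, a UTM-tagged link is just the original URL with a few extra query parameters that GA4 reads on arrival. A minimal Python sketch of building one (the domain and campaign names here are made up for illustration):

```python
from urllib.parse import urlencode, urlparse

def add_utm(url: str, source: str, medium: str, campaign: str) -> str:
    """Append the standard UTM parameters GA4 uses for attribution."""
    params = urlencode({
        "utm_source": source,      # where the link was shared (e.g. linkedin)
        "utm_medium": medium,      # the channel type (e.g. social, referral)
        "utm_campaign": campaign,  # which post or effort the link belongs to
    })
    # Use "&" if the URL already has a query string, "?" otherwise.
    separator = "&" if urlparse(url).query else "?"
    return f"{url}{separator}{params}"

# The same post, tagged differently per channel:
linkedin_link = add_utm("https://example.com/blog/jit-learning",
                        "linkedin", "social", "jit-post")
dm_link = add_utm("https://example.com/blog/jit-learning",
                  "direct-message", "referral", "jit-post")
```

Tagging the same post differently per channel is what makes the channels distinguishable in GA4's traffic-acquisition reports.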

This was a deeper level of learning—moving beyond “Is this working?” to “How do I make this work for me?”

🚀 But then… I hit another wall.


The Ethical Dilemma: Tracking Feels Like Spying

Adding UTMs was great for analytics—but it also felt… weird.

I started thinking:

  • “Am I okay tracking every click someone makes?”
  • “Would I like it if someone did this to me?”
  • “Do I really need to know this level of detail, or am I just being a data hoarder?”

At first, I rationalized it:
🛠 This is just a tool. I’m just learning how to use it.
📊 It’s just data. I’m not tracking anything personal.

But the moment I considered sending a direct message with a tracked link, I hesitated.

💡 I realized something important: JIT learning had gotten me functional—but now, intentional learning required me to think beyond just “what works” and ask “what’s right?”


The Google Analytics Lesson: AI Doesn’t Match Human Communication

My GA4 learning experience mirrored how we interact with AI today.

The way AI prompting works assumes people:

✅ Know exactly what they want
✅ Ask structured, logical questions
✅ Think like scientists

But real people don’t communicate that way.

Most people (even engineers) think in loops of discovery:

  1. Try something
  2. See what breaks
  3. Learn what matters
  4. Adjust & refine

Even engineers don’t want to engineer every prompt up front—they want a feedback loop that works.

The way I set up GA4 mirrored this exact process:

  • I started with just enough
  • Realized the gaps only after seeing the data
  • Then iterated to make it actually useful

This is how most learning happens.

Yet, AI prompt engineering expects perfection upfront.


AI Prompts Were Designed by Scientists, Not Communicators

Most AI tools assume people will know exactly what to ask.

This is why “prompt engineering” is a thing—because AI isn’t designed to handle the natural, messy, trial-and-error way people learn.

What if AI worked the way we actually learn?

  • Instead of needing a perfect first prompt, the AI should guide you in real time
  • Instead of needing engineering precision, it should understand exploratory loops
  • Instead of punishing vague prompts, it should refine and clarify them dynamically

Right now, AI prompting works like a search engine from 2005:

“Tell me exactly what you want, or I’ll give you garbage.”

But humans learn and communicate through iteration.

We don’t start with the perfect question—we figure it out as we go.


What AI Can Learn from My Google Analytics Mess

The Google Analytics experience proved that:

✅ Learning starts with action (even if it’s flawed)
✅ Only after seeing the outcome do we know what really matters
✅ Iteration is how knowledge actually gets refined

The future of AI shouldn’t be “prompt engineering.”

It should be “iterative learning loops.”

Right now, AI is built for static Q&A.

But humans work in dynamic discovery.

Until AI understands this, we’ll keep pretending “prompt engineering” is a thing—when really, it’s just a patch for a fundamental design flaw in AI-human interaction.


Final Thought: The Danger of Staying in “JIT Mode”

🚀 JIT learning got me started with GA4.
🔥 Intentional learning made GA4 actually useful.

The same applies to AI.

If we only ever learn just enough to get by, we’ll never shape the systems we use.

But if we take control and guide the iteration process, we’ll stop being passive users of AI—and start designing AI that actually works for humans.


Where This Is Going Next

In Blog 2, I'm going to look at the contrast between experimental learning and more execution-focused, repeatable learning by examining Salesforce Agentforce.