You can call me "Coach."
RRCA Level 1 Certification: complete. Also, some thoughts on the limits of LLMs.
Earlier this morning, around 9:30, I found myself pounding on a dummy’s chest—hard enough, I hoped, to bring it out of its plastic coma. No, this isn’t some sort of sneaky critique of American-style hyper-consumerism. I was practicing chest compressions in a CPR class at the American Red Cross in Indianapolis.
In a few hours, once I receive the final email from our terrific instructor, Natalie, I will have crossed off the final requirement for my Road Runners Club of America (RRCA) level 1 coaching certification: completing the in-person portion of my CPR/First Aid training.
As you can see from the pic, my training consisted of the full enchilada: adult, child, and infant CPR/First Aid with training on the AED (the little machine that lets you yell “clear!” with a firm, commanding voice). But this is only the last piece of the puzzle. RRCA certification requires a lot more than just dummy pounding and infant slapping, though there’s plenty of that, too.
Back in December, I hopped on Zoom for eighteen hours over the span of a weekend to learn, alongside some thirty other coaching hopefuls from around the US, how to develop training plans, deal with common running injuries, coach runners over mental hurdles, talk about nutrition, and generally provide support and knowledge to runners of all shapes, sizes, ages, speeds and slownesses (h/t Spinoza), and abilities. As many of you know, I’ve been actively involved with the leadership of Indy Runners for going on six years now, and I have been a member of the club since 2015. My hope is to use some of what I’ve learned through RRCA to bring more folks into the distance running fold this winter and spring, starting with Indy Runners’ longstanding Spring Training Program, which officially kicks off tomorrow morning with our first Saturday run.
Obviously, the First Aid/CPR stuff is essential for keeping people safe on the course. Road running can be a contact sport (like when you fall and make contact with the cold, unforgiving ground), so you have to be prepared for anything. What struck me this morning wasn’t just the physicality of the training, though there is something sobering about how exhausting good CPR actually is. One of the points they make repeatedly in the online training modules is how important it is to trade off chest compressions with another person (if possible) to avoid fatigue. At the time, I sort of thought this was overstating things, but I learned quickly that saving lives—even plastic ones!—is hard work.
Rather, it was the way the whole certification process insists on judgment, presence, and responsibility rather than just box-checking and watching online videos. You can memorize protocols, sure, and the online portion drills this into you (“Check, Call, Care”), but the real challenge is keeping all this straight in your head while keeping calm, reading a situation quickly, navigating consent, adapting on the fly when things don’t go according to plan, and acting decisively in the moment. That’s true whether you’re responding to a fall on a long run, helping someone through a mental low point at mile nine, or figuring out how to support a runner whose goals or abilities don’t look anything like your own. None of that shows up neatly on a certificate, but it’s the real point of the exercise.
And that, I think, is what I keep coming back to as I move into this next season with Indy Runners. The RRCA process isn’t about becoming an expert who dispenses wisdom from on high, but learning how to show up well for other people, especially when conditions are less than ideal—as they often are in winter in the Midwest. Running teaches that lesson over and over again: plans change, weather turns foul, bodies get injured, confidence dips, things fall apart. The job of a coach—or really, anyone in a position of care—is to help others keep moving forward anyway, safely and with a bit more self-trust than they had before. If pounding on a plastic chest is part of what it takes to do that responsibly, I’m happy to do the pounding.

Lately it feels hard to ignore the sense that large language models (LLMs) may be approaching a kind of conceptual ceiling. There has been a lot of talk about the supposed shift from text-based LLMs to so-called “world models,” which exist today only in the earliest experimental stages. I don’t mean that LLMs have stopped improving—clearly they haven’t; accuracy and detail have improved markedly even in just the last six months—but the gains feel increasingly incremental, more polish than radical transformation.
World models, conceptually, are physics-aware models with a high degree of spatial intelligence. LLMs, by contrast, are playing with an entirely text-based version of the world; another way of saying this is that LLMs can only “know” via text (training data). This feels almost too obvious to write down, but it bears saying plainly: LLMs don’t live in the “real world.” They predict, over and over, which word is most likely to come next, given everything that came before, using weights learned from training data. The effects are amazing, but limited in obvious ways because they don’t “understand” the world in terms of reality, space, time, cause-and-effect, etc. If predicting the next token has limits, the thinking goes, then maybe predicting the next frame, the next interaction in a simulated physical world, opens up a different path forward. (“What happens if I pull this part out of the flux capacitor?”) The language coming out of the major labs suggests as much: a turn from words to worlds, from disembodied text (with its inborn mania for completion at all costs) to models that can reason in space, time, and consequence. Whether that’s a genuine paradigm shift or just the next scaling story remains very much an open question.
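To make “knowing only via text” concrete, here is a toy sketch in Python. It is nothing remotely like a real LLM’s architecture; it is just a bigram counter over a made-up corpus (the corpus and function names are mine, purely for illustration), but it shows what it means to predict the next word from patterns in training text and from nothing else.

```python
# A toy "language model": it knows the world only through its training
# text. Real LLMs are neural networks trained on enormous corpora; this
# is just a bigram counter, but the limitation is the same in kind.
from collections import Counter, defaultdict

corpus = (
    "the runner laced up her shoes and the runner hit the road "
    "the road was icy and the runner slowed down"
).split()

# Count how often each word follows each other word in the training text.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most likely to come next, per the training data."""
    options = followers.get(word)
    if not options:
        return "<no idea>"  # no training experience, no prediction
    return options.most_common(1)[0][0]

print(predict_next("the"))  # "runner" -- simply the most common follower
print(predict_next("icy"))  # "and" -- pure pattern; no concept of ice
```

The point is not the code but the epistemology: swap the corpus and this little model’s entire “world” changes with it. Scale the same idea up by many orders of magnitude and you get something far more impressive, but the knowing is still textual all the way down.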
What gives me pause, though, is how familiar this move feels. I’ve seen versions of it before in other domains: the hope that if we can just model reality with enough fidelity, good judgment will follow. But experience can be tricky, especially experience in situations where things go wrong (like distance running or complex car repairs). You can know every procedure and still fail to act well when conditions are messy or ambiguous or emotionally fraught. Or when you bust your hand open trying to loosen a part. Aristotle’s ancient notion of phronesis names this gap: the difference between knowing rules and knowing how to decide when rules are insufficient. Phronesis depends on understanding something about how the world works, and LLMs in their current text-enthralled manifestation don’t have direct access to that kind of understanding. They’re like the bookish kid you grew up with who for some reason never goes outside. World models may address a real weakness in current LLMs, especially their lack of grounding in physical causality (i.e., reality). Still, it’s not obvious that richer simulations translate into better practical judgment rather than simply more detailed forms of procedural competence.
I find myself wondering, then, whether the excitement around spatial intelligence risks repeating an old mistake: mistaking improved representations for wisdom itself. Coaching, teaching, and even first aid training have taught me that good judgment is less about accurately modeling environments than about responding responsibly to people within them. That kind of knowing is relational, situational, and ethically loaded in ways that resist clean abstraction—and AI, currently, is all about clean abstraction in the service of efficiency, legibility, and control. Maybe world models will push AI in genuinely new directions; maybe they’ll prove essential for certain kinds of tasks. I’m not ready to rule that out. But I’m also not convinced that moving from text to space, from language to physics, gets us any closer to phronesis—or that it solves the deeper problem of what it means to act well when the model, however sophisticated, inevitably runs out.
AI is no plastic dummy, but it’s certainly not human either.