Why our gestures still matter
A few weeks after reflecting on the agentic turn in coding with LLMs, I’m still drawn to the question: what do we lose, and what do we cultivate when we let AI take the wheel? My last piece focused on how skill atrophy can creep in when we delegate too much. But as I’ve kept practicing “conversational coding”, my preferred term for this back-and-forth with LLMs, I’m realizing that something deeper is at stake. It’s not just about personal competence, but about how we connect, learn, and build meaning, individually and together.
Not all conversational coding is the same, though. I keep noticing two very different patterns. One looks like assistance: “Explain this concept to me,” or “This is what I observe: can you find tangible elements in this massive amount of information to help me analyze it?” These prompts keep you thinking. The agent removes friction while preserving your engagement. The other pattern looks more like replacement: “Write my essay,” or just “It doesn’t work” without explaining what you tried, or the ultimate abdication, “Just do it, you only live once (yolo).” You offload entirely. No dialogue, no validation.
The distinction matters because learning is part of any task worth doing. The question isn’t “Should I learn this?” but “Is AI helping me learn or preventing me from learning?”
Your brain learns through practice. A French educator recently posted a video, La Fabrique à Idiots (The Idiot Factory), that lays out the neuroscience. Three stages: theory, where the hippocampus encodes new information; practice, where the basal ganglia form automatisms through repeated retrieval; metacognition, where dopamine signals from prediction errors refine your mental pathways. When AI skips the practice stage by generating outputs for you, your brain doesn’t develop the automatisms it needs. This isn’t metaphorical: it is measurable brain atrophy.
The evidence is already visible. Eighty percent of French high schoolers now use LLMs for their homework. Teachers report that homework is dead: they can’t distinguish student work from ChatGPT output. Students get outstanding grades, but they “can’t define the words they used.” Children are asking: “Why learn grammar if AI can produce text?”
The brain atrophy is real. But there’s another loss, subtler yet just as important: harder to measure, but deeply felt. It’s not just “can you think independently?” but also “can you form bonds with others?” and “can you be part of a community?” We use the word community constantly in research and in software development: the Python community, the Rust community, the Data Visualisation community. It’s about people gathering to exchange ideas and stimulate each other.
Some tasks are essentially human; not because AI can’t perform them technically, but because the value is in the human doing them. These activities create connection, meaning, presence. When we ask “Should I delegate this to AI?”, we should also ask: “Is this task’s purpose about the output or about the process?” Bureaucratic forms that machines will review anyway, formatting, data processing: these we can delegate. But learning, teaching, caring, creating bonds with others and with our own developing capacities: these are different.
Lazy AI offloading causes a double loss. Cognitive: your brain never develops the automatisms you need. Relational: your life never develops the connections that make it meaningful. Both losses happen together, and that’s what I want to explore.
The Japanese word waza is often translated as “technique” or “skill,” but it implies something deeper: embodied knowledge, the act of creation itself. When an expert, a teacher, a coder, a parent, uses body and mind to give form to something, that’s a generative gesture. It’s not just about completing a task, but about creating a new reality that didn’t exist before. For example: designing a software structure and writing code accordingly (instead of prompting AI to write it), a parent cooking a meal for family, or a teacher adapting their lesson when something unexpected occurs.
When someone performs such a gesture, they make a space that others can enter. The instant someone steps into that space is called an occasion, a genuine opportunity for connection.
One morning, when I was in elementary school in Southern France, a dolphin beached itself near our school. Something very official was scheduled for that day, probably a grammar lesson on the past subjunctive or division with three-digit numbers, exactly the kind of activity required by the Ministry of Education. But the teacher stopped everything and said, “Just drop your bags, we’re going out.” We met the dolphin. Caregivers were already on the scene. For the rest of the day, we learned about dolphins, their habitat, and how they sometimes become stranded. That was a true generative gesture: an elementary school teacher seeing the moment and creating an experience that could never have happened otherwise.
We saw the dolphin together, gasped together, learned together. That occasion formed bonds, friendships through collective memory. “Remember when we saw the beached dolphin?” becomes a shared foundation. We also formed bonds with the teacher who recognized the moment. The field trip was an occasion for shared discovery. Struggling with code, or with new concepts, and asking for help is another occasion; presenting at an academic conference is yet another.
AI can’t recognize the moment because it has no embodied presence to sense that something unexpected happened. It can’t make the adaptive judgment between grammar and dolphin. It can’t create shared presence: a VR dolphin experienced in separate rooms isn’t a collective embodied encounter. Life is full of once-in-a-lifetime events. A dolphin, snow blocking the streets, an earthquake, or a tragedy impacting the community. AI agents will never adapt to that tangible environment. Teachers always will.
Occasions are singular encounters, the initial spark. Bonds and communities form through many occasions, over time. Occasions come first; you cannot become long-term collaborators or build deep friendships without those initial moments. Study groups, mentorship, parent-child relationships all begin with occasions and grow into bonds through repeated shared experiences. This is the chain: a generative gesture creates occasions, and repeated occasions form bonds.
When AI replaces your generative gesture, this chain breaks.
Imagine elder care through an AI agent. The traditional arrangement goes like this: a caregiver visits daily, adapts to the elder’s needs, brings embodied presence. This creates occasions: conversations, shared moments, reading the elder’s mood and responding. Through repetition, bonds form: trust, companionship, a genuine care relationship. An AI agent can monitor, remind, provide scripted chat. It may be efficient monitoring, but there is no generative gesture because the AI can’t bring embodied presence. No genuine occasions, only scripted exchanges. No bond, because there’s no mutual recognition. Machine assistance is definitely a plus in the hands of a human caregiver, but a full offload would not be a convenience; it would be abandonment.
Or consider an international couple where partners speak different languages. If one learns the other’s language, or if both do, both partners practice, make mistakes, struggle together. That’s the generative gesture. It creates occasions: navigating confusion, moments of breakthrough, frustration. The bond deepens through this mutual effort to understand. If you only use translation, there’s no embodied practice, and no connection: unlike partners who share a language, you never really navigate anything together. You can communicate information, but are you connecting as humans? Translation tools can help while you’re also learning, but the question is about effort and intention: are you trying to learn and bond, or accepting that you won’t understand each other? Honestly, would you consider a romantic relationship where you could only communicate through instant translation, never learning each other’s language?
Students who rely on LLMs face both losses, cognitive and relational. They skip the practice stage that builds cognitive capacity, and they fail to make use of their teachers. They don’t ask for help, don’t create occasions for mentorship, never form educational bonds. The teacher is present but the student has opted out.
Where does this double loss bite hardest?
In education, children are asking the right question and getting the wrong answer. “Why learn grammar if AI produces text?” The utilitarian answer, “You need to be competitive,” accepts that outputs matter more than development. The real answer has both dimensions. Your brain needs practice to develop thinking capacity. Without it, you are dependent forever. And learning together creates bonds. School is not just information transfer. It is where you learn to create occasions for connection.
There is a reason why, years later, kids remember field trips and not ordinary lessons. A field trip is memorable because it creates real-world encounters that surprise you and correct your assumptions. You get to practice adapting instead of just memorizing facts. The teacher’s decision to organize the trip is what creates that special occasion. All the students share the experience together. Then bonds form through this kind of collective discovery.
This cannot be replicated virtually because VR skips embodied practice. Your body is not actually there. You’re not experiencing the same occasions when you’re in separate rooms.
Good educational AI would force students to practice. It would create occasions for collaboration: “Your classmate struggled here too, so you discuss together.” It would create occasions for students to bond, not replace the teacher’s presence. It’s more about teachers using AI to help them design better courses than about handing these tools to children, for whom they are unsuited at this developmental stage.
Education is not the only domain facing this crisis. Software development has its own competence pipeline, and it’s breaking too. Where do senior engineers come from? Junior engineers develop their generative gesture. They debug, create occasions to encounter real problems. They join communities, form bonds through mentorship. Years of practice develop automatisms. Years of encounters deepen expertise. They become senior through both dimensions. If juniors use AI from their very first day, they never practice enough to develop real automatisms. The entire pipeline breaks down. In fact, many new juniors do not even know what questions to ask an AI assistant, let alone how to prompt it effectively.
After years of this practice, a practitioner crosses an expertise threshold. AI assistance works when you’ve already completed the practice stage. A senior developer using Copilot can judge the suggestions because they have automatisms. A junior developer using Copilot cannot judge because they lack automatisms and drown in code they don’t understand. Augmenting expertise works; replacing apprenticeship destroys the foundation.
This isn’t just about productivity or advancing in your career: the best work is enjoyable in itself. Solving tough problems, experimenting, building something new, getting stuck and then unstuck: these are experiences that make work and learning meaningful. If you skip the practice and hand everything over to AI, you miss the satisfaction and sense of achievement that come from doing meaningful things yourself.
Then there’s the leisure question. Some people love crosswords. AI can solve them instantly. Is the pleasure in having the crossword solved by a machine or in solving it? There’s a different pleasure in coding a solver for Sudoku or Wordle. You don’t want a tool to solve Sudoku for you: you want to build the tool and learn the concepts. Building the tool is not using AI assistance: it’s you performing a different generative gesture.
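To make this concrete, here is the kind of small, self-contained project I have in mind: a bare-bones backtracking Sudoku solver in Python. It’s a sketch, not a polished implementation, and that’s precisely the point: the value is in writing, breaking, and fixing something like this yourself rather than asking an agent to spit it out.

```python
# A minimal backtracking Sudoku solver: the kind of small project where
# the learning is in writing it yourself, not in having it written for you.
# `grid` is a 9x9 list of lists of ints; 0 marks an empty cell.

def valid(grid, row, col, digit):
    """Check whether placing `digit` at (row, col) breaks any Sudoku rule."""
    if digit in grid[row]:
        return False
    if digit in (grid[r][col] for r in range(9)):
        return False
    box_r, box_c = 3 * (row // 3), 3 * (col // 3)
    return all(grid[r][c] != digit
               for r in range(box_r, box_r + 3)
               for c in range(box_c, box_c + 3))

def solve(grid):
    """Fill the grid in place; return True if a solution exists."""
    for row in range(9):
        for col in range(9):
            if grid[row][col] == 0:
                for digit in range(1, 10):
                    if valid(grid, row, col, digit):
                        grid[row][col] = digit
                        if solve(grid):
                            return True
                        grid[row][col] = 0  # undo the guess and try the next digit
                return False  # no digit fits here: backtrack upstream
    return True  # no empty cell left: solved
```

The first time you watch the backtracking step undo a wrong guess, you understand recursion in a way no generated snippet can give you.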
Same for crafts and cooking. Technology can make things efficiently, but efficiency wasn’t the point. The point was engagement: practice plus gesture, your generative gesture rather than the output. If AI does all the work, what would you do with your hands in the time you gain? Just become a 10x developer? What practice do you want? What gestures create meaning for you? What connections do you want to form?
So what does good AI assistance actually look like? It pushes you through the practice stage and preserves your generative gestures. It removes friction, not engagement.
Bad AI assistance skips the practice stage and takes over your generative gestures. It removes engagement, not just friction.
These examples point toward design principles.
First: preserve the generative gesture by forcing practice.
- Good design: “Write your answer, I’ll suggest improvements.” The student must produce and the AI provides feedback. Practice stage preserved.
- Bad design: “I’ll write the answer.” The AI’s gesture, not yours. The student bypasses practice. No learning happens.
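To make this first principle concrete, here is a rough sketch of what the “write your answer, I’ll suggest improvements” pattern could look like in code. Everything here is hypothetical: `ask_model` is a placeholder for whatever LLM client you actually use, and the prompt is only one way to phrase the constraint. The guardrail is the point: no draft from the student, no output from the machine.

```python
# Sketch of a feedback-only tutor: it critiques a student's own draft and
# refuses to generate the answer. `ask_model` is a stand-in, not a real API.

FEEDBACK_PROMPT = """You are a writing tutor. Do NOT write or rewrite the answer.
Point out at most three concrete weaknesses in the student's draft, then ask
one question that pushes the student to revise it themselves.

Assignment: {assignment}
Student draft: {draft}"""

def ask_model(prompt: str) -> str:
    """Placeholder: wire this to whatever LLM client you actually use."""
    return "(model feedback would appear here)"

def review_draft(assignment: str, draft: str) -> str:
    """Return feedback on the student's own draft; refuse to work from nothing."""
    if not draft.strip():
        # Preserve the generative gesture: the student must produce first.
        return "Write your own attempt first, then I'll suggest improvements."
    return ask_model(FEEDBACK_PROMPT.format(assignment=assignment, draft=draft))
```

Called with an empty draft, `review_draft` pushes the work back to the student; called with a real attempt, it only ever returns critique, never a finished answer.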
Second: create occasions for human bonding.
- Good design: “Your classmate struggled here too: discuss together?” AI facilitates human connection, creates occasions for encounter between students.
- Bad design: “I’ll give you the solution, no need to ask around.” AI replaces human presence, blocks bonds from shared struggle. Students never create occasions with teachers or peers.
Third: make limits visible.
- Good design: “I can suggest structure, you must validate architecture.” Shows where human judgment is needed, maintains metacognition.

Anthropic’s Constitution admits AI needs human oversight, but the tools are deployed in ways that hide this limitation. Better to make the boundary explicit: “I’ve done X, but Y requires your human judgment.”
Fourth: provoke reflection.
- Good design forces you to ask “Why am I doing this?” It exposes the difference between tedious and meaningful tasks. When AI can do something, it forces you to ask whether you value the output or the process. Is this practice you need or tedium to eliminate? Is this a generative gesture you want to keep or bureaucracy to automate? That reflection clarifies what you value and forces intentionality.
We’re choosing between two futures.
One world optimizes: maximize output volume, minimize time on “tedious” tasks. Efficiency above all. High productivity but low capacity. Many outputs, no understanding. Individuals who can’t think, can’t connect. AI takes over all the generative gestures; humans just consume. Very capitalistic.
The other world cultivates: develop human capacity, preserve generative gestures, be intentional about what to keep versus delegate. Lower output volume but higher human capacity. Fewer outputs, deep understanding. Individuals who think independently and bond genuinely. Humans cultivate generative gestures and AI assists.
Right now, I am afraid we are drifting into the first world without consciously choosing it.
The questions to ask are not “Can AI do this?” but:
- Can AI help me grow while assisting me?
- Do I want to practice this myself?
- What capacities do I want to develop?
- What connections do I want to form?
- What world am I creating for children?
This isn’t inevitable technological progress. It’s a choice about what activities are worth inhabiting, not just completing. What practices build both capacity and connection. What kind of humans we want to be.