Learning Lessons from Technologists
When I joined Learning Tapestry in 2018, my goal was to view learning from a different angle. As I have gone deeper down the technology rabbit hole, I have found many instructive comparisons between technology and learning. The most transferable idea for me is that I now view learning through a systems and process lens. That lens has revealed the purpose of standards, the concept of interoperability (i.e., the ability of two systems to “talk to each other” and share information), and the concept of emergence, of creating conditions rather than controlling inputs. I’ve written about these ideas in many of the posts I’ve shared so far.
Now AI has entered the conversation. This is opening up a whole new set of lessons that I think are worth noting as we navigate new opportunities and their associated risks.
One interview I listened to recently was between Lenny of Lenny’s Newsletter and Simon Willison, a career software engineer and thought leader. While the interview focused solely on the impact of AI on software engineers, I believe several of its lessons carry over to teaching and learning:
What’s easy to measure gets automated first.
AI makes creation easier, but responsible use and defining quality require human expertise.
When AI can do the work and producing it no longer defines the job, we have to rethink what the job actually is.
Proof of use matters.
What’s easy to measure gets automated first.
Simon speaks to this idea from the perspective of coding. He explains that AI tools are largely doing the coding work because it is quantifiable and verifiable. You code. You run it. You can see that it works (or doesn’t). He says his job has largely shifted from doing the coding to managing the AI agents that are doing the coding. As a result, where he spends most of his time now is very different from where he spent it six months ago. He even suggests that by the end of this year (2026), a majority of programmers will be using AI coding tools to do 95% of their coding work.
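For readers who don’t program, here is a minimal sketch of what “verifiable” means in code. The function and test below are my own hypothetical illustration, not an example from the interview: running the test produces an instant, objective pass/fail signal, the kind of feedback loop most classroom learning lacks.

```python
# Hypothetical illustration: code can be checked by more code.
# Running this file gives an unambiguous pass/fail answer,
# which is part of why coding was automated first.

def letter_grade(score: int) -> str:
    """Convert a 0-100 score to a simple letter grade."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    return "F"

def test_letter_grade():
    # If any assertion fails, the test fails loudly.
    assert letter_grade(95) == "A"
    assert letter_grade(83) == "B"
    assert letter_grade(70) == "C"
    assert letter_grade(42) == "F"

if __name__ == "__main__":
    test_letter_grade()
    print("All checks passed.")
```

An AI agent (or a human) can run a loop like this hundreds of times an hour. There is no equivalent one-line check for whether a student has understood a concept.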
How does this apply to education?
Our education systems have largely relied on the idea of standards to dictate what we teach and assessments to show whether we taught it. We have built learning systems in education that run on the exact kinds of outputs AI tools can now produce without us. The outputs we seek for learning are no longer hard to come by. They don’t distinguish someone who has learned from someone who has not. This is disruptive to what we’ve come to accept as a “truth” in education.
If the goal of teaching in our learning system is to teach what is easy to measure, then we might as well let AI tools replace us. Like programmers, we need to rethink what the job of teaching and learning is and where to focus our efforts.
However, this is a tricky comparison. Coding is a specialized skill. It is something learned after there is a general foundation of reading and language arts, math, science, and history.
I firmly believe that we need human teachers because the relationship between teacher and students is key to learning. Trust creates safety, which in turn creates optimal conditions for learning. That relationship also motivates learning. I wrote about this in “Beyond Replication.” But if teachers keep teaching only what’s assessed, the case that they can be replaced becomes much easier to make.
There are many, many problematic uses of AI technologies for learning, and I do not think that AI tutors will be able to replace teachers. Yet in the interview, Simon shares that AI tools can actually be very helpful for onboarding new programmers, both in teaching them how to code and in assessing their work. Are there particular groups of students for whom existing AI tools might be more supportive? For example, would having written or oral conversations with an AI tool enhance language learning for multilingual learners? What learning is foundational and needs to be taught directly even if AI can perform it? Is there learning that can be automated? Do we need to consider AI tools for older children only?
Programmers are learning to adapt and figure out new ways of working with AI tools so that they are still relevant. If we outright reject AI for learning and refuse to adapt, then we risk inviting the very thing we’re trying to avoid. As the adage goes, “If we don’t bend, we break.”
AI makes creation easier, but responsible use and defining quality require human expertise.
Simon says that AI “came for us first” because code is verifiable. He says that the application of AI in other fields like law or education is lagging behind because it is much harder to determine whether the AI has done a good job. There isn’t as clear a “right” or “wrong.” He also expresses appreciation for the democratization of “getting a computer to do stuff for you, of automating tedious things in your life by knocking out these little tools.” However, he warns that there are limits to what we can do responsibly, and he suggests we need experts to define what those limits are.
How does this apply to education?
Teachers must be decision makers and implementers. Developing and handing down AI-use policy for teachers to follow will likely not lead to responsible use by teachers or students. But this also isn’t a “teachers need more training” argument. Teachers must become the bearers of quality when integrating AI tools into classrooms. Our learning systems should be designed around their judgment of quality, not compliance.
This is where teachers and learning experts are super important. Too often, I see companies design products for students rather than for teachers, but then expect a teacher to implement the product with students. This happens again and again with curriculum. I wrote about this in “The Case for TX.”
If we are designing products to be used in a classroom, we HAVE to design those products with teachers as the first-line user. If the expectation is for a teacher to implement a product exactly as designed, then we might as well cut the teacher out of the delivery pipeline and just send the product directly to students. In some scenarios, that’s exactly what companies intend.
I get it. It’s risky. What if a teacher doesn’t do the best thing and the product doesn’t perform as hoped? But the on-the-ground reality is that if we try to design around teachers, then we’re saying that teachers aren’t important to learning. This continual bypassing of the teacher undermines their role in the system and makes them easier to cut out, especially as AI tutors are knocking on the door of our school systems.
Instead of routing around teachers, we need to be elevating their expertise. Instead of spending money and time developing policy for teachers to follow, we should be working with teachers to figure out what responsible use of AI tools in learning looks like for their classrooms. That might look very different at different grade levels or in different communities even within the same district.
Simon speaks about “agentic engineering,” where a professional software engineer uses AI coding tools for different parts of the process to get really good, production-ready results. He suggests that it takes a more expert level of knowledge, skill, and understanding to put the AI agents inside these coding tools to work on that kind of software development. Applying that to education, I could see “agentic teaching” involving AI tools for different parts of teaching to enhance learning. Maybe an AI tool creates different versions of a lesson plan; an expert teacher then has to be able to vet those options to ensure their quality. If the technologies aren’t producing quality outputs, expert teachers won’t use them. More novice teachers might.
The lesson for me from the technologists here is to upskill and then empower teachers to define the limits and make the expert decisions about learning that we charge them with daily.
When AI can do the work and producing it no longer defines the job, we have to rethink what the job actually is.
As Simon and Lenny discuss in the interview, AI has become increasingly good at coming up with ideas, which raises the question: If AI can do things that we once considered important to our work, where does that leave us humans? Simon makes two main suggestions:
Deliberately lean into and use AI to challenge yourself, expand your skills, and take on more ambitious learning rather than letting it replace your thinking: “How do I help this make me better?”
As AI accelerates change, the most essential human skill is agency: the ability to choose meaningful problems and use technology intentionally to grow, adapt, and do new things.
How does this apply to education?
The one desire that I think is nearly universal among educators is getting rid of uncertainty. When things move around us, we cling to our default beliefs, institutionalized norms, and untouchable assumptions. We cite research and evidence of something working in some other state or district and assume it will work for us because “kids are kids, you know?” That creates a feeling of certainty we can rely on to move forward. But there really is no certainty in that. Things around us continue to change. The context in which we grew up, and in which we learned how to teach children, is different from the one children face now. We have smartphones instead of books, social media instead of playgroups, and AI tools instead of ____? Do we continue operating as normal, hoping that kids really are kids? Or do we accept the uncertainty of change and that what we “know” to be true is being challenged?
In the interview, Simon argues that “the only universal skill is being able to roll with the changes” because “everything is changing so fast right now.” He provides examples of skills that he learned as an engineer that now AI tools can do lightning fast and without rest. This has led him to reflect on what it means to be a software engineer.
I think we need to reflect on the same in education: What is “learning” and how is it done? Does it mean checking off a list of knowledge and skills defined by standards? Does it mean performing well on a multiple-choice assessment? There’s ample evidence that AI has already mastered this kind of learning. What we need to grapple with is how we develop humans to hold that knowledge and perform those skills so that they can then manage the AI tools.
One of the main complaints that I hear about AI use related to learning is the concept of “cognitive outsourcing” and the potential risks to our ability to think deeply about ideas if we turn all of our thinking over to AI tools. Simon expresses a similar concern focused on his work as a programmer. He says, “A lot of people worry about skill atrophy. If the AI is doing it for you, you’re not learning anything. I think if you’re worried about that, you push back at it. You have to be mindful about how you’re applying the technology and think, okay, I’ve been given this thing that can answer any question and often gets it right, doesn’t always get it right. How can I use this to amplify my own skills, to learn new things, to take on much more ambitious projects?”
He then adds that even as he gets his work done so much faster, he is doing a different kind of thinking and is mentally exhausted at the end of the day. He is using his brain so differently that the effort it takes surprises him. There is research to back this up. Cognitive scientist Daniel Willingham speaks to the idea that thinking requires a great deal of effort and we’re generally not good at it, so our brains often seek ways to optimize and reduce mental effort. That means our brains are drawn to what we know or what we can be certain about. If we already have well-worn thinking pathways, developing new ways of thinking is both scary and exhausting. But over time, it will become what we know and feel certain about.
Given this bias, our default will be to try to fit AI tools into our existing systems and models. That will lead us to create AI tutors that coach kids on standards or how to get the right answer or AI apps that transform worksheets into different languages. While there might be some value there, is that ambitious enough? Does it allow us to envision a future where students are using AI tools to do more learning or amplify their thinking or produce things that didn’t even seem feasible 6 months ago?
What I’m arguing for is a more agile way of approaching teaching and learning, which requires us to manage uncertainty rather than trying to get rid of it. It requires us to be more adaptable in our thinking.
This makes me think of fashion. Every season there is some new fashion trend that I see and am immediately repulsed by. I think to myself, “How ugly is that! Why are people paying for that? Who would ever wear that?” Then, fast forward three to six months. Not only do I now find cute the exact thing that repulsed me a few months earlier, but I am buying it. What on Earth happened? The shift is so subtle that I don’t even notice it. It’s like a gradual thawing that hits some kind of tipping point where my perception flips. And it goes the other way, too. Something I was obsessed with two years ago may now feel outdated and ugly to me.
This constant change in perception and shift in what I value and desire seems harmless when it comes to fashion (except my bank account does take a hit). When we talk about learning, we tend to be more rigid in what we define as “quality” and for good reason. We don’t want learning to be defined by what is on trend or in vogue.
But let’s also think about raising children. Guidance on sleeping positions for babies changes based on what we learn works best. Car safety rules around booster seats and where children can sit are updated as changes to vehicles and speed limits change the context. What was once considered the “gold standard” can come to seem barbaric once we have more information and examples available to us.
In education, not only do we hold fast to learning ideas that are outdated or disproven (“learning styles,” anyone?), but we also attach moral judgment to those ideas. If you are NOT teaching this content, then you must be doing it wrong. This leads us to become more entrenched in our thinking.
The emergence of AI technologies is providing the impetus for us to evolve our thinking about learning. One of the ideas shared at the Common Sense Summit was, “We’ve known for over a hundred years about child cognitive development. It hasn’t changed.” Really? We have known how children develop cognitively and it is so rigid and defined that we can and should continue to use the same methods and approaches that we did in the 1920s? There’s no chance that our brains have evolved due to the conditions and context of our society? Language evolves. Society evolves. Surely what we consider to be important learning for math, science, history, and reading has changed over time.
Relating this back to my fashion analogy, I think the initial response to something new is to be repulsed by it, but at some point, our perceptions change.
I like to view the idea of “cognitive outsourcing” through this lens. AI feels threatening and ugly because it challenges our notions of what quality learning is. If we define learning by standards, then anything that challenges those standards feels wrong. But what if the standards are outdated? What if the tools we now have can largely do the thinking captured by the standards? Does that mean the tools are wrong, or that we need to redefine what we mean by quality learning?
Proof of use matters.
Simon explains that software engineers have largely relied on the development of “production quality” working software, with documentation and tests, as a sign of quality coding. Since AI tools can now, with almost no time or human effort, spin up working software with documentation and tests, Simon argues that we can no longer rely on those as signals of quality. He suggests that what matters more is whether users have actually used the product and built confidence through real-world application. Without that lived experience, Simon says, even well-built tools can feel untrustworthy or unproven. In an AI coding context, then, “proof of use” becomes a more meaningful signal of quality than “proof of creation.”
How does this apply to education?
In coding, results may now matter more than inputs. The whole concept of vibe coding is that you don’t even look at the code that the AI tools produce. Alpha Schools are probably the education equivalent of “vibe coding.” As Simon suggests, use might begin to matter even more. Have you used this curriculum in multiple classrooms and across multiple districts with lots of different learners? What were the results?
We’re quick to label something as “working” and then try to replicate it elsewhere. But the fact that something worked in one classroom, with one group of students, for one purpose, doesn’t mean it will hold up in another context. The real test isn’t whether something worked somewhere. It’s whether it works here, with these students, for this purpose. That has to be worked out in practice.
Unfortunately, getting schools to use these curricula, methods, and approaches is often impossible without data on their effectiveness. The cycle of incremental improvement (co-creation, testing through use, reflection and revision, then more testing through use) is often viewed as too risky. Maybe a big lesson to learn from technology is how to be more agile in designing and developing our instructional materials.
A pattern across all four lessons is that AI is exposing issues in education that we’ve largely ignored. The more we’ve tried to create certainty in learning, the more we’ve built systems that can be automated. If the “right” answer no longer requires understanding, then getting a correct answer is not a reliable signal of learning. If completing tasks can be automated, then completion cannot be our goal. If a machine can pass a test we administer, then maybe the test isn’t capturing what matters.
That leaves us in an uncomfortable but necessary place: We can’t ignore AI and we can’t just plug AI into our existing systems. Instead of worrying whether AI is going to replace teachers or lead to complete outsourcing of all thinking, we need to be focused on examining and redefining what we mean by “quality learning.” What do we value as evidence of it? What role do teachers play in developing it?
This means that we need to account for learning that isn’t easily measured. It means centering teacher judgment and skill in our learning systems. It means designing teaching and learning to embrace complexity and uncertainty.
This is hard work, and it will feel uncomfortable. I believe it will be worth it in the end.

