One of the questions being asked a lot these days is "what is the right thing to teach (or learn) given the capabilities of AI?" It's a valid question. We teach arithmetic even though few of us do much of it manually now - but it's still important to know how numbers work. Where is the line, now that AI can do so much more for us? How much programming should you learn, versus just learning to get a model to do it for you?
The easy answer to this tends to be something like "we happen to be teaching precisely the correct fundamentals for a complete understanding of the world. Yay, liberal arts education! Just do that." I think there is some truth to that, but the reality is more complicated.
Epistemology is the study of knowing (roughly). How do we know what we know? There are (in my layman's sense) roughly three ways to "know" something: experience it personally, hear it from a trusted source (an expert), or believe a large enough crowd of non-experts. Each of these involves some form of trust - in your senses, in the expert and whatever system produced them, or in statistics.
The internet was already breaking all of these mechanisms, and AI will break them further: we will have better and better fake evidence, so you have to be careful about primary observation; there are better and better fake experts (the famous argument about AI being good at definitive bullshit); and social media already gave us crowds that believe pretty much anything you want (hello, flat earthers, I hear you're all around the globe!). AI will amplify that even further: assistants that potentially tell us whatever we want to hear, in elaborate, patient detail.
So...what is the skill set to teach? It has to be "teach the ability to learn well". What does that mean? Well, it's a mixture of things: some fundamentals about how the world works, to start. Things like Occam's razor or the idea of a falsifiable hypothesis, the different forms of cognitive bias and error, the fundamentals of statistics and probability, etc. And some specifics about science, math, biology, art, politics, and history.
The goal is to be able to come into a complex environment, with contradictory opinions but good resources (like search and AI), and rigorously and accurately form a good opinion about a topic you were previously unfamiliar with. This is useful not just for navigating the world but for navigating tasks - it's likely that much of what work will become is exactly that kind of abstract behavior, where you will have to use the AI to help navigate something unfamiliar, because by definition anything familiar will soon become rote and automated.
This is as hard as any educational process ever is, because there are always opinions about what "truth" is and what is worth learning. For myself, I look to pragmatism: what works? I like the scientific method - the science community has all kinds of problems and perverse incentives, but it is as robust and rigorous as its practitioners can make it; it is constantly trying to improve; it is centered on doubt and asking questions rather than certainty and assertions; and the basic pattern of hypothesize, test, control, observe, iterate has gotten us pretty far.
The things we have needed to learn have changed many times over our history. Until fairly recently, most of what you needed to know was how to feed yourself, or the basics of your trade. Then the world opened up and we began teaching skills that applied more broadly to the industrial age. We are on the cusp of another mass shift. The industrial revolution was the first time we had massive surplus physical energy available as a species; this is the first time we've had massive surplus cognitive energy available. What we will need to learn and be able to do will shift accordingly, just as it has in the past.
4 Comments
To this end, I'm trying to establish a theory and community of practice around the discipline of "Semantic Engineering." It is the fruit of a growing understanding of the value of applied semantics, especially from a data-centric perspective, but also of a recognition of the inaccessibility of the process (and work products) of semantic ontology as it exists today, and of the maturing of natural language interfaces.
Several disciplines have converged around the craft of modeling and managing information and knowledge, but well-intentioned initiatives that genuinely improve an organization's understanding of itself and its needs most often end their life cycle as reference works. Semantic Engineering uses those techniques in a landscape that includes (but is not limited to) intelligent systems to align projects and organizations from intention to operation, helping us be clear about what we're doing and why, in ways that enable not just clear communication but also actual operational impact.
Fundamentally, though, it's about helping to ask the questions that make our intentions and interactions understandable, building those out of interconnected structures of meaning and knowledge, and using the underlying modularity/abstraction at the heart of language to build and integrate systems and organizations.
Need to get you out in the world speaking on this topic - fascinating, timely, important!