We are going to need proof of work for everything
Much of the world depends on difficulty that is vanishing
The world of crypto offers a useful technical lesson, even if you generally dislike the industry. It solves a specific problem that the rest of the world is about to face: the collapse of “proof of work.”
This starts with the “copy/paste” problem. In the digital realm, creating a copy of something is usually free. This is great for sharing information, but it is terrible for money. If I can copy-paste a digital dollar the same way I copy-paste a URL, the currency collapses. Bitcoin solved this with Proof of Work. It forced a computer to expend a verifiable amount of energy to append a block of transactions to the shared ledger. It introduced a “cost of creation” to the digital world to ensure security. It made it economically infeasible to fake the record.
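The asymmetry at the heart of this idea is easy to show concretely: finding a valid answer is expensive, but checking one costs a single hash. Here is a minimal sketch in Python (a toy hash puzzle, not Bitcoin’s actual block format):

```python
import hashlib

def proof_of_work(data: str, difficulty: int = 4) -> int:
    """Find a nonce whose SHA-256 hash with `data` starts with
    `difficulty` zero hex digits. This loop is the expensive part:
    the only way to find a nonce is brute force."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(data: str, nonce: int, difficulty: int = 4) -> bool:
    """Verification costs one hash, no matter how long mining took."""
    digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

Raising `difficulty` by one hex digit multiplies the expected mining work by 16, while verification stays constant. That verifiable, tunable cost is the “proof” in proof of work.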
The idea that hit me this morning is that society has always relied on its own implicit version of Proof of Work for all kinds of things. We have operated on the assumption that if a document, a photo, a technical report, a slide deck, a student essay, or any “cognitive artifact” was complex, it was hard to produce. That difficulty was our proxy for truth. If you saw a photo of a car accident, you trusted it because faking a realistic photo required immense skill and time. If a student turned in a ten-page paper, the artifact itself proved they had engaged with the material.
AI is effectively removing that proof of work from human society. It has driven the cost of that production to zero. We can now trivially generate voices, images, video, code, homework, research papers, political analysis, news articles, medical reports and on and on. This is the exact environment that forced Satoshi to write the Bitcoin paper: a world where digital artifacts are impossible to trust because they are too cheap to create.
The “homework crisis” in education is the clearest example of this signal collapse. It used to be that the “homework artifact”—the essay or assignment itself—was proof that the student had done the learning. Now, those artifacts can be generated in seconds. The correlation between the paper (the result) and the thinking (the work) has been severed.
This is true up and down the stack. A long, well-formatted research report used to imply weeks of attention and energy. Now it implies nothing more than a decent prompt. We are entering a low-trust world where “seeing is believing” is a liability. The implicit trust we placed in documents, images, and recordings is going to be disrupted fundamentally because the barrier to entry for faking reality has evaporated.
So where does this leave us? We likely won’t solve this with energy-intensive mining like Bitcoin: it doesn’t scale to everyday social interaction, and human trust lacks the clean mathematical structure that makes a hash puzzle cheap to state and cheap to verify.
We might move toward a “signed reality.” Just as we use SSL certificates to prove a website is genuine, we may soon demand that cameras and microphones cryptographically sign their content at the moment of capture. If a photo of a car accident doesn’t have a digital chain of custody proving it came from a specific lens at a specific time, insurance companies simply won’t accept it.
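The mechanics of “signed at the moment of capture” can be sketched in a few lines. Real systems (the C2PA provenance standard, for example) use an asymmetric key pair embedded in the device so anyone can verify without knowing a secret; the sketch below substitutes an HMAC with a hypothetical device key to stay stdlib-only, but the shape of the chain of custody is the same:

```python
import hashlib
import hmac
import json

# Simplified stand-in: a real camera would hold a private key in hardware
# and publish the matching public key. A shared HMAC key keeps this toy
# example self-contained.
DEVICE_KEY = b"hypothetical-key-burned-into-the-camera"

def sign_capture(image_bytes: bytes, timestamp: float, device_id: str) -> dict:
    """Bind the pixels, the capture time, and the device identity
    into one signed record."""
    payload = json.dumps({
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "timestamp": timestamp,
        "device_id": device_id,
    }, sort_keys=True)
    sig = hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Any edit to the image or to the metadata breaks the signature."""
    expected = hmac.new(DEVICE_KEY, record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False
    claimed = json.loads(record["payload"])["sha256"]
    return claimed == hashlib.sha256(image_bytes).hexdigest()
```

The point of the design is that the signature covers a hash of the pixels plus the metadata together, so you cannot swap in a different image, backdate the timestamp, or claim a different lens without invalidating the record.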
Second, reputation will replace the artifact. Since we can no longer judge the work itself—because a bot could have made it—we will have to judge the source. We might see a return to a “web of trust,” where I only believe a report if it was signed by a specific person I know, who is vouched for by others I know. The open internet where anonymous content has merit might shrink; closed loops of verified identities might grow.
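A web of trust is, at bottom, reachability in a vouching graph: I trust a source if a short chain of introductions connects me to it. A minimal sketch, with a hypothetical graph and hop limit:

```python
from collections import deque

# Hypothetical vouching graph: an edge means "A vouches for B".
VOUCHES = {
    "me": ["alice", "bob"],
    "alice": ["carol"],
    "bob": ["alice"],
    "carol": ["dave"],
}

def trusted(source: str, root: str = "me", max_hops: int = 2) -> bool:
    """Trust `source` only if a vouching chain from `root` reaches it
    within `max_hops` introductions (breadth-first search)."""
    frontier = deque([(root, 0)])
    seen = {root}
    while frontier:
        node, hops = frontier.popleft()
        if node == source:
            return True
        if hops == max_hops:
            continue  # chain is already as long as we allow
        for nxt in VOUCHES.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return False
```

The hop limit is the interesting knob: trust dilutes with each introduction, so a vouch-of-a-vouch-of-a-vouch may count for nothing, which is exactly why such networks tend toward the closed loops described above.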
I think we might also shift from proxies to outcomes for a lot of things. I see this in engineering - I can’t really evaluate an engineer now on code production, or even complex designs. I have to see outcomes - are they making something useful happen? In a weird way, we might start to care less about these artifacts - I care less and less about code now - and more about their effects. Maybe school turns into demonstrations of skill, not just homework. Scientific papers aren’t interesting without working examples, etc. We may all just have to become much more skeptical.
It’s tempting to say that we will go back to the analog - you can only trust something you put your hands on, see in person, etc. I hope not - that’s going to be a much slower, harder to scale, more complex and less fun world. But that’s probably the default unless we figure out something better. I don’t like this at all - I am a strong believer in the promise of AI to unlock human creative potential and make our lives better, but a collapse of trust that leads us to a slower, less efficient age would wipe out a lot of that benefit. And we are going to get the cheap artifacts whether or not we get the upside - that’s already here to a large degree, and it’s likely that within a year or so, almost nothing AI created will be easy to detect as such.
We are going to need proof of work, or something like it, for much of the world, and soon.

I'm reminded of Donald Hoffman's view, that consciousness is an interface that allows us to see and interact only with the parts of reality that are useful to us. Everything else is invisible.
No amount of effort can render a useless truth visible, nor a useful lie invisible.
https://www.youtube.com/results?search_query=donald+hoffman+reality+is+an+illusion
Cryptocurrency faced a similar problem - the energy cost of proof of work doesn’t scale - and introduced proof of stake. I think this is where it will go in the future.
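The core move in proof of stake is replacing burned energy with locked-up value: validators are chosen in proportion to what they have at risk. A toy stake-weighted selection in Python (illustrative only; real protocols like Ethereum’s add verifiable randomness and slashing penalties for misbehavior):

```python
import hashlib

# Hypothetical validator set with the value each has locked up.
STAKES = {"alice": 50, "bob": 30, "carol": 20}

def select_validator(seed: str) -> str:
    """Deterministically pick a validator, weighted by stake, from a
    public seed (e.g. the previous block hash). More stake means more
    chances to be chosen - and more to lose if you cheat."""
    draw = int(hashlib.sha256(seed.encode()).hexdigest(), 16) % sum(STAKES.values())
    for name, stake in STAKES.items():
        if draw < stake:
            return name
        draw -= stake
    raise RuntimeError("unreachable: draw is always below the total stake")
```

The security argument mirrors proof of work’s: faking the record is not impossible, just economically irrational, because the attacker’s own stake is what gets destroyed.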