8 Comments
Mike Salisbury

So form and medium are no longer reliable signals of quality. This feels like an opportunity to rethink how signals of quality might work. Quality signals could come from something like product reviews for all sorts of work product, but designed with eyes open to the ways in which typical product review mechanisms are gamed today. My intuition is that this new mechanism should be tied to something that is valuable to keep uncorrupted -- something like reputation. If, for instance, my evaluations of the quality of some article, video, or book prove to be valuable in aggregate to others; if others can register trust (revocably) in my evaluations going forward for their own recommendations; and if I receive some value for maintaining that trust (giving me an incentive not to sell out), then I suspect there is enough incentive in the system to replace the old form-based signals of quality. Clearly there is a bootstrapping problem, but this feels to me akin to the current influencer dynamic that people are already familiar with, except that this version would hopefully be more distributed and scaled than just having a handful of Kardashians.
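The mechanism described here -- readers register revocable trust in evaluators, and quality is judged through the lens of whom you trust -- can be sketched in a few lines. This is a toy illustration with hypothetical names and a simple trust-weighted average, not a real protocol design:

```python
from collections import defaultdict

class TrustNetwork:
    """Toy sketch: revocable trust in evaluators, trust-weighted quality scores."""

    def __init__(self):
        self.trust = defaultdict(dict)    # reader -> {evaluator: weight}
        self.ratings = defaultdict(dict)  # item -> {evaluator: score}

    def register_trust(self, reader, evaluator, weight=1.0):
        self.trust[reader][evaluator] = weight

    def revoke_trust(self, reader, evaluator):
        # Trust is revocable: an evaluator who "sells out" loses influence.
        self.trust[reader].pop(evaluator, None)

    def rate(self, evaluator, item, score):
        self.ratings[item][evaluator] = score

    def quality_for(self, reader, item):
        """Quality of an item as seen through this reader's trusted evaluators."""
        weights = self.trust[reader]
        rated = self.ratings[item]
        num = sum(weights[e] * s for e, s in rated.items() if e in weights)
        den = sum(weights[e] for e in rated if e in weights)
        return num / den if den else None
```

The point of the sketch is the amortization described below: one evaluator's attention, once rated trustworthy, serves every reader who trusts them.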

The result would (could) be that people can amortize the attention spent on evaluating the quality of any given work across the population, which can free up most of us from consuming content we don't find valuable, and provide explicit value to those who facilitate that ability for the rest of us.

Andreas Welsch

I’ve recently had similar thoughts. If we go one step back, I see the following progressions: Access > Generate > Filter

Before the Internet, information was highly siloed and localized. That changed, and anyone could access information. But, as you point out, creating or generating anything still required time and effort… work and skill. We’ve solved that now, too, with Generative AI. So, if anyone can generate anything, what happens next? We drown in empty passages of text that fill our inboxes. We have a filtering problem. What is true and what is false? What is important and urgent vs. just informational (at best)? The productivity gains that this technology promises are being circumvented by the people using it to satisfy the systems and constructs in which they operate.

David Berg

In some ways, we've had inverted quality signals for a while. A mass-produced item can often be built with better quality and consistency than a hand-built item. Two factors have kept that in check: (1) the person who's mass-producing is typically going for volume sales, which they can often improve by lowering costs with cheaper parts... On the other side, (2) anyone wanting to hand-build items is only going to be able to recoup their extra costs if they do make them higher quality...

What makes the system fall apart today is that the majority of content is really advertising copy being passed off as news and information, and that most people would rather get poor-quality news and information for "free" (especially if it aligns with their worldview) than pay for higher-quality content. Of course, we still 'pay', just with our attention rather than with cash, but the content providers can only monetize that attention by selling it to someone else or by using it to sell the reader something else.

Then we add AI on top, which allows a high volume of poor-quality content (product). If the resulting poor quality cuts your revenue generation rate by a factor of 10 but increases your reach by a factor of 100 for the same cost, then you're still getting 10x the income despite the lower quality...
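That arithmetic can be sanity-checked with a few hypothetical numbers (the rates and reach figures below are made up purely for illustration):

```python
# Hypothetical baseline: quality content.
rpm_human = 10.0          # revenue per thousand views
reach_human = 100_000     # views

# AI flood: revenue rate falls 10x, reach grows 100x at the same cost.
rpm_ai = rpm_human / 10
reach_ai = reach_human * 100

income_human = rpm_human * reach_human / 1000   # 1,000.0
income_ai = rpm_ai * reach_ai / 1000            # 10,000.0

print(income_ai / income_human)  # 10.0 -- 10x the income despite lower quality
```

The factor of 10 gain holds for any baseline numbers, since 100 / 10 = 10; the incentive is structural, not an artifact of the particular figures.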

The question becomes: how do we change that model? One way is paying for higher-quality information; the other is using AI filters to remove obviously bad or incorrect content, which can become an AI arms race... although even that may struggle to eliminate intentionally or subtly wrong content, especially if the content creator is able to use laws or lawsuits to keep you from filtering their content...

Webstir

I commented over on Mastodon the other day that, if you'd never seen a humanoid robot but suddenly saw one walking down the street one day, your first human impulse would be to destroy it. And that impulse would be correct.

That's basically the "uncanny valley" at work.

The same principle holds true for AI.

Our attention is being unconsciously redirected to constant threat detection.

We've left the attention economy and are entering the PTSD economy.

Mind

Can you expand what you mean by a PTSD economy?

Molly Cooke

I especially like that closing sentence about the modern art of removing from the pile rather than adding to it. How utterly Zen. I am a bit more favorable towards AI although I do see the significant pitfalls. It is a new tool and it will take time for humanity to metabolize the implications of it all. Of course there is a glut at first.

Alex Tolley

I always thought the "attention economy" was more about the effort required to capture the limited attention of consumers' senses. Since attention in aggregate is effectively fixed, the supply overwhelms possible consumption; getting to harvest that scarce attention was the issue. Hence Ben Thompson's concept of "aggregation" models. You see the problem with authors complaining their market has diminished, although the problem is oversupply, not reduced aggregate demand. The porn industry suffers the same problem, as anyone can create a video and put it on a site like OnlyFans.

Brands convey quality, or at least they used to. Now with outsourcing, brand quality has declined and may no longer be higher quality than "no-name brand" items. Sometimes you can check food ingredient labels to discover that the grocery store brand is identical to the name brand, just with a different label on the packaging. The food tastes the same, so why buy the brand?

One way to cut through the noise to get attention is to have ratings. Amazon really ran with this. Quality ratings and sales numbers help reduce search time - if you can trust the reviews. There needs to be more of this. But reviews may not be useful, as they are opinions and can be gamed. Film reviews are all over the map. I have long since given up finding a reliable reviewer. Now it is just better to go with the Rotten Tomatoes critics and audience reviews. Often, I find, the audience reviews are more reliable than the critics'.

What we need is a way to find "trusted reviewers". Neal Stephenson's SciFi novel "Anathem" had the concept of trust to help filter information. A nice idea until you realize that our world of alt-opinions that diverge from reality creates echo chambers where who you believe depends on the bubble you inhabit. We are stuck with trying to find individuals or organizations that are trustworthy in terms of factual news and standardized ways to evaluate products (e.g., Consumer Reports in the US, Which in the UK). All this adds cognitive load, making wading through the noise that much harder. Now we face the problem of AIs being used by reviewers, but AIs are being gamed, just like Google's search was gamed by SEO tricks. It is a perennial arms race, and all the while, organizations are rigging their systems to favor their profits rather than helping consumers (yet more enshittification).

The promised "World of Abundance", if it ever happens, will just worsen this problem. It just begs for some sort of unbiased means of finding unbiased reviews or trusted opinions that match one's own. I would dearly love to have an option to rate movies by genre and find a reliable reviewer for that genre that matches my views. This could apply across all sorts of product and service categories, and would be a one-stop browser extension to act as a frictionless check before buying anything. Maybe an AI assistant could always be doing that, checking "over your shoulder" to steer you away from making bad decisions?
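The "reliable reviewer who matches my views" idea is essentially nearest-neighbor matching on past ratings. A toy sketch, with entirely made-up films, reviewers, and scores, using mean absolute difference over films both parties have rated:

```python
def closest_reviewer(my_ratings, reviewers):
    """Find the reviewer whose past scores best match mine.

    my_ratings: {film: score}
    reviewers:  {name: {film: score}}
    """
    def distance(their_ratings):
        shared = my_ratings.keys() & their_ratings.keys()
        if not shared:
            return float("inf")  # no overlap: can't compare
        return sum(abs(my_ratings[f] - their_ratings[f]) for f in shared) / len(shared)

    return min(reviewers, key=lambda name: distance(reviewers[name]))

# Hypothetical ratings within one genre:
me = {"Alien": 9, "Solaris": 8, "Dune": 6}
reviewers = {
    "critic_a": {"Alien": 5, "Solaris": 9, "Dune": 9},
    "critic_b": {"Alien": 9, "Solaris": 7, "Dune": 6},
}
print(closest_reviewer(me, reviewers))  # critic_b
```

A browser extension or AI assistant doing this "over your shoulder" would just be running a matching function like this against a much larger pool of reviewers, per genre or product category.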

Shavkat

It feels like this has been happening for years, or centuries? The effort required to bring a work of art to the masses has been hitting new lows forever.

"If you could get on TV, you likely had something to say" -- wouldn't it rather be "you likely wanted to say something"? It's a question of perseverance and direction, not anything else. And then, does perseverance correlate with how valuable what you want to say is to those who will hear it? As we see in practice, it's a 50% chance, so it's random.

On the other hand, if you are focused on really important work, some people would definitely be interested in hearing what you have to say. But your perseverance is directed at the subject of your work, not at getting on TV. And now there is a low-effort way to say something, so people have a higher chance of hearing from those who didn't have time to speak in the past.

With that in mind, the masses will definitely not hear what a scientist wants to say, but their fellow scientists have a higher chance of exchanging info more efficiently.
