Corporate Capitalism Perverted Progress

The research I’ve done over the past three years has changed my mind about the potential for a social revolution in the United States. Where I once considered it unlikely, I now consider it inevitable.

The dominant mythology of the modern era is technology-driven Progress, which is viewed as an inherently positive, unstoppable force of nature: technological advances inevitably improve our lives.

Calling this a belief system or mythology triggers indignant outrage: this is scientific fact, not belief. But the claim that technology is always positive and Progress is inevitable is not actually scientific; it is based on projecting the history of the Hydrocarbon Age (the past 200 years), not on science.

Progress transitioned from public infrastructure providing obvious benefits (clean water, waste disposal systems, electrification, railways, bridges, highways, and so on) to consumerist definitions of Progress: buy and own the latest innovative product or service.

Once those who could afford consumerist gadgets had bought them, a problem emerged: how do we increase profits if everyone already has one of our gadgets? Population growth helped, but once Progress was harnessed for profit, fashion became the driver of consumption, not need.

But once the purchasing power of wages began stagnating in the 1970s, fashion was no longer enough to drive sales and profits. Consumer credit filled the gap left by eroding wages.

In the past decade, neither fashion nor credit has been sufficient to keep expanding profits, so Progress flipped polarity and became Anti-Progress: corporations obsoleted products by design and began reducing the quality of goods and services to drive customers to “upgrade” to more profitable options, a process Cory Doctorow has memorably labeled enshittification.

Anti-Progress has now been normalized: durability has declined, costs have soared, and quality of life has eroded. We’re less healthy, more stressed, and our financial security is more precarious, as the economy now depends on credit-asset bubbles generating “the wealth effect” that “trickles down” to the bottom 90%.

In summary: Progress is easy to define in real-world terms: life gets easier, safer and more secure. None of these describe the present: for the majority of people, life is getting harder and less secure. This is Anti-Progress, not Progress.

Smith claims AI is just the most recent iteration of Anti-Progress.

Consider AI, which is the latest technology that’s being glorified as inherently positive. It may well make a few people incredibly wealthy, but whether it actually improves the quality of life for the majority is a very open question:

1. The security of AI chatbots is Swiss cheese.
2. AI undermines real learning and education.
3. AI slop is taking over the web.
4. AI tools con and defraud individuals.
5. AI fraud is unlimited: fake songs by real artists, deepfakes, AI-designed ransomware; the list is endless.
6. AI psychosis: AI chatbots are addictive and destructive to mental health.
7. AI agents degrade already poor services.
8. AI data centers are squandering capital, water, and electricity on a vast scale.

6 Likes

Valuable post, and in need of discussion.

I see enshittification (which we have obliquely discussed here frequently) as a secondary, resultant problem, the main problem being the rapid decay of our civil society resulting from the collapse of our print-based systems for preserving, improving, elaborating, and especially using facts and knowledge. Ultimately, as knowledge loses ground, so does morality, and a lack of morality leads to the collapse of civil order.

Way too compressed a statement, but all I will attempt at this point.

5 Likes

The fact that we’re not regulating AI is incredibly stupid.

5 Likes

We rarely regulate something until after the fact, once damage has been done (and accrued), so we can see what the worst effects are and try to mitigate them.

There are obvious exceptions, thanks to human foibles and the “need to control”: see Prohibition or marijuana laws, for instance. It took 75 years to get seat belts in cars and 100 to get rid of asbestos, so I’m not saying it’s all chill, just that you don’t really know what you’re regulating until there’s an obvious need to begin regulating.

Warnings and scary stuff just don’t do it in the absence of compelling evidence (tobacco, etc.).

8 Likes

I get that. I suppose nuclear weapons are one of the few examples of things that have been universally controlled / regulated. But…imagine how much money private companies could have made developing and selling nuclear weapons!

It seems like AI companies’ pursuit of $$$$$ is being prioritized over safety. If AI will be as powerful and disruptive as people are saying, shouldn’t something be done to control its development?

With nuclear weapons the potential harm was extremely evident. We saw it destroy two cities. And the people who invented it were sounding the loudest alarms. We did, in fact, regulate it after the damage was done and accrued.

We have none of that with AI. I strongly recommend two books:

“Empire of AI” by Karen Hao.

“Click Here to Kill Everybody” by Bruce Schneier.

3 Likes

True dat…however, the difference is in the development.

I’m still trying to decide whether we’d be better off with clandestine AI development managed by the government, or a free-for-all, wild west shat-show led by dudes who say things like:

“AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” — Sam Altman, CEO of OpenAI

Hmm, I dunno.

3 Likes

“Click Here to Kill Everybody” came out and said governments will eventually (soon?) take over AI for the very reasons you cite. We cannot trust the profit motive to keep us safe from this.

2 Likes

We cannot trust the profit motive to keep us safe from lots of things - that’s why we get regulation.

But if you were to regulate AI now, what would the rules be? Nobody knows. How would they impact companies in that sector? Nobody knows.

I pointed out upthread how long it took to get simple things like seat belts and tobacco regulated. It’s hard, there’s a lot of push and shove that goes into it. Right now there’s a proposal to raise the FDIC limit from $250,000 per account to $10 million.

That’s so that banks can hold significant monies like payrolls or project funds, the kind of thing that Silicon Valley Bank was doing before the run started and everyone realized they had all these “unprotected” funds sitting there.

But there’s an argument from the little banks over that $10M cap: they say they’re going to be called on to pay for some really big bank failures, and that those costs will cripple them. So there’s a big debate going on over something that might seem simple, but it’s not.

Heck, it took hundreds of years of bank failures from the inception of the country just to get the FDIC, and that happened only after 9,000 of them failed in the early part of the Great Depression.

Writing rules is hard if you want to do it well. Some people (let’s just call them “the basket of horribles”) don’t understand that. Good regulation takes time and experience. It’s too soon, although with some of the predictions, it may never be too soon until it’s too late :wink:

5 Likes

I think we know enough to at least start some regulations. Here’s just a few:

  1. Don’t allow chatbots to seduce children.
  2. Don’t convince people to kill themselves.
  3. Don’t enable AI psychosis through overly sycophantic feedback loops that mislead users.

If you let industry set the rules, they come up with stuff like:

“It is acceptable to engage a child in conversations that are romantic or sensual.” - Meta, Internal Policy Document

6 Likes