Cutting big tech down to size is a good idea.
Actually, what it will do is make it more difficult for ordinary people to use social media to amplify minority views. Tech will be just fine. They will just slice and dice media services to become more private, so that you will need to subscribe to see posts by specific crazies, and you will probably need to pay a fee to post and archive your posts. It will be like going back to the old blog days, I’d imagine.
This will almost instantly remove ordinary people from the conversation. What will be left are only people and groups that have funding behind them. And the crazies have funding too.
I don’t see why. I think back to the old political chat rooms in AOL early days or the listservs you could find on yahoo.com. The major difference, and the problem with social media, is the way they amplify certain content. We did not have that back in the early 90s yet those platforms still allowed you to access them without any additional fees.
This is entirely a problem of the algorithms that recommend certain content - not that people can find such content if they actively go looking for it.
Who misses those Wednesday late-night AOL Cloak Room chats?
It most likely is impossible for this court to rule. Meaning however they decide, it will be to tell Congress to decide. It won’t be to make up the law.
These plaintiffs are looking for the rails: how far can things go, and what won’t be touched by any decision? The Supremes just won’t get their heads around that and decide.
I do not fault the conservative justices for that. I take it as a favor. We the people should decide through Congress.
Part of what we decided over 100 years ago was antitrust. Those laws affect different aspects of social media than these cases do.
That… and I have a bridge for ya…
You do not own the bridge for now. The USSC does. Those are the guys who believe it.
Does me a favor. Does nothing for you.
In your case: sleep with dogs, get up with fleas.
I don’t disagree with that, although your comment is very vague, so hard to tell if we agree in details or only on the vague notion.
For example, YouTube and Facebook both have recommendation algorithms designed to capture your eyeballs and keep them there, because that sells ads and that means revenue. The problem with both appears to be what those algorithms are sending you to. While I can’t prove this, I have a friend who has traversed down the YouTube rabbit hole (he admits he dropped cable TV and gets everything from YouTube). He got himself chin-deep in various conspiracy theories.
So I would say that these social media algorithms have been great for company profits but detrimental to society. But this raises the next question, which has been discussed on this board in the last two weeks: what is the purpose of a corporation? Is it only to benefit the shareholders? Does a company have any responsibility to society, for example? I would say YES, but if that responsibility to society impacts profits… (you know my answer to that already)
If you only account for the negative effects, this would be so. Obviously.
But if you account for the positive effects and balance them against the negative effects, maybe not so.
One great example of a positive effect is videos of common home and auto repairs. A few weeks ago I helped my cousin do a repair that was quoted at well over $1000. The only way we were able to do it is because of various YouTube videos explaining the process step by step.
I don’t see how that conversation is in any way relevant.
This is not a question about what a corporation can or should do to seek profit. This is about what a government should or should not do to regulate existing illegal behavior, and whether or not certain types of companies should continue to be exempt from part or all of that law.
Who thinks so-called social media companies should have (more) liability - especially as it pertains to their active recommendation algorithms?
What, exactly, is big tech doing that is illegal?
Absent section 230, Facebook would be liable for any illegal behavior on their site - like libel or slander.
Absent section 230, Youtube would be liable for promoting terrorism when it recommends a terrorist video.*
Note, that if the Washington Post behaved the same way as Facebook or Youtube, it would have liability.
The question at hand is what protection Section 230 actually provides (it has never been tested at this level) to certain companies and if those protections are Constitutional.
More on the YouTube case:
So Section 230 absolves the social media site from libel or slander. Does that not mean that the social media user is the one being slanderous? Nobody seems to be prosecuted for that, however.
I’m actually for some amount of extra regulation. For example, I applauded certain people being banned from some of these sites a few years back for certain posts and behaviors detrimental to the country. It was not censorship. Freedom of speech does not mean freedom from the consequences of your speech. Including prosecution.
There are three things going on here which have the potential to change the face of media.
The first is the Dominion lawsuit against Fox, which will make all publishers a bit more cautious about what they put in print and on the airwaves, especially the bloviators, of which there are too many. Unless Fox wins, in which case apparently you can do anything to anyone at any time for any reason.
The second and third are the two USSC cases, both revolving around Sec 230. One concerns the immunity platforms get from whatever another user publishes, which, while a bit unreasonable, seems almost impossible to handle otherwise unless you want every single thing someone puts on the web to be pre-censored.
The third involves the platforms’ “recommendations”, which is a different issue and which involves decisions made by the companies, even if those are algorithmic. That one I think they will also win, but I wish they wouldn’t. The recommendations are an editorial choice, one they make for profit, and they should carry liability. In this case [ISIS recommendations caused terrorist activity] the direct link seems thin (logical, but probably not legal, in my view).
Anyway, given my druthers, I’d ask for major, gigantic, serious consequences for the first, nothing for the second, and somehow a hard hand slap for the third. Personal wants, not predictions.
Isn’t that what all the media does, and has always done, to capture viewers? TV networks look at the ratings, then cancel the shows with low ratings and produce clones of the highly rated/most profitable shows. That is where all these insipid singing and dancing shows come from.
One US example and two non-US examples.
Wrong. You’re talking about a broadcaster selecting which shows will be shown during a limited broadcast time frame of 2-4 hours per day. It is the broadcaster’s limited, federally licensed time being used. That is not what happens on the Internet, which is designed to handle a massive amount of content and deliver it “on demand”. The only hard limitation of the Internet is the number of IP addresses available, so anyone who wants to put up a site essentially can do so. The remaining limitations are legal matters (whether it is legal to post what they want) AND their ability to pay for what they want to do. That is not broadcast television or radio.
It definitely takes diligence to avoid certain things on YouTube.
I often find my feed filled with news offerings from one particular source - a source that I refuse to look at due to their well documented mendacity. And that happens even though I watch other news sources far more often.
My suspicion is that viewers of videos from that vulpine source tend to stick around and watch more videos - and hence more ads - from that source. It is profitable for YouTube to put those videos in their recommendations. So they do. They are trying to get their viewers addicted to that source because it produces more revenue for YouTube.
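That suspicion can be illustrated with a toy sketch. This is an assumption about the general shape of an engagement-maximizing recommender, not YouTube's actual system; the source names and watch-time numbers below are invented for illustration. The point is just that ranking purely by expected watch time will keep surfacing a "sticky" source, regardless of what the viewer deliberately chose to watch before.

```python
from dataclasses import dataclass

@dataclass
class Video:
    source: str
    avg_watch_minutes: float  # how long viewers typically stay on this video


def rank_by_expected_watch_time(candidates: list[Video]) -> list[Video]:
    """Rank purely by expected watch time: more minutes means more ad slots."""
    return sorted(candidates, key=lambda v: v.avg_watch_minutes, reverse=True)


# Hypothetical candidate pool; "StickySource" stands in for any outlet
# whose viewers tend to binge.
candidates = [
    Video("SourceA", 3.1),
    Video("SourceB", 2.4),
    Video("StickySource", 9.8),
    Video("StickySource", 8.5),
]

feed = rank_by_expected_watch_time(candidates)
# With a retention-only objective, the sticky source fills the top of the
# feed even if the viewer has never actively sought it out.
```

A real system would blend many signals, but as long as watch time (and hence ad revenue) dominates the objective, this bias toward the stickiest source falls out naturally.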
No, the difference is that the networks (and stations) are directly responsible for the programming they carry (and often, but not always, originate.) A social media platform is a passive carrier, more like a telephone company providing a connection between people but having no input to the content whatsoever. (Lily Tomlin’s Ernestine notwithstanding)
You can say all kinds of nasty things about someone on Facebook or YouTube and they are not responsible, you are. If CBS (or, by association, one of their employees) says something nasty, they have that responsibility. (That it is difficult to win is not material, because people can and do sue them and sometimes win. See: Gawker or Carol Burnett/National Enquirer, etc.)