The IT Rules require social media platforms and other intermediaries to exercise due diligence in preventing users from sharing harmful content. There is nonetheless ongoing debate over whether tech platforms should be held responsible for the content they carry, and over whether companies can be held liable for web content provided by a third party in the way a publisher would be, or whether, like publishers of news, they cannot be held liable for their readers' actions. In February 2023, the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act was introduced to reform Section 230 of the Communications Decency Act.
The law generally precludes providers and users from being held legally responsible for information provided by another party. That interpretation may change in the future, however, allowing social media networks to be held financially liable for libelous false information and for damage caused by incitement. Web hosting companies may also face liability for third-party malware on the websites they host.
Participants in a conspiracy become criminally responsible for the reasonably foreseeable acts of any co-conspirator committed in furtherance of the conspiracy. Section IV introduces a legally identifiable harm as a trigger for liability for spreading false news. For decades, however, site operators have been immune from such liability under U.S. law, and reform advocates argue that Congress needs to amend Section 230 to hold online platforms accountable when they unlawfully censor speech.
What is Section 230 of the Internet law?
Interactive computer service providers and users are not treated as the publisher or speaker of information provided by another information content provider. Section 230(c) also offers "Good Samaritan" protection from civil liability for operators who remove or moderate objectionable content. The provision was developed in response to lawsuits against online discussion platforms in the early 1990s, which produced conflicting rulings on whether service providers should be treated as publishers or as distributors of user-created content. Its authors, Christopher Cox and Ron Wyden, believed interactive computer services should be treated as distributors, not liable for the content they distributed, in order to protect the growing Internet.
Are websites not liable for content their users post under Section 230?
Section 230 is a legal framework that protects users and services from lawsuits for forwarding email, hosting online reviews, or carrying objectionable user content. It was passed in 1996 in recognition that the growing internet would make it impossible for services to review every user's speech. By 2019, over 4 billion people were online, with 3.5 billion using social media platforms. The framework allows niche websites and big platforms like Amazon and Yelp to host user reviews, and lets users share photos and videos on platforms like Facebook and on blogs.
It also allows users to share speech and opinions everywhere, from vast conversational forums like Twitter and Discord to the comment sections of the smallest newspapers and blogs. Without Section 230, many online intermediaries would filter and censor user speech, and some may not host user content at all.
Should social media platforms be liable for content their users post?
Section 230(c) of the Communications Decency Act provides social media platforms with immunity from liability for harmful content originating from third parties. This is on the grounds that social media platforms offer social benefits and employ algorithms for the purpose of providing users with a personalized experience.
Should an internet provider be liable for the content that appears online?
Section 230 of the Communications Decency Act (CDA) shields Internet Service Providers (ISPs) from liability for user content online. This makes ISPs different from newspapers and magazines. In the Drudge Report case, an aide to President Clinton sued AOL for defamation after the Drudge Report claimed he had a history of spousal abuse. The court ruled that AOL was protected by the CDA, confirming that ISPs are not responsible for user content. Laws in countries such as England and Germany differ, so it is important to understand what protections apply to content hosted there.
What is the social media liability law?
Social media liability refers to claims for libel, slander, harassment, invasion of privacy, intellectual property violations, and improper employment practices arising from the use of social media sites like Facebook, Instagram, Twitter, YouTube, and blogs. Business insurance policies typically include personal and advertising injury coverage, providing protection for libel, slander, derogatory remarks, and invasion of privacy. Some homeowners and renters policies also provide personal and advertising injury coverage.
Standard business forms may limit coverage for material published on the internet or in electronic communications. Casual users of social network sites may also inadvertently post defamatory comments about a current or former partner, especially after a divorce or messy breakup.
Can you sue someone for what they post on social media?
You can sue over social media slander, but proving the case can be challenging because of the evidence and legal support required. Politicians or celebrities with powerful legal teams often win such cases more easily than private figures. Winning a lawsuit against the platforms themselves, such as Twitter, Facebook, or YouTube, is far harder: they are granted immunity under Section 230 of the Communications Decency Act and cannot be sued unless a key exception applies.
Are there exceptions to Section 230?
Section 230 does not apply to federal criminal law, intellectual property law, or electronic communications privacy law. Courts have held that it allows hosts to establish and implement acceptable use standards without risking liability, and posting guidelines is still a good idea, as people often appreciate guidance on what is or is not acceptable. EFF maintains an archive of key cases addressing Section 230.
Under what circumstances does Section 230 immunity apply to an ISP or website?
Blog hosts are generally not liable for editing or deleting comments on their blog, because Section 230 protects actions taken in good faith to restrict access to or availability of objectionable material. This includes editing or deleting posts considered objectionable, even if those posts would be protected by the First Amendment against government censorship. It is possible to edit comments in ways that change their meaning and make commenters look like crazed defamers, but Section 230 only protects actions taken in good faith, so that power should not be abused.
What is Section 230 of the Communications Decency Act?
Section 230(c)(1) of the Communications Decency Act provides a legal shield for interactive computer services such as YouTube, Google, Facebook, and Twitter, preventing them from being treated as the publisher or speaker of information provided by another information content provider, such as a YouTube video or a Facebook post.
Who is responsible for user-generated content?
The legal and ethical ownership of user-generated content (UGC) is retained by its creator, irrespective of the nature of the content in question. This encompasses not only instances where users promote products or services online, express their views, or publish news, analysis, or opinion pieces, but also any other forms of user-generated content.
Are websites responsible for user content?
In the United States, Section 230 also reaches offline harms that arise when the Internet mediates conversations between buyers and sellers. Internet companies can only be held liable for their own role in sharing information between the parties; claims that would hold them responsible for third-party content are barred. Section 230 plays a key role in such cases, for example in rulings that eBay is not liable for publishing listings that lead to property damage or personal injury.
In some cases, Section 230 is not the basis for dismissal, but other legal doctrines still protect Internet services. The starting premise is that Internet companies aren't liable for offline injuries caused by allowing people to talk to each other. Plaintiffs continue to try, however, and this area has not been definitively resolved.
The future of society is moving online and into cyberspace, and the boundaries may blur; liability may change as more interactions occur online. The Internet makes markets more efficient, allowing buyers and sellers to find each other at lower transaction costs than in the offline world. Imposing liability on intermediaries could raise those transaction costs and potentially foreclose some markets.
One question is whether it is better to have markets with known risks or to foreclose markets because of potential risks. Insurance plays a role here: companies like Airbnb, Uber, and eBay can run insurance programs that cover some risks, keeping transaction costs low while compensating the small percentage of consumers who are harmed.
📹 The Supreme Court Could Destroy the Internet Next Week
SESTA/FOSTA was a truly terrible law that not only did everything LegalEagle said, but also makes it harder for law enforcement to investigate sex trafficking, because the sites no longer exist or are now hosted offshore. The Justice Department even sent a memo to Congress saying that's exactly what would happen, and it did. A terrible law that pays lip service to solving a problem but does worse than nothing.
As a computer scientist, it is fascinating to me how often we discuss whether developers or companies in the internet/digital domain are liable for people using their services maliciously. I know that this particular lawsuit is not about the mere use of the platform, but it still falls into that broader discussion. However, when a truck was driven into a Christmas market in Berlin in 2016 by an ISIS member, killing 12 people, no one dragged Scania in front of a court to question the responsibility of the company as the manufacturer of the vehicle. Maybe I'm missing something, but that is something that stood out to me over the last couple of years.
Man, I really empathize with the issue that caused this lawsuit to be brought forth, but I feel some people don't understand just how much content is uploaded to the internet. It is literally impossible to fully, reasonably moderate all content ever. If they can prove that a company is willingly allowing harmful content to exist, then fair enough, but to point to the existence of some as evidence of liability really would kill any form of mass communication. Edit: To put it out there, by some reports YouTube has 500 hours of content uploaded every *minute*. There are quintillions of bytes of data uploaded to the internet every single day, and these figures are only growing as time goes on.
I feel for a family who is grieving the loss of their loved one, demands justice, and wants answers. When your CHILD has their life ripped away, you want to see everyone and everything that could have caused it burned to the ground, and that sort of trauma creates a feeling of vengeance and vendetta that's unquenchable, because the sad truth is nothing will bring your loved one back. It's worse than "sucks," it's worse than "sad." It's absolute tragedy in its purest form. I think it's important to clarify that the loss of their child is NOT an acceptable price to pay for free speech. But the people who engaged in this horrible crime have been brought to justice, and creating internet-wide censorship and accountability for information-sharing platforms sets a dangerous precedent that can lead to the misuse and manipulation of social media for political control. Demanding that the entire social media landscape be censored because a group of radicals committed genocide as a result of having access to radicalizing information is an unjust and unconstitutional pursuit.
I watched a documentary on sex trafficking, and in it they talked about Backpage. The police officers interviewed, who worked in a sex-trafficking special unit, used Backpage as a way to centrally locate sex trafficking, intercept the girls, and give them safe haven to leave their situation. They also used Backpage to post ads for sting operations. Once Backpage was gone, they said it set them back significantly, because it forced the sex traffickers to disperse and find a new outlet to exploit their victims. So in a way those two laws hurt anti-sex-trafficking efforts.
The main problem with using an algorithm to filter content is that things like “violence” and “adult content” don’t necessarily mean anything illegal by themselves. Service providers should be allowed to use an algorithm to moderate their trillions of hours of content, but it’s hard for an algorithm to separate things like violent crime from just plain old violence.
If platforms are made legally liable for user content, this will inevitably make it pretty much impossible for smaller platforms to compete with the big platforms, as only bigger platforms have the resources needed to mitigate and absorb this risk. It would only consolidate and exacerbate already existing monopolies.
Another thing to add about SESTA/FOSTA with regards to Tumblr: these new laws didn't stop porn-bots at all (they are still a problem now), but they shut down a lot of sexual health blogs. Any blog on Tumblr that talked about sexually related topics (regardless of whether it was educational or not) was forced to shut down. However, while your experience may vary, I saw that a lot of individuals could still post rather explicit content on their blogs, and Tumblr wasn't forcing them to take it down. I also had a friend who ran a Tumblr blog about stim toys (usually toys or items used as disability aids). My friend went to great lengths to make their blog as neutral and welcoming as possible, and the stim toy blog did not have sexual content posted to it at all. That did not stop Tumblr from flagging non-sexual items as inappropriate, and users like myself had to click on these posts and tell Tumblr that they had been incorrectly flagged. I don't know if this has come up in your research for this topic, but Facebook has teams that moderate the worst of its content. These people are often overworked, underpaid, and exposed to horrific content on a daily basis. A lot of them have developed serious mental health issues or PTSD, or have died by suicide. There are multiple factors here, but a big one is that Facebook wasn't providing enough help or support to these workers. Another factor preventing these workers from getting the help they needed was that they had to sign an NDA in order to get the job, which would make speaking to a therapist very difficult, let alone reporting these things to the media, a government organisation, or a whistleblower organisation (if any exist).
It's really sad… a torn family is trying to meddle and destroy things they neither know nor care about, because they've been driven past reason by grief… I feel for them, and I hope they succeed in healing, but I hope they don't succeed in this endeavor. If I succeeded at everything I ever wanted to do in a moment of grief and anger, even righteous anger… I'd be in prison. Sometimes we're just not meant to get every last thing we want. It's not fair. And it's not always okay. But if they destroy the internet over it… isn't that letting the terrorists win? There was a horrific, years-long period of drug addiction in my life wherein I underwent trauma of nearly every kind… now that it's over and my mind is finally clearing, I sometimes wake up in anger from the dreams and flashbacks and exhaustion from keeping those things at bay, and it takes me hours to dissipate it… but recently I've gotten so exhausted of that, I've realized that staying angry, staying exhausted, staying in pain… there's no better way to stay losing. It is of vital importance to stem the tide of those things. It's easier said than done, yes, but so is literally everything.
This lawsuit situation, regardless of the horrible tragedy that spawned it, just reminds me of the heavy-handed attempts from copyright companies to do the exact same thing with bills like SOPA and ACTA 10 years ago. I feel for the family, but it does make me wonder who is paying their legal fees. It seems similar to trying to sue the mail carrier for delivering a package that turned out to contain poison (as an example). They didn’t make or send the package, only facilitated its transport, and they can’t open every one due to both the number being sent as well as privacy laws.
I hope more people appreciate what a wonderful service your website is providing the public. You take complex topics and raise your viewership’s knowledge and understanding instead of just oversimplifying the issues. Your presentations seem balanced and are as objective as possible. You are an effective communicator and teacher that presents things in a professional manner while still allowing the occasional injection of humour thus keeping your audience engaged even when a topic might be a bit dry. On top of all of this, when an injustice happens or you see a potential one coming, you actually take action against it, thus making an effort to protect countless people from the fallout that could occur. I just wanted to leave a note saying how very much appreciated you are and I wish you continued success. Stay awesome.
Minor correction: there was no attack on the Champs-Élysées on 13 November 2015, because they weren't targeting tourists. I just checked, and Nohemi Gonzalez died on Rue de Charonne, in the 11th district, which is very popular with young Parisians. I know it doesn't really matter to the video, but it bugged me. Very interesting video overall, though!
I'm truly sorry for their loss, but this won't help them, or anyone else, at all. They are just going to stifle free speech and any sort of view that isn't mainstream, plus make it harder for law enforcement to stop, say, trafficking, paedos, etc. This is a wicked play that I am extremely worried about.
There needs to be an age limit for the courts and Congress. How can we expect people who can't program their own TV remote to understand and make such a big decision about technology they don't understand? Not that the current court cares; they will destroy and run everything into the ground. There is this case and three others they could hear that will detrimentally change the internet we know today.
I think the best solution would be to allow open and easily understandable algorithms (such as sorting by date, only allowing websites the user chose, by popularity etc.) to not be liable, but having proprietary secret algorithms liable. We know that Facebook experimented with prioritizing positive vs negative posts, and they should be liable if the users didn’t choose it (because the service hides the algorithm used), but if it selected using only recent posts it should have been fine.
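To make that distinction concrete, here is a minimal Python sketch (all names, data, and numbers are hypothetical) contrasting an open, user-understandable rule like "newest first" with an opaque ranking driven by a hidden engagement signal the user never chose:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Post:
    title: str
    posted_at: datetime
    engagement_score: float  # hypothetical proprietary signal the user never sees

POSTS = [
    Post("Gardening tips", datetime(2023, 2, 1, tzinfo=timezone.utc), 0.2),
    Post("Outrage bait", datetime(2023, 1, 15, tzinfo=timezone.utc), 0.9),
    Post("Local news", datetime(2023, 2, 10, tzinfo=timezone.utc), 0.4),
]

def transparent_feed(posts):
    """Open, user-understandable rule: newest first."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def proprietary_feed(posts):
    """Opaque rule: ranked by a hidden engagement signal the user did not choose."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

if __name__ == "__main__":
    print([p.title for p in transparent_feed(POSTS)])   # newest first
    print([p.title for p in proprietary_feed(POSTS)])   # "Outrage bait" rises to the top
```

Under the commenter's proposal, only the second function would expose the platform to liability, because the ranking rule is not something the user selected or can inspect.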
I really hate that viewpoint of the law where "if you didn't do anything to stop bad things from happening, that's fine" but "if you did do something about it (removed some content), then you should have been able to catch 100% of it." It really makes companies not try at all; they might get in less trouble in the long run.
This is already the case for Australian media on social media. I can't remember the exact details, but the end result is that half of them have turned off comments permanently, and the other half heavily moderate them. Would love to hear you try to explain it (unfortunately it's Australian law, not US).
if a kid writes something horrible on a chalkboard, does the chalkboard get taken down and destroyed? no, the kid gets punished. Making providers responsible for their users is not only unrealistic and ridiculous, but also shows that the prosecutors have absolutely no idea what damage they are attempting to do. A free internet is bound to have bad actors, and censoring the bad actors would in turn censor everyone.
On a technical level, you literally can not display any list of content (even search results) without an algorithm to determine that list. If you just want to display all posts in chronological order, that is still an algorithm. If you want to use text matches to list search results, that is also an algorithm. In everyday discussion this might just be irrelevant pedantry, but when it’s a legal case like this I think it would be very important to make the distinction of what exactly one is talking about when one says “an algorithm”. Otherwise you could end up with a law that bans everything, because literally everything a computer does is “an algorithm”.
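As a rough illustration of that point, here is a small Python sketch (with invented data) showing that even "just show posts in order" and "just match the search text" are algorithms in the literal sense:

```python
def chronological(posts):
    # "Just show everything in order" is still an algorithm: a sort by timestamp.
    return sorted(posts, key=lambda p: p["posted_at"], reverse=True)

def text_search(posts, query):
    # "Just match the query" is also an algorithm: filter, then rank by match count.
    terms = query.lower().split()
    matches = [p for p in posts if any(t in p["title"].lower() for t in terms)]
    return sorted(matches,
                  key=lambda p: sum(t in p["title"].lower() for t in terms),
                  reverse=True)

posts = [
    {"title": "Cooking pasta at home", "posted_at": 3},
    {"title": "Pasta sauce basics", "posted_at": 1},
    {"title": "Bike repair", "posted_at": 2},
]
print([p["title"] for p in chronological(posts)])
print([p["title"] for p in text_search(posts, "pasta sauce")])
```

Any legal rule that hinges on "using an algorithm" would, read literally, cover both of these.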
Sad someone died, but this has got to be the stupidest case to make it to the supreme court in a while. It’s the equivalent of someone calling to buy your TV, when you meet-up they murder you, then the family sues the phone company for not moderating calls enough to prevent connecting you with a murderer. If my untimely death was used to push an argument of these merits I’d be rolling in my grave.
I think the biggest problem with YouTube moderation is that they seem to be too busy targeting individual creators that said a bad word or played a slightly violent game (things like that) than removing the actual porn, gore, and radical videos that, let's be honest, many of us know do in fact exist on the platform. A part of me is glad there's some light shed on this situation (the fact that videos like that just exist on the site), but the other part knows how over the top Google goes with "fixing" this stuff, and in the end most times it doesn't even work and it just stays there being an obstacle for the rest of the community.
Craigslist was held responsible for content on their site as you pointed out but they were actually moderating “in good faith” (in so far as we can call moderation good faith these days) and took down posts that broke their terms, usually within minutes. But them doing their level best just wasn’t good enough apparently.
I feel like this case will fail due to the huge risk of the internet potentially being wiped out, too many people rely on the internet to do stuff, everything from health care and hiring people for jobs to businesses and communication to doing your taxes and getting paid and banking. I feel like what’s going to happen is that the case will fail but companies will start to enforce or revise their policies and some laws might get rewritten and maybe new laws will go up and maybe some old laws taken down. There’s a saying that goes “Too big to fail” meaning if that big thing fails everything underneath will collapse and we might not have something as successful again.
It's the internet, so user discretion is always recommended. The unfortunate thing is that, legally or not, with an internet connection, some basic skills, and above all enough funds, anyone can gain access to almost anything. That's not even talking about the dark web; Reddit has some pretty dark places in itself. I cannot imagine what any family of the victims of November 13th went and goes through, but I cannot imagine them going after a landlord if the attackers planned the whole attack from a rental room and there was a small chance that said landlord, who lived in the room next door, could hear them. It goes to show how dangerous the internet is and how important the "Report" button is, whose existence almost all of us forget every single day.
A lot of the attention in these arguments is on the effects on social media, but not many are talking about the effects this could have on search engines. Since search engines also tend to use algorithms to personalize search, this could also take down the main way people navigate the internet. (Also, I haven't finished the video yet. Will edit when finished.)
I feel like the missing piece here is that the recommendations are neither benign nor based just on an individual's watch history; you aren't just shown things exactly like whatever you just watched. The algorithm is actively designed for certain goals, and that ends up facilitating this radicalization in ways that definitely don't fall under "just hosting content."
Important clarification: the shutdown of Backpage and the indictment of its founder happened before FOSTA-SESTA was enacted, under pre-existing law. Ironically, whatever you think of that action, it only serves to show that FOSTA-SESTA was not necessary, or at least not as necessary as claimed. Certainly the chilling effect that led Craigslist and others to shut down certain forums was a result of both the action against Backpage and the passing of FOSTA-SESTA, but those are two different things that happened to coincide.
It’s upsetting when emotionally hurt people go to court to punish/get compensation for their private matters at the expense of everyone else and twist law to make someone pay for their feelings. This feels like making a phonebook responsible if someone sells you a bad spare part and you found the car shop from the phone book (otherwise you would’ve taken the car to landfill and buy a new one).
It's a mixed bag, because these algorithms really can run wild. I believe detrimental effects exist when content sorting goes unchecked, but this is not the way to go about it. The deeper concern should be the amount of influence advertising has cultivated through pruning and censoring a massive catalogue of media autonomously.
Some good harsh truths to consider here for the general public. I learned a long time ago that balancing the screening of content was never quite easy, no matter what a lot of people think; the trouble with having a platform open to so many people is that you risk angering large groups one way or the other. Not an easy problem to deal with in general (though, that said, I would say YouTube has been handling it somewhat poorly lately). Great video with great info and insight to ponder!
It would only end the internet (as they know it) for the US. The rest of the world would, and legally could, ignore all of this. Big US companies could just move their operations overseas, and again the Supreme Court could do nothing (it would be a hassle for them, but within months, even weeks, they'd be back to normal service).
This SCOTUS case will have massive implications for making suggestions at all. Even if it's a narrow ruling that doesn't invalidate Sec 230, it could still affect developing software in its infancy that makes suggestions for other content. It could destroy AI-generated closed captions. Basically, I don't think "this will destroy the internet," but I do think it will weaken it significantly if there are any wins for Gonzalez. People generally do like suggestions for other videos/games/etc. Currently, this is moving more and more to AI, but the results often encourage echo chambers.
Something tells me this lawsuit is a combination of a lack of basic understanding of how the systems work and why they are implemented the way they are, and wanting money. If it is a lack of understanding, give them a live feed of everything being posted and tell them to filter out a specific item. Then after 5 seconds when they are overwhelmed ask them how they would solve it, when they say automate it, say that’s what we did. I hate they lost their daughter but this is ridiculous.
For once, I'm on YouTube/Google's side on this. The Internet is accessible to millions of people and it would be impossible to monitor everything that gets uploaded. Yes, while they did make the algorithm that suggests the videos, they have almost no idea what the algorithm will suggest. It's completely random.
I feel like one deeper issue that isn’t addressed here is one that’s been exposed a number of times over the past seven or so years, where companies were shown to be completely disregarding the effects their algorithms were having on users with respect to radicalizing them. Radicalized users were more profitable, so they were perfectly fine with doing so. Back in 2015, when this lawsuit stems from, was rather the height of all that going on, as far as I recall, before the profit-driven disinformation and radicalization became something with more public awareness.
A key thing about YouTube is that they interleave recommended videos based on previous searches with pure wildcards. Amazon makes a distinction between these: general recommended products and "users who bought this also bought…". YouTube would need to make this distinction clearer and enforce demonetization more fairly if they want to be protected.
The one discussion point that I saw somewhere that I wish Destin had addressed was about the nature of the algorithm. YT’s algorithm has varied in its goal over time between helping the user find the content they want and making the user find the content that will entice them to use YT longer (and watch more ads). If the algorithm has been designed to serve the needs of the platform can that platform really argue that it is merely responding to the users’ requests and thus not taking on an editorial role?
I would love to see a collab between you and ChubbyEmu! You two are the only two channels on YouTube where I've watched every single video xD It would be interesting to see his medical take followed by your legal take on liability (especially for the products/medicines/vitamins/protein powders/etc.) when it comes to whatever the patient was exposed to.
Even employing a moderator could be too risky, as one lapse of judgement could make the case for the platform to be sued. Considering that we humans are prone to mistakes no matter how hard we try to avoid them, it's not unreasonable that platforms would shut down their services, making the internet essentially useless.
The problem with 230 is that the website is not liable for speech but can still dictate what speech is allowed. This has led to many sites, such as Twitter, colluding with government groups such as the FBI and with special interest groups to stifle any space that is not in their interest. 230 needs to be repealed, and these sites need to decide: are they going to moderate in the way that they have under 230 and be liable, or not moderate (unless the speech is truly illegal) and be immune from liability? This is the way it has always been in every other traditional setting. Why would internet forums get special privileges?
19:48 Regarding these examples, couldn’t these websites simply claim that they were unaware of the content being posted? The key word of the anti-sex-trafficking laws here is “knowingly.” Once discovering content relating to prostitution, they could take it down. Even YouTube operates this way: they rely on users to report inappropriate content and act accordingly. Edit: I think I hallucinated the word “knowingly” because after checking just now it was not there. 🙂 18:51 It does change the situation.
"They're misunderstanding the nature of the internet and social media." Well, we can rest easy, knowing that the people who are determining the future of the internet have extensive experience and have probably even watched a video before, if they have grandkids. And if not, they're probably more than willing to get familiar enough with the subject matter to treat this important topic with the care it deserves. Edit: guess I'll add the /s after all :p
On a personal, emotional level the outrage of the victim's family makes sense, and it's understandable. On a larger scale, however, I totally agree that it's unreasonable to take down the whole system because of the injustice. And while there are negatives, and echo chambers grow out of the way things are sorted and recommended on the internet, that doesn't mean the system as a whole is bad.
YouTube didn't upload the videos. But they control their own algorithm and are definitely more interested in keeping viewers glued than in actually making sure the content isn't dangerous. (If a video doesn't get reported, how likely is it that a moderator at YouTube actually catches it themselves?) I do think these companies need to be held accountable for the parts of the problem they do directly control, and need to be on top of people who attempt to game the system to skirt regulations and such. (Remember when those traumatizing videos popped up on YouTube Kids? Yeah. That kind of system gaming.) The rabbit hole of YouTube videos goes dark very quickly. THIS is something YouTube is responsible for (directly) and something they control and manipulate (directly).
It’s weird to me that a platform like Facebook or Youtube has terms of agreement where any content added to the platform by its users becomes the intellectual property of that platform but then isn’t liable for what is done with that in property by other users… I could understand if they laid no property rights to the content on the platform, but that is not how they consider it.
I don't fully agree with the reasoning for going after YouTube/Google. But I do think there's something to be fixed with the algorithm. I've noticed it tends to favor things that are negative/hateful/etc. And maybe that's because more people interact with those things out of rage, I don't know. But maybe that should be weighed differently in the algorithm.
The fact that a family is willing to derail THE ENTIRETY of society as we know it because of one person's death truly astounds me. Yes, it's horrible what happened to Gonzalez, and I am deeply sorry for that person. But THIS!? This is way too extreme. I'm sure that in the afterlife, if it exists, Gonzalez would be utterly insulted to have her name placed on a lawsuit that could set us back literal centuries because of people who don't understand just how dependent this entire species has become on the internet. For the sake of the world, may this case fail and world-wide chaos not ensue.
Wouldn’t holding the website liable for its algorithm be similar to holding a bookstore liable for how it displays its books for sale? If a bookstore is not liable despite having some influence around the visibility of certain books, even if controversial; then it doesn’t make sense (to me) that a website would be liable for how it makes content visible.
I listened to the Nina Totenberg piece on ATC driving home today, and my appraisal, from the quotes she picked, is that the Supremes know this one's fundamental; they also know that their clerks understand the Internet a LOT BETTER than they do, and they don't sound inclined to drive off a cliff, just yet anyway.
It’s almost like people forget that prostitution does not equal trafficking. It’s also as if people don’t want to realize that prostitution (out of the person’s free will) should not be illegal, because it doesn’t make prostitution go away. It only makes some people feel better about themselves and make traffickers stronger.
You have this court ruling to worry about. Here in the UK there is a law in the process of being passed, known as the Online Safety Bill or the Online Harms Bill, that would make it a legal mandate. This bill that the UK government wants to make law has clauses that sound similar to what could be the worst-case scenario for this court case, especially the 'legal but harmful content' clause.
There is a third option: The DMCA route. If content is reported for certain reasons, the site blocks access to it and notifies the user, who then can challenge the report and have a manual review to see if it breaks the law. This, of course, is still not a great solution, but it’s better than having to moderate _everything_.
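A minimal sketch of that kind of flow, loosely modeled on the DMCA's notice-and-counter-notice pattern (the state names and transitions are assumptions for illustration, not any platform's actual process):

```python
from enum import Enum, auto

class Status(Enum):
    LIVE = auto()
    BLOCKED_PENDING = auto()   # blocked after a report, uploader notified
    UNDER_REVIEW = auto()      # uploader challenged the report
    REMOVED = auto()
    RESTORED = auto()

def handle_report(status: Status) -> Status:
    # A report immediately blocks access instead of waiting for a full review.
    return Status.BLOCKED_PENDING if status == Status.LIVE else status

def handle_challenge(status: Status) -> Status:
    # The uploader may contest the report, which triggers a manual review.
    return Status.UNDER_REVIEW if status == Status.BLOCKED_PENDING else status

def handle_review(status: Status, breaks_rules: bool) -> Status:
    if status != Status.UNDER_REVIEW:
        return status
    return Status.REMOVED if breaks_rules else Status.RESTORED

s = Status.LIVE
s = handle_report(s)
s = handle_challenge(s)
print(handle_review(s, breaks_rules=False))  # Status.RESTORED
```

The point of the design is that human review is only spent on the small fraction of reported items that are actually contested.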
There’s actually a really interesting series on this sort of moderation conundrum that the “Extra Politics” series covered. A proposed counter to the idea of spreading misinformation, hate speech, or defamation is to increase the scrutiny based on the amount of people something reaches. The more people it reaches the stricter the guidelines for moderation, fact-checking, and so forth.
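A toy version of that reach-based idea might look like the following sketch; the thresholds and tier names are invented purely to show the shape of the rule:

```python
def review_tier(reach: int) -> str:
    """Stricter review as a post reaches more people (thresholds are illustrative)."""
    if reach < 1_000:
        return "automated filters only"
    if reach < 100_000:
        return "automated filters + sampled human review"
    return "automated filters + mandatory human review + fact-check"

for reach in (50, 25_000, 3_000_000):
    print(reach, "->", review_tier(reach))
```

The appeal of this approach is that the moderation burden scales with the harm a post can actually do, rather than being uniform across billions of low-reach posts.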
I hope nobody is actually naive enough to believe that would actually stop those sorts of messages from spreading. It will just drive them underground and make it harder for investigators and the like to keep track. The root of the problem is not the existence of websites where these groups can spread their message; those will always exist, as demonstrated by thousands of years of history. The root of the problem is why people are swayed by the messages.
Talking about “recommendations of content” vs searched for content makes me think a lot about advertising. It is not content you asked for, but someone (in this case, Google/YouTube) thinks you should see, because it might get you to use their services more. I could see how algorithmic recommendations could be assessed legally very similarly to advertisements.
It’s the wild west on the internet. Formerly, to be published, you had to go through a publisher and an editor. That’s no longer a buffer against deliberately damaging misinformation/disinfo for people who read. Moderators are the only firewall between willful maliciousness and the public at large online. I have worked in the newspaper industry since 1984. I know that business. Newspapers are no longer “a thing” so I have switched to online communications over the past 25 years. I have also done moderation on YouTube live chats. It’s not fun and it doesn’t pay a dime but somebody has to be aware and on guard. For the benefit of all. Moderators are NOT censors. That needs to be known. Moderators enable conversations to remain conversations. Without us the whole thing will turn into a flame-fest of raw hate and violent threats. No place you’d want to go.
I can definitely see this resulting in the SECOND GREAT DEPRESSION, especially if the Supreme Court sides with the plaintiff. This is mostly due to how many people use the internet and rely on it to find jobs, sell merchandise or services, transfer goods and money, and easily broker deals and make agreements or contracts. Not to mention advertising and the entertainment industry as a whole would be affected drastically, especially movies, TV shows, television providers, and online streaming services and providers like Disney, Paramount, Hulu, Netflix, Twitch, etc. The people on the Supreme Court likely don't even know what the internet is, let alone how to use it, how it works, and especially how interconnected it is with everything we do, from the videos you watch, to the house you live in and maintain, to even what we eat and drink. I hope beyond hope that they see how big and dangerous a can of worms they would be opening by siding with the plaintiff. I hope that they see above the emotion (and I do agree what happened was tragic) and the politics of the case and look at the vast and likely very deleterious consequences for the livelihoods and lives of an uncountable number of people and businesses big or small (especially the small ones) that would be drastically hampered if not flat out destroyed by such a decision. Of course, if the worst does come to pass and all the bad things happen, I hope that they will swiftly realize the gravity of such a decision and rescind or otherwise fix or replace it.
I think an important distinction courts need to make is the difference between curated digital content and not. Showing content per the user’s own filtering should be protected under section 230, but I do not think algorithm-based content delivery should. The functional difference between a person filtering content to what someone wants and an algorithm doing it is negligible, I feel like its just a question of if courts will be willing to rule on that or if that needs to come from legislative updates.
I think any law concerning this issue should broadly take into consideration the technical capabilities of recommendation systems, and then take decisions based on what is technically possible to do without incurring arbitrary censoring. I studied recommendation systems a while ago, so I might be outdated, but one outline is as follows: recommendation systems keep tags both for the users and for the content. Over many iterations, they learn how to match the two, while at the same time updating them to fit new tendencies. For example, if there were a user tag for 'color' and my color tag were 'blue', I could be recommended lots of content featuring the ocean. But if I start searching for and viewing forest content, my tag would change to, let's say, green, so the algorithm would recommend forest-related content. In reality, the tags generated are not necessarily interpretable by humans, but the developer can shape them to fit certain needs. So one could force those systems to develop meaningful tags for the content and tags for, let's say, peculiar users, not with the aim of pursuing those users, but of using them to flag new sensitive content: the more 'peculiar' people congregate around new content, the higher the probability of that content being sensitive. Highly probable sensitive content could then be flagged for review by humans, or by AI specialized in detecting 'offensive' content before human review. There are a lot of details that need to be examined, but my point is that these technical issues need to be addressed in order to determine how far a platform can be held responsible for its content within a given legal frame.
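A stripped-down Python sketch of the tag-matching idea described above (tags, rates, and catalog are invented; real systems learn opaque latent tags rather than readable ones):

```python
from collections import Counter

def score(user_tags: Counter, item_tags: set) -> float:
    # Recommendation score = overlap between the item's tags and the user's profile.
    return sum(user_tags[t] for t in item_tags)

def update_profile(user_tags: Counter, watched_tags: set, rate: float = 1.0) -> None:
    # Watching something nudges the user's profile toward that content's tags.
    for t in watched_tags:
        user_tags[t] += rate

catalog = {
    "ocean documentary": {"blue", "nature"},
    "forest hike vlog": {"green", "nature"},
    "city timelapse": {"urban"},
}

user = Counter({"blue": 2.0})
print(max(catalog, key=lambda k: score(user, catalog[k])))  # -> ocean documentary

# Two forest views later, the profile has drifted and the recommendation follows.
update_profile(user, catalog["forest hike vlog"], rate=2.0)
update_profile(user, catalog["forest hike vlog"], rate=2.0)
print(max(catalog, key=lambda k: score(user, catalog[k])))  # -> forest hike vlog
```

The same mechanics that let the profile drift toward forests are what let it drift toward extreme content, which is why the commenter suggests using clusters of unusual profiles as an early-warning flag rather than as a targeting tool.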
I'll always come back to this: it's the selection and the algorithm that are problematic, not the concept of people sharing info on platforms. When Facebook first did this "curated" timeline, I noticed how rapidly it became a feedback loop. Things that were posted while more users were browsing would get more interactions, be promoted, and then get even more interactions. Things you might be interested in but didn't interact with? Gone. It should be transparent why something is being recommended, and platforms should be held accountable for not having a better response to content moderation, but locking everything down is not a solution.
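That feedback loop is easy to demonstrate with a toy simulation: two posts with identical appeal, one of which happens to pick up a few early interactions, and a feed that shows posts in proportion to the interactions they already have (all numbers are made up):

```python
import random

random.seed(0)

# Two posts with the same intrinsic appeal; one landed when more users were browsing.
appeal = {"early post": 0.5, "late post": 0.5}
interactions = {"early post": 3, "late post": 0}

for _ in range(1000):
    # The feed shows a post with probability proportional to its interactions so far (plus a floor).
    names = list(appeal)
    weights = [interactions[n] + 1 for n in names]
    shown = random.choices(names, weights=weights)[0]
    if random.random() < appeal[shown]:  # identical chance of a click for either post
        interactions[shown] += 1

print(interactions)  # the early head start compounds into a much larger gap
```

Nothing about the content changed between the two posts; the gap comes entirely from the rich-get-richer promotion rule.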
This idea of holding the platform owner liable is exactly like the two EU copyright articles people freaked out about a couple of years ago (pre-2020 era, can't remember any dates XD). Technically, in EU countries YouTube is now responsible for violations of copyright and so on. If I remember correctly, YouTube now has to remove similar content, and technically youtubers could sue for damages if such removal is done improperly (like a strike against fair-use content). So, while it could end the internet as we know it, we still aren't seeing problems in Europe, despite those articles having been law for almost half a decade. It's also true that every single EU country has to create its own version of the law, so there may be weird exceptions and such. In any case, EU laws are available on the EU's site, in all languages of the EU countries.
Would this end the internet, or just the internet for the USA/North America? Could websites just decide to block access to users from that region, while leaving access open to the rest of the world, and in the case of larger companies like Google, just moving their headquarters/etc overseas to say Europe or Asia so that operations would no longer fall under US law? Although even then there might not be that much of an impact as everyone in NA would just get a VPN to set their region to anywhere but NA and thus still access all the now blocked content?
I work in IT and kept hearing things about this Section 230; I sort of got the idea of it but not much more. I can see this, and from what I have been told, when they did Prohibition in the 1920s it just made things worse. I have issues with the algorithms, false news, conspiracy theories, etc. But I also would have never found half the things I subscribe to, including this, if it were only things I had to search for. Now, when I am talking to a friend about wanting a new car and within a day ads show up for it even though I had never searched for it, that is creepy and I think it needs to stop.
I would argue that it's not Google curating or promoting content. Users are setting their own preferred content with every video they watch or like. Google provides the home page with videos according to the user's expressed preferences. No one at Google is filling out individual users' preferences for them. The users are filling out and driving the algorithm themselves.
We could also live in a world where the "publishers", while not liable for every word published, took proper steps to investigate and deal with material reported to them rather than ignoring it. After all, this was the reason these protections were given out in the first place: to allow them to remove material. They fail this test too often.
Honestly, I'd kind of love an internet where people had to proactively post content to their own website. This new internet, where there are only a couple of platforms employing mysterious algorithms, has caused huge unintended problems. Isn't it a little weird that YouTube can profit off content it has never seen, but isn't at all liable for losses that content incurs?
A better header would be "(…) Destroy the Internet inside the USA border (…)" xD Since Europe put stronger rules on user data and all, most of the major websites here are officially managed by an "independent-ish" company (e.g. Google Ireland Limited; almost all of those companies are in Ireland). So those websites could still exist outside the US if they only used non-USA servers/CDNs and operated under one of those so-called companies.
This extreme-cases scale sucks. The "allow everything, zero moderation" or "moderate everything and allow nothing" situation is terrible. There was a certain website that was apparently basically not shown on Search if you were inside the US. I am not, so I searched and the website was there for me, but U.S. residents couldn't find it. They could find results from other websites and videos talking about said website, but not the website itself, even though it wasn't banned (it was perhaps what we call "shadowbanned"). It was also around election time, I think. This is a problem, because with the algorithm being closed-source, private code, we can't really know whether Google/YouTube applies it equally to every piece of content. And seeing how YouTube works in general (for example, the fact that I get 8 out of 10 recommendations from channels I follow or similar, and then 2 videos that last 20+ minutes with 700 views or less from channels with 100 subscribers, which isn't bad in itself, but it's a weird way for the algorithm to work), I don't think it does. But I can't confirm, since we can't see the code. Another example is verified Twitter accounts of Hollywood actors making claims in favor of certain economic systems or describing countries such as Cuba as "heaven on earth." The most recent one is Mark Hamill joining the rest of the Avengers (while they were saying "Avengers Unite"), suddenly speaking fluent Portuguese in support of Lula da Silva, the recent victor of the Brazilian presidential elections.
Even if YouTube goes down, or internet use in general, then just like with piracy there will be some sort of syndicate that provides a similar experience, as there always is. The internet going down is about as likely as all factories being forced to stop polluting the environment: there is no way this passes, since so many stand to lose from it.
Something like this has already been happening in Australia: ALL news sources on YouTube only publish with comments off. A small politician sued Google for defamation because he had already sued the youtuber who made the content and failed. Google barely participated and lost, and took a fine smaller than their expected legal fees, but this set a precedent companies don't want to fall on the wrong side of. (Because the politician got to present his case wholly unimpeded, he managed to get the court to find the youtuber in criminal contempt despite the youtuber not being a party to the trial.)
Third option: algorithmic selection and sorting dies, thus getting rid of any suggestion that the sites know what their content is, but rather than the sole alternatives of fire-hose or targeted search, set up a much better system of filters that are entirely user-selectable: time posted, topics, genres, included content, language, nudity, etc. Creators would have to get much better about meta-tagging their videos, and false tagging could be penalized by automatically preventing a user from using that tag again if there were too many complaints. And sure, let users decide if they only want to see content from those who are providing shorts, but maybe you're someone who really doesn't care about shorts, so you don't select that criterion. The key is, let me DELIBERATELY choose; then it's me doing it and my fault if I see something I don't like. But when YouTube itself sets the criteria for whether you get seen or not, there is an argument that they're responsible for what you do see, and thus liable.
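A minimal sketch of that kind of purely user-selected filtering over creator-supplied metadata (field names and example data are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    topics: set
    language: str
    is_short: bool
    posted_day: int

def user_filtered(videos, topics=None, language=None, include_shorts=True, newest_first=True):
    """Every criterion here is something the viewer explicitly chose; nothing is ranked for them."""
    kept = [v for v in videos
            if (topics is None or v.topics & topics)
            and (language is None or v.language == language)
            and (include_shorts or not v.is_short)]
    return sorted(kept, key=lambda v: v.posted_day, reverse=newest_first)

videos = [
    Video("Sourdough basics", {"cooking"}, "en", False, 10),
    Video("Bread in 60 seconds", {"cooking"}, "en", True, 12),
    Video("Backflip tutorial", {"sports"}, "en", False, 11),
]
print([v.title for v in user_filtered(videos, topics={"cooking"}, include_shorts=False)])
# -> ['Sourdough basics']
```

Because every parameter is set by the viewer, the argument goes, the site is doing no more editorial work than a library catalog does.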
So what’s the chance that their potential ruling just takes away the protection websites have for using algorithms to promote content? Sounds to me like it’d be a return to search functions that have intense filters that the site has to index and the removal of “recommended for you” shit on most websites? Archival sites are really good at those kinds of search functions and most websites were like this before about 2009, including youtube
The thing is that if you moderate sites too much or too little it becomes unusable. I believe we have a relatively good balance of that right now with the current state of the internet. If someone posts something that violates terms of service that they agreed to then they are knowingly breaking the agreement and should be held accountable.
The problem is that the major social media platforms are playing both sides of the game. Publishers, newspapers, etc., are not common carriers and are liable for what they print. Phone companies, ISPs, etc., are common carriers, and are not liable – but they are also not allowed to control or restrict that content. The social media platforms want to control content, but refuse liability. There’s a real conflict here, and I haven’t a clue what is the best balance. But what we have now isn’t it.
It seems like there are plenty of opportunities for user content to exist should the Supreme Court side with the plaintiff, no? If YouTube can’t recommend content, then users can – something akin to curators on Steam. (You’d still have the issue of finding the curators, but that’d be an easier problem to solve than sorting through ‘millions of uploads a day’.) You might also see a rise in services/websites like Nebula and Curiosity Stream, and have more platforms that accept the role of publisher with more specific target audiences. I don’t hate that idea, in theory.
Secrecy and the black-box nature of recommendation systems may be important here. Holding arcane content recommendations with unknown biases and heuristics liable could make the internet much better. YouTube, for example, would probably remove the inane bell system and allow us to actually see videos released by the content creators we actually want to see content from. More transparent recommendation systems could easily argue they are not producing content themselves but merely acting as distributors.
If this lawsuit works and takes away Section 230, it will broadly destroy the careers of MILLIONS of people. Anything any content creator could ever say could be used against YouTube. So why would YouTube want to keep that kind of avenue for lawsuits open? They would have to severely restrict the website or outright take it down. This would kill creative content creators, instructional content creators, musical content creators, etc., effectively destroying the career and/or income of so many people. It would destroy advertising economies on the internet, since someone who is offended by an ad could sue the site where the ad is shown. This is a slippery slope and would destroy everything the free web has tried to accomplish since its conception.
Hey LegalEagle, kinda off topic here, but I've been seeing a lot of these magnet fishing videos where people find firearms in rivers, and I can't seem to find any real answers on the legalities surrounding those firearms: what can be kept, what must be turned in, and whether or not the firearm needs to be reported regardless of whether you can keep it. I was wondering if you might make a video covering this sometime in the near future.
Honestly, there’s a major lie in what YT said about the recommendation being based on the user’s intent only: the goal of the recommendation optimizations are to drive up engagement for YouTube, and in that, I do think that they are not just acting as an online provider, they are acting as a publisher. This may make large websites, particularly social media sites much harder to operate, and frankly, good.
There’s something that none of this takes into consideration… algorithms vary greatly. The algorithms that serve content to the user on one site are not the same as they are on another site. Also, those algorithms can be changed, quickly and quietly and the user is not even aware of any changes. This could be something that would make the websites liable because they do it all the time in the name of “improving”.
I don't think people realize just how far this would go. This would apply to everything. It would even apply to video conferencing and chat; Discord would have to moderate every chat message before it's posted. Either that, or they'd have to turn a blind eye to all content, including illegal material like violent terrorist content. Copyright holders such as record companies definitely wouldn't like the result either: websites would not be able to proactively scan for copyrighted material, and would have to wait until the copyright owner finds it on their own and requests a take-down.
Is YouTube acting as a publisher by using algorithms to push content it prefers according to your search? If you are looking for ISIS videos, then you will get ISIS-like content. I doubt YouTube knows exactly what you are looking for; no person is pushing ISIS content, the algorithm is organizing it. That's Section 230 protected. If a YouTube employee were pushing a certain type of content, that would be acting as a publisher. Lately, with all the things you can't create or put in content, I think YouTube is acting more and more like a publisher. Section 230 protects YouTube as long as they are not involved in creating the content, but with all the things you can't say on YouTube, then yep, they are deciding what can be said and what can't. Kagan is on to something here.
I'm new to the channel but I dig your content so far. If you haven't already, I'd loooove to see your take on the 1A and 2A audit scene. Specifically, I'd love to see you review Audit the Audit's assessments of police interactions and the likely legal outcomes. If you have done it, I plead nolo contendere to not digging far enough into your body of work to find it on my own.
This is one occasion where the clickbaity title isn't actually clickbait. The more I watch, the more I realize they literally could destroy the internet, and connected interaction entirely. Videos like this, developed by intelligent people, allow others to develop their own intelligent opinions; without content created by people at the highest levels in their field being shared with everyone, humanity is less. The unfortunate side effect is that some areas are subjective and gray when it comes to what people see as intelligent, and in many cases content that is hateful or negative deploys methods that can exploit platforms and their users in a way that introduces radicalization to unsuspecting viewers. That all said, I cannot see the Supreme Court going in a direction against content creators. That would be an attack on free speech itself, like saying Starbucks is liable for every conversation had on its premises; they'd have to gag customers. And the people targeting others with hateful, radicalizing content would just find another way to achieve their goals, and would be more successful in doing so, since everyone else would have fewer resources of reason to support them against radicalization. This would only hurt the people who use these services for positive means. This comes down to a battle YouTube and content creators have been having for years: creators want to be visible to more users and not subject to an algorithm, while YouTube wants to make sure what's visible is what people WANT to see.
The point that without the algorithm you wouldn't find videos you didn't search for is not an argument; it's beside the point. It can be true even if we ethically blame YouTube for the consequences of the hosted content. Then the discussion becomes ethical. It is clear that platforms like YouTube have a lot of influence on the formation of people's values. Algorithms are the reason, and as this is relatively new, we shouldn't assume that the ethics and laws we have now are sufficient. If it turns out these ISIS attacks wouldn't have happened had YouTube not suggested certain videos to the attackers, this would have major implications. The problem is that you can't prove this, and the absence of proof is not proof of absence.
I think platforms shouldn’t be liable for everything on their sites, but more liable for any implicitly pushed recommendations they target onto users. These platforms hide behind “it’s just an algorithm” — but they intently tune that algorithm to maximize profits (which often correlates with encouraging outrage).
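To illustrate that tuning point, here is a toy ranking function where a single platform-chosen weight decides whether calm, relevant content or long, outrage-driven content wins (all numbers are invented):

```python
def rank_score(relevance: float, predicted_watch_minutes: float, engagement_weight: float) -> float:
    # The platform controls engagement_weight; users never see or choose it.
    return relevance + engagement_weight * predicted_watch_minutes

calm_video = dict(relevance=0.9, predicted_watch_minutes=4)
outrage_video = dict(relevance=0.5, predicted_watch_minutes=12)

for w in (0.0, 0.1):
    scores = {name: rank_score(v["relevance"], v["predicted_watch_minutes"], w)
              for name, v in {"calm": calm_video, "outrage": outrage_video}.items()}
    print(w, "->", max(scores, key=scores.get))
# 0.0 -> calm
# 0.1 -> outrage
```

The distinction the commenter draws is between hosting both videos (clearly protected) and choosing the weight that systematically pushes one of them (arguably the platform's own conduct).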
Great informative content. What I am interested in is the following question: the internet is much more than the US companies who provide content. Americans cannot sue a company, as long as its services are not provided via American servers, since it is not providing its services in the US; rather, the US internet users wander off to it, wherever it is. Should the plaintiffs win their case, what impact would that have in the long run? Companies could and would leave the US and base themselves, for example, in Ireland or any other country within the EU. There they would be outside the reach of US regulators as well as any US jurisdiction. While I am sure it would be a shitshow for US citizens, the rest of the world could not care less, other than thanking the US for handing over the largest part of its internet companies. Am I wrong? Is there something I overlooked? Because from a European standpoint this is, while scary in its own terms, nothing that concerns the rest of the world. For the first time, US companies would get the benefit of the laws of other nations around the world, the same laws so many of them bloodied their noses on in the past.
The distinction with this case is likely related to recommendation algorithm. If google has the power to handle the type of recommendations people see, then they potentially can be held liable if they recommend harmful content that encourages violence. Having people post on your platform will remain protected, but recommending/advertising content may not.
I feel like there should be a lot more accountability for platforms than we have currently, but this case might not be it. The algorithms work perfectly for promoting similar content when it comes to normal stuff like gardening, gaming, or cooking. But when it comes to hateful and radical content, this is where it can get ugly. Those same algorithms can take someone who might be just slightly disgruntled and eventually guide them down the path of deep radicalization. We've seen this quite thoroughly with the alt-right pipeline and the events caused by it. What should be done about it? I'm honestly not quite sure, but it's often understated how influential these algorithms, if not filtered and moderated correctly, can be in terrorist and hate-group recruitment.