
The Case for Regulating Platform Design


In the summer of 2017, three teenage boys in Wisconsin were killed in a high-speed car crash. At the moment of the collision, they were recording their speed using Snapchat's Speed Filter: 123 mph. This was not the first such incident: the filter had been linked to several other crashes between 2015 and 2017.

The parents of the Wisconsin teenagers sued Snapchat, alleging that its product, which awarded "titles, achievements, and social recognition" to users who reached 100 mph, was negligently designed to encourage dangerous high-speed driving. A lower court initially found that Section 230 of the Communications Decency Act shielded Snapchat from liability, holding that the app was not responsible for the third-party content created by users of its Speed Filter. But in 2021, the Ninth Circuit reversed the lower court's ruling.

Platforms are largely immune from liability for this type of third-party content under Section 230. But in this important case, Lemmon v. Snap, the Ninth Circuit drew a crucial distinction between a platform's own harmful product design and its hosting of harmful content created by third parties. The claim was not that Snapchat created or hosted harmful content, but that it negligently designed a feature, the Speed Filter, that encouraged dangerous behavior. The Ninth Circuit correctly found that the lower court erred in letting Snapchat invoke Section 230 as a defense; it was the wrong legal tool. Instead, the court focused on Snapchat's negligent design of the Speed Filter, an ordinary product liability claim.

Frustratingly, courts in recent years, most recently during last month's U.S. Supreme Court oral arguments in Gonzalez v. Google, have failed to understand or distinguish between harmful content and harmful design choices. The judges hearing these cases, and the lawmakers working to curb online abuse and harmful activity, must keep this distinction in mind and focus on platforms' negligent product design instead of being distracted by sweeping claims of Section 230 immunity over harmful content.

At the center of Gonzalez is the question of whether Section 230 protects YouTube not only when it hosts third-party content, but also when it makes targeted recommendations about what users should watch. Gonzalez's attorney argued that YouTube should not receive Section 230 immunity for recommending videos, contending that the act of curating and recommending the third-party material it displays is content creation in its own right. Google's lawyers countered that the recommendation algorithm is neutral, treating all the content it recommends to users in the same way. But both arguments miss the mark. There is no need to invoke Section 230 at all to address the harms at issue in this case. The problem is not that YouTube's recommendation feature creates new content, but that its "neutral" recommendation algorithm is negligently designed to make no distinction between, say, ISIS videos and cat videos. In practice, the recommendations actively favor harmful and dangerous content.

Recommendation features like YouTube's Watch Next and Recommended for You, which are at the heart of Gonzalez, contribute to serious harm because they prioritize outrageous and sensational material, and they incentivize and reward users for creating such content. YouTube designed its recommendation features to maximize user engagement and ad revenue. The creators of this system should have known that it would encourage and promote harmful behavior.

Although most courts have accepted a sweeping interpretation of Section 230 that goes beyond merely exempting platforms from liability for dangerous third-party content, some judges have gone further and begun to apply stricter scrutiny to negligent design by invoking product liability. In 2014, for example, Omegle, a video chat service that pairs random users, matched an 11-year-old girl with a man in his thirties who went on to groom and sexually abuse her for years. In 2022, the judge hearing that case, A.M. v. Omegle, found that Section 230 largely protected the actual material sent by both parties. But the platform could still be held liable for its negligent design choice of connecting sexual predators with underage victims. Just last week, a similar case was filed against Grindr. A 19-year-old from Canada is suing the app because it connected him with adult men who raped him over a four-day period when he was a minor. Again, the lawsuit claims that Grindr was negligent in its age verification process, and that it actively sought to draw underage users to the app by targeting its advertising on TikTok to minors. These cases, like Lemmon v. Snap, affirm the importance of focusing on harmful product design features rather than harmful content.

These cases set a promising precedent for how to make platforms safer. When efforts to curb online abuse focus on third-party content and Section 230, they become mired in thorny free speech issues that make meaningful change difficult. But if litigants, judges, and regulators sidestep these content issues and focus instead on product liability, they will get at the root of the problem. Holding platforms accountable for negligent design choices that encourage and monetize the creation and dissemination of harmful content is the key to addressing the many dangers that persist online.


WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinion pieces and see our submission guidelines on the WIRED site. Submit an op-ed at [email protected].
