The internet is about to get a lot safer

This article is from The Technocrat, the MIT Technology Review’s weekly technology policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

If you use Google, Instagram, Wikipedia, or YouTube, you’ll start to notice changes in content moderation, transparency, and security features on those sites over the next six months.

Why? It goes back to some major tech legislation that passed in the EU last year but didn’t get enough attention (IMO), especially in the US. I’m referring to a pair of bills called the Digital Services Act (DSA) and the Digital Markets Act (DMA), and this is your sign, as they say, to get familiar with them.

This legislation is actually a pretty big deal, setting a global gold standard for tech regulation when it comes to user-generated content. The DSA deals with digital safety and transparency from technology companies, while the DMA addresses antitrust and competition in the industry. Let me explain.

Two weeks ago, the DSA reached a major milestone. By February 17, 2023, all major technology platforms in Europe were required to self-report their size, which is used to group the companies into different tiers. The largest companies, with more than 45 million monthly active users in the EU (roughly 10% of the EU population), are dubbed “Very Large Online Platforms” (VLOPs) or “Very Large Online Search Engines” (VLOSEs) and will be held to the strictest standards of transparency and regulation. Smaller online platforms face far fewer obligations, part of a policy designed to encourage competition and innovation while still holding Big Tech accountable.
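To make the tiering concrete, here is a minimal sketch of the size cutoff described above. The 45-million-user threshold comes from the legislation as described in this article; the function and the sample figures are hypothetical, purely for illustration.

```python
# Hypothetical sketch of the DSA's size-based tiering described above.
# The 45 million monthly-active-user threshold is from the article;
# the platform names and numbers below are made up for illustration.

VLOP_THRESHOLD = 45_000_000  # roughly 10% of the EU population

def dsa_tier(monthly_active_users_eu: int) -> str:
    """Classify a platform by its self-reported EU monthly active users."""
    if monthly_active_users_eu > VLOP_THRESHOLD:
        return "VLOP/VLOSE: strictest transparency and audit obligations"
    return "smaller platform: lighter obligations"

# Illustrative usage with made-up figures:
for name, mau in [("ExamplePlatform", 100_000_000), ("SmallForum", 2_000_000)]:
    print(f"{name}: {dsa_tier(mau)}")
```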

“If you ask [small companies] to hire 30,000 moderators, you will kill the small companies,” Henri Verdier, the French ambassador for digital affairs, told me last year.

So what will the DSA actually do? So far, at least 18 companies have declared that they qualify as VLOPs or VLOSEs, including most of the well-known players like YouTube, TikTok, Instagram, Pinterest, Google, and Snapchat. (If you want a full list, LSE law professor Martin Husovec has a great Google doc showing where all the major players shake out, along with an accompanying explainer.)

The DSA will require these companies to assess risks on their platforms, such as the potential for illegal content or election manipulation, and make plans to mitigate those risks, with independent audits to verify safety. Smaller companies (those with fewer than 45 million users) will also have to meet new content moderation standards, which include removing illegal content “expeditiously” once it is flagged, notifying users of that removal, and stepping up enforcement of existing company policies.

Supporters of the legislation say the bill will help end the era of self-regulation for tech companies. “I don’t want companies to decide what is and isn’t forbidden without any separation of powers, without any accountability, without any reporting, without any possibility of appeal,” Verdier says. “It is very dangerous.”

However, the bill makes clear that platforms are not responsible for illegal content generated by users, unless they are aware of the content and fail to remove it.

Perhaps most importantly, the DSA requires that companies significantly increase transparency, through required reporting on “terms of service” notices and regular, audited reports on content moderation. Regulators hope this will have a broad impact on public conversations about the societal risks of big tech platforms, such as hate speech, misinformation, and violence.

What will you notice? You will be able to participate in, and formally contest, content moderation decisions made by companies. The DSA will effectively ban shadow bans (the practice of deprioritizing content without notice), curb cyber violence against women, and prohibit ads targeted at users under the age of 18. There will also be far more public data about how recommendation algorithms, ads, content, and account management run on the platforms, shedding new light on how the biggest tech companies operate. Historically, technology companies have been very reluctant to share platform data with the public, or even with academic researchers.

What’s next? Now the European Commission (EC) will review the reported user numbers, and it has time to challenge them or request more information from the tech companies. One notable issue is the omission of porn sites from the “very large” category, which Husovec called “shocking.” He told me he believes their reported user numbers should be challenged by the commission.

Once the size groupings are confirmed, the largest companies will have until September 1, 2023, to comply with the regulations, while smaller companies will have until February 17, 2024. Many experts expect companies to roll out some of the changes to all users, not just those living in the EU. With reform of Section 230 looking unlikely in the United States, many American users will benefit from a safer internet mandated from abroad.

What else I’m reading

More chaos and layoffs on Twitter.

  • Elon once again had a big news week after he laid off another 200 people, or 10% of Twitter’s remaining staff, over the weekend. Presumably, these employees were part of a “hard core” group who agreed to abide by Musk’s aggressive working conditions.
  • NetBlocks has reported four major Twitter outages since the beginning of February.

Everyone is trying to make sense of generative AI.

  • The Federal Trade Commission issued a statement warning companies not to make false claims about the capabilities of their AI systems. I also recommend reading this helpful article from my colleague Melissa Heikkilä on how to use generative AI responsibly, and this explainer on 10 legal and business risks of generative AI by Matthew Ferraro at Tech Policy Press.
  • The technology’s dangers are already making news: this reporter broke into his own bank account using AI-generated audio of his voice.

There were more internet shutdowns in 2022 than ever before, continuing a trend of authoritarian censorship.

  • This week, Access Now published its annual report tracking shutdowns around the world. India, once again, topped the list with the most shutdowns.
  • Last year, I spoke with Dan Keyserling, who worked on the 2021 report, to learn more about how shutdowns are weaponized. During our interview, he told me, “Internet shutdowns are becoming more frequent. More governments are experimenting with limiting internet access as a tool to influence citizen behavior. The costs of internet shutdowns are arguably increasing, both because governments are becoming more sophisticated about how they go about them, and because we live more of our lives online.”

What I learned this week

Data brokers are selling mental health data online, according to a new report from the Duke Cyber Policy Program. The researcher contacted 37 data brokers asking for mental health information, and 11 responded willingly. The report details how some of those brokers offered to sell data on depression, ADHD, and insomnia with few restrictions. Some of the data was linked to people’s names and addresses.

In an interview with PBS, project lead Justin Sherman explained, “There’s a whole group of companies that aren’t covered by our narrow health privacy regulations. And so they’re legally free to collect and even share and sell this kind of health data, which enables a range of companies that can’t normally get at it — advertising companies, Big Pharma, even health insurance companies — to buy that data and do things like run ads, profile consumers, and potentially make health plan pricing decisions. These data brokers enable those companies to get around health regulations.”

On March 3, the FTC announced an order banning online mental health company BetterHelp from sharing people’s data with other companies.
