
It’s time for responsible social media

Read the article on Engineering & Technology

US lawmakers are under pressure to address the harmful impact social media can have on children, communities and national security. Has it been a long time coming?

“We must finally hold social media companies accountable for the experiment they are running on our children for profit. And it’s time to pass bipartisan legislation to stop Big Tech from collecting personal data on kids and teenagers online, ban targeted advertising to children, and impose stricter limits on the personal data these companies collect on all of us.”

President Joe Biden got a standing ovation from Democrats and Republicans when he proposed tough regulation of social media in February’s State of the Union address. But getting something into federal law is proving tricky.

The US has lagged the UK and the EU on online regulation. Europe’s General Data Protection Regulation has become a global template for privacy and has for now been retained in British law.

Until recently, the UK had been considered ahead of the game in moving to the next stage, combatting harmful content, with its proposed Online Safety Bill. Currently in the House of Lords, it is expected to become law this autumn.

However, following the departures of Boris Johnson as prime minister and his culture secretary Nadine Dorries (a former nurse who favoured strong measures), there are reports that the bill is being watered down. Those reports reflect strong views among Conservative MPs on protecting freedom of speech, promoting free-market economics and passing ‘light touch’ post-EU regulation. Moves by the EU on child safety are meanwhile in consultation after publication of a draft strategy in May 2022.

What the US does matters regardless, because it is home to most leading social media platforms and is thus seen as best placed to influence the culture that underpins much of Big Tech – an often libertarian ideology, an emphasis on disruption, and a profits-first mindset. All are blamed for having caused social media to act as an accelerant for a diverse range of social problems.

However, passing effective legislation means US lawmakers must balance three issues and navigate a hefty lobbying effort by the main companies. Based on public records, Meta (Facebook), Alphabet (Google) and Twitter have spent nearly $100m on that effort since 2020, according to the tracking group Open Secrets.

Those three factors can be broadly categorised as concerning the Constitution, the economy and national security.

The First Amendment is clear: “Congress shall make no law… abridging the freedom of speech.” This necessarily shifts the primary goal of legislation to liability for content rather than the content itself. Otherwise, any regulation would run the risk of being struck down by the Supreme Court.

However, liability raises the economic challenge. In 1996, Congress passed the Communications Decency Act. Its Section 230 has become infamous because it absolves online platforms of legal responsibility for content posted on them, giving them a privileged status apart from traditional publishers.

It was meant to promote a then-emerging digital economy. Many would say it more than achieved that but has since had deleterious unforeseen consequences. Nevertheless, rolling back Section 230 entirely is seen as virtually impossible, even among some of social media’s strongest critics.

There is also fear today of putting a further brake on technology’s contribution to GDP at a time when the sector already faces challenges. How much the tech sector matters to the economy was underlined just weeks ago by the Biden administration’s rapid response to the collapse of Silicon Valley Bank.

The third factor – a fast emerging one – is artificial intelligence (AI). On one level, there is the risk of legislating for yesterday’s problems and not capturing the impact generative and other forms of AI are starting to have as they are integrated within new and existing products – technology again outpacing regulation.

On another, there is the blunt reality that all the companies facing regulation are key players (and likely to be important government partners) in the AI market, either directly in the case of Alphabet and Meta or indirectly in the case of Elon Musk and the companies he owns.

The Biden administration has developed an AI strategy aimed at not only preserving US technological leadership economically but also enhancing national security – a core theme in the $280bn CHIPS and Science Act and, more recently, the National Cybersecurity Strategy. Big Tech may no longer be the golden goose atop Capitol Hill, but it is still seen more as frenemy than outright threat.

There have been many attempts to introduce legislation regulating platforms since the companies began spending big on lobbyists. Several are being reintroduced during the current session of Congress, a clutch of which address child safety.

The most prominent is arguably the Kids Online Safety Act (KOSA). It would impose a duty of care on platforms to “prevent and mitigate the heightened risks of physical, emotional, developmental, or material harms to minors posed by materials”.

As one House member explained: “We could legislate and shoot ourselves in the foot. Or we could legislate and have the Supreme Court throw it all out as unconstitutional. Something needs to happen, but it will have to be very smart. I don’t think anything we’ve seen yet passes the test.”

He nevertheless noted that the need to do ‘something’ is greater than ever, with Washington under intense pressure from multiple sides. That pressure comes from increasingly organised lobbying by civil society, from international and state-level governments, and even from the US Supreme Court.

Not only US politicians but also the social media companies accept that regulation is inevitable. Some have for quite some time. Here’s ‘free speech absolutist’ Elon Musk from 2018: “I think there should be regulations on social media to the degree that it negatively affects the public good. We can’t have like willy-nilly proliferation of fake news, that’s crazy.”

This is Facebook chief Mark Zuckerberg a year later: “From what I’ve learned, I believe we need new regulation in four areas: harmful content, election integrity, privacy and data portability.”

The problem is inevitably one of degree, from how far any new rules should go to the size of any penalties. This has caused things to progress slowly and technocratically in Washington.

Elsewhere, things are happening. Partly inspired by the GDPR, the State of California extended its online rules with the California Consumer Privacy Act, which took effect in 2020, and recently passed the Age-Appropriate Design Code Act (CAADCA), which its politicians see as adding specific protections for children.

Its progress illustrates continuing tensions between government and industry. Scheduled to come into force in 2024, CAADCA has been legally challenged by NetChoice, a trade association representing the leading social media players. Its lawsuit claims the new act violates the US constitution because it “presses companies to serve as roving censors of speech on the internet.” Regulation up to a point, Lord Copper?

The CAADCA is more than simply an important test case. It is also an example of states taking things into their own hands. Some examples seem ideologically rather than socially motivated. Two states are reviewing legislation to prevent platforms limiting access to content posted by right-wing politicians (reflecting claims from that side that their posts are ‘shadowbanned’ by operators).

Others are following California and the EU’s lead on privacy. Meanwhile, in Washington State, legislators are considering regulation around the role of minors as online influencers, either individually or with their families, in the context of child labour.

These initiatives sit alongside what is happening in the UK and EU as guidance for what Washington DC can do. Congresspeople have noted that legislation elsewhere tends first to address online harms to minors, as the most vulnerable members of society. A starting point.

More broadly, they give an indication of what voters now consider acceptable and necessary. They also show civil society becoming more proactive.

House Bill 1627, currently before Representatives in Washington State, is titled ‘Protecting the interests of minor children featured on for-profit family vlogs’. It is legally based on labour protections for child actors, but it has its foundations in a grassroots movement started by an 18-year-old student, Chris McCarty.

He decided to act after hearing the story of a child whose adoptive parents profited from a series of YouTube videos but ultimately returned her to state care. After he launched an online campaign, Quit Clicking Kids, his call to action was joined by other activists, academics, parents and some former child influencers.

While progress so far has been local, the campaign has caught attention nationally and overseas, highlighting issues that go beyond what children are exposed to online. It is also one of several more consolidated lobbying efforts looking to influence regulation.

One of the most powerful is the Council for Responsible Social Media, launched last autumn and led by Issue One, a cross-party “political reform group”. It brings together 50 influential figures to promote the adoption of “significant bipartisan solutions… to the technological harms to our kids, communities, and national security” it sees spreading online.

Members include the Facebook whistleblower Frances Haugen, Nobel Prize-winning journalist Maria Ressa, two former CIA directors in Leon Panetta and Porter Goss, and two former Capitol Hill heavyweights, Dick Gephardt from the Democrats and Chuck Hagel from the Republicans.

The Council’s strategy is coalescing, but it is already offering important observations. At its launch, Haugen and fellow Council member Tristan Harris, a former Google ethicist and co-founder of the Center for Humane Technology, highlighted how some harmful content can be mitigated using relatively simple code akin to techniques used to counter Covid-19 disinformation.
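One widely discussed example of that kind of friction is capping how far a post can travel through one-click reshares, so that beyond a certain depth a user has to copy and paste content deliberately rather than amplify it instantly. The sketch below is a minimal, purely illustrative version of that idea; the class, the function names and the threshold are hypothetical, not any platform’s actual code.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical threshold: beyond this depth, one-click resharing is switched off.
MAX_RESHARE_DEPTH = 2


@dataclass
class Post:
    author: str
    text: str
    reshare_depth: int = 0  # how many reshares separate this copy from the original


def can_reshare_with_one_click(post: Post) -> bool:
    """One-click resharing stays available only close to the original poster."""
    return post.reshare_depth < MAX_RESHARE_DEPTH


def reshare(post: Post, new_author: str) -> Optional[Post]:
    """Return the reshared copy, or None when the user must copy and paste manually."""
    if not can_reshare_with_one_click(post):
        return None
    return Post(author=new_author, text=post.text,
                reshare_depth=post.reshare_depth + 1)


if __name__ == "__main__":
    original = Post(author="alice", text="breaking news!")
    first = reshare(original, "bob")    # allowed: depth 0 -> 1
    second = reshare(first, "carol")    # allowed: depth 1 -> 2
    third = reshare(second, "dave")     # None: manual copy-and-paste required
    print(first, second, third, sep="\n")
```

The point of such a rule is friction rather than removal: nothing is censored, but viral cascades slow down, which is why it has been compared to the measures platforms used against Covid-19 misinformation.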

More recently, three members testified before the Senate Judiciary Committee, including Kristin Bride, a mother who lost her son, at just 16, to cyberbullying.

“It should not take grieving parents filing lawsuits to hold this industry accountable for their dangerous and addictive product design. Federal legislation like the Kids Online Safety Act… is long overdue,” she says. “We need lawmakers to step up, put politics aside and finally protect all children online.”

Another example of consolidated activism emerged at March’s annual meeting of the American Association for the Advancement of Science (AAAS). The Coalition for Trust in Health and Science brings together 50 influential bodies including the AAAS, the American Medical Association and advocacy group Research!America (RA).

“People who actually are in the business of generating good solid information need to stand up and be counted and know how to engage,” said RA’s president Mary Woolley at the launch, although she later added that regulation is not on the new coalition’s immediate agenda.

Nevertheless, in uniting representatives of the US’s diverse healthcare sector and developing its own online resources to combat mis- and disinformation, it does join a trend. Strength in numbers.

Civil society is finding a much louder voice in the debate, and as it does so, it is focusing attention back on the tragic stories of people like Kristin Bride’s son which, many in these groups fear, risk getting lost in the legislative sausage-making. While she noted that this issue should not come down to lawsuits, in some respects that may be what is about to happen.

On 13 November 2015, Nohemi Gonzalez was among 90 victims of the attack by Islamic State gunmen on the Bataclan theatre in Paris. Another 40 died in other coordinated attacks.

Her family has since launched an action against Google saying that it is liable for promoting IS via the algorithms used on its YouTube video channel. The lawsuit, which has been rejected by lower courts, is nevertheless being reviewed by the Supreme Court. At its heart is a legal nuance: do the protections given by the controversial Section 230 extend to the recommendation technologies platforms deploy?

It is not the only difficult decision the court’s nine justices face. Nawras Alassaf died in another IS gunman’s attack at the Reina nightclub in Istanbul on New Year’s Day in 2017. It left 39 dead.

In this case, the point of contention is whether, in spite of Section 230, Twitter failed to take sufficient steps to prevent terrorists exploiting the microblog and thus remains liable under the US Anti-Terrorism Act.

Decisions are expected during the court’s current session, and at the initial hearings there were signs that the justices are wary of both cases. “You know, these are not like the nine greatest experts on the internet,” noted Justice Elena Kagan.

Although the court has often been accused of splitting along partisan lines on a number of critical issues – most recently, women’s right to abortion – calling these cases is harder given the consensus that social media needs to be restrained.

Judgements in favour of either suit would upend the big players’ business models and, especially over algorithms, lead to a massive amount of litigation around not just terrorist attacks but all instances of demonstrable fatal and non-fatal online harms.

Even if both claims are rejected, the conclusions that the court reaches could still determine the future of both Section 230 and the direction that US social media regulation must take.

In designing platforms to encourage engagement – or, if you prefer, addiction – social media has much to answer for. A growing body of research points to increasing online harms, from teen depression to political radicalisation. As that pile of evidence has grown, Silicon Valley has been slow to react.

At the same time, the ways in which platforms have scaled beyond expectations in users and usage have left the main players struggling to stay on top of the problems we now face.

They accept that themselves, and it was Mark Zuckerberg who said that, in retrospect, it would probably even have helped had there been more and stronger regulation in place when Facebook was moving out of a Harvard dorm.

Lawmakers, too, accept that action is now required and civil society is becoming increasingly loud in its calls for it. Yet the complexity of what needs to be done, including (though not addressed in detail here) probably a generation-long rethink of how engineers, investors and founders are trained, raises another worrying question – not so much ‘Too big to care?’ as ‘Too big to cope?’

We are, at least, finally close to getting an answer to that second question, but, as the Congressman said: “It will have to be very smart.”
