
Data access and the Online Safety Bill

The Online Safety Bill, which the government says delivers its ‘manifesto commitment to make the UK the safest place in the world to be online while defending free expression’, is back in Parliament after some revision and delay. Having passed the House of Commons, it is now at Committee Stage in the House of Lords, where peers will examine the Bill line by line. It must then clear the remaining legislative stages:

  • report stage (where the Lords consider any further amendments)
  • third reading (where the Lords will debate and vote)
  • ping pong (where the Commons considers any amendments made by the Lords, the Lords then considers any changes made by the Commons, and so on) and
  • Royal Assent, where the Bill becomes an Act and part of the law.

It must receive Royal Assent before the end of this parliamentary session in autumn 2023, otherwise the Bill will fall. The Bill was first introduced to the Commons back in March 2022; it was first published in draft in May 2021, following extensive consultation (when it was known as ‘Online Harms’); and the idea can be traced back even further, at least to Karen Bradley’s 2016-18 tenure at DCMS under Theresa May, since when there have been three more prime ministers, six more secretaries of state and two responsible departments.

Most of the discussion (and controversy) has focused on protecting children online, freedom of expression (including ‘legal but harmful’ content), the regulation of content rather than systems and platforms, criminal penalties for executives at big tech companies, and many other subjects in what has sometimes been referred to as a ‘Christmas tree’ or ‘kitchen sink’ bill, with an ever-expanding range of issues being included (from consumer protection to outlawing content about Channel boat crossings). But what about data?

Researcher access and the evidence base

Where data has made an appearance in the debate, it has been in the context of recommending that government use data to better understand online harms, and that such data be made more available to external researchers. For example, in its 2022 annual report, the fact-checking organisation Full Fact called for Ofcom to be given a remit for researching, understanding and publishing information about the harms caused by misinformation and disinformation online. It noted several initiatives already underway (including the Online Safety Data Initiative, convened by the Centre for Data Ethics and Innovation, and other DCMS-funded research), and made specific recommendations about a proposed committee on disinformation and misinformation (to oversee Ofcom’s research, and for Ofcom to establish a panel to understand citizens’ views on online harms). It also floated the idea of an independent evidence centre – perhaps based on the models provided by the government’s network of What Works Centres and similar organisations, like the Economic Statistics Centre of Excellence (ESCoE).

Government itself has acknowledged a lack of data and metrics on data and digital issues in general (for example, in its monitoring and evaluation framework for the National Data Strategy). Several organisations – including the Ada Lovelace Institute, Demos, Digital Action, Doteveryone and Reset, as well as Full Fact – have called for a ‘wider ecosystem of inspection’, in which researchers at academic and other research institutions have greater access to data from platforms. The head of Ofcom has also called for the provisions around independent researcher access to be strengthened. These calls come in the context of tech companies restricting external researchers’ access in recent years (Facebook, for example, and Twitter announcing just last week that it will charge for API access), and of the EU’s Digital Services Act making stronger transparency provision for accredited researchers. The Online Safety Bill merely requires that Ofcom produce a report describing the extent of access for those carrying out independent research, exploring the issues constraining access, and assessing the extent to which greater access might be achieved – rather than doing anything to provide greater access. Outside the UK, Brookings has criticised the lack of data availability mandated by the Bill.

A related area is the nature of the transparency reports that Ofcom requires from certain online service providers. Schedule 8 of the Bill lists the ‘matters about which information may be required’ of companies providing user-to-user services (like social media platforms), such as the incidence of illegal content and of users encountering it. This appears to be higher-level, statistical data compared to the more granular data sought by researchers on an ongoing basis; making it usable and comparable would require it to be published consistently, openly and to a high quality, but no detail is specified. Nonetheless, publishing these higher-level statistics should be more straightforward – legally, technologically and practically – than sharing the more granular data about individual harms that would help regulators and researchers.
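To make the consistency point concrete, here is a minimal sketch (in Python) of what publishing such a summary in a consistent, open, machine-readable form might look like. Every field name and figure is an assumption for illustration only; Schedule 8 lists the matters Ofcom may require information about but prescribes no schema or format.

```python
import csv
import json
from dataclasses import dataclass, asdict

@dataclass
class TransparencySummary:
    """Headline figures for one service over one reporting period.

    Field names are illustrative only: Schedule 8 lists the matters
    Ofcom may ask about but specifies no schema."""
    service: str
    period_start: str            # ISO 8601 date, e.g. "2023-01-01"
    period_end: str
    illegal_content_items: int   # items of illegal content identified
    users_encountering: int      # users who encountered such content
    items_removed: int           # items taken down following moderation

def publish(summaries: list[TransparencySummary]) -> None:
    """Publish the same figures in two open formats so they can be
    compared across services and reporting periods."""
    records = [asdict(s) for s in summaries]
    with open("transparency_summary.json", "w") as f:
        json.dump(records, f, indent=2)
    with open("transparency_summary.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(records[0]))
        writer.writeheader()
        writer.writerows(records)

publish([TransparencySummary("example-platform", "2023-01-01", "2023-03-31",
                             illegal_content_items=1200,
                             users_encountering=45000,
                             items_removed=1100)])
```

The value of fixing even a simple schema like this is comparability: the same figures, published the same way, by every service in scope.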
Social media companies may be wary about sharing personal data with third parties, given controversies like Facebook’s relationship with Cambridge Analytica. The ODI’s work on data intermediaries and data institutions, though, suggests that it is possible to develop a body that would allow such data to be shared on a trusted basis for the purposes of research and regulation – the Center for Democracy and Technology in the US has recently made sensible recommendations on this, too. Requiring social media companies to make data available in this way may seem novel, but it also follows in a long tradition: the Companies Act – whose predecessors go back to 1862 – requires companies to file public accounts. Decisions to make data available should not always be solely in the gift of the organisation that collected it – there are democratic and societal considerations which should rightly be the subject of public debate.
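As a toy illustration of the gatekeeping role such an intermediary might play, here is a hypothetical Python sketch in which only accredited researchers can obtain extracts. Everything in it – names, accreditation mechanics, the data itself – is invented for illustration, not drawn from any real scheme.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Researcher:
    name: str
    institution: str
    accredited: bool   # e.g. vetted by the intermediary or a regulator

class DataIntermediary:
    """Toy data institution brokering platform data for research.

    Hypothetical throughout: it shows only the gatekeeping idea.
    Aggregation thresholds, auditing and the legal agreements a real
    institution would rest on are all elided."""

    def __init__(self, datasets: dict[str, list[dict]]):
        self._datasets = datasets

    def request(self, researcher: Researcher, dataset: str) -> list[dict]:
        if not researcher.accredited:
            raise PermissionError("access is limited to accredited researchers")
        # A real institution would return aggregated or otherwise
        # privacy-protected extracts under a data-sharing agreement.
        return self._datasets[dataset]

broker = DataIntermediary({"harms_reports": [{"category": "self-harm", "count": 310}]})
alice = Researcher("Alice", "Example University", accredited=True)
print(broker.request(alice, "harms_reports"))
```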

Coroner access to data

One of the highest-profile cases in the online safety debate is that of Molly Russell, a 14-year-old girl who killed herself in 2017. The 2022 inquest into her death found that online content contributed ‘in a more than minimal way’ to her death. Her father, Ian Russell, recently told BBC Newsnight (from 41:46) that most of the delay behind the inquest was due to social media platforms not supplying data that would have allowed the investigation to understand his daughter’s habits in her digital life. Baroness Kidron has tabled an amendment to the Online Safety Bill in the House of Lords which would compel platforms to provide such data – the government has said it will work to address this problem (whether as part of the Bill or in separate legislation). This discussion slots into wider ones about death in the digital age – what happens to our profiles and data when we die? – and about striking a balance between maintaining children’s rights (including privacy) and ensuring their safety online, which has been at the heart of controversy about the Bill.

Algorithmic transparency

Related to data about online harms and researcher access is the specific question of the algorithms that service providers use to moderate and recommend content. Several civil society organisations have criticised the Bill for not providing sufficient access to information – for example, to allow them to monitor algorithms on an ongoing basis. Recommendations for improving this have included assessing Ofcom’s audit powers to ensure they are sufficient (and comparable to those of regulators in other fields), requiring large and high-risk service providers to commission annual third-party audits of their algorithms, and giving Ofcom an explicit power to undertake its own audits. The UK government is currently piloting an algorithmic transparency standard within the public sector, building on examples elsewhere.
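To give a feel for what such structured disclosure could look like, here is a minimal Python sketch of a transparency record for a single algorithmic system. The fields are assumptions loosely inspired by the public-sector standard, not that standard’s actual schema, and the example system is invented.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AlgorithmicTransparencyRecord:
    """Illustrative disclosure for one algorithmic system.

    Fields are assumptions loosely inspired by the UK's public-sector
    algorithmic transparency standard, not its actual schema."""
    system_name: str
    operator: str
    purpose: str                 # what the system does and why
    data_sources: list[str]      # categories of input data
    human_oversight: str         # how people can review or override outputs
    last_external_audit: Optional[str] = None  # date of latest third-party audit

record = AlgorithmicTransparencyRecord(
    system_name="feed-recommender",
    operator="example-platform",
    purpose="Ranks posts in users' home feeds",
    data_sources=["engagement history", "content metadata"],
    human_oversight="Moderators can review and demote flagged recommendations",
    last_external_audit="2022-11-30",
)
print(record)
```

Even a skeleton like this shows why an explicit audit power matters: the record is only as trustworthy as the regulator’s ability to verify it.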

Other areas

  • Skills and literacy: Overseeing and auditing activity in these new fields will require regulators to upskill. Ofcom has grown to accommodate its expected new powers under the Bill, but government and others also need to consider how to increase data literacy among other government organisations in the regulatory ecosystem, among campaigners and civil society, and for the public in general.
  • Scrapable data: More generally, there has been discussion about how data from social media platforms should be used in public interest research, for example through the automated collection or scraping of data from such sites (a sketch of what respectful collection might involve follows this list).
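Below is a minimal, hypothetical Python sketch of what a ‘polite’ research scraper might look like, checking a site’s robots.txt and rate-limiting its requests. It illustrates good practice under assumed conventions, not anything the Bill requires; real public-interest scraping would also need to account for platforms’ terms of service and data protection law.

```python
import time
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

import requests  # third-party: pip install requests

def polite_fetch(urls, user_agent="research-bot/0.1", delay_seconds=2.0):
    """Fetch pages for research, honouring robots.txt and rate limits.

    A sketch only: terms of service and data protection law also apply."""
    parsers = {}   # cache one robots.txt parser per site
    pages = []
    for url in urls:
        parts = urlsplit(url)
        base = f"{parts.scheme}://{parts.netloc}"
        if base not in parsers:
            parser = RobotFileParser(f"{base}/robots.txt")
            parser.read()
            parsers[base] = parser
        if not parsers[base].can_fetch(user_agent, url):
            continue  # the site disallows this path for our user agent
        response = requests.get(url, headers={"User-Agent": user_agent}, timeout=10)
        if response.ok:
            pages.append((url, response.text))
        time.sleep(delay_seconds)  # be gentle with the server
    return pages
```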

The ODI will continue to monitor the passage of the Online Safety Bill and the debates around internet safety in general, understanding what the consequences might be for data and data rights, and where new data institutions could play a valuable role in building a better future, online and offline.