On the whole, I think the new Labour government has made a reasonable start on regulation. Labour has said clearly that it intends to have a regulatory framework for artificial intelligence, in contrast to the confused and often sycophantic approach of Rishi Sunak’s government. The main social media companies were called in by the new Science and Technology Secretary during the riots and had their responsibilities made clear. There has been some uneven media coverage of what the Online Safety Act will actually mean once implemented, and very little coverage of the other key drivers of social media regulation, such as data protection. I will return to AI regulation specifically in a future post.
New technologies are not born into a year zero without existing regulations. They are shaped by the norms, laws and expectations of actually existing countries with particular political economies, forms of regulation and legislation. The technology may nevertheless produce challenges for existing laws and regulations, not all of which become obvious as soon as it is deployed. But, as the Welsh cultural critic Raymond Williams said long ago, ‘the moment of any new technology is a moment of choice’. Sometimes societies act swiftly to prevent threats emerging, as happened when Facebook wanted to launch a crypto-currency. Then, the Governor of the Bank of England, Mark Carney, issued a stark warning on behalf of central bankers worldwide:
unlike in social media, for which standards and regulations are only now being developed after the technologies have been adopted by billions of users, the terms of engagement for any new systemic private payments system must be in force well in advance of any launch.
Often, however, the apparently benign nature of new technologies means that it is some time before effective regulation takes place. So it was with social media, where the problems of data accumulation, hate speech, disinformation and misinformation, and algorithmic amplification geared to provoking more clicks and more recommendations became clear to policy-makers only slowly, even if researchers had been aware of them for a while. Often, too, the complexity of the dynamics of new technologies initially escapes attention. On occasion, existing legal fields - such as competition policy - lack concepts for addressing potential harms, as in the context of data concentration, where data was gathered by services which were ostensibly free apart from their consumption of users’ data and time or attention. Regulators, governments and international institutions published major reports as they developed their understanding and strengthened their discursive capacity around these issues. The danger is that new governments have to start up the learning curve again, and new parliamentary committees have to go through a similar process.
I am not a lawyer, but I have been in and around discussions on media and social media policy and regulation for 35 years. Over the last eight years, I have published occasionally on these themes. Most recently, I have a chapter out in this book, which builds on my previous Working Paper for the UK Centre for Regulating the Digital Economy. My main interest has been in advertiser-funded platforms such as Google and Facebook, although I have looked at ‘Big Tech’ platform power more generally. I follow the House of Lords Select Committee on Artificial Intelligence in defining ‘Big Tech’ as companies ‘who have built business models partially, or largely, focused on the “aggregation of data and provision of cloud services”’. I will give a summary of the argument here, then set out one or two of the challenges ahead.
It was never entirely true that social media companies, or search engine companies, or app store owners, or digital marketplaces, were unregulated: they depended on a system of regulation (company law, intellectual property law, and financial services regulation) which guaranteed for them and their founders the rights to enjoy their profits in perpetuity. They weren’t unregulated: they were under-regulated. But the myth of the absence of regulation has persisted.
We cannot ignore the peculiar retreat from regulation in the early days of the public internet. Internet governance theorists frequently relied on the notion that the internet was, and should be, ungovernable. In practice, the absence of regulation or international treaties meant that the rules of internet governance were imposed by the most powerful actors, whether authoritarian states or, in democracies, private corporations. Additionally, the United States sought to export its then anti-regulatory environment internationally through trade treaties designed to exclude platforms from data and other laws. With the bulk of key platform companies originating in the United States, American discourses on speech (including paid advertising), data, competition policy, regulation and innovation have been able to dominate, with stronger pushback coming from Europe in recent years. Thankfully, under the Biden administration, there has been a tougher domestic response in the US, both judicially and through regulation.
There are at least seven ways in which the power of the platforms challenged governments and regulators in their evidence-collecting exercises over recent decades. First, the provision of ostensibly free services to consumers evaded the dominant orthodoxy in competition policy, the consumer welfare standard: only latterly was it understood that the mining of personal data was a price consumers paid for access to services. Second, the accumulation of data was not originally understood as central to the operations of the largest platform companies, and certain acquisitions, notably Facebook’s acquisition of Instagram, were allowed through without effective scrutiny. Third, the scale of the platform companies, the speed of their growth and their internal secrecy, reinforced by the rewards systems for their staff, meant that there was significant information asymmetry between them and state authorities, so that private deals which might have engaged the attention of competition authorities and regulators did not come to light until recent court cases. Fourth, the companies’ own models, including the algorithms which shaped search, social media newsfeeds and programmatic advertising, were immensely complicated and not easy for outsiders to understand, requiring immense efforts to build state regulatory and discursive capacity in new spheres. Fifth, the US base of the key platform companies – Facebook, Google (including YouTube), Amazon, Apple – and the robustness of US policy in seeking to export US deregulatory norms raised questions about the territorial legitimacy of the regulatory space. Sixth, the platforms themselves changed as entities over time, in their business operations, their acquisitions and entry into new markets, their algorithms and their policies. Seventh, the companies’ power to buy up expertise and to recruit staff from regulatory bodies and government departments, supporting extraordinary and well-funded lobbying operations and aggressive litigation of actual or proposed regulatory and judicial decisions, was designed to out-manoeuvre regulators, challenge the evidential base and ultimately delay sanctions.
The strategic market power of the Big Tech companies is now more widely understood, and regulators have a more granular understanding of their business models, of what these mean for the economic, social and political power of platforms, and of how their structural dominance poses regulatory challenges on a bigger scale. This was captured in the conclusions of the UK’s Furman review, which accepted that platforms shared some of the features of natural monopolies, given that the scale of the data they collect and the centrality of that data to their business models were unique. There was clear concentration in certain markets such as search, social media and online advertising, and incumbents had a significant data advantage which acted as a barrier to entry and was therefore likely to lead to unchallenged, persistent dominance. Indeed, that data advantage has more recently been key to their involvement in the development of AI.
As Philip Schlesinger notes, recent years have seen a ‘regulatory turn’. This has played out across a number of jurisdictions, including Australia, Canada, the European Union (with national-level legislation in member states including France, Germany and Ireland), India, the United Kingdom and the United States. While each jurisdiction is at a different phase of regulatory development and intervention, many of the issues concerned are similar, and there has been significant cooperation, and in some cases coordination, between legislatures, governments and regulators. The driver has been the increasing evidence of platform misbehaviour which demands policy action. The response has been the re-assembling of state capacity in many jurisdictions to understand the new challenges that the platforms provoke.
There is both public pressure and market pressure to regulate, and this increasingly forms part of the evidence on which regulators and governments seek to act. Public pressure is magnified by media coverage: as just one example, the UK Information Commissioner’s investigation into Cambridge Analytica was explicitly triggered by reports in the UK’s Observer newspaper. Market pressure has been seen in campaigns by advertiser and media interests, as well as in deliberate policy interventions by competitors, such as Apple’s changes to its privacy controls, which had an adverse impact on Facebook of some $10 billion. Judicial proceedings have been a form of corrective action and have exposed company practices that appear anti-competitive. Company whistle-blowers have become more courageous, more visible and more vocal, with Facebook whistle-blower Frances Haugen name-checked in President Biden’s 2022 State of the Union address. They have provided further evidence of relevance to regulators and law-makers.
Although ‘online harms’ often attract the bulk of media attention, the new challenges to platform power up to 2020 came principally from the intersection of data protection policy and competition policy. Most recently, national security issues have come to the fore, not least in the context of Russia’s invasion of Ukraine, although these were evident earlier in relation to Russian interference in elections and referendums, and in the response of the ‘Five Eyes’ governments to Facebook’s plans for full encryption of digital messaging and the potential risk this posed to the policing of terrorism and child abuse.
It is only in recent years that regulators have come to understand how these issues are interdependent, and they have had to develop a more sophisticated understanding of these interactions. The economic perspective has restored political economy to the heart of discussions about platforms, rather than a narrow focus on measures to address harms through platform liability for harmful and illegal content alone. Ofcom, for example, argued that market failures can contribute to consumer and societal harms across competition, consumer protection, data protection, cyber security, media policy, content policy and public health, and that interventions can benefit from tackling several market failures at once, tackling several harms at once, and effectively ‘trading off’ specific harms. Platforms have an incentive to maintain attention and accumulate personal data, and this logic drives their approaches to privacy, content, diversity of exposure, addiction, data analytics and algorithms, personalisation of advertising, information asymmetry and behavioural biases. Ofcom’s 2019 economic analysis identified ‘complex interactions’ which require cooperation between regulators.
Law, as Ariel Ezrachi says in the context of competition regulation, is a social construct. Karl Weick’s description of the making of law as a sense-making act is particularly helpful:
when people enact laws, they take undefined space, time and action, draw lines, establish categories, and coin labels that create new features of the environment that did not exist before.
One retired UK law-maker, the former Labour MP Ian Lucas, set out in detail in his book Digital Gangsters how this process of sense-making unfolded for him as a member of the Digital, Culture, Media and Sport Select Committee after the Cambridge Analytica scandal. This was an extraordinary Select Committee inquiry which produced revelatory evidence and powerful reports, ably chaired by the Conservative MP Damian Collins. As I have said before, Facebook boss Mark Zuckerberg avoided visiting the UK during the period of this inquiry, though he was happy to visit Brussels and Dublin.
We need to conceive of regulation not as the passage of a defined piece of legislation or set of regulatory rules but as a process. It commences with sense-making around a specific problem area through evidence-gathering; proceeds through political struggle or discursive interaction between public, industry, regulatory and legislative actors (both in legislatures and in governments); the development and passage of the necessary legislation; the implementation, again with likely further discursive interaction, of regulatory rules and administrative arrangements; and decision-making on particular regulatory breaches and the imposition of remedies; and it ends either with the acceptance and adoption of those decisions or with appeal and/or litigation against them. It should be noted here that major ‘Big Tech’ companies, notably Facebook and Google, regularly, if not routinely, litigate regulatory decisions against them.
That is where we are in the UK at the present time. Regulation is a process. The process is unfinished. Ofcom has yet to implement key sections of the Online Safety Act. Even when it does, there will be resistance from Big Tech companies. Meanwhile, the Information Commissioner is waiting for a response from X on its use of user data to train its AI model Grok. The Sunak government, with its naive AI obsession and its indulgence of Musk, took its foot off the regulatory brake when it came to Big Tech. In that period, governmental rhetoric shifted from the ‘pro-competition’ approach of the Furman Report, which recognised that Big Tech companies could be obstacles to competition and innovation, to a so-called ‘pro-innovation’ approach which sounded much more like the ‘permissive innovation’ demanded by some in Silicon Valley, and which runs counter to the tradition of the precautionary principle.
Labour has the opportunity to re-set this agenda on the back of public concern about the role of Big Tech in the recent riots. It will need to be tough and to face down Big Tech bosses. It will need to look at these issues in a comprehensive and integrated way, and understand how online harms are the product of a wider political economy which has allowed data accumulation and abuse to thrive. It will need to re-assert the spirit of the tech-lash that was evident before the pandemic. New technologies can be highly beneficial. But they must benefit the public, not the Big Tech ‘broligarchs’. These political moments do not come around often.