It was a relief to see Keir Starmer saying last week that his government recognises
the basic principle that publishers should have control over, and be able to seek payment for, their work, including when thinking about the role of AI.
It’s a departure from the confused approach to AI regulation that typified the Sunak government. It also seemed to contradict a position suggested in a UK Government consultation document just a few weeks before.
It was also good to see the Prime Minister saying in the same article that his government would use the Digital Markets, Competition and Consumers Act to
rebalance the relationship between online platforms and those, such as publishers, who rely on them.
Regulation of Big Tech matters.
In July I attended the Tony Blair Institute (TBI) conference. While it was great, four days after the General Election, to catch up with old friends as we celebrated Labour’s success, it was something of an AI hypefest, as the picture above shows. The journalist William Cullerne Bown has a good take on the Blair tech project.
Let me say at the outset, I am an optimist about technology. But I am not uncritical, and I am not a techno-optimist, which is an ideological position. At the TBI conference, it was clear that ‘AI’ had now assumed the status that ‘digital’ had in the early to mid nineties. It’s become a cult phrase - an example of magical thinking. Wes Streeting was right to point to the benefits of artificial intelligence approaches in diagnostics and drug discovery. But as he also pointed out, the NHS currently runs on pagers and fax machines. And that’s not just in England. It’s only four years ago that I was asked to send a fax to confirm an X-Ray appointment for my mother. I haven’t owned a fax machine since about 1999. As Hetan Shah writes:
It hardly needs saying that public services are in a poor state, and this is especially true in computer systems. A few years ago, thousands of NHS computers were found to be running Windows XP, which was supposed to be phased out in 2014. Datasets across government could be linked more effectively; the underlying infrastructure will need overhauling if any benefits are to be realised. If it takes the hype of AI to give political cover to this kind of important investment in basic IT and data systems, then so be it.
So let’s talk more intelligently about AI, and the technological needs of our public services, and get beyond the Broligarchical incantations which have resulted in two narratives - one of boom, and the other of doom.
Let's start by talking about what is artificial and what is intelligent about AI.
Kate Crawford says in her book The Atlas of AI:
AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labour, infrastructures, logistics, histories and classification. AI systems are not autonomous, rational, or able to discern anything without extensive, computationally intensive training with large datasets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures. And due to the capital required to build AI at scale and the ways of seeing that it optimizes, AI systems are ultimately destined to serve existing dominant interests.
Or more simply, as the Financial Times journalist Gillian Tett said recently:
we tend to talk about the internet and AI as if they were a purely disembodied thing (like a “cloud”). As a consequence, politicians and voters often overlook the unglamorous physical infrastructure that makes this “thing” work, such as data centres, power lines and undersea cables. But this oft-ignored hardware is essential to the operation of our modern digital economy, and we urgently need to pay it more respect and attention.
I've been thinking about these things as a lay-person since 2016, when I started to get interested in social media algorithms. And I started thinking about how politicians with little technical training could get to understand these processes and think about appropriate frameworks for regulation that wouldn't stifle their development, but would address imbalances of power.
I would say that in the UK, up to about 2020, there was a discursive process around AI policy development that weighed the potential benefits of AI against its potential harms. It identified a range of algorithmic harms alongside the benefits: discrimination, corporate law-breaking, manipulation, propaganda, brand contamination, and machine learning algorithms too complicated for humans to understand. There was also clear evidence of a global regulatory turn against Big Tech between 2017 and 2020 which targeted its business model of data capture. ‘Techlash’ was the FT’s word of the year in 2018. In recent years, however, that focus has dissipated.
Essentially, there were two causes.
The UK, like other developed countries, had a growth problem after Austerity and Covid, compounded in the UK's case by Brexit.
Then came the launch of ChatGPT as a consumer form of AI - ‘A Sputnik moment for humanity’ said the UK Science Secretary at the time.
Of course, during the pandemic, Big Tech had made itself useful through its data analysis and processing, its distribution of information, and the way it dealt with misinformation at the time. Big Tech subsequently positioned AI as a potential path to growth, with the UK as one of the world's leaders in the field. Part of their argument, of course, was that only Big Tech can deliver AI at scale, so Big Tech must be part of the solution.
Looking back at the last couple of years, it's very clear that what has emerged is a binary narrative of boosterism and doomsterism. Boosterism treats many important problems as tame, while doomsterism focuses on the admittedly critical problem of potential existential risk. But attention to the full spectrum of problems has been lost, and the evidence that had been gathering up to 2020 of a potential regulatory challenge to Big Tech may have evaporated.
The Doomster narrative says AI will kill us all. And it's clearly one of the harms that has to be weighed by public leadership. As the former chief executive of the UK National Cyber Security Centre, Professor Ciaran Martin, says, even the wildest optimist would concede that some type of monitoring of existential risk is necessary.
Under Rishi Sunak, the UK hosted the first global AI safety summit in 2023. And some of the leading scientists working in the field have expressed their concerns about the potential of artificial intelligence to destroy or subjugate humanity.
They've been echoed by historians and philosophers as well.
But there's also a narrative of boom set against the narrative of doom. People suggest that AI has the potential to cure all diseases, that it can help farmers in Africa increase the productivity of their crops, and much else besides. And no doubt forms of artificial intelligence will be used as part of technologies to advance humanity.
But as a number of experts have warned, such as Neil Lawrence, the professor of machine learning at Cambridge, and the Nobel prize-winning economist Daron Acemoglu, AI won't live up to the hype. And these sentiments have been reinforced by a recent publication from the investment bank Goldman Sachs.
Professor Dave Karpf said recently that the narrative around AI is ‘textbook Silicon Valley mythmaking’.
One of the challenges that wasn't anticipated in my earlier research is the threat posed by artificial intelligence machinery to the climate and the environment, in terms of both energy and water consumption. See this from Morgan Stanley. There has been a rash of recent media articles showing how data centres are straining electricity systems, and how Big Tech companies are processing more and more data and therefore consuming more and more electricity, and indeed water. There have been protests in some countries, notably Chile, against data centres' water consumption. Oh, and then there’s concrete.
Another issue which I didn't anticipate in my early research, and one we've really only seen since the public launch of ChatGPT and its competitors, is the tendency of large language models to hallucinate: producing results which bear no resemblance to reality, inventing things that haven't happened, inventing academic articles, and so on. I'm not going to focus so much on that here.
Today, Big Tech leaders tend to be incredibly optimistic about artificial intelligence and the climate challenge. Former Google executive Eric Schmidt said recently that we're not going to hit the climate goals anyway, so he'd rather bet on AI solving the problem.
Ilya Sutskever, one of the founders of OpenAI, said in iHuman, a television documentary you can still watch on BBC iPlayer: ‘I think it's pretty likely that the surface of the earth will be covered with solar panels and data centres.’
I’ve got to say, this is not a vision that gets me up early in the morning.
So there are people, including Bill Gates, who believe AI can help meet climate goals despite its energy drain. There are other very significant tech investors, like Roger McNamee, who say that the energy and water consumed by AI are putting unsustainable pressure on the power grid and on clean water supplies. They talk about a hype machine which is generating more and more investment but failing to demonstrate really valuable uses of the technology, which is essentially what Goldman Sachs is saying as well. Recent research on the climate suggests that the Big Tech moguls who believe AI will solve the problem are ignoring the fact that some aspects of climate change are irreversible.
Another issue that has blown up since the launch of ChatGPT and its rivals is copyright theft, with record companies, authors, and media companies in general complaining about the ways in which Big Tech companies have been using, or trying to use, their cultural and creative output to train AI models. Even Mumsnet has taken action on this. It is also clear, however, that many publishers have now struck content licensing deals with AI companies.
And there's no question that AI companies such as OpenAI have been lobbying for a relaxation of laws on copyright to train their models.
There are of course also privacy issues. Facebook and Xitter want to train their AI machinery on our data, on what we say and do on their platforms. LinkedIn may be going the same way.
I've had correspondence with the Information Commissioner's Office about this.
They say they are aware of these issues, but it's not entirely clear what they are doing about it. There's no question that X, for example, didn't even inform consumers that it was planning to train its AI machinery on our data.
Lurking in the background, too, are significant competition issues. Big Tech's dominance in the field of data is enabling it to branch out into the development of artificial intelligence machinery. In the UK, for example, the Competition and Markets Authority has been looking at the relationship between Microsoft and OpenAI. So the big question arises again: is Big Tech stifling innovation?
Whistleblowers have also raised concerns that some of these AI companies have refused to let staff talk about safety issues.
There is also quite a lot of hype around the number of jobs that will be created by AI projects. An initial announcement from Labour suggested there would be thousands employed in a new AI project in Northumberland; it now looks more likely to be around 300 jobs.
The speculation by AI companies sometimes verges on super-hype.
OpenAI, in one of its prospectuses for investors, says ‘it may be difficult to know what role money will play in a post-AGI world’ - that is, a world after we have artificial general intelligence.
OpenAI stated early on to investors that they could lose their entire capital contribution and not see any return. It would be wise, it said, to view any investment in OpenAI ‘in the spirit of a donation’.
The late cultural thinker Mark Fisher once noted that the phrase ‘it is easier to imagine an end to the world than an end to capitalism’ - attributed at different times to both the late cultural critic Fredric Jameson and the philosopher Slavoj Žižek - encompassed the essence of ‘capitalist realism’.
Well, OpenAI actually does dare to envisage the end of capitalism.
This brings me back to regulation. We know that Big Tech companies present a regulatory challenge that is virtually unprecedented. We know their power is discursive as well as material.
We know that regulation is a social, incremental and multifaceted process involving many actors, including governments, regulators, legislators, other companies such as competitors and those whose business has been damaged by Big Tech, the media, the wider public, civil society organisations and employees of Big Tech companies themselves.
In the UK, up to about 2020, we had evidence of a regulatory turn against Big Tech which targeted its business model of data capture, returning political economy to the debate.
This turn called for ‘pro-competition’ regulation, as suggested by the report of the committee of experts chaired by the economist and former Obama advisor Jason Furman. Now, that pro-competition regulation may itself have been relatively weak, but at least it led to a focus on competition and on Big Tech's threat to it.
But there was subsequently a discursive shift during Rishi Sunak's premiership. The growth agenda under Sunak entailed a shift of language from ‘pro-competition’ to ‘pro-innovation’ regulation. It echoed, in essence, the Silicon Valley nostrum that regulation is a threat to innovation.
There was a strategic steer given to the Competition and Markets Authority and other regulators about how they needed to emphasise a pro-growth agenda. There was a weakening of data protection.
Inserted into the Bletchley Declaration at Rishi Sunak's AI Safety Summit was a focus on ‘pro-innovation’ governance of AI. Sunak himself said the UK's answer is not to rush to regulate:
‘How can we write laws that make sense for something we don't yet fully understand?’
Well, the truth of it, of course, is that there is regulation already in place for forms of artificial intelligence.
For example, high frequency trading algorithms are regulated.
Under the Sunak government, there was no sign of even the milder suggestion that Big Tech could be an obstacle to innovation.
Indeed, Big Tech's involvement in the pursuit of artificial intelligence, and artificial general intelligence, was taken for granted.
It was evident that Big Tech had been lobbying to water down the pro-competition regime suggested by Jason Furman and his collaborators. Furman and his committee complained to the Prime Minister, Rishi Sunak, about the impact of this on the Digital Markets, Competition and Consumers Bill.
The reality is that regulation is not a choice between regulating and not regulating, or between regulation and innovation. Certain aspects of artificial intelligence are already regulated, not least in respect of the financial system, data protection and, increasingly, social media. Regulation is already part of the artificial intelligence landscape in the UK.
Rishi Sunak's own objections to regulation of AI were arguably contradicted by one of his own ministers in the House of Lords, who had this to say:
We have very large areas of law and regulation to which all AI is subject. That includes data protection, human rights legislation, competition law, equalities law and many other laws. On top of that, we have the recently created central AI risk function, whose role is to identify risks appearing on the horizon, or indeed cross-cutting AI risks, to take that forward.
As I have said before, regulation is a process, not a text.
There were some good things that were done under the Sunak government.
The creation of the AI Safety Institute was one of those. So, of course, was the creation of ARIA, the UK's Advanced Research and Invention Agency.
But let's hope that under the new government we can have a more balanced discussion of these issues, which doesn't counterpose innovation and regulation in a false binary divide.
Of course, now we know the result of the US Presidential election, our ability in the UK to stop the Broligarchs - and others - burning up the planet is pretty much zero.
Welcome to Ukania.