What does AI Safety really mean? Looking Ahead from the UK’s AI Safety Summit
Prasad Prabhakaran, Generative AI Practice Lead at esynergy, examines the pros and cons of regulating AI and the UK's AI Summit, in the wake of the EU AI Act.

The AI Safety Summit in November 2023 was the first the world had ever seen. The conference featured memorable moments such as UK Prime Minister Rishi Sunak interviewing Elon Musk, the major players in AI calling for more regulation, and a wave of debates and predictions around what AI will become in the coming years.

We are now in uncharted territory with this technology. At the Summit, Elon Musk labelled AI “the most disruptive force in history”. However, the ‘disruptiveness’ of AI has, ironically, bridged gaps between the different political, diplomatic and corporate parties in attendance – all united in seeking the best way to regulate something brand new and little understood, even by its creators. The most pressing issue these disparate stakeholders sought to address at the Summit was AI regulation. Should it be regulated? Can it be regulated?

What would effective regulation look like? It is crucial that we take our time to ensure that we are making the right decisions and putting in resilient safeguards at this critical time. The challenges that we continue to face in this area were mirrored by the AI Summit itself: it is incredibly difficult to regulate something so unknown and rapidly changing – especially while all stakeholders want regulation for different reasons. A small number of powerful companies control the lion’s share of the debate, and too often the people who will be affected the most by AI regulations are not invited to the conversation at all. 

Uncharted territories 

Delegates from the 28 nations in attendance at the summit grappled with what AI safety might mean across their different countries and cultures, culminating in the signing of the Bletchley Declaration. Widely lauded as a major diplomatic achievement, this agreement seeks to mitigate the risks of frontier AI models. It would work in tandem with the UK’s pledge to establish the first ‘testing facility’ to provide independent testing for AI models, including the parameters and data sets used to build them, to ensure that they are safe to use before being released to the public.

Unlike other technologies, which have developed more slowly and been much more thoroughly tested before being widely released, generative AI has been placed in the hands of the public with little to no background knowledge or control measures in place. Introducing more rigorous pre-deployment testing is a vital step in safeguarding against the risks that these machine-learning models potentially pose. 

For the many, in the hands of the few

At first glance, the push for regulation from tech giants such as Google, Microsoft, Meta and OpenAI seems to broadly align with the goals of world governments – though for some, companies asking to be regulated may come as a surprise.

While regulations and legal limitations may risk stifling innovation, growth potential and even profitability for these companies, the likes of OpenAI and Google seem to have a different primary concern: the open-source question.

Open-source machine-learning models cannot be tracked in the same way as closed-source ones can, as users do not need to make an account to access the software. Therefore, it is much more difficult to track activity back to a specific individual in the event of misuse. However, open-source models also have the potential to turbo-charge the speed and potential of innovation by enabling free collaboration and knowledge-sharing. 

For the private-sector tech giants, this poses a risk to their position at the top of the industry, where they largely control the technology and the profits that come from it. It is unsurprising, therefore, that they have a vested interest in regulating open-source AI systems – though for very different reasons than the governments calling for the same safeguards.

The forgotten majority

Teachers, shop assistants, administrators – those with ‘everyday’ jobs and not involved in the tech sector – were notably left out of the Summit. Ironically, this larger section of society will likely feel the impact of AI on their day-to-day lives the most: advanced machine-learning will mean that administrative tasks are increasingly automated, school lessons will be both supplemented and undermined by chatbots, and shelf-stacking in shops could be done by robots. As Musk put it, “AI will put an end to work”. For some, this may be an exciting prospect, but for many, potential job losses in an already fraught labour market, amid an ongoing cost-of-living crisis, are a terrifying and depressing prospect.

National security, innovation potential and the debate between open and closed source models are all vital topics for discussion. However, the majority of the population must not be forgotten – as in many ways they will be impacted the most. Representatives from industries which are not directly involved in the building of these technological products and solutions must also be invited to the table to discuss AI safety and regulation, so that their concerns can be addressed and their voices heard. Governments and tech companies are not the only stakeholders in this case, and it is only by incorporating all perspectives that regulations will be effective. 

The year ahead

The AI Safety Summit is now set to take place every year, along with a smaller interim event in the first half of 2024. Large organisations are looking to events like these and to international regulation to help guide them on their path to an AI-enabled future. Whilst many of the leaders I speak to are still hesitant about applying AI technologies on a large scale in their business, I have seen widespread agreement that now is the time for further experimentation to find the true capabilities of this technology.

By identifying very specific business use cases and taking the first steps, we will be able to make sure that the output of AI is more reliable and less biased than what we are currently seeing. From there, trust in AI will grow, opening up further opportunities which will see these technologies deliver on the lofty ambitions they promise, in a controlled and safe manner.      

Future events also mean that there will be opportunities to reconsider who is invited, and to ensure that delegates provide a more accurate representation of society. To achieve truly robust and effective regulation, diplomatic collaboration must take place across all stakeholder groups, including the general public. If we do not listen to those whose daily lives will be most drastically changed by the technology, and acknowledge the differing motivations at play, discussions around the safe use of AI will always ring hollow.

The AI Safety Summit was a groundbreaking event. By continuing these conversations, we can learn together and drive towards the most positive outcomes from this incredible technology.

About the author: Prasad Prabhakaran, Generative AI Practice Lead at esynergy, is a results-driven Product Manager with over 23 years of experience building and managing data and technology products. Throughout his career, he has been dedicated to driving product market fit, fostering community engagement, and leveraging emerging technologies to create value for users and businesses.
