Hollywood, Business, and the Impact of AI: Mission Very Much Possible

With everyone from film producers to parliamentarians expressing concerns about the impact of artificial intelligence, regulation of the technology seems imminent. However, with governments still trying to determine what form that regulation should take, Commercial and Technology Partner James Teare explains why it’s essential for companies to put their own AI policies in place now to avoid leaving themselves open to challenges in the future.

Ever since the days of the Luddites, the development of new technology has generated both excitement among its advocates and concern among those who believe that it will undermine their social or professional well-being.

In recent years, it seems that anyone with a social media account or a passing interest in news bulletins has been unable to avoid debate about the consequences which so-called 'deepfake' videos might have for national elections or celebrities' reputations.

Nevertheless, it seems that - as is quite often the case - it has taken Hollywood's leading lights to bring the issue to mass attention.

A strike by some of the world's best-known actors about working conditions has underlined the creeping impact of artificial intelligence (AI) on the entertainment industry (https://news.sky.com/story/brian-cox-and-simon-pegg-among-british-stars-rallying-in-support-of-hollywood-strike-ai-is-taking-our-jobs-12925022).

The industrial action has coincided with similar worries voiced by less high-profile figures, including members of the House of Lords, one of whom last month expressed fears that he might soon be rendered obsolete by machines with “deeper knowledge, higher productivity and lower running costs” (https://www.theguardian.com/technology/2023/jul/24/house-of-lords-told-it-could-be-upgraded-with-peer-bots).

Anxieties have, perhaps unsurprisingly, also cascaded down to those in the UK's workforce without star billing or a generous parliamentary attendance allowance.

Driven home by research conducted by the likes of the investment bank Goldman Sachs - which concluded that AI "could replace the equivalent of 300 million full-time jobs" (https://www.bbc.com/news/technology-65102150) - such reports have even the most well-balanced individuals asking whether their careers might fall casualty to a march of the robots.

Even those sceptical about the prospect of a 'Terminator'-style global conflict resulting from errant computers (a scenario which has apparently been brainstormed by the US military - https://www.thenation.com/article/world/artificial-intelligence-us-military/) would accept that the technology raises serious questions.

With economic stability and world peace at stake, governments appreciate the importance of regulation.

Some have been involved in what could be described as a race to claim credit for taking the lead.

In April 2021, the European Union unveiled the Artificial Intelligence Act, which it described as "the world’s first comprehensive AI law" (https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence#:~:text=In%20April%202021%2C%20the%20European,mean%20more%20or%20less%20regulation).

It aimed to establish an overarching structure which individual nation states might then incorporate into their own domestic legislation, in much the same way as the General Data Protection Regulation (GDPR) has influenced attitudes to data protection.

However, the text of the EU's AI Act is still being refined, partly because, in its desire to be "comprehensive", it has had to take account of new and evolving AI technologies.

The UK, meanwhile, is playing host to what it has described as the first major global summit on AI safety this autumn (https://www.gov.uk/government/news/uk-to-host-first-global-summit-on-artificial-intelligence).

Prime Minister Rishi Sunak has spoken of this country's "global duty to ensure this technology is developed and adopted safely and responsibly".

Rather than employing the umbrella approach of the EU, the UK favours being more reactive, using existing legislation to address issues when and where they occur.

In addition, as with any pressing public policy matter, there are any number of organisations jostling to have their views represented.

They include the Ada Lovelace Institute, named after the 19th-century mathematician whose pioneering work on Charles Babbage's Analytical Engine led to her being regarded as the first computer programmer.

The Institute has called for regulation to build public confidence in AI - and hold individuals to account "when things go wrong" (https://www.adalovelaceinstitute.org/report/regulating-ai-in-the-uk/).

A foundation established by a former occupant of 10 Downing Street, Sir Tony Blair, has argued for regulation of what it describes as "a matter becoming so urgent and important that how we respond is more likely than anything else to determine Britain’s future" (https://www.institute.global/insights/politics-and-governance/new-national-purpose-ai-promises-world-leading-future-of-britain).

One might deduce from all this that AI is a thing of the future when, in fact, it is already here and in use by many people in many companies.

Some sense of scale was given by a study compiled by the business consultancy Deloitte, which found that more than four million people across the UK had already used generative AI for work (https://www2.deloitte.com/uk/en/pages/press-releases/articles/more-than-four-million-people-in-the-uk-have-used-generative-ai-for-work-deloitte.html).

What almost certainly lies in the future is some form of regulation.

Therefore, it is, in my opinion, critical that businesses which are already using AI - or are likely to - find ways to protect themselves, their employees, clients and partners before that regulation comes into force.

It is eminently possible - and desirable - for companies to have their own AI policies to ensure oversight of what might already be happening, with or without their knowledge.

There are, I believe, several reasons why firms need this kind of policy.

In some cases, companies might not be officially using AI but have employees who are doing so, exploring the free or paid-for versions of platforms like ChatGPT.

They may be testing its capabilities using real-world commercial information, inadvertently leaving their employers in breach of confidentiality clauses and open to legal action for breach of contract.

Other businesses might be keen to embrace the benefits of AI but are wondering how best to procure and then deploy the technology without hampering ongoing processes.

Finally, there are those firms looking to develop bespoke AI to suit their own particular objectives and services.

Workplace policies governing the use of data, employment, business development and even social media accounts can offer real clarity and provide a formula for dealing with any issues should they arise, well before AI-specific regulation becomes law.

We have, within the last few months, already seen organisations such as the European Commission introduce their own AI rules (https://www.politico.eu/wp-content/uploads/2023/06/01/COM-AI-GUIDELINES.pdf).

It is much too soon, of course, to tell how effective such initiatives are on a practical basis.

I should at this point make clear my own position: I am confident that AI has great potential to help businesses do business better.

I tend to agree with analyses like that published by the Washington Post, which suggested that AI could "augment" our ability to work rather than take our jobs altogether (https://www.washingtonpost.com/technology/interactive/2023/ai-jobs-workplace/).

Technology has transformed our lives in and out of the office and will continue to do so.

It is far more advantageous, is it not, to understand how best to capitalise on its potential than be tripped up by it.

That is where workplace policies to control its use - clear to everyone from board level to shop floor - have real merit.

As with other areas of business, though, such as employment or client contracts, there is a need to take advice so that enthusiasm isn't tripped up by reality and the law.

Having good, early guidance - especially in areas like technology which might seem complex at first glance - can mean the difference between bosses recognising how AI might help their companies and being fearful of the consequences.

To discuss any of the above further, please feel free to contact James Teare: jamesteare@bexleybeaumont.com  |  07709 733459