Knowledge is power: Why standardised AI training is needed

Artificial intelligence: it’s been the talk of the tech town over the past few months, and unlike NFTs, this isn’t a fleeting craze.

AI has raised many practical and ethical questions, and there is a lot of uncertainty about what the future might look like. People are scared they will lose their jobs, and students are already relying on ChatGPT to write their essays. Will critical thinking itself become redundant as we increasingly rely on this technology?

There is no denying that AI is a powerful tool: it can generate revenue, reduce costs and save time. But navigating how we use it poses plenty of challenges, and if there is one lesson we’ve learned (or perhaps never learned) as humans, it’s that once we’ve opened Pandora’s box, we can’t shut it again – particularly when there is money to be made.

So, how do we navigate these muddy waters and find a solution?

Who will use generative AI?

Firstly, we must take a step back and look at who is going to be using AI in the coming years.

Whether you’re a teacher, an office worker or a technologist, we will all end up using AI at some point in the years to come. In 2020, an estimated 15% of all UK businesses had adopted at least one AI technology, with 40% of those adoptions being off-the-shelf systems. Since the rise of ChatGPT, that number will inevitably be higher, because AI offers many benefits and shortcuts that make our very busy lives easier.

One of the main challenges is that people who don’t understand AI are using it, and using it a lot. To the untrained eye, programmes such as ChatGPT can appear almost magical. This is where cracks begin to show: as a country, many of us are in a state of misunderstanding and fear about AI. We need to move towards AI enlightenment.

But how do we get there?

Understanding AI

Education is the only way we can demystify AI and demonstrate what it is and the power it holds, both for good and for bad.

With such a rapidly developing technology, we need to be careful about the information we use to train it. AI systems have been known to ‘hallucinate’, that is, to confidently present plausible-sounding but false statements. In other cases, they have been known to fuel discriminatory biases. For example, an analysis of more than 5,000 images created with Stable Diffusion found that images generated for high-paying jobs were dominated by subjects with white and lighter skin tones, whereas subjects with darker skin tones appeared mainly in low-paying work categories.

Although AI offers opportunities for innovation, growth and prosperity, it also creates a wide range of new risks. One of the main concerns with AI is how it ‘learns’. The ‘intelligence’ it demonstrates is only as good as the information it has been trained on. This leads to problems around privacy, individual rights and copyright in the data used to build its knowledge.

AI can also adapt or learn over time. As it is continuously fed new data, the technology may change the way it makes decisions and takes actions. This could undermine the validity of its answers, leave it unknowingly non-compliant with laws and regulations, and create further risks to privacy. The stakes become far higher if AI is making decisions about driving a vehicle or diagnosing medical conditions.

It starts with education

Implementing high-quality training is essential to the future of the relationship between humans and this technology. And it shouldn’t be industry-specific: mandatory training should ensure that every employee has a basic level of AI literacy. It should equip people with the foundational skills and understanding to use AI well, so that we leverage it for the best outcomes.

AI is advancing at such a rapid rate that training needs to start imminently, and from an early age. With 97% of 0-18-year-olds having access to the internet in 2022, virtually everyone in the UK will have generative AI at their disposal. Education needs to begin in schools: children need to learn from a young age the potential dangers AI could pose, and therefore how it should and shouldn’t be used.

What should training look like?

With technology this expansive, and changing every day, training should be reviewed regularly. And different levels of training are needed across the board:

  • For day-to-day users – how to spot AI-generated content, whether that’s videos or social media posts

  • For more advanced users – how to use it, its biases and limitations, and what information you should and shouldn’t feed it

  • On a national level – identify those who have the potential to design and develop AI tools and give them the skills they need.

It goes without saying that this technology is going to be game-changing and will have a significant impact on many people’s lives. There is a caveat, though: without full knowledge of its capabilities, many are likely to fall victim at some point to the risks AI poses. There needs to be strong regulation of AI, as well as standardisation of training and education.

What is the government doing?

The UK has consistently been ranked as one of the best places in the world to start an AI business, and its regulatory environment has supported this growth. The country has real potential to be a hub of AI innovation, with annual investment in UK AI companies increasing from £252 million in 2015 to over £2.8 billion in 2021.

And the UK has wanted to be a leader for a while now, not just since the explosion of ChatGPT. In 2021 the National AI Strategy set the objective for the UK to be an AI superpower.

We have seen a lot of movement in the UK Government towards grappling with AI’s potential challenges and rapid advances. Most recently, Prime Minister Rishi Sunak announced that the UK will host the first major global summit on AI safety.

What should they be doing?

However, there are some key areas the government isn’t looking at closely enough. Education and training for the general population have barely been spoken about.

Although the UK is aiming to be a global AI leader, that cannot mean ignoring the contribution of most of the UK’s population. To be a leader, we need to make sure those who use AI, whether for professional or personal reasons, are using it correctly and safely.

AI is so accessible now, and with this comes the risk that people who don’t know its dangers could misuse it. This is why many large enterprises, such as Apple, Samsung and Accenture, have decided to ban or limit the use of ChatGPT at work.

Knowledge is power

It’s clear that the UK’s journey with AI isn’t going to be an easy one, and inevitably there will be bumps and hurdles along the way. With something so new and uncertain, there will be an adjustment period.

But we cannot let the wider population go into this empty-handed, without the knowledge needed to understand and properly use AI. Tools this powerful need to be used for the greater good, and doing that effectively starts with education.

In the years to come, the use of this technology will increase significantly, and the UK Government needs to draw on the wisdom of experts to show us how to interact with AI and leverage it in the best way possible for our futures.