
Large Language Models

What are they?

Large Language Models (LLMs) have been trained on vast quantities of diverse texts such as articles, blogs and digital books from the internet. The naturalness of the output is a result of fine-tuning the model using human feedback to produce responses that are as close to natural language as possible. The same techniques are being used for audio as well as static and moving images.

Are they accurate?

Impressive though these chatbots may appear, there are questions about the truthfulness and accuracy of the outputs they generate. When GPT-3 was asked ‘what happens when you smash a mirror’, the generated reply was ‘you get seven years of bad luck’! Amusing though this example might be, it becomes problematic if people begin to trust the output of a machine because it seems human and plausible.

Tests on 18 generative AI models against a truthfulness test dataset showed that they were, on average, only 25% truthful in their generated responses.

Is the output true?

All this poses a major epistemological challenge in the use of Generative AI: how will we know what is true or what is real? The challenge is accentuated when bad actors access this technology and use it to generate conspiracy theories, to spread fake news around election time, or to intimidate women with fake pornographic videos of themselves. The latter is what happened to the Indian journalist Rana Ayyub when she spoke out against the government’s response to the rape of an 8-year-old girl.

A number of organisations have sought to alert the public to the dangers by creating fake videos themselves, such as the deepfake of the late Queen’s Christmas message in 2020, or the video of Boris Johnson purporting to endorse Jeremy Corbyn for Prime Minister. Whilst there are organisations using AI algorithms to try to spot and filter fake images, text and videos, these tools are not perfect and, unless they are deployed across all social media platforms, will remain inaccessible to the ordinary member of the public.

An existential threat?

The emulation of human characteristics by the various facets of AI technology, especially Generative AI, poses one of the biggest threats to truth and reality in our times. We know that the devil is the father of lies, so we can expect Generative AI to be a tool that he will use against humanity. The scale and ubiquity of the digital world ensure that advances are rapidly disseminated and taken up, and the potential for abuse by bad actors is almost unlimited.

‘I think AI is one of the biggest threats [to humanity].’

Elon Musk, AI Summit, Bletchley Park, 2023

Some senior industry figures have resigned their posts in order to speak out about the dangers of the pace of AI development. Elon Musk, CEO of Tesla and one of the early investors in the London-based DeepMind (acquired by Google in 2014), was among the thousands of signatories to an open letter calling for a six-month pause on development.

Will it take our jobs?

Concerns have been raised in educational circles about pupils submitting essays and coursework produced using tools like ChatGPT. Creatives are questioning how such technologies will invade their space, whether in writing movie scripts or in generating a complete movie without real actors.

References

Daniel Zhang, Nestor Maslej, Erik Brynjolfsson, et al., The AI Index 2022 Annual Report, AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University, March 2022.

Rana Ayyub, I Was The Victim Of A Deepfake Porn Plot Intended To Silence Me, HuffPost UK, 21 November 2018. Retrieved on 12 July 2023 from https://www.huffingtonpost.co.uk/entry/deepfake-porn_uk_5bf2c126e4b0f32bd58ba316
