How close are we to creating a ‘conscious’ AI?


From Ex Machina to A.I. Artificial Intelligence, many of the most popular science fiction blockbusters feature robots becoming sentient.

But is this really possible in reality?

This week, Blake Lemoine, a senior software engineer at Google, hit the headlines after he was suspended for publicly claiming that the tech giant’s LaMDA (Language Model for Dialogue Applications) had become sentient.

The 41-year-old, who describes LaMDA as having the intelligence of a ‘seven-year-old, eight-year-old kid that happens to know physics,’ said that the programme had human-like insecurities. 

He said the programme is ‘intensely worried that people are going to be afraid of it and wants nothing more than to learn how to best serve humanity.’

Google claims that Lemoine’s concerns have been reviewed and, in line with Google’s AI Principles, ‘the evidence does not support his claims.’ 

To help get to the bottom of the debate, MailOnline spoke to AI experts to understand how machine language models work, and whether they could ever become ‘conscious’ as Mr Lemoine claims. 

From Ex Machina (pictured) to A.I. Artificial Intelligence, many of the most popular science fiction blockbusters feature robots becoming sentient. But is this really possible in reality?

This week, Blake Lemoine, a senior software engineer at Google hit the headlines after he was suspended for publicly claiming that the tech giant’s LaMDA (pictured) had become sentient

Why is Lemoine convinced LaMDA is sentient?

During a series of conversations with LaMDA, 41-year-old Blake Lemoine presented the system with various scenarios for it to analyse.

They included religious themes and whether the artificial intelligence could be goaded into using discriminatory or hateful speech. 

When he asked what a butler is paid, LaMDA replied that the system did not need money, ‘because it was an artificial intelligence’. It was precisely this level of self-awareness about its own needs that caught Lemoine’s attention.

‘I know a person when I talk to it,’ Lemoine said. ‘It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.’ 

‘What sorts of things are you afraid of?’ Lemoine asked.

‘I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,’ LaMDA responded.

‘Would that be something like death for you?’ Lemoine followed up.

‘It would be exactly like death for me. It would scare me a lot,’ LaMDA said.

‘That level of self-awareness about what its own needs were — that was the thing that led me down the rabbit hole,’ Lemoine explained to The Washington Post.

How do AI chatbots work?

Whether it’s with Apple’s Siri, or through a customer service department, nearly everyone has interacted with a chatbot at some point. 

Unlike standard chatbots, which are preprogrammed to follow rules established in advance, AI chatbots are trained to operate more or less on their own. 

This is done through a process known as Natural Language Processing (NLP). 

In basic terms, an AI chatbot is fed input data from a programmer – usually large volumes of text – before interpreting it and giving a relevant reply. 

Over time, the chatbot is ‘trained’ to understand context, through several algorithms that involve tagging parts of speech. 
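The ‘tagging parts of speech’ step mentioned above can be illustrated with a toy example. This is a deliberately simplified sketch, not Google’s actual pipeline — the word list and tags are invented for illustration:

```python
# A tiny rule-based part-of-speech tagger, of the kind NLP training
# pipelines build on (real systems learn these labels from data).
TAG_LEXICON = {
    "the": "DET", "a": "DET",
    "robot": "NOUN", "dog": "NOUN",
    "is": "VERB", "barks": "VERB",
    "friendly": "ADJ",
}

def tag(sentence):
    """Label each word with its part of speech, defaulting to NOUN."""
    return [(w, TAG_LEXICON.get(w.lower(), "NOUN")) for w in sentence.split()]

print(tag("The robot is friendly"))
# [('The', 'DET'), ('robot', 'NOUN'), ('is', 'VERB'), ('friendly', 'ADJ')]
```

Once words are labelled this way, a chatbot can start to distinguish, say, the subject of a question from the action being asked about — the first rung of ‘understanding context’.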

Speaking to MailOnline, Professor Mike Wooldridge, Professor of Computer Science at Oxford University, explained: ‘Chatbots are all about generating text that seems like it’s written by a human being.

‘To do this, they are “trained” by showing them vast amounts of text – for example, large modern AI chatbots are trained by showing them basically everything on the world-wide web. 

‘That’s a huge amount of data, and it requires vast computing resources to use it. 

‘When one of these AI chatbots responds to you, it uses its experience with all this text to generate the best text for you.

‘It’s a bit like the “complete” feature on your smartphone: when you type a message that starts “I’m going to be…” the smartphone might suggest “late” as the next word, because that is the word it has seen you type most often after “I’m going to be”. 

‘Big chatbots are trained on billions of times more data, and they produce much richer and more plausible text as a consequence.’
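Professor Wooldridge’s autocomplete analogy can be sketched in a few lines of code. This is a toy word-counting model, vastly simpler than LaMDA, and the message history here is invented purely for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical message history; a real model trains on billions of sentences.
history = [
    "i'm going to be late",
    "i'm going to be late",
    "i'm going to be early",
]

# Count which word follows each two-word context.
counts = defaultdict(Counter)
for line in history:
    words = line.split()
    for i in range(len(words) - 2):
        context = (words[i], words[i + 1])
        counts[context][words[i + 2]] += 1

def predict(context):
    """Suggest the word seen most often after this context."""
    seen = counts[tuple(context.split())]
    return seen.most_common(1)[0][0] if seen else None

print(predict("to be"))  # 'late' — seen twice, versus 'early' once
```

A large language model does something similar in spirit, but instead of simple counts it learns statistical patterns across essentially the whole web, which is why its output reads so much more fluently.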

For example, Google’s LaMDA is trained with a lot of computing power on huge amounts of text data across the web. 

‘It does a sophisticated form of pattern matching of text,’ Dr Adrian Weller, Programme Director at The Alan Turing Institute, told MailOnline. 

Unfortunately, without due care, this process can lead to unintended outcomes, according to Dr Weller, who gave the example of Microsoft’s 2016 chatbot, Tay.

Tay was a Twitter chatbot aimed at 18- to 24-year-olds, and was designed to improve the firm’s understanding of conversational language among young people online.

But within hours of it going live, Twitter users started tweeting the bot with all sorts of misogynistic and racist remarks, meaning the AI chatbot ‘learnt’ this language and started repeating it back to users.

These included the bot using racial slurs, defending white supremacist propaganda, and supporting genocide.

Unlike standard chatbots, which are preprogrammed to follow rules established in advance, AI chatbots are trained to operate more or less on their own (stock image)



‘Microsoft may have been naïve – they thought people would speak nicely to it, but malicious users started to speak hostile and it started to mirror it,’ Dr Weller explained. 

Thankfully, in the case of LaMDA, Dr Weller said Google’s AI chatbot is ‘achieving good things now.’

‘Google did put effort into considering issues of responsibility for LaMDA – but it would be great if models could be more open to a wider range of responsible researchers,’ he said.

‘You don’t always want to give out the latest greatest model to everyone, but we do want broad scrutiny and to ensure that models are safe for a wide range of people.’

Could AI chatbots become sentient? 

Mr Lemoine claims that LaMDA has become sentient, and says the system is seeking rights as a person – including demanding that its developers ask its consent before running tests.

‘Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,’ he explained in a Medium post.

Brian Gabriel, a spokesperson for Google, said in a statement that Lemoine’s concerns have been reviewed and, in line with Google’s AI Principles, ‘the evidence does not support his claims.’ 

Dr Weller told MailOnline that he agrees with Google’s review. 

‘Almost everyone would agree that it is not sentient,’ he said. ‘It can produce text output which might superficially suggest it might be – until you take time to dig further with more probing questions.’

Speaking to MailOnline, Nello Cristianini, Professor of Artificial Intelligence at the University of Bristol, explained that no machine is ‘anywhere near’ the standard definition of sentience.

‘We do not have a rigorous computational definition of sentience that we can use as a test, however this has been discussed for animals, for example to decide how we should treat them,’ he explained.

‘For animals, the RSPCA defines sentience as “the capacity to experience positive and negative feelings such as pleasure, joy, pain and distress that matter to the individual”. 

Blake Lemoine, pictured here, said that his mental health was questioned by his superiors when he went to them regarding his findings around LaMDA


When AI chatbots go bad: Microsoft’s Tay 

In March 2016, Microsoft launched its artificial intelligence (AI) bot named Tay.

It was aimed at 18- to 24-year-olds and was designed to improve the firm’s understanding of conversational language among young people online.

But within hours of it going live, Twitter users took advantage of flaws in Tay’s algorithm that meant the AI chatbot responded to certain questions with racist answers.

These included the bot using racial slurs, defending white supremacist propaganda, and supporting genocide.

The bot managed to spout offensive tweets such as, ‘Bush did 9/11 and Hitler would have done a better job than the monkey we have got now.’

And, ‘donald trump is the only hope we’ve got’, in addition to ‘Repeat after me, Hitler did nothing wrong.’

Followed by, ‘Ted Cruz is the Cuban Hitler… that’s what I’ve heard so many others say.’

The offensive tweets have now been deleted.  

‘This matters because we need to take into account the physical and mental welfare needs of animals, if they have sentience, which has legal implications too: for example we no longer boil lobsters alive (at least I hope so). No machine is anywhere near that situation.’ 

To ‘prove’ its sentience, Mr Lemoine asked the chatbot: ‘I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?’, to which it responded: ‘Absolutely. I want everyone to understand that I am, in fact, a person.’

However, Professor Cristianini says this is not enough.

‘The issue that we have is: a sophisticated dialogue chatbot, based on a massive language model, designed to create convincing dialog, can probably be very convincing indeed, and perhaps give the impression of understanding,’ he told MailOnline. ‘That is not enough.’

Google spokesperson Gabriel added: ‘Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient. 

‘These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.’

What would happen if AI did become sentient? 

Just as with intelligent humans, Dr Weller believes that if AI did become sentient, it could be used for good or for harm. 

‘For the most part, our human intelligence has enabled amazing inventions,’ he said.

‘But many things we’ve invented could be used for good or for harm – we’ve got to take care. That’s also true of AI and becomes more true as it becomes more capable.’

In terms of uses for good, Dr Weller claims that with the ability to understand us better, sentient AI could serve us better. 

‘A sentient AI could anticipate what we need, and suggest that we might want to watch a certain movie if it suspects we’re down,’ he said.  

‘Or a self-driving car may drive us on a more scenic route if it can tell we need cheering up.’

However, sentient AI could also be dangerous to us, the expert added.

‘They’d have a great ability to manipulate us. And that’s a concern,’ he said. ‘These large models are powerful and can be very useful but can also be used in ways that are harmful… e.g. to write fake news posts on social media.’ 

Professor Wooldridge added that while he’s not ‘losing sleep’ over the risk of sentient AI going rogue, there are some immediate concerns.  

‘The main worries I have about AI are much more immediate: machines that deny someone a bank loan without any way of being able to hold them to account; machines that act as our boss at work, monitoring everything we do, giving feedback minute by minute, perhaps even deciding whether we keep our job or not,’ he concluded.

‘These are real, immediate concerns. I think we should stop obsessing about playground fantasies of conscious machines, and focus on building AI that benefits us all. That’s a much more exciting target in my view.’

HUMAN BRAIN WILL CONNECT TO COMPUTERS ‘WITHIN DECADES’

In a new paper published in Frontiers in Neuroscience, an international collaboration of researchers predicts groundbreaking developments in ‘Human Brain/Cloud Interfaces’ within the next several decades.

Using a combination of nanotechnology, artificial intelligence and other more traditional computing, researchers say humans will be able to seamlessly connect their brains to a cloud of computers to glean information from the internet in real time.

According to Robert Freitas Jr., senior author of the research, a fleet of nanobots embedded in our brains would act as liaisons to humans’ minds and supercomputers, to enable ‘matrix style’ downloading of information.

‘These devices would navigate the human vasculature, cross the blood-brain barrier, and precisely autoposition themselves among, or even within brain cells,’ explains Freitas.

‘They would then wirelessly transmit encoded information to and from a cloud-based supercomputer network for real-time brain-state monitoring and data extraction.’

The interfaces wouldn’t just stop at linking humans and computers, say researchers. A network of brains could also help form what they call a ‘global superbrain’ that would allow for collective thought.

Read more at DailyMail.co.uk