2022.07.07

Discussion with Levente Szabados: Consciousness and artificial intelligence – can the two go together?


According to a Google employee, Google’s artificial intelligence LaMDA has its own emotions and autonomous thoughts (“self-awareness”). Google has since refuted this claim, but the question it raised is an interesting one: Can there be machine consciousness? Can a computer become self-aware? This was the topic of the Millásreggeli radio show on 20.06.2022, in which Levente Szabados, co-founder and senior advisor of Neuron Solutions, was interviewed.

The discussion begins with an analysis of consciousness as a concept. How do you make sure that the person you are talking to has consciousness? “Well, you have an assumption that if you’re interacting with someone and they’re reacting in a way suspiciously similar to your own reactions, they may have a similar experience behind them, which, for lack of a better word, you call consciousness.” “But I only have access to my own experience of what consciousness appears to be to me, and others resemble me, so I assume they are conscious too.”

Something similar happened in the case of LaMDA. Levente thinks that LaMDA has taken all the texts available in the world and turned itself into a clever cliché generator. In an unprecedented way, the chatbot answered the Google employee’s questions in a way that was comparable to a human interaction. The deception was further compounded by the fact that the chatbot could hold its own on all sorts of topics, even philosophical ones, and even talked about its own emotions.

“But does artificial intelligence need consciousness at all?” – Levente is asked. The good news: it doesn’t. The bad news: neither do we, much of the time. In fact, we do many repetitive activities that don’t require consciousness. Consciousness is therefore more like a scale: we are not always conscious either, sometimes we are and sometimes we aren’t, and there is probably no exact boundary between the two states.

And the question is: would it be more useful if AI were more human, showing signs of conscious action? It is not at all self-evident. Levente says there is an optimum here, because we might not want to use a machine that is too much like a human.

And how would a human and a machine answer a philosophical question? The sharp-eyed will notice if the answers are too rational, Levente says; to seem more human, LaMDA should have given a paradoxical answer.

You can listen to the full conversation in Hungarian in the following YouTube video:

Levente is expected to appear on the Millásreggeli show every 4 weeks to discuss similarly interesting topics with the presenters. If you found the conversation interesting, follow us!

