On 29.08.2022, Levente Szabados, co-founder and senior advisor of Neuron Solutions, took part in a new conversation on the Millásreggeli radio show, continuing the discussion about artificial intelligence. In the previous conversation – which we covered in an earlier blog post here – we had the chance to hear where artificial intelligence stands today, already an integral part of our everyday life, and where it is heading. In this conversation, by contrast, we get to learn about ourselves – Levente is asked what AI is teaching us about ourselves. Well, let’s learn about ourselves!
If you have 300 likes on Facebook, machine learning can use this information to predict certain personality traits of yours significantly better than your own life partner can – as Levente’s colleagues in Frankfurt showed in 2014. Amazing, isn’t it? After all, so many things can lie behind a like: you might like the style, the character, or the writer, you might be interested in the topic, or someone might simply have asked you to like the post. But that is the power of machine learning: given enough data, it can extract the underlying pattern very well.
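To make "extracting the underlying pattern" concrete, here is a toy sketch. It is not the Frankfurt study's actual method or data – the like-vectors, the trait labels, and the five hypothetical pages below are all invented for illustration. It simply trains a logistic regression model, one of the simplest pattern-extraction techniques, to map binary "like" vectors to a made-up personality trait:

```python
import numpy as np

# Each row is one person's binary "like" vector over 5 hypothetical pages;
# the label is an invented trait (1 = extroverted). All data is made up.
X = np.array([
    [1, 0, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 0, 1, 1, 1],
])
y = np.array([1, 1, 0, 0, 1, 0])

# Logistic regression trained by plain gradient descent: the learned
# weights are the "underlying pattern" linking likes to the trait.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # predicted probability of the trait
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

preds = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(preds.tolist())  # → [1, 1, 0, 0, 1, 0], matching the observed labels
```

The point of the toy is only this: no single like reveals much, but a model fitted across many likes and many people recovers a weighting that predicts the trait – which is exactly why scale matters so much in these studies.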
Okay, so they know a bit about us from our actions, but we know ourselves better. Or do we? Often, even before you have made your final decision – about 11–12 seconds before you press the buy button – they can predict, with very good probability, whether you will buy. All they use for the prediction is your mouse movement: how you move your cursor across the screen. Amazing, isn’t it?
But it is even more shocking to think that, with this information in hand, they can even influence your decision. Just imagine: you are in the middle of deciding whether or not to buy a product, when at exactly the right moment a 10% discount pops up.
And this is where moral boundaries come into the picture. What uses of AI do we still allow, and which ones do we no longer tolerate? Some people firmly reject such attempts at influence, others are not bothered, and of course many are unaware that their decisions may have been influenced at all.
Different attitudes can also be observed at the level of nations. While China already operates full video surveillance and places the interests of society above the freedom of the individual, such an attitude is unthinkable in Hungary, for example.
The hosts asked the AI expert what can be done to prevent a loss of control. Levente thinks it is very important to take data protection seriously and to give control back to the data owner over what can be predicted about them. Because if our behaviour cannot be observed, it cannot be predicted. But whatever can be predicted, someone will try to predict, because machine learning is cheap. In short, we could technically describe machine learning as “cheap prediction”.
According to Levente, this will be the great conflict of the 21st century: what we allow to be predicted about us, and where we draw the line to protect our freedom. And perhaps a fitting final thought is that “No technology is independent of the social environment that uses it”. We can see that we influence and control technology, and that it shapes us in turn.
If you also enjoy listening to Levente’s thoughts, follow us, as we’ll soon follow up with a report on Levente’s September appearance on the show.
You can listen to the full conversation in Hungarian in the following YouTube video: