Artificial intelligence is no longer just assisting: it can now predict potential health risks, even before symptoms appear.
British researchers have developed a model, trained on data from 57 million people, that could warn users in advance about their likelihood of developing certain conditions.
The goal: prevention, fewer complications, and less need for hospitalization.
But if an algorithm makes a prediction, who gets to decide what to do?
This was the topic of a conversation between Áron Kovács-Nagy and Levente Szabados, Associate Professor at the Frankfurt School of Finance & Management and Senior Consultant at Neuron Solutions.
Prediction isn’t new, but this is a different level
According to Levente Szabados, medicine has always aimed to identify risks as early as possible.
We measure blood pressure, use diagnostics, and analyze lab results to receive early warnings before illness sets in.
In that sense, AI doesn’t introduce something entirely new. But it does offer two key advancements:
- It works from much finer signals and more complex data sets, allowing it to detect risks long before symptoms appear, sometimes even years in advance.
- To benefit from this, we must learn how to interpret these predictions, and how to make meaningful decisions based on them, within the context of each person’s unique situation.
What happens if the AI is wrong?
A fair question: after all, every diagnostic tool can be wrong, and an AI model is no exception.
The goal, Szabados says, is not to assume perfection, but rather to build systems that are prepared to manage and respond to errors.
Just as traffic safety systems or traditional medical diagnostics are not flawless, AI is also a probabilistic tool whose reliability must be continuously monitored, interpreted, and adjusted.
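To make "continuously monitored" concrete, here is a minimal, purely illustrative Python sketch of one common way to check a probabilistic tool's reliability: comparing its predicted probabilities against observed outcomes with a Brier score and a simple calibration table. Every number and name below is invented for the example; none of it comes from the model discussed in the conversation.

```python
# Purely illustrative sketch: monitoring the reliability of probabilistic
# risk predictions by comparing them with what actually happened.

def brier_score(predictions, outcomes):
    """Mean squared gap between predicted risk and observed outcome
    (0 or 1). Lower is better; 0.0 would be a perfect predictor."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

def calibration_table(predictions, outcomes, bins=5):
    """Group predictions into risk bands and compare the average predicted
    risk in each band with the observed event rate in that band."""
    rows = []
    for b in range(bins):
        lo, hi = b / bins, (b + 1) / bins
        band = [(p, o) for p, o in zip(predictions, outcomes)
                if lo <= p < hi or (b == bins - 1 and p == 1.0)]
        if band:
            avg_pred = sum(p for p, _ in band) / len(band)
            observed = sum(o for _, o in band) / len(band)
            rows.append((lo, hi, len(band), avg_pred, observed))
    return rows

# Invented monitoring data: predicted risks and whether the condition occurred.
preds = [0.05, 0.10, 0.30, 0.70, 0.90, 0.20, 0.60, 0.15]
actual = [0, 0, 0, 1, 1, 0, 1, 0]

print(f"Brier score: {brier_score(preds, actual):.3f}")
for lo, hi, n, avg_pred, observed in calibration_table(preds, actual):
    print(f"risk band {lo:.1f}-{hi:.1f}: n={n}, "
          f"predicted {avg_pred:.2f} vs observed {observed:.2f}")
```

If the observed event rates drift away from the predicted ones over time, that is the signal to recalibrate or retrain, which is exactly the kind of ongoing interpretation and adjustment Szabados describes.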
Where’s the line between prevention and prejudice?
What happens when an algorithm says: “You are at high risk of developing a disease”?
Does this help us, or does it cross into dangerous territory?
It’s a classic ethical dilemma. As Szabados points out, saying “smoking causes cancer” is a statistical statement.
Not everyone who smokes will get cancer, but the probability is clear. The same applies to AI: the model doesn’t predict with certainty but instead signals probabilities.
And ultimately, the decision is ours: how we interpret that information, whether we act on it, and how it influences our lifestyle or medical choices.
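To illustrate where that human decision comes in, here is a small hypothetical sketch: the same predicted probability can lead to different actions depending on thresholds that people, not the model, set. All values and action names are invented for the example.

```python
# Hypothetical example: the model outputs a probability;
# humans choose the thresholds that turn it into an action.

predicted_risk = 0.18  # invented: an 18% chance of developing a condition

def recommended_action(risk, screening_threshold=0.10, referral_threshold=0.50):
    """Map a probability to an action. The thresholds encode a human
    judgment about acceptable risk; the model does not decide them."""
    if risk >= referral_threshold:
        return "refer to a specialist"
    if risk >= screening_threshold:
        return "suggest earlier screening and a lifestyle review"
    return "continue routine care"

print(recommended_action(predicted_risk))
# -> suggest earlier screening and a lifestyle review

print(recommended_action(predicted_risk, screening_threshold=0.25))
# -> continue routine care (same prediction, stricter threshold)
```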
AI can support us, but it doesn’t decide for us
An AI-generated prediction isn’t a verdict; it’s a new kind of early signal.
It can create real value only if we approach it with awareness, critical thinking, and collaboration.
🎧 You can listen to the full conversation with Levente Szabados by clicking here!

