Our co-founder, Levente Szabados, had an insightful discussion with Áron Kovács Nagy on Trend-idők about the role and challenges of artificial intelligence (AI) in medical diagnostics.
The application of AI in medicine holds promising opportunities but is not without its challenges. A recent study reveals that AI models sometimes draw surprising and irrelevant conclusions. For example, when analyzing knee X-rays, the AI concluded that the patient drinks beer. This phenomenon highlights an issue known as shortcut learning.
The shortcut learning phenomenon
Shortcut learning occurs when AI models base their decisions on patterns that correlate with the outcome but are irrelevant to it. A classic example: as ice cream consumption increases, so do shark attacks. The reason is not that ice cream has anything to do with sharks, but that a hidden confounding factor, good weather, drives both: people buy ice cream and go to the sea on warm days, and with more people in the water, shark attacks become more likely.
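The ice-cream-and-sharks example can be reproduced with a few lines of synthetic data. In this sketch (the numbers are fabricated purely for illustration), a hidden confounder, temperature, drives both ice cream sales and beach attendance, so the two outcome series correlate strongly even though neither causes the other; controlling for temperature makes the correlation vanish:

```python
import numpy as np

# Synthetic illustration: warm weather (the confounder) drives both
# ice cream sales and beach attendance, and shark-attack risk scales
# with how many people are in the water.
rng = np.random.default_rng(0)
temperature = rng.uniform(10, 35, size=365)               # daily temperature, deg C
ice_cream_sales = 20 * temperature + rng.normal(0, 50, 365)
beach_visitors = 100 * temperature + rng.normal(0, 300, 365)
shark_attacks = 0.001 * beach_visitors + rng.normal(0, 0.5, 365)

# Raw correlation looks strong -- the "shortcut" a naive model would learn.
r = np.corrcoef(ice_cream_sales, shark_attacks)[0, 1]
print(f"correlation(ice cream, shark attacks) = {r:.2f}")

# Regress out temperature from both series; the residual (partial)
# correlation is near zero, exposing the link as spurious.
resid_ice = ice_cream_sales - np.poly1d(np.polyfit(temperature, ice_cream_sales, 1))(temperature)
resid_shark = shark_attacks - np.poly1d(np.polyfit(temperature, shark_attacks, 1))(temperature)
r_partial = np.corrcoef(resid_ice, resid_shark)[0, 1]
print(f"partial correlation given temperature = {r_partial:.2f}")
```

A model trained only on the raw series would happily use ice cream sales to "predict" shark attacks; the partial correlation shows why that shortcut breaks as soon as the weather pattern changes.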
Dangers in healthcare applications
This phenomenon can be particularly dangerous in healthcare applications, where accuracy and reliability are crucial. In cancer diagnosis, for example, difficult cases are reviewed by several doctors, and each reviewer's stamp may appear on the X-ray. These stamps must be removed before training, because the AI algorithm could easily learn that images bearing more stamps are more likely to show cancer, associating the stamp count rather than the tissue itself with the diagnosis and drawing false conclusions on new images.
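As a minimal sketch of this kind of preprocessing, the fragment below blanks out the image region where stamps or annotations are assumed to appear. The fixed top-right location is an assumption made for the example; a real pipeline would detect the markings rather than hard-code their position:

```python
import numpy as np

def remove_stamp_region(xray: np.ndarray, corner: int = 64) -> np.ndarray:
    """Return a copy of the image with the assumed stamp area zeroed out.

    Hypothetical helper for illustration: assumes annotations sit in a
    fixed-size square in the top-right corner of the image.
    """
    cleaned = xray.copy()
    cleaned[:corner, -corner:] = 0  # blank the assumed stamp location
    return cleaned

# Stand-in for a grayscale X-ray (values in [0, 1]).
image = np.random.default_rng(1).random((512, 512))
cleaned = remove_stamp_region(image)
```

The point is not the specific masking strategy but that the shortcut signal is removed from the data itself, so the model never has the chance to learn it.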
Risks of prejudices and superstitions
It is essential to understand that AI algorithms are not prejudiced or superstitious by themselves, but the data they receive can be. If the training data shows only one type of person, the AI can easily conclude that certain jobs can only be filled by men, simply because it has only ever seen men in those jobs. It is therefore crucial to curate and consciously balance the data so that the model's attention is directed to relevant factors, such as education or skills, rather than gender or other irrelevant characteristics.
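A simple first step toward such curation is auditing how a sensitive attribute is distributed across outcome classes before training. The toy records and field names below ("gender", "hired") are fabricated for the sketch:

```python
from collections import Counter

# Fabricated toy records for a hypothetical hiring dataset.
records = [
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "female", "hired": False},
    {"gender": "female", "hired": True},
    {"gender": "male", "hired": False},
]

# Count every (attribute, outcome) combination.
by_outcome = Counter((r["gender"], r["hired"]) for r in records)
for (gender, hired), n in sorted(by_outcome.items()):
    print(f"{gender:6s} hired={hired}: {n}")

# Positive outcomes per group: a heavy skew warns that a model trained
# on this data may shortcut on gender instead of qualifications.
hired_counts = Counter(r["gender"] for r in records if r["hired"])
print("positives per group:", dict(hired_counts))
```

If the audit reveals a skew like the one above (three of four positive examples are male), the dataset should be rebalanced or reweighted before any model sees it.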
How can we protect ourselves?
We can defend against shortcut learning and data biases in several ways. One method is to strive for diverse and balanced data sets when developing and applying AI models. It is also important to remove irrelevant information, such as doctors' stamps on X-rays, during data preprocessing. Finally, deployed AI systems must be continuously monitored and fine-tuned to ensure that their decisions remain reliable and accurate.
The interview highlights that while AI has the potential to revolutionize medicine, it must be applied carefully and cautiously to avoid irrelevant and potentially dangerous conclusions.