Gergő Szabó and Balázs Bódi from Neuron Solutions brought some very exciting topics to the Budapest Deep Learning Reading Seminar in November 2021.
Gergő Szabó gave a presentation on ‘representation learning’, covering several applications and his own experiences. But what actually is representation learning?
“Representation learning is basically the encoding of the important features or properties of an object, entity or resource.” A great example of this is the illustration on the cover of the movie Jaws, which captures the essence without showing unnecessary detail.
Gergő’s first example was a feature engineering exercise, an area he sees as a precursor to representation learning: the aim is to reduce the number of variables and to compress the known information as much as possible.
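A minimal sketch of what such compression can look like in practice, assuming scikit-learn and a made-up toy dataset (neither the data nor the parameters are from the talk):

```python
import numpy as np
from sklearn.decomposition import PCA

# toy dataset: 1,000 samples with 50 partially redundant features
rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 5))          # 5 "true" underlying factors
X = latent @ rng.normal(size=(5, 50)) + 0.1 * rng.normal(size=(1000, 50))

# compress the 50 observed variables into 5 components
pca = PCA(n_components=5)
Z = pca.fit_transform(X)

print(Z.shape)                               # (1000, 5)
print(pca.explained_variance_ratio_.sum())   # close to 1: little information lost
```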
In the presentation we saw examples of both supervised and unsupervised forms of representation learning, and we also learned that Gergő finds the unsupervised tasks more exciting. Next, we heard about methods for evaluating representations (representation evaluation).
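One widely used recipe for this is the so-called linear probe: freeze the learned representation and see how far a simple linear classifier gets on top of it. The sketch below is our hedged illustration of the idea (the dataset, the PCA stand-in encoder and all parameters are assumptions, not details from the talk):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# stand-in "representation": here just PCA features; in practice this
# would be the frozen encoder of the network being evaluated
encoder = PCA(n_components=16).fit(X_tr)
Z_tr, Z_te = encoder.transform(X_tr), encoder.transform(X_te)

# a good representation should make even a *linear* classifier work well
probe = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print("linear probe accuracy:", probe.score(Z_te, y_te))
```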
Gergő also showed us an example of his own, a neural network trained on brain-wave data recorded from alcoholic and non-alcoholic people, which really engaged the audience.
Then came a series of unsupervised examples, and we learned that representation learning methods are good at completing sentences or filling in incomplete pictures (see the sketch below). Gergő listed a number of other applications, and he ended his talk by explaining the interesting relationship between transfer learning and representation learning.
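Sentence completion of this kind is exactly what masked language models are trained for. A minimal, hedged sketch using the Hugging Face transformers library (the model choice and the example sentence are our assumptions, not from the talk):

```python
from transformers import pipeline

# a pretrained masked language model fills in the blank using the
# representation it has learned of the surrounding words
fill = pipeline("fill-mask", model="bert-base-uncased")

for candidate in fill("Representation learning encodes the important [MASK] of an object."):
    print(f'{candidate["token_str"]:>12}  {candidate["score"]:.3f}')
```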
This was followed by Balázs Bódi’s presentation, ‘One (quantum) to rule them all’.
Particle or wave? How to decide? This was the question Balázs started his presentation with, recalling the atmosphere of physics classes.
Jumping into quantum computing, we learned that quantum bits (qubits) can take a continuum of values rather than just 0 or 1, and are usually represented as points on the surface of a unit sphere, the so-called Bloch sphere.
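In formulas, a qubit state can be written as |ψ⟩ = cos(θ/2)|0⟩ + e^{iφ} sin(θ/2)|1⟩, where the two angles θ and φ pick out a point on the Bloch sphere. A quick numpy sketch of this correspondence (the concrete angles are arbitrary, chosen only for illustration):

```python
import numpy as np

def qubit_state(theta, phi):
    """Qubit state |psi> = cos(theta/2)|0> + e^{i*phi} sin(theta/2)|1>."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

def bloch_point(theta, phi):
    """The same pair of angles as a point on the unit sphere."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

psi = qubit_state(np.pi / 3, np.pi / 4)
print(np.abs(psi) ** 2)                   # measurement probabilities, sum to 1
print(bloch_point(np.pi / 3, np.pi / 4))  # a point with length exactly 1
```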
Balázs told us about the measurement and entanglement of quantum bits, and then about a possible application: quantum teleportation. This does not mean physical teleportation, but the teleportation of information, which could be used for quantum cryptography, for example.
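As a hedged illustration of the protocol, here is a minimal sketch assuming the Qiskit library (not code from the talk), written in the deferred-measurement form so that the classically controlled corrections become ordinary controlled gates:

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, partial_trace

qc = QuantumCircuit(3)
qc.ry(1.2, 0)     # qubit 0: the "message" state to be teleported
qc.h(1)           # qubits 1 and 2: the shared entangled Bell pair
qc.cx(1, 2)
qc.cx(0, 1)       # rotate qubits 0 and 1 into the Bell basis
qc.h(0)
qc.cx(1, 2)       # corrections on qubit 2, written with deferred
qc.cz(0, 2)       # measurement instead of classically controlled gates

# tracing out qubits 0 and 1 leaves qubit 2 in the original message state
received = partial_trace(Statevector(qc), [0, 1])
print(received)   # 2x2 density matrix equal to |message><message|
```

In the real protocol the two measurement outcomes would be sent over a classical channel and the receiver would apply the X and Z corrections conditionally; the deferred-measurement form is mathematically equivalent and keeps the sketch short.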
“So quantum computers are just computers that are super fast?” is a question Balázs is often asked. The answer is simple: no. Quantum computers work on a completely different logic.
Balázs then showed us some notoriously hard problems that quantum computers could solve in polynomial time. Why is this so special? Because for these problems no polynomial-time classical algorithm is known, and for the NP-hard ones among them none is believed to exist at all.
A famous example is the factorisation of an unimaginably large number into its two prime divisors (strictly speaking, factoring is not known to be NP-hard, but no efficient classical algorithm for it is known, while Shor’s quantum algorithm solves it in polynomial time). This would require roughly 4,000 logical quantum bits, or about 5,000,000 physical quantum bits, an error-correction overhead of roughly 1,250 physical qubits per logical one. To see where we are now: IBM’s state-of-the-art quantum computer had 65 quantum bits in 2021. So Balázs estimated that a solution to such a difficult problem could be realised in about 15 years.
Afterwards we learned that the key to quantum neural networks is the number of connections, which is also still very low in today’s machines. The conclusion is that ‘quantum machine learning’ is still a distant future; for now, the algorithms of classical computers are far superior to their quantum counterparts.
But let us not despair: there are areas where the quantum world could make progress soon. Balázs told us about one such area, so-called probabilistic graphical models, where the NP-hardness of exact inference gives quantum computers a big potential advantage.
Were you interested in the talks? You can watch them on our brand-new YouTube channel: