Neuron Solutions supported the latest Deep Learning Reading Seminar, whose topic was how to cope with complex AI models for natural language processing.
The sheer size of the latest generation of giant pretrained language models, such as GPT-3 and Megatron-Turing, makes even fine-tuning them infeasible in many application scenarios. This has led to intensive research into prompt engineering, that is, methods for optimizing language model prompts for specific downstream tasks. András Simonyi, NLP researcher and developer and AI lecturer at several universities, gave an introduction to the problem and an overview of the most important approaches to solving it.
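To make the idea concrete, here is a minimal sketch of how prompting recasts a downstream task as text completion. The template, label words, and the `lm_score` stand-in are illustrative assumptions, not taken from the talk: a real system would score the filled-in prompts with an actual language model's log-likelihoods.

```python
# Illustrative sketch of prompt-based classification (sentiment analysis).
# The downstream task is recast as filling a cloze-style template, and a
# "verbalizer" maps candidate label words back to task labels.
# All names here (TEMPLATE, VERBALIZER, lm_score) are hypothetical.

TEMPLATE = "Review: {text} Overall, the movie was {label_word}."
VERBALIZER = {"great": "positive", "terrible": "negative"}

def build_prompt(text: str, label_word: str) -> str:
    """Fill the cloze template with the input and a candidate label word."""
    return TEMPLATE.format(text=text, label_word=label_word)

def classify(text: str, lm_score) -> str:
    """Pick the label whose verbalized prompt the LM scores highest.

    `lm_score` stands in for a real language model's scoring function
    (e.g. the log-likelihood it assigns to the completed prompt).
    """
    best_word = max(VERBALIZER, key=lambda w: lm_score(build_prompt(text, w)))
    return VERBALIZER[best_word]
```

Prompt engineering then amounts to searching over templates and label words (or, in "soft prompting", over continuous prompt embeddings) to maximize task performance without updating the model's weights.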
András's full presentation is available in the recording below: