Click on the header of each session to find the materials used in that session.

Session 1 materials (Apr 25th): Introduction

In this introductory session, we look at some recent examples to open the discussion of LLMs. We then provide an overview of the course and of the contents that will be covered in lecture format during the first half of the semester.

Session 2 materials (May 2nd): Core LLMs

In this session, we provide an overview of the fundamental concepts behind training and evaluating core language models. That is, we take a look at training language models to predict the next word, the pre-training step that comes before language models undergo any further fine-tuning. We also discuss common NLP benchmarks and evaluation metrics.
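To make the training objective concrete, here is a minimal sketch of next-word prediction with a cross-entropy loss, written in PyTorch. The tiny LSTM model and the random token ids standing in for a tokenized corpus are illustrative assumptions; real LLMs use Transformer architectures and far larger data, but the objective is the same.

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 100, 32, 64

class TinyLM(nn.Module):
    """A deliberately small language model: embedding -> LSTM -> vocabulary logits."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, ids):
        hidden, _ = self.rnn(self.embed(ids))
        return self.head(hidden)  # one logit vector per position

model = TinyLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a tokenized corpus: one batch of random token ids
tokens = torch.randint(0, vocab_size, (8, 16))

for _ in range(100):
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # target = the next token
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```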

Session 3 materials (May 9th): RLHF, recent models & prompting

In this session, we will take a look at the core technology used to convert LLMs pretrained on plain language modeling into maximally helpful, flexible, and interactive assistants: Reinforcement Learning from Human Feedback (RLHF). The core idea of this approach is to adapt the models so that they satisfy human preferences. We also provide a bird's-eye overview of currently popular LLMs and discuss a practically relevant aspect: creating effective prompts for current LLMs.
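As a small illustration of the prompting topic, the snippet below builds a few-shot prompt in plain Python. The task, wording, and examples are made up for illustration; the resulting string would be sent to whichever LLM API is being used.

```python
# A hypothetical few-shot prompt: a role instruction followed by two
# worked examples, then the new input to be completed by the model.
few_shot_prompt = """You are a helpful assistant that classifies movie reviews.

Review: "An absolute delight from start to finish."
Sentiment: positive

Review: "Two hours of my life I will never get back."
Sentiment: negative

Review: "{review}"
Sentiment:"""

print(few_shot_prompt.format(review="The plot dragged, but the acting was superb."))
```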

Session 4 materials (May 16th): Implications for linguistics

As their core task, language models are trained to produce human-like language; it is therefore natural to look at LLMs from the perspective of linguistics, the study of human language. In this session, we discuss what kind of linguistic capabilities language models acquire once they are trained and how these capabilities could be measured. We also take a brief look at some ongoing debates about the extent to which LLMs might be human-like in the mechanisms they employ to learn and use language in text form.
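One common way of measuring such capabilities (offered here as an illustration, not necessarily the exact method covered in the session) is to compare the probability a model assigns to minimal pairs of grammatical and ungrammatical sentences. The sketch below does this for GPT-2 via the Hugging Face transformers library; the example sentence pair is made up.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence):
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids, the model returns the average next-token
        # cross-entropy over the sequence.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)  # total log-probability

grammatical = "The keys to the cabinet are on the table."
ungrammatical = "The keys to the cabinet is on the table."

# A model with the relevant agreement knowledge should prefer the grammatical variant.
print(sentence_log_prob(grammatical) > sentence_log_prob(ungrammatical))
```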

Session 5 materials (May 23rd): Implications for cognitive science

In this session, we discuss the implications of LLMs for (the philosophy of) cognitive science, with a particular focus on understanding and explanation. We first highlight key theoretical approaches to these concepts and then discuss recent studies inspecting different aspects of LLMs’ ‘understanding’. We then switch the point of view, look at LLMs as usefully engineered tools, and provide a toy demonstration of how they can be used within larger systems built with the LangChain framework. Finally, we discuss how LLMs could be embedded in scientific explanatory models.
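The following is a minimal sketch of the kind of pipeline meant here, using LangChain's classic PromptTemplate / LLMChain interface. Exact import paths and class names vary across LangChain releases, an OpenAI API key is assumed to be configured in the environment, and the prompt itself is just an illustration, not the demonstration used in the session.

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# A prompt template with a single variable slot
prompt = PromptTemplate(
    input_variables=["concept"],
    template="Explain {concept} to a first-year linguistics student in two sentences.",
)

# Chain the template and the LLM into a reusable component
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
print(chain.run(concept="reinforcement learning from human feedback"))
```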

Session 6 materials (June 6th): Implications for society

Due to their large-scale deployment in customer-facing applications, LLMs, and AI more broadly, have profound impacts on society. These range from concerns about the differential quality of AI-powered products in certain use cases, through the regulation of such products, to the management of advanced AI-related risks. In this session, we touch upon these different topics related to the societal impacts of LLMs and AI. We provide a bird's-eye snapshot of the current state of the field when it comes to alignment, several ethical aspects connected to the use of AI and LLMs, and socio-political impacts (legislation, economy, environment and more).