Prompting & Current LMs
Slides from the fourth lecture, introducing prompting strategies in the context of base (or foundation) language models as well as some current large language models, can be found here.
Additional materials
If you want to dig a bit deeper, here are (optional!) supplementary readings on the prompting strategies and LMs covered in class:
Wei et al. (2023) Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Kojima et al. (2023) Large Language Models are Zero-Shot Reasoners
Webson & Pavlick (2022) Do Prompt-Based Models Really Understand the Meaning of Their Prompts?
Nye et al. (2022) Show Your Work: Scratchpads for Intermediate Computation with Language Models
Wang et al. (2023) Self-Consistency Improves Chain of Thought Reasoning in Language Models
Liu et al. (2022) Generated Knowledge Prompting for Commonsense Reasoning
Yao et al. (2023) Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Xie et al. (2022) An Explanation of In-context Learning as Implicit Bayesian Inference
Min et al. (2022) Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?
Lampinen et al. (2022) Can language models learn from explanations in context?
Touvron et al. (2023) LLaMA: Open and Efficient Foundation Language Models
Touvron et al. (2023) Llama 2: Open Foundation and Fine-Tuned Chat Models
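At their core, several of the prompting strategies above amount to constructing different prompt strings and aggregating sampled outputs. The following minimal sketch illustrates zero-shot chain-of-thought, few-shot chain-of-thought, and self-consistency; the model call is omitted, so the sampled answers are stubbed placeholders rather than real LM outputs.

```python
from collections import Counter

QUESTION = ("Roger has 5 tennis balls. He buys 2 cans with 3 balls each. "
            "How many balls does he have now?")

# Zero-shot CoT (Kojima et al.): append a reasoning trigger to the bare question.
zero_shot_cot = f"Q: {QUESTION}\nA: Let's think step by step."

# Few-shot CoT (Wei et al.): prepend a worked demonstration whose answer
# includes explicit intermediate reasoning.
demonstration = (
    "Q: There are 3 cars and each car holds 4 people. How many people fit?\n"
    "A: Each car holds 4 people, so 3 * 4 = 12. The answer is 12.\n\n"
)
few_shot_cot = demonstration + f"Q: {QUESTION}\nA:"

# Self-consistency (Wang et al.): sample several reasoning paths (stubbed here),
# extract each final answer, and take a majority vote.
sampled_answers = ["11", "11", "9", "11", "11"]
final_answer, _ = Counter(sampled_answers).most_common(1)[0]
print(final_answer)  # the majority-voted answer
```

In a real pipeline, each element of `sampled_answers` would be parsed from an LM completion generated with non-zero temperature; the vote then marginalizes over the diverse reasoning paths.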