Human-in-the-loop is the process of harnessing human input across the ML lifecycle to improve the accuracy and relevance of models. Humans can perform a variety of tasks, from data generation and annotation to model review, customization, and evaluation. Human intervention is especially important for generative AI applications, where humans are typically both the requester and the consumer of the content. It is therefore critical that humans teach foundation models (FMs) to respond accurately, safely, and relevantly to users' prompts. Human feedback helps you accomplish several tasks. First, it creates high-quality labeled training datasets for generative AI applications, both through supervised learning (where a human demonstrates the style, length, and accuracy with which a model should respond to a user's prompt) and through reinforcement learning from human feedback (RLHF, where a human ranks and classifies model responses). Second, human-generated data lets you customize FMs for specific tasks or with your company- and domain-specific data, making model output relevant to you. Lastly, human evaluation and comparison help you select the FM best suited to your use case and project requirements.
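As a concrete illustration of the RLHF data-collection step, the sketch below shows one common way human rankings become training data: an annotator orders several model responses from best to worst, and each ranking is expanded into pairwise (chosen, rejected) preference records. This is a minimal, hypothetical example, not any specific service's API; the function name and record fields are assumptions for illustration.

```python
from itertools import combinations

def rankings_to_preference_pairs(prompt, ranked_responses):
    """Expand a human annotator's best-to-worst ranking of model
    responses into (prompt, chosen, rejected) preference pairs,
    a format commonly used to train RLHF reward models."""
    pairs = []
    # combinations() preserves input order, so in every pair the
    # first response was ranked above the second by the annotator.
    for better, worse in combinations(ranked_responses, 2):
        pairs.append({"prompt": prompt, "chosen": better, "rejected": worse})
    return pairs

# Example: a human ranked three candidate responses best-to-worst.
pairs = rankings_to_preference_pairs(
    "Summarize our refund policy.",
    [
        "Accurate, concise summary of the policy.",
        "Mostly correct but verbose summary.",
        "Off-topic response.",
    ],
)
# Three ranked responses yield three pairwise preferences.
```

A single ranking of n responses yields n·(n-1)/2 preference pairs, which is why ranking tasks are an efficient way to gather human feedback compared with labeling each pair separately.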