OpenAI is streamlining the fine-tuning process for ChatGPT. This will soon allow large organizations to skip intermediate steps in the process and to compare different tailored LLMs to see whether the additional training actually leads to improvements.
OpenAI is giving companies more tools to optimize and fine-tune models with their own data. This tuning is done via an API, which is where most of the improvements are located. The first innovation concerns checkpoints. Fine-tuning is carried out in several passes, or epochs in technical jargon. In each epoch the model works through the same additional training data again, and every pass is another chance for something to go wrong. Checkpoints ensure that organizations can roll back to a previous epoch rather than starting over when that happens.
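As a rough illustration, the sketch below starts a fine-tuning job and then lists its per-epoch checkpoints. It assumes the openai Python SDK (v1.x); the training file ID is a placeholder and exact method names may vary between SDK versions.

```python
# Minimal sketch, assuming the openai Python SDK v1.x; IDs are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Start a fine-tuning job on previously uploaded training data (hypothetical file ID).
job = client.fine_tuning.jobs.create(
    model="gpt-3.5-turbo",
    training_file="file-abc123",
)

# Once the job has run, each completed epoch yields a checkpoint that can be
# inspected and, if a later epoch degrades quality, used instead of the final model.
checkpoints = client.fine_tuning.jobs.checkpoints.list(fine_tuning_job_id=job.id)
for cp in checkpoints:
    print(cp.step_number, cp.fine_tuned_model_checkpoint)
```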
Playground
Developers who use the API also gain access to the Playground, an interface that makes it easy to compare models with each other. For example, you can send the same request to GPT-4 and to a version of GPT-3.5 Turbo that you have fine-tuned on your own data, to see whether the fine-tuning produces the desired results.
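The Playground itself is a web interface, but the same side-by-side comparison can be sketched in code. The example below sends one prompt to a base model and to a fine-tuned model; the fine-tuned model ID and the prompt are hypothetical.

```python
# Minimal sketch: the same request sent to a base model and a fine-tuned model,
# mirroring the Playground's comparison view. The fine-tuned model ID is a placeholder.
from openai import OpenAI

client = OpenAI()

prompt = [{"role": "user", "content": "Summarize our refund policy in two sentences."}]

for model in ("gpt-4", "ft:gpt-3.5-turbo:my-org::abc123"):
    response = client.chat.completions.create(model=model, messages=prompt)
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```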
OpenAI now also makes it possible to adjust the hyperparameters that determine a model's behavior via the dashboard. Previously this was only possible via the API or SDK. This makes it easier for users to customize models to their liking. Finally, there are some other improvements, such as more accurate validation metrics and integrations with third-party applications.
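For context, this is what the API/SDK route looks like: the fine-tuning endpoint accepts a hyperparameters object when a job is created. The sketch below assumes the openai Python SDK; the values and file ID are purely illustrative.

```python
# Minimal sketch of setting hyperparameters programmatically, which was the only
# route before the dashboard option; values and the file ID are illustrative.
from openai import OpenAI

client = OpenAI()

job = client.fine_tuning.jobs.create(
    model="gpt-3.5-turbo",
    training_file="file-abc123",
    hyperparameters={
        "n_epochs": 3,                   # number of passes over the training data
        "batch_size": 8,                 # examples per training step
        "learning_rate_multiplier": 0.1, # scales the base learning rate
    },
)
print(job.id, job.status)
```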
Focus on business
OpenAI wants to focus heavily on enterprises in 2024. ChatGPT started the AI hype a year and a half ago and quickly brought generative AI to the mainstream, but it does not necessarily have a reputation as the most suitable enterprise-level solution. Options for more refined control can help change that.