⚠️ High-performance GPU required

Training your own LLM locally is very resource-intensive and requires a GPU with at least 16 GB of VRAM. If your hardware falls below this specification, the process will not work.
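As a quick pre-flight check before attempting training, you can verify that your GPU reports at least 16 GB of memory. The sketch below assumes PyTorch with CUDA support (the framework is an assumption, not specified by these docs); the threshold helper itself is framework-agnostic.

```python
# Pre-flight VRAM check. PyTorch/CUDA is an assumed setup; adapt the
# check_gpu() body if you use a different framework.
MIN_VRAM_GB = 16  # minimum recommended for local fine-tuning


def meets_vram_requirement(total_bytes: int, min_gb: int = MIN_VRAM_GB) -> bool:
    """Return True if the reported device memory meets the minimum."""
    return total_bytes >= min_gb * 1024**3


def check_gpu() -> None:
    import torch  # imported lazily so the helper above runs without a GPU

    if not torch.cuda.is_available():
        print("No CUDA GPU detected; local training will not work.")
        return
    props = torch.cuda.get_device_properties(0)
    gb = props.total_memory / 1024**3
    ok = meets_vram_requirement(props.total_memory)
    print(f"{props.name}: {gb:.1f} GB VRAM -> "
          f"{'OK' if ok else 'below the 16 GB minimum'}")
```

Running `check_gpu()` prints the detected device and whether it clears the 16 GB bar; on a machine without a CUDA GPU it reports that local training is not possible.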

How to get started

Local fine-tuning is not yet supported. When it becomes available, it will be published in the GitHub repo.

This documentation will be updated once the feature is available.