⚠️ High-performance GPU required
Training your own LLM locally is very resource-intensive and requires a GPU with at least 16GB of VRAM. If your hardware falls below this specification, the process will not work.
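Since the 16GB floor is a hard requirement, it is worth verifying your GPU before attempting a run. The sketch below is one way to do that; it assumes PyTorch is installed (any tool that reports total VRAM, such as `nvidia-smi`, works equally well) and degrades gracefully when no CUDA device is present.

```python
REQUIRED_VRAM_GB = 16  # minimum stated in these docs


def meets_requirement(total_vram_bytes: int, required_gb: int = REQUIRED_VRAM_GB) -> bool:
    """Return True if a GPU with `total_vram_bytes` meets the VRAM minimum."""
    return total_vram_bytes >= required_gb * 1024**3


def check_gpu() -> None:
    # PyTorch is an assumption here, not a requirement of AnythingLLM itself.
    try:
        import torch
    except ImportError:
        print("PyTorch not installed; cannot query VRAM.")
        return
    if not torch.cuda.is_available():
        print("No CUDA-capable GPU detected.")
        return
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    verdict = "OK" if meets_requirement(props.total_memory) else "below the 16GB minimum"
    print(f"{props.name}: {vram_gb:.1f} GB VRAM -> {verdict}")


if __name__ == "__main__":
    check_gpu()
```

Running the script prints your GPU name, its total VRAM, and whether it clears the minimum.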
How to get started
Local fine-tuning is not yet available; when released, it will live in the GitHub repo.
This documentation will be updated once it is available.