Use local models instead of API-GPT #223
-
This is tricky because training a local model is resource-intensive, and none of the ones out there come anywhere close to GPT-4's quality.
-
Someone should ask Auto-GPT to rewrite Auto-GPT so it can work with local Vicuna or Alpaca models :) done
-
Better but more resource-intensive: https://www.youtube.com/watch?v=jb4r1CL2tcc&ab_channel=MartinThissen
With virtual memory you don't need 30 GB of RAM.
Model comparisons: https://chat.lmsys.org/
-
Please consider that there is active work on getting plugins working, especially by @BillSchumacher, who is making good headway there and has also managed to hook Auto-GPT up to Vicuna (I think). The idea is that such a connection to a local LLM is then made through a plugin. This avoids bloating Auto-GPT and also lets the community participate much more easily by focusing on plugins. A rough sketch of what such a plugin could look like is below.
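To make the idea concrete, here is a minimal sketch of a plugin that intercepts chat-completion calls and routes them to a locally served model. The hook names (`can_handle_chat_completion` / `handle_chat_completion`) mirror my understanding of the Auto-GPT plugin template, and the local endpoint URL and its request/response shape are assumptions — check the actual `AutoGPTPluginTemplate` interface and your local server before building on this.

```python
# Sketch: route Auto-GPT's chat completions to a local LLM instead of OpenAI.
# The hook names below are assumed to match the Auto-GPT plugin template;
# verify against the real AutoGPTPluginTemplate before relying on them.
import requests

# Hypothetical endpoint of a locally served Vicuna model.
LOCAL_VICUNA_URL = "http://127.0.0.1:5000/api/v1/generate"


class LocalLLMPlugin:
    """Claims every chat-completion call and answers it with a local model."""

    def can_handle_chat_completion(self, messages, model, temperature, max_tokens):
        # Claim all chat-completion calls so nothing goes out to OpenAI.
        return True

    def handle_chat_completion(self, messages, model, temperature, max_tokens):
        # Flatten the OpenAI-style message list into one plain-text prompt.
        prompt = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
        resp = requests.post(LOCAL_VICUNA_URL, json={
            "prompt": prompt,
            "temperature": temperature,
            "max_new_tokens": max_tokens,
        })
        # Assumed response shape: {"results": [{"text": "..."}]}.
        return resp.json()["results"][0]["text"]
```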
-
I have a question: do the models need to be fine-tuned like a chatbot, or do plain base models work better for this task?
-
I think if we imitate the OpenAI REST API and use it as middleware in front of the oobabooga text-generation-webui, we can just point Auto-GPT at the API replica. Just an idea, I am not doing this. If this makes it into the main branch, we could use it by changing a param in the .env file to the name of the local LLM, and the system would understand it and point to a local REST server connected to oobabooga. Something like the sketch below.
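Here is a minimal sketch of that middleware: a small Flask server exposing the OpenAI `/v1/chat/completions` route and forwarding the prompt to a local text-generation-webui instance. The webui URL, its `/api/v1/generate` endpoint, and the request/response shapes are assumptions based on how its API extension works — adjust them to your local setup.

```python
# Sketch: fake the OpenAI chat-completions endpoint and forward requests to a
# local text-generation-webui. Endpoint paths and payload shapes are assumptions.
import time

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Assumed address of the local text-generation-webui generate API.
LOCAL_LLM_API = "http://127.0.0.1:5000/api/v1/generate"


def messages_to_prompt(messages):
    """Flatten OpenAI-style chat messages into a single plain-text prompt."""
    lines = [f"{m['role']}: {m['content']}" for m in messages]
    lines.append("assistant:")
    return "\n".join(lines)


@app.route("/v1/chat/completions", methods=["POST"])
def chat_completions():
    body = request.get_json()
    prompt = messages_to_prompt(body.get("messages", []))

    # Forward to the local model; payload shape assumed from the webui API.
    resp = requests.post(LOCAL_LLM_API, json={
        "prompt": prompt,
        "max_new_tokens": body.get("max_tokens", 512),
    })
    text = resp.json()["results"][0]["text"]

    # Reply in the shape Auto-GPT expects from the OpenAI API.
    return jsonify({
        "id": "chatcmpl-local",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": body.get("model", "local-llm"),
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": "stop",
        }],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    })


if __name__ == "__main__":
    app.run(port=8000)
```

Then you would point Auto-GPT's .env at `http://localhost:8000/v1` via whatever API-base override the current version supports (the exact variable name depends on the Auto-GPT version), and it should never know it isn't talking to OpenAI.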
-
Duplicate of #395 |
-
Do you guys think this is within reach? Hackathon...