-
Would it be possible to use GPT4All instead of OpenAI?
-
Second that.
-
I'd love to have an option to combine multiple models or select from them.
-
Just my 2 cents, but how difficult would it be to have it run through the normal ChatGPT interface instead of the API? I personally have a paid account, which gives you unlimited GPT-3.5 and twenty-something messages every 3-4 hours in GPT-4, as most people here are probably aware. It might seem like a bit of an old-fashioned way of doing it, but instead of running it through the API, which uses tokens, would it be possible to have it run a script which opens a browser (or a similar function for people running it locally) and use it like normal? I haven't even played around with it yet, I've only seen videos, but the whole idea of burning through tokens isn't overly appealing when I'm already paying for GPT-4. If I get some time in the next few days I might see what I can come up with.
-
Check out the issue I posted here: #461. Newer, more powerful, yet still lightweight and nearly free models are out now.
-
An Auto-GPT running only on my 13B model would be nice. Not very powerful, but as I see it, it could do the trick: letting me use natural language to communicate with my AI for advanced reasoning, planning, and execution of my tasks, adding code ad hoc. Simple tasks, albeit useful ones.
-
Could we swap GPT-4 for the Open Assistant model released on April 15? It doesn't require any API key and can be run locally.
-
My vote would be to integrate with oobabooga/text-generation-webui, which can easily manage various AI models locally on your machine. And with the advancements in Vicuna, LoRA, 4-bit quantization, and cpp models, what's possible right now is amazing. It has been claimed, though I don't know if there is a truly objective way to measure this, that Vicuna reaches 90% of GPT-4's quality (and others say at least the level of GPT-3.5).
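For what it's worth, a minimal sketch of how Auto-GPT could call a local text-generation-webui instance instead of OpenAI, assuming the webui was started with its API extension enabled. The endpoint path and payload fields here vary between webui versions, so treat them as illustrative, not a fixed API:

```python
# Sketch: query a local text-generation-webui instance instead of OpenAI.
# Assumes the webui runs with its API extension enabled and exposes a
# /api/v1/generate endpoint on port 5000; the URL and payload fields
# differ between versions, so adjust to your install.
import requests

def local_completion(prompt: str, max_new_tokens: int = 200) -> str:
    response = requests.post(
        "http://localhost:5000/api/v1/generate",
        json={"prompt": prompt, "max_new_tokens": max_new_tokens},
        timeout=120,
    )
    response.raise_for_status()
    # This API version returns {"results": [{"text": "..."}]}.
    return response.json()["results"][0]["text"]

if __name__ == "__main__":
    print(local_completion("Explain what Auto-GPT does in one sentence."))
```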
-
We could have the local agent communicate with another GPT model running on the same machine or on a second machine.
Objective: Create a locally hosted GPT-Neo chatbot that can be accessed by another program running on a different system within the same Wi-Fi network. This replaces the current dependency on OpenAI's API, so the chatbot can be used without an API key and without internet access to OpenAI's servers.
Project workflow:
1. Set up the local environment and install the necessary Python libraries, including Transformers and Flask.
2. Create a Flask app with a single endpoint (e.g., /generate) to handle chatbot requests.
3. Run the Flask app on the local machine, making it accessible over the network via the machine's local IP address.
4. Update the program to send requests to the locally hosted GPT-Neo model instead of using the OpenAI API.
5. Ensure that the other system can successfully send requests to the locally hosted GPT-Neo chatbot and receive accurate responses.
6. Write a detailed project description, including the objective, workflow, and any necessary setup and usage instructions.
If you decide to host both projects on the same machine, the workflow becomes more straightforward: you can integrate the GPT-Neo model directly into the other program without a separate Flask web service. In that case:
1. Set up the local environment and install the necessary Python libraries, including Transformers.
2. Update the program to load the GPT-Neo model directly instead of making API calls to OpenAI.
3. Ensure the program can successfully use the locally hosted GPT-Neo model and receive accurate responses.
4. Write a detailed project description, including the objective, workflow, and any necessary setup and usage instructions.
A minimal sketch of the networked variant is below.
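Here is what that Flask service could look like, assuming Transformers and Flask are installed. The model name (EleutherAI/gpt-neo-1.3B), port, and JSON shape are illustrative choices, not fixed parts of the proposal:

```python
# Minimal sketch of the local GPT-Neo service described above.
# The model name, port, and request/response JSON shape are assumptions.
from flask import Flask, jsonify, request
from transformers import pipeline

app = Flask(__name__)
# Load GPT-Neo once at startup; first run downloads the weights.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

@app.route("/generate", methods=["POST"])
def generate():
    prompt = request.json.get("prompt", "")
    result = generator(prompt, max_new_tokens=200, do_sample=True)
    return jsonify({"text": result[0]["generated_text"]})

if __name__ == "__main__":
    # 0.0.0.0 makes the service reachable from other machines on the LAN.
    app.run(host="0.0.0.0", port=5000)
```

A client on the second machine would then POST JSON like {"prompt": "..."} to http://<local-ip>:5000/generate and read the "text" field of the response.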
-
It's been done already with AutoLLM: https://github.com/kabeer11000/autollm. My conclusion was that the best configuration of LLMs would be something like: default to Vicuna 8-bit so we can get the 4096-token context, something like CodeGen 7B specifically for writing code, and maybe GPT4-X-Alpaca for brainstorming/creative writing. Specific models for specific purposes, with a default model; a sketch of that routing idea is below.
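To make the idea concrete, a minimal sketch of per-task model routing. The task categories and model names are illustrative assumptions, not AutoLLM's actual configuration format:

```python
# Sketch: "specific models for specific purposes with a default model".
# Categories and model names below are illustrative assumptions.
TASK_MODELS = {
    "code": "codegen-7b",          # code-writing tasks
    "creative": "gpt4-x-alpaca",   # brainstorming / creative writing
}
DEFAULT_MODEL = "vicuna-13b-8bit"  # fallback for everything else

def pick_model(task_type: str) -> str:
    """Return the model name to use for a given task type."""
    return TASK_MODELS.get(task_type, DEFAULT_MODEL)

assert pick_model("code") == "codegen-7b"
assert pick_model("planning") == "vicuna-13b-8bit"
```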
-
Merging this with #34
-
I wanted to ask the community what you would think of an Auto-GPT that could run locally. Currently I have the feeling that we rely on a lot of external services, including OpenAI (of course), ElevenLabs, and Pinecone.
I personally think it would be beneficial to be able to run it locally, for a variety of reasons.
I think we should strive for a fully local Auto-GPT. What do you think? Is this something you feel is important too?