Replies: 2 comments 1 reply
-
I 100% agree
1 reply
-
Duplicate of #34
0 replies
-
GPT-4 will become limited, and has already shown flaws that Auto-GPT can't fix. On top of this, it relies on closed-source data and keys that belong to OpenAI, so there will always be a cost attached to it, whether in the AI itself or an actual monetary cost, e.g. its token usage.
Local LLMs will be able to circumvent these issues and make for an all-round better Auto-GPT, due to the nature of open-source trained models.
Could Auto-GPT be integrated with a custom-tuned LLM, trained specifically for use with Auto-GPT?
Sure, they're not as well developed now, but they're rapidly getting to that point, especially for specific use cases.
Whether this is integrated as a plugin first, making alternative calls to locally running LLMs, or via full integration, either way it would make Auto-GPT much more versatile and useful.
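The plugin route could be fairly small, since many local servers (e.g. llama.cpp's server mode) expose an OpenAI-compatible `/v1/chat/completions` endpoint. A minimal sketch of what such a dispatcher might look like, assuming a hypothetical local server at `localhost:8080` — the names `LOCAL_BASE_URL`, `resolve_endpoint`, and `build_request` are illustrative, not part of any real Auto-GPT plugin API:

```python
# Hypothetical sketch: route chat-completion calls either to OpenAI or to a
# local OpenAI-compatible server, so the rest of the agent code is unchanged.
import json
import urllib.request

OPENAI_BASE_URL = "https://api.openai.com/v1"
LOCAL_BASE_URL = "http://localhost:8080/v1"  # assumed local server address


def resolve_endpoint(use_local: bool) -> str:
    """Pick the chat-completions URL: local model if requested, else OpenAI."""
    base = LOCAL_BASE_URL if use_local else OPENAI_BASE_URL
    return f"{base}/chat/completions"


def build_request(messages, use_local: bool,
                  model: str = "gpt-4", api_key: str = "") -> urllib.request.Request:
    """Build the HTTP request; local servers typically need no API key."""
    url = resolve_endpoint(use_local)
    headers = {"Content-Type": "application/json"}
    if not use_local:
        headers["Authorization"] = f"Bearer {api_key}"
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(url, data=body, headers=headers)
```

Because both back ends speak the same request shape, swapping the base URL is the only real change, which is what makes the plugin-first approach attractive.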