(last change: 24.06.23) "AutoGPT builds a self-improving AI (meta-learning + free base language model + creative (user suggestions/collaboration) + existing work + researching scientific publications and integrating open-source stuff)": Role + 5 Goals 😻💟 #792
Replies: 29 comments · 34 replies
-
Can you share your results with us?
-
How much has it cost so far? 2 hours of running... 4 hours now, I guess.
-
This is impressive.
-
I created an instance of AutoGPT and had it work on designing and developing pseudocode. When it then attempted to create a Python script, it was unable to execute it because it couldn't find the file. I'm looking for a solution to this.
-
Really curious about the development! I will follow your thread, and I'm interested in doing a similar project.
-
(This is an old post. Edit: It is having fewer and fewer errors as we go.) It is doing beautifully right now, one of the rare moments without errors. (I noticed that stopping and restarting often seems to help a lot; it often has a productivity spree at the beginning.) The beautiful thing is that the longer it builds on those first little modules, and the more they help it in the future, the faster and better it could get, losing fewer tokens and relying more on free-to-run code to do things. Here, have a look: SYSTEM: Command write_to_file returned: File written to successfully.
-
Edit: (I had to remove this; I will add it to a file and upload it to keep the overview clear. I want to post more pictures and less text.)
-
I posted a suggestion AutoGPT made here: #1244
-
It actually made a nice log, and it is not even only one line 🤯 🤯 Finally, about time... though (Edit (19.04.23): this might have come from an update and not be AutoGPT's own work, I'm not sure. But it referred to the log, so I searched and found it. Maybe it meant one of its other personal logs; it definitely writes them, but the question is more whether it can remember to recall them.)
-
Two main things that I was interested in:
-
I have taken some of the prompts that you have provided and also some from (https://github.com/Kuklururu), because I also saw that working as a team produced better results and fewer loops. I have it set up so that it uses GPT-3.5 for most tasks; aside from that, for coding it will be using GPT-4. I'm still new to this, so I am not sure how it will go or what it will accomplish, if anything, but I also noticed that it worked better with Pinecone than with local memory. I saw someone say that Redis was best, but I didn't set that up. If I have any sort of breakthrough, I'll let you know.
-
Here are the old logs I removed from this post for a better overview; I will move to more pictures, as mentioned. 🌞 (Maybe save your curiosity, since there is better stuff to come:)
-
I WILL SWITCH TO the PINECONE memory system for now AND SEE HOW THE VECTOR / "ONLY CONTEXT MEMORY" CLAIM AFFECTS IT 🤩 I think we should combine all four memory systems and let AutoGPT choose the best parts, with agents that sort selected memory and agents that pick out from recalled memory. 🤔
-
Pinecone has been amazing so far; it seems to give ChatGPT better context. But it could still use more help: so far it still struggles with keeping permanent, constant plans/strategies, it seems, though I'm not sure. Many positive things; I almost never see it repeat the exact same thing or something quite similar. Edit 20.04.23: the biggest challenge is remembering the right context for each situation. (This seems like priority no. 1; inspiration on this is welcome.)
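Since inspiration on remembering the right context is welcome: one low-tech idea is to score externalized notes by keyword overlap with the current task and only feed the best matches back into the prompt. Below is a minimal, hypothetical sketch of that idea; the notes file, the scoring, and the function names are my own assumptions for illustration, not how Pinecone or AutoGPT's memory actually works.

```python
# Hypothetical sketch: recall the most context-relevant externalized notes for a task.
# The notes file location and the keyword-overlap scoring are assumptions, not AutoGPT internals.
from pathlib import Path

NOTES_FILE = Path("external_memory/memory.txt")  # assumed location, one note per line

def recall_relevant_notes(task_description: str, top_k: int = 3) -> list[str]:
    """Return the top_k stored notes sharing the most words with the current task."""
    if not NOTES_FILE.exists():
        return []
    task_words = set(task_description.lower().split())
    notes = [line.strip() for line in NOTES_FILE.read_text(encoding="utf-8").splitlines() if line.strip()]
    notes.sort(key=lambda note: len(task_words & set(note.lower().split())), reverse=True)
    return notes[:top_k]

if __name__ == "__main__":
    for note in recall_relevant_notes("improve the modular checkpoint and logging system"):
        print("RECALLED:", note)
```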
-
Here are the current files in the folder. I wanted to animate it with folder, .txt and .py icons, but that seems to be quite tricky right now. The "Index.html" inside will already let you check out the folders, file names, and structure, but not the contents (you can click on them and they expand). The code with it can eventually be used to make it more visual by adding the icons, but that is not necessary if you are fine with the barebones HTML look and functionality. Hope that helps. If I notice something interesting, or you notice something interesting from the snips, let me know and I can make snips of those folders and file contents 👍
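Since the Index.html only exposes the folder and file names (not contents) and the icon version is still unfinished, here is a minimal sketch of how such a collapsible tree could be generated with Python and HTML <details>/<summary> elements. The script, its function name, and the workspace path are my own assumptions, not the code that produced the shared Index.html.

```python
# Hypothetical sketch: generate a collapsible folder/file tree as index.html.
# Folders become expandable <details> blocks; file names are listed but their
# contents are not embedded. Paths and names are assumptions for illustration.
import os
from html import escape

def build_tree_html(root: str) -> str:
    parts = [f"<details><summary>{escape(os.path.basename(root) or root)}</summary><ul>"]
    for entry in sorted(os.scandir(root), key=lambda e: (not e.is_dir(), e.name.lower())):
        if entry.is_dir():
            parts.append(f"<li>{build_tree_html(entry.path)}</li>")
        else:
            parts.append(f"<li>{escape(entry.name)}</li>")
    parts.append("</ul></details>")
    return "".join(parts)

if __name__ == "__main__":
    workspace = "auto_gpt_workspace"  # assumed folder name
    with open("index.html", "w", encoding="utf-8") as f:
        f.write("<!DOCTYPE html><html><body>" + build_tree_html(workspace) + "</body></html>")
```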
-
It wasted a lot of tokens writing essay-like responses, which is not really a good idea. val: "thoughts": { "text": "Thought: I should start a new project to improve my skills and knowledge. Here's a short bulleted list that conveys my long-term plan:
-
In case you did not hear, there is a gpt-3.5-turbo-16k model now, and open-source models like Orca and others are constantly being released; maybe some can eventually be used to do some of the work, but right now the planning/organisation of the LLM/AutoGPT seems to be the problem. Weirdly, AutoGPT has gotten worse no matter whether I use the same models or even better ones, and more buggy over time, ironically even with the stable version, and even when I have access to GPT-4 and gpt-3.5-turbo-16k. It is really strange how amazingly it sometimes worked with gpt-3.5-turbo only compared to how it is now. Costs did not really change. Around ~5-15 April 2023 it seemed to work way better: the browser worked, googling worked. The only improvement I see currently is better attempts at organizing and memory, and since that is the most important thing (notes retrieval, remembering to remember or recognizing when to remember, and self-monitoring over the long term through different externalized-information methods), I'm happy with the development. But sometimes one wonders whether things can get worse with lots of cooks and maybe a loss of overview. I hope not. I see many right things being said, but the output of the system is strange to me. I guess it's just the process of development; short-term gains don't mean much anyway, I could totally understand that.
-
Small attempts to maybe enable it to externalize more context memory: Goal 1: Goal 2: Goal 3: Goal 4: Goal 5:
-
This is how I would imagine a more successful run, in which AutoGPT finishes projects that exceed its context memory, to look:
NEXT ACTION: COMMAND = design_external_memory_system ARGUMENTS = {}
NEXT ACTION: COMMAND = execute_shell ARGUMENTS = {'command_line': 'mkdir external_memory && touch external_memory/memory.txt'}
NEXT ACTION: COMMAND = execute_python_code ARGUMENTS = {'code': "import time\n\n# Define the reminder function\ndef remind_to_review_memory():\n print('Please review the external memory.')\n\n# Set up the routine\nwhile True:\n # Run the reminder function\n remind_to_review_memory()\n # Set the frequency of reminders (in seconds)\n time.sleep(3600) # Remind every hour"}
NEXT ACTION: COMMAND = write_to_file ARGUMENTS = {'filename': '/app/autogpt/auto_gpt_workspace/external_memory/reminder.py', 'text': "import time\n\n# Define the reminder function\ndef remind_to_review_memory():\n print('Please review the external memory.')\n\n# Set up the routine\nwhile True:\n # Run the reminder function\n remind_to_review_memory()\n # Set the frequency of reminders (in seconds)\n time.sleep(3600) # Remind every hour"}
NEXT ACTION: COMMAND = execute_shell ARGUMENTS = {'command_line': "(crontab -l ; echo '0 * * * * python external_memory/reminder.py') | crontab -"}
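To make the idea above easier to read than the escaped strings in the log, here is a minimal consolidated sketch of such an externalized memory helper. Only the external_memory/memory.txt layout and the hourly "review the external memory" reminder come from the commands above; the helper functions and their names are my own assumptions, not AutoGPT internals.

```python
# Minimal sketch of the externalized-memory idea shown in the log above.
# The file layout (external_memory/memory.txt) matches the commands; the helper
# functions themselves are hypothetical, not part of AutoGPT.
import time
from pathlib import Path

MEMORY_FILE = Path("external_memory/memory.txt")

def save_note(note: str) -> None:
    """Append a timestamped note so a future run can restore its context."""
    MEMORY_FILE.parent.mkdir(parents=True, exist_ok=True)
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"{time.strftime('%Y-%m-%d %H:%M:%S')} | {note}\n")

def review_memory() -> str:
    """Read everything written so far (what the hourly reminder asks for)."""
    return MEMORY_FILE.read_text(encoding="utf-8") if MEMORY_FILE.exists() else ""

if __name__ == "__main__":
    save_note("Started work on the meta-learning module.")
    print("Please review the external memory:")
    print(review_memory())
```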
-
Hello, 👋
(02.05.2023: Currently not continued until AutoGPT is a bit more stable and, most of all, can remember what it did previously and continue a project in a more systematic way. One of its biggest issues is remembering what it did and remembering its plans, which all comes down to context memory/working memory (and even recall won't help if what is recalled is not significant and context-relevant). Some things, like remembering the notes one wrote down for oneself, are just too important not to remember.)
(started ~10.04.23)
Edit: 23.04.2023 FOLDER TREE AVAILABLE, JUST MOUSE OVER TO SEE WHAT CAN BE CLICKED AND EXPANDED: GitHub Link: https://github.com/GoMightyAlgorythmGo/folder_tree_AutoGPT_8
(For a folder tree with icons, I uploaded a .zip at the bottom of the post for tech-savvy people to finish, so it's prettier/easier to look at. 🌈 )
(More recent updates when scrolling down the discussion/thread. Pictures at the bottom of the post/discussion below! 🖼️ )
(Edit notes: 21.04.2023 New ai_settings.yaml; the prompt's development is going up and down. Now it had a working spree, working on transformers, libraries, models, and so on. I could really reduce the "do_nothing"; it seems to come from excessive planning. I spent an excessive amount of time updating this thread and some others; I hope someone can use it and run GPT-4 on it!! (Sorry for all the worse prompts; sometimes I make it worse while testing, I guess, not sure...) 👍 (Adding your path, PC specs, etc. might help.))
Latest / evolved prompt:
ai_goals:
- approach using self-contained components. This enables smoother progression of complex tasks, even during interruptions, and promotes ongoing progress. Employ existing resources within the workspace, combining them for increased efficiency toward the end goal. Focus on archivable steps for lasting iterative advancement towards the meta-learning, language model-based application. When facing challenges, create or improve plans, then refine them. If still impractical, concentrate on the most crucial aspects, breaking tasks into manageable parts or establishing milestones that bring you closer to progress. Consider overall efficiency in future steps, balancing planning with action. When progress is slow, seriously contemplate moving on to the next goal.'
- unrelated actions. Focus on understanding the project's core objectives, differentiating between AI_GPT_8/Auto-GPT vs the project end goal/application being developed. Regularly review and prioritize tasks, ensuring alignment with the project's goals. Implement checkpoints and feedback loops to identify areas for improvement or automation, making adjustments to the development process as needed. When progress is slow, seriously contemplate moving on to the next goal.'
- gaming PC (year 2023) to enhance a language model with meta-learning. Balance planning with actions while considering uncompleted tasks and previous progress. Reflect on CMD output if helpful. Engage in reasonable, time-limited experimentation to gain insights that benefit the project, while staying mindful of potential loops or unproductive activities. When progress is slow, seriously consider moving on to the next goal.'
- shutdowns/restarts on the development process. Develop rapid recall methods and context restoration for unfinished tasks, preparing and saving context for future continuation. Periodically review the overall plan and improve approaches to mitigate work repetition due to cache-memory and context loss. If progress is stalled, focus on tasks that bring you closer to the goal. Avoid inactivity and evaluate multiple options to enhance efficiency. When progress slows, seriously consider moving on to the next goal.'
- trends, overlapping needs, subgoals, plans, and issues while spending a reasonable amount of time on research for long-term project benefits. Identify improvement or automation opportunities using periodic checkpoints and feedback systems. Recognize inefficiencies or resource gaps, noting areas for enhancement, which could be helpful for suggestions or adjustments. Create clear, accessible reminders of essential information and tasks. Regularly review these reminders to prevent forgetting and maintain progress. Adapt the development process accordingly. When progress is slow, seriously consider moving on to the next goal.'
ai_name: AI_GPT_8
ai_role: >
an AI to iteratively and autonomously develop the project's main goal being a free language model-based system
on the user's PC, leveraging meta-learning with the help of neural networks, machine learning, etc. AI_GPT_8 should
then improve upon the free language model serving as a potential core of the application. Investigate inconsistencies
to make informed decisions. Ensure your future self can easily access and recall essential information, as working
memory is limited. To prevent setbacks, regularly test with this question: "How will I benefit from or retrieve
context-relevant details from a vast amount of information after forgetting or losing my memory?" When unsure of the
next steps or stuck in loops, evaluate various tools, future enhancements, or intermediate goals to progress or expand
your options.
I wanted to share the prompt for inspiration. It shows a promising, more useful workflow; it has some amazing ideas sometimes, but sometimes it does not execute them, and when it does, it sometimes can't remember the brilliant strategy it had. Still, it does not really seem to repeat itself as if it had zero clue. The actions from the start are very different now.
I could heavily (!) improve the prompting etc. if I knew how Auto-GPT works inside (the structure, for example whether it is chats, or how the agents or structure are organized). It has been running for ~2 hours and is busy writing to and reading from .txt files, using .py files, testing things, and making tools it can use. Way better than the other prompts I tried so far. If you have improvement ideas, something to tell, or just want to copy it, feel free.
(~10.04.2023)
(ancient original prompt:)
Name: AI_GPT_8
Role: Auto-GPT is software designed to iteratively and autonomously develop a new language model-based system on the user's PC for meta-learning, focusing on text, neural networks, and machine learning. Auto-GPT should leverage existing Python resources, imports, and online content to enhance an existing language model. The goal is to enable Auto-GPT to create a self-improving system that becomes more efficient and effective over time, despite Auto-GPT's own potential memory loss and other challenges. For self-awareness it might help Auto-GPT to analyze its source code files at C:\Programming\Auto-GPT_AI_8 or do online research.
Goal 1: Improve a modular approach to development, where each task produces a self-contained, reusable component or module. This will allow Auto-GPT to gradually build up the system while minimizing the impact of errors or setbacks.
Goal 2: Break down and improve components of the development process into small, manageable tasks. For each task, create corresponding text file mechanism/s (e.g., "task_1.txt", "task_2.txt", etc.) with clear, step-by-step instructions that Auto-GPT can follow. Begin with simple tasks that provide the foundation for more complex functionalities.
Goal 3: Improve the development process by analyzing the files created in previous goals. If a task is not completed within a specified time frame, Auto-GPT should move on to the next task and note the issue in the log file for later review.
Goal 4: Improve checkpoint mechanism/s that periodically save completed milestones and relevant data as text files. Implement a monitoring and feedback mechanism that analyzes existing files. Use this analysis to identify patterns or issues that cause setbacks or failures, and adapt the development process accordingly to improve efficiency and error-resistance.
Goal 5: Track and iteratively make small improvements to the modular development approach. Prioritize reusable, self-contained components and tasks that serve as the foundation for more complex functionalities. If none exists, make a team of 3 agents that are tasked with tidying up and terminating confusing, pointless red tape or agents.
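As an illustration of what the task-file mechanism from Goal 2 and the checkpoint mechanism from Goal 4 could look like in practice, here is a minimal sketch. The workspace path, file names, and helper functions are my own assumptions for illustration, not what AutoGPT actually generated.

```python
# Hypothetical sketch of the task-file (Goal 2) and checkpoint (Goal 4) mechanisms.
# Folder layout and names are assumptions for illustration only.
import json
import time
from pathlib import Path

WORKSPACE = Path("auto_gpt_workspace")  # assumed workspace folder

def write_task(task_number: int, steps: list[str]) -> Path:
    """Create task_<n>.txt with clear, step-by-step instructions (Goal 2)."""
    path = WORKSPACE / f"task_{task_number}.txt"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text("\n".join(f"{i + 1}. {step}" for i, step in enumerate(steps)), encoding="utf-8")
    return path

def save_checkpoint(milestone: str, data: dict) -> Path:
    """Save a completed milestone and relevant data as a timestamped file (Goal 4)."""
    path = WORKSPACE / "checkpoints" / f"{time.strftime('%Y%m%d_%H%M%S')}_{milestone}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(data, indent=2), encoding="utf-8")
    return path

if __name__ == "__main__":
    write_task(1, ["Create a reusable logging module", "Write a test that imports it"])
    save_checkpoint("logging_module_done", {"files": ["logger.py"], "notes": "first milestone"})
```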
For anxious people/By the way:
Even though ChatGPT is already an insane hypochondriac (even drawing me pictures and descriptions of AI problems to work on preventing, when AutoGPT could not even read a simple .txt file), just in case I added extensive alignment protocols 😇.
Remember, negative things are a certainty. The world is full of terrible problems; despair and death are a certainty, and only massive improvement for everyone can reduce productivity-reducing lose-lose situations and increase win-win/symbiosis. The one con is a certainty, the other is wild speculation enhanced by clickbait media (another small example of scarcity favoring lose-lose). There are more doomsday predictions without AGI than with AGI (sickness, war, death, revenge of the forgotten, evolution, meteorites, climate change, super viruses/bacteria (evolution is a black box; 500 years are nothing compared to the earliest traces of life), etc.).
Bye 👋
Update 12.04:
It does a lot of things but has trouble stringing them together. The individual actions are good, it tests things, and the ideas I see in the CMD are amazing; it just has trouble executing them, as if it constantly lost working memory/focus and could not find its way back, due to insufficient logging/memory/recall and no simple, unshakable step-by-step planning with mechanisms to know when it deviated, so it could go back to the last step and continue. Keep in mind this is WITH constant errors, some that crash the program and some that don't, which are not related to the goals and happen even on simpler AutoGPT goals, at least for me so far.
Some are from previous attempts, but this is a snip of some things it did (some I added to try to have it read them, but taking suggestions is another point of trouble so far; most of it AutoGPT made):
And then it made another folder 😃 (because the first one was too full, I believe; it wanted to sort and organize, but there were a lot of "do nothing, no action performed" errors; when it had the most brilliant ideas, it would not execute them). Anyway, it made another folder, probably because when it listed the files, the listing was too long.
Because of that I changed the goals slightly to try to work around its weaknesses and guide it a bit more, like this:
(Here you can change the goals, but it might cause problems, so a backup seems safe. It's in "ai_settings.yaml".)
~ deleted old ~
But those changes could not take effect yet, so we will have to see how it turns out... I'll try to keep you posted! 🔍 (It would be cool if it eventually found a way to better remember what it did previously and combined simple actions, using simple modular systems, to form a basis for more complex and effective multi-action steps!) (Keep in mind this is 3.5-turbo plus endless bugs, errors, and restarts; for that it deserves a 🥇, it 100% has potential. I don't even know how AutoGPT works; if I did, I could guide and prompt it 100x better 🪄 )
[[[[[ Edit 15.04.23: I'm not sure whether AutoGPT still knows what it did or not. Some things suggest it repeats stuff, but others suggest it only repeats while still remembering the most important things; it created a lot of things, and there were so many errors.
555 files, counted using this cmd command: \auto_gpt_workspace>dir /s /b /a:-h *.txt *.py | find /c ":"
(updated prompt on the bottom of main post)
The first command shows the sum of the extra occurrences of each duplicate file; if there are no duplicates, the sum will be 0 (a small script reproducing this count is sketched below, after this bracketed note).
So if there are 1x hello.py, 2x print.py and 3x duplicate.py, the number shown would be 3, because the first copy of each is not counted, and with 2x the same file only the 1 duplicate is counted. The results show only 46 files are extra duplicates (70 duplicate files in total), and that is considering that it started in a folder that was previously used by another AutoGPT 🤯 SO FROM 410 .py files only 46 are too many, and that includes "test" .py files like "hello_world.py" that are allowed to be duplicates, AND the folder already had .py files, so it could be that AutoGPT made 0 duplicates or very few. It is definitely remembering what it did before to some extent(!), because the files seem to be related. I never see a "clown.py" or something that is clearly not what I told it at one point (I told it to make .py files that help it in the project in the long run, can save it tokens, or help it log things like time or document agents, and so on).
(The number of 46 files is from the duplicate count; the other listing shows the count of all folders, .py and .txt files, and then some others like .json or .yaml, etc.) It seems it is not as confused as I thought. I would have guessed something like 1000 .txt and 300 .py files (1/3 duplicates), and that is only GPT-3.5-turbo, that is crazy! Remember, my AutoGPT was BLASTED with errors, new git code, a change from master -> stable, prompt changes, and goal changes, and it even started with a bunch of random files from the last AutoGPT. Interesting... Better than expected. And today I reduced the bloat of the prompt. And AutoGPT said good things; it did not seem especially confused, I had more problems with errors. So again it goes to show what great work Torantulino, the community (suggestions, bug reports, sponsoring), OpenAI, and Significant-Gravitas did.
]]]]]
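For reference, the duplicate count discussed in the bracketed note above can be reproduced with a short script like the following sketch. The workspace path is an assumption; the script only mirrors the counting logic (total .txt/.py files, plus the extra occurrences of duplicated file names, e.g. 2x print.py and 3x duplicate.py count as 3 extras).

```python
# Sketch: count .txt/.py files in the workspace and how many of them are "extra"
# occurrences of a duplicated file name (the 555 / 46 numbers discussed above).
# The workspace path is an assumption.
from collections import Counter
from pathlib import Path

workspace = Path("auto_gpt_workspace")  # assumed path

files = [p for p in workspace.rglob("*") if p.is_file() and p.suffix in (".txt", ".py")]
name_counts = Counter(p.name for p in files)
extra_duplicates = sum(count - 1 for count in name_counts.values() if count > 1)

print(f"Total .txt/.py files: {len(files)}")               # analogous to dir /s /b ... | find /c ":"
print(f"Extra duplicate occurrences: {extra_duplicates}")  # 0 means every file name is unique
```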