Releases: Significant-Gravitas/AutoGPT
AutoGPT v0.5.1
New Features ✨
- New OpenAI Models 🤖
  - gpt-4-turbo is now supported and set as the default SMART_LLM model.
  - Default FAST_LLM changed from gpt-3.5-turbo-16k to gpt-3.5-turbo.
  - Default EMBEDDING_MODEL changed from text-embedding-ada-002 to text-embedding-3-small.
  - Added support for gpt-4-0125-preview and gpt-4-turbo models.
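To make the new defaults concrete, here is a minimal sketch of overriding them through environment variables; the variable names and default values come from the notes above, while the config-loading code itself is illustrative rather than AutoGPT's actual implementation.

```python
import os

# Illustrative only: the v0.5.1 default model selection, overridable via environment variables.
SMART_LLM = os.getenv("SMART_LLM", "gpt-4-turbo")
FAST_LLM = os.getenv("FAST_LLM", "gpt-3.5-turbo")
EMBEDDING_MODEL = os.getenv("EMBEDDING_MODEL", "text-embedding-3-small")

print(f"smart={SMART_LLM} fast={FAST_LLM} embeddings={EMBEDDING_MODEL}")
```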
- Agent Protocol Server Enhancements 🌐
  - Made API server port configurable via the AP_SERVER_PORT environment variable.
  - Added documentation for configuring the API port.
  - Implemented task cost tracking and logging in the AgentProtocolServer.
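A small sketch of how the configurable port can be consumed; AP_SERVER_PORT is the variable documented above, while the fallback value of 8000 is an assumption used only for illustration.

```python
import os

# AP_SERVER_PORT comes from the release notes; the default of 8000 is an assumption.
port = int(os.getenv("AP_SERVER_PORT", "8000"))
print(f"Starting the Agent Protocol server on port {port}")
```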
- CLI Usability Improvements 💻
  - Added a check to ensure the specified API server port is available before running.
  - Improved handling of invalid or empty tasks provided by the user.
  - Display whether code execution is enabled on CLI startup.
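The port check can be pictured as a quick socket probe before the server starts; a generic sketch, not the exact implementation used by the CLI.

```python
import socket

def port_is_in_use(port: int, host: str = "localhost") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        return sock.connect_ex((host, port)) == 0

if port_is_in_use(8000):  # 8000 is just an example port
    raise SystemExit("Port 8000 is already in use; pick a different AP_SERVER_PORT.")
```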
- Web Browsing Enhancements 🌐
  - Added browser extensions to handle cookie walls and ads when using Selenium.
  - Added extract_information function to extract pieces of information from webpage content based on a list of topics of interest. The read_webpage command now supports topics_of_interest and get_raw_content parameters to leverage this capability.
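Conceptually, the two new parameters choose between targeted extraction and raw page content. The calls below are hypothetical illustrations of how an agent might invoke the command; the argument names follow the notes above, but the exact command schema may differ.

```python
# Hypothetical command invocations, for illustration only.
targeted = {
    "command": "read_webpage",
    "args": {
        "url": "https://example.com/pricing",
        "topics_of_interest": ["monthly price", "free tier limits"],
    },
}

raw = {
    "command": "read_webpage",
    "args": {"url": "https://example.com/pricing", "get_raw_content": True},
}
```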
- Telemetry & Error Tracking 📊
  - Integrated Sentry for telemetry and error tracking.
  - Added configuration flow and opt-in prompt for enabling telemetry.
  - Distinguish between production and dev environments based on VCS state.
  - Capture exceptions for LLM parsing errors and command failures.
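Sentry setup along these lines is typical; the opt-in flag name, DSN handling, and the clean-working-tree heuristic below are assumptions for illustration, not AutoGPT's exact code.

```python
import os
import subprocess

import sentry_sdk  # third-party: pip install sentry-sdk

def detect_environment() -> str:
    """Illustrative heuristic: a clean git working tree counts as 'production', local changes as 'dev'."""
    try:
        status = subprocess.run(
            ["git", "status", "--porcelain"], capture_output=True, text=True, check=True
        )
        return "dev" if status.stdout.strip() else "production"
    except (FileNotFoundError, subprocess.CalledProcessError):
        return "production"

if os.getenv("TELEMETRY_OPT_IN") == "true":  # hypothetical opt-in flag
    sentry_sdk.init(dsn=os.environ["SENTRY_DSN"], environment=detect_environment())
```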
- File Storage Abstraction 📂
  - Fully abstracted file storage access with the FileStorage class.
  - Renamed FileWorkspace to FileStorage and updated associated classes and methods.
  - Updated AgentManager and AgentProtocolServer to use the new FileStorage system.
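The shape of the abstraction is roughly a common interface that local, GCS, and S3 backends implement; the methods below are a minimal sketch under that assumption, not the actual FileStorage API.

```python
from abc import ABC, abstractmethod
from pathlib import Path

class FileStorage(ABC):
    """Minimal sketch of a storage interface; the real class exposes more operations."""

    @abstractmethod
    def read_file(self, path: str) -> str: ...

    @abstractmethod
    def write_file(self, path: str, content: str) -> None: ...

    @abstractmethod
    def list_files(self) -> list[str]: ...

class LocalFileStorage(FileStorage):
    """Local-disk backend; GCS and S3 backends would implement the same interface."""

    def __init__(self, root: str | Path):
        self.root = Path(root)

    def read_file(self, path: str) -> str:
        return (self.root / path).read_text()

    def write_file(self, path: str, content: str) -> None:
        target = self.root / path
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)

    def list_files(self) -> list[str]:
        return [str(p.relative_to(self.root)) for p in self.root.rglob("*") if p.is_file()]
```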
- History Compression 📜
  - Implemented history compression to reduce token usage and increase longevity when using models with limited context windows.
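The general idea behind history compression: once the message history grows past a budget, older steps are folded into a short summary while recent steps stay verbatim. A sketch with a placeholder summarizer, not AutoGPT's implementation:

```python
def compress_history(messages: list[str], summarize, keep_recent: int = 20) -> list[str]:
    """Fold everything but the most recent messages into one summary entry (illustrative).

    `summarize` stands in for an LLM call (e.g. using FAST_LLM) that condenses a list
    of messages into a short string.
    """
    if len(messages) <= keep_recent:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    return [f"Summary of earlier steps: {summarize(older)}"] + recent

# Trivial stand-in summarizer, just to show the shape of the result:
history = [f"step {i}" for i in range(50)]
print(compress_history(history, summarize=lambda msgs: f"{len(msgs)} steps condensed")[0])
```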
Fixes 🔧
- JSON Parsing Robustness
  - Implemented json_loads for more tolerant JSON parsing of LLM responses.
  - Updated extract_dict_from_response to handle both json and JSON blocks in responses.
  - Fixed boolean value decoding issues in extract_dict_from_response.
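Tolerant parsing usually means stripping Markdown fences and repairing common LLM slips (trailing commas, Python-style booleans) before decoding; the sketch below illustrates that approach and is not the actual json_loads implementation.

```python
import json
import re

def tolerant_json_loads(raw: str) -> dict:
    """Best-effort parsing of an LLM response that should contain a JSON object (illustrative)."""
    # Strip Markdown code fences (with or without a json/JSON label) if present.
    fenced = re.search(r"`{3}(?:json|JSON)?\s*(.*?)`{3}", raw, re.DOTALL)
    text = fenced.group(1) if fenced else raw
    # Keep only the outermost object and drop trailing commas, a common LLM slip.
    start, end = text.find("{"), text.rfind("}")
    if start != -1 and end != -1:
        text = text[start : end + 1]
    text = re.sub(r",\s*([}\]])", r"\1", text)
    # Normalize Python-style booleans, another frequent issue.
    text = re.sub(r"\bTrue\b", "true", text)
    text = re.sub(r"\bFalse\b", "false", text)
    return json.loads(text)

FENCE = "`" * 3  # built dynamically so this example renders cleanly
sample = FENCE + 'JSON\n{"done": False, "items": [1, 2,],}\n' + FENCE
print(tolerant_json_loads(sample))
```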
- Error Handling
  - Added error handling for loading non-existing agents.
  - Fixed handling of action_history-related exceptions in CLI and Server modes.
  - Implemented self-correction mechanism for invalid LLM responses by appending error messages to the prompt.
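The self-correction mechanism boils down to feeding the parse error back into the conversation and retrying; a hedged sketch assuming a generic chat-completion callable, not AutoGPT's actual interfaces.

```python
def get_valid_response(prompt: list[dict], complete, parse, max_retries: int = 3):
    """Retry the LLM call, appending the error message to the prompt after each failure (illustrative).

    `complete` is any chat-completion callable and `parse` raises ValueError on
    malformed output; both are placeholders.
    """
    messages = list(prompt)
    for _ in range(max_retries):
        raw = complete(messages)
        try:
            return parse(raw)
        except ValueError as error:
            messages.append({
                "role": "system",
                "content": f"Your last response was invalid: {error}. Answer again in the required format.",
            })
    raise RuntimeError("LLM kept returning invalid responses")
```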
- Artifact & File Handling
  - Fixed handling of artifact modifications by setting the agent_created attribute instead of registering a new Artifact.
  - Fixed the read_file command in GCS and S3 workspaces.
- Dependency Updates & Security Fixes
  - Updated aiohttp and fastapi dependencies to mitigate vulnerabilities.
  - Fixed Content-Type Header ReDoS vulnerabilities in python-multipart.
- TTY Mode Enhancements
  - Fixed finish command behavior in TTY mode.
  - Agent now properly raises AgentTerminated exception to exit the loop.
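In other words, the finish command now signals loop exit with an exception rather than a special return value; the run loop roughly follows the shape below (exception name from the notes, control flow illustrative).

```python
class AgentTerminated(Exception):
    """Raised by the finish command to end the agent's run loop (illustrative stand-in)."""

def run_interaction_loop(agent) -> None:
    while True:
        try:
            # Placeholders for the real propose/execute step API:
            action = agent.propose_action()
            agent.execute(action)
        except AgentTerminated:
            print("Agent finished its task; exiting the loop.")
            break
```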
- Miscellaneous Fixes
  - Fixed summarize_text and QueryLanguageModel abilities to handle OpenAI API changes.
  - Improved representation of optional command parameters in prompts.
  - Fixed GCS workspace binary file upload.
  - Moved auto-gpt-plugin-template to regular dependencies to fix missing module error.
Chores & Refactoring 🧹
- Updated agbenchmark and autogpt-forge dependencies across the project.
- Upgraded OpenAI library to v1 and refactored code to accommodate API changes.
- Improved logging by capturing errors raised during action execution.
- Cleaned up unused imports and fixed linting issues across the codebase.
- Sped up test_gcs_file_workspace by changing the fixture scope.
Pull Requests
Note: most of the changes mentioned above were made through direct commits. See also the full changelog.
- [Documentation Update] Updating Using and Creating Abilities to use Action Annotations by @himsmittal in #6653
- AGBenchmark: Codebase clean-up by @Pwuts in #6650
- fix: cast port to int by @orarbel in #6643
- Add documentation on how to use Agent Protocol in .env.template by @ntindle in #6569
- feat(benchmark): JungleGym WebArena by @Pwuts in #6691
- fix No modules named 'forge.sdk.abilities' and 'forge.sdk' #6537 #5810 by @kfern in #6571
- feat(agent/web): Add browser extensions to deal with cookie walls and ads by @Pwuts in #6778
- fix(forge): no module named 'sdk' by @MKdir98 in #6822
- Adding support to allow for sending a message with the enter key by @zedatrix in #6378
- Fixed revising constraints and best practices by @ThunderDrag in #6777
- Set subprocess.PIPE on stdin and stderr and just let pytest run the current file when running main() by @aorwall in #5868
- Update execute_code.py by @ehtec in #6903
- fix(agent/execute_code): Disable code execution commands when Docker is unavailable by @kcze in #6888
- OPEN-133: Fix trying to load non-existing agent by @kcze in #6938
- Fixing support for AzureOpenAI by @edwardsp in #6927
- OPEN-165: Improvement - check for duplicate command by @kcze in #6937
- chore: Change agbenchmark to directory dependency in autogpt and forge by @Pwuts in #6946
- Create Security Policy by @joycebrum in #6900
- feat(agent): Abstract file storage access by @kcze in #6931
- fix(autogpt): Fix GCS and S3 root path issue by @kcze in #7010
- feat(autogpt/forge): Send exception details in agent protocol endpoints by @kcze in #7005
- fix(agent): Handle action_history-related exceptions gracefully by @kcze in #6990
- fix(autogpt/cli): Loop until non-empty task is provided by the user by @kcze in #6995
- docs: Redirect AutoGPT users from Forge tutorial with warning by @ntindle in #7014
- docs: replace polyfill.io by @SukkaW in #6952
- feat(autogpt/cli): Check if port is available before running server by @kcze in #6996
- feat(autogpt/cli): Display info if code execution is enabled by @kcze in #6997
- feat(agent): Implement more tolerant json_loads function by @kcze in #7016
- ci(agent): Matrix CI tests across Linux, macOS and Windows by @Pwuts in #7029
- feat(agent): Handle OpenAI API key exceptions gracefully by @kcze in #6992
- test(agent): Fix VCRpy request filter for cross-platform use by @Pwuts in #7040
- ci(agent): Add macOS ARM64 to AutoGPT Python CI matrix by @Pwuts in #7041
- security(agent): Replace unsafe pyyaml loader with SafeLoader by @matheudev in #7035
- fix(agent): Fix check when loading an existing agent by @kcze in #7026
- fix(agent, forge): Conform web_search.py to duckduckgo_search v5 by @kcze in #7045
- fix(agent, forge): Conform web_search.py to duckduckgo_search v5 by @kcze in #7046
- fix(agent): Make save_state behave like "save as" by @kcze in #7025
- refactor(agent): Refactor & improve create_chat_completion by @Pwuts in #7082
New Contributors
- @himsmittal made their first contribution in #6653
- @orarbel made their first contribution in #6643
- @kfern made their first contribution in #6571
- @MKdir98 made their first contribution in #6822
- @zedatrix made their first contribution in #6378
- @ThunderDrag made their first contribution in #6777
- @ehtec made their first contribution in #6903
- @edwardsp made their first contribution in #6927
- @joycebr...
AutoGPT v0.5.0
First some important notes w.r.t. using the application:
- run.sh has been renamed to autogpt.sh
- The project has been restructured. The AutoGPT Agent is now located in autogpts/autogpt.
- The application no longer uses a single workspace for all tasks. Instead, every task that you run the agent on creates a new workspace folder. See the usage guide for more information.
New features ✨
- Agent Protocol 🔌
  Our agent now works with the Agent Protocol, a REST API that allows creating tasks and executing the agent's step-by-step process. This allows integration with other applications, and we also use it to connect to the agent through the UI. (A usage sketch follows this list.)
- UI 💻
  With the aforementioned Agent Protocol integration comes the benefit of using our own open-source Agent UI. Easily create, use, and chat with multiple agents from one interface.
  When starting the application through the project's new CLI, it runs with the new frontend by default, with benchmarking capabilities. Running autogpt.sh serve in the subproject folder (autogpts/autogpt) will also serve the new frontend, but without benchmarking functionality.
  Running the application the "old-fashioned" way, with the terminal interface (let's call it TTY mode), is still possible with autogpt.sh run.
- Resuming agents 🔄️
  In TTY mode, the application will now save the agent's state when quitting, and allows resuming where you left off at a later time!
- GCS and S3 workspace backends 📦
  To further support running the application as part of a larger system, Google Cloud Storage and S3 workspace backends were added. Configuration options for this can be found in .env.template.
- Documentation Rewrite 📖
  The documentation has been restructured and mostly rewritten to clarify and simplify the instructions, and also to accommodate the other subprojects that are now in the repo.
- New Project CLI 🔧
  The project has a new CLI to provide easier usage of all of the components that are now in the repo: different agents, frontend and benchmark. More info can be found here.
- Docker dev build 🐳
  In addition to the regular Docker release images (latest, v0.5.0 in this case), we now also publish a latest-dev image that always contains the latest working build from master. This allows you to try out the latest bleeding-edge version, but be aware that these builds may contain bugs!
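To give a feel for the Agent Protocol integration mentioned above, here is a hedged example of creating a task and executing one step over HTTP; the port and the v1 route shapes are assumptions for a locally served agent, so consult the Agent Protocol documentation for the exact endpoints and payloads.

```python
import requests  # third-party: pip install requests

BASE_URL = "http://localhost:8000/ap/v1"  # host/port assumed for a locally served agent

# Create a task for the agent to work on.
task = requests.post(
    f"{BASE_URL}/agent/tasks",
    json={"input": "Write the word 'hello' to a file named hello.txt"},
).json()

# Execute a single step of that task and print its output.
step = requests.post(f"{BASE_URL}/agent/tasks/{task['task_id']}/steps", json={}).json()
print(step.get("output"))
```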
Architecture changes & improvements 👷🏼
- PromptStrategy
  To make it easier to harness the power of LLMs and use them to fulfil tasks within the application, we adopted the PromptStrategy class from autogpt.core (AKA re-arch) to encapsulate prompt generation and response parsing throughout the application. (A conceptual sketch follows this list.)
- Config modularization
  To reduce the complexity of the application's config structure, parts of the monolithic Config have been moved into smaller, tightly scoped config objects. Also, the logic for building the configuration from environment variables was decentralized to make it all a lot more maintainable.
  This is mostly made possible by the autogpt.core.configuration module, which was also expanded with a few new features, most notably the new from_env attribute on the UserConfigurable field decorator and corresponding logic in SystemConfiguration.from_env() and related functions.
- Monorepo
  As mentioned, the repo has been restructured to accommodate the AutoGPT Agent, Forge, AGBenchmark and the new Frontend.
  - AutoGPT Agent has been moved to autogpts/autogpt
  - Forge now lives in autogpts/forge, and the project's new CLI makes it easy to create new Forge-based agents.
  - AGBenchmark -> benchmark
  - Frontend -> frontend
  See also the README.
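To illustrate the PromptStrategy pattern mentioned at the top of this list: a strategy bundles prompt construction and response parsing behind one interface. A conceptual sketch, not the autogpt.core class:

```python
from abc import ABC, abstractmethod

class PromptStrategy(ABC):
    """Conceptual outline: build the messages for a task and parse the model's reply."""

    @abstractmethod
    def build_prompt(self, task: str) -> list[dict]: ...

    @abstractmethod
    def parse_response(self, response_text: str): ...

class OneLineAnswerStrategy(PromptStrategy):
    """Toy strategy: ask for a single-line answer and return its first line (illustrative)."""

    def build_prompt(self, task: str) -> list[dict]:
        return [
            {"role": "system", "content": "Answer in a single line."},
            {"role": "user", "content": task},
        ]

    def parse_response(self, response_text: str) -> str:
        return response_text.strip().splitlines()[0]
```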
Pull Requests
Note: most of the changes mentioned above were made through direct commits. See also the full changelog.
- Sync release into master by @lc0rp in #5118
- Add files via upload by @ntindle in #5130
- Fix speak mode with elevenlabs falling into error by @dannaward in #5127
- Agent loop v2: Planning & Task Management (part 2) by @Pwuts in #5077
- fix(docker): add gcc installation in order to build psutil by @alexsoyes in #5059
- Fix elevenLabs config error by @dannaward in #5131
- Update init.py to support image_gen commands by @dmoham1476 in #5137
- Fixed stream elements speech function by @NeonN3mesis in #5146
- Restructure Repo by @merwanehamadi in #5160
- Refactor/remove abstract singleton as voice base parent by @collijk in #4931
- Add support for args to execute_python_file by @MauroDruwel in #3972
- read me update for monorepo by @SilenNaihin in #5199
- Change agbenchmark folder by @merwanehamadi in #5203
- Integrate benchmark and autogpt by @merwanehamadi in #5208
- Migrate AutoGPT agent to poetry by @Pwuts in #5219
- Adding Space After Colon in Console Input Prompt by @WilliamEspegren in #5211
- AutoGPT: use config and LLM provider from core by @Pwuts in #5286
- Rename Auto-GPT to AutoGPT by @merwanehamadi in #5301
- AutoGPT: Move all the Agent's prompt generation code into a PromptStrategy by @Pwuts in #5363
- Fixed stacking prompt instructions by @NeonN3mesis in #5520
- AutoGPT: Implement Agent Protocol by @Pwuts in #5612
- Fix typo in exceptions.py by @eltociear in #5813
- docs: fix typos in markdown files by @shresthasurav in #5871
- feat: Add support for new models and features from OpenAI's November 6 update by @Pwuts in #6147
- Improve the accuracy of the extract_dict_from_response method's JSON extraction. by @HawkClaws in #5458
- Allow port numbers to be used in local host names. e.g. localhost:8888. by @aaron13100 in #5318
- Streamline documentation for getting started by @Pwuts in #6335
- Fix docker build. Changing agbenchmark dependency as git reference instead of folder reference. by @warnyul in #6274
- refactor(agent/config): Modularize Config and revive Azure support by @Pwuts in #6497
- feat(agent/workspace): Add GCS and S3 FileWorkspace providers by @Pwuts in #6485
- Add dependencies required to use PostgreSQL by @ntindle in #6558
- Updating duckduckgo-search version to v4.0.0 to unbreak web_search command by @zedatrix in #6557d820239
New Contributors 🧙🏼
- @dannaward made their first contribution in #5127
- @alexsoyes made their first contribution in #5059
- @dmoham1476 made their first contribution in #5137
- @MauroDruwel made their first contribution in #3972
- @WilliamEspegren made their first contribution in #5211
- @shresthasurav made their first contribution in #5871
- @HawkClaws made their first contribution in #5458
- @aaron13100 made their first contribution in #5318
- @warnyul made their first contribution in #6274
- @zedatrix made their first contribution in #6557d820239
agbenchmark-v0.0.10
Update CI pipy (#5240)
AutoGPT v0.4.7
AutoGPT v0.4.7 introduces initial REST API support, powered by e2b's agent protocol SDK. It also includes improvements to prompt generation and support for our new benchmarking tool, Auto-GPT-Benchmarks.
We've also moved our documentation to Material Theme at https://docs.agpt.co. And, as usual, we've squashed a few bugs and made under-the-hood improvements.
What's Changed
- Integrate AutoGPT with Auto-GPT-Benchmarks by @merwanehamadi in #4987
- Add API via agent-protocol SDK by @ValentaTomas in #5044
- Fix workspace crashing by @merwanehamadi in #5041
- Fix runtime error in the API by @ValentaTomas in #5047
- Change workspace location by @merwanehamadi in #5048
- Remove delete file by @merwanehamadi in #5050
- Sync release v0.4.6 with patches back into master by @Pwuts in #5065
- Improve prompting and prompt generation infrastructure by @Pwuts in #5076
- Remove append to file by @merwanehamadi in #5051
- Add categories to command registry by @Pwuts in #5063
- Do not load disabled commands (faster exec & benchmark runs) by @lc0rp in #5078
- Verify model compatibility if OPENAI_FUNCTIONS is set by @Pwuts in #5075
- fix: Nonetype error from command_name.startswith() by @lc0rp in #5079
- slips of the pen (bloopers) in autogpt/core part of the repo by @cyrus-hawk in #5045
- Add information on how to improve Auto-GPT with agbenchmark by @merwanehamadi in #5056
- Use modern material theme for docs by @lc0rp in #5035
- Move more app files to app package by @collijk in #5036
- Pass TestSearch benchmark consistently (Add browse_website TOKENS_TO_TRIGGER_SUMMARY) by @lc0rp in #5092
- Bulleting update & version bump by @lc0rp in #5112
New Contributors
- @ValentaTomas made their first contribution in #5044
Full Changelog: v0.4.6...v0.4.7
Auto-GPT v0.4.6
What's Changed
Improvements ✨
- Integrate plugin.handle_text_embedding hook by @zhanglei1172 in #2804
- Agent loop v2: Planning & Task Management (part 1: refactoring) by @Pwuts in #4799
- Add config options to documentation site by @lc0rp in #5034
- Gracefully handle plugin loading failure by @eyalk11 in #4994
- Update memory.md with more warnings about memory being disabled by @NeonN3mesis in #5008
Bugfixes 🐛
- Fix orjson encoding text with UTF-8 surrogates by @ido777 in #3666
- Fix execute_python_file workspace mount & Windows path formatting by @sohrabsaran in #4996
- Fix configuring TTS engine by @Pwuts in #5005
- Bugfix/remove breakpoint from embedding function by @collijk in #5022
- Bugfix/bad null byte by @collijk in #5033
- Fix path processing by @Pwuts in #5032
- Filepath fixes and docs updates to specify when relative paths are expected by @lc0rp in #5042
Re-arch 🏗️
- runner.cli parsers set as a library by @ph-ausseil in #5021
- fix the forgotten + symbol in parse_ability_result(...) in parser.py by @cyrus-hawk in #5028
- Move all application code to an application subpackage by @collijk in #5026
New Contributors
- @ido777 made their first contribution in #3666
- @sohrabsaran made their first contribution in #4996
- @ph-ausseil made their first contribution in #5021
- @eyalk11 made their first contribution in #4994
Full Changelog: v0.4.5...v0.4.6
Auto-GPT v0.4.5
This maintenance release includes under-the-hood improvements and bug fixes, such as more accurate token counts for OpenAI functions, faster CI builds, improved plugin handling, and refactoring of the Config class for better maintainability.
Release Highlights 🌟
We have released some documentation updates, including:
How to share system logs
- Visit docs/share-your-logs.md to learn how to share logs with us via a log analysis tool graciously contributed by E2B.
Auto-GPT re-architecture documentation
- You can learn more about the inner workings of the Auto-GPT re-architecture released in 0.4.4.
New Contributors & Notable Catalysts 🦾
- @antonovmaxim made their first contribution in #4946
- @mlejva made their first contribution in #4965
- @GECORegulatory made their first contribution in #4972
What's Changed 📜
- Organize the configuration args by @collijk in #4913
- Drop AbstractSingleton as a parent class of the vector memory by @collijk in #4901
- Remove dead agent manager by @collijk in #4900
- Fix test_browse_website by @Pwuts in #4925
- Rebase MessageHistory on ChatSequence by @Pwuts in #4922
- Fix type of Config.plugins as AutoGPTPluginTemplate by @Pwuts in #4924
- Restructure logs.py into a module; include log_cycle by @Pwuts in #4921
- Improve token counting; account for functions by @Pwuts in #4919
- Move path argument sanitization for commands to a decorator by @Pwuts in #4918
- Speed up CI by @Pwuts in #4930
- Sync release v0.4.4 + stable back into master by @Pwuts in #4947
- Disable proxy for internal pull requests by @Pwuts in #4953
- Add links to github issues in the README and clarify run instructions by @collijk in #4954
- Documentation/collate rearch notes by @collijk in #4958
- Refactor/move functions in app to agent by @collijk in #4957
- Refactor/rename agent subpackage to agents by @collijk in #4961
- Allow absolute paths when not restricting to workspace root by @antonovmaxim in #4946
- Add initial share logs page by @mlejva in #4965
- Replaced Fictitious color name Fore.ORANGE by @GECORegulatory in #4972
- Fix loading the plugins config by @Pwuts in #5000
Full Changelog: v0.4.4...v0.4.5
Auto-GPT v0.4.4
Auto-GPT v0.4.4 is dedicated to the core re-arch team, led by @collijk.
Release Highlights 🌟
This release is noteworthy for two reasons.
Auto-GPT-4
Firstly, it comes hot on the heels of OpenAI's GA release of GPT-4. Auto-GPT users have eagerly awaited the opportunity to unlock more power via a GPT-4 model pairing. In v0.4.4, the SMART_LLM (formerly SMART_LLM_MODEL) defaults to GPT-4 once again, and we have implemented adjustments to ensure the correct usage of SMART_LLM and FAST_LLM (formerly FAST_LLM_MODEL) throughout the code-base. The smarter option is used consistently for areas requiring state-of-the-art accuracy, such as agent command selection. At the same time, the faster LLM assists with tasks that even the speedier GPT-3.5-turbo excels at, like summarization.
Note: GPT-4 is costlier, so please review your SMART_* and FAST_* settings. You can also use --gpt3only and --gpt4only command line flags to adjust your model preferences at runtime.
Autogpt/core
The second reason, and the reason for the dedication at the beginning of these release notes, is equally exciting. The much-anticipated re-arch is now available! The team, led by @collijk, has worked tirelessly over the past few months to put the "Auto" back in Auto-GPT, nearly doubling the code available in the master branch. The autogpt/core folder contains the work from the re-arch project, which is now systematically making its way to the rest of the application, starting with the Configuration modules. Watch for improvements over the next few weeks. There is still much to do, so if you wish to assist, please check out this issue.
New Contributors & Notable Catalysts 🦾
- @uta0x89 made their first contribution in #4789
- @lukas-eu made their first contribution in #4810
- @u007 made their first contribution in #4876
- @NeonN3mesis made their first contribution in #4855
- @kerta1n made their first contribution in #4471
- @scottschluer made their first contribution in #3774
- @jayden5744 made their first contribution in #4875
- @zachRadack made their first contribution in #4902
- @IANTHEREAL made their first contribution in #4098
- @VenkatTeja made their first contribution in #4863
What's Changed 📜
Besides the highlights above, this release cleans up longstanding Azure configuration rough edges, fixes plugin incompatibilities, and plugs security holes. Read on for a detailed list of changes.
- Retry ServiceUnavailableError by @uta0x89 in #4789
- Use Configuration of rearch by @merwanehamadi in #4803
- Filtering out ANSI escape codes in printed assistant thoughts by @lukas-eu in #4810
- Sync stable version v0.4.3 back to master by @lc0rp in #4828
- Add fallback token limit in llm.utils.create_chat_completion by @lc0rp in #4839
- Fix Config type hint problems caused by #4803 by @Pwuts in #4840
- fix for #4813 by @u007 in #4876
- Update CODEOWNERS by @ntindle in #4884
- Hotfix - call model_post_init explicitly until pydantic 2.0 by @lc0rp in #4858
- Simplified plugin log messages by @lc0rp in #4870
- Re-arch hello world by @collijk in #3969
- Update OpenAI model ID mappings by @Pwuts in #4889
- Fix plugin loading issues by @Pwuts in #4888
- Fix errors in Mandatory Tasks of Benchmarks by @uta0x89 in #4893
- New Challenge test_information_retrieval_challenge_c by @NeonN3mesis in #4855
- Update Wiki to reflect changes with docker-compose by @kerta1n in #4471
- Utilize environment variables for all agent key bindings by @scottschluer in #3774
- Fix potential passing of NoneType to remove_ansi_escape by @u007 in #4882
- Fix Azure OpenAI setup problems by @jayden5744 in #4875
- Use GPT-4 in Agent loop by default by @Pwuts in #4899
- Add CLI args for ai_name, ai_role, and ai_goals by @rocks6 in #3250
- Bugfix fixtts by @zachRadack in #4902
- Fix the incompatibility bug of azure deployment id configuration with gpt4/3-only by @IANTHEREAL in #4098
- Pull master into Release v0.4.4 by @lc0rp in #4903
- Fix PLAIN_OUTPUT for normal execution by @Pwuts in #4904
- Bug Fix: summarize_text function usage by @VenkatTeja in #4863
- Sync stable into release-v0.4.4 (#4876) by @Pwuts in #4907
- Document function get_memory in autogpt.memory.vector by @sagarishere in #1296
- Bugfix/broken azure config by @collijk in #4912
- Fix bugs running the core cli-app by @collijk in #4905
- Improve command system; add aliases for commands by @lengweiping1983 and @Pwuts in #2635
- Syncing bugfixes and newer commits from master to release-v0.4.4 by @lc0rp in #4914
- Fix regression where we lost the api base and organization by @collijk in #4933
Full Changelog: v0.4.3...v0.4.4
Auto-GPT v0.4.3
We're excited to present the 0.4.3 maintenance release of Auto-GPT! This update primarily focuses on refining the LLM command execution, extending support for OpenAI's latest models (including the powerful GPT-3.5 Turbo 16k model), and laying the groundwork for future compatibility with OpenAI's innovative function calling feature.
Release Highlights 🌟
- OpenAI API Key Prompt: Auto-GPT will now courteously prompt users for their OpenAI API key, if it's not already provided.
- Summarization Enhancements: We've optimized Auto-GPT's use of the LLM context window even further, boosting the effectiveness of summarization tasks.
- JSON Memory Reading: Support for reading memories from JSON files has been improved, resulting in enhanced task execution.
- New "replace_in_file" Command: This nifty new feature allows Auto-GPT to modify files without loading them entirely.
- Enhanced Token Counting: We've refined our token counting system to provide more precise cost estimates.
Deprecated Commands ❌
As part of our ongoing commitment to refining Auto-GPT, the following commands, which we determined to be either better suited as plugins or redundant, have been retired from the core application:
- analyze_code
- write_tests
- improve_code
- audio_text
- web_playwright
- web_requests
Progress Update on Re-Architecting 🚧
As you may recall, we recently embarked on a significant re-architecting journey to future-proof the Auto-GPT project. We're thrilled to report that elements of this massive overhaul are now being integrated back into the core application. For instance, you may notice less reliance on global state being passed around via singletons.
Stay tuned for further updates and advancements in our future releases! Head over to the discussion forums or discord to share your feedback on this release, and we appreciate your continued support.
New Contributors & Notable Catalysts 🦾
- @DrMurx made their first contribution in #2821
- @BorntraegerMarc made their first contribution in #1569
- @javableu made their first contribution in #4167
- @scenaristeur made their first contribution in #4561
- @Qoyyuum made their first contribution in #2486
What's Changed 📜
- Add replace_in_file command to change occurrences of text in a file by @bfalans in #4565
- Update OpenAI model info and remove duplicate modelsinfo.py by @Pwuts in #4700
- Implement loading MemoryItems from file in JSONFileMemory by @Pwuts in #4703
- Count tokens with tiktoken by @merwanehamadi in #4704
- Refactor module layout of command classes by @erik-megarad in #4706
- Remove analyze code by @merwanehamadi in #4705
- Remove write_tests and improve_code by @merwanehamadi in #4707
- Remove app commands, audio text and playwright by @merwanehamadi in #4711
- Improve plugin backward compatibility by @lc0rp in #4716
- Fix summarization happening in first cycle by @merwanehamadi in #4719
- Bulletin.md update for 0.4.1 release by @lc0rp in #4721
- Use JSON format for commands signature by @merwanehamadi in #4714
- Fix execute_command coming from plugins in 0.4.1 by @erik-megarad in #4730
- Fix execute_command coming from plugins by @erik-megarad in #4729
- Pass config everywhere in order to get rid of singleton by @merwanehamadi in #4666
- Remove config from command decorator by @merwanehamadi in #4736
- Fix issues with execute_python_code responses by @erik-megarad in #4738
- Retry 503 OpenAI errors by @merwanehamadi in #4745
- Sync release v0.4.1 back into master by @lc0rp in #4741
- Merge Release v0.4.2 back to master by @merwanehamadi in #4747
- Remove config singleton by @merwanehamadi in #4737
- Make JSON errors more silent by @merwanehamadi in #4748
- Fix up Python execution commands by @Wladastic in #4756
- OpenAI Functions Support by @erik-megarad in #4683
- Create run_task python hook to interface with benchmarks by @merwanehamadi in #4778
- ❇️ Improved OpenAI API Key Insert to Env by @Qoyyuum in #2486
- Link all challenges to benchmark python hook by @merwanehamadi in #4786
- Prevent docker-compose.yml and Dockerfile from being written by @erik-megarad in #4761
- Only take subclasses of AutoGPTPluginTemplate as plugins by @ppetermann in #4345
- Filtering out ANSI escape codes in printed assistant thoughts by @lc0rp in #4812
- Unregister commands incompatible with current config. by @lc0rp in #4815
- Bulletin.md updates and version toggling by @lc0rp in #4816
- Release v0.4.3 by @lc0rp in #4802
Full Changelog: v0.4.2...v0.4.3
Auto-GPT v0.4.3-alpha
We're excited to present the 0.4.3 maintenance release of Auto-GPT! This update primarily focuses on refining the LLM command execution, extending support for OpenAI's latest models (including the powerful GPT-3.5 Turbo 16k model), and laying the groundwork for future compatibility with OpenAI's innovative function calling feature.
Release Highlights 🌟
- OpenAI API Key Prompt: Auto-GPT will now courteously prompt users for their OpenAI API key, if it's not already provided.
- Summarization Enhancements: We've optimized Auto-GPT's use of the LLM context window even further, boosting the effectiveness of summarization tasks.
- JSON Memory Reading: Support for reading memories from JSON files has been improved, resulting in enhanced task execution.
- New "replace_in_file" Command: This nifty new feature allows Auto-GPT to modify files without loading them entirely.
- Enhanced Token Counting: We've refined our token counting system to provide more precise cost estimates.
Deprecated Commands ❌
As part of our ongoing commitment to refining Auto-GPT, the following commands, which we determined to be either better suited as plugins or redundant, have been retired from the core application:
- analyze_code
- write_tests
- improve_code
- audio_text
- web_playwright
- web_requests
Progress Update on Re-Architecting 🚧
As you may recall, we recently embarked on a significant re-architecting journey to future-proof the Auto-GPT project. We're thrilled to report that elements of this massive overhaul are now being integrated back into the core application. For instance, you may notice less reliance on global state being passed around via singletons.
Stay tuned for further updates and advancements in our future releases! Head over to the discussion forums or discord to share your feedback on this release, and we appreciate your continued support.
New Contributors & Notable Catalysts 🦾
- @DrMurx made their first contribution in #2821
- @BorntraegerMarc made their first contribution in #1569
- @javableu made their first contribution in #4167
- @scenaristeur made their first contribution in #4561
- @Qoyyuum made their first contribution in #2486
What's Changed 📜
- Add replace_in_file command to change occurrences of text in a file by @bfalans in #4565
- Update OpenAI model info and remove duplicate modelsinfo.py by @Pwuts in #4700
- Implement loading MemoryItems from file in JSONFileMemory by @Pwuts in #4703
- Count tokens with tiktoken by @merwanehamadi in #4704
- Refactor module layout of command classes by @erik-megarad in #4706
- Remove analyze code by @merwanehamadi in #4705
- Remove write_tests and improve_code by @merwanehamadi in #4707
- Remove app commands, audio text and playwright by @merwanehamadi in #4711
- Fix summarization happening in first cycle by @merwanehamadi in #4719
- Bulletin.md update for 0.4.1 release by @lc0rp in #4721
- Use JSON format for commands signature by @merwanehamadi in #4714
- Fix execute_command coming from plugins in 0.4.1 by @erik-megarad in #4730
- Fix execute_command coming from plugins by @erik-megarad in #4729
- Pass config everywhere in order to get rid of singleton by @merwanehamadi in #4666
- Remove config from command decorator by @merwanehamadi in #4736
- Fix issues with execute_python_code responses by @erik-megarad in #4738
- Retry 503 OpenAI errors by @merwanehamadi in #4745
- Sync release v0.4.1 back into master by @lc0rp in #4741
- Merge Release v0.4.2 back to master by @merwanehamadi in #4747
- Remove config singleton by @merwanehamadi in #4737
- Make JSON errors more silent by @merwanehamadi in #4748
- Fix up Python execution commands by @Wladastic in #4756
- OpenAI Functions Support by @erik-megarad in #4683
- Create run_task python hook to interface with benchmarks by @merwanehamadi in #4778
- ❇️ Improved OpenAI API Key Insert to Env by @Qoyyuum in #2486
- Link all challenges to benchmark python hook by @merwanehamadi in #4786
- Prevent docker-compose.yml and Dockerfile from being written by @erik-megarad in #4761
- Only take subclasses of AutoGPTPluginTemplate as plugins by @ppetermann in #4345
Full Changelog: v0.4.2...v0.4.3-alpha
Auto-GPT v0.4.2 (hotfix)
The 503 error has become more frequent over the past hours, so we added a hotfix to retry the call when this error is returned; without the retry, Auto-GPT stops.