I am trying to configure Crew AI to connect to a locally hosted LLM using a custom API endpoint. Here’s what I’ve done so far:
LLM API Details

API Endpoint: https://localhost:8080chat/completions
Input Example:
{ "model": "Custom_model", "messages": [ {"role": "system", "content": "You are a chatbot."}, {"role": "user", "content":"Tell me about India"} ], "max_tokens": 2048, "temperature": 0.7, "top_p": 0.9, "stream": 0 }
Response Example:
{ "choices": [ { "finish_reason": "stop", "index": 0, "logprobs": null, "message": { "content": "India is a country located in South Asia, officially known as the Republic of India...", "role": "assistant" } } ], "created": 1677652288, "id": "chatcmpl-123", "model": "Custom_model", "object": "chat.completion", "system_fingerprint": "fp_44709d6fcb", "usage": { "completion_tokens": 476, "prompt_tokens": 18, "total_tokens": 494 } }
I found the following snippet in the Crew AI documentation:
llm = LLM( model="custom-model-name", base_url="https://api.your-provider.com/v1", api_key="your-api-key" ) agent = Agent(llm=llm, ...)
Using this as a reference, I tried configuring my locally hosted LLM:
llm = LLM( model="custom/Custom-model", base_url="https://localhost:8080chat/completions" ) agent = Agent(llm=llm, ...)
But I keep getting the error:

```
string_response = response_json["data"][0]["output"][0]
                  ~~~~~~~~~~~~~^^^^^^^^
KeyError: 'data'
```
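For context on the KeyError: the parsing code in the traceback expects a payload shaped like `{"data": [{"output": [...]}]}`, but the server returns the OpenAI-style `chat.completion` object shown above, which has no top-level `"data"` key. Against that documented shape, extracting the text would look like this (illustrative only):

```python
# Illustrative only: with the OpenAI-style response shown above, the text
# lives under "choices", not "data".
response_json = resp.json()  # decoded JSON body, e.g. from the requests sketch above
string_response = response_json["choices"][0]["message"]["content"]
```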
Questions
.

Operating System: Windows 11
Python Version: 3.11
Virtual Environment: Venv
Is your base_url right? I think https://localhost:8080chat/completions is missing a / between the port and the path.
Besides that, I think the reason you are facing this error is that you need to pass only your local endpoint, not the /chat/completions path. Try configuring only https://localhost:8080.
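Putting both comments together, a corrected configuration might look like the sketch below. The `openai/` model prefix and the placeholder API key are assumptions about a LiteLLM-backed setup (CrewAI routes requests through LiteLLM), not confirmed requirements for every version:

```python
# Minimal sketch based on the suggestions above. Assumptions: the "openai/"
# prefix tells LiteLLM to treat the server as OpenAI-compatible and append
# /chat/completions itself, and the placeholder api_key satisfies clients
# that require a key even for local servers.
from crewai import LLM

llm = LLM(
    model="openai/Custom_model",        # or keep "custom/Custom-model" if that prefix works for you
    base_url="https://localhost:8080",  # server root only, no /chat/completions
    api_key="sk-placeholder",           # hypothetical; many local servers ignore it
)
# Then wire it into your agent exactly as in the docs snippet:
# agent = Agent(llm=llm, ...)
```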