[BUG] How to Configure Crew AI to Use a Custom LLM Endpoint? #1775

Open
VishnuPJ opened this issue Dec 17, 2024 · 2 comments

Labels: bug (Something isn't working)

VishnuPJ commented Dec 17, 2024

Description

I am trying to configure Crew AI to connect to a locally hosted LLM using a custom API endpoint. Here’s what I’ve done so far:

LLM API Details
API Endpoint: https://localhost:8080chat/completions

Input Example:

{
    "model": "Custom_model",
    "messages": [
        {"role": "system", "content": "You are a chatbot."},
        {"role": "user", "content":"Tell me about India"}
    ],
    "max_tokens": 2048,
    "temperature": 0.7,
    "top_p": 0.9,
    "stream": 0
}

Response Example:

{
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "logprobs": null,
            "message": {
                "content": "India is a country located in South Asia, officially known as the Republic of India...",
                "role": "assistant"
            }
        }
    ],
    "created": 1677652288,
    "id": "chatcmpl-123",
    "model": "Custom_model",
    "object": "chat.completion",
    "system_fingerprint": "fp_44709d6fcb",
    "usage": {
        "completion_tokens": 476,
        "prompt_tokens": 18,
        "total_tokens": 494
    }
}
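
Before wiring this into CrewAI, the endpoint can be sanity-checked directly. A minimal sketch using `requests` (the host, port, and payload are copied from the examples above; the `/` between the port and the path is an assumption, since the endpoint as written in this issue has none):

import requests

# Sanity check: call the local endpoint directly, outside CrewAI.
# NOTE: this URL assumes a "/" between the port and the path; the
# endpoint as reported (https://localhost:8080chat/completions) has none.
url = "https://localhost:8080/chat/completions"

payload = {
    "model": "Custom_model",
    "messages": [
        {"role": "system", "content": "You are a chatbot."},
        {"role": "user", "content": "Tell me about India"},
    ],
    "max_tokens": 2048,
    "temperature": 0.7,
    "top_p": 0.9,
    "stream": 0,
}

# verify=False only because the server is https on localhost,
# presumably with a self-signed certificate.
resp = requests.post(url, json=payload, verify=False)
resp.raise_for_status()

# Mirrors the response schema shown in the example above.
print(resp.json()["choices"][0]["message"]["content"])

If this call succeeds but CrewAI still fails, the problem is in the client configuration rather than the server.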

I found the following snippet in the Crew AI documentation:

llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key"
)
agent = Agent(llm=llm, ...)

Using this as a reference, I tried configuring my locally hosted LLM:

llm = LLM(
    model="custom/Custom-model",
    base_url="https://localhost:8080chat/completions"
)
agent = Agent(llm=llm, ...)

But I keep getting the following error:

string_response = response_json["data"][0]["output"][0]
                  ~~~~~~~~~~~~~^^^^^^^^
KeyError: 'data'

Questions

  1. Is there a recommended way to adapt the input and output formats when using a custom API?
  2. Do I need to extend or override any components in Crew AI to handle this configuration?
  3. Is there additional documentation or examples for integrating custom endpoints?

Steps to Reproduce

.

Expected behavior

.

Screenshots/Code snippets

.

Operating System

Windows 11

Python Version

3.11

crewAI Version

.

crewAI Tools Version

.

Virtual Environment

Venv

Evidence

.

Possible Solution

.

Additional context

.

VishnuPJ added the "bug" label Dec 17, 2024
joaoigm (Contributor) commented Dec 18, 2024

Is your base_url right? I think it is missing a /:

https://localhost:8080chat/completions

joaoigm (Contributor) commented Dec 18, 2024

Besides that, I think the reason you are facing this error is that you need to pass only your local endpoint, not the /chat/completions path.

Try configuring only https://localhost:8080.
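
Putting both suggestions together, a sketch of a corrected configuration might look like the following. The model name, host, and port are copied from the issue; the `openai/` prefix is an assumption that the local server speaks the OpenAI chat-completions format:

from crewai import LLM

# Hypothetical corrected setup: base_url stops at the server root
# (with the missing "/" restored); the /chat/completions path is
# appended by the client library itself.
llm = LLM(
    model="openai/Custom_model",       # assumes an OpenAI-compatible server
    base_url="https://localhost:8080",
    api_key="sk-local",                # placeholder; many local servers ignore it
)
# agent = Agent(llm=llm, ...)          # remaining Agent fields as in the docs snippet above

If the KeyError persists with this configuration, the direct LiteLLM call sketched earlier should confirm whether the problem is on the CrewAI side or the server side.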
