Getting logprobs from CrewAIOutput (crew.kickoff()) #1768

Open
brunodifranco opened this issue Dec 16, 2024 · 2 comments
Labels
feature-request New feature or request

Comments


brunodifranco commented Dec 16, 2024

Feature Area

Agent capabilities

Is there any way to get the logprobs from a Crew execution? I've looked at every attribute of the result but couldn't find the logprobs, even after setting the logprobs=True parameter on the LLM. (Note: I also tested with "from crewai import LLM" instead of the langchain version.)

Below is my code:

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from crewai import Agent, Task, Crew, Process
from crewai.crews.crew_output import CrewOutput

load_dotenv()


class LLMPlanning:

    def __init__(
        self,
        question: str,
        response_format: str,
        context: str,
        expected_output: str,
        temperature: float = 0.3,
        llm_model: str = "gpt-4o-mini",
        verbose: bool = False,
    ):

        self.llm_model = llm_model
        self.verbose = verbose
        # Main agent LLM; logprobs=True is forwarded to the OpenAI API,
        # but the value never surfaces in the CrewOutput
        self.llm = ChatOpenAI(
            model=self.llm_model,
            temperature=temperature,
            timeout=45,
            model_kwargs={"response_format": {"type": "json_object"}},
            logprobs=True,
        )

        # Separate LLM used only for the Crew's planning step
        self.planning_llm = ChatOpenAI(
            model=self.llm_model, temperature=temperature, timeout=45, logprobs=True
        )

        self.agent = Agent(
            llm=self.llm,
            role="Juridical documents analyst",
            goal="""Use the following context (pieces of the healthcare insurance contract) to answer the question at the end.
            If you don't know the answer, just say that you don't know, don't try to make up an answer.
            Keep the answer as concise as possible. Always reply in Portuguese from Brazil.""",
            backstory="You are a specialized analyst in juridical documents related to healthcare plans.",
            verbose=self.verbose,
            # max_iter=MAX_ITER
        )
        self.task = Task(
            description=f"""
            ## QUESTION: {question}

            ## HINTS AND RESPONSE FORMAT: {response_format}

            ## CONTEXT:
            The document content starts and ends between the following delimiters (don't use any text above as context):
            <<<START OF DOCUMENT>>>
            {context}
            <<<END OF DOCUMENT>>>
            
            Your Answer:
            """,
            agent=self.agent,
            expected_output=expected_output,
        )

    def get_final_result(self) -> CrewOutput:

        crew = Crew(
            agents=[self.agent],
            tasks=[self.task],
            planning=True,
            planning_llm=self.planning_llm,
            process=Process.sequential,
            verbose=self.verbose,
        )
        result = crew.kickoff()

        return result

    def run(self) -> CrewOutput:
        """
        Runs the full process.

        Returns
        -------
        final_result: CrewOutput
            The final result.
        """

        final_result = self.get_final_result()
        return final_result
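For reference, the same ChatOpenAI instance does return logprobs when invoked directly, outside the Crew, so the data exists but is dropped somewhere inside crew.kickoff(). A minimal sketch (the prompt is just illustrative):

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.3, timeout=45, logprobs=True)

# Called directly, the logprobs appear in response_metadata with exactly
# the token structure shown under "Describe the solution you'd like" below
msg = llm.invoke("Diga olá.")
token_logprobs = msg.response_metadata["logprobs"]["content"]
print(token_logprobs[:3])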

Describe the solution you'd like

Being able to get the logprobs from a Crew execution. For instance:

[{'token': 'I', 'bytes': [73], 'logprob': -0.26341408, 'top_logprobs': []},
 {'token': "'m", 'bytes': [39, 109], 'logprob': -0.48584133, 'top_logprobs': []},
 {'token': ' just', 'bytes': [32, 106, 117, 115, 116], 'logprob': -0.23484154, 'top_logprobs': []},
 {'token': ' a', 'bytes': [32, 97], 'logprob': -0.0018291725, 'top_logprobs': []},
 {'token': ' computer', 'bytes': [32, 99, 111, 109, 112, 117, 116, 101, 114], 'logprob': -0.052299336, 'top_logprobs': []}]
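As a side note on why this is useful: exponentiating each logprob recovers the probability the model assigned to that token, which enables confidence scoring over the answer. A small worked sketch over the first two tokens above:

import math

token_logprobs = [
    {"token": "I", "logprob": -0.26341408},
    {"token": "'m", "logprob": -0.48584133},
]

# exp(logprob) recovers the per-token probability
for t in token_logprobs:
    print(t["token"], round(math.exp(t["logprob"]), 4))  # ~0.7684, ~0.6152

# The exponentiated mean logprob is a crude sequence-level confidence score
avg = sum(t["logprob"] for t in token_logprobs) / len(token_logprobs)
print("sequence confidence:", round(math.exp(avg), 4))  # ~0.6876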

Describe alternatives you've considered

No response

Additional context

No response

Willingness to Contribute

I could provide more detailed specifications

brunodifranco added the feature-request label on Dec 16, 2024
joaoigm (Contributor) commented Dec 18, 2024

The reason is this code in src/crewai/llm.py:

response = litellm.completion(**params)
return response["choices"][0]["message"]["content"]

logprobs is a property of choices, but only content is returned.
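To illustrate: litellm mirrors OpenAI's response schema, so the logprobs are already on the choice object before llm.py discards them. A sketch (assumes an OpenAI model and that logprobs=True reaches the call):

import litellm

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
    logprobs=True,
)

choice = response["choices"][0]
content = choice["message"]["content"]  # the only thing crewai returns today
logprobs = choice["logprobs"]           # OpenAI-style token logprobs, silently dropped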

@joaoigm
Copy link
Contributor

joaoigm commented Dec 18, 2024

I made a quick analysis of the code base and concluded that, to return LLM response info like logprobs, it would be necessary to change all of the crew execution layers. I don't know if this is part of the product roadmap, but it would be a big change.

@bhancockio what do you think? I don't see any other option for returning the other output info from the LLM, but I have also only spent a short time looking at the code base.
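A possible stopgap that avoids touching the execution layers: litellm's success callbacks fire after every completion, so logprobs could be captured out-of-band while crew.kickoff() runs. A rough sketch (the callback signature is litellm's documented custom-callback form; the module-level list is just for illustration, and this only helps if logprobs=True actually reaches the underlying litellm.completion call):

import litellm

captured_logprobs = []

# litellm calls this after every successful completion
def capture_logprobs(kwargs, completion_response, start_time, end_time):
    choice = completion_response["choices"][0]
    logprobs = getattr(choice, "logprobs", None)  # populated only when requested
    if logprobs:
        captured_logprobs.append(logprobs)

litellm.success_callback = [capture_logprobs]

# ... then run crew.kickoff() as usual and read captured_logprobs afterwards

The obvious caveat is that the captured entries aren't tied to a specific task or agent, so this is a side channel rather than a real fix.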
