Is there any way to get the logprobs from a Crew execution? I've looked at every result attribute but couldn't find the logprobs, even after setting the logprobs=True parameter on the LLM. (Note: I also tested with "from crewai import LLM" instead of the langchain version.)
Below is my code:
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from crewai import Agent, Task, Crew, Process
from crewai.crews.crew_output import CrewOutput

load_dotenv()


class LLMPlanning:
    def __init__(
        self,
        question: str,
        response_format: str,
        context: str,
        expected_output: str,
        temperature: float = 0.3,
        llm_model: str = "gpt-4o-mini",
        verbose: bool = False,
    ):
        self.llm_model = llm_model
        self.verbose = verbose
        self.llm = ChatOpenAI(
            model=self.llm_model,
            temperature=temperature,
            timeout=45,
            model_kwargs={"response_format": {"type": "json_object"}},
            logprobs=True,
        )
        self.planning_llm = ChatOpenAI(
            model=self.llm_model, temperature=temperature, timeout=45, logprobs=True
        )
        self.agent = Agent(
            llm=self.llm,
            role="Juridical documents analyst",
            goal="""Use the following context (pieces of the healthcare insurance contract) to answer the question at the end.
            If you don't know the answer, just say that you don't know, don't try to make up an answer.
            Keep the answer as concise as possible. Always reply in Portuguese from Brazil.""",
            backstory="You are a specialized analyst in juridical documents related to healthcare plans.",
            verbose=self.verbose,
            # max_iter=MAX_ITER
        )
        self.task = Task(
            description=f"""
            ## QUESTION: {question}
            ## HINTS AND RESPONSE FORMAT: {response_format}
            ## CONTEXT:
            The document content starts and ends between the following delimiters (don't use any text above as context):
            <<<START OF DOCUMENT>>>
            {context}
            <<<END OF DOCUMENT>>>
            Your Answer:
            """,
            agent=self.agent,
            expected_output=expected_output,
        )

    def get_final_result(self) -> CrewOutput:
        crew = Crew(
            agents=[self.agent],
            tasks=[self.task],
            planning=True,
            planning_llm=self.planning_llm,
            process=Process.sequential,
            verbose=self.verbose,
        )
        result = crew.kickoff()
        return result

    def run(self) -> CrewOutput:
        """
        Runs the full process.

        Returns
        -------
        final_result : CrewOutput
            The final result.
        """
        final_result = self.get_final_result()
        return final_result
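For context on what is being asked for: when logprobs=True is set, langchain's ChatOpenAI does surface the token logprobs in the message's response_metadata — it is the Crew layer that drops them. One workaround (a sketch of my own, not a CrewAI API; the helper name extract_token_logprobs is hypothetical) would be to invoke the underlying LLM directly and parse that metadata. The parsing part can be exercised without an API call:

```python
def extract_token_logprobs(response_metadata: dict) -> list:
    """Return the per-token logprob entries from a ChatOpenAI
    response_metadata dict, or [] if logprobs were not requested."""
    logprobs = response_metadata.get("logprobs") or {}
    return logprobs.get("content") or []


# Metadata shaped like what ChatOpenAI attaches when logprobs=True
# (values copied from the example output below; assumed shape).
sample_metadata = {
    "logprobs": {
        "content": [
            {"token": "I", "bytes": [73], "logprob": -0.26341408, "top_logprobs": []},
            {"token": "'m", "bytes": [39, 109], "logprob": -0.48584133, "top_logprobs": []},
        ]
    }
}

tokens = extract_token_logprobs(sample_metadata)
print([t["token"] for t in tokens])

# With a live API key, the direct call would look like (untested sketch):
#   msg = self.llm.invoke(self.task.description)
#   tokens = extract_token_logprobs(msg.response_metadata)
```

This only recovers logprobs for a single direct call, though — not for the intermediate calls a Crew makes, which is why a proper fix needs support inside CrewAI itself.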
Describe the solution you'd like
Being able to get the logprobs from a Crew execution — for instance, a token-level structure like the example shown further below.
I made a quick analysis of the code base and concluded that returning LLM response info such as logprobs would require changes across all the crew execution layers. I don't know whether this is on the product roadmap, but it would be a big change.
@bhancockio, what do you think? I don't see any other way to return the extra LLM output info, though I've only spent a short time with the code base.
Feature Area
Agent capabilities
[{'token': 'I', 'bytes': [73], 'logprob': -0.26341408, 'top_logprobs': []},
{'token': "'m",
'bytes': [39, 109],
'logprob': -0.48584133,
'top_logprobs': []},
{'token': ' just',
'bytes': [32, 106, 117, 115, 116],
'logprob': -0.23484154,
'top_logprobs': []},
{'token': ' a',
'bytes': [32, 97],
'logprob': -0.0018291725,
'top_logprobs': []},
{'token': ' computer',
'bytes': [32, 99, 111, 109, 112, 117, 116, 101, 114],
'logprob': -0.052299336,
'top_logprobs': []}]
Describe alternatives you've considered
No response
Additional context
No response
Willingness to Contribute
I could provide more detailed specifications