
Add litellm inference #385

Open · wants to merge 50 commits into main
Conversation

JoelNiklaus
Contributor

This PR enables running inference using any model provider supported by litellm.
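For context, litellm exposes a single OpenAI-style completion call and routes it to whichever provider the model string selects. The snippet below is only a minimal sketch of that library call, not the code added by this PR; the model name and prompt are placeholders.

```python
# Minimal sketch of litellm's provider-agnostic completion call (illustrative
# only; not the PR's implementation). Requires the relevant provider API key
# (e.g. OPENAI_API_KEY) to be set in the environment.
import litellm

def generate(model: str, prompt: str) -> str:
    # litellm accepts an OpenAI-style messages list and routes the request
    # to the provider implied by the model string.
    response = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=128,
        temperature=0.0,
    )
    return response.choices[0].message.content

print(generate("gpt-4o-2024-08-06", "Say hello in one word."))
```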

@HuggingFaceDocBuilderDev
Collaborator

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@NathanHB
Member

NathanHB commented Dec 3, 2024

Hi @JoelNiklaus! Is this PR ready, or do you need any help with it?

@JoelNiklaus
Contributor Author

From my side, it is ready.

@NathanHB
Member

NathanHB commented Dec 5, 2024

Nice! I will merge main and resolve any conflicts, as we have made a big change to the way we call the CLI. Is that OK with you?

@JoelNiklaus
Contributor Author

Sounds great, thanks!

@NathanHB
Member

NathanHB commented Dec 5, 2024

I'm trying to use it; can you provide me with the command you use?

@JoelNiklaus
Contributor Author

For example, like this:

```
lighteval accelerate \
  --model_args litellm,provider=openai,model=gpt-4o-2024-08-06 \
  --tasks "leaderboard|truthfulqa:mc|0|0" \
  --override_batch_size 1 \
  --output_dir="./evals/"
```

@NathanHB
Member

NathanHB commented Dec 6, 2024

Should be good to go! I added a few logging fixes for convenience. @JoelNiklaus, tell me what you think, are you able to use it? :)

@JoelNiklaus
Contributor Author

When testing it, I receive this error:

```
Traceback (most recent call last):
  File "/opt/homebrew/Caskroom/miniconda/base/envs/legal_llm_evaluation/lib/python3.10/logging/config.py", line 544, in configure
    formatters[name] = self.configure_formatter(
  File "/opt/homebrew/Caskroom/miniconda/base/envs/legal_llm_evaluation/lib/python3.10/logging/config.py", line 656, in configure_formatter
    result = self.configure_custom(config)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/legal_llm_evaluation/lib/python3.10/logging/config.py", line 471, in configure_custom
    c = self.resolve(c)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/legal_llm_evaluation/lib/python3.10/logging/config.py", line 399, in resolve
    raise v
  File "/opt/homebrew/Caskroom/miniconda/base/envs/legal_llm_evaluation/lib/python3.10/logging/config.py", line 392, in resolve
    self.importer(used)
ValueError: Cannot resolve 'lighteval.logger.JSONFormatter': No module named 'lighteval.logger'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/homebrew/Caskroom/miniconda/base/envs/legal_llm_evaluation/bin/lighteval", line 5, in <module>
    from lighteval.__main__ import app
  File "/Users/joel/Documents/LegalLLMEvaluation/lighteval/src/lighteval/__main__.py", line 85, in <module>
    dictConfig(logging_config)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/legal_llm_evaluation/lib/python3.10/logging/config.py", line 811, in dictConfig
    dictConfigClass(config).configure()
  File "/opt/homebrew/Caskroom/miniconda/base/envs/legal_llm_evaluation/lib/python3.10/logging/config.py", line 547, in configure
    raise ValueError('Unable to configure '
```
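For context on what this traceback means: `dictConfig` resolves custom formatter classes by dotted path at configuration time, so the path it is given must be importable from the installed package. A minimal sketch of the failure mode follows; the config keys are illustrative, not the exact config in lighteval's `__main__.py`.

```python
# Sketch of how dictConfig resolves a custom formatter by dotted path.
# 'lighteval.logger.JSONFormatter' is the path from the traceback above; if
# that module is missing from the installed package, configuration fails with
# "Unable to configure formatter".
from logging.config import dictConfig

logging_config = {
    "version": 1,
    "formatters": {
        # The "()" key asks dictConfig to import and instantiate this class.
        "json": {"()": "lighteval.logger.JSONFormatter"},
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "json"},
    },
    "root": {"handlers": ["console"], "level": "INFO"},
}

dictConfig(logging_config)  # raises ValueError if lighteval.logger cannot be imported
```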

@NathanHB
Member

Hey @JoelNiklaus, I modified the way we pass the chat-templated messages to the litellm model.
This allows for a system prompt and better management of few-shot examples: few-shot examples are now passed as a succession of user and assistant messages.
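Concretely, the list passed to the litellm model then looks something like the sketch below (the prompt texts are placeholders, not lighteval's actual templates):

```python
# Illustrative chat-templated message list: an optional system prompt,
# few-shot examples as alternating user/assistant turns, then the instance
# actually being evaluated. Contents are placeholders.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    # Few-shot example 1
    {"role": "user", "content": "Q: 2 + 2 = ?"},
    {"role": "assistant", "content": "4"},
    # Few-shot example 2
    {"role": "user", "content": "Q: 3 + 5 = ?"},
    {"role": "assistant", "content": "8"},
    # The query to evaluate
    {"role": "user", "content": "Q: 7 + 6 = ?"},
]
```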

@JoelNiklaus
Contributor Author

Ok cool, thanks!

Would it make sense to add this to the openai backend in the same way, or to remove that backend altogether?

@NathanHB
Member

I don't think we need the openai backend anymore if we have the litellm backend; I'm pretty against having two ways of doing the same thing.

@clefourrier
Member

Checked the litellm docs in depth, and I agree. I think we'll want to keep inference endpoints, however (for leaderboards), even though it's slightly redundant.

@clefourrier
Member

Tests also need to be fixed ^^
