bug: example not working? about: using_ollama_as_llm_and_embedding.py #92

Open · danshuitaihejie opened this issue Nov 5, 2024 · 4 comments

@danshuitaihejie

When I use the "qwen2.5" model in using_ollama_as_llm_and_embedding.py, I am unable to extract any entities.

After running the command:

$ python ./examples/using_ollama_as_llm_and_embedding.py

I waited for 4 hours and received the following logs:

INFO:httpx:HTTP Request: POST http://127.0.0.1:11434/api/chat "HTTP/1.1 200 OK"
⠹ Processed 42(100%) chunks, 0 entities(duplicated), 0 relations(duplicated)
WARNING:nano-graphrag: Didn't extract any entities; maybe your LLM is not working.
WARNING:nano-graphrag: No new entities found.

It appears that no entities were extracted, and the warnings point to a potential issue with the LLM.
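The "maybe your LLM is not working" warning means every extraction call came back empty. One way to rule out the Ollama server itself is to hit the same endpoint the httpx log shows, outside of nano-graphrag. This is a minimal sketch (not part of the repo), assuming the qwen2.5 tag from the report and Ollama's standard /api/chat REST endpoint:

```python
# Sanity check: does the local Ollama server return text for this model at all?
# POSTs to the same endpoint seen in the httpx log (http://127.0.0.1:11434/api/chat).
import requests

resp = requests.post(
    "http://127.0.0.1:11434/api/chat",
    json={
        "model": "qwen2.5",  # swap in the exact tag shown by `ollama list`
        "messages": [{"role": "user", "content": "Reply with one word: pong"}],
        "stream": False,  # ask for a single JSON object instead of a stream
    },
    timeout=300,
)
resp.raise_for_status()
# With stream=False, Ollama returns {"message": {"role": "assistant", "content": ...}, ...}
print(resp.json()["message"]["content"])
```

If this prints a reply, the server and model are responding, and the problem more likely sits in how the example's model function returns or parses the output.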



@danshuitaihejie

I get the same result whether I use llama3.2 or gemma2, but both models work in my other GraphRAG tests.

@danshuitaihejie

@yagneshgooglegithub Great! Very helpful!

@vedprakashnautiyal

Hey @danshuitaihejie, was your problem solved?

I tried what was mentioned in the FAQ, but it didn't work for me.
I am using the llama3.2:1b model, and I tried with a smaller document too, but no luck.
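For anyone hitting the same wall, it can help to isolate whether the failure is in the model wiring rather than in the extraction prompt. Below is a minimal sketch of that wiring, assuming the best_model_func/cheap_model_func hooks described in the nano-graphrag README and the `ollama` Python client; the names follow examples/using_ollama_as_llm_and_embedding.py loosely, not verbatim:

```python
# Minimal Ollama-backed model function in the style of
# examples/using_ollama_as_llm_and_embedding.py (simplified; caching and the
# embedding_func side are omitted here).
import asyncio
import ollama
from nano_graphrag import GraphRAG

MODEL = "qwen2.5"  # whichever tag `ollama list` actually shows

async def ollama_complete(prompt, system_prompt=None, history_messages=[], **kwargs) -> str:
    # Extra kwargs (e.g. nano-graphrag's cache handle) are ignored in this sketch.
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.extend(history_messages)
    messages.append({"role": "user", "content": prompt})
    response = await ollama.AsyncClient().chat(model=MODEL, messages=messages)
    return response["message"]["content"]

graph_func = GraphRAG(
    working_dir="./nano_graphrag_cache",
    best_model_func=ollama_complete,
    cheap_model_func=ollama_complete,
)

# Calling the function directly, outside the pipeline, shows whether the model
# returns usable text before any entity extraction is attempted.
print(asyncio.run(ollama_complete("Name one entity in: 'Alice met Bob in Paris.'")))
```

If the direct call returns sensible text but extraction still yields zero entities, the model is probably failing to follow the extraction prompt's output format, which very small models such as llama3.2:1b often struggle with; trying a larger instruct-tuned model is the usual next step.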
