Hi. Has anyone tried fine-tuning a model with the parsed results from Instructor? If so, could you please share how it is done?
-
It is unclear to me how to create a JSONL file with user/assistant pairs (for fine-tuning a model) if one uses Instructor to get the structured data. What would the actual assistant answer be? Is it the `raw_response` from the following link? Do you need to include the `response_model` schema in the user messages of your fine-tuning JSONL, or should it just be the plain system/user/assistant messages? Or, alternatively, should I try to access the prompt given to the LLM ([as discussed here](#350)) and use `raw_response` as the user/assistant pair?
I was also considering this approach: #580 (reply in thread)
but it is unclear to me whether that is the correct way to build a JSONL dataset for fine-tuning.
Thanks!
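
For what it's worth, here is a minimal sketch of one possible approach (not an official recipe from the thread): send the same messages through Instructor as usual, then write one JSONL record per call where the prompt side is the original system/user messages and the assistant side is the parsed Pydantic object serialized back to JSON. The `UserDetail` model, the example prompts, and the model name below are made up for illustration, and the `from_openai` / `create_with_completion` calls assume a recent Instructor release (older versions patch the client with `instructor.patch` and expose the raw completion differently).

```python
import json

import instructor
from openai import OpenAI
from pydantic import BaseModel


class UserDetail(BaseModel):
    name: str
    age: int


# Patch the OpenAI client so `response_model` is accepted
# (assumes a recent Instructor release; older ones use `instructor.patch`).
client = instructor.from_openai(OpenAI())

prompts = [
    "John Doe is 30 years old.",
    "Jane Smith just turned 25.",
]

with open("finetune.jsonl", "w") as f:
    for prompt in prompts:
        messages = [
            {"role": "system", "content": "Extract the user details."},
            {"role": "user", "content": prompt},
        ]
        # `create_with_completion` returns the parsed object plus the raw
        # completion, in case you prefer the raw tool-call arguments as the
        # assistant turn instead of the re-serialized Pydantic object.
        detail, raw_completion = client.chat.completions.create_with_completion(
            model="gpt-4o-mini",
            response_model=UserDetail,
            messages=messages,
        )
        # One training example per JSONL line: the prompt messages plus the
        # structured output as the assistant reply.
        record = {
            "messages": messages
            + [{"role": "assistant", "content": detail.model_dump_json()}]
        }
        f.write(json.dumps(record) + "\n")
```

Whether you also bake the `response_model` schema into the user message probably depends on how the fine-tuned model will be called later: include it if you will keep routing requests through Instructor's schema injection, leave it out if the model should learn to emit the JSON from plain prompts.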