IR model under 2024.4.1 and 2022.1 #27601
Comments
@slyalin Okay, thank you!
Theoretically, a 2-year difference can affect performance because the IR conversion/transformation pipeline has changed since then. But I would consider this a bug if it doesn't involve some unavoidable edge cases. So using the old IR, if it works in both runtimes, should be OK. If you can re-convert the IR with the latest version and compare, please do and share your results.
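A minimal sketch of that re-convert-and-compare step, assuming the 2024.x Python API and a static, single-input model; the model and IR paths here are placeholders, not taken from this issue:

```python
import numpy as np
import openvino as ov

core = ov.Core()

# Re-convert the original framework model (ONNX here, only as an example)
# with the current release and save a fresh IR.
new_model = ov.convert_model("model.onnx")        # hypothetical source model
ov.save_model(new_model, "model_2024.xml")

# Load the old IR (converted with 2022.1) and the new IR in the same runtime.
old_compiled = core.compile_model("model_2022.xml", "CPU")
new_compiled = core.compile_model("model_2024.xml", "CPU")

# Feed both models the same random input and compare outputs.
dims = [int(d) for d in old_compiled.input(0).shape]
data = np.random.rand(*dims).astype(np.float32)

old_out = old_compiled(data)[old_compiled.output(0)]
new_out = new_compiled(data)[new_compiled.output(0)]
print("max abs diff:", np.max(np.abs(old_out - new_out)))
```

For latency rather than accuracy, running benchmark_app from the same release against each IR is another way to compare the two.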
@slyalin okay, thank you
@slyalin Hi, I want to ask more about the performance config for CPU. I used to configure OpenVINO for single-threaded inference via the config options in 2022.1.0.
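For context, a minimal sketch of how such a single-thread restriction might look in the 2024.x Python API; the string property keys (NUM_STREAMS, INFERENCE_NUM_THREADS, PERFORMANCE_HINT) and the IR path are assumptions worth verifying against the documentation for the exact release:

```python
import openvino as ov

core = ov.Core()

# Restrict the CPU plugin to one stream and one inference thread,
# with a latency-oriented hint (in place of the older 2022.1-era keys).
compiled = core.compile_model(
    "model.xml",                      # hypothetical IR path
    "CPU",
    {
        "NUM_STREAMS": 1,             # single execution stream
        "INFERENCE_NUM_THREADS": 1,   # single inference thread
        "PERFORMANCE_HINT": "LATENCY",
    },
)
```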
Hi @Jsy0220, |
@dmitry-gorokhov okay, and two more questions:
Hi, I want to upgrade my OpenVINO version from 2022.1.0 to 2024.4.1, and I have some questions:
Is compress_to_fp16 a new option in 2024.4.1 compared to 2022.1.0? Does it affect inference time and memory at runtime?
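A minimal sketch of where compress_to_fp16 appears in the 2024.x conversion flow, assuming a placeholder ONNX source model; the flag controls whether the saved IR stores weights in FP16:

```python
import openvino as ov

# Convert a source model and save it twice: once with FP16 weight
# compression enabled, once with plain FP32 weights, to compare the IRs.
model = ov.convert_model("model.onnx")   # hypothetical source model

ov.save_model(model, "model_fp16.xml", compress_to_fp16=True)
ov.save_model(model, "model_fp32.xml", compress_to_fp16=False)
```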