[Performance]: Why are the inference results in Python different from those in C++? #28188
Labels: category: Python API, performance, support_request
OpenVINO Version
No response
Operating System
Windows System
Device used for inference
CPU
OpenVINO installation
PyPi
Programming Language
C++
Hardware Architecture
x86 (64 bits)
Model used
mobilenet v2
Model quantization
No
Target Platform
No response
Performance issue description
In Python, I ran inference on an image with the MobileNet v2 model, and PyTorch and OpenVINO agreed on the result: Samoyed: 83.0%.
I exported the PyTorch MobileNet model to an OpenVINO IR file using the Python code below.
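The exact export snippet is not preserved in this capture. A minimal sketch of such an export, assuming torchvision's pretrained `mobilenet_v2` and the `openvino` `convert_model` / `save_model` API (the file name `mobilenet_v2.xml` is illustrative):

```python
import torch
import torchvision.models as models
import openvino as ov

# Load a pretrained MobileNet v2 and put it in inference mode.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.eval()

# Trace/convert the PyTorch model to an OpenVINO model using a dummy input.
example = torch.randn(1, 3, 224, 224)
ov_model = ov.convert_model(model, example_input=example)

# Serialize to IR (mobilenet_v2.xml + mobilenet_v2.bin). Note that
# save_model compresses weights to FP16 by default; pass
# compress_to_fp16=False to keep FP32 weights when comparing
# accuracy against the original PyTorch model.
ov.save_model(ov_model, "mobilenet_v2.xml")
```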
Then I ran inference with the C++ code below, and the result was: Samoyed: 68.7778%.
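The exact C++ snippet is likewise not reproduced here. A minimal sketch of an equivalent inference path, assuming OpenCV for image loading, a hypothetical input file `dog.jpg`, and torchvision-style ImageNet preprocessing (a mismatch in exactly these steps, e.g. skipping the mean/std normalization or mixing up BGR vs. RGB channel order, is a common cause of this kind of confidence drop):

```cpp
#include <openvino/openvino.hpp>
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    // Read the exported IR and compile it for CPU.
    ov::Core core;
    auto model = core.read_model("mobilenet_v2.xml");
    auto compiled = core.compile_model(model, "CPU");
    auto request = compiled.create_infer_request();

    // Preprocess like torchvision: BGR->RGB, resize to 224x224,
    // scale to [0,1], then normalize with the ImageNet mean/std.
    cv::Mat img = cv::imread("dog.jpg");  // hypothetical input image
    cv::cvtColor(img, img, cv::COLOR_BGR2RGB);
    cv::resize(img, img, cv::Size(224, 224));
    img.convertTo(img, CV_32F, 1.0 / 255.0);

    const float mean[3]   = {0.485f, 0.456f, 0.406f};
    const float stddev[3] = {0.229f, 0.224f, 0.225f};

    // Pack the HWC OpenCV image into the model's NCHW float32 input tensor.
    ov::Tensor input = request.get_input_tensor();
    float* data = input.data<float>();
    for (int c = 0; c < 3; ++c)
        for (int h = 0; h < 224; ++h)
            for (int w = 0; w < 224; ++w)
                data[(c * 224 + h) * 224 + w] =
                    (img.at<cv::Vec3f>(h, w)[c] - mean[c]) / stddev[c];

    request.infer();

    // The exported model emits raw logits (like PyTorch), so apply
    // softmax manually, then report the most likely class.
    ov::Tensor output = request.get_output_tensor();
    const float* logits = output.data<float>();
    const size_t n = output.get_size();

    const float max_logit = *std::max_element(logits, logits + n);
    std::vector<float> probs(n);
    float sum = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        probs[i] = std::exp(logits[i] - max_logit);
        sum += probs[i];
    }
    size_t best = 0;
    for (size_t i = 0; i < n; ++i) {
        probs[i] /= sum;
        if (probs[i] > probs[best]) best = i;
    }
    std::cout << "class " << best << ": " << probs[best] * 100.0f << "%\n";
    return 0;
}
```

For the scores to match, every preprocessing step the Python pipeline performs (resize method, RGB channel order, 1/255 scaling, mean/std normalization) has to be mirrored exactly in C++.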
The difference in results is quite large. Is there a problem with my C++ code?
Step-by-step reproduction
No response