
[Bug]: Supported precisions set is not available for Convert operation. #27764

Open
starlitsky2010 opened this issue Nov 27, 2024 · 3 comments

OpenVINO Version

Master

Operating System

Android System

Device used for inference

CPU

Framework

ONNX

Model used

mobilenet-v3-tf

Issue description

LD_LIBRARY_PATH=/data/local/tmp ./data/local/tmp/benchmark_app -d CPU -m /data/local/tmp/mobelinet-v3-tf/v3-small_224_1.0_float.xml -hint throughput

Step-by-step reproduction

Following https://github.com/openvinotoolkit/openvino/blob/master/docs/dev/build_android.md, I built OpenVINO for Android (x86_64 ABI) with the ONNX frontend and benchmark_app compilation enabled, using the latest OpenVINO master baseline (commit ID 287ab98).

But when I try the mobilenet-v3-tf example mentioned in the official document above, the error below occurs.
Could the OpenVINO team give some tips about it?

Screenshot 2024-11-27 102841 (attached image)

Relevant log output

[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2025.0.0-17426-287ab9883ac
[ INFO ]
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] Build ................................. 2025.0.0-17426-287ab9883ac
[ INFO ]
[ INFO ]
[Step 3/11] Setting device configuration
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 33.87 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Network inputs:
[ INFO ]     input:0 (node: input) : f32 / [...] / [1,224,224,3]
[ INFO ] Network outputs:
[ INFO ]     MobilenetV3/Predictions/Softmax:0 (node: MobilenetV3/Predictions/Softmax) : f32 / [...] / [1,1001]
[Step 5/11] Resizing model to match image sizes and given batch
[Step 6/11] Configuring input of the model
[ INFO ] Model batch size: 1
[ INFO ] Network inputs:
[ INFO ]     input:0 (node: input) : u8 / [N,H,W,C] / [1,224,224,3]
[ INFO ] Network outputs:
[ INFO ]     MobilenetV3/Predictions/Softmax:0 (node: MobilenetV3/Predictions/Softmax) : f32 / [...] / [1,1001]
[Step 7/11] Loading the model to the device
[ ERROR ] Exception from src/inference/src/cpp/core.cpp:107:
Exception from src/inference/src/dev/plugin.cpp:53:
Check 'jitter != jitters.end()' failed at src/common/snippets/src/lowered/target_machine.cpp:19:
Supported precisions set is not available for Convert operation.


Issue submission checklist

- [X] I'm reporting an issue. It's not a question.
- [X] I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
- [X] There is reproducer code and related data files such as images, videos, models, etc.
starlitsky2010 added the bug (Something isn't working) and support_request labels on Nov 27, 2024
a-sidorova self-assigned this on Nov 27, 2024

a-sidorova commented Nov 27, 2024

@starlitsky2010 Hi! Thank you for reporting the issue!

The error message shows that the exception has been thrown from the Graph Compiler "Snippets".
As a temporary solution, I recommend disabling Snippets tokenization. To do that:

  • Create the config file config.json with the content { "CPU" : {"SNIPPETS_MODE" : "DISABLE"} }
  • Add the key -load_config config.json to your command line for benchmark_app:
LD_LIBRARY_PATH=/data/local/tmp ./data/local/tmp/benchmark_app -d CPU -m /data/local/tmp/mobelinet-v3-tf/v3-small_224_1.0_float.xml -hint throughput -load_config config.json

I believe this should temporarily fix the problem.
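For anyone compiling the model directly through the OpenVINO C++ API rather than benchmark_app, a rough equivalent of the -load_config workaround is sketched below. This is only a sketch: it assumes the CPU plugin accepts "SNIPPETS_MODE" as a plain string property passed at compile_model time, the same way benchmark_app forwards it from config.json; in some builds this may be an internal/debug-only key.

// Hypothetical sketch (not from this thread): disabling Snippets tokenization
// when compiling the model through the OpenVINO C++ API instead of benchmark_app.
// Assumption: the CPU plugin accepts "SNIPPETS_MODE" = "DISABLE" as a string
// property, exactly as it is passed via -load_config config.json.
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // Same model and device as in the reported command line.
    auto compiled = core.compile_model(
        "/data/local/tmp/mobelinet-v3-tf/v3-small_224_1.0_float.xml",
        "CPU",
        ov::AnyMap{{"SNIPPETS_MODE", "DISABLE"}});  // assumed equivalent of config.json
    ov::InferRequest request = compiled.create_infer_request();
    return 0;
}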

@chenhu-wang May I ask you to please take a look at the exception in Snippets? It looks like the Convert op on the model input was not transformed to the Snippets dialect (ConvertTruncation or ConvertSaturation). Thank you in advance!
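For context on where the check fails, below is a simplified illustration (not the actual OpenVINO source) of the lookup in src/common/snippets/src/lowered/target_machine.cpp: the Snippets target machine keeps a map of code emitters ("jitters") keyed by supported op types, and a plain Convert is not registered there because it is expected to be rewritten to ConvertTruncation or ConvertSaturation before code generation.

// Simplified, illustrative reconstruction of the failing check; type names and
// the map key type are placeholders, not the real OpenVINO declarations.
#include <map>
#include <stdexcept>
#include <string>

// Placeholder for the per-op emitter factory and supported-precisions callback.
struct Jitter {};

// Only Snippets-dialect ops are registered; the entries here are illustrative.
std::map<std::string, Jitter> jitters = {
    {"ConvertTruncation", {}},
    {"ConvertSaturation", {}},
    // ... other supported ops ...
};

Jitter get_jitter(const std::string& op_type) {
    auto jitter = jitters.find(op_type);
    if (jitter == jitters.end())  // the failing check: 'jitter != jitters.end()'
        throw std::runtime_error(
            "Supported precisions set is not available for " + op_type + " operation.");
    return jitter->second;
}

// get_jitter("Convert") throws exactly the reported error: a plain Convert on the
// model input reached code generation without being lowered to the dialect.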

rkazants added the category: CPU (OpenVINO CPU plugin) label on Nov 27, 2024

chenhu-wang commented Nov 28, 2024

@starlitsky2010, could you please provide the CPU info you used and the full command line to reproduce? I see you set a layout, but it is not fully displayed in the picture. Thanks!

@chenhu-wang

Hi @starlitsky2010, I cannot reproduce this locally. Anyway, I analyzed the code and the error message and identified a possible cause of the error. Could you please try the PR below and check whether it fixes your issue? If not, could you please provide detailed info: the host machine and OS you built the binary on, the target machine you run the app on, and the full command line?
#27948
