I cannot compress the model #29
Hey @dannadori, thanks for your interest. What version of TensorFlow are you using? I think back when I implemented the pruner, it assumed the batch norm layers all had the op name FusedBatchNorm, so you could try replacing that with FusedBatchNormV3. Let me know if that works for you, but if not, I can take a look later in the week.
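For reference, a minimal sketch of the kind of change being suggested. The helper names and the way the graph is walked here are assumptions for illustration; the actual logic in filter_pruner.py may be organized differently.

```python
# Sketch only (assumed structure, not the actual filter_pruner.py code):
# treat both the older 'FusedBatchNorm' op name and the newer
# 'FusedBatchNormV3' op name as batch norm when walking the GraphDef.
BATCH_NORM_OPS = ("FusedBatchNorm", "FusedBatchNormV3")

def is_batch_norm(node):
    """Return True if a GraphDef NodeDef is a fused batch-norm op."""
    return node.op in BATCH_NORM_OPS

def batch_norm_consumers(graph_def, conv_node_name):
    """Find batch-norm nodes that consume the output of a given conv node."""
    consumers = []
    for node in graph_def.node:
        inputs = [inp.split(":")[0] for inp in node.input]
        if conv_node_name in inputs and is_batch_norm(node):
            consumers.append(node)
    return consumers
```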
Thanks @oandrienko, I tried that. That is, I replaced FusedBatchNorm with FusedBatchNormV3 in the two files.
So, I inserted
Any idea?
Can you let me know what version of TensorFlow you are using?
1.15.2
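As an aside, one way to check which fused batch-norm op a particular TensorFlow 1.x build emits is to construct a tiny graph and list the op types it contains. This is only a diagnostic sketch and is not part of the repository.

```python
# Diagnostic sketch (not part of this repo): list the fused batch-norm op
# types emitted by the installed TensorFlow 1.x build. On TF 1.15 this is
# expected to include 'FusedBatchNormV3'.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)

graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, [None, 32, 32, 16])
    tf.layers.batch_normalization(x, fused=True, training=False)

print(sorted({node.op for node in graph.as_graph_def().node
              if "BatchNorm" in node.op}))
```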
Hey @dannadori, sorry for the delay, I've been super busy. I can try to check it this weekend, I think.
Sorry, I couldn't do anything about this because I was very busy.
Hi @oandrienko, I ran into the same problem. Thank you in advance.
Thanks for your great work.
I tried training on my own dataset, referring to
https://github.com/oandrienko/fast-semantic-segmentation/blob/master/docs/icnet.md
Stage 1 works fine, but I cannot compress the model at stage 2.
I got this error.
I tried to find out which node is problematic by inserting print(next_node.op) into filter_pruner.py, and the output is 'FusedBatchNormV3'.
Do you have any idea how to work around this?
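For context, the debugging described above amounts to something like the sketch below. The surrounding function is a stand-in; the real traversal lives inside filter_pruner.py and is not reproduced here, and next_node is assumed to be a NodeDef from the GraphDef being pruned.

```python
# Rough illustration of the debug print described above; the real graph
# traversal in filter_pruner.py is not quoted here.
def debug_print_batch_norm_ops(graph_def):
    for next_node in graph_def.node:        # next_node: a NodeDef
        if "BatchNorm" in next_node.op:
            print(next_node.op)             # prints 'FusedBatchNormV3' on TF 1.15
```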