Because inplace_abn (a third-party library this project uses) has bugs when training on 8 GPUs with PyTorch 0.4, training on 8 GPUs requires updating to the newest BN from https://github.com/mapillary/inplace_abn.
But the newest BN now requires DistributedDataParallel instead of DataParallel. So could you please create a branch that uses the newest BN with PyTorch 1.0, or give me some advice on how to change this project to make it compatible with the newest BN?
Thank you very much!
Good suggestion, but I'm afraid I don't have the time to do this.
You can simply replace the current inplace-ABN with the newest one and add torch.distributed support to train.py. You can find more information in the PyTorch docs and in some examples.
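For reference, a minimal sketch of the torch.distributed changes this would need in train.py, assuming the PyTorch 1.0-era `torch.distributed.launch` helper (one process per GPU). The model and dataset below are tiny stand-ins, not this project's real ones:

```python
import argparse
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

# torch.distributed.launch passes --local_rank to each spawned process.
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)
args = parser.parse_args()

# One process per GPU; the launcher sets MASTER_ADDR/MASTER_PORT env vars.
dist.init_process_group(backend="nccl", init_method="env://")
torch.cuda.set_device(args.local_rank)

# Stand-ins for the project's real model and dataset (hypothetical).
model = torch.nn.Conv2d(3, 19, kernel_size=1).cuda()
model = torch.nn.parallel.DistributedDataParallel(
    model, device_ids=[args.local_rank], output_device=args.local_rank)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

dataset = TensorDataset(torch.randn(64, 3, 33, 33), torch.randn(64, 19, 33, 33))
sampler = DistributedSampler(dataset)   # shards the data across the 8 processes
loader = DataLoader(dataset, batch_size=4, sampler=sampler)  # per-GPU batch size

for epoch in range(2):
    sampler.set_epoch(epoch)            # reshuffle differently each epoch
    for images, targets in loader:
        out = model(images.cuda(non_blocking=True))
        loss = torch.nn.functional.mse_loss(out, targets.cuda(non_blocking=True))
        optimizer.zero_grad()
        loss.backward()                 # DDP all-reduces gradients here
        optimizer.step()
```

You would launch it with something like `python -m torch.distributed.launch --nproc_per_node=8 train.py`; note that with DistributedDataParallel the batch size in the DataLoader is per process (per GPU), not the total.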
I have tried the newest Inplace-abn in my model (PSPNet), and you can follow the script on this page: https://oldpan.me/archives/pytorch-to-use-multiple-gpus. Meanwhile, make sure that you have followed the steps in the PyTorch docs.
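For what it's worth, a small sketch of what the layer swap might look like, assuming the pip-installable `inplace-abn` package (which needs PyTorch >= 1.0); the block below is illustrative, not this repo's actual layer definitions:

```python
import torch.nn as nn
from inplace_abn import InPlaceABNSync  # new package, replaces the bundled libs/

# A conv block like the ones in this project, but using the new sync BN.
# InPlaceABNSync synchronizes batch statistics across processes, so
# dist.init_process_group() must already have been called (see sketch above).
block = nn.Sequential(
    nn.Conv2d(256, 512, kernel_size=3, padding=1, bias=False),
    InPlaceABNSync(512, activation="leaky_relu", activation_param=0.01),
)
```

Since the activation is fused into the BN layer, also remember to drop any separate ReLU that followed the old BN, as in the original InPlaceABNSync usage.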