Hi there, thank you for your groundbreaking work.

I'm trying to develop a rotation-equivariant version of this neural network. I successfully replaced every layer except [torch.nn.LayerNorm](https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html), for which I substituted InnerBatchNorm instead. Performance is noticeably lower than the classical version's (~15% lower on ImageNet classification), and I suspect this substitution is the culprit.
Is there a plan to support this norm layer? I'm willing to contribute, but I think I need guidance.
Cheers.
Definitely interesting. I feel I need to spend some more time with the theory to understand how to implement it myself. Do you think it would be useful to take a look at our layers together?
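For what it's worth, here is a minimal sketch in plain PyTorch (deliberately not tied to any particular equivariance library) of one way to build a LayerNorm-style substitute that preserves rotation equivariance: compute layer-norm statistics over the rotation-invariant *norms* of each field of channels and rescale each field by a scalar, leaving its direction untouched. The class name `FieldLayerNorm` and the assumption that channels are grouped into equal-size fields are mine, not from this repository.

```python
import torch
import torch.nn as nn


class FieldLayerNorm(nn.Module):
    """Hypothetical equivariance-preserving LayerNorm substitute.

    Standard LayerNorm normalizes each scalar channel independently, which
    breaks equivariance for non-trivial representations. Here, every group of
    `field_size` channels is treated as one vector ("field"), layer-norm
    statistics are computed over the rotation-invariant field norms, and each
    field is rescaled by a scalar factor. Because only the (invariant) norms
    are touched, the construction commutes with any orthogonal group action
    on the fields.
    """

    def __init__(self, num_fields: int, field_size: int, eps: float = 1e-5):
        super().__init__()
        self.num_fields = num_fields
        self.field_size = field_size
        self.eps = eps
        # one learnable scale/shift per field, applied to the norms only
        self.weight = nn.Parameter(torch.ones(num_fields))
        self.bias = nn.Parameter(torch.zeros(num_fields))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_fields * field_size, H, W)
        b, c, h, w = x.shape
        x = x.view(b, self.num_fields, self.field_size, h, w)
        # per-field norms are invariant under rotations acting within fields
        norms = x.norm(dim=2, keepdim=True)            # (b, F, 1, H, W)
        mean = norms.mean(dim=1, keepdim=True)
        var = norms.var(dim=1, keepdim=True, unbiased=False)
        normed = (norms - mean) / torch.sqrt(var + self.eps)
        new_norms = (normed * self.weight.view(1, -1, 1, 1, 1)
                     + self.bias.view(1, -1, 1, 1, 1))
        # rescale each field vector to its new norm; direction is preserved
        x = x * (new_norms / (norms + self.eps))
        return x.view(b, c, h, w)
```

The design choice that makes this equivariant is that the learnable affine map acts only on scalar, invariant quantities; whether it actually recovers LayerNorm's ~15% gap here is an open question that would need to be tested empirically.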