PyTorch Lightning tests shouldn't hit network #551
Comments
Closes #547 Ref #551 Signed-off-by: Ben Firshman <[email protected]>
Maybe some kind of fake data could be generated with NumPy or PyTorch instead of using a "real" dataset?
Yeah, this doesn't need to do anything real.
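For example, a minimal sketch of what that could look like (shapes chosen to mimic MNIST; none of these names come from the repo):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Random tensors shaped like MNIST: 1x28x28 images, 10 classes.
images = torch.randn(16, 1, 28, 28)
labels = torch.randint(0, 10, (16,))
fake_loader = DataLoader(TensorDataset(images, labels), batch_size=4)
```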
Stopgap fix for #551 Signed-off-by: Ben Firshman <[email protected]>
These files are pretty small. We could just include them in the repo. Ideally, though, we'd have a test that didn't touch real data at all. https://github.com/golbin/TensorFlow-MNIST/tree/master/mnist/data
A pytest fixture with dummy generated train data could be nice, as it could easily be reused in future test cases.
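A rough sketch of such a fixture, building on the random-tensor idea above (fixture name and sizes are made up for illustration):

```python
import pytest
import torch
from torch.utils.data import DataLoader, TensorDataset

@pytest.fixture
def dummy_train_loader() -> DataLoader:
    # Small random dataset shaped like MNIST; never touches the network.
    images = torch.randn(8, 1, 28, 28)
    labels = torch.randint(0, 10, (8,))
    return DataLoader(TensorDataset(images, labels), batch_size=4)
```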
Currently the PyTorch Lightning tests download MNIST and train a real model. We shouldn't do that just to test that the callback works -- they should run a fake training process of some kind.
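One possible shape for such a fake training run, assuming a throwaway LightningModule trained on random tensors (everything here is hypothetical, not code from the repo; Trainer flags may vary by PyTorch Lightning version):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class DummyModel(pl.LightningModule):
    """Tiny model so the callback can be exercised without real data."""
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.layer(x.view(x.size(0), -1))

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

def test_callback_runs_without_network():
    data = DataLoader(
        TensorDataset(torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))),
        batch_size=4,
    )
    # fast_dev_run runs a single batch, so the callback is exercised quickly.
    trainer = pl.Trainer(fast_dev_run=True)
    trainer.fit(DummyModel(), data)
```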