Hello!
I am training on my own dataset, organized in the format you describe:

├── images               # xx.jpg example
│   ├── train2017
│   │   ├── 000001.jpg
│   │   ├── 000002.jpg
│   │   └── 000003.jpg
│   └── val2017
│       ├── 100001.jpg
│       ├── 100002.jpg
│       └── 100003.jpg
└── labels               # xx.txt example
    ├── train2017
    │   ├── 000001.txt
    │   ├── 000002.txt
    │   └── 000003.txt
    └── val2017
        ├── 100001.txt
        ├── 100002.txt
        └── 100003.txt

Run command:
python train.py --data data/fruit.yaml --cfg models/v5Lite-e.yaml --weights v5lite-e.pt --batch-size 64 --device 0 --name z_frui

The error is:
Image sizes 640 train, 640 test
Using 8 dataloader workers
Logging results to runs/train/z_fruit6
Starting training for 300 epochs...

     Epoch   gpu_mem       box       obj       cls     total    labels  img_size
     0/299    0.902G   0.09024   0.02284   0.03911    0.1522        75       640: 100%|██████████| 63/63 [00:14<00:00, 4.42it/s]
               Class     Images     Labels          P          R     mAP@.5  mAP@.5:.95:   0%|          | 0/4 [00:00<?, ?it/s]

Traceback (most recent call last):
  File "train.py", line 544, in <module>
    train(hyp, opt, device, tb_writer)
  File "train.py", line 355, in train
    results, maps, times = test.test(data_dict,
  File "/home/zhongzw/fruit_dataset/YOLOv5-Lite/test.py", line 110, in test
    out, _, train_out = model(img, augment=augment)  # inference and training outputs
ValueError: not enough values to unpack (expected 3, got 2)

Do you know what causes the model's outputs to differ from what test.py expects here?

parser = argparse.ArgumentParser()
parser.add_argument('--weights', type=str, default='weights/v5lite-s.pt', help='initial weights path')
parser.add_argument('--cfg', type=str, default='models/v5ite-s.yaml', help='model.yaml path')
parser.add_argument('--data', type=str, default='data/coco.yaml', help='data.yaml path')
parser.add_argument('--hyp', type=str, default='data/hyp.scratch.yaml', help='hyperparameters path')
parser.add_argument('--epochs', type=int, default=300)
parser.add_argument('--batch-size', type=int, default=128, help='total batch size for all GPUs')
parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='[train, test] image sizes')
parser.add_argument('--rect', action='store_true', help='rectangular training')
parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
parser.add_argument('--notest', action='store_true', help='only test final epoch')
parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
parser.add_argument('--evolve', action='store_true', help='evolve hyperparameters')
parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
parser.add_argument('--cache-images', action='store_true', help='cache images for faster training')
parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
parser.add_argument('--device', default='0', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')
parser.add_argument('--workers', type=int, default=8, help='maximum number of dataloader workers')
parser.add_argument('--project', default='runs/train', help='save to project/name')
parser.add_argument('--entity', default=None, help='W&B entity')
parser.add_argument('--name', default='exp', help='save to project/name')
parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
parser.add_argument('--quad', action='store_true', help='quad dataloader')
parser.add_argument('--linear-lr', action='store_true', help='linear LR')
parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
parser.add_argument('--upload_dataset', action='store_true', help='Upload dataset as W&B artifact table')
parser.add_argument('--bbox_interval', type=int, default=-1, help='Set bounding-box image logging interval for W&B')
parser.add_argument('--save_period', type=int, default=-1, help='Log model after every "save_period" epoch')
parser.add_argument('--artifact_alias', type=str, default="latest", help='version of dataset artifact to be used')
opt = parser.parse_args()

I did not modify this part.
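The error itself is plain tuple unpacking: the call on test.py line 110 returns two values in this code path, while the left-hand side expects three. A minimal sketch of the mechanism, using placeholder return values rather than the repo's actual outputs:

def forward(img, augment=False):
    # Stand-in for an eval-mode forward pass that returns (inference output, training output)
    return 'inference_out', 'train_out'

try:
    out, _, train_out = forward(None)   # three targets, but only two values come back
except ValueError as e:
    print(e)                            # not enough values to unpack (expected 3, got 2)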
By the way, I changed the failing line out, _, train_out = model(img, augment=augment) to out, train_out = model(img, augment=augment), and training now runs. I'm not sure whether my change is correct.
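One way to make that line tolerant of either return shape (a sketch only; unpack_model_outputs is a hypothetical helper, not part of the repo) is to branch on how many values the forward pass returns:

def unpack_model_outputs(pred):
    # Accepts either (out, train_out) or (out, extra, train_out)
    # and always hands back (out, train_out).
    if len(pred) == 3:
        out, _, train_out = pred
    else:
        out, train_out = pred
    return out, train_out

# Usage inside test.py, replacing the failing line:
# out, train_out = unpack_model_outputs(model(img, augment=augment))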
Your change should be fine. This is a small leftover from an earlier heatmap experiment that I forgot to revert.
Use the v1.4 release under Tags; that version does not have this problem.