I have searched the YOLOv5 issues and discussions and found no similar questions.
Question
I am wondering how YOLOv8 (or YOLO11) would handle detecting, e.g., two objects of different classes with the same center coordinates.
I remember that when I implemented part of the YOLOv3 training loop in Keras back in the day, each output "pixel" was encoded to detect only a single object. The output grid was always 1/32 the size of the input, so this collision could occur from time to time, but rarely.
Today I am considering using YOLOv8 for a new project in which two objects would often have roughly the same center. If the output grid is still 1/32 of the input resolution, then a 640x640 network has a 20x20 output grid, and any two object centers falling within the same 32x32-pixel cell would end up in the same output grid cell.
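To make the collision concrete, here is a minimal sketch of the cell-index math for the single-scale, one-object-per-cell scheme described above (the 640-input / stride-32 numbers come from this scenario; the function name and example coordinates are illustrative, not from any YOLO codebase):

```python
def grid_cell(cx, cy, stride=32):
    """Map an object center (in pixels) to its (col, row) grid-cell index."""
    return int(cx // stride), int(cy // stride)

# Two different-class objects whose centers are only ~10 px apart:
a = grid_cell(325, 330)
b = grid_cell(335, 338)
print(a, b)          # both map to cell (10, 10)
print(a == b)        # True: they compete for the same output slot at stride 32
```

If the head can encode only one object per cell, the second object in that cell has no slot left and would be dropped from the training target.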
Does anyone know whether this is still an issue today for YOLOv8 / YOLO11, or only for the "old" YOLOv3?
Additional
No response
👋 Hello @dk-teknologisk-bgd, thank you for your interest in YOLOv5 🚀! Your question raises an interesting point about object detection with overlapping or similar center coordinates. While Ultralytics YOLO models have evolved significantly since YOLOv3, specific technical nuances like the one you mentioned (object encoding and grid resolution) can still depend on various factors, such as the model architecture and the training process.
If this is a ❓ Question regarding custom training or inference behavior, please share additional details about your project setup, including any datasets or use cases you're experimenting with. Verifying the trained model's outputs in situations similar to your described scenario could also yield critical insights.
If this involves a 🐛 Bug or unexpected behavior with YOLOv5, please provide a minimum reproducible example (MRE) to help us assist you more efficiently.
Requirements
Ensure you have Python>=3.8.0 installed with all dependencies configured correctly, including an appropriate version of PyTorch>=1.8. Begin with the repository’s setup instructions to confirm everything is operational.
Environments
YOLOv5 is compatible with various testing and training environments, including notebooks with free GPUs, Docker setups, and cloud services with pre-installed machine learning dependencies. For your specific concerns about overlapping objects, experimenting with one of these configurations could provide additional clarity.
Status
Please check the repository's Continuous Integration (CI) badge to confirm that all tests are passing. CI ensures compatibility and proper functionality across different use cases, including training, validation, and inference processes.
This is an automated response to guide you through initial troubleshooting steps. An Ultralytics engineer will also review your question and offer further assistance soon ✨.
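As background on why grid resolution matters less than in single-scale YOLOv3: YOLOv8-style detection heads predict at three feature-map strides (8, 16, and 32 for a 640x640 input), so nearby centers that collide at the coarsest level are often still separated at a finer one. The actual label assignment is more involved (YOLOv8 uses a task-aligned dynamic assigner rather than a fixed one-cell-per-object rule), but a hedged sketch of the per-stride cell math, assuming the same illustrative coordinates as above, shows the effect:

```python
def cells_per_stride(center, strides=(8, 16, 32)):
    """Return the (col, row) cell index of a center at each head stride."""
    cx, cy = center
    return {s: (int(cx // s), int(cy // s)) for s in strides}

a = cells_per_stride((325, 330))
b = cells_per_stride((335, 338))
for s in (8, 16, 32):
    status = "collide" if a[s] == b[s] else "separate"
    print(f"stride {s:2d}: {a[s]} vs {b[s]} -> {status}")
# The two centers share a cell only at stride 32; at strides 8 and 16
# they land in different cells, so the finer heads can still tell them apart.
```

This is only an illustration of the cell geometry, not of the full assignment logic; verifying your trained model on real overlapping-center examples remains the most reliable check.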