YOLO-NAS and YOLOv9¶
YOLO-NAS¶
Deci researchers have released a new object detection model, YOLO-NAS, that outperforms SOTA object detection models (yes, we are looking at YOLOv8) in small object detection, localization accuracy, and performance per computation ratio.
YOLO-NAS is available under an open source license with pre-trained weights available for non-commercial use in SuperGradients, Deci’s PyTorch-based open source computer vision training library. With SuperGradients, users can train models from scratch or fine-tune existing ones by leveraging advanced built-in training techniques such as distributed data parallelism, exponential moving average, automatic mixed-precision, and quantization-aware training.
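One of those built-in techniques, keeping an exponential moving average (EMA) of the model weights, is simple to sketch. The loop and decay value below are illustrative only, not SuperGradients' actual implementation:

```python
# Minimal sketch of an exponential moving average (EMA) of model weights.
# After each optimizer step the shadow weights move a small fraction toward
# the current weights; evaluation then uses the smoother shadow copy.
def ema_update(shadow, weights, decay=0.999):
    return [decay * s + (1.0 - decay) * w for s, w in zip(shadow, weights)]

weights = [0.0]
shadow = list(weights)
for step in range(3):
    weights = [w + 1.0 for w in weights]  # stand-in for an optimizer step
    shadow = ema_update(shadow, weights, decay=0.9)
```

Because the shadow copy lags behind the raw weights, it smooths out step-to-step noise in training.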
In previous versions of YOLO, human experts manually designed neural network structures, relying on their experience and intuition. However, this method – which requires exploring huge design spaces with countless possible architectures – remains overly cumbersome and time-consuming.
YOLO-NAS is a new baseline model developed by Deci-AI. It is a game-changer in the world of object detection, providing the best balance between accuracy and latency.
Deci-AI uses AutoNAC technology, an optimization engine developed by Deci. AutoNAC applies Neural Architecture Search (NAS) to refine the architecture of an already trained model. This is done to improve the model’s performance when run on specific hardware, while maintaining its original accuracy as a baseline. By doing so, Deci-AI can maximize hardware utilization and make its deep learning acceleration platform even better.
NAS (Neural Architecture Search)¶
To understand YOLO-NAS, it is necessary to explore how the Deci research team discovered the state-of-the-art architecture. This involves a brief introduction to the concept of NAS and AutoNAC.
NAS (Neural Architecture Search) is a subfield of AutoML (automated machine learning). NAS involves developing machine learning systems that automatically design and configure deep neural network architectures, with the goal of outperforming counterparts designed, configured, and built manually by human researchers.
Advancing NAS research facilitates the efficient discovery of high-performance neural network architectures through a systematic process that considers factors such as inference speed, available computational resources, architectural complexity, and prediction accuracy. Successful design and development efforts of new architectures produced through NAS techniques have surpassed human-engineered architectures in tasks such as natural language processing, object detection, and image classification.
A significant benefit of NAS techniques is automation: by simplifying the investigative parts of the machine learning process, they reduce the human hours the investigation requires. They also counter scarce domain expertise by systematically optimizing multiple parameters and exploring many internal network configurations across a vast search space, eventually arriving at configurations that could otherwise be reached only through human expertise, creativity, and intuition.
NAS techniques are even more relevant today as they focus on optimizing factors such as computational resources, efficiency, accuracy, power consumption, and memory usage, which are crucial for adapting architectures to edge devices (e.g., smartphones) or real-time scenarios.
Notably, models designed to run on edge devices and compute-scarce environments, such as MobileNetV3 and EfficientDet, were discovered through automated searches over candidate architectures that improved state-of-the-art performance in computer vision tasks.
NAS offers significant benefits to researchers, but traversing a search space of possible deep neural network architectures and configurations spanning millions of parameters, while accounting for computing resources, target accuracy metrics, and more, demands extensive computation. The large computational resources and specialized knowledge required to execute NAS techniques limit their effective use to a small number of organizations.
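The core loop of any NAS method can be illustrated with a toy random search. Everything below is invented for illustration: the search space is tiny, and the accuracy/latency model is a stand-in for the expensive training-and-benchmarking step a real NAS system performs:

```python
import random

# Toy illustration of architecture search: randomly sample configurations
# from a small search space and keep the one with the best score under a
# made-up accuracy/latency model. Real search spaces and evaluators are
# vastly larger and costlier; every number here is invented.
random.seed(0)
SPACE = {"depth": [2, 4, 8], "width": [32, 64, 128], "block": ["conv", "rep", "bottleneck"]}

def evaluate(cfg):
    # Pretend accuracy grows with capacity while latency penalizes it.
    capacity = cfg["depth"] * cfg["width"]
    accuracy = capacity / (100.0 + capacity)
    latency = capacity / 256.0
    return accuracy - 0.1 * latency  # reward accuracy, penalize latency

best_cfg = max(
    ({k: random.choice(v) for k, v in SPACE.items()} for _ in range(50)),
    key=evaluate,
)
```

Real NAS methods replace blind random sampling with smarter strategies (reinforcement learning, evolution, gradient-based search), but the objective has the same shape: a score that trades accuracy off against cost.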
Automated Neural Architecture Construction (AutoNAC)¶
Automated Neural Architecture Construction (AutoNAC) is Deci’s proprietary NAS technology that efficiently explores a vast search space of diverse architectural configurations and structures, considering comprehensive block types, block counts, and channel allocations.
AutoNAC facilitated the discovery of the innovative YOLO-NAS architecture and its variants (YOLO-NAS-S, YOLO-NAS-M, and YOLO-NAS-L architectures) by searching for an optimal model architecture that combined the fundamental architectural contributions of YOLO variants and incorporated several innovative neural components from the Deci research team that enabled optimized training and inference. This brought the number of possible neural network architectures to 10¹⁴. To put the vastness of the search space into perspective, that’s larger than the number of stars in our galaxy.
The search space that led to the discovery of the YOLO-NAS architecture incorporated computing principles, design choices, and considerations that prioritized efficiency, scalability, robustness, and interpretability.
Efficiency refers to the modern requirement for deep learning models to meet the computational and storage constraints of edge devices such as smartphones, while also meeting performance requirements for inference speed and accuracy. The YOLO-NAS architecture delivers high-accuracy object detection without requiring large computational resources. AutoNAC technology traverses the vast architectural search space for architectures that balance latency (time taken to receive inference results) and throughput (image frames processed within a specific time period).
The robustness of the YOLO-NAS model is evident in its resilience to changes in input data, ability to handle noise or uncertainty, and maintain high accuracy rates even during post-training quantization. Furthermore, the principles of single-stage detection, gridded image division, bounding box prediction, multi-scale predictions, and non-maximum suppression—which are essential to YOLO architectures—equip YOLO-NAS with the robustness to effectively detect objects of various sizes in different scenarios.
And now we have reached the ‘Efficiency Frontier’. The efficiency frontier refers to the search space covering the architecture that presents an optimal balance of latency, throughput, and accuracy. AutoNAC facilitates the discovery of new neural network architectures by considering hardware availability, performance targets, quantization, etc. The efficiency frontier contains the YOLO-NAS variants YOLO-NAS-S, YOLO-NAS-M, and YOLO-NAS-L; these model variants address different computational and hardware constraints, introducing computational resource-based scalability to the YOLO-NAS model.
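The idea of an efficiency frontier can be sketched as a Pareto-front computation over candidate models. The model names and (latency, mAP) numbers below are invented; only the selection logic matters:

```python
# Sketch of an "efficiency frontier": from a set of (latency_ms, mAP)
# candidates, keep only the Pareto-optimal ones, i.e. models for which no
# other model is both at least as fast and at least as accurate.
candidates = {
    "A": (3.2, 47.5), "B": (4.0, 47.0), "C": (5.5, 51.1),
    "D": (7.8, 52.2), "E": (8.0, 50.0),
}

def pareto_frontier(models):
    frontier = {}
    for name, (lat, acc) in models.items():
        dominated = any(
            o_lat <= lat and o_acc >= acc and (o_lat, o_acc) != (lat, acc)
            for o_lat, o_acc in models.values()
        )
        if not dominated:
            frontier[name] = (lat, acc)
    return frontier
```

Here "B" and "E" drop off the frontier because another candidate beats them on both axes; the survivors are exactly the models worth offering at different compute budgets, which is the role the S/M/L variants play for YOLO-NAS.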
This Efficiency Frontier graph presents a comparison between the YOLO-NAS architecture and other YOLO architectures based on object detection performance on the COCO2017 validation dataset.
Attention Mechanism¶
As a machine learning practitioner, you are probably familiar with the attention mechanism, popularized by the paper that introduced the Transformer neural network: Attention Is All You Need.
The YOLO-NAS architecture incorporates the attention mechanism to selectively focus on the portions of an image that contain target object(s) relevant to the problem domain or use case, reducing the influence of irrelevant information such as non-target objects and image background. This refined focus significantly enhances the model's object detection capabilities.
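The underlying mechanism can be sketched as scaled dot-product attention, the operation introduced in "Attention Is All You Need." This is a from-scratch illustration on plain lists, not YOLO-NAS's actual attention modules:

```python
import math

# Minimal scaled dot-product attention: each query is compared against all
# keys, the similarity scores are softmax-normalized into weights, and the
# output is the weight-mixed combination of the values.
def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

A query that closely matches one key receives nearly all of that key's value and almost none of the others, which is exactly the "focus on the relevant region, suppress the rest" behavior described above.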
How was YOLO-NAS trained?¶
YOLO-NAS undergoes a multi-phase training process that involves pre-training on the Objects365 dataset, using the COCO dataset to generate pseudo-labeled data, and incorporating Knowledge Distillation (KD) and Distributional Focal Loss (DFL) techniques.
Pre-training on Objects365, which consists of 2 million images and 365 categories, runs for 25–40 epochs on 8 NVIDIA RTX A5000 GPUs. The COCO dataset provides an additional 123K unlabeled images, which are used to generate pseudo-labeled data to train the model.
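Confidence-based pseudo-labeling can be sketched as a simple filter: a trained "teacher" predicts on unlabeled images, and only detections above a confidence threshold are kept as labels for the next training phase. The threshold, image IDs, and predictions below are invented for illustration:

```python
# Keep only high-confidence teacher detections as pseudo-labels.
# Each detection is (box, class, confidence); the 0.7 cutoff is illustrative.
def pseudo_label(predictions, threshold=0.7):
    labels = {}
    for image_id, detections in predictions.items():
        kept = [(box, cls) for box, cls, conf in detections if conf >= threshold]
        if kept:
            labels[image_id] = kept
    return labels

teacher_out = {
    "img_001": [((10, 20, 50, 60), "person", 0.91), ((5, 5, 15, 15), "person", 0.40)],
    "img_002": [((0, 0, 30, 30), "person", 0.55)],
}
```

Images whose detections all fall below the threshold contribute nothing, which keeps low-quality guesses out of the training signal.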
The KD technique is applied by adding a term to the loss function, allowing the student network to mimic both the classification and DFL predictions of the teacher network. Meanwhile, DFL is used to discretize the box predictions into finite values, learning box regression as a classification task and predicting distributions over these values, which are then converted into final predictions via a weighted sum.
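The DFL decoding step can be sketched in a few lines: the network emits logits over discrete bins, and the final coordinate is the softmax-weighted sum (the expected value) over the bin indices. The bin count and logits below are invented; real implementations apply this per box side over batched tensors:

```python
import math

# DFL-style box decoding: turn a predicted distribution over discrete bins
# (here 0..7) back into a continuous offset via its expected value.
def dfl_decode(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    probs = [e / sum(exps) for e in exps]
    return sum(i * p for i, p in enumerate(probs))

logits = [0.0, 0.5, 2.0, 4.0, 2.0, 0.5, 0.0, 0.0]  # peaked around bin 3
offset = dfl_decode(logits)
```

Because the prediction is a full distribution rather than a single number, the model can express uncertainty about a box edge, and the expected value still yields a sub-bin-accurate coordinate.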
These training methods enable YOLO-NAS to achieve high accuracy and superior object detection capabilities.
How good is YOLO-NAS?¶
In terms of pure numbers, YOLO-NAS is ~0.5 mAP points more accurate and 10-20% faster than equivalent variants of YOLOv8 and YOLOv7.
YOLO-NAS Implementation¶
Note: the installation may take a few minutes, and you will need to restart the runtime after it completes.
!pip install super-gradients==3.1.0
!pip install imutils
!pip install pytube --upgrade
import os
import json
import math
import shutil

import cv2
import numpy as np
import pandas as pd
import requests
import torch
from matplotlib import pyplot as plt
from PIL import Image
from skimage import io
from sklearn import model_selection

from super_gradients.training import Trainer, dataloaders, models
from super_gradients.training.dataloaders.dataloaders import (
    coco_detection_yolo_format_train, coco_detection_yolo_format_val
)
from super_gradients.training.losses import PPYoloELoss
from super_gradients.training.metrics import DetectionMetrics_050
from super_gradients.training.models.detection_models.pp_yolo_e import (
    PPYoloEPostPredictionCallback
)

from google.colab import drive
drive.mount('/content/drive')

plt.rcParams["figure.figsize"] = (12, 12)
path_labels = '/content/drive/MyDrive/Datasets/Letuces dataset/B&F1/gt/gt.txt'
path_imgs = '/content/drive/MyDrive/Datasets/Letuces dataset/B&F1/img'
images_files = os.listdir(path_imgs)

# gt.txt is comma-separated with no header; keep only the first six columns
data = pd.read_csv(path_labels, header=None)
data = data.rename(columns={0: 'Image_ID', 1: 'ID', 2: 'x', 3: 'y', 4: 'w', 5: 'h'})
data = data.drop(columns=[6, 7, 8])

# Match the numeric frame IDs to the zero-padded image filenames, e.g. 1 -> '000001.png'
data['Image_ID'] = data['Image_ID'].astype(str).str.zfill(6) + '.png'
data
| | Image_ID | ID | x | y | w | h |
|---|---|---|---|---|---|---|
| 0 | 000001.png | 8 | 471 | 21 | 106 | 113 |
| 1 | 000001.png | 4 | 476 | 683 | 103 | 105 |
| 2 | 000001.png | 7 | 206 | 134 | 87 | 87 |
| 3 | 000001.png | 3 | 214 | 707 | 96 | 92 |
| 4 | 000001.png | 5 | 224 | 448 | 85 | 82 |
| ... | ... | ... | ... | ... | ... | ... |
| 4543 | 000540.png | 5 | 282 | 463 | 84 | 80 |
| 4544 | 000540.png | 4 | 524 | 705 | 104 | 106 |
| 4545 | 000540.png | 3 | 261 | 720 | 97 | 88 |
| 4546 | 000540.png | 2 | 514 | 954 | 84 | 95 |
| 4547 | 000540.png | 1 | 265 | 982 | 78 | 78 |
4548 rows × 6 columns
# Draw the ground-truth boxes on one image: x, y are the top-left corner,
# w, h are the box width and height, all in pixels
i = 10
image_path = os.path.join(path_imgs, images_files[i])
id_img = images_files[i]
objects = data[data['Image_ID'] == id_img]

img = io.imread(image_path)
dh, dw, _ = img.shape
print(dh, dw)

for _, row in objects.iterrows():
    x, y, w, h = row['x'], row['y'], row['w'], row['h']
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 3)
plt.imshow(img)
plt.show()
1080 810
images_files_train, images_files_test = model_selection.train_test_split(
    images_files,
    test_size=0.2,
    random_state=42,
    shuffle=True,
)
print(len(images_files_train))
print(len(images_files_test))
df_train = data[data['Image_ID'].isin(images_files_train)]
df_test = data[data['Image_ID'].isin(images_files_test)]
df_train
| | Image_ID | ID | x | y | w | h |
|---|---|---|---|---|---|---|
| 0 | 000001.png | 8 | 471 | 21 | 106 | 113 |
| 1 | 000001.png | 4 | 476 | 683 | 103 | 105 |
| 2 | 000001.png | 7 | 206 | 134 | 87 | 87 |
| 3 | 000001.png | 3 | 214 | 707 | 96 | 92 |
| 4 | 000001.png | 5 | 224 | 448 | 85 | 82 |
| ... | ... | ... | ... | ... | ... | ... |
| 4535 | 000539.png | 5 | 280 | 489 | 83 | 80 |
| 4536 | 000539.png | 4 | 522 | 731 | 103 | 105 |
| 4537 | 000539.png | 3 | 258 | 744 | 97 | 90 |
| 4538 | 000539.png | 2 | 512 | 980 | 84 | 94 |
| 4539 | 000539.png | 1 | 263 | 1009 | 79 | 71 |
3648 rows × 6 columns
!mkdir letuce_data
%cd letuce_data
!mkdir train
!mkdir test
%cd train
!mkdir images
!mkdir labels
%cd ..
%cd test
!mkdir images
!mkdir labels
%cd ..
%cd ..
432
108
/content/letuce_data
/content/letuce_data/train
/content/letuce_data
/content/letuce_data/test
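The shell commands above can also be done portably in pure Python, without `%cd` juggling. A minimal sketch (the `letuce_data` root name matches the notebook's layout):

```python
import os

# Create the train/test image and label directories in one pass;
# exist_ok=True makes the cell safe to re-run.
ROOT = "letuce_data"
for split in ("train", "test"):
    for sub in ("images", "labels"):
        os.makedirs(os.path.join(ROOT, split, sub), exist_ok=True)
```

This avoids depending on the notebook's current working directory between cells.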
INPUT_PATH = path_imgs
OUTPUT_PATH = '/content/letuce_data'

def process_data(data, data_type='train'):
    for image_name in data['Image_ID'].unique():
        new_data = data[data['Image_ID'] == image_name]
        yolo_data = []  # collect every box of this image before writing its label file
        for _, row in new_data.iterrows():
            classes = 0  # single-class dataset: every box is class 0
            x, y, w, h = row['x'], row['y'], row['w'], row['h']
            # Convert top-left (x, y) + size to the normalized center-based YOLO format
            x_center = (x + w / 2) / 810    # image width is 810 px
            y_center = (y + h / 2) / 1080   # image height is 1080 px
            yolo_data.append([classes, x_center, y_center, w / 810, h / 1080])
        yolo_data = np.array(yolo_data)
        np.savetxt(
            os.path.join(OUTPUT_PATH, f"{data_type}/labels/{image_name.split('.')[0]}.txt"),
            yolo_data,
            fmt=["%d", "%f", "%f", "%f", "%f"]
        )
        shutil.copyfile(
            os.path.join(INPUT_PATH, image_name),
            os.path.join(OUTPUT_PATH, f"{data_type}/images/{image_name}")
        )

process_data(df_train, data_type='train')
process_data(df_test, data_type='test')
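The conversion above is easy to get wrong, so it is worth sanity-checking that the pixel-to-YOLO mapping round-trips. A small self-contained sketch using the first box from the dataframe shown earlier (the helper names `to_yolo`/`from_yolo` are illustrative, not part of SuperGradients):

```python
# Verify the pixel -> normalized-YOLO conversion used in process_data.
# Image size is 810x1080 as in this dataset.
IMG_W, IMG_H = 810, 1080

def to_yolo(x, y, w, h):
    # top-left corner + size (pixels) -> normalized center + size
    return ((x + w / 2) / IMG_W, (y + h / 2) / IMG_H, w / IMG_W, h / IMG_H)

def from_yolo(xc, yc, wn, hn):
    # normalized center + size -> top-left corner + size (pixels)
    w, h = wn * IMG_W, hn * IMG_H
    return (xc * IMG_W - w / 2, yc * IMG_H - h / 2, w, h)

box = (471, 21, 106, 113)  # first row of the dataframe above
recovered = from_yolo(*to_yolo(*box))
assert all(abs(a - b) < 1e-6 for a, b in zip(recovered, box))
```

If the assertion fails, the label files would silently misplace every box, which is much harder to debug after training starts.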
# Inspect one generated label file: each line is "class x_center y_center w h",
# with coordinates normalized to [0, 1]
label_dir = '/content/letuce_data/train/labels/'
with open(os.path.join(label_dir, os.listdir(label_dir)[0])) as f:
    print(f.name)
    for line in f:
        print(line)
class config:
    # trainer params
    CHECKPOINT_DIR = 'checkpoints'  # path to save checkpoints to
    EXPERIMENT_NAME = 'cars-from-above'  # experiment name; checkpoints go under CHECKPOINT_DIR/EXPERIMENT_NAME
    # dataset params
    DATA_DIR = '/content/letuce_data'  # parent directory where the data lives
    TRAIN_IMAGES_DIR = 'train/images'  # child dir of DATA_DIR with the training images
    TRAIN_LABELS_DIR = 'train/labels'  # child dir of DATA_DIR with the training labels
    VAL_IMAGES_DIR = 'test/images'  # child dir of DATA_DIR with the validation images
    VAL_LABELS_DIR = 'test/labels'  # child dir of DATA_DIR with the validation labels
    CLASSES = ['Letuce']  # class names
    NUM_CLASSES = len(CLASSES)
    # dataloader params - any PyTorch DataLoader argument can go here,
    # and they may differ across train, val, and test
    DATALOADER_PARAMS = {
        'batch_size': 16,
        'num_workers': 2
    }
    # model params
    MODEL_NAME = 'yolo_nas_l'  # choose from yolo_nas_s, yolo_nas_m, yolo_nas_l
    PRETRAINED_WEIGHTS = 'coco'  # only one option here: coco
trainer = Trainer(experiment_name=config.EXPERIMENT_NAME, ckpt_root_dir=config.CHECKPOINT_DIR)
train_data = coco_detection_yolo_format_train(
dataset_params={
'data_dir': config.DATA_DIR,
'images_dir': config.TRAIN_IMAGES_DIR,
'labels_dir': config.TRAIN_LABELS_DIR,
'classes': config.CLASSES
},
dataloader_params=config.DATALOADER_PARAMS
)
val_data = coco_detection_yolo_format_val(
dataset_params={
'data_dir': config.DATA_DIR,
'images_dir': config.VAL_IMAGES_DIR,
'labels_dir': config.VAL_LABELS_DIR,
'classes': config.CLASSES
},
dataloader_params=config.DATALOADER_PARAMS
)
/content/letuce_data/train/labels/000093.txt
0 0.264815 0.009722 0.072840 0.019444
Caching annotations: 100%|██████████| 432/432 [00:00<00:00, 4944.49it/s]
train_data.dataset.plot()
Caching annotations: 100%|██████████| 108/108 [00:00<00:00, 5352.72it/s]
model = models.get(config.MODEL_NAME,
num_classes=config.NUM_CLASSES,
pretrained_weights=config.PRETRAINED_WEIGHTS
)
[2023-05-15 15:43:46] INFO - checkpoint_utils.py - License Notification: YOLO-NAS pre-trained weights are subjected to the specific license terms and conditions detailed in https://github.com/Deci-AI/super-gradients/blob/master/LICENSE.YOLONAS.md By downloading the pre-trained weight files you agree to comply with these terms.
Downloading: "https://sghub.deci.ai/models/yolo_nas_l_coco.pth" to /root/.cache/torch/hub/checkpoints/yolo_nas_l_coco.pth
0%| | 0.00/256M [00:00<?, ?B/s]
train_params = {
    "average_best_models": True,
    "warmup_mode": "linear_epoch_step",
    "warmup_initial_lr": 1e-6,
    "lr_warmup_epochs": 3,
    "initial_lr": 5e-4,
    "lr_mode": "cosine",
    "cosine_final_lr_ratio": 0.1,
    "optimizer": "Adam",
    "optimizer_params": {"weight_decay": 0.0001},
    "zero_weight_decay_on_bias_and_bn": True,
    "ema": True,
    "ema_params": {"decay": 0.9, "decay_type": "threshold"},
    # Training for 30 epochs in this example
    "max_epochs": 30,
"mixed_precision": True,
"loss": PPYoloELoss(
use_static_assigner=False,
# NOTE: num_classes needs to be defined here
num_classes=config.NUM_CLASSES,
reg_max=16
),
"valid_metrics_list": [
DetectionMetrics_050(
score_thres=0.1,
top_k_predictions=300,
# NOTE: num_classes needs to be defined here
num_cls=config.NUM_CLASSES,
normalize_targets=True,
post_prediction_callback=PPYoloEPostPredictionCallback(
score_threshold=0.01,
nms_top_k=1000,
max_predictions=300,
nms_threshold=0.7
)
)
],
"metric_to_watch": 'mAP@0.50'
}
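The warmup and cosine settings above imply a specific learning-rate curve: a linear ramp from `warmup_initial_lr` to `initial_lr` over `lr_warmup_epochs`, then cosine decay down to `cosine_final_lr_ratio * initial_lr`. A sketch of that schedule, mirroring the parameter values above but as an illustration of my reading of the parameters, not SuperGradients' exact internal implementation:

```python
import math

# Constants copied from train_params
WARMUP_INITIAL_LR = 1e-6
INITIAL_LR = 5e-4
WARMUP_EPOCHS = 3
MAX_EPOCHS = 30
FINAL_RATIO = 0.1

def lr_at(epoch):
    """Learning rate at a (possibly fractional) epoch under warmup + cosine decay."""
    if epoch < WARMUP_EPOCHS:
        # Linear warmup from WARMUP_INITIAL_LR up to INITIAL_LR
        t = epoch / WARMUP_EPOCHS
        return WARMUP_INITIAL_LR + t * (INITIAL_LR - WARMUP_INITIAL_LR)
    # Cosine decay from INITIAL_LR down to FINAL_RATIO * INITIAL_LR
    t = (epoch - WARMUP_EPOCHS) / (MAX_EPOCHS - WARMUP_EPOCHS)
    final_lr = FINAL_RATIO * INITIAL_LR
    return final_lr + 0.5 * (INITIAL_LR - final_lr) * (1 + math.cos(math.pi * t))
```

The tiny warmup start keeps the COCO-pretrained backbone from being disrupted by large early updates on the small fine-tuning dataset.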
trainer.train(model=model,
training_params=train_params,
train_loader=train_data,
valid_loader=val_data)
[2023-05-15 15:43:51] INFO - sg_trainer.py - Using EMA with params {'decay': 0.9, 'decay_type': 'threshold'}
The console stream is now moved to checkpoints/cars-from-above/console_May15_15_43_55.txt
[2023-05-15 15:44:03] INFO - sg_trainer_utils.py - TRAINING PARAMETERS:
- Mode: Single GPU
- Number of GPUs: 1 (1 available on the machine)
- Dataset size: 432 (len(train_set))
- Batch size per GPU: 16 (batch_size)
- Batch Accumulate: 1 (batch_accumulate)
- Total batch size: 16 (num_gpus * batch_size)
- Effective Batch size: 16 (num_gpus * batch_size * batch_accumulate)
- Iterations per epoch: 27 (len(train_loader))
- Gradient updates per epoch: 27 (len(train_loader) / batch_accumulate)
[2023-05-15 15:44:03] INFO - sg_trainer.py - Started training for 30 epochs (0/29)
Train epoch 0: 100%|██████████| 27/27 [00:43<00:00, 1.61s/it, PPYoloELoss/loss=3.82, PPYoloELoss/loss_cls=2.86, PPYoloELoss/loss_dfl=0.958, PPYoloELoss/loss_iou=0.192, gpu_mem=12]
Validation epoch 0: 100%|██████████| 6/6 [00:04<00:00, 1.37it/s]
===========================================================
SUMMARY OF EPOCH 0
├── Training
│ ├── Ppyoloeloss/loss = 3.8244
│ ├── Ppyoloeloss/loss_cls = 2.8646
│ ├── Ppyoloeloss/loss_dfl = 0.9584
│ └── Ppyoloeloss/loss_iou = 0.1922
└── Validation
├── F1@0.50 = 0.0
├── Map@0.50 = 0.0
├── Ppyoloeloss/loss = 3.96
├── Ppyoloeloss/loss_cls = 3.2262
├── Ppyoloeloss/loss_dfl = 0.8784
├── Ppyoloeloss/loss_iou = 0.1179
├── Precision@0.50 = 0.0
└── Recall@0.50 = 0.0
===========================================================
[2023-05-15 15:44:57] INFO - base_sg_logger.py - Checkpoint saved in checkpoints/cars-from-above/ckpt_best.pth [2023-05-15 15:44:57] INFO - sg_trainer.py - Best checkpoint overriden: validation mAP@0.50: 4.759918374475092e-05 Train epoch 1: 100%|██████████| 27/27 [00:38<00:00, 1.41s/it, PPYoloELoss/loss=2.23, PPYoloELoss/loss_cls=1.46, PPYoloELoss/loss_dfl=0.775, PPYoloELoss/loss_iou=0.15, gpu_mem=12] Validation epoch 1: 100%|██████████| 6/6 [00:04<00:00, 1.38it/s]
=========================================================== SUMMARY OF EPOCH 1 ├── Training │ ├── Ppyoloeloss/loss = 2.2251 │ │ ├── Best until now = 3.8244 (↘ -1.5993) │ │ └── Epoch N-1 = 3.8244 (↘ -1.5993) │ ├── Ppyoloeloss/loss_cls = 1.4626 │ │ ├── Best until now = 2.8646 (↘ -1.402) │ │ └── Epoch N-1 = 2.8646 (↘ -1.402) │ ├── Ppyoloeloss/loss_dfl = 0.7746 │ │ ├── Best until now = 0.9584 (↘ -0.1838) │ │ └── Epoch N-1 = 0.9584 (↘ -0.1838) │ └── Ppyoloeloss/loss_iou = 0.1501 │ ├── Best until now = 0.1922 (↘ -0.0421) │ └── Epoch N-1 = 0.1922 (↘ -0.0421) └── Validation ├── F1@0.50 = 0.008 │ ├── Best until now = 0.0 (↗ 0.008) │ └── Epoch N-1 = 0.0 (↗ 0.008) ├── Map@0.50 = 0.3046 │ ├── Best until now = 0.0 (↗ 0.3046) │ └── Epoch N-1 = 0.0 (↗ 0.3046) ├── Ppyoloeloss/loss = 26.932 │ ├── Best until now = 3.96 (↗ 22.972) │ └── Epoch N-1 = 3.96 (↗ 22.972) ├── Ppyoloeloss/loss_cls = 26.3924 │ ├── Best until now = 3.2262 (↗ 23.1663) │ └── Epoch N-1 = 3.2262 (↗ 23.1663) ├── Ppyoloeloss/loss_dfl = 0.65 │ ├── Best until now = 0.8784 (↘ -0.2284) │ └── Epoch N-1 = 0.8784 (↘ -0.2284) ├── Ppyoloeloss/loss_iou = 0.0858 │ ├── Best until now = 0.1179 (↘ -0.032) │ └── Epoch N-1 = 0.1179 (↘ -0.032) ├── Precision@0.50 = 0.004 │ ├── Best until now = 0.0 (↗ 0.004) │ └── Epoch N-1 = 0.0 (↗ 0.004) └── Recall@0.50 = 0.9896 ├── Best until now = 0.0 (↗ 0.9896) └── Epoch N-1 = 0.0 (↗ 0.9896) ===========================================================
[2023-05-15 15:45:58] INFO - base_sg_logger.py - Checkpoint saved in checkpoints/cars-from-above/ckpt_best.pth [2023-05-15 15:45:58] INFO - sg_trainer.py - Best checkpoint overriden: validation mAP@0.50: 0.30463406443595886 Train epoch 2: 100%|██████████| 27/27 [00:37<00:00, 1.40s/it, PPYoloELoss/loss=1.89, PPYoloELoss/loss_cls=1.16, PPYoloELoss/loss_dfl=0.772, PPYoloELoss/loss_iou=0.137, gpu_mem=12] Validation epoch 2: 100%|██████████| 6/6 [00:04<00:00, 1.30it/s]
=========================================================== SUMMARY OF EPOCH 2 ├── Training │ ├── Ppyoloeloss/loss = 1.8936 │ │ ├── Best until now = 2.2251 (↘ -0.3315) │ │ └── Epoch N-1 = 2.2251 (↘ -0.3315) │ ├── Ppyoloeloss/loss_cls = 1.1647 │ │ ├── Best until now = 1.4626 (↘ -0.2979) │ │ └── Epoch N-1 = 1.4626 (↘ -0.2979) │ ├── Ppyoloeloss/loss_dfl = 0.7718 │ │ ├── Best until now = 0.7746 (↘ -0.0028) │ │ └── Epoch N-1 = 0.7746 (↘ -0.0028) │ └── Ppyoloeloss/loss_iou = 0.1372 │ ├── Best until now = 0.1501 (↘ -0.0129) │ └── Epoch N-1 = 0.1501 (↘ -0.0129) └── Validation ├── F1@0.50 = 0.0079 │ ├── Best until now = 0.008 (↘ -0.0) │ └── Epoch N-1 = 0.008 (↘ -0.0) ├── Map@0.50 = 0.4886 │ ├── Best until now = 0.3046 (↗ 0.184) │ └── Epoch N-1 = 0.3046 (↗ 0.184) ├── Ppyoloeloss/loss = 5.7292 │ ├── Best until now = 3.96 (↗ 1.7692) │ └── Epoch N-1 = 26.932 (↘ -21.2028) ├── Ppyoloeloss/loss_cls = 5.229 │ ├── Best until now = 3.2262 (↗ 2.0028) │ └── Epoch N-1 = 26.3924 (↘ -21.1635) ├── Ppyoloeloss/loss_dfl = 0.6245 │ ├── Best until now = 0.65 (↘ -0.0255) │ └── Epoch N-1 = 0.65 (↘ -0.0255) ├── Ppyoloeloss/loss_iou = 0.0752 │ ├── Best until now = 0.0858 (↘ -0.0106) │ └── Epoch N-1 = 0.0858 (↘ -0.0106) ├── Precision@0.50 = 0.004 │ ├── Best until now = 0.004 (↘ -0.0) │ └── Epoch N-1 = 0.004 (↘ -0.0) └── Recall@0.50 = 1.0 ├── Best until now = 0.9896 (↗ 0.0104) └── Epoch N-1 = 0.9896 (↗ 0.0104) ===========================================================
[2023-05-15 15:46:59] INFO - base_sg_logger.py - Checkpoint saved in checkpoints/cars-from-above/ckpt_best.pth [2023-05-15 15:46:59] INFO - sg_trainer.py - Best checkpoint overriden: validation mAP@0.50: 0.48858708143234253 Train epoch 3: 100%|██████████| 27/27 [00:38<00:00, 1.41s/it, PPYoloELoss/loss=1.82, PPYoloELoss/loss_cls=1.11, PPYoloELoss/loss_dfl=0.758, PPYoloELoss/loss_iou=0.133, gpu_mem=12] Validation epoch 3: 100%|██████████| 6/6 [00:04<00:00, 1.32it/s]
=========================================================== SUMMARY OF EPOCH 3 ├── Training │ ├── Ppyoloeloss/loss = 1.8167 │ │ ├── Best until now = 1.8936 (↘ -0.0769) │ │ └── Epoch N-1 = 1.8936 (↘ -0.0769) │ ├── Ppyoloeloss/loss_cls = 1.1057 │ │ ├── Best until now = 1.1647 (↘ -0.059) │ │ └── Epoch N-1 = 1.1647 (↘ -0.059) │ ├── Ppyoloeloss/loss_dfl = 0.7579 │ │ ├── Best until now = 0.7718 (↘ -0.0138) │ │ └── Epoch N-1 = 0.7718 (↘ -0.0138) │ └── Ppyoloeloss/loss_iou = 0.1328 │ ├── Best until now = 0.1372 (↘ -0.0044) │ └── Epoch N-1 = 0.1372 (↘ -0.0044) └── Validation ├── F1@0.50 = 0.0174 │ ├── Best until now = 0.008 (↗ 0.0094) │ └── Epoch N-1 = 0.0079 (↗ 0.0094) ├── Map@0.50 = 0.3863 │ ├── Best until now = 0.4886 (↘ -0.1023) │ └── Epoch N-1 = 0.4886 (↘ -0.1023) ├── Ppyoloeloss/loss = 3.2664 │ ├── Best until now = 3.96 (↘ -0.6936) │ └── Epoch N-1 = 5.7292 (↘ -2.4628) ├── Ppyoloeloss/loss_cls = 2.7611 │ ├── Best until now = 3.2262 (↘ -0.4651) │ └── Epoch N-1 = 5.229 (↘ -2.4679) ├── Ppyoloeloss/loss_dfl = 0.6137 │ ├── Best until now = 0.6245 (↘ -0.0109) │ └── Epoch N-1 = 0.6245 (↘ -0.0109) ├── Ppyoloeloss/loss_iou = 0.0794 │ ├── Best until now = 0.0752 (↗ 0.0042) │ └── Epoch N-1 = 0.0752 (↗ 0.0042) ├── Precision@0.50 = 0.0088 │ ├── Best until now = 0.004 (↗ 0.0048) │ └── Epoch N-1 = 0.004 (↗ 0.0048) └── Recall@0.50 = 1.0 ├── Best until now = 1.0 (= 0.0) └── Epoch N-1 = 1.0 (= 0.0) ===========================================================
Train epoch 4: 100%|██████████| 27/27 [00:40<00:00, 1.49s/it, PPYoloELoss/loss=1.77, PPYoloELoss/loss_cls=1.06, PPYoloELoss/loss_dfl=0.756, PPYoloELoss/loss_iou=0.133, gpu_mem=12] Validation epoch 4: 100%|██████████| 6/6 [00:04<00:00, 1.26it/s]
=========================================================== SUMMARY OF EPOCH 4 ├── Training │ ├── Ppyoloeloss/loss = 1.7698 │ │ ├── Best until now = 1.8167 (↘ -0.0469) │ │ └── Epoch N-1 = 1.8167 (↘ -0.0469) │ ├── Ppyoloeloss/loss_cls = 1.06 │ │ ├── Best until now = 1.1057 (↘ -0.0456) │ │ └── Epoch N-1 = 1.1057 (↘ -0.0456) │ ├── Ppyoloeloss/loss_dfl = 0.7558 │ │ ├── Best until now = 0.7579 (↘ -0.0021) │ │ └── Epoch N-1 = 0.7579 (↘ -0.0021) │ └── Ppyoloeloss/loss_iou = 0.1327 │ ├── Best until now = 0.1328 (↘ -1e-04) │ └── Epoch N-1 = 0.1328 (↘ -1e-04) └── Validation ├── F1@0.50 = 0.0341 │ ├── Best until now = 0.0174 (↗ 0.0167) │ └── Epoch N-1 = 0.0174 (↗ 0.0167) ├── Map@0.50 = 0.4857 │ ├── Best until now = 0.4886 (↘ -0.0029) │ └── Epoch N-1 = 0.3863 (↗ 0.0994) ├── Ppyoloeloss/loss = 3.5614 │ ├── Best until now = 3.2664 (↗ 0.295) │ └── Epoch N-1 = 3.2664 (↗ 0.295) ├── Ppyoloeloss/loss_cls = 3.0237 │ ├── Best until now = 2.7611 (↗ 0.2626) │ └── Epoch N-1 = 2.7611 (↗ 0.2626) ├── Ppyoloeloss/loss_dfl = 0.6386 │ ├── Best until now = 0.6137 (↗ 0.0249) │ └── Epoch N-1 = 0.6137 (↗ 0.0249) ├── Ppyoloeloss/loss_iou = 0.0874 │ ├── Best until now = 0.0752 (↗ 0.0122) │ └── Epoch N-1 = 0.0794 (↗ 0.008) ├── Precision@0.50 = 0.0173 │ ├── Best until now = 0.0088 (↗ 0.0086) │ └── Epoch N-1 = 0.0088 (↗ 0.0086) └── Recall@0.50 = 0.9896 ├── Best until now = 1.0 (↘ -0.0104) └── Epoch N-1 = 1.0 (↘ -0.0104) ===========================================================
Train epoch 5: 100%|██████████| 27/27 [00:39<00:00, 1.45s/it, PPYoloELoss/loss=1.73, PPYoloELoss/loss_cls=1.02, PPYoloELoss/loss_dfl=0.752, PPYoloELoss/loss_iou=0.133, gpu_mem=12] Validation epoch 5: 100%|██████████| 6/6 [00:04<00:00, 1.28it/s]
=========================================================== SUMMARY OF EPOCH 5 ├── Training │ ├── Ppyoloeloss/loss = 1.7312 │ │ ├── Best until now = 1.7698 (↘ -0.0386) │ │ └── Epoch N-1 = 1.7698 (↘ -0.0386) │ ├── Ppyoloeloss/loss_cls = 1.0214 │ │ ├── Best until now = 1.06 (↘ -0.0387) │ │ └── Epoch N-1 = 1.06 (↘ -0.0387) │ ├── Ppyoloeloss/loss_dfl = 0.7522 │ │ ├── Best until now = 0.7558 (↘ -0.0036) │ │ └── Epoch N-1 = 0.7558 (↘ -0.0036) │ └── Ppyoloeloss/loss_iou = 0.1335 │ ├── Best until now = 0.1327 (↗ 0.0007) │ └── Epoch N-1 = 0.1327 (↗ 0.0007) └── Validation ├── F1@0.50 = 0.0103 │ ├── Best until now = 0.0341 (↘ -0.0238) │ └── Epoch N-1 = 0.0341 (↘ -0.0238) ├── Map@0.50 = 0.4811 │ ├── Best until now = 0.4886 (↘ -0.0074) │ └── Epoch N-1 = 0.4857 (↘ -0.0045) ├── Ppyoloeloss/loss = 3.6071 │ ├── Best until now = 3.2664 (↗ 0.3406) │ └── Epoch N-1 = 3.5614 (↗ 0.0456) ├── Ppyoloeloss/loss_cls = 3.0805 │ ├── Best until now = 2.7611 (↗ 0.3194) │ └── Epoch N-1 = 3.0237 (↗ 0.0568) ├── Ppyoloeloss/loss_dfl = 0.638 │ ├── Best until now = 0.6137 (↗ 0.0244) │ └── Epoch N-1 = 0.6386 (↘ -0.0006) ├── Ppyoloeloss/loss_iou = 0.083 │ ├── Best until now = 0.0752 (↗ 0.0079) │ └── Epoch N-1 = 0.0874 (↘ -0.0043) ├── Precision@0.50 = 0.0052 │ ├── Best until now = 0.0173 (↘ -0.0122) │ └── Epoch N-1 = 0.0173 (↘ -0.0122) └── Recall@0.50 = 0.9688 ├── Best until now = 1.0 (↘ -0.0312) └── Epoch N-1 = 0.9896 (↘ -0.0208) ===========================================================
Train epoch 6: 100%|██████████| 27/27 [00:38<00:00, 1.41s/it, PPYoloELoss/loss=1.71, PPYoloELoss/loss_cls=1.01, PPYoloELoss/loss_dfl=0.751, PPYoloELoss/loss_iou=0.131, gpu_mem=12] Validation epoch 6: 100%|██████████| 6/6 [00:04<00:00, 1.28it/s]
=========================================================== SUMMARY OF EPOCH 6 ├── Training │ ├── Ppyoloeloss/loss = 1.7114 │ │ ├── Best until now = 1.7312 (↘ -0.0198) │ │ └── Epoch N-1 = 1.7312 (↘ -0.0198) │ ├── Ppyoloeloss/loss_cls = 1.0088 │ │ ├── Best until now = 1.0214 (↘ -0.0126) │ │ └── Epoch N-1 = 1.0214 (↘ -0.0126) │ ├── Ppyoloeloss/loss_dfl = 0.7515 │ │ ├── Best until now = 0.7522 (↘ -0.0007) │ │ └── Epoch N-1 = 0.7522 (↘ -0.0007) │ └── Ppyoloeloss/loss_iou = 0.1307 │ ├── Best until now = 0.1327 (↘ -0.002) │ └── Epoch N-1 = 0.1335 (↘ -0.0028) └── Validation ├── F1@0.50 = 0.0388 │ ├── Best until now = 0.0341 (↗ 0.0047) │ └── Epoch N-1 = 0.0103 (↗ 0.0285) ├── Map@0.50 = 0.5859 │ ├── Best until now = 0.4886 (↗ 0.0974) │ └── Epoch N-1 = 0.4811 (↗ 0.1048) ├── Ppyoloeloss/loss = 2.0398 │ ├── Best until now = 3.2664 (↘ -1.2267) │ └── Epoch N-1 = 3.6071 (↘ -1.5673) ├── Ppyoloeloss/loss_cls = 1.5023 │ ├── Best until now = 2.7611 (↘ -1.2588) │ └── Epoch N-1 = 3.0805 (↘ -1.5782) ├── Ppyoloeloss/loss_dfl = 0.6413 │ ├── Best until now = 0.6137 (↗ 0.0276) │ └── Epoch N-1 = 0.638 (↗ 0.0033) ├── Ppyoloeloss/loss_iou = 0.0867 │ ├── Best until now = 0.0752 (↗ 0.0115) │ └── Epoch N-1 = 0.083 (↗ 0.0037) ├── Precision@0.50 = 0.0198 │ ├── Best until now = 0.0173 (↗ 0.0025) │ └── Epoch N-1 = 0.0052 (↗ 0.0146) └── Recall@0.50 = 0.9792 ├── Best until now = 1.0 (↘ -0.0208) └── Epoch N-1 = 0.9688 (↗ 0.0104) ===========================================================
[2023-05-15 15:50:49] INFO - base_sg_logger.py - Checkpoint saved in checkpoints/cars-from-above/ckpt_best.pth [2023-05-15 15:50:49] INFO - sg_trainer.py - Best checkpoint overriden: validation mAP@0.50: 0.5859469175338745 Train epoch 7: 100%|██████████| 27/27 [00:39<00:00, 1.46s/it, PPYoloELoss/loss=1.65, PPYoloELoss/loss_cls=0.968, PPYoloELoss/loss_dfl=0.748, PPYoloELoss/loss_iou=0.125, gpu_mem=12] Validation epoch 7: 100%|██████████| 6/6 [00:04<00:00, 1.26it/s]
=========================================================== SUMMARY OF EPOCH 7 ├── Training │ ├── Ppyoloeloss/loss = 1.6548 │ │ ├── Best until now = 1.7114 (↘ -0.0565) │ │ └── Epoch N-1 = 1.7114 (↘ -0.0565) │ ├── Ppyoloeloss/loss_cls = 0.9684 │ │ ├── Best until now = 1.0088 (↘ -0.0404) │ │ └── Epoch N-1 = 1.0088 (↘ -0.0404) │ ├── Ppyoloeloss/loss_dfl = 0.7485 │ │ ├── Best until now = 0.7515 (↘ -0.003) │ │ └── Epoch N-1 = 0.7515 (↘ -0.003) │ └── Ppyoloeloss/loss_iou = 0.1249 │ ├── Best until now = 0.1307 (↘ -0.0059) │ └── Epoch N-1 = 0.1307 (↘ -0.0059) └── Validation ├── F1@0.50 = 0.0189 │ ├── Best until now = 0.0388 (↘ -0.0199) │ └── Epoch N-1 = 0.0388 (↘ -0.0199) ├── Map@0.50 = 0.5656 │ ├── Best until now = 0.5859 (↘ -0.0204) │ └── Epoch N-1 = 0.5859 (↘ -0.0204) ├── Ppyoloeloss/loss = 3.6809 │ ├── Best until now = 2.0398 (↗ 1.6411) │ └── Epoch N-1 = 2.0398 (↗ 1.6411) ├── Ppyoloeloss/loss_cls = 3.1908 │ ├── Best until now = 1.5023 (↗ 1.6885) │ └── Epoch N-1 = 1.5023 (↗ 1.6885) ├── Ppyoloeloss/loss_dfl = 0.6125 │ ├── Best until now = 0.6137 (↘ -0.0012) │ └── Epoch N-1 = 0.6413 (↘ -0.0288) ├── Ppyoloeloss/loss_iou = 0.0735 │ ├── Best until now = 0.0752 (↘ -0.0017) │ └── Epoch N-1 = 0.0867 (↘ -0.0132) ├── Precision@0.50 = 0.0096 │ ├── Best until now = 0.0198 (↘ -0.0102) │ └── Epoch N-1 = 0.0198 (↘ -0.0102) └── Recall@0.50 = 1.0 ├── Best until now = 1.0 (= 0.0) └── Epoch N-1 = 0.9792 (↗ 0.0208) ===========================================================
Train epoch 8: 100%|██████████| 27/27 [00:38<00:00, 1.43s/it, PPYoloELoss/loss=1.64, PPYoloELoss/loss_cls=0.961, PPYoloELoss/loss_dfl=0.736, PPYoloELoss/loss_iou=0.123, gpu_mem=12] Validation epoch 8: 100%|██████████| 6/6 [00:04<00:00, 1.27it/s]
=========================================================== SUMMARY OF EPOCH 8 ├── Training │ ├── Ppyoloeloss/loss = 1.6368 │ │ ├── Best until now = 1.6548 (↘ -0.018) │ │ └── Epoch N-1 = 1.6548 (↘ -0.018) │ ├── Ppyoloeloss/loss_cls = 0.9607 │ │ ├── Best until now = 0.9684 (↘ -0.0077) │ │ └── Epoch N-1 = 0.9684 (↘ -0.0077) │ ├── Ppyoloeloss/loss_dfl = 0.7361 │ │ ├── Best until now = 0.7485 (↘ -0.0124) │ │ └── Epoch N-1 = 0.7485 (↘ -0.0124) │ └── Ppyoloeloss/loss_iou = 0.1232 │ ├── Best until now = 0.1249 (↘ -0.0016) │ └── Epoch N-1 = 0.1249 (↘ -0.0016) └── Validation ├── F1@0.50 = 0.0151 │ ├── Best until now = 0.0388 (↘ -0.0238) │ └── Epoch N-1 = 0.0189 (↘ -0.0039) ├── Map@0.50 = 0.5353 │ ├── Best until now = 0.5859 (↘ -0.0506) │ └── Epoch N-1 = 0.5656 (↘ -0.0302) ├── Ppyoloeloss/loss = 4.2569 │ ├── Best until now = 2.0398 (↗ 2.2171) │ └── Epoch N-1 = 3.6809 (↗ 0.576) ├── Ppyoloeloss/loss_cls = 3.7513 │ ├── Best until now = 1.5023 (↗ 2.249) │ └── Epoch N-1 = 3.1908 (↗ 0.5605) ├── Ppyoloeloss/loss_dfl = 0.624 │ ├── Best until now = 0.6125 (↗ 0.0115) │ └── Epoch N-1 = 0.6125 (↗ 0.0115) ├── Ppyoloeloss/loss_iou = 0.0774 │ ├── Best until now = 0.0735 (↗ 0.0039) │ └── Epoch N-1 = 0.0735 (↗ 0.0039) ├── Precision@0.50 = 0.0076 │ ├── Best until now = 0.0198 (↘ -0.0122) │ └── Epoch N-1 = 0.0096 (↘ -0.002) └── Recall@0.50 = 0.9688 ├── Best until now = 1.0 (↘ -0.0312) └── Epoch N-1 = 1.0 (↘ -0.0312) ===========================================================
Train epoch 9: 100%|██████████| 27/27 [00:39<00:00, 1.46s/it, PPYoloELoss/loss=1.61, PPYoloELoss/loss_cls=0.941, PPYoloELoss/loss_dfl=0.728, PPYoloELoss/loss_iou=0.123, gpu_mem=12] Validation epoch 9: 100%|██████████| 6/6 [00:04<00:00, 1.29it/s]
=========================================================== SUMMARY OF EPOCH 9 ├── Training │ ├── Ppyoloeloss/loss = 1.6124 │ │ ├── Best until now = 1.6368 (↘ -0.0244) │ │ └── Epoch N-1 = 1.6368 (↘ -0.0244) │ ├── Ppyoloeloss/loss_cls = 0.9413 │ │ ├── Best until now = 0.9607 (↘ -0.0194) │ │ └── Epoch N-1 = 0.9607 (↘ -0.0194) │ ├── Ppyoloeloss/loss_dfl = 0.7279 │ │ ├── Best until now = 0.7361 (↘ -0.0082) │ │ └── Epoch N-1 = 0.7361 (↘ -0.0082) │ └── Ppyoloeloss/loss_iou = 0.1229 │ ├── Best until now = 0.1232 (↘ -0.0004) │ └── Epoch N-1 = 0.1232 (↘ -0.0004) └── Validation ├── F1@0.50 = 0.0172 │ ├── Best until now = 0.0388 (↘ -0.0217) │ └── Epoch N-1 = 0.0151 (↗ 0.0021) ├── Map@0.50 = 0.6264 │ ├── Best until now = 0.5859 (↗ 0.0405) │ └── Epoch N-1 = 0.5353 (↗ 0.0911) ├── Ppyoloeloss/loss = 4.0335 │ ├── Best until now = 2.0398 (↗ 1.9938) │ └── Epoch N-1 = 4.2569 (↘ -0.2234) ├── Ppyoloeloss/loss_cls = 3.5252 │ ├── Best until now = 1.5023 (↗ 2.0229) │ └── Epoch N-1 = 3.7513 (↘ -0.226) ├── Ppyoloeloss/loss_dfl = 0.6262 │ ├── Best until now = 0.6125 (↗ 0.0138) │ └── Epoch N-1 = 0.624 (↗ 0.0022) ├── Ppyoloeloss/loss_iou = 0.0781 │ ├── Best until now = 0.0735 (↗ 0.0045) │ └── Epoch N-1 = 0.0774 (↗ 0.0006) ├── Precision@0.50 = 0.0087 │ ├── Best until now = 0.0198 (↘ -0.0111) │ └── Epoch N-1 = 0.0076 (↗ 0.0011) └── Recall@0.50 = 0.9792 ├── Best until now = 1.0 (↘ -0.0208) └── Epoch N-1 = 0.9688 (↗ 0.0104) ===========================================================
[2023-05-15 15:53:57] INFO - base_sg_logger.py - Checkpoint saved in checkpoints/cars-from-above/ckpt_best.pth [2023-05-15 15:53:57] INFO - sg_trainer.py - Best checkpoint overriden: validation mAP@0.50: 0.6264063119888306 Train epoch 10: 100%|██████████| 27/27 [00:38<00:00, 1.44s/it, PPYoloELoss/loss=1.6, PPYoloELoss/loss_cls=0.942, PPYoloELoss/loss_dfl=0.721, PPYoloELoss/loss_iou=0.119, gpu_mem=12] Validation epoch 10: 100%|██████████| 6/6 [00:04<00:00, 1.22it/s]
=========================================================== SUMMARY OF EPOCH 10 ├── Training │ ├── Ppyoloeloss/loss = 1.6 │ │ ├── Best until now = 1.6124 (↘ -0.0124) │ │ └── Epoch N-1 = 1.6124 (↘ -0.0124) │ ├── Ppyoloeloss/loss_cls = 0.9424 │ │ ├── Best until now = 0.9413 (↗ 0.0011) │ │ └── Epoch N-1 = 0.9413 (↗ 0.0011) │ ├── Ppyoloeloss/loss_dfl = 0.721 │ │ ├── Best until now = 0.7279 (↘ -0.0069) │ │ └── Epoch N-1 = 0.7279 (↘ -0.0069) │ └── Ppyoloeloss/loss_iou = 0.1188 │ ├── Best until now = 0.1229 (↘ -0.004) │ └── Epoch N-1 = 0.1229 (↘ -0.004) └── Validation ├── F1@0.50 = 0.0184 │ ├── Best until now = 0.0388 (↘ -0.0204) │ └── Epoch N-1 = 0.0172 (↗ 0.0012) ├── Map@0.50 = 0.5772 │ ├── Best until now = 0.6264 (↘ -0.0492) │ └── Epoch N-1 = 0.6264 (↘ -0.0492) ├── Ppyoloeloss/loss = 3.4115 │ ├── Best until now = 2.0398 (↗ 1.3717) │ └── Epoch N-1 = 4.0335 (↘ -0.622) ├── Ppyoloeloss/loss_cls = 2.9121 │ ├── Best until now = 1.5023 (↗ 1.4098) │ └── Epoch N-1 = 3.5252 (↘ -0.6131) ├── Ppyoloeloss/loss_dfl = 0.6132 │ ├── Best until now = 0.6125 (↗ 0.0008) │ └── Epoch N-1 = 0.6262 (↘ -0.013) ├── Ppyoloeloss/loss_iou = 0.0771 │ ├── Best until now = 0.0735 (↗ 0.0036) │ └── Epoch N-1 = 0.0781 (↘ -0.001) ├── Precision@0.50 = 0.0093 │ ├── Best until now = 0.0198 (↘ -0.0105) │ └── Epoch N-1 = 0.0087 (↗ 0.0006) └── Recall@0.50 = 0.9688 ├── Best until now = 1.0 (↘ -0.0312) └── Epoch N-1 = 0.9792 (↘ -0.0104) ===========================================================
Train epoch 11: 100%|██████████| 27/27 [00:39<00:00, 1.45s/it, PPYoloELoss/loss=1.59, PPYoloELoss/loss_cls=0.922, PPYoloELoss/loss_dfl=0.723, PPYoloELoss/loss_iou=0.124, gpu_mem=12] Validation epoch 11: 100%|██████████| 6/6 [00:04<00:00, 1.27it/s]
=========================================================== SUMMARY OF EPOCH 11 ├── Training │ ├── Ppyoloeloss/loss = 1.5936 │ │ ├── Best until now = 1.6 (↘ -0.0064) │ │ └── Epoch N-1 = 1.6 (↘ -0.0064) │ ├── Ppyoloeloss/loss_cls = 0.9219 │ │ ├── Best until now = 0.9413 (↘ -0.0194) │ │ └── Epoch N-1 = 0.9424 (↘ -0.0205) │ ├── Ppyoloeloss/loss_dfl = 0.7233 │ │ ├── Best until now = 0.721 (↗ 0.0023) │ │ └── Epoch N-1 = 0.721 (↗ 0.0023) │ └── Ppyoloeloss/loss_iou = 0.124 │ ├── Best until now = 0.1188 (↗ 0.0052) │ └── Epoch N-1 = 0.1188 (↗ 0.0052) └── Validation ├── F1@0.50 = 0.0266 │ ├── Best until now = 0.0388 (↘ -0.0122) │ └── Epoch N-1 = 0.0184 (↗ 0.0082) ├── Map@0.50 = 0.5317 │ ├── Best until now = 0.6264 (↘ -0.0947) │ └── Epoch N-1 = 0.5772 (↘ -0.0455) ├── Ppyoloeloss/loss = 3.8108 │ ├── Best until now = 2.0398 (↗ 1.7711) │ └── Epoch N-1 = 3.4115 (↗ 0.3994) ├── Ppyoloeloss/loss_cls = 3.2559 │ ├── Best until now = 1.5023 (↗ 1.7536) │ └── Epoch N-1 = 2.9121 (↗ 0.3438) ├── Ppyoloeloss/loss_dfl = 0.6449 │ ├── Best until now = 0.6125 (↗ 0.0324) │ └── Epoch N-1 = 0.6132 (↗ 0.0317) ├── Ppyoloeloss/loss_iou = 0.093 │ ├── Best until now = 0.0735 (↗ 0.0195) │ └── Epoch N-1 = 0.0771 (↗ 0.0159) ├── Precision@0.50 = 0.0135 │ ├── Best until now = 0.0198 (↘ -0.0063) │ └── Epoch N-1 = 0.0093 (↗ 0.0042) └── Recall@0.50 = 0.9896 ├── Best until now = 1.0 (↘ -0.0104) └── Epoch N-1 = 0.9688 (↗ 0.0208) ===========================================================
Train epoch 12: 100%|██████████| 27/27 [00:38<00:00, 1.44s/it, PPYoloELoss/loss=1.58, PPYoloELoss/loss_cls=0.937, PPYoloELoss/loss_dfl=0.699, PPYoloELoss/loss_iou=0.118, gpu_mem=12] Validation epoch 12: 100%|██████████| 6/6 [00:04<00:00, 1.28it/s]
=========================================================== SUMMARY OF EPOCH 12 ├── Training │ ├── Ppyoloeloss/loss = 1.5809 │ │ ├── Best until now = 1.5936 (↘ -0.0127) │ │ └── Epoch N-1 = 1.5936 (↘ -0.0127) │ ├── Ppyoloeloss/loss_cls = 0.937 │ │ ├── Best until now = 0.9219 (↗ 0.0151) │ │ └── Epoch N-1 = 0.9219 (↗ 0.0151) │ ├── Ppyoloeloss/loss_dfl = 0.6991 │ │ ├── Best until now = 0.721 (↘ -0.0219) │ │ └── Epoch N-1 = 0.7233 (↘ -0.0242) │ └── Ppyoloeloss/loss_iou = 0.1178 │ ├── Best until now = 0.1188 (↘ -0.0011) │ └── Epoch N-1 = 0.124 (↘ -0.0063) └── Validation ├── F1@0.50 = 0.0334 │ ├── Best until now = 0.0388 (↘ -0.0055) │ └── Epoch N-1 = 0.0266 (↗ 0.0067) ├── Map@0.50 = 0.7286 │ ├── Best until now = 0.6264 (↗ 0.1022) │ └── Epoch N-1 = 0.5317 (↗ 0.1969) ├── Ppyoloeloss/loss = 2.1356 │ ├── Best until now = 2.0398 (↗ 0.0958) │ └── Epoch N-1 = 3.8108 (↘ -1.6753) ├── Ppyoloeloss/loss_cls = 1.6506 │ ├── Best until now = 1.5023 (↗ 0.1483) │ └── Epoch N-1 = 3.2559 (↘ -1.6052) ├── Ppyoloeloss/loss_dfl = 0.6063 │ ├── Best until now = 0.6125 (↘ -0.0062) │ └── Epoch N-1 = 0.6449 (↘ -0.0386) ├── Ppyoloeloss/loss_iou = 0.0727 │ ├── Best until now = 0.0735 (↘ -0.0008) │ └── Epoch N-1 = 0.093 (↘ -0.0203) ├── Precision@0.50 = 0.017 │ ├── Best until now = 0.0198 (↘ -0.0028) │ └── Epoch N-1 = 0.0135 (↗ 0.0035) └── Recall@0.50 = 0.9896 ├── Best until now = 1.0 (↘ -0.0104) └── Epoch N-1 = 0.9896 (= 0.0) ===========================================================
[2023-05-15 15:57:28] INFO - base_sg_logger.py - Checkpoint saved in checkpoints/cars-from-above/ckpt_best.pth
[2023-05-15 15:57:28] INFO - sg_trainer.py - Best checkpoint overriden: validation mAP@0.50: 0.7286270260810852
Train epoch 13: 100%|██████████| 27/27 [00:38<00:00, 1.42s/it, PPYoloELoss/loss=1.56, PPYoloELoss/loss_cls=0.923, PPYoloELoss/loss_dfl=0.712, PPYoloELoss/loss_iou=0.112, gpu_mem=12]
Validation epoch 13: 100%|██████████| 6/6 [00:04<00:00, 1.24it/s]
===========================================================
SUMMARY OF EPOCH 13
Training:   PPYoloELoss/loss = 1.5601 | loss_cls = 0.9231 | loss_dfl = 0.7116 | loss_iou = 0.1125
Validation: PPYoloELoss/loss = 3.5836 | loss_cls = 3.1151 | loss_dfl = 0.5963 | loss_iou = 0.0682
            F1@0.50 = 0.0463 | mAP@0.50 = 0.5119 | Precision@0.50 = 0.0237 | Recall@0.50 = 0.9896
===========================================================
Train epoch 14: 100%|██████████| 27/27 [00:39<00:00, 1.45s/it, PPYoloELoss/loss=1.5, PPYoloELoss/loss_cls=0.889, PPYoloELoss/loss_dfl=0.694, PPYoloELoss/loss_iou=0.106, gpu_mem=12]
Validation epoch 14: 100%|██████████| 6/6 [00:04<00:00, 1.27it/s]
===========================================================
SUMMARY OF EPOCH 14
Training:   PPYoloELoss/loss = 1.5012 | loss_cls = 0.8888 | loss_dfl = 0.6944 | loss_iou = 0.1061
Validation: PPYoloELoss/loss = 3.4199 | loss_cls = 2.9199 | loss_dfl = 0.6138 | loss_iou = 0.0773
            F1@0.50 = 0.0484 | mAP@0.50 = 0.6436 | Precision@0.50 = 0.0248 | Recall@0.50 = 0.9896
===========================================================
Train epoch 15: 100%|██████████| 27/27 [00:39<00:00, 1.45s/it, PPYoloELoss/loss=1.52, PPYoloELoss/loss_cls=0.873, PPYoloELoss/loss_dfl=0.713, PPYoloELoss/loss_iou=0.115, gpu_mem=12]
Validation epoch 15: 100%|██████████| 6/6 [00:04<00:00, 1.28it/s]
===========================================================
SUMMARY OF EPOCH 15
Training:   PPYoloELoss/loss = 1.517 | loss_cls = 0.8729 | loss_dfl = 0.7126 | loss_iou = 0.1151
Validation: PPYoloELoss/loss = 3.1779 | loss_cls = 2.7146 | loss_dfl = 0.599 | loss_iou = 0.0655
            F1@0.50 = 0.0508 | mAP@0.50 = 0.5406 | Precision@0.50 = 0.0261 | Recall@0.50 = 0.9688
===========================================================
Train epoch 16: 100%|██████████| 27/27 [00:37<00:00, 1.41s/it, PPYoloELoss/loss=1.48, PPYoloELoss/loss_cls=0.858, PPYoloELoss/loss_dfl=0.684, PPYoloELoss/loss_iou=0.111, gpu_mem=12]
Validation epoch 16: 100%|██████████| 6/6 [00:04<00:00, 1.28it/s]
===========================================================
SUMMARY OF EPOCH 16
Training:   PPYoloELoss/loss = 1.4765 | loss_cls = 0.8575 | loss_dfl = 0.684 | loss_iou = 0.1108
Validation: PPYoloELoss/loss = 2.3274 | loss_cls = 1.9052 | loss_dfl = 0.5792 | loss_iou = 0.053
            F1@0.50 = 0.0704 | mAP@0.50 = 0.7486 | Precision@0.50 = 0.0365 | Recall@0.50 = 0.9792
===========================================================
[2023-05-15 16:01:59] INFO - base_sg_logger.py - Checkpoint saved in checkpoints/cars-from-above/ckpt_best.pth
[2023-05-15 16:01:59] INFO - sg_trainer.py - Best checkpoint overriden: validation mAP@0.50: 0.7485784888267517
Train epoch 17: 100%|██████████| 27/27 [00:38<00:00, 1.42s/it, PPYoloELoss/loss=1.42, PPYoloELoss/loss_cls=0.827, PPYoloELoss/loss_dfl=0.672, PPYoloELoss/loss_iou=0.102, gpu_mem=12]
Validation epoch 17: 100%|██████████| 6/6 [00:04<00:00, 1.27it/s]
===========================================================
SUMMARY OF EPOCH 17
Training:   PPYoloELoss/loss = 1.418 | loss_cls = 0.8267 | loss_dfl = 0.6719 | loss_iou = 0.1021
Validation: PPYoloELoss/loss = 1.593 | loss_cls = 1.1692 | loss_dfl = 0.5788 | loss_iou = 0.0537
            F1@0.50 = 0.0659 | mAP@0.50 = 0.7933 | Precision@0.50 = 0.0341 | Recall@0.50 = 1.0
===========================================================
[2023-05-15 16:03:12] INFO - base_sg_logger.py - Checkpoint saved in checkpoints/cars-from-above/ckpt_best.pth
[2023-05-15 16:03:12] INFO - sg_trainer.py - Best checkpoint overriden: validation mAP@0.50: 0.7932721376419067
Train epoch 18: 100%|██████████| 27/27 [00:39<00:00, 1.46s/it, PPYoloELoss/loss=1.39, PPYoloELoss/loss_cls=0.801, PPYoloELoss/loss_dfl=0.679, PPYoloELoss/loss_iou=0.0998, gpu_mem=12]
Validation epoch 18: 100%|██████████| 6/6 [00:04<00:00, 1.24it/s]
===========================================================
SUMMARY OF EPOCH 18
Training:   PPYoloELoss/loss = 1.3899 | loss_cls = 0.8007 | loss_dfl = 0.6791 | loss_iou = 0.0998
Validation: PPYoloELoss/loss = 2.4328 | loss_cls = 1.9832 | loss_dfl = 0.5952 | loss_iou = 0.0608
            F1@0.50 = 0.0545 | mAP@0.50 = 0.72 | Precision@0.50 = 0.028 | Recall@0.50 = 0.9688
===========================================================
Train epoch 19: 100%|██████████| 27/27 [00:40<00:00, 1.50s/it, PPYoloELoss/loss=1.38, PPYoloELoss/loss_cls=0.807, PPYoloELoss/loss_dfl=0.662, PPYoloELoss/loss_iou=0.0965, gpu_mem=12]
Validation epoch 19: 100%|██████████| 6/6 [00:04<00:00, 1.26it/s]
===========================================================
SUMMARY OF EPOCH 19
Training:   PPYoloELoss/loss = 1.3797 | loss_cls = 0.8072 | loss_dfl = 0.6625 | loss_iou = 0.0965
Validation: PPYoloELoss/loss = 2.4887 | loss_cls = 2.0623 | loss_dfl = 0.5807 | loss_iou = 0.0545
            F1@0.50 = 0.0883 | mAP@0.50 = 0.7593 | Precision@0.50 = 0.0462 | Recall@0.50 = 1.0
===========================================================
Train epoch 20: 100%|██████████| 27/27 [00:40<00:00, 1.49s/it, PPYoloELoss/loss=1.37, PPYoloELoss/loss_cls=0.794, PPYoloELoss/loss_dfl=0.664, PPYoloELoss/loss_iou=0.0982, gpu_mem=12]
Validation epoch 20: 100%|██████████| 6/6 [00:04<00:00, 1.27it/s]
===========================================================
SUMMARY OF EPOCH 20
Training:   PPYoloELoss/loss = 1.371 | loss_cls = 0.7935 | loss_dfl = 0.6639 | loss_iou = 0.0982
Validation: PPYoloELoss/loss = 1.8899 | loss_cls = 1.4714 | loss_dfl = 0.576 | loss_iou = 0.0522
            F1@0.50 = 0.0948 | mAP@0.50 = 0.8159 | Precision@0.50 = 0.0497 | Recall@0.50 = 1.0
===========================================================
[2023-05-15 16:06:37] INFO - base_sg_logger.py - Checkpoint saved in checkpoints/cars-from-above/ckpt_best.pth
[2023-05-15 16:06:37] INFO - sg_trainer.py - Best checkpoint overriden: validation mAP@0.50: 0.8159089088439941
Train epoch 21: 100%|██████████| 27/27 [00:39<00:00, 1.48s/it, PPYoloELoss/loss=1.36, PPYoloELoss/loss_cls=0.79, PPYoloELoss/loss_dfl=0.669, PPYoloELoss/loss_iou=0.0961, gpu_mem=12]
Validation epoch 21: 100%|██████████| 6/6 [00:04<00:00, 1.26it/s]
===========================================================
SUMMARY OF EPOCH 21
Training:   PPYoloELoss/loss = 1.3646 | loss_cls = 0.7901 | loss_dfl = 0.6687 | loss_iou = 0.0961
Validation: PPYoloELoss/loss = 2.3086 | loss_cls = 1.8763 | loss_dfl = 0.5859 | loss_iou = 0.0558
            F1@0.50 = 0.0807 | mAP@0.50 = 0.8045 | Precision@0.50 = 0.0421 | Recall@0.50 = 0.9896
===========================================================
Train epoch 22: 100%|██████████| 27/27 [00:39<00:00, 1.46s/it, PPYoloELoss/loss=1.29, PPYoloELoss/loss_cls=0.741, PPYoloELoss/loss_dfl=0.649, PPYoloELoss/loss_iou=0.0909, gpu_mem=12]
Validation epoch 22: 100%|██████████| 6/6 [00:04<00:00, 1.28it/s]
===========================================================
SUMMARY OF EPOCH 22
Training:   PPYoloELoss/loss = 1.2929 | loss_cls = 0.7412 | loss_dfl = 0.649 | loss_iou = 0.0909
Validation: PPYoloELoss/loss = 2.5425 | loss_cls = 2.1146 | loss_dfl = 0.5796 | loss_iou = 0.0552
            F1@0.50 = 0.0711 | mAP@0.50 = 0.7544 | Precision@0.50 = 0.0369 | Recall@0.50 = 1.0
===========================================================
Train epoch 23: 100%|██████████| 27/27 [00:39<00:00, 1.45s/it, PPYoloELoss/loss=1.32, PPYoloELoss/loss_cls=0.768, PPYoloELoss/loss_dfl=0.653, PPYoloELoss/loss_iou=0.0912, gpu_mem=12]
Validation epoch 23: 100%|██████████| 6/6 [00:04<00:00, 1.25it/s]
===========================================================
SUMMARY OF EPOCH 23
Training:   PPYoloELoss/loss = 1.3223 | loss_cls = 0.768 | loss_dfl = 0.6527 | loss_iou = 0.0912
Validation: PPYoloELoss/loss = 2.0019 | loss_cls = 1.587 | loss_dfl = 0.5778 | loss_iou = 0.0504
            F1@0.50 = 0.0729 | mAP@0.50 = 0.8122 | Precision@0.50 = 0.0378 | Recall@0.50 = 1.0
===========================================================
Train epoch 24: 100%|██████████| 27/27 [00:38<00:00, 1.43s/it, PPYoloELoss/loss=1.29, PPYoloELoss/loss_cls=0.738, PPYoloELoss/loss_dfl=0.654, PPYoloELoss/loss_iou=0.0913, gpu_mem=12]
Validation epoch 24: 100%|██████████| 6/6 [00:04<00:00, 1.27it/s]
===========================================================
SUMMARY OF EPOCH 24
Training:   PPYoloELoss/loss = 1.2931 | loss_cls = 0.7379 | loss_dfl = 0.654 | loss_iou = 0.0913
Validation: PPYoloELoss/loss = 1.9583 | loss_cls = 1.5422 | loss_dfl = 0.5786 | loss_iou = 0.0507
            F1@0.50 = 0.0814 | mAP@0.50 = 0.8268 | Precision@0.50 = 0.0424 | Recall@0.50 = 1.0
===========================================================
[2023-05-15 16:11:11] INFO - base_sg_logger.py - Checkpoint saved in checkpoints/cars-from-above/ckpt_best.pth
[2023-05-15 16:11:11] INFO - sg_trainer.py - Best checkpoint overriden: validation mAP@0.50: 0.8267745971679688
Train epoch 25: 100%|██████████| 27/27 [00:38<00:00, 1.41s/it, PPYoloELoss/loss=1.26, PPYoloELoss/loss_cls=0.71, PPYoloELoss/loss_dfl=0.653, PPYoloELoss/loss_iou=0.0884, gpu_mem=12]
Validation epoch 25: 100%|██████████| 6/6 [00:04<00:00, 1.26it/s]
===========================================================
SUMMARY OF EPOCH 25
Training:   PPYoloELoss/loss = 1.2579 | loss_cls = 0.7102 | loss_dfl = 0.6531 | loss_iou = 0.0884
Validation: PPYoloELoss/loss = 2.5079 | loss_cls = 2.0806 | loss_dfl = 0.5842 | loss_iou = 0.0541
            F1@0.50 = 0.07 | mAP@0.50 = 0.8218 | Precision@0.50 = 0.0363 | Recall@0.50 = 1.0
===========================================================
Train epoch 26: 100%|██████████| 27/27 [00:39<00:00, 1.47s/it, PPYoloELoss/loss=1.23, PPYoloELoss/loss_cls=0.693, PPYoloELoss/loss_dfl=0.649, PPYoloELoss/loss_iou=0.0866, gpu_mem=12]
Validation epoch 26: 100%|██████████| 6/6 [00:04<00:00, 1.28it/s]
===========================================================
SUMMARY OF EPOCH 26
Training:   PPYoloELoss/loss = 1.2341 | loss_cls = 0.6931 | loss_dfl = 0.6492 | loss_iou = 0.0866
Validation: PPYoloELoss/loss = 3.2716 | loss_cls = 2.8457 | loss_dfl = 0.5819 | loss_iou = 0.054
            F1@0.50 = 0.0874 | mAP@0.50 = 0.7365 | Precision@0.50 = 0.0457 | Recall@0.50 = 1.0
===========================================================
Train epoch 27: 100%|██████████| 27/27 [00:38<00:00, 1.43s/it, PPYoloELoss/loss=1.21, PPYoloELoss/loss_cls=0.675, PPYoloELoss/loss_dfl=0.635, PPYoloELoss/loss_iou=0.0853, gpu_mem=12]
Validation epoch 27: 100%|██████████| 6/6 [00:04<00:00, 1.29it/s]
===========================================================
SUMMARY OF EPOCH 27
Training:   PPYoloELoss/loss = 1.2057 | loss_cls = 0.6748 | loss_dfl = 0.6352 | loss_iou = 0.0853
Validation: PPYoloELoss/loss = 2.2186 | loss_cls = 1.7997 | loss_dfl = 0.5817 | loss_iou = 0.0512
            F1@0.50 = 0.1076 | mAP@0.50 = 0.8142 | Precision@0.50 = 0.0568 | Recall@0.50 = 1.0
===========================================================
Train epoch 28: 100%|██████████| 27/27 [00:38<00:00, 1.42s/it, PPYoloELoss/loss=1.2, PPYoloELoss/loss_cls=0.673, PPYoloELoss/loss_dfl=0.63, PPYoloELoss/loss_iou=0.0841, gpu_mem=12]
Validation epoch 28: 100%|██████████| 6/6 [00:04<00:00, 1.24it/s]
===========================================================
SUMMARY OF EPOCH 28
Training:   PPYoloELoss/loss = 1.1989 | loss_cls = 0.6735 | loss_dfl = 0.6303 | loss_iou = 0.0841
Validation: PPYoloELoss/loss = 2.4331 | loss_cls = 1.9983 | loss_dfl = 0.5802 | loss_iou = 0.0579
            F1@0.50 = 0.0756 | mAP@0.50 = 0.8292 | Precision@0.50 = 0.0393 | Recall@0.50 = 1.0
===========================================================
[2023-05-15 16:15:49] INFO - base_sg_logger.py - Checkpoint saved in checkpoints/cars-from-above/ckpt_best.pth
[2023-05-15 16:15:49] INFO - sg_trainer.py - Best checkpoint overriden: validation mAP@0.50: 0.8291964530944824
Train epoch 29: 100%|██████████| 27/27 [00:37<00:00, 1.38s/it, PPYoloELoss/loss=1.24, PPYoloELoss/loss_cls=0.701, PPYoloELoss/loss_dfl=0.644, PPYoloELoss/loss_iou=0.0873, gpu_mem=12]
Validation epoch 29: 100%|██████████| 6/6 [00:04<00:00, 1.27it/s]
===========================================================
SUMMARY OF EPOCH 29
Training:   PPYoloELoss/loss = 1.2412 | loss_cls = 0.7013 | loss_dfl = 0.6436 | loss_iou = 0.0873
Validation: PPYoloELoss/loss = 2.502 | loss_cls = 2.0766 | loss_dfl = 0.5782 | loss_iou = 0.0545
            F1@0.50 = 0.0763 | mAP@0.50 = 0.7855 | Precision@0.50 = 0.0397 | Recall@0.50 = 1.0
===========================================================
[2023-05-15 16:17:15] INFO - sg_trainer.py - RUNNING ADDITIONAL TEST ON THE AVERAGED MODEL...
Validation epoch 30: 100%|██████████| 6/6 [00:04<00:00, 1.31it/s]
===========================================================
SUMMARY OF EPOCH 30
Training:   PPYoloELoss/loss = 1.2412 | loss_cls = 0.7013 | loss_dfl = 0.6436 | loss_iou = 0.0873
Validation: PPYoloELoss/loss = 2.134 | loss_cls = 1.7223 | loss_dfl = 0.58 | loss_iou = 0.0487
            F1@0.50 = 0.0899 | mAP@0.50 = 0.8338 | Precision@0.50 = 0.0471 | Recall@0.50 = 1.0
===========================================================
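The final validation pass above runs on the averaged model. Conceptually, averaging model checkpoints is just an element-wise mean over their parameters; the sketch below illustrates the idea with plain Python lists standing in for parameter tensors (this is an illustration of the principle, not SuperGradients' exact implementation).

```python
def average_checkpoints(state_dicts):
    """Element-wise mean of several model state dicts.

    Lists of floats stand in for parameter tensors here; a real
    implementation would average torch tensors key by key.
    """
    n = len(state_dicts)
    return {
        key: [sum(sd[key][i] for sd in state_dicts) / n
              for i in range(len(state_dicts[0][key]))]
        for key in state_dicts[0]
    }

# Three toy "checkpoints", each with a single parameter vector "w":
ckpts = [{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}, {"w": [5.0, 6.0]}]
print(average_checkpoints(ckpts))  # {'w': [3.0, 4.0]}
```

Averaging several good checkpoints tends to smooth out epoch-to-epoch noise, which is why the averaged model here edges out every individual epoch on mAP@0.50.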
Now that training is complete, you need to load the best trained model. Because you used checkpoint averaging, the following code loads averaged weights (note the "EMA checkpoint" message in the log below). If you instead want the single best checkpoint, or the weights from the last epoch, substitute one of the following lines into the code below:
best weights: checkpoint_path = os.path.join(config.CHECKPOINT_DIR, config.EXPERIMENT_NAME, 'ckpt_best.pth')
last weights: checkpoint_path = os.path.join(config.CHECKPOINT_DIR, config.EXPERIMENT_NAME, 'ckpt_latest.pth')
best_model = models.get(config.MODEL_NAME,
num_classes=config.NUM_CLASSES,
checkpoint_path=os.path.join(config.CHECKPOINT_DIR, config.EXPERIMENT_NAME, 'ckpt_best.pth'))
[2023-05-15 16:19:59] INFO - checkpoint_utils.py - Successfully loaded model weights from checkpoints/cars-from-above/ckpt_best.pth EMA checkpoint.
trainer.test(model=best_model,
test_loader=val_data,
test_metrics_list=DetectionMetrics_050(score_thres=0.1,
top_k_predictions=300,
num_cls=config.NUM_CLASSES,
normalize_targets=True,
post_prediction_callback=PPYoloEPostPredictionCallback(score_threshold=0.01,
nms_top_k=1000,
max_predictions=300,
nms_threshold=0.7)))
Test: 83%|████████▎ | 5/6 [00:04<00:00, 1.34it/s]
{'PPYoloELoss/loss_cls': 2.0478516,
'PPYoloELoss/loss_iou': 0.051353697,
'PPYoloELoss/loss_dfl': 0.5761113,
'PPYoloELoss/loss': 2.4642918,
'Precision@0.50': tensor(0.0390),
'Recall@0.50': tensor(1.),
'mAP@0.50': tensor(0.7967),
'F1@0.50': tensor(0.0751)}
best_model.predict( "/content/letuce_data/test/images/000005.png", conf=0.19).show()
YOLOv9¶
YOLOv9 builds on the success of YOLOv7, released in 2022; both were developed by Chien-Yao Wang et al. YOLOv7 focused heavily on optimizations in the training process, known as a "trainable bag-of-freebies", which improve object detection accuracy without increasing the inference cost (at the price of a higher training cost). However, it did not address the loss of information in the input data caused by the successive downscaling operations of the feedforward process, a phenomenon known as the information bottleneck.
While existing methods such as using reversible architectures and masked modeling have been shown to alleviate information bottlenecks, they appear to lose effectiveness for more compact model architectures, which have been a hallmark of real-time object detectors such as the YOLO series of models.
YOLOv9 introduces two new techniques that not only address the information bottleneck problem but also further push the boundaries of improving object detection accuracy and efficiency.
YOLOv9 Architecture¶
YOLOv9, employing GELAN and PGI, has outperformed all previous train-from-scratch methods in object detection performance. In terms of accuracy, it surpasses RT-DETR, which is pre-trained on a large dataset, and it also demonstrates superior parameter utilization compared to YOLO-MS, which is designed around depthwise convolution.
During the feedforward process in legacy methods, a significant amount of information is lost, which cannot be ignored. This loss of information can result in biased gradient streams, which are then used for model updates. These issues can lead deep networks to form incorrect associations between targets and inputs, resulting in inaccurate predictions from the trained model.
The concept of Programmable Gradient Information (PGI) aims to generate reliable gradients through a reversible auxiliary branch, ensuring that deep features retain features crucial to the intended task.
The reversible auxiliary branch avoids potential semantic loss that could arise from conventional deep supervision processes involving multi-path feature integration.
Essentially, PGI orchestrates the propagation of gradient information at different semantic levels, leading to optimized training results. PGI’s reversible architecture is seamlessly integrated with the auxiliary branch, at no additional cost.
In addition, PGI’s flexibility in selecting appropriate loss functions tailored to the task at hand circumvents the problems encountered in mask modeling. This versatile PGI mechanism can be applied to neural networks of varying sizes, overcoming the limitations of deep supervision mechanisms, which are predominantly tailored to extremely deep networks.
In addition, the Generalized Efficient Layer Aggregation Network (GELAN), an extension of ELAN, takes into account parameters, computational complexity, accuracy, and inference speed simultaneously. This flexible design allows users to select computational blocks tailored to different inference devices.
Programmable Gradient Information¶
YOLOv9 addresses the information bottleneck through an auxiliary supervision framework known as Programmable Gradient Information (PGI). PGI is designed as a training-time aid: it enables efficient and accurate gradient backpropagation through interconnections with earlier layers, but routes them through a removable branch, so the additional computation can be stripped at inference time for model compactness and inference speed. To strengthen these interconnections, it uses multi-level auxiliary information with integration networks, which aggregate gradients from multiple convolutional stages and consolidate meaningful gradients for propagation. PGI consists of three main components:
Main Branch: The main branch is primarily used for the inference process. Since the other components of PGI are not required for the inference stage, YOLOv9 ensures that no additional inference costs are incurred.
Reversible Auxiliary Branch: A reversible auxiliary branch is introduced to ensure reliable gradient generation and parameter updates in the network. This branch serves to maintain complete information, taking advantage of the reversible architecture. However, integrating it directly into the main branch incurs significant inference costs, motivating the design of a reversible auxiliary branch. By incorporating this branch into the deep supervision framework, the main branch can receive reliable gradient information, aiding in the extraction of features relevant to the target task. This allows its application in both shallow and deep networks, preserving inference capabilities by removing the auxiliary branch during inference.
Multi-level auxiliary information: Enhances deep supervision by integrating an integration network between hierarchical layers of the feature pyramid, allowing the head branch to receive aggregated gradient information from different prediction heads. This approach mitigates the problem of deep feature pyramids missing important information needed for target object prediction, ensuring that the head branch retains complete information to learn predictions on multiple targets.
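The removable-branch idea behind PGI can be sketched in a few lines of plain Python (a toy illustration with made-up weights, not the actual YOLOv9 code): the auxiliary branch produces extra supervised outputs only during training and is simply dropped at inference, so it adds no inference cost.

```python
import numpy as np

class PGISketch:
    """Toy model: a main branch used at both train and inference time,
    plus an auxiliary branch that exists only to shape gradients during
    training and is removed for deployment."""

    def __init__(self, dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.w_main = rng.normal(size=(dim, dim))
        self.w_aux = rng.normal(size=(dim, dim))  # dropped at inference

    def forward(self, x, training=False):
        main_out = x @ self.w_main
        if training:
            aux_out = x @ self.w_aux  # extra supervision signal
            return main_out, aux_out
        return main_out               # no additional inference cost

model = PGISketch()
x = np.ones((1, 8))
train_outs = model.forward(x, training=True)   # two heads during training
infer_out = model.forward(x)                   # single output at inference
```

The main output is identical in both modes; only the auxiliary supervision signal disappears, which is the property PGI relies on.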
Generalized Efficient Layer Aggregation Network (GELAN)¶
YOLOv9 also continues to maintain the real-time inference support that the YOLO architecture family is well known for, through the introduction of a Generalized Efficient Layer Aggregation Network (GELAN), which combines key features of CSPNet and ELAN.
CSPNet is known for its efficient gradient path planning, improving feature extraction. ELAN, on the other hand, prioritizes inference speed by employing stacked convolutional layers. GELAN integrates these strengths to create a versatile architecture that emphasizes lightweight design, fast inference, and accuracy. It extends ELAN’s capabilities by allowing the stacking of any computational blocks beyond the convolutional layers, allowing inference optimizations to be applied across all layers.
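As a rough structural sketch (toy NumPy code, not the real GELAN implementation): a CSP-style split sends one half of the channels straight through, the other half passes through a configurable stack of blocks, which here can be any callable rather than only convolutions, and every intermediate output is concatenated for aggregation.

```python
import numpy as np

def csp_elan_block(x, blocks):
    """Toy GELAN-style aggregation: split channels (the CSPNet idea),
    push one half through a stack of arbitrary blocks (the ELAN idea,
    generalized to any callable), and concatenate every intermediate
    output for the final aggregation."""
    half = x.shape[-1] // 2
    shortcut, branch = x[..., :half], x[..., half:]
    outputs = [shortcut, branch]
    for block in blocks:              # any computational block, not just convs
        branch = block(branch)
        outputs.append(branch)
    return np.concatenate(outputs, axis=-1)

relu = lambda t: np.maximum(t, 0.0)
x = np.arange(8.0).reshape(1, 8) - 4.0
y = csp_elan_block(x, [relu, relu])
print(y.shape)  # (1, 16): 2 splits + 2 block outputs, 4 channels each
```

Because the blocks are plain callables, swapping in a different computational block per deployment target requires no change to the surrounding aggregation logic, which is the flexibility GELAN is after.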
YOLOv9 vs YOLOv8 vs YOLOv7¶
Object detection with YOLOv9 shows dramatic improvements across multiple metrics compared to previous state-of-the-art models. Despite having significantly fewer parameters compared to the larger architectural variants of YOLOv7 and YOLOv8, YOLOv9 still manages to outperform them in terms of accuracy. Furthermore, YOLOv9 maintains nearly the same computational complexity as its direct predecessor, YOLOv7, and avoids the additional complexity that YOLOv8 incurs.
YOLOv9 even outperforms state-of-the-art real-time object detectors from outside the YOLO family, such as RT-DETR, RTMDet, and PP-YOLOE, when trained on the COCO dataset. Those models had the advantage of leveraging weights pre-trained on ImageNet, yet YOLOv9 still secured an advantage over them with a train-from-scratch method, demonstrating a strong ability to learn robust features quickly. This suggests that training YOLOv9 on custom datasets could further boost its already impressive metrics.
Implementing YOLOv9¶
We start by installing rasterio and importing the libraries that we will use throughout the code:
!pip install rasterio
Collecting rasterio
Downloading rasterio-1.3.10-cp310-cp310-manylinux2014_x86_64.whl (21.5 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 21.5/21.5 MB 36.2 MB/s eta 0:00:00
Collecting affine (from rasterio)
Downloading affine-2.4.0-py3-none-any.whl (15 kB)
Requirement already satisfied: attrs in /usr/local/lib/python3.10/dist-packages (from rasterio) (23.2.0)
Requirement already satisfied: certifi in /usr/local/lib/python3.10/dist-packages (from rasterio) (2024.2.2)
Requirement already satisfied: click>=4.0 in /usr/local/lib/python3.10/dist-packages (from rasterio) (8.1.7)
Requirement already satisfied: cligj>=0.5 in /usr/local/lib/python3.10/dist-packages (from rasterio) (0.7.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from rasterio) (1.25.2)
Collecting snuggs>=1.4.1 (from rasterio)
Downloading snuggs-1.4.7-py3-none-any.whl (5.4 kB)
Requirement already satisfied: click-plugins in /usr/local/lib/python3.10/dist-packages (from rasterio) (1.1.1)
Requirement already satisfied: setuptools in /usr/local/lib/python3.10/dist-packages (from rasterio) (67.7.2)
Requirement already satisfied: pyparsing>=2.1.6 in /usr/local/lib/python3.10/dist-packages (from snuggs>=1.4.1->rasterio) (3.1.2)
Installing collected packages: snuggs, affine, rasterio
Successfully installed affine-2.4.0 rasterio-1.3.10 snuggs-1.4.7
import os
import shutil
import json
import ast
import glob

import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import rasterio
import geopandas as gpd
import fastai.vision as vision
from matplotlib import pyplot as plt
from rasterio.features import rasterize
from rasterio.windows import Window
from rasterio.plot import show
from shapely.geometry import box
from skimage import io
from skimage.io import imsave
from sklearn import model_selection
from tqdm import tqdm
The next steps are to mount Google Drive, define the file paths, and open the image:
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
path_img = '/content/drive/MyDrive/Datasets/Tomato_detection/Tomate_B_AOI.tif'
path_points = '/content/drive/MyDrive/Datasets/Tomato_detection/Tomate_B.shp'
src = rasterio.open(path_img)
img = src.read()
img.shape
(3, 9137, 2695)
img = img.transpose([1,2,0])
Now we can plot the original image:
plt.figure(figsize=[16,16])
plt.imshow(img)
plt.axis('off')
(-0.5, 2694.5, 9136.5, -0.5)
So, let's split this image into several 512x512-pixel patches:
os.mkdir('/content/data')

qtd = 0
out_meta = src.meta.copy()
for n in range(src.meta['width'] // 512):
    for m in range(src.meta['height'] // 512):
        x = n * 512
        y = m * 512
        window = Window(x, y, 512, 512)
        win_transform = src.window_transform(window)
        arr_win = src.read(window=window)
        arr_win = arr_win[0:3, :, :]
        qtd = qtd + 1
        path_exp = '/content/data/img_' + str(qtd) + '.tif'
        out_meta.update({"driver": "GTiff", "height": 512, "width": 512,
                         "transform": win_transform})
        with rasterio.open(path_exp, 'w', **out_meta) as dst:
            for i, layer in enumerate(arr_win, start=1):
                dst.write_band(i, layer)  # write each band of the window
        del arr_win
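The tiling arithmetic above can be sanity-checked without rasterio: with a 512-px step over the 2695 × 9137 (width × height) raster loaded earlier, the loop produces (2695 // 512) × (9137 // 512) = 5 × 17 = 85 patches, which matches the 76 training + 9 validation images in the split below.

```python
# Sanity check of the tiling loop: non-overlapping 512-px windows
# over the 2695 x 9137 (width x height) raster.
width, height, step = 2695, 9137, 512

origins = [(n * step, m * step)          # top-left (x, y) of each window
           for n in range(width // step)
           for m in range(height // step)]

print(len(origins))              # 85 patches: 5 columns x 17 rows
print(origins[0], origins[-1])   # first and last window origins
```

Note that because integer division discards the remainder, the strip of pixels beyond the last full window on each axis is simply not tiled.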
We then have the data folder with the patches resulting from the split. Let's separate these images into training and validation sets:
path_data = '/content/data'
images_files = [f for f in os.listdir(path_data)]
images_files_train, images_files_valid= model_selection.train_test_split(
images_files,
test_size=0.1,
random_state=42,
shuffle=True,
)
print(len(images_files_train))
print(len(images_files_valid))
76 9
The patches are in .tif format, but the YOLOv9 implementation requires images in .jpg format. Let's create two folders in our Google Colab workspace to store the training and validation images.
destination_1 = 'train'
destination_2 = 'validation'
if not os.path.isdir(destination_1):
    os.mkdir(destination_1)
if not os.path.isdir(destination_2):
    os.mkdir(destination_2)
path_data_new = '/content/train'
for images in images_files_train:
    src = rasterio.open(os.path.join(path_data, images))
    raster = src.read()
    raster = raster.transpose([1, 2, 0])
    imsave(os.path.join(path_data_new, images.split('.')[0] + '.jpg'), raster)
<ipython-input-17-fe69e390c447>:6: UserWarning: /content/train/img_16.jpg is a low contrast image
imsave(os.path.join(path_data_new,images.split('.')[0] + '.jpg'), raster)
path_data_new = '/content/validation'
for images in images_files_valid:
    src = rasterio.open(os.path.join(path_data, images))
    raster = src.read()
    raster = raster.transpose([1, 2, 0])
    imsave(os.path.join(path_data_new, images.split('.')[0] + '.jpg'), raster)
<ipython-input-18-3fb345d5435c>:6: UserWarning: /content/validation/img_11.jpg is a low contrast image
imsave(os.path.join(path_data_new,images.split('.')[0] + '.jpg'), raster)
<ipython-input-18-3fb345d5435c>:6: UserWarning: /content/validation/img_14.jpg is a low contrast image
imsave(os.path.join(path_data_new,images.split('.')[0] + '.jpg'), raster)
Now let's work with the labels. From the points we collected, we create a square buffer around each point to obtain its bounding box.
gdf_tomate = gpd.read_file(path_points)
gdf_tomate.geometry = gdf_tomate.buffer(0.2, cap_style=3)
fig, ax = plt.subplots(1, 1, figsize=(15, 15))
gdf_tomate.plot(ax = ax)
<Axes: >
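With cap_style=3 (square caps), buffer turns each point into an axis-aligned square of side 2 × 0.2 map units centred on the point. The corner arithmetic it performs is, in essence, the following (a plain-Python sketch with a made-up example coordinate; shapely not required):

```python
def square_buffer(x, y, half_size=0.2):
    """Bounds of the axis-aligned square that buffer(half_size, cap_style=3)
    draws around a point -- a simplified sketch of the shapely operation."""
    return (x - half_size, y - half_size,   # xmin, ymin
            x + half_size, y + half_size)   # xmax, ymax

xmin, ymin, xmax, ymax = square_buffer(225280.0, 8367318.0)
print(round(xmax - xmin, 6), round(ymax - ymin, 6))  # 0.4 x 0.4 unit box
```

The 0.2-unit half-size is chosen to roughly match the size of a tomato plant in the orthomosaic, so every box has the same ground footprint.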
Now let's create the training and validation datasets with the bboxes that intersect each of the patches:
poly_geometry_train = []
img_id_train = []
for fp1 in images_files_train:
    src1 = rasterio.open(os.path.join(path_data, fp1))
    bounds1 = src1.bounds
    df1 = gpd.GeoDataFrame({"id": 1, "geometry": [box(*bounds1)]})
    df1 = df1.set_crs(src1.crs)
    for i, row in gdf_tomate.iterrows():
        intersects = df1['geometry'][0].intersection(row['geometry'])
        if not intersects.is_empty:
            poly_geometry_train.append(intersects)
            img_id_train.append(fp1)

poly_geometry_val = []
img_id_val = []
for fp2 in images_files_valid:
    src2 = rasterio.open(os.path.join(path_data, fp2))
    bounds2 = src2.bounds
    df2 = gpd.GeoDataFrame({"id": 1, "geometry": [box(*bounds2)]})
    df2 = df2.set_crs(src2.crs)
    for i, row in gdf_tomate.iterrows():
        intersects = df2['geometry'][0].intersection(row['geometry'])
        if not intersects.is_empty:
            poly_geometry_val.append(intersects)
            img_id_val.append(fp2)
dataset_train = gpd.GeoDataFrame(geometry=poly_geometry_train)
dataset_val = gpd.GeoDataFrame(geometry=poly_geometry_val)
dataset_train['ImageId'] = img_id_train
dataset_val['ImageId'] = img_id_val
So we have the dataframe with the bbox geometry and the id of the image to which it belongs:
dataset_val
| geometry | ImageId | |
|---|---|---|
| 0 | POLYGON ((225280.884 8367318.765, 225280.884 8... | img_65.tif |
| 1 | POLYGON ((225280.308 8367318.989, 225279.908 8... | img_65.tif |
| 2 | POLYGON ((225281.217 8367318.997, 225280.817 8... | img_65.tif |
| 3 | POLYGON ((225281.195 8367319.736, 225280.795 8... | img_65.tif |
| 4 | POLYGON ((225280.249 8367319.625, 225279.849 8... | img_65.tif |
| ... | ... | ... |
| 311 | POLYGON ((225266.761 8367331.986, 225266.361 8... | img_47.tif |
| 312 | POLYGON ((225266.647 8367332.692, 225266.247 8... | img_47.tif |
| 313 | POLYGON ((225266.503 8367333.414, 225266.103 8... | img_47.tif |
| 314 | POLYGON ((225266.389 8367333.953, 225265.989 8... | img_47.tif |
| 315 | POLYGON ((225266.321 8367334.955, 225266.321 8... | img_47.tif |
316 rows × 2 columns
The next step is to convert the geographic coordinates of each bounding box into pixel (row, column) values.
df_train = []
Id_train = []
for i, row in dataset_train.iterrows():
    ImageID = row['ImageId'].split('.')[0] + '.jpg'
    src1 = rasterio.open(os.path.join(path_data, row['ImageId']))
    poly = []
    for point in list(row.geometry.exterior.coords):
        x, y = point[0], point[1]
        r, c = src1.index(x, y)   # avoid shadowing the loop variable `row`
        poly.append((r, c))
    Id_train.append(ImageID)
    df_train.append(poly)

df_val = []
Id_val = []
for i, row in dataset_val.iterrows():
    ImageID = row['ImageId'].split('.')[0] + '.jpg'
    src2 = rasterio.open(os.path.join(path_data, row['ImageId']))
    poly = []
    for point in list(row.geometry.exterior.coords):
        x, y = point[0], point[1]
        r, c = src2.index(x, y)
        poly.append((r, c))
    Id_val.append(ImageID)
    df_val.append(poly)
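src.index(x, y) converts map coordinates into a (row, col) pixel index via the raster's affine transform. For a north-up raster with square pixels it reduces to the arithmetic below (the origin and pixel size here are made-up illustration values, not this dataset's actual georeferencing):

```python
def world_to_pixel(x, y, x_origin, y_origin, pixel_size):
    """Simplified version of rasterio's src.index(x, y) for a north-up
    raster: columns grow with x; rows grow as y decreases from the
    top-left origin."""
    col = int((x - x_origin) / pixel_size)
    row = int((y_origin - y) / pixel_size)
    return row, col

# Hypothetical georeferencing: top-left corner at (225279.0, 8367319.0),
# 0.25-unit pixels.
print(world_to_pixel(225280.0, 8367318.0, 225279.0, 8367319.0, 0.25))  # (4, 4)
```

This is why the vertex tuples collected above are stored as (row, col): the row axis of the array runs downward while the map's y axis runs upward.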
train_set = pd.DataFrame([])
valid_set = pd.DataFrame([])
train_set['ImageId'] = Id_train
valid_set['ImageId'] = Id_val
train_set['geometry'] = df_train
valid_set['geometry'] = df_val
train_set['class'] = 0
train_set['class_name'] = 'tomate'
valid_set['class'] = 0
valid_set['class_name'] = 'tomate'
train_set
| ImageId | geometry | class | class_name | |
|---|---|---|---|---|
| 0 | img_75.jpg | [(512, 26), (487, 26), (487, 52), (512, 52), (... | 0 | tomate |
| 1 | img_75.jpg | [(470, 45), (470, 19), (445, 19), (445, 45), (... | 0 | tomate |
| 2 | img_75.jpg | [(426, 35), (426, 10), (401, 10), (401, 35), (... | 0 | tomate |
| 3 | img_75.jpg | [(386, 33), (386, 8), (360, 8), (360, 33), (38... | 0 | tomate |
| 4 | img_75.jpg | [(351, 28), (351, 3), (325, 3), (325, 28), (35... | 0 | tomate |
| ... | ... | ... | ... | ... |
| 2981 | img_46.jpg | [(512, 0), (504, 0), (504, 25), (512, 25), (51... | 0 | tomate |
| 2982 | img_46.jpg | [(466, 0), (466, 22), (491, 22), (491, 0), (46... | 0 | tomate |
| 2983 | img_46.jpg | [(426, 0), (426, 16), (451, 16), (451, 0), (42... | 0 | tomate |
| 2984 | img_46.jpg | [(385, 0), (385, 12), (410, 12), (410, 0), (38... | 0 | tomate |
| 2985 | img_46.jpg | [(344, 0), (344, 6), (370, 6), (370, 0), (344,... | 0 | tomate |
2986 rows × 4 columns
We just need to get the xmax, ymax, xmin and ymin coordinates of each annotation. To do this we will use the getBounds function:
def getBounds(geometry):
    try:
        arr = np.array(geometry).T
        xmin = np.min(arr[0])
        ymin = np.min(arr[1])
        xmax = np.max(arr[0])
        ymax = np.max(arr[1])
        return (xmin, ymin, xmax, ymax)
    except Exception:
        return np.nan

def getWidth(bounds):
    try:
        (xmin, ymin, xmax, ymax) = bounds
        return np.abs(xmax - xmin)
    except Exception:
        return np.nan

def getHeight(bounds):
    try:
        (xmin, ymin, xmax, ymax) = bounds
        return np.abs(ymax - ymin)
    except Exception:
        return np.nan
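For the integer vertex lists stored in the geometry column, getBounds reduces to plain min/max over the coordinate pairs. As a standalone check, recomputing the bounds of the first polygon in the train_set table above (closing vertex omitted, which does not affect the min/max) reproduces its bounds, width and height columns:

```python
# First polygon of train_set: [(512, 26), (487, 26), (487, 52), (512, 52)]
poly = [(512, 26), (487, 26), (487, 52), (512, 52)]

first = [p[0] for p in poly]    # first tuple elements (pixel rows)
second = [p[1] for p in poly]   # second tuple elements (pixel columns)
bounds = (min(first), min(second), max(first), max(second))

print(bounds)                                        # (487, 26, 512, 52)
print(bounds[2] - bounds[0], bounds[3] - bounds[1])  # width 25, height 26
```

Keep in mind that because the tuples are (row, col), the names xmin/xmax inside getBounds actually hold row extremes and ymin/ymax hold column extremes; the plotting and export code below accounts for this ordering.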
train_set.loc[:,'bounds'] = train_set.loc[:,'geometry'].apply(getBounds)
train_set.loc[:,'width'] = train_set.loc[:,'bounds'].apply(getWidth)
train_set.loc[:,'height'] = train_set.loc[:,'bounds'].apply(getHeight)
train_set.head(10)
| ImageId | geometry | class | class_name | bounds | width | height | |
|---|---|---|---|---|---|---|---|
| 0 | img_75.jpg | [(512, 26), (487, 26), (487, 52), (512, 52), (... | 0 | tomate | (487, 26, 512, 52) | 25 | 26 |
| 1 | img_75.jpg | [(470, 45), (470, 19), (445, 19), (445, 45), (... | 0 | tomate | (445, 19, 470, 45) | 25 | 26 |
| 2 | img_75.jpg | [(426, 35), (426, 10), (401, 10), (401, 35), (... | 0 | tomate | (401, 10, 426, 35) | 25 | 25 |
| 3 | img_75.jpg | [(386, 33), (386, 8), (360, 8), (360, 33), (38... | 0 | tomate | (360, 8, 386, 33) | 26 | 25 |
| 4 | img_75.jpg | [(351, 28), (351, 3), (325, 3), (325, 28), (35... | 0 | tomate | (325, 3, 351, 28) | 26 | 25 |
| 5 | img_75.jpg | [(307, 30), (307, 5), (282, 5), (282, 30), (30... | 0 | tomate | (282, 5, 307, 30) | 25 | 25 |
| 6 | img_75.jpg | [(237, 0), (237, 18), (262, 18), (262, 0), (23... | 0 | tomate | (237, 0, 262, 18) | 25 | 18 |
| 7 | img_75.jpg | [(204, 0), (204, 15), (229, 15), (229, 0), (20... | 0 | tomate | (204, 0, 229, 15) | 25 | 15 |
| 8 | img_75.jpg | [(159, 0), (159, 7), (185, 7), (185, 0), (159,... | 0 | tomate | (159, 0, 185, 7) | 26 | 7 |
| 9 | img_75.jpg | [(129, 0), (129, 3), (154, 3), (154, 0), (129,... | 0 | tomate | (129, 0, 154, 3) | 25 | 3 |
valid_set.loc[:,'bounds'] = valid_set.loc[:,'geometry'].apply(getBounds)
valid_set.loc[:,'width'] = valid_set.loc[:,'bounds'].apply(getWidth)
valid_set.loc[:,'height'] = valid_set.loc[:,'bounds'].apply(getHeight)
valid_set.head(10)
| ImageId | geometry | class | class_name | bounds | width | height | |
|---|---|---|---|---|---|---|---|
| 0 | img_64.jpg | [(512, 326), (492, 326), (492, 352), (512, 352... | 0 | tomate | (492, 326, 512, 352) | 20 | 26 |
| 1 | img_64.jpg | [(512, 385), (494, 385), (494, 410), (512, 410... | 0 | tomate | (494, 385, 512, 410) | 18 | 25 |
| 2 | img_64.jpg | [(477, 405), (477, 380), (452, 380), (452, 405... | 0 | tomate | (452, 380, 477, 405) | 25 | 25 |
| 3 | img_64.jpg | [(474, 346), (474, 320), (449, 320), (449, 346... | 0 | tomate | (449, 320, 474, 346) | 25 | 26 |
| 4 | img_64.jpg | [(433, 342), (433, 317), (408, 317), (408, 342... | 0 | tomate | (408, 317, 433, 342) | 25 | 25 |
| 5 | img_64.jpg | [(438, 402), (438, 376), (412, 376), (412, 402... | 0 | tomate | (412, 376, 438, 402) | 26 | 26 |
| 6 | img_64.jpg | [(392, 401), (392, 375), (367, 375), (367, 401... | 0 | tomate | (367, 375, 392, 401) | 25 | 26 |
| 7 | img_64.jpg | [(394, 339), (394, 314), (369, 314), (369, 339... | 0 | tomate | (369, 314, 394, 339) | 25 | 25 |
| 8 | img_64.jpg | [(353, 332), (353, 307), (328, 307), (328, 332... | 0 | tomate | (328, 307, 353, 332) | 25 | 25 |
| 9 | img_64.jpg | [(352, 395), (352, 370), (327, 370), (327, 395... | 0 | tomate | (327, 370, 352, 395) | 25 | 25 |
Let's plot an example image and its annotations:
img = io.imread('/content/train/img_21.jpg')
detec = train_set[train_set['ImageId'] == 'img_21.jpg']
for i, row in detec.iterrows():
    color = (255, 0, 0)
    # bounds are stored as (row_min, col_min, row_max, col_max), so the
    # (x, y, w, h) rectangle is built from the column/row pairs accordingly
    cv2.rectangle(img,
                  (max(0, row['bounds'][1]),                      # x
                   max(0, row['bounds'][0]),                      # y
                   max(0, row['bounds'][3] - row['bounds'][1]),   # w
                   max(0, row['bounds'][2] - row['bounds'][0])),  # h
                  color, 1)
plt.figure(figsize=(16, 16))
plt.imshow(img)
<matplotlib.image.AxesImage at 0x7efd5171d6f0>
After that we create the .csv files to use in YOLOv9:
def convert(data, data_type):
    df = data.groupby('ImageId')['bounds'].apply(list).reset_index(name='bboxes')
    df['classes'] = data.groupby('ImageId')['class'].apply(list).reset_index(drop=True)
    df.to_csv(data_type + '.csv', index=False)
    print(data_type)
    print(df.shape)
    print(df.head())
    return df  # so the assignments below actually hold the dataframes

df_train = convert(train_set, '/content/train')
df_valid = convert(valid_set, '/content/validation')
/content/train
(57, 3)
ImageId bboxes \
0 img_1.jpg [(509, 407, 512, 432), (472, 403, 497, 428), (...
1 img_18.jpg [(269, 510, 294, 512), (221, 506, 247, 512), (...
2 img_2.jpg [(510, 428, 512, 453), (500, 484, 512, 509), (...
3 img_20.jpg [(502, 1, 512, 26), (490, 52, 512, 77), (447, ...
4 img_21.jpg [(492, 0, 512, 5), (474, 73, 499, 98), (490, 1...
classes
0 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
1 [0, 0, 0, 0, 0]
2 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
3 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
4 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
/content/validation
(8, 3)
ImageId bboxes \
0 img_33.jpg [(211, 511, 236, 512), (173, 506, 199, 512), (...
1 img_41.jpg [(485, 465, 511, 490), (445, 459, 471, 485), (...
2 img_60.jpg [(492, 147, 512, 173), (501, 95, 512, 121), (4...
3 img_63.jpg [(496, 270, 512, 296), (482, 329, 507, 354), (...
4 img_64.jpg [(492, 326, 512, 352), (494, 385, 512, 410), (...
classes
0 [0, 0, 0, 0, 0, 0, 0]
1 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
2 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
3 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
4 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
We can then clone the YOLOv9 implementation from GitHub:
import os
HOME = os.getcwd()
print(HOME)
/content
!git clone https://github.com/SkalskiP/yolov9.git
%cd yolov9
!pip install -r requirements.txt -q
Cloning into 'yolov9'...
remote: Enumerating objects: 147, done.
remote: Counting objects: 100% (25/25), done.
remote: Compressing objects: 100% (11/11), done.
remote: Total 147 (delta 16), reused 14 (delta 14), pack-reused 122
Receiving objects: 100% (147/147), 607.60 KiB | 5.68 MiB/s, done.
Resolving deltas: 100% (58/58), done.
/content/yolov9
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 195.4/195.4 kB 1.9 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.7/62.7 kB 6.6 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 14.4 MB/s eta 0:00:00
from IPython.display import Image
!mkdir -p {HOME}/weights
Let's download the pre-trained weights:
!wget -P {HOME}/weights -q https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-c.pt
!wget -P {HOME}/weights -q https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-e.pt
!wget -P {HOME}/weights -q https://github.com/WongKinYiu/yolov9/releases/download/v0.1/gelan-c.pt
!wget -P {HOME}/weights -q https://github.com/WongKinYiu/yolov9/releases/download/v0.1/gelan-e.pt
!ls -la {HOME}/weights
total 402448
drwxr-xr-x 2 root root      4096 Feb 25 15:34 .
drwxr-xr-x 1 root root      4096 Feb 25 15:34 ..
-rw-r--r-- 1 root root  51508261 Feb 18 12:36 gelan-c.pt
-rw-r--r-- 1 root root 117203713 Feb 18 12:36 gelan-e.pt
-rw-r--r-- 1 root root 103153312 Feb 18 12:36 yolov9-c.pt
-rw-r--r-- 1 root root 140217688 Feb 18 12:36 yolov9-e.pt
%cd yolov9
!ls
[Errno 2] No such file or directory: 'yolov9'
/content/yolov9
benchmarks.py  export.py   panoptic          segment          utils
classify       figure      README.md         train_dual.py    val_dual.py
data           hubconf.py  requirements.txt  train.py         val.py
detect.py      models      scripts           train_triple.py  val_triple.py
We create the main folder for our data:
!mkdir tomato_data
%cd tomato_data
/content/yolov9/tomato_data
!mkdir images
!mkdir labels
%cd images
!mkdir train
!mkdir validation
%cd ..
%cd labels
!mkdir train
!mkdir validation
%cd ..
%cd ..
%cd ..
/content/yolov9/tomato_data/images
/content/yolov9/tomato_data
/content/yolov9/tomato_data/labels
/content/yolov9/tomato_data
/content/yolov9
/content
for root, dirs, _ in os.walk('yolov9/tomato_data'):
    print(root)
    print(dirs)
yolov9/tomato_data
['images', 'labels']
yolov9/tomato_data/images
['train', 'validation']
yolov9/tomato_data/images/train
[]
yolov9/tomato_data/images/validation
[]
yolov9/tomato_data/labels
['train', 'validation']
yolov9/tomato_data/labels/train
[]
yolov9/tomato_data/labels/validation
[]
Let's copy the images into the project folders we just created and write a .txt label file for each image with its bounding boxes:
INPUT_PATH = '/content/'
OUTPUT_PATH = '/content/yolov9/tomato_data'

def process_data(data, data_type='train'):
    for _, row in tqdm(data.iterrows(), total=len(data)):
        image_name = row['ImageId'].split('.')[0]
        bounding_boxes = row['bboxes']
        classes = row['classes']
        yolo_data = []
        for bbox, Class in zip(bounding_boxes, classes):
            # bounds are stored as (row_min, col_min, row_max, col_max)
            x_min = bbox[1]
            y_min = bbox[0]
            x_max = bbox[3]
            y_max = bbox[2]
            # YOLO format: class, x_center, y_center, width, height,
            # all normalized by the 512-px patch size
            x_center = (x_min + x_max) / 2.0 / 512
            y_center = (y_min + y_max) / 2.0 / 512
            x_extend = (x_max - x_min) / 512
            y_extend = (y_max - y_min) / 512
            yolo_data.append([Class, x_center, y_center, x_extend, y_extend])
        yolo_data = np.array(yolo_data)
        np.savetxt(
            os.path.join(OUTPUT_PATH, f"labels/{data_type}/{image_name}.txt"),
            yolo_data,
            fmt=["%d", "%f", "%f", "%f", "%f"]
        )
        shutil.copyfile(
            os.path.join(INPUT_PATH, f"{data_type}/{image_name}.jpg"),
            os.path.join(OUTPUT_PATH, f"images/{data_type}/{image_name}.jpg")
        )
df_train = pd.read_csv('/content/train.csv')
df_train.bboxes = df_train.bboxes.apply(ast.literal_eval)
df_train.classes = df_train.classes.apply(ast.literal_eval)
df_valid = pd.read_csv('/content/validation.csv')
df_valid.bboxes = df_valid.bboxes.apply(ast.literal_eval)
df_valid.classes = df_valid.classes.apply(ast.literal_eval)
process_data(df_train, data_type='train')
process_data(df_valid, data_type='validation')
100%|██████████| 57/57 [00:00<00:00, 1214.83it/s] 100%|██████████| 8/8 [00:00<00:00, 994.91it/s]
Here we can check if the .txt was created correctly:
f = open('/content/yolov9/tomato_data/labels/train/' + os.listdir("/content/yolov9/tomato_data/labels/train/")[0])
print(f.name)
for l in f:
    print(l)
/content/yolov9/tomato_data/labels/train/img_49.txt
0 0.612305 0.981445 0.048828 0.037109
0 0.512695 0.979492 0.048828 0.041016
0 0.497070 0.909180 0.048828 0.048828
0 0.593750 0.903320 0.050781 0.048828
0 0.596680 0.830078 0.048828 0.050781
0 0.498047 0.834961 0.050781 0.048828
0 0.491211 0.752930 0.048828 0.048828
0 0.586914 0.749023 0.048828 0.048828
0 0.584961 0.675781 0.048828 0.050781
0 0.487305 0.677734 0.048828 0.050781
0 0.461914 0.598633 0.048828 0.048828
0 0.569336 0.593750 0.048828 0.050781
0 0.568359 0.505859 0.050781 0.050781
0 0.466797 0.505859 0.050781 0.050781
0 0.453125 0.430664 0.050781 0.048828
0 0.562500 0.430664 0.050781 0.048828
0 0.551758 0.358398 0.048828 0.048828
0 0.454102 0.352539 0.048828 0.048828
0 0.446289 0.280273 0.048828 0.048828
0 0.551758 0.266602 0.048828 0.048828
0 0.539062 0.190430 0.050781 0.048828
0 0.436523 0.198242 0.048828 0.048828
0 0.428711 0.110352 0.048828 0.048828
0 0.532227 0.114258 0.048828 0.048828
0 0.524414 0.037109 0.048828 0.050781
0 0.423828 0.043945 0.050781 0.048828
0 0.223633 0.961914 0.048828 0.048828
0 0.327148 0.975586 0.048828 0.048828
0 0.319336 0.909180 0.048828 0.048828
0 0.223633 0.884766 0.048828 0.050781
0 0.214844 0.799805 0.050781 0.048828
0 0.307617 0.824219 0.048828 0.050781
0 0.308594 0.741211 0.050781 0.048828
0 0.208008 0.719727 0.048828 0.048828
0 0.202148 0.641602 0.048828 0.048828
0 0.295898 0.649414 0.048828 0.048828
0 0.292969 0.583008 0.050781 0.048828
0 0.186523 0.569336 0.048828 0.048828
0 0.178711 0.489258 0.048828 0.048828
0 0.284180 0.500977 0.048828 0.048828
0 0.269531 0.420898 0.050781 0.048828
0 0.176758 0.395508 0.048828 0.048828
0 0.279297 0.347656 0.050781 0.050781
0 0.174805 0.322266 0.048828 0.050781
0 0.258789 0.265625 0.048828 0.050781
0 0.155273 0.235352 0.048828 0.048828
0 0.263672 0.182617 0.050781 0.048828
0 0.153320 0.161133 0.048828 0.048828
0 0.252930 0.096680 0.048828 0.048828
0 0.157227 0.080078 0.048828 0.050781
0 0.140625 0.013672 0.050781 0.027344
0 0.247070 0.020508 0.048828 0.041016
0 0.084961 0.999023 0.048828 0.001953
0 0.000977 0.979492 0.001953 0.041016
0 0.083008 0.942383 0.048828 0.048828
0 0.072266 0.861328 0.050781 0.050781
0 0.063477 0.786133 0.048828 0.048828
0 0.063477 0.706055 0.048828 0.048828
0 0.049805 0.627930 0.048828 0.048828
0 0.041992 0.545898 0.048828 0.048828
0 0.037109 0.478516 0.050781 0.050781
0 0.036133 0.398438 0.048828 0.050781
0 0.032227 0.325195 0.048828 0.048828
0 0.030273 0.242188 0.048828 0.050781
0 0.023438 0.169922 0.046875 0.050781
0 0.017578 0.083008 0.035156 0.048828
0 0.013672 0.016602 0.027344 0.033203
Finally, we create the YAML file describing our dataset and run the train.py script.
%cd yolov9
/content/yolov9
%%writefile tomato.yaml
train: tomato_data/images/train
val: tomato_data/images/validation
nc: 1
names: ['Tomato']
Writing tomato.yaml
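Before launching training, it is worth confirming that every image has a matching label file, since images without labels are silently treated as background. A minimal sketch of such a check (the helper name and directory layout are ours, not part of the notebook):

```python
from pathlib import Path


def unlabeled_images(img_dir: Path, lbl_dir: Path) -> list[str]:
    """Return the stems of .jpg images that have no corresponding .txt label file."""
    labels = {p.stem for p in lbl_dir.glob('*.txt')}
    return [p.stem for p in img_dir.glob('*.jpg') if p.stem not in labels]
```

Running it over `tomato_data/images/train` and `tomato_data/labels/train` should return an empty list if the dataset is consistent.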
!python train.py \
--batch 16 --epochs 20 --img 512 --device 0 --min-items 0 --close-mosaic 15 \
--data tomato.yaml \
--weights {HOME}/weights/gelan-c.pt \
--cfg models/detect/gelan-c.yaml \
--hyp hyp.scratch-high.yaml
train: weights=/content/weights/gelan-c.pt, cfg=models/detect/gelan-c.yaml, data=tomato.yaml, hyp=hyp.scratch-high.yaml, epochs=20, batch_size=16, imgsz=512, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, bucket=, cache=None, image_weights=False, device=0, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs/train, name=exp, exist_ok=False, quad=False, cos_lr=False, flat_cos_lr=False, fixed_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, min_items=0, close_mosaic=15, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest
YOLOv5 🚀 1e33dbb Python-3.10.12 torch-2.1.0+cu121 CUDA:0 (Tesla T4, 15102MiB)
hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, cls_pw=1.0, dfl=1.5, obj_pw=1.0, iou_t=0.2, anchor_t=5.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.9, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.15, copy_paste=0.3
Overriding model.yaml nc=80 with nc=1
gelan-c summary: 621 layers, 25437843 parameters, 25437827 gradients, 103.2 GFLOPs
Transferred 931/937 items from /content/weights/gelan-c.pt
AMP: checks passed ✅
optimizer: SGD(lr=0.01) with parameter groups 154 weight(decay=0.0), 161 weight(decay=0.0005), 160 bias
train: Scanning /content/yolov9/tomato_data/labels/train... 57 images, 0 backgrounds, 0 corrupt
train: WARNING ⚠️ /content/yolov9/tomato_data/images/train/img_36.jpg: 1 duplicate labels removed
val: Scanning /content/yolov9/tomato_data/labels/validation... 8 images, 0 backgrounds, 0 corrupt
Image sizes 512 train, 512 val
Starting training for 20 epochs...

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       0/19      12.7G      4.238      4.592      2.508        825        512
                 Class     Images  Instances          P          R      mAP50   mAP50-95
                   all          8        521          0          0          0          0
...
      19/19      12.7G      1.524     0.7913      1.049        283        512
                   all          8        521       0.95      0.919      0.945      0.508

20 epochs completed in 0.040 hours.
Optimizer stripped from runs/train/exp/weights/last.pt, saved as runs/train/exp/weights/last_striped.pt, 51.4MB
Optimizer stripped from runs/train/exp/weights/best.pt, saved as runs/train/exp/weights/best_striped.pt, 51.4MB

Validating runs/train/exp/weights/best.pt...
Fusing layers...
gelan-c summary: 467 layers, 25411731 parameters, 0 gradients, 102.5 GFLOPs
                 Class     Images  Instances          P          R      mAP50   mAP50-95
                   all          8        521      0.963      0.931      0.959      0.509
Results saved to runs/train/exp
Image(filename=f"{HOME}/yolov9/runs/train/exp/results.png", width=1000)
!python val.py --img 512 --batch 32 --conf 0.001 --iou 0.7 --device 0 --data tomato.yaml --weights runs/train/exp/weights/best.pt
val: data=tomato.yaml, weights=['runs/train/exp/weights/best.pt'], batch_size=32, imgsz=512, conf_thres=0.001, iou_thres=0.7, max_det=300, task=val, device=0, workers=8, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=False, project=runs/val, name=exp, exist_ok=False, half=False, dnn=False, min_items=0
YOLOv5 🚀 1e33dbb Python-3.10.12 torch-2.1.0+cu121 CUDA:0 (Tesla T4, 15102MiB)
Fusing layers...
gelan-c summary: 467 layers, 25411731 parameters, 0 gradients, 102.5 GFLOPs
val: Scanning /content/yolov9/tomato_data/labels/validation.cache... 8 images, 0 backgrounds, 0 corrupt
                 Class     Images  Instances          P          R      mAP50   mAP50-95
                   all          8        521      0.961      0.929      0.955       0.51
Speed: 0.2ms pre-process, 33.0ms inference, 67.6ms NMS per image at shape (32, 3, 512, 512)
Results saved to runs/val/exp
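The precision, recall, and mAP values above are all based on intersection-over-union between predicted and ground-truth boxes; the `--iou 0.7` flag controls the IoU threshold used during non-maximum suppression. For reference, a minimal sketch of the IoU computation for two axis-aligned `(x1, y1, x2, y2)` boxes:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    # Intersection rectangle (may be empty, hence the max(0, ...) clamps).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

Two boxes that overlap over half their width, e.g. `iou((0, 0, 10, 10), (5, 0, 15, 10))`, give an IoU of 1/3, well below the 0.7 threshold.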
We can use the detect.py script to run detection on our validation images:
!python detect.py --img 512 --conf 0.1 --device 0 --weights runs/train/exp/weights/best.pt --source /content/validation --save-txt
detect: weights=['runs/train/exp/weights/best.pt'], source=/content/validation, data=data/coco128.yaml, imgsz=[512, 512], conf_thres=0.1, iou_thres=0.45, max_det=1000, device=0, view_img=False, save_txt=True, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 🚀 1e33dbb Python-3.10.12 torch-2.1.0+cu121 CUDA:0 (Tesla T4, 15102MiB)
Fusing layers...
gelan-c summary: 467 layers, 25411731 parameters, 0 gradients, 102.5 GFLOPs
image 1/9 /content/validation/img_33.jpg: 512x512 8 Tomatos, 34.5ms
image 2/9 /content/validation/img_41.jpg: 512x512 26 Tomatos, 34.6ms
image 3/9 /content/validation/img_60.jpg: 512x512 101 Tomatos, 34.5ms
image 4/9 /content/validation/img_63.jpg: 512x512 59 Tomatos, 34.5ms
image 5/9 /content/validation/img_64.jpg: 512x512 60 Tomatos, 33.9ms
image 6/9 /content/validation/img_71.jpg: 512x512 (no detections), 33.1ms
image 7/9 /content/validation/img_79.jpg: 512x512 66 Tomatos, 33.1ms
image 8/9 /content/validation/img_83.jpg: 512x512 109 Tomatos, 30.0ms
image 9/9 /content/validation/img_85.jpg: 512x512 110 Tomatos, 27.8ms
Speed: 0.4ms pre-process, 32.9ms inference, 49.9ms NMS per image at shape (1, 3, 512, 512)
Results saved to runs/detect/exp2
8 labels saved to runs/detect/exp2/labels
for images in glob.glob('/content/yolov9/runs/detect/exp/*.jpg')[0:5]:
    display(Image(filename=images))
Next, we can tile a complete orthomosaic into patches and apply the trained model to each one, obtaining detections over the whole scene. We start by setting the image path and opening it:
path_img = '/content/drive/MyDrive/Datasets/Tomato_detection/Tomate_A.tif'
src_img = rasterio.open(path_img)
img = src_img.read()
img.shape
(3, 10412, 10014)
img = img.transpose([1,2,0])
plt.figure(figsize=[16,16])
plt.imshow(img)
plt.axis('off')
(-0.5, 10013.5, 10411.5, -0.5)
if not os.path.isdir('/content/Predict'):
    os.mkdir('/content/Predict')

qtd = 0
out_meta = src_img.meta.copy()
for n in range(src_img.meta['width'] // 512):
    for m in range(src_img.meta['height'] // 512):
        x = n * 512
        y = m * 512
        window = Window(x, y, 512, 512)
        win_transform = src_img.window_transform(window)
        arr_win = src_img.read(window=window)
        arr_win = arr_win[0:3, :, :]
        # Skip empty border tiles and partial windows at the raster edges.
        if (arr_win.max() != 0) and (arr_win.shape[1] == 512) and (arr_win.shape[2] == 512):
            qtd = qtd + 1
            path_exp = '/content/Predict/img_' + str(qtd) + '.tif'
            out_meta.update({"driver": "GTiff", "height": 512, "width": 512, "transform": win_transform})
            with rasterio.open(path_exp, 'w', **out_meta) as dst:
                for i, layer in enumerate(arr_win, start=1):
                    dst.write_band(i, layer)
            del arr_win
print(qtd)
364
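Each patch was written with the transform returned by `window_transform`, so a pixel inside a patch can be mapped back to map coordinates of the orthomosaic. A minimal sketch of that affine mapping, assuming a north-up raster with square pixels (the origin, pixel size, and window offsets below are made-up values, not taken from the notebook):

```python
def pixel_to_map(col, row, origin_x, origin_y, pix_size, win_x=0, win_y=0):
    """Map a (col, row) pixel inside a window to map coordinates.

    Assumes a north-up raster: x grows eastward by pix_size per column,
    y shrinks southward by pix_size per row. win_x/win_y are the window's
    column/row offsets within the full raster.
    """
    x = origin_x + (win_x + col) * pix_size
    y = origin_y - (win_y + row) * pix_size
    return x, y


# Top-left pixel of the patch whose window starts at column 1024, row 512:
print(pixel_to_map(0, 0, 300000.0, 8000000.0, 0.05, win_x=1024, win_y=512))
```

In practice rasterio's `win_transform * (col, row)` performs exactly this mapping for the saved GeoTIFF patches, including rotated or non-square-pixel rasters.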
Now we open each of the generated images and save a copy in .JPG format:
if not os.path.isdir('/content/Predict_jpg'):
    os.mkdir('/content/Predict_jpg')

path_data_pred = '/content/Predict_jpg'
imgs_to_pred = os.listdir('/content/Predict')
for images in imgs_to_pred:
    src = rasterio.open('/content/Predict/' + images)
    raster = src.read()
    raster = raster.transpose([1, 2, 0])
    raster = raster[:, :, 0:3]
    imsave(os.path.join(path_data_pred, images.split('.')[0] + '.jpg'), raster)
<ipython-input-71-448461b5bf97>:11: UserWarning: /content/Predict_jpg/img_68.jpg is a low contrast image
imsave(os.path.join(path_data_pred,images.split('.')[0] + '.jpg'), raster)
img = io.imread('/content/Predict_jpg/img_100.jpg')
plt.figure(figsize=(12,12))
plt.imshow(img)
plt.axis('off')
plt.show()
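Since we pass `--save-txt`, detect.py writes one line per detected box in the normalized YOLO format: class index, then center x, center y, width, and height, all scaled to [0, 1]. A minimal sketch of converting such a line back to pixel corners (the example line is made up for illustration):

```python
def yolo_to_pixels(line: str, img_w: int, img_h: int):
    """Convert a 'cls cx cy w h' label line (normalized) to
    (cls, x1, y1, x2, y2) in pixel coordinates."""
    cls, cx, cy, w, h = line.split()
    cx, cy, w, h = (float(v) for v in (cx, cy, w, h))
    x1 = (cx - w / 2) * img_w
    y1 = (cy - h / 2) * img_h
    x2 = (cx + w / 2) * img_w
    y2 = (cy + h / 2) * img_h
    return int(cls), x1, y1, x2, y2


print(yolo_to_pixels('0 0.5 0.5 0.25 0.25', 512, 512))  # (0, 192.0, 192.0, 320.0, 320.0)
```

Combined with each patch's affine transform, this is what allows the detected tomatoes to be placed back onto the georeferenced orthomosaic.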
Then we run detection on the folder of .jpg images. The detections are saved as .txt files containing the bounding boxes.
!python detect.py --img 512 --conf 0.4 --device 0 --weights runs/train/exp/weights/best.pt --source /content/Predict_jpg --save-txt
detect: weights=['runs/train/exp/weights/best.pt'], source=/content/Predict_jpg, data=data/coco128.yaml, imgsz=[512, 512], conf_thres=0.4, iou_thres=0.45, max_det=1000, device=0, view_img=False, save_txt=True, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 🚀 1e33dbb Python-3.10.12 torch-2.1.0+cu121 CUDA:0 (Tesla T4, 15102MiB)
Fusing layers...
gelan-c summary: 467 layers, 25411731 parameters, 0 gradients, 102.5 GFLOPs
image 1/364 /content/Predict_jpg/img_1.jpg: 512x512 6 Tomatos, 34.6ms
image 2/364 /content/Predict_jpg/img_10.jpg: 512x512 (no detections), 34.6ms
image 3/364 /content/Predict_jpg/img_100.jpg: 512x512 111 Tomatos, 34.5ms
image 4/364 /content/Predict_jpg/img_101.jpg: 512x512 91 Tomatos, 34.6ms
image 5/364 /content/Predict_jpg/img_102.jpg: 512x512 100 Tomatos, 34.6ms
...
/content/Predict_jpg/img_329.jpg: 512x512 1 Tomato, 32.4ms image 257/364 /content/Predict_jpg/img_33.jpg: 512x512 103 Tomatos, 30.3ms image 258/364 /content/Predict_jpg/img_330.jpg: 512x512 21 Tomatos, 31.0ms image 259/364 /content/Predict_jpg/img_331.jpg: 512x512 63 Tomatos, 27.4ms image 260/364 /content/Predict_jpg/img_332.jpg: 512x512 57 Tomatos, 30.1ms image 261/364 /content/Predict_jpg/img_333.jpg: 512x512 36 Tomatos, 27.9ms image 262/364 /content/Predict_jpg/img_334.jpg: 512x512 84 Tomatos, 27.8ms image 263/364 /content/Predict_jpg/img_335.jpg: 512x512 95 Tomatos, 27.8ms image 264/364 /content/Predict_jpg/img_336.jpg: 512x512 97 Tomatos, 27.8ms image 265/364 /content/Predict_jpg/img_337.jpg: 512x512 70 Tomatos, 27.7ms image 266/364 /content/Predict_jpg/img_338.jpg: 512x512 94 Tomatos, 27.8ms image 267/364 /content/Predict_jpg/img_339.jpg: 512x512 94 Tomatos, 27.8ms image 268/364 /content/Predict_jpg/img_34.jpg: 512x512 103 Tomatos, 27.5ms image 269/364 /content/Predict_jpg/img_340.jpg: 512x512 67 Tomatos, 27.6ms image 270/364 /content/Predict_jpg/img_341.jpg: 512x512 89 Tomatos, 27.8ms image 271/364 /content/Predict_jpg/img_342.jpg: 512x512 89 Tomatos, 27.7ms image 272/364 /content/Predict_jpg/img_343.jpg: 512x512 101 Tomatos, 27.8ms image 273/364 /content/Predict_jpg/img_344.jpg: 512x512 70 Tomatos, 27.8ms image 274/364 /content/Predict_jpg/img_345.jpg: 512x512 88 Tomatos, 27.8ms image 275/364 /content/Predict_jpg/img_346.jpg: 512x512 94 Tomatos, 27.8ms image 276/364 /content/Predict_jpg/img_347.jpg: 512x512 12 Tomatos, 27.7ms image 277/364 /content/Predict_jpg/img_348.jpg: 512x512 5 Tomatos, 27.7ms image 278/364 /content/Predict_jpg/img_349.jpg: 512x512 32 Tomatos, 27.8ms image 279/364 /content/Predict_jpg/img_35.jpg: 512x512 102 Tomatos, 24.3ms image 280/364 /content/Predict_jpg/img_350.jpg: 512x512 55 Tomatos, 24.4ms image 281/364 /content/Predict_jpg/img_351.jpg: 512x512 34 Tomatos, 24.4ms image 282/364 /content/Predict_jpg/img_352.jpg: 512x512 69 
Tomatos, 24.3ms image 283/364 /content/Predict_jpg/img_353.jpg: 512x512 70 Tomatos, 24.3ms image 284/364 /content/Predict_jpg/img_354.jpg: 512x512 67 Tomatos, 24.4ms image 285/364 /content/Predict_jpg/img_355.jpg: 512x512 51 Tomatos, 24.4ms image 286/364 /content/Predict_jpg/img_356.jpg: 512x512 63 Tomatos, 24.5ms image 287/364 /content/Predict_jpg/img_357.jpg: 512x512 67 Tomatos, 24.4ms image 288/364 /content/Predict_jpg/img_358.jpg: 512x512 47 Tomatos, 24.5ms image 289/364 /content/Predict_jpg/img_359.jpg: 512x512 57 Tomatos, 24.3ms image 290/364 /content/Predict_jpg/img_36.jpg: 512x512 104 Tomatos, 24.5ms image 291/364 /content/Predict_jpg/img_360.jpg: 512x512 57 Tomatos, 24.6ms image 292/364 /content/Predict_jpg/img_361.jpg: 512x512 59 Tomatos, 24.4ms image 293/364 /content/Predict_jpg/img_362.jpg: 512x512 51 Tomatos, 24.2ms image 294/364 /content/Predict_jpg/img_363.jpg: 512x512 73 Tomatos, 24.4ms image 295/364 /content/Predict_jpg/img_364.jpg: 512x512 14 Tomatos, 24.4ms image 296/364 /content/Predict_jpg/img_37.jpg: 512x512 110 Tomatos, 24.4ms image 297/364 /content/Predict_jpg/img_38.jpg: 512x512 71 Tomatos, 24.4ms image 298/364 /content/Predict_jpg/img_39.jpg: 512x512 106 Tomatos, 24.4ms image 299/364 /content/Predict_jpg/img_4.jpg: 512x512 4 Tomatos, 24.4ms image 300/364 /content/Predict_jpg/img_40.jpg: 512x512 103 Tomatos, 24.4ms image 301/364 /content/Predict_jpg/img_41.jpg: 512x512 103 Tomatos, 24.4ms image 302/364 /content/Predict_jpg/img_42.jpg: 512x512 106 Tomatos, 24.2ms image 303/364 /content/Predict_jpg/img_43.jpg: 512x512 70 Tomatos, 25.4ms image 304/364 /content/Predict_jpg/img_44.jpg: 512x512 99 Tomatos, 25.3ms image 305/364 /content/Predict_jpg/img_45.jpg: 512x512 100 Tomatos, 26.1ms image 306/364 /content/Predict_jpg/img_46.jpg: 512x512 98 Tomatos, 26.1ms image 307/364 /content/Predict_jpg/img_47.jpg: 512x512 95 Tomatos, 26.4ms image 308/364 /content/Predict_jpg/img_48.jpg: 512x512 17 Tomatos, 26.5ms image 309/364 
/content/Predict_jpg/img_49.jpg: 512x512 (no detections), 26.6ms image 310/364 /content/Predict_jpg/img_5.jpg: 512x512 3 Tomatos, 26.1ms image 311/364 /content/Predict_jpg/img_50.jpg: 512x512 (no detections), 25.8ms image 312/364 /content/Predict_jpg/img_51.jpg: 512x512 8 Tomatos, 26.2ms image 313/364 /content/Predict_jpg/img_52.jpg: 512x512 105 Tomatos, 26.1ms image 314/364 /content/Predict_jpg/img_53.jpg: 512x512 101 Tomatos, 21.2ms image 315/364 /content/Predict_jpg/img_54.jpg: 512x512 111 Tomatos, 20.7ms image 316/364 /content/Predict_jpg/img_55.jpg: 512x512 104 Tomatos, 20.4ms image 317/364 /content/Predict_jpg/img_56.jpg: 512x512 102 Tomatos, 20.5ms image 318/364 /content/Predict_jpg/img_57.jpg: 512x512 77 Tomatos, 20.4ms image 319/364 /content/Predict_jpg/img_58.jpg: 512x512 113 Tomatos, 20.0ms image 320/364 /content/Predict_jpg/img_59.jpg: 512x512 103 Tomatos, 20.4ms image 321/364 /content/Predict_jpg/img_6.jpg: 512x512 5 Tomatos, 20.4ms image 322/364 /content/Predict_jpg/img_60.jpg: 512x512 106 Tomatos, 20.4ms image 323/364 /content/Predict_jpg/img_61.jpg: 512x512 101 Tomatos, 20.4ms image 324/364 /content/Predict_jpg/img_62.jpg: 512x512 80 Tomatos, 20.4ms image 325/364 /content/Predict_jpg/img_63.jpg: 512x512 102 Tomatos, 20.4ms image 326/364 /content/Predict_jpg/img_64.jpg: 512x512 99 Tomatos, 20.4ms image 327/364 /content/Predict_jpg/img_65.jpg: 512x512 105 Tomatos, 20.4ms image 328/364 /content/Predict_jpg/img_66.jpg: 512x512 100 Tomatos, 20.3ms image 329/364 /content/Predict_jpg/img_67.jpg: 512x512 65 Tomatos, 20.4ms image 330/364 /content/Predict_jpg/img_68.jpg: 512x512 (no detections), 20.4ms image 331/364 /content/Predict_jpg/img_69.jpg: 512x512 (no detections), 20.4ms image 332/364 /content/Predict_jpg/img_7.jpg: 512x512 1 Tomato, 20.5ms image 333/364 /content/Predict_jpg/img_70.jpg: 512x512 (no detections), 20.4ms image 334/364 /content/Predict_jpg/img_71.jpg: 512x512 24 Tomatos, 22.4ms image 335/364 /content/Predict_jpg/img_72.jpg: 512x512 106 
Tomatos, 20.3ms image 336/364 /content/Predict_jpg/img_73.jpg: 512x512 107 Tomatos, 15.8ms image 337/364 /content/Predict_jpg/img_74.jpg: 512x512 111 Tomatos, 15.6ms image 338/364 /content/Predict_jpg/img_75.jpg: 512x512 102 Tomatos, 18.8ms image 339/364 /content/Predict_jpg/img_76.jpg: 512x512 109 Tomatos, 15.7ms image 340/364 /content/Predict_jpg/img_77.jpg: 512x512 84 Tomatos, 15.8ms image 341/364 /content/Predict_jpg/img_78.jpg: 512x512 106 Tomatos, 15.7ms image 342/364 /content/Predict_jpg/img_79.jpg: 512x512 105 Tomatos, 15.7ms image 343/364 /content/Predict_jpg/img_8.jpg: 512x512 1 Tomato, 15.9ms image 344/364 /content/Predict_jpg/img_80.jpg: 512x512 108 Tomatos, 15.9ms image 345/364 /content/Predict_jpg/img_81.jpg: 512x512 93 Tomatos, 15.7ms image 346/364 /content/Predict_jpg/img_82.jpg: 512x512 87 Tomatos, 15.8ms image 347/364 /content/Predict_jpg/img_83.jpg: 512x512 111 Tomatos, 15.8ms image 348/364 /content/Predict_jpg/img_84.jpg: 512x512 115 Tomatos, 15.7ms image 349/364 /content/Predict_jpg/img_85.jpg: 512x512 122 Tomatos, 15.8ms image 350/364 /content/Predict_jpg/img_86.jpg: 512x512 111 Tomatos, 15.9ms image 351/364 /content/Predict_jpg/img_87.jpg: 512x512 83 Tomatos, 15.8ms image 352/364 /content/Predict_jpg/img_88.jpg: 512x512 27 Tomatos, 15.8ms image 353/364 /content/Predict_jpg/img_89.jpg: 512x512 (no detections), 16.2ms image 354/364 /content/Predict_jpg/img_9.jpg: 512x512 (no detections), 16.3ms image 355/364 /content/Predict_jpg/img_90.jpg: 512x512 (no detections), 17.2ms image 356/364 /content/Predict_jpg/img_91.jpg: 512x512 22 Tomatos, 15.9ms image 357/364 /content/Predict_jpg/img_92.jpg: 512x512 100 Tomatos, 16.5ms image 358/364 /content/Predict_jpg/img_93.jpg: 512x512 110 Tomatos, 15.8ms image 359/364 /content/Predict_jpg/img_94.jpg: 512x512 101 Tomatos, 15.8ms image 360/364 /content/Predict_jpg/img_95.jpg: 512x512 108 Tomatos, 15.8ms image 361/364 /content/Predict_jpg/img_96.jpg: 512x512 97 Tomatos, 25.9ms image 362/364 
/content/Predict_jpg/img_97.jpg: 512x512 100 Tomatos, 15.6ms image 363/364 /content/Predict_jpg/img_98.jpg: 512x512 108 Tomatos, 18.1ms image 364/364 /content/Predict_jpg/img_99.jpg: 512x512 104 Tomatos, 15.7ms Speed: 0.4ms pre-process, 25.5ms inference, 2.4ms NMS per image at shape (1, 3, 512, 512) Results saved to runs/detect/exp3 328 labels saved to runs/detect/exp3/labels
Some results:
import glob
from IPython.display import Image, display

# Display the first five annotated prediction images
for images in glob.glob('/content/yolov9/runs/detect/exp3/*.jpg')[0:5]:
    display(Image(filename=images))
We list all the .txt label files and, with the help of rasterio, convert the normalized x and y box centers from pixel positions into coordinates in the raster's coordinate reference system (here UTM, EPSG:32723):
import os

import numpy as np
import pandas as pd
import rasterio

ls_x = []
ls_y = []
imgs_to_pred = [f for f in os.listdir('/content/yolov9/runs/detect/exp3/labels/') if f.endswith('.txt')]
for images in imgs_to_pred:
    filename = images.split('.')[0]
    # Open the original GeoTIFF tile to get its affine transform
    src = rasterio.open('/content/Predict/' + filename + '.tif')
    path = '/content/yolov9/runs/detect/exp3/labels/' + filename + '.txt'
    # YOLO label format: class, x-center, y-center, width, height (all normalized)
    cols = ['class', 'x-center', 'y-center', 'x_extend', 'y_extend']
    df = pd.read_csv(path, sep=' ', header=None)
    df.columns = cols
    # Scale the normalized box centers back to pixel positions in the 512x512 tile
    df['x-center'] = np.round(df['x-center'] * 512)
    df['y-center'] = np.round(df['y-center'] * 512)
    for i, row in df.iterrows():
        # rasterio.transform.xy maps (row, col) pixel indices to CRS coordinates
        xs, ys = rasterio.transform.xy(src.transform, row['y-center'], row['x-center'])
        ls_x.append(xs)
        ls_y.append(ys)

df_xy = pd.DataFrame([])
df_xy['x'] = ls_x
df_xy['y'] = ls_y
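The pixel-to-coordinate step above boils down to a single affine mapping. A minimal pure-Python sketch of the arithmetic rasterio performs (the transform values below are hypothetical: a 0.05 m/pixel tile with its upper-left corner at easting 500000, northing 7800000):

```python
def pixel_to_world(transform, row, col):
    """Map a pixel (row, col) to world coordinates via an affine transform
    (a, b, c, d, e, f): x = a*col + b*row + c, y = d*col + e*row + f.
    Uses the pixel center, matching rasterio's default offset='center'."""
    a, b, c, d, e, f = transform
    x = a * (col + 0.5) + b * (row + 0.5) + c
    y = d * (col + 0.5) + e * (row + 0.5) + f
    return x, y

# Hypothetical 0.05 m/pixel tile, upper-left corner at (500000, 7800000)
t = (0.05, 0.0, 500000.0, 0.0, -0.05, 7800000.0)
print(pixel_to_world(t, 256, 256))  # center pixel -> roughly (500012.825, 7799987.175)
```

Note the negative e term: image rows grow downward while northings grow upward, which is why `rasterio.transform.xy` takes the row index (our `y-center`) first.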
We generate a GeoDataFrame, plot the detections on the source image, and save them as a .json (GeoJSON) file:
import geopandas as gpd
import matplotlib.pyplot as plt
from rasterio.plot import show

gdf = gpd.GeoDataFrame(df_xy, geometry=gpd.points_from_xy(df_xy['x'], df_xy['y']))
src_img.crs
CRS.from_epsg(32723)
# Assign the raster's CRS to the GeoDataFrame
gdf = gdf.set_crs(src_img.crs)
# Plot the detected tomato positions on top of the source image
fig, ax = plt.subplots(figsize=(20, 20))
show(src_img, ax=ax)
gdf.plot(ax=ax, marker='o', color='red', markersize=15)
gdf.to_file('/content/tomato.json', driver="GeoJSON")
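The saved file is plain GeoJSON, so any GeoJSON-aware tool (QGIS, Leaflet, and so on) can consume it directly. A minimal sketch of its structure, with one hypothetical Point feature standing in for a detected tomato:

```python
import json

# Shape of the FeatureCollection that gdf.to_file(..., driver="GeoJSON") writes:
# one Point feature per detection, coordinates in the raster's CRS (EPSG:32723).
feature_collection = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "properties": {"x": 500012.825, "y": 7799987.175},
            "geometry": {"type": "Point", "coordinates": [500012.825, 7799987.175]},
        }
    ],
}

# Round-trip through JSON text and pull the point coordinates back out
loaded = json.loads(json.dumps(feature_collection))
points = [f["geometry"]["coordinates"] for f in loaded["features"]]
print(points)  # -> [[500012.825, 7799987.175]]
```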