Ultralytics
Ultralytics is the home for cutting-edge, state-of-the-art computer vision models for tasks like image classification, object detection, image segmentation, and pose estimation. Not only does it host YOLOv8, the latest iteration in the YOLO series of real-time object detection models, but also other powerful computer vision models such as SAM (Segment Anything Model), RT-DETR, and YOLO-NAS. Besides providing implementations of these models, Ultralytics offers out-of-the-box workflows for training, fine-tuning, and applying these models through an easy-to-use API.
Getting Started
First, we need to install ultralytics and wandb.
Command Line:

pip install --upgrade ultralytics==8.0.238 wandb
# or
# conda install ultralytics

Notebook:

!pip install --upgrade ultralytics==8.0.238 wandb
Note: the development team has tested the integration with ultralytics v8.0.238 and below. To report any issues with the integration, create a GitHub issue with the tag yolov8.
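After installation, authenticate with W&B so the integration can log your experiments. A minimal sketch; wandb.login() prompts for an API key if one is not already configured:

import wandb

# Log in to W&B (prompts for an API key on first use)
wandb.login()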
Experiment Tracking and Visualizing Validation Results
This section demonstrates a typical workflow of using an Ultralytics model for training, fine-tuning, and validation, and performing experiment tracking, model checkpointing, and visualization of the model's performance using W&B.
You can try out the code in Google Colab: Open In Colab
You can also read about the integration in this report: Supercharging Ultralytics with W&B
In order to use the W&B integration with Ultralytics, we need to import the wandb.integration.ultralytics.add_wandb_callback function.
import wandb
from wandb.integration.ultralytics import add_wandb_callback
from ultralytics import YOLO
Next, we initialize the YOLO model of our choice, and invoke the add_wandb_callback function on it before performing inference with the model. This ensures that when we perform training, fine-tuning, validation, or inference, the experiment logs and the images, overlaid with both ground-truth and the respective prediction results using the interactive overlays for computer vision tasks, are automatically logged to W&B, along with additional insights in a wandb.Table.
# Initialize YOLO Model
model = YOLO("yolov8n.pt")
# Add W&B callback for Ultralytics
add_wandb_callback(model, enable_model_checkpointing=True)
# Train/fine-tune your model
# At the end of each epoch, predictions on validation batches are logged
# to a W&B table with insightful and interactive overlays for
# computer vision tasks
model.train(project="ultralytics", data="coco128.yaml", epochs=5, imgsz=640)
# Finish the W&B run
wandb.finish()
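If you also want to run a standalone validation pass under the same run, call model.val() before wandb.finish(); with the callback attached, validation predictions are logged to a W&B Table as well. A minimal sketch, assuming the same coco128.yaml dataset:

# Validate the fine-tuned model (run this before wandb.finish())
metrics = model.val(data="coco128.yaml")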
Here's what experiments tracked using W&B for an Ultralytics training or fine-tuning workflow look like:
YOLO Fine-tuning Experiments
Here's how epoch-wise validation results are visualized using a W&B Table:
WandB Validation Visualization Table
Visualizing Prediction Results
This section demonstrates a typical workflow of using an Ultralytics model for inference and visualizing the results using W&B.
You can try out the code in Google Colab: Open in Colab.
You can also read about the integration in this report: Supercharging Ultralytics with W&B
In order to use the W&B integration with Ultralytics, we need to import the wandb.integration.ultralytics.add_wandb_callback function.
import wandb
from wandb.integration.ultralytics import add_wandb_callback
from ultralytics import YOLO
Now, let us download a few images to test the integration on. You can use your own images, videos, or camera sources. For more information on inference sources, check out the official docs.
!wget https://raw.githubusercontent.com/wandb/examples/ultralytics/colabs/ultralytics/assets/img1.png
!wget https://raw.githubusercontent.com/wandb/examples/ultralytics/colabs/ultralytics/assets/img2.png
!wget https://raw.githubusercontent.com/wandb/examples/ultralytics/colabs/ultralytics/assets/img4.png
!wget https://raw.githubusercontent.com/wandb/examples/ultralytics/colabs/ultralytics/assets/img5.png
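If you are running outside a notebook, where the ! shell syntax is unavailable, a minimal Python equivalent using only the standard library might look like this (same URLs as above):

import urllib.request

# Download the sample images used in this example into the working directory
base_url = "https://raw.githubusercontent.com/wandb/examples/ultralytics/colabs/ultralytics/assets"
for name in ["img1.png", "img2.png", "img4.png", "img5.png"]:
    urllib.request.urlretrieve(f"{base_url}/{name}", name)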
Next, we initialize a W&B run using wandb.init.
# Initialize W&B run
wandb.init(project="ultralytics", job_type="inference")
Next, we initialize the YOLO model of our choice, and invoke the add_wandb_callback function on it before performing inference with the model. This ensures that when we perform inference, the images, overlaid with our interactive overlays for computer vision tasks, are automatically logged, along with additional insights in a wandb.Table.
# Initialize YOLO Model
model = YOLO("yolov8n.pt")
# Add W&B callback for Ultralytics
add_wandb_callback(model, enable_model_checkpointing=True)
# Perform prediction which automatically logs to a W&B Table
# with interactive overlays for bounding boxes, segmentation masks
model(
    [
        "./img1.png",
        "./img2.png",
        "./img4.png",
        "./img5.png",
    ]
)
# Finish the W&B run
wandb.finish()
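Besides the automatic W&B logging, the call returns the standard Ultralytics Results objects, so you can also inspect the predictions in code. A minimal sketch of reading detections out of them:

# model(...) returns a list of ultralytics Results objects
results = model(["./img1.png", "./img2.png"])

for result in results:
    # Each detected box carries a class id and a confidence score
    for box in result.boxes:
        class_id = int(box.cls)
        print(result.names[class_id], float(box.conf))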
Note: We do not need to explicitly initialize a run using wandb.init() in the case of a training or fine-tuning workflow. However, it is necessary to explicitly create a run if the code only involves prediction.
Here's how the interactive bounding box overlay looks:
WandB Image Overlay
You can find more information on the W&B image overlays here.
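For reference, these overlays build on the bounding box format that wandb.Image accepts natively. A minimal, hand-rolled sketch of that format, with a hypothetical image file and made-up coordinates:

import wandb

run = wandb.init(project="ultralytics", job_type="overlay-demo")
run.log(
    {
        "overlay": wandb.Image(
            "./img1.png",
            boxes={
                "predictions": {
                    "box_data": [
                        {
                            # Coordinates as fractions of image width/height
                            "position": {"minX": 0.1, "minY": 0.2, "maxX": 0.5, "maxY": 0.8},
                            "class_id": 0,
                            "box_caption": "person",
                        }
                    ],
                    "class_labels": {0: "person"},
                }
            },
        )
    }
)
run.finish()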