Ultralytics

Ultralytics is the home for cutting-edge, state-of-the-art computer vision models for tasks like image classification, object detection, image segmentation, and pose estimation. Not only does it host YOLOv8, the latest iteration in the YOLO series of real-time object detection models, but also other powerful computer vision models such as SAM (Segment Anything Model), RT-DETR, and YOLO-NAS. Besides providing implementations of these models, Ultralytics also provides out-of-the-box workflows for training, fine-tuning, and applying them using an easy-to-use API.

Get started

  1. Install ultralytics and wandb.

    ```shell
    pip install --upgrade ultralytics==8.0.238 wandb

    # or
    # conda install -c conda-forge ultralytics
    ```

    In a notebook, prefix the command with `!`:

    ```bash
    !pip install --upgrade ultralytics==8.0.238 wandb
    ```
    

    The development team has tested the integration with `ultralytics` v8.0.238 and below. To report any issues with the integration, create a GitHub issue with the tag `yolov8`.
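Since the integration is tested only up to a specific release, it can be worth checking the installed version before relying on it. The sketch below is purely illustrative (the `parse_version` and `is_tested` helpers are not part of the wandb or ultralytics APIs); it shows one simple way to gate on the last tested release:

```python
# Illustrative version-check sketch (not part of the official integration):
# flag ultralytics releases newer than the last tested one, 8.0.238.
TESTED_MAX = (8, 0, 238)

def parse_version(version_string: str) -> tuple:
    """Split a dotted release string like '8.0.238' into a tuple of ints."""
    return tuple(int(part) for part in version_string.split("."))

def is_tested(installed: str) -> bool:
    """Return True if the installed release is at or below the tested one."""
    return parse_version(installed) <= TESTED_MAX

print(is_tested("8.0.238"))  # True: exactly the tested release
print(is_tested("8.1.0"))    # False: newer than the tested release
```

In practice you could feed this the result of `importlib.metadata.version("ultralytics")` and print a warning when it returns `False`.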

Track experiments and visualize validation results

This section demonstrates a typical workflow of using an Ultralytics model for training, fine-tuning, and validation, with W&B handling experiment tracking, model checkpointing, and visualization of the model’s performance.

You can also learn more about the integration in this report: Supercharging Ultralytics with W&B

To use the W&B integration with Ultralytics, import the wandb.integration.ultralytics.add_wandb_callback function.

```python
import wandb
from wandb.integration.ultralytics import add_wandb_callback

from ultralytics import YOLO
```

Initialize the YOLO model of your choice, and invoke the add_wandb_callback function on it before using the model. This ensures that when you perform training, fine-tuning, validation, or inference, the integration automatically saves the experiment logs and the images, overlaid with both the ground truth and the respective prediction results using the interactive overlays for computer vision tasks on W&B, along with additional insights in a wandb.Table.

```python
# Initialize YOLO Model
model = YOLO("yolov8n.pt")

# Add W&B callback for Ultralytics
add_wandb_callback(model, enable_model_checkpointing=True)

# Train/fine-tune your model
# At the end of each epoch, predictions on validation batches are logged
# to a W&B table with insightful and interactive overlays for
# computer vision tasks
model.train(project="ultralytics", data="coco128.yaml", epochs=5, imgsz=640)

# Finish the W&B run
wandb.finish()
```

Here’s what experiments tracked using W&B for an Ultralytics training or fine-tuning workflow look like:

YOLO Fine-tuning Experiments

Here’s how epoch-wise validation results are visualized using a W&B Table:

WandB Validation Visualization Table

Visualize prediction results

This section demonstrates a typical workflow of using an Ultralytics model for inference and visualizing the results using W&B.

You can try out the code in Google Colab: Open in Colab.

You can also learn more about the integration in this report: Supercharging Ultralytics with W&B

To use the W&B integration with Ultralytics, import the wandb.integration.ultralytics.add_wandb_callback function.

```python
import wandb
from wandb.integration.ultralytics import add_wandb_callback

from ultralytics import YOLO
```

Download a few images to test the integration on. You can use still images, videos, or camera sources. For more information on inference sources, check out the Ultralytics docs.

```bash
!wget https://raw.githubusercontent.com/wandb/examples/ultralytics/colabs/ultralytics/assets/img1.png
!wget https://raw.githubusercontent.com/wandb/examples/ultralytics/colabs/ultralytics/assets/img2.png
!wget https://raw.githubusercontent.com/wandb/examples/ultralytics/colabs/ultralytics/assets/img4.png
!wget https://raw.githubusercontent.com/wandb/examples/ultralytics/colabs/ultralytics/assets/img5.png
```

Next, initialize a W&B run using wandb.init.

```python
# Initialize W&B run
wandb.init(project="ultralytics", job_type="inference")
```

Next, initialize your desired YOLO model and invoke the add_wandb_callback function on it before you perform inference with the model. This ensures that when you perform inference, it automatically logs the images overlaid with your interactive overlays for computer vision tasks along with additional insights in a wandb.Table.

```python
# Initialize YOLO Model
model = YOLO("yolov8n.pt")

# Add W&B callback for Ultralytics
add_wandb_callback(model, enable_model_checkpointing=True)

# Perform prediction on the downloaded images, which automatically logs
# to a W&B Table with interactive overlays for bounding boxes and
# segmentation masks
model(
    [
        "./img1.png",
        "./img2.png",
        "./img4.png",
        "./img5.png",
    ]
)

# Finish the W&B run
wandb.finish()
```

You do not need to explicitly initialize a run using wandb.init() in a training or fine-tuning workflow. However, if your code involves only prediction, you must explicitly create a run.

Here’s how the interactive bounding box overlay looks:

WandB Image Overlay

You can find more information on W&B image overlays here.
