Segmenting objects from timeseries images with SAM 2¶
This notebook shows how to segment objects from timeseries images with the Segment Anything Model 2 (SAM 2).
Make sure you use a GPU runtime for this notebook. For Google Colab, go to Runtime -> Change runtime type and select GPU as the hardware accelerator.
Install dependencies¶
Uncomment and run the following cell to install the required dependencies.
# %pip install -U segment-geospatial
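After installing the dependencies, you can optionally confirm that PyTorch (installed as a SAM 2 dependency) can see the GPU. This is just a sanity check and is not required by samgeo:
import torch

# Report whether a CUDA-capable GPU is visible; SAM 2 runs much faster on a GPU.
print("GPU available:", torch.cuda.is_available())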
Import libraries¶
import leafmap
from samgeo import SamGeo2
Download sample data¶
For now, SamGeo2 supports remote sensing data in the form of 8-bit RGB images. Make sure all images have the same width and height.
url = "https://github.com/opengeos/datasets/releases/download/raster/landsat_ts.zip"
leafmap.download_file(url)
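Since all frames must share the same size, you can optionally verify the downloaded images before running the model. The sketch below assumes the archive was extracted to a landsat_ts folder of GeoTIFFs (.tif), which is what the rest of this notebook uses:
import glob
import rasterio

# Print the size and data type of each image in the time series; they should all match.
for path in sorted(glob.glob("landsat_ts/*.tif")):
    with rasterio.open(path) as src:
        print(path, src.width, src.height, src.dtypes[0])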
Initialize the model¶
predictor = SamGeo2(
model_id="sam2-hiera-large",
video=True,
)
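The sam2-hiera-large checkpoint gives the best masks but needs the most GPU memory. If memory is tight, a smaller checkpoint can be used instead; the identifier below is an assumption that follows the same naming scheme, so check the samgeo documentation for the exact list of supported model IDs.
# A minimal sketch with a smaller checkpoint (identifier assumed; see the samgeo docs).
predictor = SamGeo2(
    model_id="sam2-hiera-small",
    video=True,
)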
Specify the input data¶
Point to the directory containing the images or the video file.
video_path = "landsat_ts"
predictor.set_video(video_path)
Specify the input prompts¶
The prompts can be points or boxes. Points are represented as a list of [x, y] coordinates with a matching list of labels, where 1 marks a foreground (positive) point and 0 marks a background (negative) point. Boxes are represented as a list of tuples, where each tuple contains the x, y, width, and height of the box.
predictor.show_images()
prompts = {
1: {
"points": [[1582, 933], [1287, 905], [1473, 998]],
"labels": [1, 1, 1],
"frame_idx": 0,
},
}
predictor.show_prompts(prompts, frame_idx=0)
Alternatively, prompts can be provided in lon/lat coordinates. The model will automatically convert the lon/lat coordinates to pixel coordinates when the point_crs parameter is set to the coordinate reference system of the lon/lat coordinates.
prompts = {
1: {
"points": [[-74.3713, -8.5218], [-74.2973, -8.5306], [-74.3230, -8.5495]],
"labels": [1, 1, 1],
"frame_idx": 0,
},
}
predictor.show_prompts(prompts, frame_idx=0, point_crs="EPSG:4326")
Segment the objects¶
predictor.predict_video()
Save results¶
To save the results as gray-scale GeoTIFFs with the same georeference as the input images:
predictor.save_video_segments("segments")
To save the results as blended images and MP4 video:
predictor.save_video_segments_blended(
"blended", fps=5, output_video="segments_blended.mp4"
)
Segment objects from a video¶
The same workflow works for regular videos. The example below downloads a short video of cars and segments two of them using point prompts on the first frame.
predictor = SamGeo2(
model_id="sam2-hiera-large",
video=True,
)
url = "https://github.com/opengeos/datasets/releases/download/videos/cars.mp4"
video_path = url
predictor.set_video(video_path)
predictor.show_images()
prompts = {
1: {
"points": [[335, 203]],
"labels": [1],
"frame_idx": 0,
},
2: {
"points": [[420, 201]],
"labels": [1],
"frame_idx": 0,
},
}
predictor.show_prompts(prompts, frame_idx=0)
predictor.predict_video(prompts)
predictor.save_video_segments_blended("cars", output_video="cars_blended.mp4", fps=25)