Meta Segment Anything Model 3

https://news.ycombinator.com/rss Hits: 5
Summary

Takeaways:

- We’re announcing Meta Segment Anything Model 3 (SAM 3), a unified model for detection, segmentation, and tracking of objects in images and video using text, exemplar, and visual prompts.
- As part of this release, we’re sharing SAM 3 model checkpoints, evaluation datasets, and fine-tuning code.
- We’re also introducing Segment Anything Playground, a new platform that makes it easy for anyone to understand the capabilities of SAM and experiment with cutting-edge AI models for creative media modification.
- In Edits, Instagram’s video creation app, SAM 3 will soon enable new effects that creators can apply to specific people or objects in their videos. New creation experiences enabled by SAM 3 will also be coming to Vibes on the Meta AI app and meta.ai on the web.
- Separately, we’re sharing SAM 3D, a suite of open source models, code, and data for 3D object and human reconstruction from a single image, setting a new standard for grounded 3D reconstruction in physical world scenarios. Learn more by reading the SAM 3D blog post.
- SAM 3 and SAM 3D are powering Facebook Marketplace’s new View in Room feature, helping people visualize the style and fit of home decor items, like a lamp or a table, in their spaces before purchasing.
- Together with our partners at Conservation X Labs and Osa Conservation, we’re also launching a first-of-its-kind, publicly available video dataset for wildlife monitoring using SAM 3.

We’re unveiling the next generation of the Segment Anything collection of models, advancing image and video understanding. Segment Anything Model 3 (SAM 3) introduces some of our most highly requested features, like text and exemplar prompts, enabling detection, segmentation, and tracking of any visual concept across images and video. We also want to make it easier for more people to use our models. As part of this release, we’re debuting the Segment Anything Playground, the simplest way for anyone to experiment with applying our state-of-the-art models to media modification…
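For readers who want a feel for what text-prompted segmentation with the released checkpoints might look like in practice, here is a minimal sketch. It is an illustration only: the sam3 package and the build_sam3, Sam3Predictor, and predict names are hypothetical placeholders rather than the published API, and the checkpoint and image filenames are invented; consult the official release for the real entry points.

    # Hedged sketch of text-prompted image segmentation with SAM 3.
    # ASSUMPTIONS: the `sam3` module, `build_sam3`, `Sam3Predictor`, and the
    # result fields below are hypothetical stand-ins for whatever the
    # released fine-tuning/inference code actually exposes.
    from PIL import Image

    from sam3 import build_sam3, Sam3Predictor  # hypothetical imports

    model = build_sam3(checkpoint="sam3_base.pt")  # hypothetical checkpoint
    predictor = Sam3Predictor(model)

    image = Image.open("living_room.jpg")

    # A short noun-phrase text prompt; the announcement describes SAM 3 as
    # finding every matching instance, not just a single mask.
    results = predictor.predict(image, text="yellow lamp")

    for obj in results:
        # Each detection is assumed to carry an instance mask and a score.
        print(obj.score, obj.mask.shape)

The same prompting interface is described as accepting exemplar prompts (an example box or mask of the target concept) and classic visual prompts, so a real client would presumably swap the text argument for those inputs.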

First seen: 2025-11-25 13:25

Last seen: 2025-11-25 17:26