Mapbox Vision SDK

Turn your connected camera into the smartest co-pilot around. The Vision SDK uses AI and AR to understand and illuminate the driver’s environment from both a street-level perspective and a bird’s-eye view. Do more than guide. Define the journey with the Mapbox Vision SDK.
available on:
iOS
Android
Linux

Build a smarter drive without draining your data plan or your wallet

The Vision SDK’s neural networks are lightweight enough to run on today’s devices, yet powerful enough to make sense of every pixel the connected camera captures. Mapbox Vision interprets pixels as data points and uses that information to build a real-time understanding of the road. Now, developers can deliver precise navigation guidance, display safety warnings at the right time, and run custom workflows easily.

Lane Detection

Detect the vehicle’s current lane to enable lane-level navigation

Object Detection

Detect and track nearby vehicles, pedestrians, signs, and traffic lights

Safety Alerts

Create custom alerts for speeding, lane departures, tailgating, and more

Sign Detection

Recognize road signs for speed limits, construction, turn restrictions, and more

Scene Segmentation

Expose clearly distinguished lanes, curbs, sidewalks, buildings, and more

Augmented Reality

Build heads-up navigation with turn-by-turn directions and custom objects

Use Cases

Add Visual Context to Navigation

The Vision SDK enables more intuitive navigation experiences with augmented reality. As the driver approaches the destination, for example, developers can overlay additional POI details or parking tips.

Lane Level Navigation

Lane detection allows smarter navigation instructions with augmented reality or audio cues. Users can be alerted when a lane change is needed to complete a maneuver.
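
As an illustration of how a lane-change cue might be wired up, here is a minimal Swift sketch; the types, lane indexing, and the 400 m prompt distance are assumptions for this example, not part of the Vision SDK API.

```swift
import Foundation

// Illustrative types; the real SDK exposes its own lane and route models.
struct LaneState {
    let currentLaneIndex: Int   // 0-based, counted from the rightmost lane
    let laneCount: Int
}

struct UpcomingManeuver {
    let requiredLaneIndex: Int  // lane needed to complete the maneuver
    let distanceMeters: Double  // distance to the maneuver point
}

/// Returns a prompt when the driver is not yet in the lane the maneuver requires
/// and the maneuver is close enough that a lane change should start now.
func laneChangePrompt(lane: LaneState,
                      maneuver: UpcomingManeuver,
                      promptWithinMeters: Double = 400) -> String? {
    guard maneuver.distanceMeters <= promptWithinMeters,
          lane.currentLaneIndex != maneuver.requiredLaneIndex else { return nil }
    let direction = maneuver.requiredLaneIndex > lane.currentLaneIndex ? "left" : "right"
    return "Move one lane to the \(direction) for the upcoming maneuver"
}

// Example: driver is in the rightmost of three lanes; the maneuver needs lane 2, 350 m ahead.
if let prompt = laneChangePrompt(lane: LaneState(currentLaneIndex: 0, laneCount: 3),
                                 maneuver: UpcomingManeuver(requiredLaneIndex: 2, distanceMeters: 350)) {
    print(prompt)  // surface as an AR cue or audio instruction
}
```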

Driver Alerts

The Vision SDK detects pedestrians and cyclists in the path of your vehicle. Developers can set custom thresholds for alerting drivers with visual or audio cues in these cases.

Monitor Driving Conditions

Developers can use the Vision SDK’s sign and object detection to keep drivers informed about changing road conditions, such as construction and lane closures.

Fleet Safety

Customize safety alerts and monitor driver performance of your fleet. The Vision SDK recognizes risky behaviors such as speeding, tailgating, or running stop signs, and enables triggered actions such as video capture when specified events are detected.
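
A minimal sketch of how event-triggered actions such as video capture could be layered on top of the SDK's detections; the event model, thresholds, and recorder interface here are assumptions, not Vision SDK types.

```swift
import Foundation

// Illustrative event model and recorder interface for fleet-safety triggers;
// not part of the Vision SDK API.
enum SafetyEvent {
    case speeding(currentKph: Double, limitKph: Double)
    case tailgating(headwaySeconds: Double)
    case ranStopSign
}

protocol IncidentRecorder {
    func captureClip(label: String, durationSeconds: Int)
}

/// Decides whether a detected event should trigger a video capture,
/// using thresholds the fleet operator can tune.
func handle(_ event: SafetyEvent, recorder: IncidentRecorder) {
    switch event {
    case let .speeding(current, limit) where current > limit + 10:
        recorder.captureClip(label: "speeding", durationSeconds: 20)
    case let .tailgating(headway) where headway < 1.0:
        recorder.captureClip(label: "tailgating", durationSeconds: 20)
    case .ranStopSign:
        recorder.captureClip(label: "stop-sign-violation", durationSeconds: 20)
    default:
        break  // below threshold; log for aggregated performance reports instead
    }
}
```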

Enhanced Fleet Management

Keep your drivers safe and operating costs low with aggregated driving performance data and automatic incident reports with supporting images or video.

How it works

The Vision SDK provides developers with cutting-edge AI and AR tools to build better driving experiences. It’s smart enough to understand the road, yet lean enough to run on devices that billions of drivers use every day.

Understand the driver’s environment

The Vision SDK uses AI-powered semantic segmentation, object detection, and classification to identify the variables that define a driver’s journey. Detect construction, recognize street signs and speed limits, and identify potential hazards to ensure a safer, more informed drive.

Alert in real-time

Track and assess environmental variables to guide the driver. The Vision SDK detects nearby cars, pedestrians, and traffic lights, and lets developers determine the appropriate alert based on the driver’s distance from that object, current speed, and driving behavior.
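
For example, a simple time-to-reach heuristic can decide when an alert is warranted; the types and the 2-second threshold below are illustrative assumptions, not values from the SDK.

```swift
import Foundation

// Illustrative collision-warning heuristic; the types and thresholds are
// assumptions, not values shipped with the Vision SDK.
struct TrackedObject {
    let kind: String          // "car", "pedestrian", "bicycle", ...
    let distanceMeters: Double
}

/// Warn when the tracked object would be reached in under `warningSeconds`
/// at the current speed.
func shouldWarn(about object: TrackedObject,
                speedMetersPerSecond: Double,
                warningSeconds: Double = 2.0) -> Bool {
    guard speedMetersPerSecond > 0 else { return false }
    return object.distanceMeters / speedMetersPerSecond < warningSeconds
}

// Example: a pedestrian 18 m ahead while driving at 50 km/h (about 13.9 m/s).
let pedestrian = TrackedObject(kind: "pedestrian", distanceMeters: 18)
if shouldWarn(about: pedestrian, speedMetersPerSecond: 50 / 3.6) {
    print("Pedestrian ahead: slow down")  // visual or audio cue
}
```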

Define the journey with the Mapbox platform

Mapbox Vision SDK delivers a 3D view of where the driver is, and works in tandem with the Navigation SDK to project the route ahead. Shift map perspective, mark important landmarks, or provide lane-level routing using Vision SDK’s AR power.

Customer Stories

“At Driver, we’re bringing the best in new car user experience and safety to drivers and fleets via a hardware-free mobile app. The Vision SDK provides an awesome set of tools for delivering both.”
- Rashid Galadanci, CEO, Driver
“The Raven combines navigation, vehicle sensor data and diagnostics, and a smart camera to create an always-on solution for commuters and fleets. We’re using the Vision SDK to build new capabilities on top of our IoT platform.”
- Dan Carruthers, Co-founder, Klashwerks
“At Cogneo Labs, we’re enriching the way humans interact with the world with AI, AR, and VR. The Mapbox Vision SDK makes it easy for us to use computer vision to create better experiences for drivers.”
- James Osborne, CEO, Cogneo Labs

Documentation

Preview documentation is available for the iOS and Android SDKs.

Frequently Asked Questions

General

What is the Mapbox Vision SDK?

The Mapbox Vision SDK is a tool developers use to build a better driving experience. The SDK processes images captured by connected cameras on mobile phones, dash cameras, or embedded navigation systems. The SDK uses this data to understand the driver’s environment and deliver driver assistance and AR navigation features.

What can I do with the Vision SDK?

The Vision SDK lets developers see the view from the driver’s seat. The SDK pairs augmented reality and artificial intelligence into one lightweight, multi-platform solution you can use to engineer a better driving experience. Build custom AR navigation experiences, classify and display regulatory and warning signs, trigger driver alerts for nearby vehicles, cyclists, and pedestrians, and more.

How does the Vision SDK tie into the rest of Mapbox’s products and services?

Mapbox’s live location platform incorporates dozens of different data sources to power our maps. Map data originates from sensors as far away as satellites and as close up as street-level imagery. Conventionally, collected imagery requires extensive processing before a map can be created or updated. The innovation of the Vision SDK is its ability to process live data from distributed sensors, keeping up with our rapidly changing world. Developers can use this capability to create richer, more immersive experiences with Mapbox maps, navigation, and search.

Using Vision

In which regions is the Vision SDK supported?

Semantic segmentation, object detection, and following distance detection will work on virtually any road. The core functionality of the augmented reality navigation with turn-by-turn directions is supported globally. AR navigation with live traffic is supported in over 50 countries, covering all of North America, most of Europe, Japan, South Korea, and several other markets. Sign classification is currently optimized for North America, with some limited support in other regions. Sign classification for additional regions is under development.

Can the Vision SDK read all road signs?

The latest version of the Vision SDK recognizes over 200 of the most common road signs, including speed limits (5–120 mph or km/h), regulatory signs (merges, turn restrictions, no passing, etc.), warning signs (traffic signal ahead, bicycle crossing, narrow road, etc.), and many others. The Vision SDK does not read individual letters or words on signs; instead, it learns to recognize each sign type holistically. As a result, it generally cannot interpret guide signs (e.g. “Mariposa St. Next Exit”). We’re exploring Optical Character Recognition (OCR) for a future release.
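
As a sketch of how an app might act on a classified speed-limit sign, here is an illustrative overspeed check; the sign type and comparison are assumptions, not the SDK's own model.

```swift
import Foundation

// Illustrative speed-limit sign model; not the SDK's own type.
struct SpeedLimitSign {
    let limit: Double
    let unit: String  // "mph" or "km/h"
}

/// Compares the most recently classified speed-limit sign against the current
/// speed and returns an overspeed warning when the driver exceeds it.
func overspeedWarning(sign: SpeedLimitSign, currentSpeed: Double) -> String? {
    guard currentSpeed > sign.limit else { return nil }
    return "Speed limit \(Int(sign.limit)) \(sign.unit), current speed \(Int(currentSpeed)) \(sign.unit)"
}

// Example: a 65 mph sign was classified and the vehicle is doing 72 mph.
if let warning = overspeedWarning(sign: SpeedLimitSign(limit: 65, unit: "mph"),
                                  currentSpeed: 72) {
    print(warning)
}
```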

What are the requirements for calibration?

AR navigation and Safety mode require calibration, which takes 20 to 30 seconds of normal driving. (Your device will not be able to calibrate without being mounted.) Because the Vision SDK is designed to work with an arbitrary mounting position, it needs this short calibration period when it’s initialized so it can accurately gauge the locations of other objects in the driving scene. Once calibration is complete, the Vision SDK will automatically adjust to vibrations and changes in orientation while driving.
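
One way an app could use this is to gate AR navigation and safety alerts on calibration progress; the 0 to 1 progress value and the feature flags in this sketch are assumptions for illustration, not the SDK's API.

```swift
import Foundation

// Illustrative gating of AR navigation and safety alerts on calibration progress.
struct VisionFeatureGate {
    private(set) var calibrationProgress: Double = 0  // 0 = uncalibrated, 1 = calibrated

    var arNavigationEnabled: Bool { calibrationProgress >= 1 }
    var safetyAlertsEnabled: Bool { calibrationProgress >= 1 }

    /// Call as calibration progress updates arrive (typically over the first
    /// 20 to 30 seconds of normal driving with the device mounted).
    mutating func update(progress: Double) {
        calibrationProgress = min(max(progress, 0), 1)
    }
}

var gate = VisionFeatureGate()
gate.update(progress: 0.4)
print(gate.arNavigationEnabled)  // false: keep showing a "calibrating" hint
gate.update(progress: 1.0)
print(gate.arNavigationEnabled)  // true: AR route projection can start
```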

What is the best way for users to mount their devices when using the Vision SDK?

The Vision SDK works best when your device is mounted either to the windshield or the dashboard of your vehicle with a good view of the road. Some things to consider when choosing and setting up a mount:
  • Generally, shorter mounts vibrate less; mounting to the windshield or directly to the dashboard are both options.
  • The Vision SDK works best when the phone is mounted near or behind your rear-view mirror, but note your local jurisdiction’s limits on where mounts may be placed.
  • Make sure the camera view is unobstructed (you can check this with any of the video screens open).

Can I use the Vision SDK with an external camera?

Yes. Beginning with public beta, developers will be able to connect Vision-enabled devices to remote cameras. Image frames from external cameras can be transmitted over WiFi or via a direct connection.
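
A rough sketch of relaying frames from a remote camera into whatever component runs detection; the frame and consumer types are placeholders, not the SDK's video-source interface.

```swift
import Foundation

// Illustrative frame-relay pipeline for an external camera; the frame and
// consumer types are placeholders, not the SDK's video-source interface.
struct VideoFrame {
    let pixelData: Data
    let timestamp: TimeInterval
}

protocol FrameConsumer {
    func process(_ frame: VideoFrame)
}

/// Receives encoded frames from a remote camera (e.g. over WiFi) and hands
/// them to whatever component runs detection and segmentation.
final class ExternalCameraRelay {
    private let consumer: FrameConsumer

    init(consumer: FrameConsumer) {
        self.consumer = consumer
    }

    /// Call from your networking layer whenever a new frame arrives.
    func onFrameReceived(_ data: Data) {
        let frame = VideoFrame(pixelData: data,
                               timestamp: Date().timeIntervalSince1970)
        consumer.process(frame)
    }
}
```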

Will the Vision SDK drain my battery?

The Vision SDK consumes CPU, GPU, and other resources to process road imagery on the fly. Just as with any other navigation or video application, we recommend keeping your phone plugged in if you are going to use it for extended periods of time.

Can I rely on the Vision SDK to make driving decisions?

No. The Vision SDK is designed to provide context to aid driving, but does not replace any part of the driving task. During beta, feature detection is still being tested, and may not detect all hazards.

Can I use the Vision SDK to make my car drive itself?

No. The Vision SDK can be used to issue safety alerts and provide augmented reality navigation instructions and other features, but does not make any driving decisions.

Will my device get hot if I run the Vision SDK for a long time?

Phones and other IoT devices will get warmer over time as the onboard AI consumes a decent amount of resources. However, we have not run into any heat issues with moderate-to-heavy use.

Will the Vision SDK work in countries that drive on the left?

Yes.

Does the Vision SDK work at night?

The Vision SDK works best under good lighting conditions. However, it does function at night, depending on how well the road is illuminated. In cities with ample street lighting, for example, the Vision SDK still performs quite well.

Does the Vision SDK work in the rain and/or snow?

Yes. However, just as with human eyes, the Vision SDK performs better the more clearly it can see. Certain features, such as lane detection, will not work when the road is covered with snow.

Tech

What is “classification”?

In computer vision, classification is the process by which an algorithm identifies the presence of a feature in an image. For example, the Vision SDK classifies whether there are certain road signs in a given image.

What is “detection”?

In computer vision, detection is similar to classification, except that instead of only identifying whether a given feature is present, a detection algorithm also identifies where in the image the feature occurs. For example, the Vision SDK detects vehicles in each image and indicates where it sees them with bounding boxes. The Vision SDK supports the following detection classes: cars (or trucks), bicycles/motorcycles, pedestrians, traffic lights, and traffic signs.
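
To make the idea concrete, here is an illustrative detection model and a filter over the classes listed above; the Swift types are assumptions for this sketch, not the SDK's own.

```swift
import CoreGraphics
import Foundation

// Illustrative detection model: a class label plus a bounding box in image
// coordinates, mirroring the class list above.
struct Detection {
    enum Kind { case car, bicycle, pedestrian, trafficLight, trafficSign }
    let kind: Kind
    let boundingBox: CGRect  // pixel coordinates in the source frame
}

/// Keep only the detection classes a given feature cares about, e.g. vulnerable
/// road users for a driver-alert overlay.
func vulnerableRoadUsers(in detections: [Detection]) -> [Detection] {
    detections.filter { $0.kind == .pedestrian || $0.kind == .bicycle }
}

let frameDetections = [
    Detection(kind: .car, boundingBox: CGRect(x: 120, y: 300, width: 220, height: 140)),
    Detection(kind: .pedestrian, boundingBox: CGRect(x: 480, y: 280, width: 60, height: 150))
]
print(vulnerableRoadUsers(in: frameDetections).count)  // 1
```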

What is “segmentation”?

In computer vision, segmentation is the process by which each pixel in an image is assigned to a category, or “class”. For example, the Vision SDK analyzes each frame of road imagery and paints the pixels different colors corresponding to the underlying class. The Vision SDK supports the following segmentation classes: buildings, cars (or trucks), curbs, roads, non-drivable flat surfaces (such as sidewalks), single lane markings, double lane markings, other road markings (such as crosswalks), bicycles/motorcycles, pedestrians, sky, and unknown.
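
A toy sketch of working with a per-pixel class mask, for example to measure how much of a frame is drivable road; the mask layout and class enum are assumptions, not the SDK's representation.

```swift
import Foundation

// Illustrative per-pixel class mask (row-major, one class index per pixel).
// The class list mirrors the one above; the layout is an assumption.
enum SegClass: UInt8 {
    case building, car, curb, road, sidewalk, laneMarking, doubleLaneMarking,
         otherRoadMarking, bicycle, pedestrian, sky, unknown
}

/// Fraction of the frame covered by a given class, e.g. how much of the image
/// is drivable road surface.
func coverage(of target: SegClass, in mask: [UInt8]) -> Double {
    guard !mask.isEmpty else { return 0 }
    let hits = mask.filter { $0 == target.rawValue }.count
    return Double(hits) / Double(mask.count)
}

// Example: a tiny 2x4 mask where half the pixels are road.
let mask: [UInt8] = [SegClass.sky.rawValue, SegClass.sky.rawValue,
                     SegClass.building.rawValue, SegClass.car.rawValue,
                     SegClass.road.rawValue, SegClass.road.rawValue,
                     SegClass.road.rawValue, SegClass.road.rawValue]
print(coverage(of: .road, in: mask))  // 0.5
```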

What is the difference between detection and segmentation?

Detection identifies discrete objects (e.g., individual vehicles) and draws a bounding box around each one that is found. The number of detections in an image changes from one image to the next, depending on what appears. Segmentation, on the other hand, goes pixel by pixel and assigns each pixel to a category. For a given segmentation model, the same number of pixels is classified and colored in every image. Features from segmentation can take any shape describable by a 2D pixel grid, while features from object detection are indicated with boxes defined by four corner pixels.

Where does calibration happen?

Calibration is handled in the VisionCore module. VisionCore uses camera, IMU, and GPS data to calibrate itself for the best performance of Vision features.

Data and Privacy

Do I need to use my data plan to utilize the Vision SDK?

In the standard configuration, the Vision SDK requires connectivity for initialization, road feature extraction, and augmented reality navigation (VisionAR). However, the neural networks used to run classification, detection, and segmentation all run on-device without needing cloud resources. For developers interested in running the Vision SDK offline, please contact us.

How much data does the Vision SDK use?

The Vision SDK beta uses a maximum of 30 MB of data per hour. For reference, this is less than half of what Apple Music uses on the lowest quality setting.

What type of road network data is Mapbox getting back from the Vision SDK?

The Vision SDK is sending back limited telemetry data from the device, including road feature detections and certain road imagery, at a data rate not to exceed 30 MB per hour. This data is being used to improve Mapbox products and services, including the Vision SDK. As with other Mapbox products and services, Mapbox only collects the limited personal information described in our privacy policy.

Ready to build?

All you need is a mapbox.com account.