
Roboto Sandbox v0.1 Released

Our first sandbox with the nuScenes automotive dataset is live! This is a preview of core functionality that we've been building over the past few months. Take a look and let us know your feedback!
Written by
Yves Albers
April 6, 2023


Modern robotics systems use various sensors to perceive and navigate their surroundings. For example, a typical perception suite of an autonomous drone may include cameras, LiDARs, radars, navigation systems (GNSS), and inertial measurement units (IMUs). This raw sensor data is processed through complex algorithms to make informed decisions. Additionally, robots produce vast amounts of software logs to keep track of the system's internal state and reasoning. All this data gets saved in timestamped, multi-modal records during operation, enabling later analysis and debugging. However, the file formats produced by these systems are notoriously hard to work with, and the sheer amount of data quickly gets overwhelming. For example, the total bandwidth of an autonomous car can reach up to 5 GB/s.

We need better tools

A powerful data platform to efficiently ingest, search, transform, and analyze this data is critical to building safe and reliable autonomous systems. Empowering robotics engineers to rapidly debug system issues, identify edge cases, and develop new training and test datasets is essential. The need for these tools becomes particularly important as companies move from prototyping to production. 

The problem is that existing data platforms like Splunk or Datadog don't support multi-modal robotics data formats like Rosbags or PX4 logs. Robotics engineers, therefore, spend days writing data ingestion and aggregation scripts to get the data they need. These homegrown solutions are hard to maintain, expensive, and don't scale. Industry leaders like Waymo and Cruise can afford to hire large infrastructure teams, but smaller and mid-sized companies lack the resources to invest in scalable data exploration systems. 

At Roboto, we believe that building AI-enabled tools for roboticists is key to enabling the next wave of automation. We're excited to showcase some of the tech we've been working on in the form of a free sandbox accessible at

Multi-Modal Search

Robotics data is inherently multi-modal, meaning it comes from various sources. You often need to correlate and aggregate across these modalities to find interesting scenarios and events in your data. For example, to answer a question like "Find all images with vehicle speed above 30 km/h in a left turn with a person in view" you need a backend system that can associate images with other measurements such as vehicle speed, steering angle, and object detection algorithm results. This is not a simple task, given that these signals are produced at different frequencies and stored in nested message formats. Our data ingestion pipeline, by design, extracts and transforms your data in a way that unlocks advanced search and transformation use cases.
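The alignment problem described above can be sketched in a few lines of pandas: signals recorded at different rates are joined to each camera frame by taking the most recent sample of each signal at the frame's timestamp. The topic names, sample rates, and the "left turn = negative steering angle" convention are illustrative assumptions, not taken from nuScenes or Roboto's pipeline.

```python
# Sketch: align signals sampled at different rates so a per-image query
# like "speed > 30 km/h during a left turn" becomes a simple filter.
# Column names and the sign convention for left turns are assumptions.
import pandas as pd

# Camera frames at ~12 Hz, speed at ~100 Hz, steering at ~50 Hz (toy data).
images = pd.DataFrame({"t": [0.00, 0.08, 0.16], "frame": ["a.jpg", "b.jpg", "c.jpg"]})
speed = pd.DataFrame({"t": [0.00, 0.01, 0.02, 0.07, 0.15], "kmh": [28, 29, 31, 33, 35]})
steer = pd.DataFrame({"t": [0.00, 0.02, 0.06, 0.14], "angle_deg": [-2, -15, -22, -25]})

# merge_asof attaches, to each frame, the most recent sample of each signal.
df = pd.merge_asof(images, speed, on="t")
df = pd.merge_asof(df, steer, on="t")

# Left turn assumed to mean a strongly negative steering angle.
hits = df[(df["kmh"] > 30) & (df["angle_deg"] < -10)]
print(hits["frame"].tolist())
```

A production pipeline would add tolerances and interpolation, but the core idea of as-of joining differently sampled topics stays the same.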

Depending on the task, users within robotics organizations may look for different types of data entities. Here are some examples for various roles:

  • Computer Vision Engineer: wants images with people at a specific view angle and camera to evaluate a new deep learning model.
  • Thermal Engineer: wants battery plots from recent drives below a given temperature to verify performance in harsh conditions.
  • Systems Engineer: wants datasets with sudden lane changes to identify dangerous locations. 

Moreover, the most suitable search interface varies with the user's technical background and the nature of the task. Our sandbox showcases how you can pair what you're looking for with the search technology that best fits each use case.

Natural Language Search

This type of search offers the lowest barrier to entry. It enables anyone in an organization to search for data using standard language without knowing much about the underlying signals and data representation.

Searching for images with natural language in the Roboto Sandbox

Query Builder and SQL Search

Our query builder gives you finer-grained control over your search results by exposing a structured way to create powerful filtered queries across the available signals. You can also programmatically access this query type through an API to integrate search and aggregations into existing data pipelines. 
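As a rough illustration of what a structured, programmatic query might look like, here is a minimal filter-tree evaluator. The JSON shape, operator names, and fields are invented for this sketch; they are not Roboto's actual API schema.

```python
# Hypothetical structured-query payload and a toy evaluator for it.
# The schema ("and"/"or" nodes over {field, op, value} leaves) is an
# assumption for illustration, not Roboto's real query format.
OPS = {"gt": lambda a, b: a > b, "lt": lambda a, b: a < b, "eq": lambda a, b: a == b}

def matches(record, query):
    """Recursively evaluate a filter tree against one record."""
    if "and" in query:
        return all(matches(record, q) for q in query["and"])
    if "or" in query:
        return any(matches(record, q) for q in query["or"])
    field, op, value = query["field"], query["op"], query["value"]
    return OPS[op](record.get(field), value)

# "Images with vehicle speed above 30 km/h and a person in view."
query = {"and": [
    {"field": "speed_kmh", "op": "gt", "value": 30},
    {"field": "num_people", "op": "gt", "value": 0},
]}
print(matches({"speed_kmh": 42, "num_people": 2}, query))   # True
```

Expressing filters as data rather than code is what makes it possible to build the same query from a UI, an API call, or a pipeline step.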

Searching for plots with structured query builder in the Roboto Sandbox

Signal Search

Sometimes, there are interesting patterns in a time-series plot that simple thresholds or rules cannot easily describe. Our patent-pending signal search allows you to find similar patterns across multiple datasets and answer questions like: Have we seen this pattern before? For example, the following similarity search demonstrates how you can look for more "left turns" in your dataset based on the steering angle pattern.

Searching for plots with signal search in the Roboto Sandbox
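Roboto's signal search is patent-pending and its internals are not public, but the classic baseline for this kind of query is a sliding window over the signal, ranked by z-normalized Euclidean distance so that matches are invariant to offset and scale. The steering-angle values below are purely illustrative.

```python
# Baseline pattern search: slide the query window over a longer signal
# and rank windows by z-normalized Euclidean distance. This is the
# textbook approach, not Roboto's (unpublished) method.
import numpy as np

def znorm(x):
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def best_match(signal, query):
    """Return (start index, distance) of the window most similar to `query`."""
    m = len(query)
    q = znorm(np.asarray(query, dtype=float))
    best = (0, np.inf)
    for i in range(len(signal) - m + 1):
        w = znorm(np.asarray(signal[i:i + m], dtype=float))
        d = float(np.linalg.norm(w - q))
        if d < best[1]:
            best = (i, d)
    return best

# Steering-angle trace with two "left turn" dips (illustrative values).
trace = [0, 0, -5, -20, -25, -10, 0, 0, 1, -4, -22, -26, -12, 0]
query = [0, -5, -21, -24, -11, 0]     # one known left turn
print(best_match(trace, query))
```

The quadratic loop is fine for a demo; at scale, techniques such as the matrix profile compute the same distances far more efficiently.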

Image Search

As with our signal search, you may want to find similar-looking images or objects across other datasets. Instead of relying only on tags or annotations, our image search lets you select an area of an image and find the most visually similar objects in other images.

Searching for similar objects in the Roboto Sandbox
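A common way to implement this kind of region search is to embed the selected crop and every candidate region with the same vision model, then rank candidates by cosine similarity. The tiny vectors below are stand-ins for real embeddings, and this sketch is the general technique, not necessarily Roboto's implementation.

```python
# Region-based image search sketch: rank candidate regions by cosine
# similarity of their embeddings to the query crop's embedding.
# The 4-dim vectors are toy stand-ins for real model embeddings.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_regions(query_vec, region_vecs):
    """Return region indices sorted from most to least similar."""
    sims = [cosine(query_vec, v) for v in region_vecs]
    return sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)

query = np.array([0.9, 0.1, 0.0, 0.4])
regions = [
    np.array([0.88, 0.12, 0.05, 0.41]),  # visually close to the query
    np.array([0.0, 1.0, 0.9, 0.0]),      # unrelated
    np.array([0.7, 0.2, 0.1, 0.5]),      # somewhat similar
]
print(rank_regions(query, regions))
```

In practice the candidate embeddings would be precomputed and stored in a vector index so the ranking step is a nearest-neighbor lookup rather than a full scan.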


Dataset Viewer

Once you find an interesting data point, you can dive deeper into the underlying dataset to further analyze and debug using our dataset viewer. Everything is time-synchronized, and a global timeline and video player let you smoothly move back and forth in time.

Exploring a dataset in the Roboto Sandbox


Roboto allows you to mix and match different signals to create custom debugging workspaces, which users can share with their team and save for future tasks. You can also comment directly on datasets and graphs, and download just the slices you need, such as a .csv file with a couple of signals instead of several gigabytes of raw data.

This example shows a potentially dangerous situation where the object detector fails to detect a jaywalker because of an overexposed camera image.


Collections

A collection can consist of a set of datasets, images, plots, or a combination of these entities. There are different ways to add items to a collection:

  • Search: After a multi-modal search, you can add the results of your query to a collection. 
  • Triggers: At data upload, you can configure triggers to add items to a collection. For example: add all flights with saturated pixels to overexposure-collection.
  • Manual: You can manually add data points in the visualizer to a collection. 
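A trigger of the kind described above can be thought of as a predicate over upload metadata that routes new data into matching collections. The rule format and field names below are invented for illustration and may differ from Roboto's actual trigger configuration.

```python
# Hypothetical upload-time trigger: predicates over dataset metadata
# decide which collections new data lands in. Field names and the
# rule structure are assumptions for illustration only.
def route_to_collections(metadata, rules):
    """Return names of collections whose rule matches the metadata."""
    return [name for name, predicate in rules.items() if predicate(metadata)]

rules = {
    "overexposure-collection": lambda m: m.get("saturated_pixel_ratio", 0) > 0.05,
    "night-drives": lambda m: m.get("sun_elevation_deg", 90) < 0,
}

upload = {"saturated_pixel_ratio": 0.12, "sun_elevation_deg": 35}
print(route_to_collections(upload, rules))   # ["overexposure-collection"]
```

Keeping the rules declarative means the same conditions can back both upload triggers and after-the-fact searches.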


Collections allow you to gather data points with corresponding analysis to summarize findings and highlight issues. This creates a centralized repository for debugging and sharing insights among different roles in a robotics organization. Collections enable you to collaborate around the data instead of sharing screenshots or PDF reports through email or Jira. All that's needed is a link to a collection to have access to the relevant data and discussions. 

For example, a systems engineer could create a collection of datasets that include unprotected left turns at challenging locations. The user can then share the collection with the safety and reliability team for manual verification. Based on their assessment, the team could flag any scenarios that are deemed dangerous and then send the collection to the perception and controls team for further review.

Check out this example collection with left turns.

Edge Cases

You can use collections to track edge cases and interesting data points collaboratively. For example, you may find interesting false detections of your object detection algorithm caused by reflections or unexpected battery voltage drops that you want to keep track of.

Check out this example collection with false positive people detections.

Crafting Datasets

A common use case for a perception engineer in a robotics organization is to aggregate images across multiple datasets and craft targeted training and test sets for model training and algorithm evaluation. 

For example, you may want to check your computer vision algorithm performance on a specific subset of your data, and for that purpose, you create a collection of images at night with motorcycles and pedestrians.

Collections allow you to easily export your images or videos to your favorite machine learning framework so that you don't need to write any conversion code. 

Another use case for collections is data labeling. Labeling is expensive, so you must be very selective about which images you send to an external labeling provider. Our multi-modal search allows you to subsample your images using relevant query signals and then add them to a collection for labeling. For example, you may only want to label images at a certain steering angle and vehicle speed. We're working on integrating collections with external labeling providers (more on this soon!).
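The subsampling step can be sketched as bucketing the matched images on a query signal and keeping at most k per bucket, so the labeled set covers the signal's range instead of clustering at its most common values. The field names here are illustrative.

```python
# Sketch: diversity-aware subsampling before labeling. Bucket matches
# by a signal (steering angle, bucket width in degrees) and keep at
# most k images per bucket. Field names are assumptions.
from collections import defaultdict

def subsample_by_bucket(rows, key, bucket_size, k):
    buckets = defaultdict(list)
    for row in rows:
        buckets[int(row[key] // bucket_size)].append(row)
    # Emit buckets in signal order, at most k rows from each.
    return [row for b in sorted(buckets) for row in buckets[b][:k]]

rows = [
    {"frame": "a.jpg", "angle_deg": -24},
    {"frame": "b.jpg", "angle_deg": -22},
    {"frame": "c.jpg", "angle_deg": -21},
    {"frame": "d.jpg", "angle_deg": 3},
]
picked = subsample_by_bucket(rows, "angle_deg", 10, 2)
print([r["frame"] for r in picked])
```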


In conclusion, the Roboto Sandbox provides several new ways to search, visualize, and analyze multi-modal sensor data. You can try the sandbox for yourself here:

We'd love to hear from you as we continue to iterate on the user experience and APIs that we build along the way. Stay tuned for more exciting features coming soon! Meanwhile, if you'd like to collaborate or get early access to these tools for your own data, get in touch with us at


Sample data is © nuScenes (CC BY-NC-SA 4.0).