Frequently Asked Questions
Reality AI Tools™ is a cloud-based application for R&D engineers working with sensors and signals. It can be used to generate code for detecting real-world events and conditions using signal and sensor inputs.
Users can load or link to their sensor data, curate training and validation sample lists, use AI Explore™ to create optimized feature sets and generate machine learning models, then train and test those models in the field.
Reality AI Tools allows users to make trained classifiers, detectors and predictors available in the cloud, where they can be used with a simple API. Alternatively, with a subscription upgrade, users can export them into a form that can be integrated into firmware and run in real time, at the edge, on their device.
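As a rough illustration of what "used with a simple API" might look like, the sketch below packages a window of raw sensor readings into a JSON request body. Everything here is an assumption for illustration only: the endpoint URL, the field names ("model_id", "samples") and the payload shape are hypothetical, not taken from Reality AI documentation.

```python
import json

# Placeholder endpoint -- NOT a real Reality AI URL.
API_URL = "https://api.example.com/v1/classify"

def build_classify_request(model_id, samples):
    """Package one window of raw sensor readings as a JSON request body.
    Field names are hypothetical; consult the actual API reference."""
    return json.dumps({"model_id": model_id, "samples": list(samples)})

payload = build_classify_request("vibration-detector-01",
                                 [0.01, 0.03, -0.02, 0.05])
# The payload could then be POSTed to the cloud endpoint, e.g. with
# urllib.request or the `requests` library.
```

In a real integration, the response would carry the classifier's decision for that window; the exact request and response schema would come from the product's API documentation.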
Reality AI will work with any kind of sensor input. Customers have used a wide range of sensor and signal inputs, including accelerometry and vibration, sound, image, LiDAR, 3D imagery, electrical signals and proprietary sensor types.
The amount of data needed depends on the amount of variation in target classes and in the environmental background. In many cases, we can get useful results with small datasets. For some use cases, even a few dozen examples are sufficient to get started.
Eventually, to ensure a solution that will perform adequately in the field, it will be necessary to collect data that covers the full range of variation expected both in target and in background. For classification problems, that means examples of target classes occurring in as many different circumstances as possible, as well as counter-examples of non-target classes that could be confounded with targets.
To work with Reality AI Tools, data must be loaded or linked in one of our standard file formats. Please refer to our Standard File Format Guide.
Traditional signal data analysis is a “model-driven” approach based on the engineer’s understanding of the physics of the device, the physics of the sensor and a physical model of how target phenomena will be manifested in sensor output. It is typically an iterative, trial-and-error approach. Often, the engineer will use a fast Fourier transform (FFT), filter banks, or a linear systems analysis to measure the energy present in the signal’s frequency and time domains, and then apply statistical methods to those outputs.
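The FFT step described above can be sketched in a few lines. This is a minimal, generic example using NumPy, not any Reality AI code: it synthesizes a one-second vibration signal containing 60 Hz and 120 Hz tones (assumed values for illustration) and inspects the energy in each frequency bin.

```python
import numpy as np

# Model-driven sketch: hand-inspect frequency-domain energy with an FFT.
fs = 1000.0                      # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)  # one second of samples
signal = np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(signal)                    # one-sided spectrum
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)  # bin center frequencies
energy = np.abs(spectrum) ** 2                    # energy per bin

# The dominant bin sits at the stronger injected tone, 60 Hz.
peak_freq = freqs[np.argmax(energy)]
```

An engineer working model-driven would then reason from such spectra, guided by the physics of the device, to choose thresholds or filters by hand.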
Reality AI uses a “data-driven” approach that makes no assumptions about the underlying physics, and instead employs advanced mathematics and machine learning to identify relevant features (which can be very different from the frequency-domain features found by an FFT), and then learn to classify on the basis of those features. Data-driven approaches to signal analysis are relatively new, and can be a powerful complement to traditional model-driven approaches.
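To make the contrast concrete, here is a toy data-driven sketch. It is emphatically not Reality AI's actual algorithm: it uses a generic feature vector (the magnitude spectrum) and a deliberately simple nearest-centroid learner on synthetic data, purely to show the shape of the workflow, in which features are computed and classes are learned from examples rather than derived from a physical model.

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 1000.0  # assumed sample rate in Hz

def make_window(tone_hz, n=256):
    """Synthesize a noisy sensor window dominated by one tone."""
    t = np.arange(n) / FS
    return np.sin(2 * np.pi * tone_hz * t) + 0.1 * rng.standard_normal(n)

def features(window):
    """Generic, physics-agnostic features: the magnitude spectrum."""
    return np.abs(np.fft.rfft(window))

# Two synthetic classes: 60 Hz "normal" vs 120 Hz "fault" vibration.
X = np.array([features(make_window(f)) for f in [60.0] * 20 + [120.0] * 20])
y = np.array([0] * 20 + [1] * 20)

# Minimal stand-in learner: assign a window to the nearest class centroid.
centroids = {label: X[y == label].mean(axis=0) for label in (0, 1)}

def predict(window):
    f = features(window)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))
```

In practice a data-driven pipeline would search over far richer feature spaces and model families; the point of the sketch is only that nothing in it encodes the physics of the device.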
Haven't found the answer you were looking for? Get in touch with our sales team for more details; we're happy to help!