Frequently Asked Questions
Reality AI Tools is a cloud-based application for R&D engineers working with sensors and signals. It can be used to generate code for detecting real-world events and conditions using signal and sensor inputs.
Users can load or link to their sensor data, curate training and validation sample lists, use AI Explore™ to create optimized feature sets and generate machine learning models, then train and test those models in the field.
Reality AI Tools allows users to make trained classifiers, detectors and predictors available in the cloud, where they can be used with a simple API. Alternatively, with a subscription upgrade, users can export them into a form that can be integrated into firmware and run in real time, at the edge, on their device.
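As a rough illustration of what calling a cloud-hosted classifier over a simple API might look like, the sketch below builds a JSON request from a window of raw sensor samples. The endpoint URL, payload fields, and model identifier are assumptions for illustration only, not the documented Reality AI API.

```python
import json

# Hypothetical example: the endpoint, payload fields, and response shape
# below are assumptions, not the documented Reality AI API.
API_URL = "https://api.example.com/v1/classify"  # placeholder endpoint

# A short window of raw accelerometer samples (x, y, z) captured at 250 Hz.
window = [[0.01, -0.02, 0.98], [0.02, -0.01, 0.99], [0.00, -0.03, 0.97]]

payload = json.dumps({
    "model_id": "my-trained-classifier",  # assumed identifier
    "sample_rate_hz": 250,
    "samples": window,
})

# Sending the request would look something like this (requires the
# third-party `requests` package):
# response = requests.post(API_URL, data=payload,
#                          headers={"Content-Type": "application/json"})
# prediction = response.json()["class"]

print(payload)
```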
Reality AI will work with any kind of sensor input. Customers have used a wide range of sensor and signal inputs, including accelerometry and vibration, sound, image, LiDAR, 3D imagery, electrical signals and proprietary sensor types.
Reality AI Tools is generally most appropriate for non-image applications where the sensor data is sampled at 25 Hz or greater. For slower sample rates, traditional machine learning techniques geared toward statistical time series are likely to give good results and should be tried first.
For image applications, Reality AI is generally most appropriate for problems related to identifying different surface textures and discontinuities in surface textures. For object identification and scene classification problems, a solution based on deep learning is more likely to give good results and should be tried first.
For sound applications, Reality AI is appropriate for a wide range of problems. However, we do not have solutions for natural language processing or speech recognition. Other tools will be more appropriate for those kinds of problems.
The amount of data needed depends on the amount of variation in target classes and in the environmental background. In many cases, we can get useful results with small datasets. For some use cases, even a few dozen examples are sufficient to get started.
Eventually, to ensure a solution that will perform adequately in the field, it will be necessary to collect data that covers the full range of variation expected in both target and background. For classification problems, that means examples of target classes occurring in as many different circumstances as possible, as well as counter-examples of non-target classes that could be confounded with targets.
To work with Reality AI Tools, data must be loaded or linked in one of our standard file formats. Please refer to our Standard File Format Guide.
Traditional signal data analysis is a “model-driven” approach based on the engineer’s understanding of the physics of the device, the physics of the sensor and a physical model of how target phenomena will be manifested in sensor output. It is typically an iterative, trial-and-error approach. Often, the engineer will use a fast Fourier transform (FFT), filter banks, or a linear systems analysis to discover how energy is distributed across the signal’s frequency and time domains, and use these outputs with statistical methods.
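The FFT step described above can be sketched in a few lines of NumPy: compute the spectrum of a windowed signal, then look at the energy per frequency bin. The signal parameters here are made up for illustration.

```python
import numpy as np

# Model-driven sketch: estimate how signal energy is distributed across
# frequency bins with an FFT. Signal parameters are illustrative only.
fs = 1000                              # sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)        # one second of samples
rng = np.random.default_rng(0)

# A 50 Hz vibration plus a weaker 120 Hz component and some noise.
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
x += 0.2 * rng.standard_normal(len(t))

spectrum = np.fft.rfft(x)                      # one-sided FFT of real signal
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)    # bin center frequencies, Hz
energy = np.abs(spectrum) ** 2                 # energy per frequency bin

# The strongest bin sits at the dominant 50 Hz component.
peak_hz = freqs[np.argmax(energy)]
print(round(peak_hz))  # → 50
```

An engineer would then feed summary statistics of these bin energies into a conventional statistical model — the trial-and-error loop the paragraph above describes.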
Reality AI uses a “data-driven” approach that makes no assumptions about the underlying physics, and instead employs advanced mathematics and machine learning to identify relevant features (which can be very different from the frequency-domain features found by an FFT), and then learns to classify on the basis of those features. Data-driven approaches to signal analysis are relatively new, and can be a powerful complement to traditional model-driven approaches.
You can find more information in our Technical Whitepaper.
Haven't found the answer you were looking for? Get in touch with our sales team for more details, we're happy to help!