FAQs - Reality AI

General Questions


What is Edge AI? What is TinyML?

Edge AI and TinyML are largely interchangeable buzzwords for techniques used to create machine learning or artificial intelligence models that can be deployed outside of cloud data centers – often on processor nodes at the “edge of the network,” close to where the data originates. Edge AI, TinyML, Embedded AI, and Embedded Machine Learning are all essentially synonyms for machine learning running in low-power, low-memory, compute-constrained environments, typically in firmware. For Reality AI, Edge AI means running machine learning locally in firmware on inexpensive microcontrollers as part of a product, or on IoT edge devices in a factory or plant. For more, see this video from our Reality Bytes series on “What is Edge AI?”

Is Reality AI a type of Explainable AI?

Yes. Reality AI Tools® software works by constructing an optimized feature space based on signal processing transforms before building a machine learning model. These feature space transforms can be visualized in terms of their time- and frequency-domain behavior, so that engineers can relate machine learning model functioning to their own understanding of the underlying physics of the sensor’s interaction with the environment. We believe that Reality AI is unique among Edge AI / TinyML software for sensor data in providing this kind of explainability. For more on this topic, see our blog on how Reality AI is Explainable AI. You can also see a demonstration of this in action in the ARM Virtual Tech Talk by Reality AI on how to build products with Edge AI / TinyML. The explainability demo is at the 11:30 time mark.

How do I select a mic or accelerometer for my project?

There is a wide range of considerations to take into account when selecting instrumentation – physical characteristics like weight, size, electrical requirements, and so on; acoustic characteristics like frequency response and dynamic range; and processing characteristics such as maximum sample rate and bit depth. Some applications require explosion-proof or corrosion-resistant instrumentation. Generally speaking, our answer to this question depends on where you are in the design process. At the beginning? Use whatever is easiest and see whether it works. Dev boards can be a good choice here, or readily available USB-connectable sensors that can connect to a laptop or RealityCheck AD edge node. Have you proven feasibility and are now trying to identify specific components for your product Bill of Materials? Reality AI Tools® software has analytics to help you pick the best components and generate minimum component specifications, so you can procure the parts you need through your existing supply chain with confidence.

How much data do I need?

The short answer is that you will need “enough.”   How much is enough?  That’s very hard to say in advance.  Machine learning on sensor data is an exercise in overcoming variation with data – and the more variation there is, the more data you will need to overcome it.   Reality AI Tools® software has functionality to help you assess data coverage and readiness.  To make optimum use of it and to minimize your cost of data collection, have a look at our blog “The best way to reduce the cost of data collection is ‘only do it once.’” and our data collection whitepaper.


Reality AI Tools® software


What is Reality AI Tools® software for?

Reality AI Tools software is an Edge AI platform used to support a variety of AI use cases, including real-time AI applications based on audio data, vibration data, or other waveform sensor data.   Reality AI Tools software generates embedded AI models that can be incorporated into a firmware build as part of a product, or deployed to an inexpensive edge node in the factory.   Contact us for more information on AI case studies in predictive maintenance, condition monitoring, gesture recognition using accelerometer or other data, anomaly detection, and other predictive analytics based on sensor data.

What kinds of sensors does Reality AI Tools® software support? 

Reality AI Tools® software works with most kinds of sensor data, but works best with data collected at high sample rates, like accelerometer, vibration, sound, current, voltage, RF, and other data that one might describe as “waveforms.” Reality AI can also use data collected at lower rates, and is especially effective when slower time series are used in combination with high-frequency sensors – for example, combining periodic temperature and pressure readings with accelerometry or vibration. But Reality AI Tools software is not limited to any specific list of supported sensors – it will support data from any sensor so long as it is provided in a supported format. Contact us for more information on supported data formats.

What kind of edge device will Reality AI Tools® software support?

Reality AI Tools software is an Edge AI platform that can generate code suitable for use with a wide range of edge devices. We can generate code for microcontrollers using the GCC toolchain, which includes nearly all ARM Cortex M-, A-, and R-class MCUs. We also support a number of non-ARM architectures, including Knowles DSP processors. Finally, we support Linux implementations on a number of processors, including higher-end ASPs based on ARM and x86 processors. We support a number of different Raspberry Pi models as well. Contact us if you have questions about a specific processor.

How many sensor data channels can I use with Reality AI Tools® software?

Reality AI Tools software does not impose a hard limit on the number of sensor data channels. Practically speaking, though, the computing constraints of your target environment – RAM in particular – will put an effective limit on the number of channels you employ. The RAM requirements of a machine learning model are closely related to the size of the input buffer required to hold a frame of sensor data, and the size of that buffer is proportional to the number of channels × the sample rate × the length of the window. The more sensor data channels, the bigger the window buffer, and bigger window buffers require more RAM to manipulate mathematically. Reality AI Tools software can help you explore the RAM implications of different choices of input channels, window length, and sample rate to optimize the RAM requirements of any machine learning models you create.
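As a rough, hand-worked illustration of that arithmetic (the channel count, sample rate, and window length below are hypothetical, and Reality AI Tools does its own, more complete accounting):

```python
# Rough estimate of the sensor-frame buffer size (illustrative numbers only).
channels = 3           # e.g., a 3-axis accelerometer
sample_rate_hz = 1000  # samples per second, per channel
window_s = 0.5         # length of one analysis window in seconds
bytes_per_sample = 2   # 16-bit samples

samples_per_window = int(channels * sample_rate_hz * window_s)
buffer_bytes = samples_per_window * bytes_per_sample

print(f"{samples_per_window} samples per frame ≈ {buffer_bytes / 1024:.1f} KB of RAM "
      "before any feature computation or model workspace")
# 3 channels x 1000 Hz x 0.5 s = 1500 samples -> ~2.9 KB; doubling channels or window doubles it.
```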

How does Reality AI Tools® software work to create embedded machine learning models?

Experienced machine learning practitioners know:  If you have the right features, almost any machine learning algorithm will find what you’re looking for with about the same accuracy.   The Reality AI approach is based on this fundamental insight:  It’s all about the features.  Reality AI Tools software is based on a proprietary method for algorithmically determining optimum features for a given machine learning problem by seeking to construct feature spaces that optimize separation between classes or correlation to a target variable.   By selecting the best feature space and then optimizing it for maximum efficiency and effectiveness, Reality AI Tools software drastically simplifies the machine learning problem, allowing use of compact, simple and efficient learning algorithms.    For more information on this topic, see this video from our Reality Bytes series on “How to put AI on a Microcontroller.”
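Reality AI’s feature-discovery method is proprietary, but the general idea of ranking candidate feature spaces by class separation can be sketched with generic tools. The example below scores two hypothetical candidate features using a simple Fisher-style separation ratio; it illustrates the principle only, not the product’s algorithm.

```python
import numpy as np

def separation_score(feature_a, feature_b):
    """Fisher-style ratio: between-class distance over within-class spread."""
    between = (feature_a.mean() - feature_b.mean()) ** 2
    within = feature_a.var() + feature_b.var() + 1e-12
    return between / within

# Hypothetical 1-second windows of signal for two classes (e.g., "healthy" vs "faulty").
rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(50, 1000))
faulty = rng.normal(0.0, 1.0, size=(50, 1000)) + 0.3 * np.sin(np.linspace(0, 40 * np.pi, 1000))

candidates = {
    "rms": lambda w: np.sqrt((w ** 2).mean(axis=1)),
    "peak_spectral_bin": lambda w: np.abs(np.fft.rfft(w, axis=1)).argmax(axis=1).astype(float),
}

# Rank candidate features by how well they separate the two classes.
for name, f in candidates.items():
    print(name, separation_score(f(healthy), f(faulty)))
```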

What types of signal processing features does Reality AI Tools® software use?

Reality AI Tools software is optimized for problems using high-frequency / high-sample-rate sensor data that might otherwise require input from a signal processing engineer to manipulate – data like sound, vibration, accelerometer data, and other types of data that you would think of as having a “waveform.”  Reality AI Tools uses a range of statistical as well as time- and frequency-domain transforms:  basic statistical features; spectral features; statistics on spectral features; spectral periodicity and time-variant measures of spectral features; wavelets and other time-frequency codings, including sparse coding; and a variety of transforms from the field of compressive sensing.  Reality AI Tools software evaluates each feature family against the data, selects a specific set of computations, and then optimizes the parameters of that feature computation.  It does this for each sensor channel, and combines the results into a feature computation specific to the classification, regression or anomaly detection problem at hand.  These types of features work exceptionally well for audio signal processing, vibration analysis, gesture recognition, and other AI use cases that make use of signal data.
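As a generic illustration of the simpler feature families named above, the sketch below computes a few statistical and spectral quantities for one channel of windowed data. Reality AI Tools selects and parameterizes its own transforms automatically; the feature names and signal here are just placeholders.

```python
import numpy as np

def basic_features(window, sample_rate_hz):
    """A few generic statistical and spectral features for one sensor channel."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / sample_rate_hz)
    return {
        "mean": window.mean(),
        "std": window.std(),
        "rms": np.sqrt((window ** 2).mean()),
        "kurtosis": ((window - window.mean()) ** 4).mean() / (window.var() ** 2 + 1e-12),
        "spectral_centroid_hz": (freqs * spectrum).sum() / (spectrum.sum() + 1e-12),
        "dominant_freq_hz": freqs[spectrum.argmax()],
    }

# Hypothetical: one 0.25-second window of accelerometer data sampled at 4 kHz.
window = np.sin(2 * np.pi * 120 * np.arange(1000) / 4000) + 0.1 * np.random.randn(1000)
print(basic_features(window, sample_rate_hz=4000))
```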

Does Reality AI Tools software create deep learning models?

Our approach at Reality AI focuses on algorithmic feature discovery and optimization, using a proprietary AI-driven process for determining optimum features for a given machine learning problem. In most cases, this drastically simplifies the machine learning problem, allowing use of compact, simple, and efficient learning algorithms. By contrast, Deep Learning typically uses multiple layers of convolutional or recurrent neural networks to accomplish that same task – determining which of many possible underlying features matter most to the decision being made at the top of the network. The problem with Deep Learning for embedded systems is that it can be computationally wasteful – bringing all of that neural network machinery into deployment. And for real-time AI applications based on signal processing features, the neural network approximation of signal processing transforms is massively inefficient – see this blog on “stupid deep learning tricks” for more on the underlying mathematics behind that assertion. But there are certainly some applications for which embedded deep learning methods can add predictive power over and above simpler machine learning, even with optimized features. For that reason, we will shortly be offering support for TensorFlow Lite microcontroller deployments, with transparent comparisons of both expected accuracy and RAM / flash / processing latency between TensorFlow Lite and other, more efficient machine learning methods.

What algorithms does Reality AI Tools® software support?

Our approach at Reality AI focuses on algorithmic feature discovery and optimization, using a proprietary AI-driven process for determining optimum features for a given machine learning problem. In most cases, this drastically simplifies the machine learning problem, allowing use of compact, simple, and efficient learning algorithms. For classification problems, this is usually an ensemble of support vector machines (in embedded inference, SVMs have the distinct advantage of being evaluated with just a few matrix multiplies). For regression problems, it’s usually an SVM regression model, and for anomaly detection our software typically employs a one-class SVM or a K-means algorithm. We will soon be adding additional learning algorithms to the mix, including deep learning for cases that require them – but for most problems these other learning algorithms are not ideal, adding significant computational complexity without materially increasing accuracy. See, for example, this blog on “stupid deep learning tricks.”
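For orientation only, here is roughly what those model families look like using off-the-shelf scikit-learn estimators. Reality AI Tools trains and exports its own optimized implementations; the feature matrix X and targets below are random placeholders.

```python
import numpy as np
from sklearn.svm import SVC, SVR, OneClassSVM
from sklearn.cluster import KMeans
from sklearn.ensemble import BaggingClassifier

# Placeholder feature matrix (rows = windows, columns = optimized features) and targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y_class = rng.integers(0, 2, size=200)  # classification target
y_reg = rng.normal(size=200)            # regression target

# Classification: an ensemble of SVMs.
clf = BaggingClassifier(SVC(kernel="rbf"), n_estimators=5).fit(X, y_class)

# Regression: SVM regression.
reg = SVR().fit(X, y_reg)

# Anomaly detection: one-class SVM, or K-means distance-to-centroid as an alternative.
ad = OneClassSVM(nu=0.05).fit(X)
km = KMeans(n_clusters=3, n_init=10).fit(X)

print(clf.predict(X[:3]), reg.predict(X[:3]), ad.predict(X[:3]))
```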

Can I tune the models myself, or is Reality AI Tools® software a “black box”?

Reality AI Tools software offers many options for model tuning. At the start of the process, you have the option to adjust window size and stride, or to use energy-triggered segmentation based on a virtual-oscilloscope trigger (see our blog on segmenting real-time sensor data, and our presentation on the topic in the ARM Virtual Tech Talks series). Then, in the model construction process, Reality AI Tools software allows you to select the feature set and model construction that best fit your needs, whether for accuracy, resource consumption, or explainability. Also during this phase, we offer a number of different visualization options for specific class signatures and feature significance (see our blog on how we have implemented Explainable AI). Post-construction, we offer model tuning for adjusting the error balance by changing your tolerance for false positives vs. false negatives (see our blog on how bias isn’t always bad), and by implementing different types of smoothing. Finally, for the ultimate in transparency and customizability, we also offer pro-tier options for MATLAB and C code exports. With these source code options, you can use Reality AI Tools software as your starting point, incorporating its output into your own tool stack and development process.
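To make the segmentation options more concrete, the sketch below contrasts generic fixed-window slicing with a simple energy trigger (an oscilloscope-style threshold on short-time energy). The window sizes and threshold are illustrative, not the tool’s defaults.

```python
import numpy as np

def fixed_windows(signal, window, stride):
    """Slice a 1-D signal into overlapping fixed-size windows."""
    return [signal[i:i + window] for i in range(0, len(signal) - window + 1, stride)]

def energy_triggered_windows(signal, window, threshold):
    """Emit a window each time short-time energy crosses a threshold (illustrative trigger)."""
    out, i = [], 0
    while i + window <= len(signal):
        frame = signal[i:i + window]
        if (frame ** 2).mean() > threshold:
            out.append(frame)
            i += window       # skip ahead past the triggered event
        else:
            i += window // 4  # otherwise advance by a smaller hop
    return out

# Hypothetical signal: mostly quiet, with one burst of activity.
sig = np.random.randn(8000) * 0.05
sig[3000:3500] += np.sin(2 * np.pi * 50 * np.arange(500) / 1000)

print(len(fixed_windows(sig, window=500, stride=250)),
      len(energy_triggered_windows(sig, window=500, threshold=0.05)))
```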


RealityCheck AD


Can RealityCheck AD help us leverage AI for industrial anomaly detection?

RealityCheck AD is an add-on to Reality AI Tools® software for factory applications such as condition monitoring, predictive maintenance, and automated end-of-line testing using anomaly detection.  It is intended to be useful in cases where building balanced, labeled predictive maintenance datasets in advance is not practical or possible.   RealityCheck AD uses anomaly detection running on a standard industrial edge PC or Raspberry Pi connected to an accelerometer, microphone, or other industrial sensor.  Out of the box, it will learn a normal baseline and alert the user when anomalies are detected.  These anomalies can then be investigated and their data retained for creating additional models in Reality AI Tools later.  Contact us for more information on AI case studies in predictive maintenance, condition monitoring, and Industrial IoT using accelerometer or other data, anomaly detection, and predictive analytics based on sensor data.

What kinds of sensors does RealityCheck AD support?

RealityCheck AD supports a number of different accelerometer and microphone options for direct connection to an edge device. We also support interfacing through DAQs from National Instruments and Digiducer (among others), and we offer a “virtual edge node” option for integration with existing instrumentation and infrastructure. Contact us to discuss your project and instrumentation needs, and we can help you find a suitable sensor set.

What types of edge devices does RealityCheck AD support?

RealityCheck AD can run on any Linux edge device that meets minimum requirements. We also support a “virtual edge node” for integration with existing infrastructure. Contact us for a list of supported edge devices, or to see whether your existing equipment will work with RealityCheck AD.

Can RealityCheck AD integrate with my MES or other factory systems?

Yes.   Every RealityCheck AD edge device reports its results and transmits required data over MQTT.   Integrating with other factory systems is as simple as subscribing to the relevant MQTT feeds and ingesting the data.  Similarly, any communications back to RealityCheck AD can also be accomplished using MQTT.
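For example, a factory system could consume RealityCheck AD results with a few lines of standard MQTT client code. The sketch below uses the Python paho-mqtt library (v1.x callback API); the broker address and topic name are placeholders, not RealityCheck AD’s actual topic scheme.

```python
import json
import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2"

BROKER = "broker.example.local"          # placeholder broker address
TOPIC = "factory/line1/anomaly/results"  # placeholder topic, not the product's real topic name

def on_message(client, userdata, msg):
    # Each message is assumed here to carry a JSON result payload.
    result = json.loads(msg.payload)
    print(f"{msg.topic}: {result}")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_forever()
```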

Can RealityCheck AD use my existing sensors?

In most cases, yes.  RealityCheck AD offers an optional “virtual sensor DAQ” which can be used to receive sensor data from other systems via MQTT.   Contact us to learn more and to discuss whether your existing sensor implementation can be accommodated.

Can RealityCheck AD use my existing sensors from National Instruments?

RealityCheck AD offers an integration add-on for National Instruments DAQ systems, allowing sensors currently in use and connected to a National Instruments system to be used with RealityCheck AD as well.

Where does model training take place in RealityCheck AD?

When you deploy a RealityCheck AD edge device and instruct it to “Learn Normal”, it will collect baseline data to be used for anomaly detection and use that data locally at the edge node to train a baseline AD model.  Additional “Learn Normal” actions can append data to the baseline set, with retraining also happening on the edge device.   If you create additional models using Reality AI Tools for classification, regression, or anomaly detection, those models are trained in the cloud (and may be trained on data from multiple edge nodes).   Models trained with Reality AI Tools in the cloud can be deployed to RealityCheck AD edge devices for local inference.

Where does inference take place in RealityCheck AD?

Models trained through “Learn Normal” on the RealityCheck AD screen will run on the selected edge device. Models constructed and trained in Reality AI Tools® software can be deployed to run locally on a selected edge device by hitting the “Add” button on the Device Setup screen after selecting “Activity Monitoring”.


Automotive SWS

What is Automotive SWS?

Automotive SWS is a system from Reality AI and its partners that allows cars to “see with sound.” In particular, it uses an array of MEMS microphones on the exterior of the vehicle to detect other road participants, compute the angle-of-arrival of the sound, and determine other properties of the target (approximate range, whether it is approaching or receding) where that is possible. Automotive SWS is intended to supplement collision avoidance, ADAS, and autonomous operation stacks based on cameras, lidar, or radar that otherwise would be limited to line-of-sight. Sound is a high-quality, low-cost addition that makes it possible, for example, to hear an emergency vehicle that is still 0.5 km away (and therefore too far to see), or a car coming around a blind corner. Automotive SWS won the 2020 Future Mobility Award from MobilityXLabs, and a Best of Sensors 2021 award at Sensors Converge Expo. Contact us for more information on Automotive SWS, and to learn how to incorporate it into your own sensor / ADAS stack.
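The signal processing inside Automotive SWS is Reality AI’s own, but the basic idea of estimating angle-of-arrival from a microphone pair can be sketched generically: estimate the inter-microphone time delay by cross-correlation, then convert it to an angle using the known spacing and the speed of sound. The example below is a textbook-style illustration, not the SWS algorithm.

```python
import numpy as np

def angle_of_arrival(mic_a, mic_b, sample_rate_hz, spacing_m, speed_of_sound=343.0):
    """Estimate angle (degrees from broadside) for a two-microphone pair via cross-correlation."""
    corr = np.correlate(mic_b, mic_a, mode="full")
    lag = corr.argmax() - (len(mic_a) - 1)  # samples by which mic_b lags mic_a
    delay_s = lag / sample_rate_hz
    sin_theta = np.clip(speed_of_sound * delay_s / spacing_m, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Hypothetical test: a broadband sound reaching mic_b three samples after mic_a
# (48 kHz sampling, 10 cm microphone spacing).
rng = np.random.default_rng(0)
source = rng.normal(size=4096)
mic_a = source
mic_b = np.roll(source, 3)
print(round(angle_of_arrival(mic_a, mic_b, 48_000, spacing_m=0.10), 1), "degrees")  # ~12.4
```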