Welcome to Reality AI 4.0!
Since we released Reality AI Tools® version 1.0 in 2017, we’ve been pleased to see it recognized as the most powerful development environment for Edge AI and advanced non-visual sensing available anywhere. Global 2000 organizations have been using Reality AI for more than two years now to accelerate their sensor-oriented product R&D and get better results than they would any other way.
These early customers taught us what working R&D engineers really need from machine learning. So today we are excited to announce Reality AI Tools® 4.0 -- a massive upgrade to our Reality AI Tools® software that introduces new functionality for engineering users not available anywhere else. Reality AI 4.0 will allow customers to use artificial intelligence to reduce the cost of developing, procuring and manufacturing smart devices.
Better Automated Feature Discovery and Model Generation
Automated feature discovery and model generation has been the core of Reality AI since the beginning, and in Reality AI 4.0 we’ve made it even better. We’ve added more advanced feature sets and improved model tuning. Reality AI now searches more than 10,000 possible feature sets and model construction options to deliver the most effective solution to your sensing problems.
We’re also adding new features to assist in fine-tuning and improving models, including:
Error balancing, to reduce false positives or false negatives through bias tuning. To learn more, see our blog “Bias isn’t always bad”.
New methods for segmenting real-time, streaming data. It’s not just sliding windows anymore. Watch our blog for more information coming soon.
Improved Explainable AI visualizations. Engineers need to know how their models work and why, and we support that. To learn more, see our blog “How do you make AI explainable? Start with the explanation.”
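The error-balancing idea above can be illustrated generically: shifting a classifier’s decision threshold biases the model toward one error type or the other. The sketch below is plain Python with made-up scores and labels, not Reality AI code or its API; it simply shows the trade-off that bias tuning exposes.

```python
# Illustrative sketch (not Reality AI code): biasing a classifier's
# decision threshold trades false positives against false negatives.

def error_counts(scores, labels, threshold):
    """Count false positives and false negatives at a given threshold."""
    preds = [s >= threshold for s in scores]
    fp = sum(1 for p, y in zip(preds, labels) if p and not y)
    fn = sum(1 for p, y in zip(preds, labels) if not p and y)
    return fp, fn

# Hypothetical model scores and ground-truth labels.
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.2, 0.9, 0.55]
labels = [False, False, True, True, True, False, True, False]

# A low threshold catches every event (fewer false negatives, more
# false positives); a high threshold is the reverse.
for t in (0.3, 0.5, 0.7):
    fp, fn = error_counts(scores, labels, t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
```

Which end of this trade-off you prefer depends on the application: a machine-failure detector may tolerate false alarms far better than missed failures.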
New Tools for Selecting Sensors and Processor Components
Product R&D is as much about cost as it is about technology, and in Reality AI 4.0 we are releasing new features to help you optimize the Bill of Materials: specifying the fewest, cheapest-to-procure sensor components, placed in the locations that are cheapest to manufacture. We’ll also identify minimum tolerance and sensitivity requirements, and provide tools for reducing the resources needed for on-board processing.
"Product R&D is as much about cost as it is about technology."
In engineering, everything is a trade-off, and Reality AI 4.0 will tell you what you need to know to understand cost vs. performance trade-offs. For now, we’re focused on:
Identifying how many of which kinds of sensors you really need, and where they are best placed. You can use a cost function to prefer certain choices over others.
Identifying acceptable levels of internal noise and measurement tolerance. Model how tolerances stack (or don’t) across multiple channels.
Examining the sensitivity of model accuracy to changes in sample rate, bit depth, and noise floor, all of which drive processing cost and impact component selection.
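The sensitivity analysis described in the last item can be pictured with a simple experiment: degrade a captured signal to emulate a cheaper front-end, then re-score the model on the degraded data. The sketch below is a generic, standard-library Python illustration under assumed parameters (a 1 kHz capture, hypothetical `downsample` and `requantize` helpers), not Reality AI code.

```python
# Illustrative sketch (not Reality AI code): emulating cheaper sensor
# front-ends by reducing sample rate and bit depth, so a model can be
# re-evaluated against the degraded inputs.
import math

def downsample(signal, factor):
    """Keep every `factor`-th sample to emulate a lower sample rate."""
    return signal[::factor]

def requantize(signal, bits, full_scale=1.0):
    """Snap samples to a `bits`-deep ADC grid spanning +/- full_scale."""
    levels = 2 ** (bits - 1)
    step = full_scale / levels
    return [round(s / step) * step for s in signal]

# Hypothetical 1 kHz, one-second capture of a 50 Hz vibration signature.
fs = 1000
signal = [0.8 * math.sin(2 * math.pi * 50 * n / fs) for n in range(fs)]

# Candidate cheaper front-end: 250 Hz sample rate, 8-bit depth.
cheap = requantize(downsample(signal, 4), bits=8)
print(len(cheap), max(cheap))
```

Running the same trained model over `signal` and `cheap` shows how much accuracy each cost reduction actually gives up, which is the information a component-selection decision needs.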
There will be much more to come on this topic, so watch our blog as we discuss this new functionality and how it works.
Streamlining the Most Expensive Part of Machine Learning Development - Data Collection
Anyone who’s done it will tell you: the most difficult, most time-consuming, and most expensive part of solution development is data collection. Whether it’s for initial problem exploration or for field testing, data collection is the part of the project most likely to blow your budget and kill your timeline.
So what’s the best way to minimize the cost of data collection? Make sure you don’t have to do it twice.
Data collection costs are often compounded by problems and errors -- malfunctioning equipment, faulty data logging, garbled data transmission -- that would have been much less costly and much less impactful to the project plan had they been caught quickly.
"What’s the best way to minimize the cost of data collection? Make sure you don’t have to do it twice."
To address this, Reality AI 4.0 contains a variety of tools for checking data -- automatically, each time a new file arrives from the field:
File consistency - checks the number of columns, data types, file sizes, sample rates, and other parameters.
Data quality - identifies zeroed or suspicious data blocks, inconsistent data types, data misalignment, internal file data errors, clipping, and other common issues that indicate the contents of a data file may not be what you think it is.
Data coverage - tracks the amount of data accumulated by major categorical variables, so you can spot gaps and track progress to plan.
Time coverage - especially important for collecting real-time streaming data, time coverage analyzes how much continuous data is available, tabulated and cross-tabulated by major categorical and continuous variables. As you consider different sliding-window sizes, for example, these charts will tell you how much data you really have to work with.
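A file consistency check of the kind described above can be sketched in a few lines. This is a generic, standard-library Python illustration of the idea (expected column counts, misaligned rows, zeroed blocks), with a hypothetical `check_csv` helper and made-up sample files; it is not Reality AI code.

```python
# Illustrative sketch (not Reality AI code): a minimal consistency check
# run against each new CSV file as it arrives from the field.
import csv
import io

def check_csv(text, expected_columns):
    """Return a list of problems found in a CSV payload."""
    problems = []
    rows = list(csv.reader(io.StringIO(text)))
    if not rows:
        return ["file is empty"]
    if len(rows[0]) != expected_columns:
        problems.append(f"expected {expected_columns} columns, got {len(rows[0])}")
    for i, row in enumerate(rows):
        if len(row) != len(rows[0]):
            problems.append(f"row {i} is misaligned ({len(row)} fields)")
        elif all(field.strip() in ("", "0") for field in row):
            problems.append(f"row {i} is zeroed or blank")
    return problems

good = "t,ax,ay\n0.0,0.1,0.2\n0.1,0.3,0.4\n"
bad = "t,ax,ay\n0.0,0.1\n0,0,0\n"
print(check_csv(good, expected_columns=3))  # no problems
print(check_csv(bad, expected_columns=3))   # misaligned row, zeroed row
```

Catching a garbled file the day it is logged, rather than weeks later at analysis time, is exactly what keeps a collection from having to be repeated.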
Much more on this to come as well, so watch the Reality AI Blog for more information.
Automated Export for TinyML / Edge AI Code for Embedded Use
Reality AI has always been about delivering extremely compact, computationally efficient code for use in embedded devices, and now we’ve made our embedded support even better, with:
Improved optimizations in our code for feature computation and machine learning
Full transparency on resource requirements (RAM, flash, multiplies) before code export
Automated support for any MCU using the GCC toolchain
Automated support for Linux environments
Semi-automated support for any MCU supported by Arm Keil
Timing, and What This Means for Customers
Some of these new features are rolling out immediately, with the remainder going live in production between now and June. All new features are available for evaluation and testing by Reality AI customers today.
To learn more about Reality AI and the new features in Reality AI 4.0, sign up for our webinar here.