Reality AI is sensor agnostic and can easily handle many data channels for sensor fusion scenarios. Use Reality AI with your proprietary sensors, or other sensors not listed here, alone or in combination. The only limit is the available processing power.

Proprietary sensors

Reality AI is completely data-driven, making no assumptions about the underlying physics or the nature of the sensor used.  

For proprietary sensors, you need not even disclose details of the sensor mechanism or the underlying phenomena.  

Simply specify the dimensionality and the relationships between channels (e.g., if there are three data channels, whether these are three independent collections or the x/y/z components of a 3-dimensional vector), and the algorithms do the rest.
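As an illustration, that channel metadata might be declared alongside the data like this. This is a hypothetical sketch: the names and structure below are illustrative only, not Reality AI's actual configuration format.

```python
# Hypothetical channel declaration for a proprietary 3-channel sensor.
# Nothing about the sensor mechanism or underlying physics is disclosed;
# only the dimensionality and how the channels relate to one another.
channel_config = {
    "sample_rate_hz": 1000,
    "channels": ["x", "y", "z"],
    # "vector3": treat the channels as components of one 3-D vector,
    # rather than three independent measurements ("independent").
    "relationship": "vector3",
}
```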


Data can be linked or loaded in any of several different formats, available here.

Sensor fusion

Reality AI tools are ideally suited for high-dimensionality problems combining input from multiple sensors and sensor types.


In automotive, for example, we can combine image, sound, and LiDAR to detect road conditions. In work with the US Army, we combine imagery and LiDAR collected by UAV to classify terrain.


In sensor fusion, it is not necessary for all sensors to use the same sample rate, but close correspondence in time and space between inputs significantly improves detection outcomes.
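One common way to establish time correspondence between streams sampled at different rates is to resample them onto a shared timebase. The sketch below uses generic linear interpolation for this, as a minimal illustration of the idea; it is not Reality AI's internal method, and the sensor names and rates are made up.

```python
import numpy as np

def align_streams(t_a, a, t_b, b, fs_out):
    """Resample two streams onto a shared uniform timebase.

    Restricts to the time span both streams cover, then linearly
    interpolates each stream at the common sample instants.
    """
    t0 = max(t_a[0], t_b[0])          # latest start
    t1 = min(t_a[-1], t_b[-1])        # earliest end
    t = np.arange(t0, t1, 1.0 / fs_out)
    return t, np.interp(t, t_a, a), np.interp(t, t_b, b)

# Example: a 100 Hz accelerometer channel and a 40 Hz audio-envelope
# channel, aligned onto a common 200 Hz timebase.
t_acc = np.arange(0, 1, 1 / 100)
t_mic = np.arange(0, 1, 1 / 40)
acc = np.sin(2 * np.pi * 5 * t_acc)
mic = np.cos(2 * np.pi * 2 * t_mic)
t, acc_al, mic_al = align_streams(t_acc, acc, t_mic, mic, 200)
```

After alignment, each output row corresponds to the same instant across both channels, which is the time correspondence that fusion benefits from.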


Learn more with our Technical Whitepaper

Explore the technical details behind the Reality AI approach to machine learning with signals

  • Why signals require a different approach than other machine learning problems

  • The importance of "features" to effective machine learning

  • Why the FFT probably isn't good enough, and what other options are better

  • The difference between Reality AI and Deep Learning
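The FFT point above can be illustrated with a small, self-contained experiment (a generic signal-processing sketch, not taken from the whitepaper): two signals containing the same frequencies in a different time order have identical FFT magnitude spectra, so magnitude-only FFT features cannot tell them apart, while a time-localized (windowed) view can.

```python
import numpy as np

fs = 1000                           # sample rate, Hz
t = np.arange(0, 1, 1 / fs)
half = len(t) // 2

# sig1: 50 Hz tone then 200 Hz tone; sig2: the same halves in reverse order.
sig1 = np.concatenate([np.sin(2 * np.pi * 50 * t[:half]),
                       np.sin(2 * np.pi * 200 * t[:half])])
sig2 = np.concatenate([np.sin(2 * np.pi * 200 * t[:half]),
                       np.sin(2 * np.pi * 50 * t[:half])])

# Whole-signal FFT magnitudes are identical (sig2 is a circular shift of
# sig1), so these features cannot distinguish the two signals.
fft_diff = np.abs(np.abs(np.fft.rfft(sig1)) -
                  np.abs(np.fft.rfft(sig2))).max()

# A time-localized window (here, just the first half-second) separates
# them immediately: one window contains 50 Hz, the other 200 Hz.
win_diff = np.abs(np.abs(np.fft.rfft(sig1[:half])) -
                  np.abs(np.fft.rfft(sig2[:half]))).max()
```

Here `fft_diff` is numerically zero while `win_diff` is large, which is why time-frequency and other time-aware features often outperform a plain FFT on non-stationary signals.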