Reality AI Sound Classifiers Demo

Ambient sound detectors

How does this sound demo work?

This demo was trained on just over 1,300 ten-second samples of ambient sound recorded using binaural microphones in six different but similar-sounding environments: on a bus, in a car, on the street in the city center, in a metro station, on a train, and on a tram.

We then used Reality AI Tools™ in the cloud to train a six-class classifier to differentiate between the backgrounds. That classifier uses a one-second decision window and accumulates votes over the ten-second duration of each observation to make the final prediction.
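The windowing-and-voting scheme can be illustrated with a short sketch. The code below is not the classifier exported from Reality AI Tools; the per-window detector is a stub, and the 16 kHz sample rate is an assumption made only for illustration. It simply shows how one-second predictions could be tallied into a single label for a ten-second clip.

```c
#include <stdio.h>

#define NUM_CLASSES 6   /* bus, car, city street, metro station, train, tram */
#define NUM_WINDOWS 10  /* one-second decision windows per ten-second clip */

/* Stub standing in for the per-window detector; a real deployment would
 * call the classifier exported from Reality AI Tools here. */
static int classify_window(const float *samples, int num_samples)
{
    (void)samples;
    (void)num_samples;
    return 0; /* placeholder: always predicts class 0 ("bus") */
}

/* Classify each one-second window, accumulate votes, return the majority class. */
static int classify_clip(const float *clip, int samples_per_window)
{
    int votes[NUM_CLASSES] = {0};

    for (int w = 0; w < NUM_WINDOWS; w++) {
        int label = classify_window(clip + (long)w * samples_per_window,
                                    samples_per_window);
        if (label >= 0 && label < NUM_CLASSES)
            votes[label]++;
    }

    /* Final prediction is the class with the most per-window votes. */
    int best = 0;
    for (int c = 1; c < NUM_CLASSES; c++)
        if (votes[c] > votes[best])
            best = c;
    return best;
}

int main(void)
{
    static float clip[NUM_WINDOWS * 16000]; /* ten seconds at an assumed 16 kHz */
    printf("predicted class: %d\n", classify_clip(clip, 16000));
    return 0;
}
```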

Detection code was then exported and compiled for the desktop environment, where it was incorporated into an app that plays back clips and displays classification results. Though the demonstration runs on a desktop, the detection code could easily run on a Cortex A-series processor or a low-end smartphone.
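As a rough sketch of how exported detection code might be wired into such a playback app, the fragment below assumes a hypothetical per-window interface (the function name `rai_classify_window` and the 16 kHz rate are invented for illustration, not the actual Reality AI export): the app feeds each one-second window of decoded audio to the detector and displays the result.

```c
#include <stdio.h>

/* Hypothetical detector interface; the name and signature are invented
 * for illustration and are not the actual Reality AI export. */
static int rai_classify_window(const short *pcm, int num_samples)
{
    (void)pcm;
    (void)num_samples;
    return 5; /* placeholder result: class 5 ("tram") */
}

static const char *kClassNames[6] = {
    "bus", "car", "city street", "metro station", "train", "tram"
};

/* Called by the playback app for each one-second window of decoded audio. */
static void on_audio_window(const short *pcm, int num_samples)
{
    int label = rai_classify_window(pcm, num_samples);
    if (label >= 0 && label < 6)
        printf("detected: %s\n", kClassNames[label]);
}

int main(void)
{
    static short window[16000]; /* one second at an assumed 16 kHz rate */
    on_audio_window(window, 16000);
    return 0;
}
```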

Building sound classifiers with machine learning

Information from ambient noise

Reality AI readily differentiates between similar-sounding ambient environments captured by the microphones.

Easily Embeddable

This demo shows detection that could be readily implemented on a low-end Cortex A-series processor or in a smartphone environment.

See another demo using accelerometer, sound, or image data

Choose one of the demonstrations below to see other examples of Reality AI technology at work.

Request Access

Become part of the Reality AI community and help shape the AI revolution