Neural Beatbox


ABSTRACT

Rhythm is one of the most ancient means of communication. Neural Beatbox enables anyone to collectively create beats and rhythms using their own sounds. The AI segments and classifies sounds into drum categories and continuously generates new rhythms. By combining the contributions of viewers, it creates an evolving musical dialogue between people. The AI’s slight imperfections enrich the creative expression by generating unique musical experiences.

BACKGROUND

Building on our work in interactive systems and experience design, the setup allows for collaboration between a user and the AI system: while the latter makes the decisions about the results, the content itself comes only from humans.
This largely resonates with the practice of beatboxing, where instruments are removed from the equation to put the emphasis on the creative potential of an individual, their voice, and their body.
The AI therefore becomes a tool for natural expression, trying to make the best of human-produced content. It remains imperfect and produces results that may or may not meet expectations.

TECHNOLOGY

One critical aspect of the installation was to build a setup responsive enough to allow for a fully interactive experience, while still showcasing the value of machine learning in the context of music production.
To achieve this, the system was designed end to end, from server to client side.

On the back-end, the role of the server is divided into two parts:

1. CLASSIFICATION OF SOUND

The first is to receive sound files (recordings from the users), divide them into meaningful segments, and assign a drum class to each segment using a neural network classifier.
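As an illustration, the snippet below is a minimal sketch of this step in Python: it splits a recording at detected onsets and assigns a drum class to each slice. The article does not detail the actual model or features, so `classify_segment` is a hypothetical placeholder; a real implementation would feed a feature such as a spectrogram to the trained classifier.

```python
# Minimal sketch, assuming librosa for audio handling; `classify_segment`
# is a hypothetical stand-in for the trained drum classifier.
import librosa
import numpy as np

DRUM_CLASSES = ["kick", "snare", "hihat", "tom", "clap"]  # assumed label set

def classify_segment(seg, sr):
    # Placeholder: a real classifier would feed a feature such as a mel
    # spectrogram to the trained network; here we pick a label at random.
    _ = librosa.feature.melspectrogram(y=seg, sr=sr)
    return DRUM_CLASSES[np.random.randint(len(DRUM_CLASSES))]

def segment_and_classify(path, sr=22050):
    y, _ = librosa.load(path, sr=sr, mono=True)
    # Use detected onsets (attack points) as segment boundaries.
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
    boundaries = np.concatenate([[0], onsets, [len(y)]])
    segments = []
    for start, end in zip(boundaries[:-1], boundaries[1:]):
        seg = y[int(start):int(end)]
        if len(seg) < sr // 100:  # skip slices shorter than ~10 ms
            continue
        segments.append({"start": int(start), "end": int(end),
                         "class": classify_segment(seg, sr)})
    return segments
```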

2. GENERATE RHYTHM SEQUENCE

The second is to generate the beats (sequences of drums) that will be played.
This generative part is also driven by a trained neural network (a Variational Autoencoder), but it involves extra processing that gives us more control over the choice of sequence: for instance, each drum can be weighted in order to pull a beat with more relevant items within the current context (e.g. if a user records a kick, we will want to update the beat with a sequence containing a kick).
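A rough sketch of how such weighted selection could work is shown below. `decode` is a hypothetical stand-in for the trained VAE decoder, and the pattern shape and weight values are assumptions made purely for illustration.

```python
# Sketch of context-weighted beat selection, assuming a trained VAE decoder.
import numpy as np

NUM_DRUMS, NUM_STEPS, LATENT_DIM = 9, 16, 32  # assumed dimensions

def decode(z):
    # Hypothetical decoder: maps a latent vector to a binary drum pattern
    # of shape (NUM_DRUMS, NUM_STEPS). Random patterns for illustration.
    rng = np.random.default_rng(abs(hash(z.tobytes())) % (2**32))
    return (rng.random((NUM_DRUMS, NUM_STEPS)) > 0.8).astype(int)

def pick_beat(drum_weights, num_candidates=64):
    """Sample candidate beats from the latent space and keep the one whose
    active drums best match the current context weights."""
    best, best_score = None, -np.inf
    for _ in range(num_candidates):
        z = np.random.randn(LATENT_DIM).astype(np.float32)
        pattern = decode(z)
        # Score: weighted count of active steps per drum.
        score = float((pattern.sum(axis=1) * drum_weights).sum())
        if score > best_score:
            best, best_score = pattern, score
    return best

weights = np.ones(NUM_DRUMS)
weights[0] = 3.0  # e.g. a user just recorded a kick: favor kick-heavy beats
beat = pick_beat(weights)
```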

The client side runs as a web app and makes use of the modern features provided by browsers: the media recording API, advanced graphics, etc.
This allows the application to be accessible from any modern machine with minimal configuration.
In the context of the event at the Barbican Centre, the system behind Neural Beatbox was first designed to run as an exhibition app, and thus features reduced interactions to fit the simple interface provided on-site. In the long term, though, it is planned to evolve into a proper browser-friendly application, so that anyone can experiment with it.

While the data processing could be done in the browser with modern APIs, delegating the heavier computations to the server gives us more control over the smoothness of the experience. Being centralized, it also opens the door to further iterations in the future: collaboration could potentially happen across users, establishing a connection between them, with the AI acting as a "curator" in the middle.
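As a sketch of what such a centralized endpoint could look like (the actual server stack is not specified in this article), here is a minimal Flask handler that accepts a recording and returns segment metadata, reusing the hypothetical `segment_and_classify` helper from the classification sketch above.

```python
# Minimal sketch of a centralized upload endpoint, using Flask as a stand-in
# for the unspecified server stack.
import tempfile
from flask import Flask, request, jsonify
from classifier import segment_and_classify  # hypothetical module (see sketch above)

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    # The client posts its recording as multipart form data under "audio".
    audio = request.files["audio"]
    with tempfile.NamedTemporaryFile(suffix=".wav") as tmp:
        audio.save(tmp.name)
        segments = segment_and_classify(tmp.name)  # heavy work stays server-side
    # The client receives only light metadata: segment boundaries and classes.
    return jsonify({"segments": segments})

if __name__ == "__main__":
    app.run(port=8000)
```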

Neural Beatbox at AI: More than Human

DESIGN

To convey the intent of the experience, the visual setup went through a proper design process. We collaborated with Alvaro Arregui (Nuevo.Studio) for that purpose and developed a set of animations to reflect the dynamics of the music and give a rhythm to the piece.

INSTALLATIONS

Date | Title | Place
2019/05 – 2019/08 | AI: More than Human | Barbican Centre, London UK


CREDITS

  • Concept / Direction

Nao Tokui (Qosmo, Inc.)

  • Research / Management

Max Frenzel (Qosmo, Inc.)

  • Development

    Robin Jungers (Qosmo, Inc.)

  • Design

Alvaro Arregui (Nuevo.Studio)
