“Neural Beatbox” is a project that enables new music creation through collaboration between humans and AI.
Users record familiar sounds such as their voice or hand claps, which the AI automatically splits into segments. These segments are assigned to drum parts such as kick and snare, and new, diverse rhythms are continuously generated from them. The AI's occasional “mistakes” during generation lead to a unique musical experience, expanding human creativity and inspiring richer expression.
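The pipeline described above (segment a user's recording, then assign each segment to a drum role) could be sketched in very simplified form as follows. This is a hypothetical signal-processing stand-in, not Qosmo's actual implementation: the real system uses neural networks, whereas the energy-based segmentation, the spectral-centroid heuristic, and all thresholds and function names here (`segment_audio`, `assign_drum_part`) are illustrative assumptions.

```python
import numpy as np

def segment_audio(signal, frame_len=1024, threshold=0.1):
    """Split a mono signal into segments wherever frame energy
    rises above a fraction of the peak frame energy (a crude
    stand-in for learned onset detection)."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).mean(axis=1)
    active = energy > threshold * energy.max()

    segments, start = [], None
    for i, is_active in enumerate(active):
        if is_active and start is None:
            start = i                      # segment begins
        elif not is_active and start is not None:
            segments.append(signal[start * frame_len : i * frame_len])
            start = None                   # segment ends at silence
    if start is not None:                  # trailing segment
        segments.append(signal[start * frame_len :])
    return segments

def assign_drum_part(segment, sr):
    """Map a segment to a drum role by spectral centroid:
    low-frequency content -> kick, mid -> snare, high -> hi-hat.
    Cutoff frequencies are arbitrary illustrative choices."""
    spectrum = np.abs(np.fft.rfft(segment))
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / sr)
    centroid = (freqs * spectrum).sum() / (spectrum.sum() + 1e-9)
    if centroid < 300:
        return "kick"
    elif centroid < 2000:
        return "snare"
    return "hi-hat"
```

For example, a recording containing a low hum, a pause, and a burst of noise would be split into two segments, with the hum assigned to the kick and the noise to the hi-hat; a generative model could then sequence those segments into a rhythm.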
In 2019, “Neural Beatbox for Barbican Centre” was exhibited at the Barbican Centre in London. In 2020, during the COVID-19 lockdown, we updated the application as “Neural Beatbox for Multi-Session,” which allows multiple users to collaborate online.
The project is designed to demonstrate the potential of machine learning in music production and to let users and AI make music together more interactively.
Neural Beatbox for Multi-Session (2020)
Neural Beatbox for Barbican Centre (2019)
Exhibitions
2019/05 – 2019/08 | AI: More than Human | Barbican Centre, London UK
Articles
AI-powered Online Beatboxing Experiment
Credits
Neural Beatbox for Multi-Session (2020)
Development (Front-end): Robin Jungers (Qosmo, Inc.)
Development (Back-end): Bogdan Teleaga (Qosmo, Inc.)
Machine Learning / Direction: Nao Tokui (Qosmo, Inc.)
Machine Learning: Christopher Mitcheltree (Qosmo, Inc.)
Design: Naoki Ise (Qosmo, Inc.)
Illustration: Chisaki Murakami
Project Management: Sakiko Yasue (Qosmo, Inc.), Yumi Takahashi (Qosmo, Inc.)
Music: JEMAPUR
Neural Beatbox for Barbican Centre (2019)
Concept / Direction: Nao Tokui (Qosmo, Inc.)
Research / Management: Max Frenzel (Qosmo, Inc.)
Development: Robin Jungers (Qosmo, Inc.)
Design: Alvaro Arregui (Nuevo.Studio)