AI that generates music according to conditions

  • Music Generation AI x DRIVE

Our music generation system can automatically compose and generate music from a specified genre and song characteristics. A number of control parameters can be set, and changes to them are reflected in real time, allowing nuanced adjustments to be made dynamically in response to inputs such as body movements, biological data from sensors, or the environmental conditions of a particular location.

Feature

Instantly generate MIDI signals

Instantly generate MIDI signals by specifying a music genre, song characteristics, and other attributes

Real-time control

Real-time control of tempo, instruments, number of notes, speed of development, etc. on a measure-by-measure basis


Generate music that changes with biological data and the environment

Continuously generate music that changes according to biological data from sensors, changes in the surrounding environment, and other signals
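As an illustration only, the sketch below maps a hypothetical heart-rate reading to composition parameters such as tempo and note density; the parameter names and value ranges are assumptions for demonstration, not the product's actual interface.

```python
# Illustrative sketch: mapping a biological signal to composition parameters.
# The parameter names (tempo, note_density) and ranges are assumptions
# for demonstration; they are not the product's actual API.

def heart_rate_to_params(heart_rate_bpm: float) -> dict:
    """Map a heart-rate reading (beats per minute) to music parameters."""
    # Clamp the reading to a plausible range before mapping.
    hr = max(50.0, min(170.0, heart_rate_bpm))
    # Higher heart rate -> faster tempo and denser notes (linear mapping).
    tempo = 70 + (hr - 50.0) / 120.0 * 70            # 70-140 BPM
    note_density = 0.2 + (hr - 50.0) / 120.0 * 0.6   # 0.2-0.8 notes per step
    return {"tempo": round(tempo), "note_density": round(note_density, 2)}

if __name__ == "__main__":
    for hr in (60, 95, 150):  # e.g. resting, walking, exercising
        print(hr, "bpm ->", heart_rate_to_params(hr))
```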

Use Case

Functional music

With the goal of enhancing human functions such as concentration and performance during exercise, we can provide dynamic music applications whose generated content changes in real time in response to biological signals.

Music for Safe Driving

We can provide music that continuously changes according to the car's driving conditions. In the future, it can also be used as in-car entertainment as autonomous vehicles become widespread (see the video at the top of this page).

Music that shapes the environment

Music can be generated in hotels, offices, and other living environments to suit the situation at any given time. The generated music is copyright-free and can be used without restrictions on media or location.

Client

Suntory

This product was used on Suntory's Tokucha (special tea) campaign website in a project that generates music matched to the user's diet.


AI music generation site based on dietary data: SUNTORY TOKUCHA MUSIC (https://tokuchamusic.jp/)

Technology

A deep learning model based on Transformers and recurrent neural networks is trained on a wide range of musical pieces to generate MIDI signals. The architecture is configured to accept not only changes in initial conditions but also changes in conditions during playback. The actual tone of the generated music is selected from a synthesizer, sampler, or other sound source. Research has shown positive effects of music on health, but how music specifically affects particular biological responses is an area that requires research and development on a case-by-case basis.
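For intuition only, here is a minimal sketch of generating notes measure by measure while accepting updated conditions between measures. The random sampler stands in for the actual Transformer/RNN model, and all function and parameter names are assumptions.

```python
# Minimal sketch of measure-by-measure conditional generation.
# The random "sampler" below is a stand-in for the actual Transformer/RNN
# model; function and parameter names are assumptions for illustration.
import random

def sample_measure(conditions: dict, steps: int = 16) -> list:
    """Return (step, pitch) note events for one measure, biased by conditions."""
    density = conditions.get("note_density", 0.5)      # probability of a note per step
    low, high = conditions.get("pitch_range", (48, 72))
    return [(step, random.randint(low, high))
            for step in range(steps) if random.random() < density]

def generate(condition_stream, n_measures: int = 4):
    """Generate music, re-reading the conditions before every measure."""
    for measure in range(n_measures):
        conditions = condition_stream(measure)          # conditions may change mid-playback
        notes = sample_measure(conditions)
        print(f"measure {measure}: tempo={conditions['tempo']}, notes={notes}")

if __name__ == "__main__":
    # Example: tempo and density rise over time, as if driven by sensor input.
    generate(lambda m: {"tempo": 90 + 10 * m, "note_density": 0.3 + 0.1 * m})
```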

Tech Spec


Price System

Licensing period: Monthly
Developer's license: Yes


Input/Output

Input: Composition parameters (genre, tempo, instrument, number of notes, development, etc.)
Output: MIDI signal or WAV


Operating Environment

Cloud computing: Standard API provided
On-premise environment: Available by consultation
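As a hedged illustration of the cloud workflow, the snippet below posts composition parameters to a generation endpoint and saves the returned MIDI data. The endpoint URL, authentication scheme, and parameter names are placeholders, not the documented API.

```python
# Hypothetical example of requesting generation from a cloud API.
# The endpoint URL, authentication scheme, and parameter names are
# assumptions for illustration; they are not the documented API.
import requests

params = {
    "genre": "ambient",            # composition parameters as listed under Input/Output
    "tempo": 96,
    "instruments": ["piano", "pad"],
    "note_density": 0.4,
}

response = requests.post(
    "https://api.example.com/v1/generate",   # placeholder endpoint
    json=params,
    headers={"Authorization": "Bearer <API_KEY>"},
    timeout=30,
)
response.raise_for_status()

# Save the returned MIDI data (the API can also return WAV).
with open("generated.mid", "wb") as f:
    f.write(response.content)
```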


Processing Speed

Real-time

Other products

Timbre Transfer