OpenAI presents MuseNet: a deep neural network to generate musical compositions


OpenAI has built a new deep neural network called MuseNet to compose music, which it detailed in a blog post yesterday. The research organization has made a prototype co-composer powered by MuseNet available to users until May 12.

What is MuseNet?

MuseNet uses the same general-purpose unsupervised technology as OpenAI’s GPT-2 language model: the Sparse Transformer. This transformer allows MuseNet to predict the next note based on the given set of notes. To enable this behavior, the Sparse Transformer uses something called “sparse attention”, in which each output position computes weights from only a subset of input positions.
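The idea of computing attention weights from only a subset of input positions can be illustrated with a toy, single-head sketch (a minimal illustration, not OpenAI’s implementation; the local attend-to-the-last-few-positions pattern is just one example of a sparse mask):

```python
import numpy as np

def sparse_attention(q, k, v, allowed):
    """Toy single-head attention: each output position attends only to
    the subset of input positions marked True in the 'allowed' mask,
    instead of the full sequence."""
    scores = q @ k.T / np.sqrt(q.shape[-1])      # (T, T) attention logits
    scores = np.where(allowed, scores, -np.inf)  # mask out disallowed positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

T, d = 8, 4
rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(T, d)) for _ in range(3))
# Example sparse pattern: each position attends only to itself and the
# two previous positions (a simple local, causal subset of inputs).
ones = np.ones((T, T), dtype=bool)
allowed = np.tril(ones) & ~np.tril(ones, -3)
out = sparse_attention(q, k, v, allowed)
print(out.shape)  # (8, 4)
```

Because each row of the attention matrix is mostly masked out, far fewer position pairs need to be computed, which is what lets the real model scale to the long contexts described below.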

For the audio pieces, a 72-layer network with 24 attention heads is trained using the recompute and optimized kernels of the Sparse Transformer. This gives the model a long context, allowing it to remember the long-term structure of a piece.

To train the model, the researchers collected training data from a variety of sources. The dataset includes MIDI files donated by ClassicalArchives and BitMidi. The dataset also includes data from online collections including jazz, pop, African, Indian, and Arabic styles.

The model is able to generate 4-minute musical compositions with 10 different instruments, and it takes into account the different musical styles of composers and groups such as Bach, Mozart, and the Beatles. It can also convincingly blend different styles of music to create an entirely new piece.

The MuseNet prototype, which is made available to users for testing, only ships with a small subset of options. It supports two modes:

  • In simple mode, users can listen to uncurated samples generated by OpenAI. To generate a piece of music yourself, all you need to do is choose a composer or style and, optionally, the start of a famous piece.
  • In advanced mode, users can interact directly with the model. Generating music in this mode will take much longer but will result in a brand new song. This is what advanced mode looks like:

Source: OpenAI

What are its limitations?

The music generation tool is still a prototype, so it has some limitations:

  • To generate each note, MuseNet calculates probabilities over all possible notes and instruments. Although the model prioritizes your instrument choices, it may still choose something else.
  • MuseNet has difficulty generating a piece of music when there are odd pairings of styles and instruments. The generated music will sound more natural if you choose instruments close to the usual style of the composer or group.
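The first limitation follows from how sampling works: boosting the probability of the chosen instrument tokens does not forbid the others. A hedged sketch of that behavior (the function name, `boost` parameter, and toy vocabulary are illustrative assumptions, not MuseNet’s actual API):

```python
import numpy as np

def sample_note(logits, preferred, boost=2.0, rng=None):
    """Hypothetical sketch: bias sampling toward preferred note/instrument
    tokens without forbidding the rest, so other tokens can still win."""
    rng = rng or np.random.default_rng()
    logits = logits.copy()
    logits[preferred] += boost              # prioritize the user's instrument choices
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                    # softmax over ALL tokens, not just preferred
    return rng.choice(len(probs), p=probs)  # stochastic: any token may be drawn

vocab = 10                                  # toy vocabulary of note/instrument tokens
logits = np.zeros(vocab)
token = sample_note(logits, preferred=[2, 3], rng=np.random.default_rng(1))
print(token)
```

Since every token keeps nonzero probability after the softmax, an occasional off-menu instrument is expected rather than a bug.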

Many users have already started testing the model. While some are quite impressed with the AI-generated music, others think it is pretty obvious that the music is machine-generated and lacks emotion.

Here is an opinion a Redditor shared about different styles of music:

My take on the classical parts of it, as a classical pianist. Overall: stylistic consistency at the ~ 15 second scale. Better than anything I’ve heard so far. Seems to have an attachment to pedal notes.

Mozart: I would say that the distinguishing feature of Mozart as a composer is that every bar “rings true”. Even without knowing the piece, you can usually tell when a performer made a mistake and strayed from the score. Mozart’s samples sound… wrong. There are parallel fifths everywhere.

Bach: (I heard a sample of Bach in the live concert) – He had pretty much the right consistency in the melody, but zero counterpoint, which is the defining characteristic of Bach. Perhaps the conditioning is not strong enough?

Rachmaninoff: Known for his lush musical textures and hauntingly beautiful melodies. The samples have about the correct texture, although I would describe them more as cloudy than lush. No melody to hear.

Another user commented, “It can be academically interesting, but the music always sounds wrong enough to be obnoxious (i.e. there’s no way I’m spending time listening to it on purpose).”

Although this model is still in its infancy, an important question that comes to mind is who will own the generated music. “While discussing this with my friends, an interesting question arose: who owns the music that this produces? Couldn’t we generate music and upload it to Spotify and get paid based on the number of plays?” another user added.

To find out more, visit the official OpenAI website. You can also watch an experimental MuseNet concert broadcast live on Twitch.


About Madeline J. Carter
