MidiMe is a machine learning experiment that trains a small model on your own music, so its output sounds like you. All the training happens directly in the browser using TensorFlow.js -- no servers or backends here!

Try loading a single full song to get outputs that sound like variations on it, or load multiple songs to get samples that combine characteristics of all of them.

The models we're using are either monophonic (for melodies) or polyphonic (multi-instrument trios), and the results depend on which one you choose. In the monophonic case, the samples will (hopefully!) sound very close to the main melody of the song. In the polyphonic case, the reconstruction of the melody will be significantly worse, but the trios will share musical patterns (like motifs) with the original. Choose your own adventure:
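For the curious, the monophonic path can be sketched with magenta.js roughly like this: a pre-trained MusicVAE encodes 2-bar chunks of your song into latent vectors, and MidiMe is the small model trained on those vectors in the browser. This is a hedged sketch, not this demo's exact code; the checkpoint URL, chunk size, and epoch count are typical-looking assumptions.

```javascript
// Sketch only: assumes the @magenta/music API (mm.MusicVAE, mm.MidiMe)
// and a hosted melody checkpoint; values below are illustrative.
import * as mm from '@magenta/music';

const MEL_CHECKPOINT =
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_2bar_small';

async function trainOnSong(noteSequence) {
  // A pre-trained MusicVAE turns chunks of the song into latent vectors z.
  const mvae = new mm.MusicVAE(MEL_CHECKPOINT);
  await mvae.initialize();

  const quantized = mm.sequences.quantizeNoteSequence(noteSequence, 4);
  const chunks = mm.sequences.split(mm.sequences.clone(quantized), 16 * 2);
  const z = await mvae.encode(chunks);

  // MidiMe is the small personal model, trained entirely in the browser.
  const midime = new mm.MidiMe({epochs: 100});
  midime.initialize();
  await midime.train(z);

  // Sample from MidiMe's latent space, then decode back into notes.
  const zSample = await midime.sample(1);
  const [sample] = await mvae.decode(zSample);
  return sample; // a NoteSequence that (hopefully!) sounds like your song
}
```

The polyphonic case works the same way, just with a trio MusicVAE checkpoint instead of a melody one.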

Built with magenta.js. See the code on Glitch.