The Brain Piano

You create music with your brain. Now, create music with someone else’s using the BRAIN PIANO.

Creating the Brain Piano: We started by recording the brain’s response to each note of the piano. Then, using a simple process called “sonification,” we converted these neural recordings back into sound clips.

Take a listen! (Click on the orange arrow below or listen to a full playlist here.) What you’ll hear is the result of thousands of neurons synchronously firing in tune. You can combine these sound clips to recreate your favorite song or to compose your own neural symphony.


To start composing, simply download the Neural Symphony library containing 100+ sounds of the human brain mapped to the piano keyboard, load it into your favorite music sampler, and let your creativity take over.

DOWNLOAD HERE (2 MB .zip file)

Specs:

  • Contains 100+ royalty-free audio files (Notes C0 – B8, equal-tempered tuning, A4 = 440 Hz)
  • .WAV file format
  • 25 kHz, 32-bit resolution
  • Compatible with all major music software
  • PC- and Mac-compatible

John Spencer also created a soundfont file of the sample library (DOWNLOAD HERE) that is compatible with soundfont players.

The contents of Neural Symphony are free to use; if you use them, please cite: Tichko, P., & Skoe, E. (2017).


Background:

Technological and ideological developments during the early 20th century gave rise to both modern neuroscience and experimental music. However, it wasn’t until the post-war period (1945-), with the emergence of the musical avant-garde, that neuroscience and music began to interact, forming creative connections that pushed the boundaries of brain science and music performance. Hans Berger (1873-1941) became the first scientist to record cortical brain activity using the electroencephalogram (EEG) in 1929. More than three decades would pass before Alvin Lucier (1931-2021) became the first composer to employ EEG techniques, in his piece Music for Solo Performer (1965). To perform Lucier’s work, the performer is instructed to sit quietly in a chair while EEG electrodes are affixed to his or her scalp. Brain activity detected by the EEG electrodes is then amplified and used to vibrate percussive instruments placed throughout the performance environment (Lutters & Koehler, 2016). Inspired by Lucier’s work and our own research on the musical brain, we created the BRAIN PIANO.


Frequently Asked Questions (FAQ):

FAQ: How do I create music with the Brain Piano?

The Neural Symphony library is a zip file containing 100+ royalty-free audio files (25 kHz, 32-bit) in the WAV file format. The sound files, which span notes C0 – B8 (equal-tempered tuning, A4 = 440 Hz), are compatible with all major music software.

There are many ways to create neuromusic using the sound clips included in the Neural Symphony library. The easiest way to make music with this library is to work with a musical instrument called a “sampler.” A sampler is a piece of software or hardware that lets you load audio files and map them directly to keys on a keyboard interface. You can play the samples by pressing the keys on the keyboard interface or, in some cases, program a melody directly into the software by hand. A quick Google search will turn up a variety of free samplers available online.

Note: to use a software sampler, you may need a host application called a digital audio workstation (DAW). While professional DAWs can be expensive, a number of free DAWs are available online for both Mac and PC.
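
If you would rather script a melody than use a sampler, the sketch below shows one way to do it in Python by concatenating sample WAVs end to end. The folder and note-based file names (e.g., neural_symphony/C4.wav) are placeholders, not the library’s actual naming scheme, and the sketch assumes NumPy and SciPy are installed.

    # Minimal sketch: build a melody by concatenating library samples end to end.
    # The directory and file names below are assumptions, not the library's
    # actual naming scheme.
    import numpy as np
    from scipy.io import wavfile

    melody = ["C4", "E4", "G4", "C5"]              # notes to play, in order
    clips, rate = [], None

    for note in melody:
        sr, data = wavfile.read(f"neural_symphony/{note}.wav")
        rate = rate or sr                          # all samples share one rate
        clips.append(data.astype(np.float32))

    song = np.concatenate(clips)
    peak = np.max(np.abs(song))
    if peak > 0:
        song = song / peak                         # normalize to avoid clipping
    wavfile.write("melody.wav", rate, song)        # write a 32-bit float WAV

A dedicated sampler or DAW will of course give you far more control (velocity, looping, effects); scripting is just the quickest way to hear the samples strung together.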

FAQ: The Brain Piano was created using neural sonification. So, what is neural sonification?

Sonification is the process of transforming otherwise non-audible data into a format that can be played back and listened to. For the Brain Piano, we transformed recordings of brain responses into an audio file format that can be used for music performance, composition, and sound design. Sonification is a creative technique for producing novel sounds to use when composing music. The sonification process also enables scientists to listen to their data instead of just visualizing it (Weiss & Bidelman, 2015; Bidelman, 2018). This sonification technique has also been adopted in areas outside of auditory neuroscience, such as the study of handwriting dynamics (Danna et al., 2015) and of neural activity during seizures.
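
As a rough illustration of the general idea (not our exact pipeline), the Python sketch below turns an arbitrary numeric time series into a playable WAV file. The function name, normalization choices, and test signal are all hypothetical.

    # Illustrative sonification sketch: scale any 1-D time series into the
    # audio range and save it as a WAV file that can be played back.
    import numpy as np
    from scipy.io import wavfile

    def sonify(series, sample_rate_hz, out_path="sonified.wav"):
        """Convert a numeric time series into a playable 32-bit float WAV."""
        x = np.asarray(series, dtype=np.float32)
        x = x - x.mean()                   # remove any DC offset
        peak = np.max(np.abs(x))
        if peak > 0:
            x = x / peak                   # scale into the [-1, 1] audio range
        wavfile.write(out_path, int(sample_rate_hz), x)

    # Example: sonify a 1-second, 440 Hz test signal "recorded" at 25 kHz
    t = np.arange(0, 1.0, 1 / 25000)
    sonify(np.sin(2 * np.pi * 440 * t), 25000)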

FAQ: How exactly did we make these neural sonifications?

To create these neural sonifications, we enlisted a volunteer to listen to each note of the piano while we recorded brain activity from sensors placed on their head. To derive the final neural response used to create the sonifications, hundreds of neural responses to the same note were averaged together. Finally, the neural response to each note was converted to an audio file that can be played back and listened to. Voila!
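
Schematically, the averaging step looks like the Python sketch below, where `epochs` is a stand-in array of repeated recordings of the response to one note; our actual preprocessing pipeline involves more than this.

    # Schematic of trial averaging: many noisy recordings of the same response
    # are averaged so the random noise cancels and the response remains.
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 25000                                    # sampling rate in Hz
    t = np.arange(0, 0.3, 1 / fs)                 # one ~300 ms epoch
    response = 0.1 * np.sin(2 * np.pi * 220 * t)  # stand-in "neural" response
    epochs = response + rng.normal(0, 1, (500, t.size))  # 500 noisy trials

    average = epochs.mean(axis=0)   # the single waveform later written out as audio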

FAQ: Why do some notes sound louder than others? And why do some notes appear to be absent from the sonifications?

Some of the lower-frequency sounds are perceptually louder than others. This is likely due to the complex interaction between the neural generators that underlie the responses (Tichko & Skoe, 2017).

For these low-frequency sounds, the brain can faithfully follow the cyclical, pitch-bearing features of the sound. However, above ~1000 Hz (or above a D6 on the piano), this frequency tracking starts to become weaker in the brain. This means that for notes above this range, the neural response to the note may be difficult to hear because it is embedded in noise. (For these notes, your brain uses a different approach for representing frequency.)
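
For reference, the equal-tempered tuning quoted in the specs (A4 = 440 Hz) gives each note the frequency f = 440 × 2^((n − 69)/12), where n is the MIDI note number. The short sketch below prints a few frequencies around the ~1000 Hz limit discussed above.

    # Equal-tempered note frequencies (A4 = 440 Hz); MIDI note 69 is A4.
    def note_freq(midi_note, a4=440.0):
        return a4 * 2 ** ((midi_note - 69) / 12)

    for name, midi in [("A4", 69), ("B5", 83), ("C6", 84), ("D6", 86)]:
        print(f"{name}: {note_freq(midi):7.1f} Hz")
    # A4:   440.0 Hz   B5:   987.8 Hz   C6:  1046.5 Hz   D6:  1174.7 Hz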

FAQ: Why do I hear noise in the samples?

Your brain is always on, operating in the background. When recording neural responses to sound, this background activity is captured along with the response to the sound. The electrodes we used to record the brain responses do not discriminate in the electrical activity they pick up: in addition to the brain’s electrical activity, they are sensitive to electrical noise in the environment, such as from nearby computers.

We left the samples very close to their raw form to give you insight into neurophysiology as well as our recording techniques.

FAQ: Why are the samples included in Neural Symphony so short?

We used short sounds (<300 ms) for a number of reasons. In addition to allowing us to record a large number of frequencies in a shorter amount of time, short sounds that quickly turn on and off can boost the neural response above the noise floor and into the audible range. Even with short sound clips, most samplers can loop a sample, which will let you sustain the playback of the neural sonifications.
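
If your sampler does not loop for you, a crude way to sustain a clip is simply to repeat it end to end, as in the sketch below. The file name is a placeholder, the clip is assumed to be mono, and a naive repeat like this will click at the loop points, which real samplers smooth with crossfades.

    # Crude loop: tile a short clip until it lasts about 2 seconds.
    # "neural_symphony/A4.wav" is a placeholder name; mono audio is assumed.
    import numpy as np
    from scipy.io import wavfile

    sr, clip = wavfile.read("neural_symphony/A4.wav")
    clip = clip.astype(np.float32)

    repeats = int(np.ceil(2.0 * sr / len(clip)))    # copies needed for ~2 s
    sustained = np.tile(clip, repeats)[: 2 * sr]    # trim to exactly 2 s
    peak = np.max(np.abs(sustained))
    if peak > 0:
        sustained = sustained / peak                # normalize
    wavfile.write("A4_sustained.wav", sr, sustained)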

FAQ: Why is the sample rate of the neural recordings 25 kHz? Sound files typically have a higher sample rate, right?

The brain recordings were sampled at a lower rate than is typically used for digital-audio applications because electrophysiological recordings of the brain generate large amounts of raw data. Yet even at 25 kHz, acoustic features of the stimulus can be heard in the response with high fidelity.
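
As a back-of-the-envelope comparison (the figures below are illustrative calculations, not measurements from our recordings), per-channel data rates scale directly with the sample rate:

    # Raw data rate per channel for 32-bit samples at several sample rates.
    bytes_per_sample = 4
    for rate_hz in (25_000, 44_100, 96_000):
        mb_per_s = rate_hz * bytes_per_sample / 1e6
        print(f"{rate_hz:>6} Hz: {mb_per_s:.2f} MB/s per channel")
    #  25000 Hz: 0.10 MB/s   44100 Hz: 0.18 MB/s   96000 Hz: 0.38 MB/s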

FAQ: Where can I learn more about the science behind human brain responses to sound?

Our lab has multiple papers on the frequency-following response (FFR), the type of brain response recorded here. For technical information on how the response is typically recorded, please see Skoe & Kraus (2010) and Tichko & Skoe (2017).

FAQ: I have made some music with Neural Symphony. Where can I share it?

We welcome you to submit your piece to our SoundCloud page (link).

Creating a SoundCloud account is free and allows you to upload an audio file of your song. Using the hashtags #neuromusic and #neuralsymphony will also help promote your work.


Media Coverage and Events Featuring the Brain Piano

2019

The BRAIN PIANO was featured during the inaugural activities for UConn’s new president.


2018

The UConn Podcast, episode 3: Play Us a Song, You’re the Piano Brain

Association for Research in Otolaryngology Outreach Event (video link)


2017

UConn Today, December 6, 2017: The Music of the Brain



Works Cited:

Batra, R., Kuwada, S., & Maher, V. L. (1986). The frequency-following response to continuous tones in humans. Hearing Research.

Bidelman, G. (2018). Sonification of scalp-recorded frequency-following responses (FFRs) offers improved response detection over conventional statistical metrics. Journal of Neuroscience Methods.

Danna, J., Paz-Villagran, V., Gondre, C., Aramaki, M., Kronland-Martinet, R., Ystad, S., & Velay, J.L. (2015). Let Me Hear Your Handwriting! Evaluating the Movement Fluency from Its Sonification. PLOS ONE.

Galbraith, G. C., Arbagey, P. W., Branski, R., Comerci, N., & Rector, P. M. (1995). Intelligible speech encoded in the human brain stem frequency-following response. Neuroreport.

Lutters, B., & Koehler, P. J. (2016). Brainwaves in concert: the 20th Century sonification of the electroencephalogram. Brain.

Skoe, E., & Kraus, N. (2010). Auditory brainstem response to complex sounds: a tutorial. Ear and Hearing.

Tichko, P., & Skoe, E. (2017). Frequency-dependent fine structure in the frequency-following response: The byproduct of multiple generators. Hearing Research.

Weiss, M., & Bidelman, G. (2015). Listening to the Brainstem: Musicianship Enhances Intelligibility of Subcortical Representations for Speech. Journal of Neuroscience.