Does AI Prefer Digital or Analog Music?
- Julie Simmons
A music journalist gets into a car with a record producer and a gaming executive.
(This is not the setup for a joke.)
As they were curling their way up a mountain outside of Silicon Valley, the conversation spun around to the topic of analog vs. digital music.
“Remember that question about whether robots would listen to analog or digital music?” the record producer piped up.
The gaming executive chuckled then asserted how artificial intelligence (AI) should “naturally prefer” digital music because robots are digital beings. The record producer argued that even a semi-intelligent robot would easily be able to recognize the superior sound quality of analog recordings.
Fifty years ago, attempting to answer a question about robots would have started with a reference to a sci-fi movie. But today, if we really want to know whether AI would prefer digital or analog, we need to understand how AI inventors are programming for preference.
Programming for Preference
While no one on Google or Quora had posed the exact question "Would a robot prefer digital or analog music?", I found a response from Aaron Hosford about "AI preferences" that served as a good starting point. Hosford is a musician and robotics programmer who says he has written programs that actually make music. On Quora, Hosford had answered a question by arguing that all forms of intelligence, human and machine alike, have preferences.
“Preferences are intrinsic to motivation,” he elaborated. “Without motivation, a machine is nothing more than a stimulus/response mechanism. In other words, if a machine never makes choices, it is not intelligent.”
Motivated to get an answer to the robot preference question, I reached out to Hosford and asked, "Would a robot prefer digital or analog music?" and "Would it be possible to program a robot to prefer the sound of analog or digital music?"
Hosford confirmed that it would not be possible for an AI to inherently prefer anything. Preference is both programmed and learned over time. "However," he explained, “it would indeed be possible to program a robot to prefer analog or digital music. If I were going to build such a machine, I would start by using deep learning to train an Artificial Neural Network (ANN) to correctly categorize music samples as either analog or digital. This would give the robot the ability to distinguish the two forms of music.”
Hosford went on to describe how, after programming the AI to recognize the difference between analog and digital, he could then apply a reinforcement learning algorithm. If he wanted his robot to develop an appreciation for analog, he would consistently expose it to analog music, delivering a reward signal every time the AI selected correctly.
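Hosford's two-stage recipe (a classifier that learns to tell the formats apart, then a reward signal that shapes a preference) can be caricatured in a few lines of Python. To be clear, this is an invented illustration, not Hosford's code: the two toy audio "features," the single-perceptron classifier standing in for a deep network, and the reward-averaging update are all assumptions made for the sketch.

```python
import random

random.seed(0)

# Toy features standing in for real audio descriptors (hypothetical):
# here, "analog" samples have more surface noise and a softer high end.
def make_sample(kind):
    if kind == "analog":
        return [random.uniform(0.6, 1.0), random.uniform(0.0, 0.4)], 1
    return [random.uniform(0.0, 0.4), random.uniform(0.6, 1.0)], 0

# Stage 1: the "deep learning" step, shrunk to a single perceptron.
# It learns to label samples as analog (1) or digital (0).
weights, bias, lr = [0.0, 0.0], 0.0, 0.1
data = [make_sample(random.choice(["analog", "digital"])) for _ in range(200)]
for _ in range(20):
    for features, label in data:
        pred = 1 if weights[0] * features[0] + weights[1] * features[1] + bias > 0 else 0
        err = label - pred
        weights = [w + lr * err * f for w, f in zip(weights, features)]
        bias += lr * err

# Stage 2: reinforcement. The robot is consistently exposed to analog
# music; each time it correctly selects "analog," a reward nudges its
# preference score upward (a simple running-average update).
preference = 0.0  # near 1.0 means "prefers analog"
for _ in range(100):
    features, _ = make_sample("analog")
    says_analog = weights[0] * features[0] + weights[1] * features[1] + bias > 0
    reward = 1.0 if says_analog else 0.0
    preference += 0.1 * (reward - preference)

print(round(preference, 2))
```

Because the two classes here are cleanly separable, the preference score climbs toward 1.0; the point is only to show how "recognize, then reward" can turn a classifier into something that looks like taste.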
“The AI could be explicitly programmed with preferences, or the AI could develop an emergent preference” over time, Hosford added. But perhaps more compelling than an emergent preference is an AI designed with a reward signal tied to human satisfaction.

Does That Make You Happy?
So, for a moment, let's forget about analog vs. digital.
Hosford invited, “Imagine AI programmed to only care about making humans happy.” What if you had your very own robot that selected, or maybe even created, songs that made you happy? Or maybe you could program it to satisfy feelings of aggression.
Hosford supposed a computational creation and shared, “I would let [the AI] prefer music that it believed contributed to human happiness. Its preferences would derive from its beliefs about the relationship between observable features of the music and human moods. If the machine was exposed to lots of people really enjoying a particular type of music, this might have a significant effect on its beliefs, and therefore on its preferences. An AI with indirectly derived preferences like this would have to be relatively sophisticated -- capable of reasoning or at least modeling its environment in some way -- for secondary preferences to develop.”
Unlike streaming services that recommend music based on playlists and purchasing habits, what Hosford suggested is more comparable to the movie Big Hero 6. In the film, Baymax, the AI healthcare provider, scans his patient’s serotonin and hormone levels and ultimately asks, "Will doing that make you happy?"

So, a little like Baymax, Melomics' music streaming service, @life, was designed to select music depending on a person's surroundings and activity. For example, while driving, @life would select songs "according to the speed as estimated by GPS, relaxing music will play while immersed in dense traffic, changing to activating music when zooming the highway." And while the listener tried to fall asleep, @life would play tracks that "will get more and more relaxing, depending on the body movements, disappearing after falling asleep."
Although @life is no longer available, the fact that humans continue to program AI to appreciate music raises the existential question of whether humans themselves were programmed to appreciate music.
The Golden Mean
1.61803398875...
For those who don't know, that number expresses a particular relationship between two quantities: the whole relates to the larger part exactly as the larger part relates to the smaller. It’s called the Golden Mean, and most people don’t even realize they respond to it, sometimes on a daily basis. The Golden Mean (aka the Golden Ratio) is directly linked to the Fibonacci sequence: the ratios of consecutive Fibonacci numbers converge to it. The Fibonacci sequence, in particular, can be seen in nature's patterns, such as the spiraling seeds in the middle of a sunflower.
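A quick way to see the link between the two: divide each Fibonacci number by its predecessor and watch the result home in on 1.61803398875... (a minimal sketch; the helper function is mine, not from any source in this article).

```python
# Each Fibonacci number divided by its predecessor approaches the
# golden ratio, phi = (1 + sqrt(5)) / 2 = 1.61803398875...
def fib_ratio(n):
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return b / a

print(fib_ratio(10))   # already close to phi
print(fib_ratio(40))   # closer still
```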
Certain equations instinctively tell us the perfect place to position a painting on a wall based on where we’re standing. The ratio's proportions have even been applied to famous architecture. So what about sound? Is there such a thing as a Golden Mean or Fibonacci series in music, one that suggests humans have been programmed to respond to music?
Musical Ratios
In the NOVA episode "The Great Math Mystery," jazz musician Esperanza Spalding demonstrates how Pythagoras and the Greeks discovered three musical ratios that were most pleasing to humans: the octave, a fifth and a fourth. Esperanza plucks notes on her upright bass so listeners can hear how an octave is basically the first two notes of “Somewhere Over the Rainbow;" in vibrational terms, that's the relationship of 2:1.
A fifth can be heard in the first two notes of "Twinkle Twinkle Little Star" or a ratio of 3:2. And a fourth is the beginning of "Here Comes the Bride" or a 4:3 ratio. For a more contemporary example of how humans might have been programmed to prefer certain types of music, we look to the British comedy band, Axis of Awesome. Their musical montage, "4 Chords," has received over 4 million views on YouTube. Their claim is that "the most popular songs from the past 40 years use the exact same four chords."
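Those ratios translate directly into frequencies. Starting from a base pitch of A = 220 Hz (my choice of reference note, not one from the NOVA episode), multiplying by each Pythagorean ratio gives the pitch of the interval:

```python
base = 220.0  # A3, an arbitrary reference pitch in Hz

# The three Pythagorean ratios the Greeks found most pleasing.
for name, ratio in [("octave (2:1)", 2 / 1),
                    ("fifth (3:2)", 3 / 2),
                    ("fourth (4:3)", 4 / 3)]:
    print(f"{name}: {base * ratio:.2f} Hz")
# octave (2:1): 440.00 Hz
# fifth (3:2): 330.00 Hz
# fourth (4:3): 293.33 Hz
```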
Granted, the Axis of Awesome’s theory focuses solely on what Western culture has deemed "popular." And, really, those "popular songs" were deliberately served to the general public by music industry leaders and radio station programmers. But before we sneer at the industry for trying to program us to prefer certain music, let's bring the focus back to that seemingly insignificant question of whether robots would prefer digital or analog music.
The line of creator and creation has been blurring ever since AI started making its own music.
Artificial Composers
At least 20 years ago, computer software programs were already helping people compose music digitally. Even non-musicians could plot out notes on a series of scales, assign a few instruments, and listen to their very own orchestral composition. Now musical AIs, known as "artificial composers," are writing and performing autonomously. Two such AIs are "Iamus" (named for the figure in Greek myth who was fluent in bird languages) and "Melomics109."
An article in studentpulse.com reported, "Iamus takes a self-generated piece of music and mutates it, checking to see how the mutations conform to its prescribed rules, both musically stylistic rules and biological, human limitations. The best products are allowed to mutate further, allowing the most fit [songs] to continue until all the conditions are met and whole pieces of music are formed." While Iamus produces modernist classical music, Melomics109 is designed to create "contemporary popular music" for the masses.
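The mutate-evaluate-select loop described above is, at heart, an evolutionary algorithm, and it can be caricatured in a few lines. Everything here is invented for illustration: the eight-degree "scale," the single stylistic rule rewarding small melodic steps, and the hill-climbing loop are my assumptions, and Iamus's actual rules and representation are far richer.

```python
import random

random.seed(1)

SCALE = list(range(8))  # scale degrees 0-7 as a stand-in for pitches

def fitness(melody):
    # Toy "stylistic rule": prefer small steps between consecutive
    # notes (0 is the best possible score, large leaps are penalized).
    return -sum(abs(a - b) for a, b in zip(melody, melody[1:]))

def mutate(melody):
    # Change one randomly chosen note to a random scale degree.
    m = melody[:]
    m[random.randrange(len(m))] = random.choice(SCALE)
    return m

# Start from a random "self-generated" melody, then repeatedly mutate
# it and keep whichever variant better conforms to the rule.
melody = [random.choice(SCALE) for _ in range(16)]
for _ in range(2000):
    child = mutate(melody)
    if fitness(child) >= fitness(melody):
        melody = child

print(fitness(melody))
```

After a couple of thousand mutations the score climbs toward 0, i.e. the surviving melody conforms to the rule; Iamus applies the same survive-and-mutate-further logic with musically and biologically grounded constraints.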
The general public has yet to recognize a "popular song" made by Melomics109, but who knows – a day might come when the music journalist gets into a self-driving car with the music producer and gaming expert and the journalist asks, "Who's your favorite AI musician?"
UPDATE: The original conversation about robots preferring digital or analog music took place in March 2014. Since this article was originally published in Reverb (2016), an indie rock band called Velvet Sundown released two albums in 2025 and appeared to be going viral on Spotify before the band was exposed as AI-generated. According to the French streaming service Deezer, more than 20,000 AI-generated tracks are uploaded to its platform daily.
***
Do you like to talk about music in a way that's intellectual, civil and collaborative? Ask to join the Facebook Group, Music Makes You Think.