
3 Reasons Why Music Is Ideal for Learning and Teaching Data Science | by Max Hilsdorf | Sep, 2022



Image by the Author.

Music has always been a great passion of mine. So much so that my first university degree was in musicology. When I started to learn data science, AI music just seemed like the obvious path forward. After studying and working in AI music for over two years now, I have come to realize that, if I could go back in time, I would choose AI music as my starting point all over again. Because I will be taking on a teaching role soon, I have also been thinking deeply about music as a tool for teaching data science.

In this article, I argue that music is ideal for learning and teaching data science because:

  1. Music is fun and engaging.
  2. AI music encompasses most AI disciplines.
  3. There are lots of unsolved problems.
“oil painting of a wide variety of species dancing around a musical instrument, meditative, spiritual, heavenly lighting” — Generated by the Author using StableDiffusion.

Now, this first argument seems like a no-brainer, but it is not as trivial as it sounds. We cannot ignore that data science can be a really dry and abstract subject. Linear algebra, Bayesian statistics, gradient descent, backpropagation — this stuff is not a piece of cake, especially for non-STEM students/graduates or for people without an academic degree.

Music is Motivating for Everyone

When things get challenging, you need to summon more willpower to stay focused and push through difficult subjects. What you really need in these situations, according to self-determination theory (SDT), is motivation that is as intrinsic as possible. If you are fully externally motivated (“I need to study statistics because my employer wants me to”), you have already lost the battle. And although true intrinsic motivation would be ideal, few people love to compute gradients just for the sake of it.

In the real world, you only get to pick between a more and a less intrinsic source of motivation. Which of the following two mindsets would you prefer?

  1. “I need to study statistics because I want to become a Data Scientist.”
  2. “I need to study statistics because it will help me understand how Spotify recommends new music to me.”

In my estimation, both are decent. Still, I imagine most of you would choose mindset 2. Why? Because it does not only matter in an abstract, future-oriented sense; it affects your everyday life almost immediately. This is the kind of motivation that will get you to sit down and study. Music is deeply embedded in the everyday lives of almost everyone. Why teach a class to build predictive models that classify different kinds of monstera plants when you could teach them to build a genre recognition model?

Listening to Music is Fun

Aside from the fact that building AI for music applications is pretty cool, actually listening to music is even more fun. I have been building music classification models professionally for almost two years. The most fun thing I got to do — by a large margin — was to qualitatively evaluate my models by listening to music and judging the model’s predictions. There is something deeply vivid about learning the strengths and weaknesses of your AI by listening to music alongside it.

Furthermore, as a Data Scientist, you sometimes have to benchmark your models against human judgment. Let me ask bluntly: Would you rather listen to 200 pieces of music to classify their genre, or go through 200 images of monstera plants and figure out which variety each one is?

“multiple hands reaching for a violin, heavenly background, gentle” — Generated by the Author using StableDiffusion.

Every Data Scientist knows that dealing with text data is something entirely different from dealing with images. What is fascinating about music data is that, because it is so versatile, it is used in almost all major data science disciplines.

Music Classification Includes Image Classification

Now, this fact will probably blow your mind as it did mine: state-of-the-art music classification models such as genre, mood, or instrument classifiers are essentially image classification models. If you convert your audio files into so-called spectrograms (Figure 1) and fine-tune a standard image-processing CNN on them, you will most likely achieve impressive results.

Figure 1 — Waveform of a Digitized Audio Signal and Mel Spectrogram of the First Ten Seconds of Metallica’s “Seek and Destroy”. Image by Author.

Of course, there are lots of domain-specific modifications which can improve your models (Oramas et al., 2017; Choi et al., 2017), but this may not be relevant for your individual learning/teaching purposes. Just remember that while you are building a classifier to distinguish between Bruno Mars’ and Michael Jackson’s music, you are — at the same time — also learning to build classifiers to tell empty crosswalks from those with grandmas on them (which is good to have for autonomous driving).
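To make this concrete, here is a minimal sketch of the spectrogram-plus-CNN approach described above. It is not a production pipeline: the file names, the two-genre setup, and all hyperparameters are illustrative assumptions. It uses librosa to compute log-mel spectrograms and fine-tunes a pretrained torchvision ResNet as the image model.

```python
# Minimal sketch: audio files -> log-mel spectrograms -> fine-tune a pretrained CNN.
# File names, the two-genre labels, and all hyperparameters are illustrative.
import librosa
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def audio_to_melspec(path, sr=22050, n_mels=128, duration=10.0):
    """Load the first `duration` seconds and return a log-mel spectrogram."""
    y, _ = librosa.load(path, sr=sr, duration=duration)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)  # shape: (n_mels, time_frames)

# A tiny toy dataset: two labelled tracks (placeholder paths).
files = ["track_rock.wav", "track_jazz.wav"]
labels = torch.tensor([0, 1])  # 0 = rock, 1 = jazz

# Taking the same duration from every track keeps the spectrogram shapes equal,
# so they can be stacked into one batch. Repeat to 3 channels for the CNN.
specs = np.stack([audio_to_melspec(f) for f in files])
x = torch.tensor(specs, dtype=torch.float32).unsqueeze(1).repeat(1, 3, 1, 1)

# Standard image model with its classification head swapped for our two genres.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
optimizer.zero_grad()
loss = criterion(model(x), labels)
loss.backward()
optimizer.step()
print(f"toy training loss: {loss.item():.3f}")
```

In practice you would batch far more tracks, crop or resize the spectrograms consistently, and train for many epochs; the point here is only that, once the audio has become an “image”, the rest is standard image classification.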

Music Always Comes With Metadata

Whether it is the artist’s name, the release year of an album, or the associated genre — music almost always comes with metadata. In some freely available music datasets such as the Free Music Archive (FMA) or the Million Song Dataset, you will also find hundreds or thousands of crowd-sourced tags spanning all categories, from styles and moods to situations and associations. Sometimes you will even find song lyrics! If you want to learn how to process basic text data, this is the perfect playground for that.
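As a quick illustration of how little is needed to get started with such text data, here is a minimal sketch of sentiment detection on lyrics using the Hugging Face transformers pipeline. The artist names and lyric snippets are made up for the example, and the default pipeline model is just one reasonable choice.

```python
# Minimal sketch: sentiment detection on song lyrics, no audio signal involved.
# The artists and lyric snippets are made up; the default pipeline model is just
# one reasonable choice.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default English model

lyrics_by_artist = {
    "artist_a": "We danced all night under golden lights, nothing could bring us down",
    "artist_b": "Cold rain on empty streets, I keep losing what I can't replace",
}

for artist, lyrics in lyrics_by_artist.items():
    result = sentiment(lyrics)[0]
    print(f"{artist}: {result['label']} (score={result['score']:.2f})")
```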

In fact, you can build interesting and useful tools without ever touching the audio signal. You could, for instance, analyze different artists’ lyrical content by performing sentiment detection on their song lyrics, as sketched above. You could also build a keyword recommendation system, as laid out in this post of mine. And if you do combine metadata with audio, you can build genre or mood classifiers or come up with your own ideas. If you have lyrics available, you could even try to build a lyric transcription tool using a transformer — which brings me to my next point.

Music Data is Perfect for Transformers

The transformer architecture with its self-attention mechanism is easily the most hyped machine learning technology of the last couple of years. At this point, it seems like a transformer can — given enough data — solve every problem that involves sequential data like text data, stock data, or … music data.

There have been numerous cool things done with music using transformers. In this video, the well-known AI guru Tristan Behrens shows how his generative transformer writes alternative drums to Metallica’s “Enter Sandman”:

In 2021, the German AI startup Cyanite launched a music captioning transformer which automatically writes full-text descriptions for a given audio input. Following the current trend around diffusion models like DALL-E 2 and StableDiffusion, a first attempt at MIDI-to-audio synthesis with diffusion models has already been made. And there is plenty more to come in the coming months!

Other Cool Use Cases of Music in AI

Since the rise of music streaming services, one of the most important use cases in music AI has been the development of smart music recommendation systems. Broadly speaking, these systems consist of a similarity metric (what makes a good recommendation?) and a search algorithm (how do we efficiently find good recommendations?). When it comes to music recommendation systems, the most intriguing part is developing the similarity metric, because it is not obvious what makes two musical pieces similar or dissimilar to each other.
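To illustrate the two ingredients, here is a minimal sketch of a nearest-neighbour recommender: cosine similarity acts as the similarity metric and a simple sort acts as the search. The track names and random feature vectors are placeholders; in a real system the features could be audio embeddings or tag vectors.

```python
# Minimal sketch of a nearest-neighbour recommender: cosine similarity is the
# similarity metric, a simple sort is the "search". The track names and random
# feature vectors are placeholders for real audio embeddings or tag vectors.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(42)
track_ids = ["track_a", "track_b", "track_c", "track_d"]
features = rng.normal(size=(4, 32))  # one 32-dimensional vector per track

def recommend(query_idx, k=2):
    """Return the k tracks most similar to the query track."""
    sims = cosine_similarity(features[query_idx:query_idx + 1], features)[0]
    ranked = np.argsort(-sims)  # most similar first; includes the query itself
    return [track_ids[i] for i in ranked if i != query_idx][:k]

print(recommend(0))  # e.g. ['track_c', 'track_b']
```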

A rather obvious advantage of learning music AI is that most of what you learn is transferable to other audio processing tasks like bird song classification, voice identification, or speech transcription. While there are certainly domain-specific peculiarities, most of the signal processing and machine learning are really similar and sometimes even identical. With audio-based controls starting to replace keyboards more and more, this transferability may prove useful in the future.

Lastly, music is an amazing data type for data augmentation. As I pointed out earlier, music is often presented to neural networks in the form of images, so some basic image augmentation techniques like stretching and masking are applicable. Moreover, the audio signal itself can be augmented in plenty of ways. Not only can you stretch the signal to increase or decrease a track’s tempo or perform a pitch shift to make the piece sound lower or higher, you can also apply audio effects like reverb, chorus, or compression to the signal. Let me tell you, this is a fun playground for data scientists and musicians alike.
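Here is a minimal sketch of the waveform-level augmentations mentioned above, using librosa. The input file name is a placeholder, and the stretch rate and pitch step are arbitrary example values.

```python
# Minimal sketch of waveform-level augmentation with librosa. The file name is
# a placeholder; the stretch rate and pitch step are arbitrary example values.
import librosa

y, sr = librosa.load("some_track.wav", sr=22050)

# Time stretch: rate > 1 speeds the track up, rate < 1 slows it down.
y_faster = librosa.effects.time_stretch(y, rate=1.1)

# Pitch shift: move the piece up two semitones without changing its tempo.
y_higher = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)

print(len(y), len(y_faster), len(y_higher))
```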

“photograph of an old professor ruminating over a particularly complicated problem, candle lighting, close up” — Generated by the Author using StableDiffusion.

When you are just starting out with machine learning, it can certainly be fun to develop the world’s 100,000th Titanic survivor classifier. At some point, however, you will probably want to develop something of actual value in the real world. This is why I want to point to a variety of problems in music AI which have not yet been solved to a satisfactory degree. May this serve as inspiration and motivation on your AI journey.

Music Stemming

The idea behind music stemming is to separate the signal of a musical piece into its instrumental components. For example, it is common to split the signal into vocals & instrumental (2 stems) or vocals, rhythm & harmony (3 stems). To beginners, this often sounds like a trivial task. After all, can we not just “locate the vocals” and “delete” them? Although I cannot lay out the technical details here, I can say that before Deezer’s groundbreaking release of Spleeter in 2019, the quality of music stemming algorithms was far from usable.
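For a sense of how accessible this has become, here is a minimal sketch using Spleeter’s pretrained 2-stem model; the input and output paths are placeholders.

```python
# Minimal sketch: 2-stem separation (vocals + accompaniment) with Spleeter's
# pretrained model. The input and output paths are placeholders.
from spleeter.separator import Separator

separator = Separator("spleeter:2stems")
separator.separate_to_file("some_track.mp3", "output_stems/")
# Writes output_stems/some_track/vocals.wav and .../accompaniment.wav
```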

And while many new, innovative stemming tools are coming onto the market, I would still consider the quality of the “isolated” instrument tracks unsatisfying. I recommend this great article in which Alex Holmes from musictech.com compares several state-of-the-art stemming tools side by side. If these AI-generated stems ever become truly indistinguishable from “real” isolated instrument tracks, that would be a small revolution.

Music Generation

The field of music generation is very similar to that of music stemming: the rate of innovation is extreme and we are getting closer to the goal, but we are just not quite there yet. First, we need to distinguish between generating symbolic music, a high-level description of how the music should be played, and sounding music, which constitutes an actual (digitally simulated) acoustic event.

In AI music, it is common to use MIDI as a form of symbolic music. Roughly speaking, MIDI describes which note is played at which time and at which velocity (roughly, loudness). Excellent transformer models have been trained to output MIDI symbols, which can then be turned into sounding music by an orchestrator. The results are impressive, as you can see for yourself in this video by Tristan Behrens, who composed Heavy Metal music using GPT-2:

Embedded Youtube Video Showing How Tristan Behrens Uses GPT-2 to Generate Heavy Metal Music.
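To give a rough idea of how such symbolic generation works (this is not Behrens’ actual setup), here is a minimal sketch that treats MIDI-like events as tokens and trains a tiny GPT-2-style model to predict the next event. The vocabulary size, the event encoding, and the random “piece” are toy assumptions.

```python
# Minimal sketch of transformer-based symbolic music generation: MIDI-like events
# are treated as tokens and a small GPT-2-style model learns to predict the next
# event. Vocabulary size, the random "piece", and all hyperparameters are toy
# placeholders, not Behrens' setup.
import torch
from transformers import GPT2Config, GPT2LMHeadModel

vocab_size = 512  # e.g. NOTE_ON_60, NOTE_OFF_60, TIME_SHIFT_10, VELOCITY_80, ...
config = GPT2Config(
    vocab_size=vocab_size, n_layer=2, n_head=2, n_embd=128,
    bos_token_id=0, eos_token_id=1,
)
model = GPT2LMHeadModel(config)

# A fake "piece": a random sequence of event ids standing in for tokenized MIDI.
piece = torch.randint(2, vocab_size, (1, 64))

# Causal language-modelling objective: predict each event from the ones before it.
out = model(input_ids=piece, labels=piece)
out.loss.backward()  # in a real setup, an optimizer step would follow
print(f"toy training loss: {out.loss.item():.3f}")

# Generation: continue a short seed of events, then decode the ids back to MIDI.
seed = piece[:, :8]
continuation = model.generate(seed, max_length=32, do_sample=True, top_k=50,
                              pad_token_id=1)
print(continuation.shape)  # (1, 32) event ids
```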

Generating sounding music turns out to be much harder than generating MIDI. This is no surprise, since MIDI is a simplified abstraction of an actual audio signal. However, the quality of sounding music generated by AI has — in my estimation — not improved substantially since the influential SampleRNN model by Carr & Zukowski (2018), who trained a recurrent neural network to generate Heavy Metal, Rock, and Punk. In fact, their model trained on the musical style of “Archspire” still composes music 24/7 in a YouTube live stream:

Youtube Livestream Streaming Automatically Generated Heavy Metal Music 24/7.

Lyrics Transcription

Speech transcription tools are becoming more and more competent and are now available for a wide variety of languages. To an outsider, it may seem like transcribing music lyrics would be a trivial task at this point. However, this is far from reality. In fact, it is shocking how bad AI transcription tools still are at seemingly simple tasks like transcribing pop songs. In a case study, the audio intelligence service AssemblyAI showed that their AI could recognize about 20 to 30% of the lyrics correctly.

I would say that I currently understand about 70% of the words when someone speaks Swedish, which is much better than AssemblyAI understands music lyrics. However, I do not sell my services as a Swedish transcriber — for obvious reasons. Further, when publicly evaluating their own AI, I am sure the authors chose favorable examples. Moreover, all the tracks were from the genres Pop, Rock, and R&B; I bet transcriptions for Death Metal songs would be more comedic than helpful. However, the authors pointed out that transcriptions get better if the vocals are isolated from the instrumental. This means that progress in stemming and progress in lyrics transcription go hand in hand to some extent.
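Here is a minimal sketch of that “separate first, then transcribe” idea, combining the Spleeter call from earlier with an off-the-shelf speech recognition pipeline. The file paths are placeholders and the default ASR model is only one possible choice; expect the lyric quality to vary a lot.

```python
# Minimal sketch of "separate first, then transcribe": isolate the vocals with
# Spleeter, then feed them to an off-the-shelf speech recognition pipeline.
# Paths are placeholders and the default ASR model is only one possible choice.
from spleeter.separator import Separator
from transformers import pipeline

Separator("spleeter:2stems").separate_to_file("some_song.mp3", "stems/")

asr = pipeline("automatic-speech-recognition")  # default English ASR model
result = asr("stems/some_song/vocals.wav")
print(result["text"])  # a rough lyric transcript; expect plenty of mistakes
```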

Other Unsolved Problems

In the field of hit song detection, classifiers are built to find out which songs will become hits and which will be flops. The field is mostly abandoned because even with state-of-the-art technology, the sought-after “hit song formula” has not yet been found (see Yang et al., 2017). Maybe this field will experience a revival when a ground-breaking idea or new technology emerges.

The field of music plagiarism detection is also extremely promising from a business point of view. Although well-performing similarity measures have been developed (He et al., 2021), moving from a similarity metric to plagiarism detection is quite a large step. One major problem in this field is the relatively small amount of ground-truth data. After all, there have not been hundreds of thousands of successful music plagiarism lawsuits.

Lastly, I would like to point to the field of music cover art generation. This field is rather active, with many interesting recent publications (for instance Efimova et al., 2022; Marien et al., 2022). However, to my knowledge, no attempts have been made to use the currently trending diffusion models known from DALL-E 2 and StableDiffusion to generate music cover art. That may be promising!

In this post, I showed that music is ideal for learning and teaching data science because (1) it is fun and engaging, (2) it encompasses most AI disciplines, and (3) it leaves much room for your own creative endeavors and data science projects.

If you want to read more about music and AI, consider checking out some of my related work on Medium:

  1. Build Your First Mood-Based Music Recommendation System in Python
  2. Music Genre Classification Using a Divide & Conquer CRNN

Thank you very much for reading this post and please let me know your thoughts on the topic!

[1] Carr & Zukowski (2018). “Generating Albums with SampleRNN to Imitate Metal, Rock, and Punk Bands”, in: arXiv, DOI: https://doi.org/10.48550/arXiv.1811.06633

[2] Choi et al. (2017). “Convolutional Recurrent Neural Networks for Music Classification”, in: International Conference on Acoustics, Speech, and Signal Processing 2017.

[3] Efimova et al. (2022). “Conditional Vector Graphics Generation for Music Cover Images”, in: arXiv, DOI: https://doi.org/10.48550/arXiv.2205.07301

[4] He et al. (2021). “Music Plagiarism Detection via Bipartite Graph Matching”, in: arXiv, DOI: https://doi.org/10.48550/arXiv.2107.09889

[5] Marien et al. (2022). “Audio-Guided Album Cover Art Generation With Genetic Algorithms”, in: arXiv, DOI: https://doi.org/10.48550/arXiv.2207.07162

[6] Oramas et al. (2017). “Multi-Label Music Genre Classification from Audio, Text, and Images Using Deep Features”, in: arXiv, DOI: https://doi.org/10.48550/arxiv

[7] Yang et al. (2017). “Revisiting The Problem of Audio-Based Hit Song Prediction Using Convolutional Neural Networks”, in: arXiv, DOI: https://doi.org/10.48550/arXiv.1704.01280


