Using a Convolutional Neural Network to classify musical instruments

In this Colab you will train a model to recognize musical instruments from the NSynth dataset.

Import the NSynth dataset

NSynth is a large-scale, high-quality dataset of annotated musical notes: https://magenta.tensorflow.org/datasets/nsynth

The NSynth dataset

First, look at the dataset shape and select what percentage of it to use here.
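A minimal loading sketch, assuming the TFDS `nsynth` dataset (the `gansynth_subset` config is much smaller than the full set); the 10% slice and the variable names are illustrative.

```python
import tensorflow as tf
import tensorflow_datasets as tfds

# Load a slice of the NSynth train/valid splits from TFDS
# (config name and the 10% slice are assumptions for illustration).
train_ds = tfds.load('nsynth/gansynth_subset', split='train[:10%]')
valid_ds = tfds.load('nsynth/gansynth_subset', split='valid[:10%]')

# Inspect the feature dictionary: audio, pitch, instrument, etc.
print(train_ds.element_spec)
```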

Helper Functions

Helpers to compute MFCCs in TensorFlow and to map the NSynth dataset to (MFCC, label) pairs.
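A sketch of the helpers, assuming 16 kHz NSynth audio and the `instrument/family` field as the class label; the frame sizes and the number of kept coefficients are illustrative.

```python
SAMPLE_RATE = 16000  # NSynth audio is 16 kHz

def get_mfcc(audio, num_mel_bins=64, num_mfccs=13):
    # Short-time Fourier transform of the waveform.
    stft = tf.signal.stft(audio, frame_length=1024, frame_step=256)
    spectrogram = tf.abs(stft)

    # Warp the linear-frequency spectrogram onto the mel scale.
    mel_matrix = tf.signal.linear_to_mel_weight_matrix(
        num_mel_bins=num_mel_bins,
        num_spectrogram_bins=spectrogram.shape[-1],
        sample_rate=SAMPLE_RATE,
        lower_edge_hertz=20.0,
        upper_edge_hertz=8000.0)
    mel_spectrogram = tf.matmul(spectrogram, mel_matrix)
    log_mel = tf.math.log(mel_spectrogram + 1e-6)

    # Keep the first few MFCC coefficients and add a channel axis for the CNN.
    mfccs = tf.signal.mfccs_from_log_mel_spectrograms(log_mel)[..., :num_mfccs]
    return mfccs[..., tf.newaxis]

def to_mfcc_and_label(example):
    audio = tf.cast(example['audio'], tf.float32)
    label = example['instrument']['family']  # one of 11 instrument families
    return get_mfcc(audio), label
```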

MFCCs and Label Datasets

Plot some elements of the labelled MFCCs
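A plotting sketch, assuming `train_ds` and `to_mfcc_and_label` from the cells above.

```python
import matplotlib.pyplot as plt

# Map the raw examples to (MFCC, label) pairs.
mfcc_ds = train_ds.map(to_mfcc_and_label, num_parallel_calls=tf.data.AUTOTUNE)

# Show a few labelled MFCC "images".
fig, axes = plt.subplots(1, 4, figsize=(16, 4))
for ax, (mfcc, label) in zip(axes, mfcc_ds.take(4)):
    ax.imshow(tf.squeeze(mfcc).numpy().T, aspect='auto', origin='lower')
    ax.set_title(f'family {label.numpy()}')
    ax.set_xlabel('frame')
    ax.set_ylabel('MFCC coefficient')
plt.show()
```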

Normalization Layer
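
A sketch of per-feature normalization, assuming the Keras `Normalization` layer is adapted on the MFCC training data from the cell above.

```python
# Learn the mean and variance of the MFCC features so the model
# sees roughly zero-mean, unit-variance inputs.
norm_layer = tf.keras.layers.Normalization()
norm_layer.adapt(mfcc_ds.map(lambda mfcc, label: mfcc))
```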


Batch, Cache and Prefetch

We'll make a new dataset that is used for training. It differs from the MFCC and label datasets above because it is now "batched", i.e. split into groups, and it will be cached in memory for better performance.

Batch the training and validation sets for model training and add dataset cache() and prefetch() operations to reduce read latency while training the model.
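A sketch of the input pipeline, assuming `mfcc_ds` and `valid_ds` from the earlier cells; the batch size is illustrative.

```python
BATCH_SIZE = 64

# Map the validation split the same way as the training split.
valid_mfcc_ds = valid_ds.map(to_mfcc_and_label, num_parallel_calls=tf.data.AUTOTUNE)

# Batch, cache the computed MFCCs, and prefetch so the accelerator
# never waits on the input pipeline.
train_batches = mfcc_ds.batch(BATCH_SIZE).cache().prefetch(tf.data.AUTOTUNE)
valid_batches = valid_mfcc_ds.batch(BATCH_SIZE).cache().prefetch(tf.data.AUTOTUNE)
```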

Input Shape
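
A sketch for reading the input shape off a batch, assuming `train_batches` from the previous cell; NSynth has 11 instrument families.

```python
# Take one batch and read its shape, dropping the batch dimension.
for mfccs, _ in train_batches.take(1):
    input_shape = mfccs.shape[1:]
print('Input shape:', input_shape)

num_labels = 11  # NSynth instrument families
```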

Instantiate the Sequential class with your layers
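A minimal CNN sketch; the layer sizes are illustrative, and `norm_layer`, `input_shape` and `num_labels` come from the earlier cells.

```python
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=input_shape),
    norm_layer,                                        # normalize the MFCCs
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(num_labels),                 # logits, one per family
])
model.summary()
```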

Compile the model
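A compile sketch; the optimizer and learning rate are illustrative. The model outputs logits, so the loss uses `from_logits=True`.

```python
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)
```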

Train
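
A training sketch, assuming the batched datasets from above; the epoch count and early-stopping patience are illustrative.

```python
history = model.fit(
    train_batches,
    validation_data=valid_batches,
    epochs=10,
    callbacks=[tf.keras.callbacks.EarlyStopping(patience=3,
                                                restore_best_weights=True)],
)
```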

Plot results
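
A plotting sketch using the `History` object returned by `fit()`.

```python
metrics = history.history
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))

ax1.plot(metrics['loss'], label='train')
ax1.plot(metrics['val_loss'], label='validation')
ax1.set_xlabel('epoch'); ax1.set_ylabel('loss'); ax1.legend()

ax2.plot(metrics['accuracy'], label='train')
ax2.plot(metrics['val_accuracy'], label='validation')
ax2.set_xlabel('epoch'); ax2.set_ylabel('accuracy'); ax2.legend()
plt.show()
```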

Confusion matrix
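
A confusion-matrix sketch over the validation batches; seaborn is only used for the heatmap.

```python
import seaborn as sns

# Collect true labels and model predictions over the validation set.
y_true = tf.concat([labels for _, labels in valid_batches], axis=0)
y_pred = tf.argmax(model.predict(valid_batches), axis=1)

cm = tf.math.confusion_matrix(y_true, y_pred)
sns.heatmap(cm, annot=True, fmt='d')
plt.xlabel('predicted family')
plt.ylabel('true family')
plt.show()
```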

Single Predictions
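
A sketch of a single prediction on one example from the MFCC dataset.

```python
for mfcc, label in mfcc_ds.take(1):
    # Add a batch dimension, run the model, and softmax the logits.
    logits = model(mfcc[tf.newaxis, ...], training=False)
    probs = tf.nn.softmax(logits)[0]
    print('predicted family:', int(tf.argmax(probs)))
    print('true family:     ', int(label))
```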