

Poster in Affinity Event: Black in AI

Enhanced Audio Extraction with Deep Neural Network

Wesagn Dawit Chemma · Selameab Demilew


Abstract:

In this paper, we present a convolutional neural network (CNN) autoencoder for audio source separation, i.e., recovering individual sound sources from a mixed audio signal. The model consists of an encoder network that compresses the input signal, a bottleneck layer that captures salient features, and a decoder network that reconstructs the separated sources. It is trained and evaluated on a diverse dataset of mixed audio files, and the results demonstrate that it accurately separates individual sound sources, outperforming traditional methods and achieving state-of-the-art performance.
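The abstract outlines an encoder, bottleneck, and decoder but gives no architectural details. The following is a minimal sketch of what such a CNN autoencoder separator could look like; the use of raw 1-D waveforms, two target sources, and the specific layer widths, kernel sizes, and strides are all assumptions for illustration, not taken from the paper.

```python
# Illustrative CNN autoencoder for source separation (not the authors' model).
# Assumptions: raw 1-D waveform input, two output sources, layer sizes below.
import torch
import torch.nn as nn


class ConvAutoencoderSeparator(nn.Module):
    def __init__(self, n_sources: int = 2, hidden: int = 64, bottleneck: int = 128):
        super().__init__()
        # Encoder: strided 1-D convolutions compress the mixture waveform.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=16, stride=4, padding=6),
            nn.ReLU(),
            nn.Conv1d(hidden, bottleneck, kernel_size=16, stride=4, padding=6),
            nn.ReLU(),
        )
        # Decoder: transposed convolutions reconstruct one waveform per source.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(bottleneck, hidden, kernel_size=16, stride=4, padding=6),
            nn.ReLU(),
            nn.ConvTranspose1d(hidden, n_sources, kernel_size=16, stride=4, padding=6),
        )

    def forward(self, mixture: torch.Tensor) -> torch.Tensor:
        # mixture: (batch, 1, samples) -> separated: (batch, n_sources, samples)
        latent = self.encoder(mixture)
        separated = self.decoder(latent)
        # Trim to the input length in case striding produced extra samples.
        return separated[..., : mixture.shape[-1]]


if __name__ == "__main__":
    model = ConvAutoencoderSeparator()
    mix = torch.randn(1, 1, 16000)   # one second of 16 kHz audio
    estimates = model(mix)
    print(estimates.shape)           # (1, 2, 16000): one waveform per source
```

A training loop for such a sketch would typically minimize a reconstruction loss (e.g., L1 or SI-SDR) between the estimated and ground-truth source waveforms; the abstract does not state the objective used, so that choice is likewise an assumption.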
