The use of machine learning on IoT data has opened up many opportunities. Neural networks turn raw sensor data into useful information in real-world applications such as speech recognition and image classification. Today, optimized neural networks have found their way from the cloud to IoT devices; these high-end embedded devices are, however, far more powerful than the tiny embedded devices found in wearables or implanted medical devices. This research investigates to what extent convolutional neural networks can be used on tiny embedded systems in the context of audio classification. Three challenges regarding a cochlear implant application are considered: hardware resource limitations, the model type versus the nature of the sounds to classify, and the impact of a subcutaneous MEMS microphone. From a wide range of experiments, we have learned that post-training quantization and quantization-aware training models can score as well on the UrbanSound8k dataset as floating-point models. Acoustic event detection models can characterize an acoustic environment, and the scene classification score can be improved by transferring knowledge from an event classification task. The simulated subcutaneous recordings performed poorly on all features, although the Mel features still achieved the highest classification score. This research shows that convolutional neural networks for audio classification can be effectively reduced in size to make them suitable for tiny embedded devices, provided the edge hardware specifications are taken into account. The frequency-specific output of an embedded microphone can lead to significant accuracy loss during deployment.
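The post-training quantization mentioned above maps floating-point weights to 8-bit integers, trading a small amount of precision for a roughly 4x reduction in model size. As a minimal sketch of the idea (not the implementation used in this research), the affine int8 scheme common to embedded inference runtimes can be written in plain NumPy; the function names and tensor shape below are illustrative assumptions:

```python
import numpy as np

def quantize_int8(w):
    # Affine (asymmetric) quantization: map the observed float range
    # [w.min(), w.max()] onto the 256 representable int8 values.
    scale = (w.max() - w.min()) / 255.0
    zero_point = np.round(-w.min() / scale) - 128.0
    q = np.clip(np.round(w / scale + zero_point), -128, 127).astype(np.int8)
    return q, float(scale), float(zero_point)

def dequantize(q, scale, zero_point):
    # Recover an approximation of the original floats for inference math.
    return (q.astype(np.float32) - zero_point) * scale

# Illustrative weight tensor (shape chosen arbitrarily for the demo).
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 64)).astype(np.float32)

q, scale, zero_point = quantize_int8(w)
w_hat = dequantize(q, scale, zero_point)
max_err = float(np.abs(w - w_hat).max())
```

Because the rounding error per weight is bounded by the quantization step, `max_err` stays on the order of `scale`, which is why a quantized model can match the accuracy of its floating-point counterpart when that step is small relative to the weight distribution.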