A manual workflow to fix the "muffled" audio of AI music models

I have been trying out Suno and Udio over the last couple of months. The lyrics are impressive, but the technical sound quality is poor; users describe it as sounding 'underwater', and it comes across as thin on mobile devices.

Most of the automated AI enhancers I tried added more artifacts and a robotic hiss to the track. So I spent several weeks on manual post-processing techniques, concentrating on spatial widening and a specific mid-range EQ balance.
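To make the idea concrete, here is a minimal sketch of that kind of chain: mid/side widening plus a mid-range peaking EQ. The specific parameters (width factor, 3 kHz center frequency, gain, Q) are illustrative assumptions, not the values from the guide, and the `widen_and_eq` helper is hypothetical.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q=1.0):
    """RBJ audio-EQ-cookbook peaking biquad coefficients (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def widen_and_eq(stereo, fs, width=1.4, f0=3000.0, gain_db=2.5):
    """stereo: (n, 2) float array in [-1, 1].
    Mid/side widening followed by a mid-range boost (illustrative values)."""
    left, right = stereo[:, 0], stereo[:, 1]
    mid = (left + right) / 2
    side = (right - left) / 2 * width      # scale the side signal -> wider image
    b, a = peaking_eq(fs, f0, gain_db)
    mid = lfilter(b, a, mid)               # lift the "muffled" mid-range
    side = lfilter(b, a, side)
    out = np.stack([mid - side, mid + side], axis=1)
    peak = np.max(np.abs(out))
    return out / peak if peak > 1 else out  # simple peak guard against clipping
```

With `width=1.0` and `gain_db=0.0` the chain is a pass-through, which is a handy sanity check before dialing in real settings.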

I have documented my research in a 21-page illustrated guide for creators who want to polish their raw generations so they sound good on earphones. The workflow is usable on both mobile and PC.

If you are an audio engineer or a hobbyist, I would love your feedback: what are you currently doing to master raw AI audio generations?

1 point | by JoyTxis 2 hours ago

1 comment