Introducing SonicScale: A Smarter Way to Rank Speech Enhancement Models

https://edgeai.modelnova.ai/benchmarks

We’re thrilled to announce that SonicScale is now live! SonicScale is a cutting-edge benchmarking platform for speech enhancement models, built on Elo scoring – the same rating system used in chess and competitive gaming – to deliver dynamic, fair, and fast-converging model rankings.

How it works:
Models are evaluated through blind A/B testing: users listen to an original noisy audio clip alongside two enhanced versions and vote for the better result – without knowing which model produced which output. This ensures genuinely unbiased human preference data.
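For context, a standard Elo update after a single A/B vote can be sketched as follows. The K-factor and starting ratings below are illustrative assumptions, not SonicScale's actual parameters:

```python
# Sketch of a textbook Elo update after one blind A/B vote.
# K-factor and initial ratings are assumptions for illustration.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A is preferred over model B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return updated (rating_a, rating_b) after a single vote."""
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - exp_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - exp_a))
    return new_a, new_b

# Example: two models start at 1500; model A wins one comparison.
a, b = elo_update(1500.0, 1500.0, a_won=True)  # → (1516.0, 1484.0)
```

Because each vote shifts ratings immediately, rankings converge after relatively few comparisons – one reason Elo-style systems suit crowd-sourced preference testing.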

In addition to Elo scores, SonicScale computes standard objective metrics (PESQ, STOI, SI-SNR, WERI, DNSMOS), making it possible to analyze how well these metrics correlate with real human preferences.
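One common way to quantify that agreement is Spearman rank correlation between the Elo leaderboard and an objective metric. A minimal sketch, with invented example numbers (not SonicScale data), assuming no tied scores:

```python
# Spearman rank correlation between Elo ratings and an objective metric
# such as PESQ. The tie-free shortcut formula is used; values are
# invented for illustration, not real SonicScale results.

def ranks(values):
    """Assign rank 1..n to values in ascending order (no ties assumed)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank_pos, idx in enumerate(order):
        r[idx] = float(rank_pos + 1)
    return r

def spearman(x, y):
    """Spearman rho via the d-squared formula (valid without ties)."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

elo = [1620, 1540, 1500, 1480, 1390]  # hypothetical leaderboard
pesq = [3.1, 2.9, 3.0, 2.6, 2.4]      # hypothetical PESQ scores
rho = spearman(elo, pesq)             # → 0.9
```

A rho near 1.0 would mean the objective metric ranks models much like human listeners do; a low rho would flag a metric that diverges from perceived quality.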

IP Protection built-in:
Submitted models are hosted securely on the Edge AI Foundation domain via the EmbedUR platform. Models are never downloadable or exposed – evaluation happens exclusively through the SonicScale interface.

Want to benchmark your model? Submit a .tflite file with basic metadata (parameter count, sampling rate, window hop size) and we’ll integrate it into the leaderboard.
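As a rough illustration, the accompanying metadata could be expressed like this. Field names and values here are hypothetical; the actual submission format is defined by the SonicScale interface:

```python
# Hypothetical metadata accompanying a .tflite submission.
# Field names are illustrative, not SonicScale's actual schema.
submission_metadata = {
    "model_file": "my_enhancer.tflite",
    "parameter_count": 1_200_000,  # total model parameters
    "sampling_rate_hz": 16000,     # audio sampling rate the model expects
    "window_hop_size": 128,        # hop size in samples between frames
}
```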

SonicScale is available at:
https://edgeai.modelnova.ai/benchmarks

Go check it out and start evaluating!