A doorbell rings on TV. Gina — my wire-haired dachshund — hears it and loses it. Drag the sliders and watch the extension react.
2Loud uses a sound classification model called YAMNet, built by Google and trained to recognize 521 different audio events — everything from doorbells to dog barks to glass breaking. It's the same technology used in smart doorbells, security systems, and hearing aid research.
The model runs directly in your browser using TensorFlow.js. It processes your tab's audio in short frames, classifies what it hears, and when it detects a sound in one of your selected categories, it smoothly lowers the volume — then brings it back up when the sound passes.
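To make that "smoothly lowers the volume" step concrete, here's a minimal sketch of the ducking decision in TypeScript. Everything here is illustrative, not the extension's actual code: the function names, the 0.5 score threshold, the duck level, and the smoothing factor are all assumptions.

```typescript
const DUCK_LEVEL = 0.1;   // gain while a blocked sound is detected (assumed value)
const THRESHOLD = 0.5;    // classifier score needed to trigger ducking (assumed)
const SMOOTH = 0.3;       // per-frame smoothing factor, so volume fades, not jumps

// Given per-class scores for one audio frame, pick a target gain:
// duck if any selected category scores above the threshold.
function targetGain(scores: Record<string, number>, blocked: string[]): number {
  const hit = blocked.some(label => (scores[label] ?? 0) > THRESHOLD);
  return hit ? DUCK_LEVEL : 1.0;
}

// Move the current gain one step toward the target each frame.
// Repeated over many frames this produces a smooth fade down and back up.
function smoothGain(current: number, target: number): number {
  return current + SMOOTH * (target - current);
}
```

In the real extension the smoothed value would drive something like a Web Audio gain node on the tab's audio; the sketch only shows the decision and fade logic, which is the part that makes barks duck and dialogue stay put.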
Your audio never leaves your device. There's no server listening to what you watch. There's no account to create. There's no cloud processing. The AI runs locally, in your browser, on your machine.
This isn't a privacy policy — it's how the thing is built. The extension only requests two permissions: access to tab audio, and permission to run on streaming sites. No browsing history. No cookies. No identity. If a permission isn't declared in its manifest, Chrome blocks the extension from using it at all. You can read more here.
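For the curious: a Chrome extension's permissions are declared up front in its manifest, and that declaration is the ceiling on what it can do. The fragment below is an illustrative sketch only, not the shipped file — the `tabCapture` permission and the specific site patterns are assumptions standing in for "tab audio" and "streaming sites".

```json
{
  "manifest_version": 3,
  "name": "2Loud",
  "permissions": ["tabCapture"],
  "host_permissions": [
    "https://*.youtube.com/*",
    "https://*.netflix.com/*"
  ]
}
```

Note what's absent: no `history`, no `cookies`, no `identity`, no `<all_urls>`.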
The extension has a simple feedback mechanism — a thumbs up when the filter catches something correctly, and a thumbs down when it misses or gets it wrong. That's it.
Your feedback is anonymous and optional. But in aggregate, across many people watching many hours of content, it tells us which categories need tuning, which platforms have quirks, and which bothersome sounds nobody thought to include. Every thumbs down makes the next version smarter. You're not reporting a bug — you're training the product.
I'm building this now and sharing it with friends first. Drop your email and I'll let you know when it's ready.
No spam, ever. Just a heads up when you can try it.