Apple’s purchase of music recognition app Shazam serves multiple purposes for the company, including making Siri smarter, adding data science experts and providing hooks into services such as Apple Music.
Sure, Apple Music is a natural fit for Shazam, but I’d argue that the acquisition is also about Siri. Shazam’s engine can recognize songs in milliseconds and has focused for years on metadata, tagging and building a database that can be tapped in real time. I argued in 2014 that Shazam would have made a good Apple acquisition.
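For context on how that millisecond-scale recognition works, here is a minimal sketch in the spirit of Shazam's published fingerprinting approach: pairs of spectrogram peaks are hashed into compact keys that can be looked up in constant time. The peak data and function names below are invented for illustration; a real system extracts peaks from an audio spectrogram.

```python
from collections import defaultdict

def fingerprint(peaks, fan_out=3):
    """Turn (time, freq) peaks into (f1, f2, time-delta) hash keys."""
    hashes = []
    for i, (t1, f1) in enumerate(peaks):
        # Pair each peak with the next few peaks after it.
        for t2, f2 in peaks[i + 1 : i + 1 + fan_out]:
            hashes.append(((f1, f2, t2 - t1), t1))
    return hashes

def build_index(songs):
    """Map each hash key to the (song_id, offset) pairs where it occurs."""
    index = defaultdict(list)
    for song_id, peaks in songs.items():
        for h, t in fingerprint(peaks):
            index[h].append((song_id, t))
    return index

def match(index, query_peaks):
    """Count hash hits per song; the best candidate has the most hits."""
    votes = defaultdict(int)
    for h, _ in fingerprint(query_peaks):
        for song_id, _ in index.get(h, []):
            votes[song_id] += 1
    return max(votes, key=votes.get) if votes else None

# Toy catalog of two "songs" as peak lists (time, frequency-bin).
songs = {
    "song_a": [(0, 100), (1, 200), (2, 150), (3, 300)],
    "song_b": [(0, 400), (1, 350), (2, 500), (3, 450)],
}
index = build_index(songs)
print(match(index, [(10, 100), (11, 200), (12, 150)]))  # prints "song_a"
```

Because the hash keys encode time deltas rather than absolute times, a short snippet recorded mid-song still matches; that invariance is what makes real-time lookup against a huge catalog feasible.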
In a blog post, Shazam recently noted:
When you Shazam a song, we recommend similar tracks. If we have models that understand, in some abstract sense, musical features such as genre and mood, then those recommended tracks could become even more relevant, interesting, and unexpected. We already create interesting charts and playlists, including our exclusive Future Hits, but automated music classification means we can dig up deep cuts in playlists made just for you.
The bad news: auto-tagging tasks are hard. Let’s take genre as an example. A song’s genre is inherently subjective.
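One way to picture the auto-tagging task Shazam describes is as classification over audio features. The sketch below is a hypothetical nearest-centroid classifier over two invented features (tempo and energy); the genre centroids are made up for illustration and are not Shazam's models.

```python
import math

# Invented genre centroids over (tempo in BPM, energy on a 0-1 scale).
GENRE_CENTROIDS = {
    "ambient": (70.0, 0.2),
    "rock": (120.0, 0.7),
    "dance": (128.0, 0.9),
}

def tag_genre(tempo_bpm, energy):
    """Return the genre whose centroid is closest to the track's features."""
    def dist(centroid):
        ct, ce = centroid
        # Scale tempo so both features contribute comparably to distance.
        return math.hypot((tempo_bpm - ct) / 100.0, energy - ce)
    return min(GENRE_CENTROIDS, key=lambda g: dist(GENRE_CENTROIDS[g]))

print(tag_genre(126.0, 0.85))  # a fast, high-energy track -> "dance"
```

The hard part, as the Shazam post concedes, is not the classifier but the labels: genre is subjective, so any centroid or training set encodes someone's judgment about where rock ends and dance begins.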
Indeed, the models and tagging are so hard that buying Shazam was a better move for Apple than building that functionality in-house. See: The business of Shazam
Shazam also outlined its approach to filtering and modeling.
What Apple really bought with Shazam is a team that’s expert at training models, developing algorithms and managing data. Shazam is about music, for now. However, that same modeling expertise could apply to many other tasks.
To see what else Apple is acquiring with Shazam, it’s worth checking out the company’s engineering blog. A quick scan reveals expertise in GPUs, cloud infrastructure and a variety of algorithmic approaches.
Simply put, Shazam knows data. And that data engineering will apply to multiple areas of Apple and likely bolster Siri in the future.