In March 2021, Google began using federated learning on Android to improve the accuracy of the “Hey Google” hotword. An upcoming “Personalized Speech Recognition” feature is said to help Google Assistant “better recognize your common words and names.”
About APK Insight: In this APK Insight post, we have decompiled the latest version of an application that Google has uploaded to the Play Store. When we decompile these files (called APKs, in the case of Android apps), we can see various lines of code within them that hint at possible future features. Keep in mind that Google may or may not ever ship these features, and our interpretation of what they are may not be perfect. However, we will try to enable those that are nearing completion to show you what they will look like should they ship. With that in mind, read on.
According to strings in recent versions of the Google app on Android, “Personalized Speech Recognition” appears in the Google Assistant settings with the following description:
Save audio recordings on this device so the Google Assistant can better recognize what you’re saying. Audio remains on this device and can be deleted at any time by turning off personalized voice recognition. Learn more
The “Learn more” link could point to an existing support article about Google’s use of federated learning, which similarly improves hotword activations by using voice recordings stored on the device to refine models like “Hey Google” detection:
It learns how to adjust the model based on the language data and sends a summary of the model changes to the Google servers. These summaries are aggregated across many users to provide a better model for everyone.
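The aggregation described in that quote is broadly known as federated averaging: each device computes a model update from its own data and sends only a summary, which the server combines across users. As a rough illustration (the function names, weighting, and structure here are hypothetical, not Google’s actual implementation):

```python
# Minimal sketch of federated averaging: devices share only weight
# deltas ("summaries"), never raw audio; the server averages them.
# All names and numbers are illustrative assumptions.

def local_update(weights, gradient, lr=0.1):
    """On-device step: adjust the model from local data, then
    report only the change in weights, not the data itself."""
    new_weights = [w - lr * g for w, g in zip(weights, gradient)]
    return [nw - w for nw, w in zip(new_weights, weights)]

def federated_average(global_weights, deltas):
    """Server step: aggregate per-device summaries into a single
    improved model that is then shared back with everyone."""
    n = len(deltas)
    avg = [sum(d[i] for d in deltas) / n for i in range(len(global_weights))]
    return [w + a for w, a in zip(global_weights, avg)]

# Three devices each compute an update from their own (private) data.
global_w = [1.0, 2.0]
device_grads = [[0.2, -0.4], [0.4, 0.0], [0.0, 0.4]]
deltas = [local_update(global_w, g) for g in device_grads]
new_global = federated_average(global_w, deltas)
print(new_global)
```

The key property, matching the support article’s description, is that only the averaged model changes leave the device.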
The upcoming feature aims to extend these machine learning-based improvements beyond “Hey Google” to your actual Assistant commands, particularly those with names (e.g. using your voice to message contacts) and commonly spoken words. Audio of past utterances is stored on the device and analyzed to make future transcriptions more accurate.
On devices like the 2nd-gen Nest Hub and Mini, Google already uses a machine learning chip that processes your most frequent requests locally “to achieve a much faster response time”. This concept could now be extended beyond smart home devices to Android.
Given Google’s stance on Assistant and voice privacy, this will likely be an opt-in feature, as “Help Improve Assistant” is today in Assistant settings. Per the description above, audio “remains on this device” and is erased when the feature is disabled. If you do turn off Personalized Speech Recognition, Google warns that:
If you turn this off, the Assistant is less accurate at recognizing names and other words you say frequently. All audio data used to improve speech recognition for you will be deleted from this device.
It’s not clear when this feature will roll out or how much of an improvement it will offer. It comes as Google previewed at I/O 2022 how Assistant conversations will become more natural next year. The Assistant will essentially ignore “um,” natural pauses, and other self-corrections, even verbally acknowledging them. That compares to today’s Assistant, which takes what you said literally and responds accordingly.
Thanks to JEB Decompiler, from which some APK Insight teardowns benefit.
Dylan Roussel and Kyle Bradshaw contributed to this article.