Google Assistant now sounds more realistic thanks to DeepMind

Unless you've been living under a rock, you're probably familiar with Google Assistant at this point. Google has made a massive push into artificial intelligence and machine learning, and it even states at its events that it has moved from a mobile-first strategy to an AI-first strategy. In practice, that means it wants to train computers to deliver relevant and helpful information before you even know you need it.

You may have noticed a difference in Google Assistant over the last few days. That's because Google has started using a technology called WaveNet from the DeepMind team. The goal of WaveNet is to move Assistant away from stitched-together synthesized speech toward a more natural speech pattern. Synthesized speech like you'd get from Google Assistant or Apple's Siri is normally stitched together from small bits of recorded speech. This approach is called "concatenative text-to-speech," and it's why some answers can sound a bit off when they're read back to you.
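To make that concrete, here is a minimal sketch of concatenative synthesis, not Google's actual implementation. The unit names, the `unit_bank` dictionary, and the placeholder waveforms are all made up for illustration; a real system selects units from a large recorded database and matches pitch and duration targets, which this sketch ignores.

```python
import numpy as np

# Hypothetical bank of pre-recorded speech units (e.g. diphones),
# each stored as a raw waveform. Random noise stands in for real audio.
unit_bank = {
    "g-oo": np.random.randn(1600),
    "oo-g": np.random.randn(1200),
    "g-l":  np.random.randn(1400),
}

def concatenative_tts(unit_sequence, crossfade=160):
    """Glue pre-recorded units end to end, with a short crossfade
    to soften the joins between them."""
    out = np.array([])
    for name in unit_sequence:
        unit = unit_bank[name]
        if out.size == 0:
            out = unit.copy()
            continue
        # Linear crossfade over the overlapping region.
        fade = np.linspace(1.0, 0.0, crossfade)
        out[-crossfade:] = out[-crossfade:] * fade + unit[:crossfade] * fade[::-1]
        out = np.concatenate([out, unit[crossfade:]])
    return out

audio = concatenative_tts(["g-oo", "oo-g", "g-l"])
```

However cleanly the joins are smoothed, the output is still a patchwork of separate recordings, which is exactly the limitation described next.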

Since bits of speech are essentially glued together, it's hard to account for emotion or inflection. To compensate, most voice models are trained on samples with as little variance as possible, and that lack of variance in the speech pattern is why the result can sound a bit robotic. This is where WaveNet comes in: Google and the DeepMind team are trying to solve that problem with this new technology.

WaveNet is a completely different approach. Instead of recording hours of words, phrases, and fragments and then linking them together, the technology uses real speech to train a neural network. WaveNet learned the underlying structure of speech, such as which tones tend to follow others and which waveforms are realistic and which aren't. Using that knowledge, the network can then synthesize a voice one sample at a time, with each new sample taking into account the samples that came before it. By being aware of the waveform that precedes each sample, WaveNet is able to create speech patterns that sound more natural.
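That sample-by-sample idea is easier to see in code. Below is a rough sketch of autoregressive generation, with a hypothetical `model_predict` function standing in for the trained network; the real WaveNet architecture (a deep stack of dilated convolutions) is far more involved than this loop.

```python
import numpy as np

def generate_autoregressive(model_predict, n_samples, context_len=1024):
    """Generate audio one sample at a time, conditioning each new
    sample on the waveform generated so far (the core WaveNet idea).
    `model_predict` maps the recent context to a probability
    distribution over the next sample value (256 levels here,
    matching 8-bit mu-law audio)."""
    samples = np.zeros(context_len, dtype=np.int64)  # silent seed context
    out = []
    for _ in range(n_samples):
        context = samples[-context_len:]
        probs = model_predict(context)           # shape (256,)
        nxt = np.random.choice(256, p=probs)     # sample the next value
        samples = np.append(samples, nxt)
        out.append(nxt)
    return np.array(out)

# Stand-in "model": a uniform distribution, just to make the loop runnable.
dummy_model = lambda context: np.full(256, 1 / 256)
audio = generate_autoregressive(dummy_model, n_samples=160)
```

Because every sample is predicted from the ones before it, the network can carry inflection and rhythm across a whole sentence rather than gluing fixed recordings together.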

The advantages of this new system are subtle, but you can definitely hear them. When you speak with another person, you can tell when they're coming to the end of a thought because their voice drops at the end of the sentence. If you watch the news for a few minutes, you can always tell when a story is about to end because the anchor starts to slow down and the volume or tone of their voice lowers. Subtleties like these are part of the reason concatenative text-to-speech sounds less natural, and they're a huge part of where the new WaveNet technology improves on the current system.

With this new system, WaveNet can add subtle sounds that make the voice even more believable. The sound of your lips smacking together or the sides of your mouth opening might be almost imperceptible, but you still hear those things. Small details like these add to the authenticity of the new waveforms.

Read More: Google Pixel 2 vs. Google Pixel: What's changed?

The system has come a long way in a short time. When WaveNet was introduced just 12 months ago, it took one second of processing to generate 0.02 seconds of speech. In those 12 months, the team made the process 1,000 times faster, so it can now generate 20 seconds of higher-quality audio in just one second of processing time. The team has also increased the quality of the audio: the waveform resolution of each sample has been bumped from 8 bits to 16 bits, the resolution used on CDs (remember those?).
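As a quick back-of-the-envelope check (our own arithmetic, beyond the figures quoted above): a 1,000 times speedup on 0.02 seconds of audio per second of processing gives exactly 20 seconds, and moving from 8-bit to 16-bit samples raises the number of possible amplitude levels from 256 to 65,536.

```python
# Sanity-checking the numbers quoted above.
old_rate = 0.02          # seconds of audio per second of processing (at launch)
speedup = 1000           # claimed improvement over 12 months
print(old_rate * speedup)  # 20.0 seconds of audio per second of processing

# Moving from 8-bit to 16-bit samples greatly increases amplitude resolution.
print(2 ** 8)            # 256 possible amplitude levels per sample
print(2 ** 16)           # 65,536 possible levels, the resolution used on CDs
```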

To hear the differences, we suggest you head over to Google's blog on this topic (linked below). The new technology is rolling out for U.S. English and Japanese voices, and Google has provided comparisons for each.

Have you noticed a change in Google Assistant recently? Does a more natural sounding voice make you more likely to use it? Let us know down in the comments.



from Android Authority http://ift.tt/2gdNrzX
via IFTTT
