Accessible Audio: Android Hearing Aid Support and the Audio Framework (Google I/O’19)


[MUSIC] BRIAN KEMLER: Good afternoon. Welcome to Accessible Audio,
Android Hearing Aid Support, and the Audio Framework. My name is Brian Kemler,
and I’m a PM on the Android Accessibility Team. Our team’s mission is to
improve the lives of people with disabilities by creating
useful and innovative products. Specifically, we design
Android Accessibility Suite, Sound Amplifier,
Live Transcribe, and as you may have heard
today, now Live Caption. In the last year and a
half, we’ve invested heavily in making audio on
Android more accessible, and so we’re going to be
focusing on that in this talk. I’m here today with an
awesome cast of characters– our engineers and
product managers Ricardo, John, and Stanley. So we are going to talk not
only about what we’ve done, but about what you
can do with the work that we’ve created by leveraging
the Android Audio Framework. First, I’m going to walk you
guys through hearing loss literally and
figuratively, so we’re going to talk all about it
to help you build empathy and actually hear or
experience what that is like. Next, Ricardo is going
to go under the hood and talk about the technologies
we've baked into the platform so you can make your apps and
your devices more accessible and have a better
experience for your users. Lastly, John and
Stanley are going to get up and talk about
hearing aid support and how to create the
best experience possible, whether you’re a
device maker or an OEM. Now let’s get started. So first, let’s walk
through and talk about what hearing loss
is, how it affects people and what we can do about it. So the scope of the
problem is massive. 466 million people
worldwide have hearing loss. If that were a country, it would be bigger than every country except India and China. So that's bigger than the US. So this is a massive
problem, and again, according to the World
Health Organization, this is actually supposed to
grow to nearly a billion people in the year 2055. So it can have a dramatic
effect not only on the people who have it but also their
friends, family, colleagues, and loved ones. These effects can include
fatigue, social isolation, or reduced job or
academic performance. Now in my own life, both my
dad in this picture and my best friend from university
have hearing loss, and it’s really
made communicating with them difficult
and challenging, and not only for
them, but also for me. So what is hearing loss? How do we quantify
or measure it? So this is an empty
audiogram, and we're going to walk you through it. On the top, we have the normal range of hearing. At the very bottom, we have profound loss. And then we have different gradations in between. On the left, we have loudness, from soft sounds to loud sounds. Along the bottom, we have frequency, or pitch, measured in Hertz, from low
to high, going left to right. This line is called the
threshold of hearing. Anything above it you cannot hear, unless you're a dog or a dolphin: dogs actually hear three times better than humans, and dolphins hear way better, something like seven times better than humans, which is absolutely amazing. Now, I'm overlaying
the regions in which music and speech occur. So we have a music
region, a speech region, and anything outside
of these two regions is in that threshold,
what we can’t hear. People with loss in
the speech region may have difficulty
interpreting speech, and similarly if you have
loss in the music region you’ll have difficulty
interpreting music. So now let’s focus a
little bit on speech. On the left-hand side, we
have low frequency sounds, so these would typically be men's voices, think baritones. On the right-hand side, we have high frequency sounds, so these would typically be women's voices, children's voices, think sopranos. This is called
the speech banana, and I didn’t make this up. This is actually a thing. So this is where we hear
consonants and vowels. And so you can see
this little overlay where all these
letters literally sit. For English, persons with
mild high frequency loss will have difficulty
hearing f, th and s. So in other words, a sentence
like “This is your fate” could become
something completely unintelligible to anybody,
like “E e you’re eight.” What does that mean? Nothing. So that’s how you can
experience hearing loss. Let’s shift from
speech over to sound. So in this overlay, I’m
overlaying typical sounds and where they take place. So you can see we
have very loud sounds, like the lawnmower, the phone,
the bus, at the very bottom, and quieter ones like the wind
and the seagull at the top. This is an audiogram for
a fictitious patient. And so we stripped away
the sounds, the consonants, the vowels and so
forth, and we can measure the ability to hear
for each ear, left and right. And so the right ear is red and the left ear is blue. Loss will vary by
person, and indeed it’s going to vary ear to ear. And over time this will change. So you can see the pattern
of loss in this example is high frequency
loss, and that is one of the most typical
forms of hearing loss. Now we’re laying
everything on top. So we have the sounds,
we have the audiogram. And basically everything
down and to the left is going to be audible
for this person. Everything up and to the right
is going to be inaudible. So they can hear the bus,
the baby and the phone, but they cannot hear the seagull
or the clock and the wind. So a hearing aid. This is a very
simplified example, but a hearing aid is
going to work by injecting the right amount of gain– so turning up the volume– and the right amount
of signal processing at the appropriate
frequency levels. It also works by
lowering the gain so that you’re not overloading
the ears in certain cases. So this can make up for loss. And this is called compensation. And the amount of
compensation, again, it’s going to vary from
individual to individual. And I’m sort of showing
this fictitiously by raising up this line. So hearing aids are really,
really, really effective. They work well, but they’re
not perfect solutions. According to a recent survey,
the majority of respondents said that their quality of life
was improved with a hearing aid compared to no
amplification at all. So that’s really, really good. But there are a lot of
challenges with hearing aids, too. So for people who
self-identified as having difficulty
hearing, that was about 10% of the population. However, only a corresponding
3% of the population reported having one or
more hearing aid devices. So there are a lot of reasons
why hearing aids are not widely adopted. One is expense. Typically they’re not
covered by insurance. Two, sadly, there could
be a stigma associated with wearing a hearing
aid, or people just don’t want to admit that
they have this loss. So we’re doing a bunch
of things about this. But what I want to do right now
is a hearing loss simulation demo. So I’m going to switch over
and run this demo for you guys, playing some music,
so you guys get a sense of what normal hearing
is like and then what hearing with hearing loss sounds like. Demo. So I’m going to play– [MUSIC] –play a little
opera for you guys. [MUSIC] So right now using our
hearing loss simulator, I basically in a
very simplistic sense turned the volume down, even
though the audio was still playing. Now what I’m going
to demonstrate, using our sound
amplifier product, is how we can apply some basic
compensation to this loss, and you can get to experience
the effect of the product. So I’m going to
play the music again and then I’m going to switch
over to Sound Amplifier. [MUSIC] So again, you can
barely hear it, but the music is
actually playing. Now I’m going to turn
on Sound Amplifier. [MUSIC] RICARDO GARCIA: Is it running? Is it running Sound Amplifier? BRIAN KEMLER: What’s that? RICARDO GARCIA: Enable
Sound Amplifier. [MUSIC] BRIAN KEMLER: The demo
gods are not with us. OK, they are now. [MUSIC] There we go. Thank you. This is why, as a
product manager, you always need to have
an engineer by your side. So right now I’m
playing with the boost. And a user would simply
use this by finding the right combination
of the settings to get a little more Mozart. And we can reduce the sound
with Sound Amplifier. [MUSIC] There we go. Thank you. Back to the slides, please. So Sound Amplifier
is not a hearing aid, but we do use machine learning
to process audio in real time in order to make
listening a little easier. It’ll add gain in
quiet situations and will also help reduce
distracting background noise. Sound Amplifier’s free,
and you need nothing more than a set of
wired headphones to boost the sound of a
faraway speaker, a TV, a movie, or reduce the noise of an
annoying heating, ventilation, and air conditioning system. Sound Amplifier dramatically
lowers the barriers to entry for somebody
who can’t afford, doesn’t have, or
forgets the batteries to their hearing aids. So now that we’ve
understood a little bit more about hearing loss,
Ricardo is going to discuss how you can
use some of the technology under the hood of
Sound Amplifier to make your products more
accessible for everybody. Ricardo? RICARDO GARCIA:
Thank you, Brian. So, hello. I’m Ricardo Garcia. I’m a software engineer in the
Android Audio Frameworks Team, and I am also the tech lead
for the Sound Amplifier project that you saw. So today in this
segment, we are going to talk more about how
to build some enhancement in your own applications. So the question is when would
you like sound enhancement? There are many, many situations
where your applications can benefit from this. One example is the one that we just saw: in a sound amplification application, you would like to take a microphone input, process the sound, and somehow enhance it, bringing some soft sounds up and taking some very loud sounds down. Sometimes also,
let’s say that you are building your own Android
device, a speaker or something with headphones, and you would like to tune how that speaker or those headphones sound. You would like to voice that speaker and give it a certain frequency profile. Another application would be, let's say that you're building
an Android TV and you would like to
implement a TV midnight mode. You are watching a movie at home at midnight, and the movie has very loud sounds and explosions, but it also has someone whispering right there. You would like some dynamics processing that is going to bring up the very soft sounds and bring down the very loud sounds, so everything sounds better and your parents at home will not kick you out of the house. Or if you want to have a
media player application and you would like to level
the loudness from song to song, from video to video. You can build these
kinds of applications and you can use
sound enhancements to make the experience
better for your users. So in this section, we are going
to cover three major things. The first one is a little reminder of the Dynamics Processing Effect, the DPE, that we introduced last year. We are also going to show the code of an equalizer, a simple application that just does equalization. And we are going to have a look inside our very own Sound Amplifier, especially how we control the parameters and how we do the noise reduction. And the goal is that
with all these tools, you could actually go out and
build your own Sound Amplifiers and distribute
them to your users. So to start with the refresher: the Dynamics Processing Effect, DPE for short, because I'm going to say that many times today. The Dynamics Processing Effect was introduced in Android Pie last year, and we also had a talk at Google I/O which has a little bit more content than this one, but it is a good reference to have. With the Dynamics Processing Effect we have a very flexible architecture that allows developers to implement dynamics processing effects in the way that they want to implement them. For example, we process
the sound by channels. You have left and right. You will have two
channels in there. So we are going to
do some processing on each one of the channels. And in the channels we have
four blocks, four modules, that can be used for
sound processing. We have a pre EQ, an equalizer that can shape the frequency response of the sound. We have a multi-band compressor that can treat different frequency bands in different ways. A compressor does
one of the things that I mentioned
before, that it takes very loud sounds and
makes them softer, and very soft sounds
and makes them louder, which is the heart of the
sound amplifier that we saw. Then there is a post EQ, also to shape the spectrum of the sound that is coming out. And we have a limiter. A limiter is a very
good safety device, that after you have done all
this processing, if you have for some reason a pop
or a very loud sound, it’s good to have a
limiter that is going to limit how loud that sound
is going to go when it’s coming out of the processing. And inside of each
one of these modules, we have a lot of
parameters, a lot of bands, that we can control
individually. And in addition to that,
we have many channels. So as you see, our architecture
will allow us to actually have a lot of channels,
a lot of stages, and a lot of parameters
in each one of them, which is kind of
difficult to control, but it gives you a lot of
power to port your algorithms into this platform. Going into code,
I want to show how this effect is instantiated. First, we import the Dynamics Processing Effect module in your Java code. Then we are using the
builder design pattern, where we are going to configure
what we want the Dynamics Processing Effect to do. And then we actually create
a Dynamics Processing Effect. In this case, we
are saying that we want to favor frequency
resolution instead of time resolution. We have different flavors of processing. With the frequency-resolution variant, you have more control over how your frequencies are distributed in the frequency domain. With the time-resolution variant, you have more control over how big your blocks in time will be. So if you want fast response, you would probably like the time-resolution variant. We can also say how
many channels we want, and then we can say in the
stages if we want to use pre EQ and how many bands. In this case, we want to
use pre EQ and 12 bands. We want to use a multi-band
compressor and eight bands, and we want to use post EQ 16
bands and also the limiter. So pretty much we
are using everything that we have access to. Then we can set some
parameters that are preferred. We are kind of hinting the system: we would like to have frames that are 10 milliseconds long. That doesn't mean the system is going to comply, but it is going to make a best effort to offer us 10-millisecond frames. We can ask for something smaller or something longer; it depends on what your algorithm needs. Then we create the config object, and we are ready to go to the next stage.
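As a rough sketch, not the exact code from the slides, the configuration described here might look like this in Java on API 28 or later; the variable names are illustrative:

```java
import android.media.audiofx.DynamicsProcessing;

// Favor frequency resolution, 2 channels, pre EQ with 12 bands,
// multi-band compressor with 8 bands, post EQ with 16 bands, limiter on,
// and a hint that we would like 10 ms frames.
DynamicsProcessing.Config config =
        new DynamicsProcessing.Config.Builder(
                DynamicsProcessing.VARIANT_FAVOR_FREQUENCY_RESOLUTION,
                2,          // channel count: left and right
                true, 12,   // pre EQ in use, 12 bands
                true, 8,    // multi-band compressor in use, 8 bands
                true, 16,   // post EQ in use, 16 bands
                true)       // limiter in use
        .setPreferredFrameDuration(10.0f) // best effort, not guaranteed
        .build();
```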
In the next stage, we haven't created the Dynamics Processing Effect yet. We would like to get hold of the multi-band compressor and change some of the default parameters. So we get hold of it, and we iterate over the eight bands in this toy example. We get hold of each band and set some of the parameters: we set the attack time to be 50 milliseconds, and we set the release time to be 100 milliseconds. We can set all the other parameters to whatever we want there. And this can really mimic the algorithms that we already have in place, or the specs that your sound designers or your sound engineers require for your application. And finally, we are going to
create the Dynamics Processing Effect, the one
that I call my DPE. That one is my actual Dynamics Processing Effect that is going to do all the signal processing. A very important part is that we have the sessionId. This is something that, as a developer, gives you the flexibility to create a media player or start playing a sound, then create an effect and put them together, because you are using the same sessionId. And we are going to see that in the next example.
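Again as a hedged sketch rather than the talk's exact code, tuning the compressor defaults on the config built above and then attaching the effect to an audio session could look like this; the helper name and the fixed two-by-eight loop are assumptions:

```java
// Hypothetical helper: tune the multi-band compressor defaults on a Config
// built as above, then attach the effect to an existing audio session.
static DynamicsProcessing createTunedDpe(DynamicsProcessing.Config config,
                                         int sessionId) {
    for (int channel = 0; channel < 2; channel++) {      // both channels
        for (int band = 0; band < 8; band++) {           // eight MBC bands
            DynamicsProcessing.MbcBand mbcBand =
                    config.getMbcBandByChannelIndex(channel, band);
            mbcBand.setAttackTime(50.0f);    // 50 ms attack
            mbcBand.setReleaseTime(100.0f);  // 100 ms release
            config.setMbcBandByChannelIndex(channel, band, mbcBand);
        }
    }
    // Priority 0; the sessionId ties the effect to your player or recording.
    DynamicsProcessing dpe = new DynamicsProcessing(0, sessionId, config);
    dpe.setEnabled(true);
    return dpe;
}
```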
So to recap, we are creating the Dynamics Processing Effect, and we have access to a lot of parameters that we can control, either at init time or at runtime. At runtime we can go and change which frequency we want to affect and how much energy we want at each frequency. We can control everything in real time. And in this graph, we see that we have channels, and inside of the channels we have different stages: pre EQ, multi-band compressor, post EQ, and the limiter. And inside of each
one of them we have bands and multiple
parameters that we can control. So one very quick example. I am going to use
this equalizer demo and I’m going to show
you the code that I used for this application. So can we switch to
the demo, please? So I have a very
simple application that is called Equalizer. I am going to play the music, and I am not doing any hearing loss simulation; I just want it to play here. [MUSIC] And I'm going to start
modifying the bands in there. So. [MUSIC] OK, that was the demo. So I just wanted to
show that it is running, and I’m going to show
you how it's working. So back to the slides, please. So we start by creating a media player. We go to the code and we
just use the same code that we have used for many
years to create a media object. In this case, it’s
the media player. And I’m playing a classical
music example in there. But the most important thing
is that when I create that, we have the sessionId
as I mentioned before. We create a media
player and we are going to use the sessionId
in a second, when we create the Dynamics Processing Effect. So I am going to go inside
my function that is there, the last one, called createDP. That is my own custom function. I am using the builder design pattern, where I say how many channels I want and how many bands I want. I am just going to use the
first one that is the EQ. And I am going to create
in each one of the EQ bands what is the level,
the initial level. So I know there is
a lot of code there. But the most important part is
that once we have created this, at runtime, when I move the sliders, what am I doing? If you look at the section that I just highlighted, every time that I move one of the sliders, I go and find the band of interest for that EQ and just assign a different value there. I say I want to attenuate or enhance that band. This is all. This is all the code that you need to run a very powerful, very configurable EQ in your application.
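Put together, a minimal version of that equalizer demo might look like the sketch below; R.raw.classical, the ten-band layout, and the class name are assumptions, not the code shown on the slides:

```java
import android.content.Context;
import android.media.MediaPlayer;
import android.media.audiofx.DynamicsProcessing;

class EqualizerDemo {
    private MediaPlayer player;
    private DynamicsProcessing eq;

    void start(Context context) {
        player = MediaPlayer.create(context, R.raw.classical);
        DynamicsProcessing.Config config =
                new DynamicsProcessing.Config.Builder(
                        DynamicsProcessing.VARIANT_FAVOR_FREQUENCY_RESOLUTION,
                        2,          // stereo
                        true, 10,   // pre EQ only, 10 bands
                        false, 0, false, 0, false)
                .build();
        // The player's sessionId ties the effect to its audio.
        eq = new DynamicsProcessing(0, player.getAudioSessionId(), config);
        eq.setEnabled(true);
        player.start();
    }

    // Called from the slider listener: band is the slider index.
    void onSliderChanged(int band, float gainDb) {
        DynamicsProcessing.EqBand b = eq.getPreEqBandByChannelIndex(0, band);
        b.setGain(gainDb);                     // attenuate or boost this band
        eq.setPreEqBandAllChannelsTo(band, b); // apply to both channels
    }
}
```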
And now I'm going to go to the last part of my talk. I know that is a lot of data, but let's look inside the Sound Amplifier. So our Sound Amplifier, the application that Brian demoed before, is built on top of the Dynamics Processing Effect. We are using the Dynamics Processing Effect that anyone who is using Android Pie or above has access to. And the architecture has two parts that I want to emphasize. The first one is the
control part, the top part. The user has access to the UI and can move some sliders. When they move the sliders, the application computes some parameters that get sent to the Dynamics Processing Effect; it says, change the EQ here, change this band here, move things here. This is all done in the UI, I
will say in the control part. But all the number-crunching,
all the magic is happening in the
Dynamics Processing Effect that belongs to the OS. The other part that I want to show is the audio part. We are doing some noise
reduction analysis in the audio, but at
the end we are just selecting some parameters
that get sent to the Dynamics Processing Effect to
tell the system how to process the audio. How do we do the first part, the UI part? Our problem is that we have two sliders, and we would like them to change the parameters of the Dynamics Processing Effect. What type of changes do we
want in those parameters? We are going to use
some EQ and compression. We are going to do what
Brian showed us that hearing aids do: they change some of the parameters, affecting the frequency response at different frequencies in different ways. In this
case, the top curve is telling me to boost some
of the low frequencies, and the bottom curve is telling
me to apply some compression also to the low frequencies. But still, that’s
a lot of parameters for a user to control directly. We could simplify these
curves by just saying, well, you know, let’s take
the slope of the curve. I'm just going to represent the whole
curve with all these parameters with a single number. And if I do that
for both curves, I could plot that in a
two-dimensional space. I could say the first
curve is one number, the second curve
is another number. I have an xy
coordinate and I just represent that in that space as
the circle that you see there. I could do the same
for another curve. And it’s a different point
that I get in xy coordinate. The same for another
curve, another curve. So we can get a lot of
curves that get represented, even though they have a lot of
parameters that get represented in the xy coordinate. Very simple. The magic, the beauty of this is
that now, with an xy coordinate, a map, we can just take our boost and our fine tuning and pretty much navigate that xy coordinate space very easily: go and find the point that is closest to what the user wants, and that gets translated to a lot of parameters that get sent to the Dynamics Processing Effect. So we're actually doing multidimensional mapping. We are taking something that has many dimensions and reducing it to two dimensions. Of course, this is a toy example; there is a lot of math involved. But you get the idea.
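To make the idea concrete, here is an illustrative nearest-point lookup, not the Sound Amplifier implementation: each precomputed curve is stored as an (x, y) point plus the full parameter set it stands for, and the sliders simply pick the closest stored point.

```java
// Illustrative only: reduce many parameters to a two-dimensional control space.
class Preset {
    final float x, y;        // position in the boost / fine-tuning space
    final float[] eqGainsDb; // the full parameter set this point represents

    Preset(float x, float y, float[] eqGainsDb) {
        this.x = x; this.y = y; this.eqGainsDb = eqGainsDb;
    }

    static Preset nearest(float boost, float fineTune, Preset[] presets) {
        Preset best = presets[0];
        float bestDist = Float.MAX_VALUE;
        for (Preset p : presets) {
            float dx = p.x - boost, dy = p.y - fineTune;
            float d = dx * dx + dy * dy;
            if (d < bestDist) { bestDist = d; best = p; }
        }
        // best.eqGainsDb is then pushed to the Dynamics Processing Effect.
        return best;
    }
}
```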
But one of the other important things that we did here is that we took many curves from many people, from many audiograms, and with those audiograms we managed to come up with all these curves that represent the kind of hearing enhancement that a lot of people will benefit from in the real world, when they're in a coffee shop or in a restaurant, in different places. OK. And to finish, I want
to talk a little bit about noise reduction. With noise reduction, what
we do is, in the bottom part, we get hold of the audio. We are looking at the audio that is coming through the microphone, but we would like to get rid of sounds that are there all the time, that stay there for a long time. For example, that AC sound that is really annoying. So if we start looking at the spectrum of the sound, and we take one spectrum, then another spectrum, then another spectrum, we can see what is pretty much there for a long period of time, versus something that changes a lot, like my voice, which is there for a second and then it's not there the next instant. When we compute that, we can go and tell the DPE: you know these frequencies that have been there for a long time? Please bring them down and attenuate them.
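As a rough sketch of that idea, assuming per-band energies are already being computed from the microphone spectra; this is not the Sound Amplifier algorithm, and the smoothing and threshold values are made up:

```java
import android.media.audiofx.DynamicsProcessing;

// Track a slow running average of each band's energy; bands that stay close
// to their average are treated as stationary noise and attenuated.
class StationaryNoiseReducer {
    private final DynamicsProcessing dpe;
    private final float[] avgEnergyDb;                     // slow average per band
    private static final float SMOOTHING = 0.95f;          // assumed constant
    private static final float STATIONARY_RANGE_DB = 3.0f; // assumed constant
    private static final float ATTENUATION_DB = -15.0f;    // assumed constant

    StationaryNoiseReducer(DynamicsProcessing dpe, int bandCount) {
        this.dpe = dpe;
        this.avgEnergyDb = new float[bandCount];
    }

    // Call once per analysis frame with the current per-band energies.
    void onSpectrum(float[] bandEnergyDb) {
        for (int band = 0; band < bandEnergyDb.length; band++) {
            avgEnergyDb[band] = SMOOTHING * avgEnergyDb[band]
                    + (1 - SMOOTHING) * bandEnergyDb[band];
            boolean stationary = Math.abs(bandEnergyDb[band] - avgEnergyDb[band])
                    < STATIONARY_RANGE_DB;
            DynamicsProcessing.EqBand eqBand =
                    dpe.getPostEqBandByChannelIndex(0, band);
            eqBand.setGain(stationary ? ATTENUATION_DB : 0.0f);
            dpe.setPostEqBandAllChannelsTo(band, eqBand);
        }
    }
}
```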
Now, just to recap: we looked inside the Dynamics Processing Effect, we built our own equalizer, and we talked about the Sound Amplifier, noise reduction, and the control UI. Now we are going to go to John and Stanley to learn more about Android hearing aid support. Thank you. [APPLAUSE] JONATHAN HURWITZ:
Thanks, Ricardo. Let’s talk about hearing aids. So first, I’m going to
go over what the existing landscape looks like and where what we've built fits in. And then I'm going
to hand it off to Stanley, who’s going to talk
through exactly how to build this into your devices if you’re
an OEM or a hearing aid maker. So today, if you want to get
audio from your Android phone to your hearing aids,
it’s not so easy. Typically, you’ll need
what’s called a streamer. And this is a small
piece of hardware that you wear on your
body that allows the phone to relay an audio signal
to the hearing aids. And the reason why this exists
is because of battery life. So Bluetooth Classic
is just not good enough for the power constraints
of a hearing aid device. And we can’t ask hearing aid
users to replace the battery on the order of hours or days. And so what the
streamer does is it will take a classic
Bluetooth signal from the phone to the
streamer and then use a proprietary low-power link to
send the audio to the hearing aids. But this introduces
two problems. The first is that it’s
just simply something else to worry about. If you lose it or damage
it, you’re out of luck. And the second is it’s
another thing to keep charged. We have a growing
ecosystem of devices. All of us have smart watches,
smartphones, tablets, and more, and to have yet
another thing you need to plug in every night or
every other night is a pain. Because if you wake up
and it’s not charged, you’re out of luck. So we challenged
ourselves to do better, and we worked really hard
these past few months. We think we really have. So you’ll be able to
take calls, stream music, all without an
accessory, because we built direct streaming
support for hearing aids right into Android. But we didn’t stop there. We really listened
to users, and we know that battery life
is incredibly important. And so what we’ve actually
done is challenged ourselves to build battery life into
the hearing aids that’s better than a regular headphone,
and the team delivered. We’ve been able to deliver
week-long battery life, even with streaming support. This isn’t something that
any headphone can do. So how do we do it? Well, Bluetooth
Classic wasn’t enough, so we had to go
to something else. And we turned to Bluetooth
Low Energy, or BLE for short. To get audio from the phone to
the hearing aids, what we do is we actually open
up two audio channels, called low energy
connection oriented channels, where
we send the data. We have one channel
per hearing aid device. And the benefit of
using BLE over Classic is that all the power
savings we get we can push back to the
user as battery life. So there’s no need to
recharge your hearing aids every single night. And in addition, all
of the important ways in which you interface
with your hearing aids are built directly
into Android, which means right out
of the box you’ll be able to pair, connect
and stream, so there’s no need for a third party app. We do really love third
party apps, though, because they allow you
to enhance your listening experience. You can customize EQ and
adjust your hearing profile based on where you are,
whether you’re in a restaurant, whether you’re outside, or
in a presentation like this. And so they’re not going away. In fact, we’ve actually
created some new APIs which Stanley is
going to talk through to make your life easier and
to enable hearing aid companies to build really killer apps. So we'll be launching with
support for select GN hearing aids and Cochlear implantables. And we’re really excited
about these partnerships. We worked super hard
with these companies over the past few months to
build some killer products that we think users
are going to love. But we’re also excited for
all of the other hearing aid companies that are
in the pipeline. So keep your eyes out for
other people and companies who will be showing up here
within the next few months. So phone calls, audio,
music, podcasts and more, all without an accessory and
all with a week of battery life. This has been an amazing
effort and partnership, and we’re super
excited about what this is going to do for
the 466 million people who have disabling hearing loss. I’m going to hand
it over to Stanley now, who’s going to talk through
exactly how to build this into your device
if you're an OEM. Thank you. [APPLAUSE] STANLEY TNG: Thanks, John. I'm Stanley Tng from the
Android Bluetooth Team. I’ll be talking through
how Android devices can support hearing aids directly. So the very first thing
to do is go and look at our open and publicly available Android hearing aid specification, at the link given here. Once you read the spec,
you want to make sure that the Android device
that you want to [INAUDIBLE] can support ASHA. I’ll be going through the list
of requirements a bit later. Then reach out to Google to
receive reference hearing aid devices for your testing. All of our development
is done in open source and on the AOSP code base. So with your reference devices
and the AOSP open source code, you have everything you
need to build a solution. But I must stress,
it’s very important that you test thoroughly
with the reference device to make sure it’s correct. Our test plan will be
made available for you shortly to test for both
quality and interoperability. So let’s talk about what are the
Android platform requirements. Really good news. There is no new hardware
or Bluetooth chip firmware needed to support ASHA. You can just use any BT SIG-compliant Bluetooth chipset. We didn't want to define any new
vendor-specific HCI commands. That means you could still
use your generic Bluetooth firmware that is provided
from your chip provider. However, we do prefer
a BT 5.0 chipset. As John has mentioned,
we are using LE Connection-oriented Channels, or CoC for short. You want to make sure that
your implementation of CoC is good on your
Android platform. Therefore, please run
the following two tests– the CTS verifier
and the ACTS test. We have written special
tests on CoC for you. Now, do approach your
Bluetooth chip vendor and make sure they honor
these two parameters: the Min_CE and Max_CE parameters. These two parameters are
used whenever the connection parameters are updated, so
it’s important they honor it so that you have
good quality audio. Lastly, for your
audio HAL module, please make sure you implement
these two new audio routes that are needed for hearing aids. That's it. Like I said, no new hardware. Now, if you are a hearing aid maker, pay attention, because the following slides have useful information
on how to build an ASHA-compatible device. Firstly, go read our open and
publicly available ASHA spec. It was posted there
since last August. To start building, you need to
obtain a Pixel 3 phone running Q, since Pixel 3 will be the
first phone to support ASHA. As I’ve said before,
our implementation is open source and in
the AOSP source code. Please look at AOSP to see how
we implement the audio source, and this should help you implement your audio sink. We have built new
APIs in Android Q to make your life easier. Remember to use these APIs
when writing your app. I’ll talk more about
these APIs in a bit. Finally, I must stress
test thoroughly. We’ll be releasing
our test plans and will also be
compiling common issues and sharing them as FAQ
to make life easier. Let’s talk about the new APIs. So for Android Q, we are
defining a new Bluetooth profile called the
hearing aid profile. With this new profile, your app can detect whether the phone has hearing aid support or not. This new API allows your app to install, manage, and configure the paired devices. Furthermore, apps can still continue to use the existing LE services on your hearing aids. This means that all the special features and enhancements that are available on your hearing aids are still possible with your app. Just a small code segment to demonstrate how you could check whether a phone supports the hearing aid profile: we have defined a new profile ID, and you can imagine using this at your app level to verify that the phone is compatible and, if not, throw a notification to the user.
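The slide's snippet isn't captured in the transcript, so here is a hedged approximation of that kind of check on Android Q using the new profile ID; notifyUserNotSupported is a hypothetical helper in your app:

```java
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothHearingAid;
import android.bluetooth.BluetoothProfile;
import android.content.Context;
import android.os.Build;

// Ask for a proxy to the new HEARING_AID profile and fall back gracefully
// if the platform doesn't support it.
void checkHearingAidSupport(Context context, BluetoothAdapter adapter) {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.Q) {
        notifyUserNotSupported(); // hypothetical helper in your app
        return;
    }
    boolean requested = adapter.getProfileProxy(context,
            new BluetoothProfile.ServiceListener() {
                @Override
                public void onServiceConnected(int profile, BluetoothProfile proxy) {
                    BluetoothHearingAid hearingAid = (BluetoothHearingAid) proxy;
                    // e.g. hearingAid.getConnectedDevices() to manage devices
                }

                @Override
                public void onServiceDisconnected(int profile) { }
            },
            BluetoothProfile.HEARING_AID);
    if (!requested) {
        notifyUserNotSupported(); // profile not available on this phone
    }
}
```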
I'd like to cover a few more things to watch out for. Because the Android device is streaming audio to two different hearing devices on two separate channels, there is a need to synchronize
the two audio streams. Hearing aid devices must have
their own stereo sync method. However, on the
Android side we will assist by sending the triggers
to the device whenever this sync needs to happen. Stay tuned and
check the ASHA spec, because we are adding
more details there. We are using the LE connection
on the channels, especially the credit-based flow control. So this credit is controlled
by the hearing devices, to let Android know when to
send the next audio data. Therefore, this flow control
is an important mechanism to regulate the amount of data
that the Android device sends to the hearing aids. As I said before,
Pixel 3 and Pixel 3 XL will be the first devices
is to support ASHA. If you are keen on
building quickly, go grab one to
start development. Thank you. Let me hand it back to Brian. BRIAN KEMLER:
Thank you, Stanley. [APPLAUSE] So this feature, I think, is one
of the most important features that we’re launching,
because it means users are going to have the best
possible experience on Android with hearing aids to stream
music, stream anything. And Stanley was really the prime
mover technically behind this, so big shout out to Stanley
for making that happen. [APPLAUSE] So we talked today
about hearing loss and helped you understand
it a little bit better. And then Ricardo
dived in and helped you understand how to
build your own equalizer and effects into your
software and your hardware, and that was awesome. Running out of time. So if you guys
are interested, we have another talk tomorrow,
Demystifying Android Accessibility
Development, which is a very developer-focused talk,
tomorrow at 9:30 AM on stage 7. And we also have two
really cool sandboxes, the accessibility sandbox
and the experiment sandbox, which should be right out there. So I hope everybody had a
chance to learn something new. And I hope you all have
a great I/O. Thank you so much for coming out, and hope
to see you around or next year. [MUSIC]