Google I/O, Google’s annual developers conference held in May, featured many announcements, accomplishments and plans for 2017. Of particular interest, the Android Location and Context Team’s talk, “Android Sensors & Location: What’s New and Best Practices,” is available online.
This followed a keynote by CEO Sundar Pichai on solving problems at scale with deep neural networks, machine learning algorithms and artificial intelligence (AI). He also spoke about a shift from a mobile-first model to AI-first, with Google applying AI and machine learning across every product area. Other keynotes covered updates to Assistant, Photos, YouTube (including Super Chat), Android and virtual reality (VR).
The Android Location and Context team — Marc Stogaitis, Wei Wang, Souvik Sen and myself — spoke about background location, location accuracy, activity recognition, Android sensor hub, Android sensors and the future of location and context.
To explain why battery life is so important, we showed detailed graphs of the power cost of using different phone subsystems, such as Wi-Fi, GNSS and cellular data connections.
Then we introduced the background location limits (at the 4:30 point in the posted video) coming in Android O, the latest version of the operating system. These limits prevent applications from misusing location APIs while in the background, saving users’ battery. We also gave examples of how to make your app ready for these upcoming background changes.
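As an illustration of the kind of change involved (this is not code from the talk), here is a minimal sketch of a background-friendly location request using Google Play services’ FusedLocationProviderClient. The interval and batching values are example choices, and under Android O the system still throttles delivery to backgrounded apps regardless of what is requested here.

```java
// Minimal sketch (illustrative): a background-friendly location request using
// Google Play services' FusedLocationProviderClient. Interval values are
// examples; Android O limits how often a backgrounded app actually receives
// updates, no matter what is requested.
import android.Manifest;
import android.content.Context;
import android.content.pm.PackageManager;
import android.os.Looper;
import com.google.android.gms.location.FusedLocationProviderClient;
import com.google.android.gms.location.LocationCallback;
import com.google.android.gms.location.LocationRequest;
import com.google.android.gms.location.LocationServices;

public class BackgroundFriendlyLocation {

    public static void start(Context context, LocationCallback callback) {
        if (context.checkSelfPermission(Manifest.permission.ACCESS_FINE_LOCATION)
                != PackageManager.PERMISSION_GRANTED) {
            return; // Location permission must be granted before requesting updates.
        }
        FusedLocationProviderClient client =
                LocationServices.getFusedLocationProviderClient(context);

        LocationRequest request = LocationRequest.create()
                .setPriority(LocationRequest.PRIORITY_BALANCED_POWER_ACCURACY)
                .setInterval(10 * 60 * 1000)      // Ask for updates roughly every 10 minutes.
                .setMaxWaitTime(30 * 60 * 1000);  // Allow batching to reduce device wake-ups.

        client.requestLocationUpdates(request, callback, Looper.getMainLooper());
    }
}
```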
We showed plans for location accuracy improvements (12:50) coming later this year and compared the existing and upcoming positioning algorithms.
We also covered tools that help analyze GNSS measurements: How strong are the individual signals? How accurate are the range measurements? With these tools, developers now have direct insight into the lowest layers of a GNSS receiver. Then came activity recognition algorithms (15:40) and how deep neural networks will improve their precision and help advance the field.
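For context on what an app sees today, here is a minimal sketch of subscribing to activity recognition results through the Google Play services API. The 60-second detection interval and the manifest-registered broadcast receiver are assumptions made for illustration, not details from the talk.

```java
// Minimal sketch (illustrative): receiving activity recognition results from
// Google Play services. The detection interval is an example value; the
// platform coalesces and batches detections to save power.
import android.app.PendingIntent;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.util.Log;
import com.google.android.gms.location.ActivityRecognition;
import com.google.android.gms.location.ActivityRecognitionResult;
import com.google.android.gms.location.DetectedActivity;

public class ActivityUpdatesReceiver extends BroadcastReceiver {

    public static void subscribe(Context context) {
        Intent intent = new Intent(context, ActivityUpdatesReceiver.class);
        PendingIntent pendingIntent = PendingIntent.getBroadcast(
                context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
        // Request detections roughly every 60 seconds.
        ActivityRecognition.getClient(context)
                .requestActivityUpdates(60_000L, pendingIntent);
    }

    @Override
    public void onReceive(Context context, Intent intent) {
        if (!ActivityRecognitionResult.hasResult(intent)) {
            return;
        }
        DetectedActivity activity =
                ActivityRecognitionResult.extractResult(intent).getMostProbableActivity();
        Log.d("ActivityUpdates", "type=" + activity.getType()
                + " confidence=" + activity.getConfidence());
    }
}
```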
I spoke about the Android Sensor Hub (20:27) and how Google is leveraging the capabilities of an always-on, low-power processor in Android phones. The sensor hub allows Google to move algorithms such as activity recognition, geofencing and gestures off the main application processor and onto the low-power sensor hub. We then went into detail on the new sensor features (25:55) and improvements to the compass (28:34).
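One concrete way apps benefit from the sensor hub is hardware sensor batching: events are buffered in the hub’s FIFO so the application processor can stay asleep between deliveries. The sketch below assumes a device whose accelerometer supports batching; the sampling rate and report latency are example values.

```java
// Minimal sketch (illustrative): sensor batching, which lets supported hardware
// buffer events in the sensor hub's FIFO so the application processor can stay
// asleep between report deliveries. The latency value is an example.
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class BatchedAccelerometer implements SensorEventListener {

    public void start(Context context) {
        SensorManager sensorManager =
                (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        Sensor accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        if (accelerometer == null) {
            return;
        }
        // Sample at ~50 Hz but allow up to 10 seconds of events to be batched
        // in hardware before they are delivered to the app.
        sensorManager.registerListener(this, accelerometer,
                20_000 /* samplingPeriodUs */, 10_000_000 /* maxReportLatencyUs */);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Batched events arrive here, each with its original timestamp.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // No-op for this sketch.
    }
}
```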
Finally, we looked into the future (33:28). I covered Project Elevation, Accurate Indoor Location and dual-frequency GNSS. Closing thoughts covered how more signals will be added to the low-power, always-on compute domains so that the phone becomes more aware and intelligent: simplifying users’ interactions, augmenting human memory and knowledge, and helping users understand themselves and the world around them.
Access to Raw GNSS Measurements
In related news, our new web page is up and running. The site provides all the details about raw GNSS measurements in Android, along with our analysis tools, which anyone can download. Our previous site was accessible only to people who signed up as Google partners, but the new site is open to everyone.
Android apps typically access GNSS chipsets through a filter, which improves the GNSS location output for the majority of use cases. Filters use additional sensors, such as motion sensors, to improve the end user experience. However, filtering is not appropriate for some applications used by professionals such as researchers and original equipment manufacturer (OEM) developers. The Android Framework provides access to raw GNSS measurements on some Android devices. The page lists Android devices that support raw GNSS measurements as well as tools that help you log and analyze GNSS data.
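For developers who want to experiment, below is a minimal sketch of registering for raw GNSS measurements through the framework API, assuming a device on the supported list running Android 7.0 (API 24) or later with location permission already granted; the logged fields are example choices.

```java
// Minimal sketch (illustrative): logging raw GNSS measurements through the
// Android framework API on supported devices (API 24+). ACCESS_FINE_LOCATION
// must already be granted; the fields printed here are examples.
import android.annotation.SuppressLint;
import android.content.Context;
import android.location.GnssMeasurement;
import android.location.GnssMeasurementsEvent;
import android.location.LocationManager;
import android.util.Log;

public class RawGnssLogger {

    private final GnssMeasurementsEvent.Callback callback =
            new GnssMeasurementsEvent.Callback() {
                @Override
                public void onGnssMeasurementsReceived(GnssMeasurementsEvent event) {
                    for (GnssMeasurement m : event.getMeasurements()) {
                        Log.d("RawGnss", "svid=" + m.getSvid()
                                + " constellation=" + m.getConstellationType()
                                + " cn0DbHz=" + m.getCn0DbHz()
                                + " prr=" + m.getPseudorangeRateMetersPerSecond());
                    }
                }

                @Override
                public void onStatusChanged(int status) {
                    Log.d("RawGnss", "status=" + status);
                }
            };

    @SuppressLint("MissingPermission")
    public void start(Context context) {
        LocationManager locationManager =
                (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        // Measurements are typically produced only while a GNSS fix is being
        // computed, so an app would normally also request location updates.
        locationManager.registerGnssMeasurementsCallback(callback);
    }
}
```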
[Source: GPS World]