What’s Google I/O?
Google I/O is an annual software developer-focused conference which features a keynote on the latest updates and announcements by Google. The conference also hosts in-depth sessions focused on building web, mobile, and enterprise applications with Google and open web technologies such as Android, machine learning, TensorFlow, Chrome, Chrome OS, Google APIs, Google Web Toolkit, and App Engine.
In this blog post, I’m going to share my favorite announcements from the conference. I hope these highlights serve makers, app developers, and web developers.
Abstract: Angular has a flag that will cut hundreds of kilobytes off of your bundles, improve mobile experiences, and allow you to dynamically create components on the fly. Learn about these changes and what they mean for your applications.
Android Studio 3.2
Google has worked to improve the application model for Android for simplicity, power, and developer speed. I’m curious to test the speed of the new Android emulator.
Abstract: The last couple of years have seen a plethora of new features and patterns for Android developers. But how do developers know when to use existing APIs and features vs. new ones? This session will help developers understand how they all work together and learn what they should use to build solid, modern Android applications.
For the makers and tinkerers among our readers, check out Google AIY projects. I find it interesting that you can go to your local Target store, pick up a Google AIY kit, and start experimenting with machine learning, voice control, and computer vision.
The following MagPi issue covers the AIY voice kit:
Abstract: AIY efforts at Google put AI into various maker toolkits, to make things more playful and, more importantly, to help you solve real problems that matter to you and your communities. Join this session to learn how you can use these kits to start adding natural human interaction to your maker projects. It will feature demos on the Voice and Vision Kits, and some amazing AIY experiments built by the makers community around the world.
Abstract: Come watch a single developer code a beautiful app in real-time from the ground-up that runs natively on iOS and Android, all from a single codebase. Along the way, learn how to marry Flutter’s latest multi-platform reactive UI elements, accelerometer, and audio capabilities with powerful Firebase SDK functionality. See this app painted to life piece-by-piece in under 40 minutes thanks to Flutter’s sub-second hot reload developer experience.
Google’s ARCore framework received several notable updates. ARCore enables developers to write Android apps that sense their environment, so developers can place 3D content layered over a view of the real world. This technology unlocks an amazing class of games, collaboration tools, and design applications that serve users in their physical spaces. The first version of ARCore focused on horizontal surfaces; Google has now upgraded ARCore to sense vertical surfaces (such as walls) and images (i.e., custom tracking markers). With the new Cloud Anchors API, Google also offers a way to share anchors, or points of interest, among multiple users. Let’s say you’re making an AR pool game using your dining room table: multiple players of your game can collaboratively target the same dining room table and participate in a shared game experience. It should also be noted that you can “instant preview” ARCore apps using the ARCore Unity tools, which really helps you reduce your iteration cycles.
Abstract: Learn how to create shared AR experiences across iOS and Android and how to build apps using the new APIs revealed in the Google Keynote: Cloud Anchor and Augmented Images API. You’ll come out understanding how to implement them, how they work in each environment, and what opportunities they unlock for your users.
What’s new with Android on Chromebooks
On InspiredToEducate.NET, we’re passionate about serving students, teachers, and makers of all ages. Since my wife works as a college professor, we’re constantly geeking out over various tools in educational technology. It’s very clear that Chromebooks have made a positive impact in K-12 education. According to this article, Chromebooks command 58% of laptop devices in the K-12 market, which translates to millions of devices. It’s cool to see Google expand the capabilities of Chromebooks using its innovations in Android.
Abstract: With the Play Store on Chromebooks gaining traction, developers need to understand how to build high-quality apps and content for the new form factor. Attend this session to learn about adding support for larger screens, mouse and trackpad support, keyboard support (i.e. shortcut keys), free-form resizable windows, and stylus support for devices that have them.
Abstract: Android Things is Google’s platform to support the development of Internet of Things devices. This talk will provide an update on the program and the future roadmap. Learn more about the breadth of hardware reference designs, the operating system, building apps, device management, and support from chip vendors. It will also discuss use-cases where edge computing can be used, and examples of prototype-to-production that demonstrate how Android Things is ready for commercial products.
Abstract: Sceneform SDK is a new library for Android that enables the rapid creation and integration of AR experiences in your app. It combines ARCore and a powerful physically-based 3D renderer. In this session, you’ll learn how to use the Sceneform SDK, and how to use its material system to create virtual objects that integrate seamlessly with the environment.
Over the years, Google has focused its energy on advancing machine learning capabilities. It has now entered a phase where application developers can weave the power of machine learning “brains” (machine learning models) into their applications. Google TensorFlow enables app developers to train powerful neural network models so that computers can learn and use that intelligence in applications. In Google Photos, I can do weird searches like “flowers in macon, ga.” Because Google has fast neural networks that can identify flowers, it can quickly return a list of photos with flowers matching my expectations. Wouldn’t it be cool if you could put these capabilities into your Raspberry Pi or Android app? TensorFlow Lite enables you to leverage pre-trained TensorFlow models in your apps. I’m very impressed by its focus on speed and efficiency.
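To make the workflow concrete, here is a minimal sketch of converting a model to the TensorFlow Lite format and running it with the TFLite interpreter, as a mobile or Raspberry Pi app would. This assumes TensorFlow 2.x; the tiny matmul “model” and its weights are just illustrative stand-ins for a real pre-trained network:

```python
import numpy as np
import tensorflow as tf

# Illustrative stand-in for a trained model: y = x . W with fixed weights.
weights = tf.constant(np.arange(12, dtype=np.float32).reshape(4, 3))

@tf.function(input_signature=[tf.TensorSpec((1, 4), tf.float32)])
def model(x):
    return tf.matmul(x, weights)

# Convert the function to the TensorFlow Lite flat-buffer format.
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [model.get_concrete_function()])
tflite_bytes = converter.convert()

# Run the converted model with the TFLite interpreter (the on-device step).
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.ones((1, 4), dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
print(result)  # each output is a column sum of the weights
```

In a real app, you would load a `.tflite` file produced ahead of time rather than converting on-device; the interpreter API stays the same.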
Abstract: TensorFlow Lite enables developers to deploy custom machine learning models to mobile devices. This technical session will describe in detail how to take a trained TensorFlow model, and use it in a mobile app through TensorFlow Lite.
The following video demos some of Google’s cool innovations in computer vision. Using Google Lens, the Photos app can identify objects in view. In the future, you’ll be able to point your phone at a store and, using an AR view, see ratings, descriptions, and pictures related to that store.
Join the conversation at our next Google Developer Group.
Interested in digging deeper into these technology announcements? What are the consequences of connecting some of these ideas together? What opportunities do these capabilities give to our local developer community?
We’ll dig deeper into the latest announcements from the Google I/O conference. We’ll discuss the various pathways for leveraging these technologies in your career. We’re excited to discuss how these tools can benefit local startups, makers, and businesses in Orlando, FL.
When: May 24, 2018 – 6pm to 9pm