Building Chat Bot Apps with Google Actions

r2d2

In science fiction, we have dreamed about the day when we’ll talk to our computers to make things happen. In Star Trek, crew members talk to the Holodeck computer to “program” and explore amazing virtual experiences. Tony Stark (a.k.a. Iron Man) constantly gets situational awareness from Jarvis during battle simply by talking to his device. We’re still a long way from Holodecks, R2-D2, and Iron Man, but as developers and makers, we can explore the potential of voice interactions with mobile devices today.

The Google Actions toolkit enables you to integrate your services into the Google Assistant’s voice command interface. This technology touches millions of devices, including phones, cars, and smart speakers. You can also integrate with services provided by Google or third parties.

This past weekend, our local Google Developer Group of Central Florida organized a hackathon to explore applications of voice user interfaces and Google Actions. We enjoy organizing community workshops like this and love seeing our community come together. It’s always a great opportunity to learn, meet people, and generate new ideas.

Google Actions Hackathon

In general, Google Actions work well in three major use cases: users on the go, people starting their day, and people relaxing at the end of the day. For my app, I tried to think of a Google Action that would support our GDG’s leadership team. We recently adopted a Trello board to help us organize tasks for our club. If you’re not familiar with Trello, it’s a simple task management system popular with Agile teams (see a screenshot below). As a busy dad and professional, I typically think of stuff that needs to be accomplished for the GDG while I’m driving.

Trello board

I decided to create a simple Google Action that lets me capture a task and share it on our leadership Trello board. I approached the project in three phases.

1. Get to know the Google Actions API: I used a variety of resources to get to know the Google Actions interface, but I found this codelab especially helpful. After finishing it, I was able to extend the tutorial into my own project.

https://codelabs.developers.google.com/codelabs/actions-1

2. Build Trello integration code to add a task to a list: On my laptop, I started playing around with a few options for adding task information to a list. I found that the “node-trello” package for Node.js worked really well.

https://github.com/adunkman/node-trello
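
For example, here’s a minimal sketch of adding a card to a Trello list with node-trello; the API key, token, and list id are placeholders you’d generate from your own Trello account:

var Trello = require("node-trello");

// Placeholder credentials -- generate your own at https://trello.com/app-key.
var t = new Trello("YOUR_API_KEY", "YOUR_USER_TOKEN");

// Create a card on a specific list (idList is also a placeholder).
t.post("/1/cards", {
    name: "Order pizza for the next meetup",
    desc: "Collect budget and venue details.",
    idList: "YOUR_LIST_ID"
}, function(err, card) {
    if (err) throw err;
    console.log("Created card: " + card.url);
});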

3. Integrate the Google Actions API and the Node.js code together.

Here’s a quick tour of the Google Actions conversation setup. Using Dialogflow, it’s really cool that you can create conversational interface actions with almost no code. JavaScript code becomes necessary only when you need to integrate with services or databases. Let’s focus on one intent: adding a task. In general, intents enable you to accomplish a focused interaction on your Google Assistant. In my case, the user calls my action by doing the following:

  • Ok, Google. Let me talk to GDG tasks.
    • The system replies with a greeting and a prompt for a command.
  • The user can reply “add task.”

In this intent, we can configure the system to respond to phrases similar to “add task.”

Add task

At this point, the intent collects two pieces of information: the task name and the task details. We then configure the intent to trigger custom fulfillment code at its conclusion.
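
Here’s a rough sketch of what that fulfillment can look like as a Cloud Function. It assumes the v2 actions-on-google client library used in the codelab above; the intent name (“add task”), the parameter names (taskName, taskDetails), and the Trello credentials are placeholders that depend on your own Dialogflow and Trello setup:

const { dialogflow } = require("actions-on-google");
const functions = require("firebase-functions");
const Trello = require("node-trello");

// Placeholder credentials and list id -- supply your own.
const trello = new Trello("YOUR_API_KEY", "YOUR_USER_TOKEN");
const LEADERSHIP_LIST_ID = "YOUR_LIST_ID";

const app = dialogflow();

// Runs once Dialogflow has collected both parameters for the intent.
app.intent("add task", (conv, { taskName, taskDetails }) => {
    return new Promise((resolve) => {
        trello.post("/1/cards", {
            idList: LEADERSHIP_LIST_ID,
            name: taskName,
            desc: taskDetails
        }, (err) => {
            if (err) {
                conv.close("Sorry, I could not reach Trello.");
            } else {
                conv.close(`Okay, I added "${taskName}" to the leadership board.`);
            }
            resolve();
        });
    });
});

exports.gdgTasks = functions.https.onRequest(app);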

Hope this code sample helps you understand the experience of building a Google Actions application.

Here are a few more resources and ideas to help you write your own Google Actions app.

  • https://developers.google.com/actions/templates/
    • These are great tutorials for non-coders and programmers alike. The templates are designed for teachers, educators, and people curious about chatbot building, and the tutorials are designed to be very short.

10 Trends You Need to Know from Google I/O 2018

Google IO - Logo

What’s Google I/O?

Google I/O is an annual software developer-focused conference which features a keynote on the latest updates and announcements by Google. The conference also hosts in-depth sessions focused on building web, mobile, and enterprise applications with Google and open web technologies such as Android, machine learning, TensorFlow, Chrome, Chrome OS, Google APIs, Google Web Toolkit, and App Engine.

In this blog post, I’m going to share my favorite announcements from the conference. I hope these items serve makers, app developers, and web developers.

Angular Updates

It’s Christmas time for Angular developers. Check out this talk to learn what’s new with Angular, Google’s platform for scalable front-end web development. Using Angular 5 at work has been fun: I love working with TypeScript and the component model. In general, it helps reduce common JavaScript errors, and it has created a great deal of unity between our back-end and front-end code.

Abstract: Angular has a flag that will cut hundreds of kilobytes off of your bundles, improve mobile experiences, and allow you to dynamically create components on the fly. Learn about these changes and what they mean for your applications.

Android Studio 3.2

Google has worked to improve the application model for Android for simplicity, power, and developer speed. I’m curious to test the speed of the new Android emulator.

Abstract: The last couple of years have seen a plethora of new features and patterns for Android developers. But how do developers know when to use existing APIs and features vs. new ones? This session will help developers understand how they all work together and learn what they should use to build solid, modern Android applications.

AIY

For our maker and tinkerer readers, you might check out Google AIY Projects. I find it interesting that you can go to your local Target store, pick up a Google AIY kit, and start experimenting with machine learning, voice control, and computer vision.

The following MagPi issue covers the AIY voice kit:
https://www.raspberrypi.org/magpi-issues/Essentials_AIY_Projects_Voice_v1.pdf


Abstract: AIY efforts at Google puts AI into various maker toolkits, to make things more playful and, more importantly, to help you solve real problems that matter to you and your communities. Join this session to learn how you can use these kits to start adding natural human interaction to your maker projects. It will feature demos on the Voice and Vision Kits, and some amazing AIY experiments built by the makers community around the world.

Flutter.IO

A few years ago, I tried the Dart programming language and enjoyed it. For background, I work as a web app developer using C# and JavaScript, and I find Dart very approachable. With the Flutter.IO project, Google has worked to expand Dart into building native iOS and Android apps. I find Flutter’s “hot reload” feature very compelling; it’s awesome to go from idea to device quickly. My only reservation with Flutter is that it doesn’t have a declarative markup model for expressing components (or widgets).

Abstract: Come watch a single developer code a beautiful app in real-time from the ground-up that runs natively on iOS and Android, all from a single codebase. Along the way, learn how to marry Flutter’s latest multi-platform reactive UI elements, accelerometer, and audio capabilities with powerful Firebase SDK functionality. See this app painted to life piece-by-piece in under 40 minutes thanks to Flutter’s sub-second hot reload developer experience.

ARCore

Google’s ARCore framework received several notable updates. ARCore enables developers to write Android apps that sense their environment; with these capabilities, developers can place 3D content layered over a view of the real world. This technology unlocks an amazing class of games, collaboration tools, and design applications that serve users in their physical spaces. The first version of ARCore focused on horizontal surfaces. Google has upgraded ARCore to sense vertical surfaces (walls) and images (i.e., custom tracking markers). Google now offers a way to share markers or points of interest with multiple users. Let’s say you’re making an AR pool game using your dining room table: multiple players of your game can collaboratively target the same table and participate in a shared game experience. It should also be noted that you can “instant preview” ARCore apps using the ARCore Unity tools, which really helps reduce your iteration cycles.

Abstract: Learn how to create shared AR experiences across iOS and Android and how to build apps using the new APIs revealed in the Google Keynote: Cloud Anchor and Augmented Images API. You’ll come out understanding how to implement them, how they work in each environment, and what opportunities they unlock for your users.

What’s new on Android on ChromeBooks

On InspiredToEducate.NET, we’re passionate about serving students, teachers, and makers of all ages. Since my wife works as a college professor, we’re constantly geeking out over various tools in educational technology. It’s very clear that Chromebooks have made a positive impact in K-12 education. According to this article, Chromebooks command 58% of laptop devices in the K-12 market; that translates to millions of devices. It’s cool to see Google expand the capabilities of Chromebooks using its innovations in Android.

Abstract: With the Play Store on Chromebooks gaining traction, developers need to understand how to build high-quality apps and content for the new form factor. Attend this session to learn about adding support for larger screens, mouse and trackpad support, keyboard support (i.e. shortcut keys), free-form resizable windows, and stylus support for devices that have them.

Android Things

Abstract: Android Things is Google’s platform to support the development of Internet of Things devices. This talk will provide an update on the program and the future roadmap. Learn more about the breadth of hardware reference designs, the operating system, building apps, device management, and support from chip vendors. It will also discuss use-cases where edge computing can be used, and examples of prototype-to-production that demonstrate how Android Things is ready for commercial products.

Sceneform

Abstract: Sceneform SDK is a new library for Android that enables the rapid creation and integration of AR experiences in your app. It combines ARCore and a powerful physically-based 3D renderer. In this session, you’ll learn how to use the Sceneform SDK, and how to use its material system to create virtual objects that integrate seamlessly with the environment.

TensorFlow Lite

Over the years, Google has focused its energy on advancing machine learning capabilities. It has now entered a phase where application developers can weave machine learning brains (machine learning models) into their applications. TensorFlow enables app developers to train powerful neural network models so that computers can learn and apps can use that intelligence. In Google Photos, I can do weird searches like “flowers in macon, ga.” Because Google has fast neural networks that can identify flowers, it can quickly return a list of photos with flowers matching my expectations. Wouldn’t it be cool if you could put these capabilities into your Raspberry Pi or Android app? TensorFlow Lite enables you to leverage pre-trained TensorFlow models in your apps. I’m very impressed by its focus on speed and efficiency.

Abstract: TensorFlow Lite enables developers to deploy custom machine learning models to mobile devices. This technical session will describe in detail how to take a trained TensorFlow model, and use it in a mobile app through TensorFlow Lite.

Google Lens

The following video demos some of Google’s cool innovations in computer vision. Using Google Lens, the Photos app can identify objects in view. In the future, you’ll be able to point your phone at a store, and using an AR view, Google will show you ratings, descriptions, and pictures related to the store.

Join the conversation at our next Google Developer Group.

Interested in digging deeper into these technology announcements? What are the consequences of connecting some of these ideas together? What opportunities do these capabilities give to our local developer community?

We’ll dig deeper into the latest announcements from the Google I/O conference. We’ll discuss the various pathways for leveraging these technologies in your career. We’re excited to discuss how these tools can benefit local startups, makers, and businesses in Orlando, FL.

When: May 24, 2018 – 6pm to 9pm

https://www.meetup.com/GDG-Central-Florida/events/247996681/

AFrame: Building WebVR experiences with HTML and JavaScript

Sample AFrame experience

Hey, makers and web developers! Looking for a fun weekend coding challenge? In this post, I want to introduce you to a simple tool called A-Frame for building VR experiences using HTML and JavaScript. Originally sponsored by the Mozilla Foundation, A-Frame enables you to quickly build WebVR scenes, panoramas, games, and data visualizations. With very basic HTML skills, you can build delightful VR experiences. I believe you’ll find the declarative, tag-based coding style very approachable. Advanced developers can drop down to the JavaScript and three.js abstraction layers to customize their experiences in more detail.

Don’t have an expensive VR rig? No problem! A-Frame supports platforms as simple as desktops, mobile phones, and Google Cardboard. I find it impressive that the platform also supports advanced rigs like the HTC Vive and Oculus Rift. I’m looking forward to seeing support for Windows Mixed Reality too.

Here’s a quick “hello world” experience for you to test drive.
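
The snippet below is a minimal sketch of such a scene; the release number in the script URL is just an example, so grab the current version from aframe.io:

<html>
  <head>
    <script src="https://aframe.io/releases/0.7.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- A few built-in primitives: a box, a sphere, a ground plane, and a sky. -->
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>

Open the page in a browser on your desktop or phone, and A-Frame supplies the camera, default lighting, and the enter-VR button for you.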

Projects Built with AFrame

How can you get started with AFrame today?

Make sure to check out the impressive community of plugins on the A-Frame registry.

If you’re in the Florida area, I would like to invite you to DevFest Florida 2017. All the Google Developer Groups in Florida have combined forces to throw an awesome developer party and learning conference. Join us for a great weekend of networking, learning, and hacking! Learn more at https://devfestflorida.org/.


Using Android, JavaScript, and Arduino to control your robot. #makered #javascript #android

DroidScript

Let’s say you want to tinker with making a robot controller on your Android device, but you don’t have a lot of time to learn Java. Perhaps you just love JavaScript and want to write Android apps. In our maker education programs, we enjoy introducing students to JavaScript since the language helps students go from idea to prototype quickly.

Consider checking out DroidScript on the Google Play Store. DroidScript enables you to quickly build simple Android apps using JavaScript.

I greatly appreciate that DroidScript lets you edit programs directly from a Wi-Fi-connected desktop computer. All you need is a desktop web browser; you don’t need to install Eclipse, Java, simulators, NetBeans, or anything else. You press a button in DroidScript, and the app fires up a web server on your Android device. From your web browser, you can start making new apps, exploring and running sample programs, and checking out the documentation.

What features of Android can you access using DroidScript?

  • You can use the GPS, Compass, Camera, and Accelerometer.
  • DroidScript can do basic graphics functions.
  • According to the documentation, you can send and receive emails and SMS.
  • You can control Arduino and Lego NXT.
  • On a personal project, I used DroidScript to send commands to an Arduino through serial communication.
  • You can also fire up a custom web server so that your phone can respond to HTTP requests.

I think users will appreciate the effective samples and documentation.

Docs screen

For the young programmer, hobby programmer, or someone who needs some quick code duct tape on Android, DroidScript is worth checking out. If you need help, there’s an active forum community at https://groups.google.com/forum/#!forum/androidscript

Let’s Build a Robot Control program

In our previous blog post, we showed you how to build your own DIY servo robot using Arduino. Let’s assume that the Arduino code follows this protocol when it receives bytes on the serial port:

  • When w is received, the robot moves forward.
  • When s is received, the robot moves backward.
  • When a is received, the robot moves left.
  • When d is received, the robot moves right.
  • When the space character is received, the robot stops all motion.

Here’s your test robot using an Arduino Nano.

Android Bot

For this robot design, I want to use my Android device and a Bluetooth keyboard to remotely control the Android device. The Bluetooth keyboard helps me control the robot at a distance. You will also need to obtain a USB-to-micro-USB adapter like this one to connect your Android device to the Arduino. Here’s the code needed to control the robot.
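
Below is a sketch of the complete control page, assembled from the pieces explained in the tear-down that follows. The two script imports at the top are assumptions based on a typical DroidScript HTML-mode app, so check the DroidScript samples for your version:

<html>
<head>
    <!-- Assumed imports: DroidScript's HTML-mode bridge and a local copy of jQuery. -->
    <script src="file:///android_asset/app.js"></script>
    <script src="jquery.js"></script>
    <script>
        //Called after application is started.
        function OnStart()
        {
            app.ShowPopup( "Robot control active" );
            usb = app.CreateUSBSerial( 9600 );
        }

        //Each command writes a single protocol character to the serial port.
        function forward() { usb.Write( "w" ); }
        function back()    { usb.Write( "s" ); }
        function left()    { usb.Write( "a" ); }
        function right()   { usb.Write( "d" ); }
        function stop()    { usb.Write( " " ); }

        //Map the w/a/s/d and space keys to robot commands.
        $(document).keypress(function(event) {
            switch(event.charCode)
            {
                case 119: forward(); break;
                case 115: back();    break;
                case 97:  left();    break;
                case 100: right();   break;
                case 32:  stop();    break;
            }
        });
    </script>
</head>
<body>
    <h1>Robot Control</h1>
    <p>Press w/a/s/d to steer and space to stop.</p>
</body>
</html>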

Robot control program active


DroidScript supports two modes of development: pure JavaScript and HTML mode. This code sample uses the HTML style of application development, which makes it natural for web developers. Let’s tear down the code. We need to import a few script files at the top of the code.
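
In my case, that means DroidScript’s HTML-mode bridge script plus jQuery, which the keypress handler below depends on. The exact filenames here are assumptions; check the DroidScript HTML samples for your version:

<script src="file:///android_asset/app.js"></script>
<script src="jquery.js"></script>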

When the application starts, we need to configure the serial port to a baud rate of 9600.


//Called after application is started.
function OnStart()
{
    app.ShowPopup( "Robot control active" );
    usb = app.CreateUSBSerial( 9600 );
}

The following functions define our protocol for sending messages to the Arduino. In each case, we’re just sending the appropriate character to the serial port.


function forward()
{
    usb.Write( "w" );
}

function back()
{
    usb.Write( "s" );
}

function left()
{
    usb.Write( "a" );
}

function stop()
{
    usb.Write( " " );
}

function right()
{
    usb.Write( "d" );
}

To accept keyboard input from the user, we use the following switch statement:


$(document).keypress(function(event) {
    switch(event.charCode)
    {
        case 119: // 'w'
            forward();
            break;
        case 115: // 's'
            back();
            break;
        case 97: // 'a'
            left();
            break;
        case 100: // 'd'
            right();
            break;
        case 32: // space
            stop();
            break;
    }
});

DroidScript has a really easy function for text to speech too.


var pitch = 1.0, speed = 1.0;
app.TextToSpeech( "Resistance is futile. You will be assimilated.", pitch, speed );

Make sure to download DroidScript and check out the other cool features it offers. We love to hear from our readers! We would enjoy seeing your robots or your apps! Leave a comment below. All the best!



3D Modeling with 123D Sculpt

123D Sculpt

For tinkerers, artists, and students interested in 3D modeling, I wanted to share my review of 123D Sculpt by @AutoDesk. At the time of this writing, you can only find this app on the iOS App Store. I find that I gravitate toward using my desktop computer for 3D modeling, since the keyboard and mouse make me feel more productive and precise. I, however, think that 123D Sculpt makes good use of the iPad’s touch interface. In the real world, you might start a sculpture with a lump of clay and mold it into a shape; 123D Sculpt enables you to start with a digital “lump of clay” and mold it into forms that you design. Check out the video below to see how it works.

My favorite features…

  • Symmetry: I appreciate the symmetry feature in the tool. 123D Sculpt works well when you are creating something organic looking, and in nature, symmetry is everywhere! The symmetry feature enables you to mold and paint one half of the object, and your changes are mirrored on the other side.
  • Downloading model files: If you publish your 3D model to the community, you can download the model files for printing. To learn more about this procedure, please refer to the following link. In a future paid edition, I imagine AutoDesk will enable you to download your model without publishing it to the community.
  • Kid friendly: I have let my kids make stuff with 123D Sculpt. Sometimes they just like to move around the 3D model and color it. Other times, the kids surprise me with their own creations like spaceships, dinosaurs, etc.
  • Textures: You can really improve a 3D model with textures. The tool comes with a library of textures for eyes, fur, feathers, hair, faces, skin, and more. You can also create your own textures using your photos.

I would like to see this app come to Android devices as well. I would gladly pay for this app if it enabled me to export my 3D models without using the “community share” feature.

All in all, I think this is a fun way to introduce 3D modeling to makers. Whether you’re young or old, you’ll find this tool engaging. I can imagine it being used in video game design and 3D printing projects.


What sort of 3D modeling do you enjoy doing? What tools are you excited about?


10 Community Service Challenge Ideas for Civic Hackers

Hack for Change picture
As I have started to share the story of the National Day of Civic Hacking with my friends, local designers, programmers, and family, I commonly receive the following question: what kinds of projects can I do?

I wrote this post so that you’re not stuck with “blank paper” syndrome. Innovation is sparked by context: looking at needs and challenges. When you attend a hackathon like the National Day of Civic Hacking, the organizers and event sponsors often share project ideas, challenge statements, and support material. You, however, do not need to limit yourself to those ideas. I wanted to briefly share a few project ideas to inspire you during your next civic hacking event. Most of these ideas come from a community known as “Random Hacks of Kindness.” Please consider joining us and citizens around the nation in the National Day of Civic Hacking (see details below). It will be a fun and innovative weekend of community service to YOUR community.

Looking for more inspiration? Check out the following:


Hack for Change 2014


10+ Lessons To Help You Grow Your Skills as a Developer

Coding Dojo

The Google Developer Group of Mercer University strives to teach web and mobile developers about the various tools Google has available to help create great products more efficiently. GDG Macon also helps technology students at Mercer University meet developers from the local community and provides community building for Middle Georgia IT professionals.

We have had a lot of fun building community and learning together in 2013 and 2014. Just in case you missed all the action, we’ve collected many of the lessons from our sessions. We hope the content serves you in growing your skills as a technology professional.

Fall 2013

Connecting Your Favorite Google Services Using Google App Script

Building Web Apps at Google Scale – Introduction To AppEngine

Finding Common Interests in Community Challenges and Hack-a-thons

Review of Tools for Android Development

Using “Coding Dojo” to Engage Learners and Have Fun

Ignite Talks on HTML5 – Lots of Learning in 5 minutes

Introduction to HTML5 and JavaScript using Games

Spring 2014

Easy Data Visualization with Google Charts and JavaScript

Backbone.js

Introduction to Dart

Avast, Ye Pirates: Write a Web App using Dart

Coming Soon to Mercer GDG in Spring 2014


Join Google Developer Group at Mercer University

Welcome To Google Developer Group at Mercer University
https://sites.google.com/site/gdgmerceru/

We extend a warm welcome to software professionals, students, web designers, web programmers, and people excited about technology.

Come help us plan the first year, hear about cool Google technologies, and see Google Glass. Proposed themes for the fall:

  • Web application development using Google App Engine

When:

  • Tues, August 20th
  • 6:30 pm – Networking
  • 7:00 pm – Google Glass introduction

Sign up for the event!
We appreciate you joining Google+, adding us to your circles, and signing up for the event here by August 13th. This will help us plan arrangements for food and refreshments during our networking time.
Sign Up! https://plus.google.com/events/c0v3565uuav5kcvh471qnksug8k

Where:

I’m really excited to be supporting the students and community at GDG Mercer University. It’s going to be a great season of making cool software with Google technology. What Google or Internet technology are you interested in learning? Leave us a comment on this post.
