3 Reasons Why Google Developer Groups Support Community Growth

Hi, friends. While looking through old blog posts I have written, I discovered that I wrote my first post about Google Developer Groups ten years ago. I have enjoyed the journey. In this blog post, I wanted to explore some of my personal motivations, stories, and feelings about supporting our local Google Developer Groups. For my kids and my community, I enjoy advancing the mission of helping “students young and old to love learning through making, tinkering, and exploration.” I appreciate the GDG community enabling me to explore this mission using Google tools and a culture of open innovation.

Growing Students: In the past month, I bumped into this cool episode of the Google Cloud Platform Podcast with one of my mentors, Dr. Laurie White.

Google Cloud for Higher Education with Laurie White and Aaron Yeats

Dr. Robert Allen and Dr. White invited me to join their Google Developer Group (GDG) while I lived in Macon, GA. The GDG of Macon focused on serving the students of Mercer University and the Macon community. I think they sensed my curiosity for teaching software engineering and invited me to teach some of my first sessions (Google App Engine, Firebase, JavaScript, etc.). The experiences amplified a corner of my soul that enjoyed helping college students jump into the crazy world of software engineering. In the podcast, Dr. White underscores that traditional computer science education has many strengths. The average CS program, however, does not address many critical topics desired by engineering teams (e.g., working with a cloud provider, engineering software for easy testing, test automation, user-centered design). These gaps become blockers for early-stage developers seeking work. I found joy in helping these students address these gaps and connecting them with opportunities around AI, web, mobile, and open source. In the Google ecosystem, there are tribes of mentors who want to help you become successful.

Growing a community of professionals: Many developer community organizers recognize the opportunity and promise of software craftsmanship. We live in an amazing industry not blocked by atoms or the need for physical raw materials. In the world of software, you can start a business with a strong concept, persistence, and good habits for incremental learning. You can find a good technology job by becoming a little bit better every day AND connecting with a supportive community. For many, software engineering helps real people feed and elevate themselves and their families. I believe that’s an important mission. I believe our GDG communities hit a high mark in helping professionals grow and making the experience “excellent.” As GDG organizers, we’re passionate about helping you and your teams succeed with cloud strategies, mobile and web apps, AI-empowered creation, and design culture. I have had the blessing of many mentors. Dr. Allen gave me my first Google Cardboard and introduced me to Unity3D. I now work with a wonderful design firm focused on creating learning experiences with virtual and augmented reality. It’s important to remember that small sparks can grow into bigger things. It’s important to give back and grow the next generation. We seek to become sparks for others.
Growing future startups: I continue to believe that small businesses will remain our engine of economic growth. The news often paints a sad picture of our world as broken. We love to support startups who believe they can meaningfully improve the world and help others become successful too. To that end, I love that Google helps startups succeed through its various growth programs like Google Developer Groups, Women Techmakers, startup.google.com, and student groups. Google’s learning team has put a lot of care into growing an open learning ecosystem through codelabs.google.com, web.dev/learn, Flutter.dev, kaggle.com/learn, and other product guides. Learning becomes more joyful when you can learn as a tribe. Why go solo?

Invite to DevFest Florida

If you’re looking for supportive mentors and a growth-oriented meetup community, I extend a warm invitation to DevFestFlorida.org. Working with my fellow GDG organizers across Tampa, Miami, and Orlando, we’re organizing one of the largest local dev conferences in the South to help you learn and grow. It’s an experience designed by developers for our local developers. DevFest Central Florida is a community-run, one-day conference that brings technologists, developers, students, tech companies, and speakers together in one location to learn, discuss, and experiment with technology. Expect lots of hands-on learning and fun.

Consider joining us for DevFest Florida Orlando.
– WHERE: Seminole State College Wayne M. Densch Partnership Center in Sanford, FL
– WHEN: Oct 14th
– Check out the details on our tracks:
Web
Mobile
Cloud and data
AI
Startup

Learn more about DevFest Florida Orlando – Oct 14th
Use the code SECRETSALE to get 10% off tickets.

Bird Watching With Python and TensorFlowJS ( Part 2 )

In this series, we continue building a small system to capture pictures of my backyard and detect whether we see birds. Check out the first post here for the system overview. In this post, we focus on the problem of taking pictures every minute or so. For fun, I decided to build this part in Python.

The solution for the watcher involves the following major elements and concepts.
– Set up a connection to Azure Blob Storage. To keep things simple, Azure Blob Storage lets you securely store files in the Microsoft Azure cloud at low cost.
– Set the time interval for taking pictures.
– Set up a connection to the message queue system. The watcher program needs to send a message to an analysis program that will analyze the image content. Keep in mind that RabbitMQ is simply “email for computer programs.” It’s a way for programs to message each other to do work. I will be running the watcher program on a pretty low-powered Raspberry Pi 2. In my case, I wanted to offload the image analysis to another computer with a bit more horsepower. In future work, we might move the analysis program to a cloud function. That’s a topic for a future post.

Here’s some pseudo code.
– Set up the program to take pictures
– Loop
– Take a picture
– Store the picture on disk
– Upload the picture to Azure blob storage
– Signal the analysis program to review the picture
– Delete the local copy of the picture
– Wait until we need to take a picture

Setting the stage

Let’s start by sketching out the functions for setting up blob storage, the RabbitMQ message queue, and the camera.
At the top of the Python file, we need to import the following:

import cv2
import time
import pika
import json
import os
from azure.storage.blob import BlobServiceClient

In the following code, we set up the major players: blob storage, the RabbitMQ message queue, and the camera.

container_client = setup_blob_storage()

# Set the time interval in seconds
interval = 60  # every minute

# Initialize the webcam
cap = cv2.VideoCapture(0)

# Check if the webcam is opened successfully
if not cap.isOpened():
    print("Error: Could not open the webcam.")
    exit()

queue_name, connection, channel = setup_rabbit_message_queue()

Take a picture

Later in the program, we loop to take a picture and send the data to the analysis program.

ret, frame = cap.read()
if not ret:
    print("Error: Could not read frame from the webcam.")
    break

timestamp, filename = store_picture_on_disk(frame)
print(f"Image captured and saved as {filename}")

Send the picture to Blob Storage

local_file_path = filename
blob_name = filename
with open(local_file_path, "rb") as data:
    container_client.upload_blob(name=blob_name, data=data)

Signal the analysis program to review the image using a message

# Prepare a JSON message
message = {
    'fileName': filename,
    'timestamp': timestamp,
}
message_json = json.dumps(message)

# Send the JSON message to RabbitMQ
channel.basic_publish(exchange='', routing_key=queue_name, body=message_json)
print(f"Message sent to RabbitMQ: {message_json}")

In the previous code sketches, we have not implemented several key functions. Let’s fill in those functions now. You’ll need to position these functions near the top of your script.

setup_blob_storage

Please use this link to learn about Azure Blob storage, account configuration, and Python code patterns.

container_name = "picturesblobstorage"

def setup_blob_storage():
    connect_str = "Get connection string for your Azure storage account"
    blob_service_client = BlobServiceClient.from_connection_string(connect_str)
    container_client = blob_service_client.get_container_client(container_name)
    return container_client

setup_rabbit_message_queue

Set up the connection to the message queue system.

def setup_rabbit_message_queue():
    rabbitmq_host = 'localhost'
    rabbitmq_port = 5672
    rabbitmq_username = 'guest'
    rabbitmq_password = 'guest'
    queue_name = 'review-picture-queue'

    # Initialize RabbitMQ connection and channel with authentication
    credentials = pika.PlainCredentials(rabbitmq_username, rabbitmq_password)
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host=rabbitmq_host, port=rabbitmq_port, credentials=credentials))
    channel = connection.channel()

    # Declare a queue for sending messages
    channel.queue_declare(queue=queue_name)
    return queue_name, connection, channel

To keep this blog post brief, I won’t jump into all the details of setting up RabbitMQ on your local system. Please refer to this 10-minute video for details on setting up this subsystem.

This blog post does a great job of setting up RabbitMQ with docker-compose. It’s a lightweight way to set up services in your environment.

Easy RabbitMQ Deployment with Docker Compose (christian-schou.dk)

store_picture_on_disk

def store_picture_on_disk(frame):
    timestamp = time.strftime("%Y%m%d%H%M%S")
    filename = f"image_{timestamp}.jpg"
    cv2.imwrite(filename, frame)
    return timestamp, filename

In our final blog post, we’ll use NodeJS to load the COCO-SSD model into memory and let it report on the image in question.
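
Before then, here is a minimal sketch of how a NodeJS job processor might pick up these messages, written in TypeScript. It assumes the amqplib package and reuses the review-picture-queue name from the watcher; processImage is a hypothetical placeholder for the COCO-SSD analysis covered in the final post.

import amqp from 'amqplib';

const QUEUE_NAME = 'review-picture-queue'; // must match the queue declared by the Python watcher

async function main() {
  // Connect to the same RabbitMQ instance the watcher publishes to
  const connection = await amqp.connect('amqp://guest:guest@localhost:5672');
  const channel = await connection.createChannel();
  await channel.assertQueue(QUEUE_NAME);

  // Handle each picture message as it arrives
  await channel.consume(QUEUE_NAME, async (msg) => {
    if (!msg) return;
    const { fileName, timestamp } = JSON.parse(msg.content.toString());
    console.log(`Reviewing ${fileName} captured at ${timestamp}`);
    // await processImage(fileName); // hypothetical COCO-SSD step
    channel.ack(msg);
  });
}

main().catch(console.error);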

You can check out the in-progress code at the following GitHub repository.

https://github.com/michaelprosario/birdWatcher

Check out object-detection.js to see how object detection will work. Check out watcher.py for a completed version of this tutorial.

If you want to learn more about TensorFlowJS and machine learning, our Orlando Google Developer Group is organizing a fun one-day community conference on Oct 14th.

Join us for DevFest Florida – Oct 14

AI | Mobile | Web | Cloud | Community

DevFest Central Florida is a community-run, one-day conference that brings technologists, developers, students, tech companies, and speakers together in one location to learn, discuss, and experiment with technology.

Make A Bird Detector with TensorFlowJs ( Part 1 )


In the Rosario tradition of boldly exploring nature, Sarah and my eldest have gotten into bird watching. It’s been cool to see my son and my wife going on hikes and finding cool birds with a local meetup. My wife gave me a challenge: make a bird-watcher device for our yard and our bird house. In her vision, we want to understand when we see the most birds in the backyard and capture great photos. In future work, we might even identify the type of bird. In today’s post, I thought we would talk through the high-level code I’ve prototyped. This will become a fun family project and give me an opportunity to play with some TensorFlowJS.

In the past 12 years, the industry has exploded with innovations involving machine learning. We see these innovations when we ask a home assistant to play a song, use ChatGPT, or use speech-to-text. In the domain of bird watching, we might build a machine learning model using pictures of different birds with labels specifying the type of bird. A machine learning (ML) system observes patterns in the input data set (bird pictures with labels) and constructs rules or structures so the system can classify future images. In contrast to traditional computer programming, we do not explicitly define the code or rules. We train the model using examples and feedback so it learns. In this case, we want to determine whether a picture contains a bird.

In this prototype, I will leverage a pretrained ML model called COCO-SSD. The model detects 80 different classes of objects in a picture, including birds. It estimates whether it sees a bird and provides a bounding box location for each object it finds. The model makes a best attempt to segment the picture and report all the objects it can see, along with labels and confidence scores.
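
As a rough sketch of how that detection call looks in code (assuming the @tensorflow/tfjs-node and @tensorflow-models/coco-ssd packages; the file name is just an example matching the watcher’s naming pattern):

import * as tf from '@tensorflow/tfjs-node';
import * as cocoSsd from '@tensorflow-models/coco-ssd';
import * as fs from 'fs';

async function detectBirds(imagePath: string) {
  // Load the pretrained COCO-SSD model into memory
  const model = await cocoSsd.load();

  // Decode the JPEG into a tensor the model can read
  const imageBuffer = fs.readFileSync(imagePath);
  const imageTensor = tf.node.decodeImage(imageBuffer) as tf.Tensor3D;

  // Each prediction carries a class label, a confidence score, and a bounding box
  const predictions = await model.detect(imageTensor);
  imageTensor.dispose();

  return predictions.filter((p) => p.class === 'bird');
}

detectBirds('image_20230914120000.jpg')
  .then((birds) => console.log(`Found ${birds.length} bird(s)`, birds));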

This diagram provides an overview of the prototype system.

Major elements

  • Watcher – In this project, a Python script takes pictures every 5 minutes. Pictures get stored to the file system, and the file name of each picture gets placed in a message that is added to a queue.
  • RabbitMQ – We’re using RabbitMQ with JSON messages to manage our queue plumbing. You can think of RabbitMQ as email for computer programs: you can insert messages into different folders, and job processor programs start executing when they receive messages in those folders. This also enables us to create multi-program solutions in different languages.
  • Job Processor – The job processor, written in JavaScript on NodeJS, monitors the message queue for work. When it receives a file name to process, we load the image into memory and ask the machine learning model to review it. The COCO-SSD model reports a list of detected objects with associated confidence scores. If the system finds a bird, the processor writes a database record with the details.
  • Database – For this solution, we’re currently prototyping the database layer with Supabase. On many of my weekend projects, I enjoy being able to rapidly create structures and store data in the cloud. Under the hood, it uses PostgreSQL and feels pretty scalable. Thank you to my friend Javier who introduced me to this cool tool. A small sketch of writing a detection record follows this list.
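
Here is the small sketch mentioned above: recording a detection with the supabase-js client. The project URL, key, and the bird_sightings table are placeholders for this example.

import { createClient } from '@supabase/supabase-js';

// Placeholder credentials; real values come from your Supabase project settings
const supabase = createClient('https://your-project.supabase.co', 'your-anon-key');

// Store one detection reported by the job processor
async function recordSighting(fileName: string, confidence: number) {
  const { error } = await supabase
    .from('bird_sightings') // hypothetical table name
    .insert({ file_name: fileName, confidence, seen_at: new Date().toISOString() });

  if (error) console.error('Failed to record sighting', error);
}

recordSighting('image_20230914120000.jpg', 0.87);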

The job processor uses TensorFlowJS to execute the object detection model. TensorFlowJS is a pretty amazing solution for executing ML models in the browser or on NodeJS backends. Learn more with the following talk.

In our next post, we’ll dive into the details of the job processor process.

If you want to learn more about TensorFlowJS and machine learning, our Orlando Google Developer Group is organizing a fun one-day community conference on Oct 14th.

Join us for DevFest Florida – Oct 14

AI | Mobile | Web | Cloud | Community

DevFest Central Florida is a community-run, one-day conference that brings technologists, developers, students, tech companies, and speakers together in one location to learn, discuss, and experiment with technology.

Gitpod: Cloud Dev Environment That Saves You Time


In the good old mainframe days, professionals may have used a “dumb terminal.” This terminal had enough power to handle input and output with a user, but the deeper magic happened on a more powerful mainframe computer. In 2023, student makers may have a Chromebook, a great inexpensive laptop for academic computing. In many cases, it’s hard to do larger dev projects on the laptop alone due to limited speed and capacity. Over the past six months, I have enjoyed using Gitpod.io, a cloud-based code editor and development environment empowering devs with high performance, isolation, and security. With Gitpod, a humble Chromebook becomes a robust dev machine for web development and data science learning.

Gitpod.io is a powerful online Integrated Development Environment (IDE) that allows developers to write, test, and deploy code without the need for local software installations. Gitpod.io is built on top of Git and leverages the power of Docker containers to provide a lightweight and fast environment for developers to work in. For makers familiar with Visual Studio Code, you’ll find the Gitpod experience very inviting since the tool builds upon the VSCode user experience. I have used many of my favorite VSCode extensions for .NET, NodeJS, Azure, and Python with Gitpod.

You can start a new workspace with just a few clicks, and it automatically clones your repository, installs dependencies, and sets up your environment. This means you can start coding right away without having to worry about configuring your development environment. When teaching new skills to developers, this benefit becomes very helpful to mentors or workshop organizers.

Gitpod.io also provides a range of features to make the development process more efficient. For example, it has built-in support for code completion, debugging, and code reviews, as well as a terminal that allows you to run commands directly from your workspace. In the past week, I have focused on learning new Python data science environments (PyTorch). Using an easy GitHub template, I had a fast, web-based Python environment running quickly. I also appreciated that Python notebooks worked well inside VSCode.

https://databaseline.tech/zoose-3.0/

Gitpod provides a generous free tier to help you get started. If your software team needs more time on the platform, they offer reasonable paid plans. I hope you consider checking out gitpod.io for your next web dev or data science project. In many situations, having access to a high-performance coding environment through a browser helps the flow of your creative project.

To learn more about the origins of this cool tool, check out this podcast with the founders of Gitpod. Their CTO, Chris Weichel, does a good job talking through the benefits of Gitpod for professional software teams and saving pro devs time.
Chris Weichel talks about GitPod time saving in the enterprise

Make music with code using DotNet Core

Curious about making music with code? As a software engineer and music guy, I have enjoyed seeing the connections between music and computers. The first computer programmer, Ada Lovelace, predicted that computers would move beyond boring math problems into the world of the creative arts. If a problem can be converted to a system of symbols, she reasoned, computers could help. She used music as her example.

In previous experiments, I have explored the ideas of code and music using TypeScript, NodeJs, and Angular. You can find this work here.

After looking around GitHub, I found a really cool music library for C# devs. I’m hoping to use it to create tools for making quick backing tracks for practicing improv. It’s just fun to explore electronic music, theory, and computational music. Make sure to check out the blog post by Maxim, the author of DryWetMIDI; it’s a pretty comprehensive guide to his library.

What is MIDI?

MIDI stands for Musical Instrument Digital Interface. In a file format like WAV or MP3, the computer stores raw waveform data about the sound. The MIDI file format and protocols operate at a conceptual layer of music data. You can think of a MIDI file as having many tracks, and you can assign different instruments (sounds) to tracks. In each track, the musician records music as a series of events. MIDI events include turning a note on, turning a note off, engaging the sustain pedal, and changing tempo. MIDI software like GarageBand, Cakewalk, and BandLab can send the MIDI event data to a software synth, which interprets the events into sound. In general, the MIDI event paradigm can be extended to support other things like lighting and lyrics.

DryWetMidi Features

  • Writing MIDI files: For my experiments, I have used DryWetMIDI to explore projects for making drum machines and arpeggio makers. I’m really curious about using computers to generate the skeleton of songs. Can computers generate a template for a pop song, a fiddle tune, or a ballad? We’re about to find out! DryWetMIDI provides a lower-level API for raw MIDI event data. The higher-level “Pattern” and “PatternBuilder” APIs enable coders to express a single thread of musical ideas. Let’s say you’re trying to describe a piece for a string quartet. The “PatternBuilder” API enables you to use a fluent syntax to describe the notes played by the cello player. While playing with this API, I have to say that I loved the ability to combine musical patterns. The framework can stack or combine musical patterns into a single pattern. Let’s say you have three violin parts in three patterns. The library enables you to blend those patterns into a single idea with one line of code. Maxim showed great care in designing these APIs.
  • Music theory tools: The framework provides good concepts for working with notes, intervals, chords, and other fundamental concepts of music.
  • Reading MIDI files: The early examples show that DryWetMIDI can read MIDI files well. I’ve seen some utility functions that enable you to dump MIDI files to CSVs to support debugging. The documentation hints at a chord extraction API that looks really cool. Looking forward to testing this.
  • Device interaction: DryWetMIDI enables makers to send and receive MIDI events. This capability might become helpful if you’re making a music tutor app. You can use the device interaction API to watch note events. The system can provide feedback to the player if they’re playing the right notes at the appropriate time.

Visions for MusicMaker.NET for .NET Core

In the following code example, I’ve built an API to describe drum patterns using strings. The strings represent sound at a resolution of 16th notes. Using the “MakeDrumTrack” service, we can quickly express patterns of percussion.

IMidiServices midiServices = new MidiServices();
var service = new MakeDrumTrackService(midiServices);
var command = new MakeDrumTrackCommand
{
    BeatsPerMinute = 50,
    FileName = fileName,
    Tracks = new List<DrumTrackRow>
    {
        new()
        {
            Pattern = "x-x-|x-x-|x-x-|x-x-|x-x-|x-x-|x-x-|x-x-|",
            InstrumentNumber = DrumConstants.HiHat
        },
        new()
        {
            Pattern = "x---|----|x---|----|x---|----|x---|----|",
            InstrumentNumber = DrumConstants.AcousticBassDrum
        },
        new()
        {
            Pattern = "----|x---|----|x--x|----|x---|----|x--x|",
            InstrumentNumber = DrumConstants.AcousticSnare
        },
        new()
        {
            Pattern = "-x-x|x-x-|-x-x|x--x|-xx-|xx--|-xx-|x--x|",
            InstrumentNumber = DrumConstants.HiBongo
        }
    },
    UserId = "system"
};

// act
var response = service.MakeDrumTrack(command);

Using the ArpeggioPlayer service, we’ll be able to render out a small fragment of music given a list of chords and an arpeggio spec.

var tempo = 180;
var instrument = (byte)Instruments.AcousticGrandPiano;
var channel = 1;

var track = new ChordPlayerTrack(instrument, channel, tempo);
var command = ArpeggioPatternCommandFactory.MakeArpeggioPatternCommand1();
var player = new ArpeggioPlayer(track, command);
var chordChanges = GetChords1();  // Am | G | F | E

player.PlayFromChordChanges(chordChanges);

// Write MIDI file with DryWetMIDI
var midiFile = new MidiFile();
midiFile.Chunks.Add(track.MakeTrackChunk());
midiFile.Write("arp1.mid", true);

In the following method, the maker can describe the arpeggio patterns using ASCII-art strings. The arpeggio patterns operate at a resolution of sixteenth notes. This works fine for most pop or electronic music. In future work, we can build web apps or mobile UX to enable the user to design the arpeggio patterns or drum patterns.

public static MakeArpeggioPatternCommand MakeArpeggioPatternCommand1()
{
    var command = new MakeArpeggioPatternCommand
    {
        Pattern = new ArpeggioPattern
        {
            Rows = new List<ArpeggioPatternRow>
            {
                new() { Type = ArpeggioPatternRowType.Fifth, Octave = 2, Pattern = "----|----|----|---s|" },
                new() { Type = ArpeggioPatternRowType.Third, Octave = 2, Pattern = "----|--s-|s---|s---|" },
                new() { Type = ArpeggioPatternRowType.Root, Octave = 2, Pattern =  "---s|-s-s|---s|-s--|" },
                new() { Type = ArpeggioPatternRowType.Fifth, Octave = 1, Pattern = "--s-|s---|--s-|--s-|" },
                new() { Type = ArpeggioPatternRowType.Third, Octave = 1, Pattern = "-s--|----|-s--|----|" },
                new() { Type = ArpeggioPatternRowType.Root, Octave = 1, Pattern =  "s---|----|s---|----|" }
            },
            InstrumentNumber = Instruments.Banjo
        },
        UserId = "mrosario",
        BeatsPerMinute = 120,
        Channel = 0
    };
    return command;
}

The previous code sample writes out a music fragment like the following.

If you’re interested in following my work here, check out the following repo.

Getting Started with PhaserJs and TypeScript

Curious about building 2D games with web skills? In this post, we’ll explore tools and patterns for using PhaserJS to make engaging 2D games. We’ll cover tooling for making experiences with our favorite language: TypeScript.

Reference links

Quick start for Phaser 3 and TypeScript

As I find time for small personal project experiments, I decided to explore recent developments with PhaserJS. In general, PhaserJS seems like a fun framework that enables novice game makers to build fun 2D games. The JavaScript language has become a popular choice since it exists in every web browser.

What can you build with Phaser 3? Check out some examples here.
Tetris
Robowhale
Fun Math Game

In this blog post, we set up the foundation for building a small space shooter game.

As you start PhaserJS development, many tutorials walk you through setting up a web server to serve HTML and JavaScript content. Unfortunately, plain JavaScript alone does not guide makers toward well-formed code. In plain JavaScript, coders need to build things in baby steps and test at each step. A tool like Visual Studio Code provides awesome guidance and autocomplete for devs, but it would be nice if our tools could catch more code faults and common syntax mistakes.

The TypeScript language, designed by Anders Hejlsberg, comes to the rescue. The TypeScript language and its related tooling provide robust feedback to the coder while writing code. Plain JavaScript does not offer ideas like interfaces or static type checking. Classes enable makers to describe a consistent template for creating objects with their related methods and properties. In a similar way, interfaces enable coders to describe the properties and methods connected to an object without defining the method implementations. It turns out these ideas provide increased structure and guidance for professional developers creating large applications on top of JavaScript. When your tools help you find mistakes faster, you feel like you move faster. This provides great support for early-stage devs. TypeScript borrows patterns and ideas from C#, another popular language for game developers and business developers.
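
As a tiny illustration (the Ship type and its fields are made up for this example), here is what those ideas look like in TypeScript:

// An interface describes the shape of an object without any implementation
interface Movable {
  speed: number;
  move(deltaSeconds: number): void;
}

// A class provides a reusable template that implements the interface
class Ship implements Movable {
  speed = 200;

  constructor(public x: number, public y: number) {}

  move(deltaSeconds: number): void {
    // The compiler flags typos and wrong types here before the browser ever runs the code
    this.x += this.speed * deltaSeconds;
  }
}

const player = new Ship(100, 300);
player.move(0.016); // advance roughly one 60 fps frame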

I found a pretty nice starter kit that integrates TypeScript, a working web server, and Phaser 3 together. Here are the general steps for setting up your Phaser 3 development environment.

Install Visual Studio Code

Install NodeJs and NPM

  • NodeJs enables coders to create JavaScript tools outside of the browser.
  • npm – When you build software in modern times, the tools you build depend on other lego blocks, and those lego blocks may depend on others. The node package manager (npm) makes it easy to install NodeJS tools and their related dependencies.
  • Use the following blog post to install NodeJs and NPM
  • Installing NodeJs and NPM from kinsta.com

Install Yarn

  • Install Yarn
  • Yarn is a package manager that provides more project organization tools.

Download Phaser 3+TypeScript repository

On my environment, I unzipped the files to /home/michaelprosario/phaser3-rollup-typescript-master.

Finish the setup and run

cd /home/michaelprosario/phaser3-rollup-typescript-master
yarn install
yarn dev

At this point, you should see that the system has started a web server using Vite. Open your browser to http://localhost:3000, and you should see a bouncing Phaser logo.

Open up Visual Studio Code and start hacking

  • Type CTRL+C in the terminal to stop the web server.
  • In the terminal, type ‘code .’ to load Visual Studio Code for the current folder.
  • Once Visual Studio Code loads, select “Terminal > New Terminal”
  • In the terminal, execute ‘yarn dev’
  • This will run your development web server and provide feedback to the coder on syntax errors every time a file gets saved.
  • If everything compiles, the web server serves your game at http://localhost:3000

TypeScript Sample Code

Open src/scenes/Game.ts using Visual Studio Code. If you’ve done Java or some C#, the code style should feel more familiar.

import Phaser from 'phaser';

// Creates a scene called demo as a class
export default class Demo extends Phaser.Scene {
  constructor() {
    super('GameScene');
  }

  preload() {
    // preload image asset into memory
    this.load.image('logo', 'assets/phaser3-logo.png');
  }

  create() {
    // add image to scene
    const logo = this.add.image(400, 70, 'logo');
    // bounce the logo using a tween
    this.tweens.add({
      targets: logo,
      y: 350,
      duration: 1500,
      ease: 'Sine.inOut',
      yoyo: true,
      repeat: -1
    });
  }
}

Make Unity 3D Games To Amaze Your Friends!

Hello makers! Like many in the computer industry, I had the dream of learning how to build video games. When math class seemed difficult, I found the inspiration to push forward because I wanted to build video games someday! Unity 3D and its amazing community of game creators have created powerful opportunities for curious makers to build games that amaze their friends. From my first encounters with Unity 3D, I felt they did a good job of educating their users. In the past few years, I have greatly admired the new strategies they have created to engage learners in their tools.

The idea of “modding” has engaged generations of gamers. (Thank you, Minecraft and Roblox!) We’ve become used to the idea that games set up a robust environment where you can build big and crazy things. In lots of games, you’re placed in a position of saving the world (i.e., you’ve been given a motivation to do something bigger than yourself that’s fun). The Unity 3D “microgame” tutorials provide students with the basic shell of well-crafted game experiences. In this context, the Unity 3D team has created tutorial experiences that gently guide learners through the Unity 3D environment, programming concepts, and their system for building Unity “lego” blocks. In this experience, you get to select your adventure. Do you want to build your own LEGO game? Do you want to make your own version of Super Mario Bros.? You can challenge yourself by building a cool kart racing game. In the videos below, I wanted to give a shout-out to the LEGO action “game jam” and the kart racing tutorials.

I always enjoy learning new Unity tricks from other developers. It has been fun to pick apart aspects of these games. In the newest kart racing tutorials, you can also learn about the newer machine learning capabilities of Unity 3D (ML-Agents). It kind of blows my mind that these ideas can now appear in tutorials for early-stage coders. As I’ve tested these experiences with my kids, they have enjoyed creating novel kart racing experiences and environments. My older son has enjoyed customizing his own shooter game.

Make sure to check out Unity 3D’s Learning index here: https://learn.unity.com/

If you make something cool, please share a link below in the comments!

Your First Game Jam: LEGO Ideas Edition

In this edition, you will discover how to build a quest in your LEGO® Microgame using the newly released “Speak” and “Counter” LEGO® Behaviour Bricks. Learn step-by-step with a special guest from the LEGO® Games division and our Unity team to create your own unique, shareable game.

Build Your Own Karting Microgame

It’s never been easier to start creating with Unity. From download to Microgame selection, to modding, playing, and sharing your first playable game, this video shows you what you can accomplish in as little as 30 minutes!

For detailed step-by-step Unity tutorials, check out The Official Guide to Your First Day in Unity playlist.

Build an AFrame.IO Scene on Oculus Quest with Teleportation

FireFox Mixed Reality

Hey web developers! Looking for a fun way to build VR experiences on the Oculus Quest? This tutorial provides a brief guide to drafting an AFrame.IO VR experience that includes GLTF model loading and teleportation controls. As web developers, we have the unique opportunity to link data, models, and services to WebXR experiences. We really love seeing AFrame.IO work well on the Oculus platform. These are exciting times!

AFrame.IO Script for Oculus WebXR

Fork the script at https://aframeexamples.glitch.me. In 2023, I feel that @ProfStemkoski has created one of the best collections of AFrame.IO templates. I like how he keeps his examples relatively small; it makes it easier to find a starting point for your project. In “quest-extras.html”, you’ll find an approachable example for starting with a “player movement” component that works with the Oculus Quest. The example also demonstrates object interactivity via raycasting.

<!DOCTYPE html>
<html>

<head>
    <title>A-Frame: Quest movement and interaction</title>
    <meta name="description" content="Moving around an A-Frame scene with Quest touch controllers.">
    <script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
    <script src="js/aframe-environment-component.js"></script>
    <script src="js/controller-listener.js"></script>
    <script src="js/player-move.js"></script>
    <script src="js/raycaster-extras.js"></script>
</head>

<body>

<script>
// if raycaster is pointing at this object, press trigger to change color
AFRAME.registerComponent("raycaster-color-change", {
    init: function () 
    {
        this.colors = ["red", "orange", "yellow", "green", "blue", "violet"];
        this.controllerData = document.querySelector("#controller-data").components["controller-listener"];
        this.hoverData      = this.el.components["raycaster-target"];
    },

    tick: function()
    {
        if (this.hoverData.hasFocus && this.controllerData.rightTrigger.pressed )
        {
            let index = Math.floor( this.colors.length * Math.random() );
            let color = this.colors[index];
            this.el.setAttribute("color", color);
        }

        if (!this.hoverData.hasFocus || this.controllerData.rightTrigger.released)
        {
            this.el.setAttribute("color", "#CCCCCC");
        }
    }
});


</script>

<a-scene environment="preset: default;" renderer="antialias: true;">

    <a-assets>
        <img id="gradient" src="images/gradient-fade.png" />
    </a-assets>

    <a-sky 
        color = "#000337">
    </a-sky>

    <!-- use a simple mesh for raycasting/navigation -->
    <a-plane
        width="100" height="100"
        rotation="-90 0 0"
        position="0 0.01 0"
        visible="false"
        class="groundPlane"
        raycaster-target>
    </a-plane>

    <a-entity 
        id="player" 
        position="0 0 0" 
        player-move="controllerListenerId: #controller-data;
                     navigationMeshClass: groundPlane;">

        <a-camera></a-camera>

        <a-entity 
            id="controller-data" 
            controller-listener="leftControllerId:  #left-controller; 
                                 rightControllerId: #right-controller;">
        </a-entity>

        <a-entity 
            id="left-controller"
            oculus-touch-controls="hand: left">
        </a-entity>

        <!-- experiment with raycasting interval; slight performance improvement but jittery appearance in world -->
        <a-entity
            id="right-controller"
            oculus-touch-controls="hand: right"
            raycaster="objects: .raycaster-target; interval: 0;"
            raycaster-extras="controllerListenerId: #controller-data; 
                              beamImageSrc: #gradient; beamLength: 0.5;">
        </a-entity>

    </a-entity>

    <a-torus-knot 
        p="2" q="3" radius="0.5" radius-tubular="0.1"
        position = "-2.5 1.5 -4"
        color="#CC3333"
        raycaster-target>
    </a-torus-knot>

    <a-box
        width = "2" height = "1" depth = "1"
        position = "-1 0.5 -3"
        rotation = "0 45 0"  
        color = "#FF8800"
        class = ""
        raycaster-target>
    </a-box>

    <a-sphere
        radius = "1.25"
        position = "0 1.25 -5"
        color = "#DDBB00"
        raycaster-target>
    </a-sphere>

    <a-cylinder
        radius = "0.5" height = "1.5"
        position = " 1 0.75 -3"
        color = "#008800" 
        raycaster-target>
    </a-cylinder>

    <a-cone
        radius-bottom = "1" radius-top = "0" height = "2"
        position = "3 1 -4"
        color = "#4444CC"
        raycaster-target>
    </a-cone>

    <a-torus 
        radius="0.5" radius-tubular="0.1"
        position = "2 3 -4"
        rotation = "30 -20 0"
        color="#8800FF"
        raycaster-target>
    </a-torus>

    <!-- demo interaction boxes -->

    <a-dodecahedron
        radius = "0.5"
        position = "-0.8 1 -2"
        color = "#EEEEEE"
        raycaster-target="canGrab: true;"
        raycaster-color-change>
    </a-dodecahedron>

    <a-icosahedron
        radius = "0.5"
        position = "0.8 1 -2"
        color = "#EEEEEE"
        raycaster-target="canGrab: true;"
        raycaster-color-change>
    </a-icosahedron>

</a-scene>

</body>
</html>

I also admire the work of Ada Rose Cannon. You can find a very complete starter kit for AFrame.IO here:
https://aframe-xr-starterkit.glitch.me/. This example shows features like collision detection, AR integration, and more.

Let us know if you make anything cool!!

14 AFrame.IO Resources For Your WebXR Project

I’m a big fan of the work of the AFrame.IO community.  Thank you to Mozilla, Diego Marcos, Kevin Ngo, and Don McCurdy for their influence and effort to build a fun and productive platform for building WebVR experiences.   In this post, I’ve collected a few Github repositories and resources to support you in building AFrame experiences.

Talk Abstract: In the next few years, augmented reality and virtual reality will continue to provide innovations in gaming, education, and training. Other applications might include helping you tour your next vacation resort or explore a future architecture design. Thanks to open web standards like WebXR, web developers can leverage their existing skills in JavaScript and HTML to create delightful VR experiences. During this session, we will explore A-Frame.io, an open source project supported by Mozilla enabling you to craft VR experiences using JavaScript and a growing ecosystem of web components.

https://github.com/ngokevin/kframe
Kevin’s collection of A-Frame components and scenes.

https://webvr.donmccurdy.com/
Awesome WebXR from Don McCurdy

https://github.com/feiss/aframe-environment-component
Infinite background environments for your A-Frame VR scene in just one file.

https://github.com/aframevr/aframe-school
Interactive workshop and lessons for learning A-Frame and WebVR.

https://aframe.io/aframe-registry/
Official registry of cool AFrame stuff

https://github.com/donmccurdy/aframe-physics-system
Components for A-Frame physics integration, built on CANNON.js.

Experiment with AR and A-Frame
AFrame now has support for ARCore. Paint the real world with your XR content! Using FireFox Reality for iOS, you can leverage ARKit on your favorite iPad or iPhone.

https://github.com/michaelprosario/aframe
I’ve collected a small collection of demo apps to explore some of the core ideas of AFrame.

AFrame Layout Component
Automatically positions child entities in 3D space, with several layouts to choose from.

Animation
An animation component for A-Frame using anime.js. Also check out the animation-timeline component for defining and orchestrating timelines of animations.

Super Hands
All-in-one natural hand controller, pointer, and gaze interaction library for A-Frame. Seems to work well with Oculus Quest.

A-Frame Component loading Google Poly models from Google Poly
Component enables you to quickly load 3D content from Google Poly

aframe-htmlembed-component
HTML Component for A-Frame VR that allows for interaction with HTML in VR. Demo

https://github.com/nylki/aframe-lsystem-component
L-System/LSystem component for A-Frame to draw 3D turtle graphics. Using Lindenmayer as backend.

Thanks to the amazing work from Mozilla, WebXR usability has improved through specialized FireFox browsers:
FireFox Reality
FireFox Reality for HoloLens 2 – For raw ThreeJS scripts, it works well. I’m still testing AFrame scenes.

If you live in Central Florida or Orlando, consider checking out our local chapter of Google Developer Group. We enjoy building a fun, creative community of developers, sharing ideas and code, and supporting each other in the craft of software. Learn more about our community here:

GDGCentralFlorida.org
