3 Reasons Why Google Developer Groups Support Community Growth

Hi, friends. While looking through old blog posts I have written, I discovered that I wrote my first blog post about Google Developer Groups ten years ago. I have enjoyed the journey. In this blog post, I wanted to explore some of my personal motivations, stories, and feelings about supporting our local Google Developer Groups. For my kids and my community, I enjoy advancing the mission of helping “students young and old to love learning through making, tinkering, and exploration.” I appreciate the GDG community enabling me to explore this mission using Google tools and a culture of open innovation.

Growing Students: In the past month, I bumped into this cool episode of the Google Cloud Platform Podcast featuring one of my mentors, Dr. Laurie White.

Google Cloud for Higher Education with Laurie White and Aaron Yeats

Dr. Robert Allen and Dr. White invited me to join their Google Developer Group (GDG) while I lived in Macon, GA. The GDG of Macon focused on serving the students of Mercer University and the Macon community. I think they sensed my curiosity for teaching software engineering and invited me to teach some of my first sessions (Google App Engine, Firebase, JavaScript, etc.). The experiences amplified a corner of my soul that enjoys helping college students jump into the crazy world of software engineering. In the podcast, Dr. White underscores that traditional computer science education has many strengths. The average CS program, however, does not address many critical topics desired by engineering teams (i.e., working with a cloud provider, engineering software for easy testing, test automation, user-centered design, etc.). These gaps become blockers for early-stage developers seeking work. I found joy helping these students address those gaps and connecting them with opportunities around AI, web, mobile, and open source. In the Google ecosystem, there are tribes of mentors who want to help you become successful.

Growing a community of professionals: As developer community organizers, we recognize the opportunity and promise of software craftsmanship. We live in an amazing industry not blocked by atoms or the need for physical source material. In the world of software, you can start a business with a strong concept, persistence, and good habits for incremental learning. In the world of software, you can find a good technology job by becoming a little bit better every day AND connecting with a supportive community. For many, software engineering helps real people feed and elevate themselves and their families. I believe that’s an important mission. I believe our GDG communities hit a high mark in helping professionals grow and making the experience “excellent.” As GDG organizers, we’re passionate about helping you and your teams become successful with your cloud strategies and mobile/web apps, empowering creators with AI, and building a culture of design. I have had the blessing of many mentors. Dr. Allen gave me my first Google Cardboard and introduced me to Unity3D. I now work with a wonderful design firm focused on creating learning experiences with virtual and augmented reality. It’s important to remember that small sparks can grow into bigger things. It’s important to give back and grow the next generation. We seek to become sparks for others.
Growing future startups: I continue to believe that small businesses will be our engine of economic growth. The news often paints a sad picture of our world as broken. We love to support startups who believe they can meaningfully improve the world and help others become successful too. To that end, I love that Google helps startups become successful through its various growth programs like Google Developer Groups, Women Techmakers, startup.google.com, and student groups. Google’s learning team has put a lot of care into growing an open learning ecosystem through codelabs.google.com, web.dev/learn, Flutter.dev, kaggle.com/learn, and other product guides. Learning becomes more joyful when you can learn as a tribe. Why go solo?

Invite to DevFest Florida

If you’re looking for supportive mentors and a growth-oriented meetup community, I extend a warm invitation to check out DevFestFlorida.org. Working with my fellow GDG organizers across Tampa, Miami, and Orlando, we’re organizing one of the largest local dev conferences in the South to help you learn and grow. It’s an experience designed by developers for our local developers. DevFest Central Florida is a community-run, one-day conference that aims to bring technologists, developers, students, tech companies, and speakers together in one location to learn, discuss, and experiment with technology. Expect lots of hands-on learning and fun.

Consider joining us for DevFest Florida Orlando.
– WHERE: Seminole State College Wayne M. Densch Partnership Center in Sanford, FL
– WHEN: Oct 14th
– Check out the details on our tracks:
Web
Mobile
Cloud and data
AI
Startup

Learn more about DevFest Florida Orlando – Oct 14th
Use the code SECRETSALE to get 10% off tickets.

Bird Watching With Python and TensorFlowJS ( Part 3 )

In this series, we will continue building a small system to capture pictures of my back yard and detect if we see anything. In the future, we want to search the database for birds. This post will focus on the problem of detecting objects in the image and storing the results in a database. Check out part 1 and part 2 for more context on this project.

TensorFlow.js is an open-source JavaScript library developed by Google’s TensorFlow team. It enables machine learning and deep learning tasks to be performed directly in web browsers and Node.js environments using JavaScript or TypeScript. TensorFlow.js brings the power of TensorFlow, a popular machine learning framework, to the JavaScript ecosystem, making it accessible for web developers and data scientists.
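Before we get to object detection, here is a tiny, self-contained sketch (not part of the bird watcher code) that shows TensorFlow.js running in Node. It just builds a small tensor and prints it, which is enough to confirm the library is installed and working:

// Minimal TensorFlow.js check in Node.js
const tf = require("@tensorflow/tfjs-node");

// Build a 2x2 tensor and print its contents to the console
const t = tf.tensor([1, 2, 3, 4], [2, 2]);
t.print();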

With the TensorFlow.js framework, you have access to the COCO-SSD model, which detects 80 classes of common objects. The output reports a list of objects found in the image, a confidence score, and a bounding box for each object. Check out this video for an example.

In the following code, we import our dependencies. These include:
– @tensorflow/tfjs-node – TensorFlow.js for Node
– @tensorflow-models/coco-ssd – the TensorFlow model for common object detection
– amqplib – a library for connecting to RabbitMQ
– @supabase/supabase-js – to log data about the objects found, we will send our data to Supabase
– @azure/storage-blob – to download pictures from Azure blob storage, we add a client library to connect to the cloud

const tf = require("@tensorflow/tfjs-node")
const amqp = require('amqplib');
const cocosSSd = require("@tensorflow-models/coco-ssd")
const { createCanvas, loadImage } = require('canvas');
const { createClient } = require('@supabase/supabase-js');
const { BlobServiceClient } = require("@azure/storage-blob");
const { v1: uuidv1 } = require("uuid");
var fs = require('fs');

My friend Javier got me excited about trying out https://supabase.com/. If you’re looking for a simple document or relational database solution with an easy API, it’s pretty cool. This code grabs some details from the environment and sets up a Supabase client.

const supabaseUrl = process.env.SUPABASEURL;
const supabaseKey = process.env.SUPABASEKEY;
const supabase = createClient(supabaseUrl, supabaseKey)

To learn more about Supabase, check out supabase.com.

In our situation, the job-processor program and the watcher program will probably run on two different machines. I will try to run the watcher process on a Raspberry Pi. The job processor will probably run on some other machine. The watcher program takes pictures and stores the files in Microsoft Azure blob storage. The watcher signals the job processor by sending a message through RabbitMQ.

Let’s set up the connection to Azure Blob storage.

const AZURE_BLOB_STORAGE_CONNECTION_STRING = process.env.AZURE_BLOB_STORAGE_CONNECTION_STRING;

if (!AZURE_BLOB_STORAGE_CONNECTION_STRING) 
{
  throw Error('Azure Storage Connection string not found');
}

const containerName = "picturesblobstorage";
const blobServiceClient = BlobServiceClient.fromConnectionString(AZURE_BLOB_STORAGE_CONNECTION_STRING);
const containerClient = blobServiceClient.getContainerClient(containerName);

When we want to download a file from Azure blob storage, we leverage our container client.

async function downloadPictureFromBlobStorage(fileName)
{  
  try 
  {
    const blobClient = containerClient.getBlobClient(fileName);
    console.log(`Downloading blob ${fileName} to ${fileName}`);
    const downloadBlockBlobResponse = await blobClient.downloadToFile(fileName);
    console.log(`Downloaded ${downloadBlockBlobResponse.contentLength} bytes`);
    return true;
  } catch (err) {
    console.error(err.message);
    return false;
  }  
}

Let’s set up our class for getting insights from our object detection algorithm. In the following class, the “makeCanvasFromFilePath” method loads the picture into memory as a canvas. Using the COCO-SSD model, we detect objects in the image using the predict method.

class ObjectDetection 
{
    constructor()
    {
        this.model = null;
    }

    async predict(image)
    {
        if(!this.model)
        {
            this.model = await cocosSSd.load();
        }

        const canvas = await this.makeCanvasFromFilePath(image);    
        const predictions = await this.model.detect(canvas);

        return { predictions: predictions }
    }

    async makeCanvasFromFilePath(image) {
        const img = await loadImage(image);
        const canvas = createCanvas(img.width, img.height);
        const ctx = canvas.getContext('2d');
        ctx.drawImage(img, 0, 0);
        return canvas;
    }
}

const objectDetection = new ObjectDetection();

Let’s configure RabbitMQ

// RabbitMQ connection URL
const rabbitmqUrl = 'amqp://localhost';

// Queue name to consume messages from
const queueName = 'review-picture-queue';

The “processJsonMessage” method is the heart of this Node.js script. At a high level, the system does the following tasks:
– Read a JSON message from the watcher program.
– Download the picture from Azure blob storage.
– Run object detection on the file.
– Store findings in the database (Supabase).

// Create a function to process JSON messages
async function processJsonMessage(message) {
  try {
    const json = JSON.parse(message.content.toString());
    // Replace this with your custom processing logic for the JSON data
    console.log('Received JSON:', json);
    console.log(json.fileName);

    // need function to download file from blob storage 
    const fileDownloaded = await downloadPictureFromBlobStorage(json.fileName);
    if(fileDownloaded)
    {
      // Run TF prediction ...
      const response = await objectDetection.predict(json.fileName);
      console.log(response)

      // Store data in supabase ....
      const { error } = await supabase.from('watch_log').insert({ file_name: json.fileName, json: response })    
      if(error)
      {
        console.log("error object defined");
        console.log(error);
      }  

      deletePictureFromBlobStorage(json.fileName);
      fs.unlinkSync(json.fileName);

    }else{
      console.log("Error downloading file from blob storage");
    }

  } catch (error) {
    console.error('Error processing JSON message:', error.message);
  }
}
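One helper used above, deletePictureFromBlobStorage, isn’t shown in this post; check the repository linked at the end for the actual implementation. A minimal sketch of what it could look like, reusing the containerClient from earlier, is below:

async function deletePictureFromBlobStorage(fileName)
{
  try
  {
    // Remove the blob once we have finished analyzing the local copy
    await containerClient.deleteBlob(fileName);
    return true;
  } catch (err) {
    console.error(err.message);
    return false;
  }
}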

Here’s some sample data captured as JSON:

{
  "predictions": [
    {
      "bbox": [
        -0.36693572998046875,
        163.0312156677246,
        498.0821228027344,
        320.0614356994629
      ],
      "class": "person",
      "score": 0.6217759847640991
    }
  ]
}
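Since each prediction carries a class label, the future “search the database for birds” step can lean on this stored JSON. As a rough sketch (not part of the current script), you could pull rows back from Supabase and filter for birds on the client side; this assumes the watch_log table and columns used above:

async function findBirdSightings()
{
  // Pull back the stored predictions and keep the rows that include a bird
  const { data, error } = await supabase.from('watch_log').select('file_name, json');
  if (error) {
    console.error(error);
    return [];
  }

  return data.filter(row =>
    row.json.predictions.some(p => p.class === 'bird' && p.score > 0.5)
  );
}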

In this last section, we connect to RabbitMQ so that we can start accepting work.


// Connect to RabbitMQ and consume messages
async function consume() {
  try {
    const connection = await amqp.connect(rabbitmqUrl);
    const channel = await connection.createChannel();
    await channel.assertQueue(queueName, { durable: false });
    console.log(`Waiting for messages in ${queueName}. To exit, press Ctrl+C`);
    channel.consume(queueName, (message) => {
      if (message !== null) {
        processJsonMessage(message);
        channel.ack(message);
      }
    });
  } catch (error) {
    console.error('Error:', error.message);
  }
}

consume();

That’s about it. If you need to see the completed project files, check out the following GitHub link:
https://github.com/michaelprosario/birdWatcher

If you’re interested in exploring more tutorials on TensorFlow.js, check out the following link to codelabs:
TensorFlowJs Code Labs

If you want to learn more about TensorFlow.js and machine learning, our Orlando Google Developer Group is organizing a fun one-day community conference on Oct 14th.

Join us for DevFest Florida – Oct 14

AI | Mobile | Web | Cloud | Community

DevFest Central Florida is a community-run, one-day conference that aims to bring technologists, developers, students, tech companies, and speakers together in one location to learn, discuss, and experiment with technology.

Bird Watching With Python and TensorFlowJS ( Part 2 )

In this series, we will continue building a small system to capture pictures of my back yard and detect if we see birds. Check out the first post here. In this post, we will focus on the problem of taking pictures every minute or so. For fun, I decided to build this part in Python. To review the system overview, check out my previous blog post here.

The solution for the watcher involves the following major elements and concepts.
– Set up a connection to Azure blob storage. To keep things simple, Azure blob storage enables you to securely store files in the Microsoft Azure cloud at low cost.
– Set the time interval for taking pictures.
– Set up a connection to the message queue system. The watcher program needs to send a message to an analysis program that will analyze the image content. Please keep in mind that RabbitMQ is simply “email for computer programs.” It’s a way for programs to message each other to do work. I will be running the watcher program on a fairly low-powered Raspberry Pi 2. In my case, I wanted to off-load the image analysis to another computer system with a bit more horsepower. In future work, we might move the analysis program to a cloud function. That’s a topic for a future post.

Here’s some pseudo code.
– Set up the program to take pictures
– Loop
  – Take a picture
  – Store the picture on disk
  – Upload the picture to Azure blob storage
  – Signal the analysis program to review the picture
  – Delete the local copy of the picture
  – Wait until we need to take the next picture

Setting the stage

Let’s start by sketching out the functions for setting up blob storage, the RabbitMQ message queue, and the camera. At the top of the Python file, we need to import the following:

import cv2
import time
import pika
import json
import os
from azure.storage.blob import BlobServiceClient

In the following code, we set up the major players: blob storage, the RabbitMQ message queue, and the camera.

container_client = setup_blob_storage()

# Set the time interval in seconds
interval = 60  # every min

# Initialize the webcam
cap = cv2.VideoCapture(0)

# Check if the webcam is opened successfully
if not cap.isOpened():
    print("Error: Could not open the webcam.")
    exit()

queue_name, connection, channel = setup_rabbit_message_queue()

Take a picture

In the later part of the program, we start a loop to take a picture and send the data to the analysis program.

ret, frame = cap.read()
if not ret:
    print("Error: Could not read frame from the webcam.")
    break

timestamp, filename = store_picture_on_disk(frame)
print(f"Image captured and saved as {filename}")

Send the picture to Blob Storage

local_file_path = filename
blob_name = filename
with open(local_file_path, "rb") as data:
    container_client.upload_blob(name=blob_name, data=data)

Signal the analysis program to review the image using a message

# Prepare a JSON message
message = {
    'fileName': filename,
    'timestamp': timestamp,
}
message_json = json.dumps(message)

# Send the JSON message to RabbitMQ
channel.basic_publish(exchange='', routing_key=queue_name, body=message_json)
print(f"Message sent to RabbitMQ: {message_json}")

In the previous code sketches, we have not implemented several key functions. Let’s fill in those functions now. You’ll need to position these functions near the top of your script.

setup_blob_storage

Please use this link to learn about Azure Blob storage, account configuration, and Python code patterns.

container_name = "picturesblobstorage"

def setup_blob_storage():
    connect_str = "Get connection string for your Azure storage account"
    blob_service_client = BlobServiceClient.from_connection_string(connect_str)
    container_client = blob_service_client.get_container_client(container_name)
    return container_client

setup_rabbit_message_queue

Set up the connection to the message queue system.

def setup_rabbit_message_queue():
    rabbitmq_host = 'localhost'
    rabbitmq_port = 5672
    rabbitmq_username = 'guest'
    rabbitmq_password = 'guest'
    queue_name = 'review-picture-queue'

    # Initialize RabbitMQ connection and channel with authentication
    credentials = pika.PlainCredentials(rabbitmq_username, rabbitmq_password)
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=rabbitmq_host, port=rabbitmq_port, credentials=credentials))
    channel = connection.channel()

    # Declare a queue for sending messages
    channel.queue_declare(queue=queue_name)
    return queue_name, connection, channel

To keep this blog post brief, I will not jump into all the details of setting up RabbitMQ on your local system. Please refer to this 10-minute video for details on setting up this sub-system.

This blog post does a great job of walking through setting up RabbitMQ with “docker-compose.” It’s a lightweight way to set things up in your environment.

Easy RabbitMQ Deployment with Docker Compose (christian-schou.dk)

store_picture_on_disk

def store_picture_on_disk(frame):
    timestamp = time.strftime("%Y%m%d%H%M%S")
    filename = f"image_{timestamp}.jpg"
    cv2.imwrite(filename, frame)
    return timestamp, filename

In our final blog post, we’ll use Node.js to load the COCO-SSD model into memory and let it comment on the image in question.

You can check out the in-progress code solution at the following GitHub repository.

https://github.com/michaelprosario/birdWatcher

Check out object-detection.js to see how object detection will work. Check out watcher.py for a completed version of this tutorial.

If you want to learn more about TensorFlow.js and machine learning, our Orlando Google Developer Group is organizing a fun one-day community conference on Oct 14th.

Join us for DevFest Florida – Oct 14

AI | Mobile | Web | Cloud | Community

DevFest Central Florida is a community-run, one-day conference that aims to bring technologists, developers, students, tech companies, and speakers together in one location to learn, discuss, and experiment with technology.

Make A Bird Detector with TensorFlowJs ( Part 1 )


In the Rosario tradition of boldly exploring nature, Sarah and my eldest have gotten into bird watching. It’s been cool to see my son and my wife going on hikes and finding cool birds with a local meetup. My wife gave me a challenge to make a bird watcher device for our yard and our bird house. In her vision, we want to understand when we see the most birds in the back yard and capture great photos. In future work, we might even identify the type of bird. In today’s post, I thought we would talk through the high-level code I’ve prototyped. This will become a fun family project and give me an opportunity to play with some TensorFlow.js.

In the past 12 years, the industry has exploded with innovations involving machine learning. We see these innovations when we ask our home assistant to play a song, use ChatGPT, or use speech-to-text. In the domain of bird watching, we might build a machine learning model using pictures of different birds with labels specifying the type of bird. A machine learning (ML) system observes patterns in the input data set (a set of bird pictures with labels) and constructs rules or structures so the system can classify future images. In contrast to traditional computer programming, we do not explicitly define the code or rules. We train the model using examples and feedback so it learns. In this case, we want to determine if a picture contains a bird.

In this prototype, I will leverage a pretrained ML model called COCO-SSD. The COCO aspect of the model finds 80 different classes of things in the context of the picture (including birds). The model estimates whether it detects a bird in the picture and a bounding box location for the object. The model makes a best attempt to segment the picture, report on all the objects it can see, and provide labels.

This diagram provides an overview of the prototype system.

Major elements

  • Watcher – In this project, a Python script takes pictures every 5 minutes. Pictures get stored to the file system. The file name of the picture gets stored in a message that eventually gets added to a queue. (See the sample message after this list.)
  • RabbitMQ – We’re using RabbitMQ with JSON messages to manage our queue plumbing. You can think of RabbitMQ as email for computer programs. You can insert messages into different folders, and job processor programs start executing when they receive messages in these folders. This also enables us to create multi-program solutions in different languages.
  • Job Processor – The job processor, written in JavaScript using Node.js, monitors the message queue for work. When it receives a file name to process, we load the image into memory and ask the machine learning process to review it. The COCO-SSD model reports a list of objects it detects with associated confidence scores. If the system finds a bird, the process writes a database record with the details.
  • Database – For this solution, we’re currently prototyping with Supabase. On many of my weekend projects, I enjoy being able to rapidly create structures and store data in the cloud. Under the hood, it uses PostgreSQL and feels pretty scalable. Thank you to my friend Javier who introduced me to this cool tool.
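Concretely, the message the watcher drops onto the queue is a small JSON document. The values below are made up, but the shape matches what the watcher script from part 2 sends (a timestamped file name plus the timestamp itself):

{
  "fileName": "image_20231001120000.jpg",
  "timestamp": "20231001120000"
}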

The job processor element uses TensorFlow.js to execute the object detection model. TensorFlow.js is a pretty amazing solution for executing ML models in the browser or in Node.js backends. Learn more with the following talk.

In our next post, we’ll dive into the details of the job processor process.

If you want to learn more about TensorFlow.js and machine learning, our Orlando Google Developer Group is organizing a fun one-day community conference on Oct 14th.

Join us for DevFest Florida – Oct 14

AI | Mobile | Web | Cloud | Community

DevFest Central Florida is a community-run, one-day conference that aims to bring technologists, developers, students, tech companies, and speakers together in one location to learn, discuss, and experiment with technology.

Make music with code using DotNet Core

Curious about making music with code? As a software engineer and music guy, I have enjoyed seeing the connections between music and computers. The first computer programmer, Ada Lovelace, predicted that computers would move beyond doing boring math problems into the world of creative arts. If a problem can be converted to a system of symbols, she reasoned, computers could help. She used music as her example.

In previous experiments, I have explored the ideas of code and music using TypeScript, NodeJs, and Angular. You can find this work here.

After looking around GitHub, I found a really cool music library for C# devs. I’m hoping to use it to create tools that make quick backing tracks for practicing improv. It’s just fun to explore electronic music, theory, and computational music. Make sure to check out the blog post by Maxim (the author of DryWetMIDI). It’s a pretty comprehensive guide to his library.

What is MIDI?

MIDI stands for Musical Instrument Digital Interface. In a file format like WAV or MP3, the computer stores the raw waveform data of the sound. The MIDI file format and protocols instead operate at a conceptual layer of music data. You can think of a MIDI file as having many tracks, and you can assign different instruments (sounds) to tracks. In each track, the musician records a song as many events. MIDI music events might include turning a note on, turning a note off, engaging the sustain pedal, and changing tempo. MIDI music software like GarageBand, Cakewalk, and BandLab can send the MIDI event data to a software synth, which interprets the events into sound. In general, the MIDI event paradigm can be extended to support other things like lighting and lyrics.
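To make the “events, not audio” idea concrete, here is a small, hypothetical sketch (written in TypeScript rather than C#, and not tied to any MIDI library) of what a track of MIDI-style events might look like as plain data:

// A MIDI track stores musical events with timing, not audio samples
type MidiEvent =
  | { type: 'noteOn'; tick: number; channel: number; note: number; velocity: number }
  | { type: 'noteOff'; tick: number; channel: number; note: number }
  | { type: 'setTempo'; tick: number; microsecondsPerQuarterNote: number };

// Middle C (note 60) held for one quarter note, assuming 480 ticks per quarter note
const track: MidiEvent[] = [
  { type: 'setTempo', tick: 0, microsecondsPerQuarterNote: 500000 }, // 120 beats per minute
  { type: 'noteOn', tick: 0, channel: 0, note: 60, velocity: 100 },
  { type: 'noteOff', tick: 480, channel: 0, note: 60 }
];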

DryWetMidi Features

  • Writing MIDI files: For my experiments, I have used DryWetMIDI to explore projects for making drum machines and arpeggio makers. I’m really curious about using computers to generate the skeleton of songs. Can computers generate a template for a pop song, a fiddle tune, or a ballad? We’re about to find out! DryWetMIDI provides a lower-level API for raw MIDI event data. The higher-level “Pattern” and “PatternBuilder” APIs enable coders to express a single thread of musical ideas. Let’s say you’re trying to describe a piece for a string quartet. The “PatternBuilder” API enables you to use a fluent syntax to describe the notes played by the cello player. While playing with this API, I have to say that I loved the ability to combine musical patterns. The framework can stack or combine musical patterns into a single pattern. Let’s say you have three violin parts in three patterns. The library enables you to blend those patterns into a single idea with one line of code. Maxim showed great care in designing these APIs.
  • Music theory tools: The framework provides good concepts for working with notes, intervals, chords, and other fundamental concepts of music.
  • Reading MIDI files: The early examples show that DryWetMIDI can read MIDI files well. I’ve seen some utility functions that enable you to dump MIDI files to CSVs to support debugging. The documentation hints at a chord extraction API that looks really cool. Looking forward to testing this.
  • Device interaction: DryWetMIDI enables makers to send MIDI events and receive them. This capability might become helpful if you’re making a music tutor app. You can use the music device interaction API to watch note events. The system can provide feedback to the player if they’re playing the right notes at the appropriate time.

Visions for MusicMaker.NET for .NET Core

In the following code example, I’ve built an API to describe drum patterns using strings. The strings represent sound at a resolution of 16th notes. Using the “MakeDrumTrack” service, we can quickly express patterns of percussion.

IMidiServices midiServices = new MidiServices();
var service = new MakeDrumTrackService(midiServices);
var command = new MakeDrumTrackCommand
{
    BeatsPerMinute = 50,
    FileName = fileName,
    Tracks = new List<DrumTrackRow>
    {
        new()
        {
            Pattern = "x-x-|x-x-|x-x-|x-x-|x-x-|x-x-|x-x-|x-x-|",
            InstrumentNumber = DrumConstants.HiHat
        },
        new()
        {
            Pattern = "x---|----|x---|----|x---|----|x---|----|",
            InstrumentNumber = DrumConstants.AcousticBassDrum
        },
        new()
        {
            Pattern = "----|x---|----|x--x|----|x---|----|x--x|",
            InstrumentNumber = DrumConstants.AcousticSnare
        },
        new()
        {
            Pattern = "-x-x|x-x-|-x-x|x--x|-xx-|xx--|-xx-|x--x|",
            InstrumentNumber = DrumConstants.HiBongo
        }
    },
    UserId = "system"
};

// act
var response = service.MakeDrumTrack(command);

Using the ArpeggioPlayer service, we’ll be able to render out a small fragment of music given a list of chords and an arpeggio spec.

var tempo = 180;
var instrument = (byte)Instruments.AcousticGrandPiano;
var channel = 1;

var track = new ChordPlayerTrack(instrument, channel, tempo);
var command = ArpeggioPatternCommandFactory.MakeArpeggioPatternCommand1();
var player = new ArpeggioPlayer(track, command);
var chordChanges = GetChords1();  // Am | G | F | E

player.PlayFromChordChanges(chordChanges);

// Write MIDI file with DryWetMIDI
var midiFile = new MidiFile();
midiFile.Chunks.Add(track.MakeTrackChunk());
midiFile.Write("arp1.mid", true);

In the following method, the maker can describe the arpeggio patterns using ASCII-art strings. The arpeggio patterns operate at a resolution of sixteenth notes. This works fine for most pop or electronic music. In future work, we can build web apps or mobile UX to enable the user to design the arpeggio patterns or drum patterns.

public static MakeArpeggioPatternCommand MakeArpeggioPatternCommand1()
{
    var command = new MakeArpeggioPatternCommand
    {
        Pattern = new ArpeggioPattern
        {
            Rows = new List<ArpeggioPatternRow>
            {
                new() { Type = ArpeggioPatternRowType.Fifth, Octave = 2, Pattern = "----|----|----|---s|" },
                new() { Type = ArpeggioPatternRowType.Third, Octave = 2, Pattern = "----|--s-|s---|s---|" },
                new() { Type = ArpeggioPatternRowType.Root, Octave = 2, Pattern =  "---s|-s-s|---s|-s--|" },
                new() { Type = ArpeggioPatternRowType.Fifth, Octave = 1, Pattern = "--s-|s---|--s-|--s-|" },
                new() { Type = ArpeggioPatternRowType.Third, Octave = 1, Pattern = "-s--|----|-s--|----|" },
                new() { Type = ArpeggioPatternRowType.Root, Octave = 1, Pattern =  "s---|----|s---|----|" }
            },
            InstrumentNumber = Instruments.Banjo
        },
        UserId = "mrosario",
        BeatsPerMinute = 120,
        Channel = 0
    };
    return command;
}

The previous code sample writes out a music fragment like the following.

If you’re interested in following my work here, check out the following repo.

Getting Started with PhaserJs and TypeScript

Curious about building 2D games with web skills? In this post, we’ll explore tools and patterns for using Phaser and JavaScript to make engaging 2D games. We’ll cover tools for building experiences with our favorite language: TypeScript.


Related links

Quick start for Phaser 3 and TypeScript

As I find time for small personal project experiments, I decided to explore new developments with Phaser. In general, Phaser seems like a fun framework that enables novice game makers to build fun 2D games. The JavaScript language has become a popular choice since the language exists in every web browser.

What can you build with Phaser 3? Check out some examples here.
Tetris
Robowhale
Fun Math Game

In this blog post, we walk through building a small space shooter game.

As you start Phaser development, many tutorials walk you through a process to set up a web server to serve HTML and JavaScript content. Unfortunately, plain JavaScript alone does not guide makers toward creating well-formed code. In plain JavaScript, coders need to create stuff in baby steps, and you should test stuff at each step. If you use a tool like Visual Studio Code, the tool provides awesome guidance and autocomplete for devs. It would be nice if tools could improve to help you find more code faults and common syntax mistakes.

The TypeScript language, invented by Anders Hejlsberg, comes to the rescue. The TypeScript language and related coding tools provide robust feedback to the coder while constructing code. Plain JavaScript does not support ideas like interfaces or static type checking. Classes enable makers to describe a consistent template for making objects and their related methods and properties. In a similar way, interfaces enable coders to describe the properties and methods connected to an object, but do not define implementations of those methods. It turns out these ideas provide increased structure and guidance for professional developers creating large applications with JavaScript. When your tools help you find mistakes faster, you feel like you move faster. This provides great support for early-stage devs. TypeScript borrows patterns and ideas from C#, another popular language for game developers and business developers.
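To make that concrete, here is a tiny, hypothetical example (not taken from the starter kit) showing an interface and a class that implements it; the compiler flags any object that claims to be Movable but is missing one of the pieces:

// An interface describes the shape of an object without implementing it
interface Movable {
  speed: number;
  move(deltaX: number, deltaY: number): void;
}

// A class is a reusable template that also supplies the implementation
class PlayerShip implements Movable {
  speed = 5;

  constructor(public x: number, public y: number) {}

  move(deltaX: number, deltaY: number): void {
    this.x += deltaX * this.speed;
    this.y += deltaY * this.speed;
  }
}

const ship = new PlayerShip(400, 500);
ship.move(1, 0); // TypeScript checks the argument types for us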

I found a pretty nice starter kit that integrates TypeScript, a working web server, and Phaser 3. Here are the general steps for setting up your Phaser 3 development environment.

Install Visual Studio Code

Install NodeJs and NPM

  • Node.js enables coders to create JavaScript tools outside of the browser.
  • npm – When you build software in modern times, the tools you build will depend upon other lego blocks, and those lego blocks may depend upon others. The node package manager makes it easy to install Node.js tools and their related dependencies.
  • Use the following blog post to install NodeJs and NPM
  • Installing NodeJs and NPM from kinsta.com

Install Yarn

  • Install Yarn
  • Yarn is a package manager that provides more project organization tools.

Download Phaser 3+TypeScript repository

On my environment, I unzipped the files to /home/michaelprosario/phaser3-rollup-typescript-master.

Finish the setup and run

cd /home/michaelprosario/phaser3-rollup-typescript-master
yarn install
yarn dev

At this point, you should see that the system has started a web server using vite. Open your browser to http://localhost:3000. You should see a bouncing Phaser logo.

Open up Visual Studio Code and start hacking

  • Type CTRL+C in the terminal to stop the web server.
  • In the terminal, type ‘code .’ to load Visual Studio Code for the current folder.
  • Once Visual Studio Code loads, select “Terminal > New Terminal”
  • In the terminal, execute ‘yarn dev’
  • This will run your development web server and provide feedback to the coder on syntax errors every time a file gets saved.
  • If everything compiles, the web server serves your game at http://localhost:3000

TypeScript Sample Code

Open src/scenes/Game.ts using Visual Studio Code. If you’ve done Java or some C#, the code style should feel more familiar.

import Phaser from 'phaser';

// Creates a scene called demo as a class
export default class Demo extends Phaser.Scene {
  constructor() {
    super('GameScene');
  }

  preload() {
    // preload image asset into memory
    this.load.image('logo', 'assets/phaser3-logo.png');
  }

  create() {
    // add image to scene
    const logo = this.add.image(400, 70, 'logo');
    // bounce the logo using a tween
    this.tweens.add({
      targets: logo,
      y: 350,
      duration: 1500,
      ease: 'Sine.inOut',
      yoyo: true,
      repeat: -1
    });
  }
}
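For context, the starter template also ships an entry file that wires this scene into a running game. The exact file name and settings vary by template, but the wiring looks roughly like the sketch below (the width, height, and physics settings are my assumptions, not the template’s actual values):

import Phaser from 'phaser';
import Demo from './scenes/Game';

const config: Phaser.Types.Core.GameConfig = {
  type: Phaser.AUTO,         // use WebGL when available, otherwise canvas
  width: 800,
  height: 600,
  physics: { default: 'arcade' },
  scene: [Demo]              // register the scene class from Game.ts
};

export default new Phaser.Game(config);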

Make Unity 3D Games To Amaze Your Friends!

Hello makers! Like many in the computer industry, I had the dream of learning how to build video games. When math class seemed difficult, I found the inspiration to move forward because I had a strong motivation to learn how to build video games someday! Unity 3D and its amazing community of game creators have created powerful opportunities for curious makers to build games that amaze your friends. From my first encounters with Unity 3D, I felt that they had done a good job of educating their users. In the past few years, I have greatly admired the new strategies they have created to engage learners in their tools.

The idea of “modding” has engaged generations of gamers. (Thank you, Minecraft and Roblox!) We’ve become used to the idea that games set up a robust environment where you can build big and crazy things. In lots of games, you’re placed into a position of saving the world. (i.e., you’ve been given a motivation to do something bigger than yourself that’s fun.) The Unity 3D “microgame” tutorials provide students with the basic shell of well-crafted game experiences. In this context, the Unity 3D team has created tutorial experiences that gently guide learners through the Unity 3D environment, programming concepts, and their system for building Unity “lego” blocks. In this experience, you get to select your adventure. Do you want to build your own LEGO game? Do you want to make your own version of Super Mario Bros.? You can challenge yourself by building a cool kart racing game. In the videos below, I wanted to give a shout out to the LEGO action “game jam” and the kart racing tutorials.

I always enjoy learning new Unity tricks from other developers. It has been fun to pick apart aspects of these games. In the newest kart racing tutorials, you can also learn about the newer machine learning capabilities of Unity 3D (ML-Agents). It kind of blows my mind that these ideas can now appear in tutorials for early-stage coders. As I’ve tested these experiences with my kids, they have enjoyed creating novel kart racing experiences and environments. My older son has enjoyed customizing his own shooter game.

Make sure to check out Unity 3D’s Learning index here: https://learn.unity.com/

If you make something cool, please share a link below in the comments!

Your First Game Jam: LEGO Ideas Edition

In this edition, you will discover how to build a quest in your LEGO® Microgame using the newly released “Speak” and “Counter” LEGO® Behaviour Bricks. Learn step-by-step with a special guest from the LEGO® Games division and our Unity team to create your own unique, shareable game.

Build Your Own Karting Microgame

It’s never been easier to start creating with Unity. From download to Microgame selection, to modding, playing, and sharing your first playable game, this video shows you what you can accomplish in as little as 30 minutes!

For detailed step-by-step Unity tutorials, check out

The Official Guide to Your First Day in Unity playlist.


14 AFrame.IO Resources For Your WebXR Project


I’m a big fan of the work of the AFrame.IO community. Thank you to Mozilla, Diego Marcos, Kevin Ngo, and Don McCurdy for their influence and effort to build a fun and productive platform for building WebVR experiences. In this post, I’ve collected a few GitHub repositories and resources to support you in building A-Frame experiences.

Talk Abstract: In the next few years, augmented reality and virtual reality will continue to provide innovations in gaming, education, and training. Other applications might include helping you tour your next vacation resort or explore a future architecture design. Thanks to open web standards like WebXR, web developers can leverage their existing skills in JavaScript and HTML to create delightful VR experiences. During this session, we will explore A-Frame.io, an open source project supported by Mozilla enabling you to craft VR experiences using JavaScript and a growing ecosystem of web components.

https://github.com/ngokevin/kframe
Kevin’s collection of A-Frame components and scenes.

https://webvr.donmccurdy.com/
Awesome WebXR from Don McCurdy

https://github.com/feiss/aframe-environment-component
Infinite background environments for your A-Frame VR scene in just one file.

https://github.com/aframevr/aframe-school
Interactive workshop and lessons for learning A-Frame and WebVR.

https://aframe.io/aframe-registry/
Official registry of cool AFrame stuff

https://github.com/donmccurdy/aframe-physics-system
Components for A-Frame physics integration, built on CANNON.js.

Experiment with AR and A-Frame
A-Frame now has support for ARCore. Paint the real world with your XR content! Using Firefox Reality for iOS, you can leverage ARKit on your favorite iPad or iPhone.

https://github.com/michaelprosario/aframe
I’ve collected a small collection of demo apps to explore some of the core ideas of AFrame.

AFrame Layout Component
Automatically positions child entities in 3D space, with several layouts to choose from.

Animation
An animation component for A-Frame using anime.js. Also check out the animation-timeline component for defining and orchestrating timelines of animations.

Super Hands
All-in-one natural hand controller, pointer, and gaze interaction library for A-Frame. Seems to work well with Oculus Quest.

A-Frame Component loading Google Poly models from Google Poly
Component enables you to quickly load 3D content from Google Poly

aframe-htmlembed-component
HTML Component for A-Frame VR that allows for interaction with HTML in VR. Demo

https://github.com/nylki/aframe-lsystem-component
L-System/LSystem component for A-Frame to draw 3D turtle graphics. Using Lindenmayer as backend.

Thanks to the amazing work from Mozilla, WebXR usability has improved through specialized Firefox browsers:
Firefox Reality
Firefox Reality for HoloLens 2 – For raw Three.js scripts, it works well. I’m still testing A-Frame scenes.

If you live in Central Florida or Orlando, consider checking out our local chapter of Google Developer Groups. We enjoy building a fun, creative community of developers, sharing ideas and code, and supporting each other in the craft of software. Learn more about our community here:

GDGCentralFlorida.org


Build a Space Shooter with Phaser3 and JavaScript (Tutorial 3)

In this blog post series, I want to unpack building a 2D shooter game using Phaser 3. Phaser 3 provides a robust and fast game framework for early-stage JavaScript developers. In this tutorial, we will add aliens to the scene, give them some basic movement, and blow them up. Sound like a plan? Here’s what we will build.

Please make sure to check out Tutorial 1 to get started with this project. You’ll need to build upon the code and ideas from the previous blog posts. (post 1, post 2)

To see the code in a completed state, feel free to visit this link. Let’s start by making some modifications to the scene class to preload an enemy sprite graphic. The PNG file will represent how the alien should be drawn to screen. We associate the name ‘enemy1’ with our PNG file.

class Scene1 extends Phaser.Scene {

    preload() {
        this.load.image('ship', 'assets/SpaceShooterRedux/PNG/playerShip1_orange.png');
        this.load.image('laser', 'assets/SpaceShooterRedux/PNG/Lasers/laserBlue01.png');
        this.load.image('enemy1', 'assets/SpaceShooterRedux/PNG/Enemies/enemyBlack3.png');
    }

    ...

In the Phaser game framework, we associate moving game entities with sprites. To define a sprite, we build out an enemy class. When we put a sprite into our scene, a special function called the constructor gets called as the class is constructed. We’ve designed the constructor so that we can set the enemy location at an (x, y) coordinate and connect it to the scene.

In the constructor, we accomplish the following work. We set the texture of the sprite to ‘enemy1’ and set its position. Next, we connect this sprite to the physics engine of the scene. We’ll use the physics engine to detect when the enemy gets hit by lasers. We also initialize the deltaX factor to 3. It’s not super exciting, but the aliens will shiver from side to side randomly. This, however, is good enough for a simple lesson. After you complete this tutorial, I encourage you to go crazy with making the aliens move any way you want!

    class Enemy1 extends Phaser.GameObjects.Sprite {

    constructor(scene, x, y) {
        super(scene, x, y);
        this.setTexture('enemy1');
        this.setPosition(x, y);
        scene.physics.world.enable(this);

        this.gameObject = this;
        this.deltaX = 3;
    }

    ...

Adding movement to aliens

So, we’re ready to start moving some aliens. Let’s do this! We’re going to write three simple methods on the Enemy1 class. Following the pattern of all Phaser sprites, the update method gets called every game tick. It’s your job to tell the sprite how to move. Keep in mind, we’re going to do a simple “side to side” behavior randomly. In the update method, we start by picking a random number and rounding it to a whole number k. If k is 2, we make the sprite move left using the “this.moveLeft()” function. If k is 3, we make it move right using “this.moveRight()”. Otherwise, the alien stays put for that tick.

    update() {
        let k = Math.random() * 4;
        k = Math.round(k);

        if (k == 2) {
            this.moveLeft();
        }
        else if (k == 3) {
            this.moveRight();
        }
    }

    moveLeft() {
        if (this.x > 0) {
            this.x -= this.deltaX;
        }
    }

    moveRight() {
        if (this.x < SCREEN_WIDTH) {
            this.x += this.deltaX;
        }
    }

Make lots of aliens

At this point, you want to see lots of moving aliens. Let’s add the code to the scene class to construct the aliens. In the scene class, the “create” method is used to construct all objects. This includes our ship and the aliens. First, we create a special collection object called enemies. We’ll use this collection to track the enemies with the physics system. (this.enemies = this.physics.add.group()) On the next line, we create an Array so that we have a simple way to track the enemies that need updating. In the loop, we create 21 aliens, place them in random locations, and add them to our collections. (enemies and enemies2)

class Scene1 extends Phaser.Scene {

    ...

    create() {
        this.cursors = this.input.keyboard.createCursorKeys();
        this.myShip = new Ship(this, 400, 500);
        this.add.existing(this.myShip);

        // ======= adding enemies ============
        this.enemies = this.physics.add.group();
        this.enemies2 = new Array();

        let k = 0;
        for (k = 0; k < 21; k++) {
            let x = Math.random() * 800;
            let y = Math.random() * 400;

            this.enemy = new Enemy1(this, x, y);
            this.add.existing(this.enemy);
            this.enemies.add(this.enemy);
            this.enemies2.push(this.enemy);
        }
    }

In order to invoke our update code for all enemies, we need to make one more edit to the scene class. In the “update” method, we add a loop to call “update” on all enemies.

    update() {
        // there's more code related to the ship here 

        let j = 0;
        for (j = 0; j < this.enemies2.length; j++) {
            let enemy = this.enemies2[j];
            enemy.update();
        }
    }

At this point, we should see our aliens wiggling on the screen. And there’s much rejoicing!

Aliens go boom! Let’s do collision detection

In the laser class that we built in the last post, we need to make a few edits. Check out the code below. In the constructor of the ShipLaser, we set the texture, position, and speed, and store the parent scene in “this.scene”. We connect the laser instance to the physics engine using “scene.physics.world.enable”. On the next line, we tell the game framework to check for collisions between this laser and the enemies. When a collision happens, we handle the hit using the “handleHit” function.

    class ShipLaser extends Phaser.GameObjects.Sprite {

    constructor(scene, x, y) {
        super(scene, x, y);
        this.setTexture('laser');
        this.setPosition(x, y);
        this.speed = 10;
        this.scene = scene;

        // check out new code below ...
        scene.physics.world.enable(this);
        scene.physics.add.collider(this, scene.enemies, this.handleHit, null, this);
    }

In the handleHit function, you’ll notice that the laserSprite and enemySprite are passed as parameters to the method. In Phaser, you receive these references so that you can define behaviors associated with both sprites. In this case, we’re just going to destroy both objects.

    handleHit(laserSprite, enemySprite) {
        enemySprite.destroy(true);
        laserSprite.destroy(true);
    }
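The destroy calls are all this tutorial needs, but since the callback hands you both sprites, you can attach extra behavior here as well. As a hypothetical example (not part of the tutorial code), you could bump a score counter kept on the scene before removing the enemy:

    handleHit(laserSprite, enemySprite) {
        // Hypothetical: track a score on the scene.
        // this.scene.score would need to be initialized to 0 in the scene's create() method.
        this.scene.score += 10;

        enemySprite.destroy(true);
        laserSprite.destroy(true);
    }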

Hope this has been helpful. Please let me know if you have any questions.
