3 Reasons Why Google Developer Groups Support Community Growth

Hi, friends. While looking through old blog posts I have written, I discovered that I wrote my first post about Google Developer Groups ten years ago. I have enjoyed the journey. In this blog post, I wanted to explore some of my personal motivations, stories, and feelings about supporting our local Google Developer Groups. For my kids and my community, I enjoy advancing the mission of helping “students young and old to love learning through making, tinkering, and exploration.” I appreciate the GDG community enabling me to explore this mission using Google tools and a culture of open innovation.

Growing Students: In the past month, I bumped into this cool episode of the Google Cloud Platform Podcast with one of my mentors, Dr. Laurie White.

Google Cloud for Higher Education with Laurie White and Aaron Yeats

Dr. Robert Allen and Dr. White invited me to join their Google Developer Group (GDG) while I lived in Macon, GA. The GDG of Macon focused on serving the students of Mercer University and the Macon community. I think they sensed my curiosity for teaching software engineering and invited me to teach some of my first sessions. (Google App Engine, Firebase, JS, etc.) Those experiences amplified a corner of my soul that enjoys helping college students jump into the crazy world of software engineering. In the podcast, Dr. White underscores that traditional computer science education has many strengths. The average CS program, however, does not address many critical topics desired by engineering teams. (i.e. working with a cloud provider, engineering software for easy testing, test automation, user-centered design, etc.) These gaps become blockers for early-stage developers seeking work. I found joy in helping these students address those gaps and connecting them with opportunities around AI, web, mobile, and open source. In the Google ecosystem, there are tribes of mentors who want to help you become successful.

Growing a community of professionals: As developer community organizers, we recognize the opportunity and promise of software craftsmanship. We work in an amazing industry not blocked by atoms and the need for physical source material. In the world of software, you can start a business with a strong concept, persistence, and good habits of incremental learning. In the world of software, you can find a good technology job by becoming a little bit better every day AND connecting with a supportive community. For many, software engineering helps real people feed and elevate themselves and their families. I believe that’s an important mission. I believe our GDG communities hit a high mark in helping professionals grow and making the experience “excellent.” As GDG organizers, we’re passionate about helping you and your teams succeed with your cloud strategies, mobile and web apps, AI for creators, and design culture. I have had the blessing of many mentors. Dr. Allen gave me my first Google Cardboard and introduced me to Unity3D. I now work with a wonderful design firm focused on creating learning experiences with virtual and augmented reality. It’s important to remember that small sparks can grow into bigger things. It’s important to give back and grow the next generation. We seek to become sparks for others.
Growing future startups: I continue to believe that small businesses will be our engine of economic growth. The news often paints a sad picture of our world as broken. We love to support startups who believe they can meaningfully improve the world and help others become successful too. To that end, I love that Google helps startups succeed through its various growth programs like Google Developer Groups, Women Techmakers, startup.google.com, and student groups. Google’s learning team has put a lot of care into growing an open learning ecosystem through codelabs.google.com, web.dev/learn, Flutter.dev, kaggle.com/learn, and other product guides. Learning becomes more joyful when you can learn as a tribe. Why go solo?

Invite to DevFest Florida

If you’re looking for supportive mentors and a growth-oriented meetup community, I extend a warm invitation to DevFestFlorida.org. Working with my fellow GDG organizers across Tampa, Miami, and Orlando, we’re organizing one of the largest local dev conferences in the South to help you learn and grow. It’s an experience designed by developers for our local developers. DevFest Central Florida is a community-run, one-day conference that brings technologists, developers, students, tech companies, and speakers together in one location to learn, discuss, and experiment with technology. Lots of hands-on learning and fun.

Consider joining us for DevFest Florida Orlando.
– WHERE: Seminole State College Wayne M. Densch Partnership Center in Sanford, FL
– WHEN: Oct 14th
– Check out the details on our tracks:
Web
Mobile
Cloud and data
AI
Startup

Learn more about DevFest Florida Orlando – Oct 14th
Use code SECRETSALE to get 10% off tickets

Bird Watching With Python and TensorFlowJS ( Part 3 )

In this series, we continue building a small system that captures pictures of my back yard and detects if we see anything. In the future, we want to search the database for birds. This post focuses on the problem of detecting objects in each image and storing the results in a database. Check out part 1 and part 2 for more context on this project.

TensorFlow.js is an open-source JavaScript library developed by Google’s TensorFlow team. It enables machine learning and deep learning tasks to be performed directly in web browsers and Node.js environments using JavaScript or TypeScript. TensorFlow.js brings the power of TensorFlow, a popular machine learning framework, to the JavaScript ecosystem, making it accessible for web developers and data scientists.

Under the TensorFlow.js framework, you have access to the COCO-SSD model, which detects 80 classes of common objects. The output reports a list of objects found in the image, a confidence score, and a bounding box for each object. Check out this video for an example.

In the following code, we import some of our dependencies. This includes
– TFJS – the TensorFlow.js library for Node
– cocosSSd – the COCO-SSD TensorFlow model for common object detection
– amqp – a library for connecting to RabbitMQ
– supabase/supabase-js – to log data about the objects found, we will send our data to Supabase
– azure/storage-blob – to download pictures from Azure blob storage, we add a client library to connect to the cloud

const tf = require("@tensorflow/tfjs-node")
const amqp = require('amqplib');
const cocosSSd = require("@tensorflow-models/coco-ssd")
const { createCanvas, loadImage } = require('canvas');
const { createClient } = require('@supabase/supabase-js');
const { BlobServiceClient } = require("@azure/storage-blob");
const { v1: uuidv1 } = require("uuid");
var fs = require('fs');

My friend Javier got me excited about trying out https://supabase.com/. If you’re looking for a simple document or relational database solution with an easy API, it’s pretty cool. This code grabs a few details from the environment and sets up a Supabase client.

const supabaseUrl = process.env.SUPABASEURL;
const supabaseKey = process.env.SUPABASEKEY;
const supabase = createClient(supabaseUrl, supabaseKey)

To learn more about Supabase, check out supabase.com.

In our situation, the job-processor program and the watcher program will probably run on two different machines. I will try to run the watcher process on a Raspberry Pi. The job processor will probably run on some other machine. The watcher program takes pictures and stores the files in Microsoft Azure blob storage. The watcher then signals the job processor by sending a message through RabbitMQ.

Let’s set up the connection to Azure blob storage.

const AZURE_BLOB_STORAGE_CONNECTION_STRING = process.env.AZURE_BLOB_STORAGE_CONNECTION_STRING;

if (!AZURE_BLOB_STORAGE_CONNECTION_STRING) 
{
  throw Error('Azure Storage Connection string not found');
}

const containerName = "picturesblobstorage";
const blobServiceClient = BlobServiceClient.fromConnectionString(AZURE_BLOB_STORAGE_CONNECTION_STRING);
const containerClient = blobServiceClient.getContainerClient(containerName);

When we want to download a file from Azure blob storage, we leverage our container client.

async function downloadPictureFromBlobStorage(fileName)
{  
  try 
  {
    const blobClient = containerClient.getBlobClient(fileName);
    console.log(`Downloading blob ${fileName} to ${fileName}`);
    const downloadBlockBlobResponse = await blobClient.downloadToFile(fileName);
    console.log(`Downloaded ${downloadBlockBlobResponse.contentLength} bytes`);
    return true;
  } catch (err) {
    console.error(err.message);
    return false;
  }  
}

Let’s set up our class for getting insight from our object detection model. In the following class, the “makeCanvasFromFilePath” method loads the picture into memory as a canvas. Using the COCO-SSD model, we detect objects in the image using the predict method.

class ObjectDetection 
{
    constructor()
    {
        this.model = null;
    }

    async predict(image)
    {
        if(!this.model)
        {
            this.model = await cocosSSd.load();
        }

        const canvas = await this.makeCanvasFromFilePath(image);    
        const predictions = await this.model.detect(canvas);

        return { predictions: predictions }
    }

    async makeCanvasFromFilePath(image) {
        const img = await loadImage(image);
        const canvas = createCanvas(img.width, img.height);
        const ctx = canvas.getContext('2d');
        ctx.drawImage(img, 0, 0);
        return canvas;
    }
}

const objectDetection = new ObjectDetection();

Let’s configure RabbitMQ

// RabbitMQ connection URL
const rabbitmqUrl = 'amqp://localhost';

// Queue name to consume messages from
const queueName = 'review-picture-queue';

The “processJsonMessage” method is the heart of this Node.js script. At a high level, it does the following tasks.
– Read a JSON message from the watcher program.
– Download the picture from Azure blob storage.
– Run object detection on the file.
– Store the findings in the database (Supabase).

// Create a function to process JSON messages
async function processJsonMessage(message) {
  try {
    const json = JSON.parse(message.content.toString());
    // Replace this with your custom processing logic for the JSON data
    console.log('Received JSON:', json);
    console.log(json.fileName);

    // need function to download file from blob storage 
    const fileDownloaded = await downloadPictureFromBlobStorage(json.fileName);
    if(fileDownloaded)
    {
      // Run TF prediction ...
      const response = await objectDetection.predict(json.fileName);
      console.log(response)

      // Store data in supabase ....
      const { error } = await supabase.from('watch_log').insert({ file_name: json.fileName, json: response })    
      if(error)
      {
        console.log("error object defined");
        console.log(error);
      }  

      deletePictureFromBlobStorage(json.fileName);
      fs.unlinkSync(json.fileName);

    }else{
      console.log("Error downloading file from blob storage");
    }

  } catch (error) {
    console.error('Error processing JSON message:', error.message);
  }
}
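
The snippet above calls a “deletePictureFromBlobStorage” helper that I haven’t shown. Here’s a minimal sketch of how it could look using the same container client; it assumes the blob name matches the local file name, as in the rest of this post.

async function deletePictureFromBlobStorage(fileName)
{
  try
  {
    // Remove the blob now that the local analysis is finished
    await containerClient.deleteBlob(fileName);
    return true;
  } catch (err) {
    console.error(err.message);
    return false;
  }
}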

Here’s some sample data captured as JSON:

{
  "predictions": [
    {
      "bbox": [
        -0.36693572998046875,
        163.0312156677246,
        498.0821228027344,
        320.0614356994629
      ],
      "class": "person",
      "score": 0.6217759847640991
    }
  ]
}
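
Since the long-term goal is searching for birds, a tiny helper like this sketch (not part of the current repo) could filter the predictions down to bird sightings before we decide whether a capture is interesting. The 0.5 threshold is just an example.

function findBirds(predictions, minScore = 0.5)
{
  // COCO-SSD reports birds with the "bird" class label
  return predictions.filter(p => p.class === 'bird' && p.score >= minScore);
}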

In this last section, we connect to RabbitMQ so that we can start accepting work.


// Connect to RabbitMQ and consume messages
async function consume() {
  try {
    const connection = await amqp.connect(rabbitmqUrl);
    const channel = await connection.createChannel();
    await channel.assertQueue(queueName, { durable: false });
    console.log(`Waiting for messages in ${queueName}. To exit, press Ctrl+C`);

    channel.consume(queueName, (message) => {
      if (message !== null) {
        processJsonMessage(message);
        channel.ack(message);
      }
    });
  } catch (error) {
    console.error('Error:', error.message);
  }
}

consume();

That’s about it. If you need to see the completed project files, check out the following GitHub link:
https://github.com/michaelprosario/birdWatcher

If you’re interested in exploring more tutorials on TensorFlowJs, check out the following links to code labs:
TensorFlowJs Code Labs

If you want to learn more about TensorFlow.js and machine learning, our Orlando Google Developer Group is organizing a fun one-day community conference on Oct 14th.

Join us for DevFest Florida – Oct 14

AI | Mobile | Web | Cloud | Community

DevFest Central Florida is a community-run, one-day conference that brings technologists, developers, students, tech companies, and speakers together in one location to learn, discuss, and experiment with technology.

Bird Watching With Python and TensorFlowJS ( Part 2 )

In this series, we continue building a small system to capture pictures of my back yard and detect if we see birds. Check out the first post here. In this post, we focus on the problem of taking pictures every minute or so. For fun, I decided to build this part in Python. To review the system overview, check out my previous blog post here.

The solution for the watcher involves the following major elements and concepts.
– Set up a connection to Azure blob storage. To keep things simple, Azure blob storage enables you to securely store files in the Microsoft Azure cloud at low cost.
– Set the time interval for taking pictures.
– Set up a connection to the message queue system. The watcher program needs to send a message to an analysis program that will analyze the image content. Please keep in mind that RabbitMQ is simply “email for computer programs.” It’s a way for programs to message each other to do work. I will be running the watcher program on a fairly low-powered Raspberry Pi 2. In my case, I wanted to offload the image analysis to another computer system with a bit more horsepower. In future work, we might move the analysis program to a cloud function. That’s a topic for a future post.

Here’s some pseudocode.
– Setup the program to take pictures
– Loop
– Take a picture
– Store the picture on disk
– Upload the picture to Azure blob storage
– Signal the analysis program to review the picture
– Delete the local copy of the picture
– Wait until we need to take a picture

Setting the stage

Let’s start by sketching out the functions for setting up the blob storage, the RabbitMQ message queue, and the camera.
At the top of the Python file, we need to import the following:

import cv2
import time
import pika
import json
import os
from azure.storage.blob import BlobServiceClient

In the following code, we set up the major players: blob storage, the RabbitMQ message queue, and the camera.

container_client = setup_blob_storage()

# Set the time interval in seconds
interval = 60  # every minute

# Initialize the webcam
cap = cv2.VideoCapture(0)

# Check if the webcam is opened successfully
if not cap.isOpened():
    print("Error: Could not open the webcam.")
    exit()

queue_name, connection, channel = setup_rabbit_message_queue()

Take a picture

In the later part of the program, we start a loop to take a picture and send the data to the analysis program.

ret, frame = cap.read()
if not ret:
    print("Error: Could not read frame from the webcam.")
    break

timestamp, filename = store_picture_on_disk(frame)
print(f"Image captured and saved as {filename}")

Send the picture to Blob Storage

local_file_path = filename
blob_name = filename
with open(local_file_path, "rb") as data:
    container_client.upload_blob(name=blob_name, data=data)

Signal analysis program to review image using a message

# Prepare a JSON message
message = {
    'fileName': filename,
    'timestamp': timestamp,
}
message_json = json.dumps(message)

# Send the JSON message to RabbitMQ
channel.basic_publish(exchange='', routing_key=queue_name, body=message_json)
print(f"Message sent to RabbitMQ: {message_json}")

In the previous code sketches, we have not implemented several key functions. Let’s fill in those functions now. You’ll need to position these functions near the top of your script.

setup_blob_storage

Please use this link to learn about Azure Blob storage, account configuration, and Python code patterns.

container_name = "picturesblobstorage"

def setup_blob_storage():
    connect_str = "Get connection string for your Azure storage account"
    blob_service_client = BlobServiceClient.from_connection_string(connect_str)
    container_client = blob_service_client.get_container_client(container_name)
    return container_client

setup_rabbit_message_queue

Setup connection to message queue system.

def setup_rabbit_message_queue():
    rabbitmq_host = 'localhost'
    rabbitmq_port = 5672
    rabbitmq_username = 'guest'
    rabbitmq_password = 'guest'
    queue_name = 'review-picture-queue'

    # Initialize RabbitMQ connection and channel with authentication
    credentials = pika.PlainCredentials(rabbitmq_username, rabbitmq_password)
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host=rabbitmq_host, port=rabbitmq_port, credentials=credentials))
    channel = connection.channel()

    # Declare a queue for sending messages
    channel.queue_declare(queue=queue_name)
    return queue_name, connection, channel

To keep this blog post brief, I will not jump into all the details of setting up RabbitMQ on your local system. Please refer to this 10-minute video for details on setting up this sub-system.

This blog post does a great job of setting up RabbitMQ with “docker-compose.” It’s a lightweight way to set up services in your environment.

Easy RabbitMQ Deployment with Docker Compose (christian-schou.dk)
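
For reference, a minimal docker-compose sketch along those lines looks like this. The image tag and ports are the RabbitMQ defaults; swap in real credentials for anything beyond local testing.

version: "3"
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"     # AMQP port used by pika and amqplib
      - "15672:15672"   # management web UI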

store_picture_on_disk

def store_picture_on_disk(frame):
    timestamp = time.strftime("%Y%m%d%H%M%S")
    filename = f"image_{timestamp}.jpg"
    cv2.imwrite(filename, frame)
    return timestamp, filename

In our final blog post, we’ll use Node.js to load the COCO-SSD model into memory and let it comment on the image in question.

You can check out the code solution in progress at the following GitHub repository.

https://github.com/michaelprosario/birdWatcher

Check out object-detection.js to see how object detection will work. Check out watcher.py for a completed version of this tutorial.

If you want to learn more about TensorFlow.js and machine learning, our Orlando Google Developer Group is organizing a fun one-day community conference on Oct 14th.

Join us for DevFest Florida – Oct 14

AI | Mobile | Web | Cloud | Community

DevFest Central Florida is a community-run, one-day conference that brings technologists, developers, students, tech companies, and speakers together in one location to learn, discuss, and experiment with technology.

Make A Bird Detector with TensorFlowJs ( Part 1 )


In the Rosario tradition of boldly exploring nature, Sarah and my eldest have gotten into bird watching. It’s been cool to see my son and my wife going on hikes and finding cool birds with a local meetup. My wife gave me a challenge to make a bird watcher device for our yard and our bird house. In her vision, we want to understand when we see the most birds in the back yard and capture great photos. In future work, we might even identify the type of bird. In our post today, I thought we would talk through the high level code I’ve prototyped. This will become a fun family project and give me an opportunity to play with some TensorFlowJs.

In the past 12 years, the industry has exploded with innovations involving machine learning. We see these innovations when we ask our home assistant to play a song, use ChatGPT, or use speech-to-text. In the domain of bird watching, we might build a machine learning model using pictures of different birds with labels specifying the type of bird. A machine learning (ML) system observes patterns in the input data set (a set of bird pictures with labels) and constructs rules or structures so the system can classify future images. In contrast to traditional computer programming, we do not explicitly define the code or rules. We train the model using examples and feedback so it learns. In this case, we want to determine if a picture contains a bird.

In this prototype, I will leverage a pretrained ML model called COCO-SSD. The model finds 80 different classes of things in the context of the picture (including birds). The model estimates whether it detects a bird in the picture and reports a bounding box location for the object. The model makes a best attempt to segment the picture, report on all the objects it can see, and provide labels.

This diagram provides an overview of the prototype system.

Major elements

  • Watcher – In this project, a Python program takes pictures every 5 minutes. Pictures get stored to the file system. The file name of the picture gets stored in a message that eventually gets added to a queue.
  • RabbitMQ – We’re using RabbitMQ with JSON messages to manage our queue plumbing. You can think of RabbitMQ as email for computer programs. You can insert messages into different folders. Job processor programs start executing when they receive messages in these folders. This also enables us to create multi-program solutions in different languages.
  • Job Processor – The job processor, written in JavaScript using NodeJS, monitors the message queue for work. When it receives a file name to process, we load the image into memory and ask the machine learning process to review it. The COCO-SSD model reports a list of objects it detects with associated confidence scores. If the system finds a bird, the process writes a database record with the details.
  • Database – For this solution, we’re currently prototyping with Supabase. On many of my weekend projects, I enjoy getting to rapidly create structures and store data in the cloud. Under the hood, it uses PostgreSQL and feels pretty scalable. Thank you to my friend Javier who introduced me to this cool tool.

The job processor element uses TensorFlow.js to execute the object detection model. TensorFlow.js is a pretty amazing solution for executing ML models in the browser or in Node.js backends. Learn more with the following talk.

In our next post, we’ll dive into the details of the job processor process.

If you want to learn more about TensorFlow.js and machine learning, our Orlando Google Developer Group is organizing a fun one-day community conference on Oct 14th.

Join us for DevFest Florida – Oct 14

AI | Mobile | Web | Cloud | Community

DevFest Central Florida is a community-run, one-day conference that brings technologists, developers, students, tech companies, and speakers together in one location to learn, discuss, and experiment with technology.

Exploring Motivation

As a parent, I care deeply about how my kids own their habits of learning. In grade school, I received the gift of violin lessons from age five through high school. Looking back, my capacity for music making has become one of my most cherished life skills. Music is just fun! Music brings people together. In my faith life, music helps our community to pray. In short, I have internalized my motivation to explore music making. Beyond violin, I now play guitar, piano, and ukulele, sing, and do music recording.

If I’m honest with myself, I can remember the times that practicing violin felt like a chore. I can recall those times that I got in trouble because I did not practice enough. As an adult, I now have an appreciation for the gift of habits and can see the value that music-making has created in my life. When you start learning a complex piece of music, you have to deal with the emotions of feeling “overloaded.” With great mentorship from my parents and teachers, I learned the gift of taking things slow. We explored ways to chop up a piece of music into small phrases and gain competency. To move fast in music-making, you have to start slow and correct.

Reflecting on my music journey, I can see the joy, benefits and value of musicianship clearly now. The eight-year-old Michael did not have that kind of motivation. Over the next few months, I’m hoping to study some stuff around motivation so that I have additional tools to serve my kids in their lifelong journey. In this blog, we commit to the mission of helping students love learning through making, engineering, and exploration. It’s hard to keep my kids motivated on their projects and activities at times. I have a little one taking violin lessons now. It’s a joy to see her grow. And it can feel like a challenge getting her to practice regularly. Every parent has their version of this. How do I get my kids to eat their vegetables?

My wife and I will often tell each other that “they don’t come out saints.” In this phrase, we acknowledge that mentorship and parenting are hard. I also acknowledge that good parenting requires healthy habits from myself. (prayer, reflection, planning, etc.)

In this post, I reflect on the following Edutopia article about student motivation. I will probably do more of this over the next month.

To Increase Student Engagement, Focus on Motivation by Nina Parrish.

The post reflects on the idea that students tend to have more motivation to explore if they have the gift of autonomy. In my favorite book, “Invent to Learn,” the authors foster the idea that students should have the space to select meaningful, hands-on creative projects. If my kids really want to engineer a skateboard, I should try to cheer them on. In theory, I should also support them as they walk through the related math and construction skills. Secondly, students feel motivated when they feel like they’re gaining ground on their skills. (i.e. growing in mastery) For some kids, it’s hard to chop big projects into smaller stories or tasks. I probably should use my Scrum master skills to help them decompose problems more. That might be a fun experiment I can try soon. 🙂 Of course, kids feel motivated when they know that you care. That’s a good reminder.

To celebrate my wife a bit, I feel she did a great job inspiring motivation for our oldest son Peter. Peter has shown great curiosity around marine biology for years now. They also have tons of quality time bird watching together. As I write this blog post, we’re picking up Peter from a cool marine science camp down in the Florida Keys. (Pigeon Key) I appreciate my wife for finding this amazing camp that empowered Peter to explore his natural curiosity. Picking him up, he’s on fire and excited about all the science and creatures he’s explored this week. Go Dr. Rosario!! I’m thankful that he had this life-changing experience.

  • As an honest student of becoming a better parent, I would ask for your ideas on inspiring motivation.
  • What did a teacher or mentor do to inspire you?
  • What has worked for your kids? Very open to your inspirations.

Have a great day!

Gitpod: Cloud Dev Environment That Saves You Time


In the good old mainframe days, professionals may have used a “dumb terminal.” This terminal had enough power to handle input and output tasks with a user, but the deeper magic happened on a more powerful mainframe computer. In 2023, student makers may have a Chromebook, a great inexpensive laptop for academic computing. In common cases, it’s hard to do larger dev projects on the laptop alone due to limited speed and capacity. Over the past six months, I have enjoyed using Gitpod.io, a cloud-based code editor and development environment that empowers devs with high performance, isolation, and security. With Gitpod, a “dumb” Chromebook workstation becomes a robust dev machine for web development and data science learning.

Gitpod.io is a powerful online Integrated Development Environment (IDE) that allows developers to write, test, and deploy code without the need for local installations of software. Gitpod.io is built on top of Git and leverages the power of Docker containers to provide a lightweight and fast environment for developers to work in. For makers familiar with Visual Studio Code, you’ll find the Gitpod experience very inviting since the tool builds upon the user experience of VSCode. I have used many of my favorite VSCode extensions for .NET, Node.js, Azure, and Python with Gitpod.

You can start a new workspace with just a few clicks, and it automatically clones your repository, installs dependencies, and sets up your environment. This means you can start coding right away without having to worry about configuring your development environment. When teaching new skills to developers, this benefit becomes very helpful to mentors or workshop organizers.
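
That automation is driven by a .gitpod.yml file at the root of your repository. Here’s a minimal sketch for a hypothetical Node.js project; the task commands and port are only examples.

tasks:
  - init: npm install     # runs once when the workspace is first built
    command: npm run dev  # runs each time the workspace starts
ports:
  - port: 3000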

Gitpod.io also provides a range of features to make the development process more efficient. For example, it has built-in support for code completion, debugging, and code reviews, as well as a terminal that allows you to run commands directly from your workspace. In the past week, I have focused on learning new Python data science environments. (PyTorch) Using an easy GitHub template, I had a fast, web-based Python environment running quickly. I also appreciated that the Python notebooks worked well inside of VSCode.

https://databaseline.tech/zoose-3.0/

Gitpod provides a generous free tier to help you get started. If your software team needs more time on the platform, they offer reasonable paid plans. I hope you consider checking out gitpod.io for your next web dev or data science project. In many situations, having access to a high-performance coding environment through a browser helps the flow of your creative project.

To learn more about the origins of this cool tool, check out this podcast with the founders of Gitpod. Their CTO, Chris Weichel, does a good job talking through the benefits of Gitpod for professional software teams and saving pro devs time.
Chris Weichel talks about GitPod time saving in the enterprise

Make music with code using DotNet Core

Curious about making music with code? As a software engineer and music guy, I have enjoyed seeing the connections between music and computers. The first computer programmer, Ada Lovelace, predicted that computers would move beyond doing boring math problems into the world of creative arts. If a problem can be converted to a system of symbols, she reasoned that computers could help. She used music as her example.

In previous experiments, I have explored the ideas of code and music using TypeScript, NodeJs, and Angular. You can find this work here.

After looking around GitHub, I found a really cool music library for C# devs. I’m hoping to use it to create tools that make quick backing tracks for practicing improv. It’s just fun to explore electronic music, theory, and computational music. Make sure to check out the blog post by Maxim (the author of DryWetMIDI). It’s a pretty comprehensive guide to his library.

What is MIDI?

MIDI stands for Musical Instrument Digital Interface. With a file format like WAV or MP3, the computer stores the raw waveform data of the sound. The MIDI file format and protocols operate at a conceptual layer of music data instead. You can think of a MIDI file as having many tracks. You can assign different instruments (sounds) to tracks. In each track, the musician records the song as many events. MIDI music events might include turning a note on, turning a note off, engaging the sustain pedal, and changing tempo. MIDI music software like GarageBand, Cakewalk, and BandLab can send the MIDI event data to a software synth, which interprets the events into sound. In general, the MIDI event paradigm can be extended to support other things like lighting, lyrics, and more.
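
To make this concrete, here’s a small sketch that writes a few notes to a MIDI file with DryWetMIDI’s PatternBuilder. I’m writing these API calls from memory of Maxim’s guide, so double-check the names against the official docs.

using Melanchall.DryWetMidi.Composing;
using Melanchall.DryWetMidi.Core;
using Melanchall.DryWetMidi.Interaction;
using Melanchall.DryWetMidi.MusicTheory;

// Describe a short A minor arpeggio as quarter notes
var pattern = new PatternBuilder()
    .SetNoteLength(MusicalTimeSpan.Quarter)
    .Note(Octave.Get(3).A)
    .Note(Octave.Get(4).C)
    .Note(Octave.Get(4).E)
    .Build();

// Render the pattern to a standard MIDI file at the default tempo
var midiFile = pattern.ToFile(TempoMap.Default);
midiFile.Write("arpeggio.mid", true);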

DryWetMidi Features

  • Writing MIDI files: For my experiments, I have used DryWetMIDI to explore projects for making drum machines and arpeggio makers. I’m really curious about using computers to generate the skeleton of songs. Can computers generate a template for a pop song, a fiddle tune, or a ballad? We’re about to find out! DryWetMIDI provides a lower-level API for raw MIDI event data. The higher-level “Pattern” and “PatternBuilder” APIs enable coders to express a single thread of musical ideas. Let’s say you’re trying to describe a piece for a string quartet. The “PatternBuilder” API enables you to use a fluent syntax to describe the notes played by the cello player. While playing with this API, I have to say that I loved the ability to combine musical patterns. The framework can stack or combine musical patterns into a single pattern. Let’s say you have three violin parts in 3 patterns. The library enables you to blend those patterns into a single idea with one line of code. Maxim showed great care in designing these APIs.
  • Music theory tools: The framework provides good concepts for working with notes, intervals, chords, and other fundamental concepts of music.
  • Reading MIDI files: The early examples show that DryWetMIDI can read MIDI files well. I’ve seen some utility functions that enable you to dump MIDI files to CSV to support debugging. The documentation hints at a chord extraction API that looks really cool. Looking forward to testing this.
  • Device interaction: DryWetMIDI enables makers to send and receive MIDI events. This capability might become helpful if you’re making a music tutor app. You can use the device interaction API to watch note events. The system can provide feedback to the player if they’re playing the right notes at the appropriate time.

Visions for MusicMaker.NET for .NET Core

In the following code example, I’ve built an API to describe drum patterns using strings. The strings represent sound at a resolution of 16th notes. Using the “MakeDrumTrack” service, we can quickly express patterns of percussion.

IMidiServices midiServices = new MidiServices();
var service = new MakeDrumTrackService(midiServices);
var command = new MakeDrumTrackCommand
{
    BeatsPerMinute = 50,
    FileName = fileName,
    Tracks = new List<DrumTrackRow>
    {
        new()
        {
            Pattern = "x-x-|x-x-|x-x-|x-x-|x-x-|x-x-|x-x-|x-x-|",
            InstrumentNumber = DrumConstants.HiHat
        },
        new()
        {
            Pattern = "x---|----|x---|----|x---|----|x---|----|",
            InstrumentNumber = DrumConstants.AcousticBassDrum
        },
        new()
        {
            Pattern = "----|x---|----|x--x|----|x---|----|x--x|",
            InstrumentNumber = DrumConstants.AcousticSnare
        },
        new()
        {
            Pattern = "-x-x|x-x-|-x-x|x--x|-xx-|xx--|-xx-|x--x|",
            InstrumentNumber = DrumConstants.HiBongo
        }
    },
    UserId = "system"
};

// act
var response = service.MakeDrumTrack(command);

Using the ArpeggioPlayer service, we’ll be able to render a small fragment of music given a list of chords and an arpeggio spec.

var tempo = 180;
var instrument = (byte)Instruments.AcousticGrandPiano;
var channel = 1;

var track = new ChordPlayerTrack(instrument, channel, tempo);
var command = ArpeggioPatternCommandFactory.MakeArpeggioPatternCommand1();
var player = new ArpeggioPlayer(track, command);
var chordChanges = GetChords1();  // Am | G | F | E

player.PlayFromChordChanges(chordChanges);

// Write MIDI file with DryWetMIDI
var midiFile = new MidiFile();
midiFile.Chunks.Add(track.MakeTrackChunk());
midiFile.Write("arp1.mid", true);

In the following method, the maker can describe the arpeggio patterns using ASCII-art strings. The arpeggio patterns operate at a resolution of sixteenth notes. This works fine for most pop or electronic music. In future work, we can build web apps or mobile UX to enable the user to design the arpeggio patterns or drum patterns.

public static MakeArpeggioPatternCommand MakeArpeggioPatternCommand1()
{
    var command = new MakeArpeggioPatternCommand
    {
        Pattern = new ArpeggioPattern
        {
            Rows = new List<ArpeggioPatternRow>
            {
                new() { Type = ArpeggioPatternRowType.Fifth, Octave = 2, Pattern = "----|----|----|---s|" },
                new() { Type = ArpeggioPatternRowType.Third, Octave = 2, Pattern = "----|--s-|s---|s---|" },
                new() { Type = ArpeggioPatternRowType.Root, Octave = 2, Pattern =  "---s|-s-s|---s|-s--|" },
                new() { Type = ArpeggioPatternRowType.Fifth, Octave = 1, Pattern = "--s-|s---|--s-|--s-|" },
                new() { Type = ArpeggioPatternRowType.Third, Octave = 1, Pattern = "-s--|----|-s--|----|" },
                new() { Type = ArpeggioPatternRowType.Root, Octave = 1, Pattern =  "s---|----|s---|----|" }
            },
            InstrumentNumber = Instruments.Banjo
        },
        UserId = "mrosario",
        BeatsPerMinute = 120,
        Channel = 0
    };
    return command;
}

The previous code sample writes out a music fragment like the following.

If you’re interested in following my work here, check out the following repo.

Getting Started with PhaserJs and TypeScript

Curious about building 2D games with web skills? In this post, we’ll explore tools and patterns to use PhaserJs to make engaging 2D games. We’ll cover tools to make experiences with our favorite language: TypeScript.


Easy 3D scanning tools for iOS in 2022

The task of making 3D models for games can feel daunting. In 2022, we have many tools for rapidly creating 3D models using scanning methods. I’m amazed how this robust computer science and computer vision technology has become accessible to makers and creatives. Let’s say you need to create a 3D model of a statue and 3D print a copy. In our post today, I wanted to connect our readers to a wonderful app called Trnio and a few others. For iPhone and iPad users with ARKit, makers can create impressive 3D models by recording a scan of their target objects or capturing pictures. The following video outlines the process for Trnio.

Under the hood, 3D scanning works by exploring each frame and computing the estimated camera position of the device. Using the camera position and feature points extracted from the frame, the system can analyze the movement of feature points over time. Using algorithms that extract 3D structure from motion, the app can estimate a model of the 3D object. Really cool stuff.

When testing this application with my kitchen table and a few car parts, I found the app easy to use, with notable results. You can inspect some of the results of scans on Sketchfab.

https://sketchfab.com/trnio

More recent iOS devices include LIDAR scanners. The LIDAR sensor provides more robust depth information to the algorithms, increasing 3D model quality. Fernando Herrera does a nice review of a few other scanning options that leverage LIDAR. He mentioned that the LIDAR scans worked best on large structures. I appreciated his comments on Qlone, which focuses on scanning smaller items using a QR code template. The reviews looked a bit mixed on the app stores, though.

We love to hear from our readers. If there’s another tool that you love for 3D scanning, please share it in the comments. If you make something cool, please share that with us too!!

Related apps:
TRNIO

Quick start for Phaser 3 and TypeScript

As I find time for small project experiments, I decided to explore new developments with Phaser JS. In general, Phaser JS seems like a fun framework that enables novice game makers to build fun 2D games. The JavaScript language has become a popular choice since the language exists in every web browser.

What can you build with Phaser 3? Check out some examples here.
Tetris
Robowhale
Fun Math Game

In this blog post, we walk through setting up the environment for building a small space shooter game.

As you start Phaser JS development, many tutorials walk you through a process to set up a web server to serve HTML and JavaScript content. Unfortunately, plain JavaScript alone does not guide makers to create well-formed code. In plain JavaScript, coders need to create things in baby steps. In this process, you should test things at each step. If you use a tool like Visual Studio Code, the tool provides awesome guidance and autocomplete for devs. It would be nice if tools could improve to help you find more code faults and common syntax mistakes.

The TypeScript language, invented by Anders Hejlsberg, comes to the rescue. The TypeScript language and related coding tools provide robust feedback to the coder while constructing code. Plain JavaScript does not fully support ideas like statically typed class structures or interfaces. Classes enable makers to describe a consistent template for making objects, their related methods, and properties. In a similar way, interfaces enable coders to describe the properties and methods connected to an object, but do not define implementations of those methods. It turns out these ideas provide increased structure and guidance that helps professional developers create large applications on top of JavaScript. When your tools help you find mistakes faster, you feel like you move faster. This provides great support for early-stage devs. TypeScript borrows patterns and ideas from C#, another popular language for game developers and business developers.
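
As a tiny illustration of those two ideas in TypeScript (the names here are invented for the example):

// An interface describes the shape of an object without an implementation
interface Enemy {
  health: number;
  takeDamage(amount: number): void;
}

// A class is a reusable template that implements that shape
class Asteroid implements Enemy {
  health = 3;

  takeDamage(amount: number): void {
    this.health -= amount;
  }
}

const rock: Enemy = new Asteroid();
rock.takeDamage(1); // the compiler checks this call against the interface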

I found a pretty nice starter kit that integrates TypeScript, a working web server, and Phaser 3 JS together. Here’s the general steps for setting up your Phaser 3 development environment.

Install Visual Studio Code

Install NodeJs and NPM

  • NodeJs enables coders to create JavaScript tools outside of the browser.
  • npm – When you build software in modern times, the tools that you build will depend upon other lego blocks. Those lego blocks may depend upon others. The node package manager makes it easy to install NodeJs tools and their related dependencies.
  • Use the following blog post to install NodeJs and NPM
  • Installing NodeJs and NPM from kinsta.com

Install Yarn

  • Install Yarn
  • Yarn is a package manager that provides more project organization tools.

Download Phaser 3+TypeScript repository

On my environment, I have unzipped the files to /home/michaelprosario/phaser3-rollup-typescript-master.

Finish the setup and run

cd /home/michaelprosario/phaser3-rollup-typescript-master
yarn install
yarn dev

At this point, you should see that the system has started a web server using vite. Open your browser to http://localhost:3000. You should see a bouncing Phaser logo.

Open up Visual Studio Code and start hacking

  • Type CTRL+C in the terminal to stop the web server.
  • In the terminal, type ‘code .’ to load Visual Studio Code for the current folder.
  • Once Visual Studio Code loads, select “Terminal > New Terminal”
  • In the terminal, execute ‘yarn dev’
  • This will run your development web server and provide feedback to the coder on syntax errors every time a file gets saved.
  • If everything compiles, the web server serves your game at http://localhost:3000

TypeScript Sample Code

Open src/scenes/Game.ts using Visual Studio Code. If you’ve done Java or some C#, the code style should feel more familiar.

import Phaser from 'phaser';

// Creates a scene called demo as a class
export default class Demo extends Phaser.Scene {
  constructor() {
    super('GameScene');
  }

  preload() {
    // preload image asset into memory
    this.load.image('logo', 'assets/phaser3-logo.png');
  }

  create() {
    // add image to scene
    const logo = this.add.image(400, 70, 'logo');
    // bounce the logo using a tween
    this.tweens.add({
      targets: logo,
      y: 350,
      duration: 1500,
      ease: 'Sine.inOut',
      yoyo: true,
      repeat: -1
    });
  }
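
For context, the starter kit wires this scene into the game through a config object roughly like the following; the exact file in the template may differ slightly.

import Phaser from 'phaser';
import Demo from './scenes/Game';

// Register the scene and start the game
const config: Phaser.Types.Core.GameConfig = {
  type: Phaser.AUTO, // WebGL with a canvas fallback
  width: 800,
  height: 600,
  scene: [Demo]
};

export default new Phaser.Game(config);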
}