Bird Watching With Python and TensorFlowJS ( Part 3 )

In this series, we continue building a small system to capture pictures of my back yard and detect if we see anything interesting. In the future, we want to search the database for birds. This post focuses on the problem of detecting objects in each image and storing the findings in a database. Check out part 1 and part 2 for more context on this project.

TensorFlow.js is an open-source JavaScript library developed by Google’s TensorFlow team. It enables machine learning and deep learning tasks to be performed directly in web browsers and Node.js environments using JavaScript or TypeScript. TensorFlow.js brings the power of TensorFlow, a popular machine learning framework, to the JavaScript ecosystem, making it accessible for web developers and data scientists.

Under the TensorFlow.js framework, you have access to the COCO-SSD model, which detects 80 classes of common objects. The output reports a list of objects found in the image, a confidence score, and a bounding box for each object. Check out this video for an example.

In the following code, we import our dependencies. This includes
– @tensorflow/tfjs-node – TensorFlow.js for Node.js
– @tensorflow-models/coco-ssd – TensorFlow model for common object detection
– amqplib – A library for connecting to RabbitMQ
– @supabase/supabase-js – To log details of the objects found, we send our data to Supabase
– @azure/storage-blob – A client library for downloading pictures from Azure Blob Storage

const tf = require("@tensorflow/tfjs-node");
const amqp = require('amqplib');
const cocosSSd = require("@tensorflow-models/coco-ssd");
const { createCanvas, loadImage } = require('canvas');
const { createClient } = require('@supabase/supabase-js');
const { BlobServiceClient } = require("@azure/storage-blob");
const { v1: uuidv1 } = require("uuid");
const fs = require('fs');

My friend Javier got me excited about trying out Supabase. If you’re looking for a simple document or relational database solution with an easy API, it’s pretty cool. This code grabs some details from the environment and sets up a Supabase client.

const supabaseUrl = process.env.SUPABASEURL;
const supabaseKey = process.env.SUPABASEKEY;
const supabase = createClient(supabaseUrl, supabaseKey)

To learn more about Supabase, check out the official Supabase documentation.

In our situation, the job-processor program and the watcher program will probably run on two different machines. I will try to run the watcher process on a Raspberry Pi; the job processor will probably run on some other machine. The watcher program takes pictures and stores the files in Microsoft Azure blob storage, then signals the job processor by sending a message through RabbitMQ.

Let’s set up the connection to Azure Blob storage.


const AZURE_BLOB_STORAGE_CONNECTION_STRING = process.env.AZURE_BLOB_STORAGE_CONNECTION_STRING;
if (!AZURE_BLOB_STORAGE_CONNECTION_STRING) {
  throw Error('Azure Storage Connection string not found');
}

const containerName = "picturesblobstorage";
const blobServiceClient = BlobServiceClient.fromConnectionString(AZURE_BLOB_STORAGE_CONNECTION_STRING);
const containerClient = blobServiceClient.getContainerClient(containerName);

When we want to download a file from Azure blob storage, we leverage our container client.

async function downloadPictureFromBlobStorage(fileName) {
  try {
    const blobClient = containerClient.getBlobClient(fileName);
    console.log(`Downloading blob ${fileName}`);
    const downloadBlockBlobResponse = await blobClient.downloadToFile(fileName);
    console.log(`Downloaded ${downloadBlockBlobResponse.contentLength} bytes`);
    return true;
  } catch (err) {
    console.log(err);
    return false;
  }
}
Let’s set up our class for getting insight from our object detection model. In the following class, the “makeCanvasFromFilePath” method loads the picture into memory as a canvas. Using the COCO-SSD model, we detect objects in the image using the predict method.

class ObjectDetection {
    constructor() {
        this.model = null;
    }

    async predict(image) {
        if (!this.model) {
            this.model = await cocosSSd.load();
        }

        const canvas = await this.makeCanvasFromFilePath(image);
        const predictions = await this.model.detect(canvas);

        return { predictions: predictions };
    }

    async makeCanvasFromFilePath(image) {
        const img = await loadImage(image);
        const canvas = createCanvas(img.width, img.height);
        const ctx = canvas.getContext('2d');
        ctx.drawImage(img, 0, 0);
        return canvas;
    }
}

const objectDetection = new ObjectDetection();

Let’s configure RabbitMQ

// RabbitMQ connection URL
const rabbitmqUrl = 'amqp://localhost';

// Queue name to consume messages from
const queueName = 'review-picture-queue';

The “processJsonMessage” function is the heart of this Node.js script. At a high level, the system does the following tasks.
– Read a JSON message from the watcher program.
– Download the picture from Azure blob storage.
– Run object detection on the file.
– Store findings in the database (Supabase).

// Create a function to process JSON messages
async function processJsonMessage(message) {
  try {
    const json = JSON.parse(message.content.toString());
    console.log('Received JSON:', json);

    // Download the file from blob storage
    const fileDownloaded = await downloadPictureFromBlobStorage(json.fileName);
    if (fileDownloaded) {
      // Run TF prediction ...
      const response = await objectDetection.predict(json.fileName);

      // Store data in Supabase ....
      const { error } = await supabase.from('watch_log').insert({ file_name: json.fileName, json: response });
      if (error) {
        console.log('Error inserting record:', error);
      }
    } else {
      console.log("Error downloading file from blob storage");
    }
  } catch (error) {
    console.error('Error processing JSON message:', error.message);
  }
}

Here’s some sample data captured as JSON:

{
  "predictions": [
    {
      "bbox": [ ... ],
      "class": "person",
      "score": 0.6217759847640991
    }
  ]
}
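In the future, we want to query this log for birds. As a rough sketch (a hypothetical helper, not part of the project code yet), filtering a stored prediction response for confident bird sightings could look like this in Python:

```python
import json

def find_birds(response_json, min_score=0.5):
    # Parse the stored JSON response and keep only confident "bird" predictions
    response = json.loads(response_json)
    return [p for p in response.get("predictions", [])
            if p["class"] == "bird" and p["score"] >= min_score]

sample = '{"predictions": [{"class": "person", "score": 0.62}, {"class": "bird", "score": 0.81}]}'
print(find_birds(sample))
```

The same filter could run as a query against the Supabase table once the JSON is stored.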

In this last section, we connect ourselves to RabbitMQ so that we can start to accept work.

// Connect to RabbitMQ and consume messages
async function consume() {
  try {
    const connection = await amqp.connect(rabbitmqUrl);
    const channel = await connection.createChannel();
    await channel.assertQueue(queueName, { durable: false });
    console.log(`Waiting for messages in ${queueName}. To exit, press Ctrl+C`);
    channel.consume(queueName, (message) => {
      if (message !== null) {
        processJsonMessage(message);
        channel.ack(message);
      }
    });
  } catch (error) {
    console.error('Error:', error.message);
  }
}

consume();

That’s about it. If you need to see the completed project files, check out the following GitHub link:

If you’re interested in exploring more tutorials on TensorFlowJs, check out the following links to code labs:
TensorFlowJs Code Labs

If you want to learn more about TensorFlowJS and machine learning, our Orlando Google Developer Group is organizing a fun one-day community conference on Oct 14th.

Join us for DevFest Florida – Oct 14

AI | Mobile | Web | Cloud | Community

DevFest Central Florida is a community-run one-day conference aimed to bring technologists, developers, students, tech companies, and speakers together in one location to learn, discuss and experiment with technology.

Bird Watching With Python and TensorFlowJS ( Part 2 )

In this series, we continue building a small system to capture pictures of my back yard and detect if we see birds. Check out the first post here for the system overview. In this post, we focus on the problem of taking pictures every minute or so. For fun, I decided to build this part in Python.

The solution for the watcher involves the following major elements and concepts.
– Set up a connection to Azure Blob Storage. To keep things simple, Azure Blob Storage enables you to securely store files in the Microsoft Azure cloud at low cost.
– Set the time interval for taking pictures.
– Set up a connection to the message queue system. The watcher program needs to send a message to an analysis program that will review the image content. Keep in mind that RabbitMQ is simply “email for computer programs”: a way for programs to message each other to do work. I will be running the watcher program on a pretty low-powered Raspberry Pi 2, so I wanted to off-load the image analysis to another computer system with a bit more horsepower. In future work, we might move the analysis program to a cloud function, but that’s a topic for a future post.

Here’s some pseudo code.
– Set up the program to take pictures
– Loop:
  – Take a picture
  – Store the picture on disk
  – Upload the picture to Azure blob storage
  – Signal the analysis program to review the picture
  – Delete the local copy of the picture
  – Wait until we need to take the next picture

Setting the stage

Let’s start by sketching out the functions for setting up the blob storage, rabbit message queue, and camera.
At the top of the python file, we need to import the following:

import cv2
import time
import pika
import json
import os
from azure.storage.blob import BlobServiceClient

In the following code, we set up the major players: blob storage, the Rabbit message queue, and the camera.

container_client = setup_blob_storage()

# Set the time interval in seconds
interval = 60  # every minute

# Initialize the webcam
cap = cv2.VideoCapture(0)

# Check if the webcam is opened successfully
if not cap.isOpened():
    print("Error: Could not open the webcam.")
    exit()

queue_name, connection, channel = setup_rabbit_message_queue()

Take a picture

Later in the program, we start a loop to take a picture and send the data to the analysis program.

ret, frame = cap.read()
if not ret:
    print("Error: Could not read frame from the webcam.")

timestamp, filename = store_picture_on_disk(frame)
print(f"Image captured and saved as {filename}")

Send the picture to Blob Storage

local_file_path = filename
blob_name = filename

with open(local_file_path, "rb") as data:
    container_client.upload_blob(name=blob_name, data=data)

Signal analysis program to review image using a message

# Prepare a JSON message
message = {
    'fileName': filename,
    'timestamp': timestamp,
}
message_json = json.dumps(message)

# Send the JSON message to RabbitMQ
channel.basic_publish(exchange='', routing_key=queue_name, body=message_json)
print(f"Message sent to RabbitMQ: {message_json}")
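The message body is plain JSON, which is what lets the Python watcher and the Node.js job processor interoperate. Here’s a quick sketch of the round trip (the file name is just an illustrative value):

```python
import json

# The watcher serializes the message ...
message = {'fileName': 'image_20230101120000.jpg', 'timestamp': '20230101120000'}
message_json = json.dumps(message)

# ... and the job processor on the other side of the queue parses it back.
parsed = json.loads(message_json)
print(parsed['fileName'])
```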

In the previous code sketches, we have not implemented several key functions. Let’s fill in those functions now. You’ll need to position these functions near the top of your script.


Please use this link to learn about Azure Blob storage, account configuration, and Python code patterns.

container_name = "picturesblobstorage"

def setup_blob_storage():
    connect_str = "Get connection string for your Azure storage account"
    blob_service_client = BlobServiceClient.from_connection_string(connect_str)
    container_client = blob_service_client.get_container_client(container_name)
    return container_client


Setup connection to message queue system.

def setup_rabbit_message_queue():
    rabbitmq_host = 'localhost'
    rabbitmq_port = 5672
    rabbitmq_username = 'guest'
    rabbitmq_password = 'guest'
    queue_name = 'review-picture-queue'

    # Initialize RabbitMQ connection and channel with authentication
    credentials = pika.PlainCredentials(rabbitmq_username, rabbitmq_password)
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host=rabbitmq_host,
                                  port=rabbitmq_port,
                                  credentials=credentials))
    channel = connection.channel()

    # Declare a queue for sending messages
    channel.queue_declare(queue=queue_name)
    return queue_name, connection, channel

To keep this blog post brief, I won’t jump into all the details of setting up RabbitMQ on your local system. Please refer to this 10-minute video for details on setting up this sub-system.

This blog post does a great job of setting up RabbitMQ with docker-compose. It’s a lightweight way to set up services in your environment.

Easy RabbitMQ Deployment with Docker Compose


def store_picture_on_disk(frame):
    timestamp = time.strftime("%Y%m%d%H%M%S")
    filename = f"image_{timestamp}.jpg"
    cv2.imwrite(filename, frame)
    return timestamp, filename
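The timestamp doubles as a sortable ID inside the file name. Here’s a quick stdlib-only check of that naming scheme (a hypothetical make_picture_name helper that mirrors store_picture_on_disk without the cv2.imwrite call):

```python
import re
import time

def make_picture_name():
    # Same naming scheme as store_picture_on_disk, minus the cv2 write
    timestamp = time.strftime("%Y%m%d%H%M%S")
    return timestamp, f"image_{timestamp}.jpg"

timestamp, filename = make_picture_name()
# File names sort chronologically because the timestamp is zero-padded
assert re.fullmatch(r"image_\d{14}\.jpg", filename)
print(filename)
```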

In our final blog post, we’ll use Node.js to load the COCO-SSD model into memory and let it comment upon the image in question.

You can check out the code solution in progress at the following GitHub repository.

Check out object-detection.js to see how object detection will work, and see the repository for a completed version of this tutorial.


Make A Bird Detector with TensorFlowJs ( Part 1 )

In the Rosario tradition of boldly exploring nature, Sarah and my eldest have gotten into bird watching. It’s been cool to see my son and my wife going on hikes and finding cool birds with a local meetup. My wife gave me a challenge to make a bird watcher device for our yard and our bird house. In her vision, we want to understand when we see the most birds in the back yard and capture great photos. In future work, we might even identify the type of bird. In our post today, I thought we would talk through the high level code I’ve prototyped. This will become a fun family project and give me an opportunity to play with some TensorFlowJs.

In the past 12 years, the industry has exploded with innovations involving machine learning. We see these innovations when we ask our home assistant to play a song, use ChatGPT, or use speech-to-text. In the domain of bird watching, we might build a machine learning model using pictures of different birds with labels specifying the type of bird. A machine learning (ML) system observes patterns in the input data set (bird pictures with labels) and constructs rules or structures so the system can classify future images. In contrast to traditional computer programming, we do not explicitly define the code or rules. We train the model using examples and feedback so it learns. In this case, we want to determine if a picture contains a bird.

In this prototype, I will leverage a pretrained ML model called COCO-SSD. The model finds 80 different classes of things in the context of the picture (including birds). The model will estimate whether it detects a bird in the picture and give a bounding-box location for the object. The model makes a best attempt to segment the picture, report on all the objects it can see, and provide labels.

This diagram provides an overview of the prototype system.

Major elements

  • Watcher – In this project, Python takes pictures every 5 minutes. Pictures get stored to the file system. The file name of the picture gets stored in a message that eventually gets added to a queue.
  • RabbitMQ – We’re using RabbitMQ with JSON messages to manage our queue plumbing. You can think of RabbitMQ as email for computer programs. You can insert messages into different folders. Job processor programs start executing when they receive messages in these folders. This also enables us to create multi-program solutions in different languages.
  • Job Processor – The job processor, written in JavaScript using NodeJS, monitors the message queue for work. When it receives a file name to process, we load the image into memory and request the machine learning process to review it. The COCO-SSD model will report a list of objects it detects with confidence factors associated. If the system finds a bird, the process will write a database record with the details.
  • Database – For this solution, we’re currently prototyping the solution using Supabase. On many of my weekend projects, I enjoy getting to rapidly create structures and store data in the cloud. Under the hood, it uses PostgresDB and feels pretty scalable. Thank you to my friend Javier who introduced me to this cool tool.

The job processor element uses TensorFlowJS to execute the object detection model. TensorFlowJs is a pretty amazing solution for executing ML models in the browser or NodeJS backends. Learn more with the following talk.

In our next post, we’ll dive into the details of the job processor process.


Make Unity 3D Games To Amaze Your Friends!

Hello makers! Like many in the computer industry, I had the dream of learning how to build video games. When the math class seemed difficult, I found inspiration to move forward since I had strong motivation to learn how to build video games someday! Unity 3D and their amazing community of game creators have created powerful opportunities for curious makers to build games that amaze your friends. From my first encounters with Unity 3D, I felt that they have done a good job of educating their users. In the past few years, I greatly admire the new strategies they have created to engage learners in their tools.

The idea of “modding” has engaged generations of gamers (thank you, Minecraft and Roblox!). We’ve become used to the idea that games set up a robust environment where you can build big and crazy things. In lots of games, you’re placed into a position of saving the world (i.e., you’ve been given a motivation to do something bigger than yourself that’s fun). The Unity 3D “microgame” tutorials provide students with the basic shell of well-crafted game experiences. In this context, the Unity 3D team has created tutorial experiences to gently guide learners through the Unity 3D environment, programming concepts, and their system for building Unity “lego” blocks. In this experience, you get to select your adventure. Do you want to build your own Lego game? Do you want to make your own version of Super Mario Bros.? You can challenge yourself by building a cool kart racing game. In the videos below, I wanted to give a shout out to the Lego action “game jam” and the Kart Racing tutorials.

I always enjoy learning new Unity tricks from other developers. It has been fun to pick apart aspects of these games. In the newest Kart racing tutorials, you can also learn about the newer machine learning capabilities of Unity 3D. ( ML Agents ) It kind of blows my mind that these ideas can now appear in tutorials for early stage coders. As I’ve tested these experiences with my kids, they have enjoyed creating novel kart racing experiences and environments. My older son has enjoyed customizing his own shooter game.

Make sure to check out Unity 3D’s Learning index here:

If you make something cool, please share a link below in the comments!

Your First Game Jam: LEGO Ideas Edition

In this edition, you will discover how to build a quest in your LEGO® Microgame using the newly released “Speak” and “Counter” LEGO® Behaviour Bricks. Learn step-by-step with a special guest from the LEGO® Games division and our Unity team to create your own unique, shareable game.

Build Your Own Karting Microgame

It’s never been easier to start creating with Unity. From download to Microgame selection, to modding, playing, and sharing your first playable game, this video shows you what you can accomplish in as little as 30 minutes!

For detailed step-by-step Unity tutorials, check out

The Official Guide to Your First Day in Unity playlist.

Related Posts

How Mom Sparked a Growth Mindset in Our Families

Hello, family. In my post today, I wanted to reflect upon how our mom has loved and inspired her family through her life. I hope these stories of my mother, Belen Rosario, might offer motivation to other families. At InspiredToEducate.NET, our mission is to help students love learning through creative projects and exploration. Writing this post helped me understand my root system and personal curiosity for the power of learning and how a learning mindset can grow communities.

In my own way, I hope this post helps me and my family meditate on the life of my mother Belen. On Nov 20th, 2020, my mother celebrated her birthday into heaven after a challenging battle with cancer. I’m so excited that she’s finally at peace. I praise God that she enjoys the light and comfort of our heavenly Father and the amazing jam session of praise with all the saints and angels.

My brother and I have been so blessed to have a loving mom and dad. Let’s be real: in the Rosario home, we’re not unlike any other family. We have our imperfections and vices. I, however, feel that my mom, Belen, lived out some of the best qualities of a Catholic momma. I hope I can foster her legacy of being a good Catholic parent.

My mom encouraged a spirit of generosity: Belen was born on Christmas day in 1945 to a loving family of teachers in the Philippines. Belen actually means Bethlehem in Spanish. My grandfather Pedro taught her family about the enabling power of learning. Grandpa Pedro had the vision of enabling his daughters and son to live a thriving life, but lacked the financial means to provide university education to all of his children. According to the family stories, Momma worked very hard in her schooling to explore the sciences and eventually earned a B.S. degree at University of Santo Tomas in Manilla in the Philippines. As she grew as a professional, she would live a modest life and send money back to her family. As a young adult, she earned an opportunity to immigrate to the United States which strongly needed medical technologists skilled in chemistry. She knew that making a transition to the US would take her away from her family in the Philippines, but knew that it would create greater opportunities for her and her larger family. My mom and my wonderful father Moses met while they both worked in a medical lab in New York. As I recall my mom’s words, she says “I’m not sure why this young guy kept following me around.” They, however, fell in love and started their family together. My mom and dad have always encouraged a spirit of generosity. They sent money back to the Philippines to help fund the education of her siblings. We’re proud of our momma who helped her family members earn degrees in engineering, medicine and finance. Mom’s story captures the best of the American dream. She came to the US with a spunky drive and educational opportunities. She converted those assets into a beautiful life for herself and opportunities for her family. I have loved hearing stories of how mom and dad helped Tita Gloria and Tito Ernie jumpstart their marriage and life in the US. 
While we didn’t have a lot, my mom and dad have lived out the “go giver” attitude to help friends in need. #ProudOfMom #ProudOfDad

Keeping the faith: One of the most precious gifts that mom and dad gave to us was the gift of faith. My mom and dad made great financial sacrifices to make sure Francis and I had the best in Catholic education. I also had the opportunity to attend Jesuit High school in Tampa, one of the finest Catholic schools in town. Given that we grew up in Florida, we grew up with the legends and stories of watching the space program. (from Apollo and to the Shuttle program) I can recall fun stories of my dad leading us through fun slide shows of exploring space. From my mom’s side, she did a great job of encouraging our curiosity in science. If we wanted to learn about something, we had a cool encyclopedia and tons of other educational materials so that we could explore our curiosity. Many people put science and faith into different boxes. We were blessed with a family that encouraged the wonder of science and understood that God’s hand orchestrated every detail. While we weren’t a perfect family, we learned the value of our faith, the habits of prayer, and the beautiful rituals of our Catholic faith. These habits have helped us shape our hearts for the Lord. As the family faced the trials of cancer for my mother and brother, I kept seeing my mom turn to Jesus and calling upon the mercy of Momma Mary through the rosary.

Fostering creativity and music: Great art and technical work requires the discipline of incremental practice, trial, failing, and persistence. I feel like Francis and I learned these lessons through our family culture of tinkering, art and music. My dad created opportunities to get early exposure to computers and their creative power. My first experiences with a computer gave us exposure to creating art on a computer or simple code experiments. My brother and I had robust opportunities to learn and explore music. We enjoyed our opportunities to learn violin, piano, and sing. ( And mom loved to sing!) To be honest, when we started playing violin it sounded like we were killing cats. Not sure how we progressed beyond that. At some later point, we gained enough skills to join our music ministry at Christ the King. Some of my favorite memories involve me and my brother getting to serve at 5:30 pm contemporary choirs, sharing music at the carnival and serving in youth ministry. My mom and dad largely supported the strengths of my brother in performing and visual art. It’s been cool to see his passions lean toward creative digital fabrication and digital media. On a personal level, I didn’t realize it at the time, but these became pivot points for preparing me for future work in music ministry later in college. I know these experiences helped us gain a growth mindset for our respective careers.

Toward the end, I helped my mom reflect upon the influence of her life. I talked about Pam and Wilson who met while serving in my campus ministry choir at UCF. Like many beautiful stories, Pam and Wilson fell in love through their shared passion for Christ and music. They have a beautiful Catholic family that I cherish. Since those precious years of being a founding member of and, hundreds of students have changed their lives because of their deep encounter with Jesus in these ministries. Holy men and women have decided to give their lives to Jesus as priests and nuns. Generations of Catholic families will continue to be born. My mom has beautiful spiritual grand-children because she planted the love of God and music in my heart. To return to the simple teaching of Mother Teresa, can loving your family truly change the world? It’s a great hypothesis for families to consider. It’s a hypothesis that we consider testing with our loved ones. I know our family will always be proud of our dearest Belen.

While we’re sad to lose momma on Earth, we’re excited for momma for her birthday in heaven. Can’t wait to hear the stories of her meeting Jesus, her mom, dad, and other dear ones in heaven. We’re so glad that she now enjoys a glorified body, the love of Christ, and no more pain. Excited to seek Lola Belen’s intercession. Saint Belen Rosario … Pray for us!!

14 AFrame.IO Resources For Your WebXR Project

AFrame Logo

I’m a big fan of the work of the AFrame.IO community.  Thank you to Mozilla, Diego Marcos, Kevin Ngo, and Don McCurdy for their influence and effort to build a fun and productive platform for building WebVR experiences.   In this post, I’ve collected a few Github repositories and resources to support you in building AFrame experiences.

Talk Abstract: In the next few years, augmented reality and virtual reality will continue to provide innovations in gaming, education, and training. Other applications might include helping you tour your next vacation resort or explore a future architecture design. Thanks to open web standards like WebXR, web developers can leverage their existing skills in JavaScript and HTML to create delightful VR experiences. During this session, we will explore A-Frame, an open source project supported by Mozilla enabling you to craft VR experiences using JavaScript and a growing ecosystem of web components.
Kevin’s collection of A-Frame components and scenes.
Awesome WebXR from Don McCurdy
Infinite background environments for your A-Frame VR scene in just one file.
Interactive workshop and lessons for learning A-Frame and WebVR.
Official registry of cool AFrame stuff
Components for A-Frame physics integration, built on CANNON.js.

Experiment with AR and A-Frame
AFrame now has support for ARCore. Paint the real world with your XR content! Using FireFox Reality for iOS, you can leverage ARKit on your favorite iPad or iPhone.
I’ve put together a small collection of demo apps to explore some of the core ideas of AFrame.

AFrame Layout Component
Automatically positions child entities in 3D space, with several layouts to choose from.

An animation component for A-Frame using anime.js. Also check out the animation-timeline component for defining and orchestrating timelines of animations.

Super Hands
All-in-one natural hand controller, pointer, and gaze interaction library for A-Frame. Seems to work well with Oculus Quest.

A-Frame Component loading Google Poly models from Google Poly
Component enables you to quickly load 3D content from Google Poly

HTML Component for A-Frame VR that allows for interaction with HTML in VR. Demo
L-System/LSystem component for A-Frame to draw 3D turtle graphics. Using Lindenmayer as backend.

Thanks to the amazing work from Mozilla, WebXR usability has improved through specialized Firefox browsers:
FireFox Reality
FireFox Reality for HoloLens 2 – For raw ThreeJs scripts, it works well. I’m still doing testing on AFrame scenes.

If you live in Central Florida or Orlando, consider checking out our local chapter of Google Developer Group. We enjoy building a fun creative community of developers, sharing ideas, code, and supporting each other in the craft of software. Learn more about our community here:

Top Stories on InspiredToEducate.NET

3D Modeling for Minecraft using TinkerCad – Online Meetup June 20th

As adult learners or students, we’re all looking for new fruitful activities that we can share with our friends and family. In this hands-on workshop, we’re partnering with Google Developer Group of Central Florida to learn how you can build 3D stuff for a 3D printer, a Unity game, and Minecraft!

  • WHO: Families, developers, tinkerers
  • WHERE: Online Google Meet
  • WHEN: June 20th at 1pm

In this workshop, we’ll build amazing stuff in Minecraft that will WOW your friends! You’ll learn the basics of 3D modeling using TinkerCAD, a free tool for modeling! You’ll have fun constructing 3D worlds and playing them in Minecraft. Using TinkerCAD, we’ll convert your 3D worlds into Minecraft schematics that can be imported using WorldEdit.

For families, we hope that you consider bringing your kids with you and learning together.

For developers, we’ll cover a few API’s to build 3D models using JavaScript too.

You’ll need to register for a free account on TinkerCad. You’ll also need to obtain the Minecraft Java Edition. You may want to install WorldEdit ahead of time too: Setup WorldEdit on Minecraft

To join the video meeting, click this link: Meeting Link on Google Meet

I hope that you can join us!

Related posts:

Create Async-JAM sessions with your music friends

Hey, Music makers! In the past few months, my family and I have discovered an amazing web-based music recording tool that we just had to share. I believe that some of the best ideas in life come from ideas mixing. In the world of music making, we love having the opportunity to elaborate or jam upon the ideas of other musicians. It’s a core experience. The website makes it possible for music makers to build music in a fun and social manner.

I had the amazing opportunity as a kid to learn musicianship deeply. From my mother, I learned a great deal of the discipline and habits required to become a proficient violin player. I learned to appreciate classical music and the joy of making music with others. These lessons also empowered me to serve in my church and use my gift of music to uplift others. My father gifted me with the perspective and skills of a rock keyboard player. My brother and I grew up listening to a lot of classic rock: Elton John, Billy Joel, Eagles, Emerson, Lake & Palmer, Chicago, etc. While I loved classical music, I also desired to play like Elton John. Being computer geeks, my dad invested very early in getting us access to MIDI music recording equipment and a simple keyboard. As a teen, I remember losing many hours during the summer learning how to record electronic music. We even recorded some of my dad’s songs too. These are some of my most precious memories.

With this story in mind, I want to create these experiences for my kids too. It’s been fun to explore their musical creativity together. For my little singer, we record some Disney tracks. One of my boys really enjoys building techno right now, and BandLab makes it easy. I hope you consider checking out BandLab to explore music making in your family too!

Kid Techno Samples

Key benefits of BandLab

  • It just works in the browser: BandLab is like Google Docs for musicians. To get started, you don’t need to install software onto your computer at all. Open up a web browser, navigate to BandLab, and register for an account. From there, you hit the “create” button and you’re ready to start making music.
  • It works with your MIDI/audio controller: In our house, we have a pretty inexpensive MIDI/audio recording box. It’s a USB device that connects my laptop to my MIDI keyboard and our recording mics. It blows my mind that Google Chrome and BandLab can interface with audio recording and MIDI devices. Putting geek stuff aside, I can use BandLab to record small keyboard and audio fragments completely in the browser. Crazy!!
  • For R&B and techno oriented creatives, BandLab has a robust library of audio loops for mixing. All of these loops can be layered and arranged in a multi-track manner.
    Loop library

  • The best ideas come from mixing with other musical ideas. With BandLab, you can now share your music in the same manner that you would share a Facebook post or a Google document. This creates an opportunity for creators to market their skills, connect with new musical friends, and gain inspiration from others.

  • A great deal of BandLab works on mobile devices and phones too. This can be fun if you’re feeling creative on the go!

Quick tour of features

BandLab provides a clean user experience for multi-track recording. For creatives who want to leverage basic software-based synths in their MIDI creations, you can expect the common piano roll interface. I have to say that I enjoy the simplicity as compared to other recording tools. Unfortunately, I have not found a way to output my MIDI back to my external keyboard device. This matters for professional musician use cases where you have an amazing library of sounds on your keyboard. I do like that the multi-track experience enables you to mix different types of musical ideas: MIDI keyboard recordings, raw audio, drum loops, and audio loops.

Drum patterns

The drum patterns interface enables you to define a collection of drum patterns. For pattern A, you might define a drum pattern that works for a verse. For pattern B, you might define another pattern for your chorus. You can define yet another pattern for a bridge. As I’m trying to engage my kids in music making, I like to share the drum pattern maker with them. They instantly get it and enjoy iterating on ideas.

Are you curious about BandLab, but don’t have a keyboard? Don’t worry! They have you covered. There’s a simple interface for playing notes using your normal computer keyboard. For simple techno recording, you can still have fun with this interface.

To give you more perspective, check out this YouTube video from Eumonik. I like his honest review and tour of BandLab.

Hope you enjoy BandLab to create async-JAM sessions with your music friends and family.

7 Creative DIY Project Ideas For Family Fun

Like many parents, my wife and I seek out activities that have a fun factor while we learn small lessons about math, science, art, or crafting. It’s fun to find activities that help avoid the default desire for screen time. I started putting together a plan for our kids over the next few weeks. Like many makers, I enjoy checking out new project ideas on Instructables. If you haven’t checked out Instructables, I am certain that you can find a project for you there! I thought I’d share seven projects that looked cool.

PVC Tent:

In our house, the kids really enjoy building forts. I really like the idea of using PVC to frame the structure of the fort. It looks like a pretty cheap build. Honestly, building forts with cardboard works just fine too. Big box forts can keep our kids playing for hours!

Lego Crossbow:
Sometimes, kids enjoy being little warriors. This looked like a fun build for fans of Lego Technic. The build reminds me of the activities from the book “Weapons of Mini-Destruction.”

Lego chess:
In general, I think we might start exploring the idea of building board games using Legos. I got this concept after seeing this simple chess set. It has been fun starting to teach chess to the kids too.

DIY Cardboard Lamp:
This just looks very cool. It might be fun to do a 3D printing twist on this project too!

DIY Board Game:
Speaking of board games again, I really appreciated this post on building board games that teach. Besides that, the author had very practical tips for prototyping board game layouts with common objects and simple computer tools like PowerPoint. Thanks for the awesome ideas.

Duct Tape Bird House:
With the family staying in the house more, we have started enjoying bird watching more. This hack with boxes and Duct tape got the attention of one of my little ones.

Cardboard Project Dome:
This just looked cool!

Got other cool project ideas? Please share a link with us and our readers! We love to hear from you.

Build a Space Shooter with Phaser3 and JavaScript (Tutorial 3)

In this blog post series, I want to unpack building a 2D shooter game using Phaser3.js. Phaser3 provides a robust and fast game framework for early-stage JavaScript developers. In this tutorial, we will add aliens to the scene, give them some basic movement, and blow them up. Sound like a plan? Here’s what we will build.

Please make sure to check out Tutorial 1 to get started with this project. You’ll need to build upon the code and ideas from the previous blog posts. (post 1, post 2)

To see the code in a completed state, feel free to visit this link. Let’s start by making some modifications to the scene class to preload an enemy sprite graphic. The PNG file will represent how the alien should be drawn to screen. We associate the name ‘enemy1’ with our PNG file.

class Scene1 extends Phaser.Scene {

    preload() {
        this.load.image('ship', 'assets/SpaceShooterRedux/PNG/playerShip1_orange.png');
        this.load.image('laser', 'assets/SpaceShooterRedux/PNG/Lasers/laserBlue01.png');
        this.load.image('enemy1', 'assets/SpaceShooterRedux/PNG/Enemies/enemyBlack3.png');
    }


In the Phaser game framework, we associate moving game entities with sprites. To define a sprite, we build out an enemy class. When we put a sprite into our scene (as the class is constructed), a special function called the constructor will be invoked. We’ve designed the constructor so that we can set the enemy location at an (x, y) coordinate and connect it to the scene.

In the constructor, we accomplish the following work. We set the texture of the sprite to ‘enemy1’ and set its position. Next, we connect this sprite to the physics engine of the scene. We’ll use the physics engine to detect when the enemy gets hit by lasers. We also initialize the deltaX factor to 3. It’s not super exciting, but the aliens will shiver from side to side randomly. This, however, is good enough for a simple lesson. After you complete this tutorial, I encourage you to go crazy with making the aliens move any way you want!

    class Enemy1 extends Phaser.GameObjects.Sprite {

        constructor(scene, x, y) {
            super(scene, x, y, 'enemy1');
            this.setPosition(x, y);

            // connect this sprite to the scene and its physics engine
            scene.add.existing(this);
            scene.physics.add.existing(this);

            this.gameObject = this;
            this.deltaX = 3;
        }


Adding movement to aliens

So, we’re ready to start moving some aliens. Let’s do this! We’re going to write three simple methods on the Enemy1 class. Following the pattern of all Phaser sprites, the update method will be called every game tick. It’s your job to tell the sprite how to move. Keep in mind, we’re going to do a simple “side to side” behavior randomly. In the update method, we start by picking a whole number k between 0 and 4. If k is 2, we make the sprite move left using the “this.moveLeft()” function. If k is 3, we make it move right using “this.moveRight()”. For any other value, the alien holds its position for that tick.

    update() {
        let k = Math.random() * 4;
        k = Math.round(k);

        if (k == 2) {
            this.moveLeft();
        } else if (k == 3) {
            this.moveRight();
        }
    }

    moveLeft() {
        if (this.x > 0) {
            this.x -= this.deltaX;
        }
    }

    moveRight() {
        if (this.x < SCREEN_WIDTH) {
            this.x += this.deltaX;
        }
    }

Make lots of aliens

At this point, you want to see lots of moving aliens. Let’s add the code to the scene class to construct the aliens. In the scene class, the “create” method will be used to construct all objects. This includes our ship and the aliens. Firstly, we create a special physics group called enemies; we’ll use this collection to track the enemies with the physics system (this.enemies = this.physics.add.group()). On the next line, we create an Array so that we have a simple way to track our enemies that need updating. In the loop, we’re creating 21 aliens, placing them in random locations, and adding them to both collections (enemies and enemies2).

class Scene1 extends Phaser.Scene {

    create() {
        this.cursors = this.input.keyboard.createCursorKeys();
        this.myShip = new Ship(this, 400, 500);

        // ======= adding enemies ============
        this.enemies = this.physics.add.group();
        this.enemies2 = new Array();

        let k = 0;
        for (k = 0; k < 21; k++) {
            let x = Math.random() * 800;
            let y = Math.random() * 400;

            this.enemy = new Enemy1(this, x, y);
            this.enemies.add(this.enemy);
            this.enemies2.push(this.enemy);
        }
    }

In order to invoke our update code for all enemies, we need to make one more edit to the scene class. In the “update” method, we add a loop that calls “update” on every enemy.

    update() {
        // there's more code related to the ship here

        let j = 0;
        for (j = 0; j < this.enemies2.length; j++) {
            let enemy = this.enemies2[j];
            enemy.update();
        }
    }

At this point, we should see our aliens wiggling on the screen. And there’s much rejoicing!

Aliens go boom! Let’s do collision detection

In the laser class that we built in the last post, we need to make a few edits. Check out the code below. In the constructor of the ShipLaser, we set the texture, position, speed, and store the parent scene in “this.scene.” We connect the laser instance to the physics engine using “scene.physics.add.existing(this).” On the next line, we tell the game framework to check for collisions between this laser and the enemies. When a collision happens, we handle the hit using the “handleHit” function.

    class ShipLaser extends Phaser.GameObjects.Sprite {

        constructor(scene, x, y) {
            super(scene, x, y, 'laser');
            this.setPosition(x, y);
            this.speed = 10;
            this.scene = scene;

            // check out new code below ...
            scene.physics.add.existing(this);
            scene.physics.add.collider(this, scene.enemies, this.handleHit, null, this);
        }

In the handle hit function, you’ll notice that the laserSprite and enemySprite have been passed as parameters to the method. In Phaser, you can receive these references so that we can define behaviors associated with both sprites. In this case, we’re just going to destroy the objects.

    handleHit(laserSprite, enemySprite) {
        laserSprite.destroy();
        enemySprite.destroy();
    }

Hope this has been helpful. Please let me know if you have any questions.
