Level Up Your Coding Skills for Free: Exploring FreeCodeCamp

Learning to code with FreeCodeCamp

In this post, I wanted to give a shout-out to one of my favorite tools for leveling up with code: FreeCodeCamp. This non-profit organization’s mission is to create high-value coding education for the world. I have a few friends who have used their content to get started in web development, and in my experience, they offer courses appropriate for advanced developers too. The founder of FreeCodeCamp, Quincy Larson, has made it clear that they hope to keep their materials and content free forever. Quincy understands the economic power a technology career can give to individuals and families. As a teacher turned software developer, he saw an opportunity to apply his teaching skills to open opportunities for countless others. I deeply admire the contribution he’s made to the world.

According to this article from Forbes.com, FreeCodeCamp has celebrated many noteworthy achievements:
– Over 40 thousand of their graduates have landed roles at noteworthy companies
– Over 11 thousand tutorials
– Thousands of hours of “hands-on learning” content across their courses
– A great YouTube channel and supportive forums

At my level, I know that I want to refresh my thinking around responsive design and CSS. They also have a feature-rich course on PyTorch, one of the hottest machine learning frameworks in the Python community.

If you’re looking to jump into the software development or software quality market without breaking the bank, I can’t recommend FreeCodeCamp enough. I have started coaching a few friends through this content. I would recommend exploring the following topics and actions if you want to make yourself a potent contributor to the industry.
– Get to know responsive web design, HTML and CSS
– Get to know JavaScript
– Get to know a little bit of Express or NestJS to explore back-end development
– React has become the most popular front-end framework in our industry. When appropriate, get to know this component framework
– Build a project that involves “screen scraping” data that you care about.
– Build a simple project that involves a simple front-end and backend. Build something that you care about.

Get better a little bit every day: We do need to respect the journey of software craftsmanship. You can become overwhelmed by the volume of information and the natural anxiety of learning something outside your comfort zone. It’s important to focus on enjoying the journey and getting better a little bit at a time.

How do you get started? Check out the FreeCodeCamp website and sign up. I would recommend exploring the following topic themes: responsive web design, JavaScript, front-end development libraries, and back-end APIs. Create a pace that works for you. Most code camps last 6 to 9 months. I personally keep a habit of learning something new for 20 minutes each day.

Make a good GitHub profile with sample projects: As a hiring leader, I enjoy checking out the public GitHub profiles of candidates. Once you have explored some of the basics, make sure to share your work on your public GitHub profile. Please see the following article for details on this. A well-groomed GitHub profile can help communicate your passion and commitment to learning and growth.

Do not travel your coding journey alone. I encourage you to connect with the FreeCodeCamp forums or find a Google developer group or similar meetup. In our Google developer group, we try to keep an active Discord that’s friendly to people learning to code.

Build with projects: In the past year, I have been a bit more quiet on blogging due to some family challenges. I, however, continue to grow myself using projects that are fun. If you explore this blog, you can see that we have built a lot of stuff around the passions of my family: music, bird watching, Minecraft, robots, and LEGO building. At InspiredToEducate, we believe that you should build projects that you find fun. It’s a great way to keep your motivation high because you’re building something that solves a problem in your life.

Related Links:

Learning To Code Google Gemini AI

Learning To Code Python
One of my friends reached out to me, curious about trying to build a simple AI bot. In my social circles, I have observed many friends and family showing curiosity about learning to code. Many of these friends do not come from a formal programming background but want a way to get started. As AI continues to gain influence in our culture, I feel passionate about helping people see AI as a helpful tool and about growing a learning culture. In my view, it’s important that our workforce can adapt to a more AI-enabled environment. I believe anyone can learn to code. I also believe that citizens should feel the agency and empowerment to direct AI to build a better way of life. In this brief post, I wanted to outline a few tools that can help an early-stage developer start exploring the world of AI bot construction.

  • Codecademy using Python: I greatly appreciate Codecademy for helping people “get started” in Python and JavaScript. We learn best when we connect to a concept and immediately apply it. Codecademy was one of the first tools to encourage a “hands-on” approach to learning computer languages quickly.
  • Google AI Studio: Before jumping into tons of code, it’s ideal to explore what might be possible. Google AI Studio provides a simple tool to explore AI question-and-answer experiences with Google Gemini. The tool provides a nice prompt gallery to help you explore interesting uses and prompt patterns. By pressing the “get the code” button, the tool will draft your active prompt into Python or many other languages.
  • Google Gemini Python SDK: Once you have a feel for the Python language and the concepts of Gemini, you can explore more detailed tutorials for leveraging the features of Google Gemini.
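To make these steps concrete, here’s a minimal sketch of a first Gemini script in Python. It assumes you have installed the google-generativeai package and created an API key in Google AI Studio; the model name below is just an illustrative choice.

import google.generativeai as genai

# Assumes an API key created in Google AI Studio
genai.configure(api_key="YOUR_GEMINI_API_KEY")

# Model name is illustrative; pick any Gemini model available to your account
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content("Suggest three fun beginner Python bot projects.")
print(response.text)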

These three baby steps will be enough to get you started with making a very simple bot.

As you explore making more robust “front-ends” or user interfaces for your bot, it will become more important to learn skills like Python back-end API design, HTML, JavaScript, and responsive design. The following FreeCodeCamp links will help you explore those topics.

If you follow the FreeCodeCamp tutorials rigorously, you can earn a few certifications. I encourage early-stage developers to post their learning projects on a public GitHub account. If you were going to hire an interior designer for your home, you would ask candidates to show samples of their work. For UX designers and programmers, it’s important to build a portfolio showing your journey of learning, your work quality, and your project impact.

Related Posts

3 Reasons Why Google Developer Groups Support Community Growth

Hi, friends. While looking through old blog posts I have written, I discovered that I wrote my first blog post about Google Developer Groups ten years ago. I have enjoyed the journey. In this blog post, I wanted to explore some of my personal motivations, stories, and feelings around supporting our local Google Developer Groups. For my kids and my community, I enjoy advancing the mission of helping “students young and old to love learning through making, tinkering, and exploration.” I appreciate the GDG community enabling me to explore this mission using Google tools and the culture of open innovation.

Growing Students: In the past month, I bumped into this cool episode of the Google Cloud Platform Podcast with one of my mentors, Dr. Laurie White.

Google Cloud for Higher Education with Laurie White and Aaron Yeats

Dr. Robert Allen and Dr. White invited me to join their Google Developer Group (GDG) while I lived in Macon, GA. The GDG of Macon focused on serving the students of Mercer University and the Macon community. I think they sensed my curiosity for teaching software engineering and invited me to teach some of my first sessions. (Google App Engine, Firebase, JS, etc.) The experiences amplified a corner of my soul that enjoys helping college students jump into the crazy world of software engineering. In the podcast, Dr. White underscores that traditional computer science education has many strengths. The average CS program, however, does not address many critical topics desired by engineering teams. (i.e., working with a cloud provider, engineering software for easy testing, test automation, user-centered design, etc.) These gaps become blockers for early-stage developers seeking work. I found joy helping these students address those gaps and connecting them with opportunities around AI, web, mobile, and open source. In the Google ecosystem, there are tribes of mentors who want to help you become successful.

Growing a community of professionals: As developer community organizers, many of us recognize the opportunity and promise of software craftsmanship. We live in an amazing industry not blocked by atoms and the need for physical raw materials. In the world of software, you can start a business with a strong concept, persistence, and good habits of incremental learning. In the world of software, you can find a good technology job by becoming a little bit better every day AND connecting with a supportive community. For many, software engineering helps real people feed and elevate themselves and their families. I believe that’s an important mission. I believe our GDG communities hit a high mark in helping professionals grow and making the experience “excellent.” As GDG organizers, we’re passionate about helping you and your teams become successful with your cloud strategies, mobile and web apps, empowering creators with AI, and design culture. I have had the blessing of many mentors. Dr. Allen gave me my first Google Cardboard and introduced me to Unity3D. I now work with a wonderful design firm focused on creating learning experiences with virtual and augmented reality. It’s important to remember that small sparks can grow into bigger things. It’s important to give back and grow the next generation. We seek to become sparks for others.
Growing future startups: I believe that small businesses will continue to be our engine of economic growth. The news often paints a sad picture of our world as broken. We love to support startups who believe they can meaningfully improve the world and help others become successful too. To that end, I love that Google helps startups succeed through its various growth programs like Google Developer Groups, Women Techmakers, startup.google.com, and student groups. Google’s learning team has put a lot of care into growing an open learning ecosystem through codelabs.google.com, web.dev/learn, Flutter.dev, kaggle.com/learn, and other product guides. Learning becomes more joyful when you can learn as a tribe. Why go solo?

Invite to DevFest Florida

If you’re looking for supportive mentors and a growth-oriented meetup community, I extend a warm invitation to DevFestFlorida.org. Working with my fellow GDG organizers across Tampa, Miami, and Orlando, we’re organizing one of the largest local dev conferences in the South to help you learn and grow. It’s an experience designed by developers for our local developers, with lots of hands-on learning and fun.

Join us for DevFest Florida – Sep 28

AI | Mobile | Web | Cloud | Community

DevFest Central Florida is a community-run one-day conference aimed to bring technologists, developers, students, tech companies, and speakers together in one location to learn, discuss and experiment with technology.

Why DevFest Florida Stands Out:

  • Industry Titans: Hear from top tech companies like Google, Microsoft, EA, Thoughtworks, and Mailchimp.
  • Career Catalyst: Explore our dedicated Career Development track for tailored insights.
  • Diverse Tech Landscape: Discover cutting-edge technologies like Angular, Flutter, Android, Machine Learning, Firebase, Cloud, Web, and more.
  • Inclusive Community: Connect with developers, designers, testers, tech entrepreneurs, and enthusiasts from all backgrounds.

What to Expect:

Expert Insights: Learn from 30+ speakers sharing their expertise across industries and tool stacks. Our featured speakers for 2024 include:
– Dmitry Lyalin, Product Lead for Google Firebase
– John Papa, Partner GM, Developer Relations @ Microsoft
– Brooke Avery, Engineer & Technical Program Manager @ Limble
– Doug Leal, Director of Consulting (Data & Analytics) @ CGI
– Roya Kandalan, Gen AI Research Scientist @ The Cigna Group

  • Hands-On Workshops: Dive deep into the latest technologies.
  • Networking Opportunities: Connect with like-minded professionals.
  • Inclusive Environment: Experience a welcoming atmosphere that celebrates diversity and inclusion.

Get 10% off your ticket when you use the code SECRETSALE

Get your tickets at EventBrite.com

Bird Watching With Python and TensorFlowJS ( Part 3 )

In this series, we continue building a small system to capture pictures of my back yard and detect if we see anything. In the future, we want to search the database for birds. This post will focus on the problem of detecting objects in the images and storing the results in a database. Check out part 1 and part 2 for more context on this project.

TensorFlow.js is an open-source JavaScript library developed by Google’s TensorFlow team. It enables machine learning and deep learning tasks to be performed directly in web browsers and Node.js environments using JavaScript or TypeScript. TensorFlow.js brings the power of TensorFlow, a popular machine learning framework, to the JavaScript ecosystem, making it accessible for web developers and data scientists.

Within the TensorFlow.js framework, you have access to the COCO-SSD model, which detects 80 classes of common objects. The output reports a list of objects found in the image, a confidence score, and a bounding box for each object. Check out this video for an example.

In the following code, we import our dependencies. These include:
– @tensorflow/tfjs-node – TensorFlow.js for Node.js
– @tensorflow-models/coco-ssd – a TensorFlow model for common object detection
– amqplib – a library for connecting to RabbitMQ
– @supabase/supabase-js – to log data about the objects found, we send our data to Supabase
– @azure/storage-blob – a client library for downloading pictures from Azure blob storage

const tf = require("@tensorflow/tfjs-node")
const amqp = require('amqplib');
const cocosSSd = require("@tensorflow-models/coco-ssd")
const { createCanvas, loadImage } = require('canvas');
const { createClient } = require('@supabase/supabase-js');
const { BlobServiceClient } = require("@azure/storage-blob");
const { v1: uuidv1 } = require("uuid");
var fs = require('fs');

My friend Javier got me excited about trying out https://supabase.com/. If you’re looking for a simple document or relational database solution with an easy API, it’s pretty cool. This code grabs some details from the environment and sets up a Supabase client.

const supabaseUrl = process.env.SUPABASEURL;
const supabaseKey = process.env.SUPABASEKEY;
const supabase = createClient(supabaseUrl, supabaseKey)

To learn more about Supabase, check out supabase.com.

In our situation, the job-processor program and the watcher program will probably run on two different machines. I will try to run the watcher process on a Raspberry Pi. The job processor will probably run on some other machine. The watcher program takes pictures and stores the files in Microsoft Azure blob storage. The watcher then signals the job processor by sending a message through RabbitMQ.

Let’s set up the connection to Azure Blob storage.

const AZURE_BLOB_STORAGE_CONNECTION_STRING = process.env.AZURE_BLOB_STORAGE_CONNECTION_STRING;

if (!AZURE_BLOB_STORAGE_CONNECTION_STRING) 
{
  throw Error('Azure Storage Connection string not found');
}

const containerName = "picturesblobstorage";
const blobServiceClient = BlobServiceClient.fromConnectionString(AZURE_BLOB_STORAGE_CONNECTION_STRING);
const containerClient = blobServiceClient.getContainerClient(containerName);

When we want to download a file from Azure blob storage, we leverage our container client.

async function downloadPictureFromBlobStorage(fileName)
{  
  try 
  {
    const blobClient = containerClient.getBlobClient(fileName);
    console.log(`Downloading blob ${fileName} to ${fileName}`);
    const downloadBlockBlobResponse = await blobClient.downloadToFile(fileName);
    console.log(`Downloaded ${downloadBlockBlobResponse.contentLength} bytes`);
    return true;
  } catch (err) {
    console.error(err.message);
    return false;
  }  
}

Let’s set up our class for getting insights from our object detection algorithm. In the following class, the “makeCanvasFromFilePath” method loads the picture into memory as a canvas. Using the COCO-SSD model, we detect objects in the image using the predict method.

class ObjectDetection 
{
    constructor()
    {
        this.model = null;
    }

    async predict(image)
    {
        if(!this.model)
        {
            this.model = await cocosSSd.load();
        }

        const canvas = await this.makeCanvasFromFilePath(image);    
        const predictions = await this.model.detect(canvas);

        return { predictions: predictions }
    }

    async makeCanvasFromFilePath(image) {
        const img = await loadImage(image);
        const canvas = createCanvas(img.width, img.height);
        const ctx = canvas.getContext('2d');
        ctx.drawImage(img, 0, 0);
        return canvas;
    }
}

const objectDetection = new ObjectDetection();

Let’s configure RabbitMQ

// RabbitMQ connection URL
const rabbitmqUrl = 'amqp://localhost';

// Queue name to consume messages from
const queueName = 'review-picture-queue';

The “processJsonMessage” method is the heart of this Node.js script. At a high level, the system does the following tasks:
– Read a JSON message from the watcher program.
– Download the picture from Azure blob storage.
– Run object detection on the file.
– Store the findings in the database (Supabase).

// Create a function to process JSON messages
async function processJsonMessage(message) {
  try {
    const json = JSON.parse(message.content.toString());
    // Replace this with your custom processing logic for the JSON data
    console.log('Received JSON:', json);
    console.log(json.fileName);

    // need function to download file from blob storage 
    const fileDownloaded = await downloadPictureFromBlobStorage(json.fileName);
    if(fileDownloaded)
    {
      // Run TF prediction ...
      const response = await objectDetection.predict(json.fileName);
      console.log(response)

      // Store data in supabase ....
      const { error } = await supabase.from('watch_log').insert({ file_name: json.fileName, json: response })    
      if(error)
      {
        console.log("error object defined");
        console.log(error);
      }  

      deletePictureFromBlobStorage(json.fileName);
      fs.unlinkSync(json.fileName);

    }else{
      console.log("Error downloading file from blob storage");
    }

  } catch (error) {
    console.error('Error processing JSON message:', error.message);
  }
}
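One note: processJsonMessage calls deletePictureFromBlobStorage, which isn’t shown in this post. A minimal sketch using the same containerClient might look like this (assuming the blob name matches the local file name, as in the download helper):

async function deletePictureFromBlobStorage(fileName)
{
  try
  {
    // Remove the processed picture from the container so storage stays tidy
    await containerClient.deleteBlob(fileName);
    return true;
  } catch (err) {
    console.error(err.message);
    return false;
  }
}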

Here’s some sample data captured as JSON:

{
  "predictions": [
    {
      "bbox": [
        -0.36693572998046875,
        163.0312156677246,
        498.0821228027344,
        320.0614356994629
      ],
      "class": "person",
      "score": 0.6217759847640991
    }
  ]
}
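For reference, the watch_log table on the Supabase side only needs a couple of columns. Here’s a minimal SQL sketch; the file_name and json column names come from the insert call above, and the other columns are assumptions:

create table watch_log (
  id bigint generated by default as identity primary key,
  file_name text,
  json jsonb,
  created_at timestamptz default now()
);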

In this last section, we connect ourselves to RabbitMQ so that we can start to accept work.


// Connect to RabbitMQ and consume messages
async function consume() {
  try {
    const connection = await amqp.connect(rabbitmqUrl);
    const channel = await connection.createChannel();
    await channel.assertQueue(queueName, { durable: false });
    console.log(`Waiting for messages in ${queueName}. To exit, press Ctrl+C`);
    channel.consume(queueName, (message) => {
      if (message !== null) {
        processJsonMessage(message);
        channel.ack(message);
      }
    });
  } catch (error) {
    console.error('Error:', error.message);
  }
}

consume();

That’s about it. If you need to see the completed project files, check out the following GitHub link:
https://github.com/michaelprosario/birdWatcher

If you’re interested in exploring more tutorials on TensorFlowJs, check out the following links to code labs:
TensorFlowJs Code Labs

If you want to learn more about TensorFlow.js and machine learning, our Orlando Google Developer Group will be organizing a fun one-day community conference on Oct 14th.

Join us for DevFest Florida – Oct 14

AI | Mobile | Web | Cloud | Community

DevFest Central Florida is a community-run one-day conference aimed to bring technologists, developers, students, tech companies, and speakers together in one location to learn, discuss and experiment with technology.

Bird Watching With Python and TensorFlowJS ( Part 2 )

In this series, we will continue building a small system to capture pictures of my back yard and detect if we see birds. Check out the first post here. In this post, we will focus on the problem of taking pictures every minute or so. For fun, I decided to build this part in Python. To review the system overview, check out my previous blog post here.

The solution for the watcher involves the following major elements and concepts.
– Set up a connection to Azure blob storage. To keep things simple, Azure blob storage enables you to securely store files in the Microsoft Azure cloud at low cost.
– Set the time interval for taking pictures.
– Set up a connection to the message queue system. The watcher program needs to send a message to an analysis program that will analyze the image content. Please keep in mind that RabbitMQ is simply “email for computer programs.” It’s a way for programs to message each other to do work. I will be running the watcher program on a pretty low-powered Raspberry Pi 2. In my case, I wanted to offload the image analysis to another computer system with a bit more horsepower. In future work, we might move the analysis program to a cloud function. That’s a topic for a future post.

Here’s some pseudocode.
– Setup the program to take pictures
– Loop
– Take a picture
– Store the picture on disk
– Upload the picture to Azure blob storage
– Signal the analysis program to review the picture
– Delete the local copy of the picture
– Wait until we need to take a picture

Setting the stage

Let’s start by sketching out the functions for setting up the blob storage, rabbit message queue, and camera.
At the top of the Python file, we need to import the following:

import cv2
import time
import pika
import json
import os
from azure.storage.blob import BlobServiceClient

In the following code, we set up the major players: blob storage, the Rabbit message queue, and the camera.

container_client = setup_blob_storage()

# Set the time interval in seconds
interval = 60  # every min

# Initialize the webcam
cap = cv2.VideoCapture(0)

# Check if the webcam is opened successfully
if not cap.isOpened():
    print("Error: Could not open the webcam.")
    exit()

queue_name, connection, channel = setup_rabbit_message_queue()

Take a picture

In the later part of the program, we enter a loop to take a picture and send the data to the analysis program.

ret, frame = cap.read()
if not ret:
    print("Error: Could not read frame from the webcam.")
    break

timestamp, filename = store_picture_on_disk(frame)
print(f"Image captured and saved as {filename}")

Send the picture to Blob Storage

local_file_path = filename
blob_name = filename
with open(local_file_path, "rb") as data:
    container_client.upload_blob(name=blob_name, data=data)

Signal analysis program to review image using a message

# Prepare a JSON message
message = {
    'fileName': filename,
    'timestamp': timestamp,
}
message_json = json.dumps(message)

# Send the JSON message to RabbitMQ
channel.basic_publish(exchange='', routing_key=queue_name, body=message_json)
print(f"Message sent to RabbitMQ: {message_json}")

In the previous code sketches, we have not implemented several key functions. Let’s fill in those functions now. You’ll need to position these functions near the top of your script.

setup_blob_storage

Please use this link to learn about Azure Blob storage, account configuration, and Python code patterns.

container_name = "picturesblobstorage"

def setup_blob_storage():
    connect_str = "Get connection string for your Azure storage account"
    blob_service_client = BlobServiceClient.from_connection_string(connect_str)
    container_client = blob_service_client.get_container_client(container_name)
    return container_client

setup_rabbit_message_queue

Set up the connection to the message queue system.

def setup_rabbit_message_queue():
    rabbitmq_host = 'localhost'
    rabbitmq_port = 5672
    rabbitmq_username = 'guest'
    rabbitmq_password = 'guest'
    queue_name = 'review-picture-queue'

    # Initialize RabbitMQ connection and channel with authentication
    credentials = pika.PlainCredentials(rabbitmq_username, rabbitmq_password)
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=rabbitmq_host, port=rabbitmq_port, credentials=credentials))
    channel = connection.channel()

    # Declare a queue for sending messages
    channel.queue_declare(queue=queue_name)
    return queue_name, connection, channel

To keep this blog post brief, I will not jump into all the details of setting up RabbitMQ on your local system. Please refer to this 10-minute video for details on setting up this subsystem.

This blog post does a great job of setting up RabbitMQ with “docker-compose.” It’s a lightweight way to set up services in your environment.

Easy RabbitMQ Deployment with Docker Compose (christian-schou.dk)
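For reference, a minimal docker-compose.yml for a local RabbitMQ broker might look like the sketch below; the management image also gives you a web UI on port 15672.

version: "3"
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"     # AMQP port used by pika
      - "15672:15672"   # management web UI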

store_picture_on_disk

def store_picture_on_disk(frame):
    timestamp = time.strftime("%Y%m%d%H%M%S")
    filename = f"image_{timestamp}.jpg"
    cv2.imwrite(filename, frame)
    return timestamp, filename

In our final blog post, we’ll use Node.js to load the COCO-SSD model into memory and let it comment on the image in question.

You can check out the code solution in progress at the following GitHub repository.

https://github.com/michaelprosario/birdWatcher

Check out object-detection.js to see how object detection will work. Check out watcher.py for a completed version of this tutorial.

If you want to learn more about TensorFlow.js and machine learning, our Orlando Google Developer Group will be organizing a fun one-day community conference on Oct 14th.

Join us for DevFest Florida – Oct 14

AI | Mobile | Web | Cloud | Community

DevFest Central Florida is a community-run one-day conference aimed to bring technologists, developers, students, tech companies, and speakers together in one location to learn, discuss and experiment with technology.

Make A Bird Detector with TensorFlowJs ( Part 1 )


In the Rosario tradition of boldly exploring nature, Sarah and my eldest have gotten into bird watching. It’s been cool to see my son and my wife going on hikes and finding cool birds with a local meetup. My wife gave me a challenge: make a bird watcher device for our yard and our bird house. In her vision, we want to understand when we see the most birds in the back yard and capture great photos. In future work, we might even identify the type of bird. In today’s post, I thought we would talk through the high-level code I’ve prototyped. This will become a fun family project and give me an opportunity to play with some TensorFlow.js.

In the past 12 years, the industry has exploded with innovations involving machine learning. We see these innovations when we ask our home assistant to play a song, use ChatGPT, or use speech-to-text. In the domain of bird watching, we might build a machine learning model using pictures of different birds with labels specifying the type of bird. A machine learning (ML) system observes patterns in the input data set (a set of bird pictures with labels) and constructs rules or structures so the system can classify future images. In contrast to traditional computer programming, we do not explicitly define the code or rules. We train the model using examples and feedback so that it learns. In this case, we want to determine if a picture contains a bird.

In this prototype, I will leverage a pretrained ML model called COCO-SSD. (COCO refers to the Common Objects in Context dataset it was trained on; SSD is the Single Shot Detector architecture.) The model finds 80 different classes of things in the context of the picture, including birds. It will estimate whether it detects a bird in the picture and provide a bounding box location for the object. The model makes a best attempt to segment the picture, report on all the objects it can see, and provide labels.

This diagram provides an overview of the prototype system.

Major elements

  • Watcher – In this project, a Python program takes pictures every 5 minutes. Pictures get stored on the file system. The file name of the picture gets stored in a message that eventually gets added to a queue.
  • RabbitMQ – We’re using RabbitMQ with JSON messages to manage our queue plumbing. You can think of RabbitMQ as email for computer programs. You can insert messages into different folders, and job processor programs start executing when they receive messages in those folders. This also enables us to create multi-program solutions in different languages.
  • Job Processor – The job processor, written in JavaScript on Node.js, monitors the message queue for work. When it receives a file name to process, we load the image into memory and ask the machine learning process to review it. The COCO-SSD model reports a list of detected objects with associated confidence scores. If the system finds a bird, the process writes a database record with the details.
  • Database – For this solution, we’re currently prototyping with Supabase. On many of my weekend projects, I enjoy being able to rapidly create structures and store data in the cloud. Under the hood, it uses PostgreSQL and feels pretty scalable. Thank you to my friend Javier who introduced me to this cool tool.

The job processor element uses TensorFlow.js to execute the object detection model. TensorFlow.js is a pretty amazing solution for executing ML models in the browser or in Node.js backends. Learn more with the following talk.

In our next post, we’ll dive into the details of the job processor process.

If you want to learn more about TensorFlow.js and machine learning, our Orlando Google Developer Group will be organizing a fun one-day community conference on Oct 14th.

Join us for DevFest Florida – Oct 14

AI | Mobile | Web | Cloud | Community

DevFest Central Florida is a community-run one-day conference aimed to bring technologists, developers, students, tech companies, and speakers together in one location to learn, discuss and experiment with technology.

Exploring Motivation

As a parent, I care deeply about how my kids own their habits of learning. In grade school, I received the gift of violin lessons from age five through high school. Looking back, my capacity for music making has become one of my most cherished life skills. Music is just fun! Music brings people together. In my faith life, music helps our community to pray. In short, I have internalized my motivation to explore music making. Beyond violin, I now play guitar, piano, and ukulele, sing, and record music.

If I’m honest with myself, I can remember the times that practicing violin felt like a chore. I can recall those times when I got in trouble because I did not practice enough. As an adult, I now have an appreciation for the gift of habits and can see the value that music-making has created in my life. When you start learning a complex piece of music, you have to deal with the emotions of feeling “overloaded.” With the great mentorship of my parents and teachers, I learned the gift of taking things slow. We explored ways to chop up a piece of music into small phrases and gain competency. To move fast in music-making, you have to start slow and correct.

Reflecting on my music journey, I can see the joy, benefits and value of musicianship clearly now. The eight-year-old Michael did not have that kind of motivation. Over the next few months, I’m hoping to study some stuff around motivation so that I have additional tools to serve my kids in their lifelong journey. In this blog, we commit to the mission of helping students love learning through making, engineering, and exploration. It’s hard to keep my kids motivated on their projects and activities at times. I have a little one taking violin lessons now. It’s a joy to see her grow. And it can feel like a challenge getting her to practice regularly. Every parent has their version of this. How do I get my kids to eat their vegetables?

My wife and I will often tell each other that “they don’t come out saints.” In this phrase, we acknowledge that mentorship and parenting are hard. I also acknowledge that good parenting requires healthy habits from myself. (prayer, reflection, planning, etc.)

In this post, I did some reflection on the following Edutopia post about student motivation. I’ll probably do more over the next month.

To Increase Student Engagement, Focus on Motivation by Nina Parrish.

The post reflects on the idea that students tend to have more motivation to explore when they have the gift of autonomy. In my favorite book, “Invent to Learn,” the authors foster the idea that students should have the space to select meaningful “hands-on” creative projects. If my kids really want to engineer a skateboard, I should try to cheer them on. In theory, I should also support them as they walk through the related math and construction skills. Secondly, students feel motivated when they feel like they’re gaining ground on their skills. (i.e., growing in mastery) For some kids, it’s hard to chop big projects into smaller stories or tasks. I probably should use my Scrum master skills to help them decompose problems more. That might be a fun experiment I can do soon. 🙂 Of course, kids feel motivated when they know that you care. That’s a good reminder.

To celebrate my wife a bit, I feel she did a great job inspiring motivation in our oldest son Peter. Peter has shown great curiosity around marine biology for years now. They also spend tons of quality time bird watching together. As I write this blog post, we’re picking up Peter from a cool marine science camp down in the Florida Keys. (Pigeon Key) I appreciate my wife for finding this amazing camp that empowered Peter to explore his natural curiosity. Picking him up, he’s on fire and excited about all the science and creatures he’s explored this week. Go Dr. Rosario!! I’m thankful that he had this life-changing experience.

  • As an honest student of becoming a better parent, I would ask for your ideas on inspiring motivation.
  • What did a teacher or mentor do to inspire you?
  • What has worked for your kids? Very open to your inspirations.

Have a great day!

Gitpod: Cloud Dev Environment That Saves You Time


In the good old mainframe days, professionals may have used a “dumb terminal.” This terminal had enough power to handle input and output with a user, but the deeper magic happened on a more powerful mainframe computer. In 2023, student makers may have a Chromebook, a great inexpensive laptop for academic computing. In common cases, it’s hard to do larger dev projects on the laptop alone due to limited speed and capacity. Over the past six months, I have enjoyed using Gitpod.io, a cloud-based code editor and development environment empowering devs with high performance, isolation, and security. With Gitpod, a humble Chromebook workstation becomes a robust dev machine for web development and data science learning.

Gitpod.io is a powerful online Integrated Development Environment (IDE) that allows developers to write, test, and deploy code without the need for local installations of software. Gitpod.io is built on top of Git and leverages the power of Docker containers to provide a lightweight and fast environment for developers to work in. For makers familiar with Visual Studio Code, you’ll find the Gitpod experience very inviting since the tool builds upon the user experience of VSCode. I have used many of my favorite VSCode extensions for .NET, Node.js, Azure, and Python with Gitpod.

You can start a new workspace with just a few clicks, and it automatically clones your repository, installs dependencies, and sets up your environment. This means you can start coding right away without having to worry about configuring your development environment. When teaching new skills to developers, this benefit becomes very helpful to mentors or workshop organizers.
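Workspace setup is driven by a .gitpod.yml file at the root of your repository. As a rough sketch (the task commands are illustrative for a Node.js project):

# .gitpod.yml
tasks:
  - init: npm install      # runs once when the workspace is first created
    command: npm run dev   # runs on every workspace start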

Gitpod.io also provides a range of features to make the development process more efficient. For example, it has built-in support for code completion, debugging, and code reviews, as well as a terminal that allows you to run commands directly from your workspace. In the past week, I have focused on learning new Python data science environments. (PyTorch) Using an easy GitHub template, I had a fast, web-based Python environment running quickly. I also appreciated that the Python notebooks worked well inside of VSCode.

https://databaseline.tech/zoose-3.0/

Gitpod provides a generous free tier to help you get started. If your software team needs more time on the platform, they offer reasonable paid plans. I hope you consider checking out gitpod.io for your next web dev or data science project. In many situations, having access to a high-performance coding environment through a browser helps the flow of your creative project.

To learn more about the origins of this cool tool, check out this podcast with the founders of Gitpod. Their CTO, Chris Weichel, does a good job talking through the benefits of Gitpod for professional software teams and saving pro devs time.
Chris Weichel talks about Gitpod time savings in the enterprise

Make music with code using DotNet Core

Curious about making music with code? As a software engineer and music guy, I have enjoyed seeing the connections between music and computers. The first computer programmer, Ada Lovelace, predicted that computers would move beyond doing boring math problems into the world of creative arts. If a problem can be converted to a system of symbols, she reasoned, computers could help. She used music as her example.

In previous experiments, I have explored the ideas of code and music using TypeScript, NodeJs, and Angular. You can find this work here.

After looking around GitHub, I found a really cool music library for C# devs. I’m hoping to use it to create tools to make quick backup tracks for practicing improv. It’s just fun to explore electronic music, theory, and computational music. Make sure to check out the blog post by Maxim. ( the author of DryWetMidi ) It’s a pretty comprehensive guide to his library.

What is MIDI?

MIDI stands for Musical Instrument Digital Interface. In a file format like WAV or MP3, the computer stores the raw waveform data of the sound. The MIDI file format and protocols operate at a conceptual layer of music data. You can think of a MIDI file as having many tracks, and you can assign different instruments (sounds) to tracks. In each track, the musician records the song as many events. MIDI music events might include turning a note on, turning a note off, engaging the sustain pedal, and changing tempo. MIDI music software like GarageBand, Cakewalk, and BandLab can send the MIDI event data to a software synth, which interprets the events into sound. In general, the MIDI event paradigm can be extended to support other things like lighting, lyrics, and more.

DryWetMidi Features

  • Writing MIDI files: For my experiments, I have used DryWetMIDI to explore projects for making drum machines and arpeggio makers. I’m really curious about using computers to generate the skeleton of songs. Can computers generate a template for a pop song, a fiddle tune, or a ballad? We’re about to find out! DryWetMIDI provides a lower-level API for raw MIDI event data. The higher-level “Pattern” and “PatternBuilder” APIs enable coders to express a single thread of musical ideas; see the sketch after this list for a taste of the style. Let’s say you’re trying to describe a piece for a string quartet. The “PatternBuilder” API enables you to use a fluent syntax to describe the notes played by the cello player. While playing with this API, I have to say that I loved the ability to combine musical patterns. The framework can stack or combine musical patterns into a single pattern. Let’s say you have three violin parts in 3 patterns. The library enables you to blend those patterns into a single idea with one line of code. Maxim showed great care in designing these APIs.
  • Music theory tools: The framework provides good concepts for working with notes, intervals, chords, and other fundamental concepts of music.
  • Reading MIDI files: The early examples show that DryWetMIDI can read MIDI files well. I’ve seen some utility functions that enable you to dump MIDI files to CSVs to support debugging. The documentation hints at a chord extraction API that looks really cool. Looking forward to testing this.
  • Device interaction: DryWetMIDI enables makers to send MIDI events and receive them. This capability might become helpful if you’re making a music tutor app. You can use the music device interaction API to watch note events. The system can provide feedback to the player if they’re playing the right notes at the appropriate time.
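To give you a taste of the fluent style before the larger examples below, here’s a rough sketch based on the PatternBuilder examples in the DryWetMidi documentation; exact method names and signatures may vary between library versions.

using Melanchall.DryWetMidi.Composing;
using Melanchall.DryWetMidi.Interaction;
using Melanchall.DryWetMidi.MusicTheory;
using MusicNote = Melanchall.DryWetMidi.MusicTheory.Note;

// Describe a simple C major arpeggio as a single musical idea
var pattern = new PatternBuilder()
    .SetNoteLength(MusicalTimeSpan.Eighth)
    .Note(MusicNote.Get(NoteName.C, 4))
    .Note(MusicNote.Get(NoteName.E, 4))
    .Note(MusicNote.Get(NoteName.G, 4))
    .Build();

// Render the pattern to a standard MIDI file
var midiFile = pattern.ToFile(TempoMap.Default);
midiFile.Write("c-major-arpeggio.mid", true);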

Visions for MusicMaker.NET for .NET Core

In the following code example, I’ve built an API to describe drum patterns using strings. The strings represent sound at a resolution of 16th notes. Using the “MakeDrumTrack” service, we can quickly express patterns of percussion.

IMidiServices midiServices = new MidiServices();
var service = new MakeDrumTrackService(midiServices);
var command = new MakeDrumTrackCommand
{
    BeatsPerMinute = 50,
    FileName = fileName,
    Tracks = new List<DrumTrackRow>
    {
        new()
        {
            Pattern = "x-x-|x-x-|x-x-|x-x-|x-x-|x-x-|x-x-|x-x-|",
            InstrumentNumber = DrumConstants.HiHat
        },
        new()
        {
            Pattern = "x---|----|x---|----|x---|----|x---|----|",
            InstrumentNumber = DrumConstants.AcousticBassDrum
        },
        new()
        {
            Pattern = "----|x---|----|x--x|----|x---|----|x--x|",
            InstrumentNumber = DrumConstants.AcousticSnare
        },
        new()
        {
            Pattern = "-x-x|x-x-|-x-x|x--x|-xx-|xx--|-xx-|x--x|",
            InstrumentNumber = DrumConstants.HiBongo
        }
    },
    UserId = "system"
};

// act
var response = service.MakeDrumTrack(command);

Using the ArpeggioPlayer service, we’ll be able to render out a small fragment of music given a list of chords and an arpeggio spec.

var tempo = 180;
var instrument = (byte)Instruments.AcousticGrandPiano;
var channel = 1;

var track = new ChordPlayerTrack(instrument, channel, tempo);
var command = ArpeggioPatternCommandFactory.MakeArpeggioPatternCommand1();
var player = new ArpeggioPlayer(track, command);
var chordChanges = GetChords1();  // Am | G | F | E

player.PlayFromChordChanges(chordChanges);

// Write MIDI file with DryWetMIDI
var midiFile = new MidiFile();
midiFile.Chunks.Add(track.MakeTrackChunk());
midiFile.Write("arp1.mid", true);

In the following method, the maker can describe the arpeggio patterns using ASCII-art strings. The arpeggio patterns operate at a resolution of sixteenth notes. This works fine for most pop or electronic music. In future work, we can build web apps or mobile UX to enable the user to design the arpeggio patterns or drum patterns.

public static MakeArpeggioPatternCommand MakeArpeggioPatternCommand1()
{
    var command = new MakeArpeggioPatternCommand
    {
        Pattern = new ArpeggioPattern
        {
            Rows = new List<ArpeggioPatternRow>
            {
                new() { Type = ArpeggioPatternRowType.Fifth, Octave = 2, Pattern = "----|----|----|---s|" },
                new() { Type = ArpeggioPatternRowType.Third, Octave = 2, Pattern = "----|--s-|s---|s---|" },
                new() { Type = ArpeggioPatternRowType.Root, Octave = 2, Pattern =  "---s|-s-s|---s|-s--|" },
                new() { Type = ArpeggioPatternRowType.Fifth, Octave = 1, Pattern = "--s-|s---|--s-|--s-|" },
                new() { Type = ArpeggioPatternRowType.Third, Octave = 1, Pattern = "-s--|----|-s--|----|" },
                new() { Type = ArpeggioPatternRowType.Root, Octave = 1, Pattern =  "s---|----|s---|----|" }
            },
            InstrumentNumber = Instruments.Banjo
        },
        UserId = "mrosario",
        BeatsPerMinute = 120,
        Channel = 0
    };
    return command;
}

The previous code sample writes out a music fragment like the following.

If you’re interested in following my work here, check out the following repo.

Getting Started with PhaserJs and TypeScript

Curious about building 2D games with web skills? In this post, we’ll explore tools and patterns for using PhaserJS to make engaging 2D games. We’ll cover tools to build experiences with our favorite language: TypeScript.
