Build an AFrame.IO Scene on Oculus Quest with Teleportation

Firefox Mixed Reality

Hey web developers! Looking for a fun way to build VR experiences on the Oculus Quest? This tutorial provides a brief guide to building an AFrame.IO VR experience that includes GLTF model loading and teleportation controls. As web developers, we have a unique opportunity to link data, models, and services to WebXR experiences, and it’s exciting to see AFrame.IO work this well on the Oculus platform.

AFrame.IO Script for Oculus WebXR

Fork the script at https://aframeexamples.glitch.me. As of 2023, I feel that @ProfStemkoski has created one of the best collections of AFrame.IO templates. I like how he keeps his examples relatively small; it makes it easier to find a starting point for your project. In “quest-extras.html”, you’ll find an approachable example of a “player movement” component that works with the Oculus Quest. This example also demonstrates object interactivity via raycasting.

<!DOCTYPE html>
<html>

<head>
    <title>A-Frame: Quest movement and interaction</title>
    <meta name="description" content="Moving around an A-Frame scene with Quest touch controllers.">
    <script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
    <script src="js/aframe-environment-component.js"></script>
    <script src="js/controller-listener.js"></script>
    <script src="js/player-move.js"></script>
    <script src="js/raycaster-extras.js"></script>
</head>

<body>

<script>
// if raycaster is pointing at this object, press trigger to change color
AFRAME.registerComponent("raycaster-color-change", {
    init: function () 
    {
        this.colors = ["red", "orange", "yellow", "green", "blue", "violet"];
        this.controllerData = document.querySelector("#controller-data").components["controller-listener"];
        this.hoverData      = this.el.components["raycaster-target"];
    },

    tick: function()
    {
        if (this.hoverData.hasFocus && this.controllerData.rightTrigger.pressed )
        {
            let index = Math.floor( this.colors.length * Math.random() );
            let color = this.colors[index];
            this.el.setAttribute("color", color);
        }

        if (!this.hoverData.hasFocus || this.controllerData.rightTrigger.released)
        {
            this.el.setAttribute("color", "#CCCCCC");
        }
    }
});


</script>

<a-scene environment="preset: default;" renderer="antialias: true;">

    <a-assets>
        <img id="gradient" src="images/gradient-fade.png" />
    </a-assets>

    <a-sky 
        color = "#000337">
    </a-sky>

    <!-- use a simple mesh for raycasting/navigation -->
    <a-plane
        width="100" height="100"
        rotation="-90 0 0"
        position="0 0.01 0"
        visible="false"
        class="groundPlane"
        raycaster-target>
    </a-plane>

    <a-entity 
        id="player" 
        position="0 0 0" 
        player-move="controllerListenerId: #controller-data;
                     navigationMeshClass: groundPlane;">

        <a-camera></a-camera>

        <a-entity 
            id="controller-data" 
            controller-listener="leftControllerId:  #left-controller; 
                                 rightControllerId: #right-controller;">
        </a-entity>

        <a-entity 
            id="left-controller"
            oculus-touch-controls="hand: left">
        </a-entity>

        <!-- experiment with raycasting interval; slight performance improvement but jittery appearance in world -->
        <a-entity
            id="right-controller"
            oculus-touch-controls="hand: right"
            raycaster="objects: .raycaster-target; interval: 0;"
            raycaster-extras="controllerListenerId: #controller-data; 
                              beamImageSrc: #gradient; beamLength: 0.5;">
        </a-entity>

    </a-entity>

    <a-torus-knot 
        p="2" q="3" radius="0.5" radius-tubular="0.1"
        position = "-2.5 1.5 -4"
        color="#CC3333"
        raycaster-target>
    </a-torus-knot>

    <a-box
        width = "2" height = "1" depth = "1"
        position = "-1 0.5 -3"
        rotation = "0 45 0"  
        color = "#FF8800"
        class = ""
        raycaster-target>
    </a-box>

    <a-sphere
        radius = "1.25"
        position = "0 1.25 -5"
        color = "#DDBB00"
        raycaster-target>
    </a-sphere>

    <a-cylinder
        radius = "0.5" height = "1.5"
        position = " 1 0.75 -3"
        color = "#008800" 
        raycaster-target>
    </a-cylinder>

    <a-cone
        radius-bottom = "1" radius-top = "0" height = "2"
        position = "3 1 -4"
        color = "#4444CC"
        raycaster-target>
    </a-cone>

    <a-torus 
        radius="0.5" radius-tubular="0.1"
        position = "2 3 -4"
        rotation = "30 -20 0"
        color="#8800FF"
        raycaster-target>
    </a-torus>

    <!-- demo interaction boxes -->

    <a-dodecahedron
        radius = "0.5"
        position = "-0.8 1 -2"
        color = "#EEEEEE"
        raycaster-target="canGrab: true;"
        raycaster-color-change>
    </a-dodecahedron>

    <a-icosahedron
        radius = "0.5"
        position = "0.8 1 -2"
        color = "#EEEEEE"
        raycaster-target="canGrab: true;"
        raycaster-color-change>
    </a-icosahedron>

</a-scene>

</body>
</html>
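Since this tutorial promises GLTF model loading, it’s worth seeing how little markup that takes. Here is a minimal sketch using A-Frame’s built-in gltf-model component; the asset id and model path are placeholders for your own file:

<a-assets>
    <a-asset-item id="robot" src="models/robot.gltf"></a-asset-item>
</a-assets>

<!-- load the model like any other entity; position and scale it to taste -->
<a-entity gltf-model="#robot" position="0 1 -3" scale="0.5 0.5 0.5"></a-entity>

Drop the asset into the existing <a-assets> block and the entity anywhere inside <a-scene>, and the model will appear in front of the player.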

I also admire the work of Ada Rose Cannon. You can find a very complete starter kit for AFrame.IO here:
https://aframe-xr-starterkit.glitch.me/. This example shows features like collision detection, AR integration, and more.

Let us know if you make anything cool!!


14 AFrame.IO Resources For Your WebXR Project


I’m a big fan of the work of the AFrame.IO community. Thank you to Mozilla, Diego Marcos, Kevin Ngo, and Don McCurdy for their influence and effort to build a fun and productive platform for building WebVR experiences. In this post, I’ve collected a few GitHub repositories and resources to support you in building AFrame experiences.

Talk Abstract: In the next few years, augmented reality and virtual reality will continue to provide innovations in gaming, education, and training. Other applications might include helping you tour your next vacation resort or explore a future architecture design. Thanks to open web standards like WebXR, web developers can leverage their existing skills in JavaScript and HTML to create delightful VR experiences. During this session, we will explore A-Frame.io, an open source project supported by Mozilla enabling you to craft VR experiences using JavaScript and a growing ecosystem of web components.

https://github.com/ngokevin/kframe
Kevin’s collection of A-Frame components and scenes.

https://webvr.donmccurdy.com/
Awesome WebXR from Don McCurdy

https://github.com/feiss/aframe-environment-component
Infinite background environments for your A-Frame VR scene in just one file.

https://github.com/aframevr/aframe-school
Interactive workshop and lessons for learning A-Frame and WebVR.

https://aframe.io/aframe-registry/
Official registry of cool AFrame stuff

https://github.com/donmccurdy/aframe-physics-system
Components for A-Frame physics integration, built on CANNON.js.

Experiment with AR and A-Frame
AFrame now has support for ARCore. Paint the real world with your XR content! Using Firefox Reality for iOS, you can leverage ARKit on your favorite iPad or iPhone.

https://github.com/michaelprosario/aframe
I’ve put together a small collection of demo apps to explore some of the core ideas of AFrame.

AFrame Layout Component
Automatically positions child entities in 3D space, with several layouts to choose from.

Animation
An animation component for A-Frame using anime.js. Also check out the animation-timeline component for defining and orchestrating timelines of animations.

Super Hands
All-in-one natural hand controller, pointer, and gaze interaction library for A-Frame. Seems to work well with Oculus Quest.

A-Frame component for loading Google Poly models
This component enables you to quickly load 3D content from Google Poly.

aframe-htmlembed-component
HTML Component for A-Frame VR that allows for interaction with HTML in VR. Demo

https://github.com/nylki/aframe-lsystem-component
L-System/LSystem component for A-Frame to draw 3D turtle graphics, using the Lindenmayer library as a backend.

Thanks to the amazing work from Mozilla, WebXR usability has improved through specialized Firefox browsers:

Firefox Reality
Firefox Reality for HoloLens 2 – For raw Three.js scripts, it works well. I’m still testing AFrame scenes.

If you live in Central Florida or Orlando, consider checking out our local chapter of the Google Developer Group. We enjoy building a fun, creative community of developers, sharing ideas and code, and supporting each other in the craft of software. Learn more about our community here:

GDGCentralFlorida.org


Build a Space Shooter with Phaser3 and JavaScript (Tutorial 3)

In this blog post series, I want to unpack building a 2D shooter game using Phaser3.js. Phaser3 provides a robust and fast game framework for early-stage JavaScript developers. In this tutorial, we will add aliens to the scene, give them some basic movement, and blow them up. Sound like a plan? Here’s what we will build.

Please make sure to check out Tutorial 1 to get started with this project. You’ll need to build upon the code and ideas from the previous blog posts. (post 1, post 2)

To see the code in a completed state, feel free to visit this link. Let’s start by making some modifications to the scene class to preload an enemy sprite graphic. The PNG file will represent how the alien should be drawn to screen. We associate the name ‘enemy1’ with our PNG file.

class Scene1 extends Phaser.Scene {

    preload() {
        this.load.image('ship', 'assets/SpaceShooterRedux/PNG/playerShip1_orange.png');
        this.load.image('laser', 'assets/SpaceShooterRedux/PNG/Lasers/laserBlue01.png');
        this.load.image('enemy1', 'assets/SpaceShooterRedux/PNG/Enemies/enemyBlack3.png');
    }

    ...

In the Phaser game framework, we associate moving game entities with sprites. To define a sprite, we build out an enemy class. When we put a sprite into our scene, a special function called the constructor runs as the class is instantiated. We’ve designed the constructor so that we can set the enemy location at an (x, y) coordinate and connect it to the scene.

In the constructor, we accomplish the following work. We set the texture of the sprite to ‘enemy1’ and set its position. Next, we connect this sprite to the physics engine of the scene; we’ll use the physics engine to detect when the enemy gets hit by lasers. We also initialize the deltaX factor to 3. It’s not super exciting, but the aliens will shiver from side to side randomly, which is good enough for a simple lesson. After you complete this tutorial, I encourage you to go crazy and make the aliens move any way you want!

class Enemy1 extends Phaser.GameObjects.Sprite {

    constructor(scene, x, y) {
        super(scene, x, y);
        this.setTexture('enemy1');
        this.setPosition(x, y);

        // register with the scene's arcade physics so lasers can collide with this sprite
        scene.physics.world.enable(this);

        this.gameObject = this;
        this.deltaX = 3;
    }

    ...

Adding movement to aliens

So, we’re ready to start moving some aliens. Let’s do this! We’re going to write three simple methods on the Enemy1 class. Following the pattern of all Phaser sprites, the update method will be called every game tick, and it’s your job to tell the sprite how to move. Keep in mind, we’re going to implement a simple random “side to side” behavior. In the update method, we start by picking a random whole number between 0 and 4. If k is 2, we make the sprite move left using the “this.moveLeft()” method; if k is 3, we make it move right using “this.moveRight()”; otherwise, the alien holds its position for that tick.

    update() {
        let k = Math.random() * 4;
        k = Math.round(k);

        if (k == 2) {
            this.moveLeft();
        }
        else if (k == 3) {
            this.moveRight();
        }
    }

    moveLeft() {
        if (this.x > 0) {
            this.x -= this.deltaX;
        }
    }

    moveRight() {
        if (this.x < SCREEN_WIDTH) {
            this.x += this.deltaX;
        }
    }

Make lots of aliens

At this point, you want to see lots of moving aliens. Let’s add the code to the scene class to construct them. In the scene class, the “create” method constructs all objects, including our ship and the aliens. First, we create a special collection object called enemies; we’ll use this collection to track the enemies within the physics system (this.enemies = this.physics.add.group()). On the next line, we create an Array so that we have a simple way to track the enemies that need updating. In the loop, we create 21 aliens, place them in random locations, and add them to both collections (enemies and enemies2).

class Scene1 extends Phaser.Scene {

    ...

    create() {
        this.cursors = this.input.keyboard.createCursorKeys();
        this.myShip = new Ship(this, 400, 500);
        this.add.existing(this.myShip);

        // ======= adding enemies ============
        this.enemies = this.physics.add.group();
        this.enemies2 = new Array();

        let k = 0;
        for (k = 0; k < 21; k++) {
            let x = Math.random() * 800;
            let y = Math.random() * 400;

            this.enemy = new Enemy1(this, x, y);
            this.add.existing(this.enemy);
            this.enemies.add(this.enemy);
            this.enemies2.push(this.enemy);
        }
    }

In order to invoke our update code for all enemies, we need to make one more edit to the scene class. In the “update” method, we add a loop to call “update” on all enemies.

    update() {
        // there's more code related to the ship here 

        let j = 0;
        for (j = 0; j < this.enemies2.length; j++) {
            let enemy = this.enemies2[j];
            enemy.update();
        }
    }

At this point, we should see our aliens wiggling on the screen. And there’s much rejoicing!

Aliens go boom! Let’s do collision detection

In the laser class that we built in the last post, we need to make a few edits. Check out the code below. In the constructor of the ShipLaser, we set the texture, position, and speed, and store the parent scene in “this.scene.” We connect the laser instance to the physics engine using “scene.physics.world.enable.” On the next line, we tell the game framework to check for collisions between this laser and the enemies. When a collision happens, we handle the hit using the “handleHit” function.

class ShipLaser extends Phaser.GameObjects.Sprite {

    constructor(scene, x, y) {
        super(scene, x, y);
        this.setTexture('laser');
        this.setPosition(x, y);
        this.speed = 10;
        this.scene = scene;

        // check out new code below ...
        scene.physics.world.enable(this);
        scene.physics.add.collider(this, scene.enemies, this.handleHit, null, this);
    }

In the handleHit function, you’ll notice that laserSprite and enemySprite are passed as parameters to the method. Phaser hands you these references so that you can define behaviors for both sprites involved in the collision. In this case, we’re just going to destroy both objects.

    handleHit(laserSprite, enemySprite) {
        enemySprite.destroy(true);
        laserSprite.destroy(true);
    }

Hope this has been helpful. Please let me know if you have any questions.


Build a Space Shooter with Phaser3 and JavaScript (Tutorial 1)

Like many computer enthusiasts, I grew up playing video games on the classic Nintendo Entertainment System. Some of my favorite games included Super Mario Bros., The Legend of Zelda, Tetris, and Star Force. It’s been fun to share these game classics with my kids. They still find them fun. In this blog post series, I want to unpack building a 2D shooter game using Phaser3.js. Phaser3 provides a robust and fast game framework for early-stage JavaScript developers.

In exploring the options for 2D game creation with JavaScript, I found that Phaser has grown an impressive community of game makers, and their sample games are well worth exploring.

Space shooter assets from www.kenney.nl

In this post series, I wanted to collect a few resources, tools, and links to help you get started building a space shooter game with Phaser 3. We’ll be drawing from the inspiration of classic games like Space Invaders.

In terms of JavaScript writing style, we’re going to keep the code samples as simple as possible to express core concepts. If you need to get started with JavaScript, I recommend checking out the free courses at Codecademy to start exploring the language. We’ll be drawing inspiration from classic games like Galaga and Star Force. Even though I’m trying to unpack these ideas for early-stage JavaScript coders, I also want to provide examples that leverage ES6 coding structures. The Phaser3 documentation does not focus on this style of code organization. In general, I want to explore programming concepts that provide nice encapsulation and readability. We also want to make sure we can extend these coding patterns well.

The game we’re building may feel similar to this work here:
http://users.aber.ac.uk/eds/CS252_games/mwg2/JSGame/

In this tutorial, we encourage you to set up your work environment with Visual Studio Code. You can also inspect the tutorial code in the following Glitch sample.

https://glitch.com/edit/#!/awg-tutorial?path=shooter.js:1:0

Setting up Visual Studio Code

In this series, I recommend setting up Visual Studio Code on your computer. It’s a robust tool for web development and works well for JavaScript projects. You can find instructions to install VS Code using this link. I also recommend using the Visual Studio Code Live Server extension to make it easier to hot-reload your code changes. Please refer to the following video for details.

Downloading some boilerplate code

To save some time, I have organized a ZIP file with a collection of code and graphics that you can leverage in building your own shooter game. You’re encouraged to play and build with these assets, sounds, and code samples to elaborate on your own game. For this first exercise, we just want to get a ship displayed to the screen and move it around using arrow keys.

  • Download the boilerplate code from here.
  • Extract the ZIP file to a location on your computer. We’ll call this your working directory. For me, I might store my files in “c:\alienWarGame-tutorial1.”

Let’s explore the contents of this boilerplate code

  • shooter.js: This JavaScript file will contain our game objects and behavior code.
  • shooter.html: This HTML file provides a home for our JavaScript game code (see the code below). Please note that we’re downloading phaser.js (version 3.11) from a content delivery network. In the body of the HTML file, we import our shooter.js. Remember to install the “Live Server” extension for Visual Studio Code. You will then be able to right-click on the HTML file and select “Open with Live Server.” This loads the file on a small HTTP server on your computer; all Phaser games need to be hosted on a web server.

<!DOCTYPE html>
<html>
<head>
    <title>Space Shooter Tutorial 1</title>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/phaser/3.11.0/phaser.js"></script>
</head>
<body>
    <script src="shooter.js"></script>
</body>
</html>

Breaking Down Shooter.js

All Phaser games start with a little bit of configuration. In this “config” object below, we establish a screen size of 800 pixels by 600 pixels. This will be our drawing surface for the game. To help detect if objects bump into each other, we will be leveraging the ‘arcade’ physics engine included in Phaser3.

var SCREEN_WIDTH = 800;
var SCREEN_HEIGHT = 600;
var config = {
    type: Phaser.AUTO,
    width: SCREEN_WIDTH,
    height: SCREEN_HEIGHT,
    physics: {
        default: 'arcade'
    }
};

Ok. Let’s build our space ship player object. In Phaser 3, the team has been working to improve the framework to leverage modern JavaScript features like ES6 classes. The class will help us store the properties of the ship, including things like the location of the ship on the screen, object state, and texture. The class will also include several methods for moving the ship around. In the constructor method, we accept a scene object and a location on the screen (x, y). The calls to “super” and “setPosition” associate the sprite with the parent scene and location. We call “setTexture” with the parameter ‘ship’ to associate the ship graphic with the sprite. Finally, we set some variables (deltaX, deltaY) to configure how much the ship will move when we press keys.

class Ship extends Phaser.GameObjects.Sprite  {

    constructor(scene, x , y) {
        super(scene, x, y);
        this.setTexture('ship');
        this.setPosition(x, y);
        this.deltaX = 5;
        this.deltaY = 5;
    }

    moveLeft() {
        if (this.x > 0) {
            this.x -= this.deltaX;
        }
    }

    moveRight() {
        if (this.x < SCREEN_WIDTH) {
            this.x += this.deltaX;
        }
    }

    moveUp() {
        if (this.y > 0) {
            this.y -= this.deltaY;
        }
    }

    moveDown() {

        if (this.y < SCREEN_HEIGHT) {
            this.y += this.deltaY;
        }
    }

    preUpdate(time, delta) {
        super.preUpdate(time, delta);
    }
}

In the following method, ‘moveLeft’, we implement the math to move the sprite to the left. Since we’re moving left, we subtract deltaX (5 pixels) from the current x position. We only execute this code if x is greater than zero, which keeps the ship on screen. Most of the other movement methods operate in a similar manner.

    moveLeft() {
        if (this.x > 0) {
            this.x -= this.deltaX;
        }
    }

Let’s make a scene!

In the following class, we establish a Phaser scene object. It has a few simple tasks. In the “preload” method, we need to load the ship sprite graphic from our assets folder.

class Scene1 extends Phaser.Scene {

    constructor(config) {
        super(config);
    }

    preload() {
        this.load.image('ship', 'assets/SpaceShooterRedux/PNG/playerShip1_orange.png');
    }

    create() {
        this.cursors = this.input.keyboard.createCursorKeys();
        this.myShip = new Ship(this, 400, 500);
        this.add.existing(this.myShip);
    }

    update() {
        if (this.cursors.left.isDown) {
            this.myShip.moveLeft();
        }

        if (this.cursors.right.isDown) {
            this.myShip.moveRight();
        }

        if (this.cursors.up.isDown) {
            this.myShip.moveUp();
        }

        if (this.cursors.down.isDown) {
            this.myShip.moveDown();
        }

        if (this.cursors.space.isDown) {
            // shooting guns goes here
        }
    }
}

The create method does additional setup for our scene. We establish a property called cursors that will be used for detecting keyboard input like the arrow keys. We also create a ship instance at (400, 500) and add it to the scene.

    create() {
        this.cursors = this.input.keyboard.createCursorKeys();
        this.myShip = new Ship(this, 400, 500);
        this.add.existing(this.myShip);
    }

We can now start moving the ship around using the update method. The if statements in update detect the various arrow keys, and we call the appropriate method on the ship. In other words, if we press the LEFT arrow key, we call this.myShip.moveLeft().

    update() {
        if (this.cursors.left.isDown) {
            this.myShip.moveLeft();
        }

        if (this.cursors.right.isDown) {
            this.myShip.moveRight();
        }

        if (this.cursors.up.isDown) {
            this.myShip.moveUp();
        }

        if (this.cursors.down.isDown) {
            this.myShip.moveDown();
        }

        if (this.cursors.space.isDown) {
            // shooting guns goes here
        }
    }
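That empty space-bar branch is where shooting will live. As a teaser, here is a minimal sketch that reuses the ShipLaser class shown in the Tutorial 3 section earlier (it assumes the scene also defines the enemies group from that tutorial; note this naive version fires a laser on every frame the key is held):

    update() {
        // ...movement checks as above...

        if (this.cursors.space.isDown) {
            // spawn a laser just above the ship's nose
            let laser = new ShipLaser(this, this.myShip.x, this.myShip.y - 40);
            this.add.existing(laser);
        }
    }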

Here’s the final step: we associate our game configuration with a new Phaser game and then add our scene. The third argument (true) tells Phaser to start the scene immediately.

var game = new Phaser.Game(config);
game.scene.add('scene1', Scene1, true, { x: 400, y: 300 });

If you're really excited to learn more, I have found learning from the official examples very helpful.

Phaser Labs Examples

Play Time With Angular 15 and PhaserJS

Curious about building 2D games with web skills? In this online meetup, we'll explore tools and patterns to use PhaserJs and JavaScript to make engaging 2D games. We'll cover tools to make experiences with our favorite language: TypeScript. We'll also cover some cool updates from the latest release of Angular. Come join us to learn about the latest innovations from our Angular community. Look forward to connecting with like-minded Orlando developers and designers. Hope you can join us! Make sure to bring a friend too!

Music Maker: Using NodeJS to Create Songs


In my graduate school career, I had the opportunity with our evolutionary complexity lab to study creating music using neural networks and interactive genetic algorithms. It’s fun to study these two topics together since I enjoy crafting code and music. If you enjoy this too, you might enjoy Music Maker, a library I created to explore generating music with code. Sonic Pi by Sam Aaron, a popular tool for teaching music theory and Ruby code, inspired me to build this tool. My team from InspiredToEducate.NET enjoyed teaching a coding workshop on music using Sonic Pi. We, however, encountered the challenge of installing Sonic Pi on a lab of computers. The workshop targets 5th-grade to 8th-grade students who have good typing skills. It would be cool if something like Sonic Pi supported features like Blockly coding too.

In terms of musical motivations, I wanted to provide features like the following:

  • Like Sonic-Pi, the tool should make it easy to generate chords and scales.
  • I want it to feel simple like Sonic-Pi. I, however, don’t think I’ve achieved this yet.
  • I wanted the tool to have a concept of players who can generate music over a chord progression. I believe it would be cool to grow an ecosystem of players for various time signatures and musical styles.
  • I wanted to support the MIDI file format for output, making it possible to blend music from this tool into sequencers broadly available on the market. This also enables us to print out sheet music from the MIDI files.
  • Building drum patterns can be tedious at times, so I wanted to create a way to express rhythm simply.
  • We desired to have a browser-based interface that a teacher could install on a Raspberry Pi or some other computer. This was a key idea from one of my teachers.  I’m focusing on building a tool that works on a local area network.  (not the broad internet)
  • From a coding perspective, we needed to build a tool that could interface with Blockly coding someday. JavaScript became a logical choice. I’ve wanted to explore a project that used TypeScript, NodeJS, and Express too. I especially enjoyed using TypeScript for enums, classes, abstract classes, etc.

Here’s a sample MIDI file for your enjoyment:  jazz midi test

I do want to give a shout-out to David Ingram of Google for putting together jsmidgen. David’s library handled all the low-level concerns for generating MIDI files, adding tracks, and adding notes. Please keep in mind that MIDI is a music protocol and file format that focuses on the idea of turning notes on and off, like switches, over time. Make sure to check out his work. It’s a great NodeJS library.

Here’s a quick tour of the API. It’s a work in progress.

Where do I get the code?

https://github.com/michaelprosario/music_maker

  • Run app.js to load the browser application. Once the application is running, you should find it at http://localhost:3000.
  • Make sure to check out the demo TypeScript files and music_maker.ts for sample code.

JSMIDGen reference

To learn more about jsmidgen, please visit https://github.com/dingram/jsmidgen

Hello world

var fs = require('fs');
var Midi = require('jsmidgen');
var Util = require('jsmidgen').Util;
import mm = require('./MusicMaker')

var beat=25;
var file = new Midi.File();

// Build a track
var track = new Midi.Track();
track.setTempo(80);
file.addTrack(track);

// Play a scale
var scale = mm.MakeScale("c4", mm.ScaleType.MajorPentatonic,2)

for(var i=0; i<scale.length; i++){
    track.addNote(0,scale[i],beat*2);
}

// Write a MIDI file
fs.writeFileSync('test.mid', file.toBytes(), 'binary');

Creating a new file and track

var file = new Midi.File();
var track = new Midi.Track();
track.setTempo(80);
file.addTrack(track);

// Play cool music here ...

Play three notes

track.addNote(0, mm.GetNoteNumber("c4"), beat);
track.addNote(0, mm.GetNoteNumber("d4"), beat);
track.addNote(0, mm.GetNoteNumber("e4"), beat);

Saving file to MIDI

fs.writeFileSync('test.mid', file.toBytes(), 'binary');

Playing a scale

var scale = mm.MakeScale("c4", mm.ScaleType.MajorPentatonic,2)

for(var i=0; i<scale.length; i++){
    track.addNote(0,scale[i],beat*2);
}

Playing drum patterns

var DrumNotes = mm.DrumNotes;
var addRhythmPattern = mm.AddRhythmPattern;
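// pattern string: "x" = play the note, "-" = rest, "|" = visual beat separator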
addRhythmPattern(track, "x-x-|x-x-|xxx-|x-xx",DrumNotes.ClosedHighHat);

Setup chord progression

var chordList = new Array();
chordList.push(new mm.ChordChange(mm.MakeChord("e4", mm.ChordType.Minor),4));
chordList.push(new mm.ChordChange(mm.MakeChord("c4", mm.ChordType.Major),4));
chordList.push(new mm.ChordChange(mm.MakeChord("d4", mm.ChordType.Major),4));
chordList.push(new mm.ChordChange(mm.MakeChord("c4", mm.ChordType.Major),4));

Play random notes from chord progression

var p = new mm.RandomPlayer();
p.PlayFromChordChanges(track, chordList, 0);

Play root of chord every measure

var p = new mm.SimplePlayer();
p.PlayFromChordChanges(track, chordList, 0);

Tour of chord players

var chordList = new Array();

// setup chord progression
chordList.push(new mm.ChordChange(mm.MakeChord("e4", mm.ChordType.Minor),4));
chordList.push(new mm.ChordChange(mm.MakeChord("c4", mm.ChordType.Major),4));
chordList.push(new mm.ChordChange(mm.MakeChord("d4", mm.ChordType.Major),4));
chordList.push(new mm.ChordChange(mm.MakeChord("c4", mm.ChordType.Major),4));

var chordPlayer = new mm.SimplePlayer();
chordPlayer.PlayFromChordChanges(track, chordList, 0);

chordPlayer = new mm.Arpeggio1();
chordPlayer.PlayFromChordChanges(track, chordList, 0);

chordPlayer = new mm.RandomPlayer();
chordPlayer.PlayFromChordChanges(track, chordList, 0);

chordPlayer = new mm.BassPLayer1();
chordPlayer.PlayFromChordChanges(track, chordList, 0);

chordPlayer = new mm.BassPLayer2();
chordPlayer.PlayFromChordChanges(track, chordList, 0);

chordPlayer = new mm.BassPLayer3();
chordPlayer.PlayFromChordChanges(track, chordList, 0);

chordPlayer = new mm.OffBeatPlayer();
chordPlayer.PlayFromChordChanges(track, chordList, 0);


If you write music using Music Maker, got ideas for features, or you’d like to contribute to the project, drop me a line in the comments or contact me through GitHub!  Hope you have a great one!


Remotely Control IoT devices using NodeJs, Firebase, and Johnny5

Hello, makers! In today’s blog post, I wanted to share a simple way to remotely control IoT devices using NodeJS and Google Firebase. Let’s say you’re trying to remotely control a small Lego crane like this one. You’ll notice there are two servo motors connected to an Arduino. You can learn more about how to build this in our post on Arduino and Lego motor control. For the scope of this blog post, let’s say we wanted to remotely control the servo motor at the bottom from any place in the world. How would we do that?


Firstly, check out Johnny 5, a very nice NodeJS library for controlling IoT devices like Arduinos, Raspberry Pis, and more. I really appreciate the clarity of its documentation and API. You can do a lot with a small amount of JavaScript.

I started to wonder if we could connect Johnny 5 to the real-time database of Firebase. What’s a real-time database? In a traditional relational database like MySQL, you need to declare database tables and structures. You can make a database table to store a list of persons and their addresses (see sample code here). After doing that, you can insert data into that table. In this traditional database world, however, you can’t easily listen for inserts into a database table and write code to react to that event.

The Firebase real-time database organizes information in a tree structure.  You can store information in that tree any way that you want.  Other users who have access to the database can listen for data changes at various locations in the tree and write code to react to that event.  Check out the following video to learn how the Firebase Real-time database works. Especially listen to how the value change event works.
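For this project, the tree can stay tiny. Here’s a sketch of the layout assumed by the scripts described below, with a single key holding the most recent angle:

{
  "servo_angle": 90
}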

This link reviews the details of getting started with Firebase on the web or NodeJS: https://firebase.google.com/docs/database/web/start

So, let’s explore the code for moving a single servo motor using Johnny 5 and Firebase.

At the top of the JavaScript file, we import johnny-five and firebase-admin. We start our Firebase database session by calling “initializeApp.”

On line 9, we create a “Board” object. I have my Arduino connected to my computer by a serial cable. Johnny5 handles this situation by default.

Once the board enters “ready” state, we create an instance of a servo motor connected to pin 10.

On lines 14 and 15, we connect to a storage location called ‘servo_angle.’ Using the servo_angle “on value” event, we listen for changes to this location and set the angle of the servo. And that’s it!
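The original post embeds this script as a gist, which is what the line numbers above refer to. Here is a minimal sketch consistent with that description; the service-account file and database URL are placeholders for your own Firebase project:

const five = require('johnny-five');
const admin = require('firebase-admin');

// start the Firebase session
admin.initializeApp({
    credential: admin.credential.cert(require('./serviceAccountKey.json')),
    databaseURL: 'https://your-project.firebaseio.com'
});

const board = new five.Board(); // Arduino over USB serial by default

board.on('ready', function () {
    const servo = new five.Servo(10); // servo signal wire on pin 10

    // react whenever anyone, anywhere, writes a new value to servo_angle
    admin.database().ref('servo_angle').on('value', function (snapshot) {
        const angle = snapshot.val();
        if (angle !== null) {
            servo.to(angle); // move the servo to the requested angle
        }
    });
});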

To write values into “servo_angle”, check out the following code.

In this script, we connect to the Firebase database in the same way. On line 10, we accept an angle from command line arguments. On line 16, we write that angle to ‘servo_angle’.
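Again, the original script lives in a gist; here is a minimal sketch under the same assumptions (the file name set-angle.js is hypothetical):

const admin = require('firebase-admin');

admin.initializeApp({
    credential: admin.credential.cert(require('./serviceAccountKey.json')),
    databaseURL: 'https://your-project.firebaseio.com'
});

// read the desired angle from the command line, e.g. `node set-angle.js 90`
const angle = parseInt(process.argv[2], 10);

// write the angle; every listener on servo_angle reacts immediately
admin.database().ref('servo_angle').set(angle, function (error) {
    if (error) {
        console.error('write failed:', error);
    }
    process.exit(error ? 1 : 0);
});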

It’s a very simple pattern for making internet-connected robots or home automation projects.

We love to hear from our readers. Leave a comment below if you have other ideas for internet-connected robots, toys, or devices.


Building Chat Bot Apps with Google Actions


In science fiction, we have dreamed about the day when we’ll talk to our computers to make things happen. In Star Trek, crew members can talk to the Holodeck computer to “program” and explore amazing virtual experiences. Tony Stark (a.k.a. Iron Man) constantly gets situational awareness from Jarvis during battle by simply talking to his device. We’re still a long way from Holodecks, R2-D2, and Iron Man. As developers and makers, however, we can explore the potential of voice interactions with mobile devices today.

The Google Actions toolkit enables you to integrate your services into the voice command interface of a Google assistant. This technology touches millions of devices including phones, cars, and assistant devices. You can also integrate into services provided by Google or third party services.

This past weekend, our local Google Developer Group of Central Florida organized a hackathon to explore applications of voice user interfaces and Google Actions. We enjoy organizing community workshops like this and love seeing our community come together. It’s always a great opportunity to learn, meet people, and generate new ideas.


In general, Google Actions work well in three major use cases: users on the go, people starting their day, and people relaxing at the end of the day. For my Google Actions app, I tried to think of an application that would support the leadership team of our GDG. We recently adopted a Trello board to help us organize tasks for our club. If you’re not familiar with Trello, it’s simply a task management system popular with Agile teams (see a screenshot below). As a busy dad and professional, I typically think of stuff that needs to be accomplished for the GDG while I’m driving.

Trello board

I decided to create a simple Google Action to enable me to collect a task and share it on our leadership Trello board. I tried to explore this task in three phases.

1. Get to know the Google Actions API: I used a variety of resources to get to know the Google Actions interface. I, however, found this code lab very helpful.  After doing this code lab, I was able to slightly elaborate on the tutorial to create my own stuff.

https://codelabs.developers.google.com/codelabs/actions-1

2. Build Trello integration code to add a task to a list: On my local laptop, I started playing around with a few options for adding task information to a list.  I found the “node-trello” package for NodeJs worked really well.

https://github.com/adunkman/node-trello

3. Integrate the Google Actions API and the NodeJS code together

Here’s a quick tour of the Google Actions conversation setup. Using Dialogflow, it’s really cool that you can create conversational interface actions with almost no code. JavaScript code becomes necessary if you need to integrate services or databases together. Let’s focus on one intent: adding a task. In general, intents enable you to accomplish a focused interaction with your Google Assistant. In my case, the user calls my action by doing the following:

  • Ok, Google.  Let me talk to GDG tasks
    • The system replies with a greeting and a prompt for a command
  • The user can reply “add task.”

In this intent, we can configure the system to respond to similar phrases to “add task.”

Add task

At this point, the intent collects two pieces of information (task name and task details). We then configure the intent to be fulfilled with custom code.
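The fulfillment code itself is embedded in the original post; here is a minimal sketch of what it might look like, assuming the actions-on-google v2 library, the node-trello package mentioned above, and hypothetical Dialogflow parameter names taskName and taskDetails:

const functions = require('firebase-functions');
const { dialogflow } = require('actions-on-google');
const Trello = require('node-trello');

const app = dialogflow();
// the Trello key/token and target list id are placeholders
const trello = new Trello('TRELLO_API_KEY', 'TRELLO_TOKEN');
const LIST_ID = 'YOUR_TRELLO_LIST_ID';

app.intent('add task', (conv, params) => {
    // parameter names must match those defined in the Dialogflow intent
    const { taskName, taskDetails } = params;
    return new Promise((resolve, reject) => {
        trello.post('/1/cards', { idList: LIST_ID, name: taskName, desc: taskDetails },
            function (err) {
                if (err) return reject(err);
                conv.close(`Okay, I added "${taskName}" to the board.`);
                resolve();
            });
    });
});

exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);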

Hope this code sample helps you understand the experience of building a Google Actions application.

Here’s a few more resources and ideas to help you write your own Google Actions app.

  • https://developers.google.com/actions/templates/
    • These are great tutorials for “non-coders” and programmers.  These templates are designed for teachers, educators or people curious about chat bot building.   The tutorials are designed to be very short.

“Growing Your Developer Career using Open Source” via @JohnBaluka


Whether you’re just starting in your career or you’ve been working in the industry for years, you can benefit from the culture and practice of open source. I want to thank John Baluka for sharing his reflections and personal journey on this topic. I really appreciate John’s fresh business perspective on using open source to advance your learning and business. I had the opportunity to hear him share his talk on this topic during an ONETUG meeting this past week. If you’re in the Orlando area, make sure to check out ONETUG. They’re a great community of programming professionals.

Some programming communities have stronger cultures of sharing and open source than others. As web application developers, we naturally love open source software. Programmers who leverage NodeJS and JavaScript operate in a very open way because the world wide web operates in that manner. I’ve been working as a C# developer for over 20 years. I’m very excited that our .NET community of developers has learned lessons from other languages and become open and collaborative. I still think it’s crazy that Microsoft has become the number one contributor to open source software. Stuff that used to be secret sauce has become open. On top of that, Microsoft has now bought GitHub.com. I look forward to seeing Microsoft and GitHub use their influence to increase the impact of open culture.

I believe that John hit on 5 thoughtful benefits for getting to know open source solutions. In John’s view, you need to be strategic on your investment of time.

1. Personal learning and growth: In John’s journey, he wanted to find an example of a large software architecture written in .NET and ASP.NET MVC. He selected NopCommerce, a cool e-commerce platform for .NET developers. John organized lessons and meta-patterns from dissecting this project into a talk. Some of the topics included dependency injection, language localization, data validation, plug-in architecture, and agile design. John offered us a challenge to select and study an open source project as a tool to advance your career in architecture or software leadership. On InspiredToEducate.NET, we have talked about this principle in the context of the maker movement. Everyone can learn something from reading code, exploring a 3D model, dissecting an electronics schematic, music, art, etc. What’s an open source project that fits into your space of passion?

2. Open source software enhances your public profile of work: When you hire an interior designer, how would you make your decision? You probably would review pictures of previous work to see if the designer fits with your tastes and requirements. For the average job interview in software engineering, it’s typically hard to show code from your previous gig. (i.e. corporate secrets, policies) Most companies don’t do their work in open source. By getting involved and contributing to an open source project, you can enhance your public profile of work. How does your GitHub reflect your strengths and skills?

3. Speed to solution: It’s important to remember that software developers aren’t paid to write code. We provide value and solve business problems. Open source software enables our teams to reduce time to market. Phil Haack, creator of ASP.NET MVC and engineer at GitHub, shared a reflection that businesses should always focus on their unique value proposition (i.e., what makes your company different from other options). Open source provides an opportunity for companies to partner or collaborate on elements outside of your unique value proposition. Why write a big workflow system or content system when you can integrate one?

4. Open source is social: To advance your career, it’s important to expand your network and relationships. Growing authentic relationships becomes critical in growing your business. By collaborating on open source, you have an opportunity to learn from others. You have the opportunity to invest and support peers around you. I personally get excited about supporting the growth of others.

5. Business models around open source software: I really appreciate John’s reflections on this aspect. I admire his pragmatic approach to selecting NopCommerce. On one level, the open source project followed good and clean patterns. In his view, the project isn’t perfect, but you can learn something from it. By sharing his reflections on the software design during user group meetups and conferences, he started getting consulting requests to support NopCommerce integrations. He challenged us to strategically select an open source project for learning with an eye toward job growth. In the NopCommerce space, you can earn money by building store themes, building plugins, and providing support or integrations. Here are a few more blog posts that elaborate on this idea.

https://opensource.com/article/17/12/open-source-business-models
https://handsontable.com/blog/articles/5-successful-business-models-for-web-based-open-source-projects

What open source projects connect to your strengths, passions, and your career growth strategy? This was probably my favorite concept from John’s talk.

Again, I want to thank ONETUG and John Baluka for making this talk possible. I also appreciate John taking time after the meetup to hang out. I appreciate his accessibility.

Make sure to check out John’s talk and his resources.


10 Trends You Need to Know from Google I/O 2018


What’s Google I/O?

Google I/O is an annual software developer-focused conference which features a keynote on the latest updates and announcements by Google. The conference also hosts in-depth sessions focused on building web, mobile, and enterprise applications with Google and open web technologies such as Android, machine learning, TensorFlow, Chrome, Chrome OS, Google APIs, Google Web Toolkit, and App Engine.

In this blog post, I’m going to share my favorite announcements from the conference. Hope these items serve makers, app developers, and web developers.

Angular Updates

It’s Christmas time for Angular developers. Check out this talk to learn what’s new with Angular, Google’s platform for scalable front-end web development. Using Angular 5 at work has been fun. Love working with TypeScript and the component model. In general, it helps reduce common JavaScript errors. It has also created a great deal of unity between our back-end and front-end code.

Abstract: Angular has a flag that will cut hundreds of kilobytes off of your bundles, improve mobile experiences, and allow you to dynamically create components on the fly. Learn about these changes and what they mean for your applications.

Android Studio 3.2

Google has worked to improve the application model for Android for simplicity, power, and developer speed. I’m curious to test the speed of the new Android emulator.

Abstract: The last couple of years have seen a plethora of new features and patterns for Android developers. But how do developers know when to use existing APIs and features vs. new ones? This session will help developers understand how they all work together and learn what they should use to build solid, modern Android applications.

AIY

For our makers and tinkering readers, you might check out Google AIY projects. I find it interesting that you can go to your local Target store and pick up a Google AIY kit so that you can start experimenting with machine learning, voice control, and computer vision.

The following MagPi issue covers the AIY voice kit:
https://www.raspberrypi.org/magpi-issues/Essentials_AIY_Projects_Voice_v1.pdf

Abstract: AIY efforts at Google puts AI into various maker toolkits, to make things more playful and, more importantly, to help you solve real problems that matter to you and your communities. Join this session to learn how you can use these kits to start adding natural human interaction to your maker projects. It will feature demos on the Voice and Vision Kits, and some amazing AIY experiments built by the makers community around the world.

Flutter.IO

A few years ago, I tried the Dart programming language and enjoyed it. For background, I work as a web app developer using C# and JavaScript, and I find Dart very approachable. In the Flutter.IO project, Google has worked to expand the influence of Dart into building native iOS and Android apps. I find the “hot reload” feature of Flutter.IO very compelling; it’s awesome to go from idea to device quickly. My only reservation with Flutter is that it doesn’t have a declarative model for expressing components (or widgets).

Abstract: Come watch a single developer code a beautiful app in real-time from the ground-up that runs natively on iOS and Android, all from a single codebase. Along the way, learn how to marry Flutter’s latest multi-platform reactive UI elements, accelerometer, and audio capabilities with powerful Firebase SDK functionality. See this app painted to life piece-by-piece in under 40 minutes thanks to Flutter’s sub-second hot reload developer experience.

ARCore

Google’s ARCore framework received several notable updates. Firstly, Google ARCore enables developers to write Android apps that sense your environment. With these capabilities, developers can place 3D content layered over a view of the real world. This technology unlocks an amazing class of games, collaboration, and design applications that serve users in their physical spaces. The first version of Google ARCore focused on horizontal surfaces. Google has upgraded ARCore to sense vertical surfaces (walls) and pictures (i.e., custom tracking markers). Google now offers a way to share markers or points of interest with multiple users. Let’s say you’re making an AR pool game using your dining room table. Multiple players of your game can collaboratively target the same dining room table and participate in a shared game experience. It should be noted that you can “instant preview” ARCore apps using the ARCore Unity tools. This really helps you reduce your iteration cycles.

Abstract: Learn how to create shared AR experiences across iOS and Android and how to build apps using the new APIs revealed in the Google Keynote: Cloud Anchor and Augmented Images API. You’ll come out understanding how to implement them, how they work in each environment, and what opportunities they unlock for your users.

What’s new on Android on ChromeBooks

On InspiredToEducate.NET, we’re passionate about serving students, teachers, and makers of all ages. Since my wife works as a college professor, we’re constantly geeking out over various tools in educational technology. It’s very clear that Chromebooks have made a positive impact in K-12 education. According to this article, Google Chromebooks command 58% of laptop devices in the K-12 market. That translates to millions of devices. It’s cool to see Google expand the capabilities of Chromebooks using their innovations in Android.

Abstract: With the Play Store on Chromebooks gaining traction, developers need to understand how to build high-quality apps and content for the new form factor. Attend this session to learn about adding support for larger screens, mouse and trackpad support, keyboard support (i.e. shortcut keys), free-from resizable windows, and stylus support for devices that have them.

Android Things

Abstract: Android Things is Google’s platform to support the development of Internet of Things devices. This talk will provide an update on the program and the future roadmap. Learn more about the breadth of hardware reference designs, the operating system, building apps, device management, and support from chip vendors. It will also discuss use-cases where edge computing can be used, and examples of prototype-to-production that demonstrate how Android Things is ready for commercial products.

Sceneform

Abstract: Sceneform SDK is a new library for Android that enables the rapid creation and integration of AR experiences in your app. It combines ARCore and a powerful physically-based 3D renderer. In this session, you’ll learn how to use the Sceneform SDK, and how to use its material system to create virtual objects that integrate seamlessly with the environment.

TensorFlow Lite

Over the years, Google has focused its energy on advancing machine learning capabilities. They have now entered a phase where application developers can weave the power of machine learning brains (machine learning models) into their applications. Google TensorFlow enables app developers to train powerful neural network models so that computers can learn and use that intelligence in applications. In Google Photos, I can do weird searches like “flowers in Macon, GA.” Since Google has fast neural networks that can identify flowers, Google can quickly return a list of photos with flowers matching my expectations. Wouldn’t it be cool if you could put these capabilities into your Raspberry Pi or Android app? TensorFlow Lite enables you to leverage pre-trained TensorFlow models in your apps. I’m very impressed by their focus on speed and efficiency.

Abstract: TensorFlow Lite enables developers to deploy custom machine learning models to mobile devices. This technical session will describe in detail how to take a trained TensorFlow model, and use it in a mobile app through TensorFlow Lite.

Google Lens

The following video demos some of Google’s cool innovations in computer vision. Using Google Lens, the Photos app can identify objects in view. In the future, you’ll be able to point your phone at a store and, using an AR view, Google can tell you ratings, descriptions, and pictures related to the store.

Join the conversation at our next Google Developer Group.

Interested in digging deeper into these technology announcements? What are the consequences of connecting some of these ideas together? What opportunities do these capabilities give to our local developer community?

We’ll dig deeper into the latest announcements from Google I/O conference. We’ll discuss the various pathways for leveraging these technologies in your career. We’re excited to discuss how these tools can benefit local startups, makers and businesses in Orlando, FL.

When: May 24, 2018 – 6pm to 9pm

https://www.meetup.com/GDG-Central-Florida/events/247996681/

6 Resources To Build Cool Minecraft Mods with Python

Looking for a fun way to explore learning to code with your students or children? Consider writing Minecraft mods using Python. In our house, we continue to enjoy building (and destroying) together as a family in shared Minecraft worlds. I appreciate that Minecraft helps the kids exercise their thinking about working in 3D. The Python language, favoring concise expression, fast feedback, and quick iteration, will keep students engaged.


As a parent, I have been searching for ways to make learning math more attractive for one of my kids. In this particular case, he loves to read and often finds ways to avoid doing tasks related to math. I’m so thankful that he has developed a joy in reading; I don’t think I had that motivation at his age. During a trip to a bookstore, he expressed interest in the book “Learn to Program with Minecraft” by Craig Richardson. As an experiment, we picked up the book to gauge his engagement level. In one week, he got to chapter 4 and started requesting that we practice coding Minecraft together after school. I felt something like this.

Seymour Papert, a key influence in the learning theory of constructionism, aspired to create a math world where children would play with math as a learning tool. I believe that he would be proud of the various open source projects that connect Minecraft to computational thinking.

To help you get started with coding Minecraft mods with Python, I wanted to share a few tools to help you get started.

1. Raspberry Pi: The Raspberry Pi is a great $40 computer built to engage students in playing with physical computing and computer science. If you run the Raspbian operating system on your Raspberry Pi, you already have a copy of Minecraft installed along with the related Python tools.

2. Setup for Windows and Mac: If you run Minecraft (Java Edition) on Windows or macOS, you will find the following tutorial from Instructables helpful. The tutorial walks you through the process of setting up your Minecraft server, setting up the Python API, and configuring your Minecraft environment.

http://www.instructables.com/id/Python-coding-for-Minecraft/

3. Getting Started with Minecraft Pi: This resource from the Raspberry Pi foundation provides a concise set of steps to get started. Make sure to check out the link on playing with TNT. (The kids enjoy that one!)

https://projects.raspberrypi.org/en/projects/getting-started-with-minecraft-pi/

4. MagPi Magazine issue on Minecraft coding: I’m a big supporter of the MagPi Magazine. I often give this magazine as a gift to my geek friends. They recently published an issue on Minecraft coding that you’d enjoy.

https://www.raspberrypi.org/magpi-issues/Essentials_Minecraft_v1.pdf

5. Minecraft Python API cheat sheet: For experienced programmers who need a quick reference guide to the Minecraft Python API, I found the following link helpful.

http://www.stuffaboutcode.com/p/minecraft-api-reference.html?m=1

6. www.codecademy.com: This interactive tutorial provides a fun way to get started with Python programming and many other languages. People learn best when they see a new idea and immediately apply it. Codecademy was designed with this learning pattern in mind. You are coached to immediately apply every new programming concept in an online code editor.
