3D In 2D Canvas

In my recent simulation of an AC generator, I show the same device from two different views: A top view and a front view. To accomplish that, I used a clever technique called 3D Projection. Here, I’m going to talk about how I did that in JavaScript and rendered it on canvas.

What is 3D Projection?

Basically, it means that I define items in 3D space (each point is defined as an array [x, y, z]) and then plot them in 2D space. For simplification, the 2D render can only be seen in the xy, xz, or yz plane.

Defining in 3D

Each point is defined as an array, [x, y, z].

Since we are using Canvas, it is easiest to define a rectangular face as a 2D array of 5 points: the first vertex, the remaining 3 vertices, and then the initial vertex again. This allows us to moveTo() the first point and then lineTo() through the rest of the array.

Since faces exist in 2D, and we’re defining 3 coordinates per point, for a single face either the x, y, or z coordinate will remain constant across all points.

A cuboid is defined as a 3D array of faces. Depending on our views, we might not need all 6 faces to define a cuboid and can get away with only two or three.


var cuboid = [
  // xy face
  [
    [xa1, ya1, za],
    [xa2, ya2, za],
    [xa3, ya3, za],
    [xa4, ya4, za],
    [xa5, ya5, za]
  ],
  // xz face
  [
    [xb1, yb, zb1],
    [xb2, yb, zb2],
    [xb3, yb, zb3],
    [xb4, yb, zb4],
    [xb5, yb, zb5]
  ]
];

Rendering in 2D

Since we’re defining faces parallel to the primary planes, it is easiest to render the views of the primary planes themselves.

Imagine a shape on the xy plane. All points that define it can be written as (xi, yi, 0). Similarly, any shape on the xz plane defines all points as (xi, 0, zi). Basically, whichever axis you’re not rendering the shape on is 0.

This makes things easy for us.
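As a quick illustration (the concrete coordinates here are made up), projecting a point onto a primary plane is just a matter of picking two of its three coordinates:

```javascript
// Axis indices into a point array: 0 = x, 1 = y, 2 = z
var point = [3, 5, 0]; // a point lying on the xy plane, so z is 0

// Front view (xy plane): keep x and y, drop z
var front = [point[0], point[1]];

// Top view (xz plane): keep x and z, drop y
var top = [point[0], point[2]];
```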

Create a function called plotFace, which takes four parameters:

  • face: The 2D array of points
  • path: The path to plot the shape on
  • a: The first axis of the plane
  • b: The second axis of the plane

Since, in our arrays, index 0 represents x and so on, we can simplify the function if a and b are passed directly as integers (0 for x, 1 for y, 2 for z).

The plotFace function basically movesTo the initial point and linesTo the remaining ones.

function plotFace(face, path, a, b) {  
  var len = face.length;
  path.moveTo( face[0][a], face[0][b] );
  for ( var i = 1; i < len; i++ ) {
    var point = face[i];
    path.lineTo( point[a], point[b] );
  }
}

A simple map through all faces, using plotFace on each, plots the entire shape onto a single path.

path = new Path2D();  
item.faces.map(function(face) {  
  plotFace(face, path, a, b);
});

Finally, rendering the path on the context using fill() will draw our shape.
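Putting it all together, here’s a self-contained sketch. The cube coordinates and the stub path object are illustrative – in the browser you’d use a real Path2D and then ctx.fill(path):

```javascript
var X = 0, Y = 1, Z = 2; // axis indices into each [x, y, z] point

function plotFace(face, path, a, b) {
  path.moveTo(face[0][a], face[0][b]);
  for (var i = 1; i < face.length; i++) {
    path.lineTo(face[i][a], face[i][b]);
  }
}

// One face of a unit cube, on the z = 0 plane, closed by repeating the first vertex
var cube = [
  [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0], [0, 0, 0]]
];

// Stand-in for Path2D, so the projection logic runs outside the browser too
var path = {
  ops: [],
  moveTo: function (x, y) { this.ops.push(['moveTo', x, y]); },
  lineTo: function (x, y) { this.ops.push(['lineTo', x, y]); }
};

// Front view: project every face onto the xy plane
cube.map(function (face) { plotFace(face, path, X, Y); });
// In the browser: ctx.fill(path);
```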


…And we’re done!

What’s next

Currently, we’re only projecting onto the xy, xz, or yz plane. The next step would be the ability to project onto any arbitrary plane. I have yet to figure out the math for it, and I’m trying to do it without external help, so it might be a while before I publish a new article.

After that, perhaps manually raycasting to create shadows? That could be interesting, both aesthetically and performance-wise.


Discuss on Twitter

Two really cool Node MySQL tips

Node MySQL is a great traditional alternative to Mongo and all the jazz the youngins are using. One important piece of advice – never use + to concatenate queries unless you know what you’re doing.

1. Always escape using ? as placeholders

Queries are usually written as:

connection.query('SELECT * FROM foo WHERE bar = baz', function(err, results) {  
    // ...
});

If you want to check against a custom property, don’t do this:

connection.query('SELECT * FROM foo WHERE bar = ' + someVariable, function(err, results) {  
    // ...
});

Instead, pass the value separately and let the driver escape it:

connection.query('SELECT * FROM foo WHERE bar = ?', [someVariable], function(err, results) {  
    // ...
});
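To see why the concatenated version is dangerous, consider what a malicious value does to the query string (a plain string demo – no database needed, and the value is made up):

```javascript
var someVariable = '1 OR 1=1'; // attacker-controlled input
var unsafe = 'SELECT * FROM foo WHERE bar = ' + someVariable;
// unsafe is now a query whose WHERE clause matches every row in the table
```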

You can use multiple placeholders like so (note that ? escapes values; identifiers such as column names use ?? instead):

connection.query('SELECT * FROM foo WHERE ?? = ?', [someProperty, someValue], function(err, results) {  
    // ...
});

2. Use the SET ? syntax

Node MySQL converts objects from { a: 'b' } to a = 'b' when escaped. Inserting with objects is thus easy:

var user = { id: 42, name: "Namanyay Goel" };  
connection.query('INSERT INTO users SET ?', user, function(err, result) {  
    // ...
});
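To get a feel for what that conversion produces, here’s a toy re-implementation of the object-to-SET expansion (for illustration only – this is not node-mysql’s actual code, and it skips real escaping):

```javascript
// Toy version of the { a: 'b' } -> "`a` = 'b'" expansion
function toSet(obj) {
  return Object.keys(obj).map(function (key) {
    var val = obj[key];
    // Quote strings, leave numbers bare (real escaping is more involved)
    return '`' + key + '` = ' + (typeof val === 'number' ? val : "'" + val + "'");
  }).join(', ');
}

var user = { id: 42, name: 'Namanyay Goel' };
var sql = 'INSERT INTO users SET ' + toSet(user);
// -> INSERT INTO users SET `id` = 42, `name` = 'Namanyay Goel'
```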

With that, you never have to build INSERT queries by hand.

Learn more about Node MySQL’s escaping



Jade locals with Gulp

One of the coolest features of Jade is the concept of locals: An object that can be passed to the compiler and used in the Jade code, allowing better separation of content and templates. Ideally, these locals are held in an external file.

After much tinkering, I figured something out:

var fs = require('fs');  
.pipe( p.jade({ 
    pretty: uglyLevel,
    data: JSON.parse( fs.readFileSync('src/data.js', { encoding: 'utf8' }) )
}) )


  • Gulp Jade’s docs show that the data or locals option can be used to pass in a single object holding all the external data.
  • File I/O, via fs, is node’s way of reading files. Using fs.readFileSync, I used a JSON file to hold all the data.
  • JSON.parse() is a native JS method to convert a string (the output of fs.readFileSync with utf8 encoding) into an object.

Combining the three resulted in the above one liner, allowing me to use a data.js file to host all raw data and use loops to better template the code within. Win!

PS: If you’re wondering what the uglyLevel bit is…


Images and excerpts – A few practical problems with Ghost

Ghost is awesome, it really is! I’ve just started using and developing on it, but I love it already. It’s simple, smooth, and fast. You can feel the speed when you compare it to traditional CMSes like WordPress or static generators like Jekyll – I find it trumps both.

Development is pretty damn easy too. Installing Ghost on Windows was a breeze, and starting development even easier. I fired up Prepros, creating a SCSS file for better CSS, and started coding!

Ghost’s writer is its biggest advantage, though. Markdown is great to write, and the side-by-side compilation makes writing so much more fun.

However, there are indeed a few practical problems with Ghost that you may encounter soon in one of your projects. I’m going to talk about these here, along with some hacky solutions for them.

  • Images can’t use figure/figcaption: Currently, images on Ghost are simple <img> tags in paragraphs. I was looking around for image captioning using figure/figcaption, but with few results. A workaround by Lee Lam could be a quick solution, though.

This is a problem with both Markdown (which does not seem to support two kinds of captions, i.e. one for the alt attribute and the other a standard caption) and Ghost.

The issue is set to won’t fix until the Haunted Markdown parser is implemented.

  • No support for advanced excerpts: With WordPress, you could simply add a <!-- more --> somewhere and it handled excerpts with a read-more link for you. Unfortunately, this isn’t the case with Ghost, and by default you see a paragraph of plaintext with a trailing …. Not something particularly beautiful.

Kraftner on Ghost Forums gives a great solution to that problem. Using {{ content }} instead of {{ excerpt }} allows you to output HTML instead of plaintext, and combining that with some clever CSS rules displays only one paragraph. I use a similar trick at TLDRtech where all uls are hidden in the ‘excerpt’.

My goal with this post was to highlight some of the common issues and give hacky solutions for them. That said, I do love Ghost for many, many reasons:

  • Ghost is fast. You literally feel the difference in Ghost’s admin panel compared to WordPress’s.
  • Installing Ghost is a breeze. It took me less than 2 minutes to install Ghost on my Windows computer. Granted, installing it on Apache is a bit more difficult, but there are good guides for that.
  • Theme development on Ghost is fun. Handlebars is fun to write, and I’ve set up SCSS compilation with Prepros as well.


Debug mode in gulp

I’ve been using gulp a lot lately (as you can see from my posts).

To the uninitiated, gulp is the hottest, sleekest and newest build system in town. Which I’m in love with and use almost everywhere now. Yup, it’s that awesome.

However, I had been having troubles with debugging while using gulp. It’s not exactly easy to debug one-line CSS or mangled JS now, is it?

So I came up with a solution, creating a switch variable and a new task, debug.

The debug variable

Everything will be controlled by a single variable, which I call debug. Set debug to be false at the start of your gulpfile.js.

var debug = false;  

In the default task, write a line:

gulp.task('default', function() {  
  debug = debug || false;
  // ...rest of the default task
});

Why? So we can easily switch the variable from other tasks, and this change is passed to the default task.

The debug task

We need to now create a task that achieves three things:

  1. Sets debug to true.
  2. Logs that gulp is running in ‘debug mode’.
  3. Sets easy-debugging configuration options in all tasks.

gulp.task('debug', function() {  
  debug = true;
  gutil.log( gutil.colors.green('RUNNING IN DEBUG MODE') );
  gulp.start('default');
});

That’s my debug task. Here, gutil = require('gulp-util');. This logs a helpful message and switches the debug variable to true.

We can now use this information to make debug changes in our existing tasks.

Debug configuration in tasks

I’ve added a simple variable at the top of each task – uglyLevel. Depending on the task, uglyLevel can be true/false, or ‘compress’/’expanded’. The values are toggled using a simple ternary operator.

    var uglyLevel = debug ? true : false;

Then, these are passed on as values depending on the plugin. For example, with gulp-jade, uglyLevel must be a boolean value and will be used like so:

.pipe( p.jade({ pretty: uglyLevel }) )

gulp-uglify is similar:

.pipe( p.uglify({ compress: uglyLevel }) )

However, for gulp-stylus, uglyLevel is either ‘compress’ or ‘expanded’.

var uglyLevel = debug ? 'expanded' : 'compress';

gulp.src( src )  
  .pipe( p.stylus({ set: [uglyLevel] }) )

You can also try toggling sourcemaps if you’re using SASS, unfortunately the option isn’t available in Stylus yet. Many different ways to solve the same problem.
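The toggle pattern itself is plain JavaScript and can be sketched without gulp at all (the functions below are stand-ins for the gulp tasks, not the real task definitions):

```javascript
var debug = false;

// Stand-in for the `debug` task: flips the switch before other tasks run
function debugTask() {
  debug = true;
}

// Stand-in for the stylus options built inside the `styles` task
function stylusOptions() {
  var uglyLevel = debug ? 'expanded' : 'compress';
  return { set: [uglyLevel] };
}

var normal = stylusOptions();    // compressed output by default
debugTask();                     // simulate running `gulp debug`
var debugging = stylusOptions(); // expanded, readable output
```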


Simply run gulp debug in the command line instead of gulp. Done! Since the debug task runs the default task, all additional tasks like watch or connect will run automatically.

And there you have it, an easy and quick debug method for gulp.


Super simple static server in gulp

I recently spent a lot of time looking for a decent way to:

  1. Set up livereload on gulp
  2. Set up a static server.

Here are my findings.

First, I tried using gulp-livereload and gulp-embedlr. Using them together was decent and they were pretty fast, however, they were too complex for my simple goal.

Everything changed when the fire nation attacked – that is, once I stumbled upon gulp-connect.

Using gulp-connect

This plugin is extremely simple to use – I set up a server in literally 5 lines of code:

gulp.task('connect', p.connect.server({  
  root: ['_public'],
  port: 4242,
  livereload: true
}));

Yup, that’s it!
(p.connect = require('gulp-connect'), btw).

Live Reload

Now, to actually reload the page on changes, we need to pipe p.connect.reload() in each task.

I’ve found that piping it after gulp.dest() is the fastest, so add

  .pipe( gulp.dest( dest ) )
  .pipe( p.connect.reload() );

at the end of each task (where dest refers to the destination path).

Proper watching

I include all ‘partials’ in a subfolder, and all files that are to be compiled in the root folder.

e.g., Jade’s partials/templates go into folders jade/layouts or jade/partials, while main files that are to be compiled, like index.jade or about.jade, go in the jade folder.

Therefore, I just run tasks on the root folders, not any of the subfolders.

This creates a problem with live reloading: it would only reload if one of the files in the root folder changed, not if a subfolder file changed.

To fix this, here’s what I changed my watch task to:

gulp.task('watch', ['connect'], function() {  
  gulp.watch( ['src/styl/*.styl', 'src/styl/**/*.styl'], ['styles'] );
});

This runs the styles task, compiles properly, and livereloads on every file change.


Checkout git branches through your browser

Most git workflows involve the use of multiple branches for different sub-tasks – for example, a new branch for an alternative homepage layout. However, managing branches on the server quickly gets tedious. SSHing in, navigating to the correct directory, then running git checkout <branch> is tiring for all, right?

That’s why I came up with a simple solution that used PHP and GET requests to checkout different branches on the server through the browser.

The Concept

What we’re trying to achieve here is:

  1. An easy way to pass a branch name to a script.
  2. That script uses that branch name to run a checkout in the correct directory.
  3. The output of the command is presented to us, to tell if it ran correctly or not.

Doing this with a small, but powerful, PHP script is our challenge.

The Code

GET Request

We’ll be passing variables as GET requests, because:

  1. It’s easy.
  2. It’s lazy.

So just make a variable holding the GET variable in your PHP

$branchname = $_GET['branch'];

We’ll also need to check if the user has actually supplied a request, if not, echo a helpful message and stop the script from executing further.

if (!$branchname) {  
  echo "Please enter a branchname, ?branch=<name>";
  return false;
}

Executing the command

We need to cd into the correct directory and run git checkout $branchname. We do that using shell_exec(). Since $branchname comes straight from the URL, pass it through escapeshellarg() before splicing it into the command – otherwise anyone can inject arbitrary shell commands through the branch parameter.

$command = 'cd <directory> && git checkout ' . escapeshellarg($branchname);
$output = shell_exec($command . ' 2>&1');

You might not need to change directory, so feel free to remove cd <directory>. The rest is essential. 2>&1 redirects stderr to stdout (or, put simply, captures the output of the command).

Printing the output would be extremely helpful as well:

echo 'Checking out ' . $branchname . '&hellip;<br>';  
echo $output . '<br>';  

And we’re done. Upload it to your server with a filename like checkout-git-branch.php, and try it out!

Final code


<?php
$branchname = $_GET['branch'];

if (!$branchname) {  
  echo "Please enter a branchname, ?branch=<name>";
  return false;
}

$command = 'cd <directory> && git checkout ' . escapeshellarg($branchname);
$output = shell_exec($command . ' 2>&1');

echo 'Checking out ' . $branchname . '&hellip;<br>';  
echo $output . '<br>';  


Using gulp at MakeUseOf

At MakeUseOf, since the start of the new theme, we simply wrote plain ol’ CSS and normal JS. No cool stuff like concatenation, compression, or minification. Plain code, edited and uploaded through FileZilla.

Now we’ve moved on to a better workflow – Using gulp, Vagrant, git & Github. Here I’ll talk about how we set up and use gulp.

Setting up gulp

Setting up gulp was surprisingly easy. I just ran these commands:

$ npm init
$ npm install gulp -g
$ npm install gulp --save-dev

And gulp was ready to go. To avoid syncing useless stuff, I added node_modules to .gitignore (And James reminded me to add .sass-cache as well).

The Gulpfile

We have two main requirements for scripts and styles currently:

  • Processing, minifying, and prefixing SASS and Compass.
  • Minifying and using includes on JS.

Multiple plugins are used to achieve this.

I’ve set up three tasks for gulp (including the watch task).

Loading Plugins

As you can see below, I’m using gulp-load-plugins here. It adds an object holding all the plugins, so I don’t need to manually require each plugin on install.

var gulp = require("gulp");  
var p = require("gulp-load-plugins")();  

Plugins can then be accessed through p.pluginName(), like p.minifyCss().


Paths

MakeUseOf is a large site, and gulp is installed in the wp-content folder. Managing paths can get ugly easily, hence I’ve made an object, paths, which holds file paths to all used locations.

JavaScript resides in the js/src and js/src/plugins folders, which are compiled to js, and SCSS is in the styles folder, which is compiled to style.css (since we use WordPress).

var paths = {  
  m2014: {
    scripts: {
      src: 'themes/makeuseof2014/js/src/*.js',
      dest: 'themes/makeuseof2014/js'
    },
    styles: {
      src: 'themes/makeuseof2014/styles/*.scss',
      dest: 'themes/makeuseof2014'
    }
  }
};
var m2014 = paths.m2014;  

m2014 here refers to the theme name, so the script can easily be modified for other themes as needed.


The styles task

The styles task is responsible for three things:

  • Converting SASS to CSS.
  • Prefixing CSS.
  • Minifying CSS.

It’s a pretty straight-forward task:

gulp.task('styles', function() {

  var src = m2014.styles.src;
  var dest = m2014.styles.dest;

  // Compiles sass, autoprefixes, and minifies files
  gulp.src( src )
  .pipe( p.compass({
    css: 'themes/makeuseof2014',
    sass: 'themes/makeuseof2014/styles',
    style: 'compressed',
    comments: false
  }) )
  .pipe( p.autoprefixer() )
  .pipe( p.minifyCss() )
  .pipe( gulp.dest( dest ) );
});

The src and dest variables are set so that I can easily use either in the main function.

gulp works through piping files (Can be in an array, can use the wildcard, etc) through a series of plugins. Each plugin can have specific settings with it, passed as arguments. If you’re familiar with jQuery, gulp should be pretty easy to understand and write.


The scripts task

Our goal with scripts was simple – compressing them, and allowing the use of includes.

gulp.task('scripts', function() {

  var src = m2014.scripts.src;
  var dest = m2014.scripts.dest;

  // Clean old compiled files from the destination folder
  gulp.src( dest + '/*.js', { read: false } ).pipe( p.clean() );

  // Uglifies files from src folder -> main folder
  gulp.src( src )
  .pipe( p.include() ) // JS Includes
  .pipe( p.uglify() ) // Compresses JS
  .pipe( gulp.dest( dest ) );
});

Here’s how the scripts task looked.

Note the ‘clean’ thing – it deletes all compressed JS files from the js folder. dest + '/*.js' matches only the JavaScript files in the js folder, not in its subfolders. (Learnt this the hard way…)

This is done using gulp-clean. It’s important because we might delete source scripts sometimes, and in that case the compiled script would otherwise remain in the js folder.

Setting read to false will prevent node from reading the files, and will decrease time taken.


The watch task

The watch task calls the above tasks whenever there’s a change to the files in the styles folder or the js/src folders.

gulp.task('watch', function() {  
  gulp.watch(m2014.scripts.src, ['scripts']);
  gulp.watch([m2014.styles.src, m2014.styles.dest + '/**/*.scss'], ['styles']);
});

m2014.styles.dest + '/**/*.scss' checks for scss files in subfolders of styles; otherwise the task won’t run if a file in one of the subfolders is edited.

The default task

gulp.task('default', function() {  
  gulp.start('scripts', 'styles', 'watch');
});

Just runs the three tasks that we defined above.

Syncing files

At MakeUseOf we use a Vagrant set up and a git repo set up at the wp-content folder.

Gulp-related files that are synced are package.json and gulpfile.js. Others are added to .gitignore and can be installed on each computer separately (through npm install, basically).



Super easy deployment with Git and Bitbucket

Git is one of the best version control systems around, and Bitbucket offers unlimited free private repos. What’s left is a simple way to deploy to your server on every push.

The solution? BitBucket hooks.

Introduction to BitBucket hooks

BitBucket hooks give us an easy way to trigger scripts after each push. The one we’re looking for today is a POST hook.

A POST hook sends a ‘payload’ of information related to the repository and the git commit, formatted in JSON, as a POST request to a URL we supply. (Instructions for setting up and example payload data by BitBucket).

So, go on and create a script with an obscure and un-guessable name (security through obscurity), for example, deploy-correcthorsebatterystapler.php. Next, make a POST hook on the repo of your choice to call said php script.

What does the deploy script do?

Our script will do four things:

  1. Parse the payload sent by Bitbucket servers.
  2. Check the payload data.
  3. Pull from the remote repository.
  4. Log results.

Note step 3 – pulling from the remote repository. For that, we’ll need to create an SSH key so that our PHP user can access the remote repo without a password.

Setting up SSH

Who am I?

First, we need to find out who the PHP user is. We could do that through a PHP script that executes whoami in the shell. Run this:

<?php echo exec('whoami'); ?>  

Depending on the configuration, you could get apache, www-data, or any other. My PHP user is www-data, and since I’m lazy, I’ll write the post using www-data.

Creating keys for www-data

For creating the keys, we basically need to:

  1. Access the shell as the www-data (Requires sudo).
  2. Create keys.
  3. Add BitBucket.org as host for that key in the config file.

To give commands as any other user, we do sudo -u <username> <command>. So in this case, we’ll do sudo -u www-data.

The first step is to create an SSH key pair. Run sudo -u www-data ssh-keygen -t rsa. That will show the directory where SSH keys are stored for www-data and create a key pair. You’ll be prompted for the name and password of the key. I set the name to id_rsa-git – feel free to name it anything – but the password should be blank.

Now, we need to create a config file in www-data’s SSH directory. The config file tells SSH which host uses which key. cd to the SSH directory (mine was /var/www/.ssh) and create a file named config in that folder.

(You may need to change the permissions of .ssh to 0700 to cd in; do that by running sudo chmod 0700 /var/www/.ssh.)

The config file requires two lines:

Host bitbucket.org <more hosts, space separated>  
    IdentityFile <keyname>

My config file looks like

Host bitbucket.org github.com  
    IdentityFile /var/www/.ssh/id_rsa-git

…and you’re done here. Give yourself a pat on the back.

Back to the deploy script

Parsing and verifying the payload

The payload is in JSON, and to use it as a PHP object, we have to decode it.

    $payload = '';
    if ( isset($_POST['payload']) ){
        $payload = json_decode($_POST['payload']);
    } else {
        return false;
    }
    $repo = $payload->repository;

The above snippet checks whether the payload exists; if it does, it sets the $payload variable to the data from BitBucket, and $repo to the repository object within the payload.

Pull from the remote repo

This is simple – we need to run git init, then add an origin, and then pull from the origin repo.

To enter a bash command in PHP, we need to use exec().

exec('git init && git remote add origin git@bitbucket.org:' . $repo->absolute_url . '.git . && git pull origin master');  

Logging runs

This is the easiest part. Using file_put_contents, we create a log file to which the time of each run is appended.

file_put_contents('bitbucket-deployment.log', 'Last run on: ' . date('m/d/Y h:i:s a'), FILE_APPEND);  

…and you’re done. Congrats! Read further if you want to add more stuff to your script and want to get tips for debugging.

The final deploy script:


<?php
  $payload = '';
  if ( isset($_POST['payload']) ){
    $payload = json_decode($_POST['payload']);
  } else {
    return false;
  }

  $repo = $payload->repository;

  exec('git init && git remote add origin git@bitbucket.org:' . $repo->absolute_url . '.git . && git pull origin master');

  file_put_contents('bitbucket-deployment.log', 'Last run on: ' . date('m/d/Y h:i:s a'), FILE_APPEND);



Echoing the output of shell commands and using demo payload data are two easy ways we can debug.

Demo payload data

    $payload = '';
    if ( isset($_POST['payload']) ){
        $payload = json_decode($_POST['payload']);
        file_put_contents('payload.log', $_POST['payload']);
    } else {
        // No live payload – fall back to the last logged one for testing
        $payload = json_decode(file_get_contents('payload.log'));
    }
    $repo = $payload->repository;

This writes the output of $_POST['payload'] to payload.log. Run a testing push, and a new file, payload.log, will be created with demo data in it. Visiting the URL from your web browser will then let you test with actual payload data. You can then use echos for testing, instead of the more complicated file_put_contents(). Pretty cool, right?

Output shell command results to the browser

Changing the execution line to this:

echo exec('git init 2>&1 && git remote add origin git@bitbucket.org:' . $repo->absolute_url . '.git . 2>&1  && git pull origin master 2>&1');  

will echo the output of each command.

2>&1 redirects stderr to stdout, while the echo before exec(...) will print stdout in the browser.

