Debug mode in gulp

I’ve been using gulp a lot lately (as you can see from my posts).

To the uninitiated, gulp is the hottest, sleekest, newest build system in town, which I’m in love with and use almost everywhere now. Yup, it’s that awesome.

However, I had been having troubles with debugging while using gulp. It’s not exactly easy to debug one-line CSS or mangled JS now, is it?

So I came up with a solution, creating a switch variable and a new task, debug.

The debug variable

Everything will be controlled by a single variable, which I call debug. Set debug to be false at the start of your gulpfile.js.

var debug = false;  

In the default task, write a line:

gulp.task('default', function() {  
  debug = debug || false;
  // ...rest of the default task
});

Why? So we can easily switch the variable from other tasks, and this change is passed to the default task.
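The pattern relies on nothing gulp-specific – tasks are just functions closing over the same module-level variable. Here’s a minimal sketch in plain Node (the task names are hypothetical):

```javascript
// Tasks are plain functions closing over one shared, module-level switch;
// whichever task runs first can flip it for every task that runs later.
var debug = false;

function debugTask() {
  debug = true; // flip the switch before the other tasks read it
}

function stylesTask() {
  // hypothetical task: picks its output style based on the switch
  return debug ? 'expanded' : 'compress';
}

debugTask();
console.log(stylesTask()); // 'expanded'
```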

The debug task

We need to now create a task that achieves three things:

  1. Sets debug to be true.
  2. Logs that gulp is running on ‘debug mode’.
  3. Sets easy-debugging configuration options in all tasks.

gulp.task('debug', function() {  
  debug = true;
  gutil.log('RUNNING IN DEBUG MODE');
  gulp.start('default');
});

That’s my debug task. Here, gutil = require('gulp-util');. This logs a helpful message, and switches the debug variable to true.

We can now use this information to make debug changes in our existing tasks.

Debug configuration in tasks

I’ve added a simple variable at the top of each task – uglyLevel. Depending on the task, uglyLevel can be true/false, or ‘compress’/’expanded’. The values are toggled using a simple ternary operator.

    var uglyLevel = debug ? true : false;

Then, these are passed on as values depending on the plugin. For example, with gulp-jade, uglyLevel must be a boolean value and will be used like so:

.pipe( p.jade({ pretty: uglyLevel }) )

gulp-uglify is similar:

.pipe( p.uglify({ compress: uglyLevel }) )

However, for gulp-stylus, uglyLevel is either ‘compress’ or ‘expanded’.

var uglyLevel = debug ? 'expanded' : 'compress';

gulp.src( src )  
  .pipe( p.stylus({ set: [uglyLevel] }) )

You can also try toggling sourcemaps if you’re using Sass; unfortunately, that option isn’t available in Stylus yet. There are many different ways to solve the same problem.
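To recap the mapping in one place, here’s a plain-JS sketch of how the single debug flag fans out into whatever shape each plugin expects (the jade and stylus option names are the ones used in this post):

```javascript
// One debug flag, different option shapes per plugin:
// gulp-jade wants a boolean, gulp-stylus wants a string.
var debug = true;

function optionsFor(plugin) {
  switch (plugin) {
    case 'jade':   return { pretty: debug };                          // boolean
    case 'stylus': return { set: [debug ? 'expanded' : 'compress'] }; // string
    default:       return {};
  }
}

console.log(optionsFor('jade'));   // { pretty: true }
console.log(optionsFor('stylus')); // { set: [ 'expanded' ] }
```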


Simply run gulp debug in the command line instead of gulp. Done! Since the debug task runs the default task, all additional tasks like watch or connect will run automatically.

And there you have it, an easy and quick debug method for gulp.


Super simple static server in gulp

I recently spent a lot of time looking for a decent way to:

  1. Set up livereload on gulp
  2. Set up a static server.

Here are my findings.

First, I tried using gulp-livereload and gulp-embedlr. Using them together was decent and they were pretty fast; however, they were too complex for my simple goal.

Everything changed when I stumbled upon gulp-connect.

Using gulp-connect

This plugin is extremely simple to use – I set up a server in literally five lines of code:

gulp.task('connect', p.connect.server({  
  root: ['_public'],
  port: 4242,
  livereload: true
}));

Yup, that’s it!
(p.connect = require('gulp-connect'), btw).

Live Reload

Now, to actually reload the page on changes, we need to pipe p.connect.reload() in each task.

I’ve found that piping it after gulp.dest() is the fastest, so add

  .pipe( gulp.dest( dest ) )
  .pipe( p.connect.reload() );

at the end of each task (where dest refers to the destination path).

Proper watching

I include all ‘partials’ in a subfolder, and all files that are to be compiled in the root folder.

e.g., Jade’s partials/templates go into the folders jade/layouts or jade/partials, while the main files that are to be compiled, like index.jade or about.jade, go in the jade folder.

Therefore, I just run tasks on the root folders, not any of the subfolders.

This creates a problem with live reloading: it would only reload when one of the files in the root folder changed, not when the subfolder files changed.

To fix this, here’s what I changed my watch task to:

gulp.task('watch', ['connect'], function() {  
  gulp.watch( ['src/styl/*.styl', 'src/styl/**/*.styl'], ['styles'] );
});

This runs the styles task, compiles properly, and livereloads on every file changed.
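If it’s not obvious why both globs are needed: src/styl/*.styl only matches the root folder, while src/styl/**/*.styl also reaches subfolders. A toy matcher (regexes handcrafted for just these two patterns – real glob matching is more involved) shows the difference:

```javascript
// Toy stand-ins for the two glob patterns; these regexes only
// cover this exact shape, not general glob syntax.
function matchesRoot(file) { // src/styl/*.styl
  return /^src\/styl\/[^\/]+\.styl$/.test(file);
}
function matchesDeep(file) { // src/styl/**/*.styl
  return /^src\/styl\/(.+\/)?[^\/]+\.styl$/.test(file);
}

console.log(matchesRoot('src/styl/main.styl'));         // true
console.log(matchesRoot('src/styl/partials/nav.styl')); // false
console.log(matchesDeep('src/styl/partials/nav.styl')); // true
```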


Checkout git branches through your browser

Most git workflows involve the use of multiple branches for different sub-tasks – for example, a new branch for an alternative layout for the homepage. However, managing branches on the server quickly gets tedious – SSHing in, navigating to the correct directory, then running git checkout <branch> – is tiring for all, right?

That’s why I came up with a simple solution that uses PHP and GET requests to check out different branches on the server through the browser.

The Concept

What we’re trying to achieve here is:

  1. An easy way to pass a branch name to a script.
  2. That script uses that branch name to run a checkout in the correct directory.
  3. The output of the command is presented to us, so we can tell whether it ran correctly or not.

Doing this with a small, but powerful, PHP script is our challenge.

The Code

GET Request

We’ll be passing variables as GET requests, because:

  1. It’s easy.
  2. It’s lazy.

So just make a variable holding the GET parameter in your PHP script:

$branchname = $_GET['branch'];

We’ll also need to check whether the user actually supplied a branch name; if not, echo a helpful message and stop the script from executing further.

if (!$branchname) {  
  echo "Please enter a branchname, ?branch=<name>";
  return false;
}

Executing the command

We need to cd into the correct directory and run git checkout $branchname. We do that using shell_exec().

$command = 'cd <directory> && git checkout ' . escapeshellarg($branchname);
$output = shell_exec($command . ' 2>&1');

You might not need to change directory, so feel free to remove cd <directory>. The rest is essential: escapeshellarg() stops a malicious branch name from injecting extra shell commands, and 2>&1 directs stderr to stdout (or, put simply, captures the result of the command).
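The same command-building idea, sketched in Node purely for illustration (the helper name and the allow-list are my own inventions; the post’s actual script is the PHP above):

```javascript
// Build the checkout command and tack on `2>&1` so stderr
// ends up in the captured output.
function buildCheckoutCommand(dir, branch) {
  // crude allow-list in place of PHP's escapeshellarg()
  if (!/^[\w.\/-]+$/.test(branch)) {
    throw new Error('suspicious branch name: ' + branch);
  }
  return 'cd ' + dir + ' && git checkout ' + branch + ' 2>&1';
}

console.log(buildCheckoutCommand('/var/www/site', 'feature/new-layout'));
// cd /var/www/site && git checkout feature/new-layout 2>&1
```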

Printing the output would be extremely helpful as well:

echo 'Checking out ' . $branchname . '&hellip;<br>';  
echo $output . '<br>';  

And we’re done. Upload it to your server with a filename like checkout-git-branch.php, and try it out!

Final code


<?php
$branchname = $_GET['branch'];

if (!$branchname) {  
  echo "Please enter a branchname, ?branch=<name>";
  return false;
}

$command = 'cd <directory> && git checkout ' . escapeshellarg($branchname);
$output = shell_exec($command . ' 2>&1');

echo 'Checking out ' . $branchname . '&hellip;<br>';  
echo $output . '<br>';  


Using gulp at MakeUseOf

At MakeUseOf, since the start of the new theme, we simply wrote plain ol’ CSS and normal JS. No cool stuff like concatenation, compression, or minification. Plain code, edited and uploaded through FileZilla.

Now we’ve moved on to a better workflow – Using gulp, Vagrant, git & Github. Here I’ll talk about how we set up and use gulp.

Setting up gulp

Setting up gulp was surprisingly easy. I just ran these commands:

$ npm init
$ npm install gulp -g
$ npm install gulp --save-dev

And gulp was ready to go. To avoid syncing useless stuff, I added node_modules to .gitignore (And James reminded me to add .sass-cache as well).

The Gulpfile

We have two main requirements for scripts and styles currently:

  • Processing, minifying, and prefixing SASS and Compass.
  • Minifying and using includes on JS.

Multiple plugins are used to achieve this.

I’ve set up three tasks for gulp (including the watch task).

Loading Plugins

I’m using gulp-load-plugins here. This gives an object that holds all the plugins, so I don’t need to manually require each one after installing it.

var gulp = require("gulp");  
var p = require("gulp-load-plugins")();  

Plugins can then be accessed through p.pluginName(), like, p.minifyCss().


Paths

MakeUseOf is a large site and gulp’s installed in the wp-content folder. Managing paths can easily get ugly, so I’ve made an object, paths, which has file paths to all the locations used.

JavaScript resides in the js/src and js/src/plugins folders, which is compiled to js, and SCSS is in the styles folder, which is compiled to style.css (since we use WordPress).

var paths = {  
  m2014: {
    scripts: {
      src: 'themes/makeuseof2014/js/src/*.js',
      dest: 'themes/makeuseof2014/js'
    },
    styles: {
      src: 'themes/makeuseof2014/styles/*.scss',
      dest: 'themes/makeuseof2014'
    }
  }
};
var m2014 = paths.m2014;  

m2014 here refers to the theme name, so the script can easily be modified for other themes as needed.
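Since the theme name appears in every path, the object could also be generated – a sketch of a hypothetical helper along those lines (not in the actual gulpfile):

```javascript
// Build the paths object for any theme name, so other themes
// can reuse the same gulpfile with one changed argument.
function themePaths(theme) {
  var base = 'themes/' + theme;
  return {
    scripts: { src: base + '/js/src/*.js', dest: base + '/js' },
    styles:  { src: base + '/styles/*.scss', dest: base }
  };
}

var m2014 = themePaths('makeuseof2014');
console.log(m2014.scripts.src); // themes/makeuseof2014/js/src/*.js
```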


The styles task

The styles task is responsible for doing three things:

  • Converting SASS to CSS.
  • Prefixing CSS.
  • Minifying CSS.

It’s a pretty straightforward task:

gulp.task('styles', function() {

  var src = m2014.styles.src;
  var dest = m2014.styles.dest;

  // Compiles Sass, autoprefixes, and minifies the CSS
  gulp.src( src )
  .pipe( p.compass({
    css: 'themes/makeuseof2014',
    sass: 'themes/makeuseof2014/styles',
    style: 'compressed',
    comments: false
  }) )
  .pipe( p.autoprefixer() )
  .pipe( p.minifyCss() )
  .pipe( gulp.dest( dest ) );
});

The src and dest variables are set so that I can easily use either in the main function.

gulp works through piping files (Can be in an array, can use the wildcard, etc) through a series of plugins. Each plugin can have specific settings with it, passed as arguments. If you’re familiar with jQuery, gulp should be pretty easy to understand and write.


The scripts task

Our goal with scripts was simple – compressing them, and allowing the use of includes.

gulp.task('scripts', function() {

  var src = m2014.scripts.src;
  var dest = m2014.scripts.dest;

  // Clean old compiled files
  gulp.src( dest + '/*.js', { read: false } ).pipe( p.clean() );

  // Uglifies files from src folder -> main folder
  gulp.src( src )
  .pipe( p.include() ) // JS includes
  .pipe( p.uglify() ) // Compresses JS
  .pipe( gulp.dest( dest ) );
});
Here’s how the scripts task looked.

Note the ‘clean’ step – it deletes all compressed JS files from the js folder. dest + '/*.js' matches only the JavaScript files in the js folder, not in its subfolders. (Learnt this the hard way…)

The cleaning is done with gulp-clean. It’s important because we might sometimes delete source scripts, and in that case the compiled script would otherwise remain in the js folder.

Setting read to false prevents node from reading the contents of the files (we only need their paths to delete them), which decreases the time taken.


The watch task

The watch task calls the above tasks whenever there’s a change to the files in the styles folder or the js/src folder.

gulp.task('watch', function() {  
  gulp.watch( m2014.scripts.src, ['scripts'] );
  gulp.watch( [m2014.styles.src, m2014.styles.dest + '/**/*.scss'], ['styles'] );
});

m2014.styles.dest + '/**/*.scss' checks for .scss files in subfolders of styles; otherwise the task wouldn’t run when a file in one of the subfolders is edited.

The default task

gulp.task('default', function() {  
  gulp.start('scripts', 'styles', 'watch');
});

Just runs the three tasks that we defined above.

Syncing files

At MakeUseOf we use a Vagrant setup and a git repo set up at the wp-content folder.

The gulp-related files that are synced are package.json and gulpfile.js. Others are added to .gitignore, and can be installed on each computer separately (through npm install, basically).



Super easy deployment with Git and Bitbucket

Git is one of the best version control systems around, and Bitbucket offers unlimited free private repos. What’s left is a simple way to deploy to your server on every push.

The solution? BitBucket hooks.

Introduction to BitBucket hooks

BitBucket hooks allow an easy way to trigger scripts after each push. The one we’re looking for today is a POST hook.

A POST hook sends a ‘payload’ of information related to the repository and the git commit, formatted in JSON, as a POST request to a URL we supply. (Instructions for setting up and example payload data by BitBucket).

So, go on and create a script with an obscure, un-guessable name (security through obscurity) – for example, deploy-correcthorsebatterystapler.php. Next, make a POST hook on the repo of your choice that calls said PHP script.

What does the deploy script do?

Our script will do four things:

  1. Parse the payload sent by Bitbucket servers.
  2. Check the payload data.
  3. Pull from the remote repository.
  4. Log results.

Note step 3 – pulling from the remote repository. For that, we’ll need to create an SSH key so that our PHP user can access the remote repo without a password.

Setting up SSH

Who am I?

First, we need to find out who the PHP user is. We could do that through a PHP script that executes whoami in the shell. Run this:

<?php echo exec('whoami'); ?>  

Depending on the configuration, you could get apache, www-data, or any other. My PHP user is www-data, and since I’m lazy, I’ll write the post using www-data.

Creating keys for www-data

For creating the keys, we basically need to:

  1. Access the shell as www-data (requires sudo).
  2. Create keys.
  3. Add a host for that key in the config file.

To run commands as another user, we do sudo -u <username> <command>. So in this case, we’ll use sudo -u www-data.

The first step is to create an SSH key pair. Run sudo -u www-data ssh-keygen -t rsa. That will show the directory where SSH keys are stored for www-data and create a key pair. You’ll be prompted for the name and password of the key. I set the name to id_rsa-git – feel free to name it anything – but the password should be blank.

Now, we need to create a config file in www-data‘s SSH directory. The config file tells SSH which host uses which key. cd to the SSH directory (mine was /var/www/.ssh) and create a file named config in that folder.

(You may need to change permissions of .ssh to 0700 for cding in, do that by running sudo chmod 0700 /var/www/.ssh.)

The config file requires two lines:

Host <host> <more hosts, space separated>  
    IdentityFile <keyname>

My config file looks like

Host bitbucket.org  
    IdentityFile /var/www/.ssh/id_rsa-git

…and you’re done here. Give yourself a pat on the back.

Back to the deploy script

Parsing and verifying the payload

The payload is in JSON, and to use it as a PHP object, we have to decode it.

    $payload = '';
    if ( isset($_POST['payload']) ) {
        $payload = json_decode($_POST['payload']);
    } else {
        return false;
    }
    $repo = $payload->repository;

The above snippet checks if the payload exists, and if it does, sets the $payload variable to the data from BitBucket. It also sets $repo to the repository object in the payload.
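For illustration, here’s the same parse-and-check step sketched in Node (the PHP above is the real script; the payload shape follows BitBucket’s documented POST hook format):

```javascript
// Parse the POST body's `payload` field and pull out the repository
// object; return null when no payload was supplied.
function parsePayload(body) {
  if (!body || !body.payload) return null;
  var payload = JSON.parse(body.payload);
  return payload.repository; // e.g. { absolute_url: '/user/repo/' }
}

var demo = { payload: JSON.stringify({ repository: { absolute_url: '/user/repo/' } }) };
console.log(parsePayload(demo).absolute_url); // /user/repo/
console.log(parsePayload({}));                // null
```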

Pull from the remote repo

This is simple – we need to run git init, then add an origin, and then pull from the origin repo.

To enter a bash command in PHP, we need to use exec().

exec('git init && git remote add origin ' . $repo->absolute_url . '.git && git pull origin master');  

Logging runs

This is the easiest part. Using file_put_contents, we append a timestamp to a log file each time the script runs.

file_put_contents('bitbucket-deployment.log', 'Last run on: ' . date('m/d/Y h:i:s a'), FILE_APPEND);  

…and you’re done. Congrats! Read further if you want to add more stuff to your script and want to get tips for debugging.

The final deploy script:


<?php
  $payload = '';
  if ( isset($_POST['payload']) ) {
    $payload = json_decode($_POST['payload']);
  } else {
    return false;
  }

  $repo = $payload->repository;

  exec('git init && git remote add origin ' . $repo->absolute_url . '.git && git pull origin master');

  file_put_contents('bitbucket-deployment.log', 'Last run on: ' . date('m/d/Y h:i:s a'), FILE_APPEND);



Debugging

Echoing the output of shell commands and using demo payload data are two easy ways to debug.

Demo payload data

    $payload = '';
    if ( isset($_POST['payload']) ) {
        $payload = json_decode($_POST['payload']);
        file_put_contents('payload.log', $_POST['payload']);
    } else {
        $payload = json_decode(file_get_contents('payload.log'));
    }
    $repo = $payload->repository;

This puts the output of $_POST['payload'] into payload.log; run a test push and a new file, payload.log, will be created containing real payload data. After that, visiting the URL from your browser re-uses the saved payload, letting you test with actual data. You can then use echos for testing, instead of the more convoluted file_put_contents(). Pretty cool, right?

Output shell command results to the browser

Changing the execution line to this

echo exec('git init 2>&1 && git remote add origin ' . $repo->absolute_url . '.git 2>&1 && git pull origin master 2>&1');  

Will echo the outputs of each command.

2>&1 redirects stderr to stdout, while the echo before exec(...) will print stdout in the browser.

