The blog of
  • Delayed Reaction [My experience converting a jQuery/Knockout.js application to use the React library]
    Tuesday, March 22nd 2016

    It's important to stay up-to-date with technology trends and popular frameworks. That was part of the reason I wrote this blog using Node.js and it's part of the reason I recently converted a project to use the React library. That project was PassWeb, a simple, secure cloud-based password manager. I wrote PassWeb almost two years ago and use it nearly every day. If you're interested in how it works, please read the introductory blog post about PassWeb. For the purposes of this post, the thing to know is that PassWeb is built on the popular jQuery and Knockout.js frameworks.

    To be clear, both frameworks are perfectly good - but switching was a great opportunity to learn about React. :)


    The original architecture was pretty much what you'd expect: application logic lives in a JavaScript file and the user interface lives in an HTML file. My goal when converting to React was to make as few changes to the logic as possible in order to minimize the risk of introducing behavioral bugs. So I worked in stages.

    Having performed the bulk of the migration, all that remained was to identify and fix the handful of bugs that got introduced along the way.


    While JSX isn't required to use React, it's a natural fit and I chose JSX so I could get the full React experience. Putting JSX in the browser means using a transpiler to convert the embedded HTML to JavaScript. Babel provides excellent support for this via the React preset and was easy to work with. Because I was now running code through a transpiler, I also enabled the ES2015 Preset which supports newer features of the JavaScript language like let, const, and lambda expressions. I only scratched the surface of ES2015, but it was nice to be able to do so for "free".
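
    For anyone curious about the setup: with the babel-preset-react and babel-preset-es2015 packages installed, a minimal .babelrc enables both presets:

    {
      "presets": ["es2015", "react"]
    }

    Compilation (and the minification mentioned below) is then a single CLI call - the file names here are hypothetical:

    babel passweb.jsx --out-file passweb.js --minified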

    One thing I noticed as I migrated more and more code was that much of what I was writing was boilerplate to deal with the propagation of state to and from (observable) properties. I captured this repetitive code in three helper methods and doing so significantly simplified components. (Projects like ReactLink formalize this pattern within the React ecosystem.)
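
    To give a flavor of those helpers, here's a minimal sketch of the pattern (the name linkState and the details are mine, not PassWeb's actual code): it returns the value/onChange pair that binds an input to a named property of component state.

    // Link an input element to this.state[name] (illustrative sketch)
    function linkState(component, name) {
      return {
        value: component.state[name],
        onChange: function(event) {
          var update = {};
          update[name] = event.target.value;
          component.setState(update);
        }
      };
    }
    // Usage inside render: <input type="text" {...linkState(this, "filter")}/>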


    Something I was curious about was how performance would differ after converting to React. For the most part, things were fast enough before that there was no need to optimize - except for one scenario: filtering the list of items interactively. I'd already tuned the Knockout implementation for better performance there by toggling the visibility (CSS display:none) of unfiltered items instead of removing and re-adding them to/from the DOM.

    When I converted to React, I used the simplest implementation and - unsurprisingly - this scenario performed worse. The first thing I did was implement the shouldComponentUpdate function on the component corresponding to each list item (as recommended by the Advanced Performance section of the docs). React's built-in performance tools are very useful and quickly showed the need for this optimization (as well as confirming the benefits). Two helpful posts that discuss the topic further are Optimizing React Performance using keys, component life cycle, and performance tools and Performance Engineering with React.
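
    The relevant technique looks roughly like this (a sketch, not PassWeb's actual component):

    // Skip re-rendering a list item unless the data it displays has changed
    var ListItem = React.createClass({
      shouldComponentUpdate: function(nextProps, nextState) {
        return this.props.item !== nextProps.item;
      },
      render: function() {
        return <li>{this.props.item.name}</li>;
      }
    });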

    Implementing shouldComponentUpdate was a good start, but I had the same basic problem that adding and removing hundreds of elements just wasn't snappy. So I made the same visibility optimization, introducing another component to act as a thin wrapper around the existing one and deal exclusively with visibility. After that, the overall performance of the filter scenario was improved to approximate parity. (Actually, React was still a little slower for the 10,000 item case, but fared better in other areas, and I'm comfortable declaring performance roughly equivalent between the two implementations.)
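
    The wrapper amounts to something like this (again a sketch; the names are mine):

    // Thin wrapper that hides its child via CSS instead of unmounting it
    var VisibilityWrapper = React.createClass({
      render: function() {
        return (
          <div style={{ display: this.props.visible ? "" : "none" }}>
            {this.props.children}
          </div>
        );
      }
    });
    // Usage: <VisibilityWrapper visible={matchesFilter(item)}><ListItem item={item}/></VisibilityWrapper>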

    Other considerations are complexity and size. Two frameworks have been replaced by one, so that's a pretty clear win on the complexity side. Size is a little murkier, though. The minified size of the React framework is a little smaller than the combined sizes of jQuery and Knockout. However, the size of the new JSX file is notably larger than the templated HTML it replaces (recall that the code for logic stayed basically the same). And compiling JSX tends to expand the size of the code. Fortunately, Babel lets you minify scripts and that's enough to offset most of the growth. In the end, the React version of PassWeb is slightly smaller than the jQuery/Knockout version - but not enough to be the sole reason to convert.


    Now that the dust has settled, would I do it all over again? Definitely! :)

    Although there weren't dramatic victories in performance or size, I like the modular approach React encourages and feel it may lead to simpler code. I also like that React combines UI logic and presentation better and allowed me to completely gut the HTML file (which now contains only head and script tags). I also see value in unifying an application's state into one place (formalized by libraries like Redux), though I deliberately didn't explore that here. Most importantly, this was a good learning experience and I really enjoyed getting to know React.

    I'll definitely consider React for my next project - maybe even finding an excuse to explore React Native...

    Tags: Technical Utilities Web
  • Catch common Markdown mistakes as you make them [markdownlint is a Visual Studio Code extension to lint Markdown files]
    Tuesday, December 8th 2015

    The lightweight, cross-platform Visual Studio Code editor recently gained support for extensions, third party packages that add or enhance capabilities of the tool. Of particular interest to me are linters, syntax checkers that help avoid mistakes and maintain consistency when working with a language (either code or markup). I've previously written about markdownlint, a Node.js linter for the Markdown markup language. After looking at the VS Code API, it seemed straightforward to create a markdownlint extension for Code. I did so and published markdownlint to the extension gallery where it can be installed via the command ext install markdownlint. What's nice about editor integration for a linter is that feedback is immediate and interactive: mistakes are highlighted as they're made and it's easy to click a link for information about any rule violation.

    If linting Markdown is something that interests you, please try the markdownlint extension for VS Code and share your feedback!

    Tags: Miscellaneous Node.js Technical
  • It's alive ... photo! [Live Photos via Web Components]
    Wednesday, November 18th 2015

    I'd been meaning to learn more about the Web Components standard and recently found the inspiration to do so in the form of a small project to explore the idea of bringing Apple's "Live Photo" experience to the web:

    Apple introduced Live Photos with iOS 9, a feature that automatically associates a short video with every picture that's taken. I was skeptical at first, wondering how relevant this would be for static content; and it turns out not to be all that compelling for some kinds of photos. But for dynamic scenes or people in motion, the video can add some really interesting context!

    Live Photos on iOS are (naturally) smooth and easy to use. I wondered what it might be like to bring a similar experience to the web. I'd also been looking for a reason to explore Web Components. And so live-photo-web was born!
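
    The core idea is simple enough to sketch (using today's Custom Elements syntax rather than the v0 API available at the time; the element and attribute names below are illustrative, not live-photo-web's actual markup):

    // <live-photo image="..." video="..."> swaps a still image for a short
    // video while the pointer is held down, then swaps back when it ends
    customElements.define("live-photo", class extends HTMLElement {
      connectedCallback() {
        var img = document.createElement("img");
        img.src = this.getAttribute("image");
        var vid = document.createElement("video");
        vid.src = this.getAttribute("video");
        vid.style.display = "none";
        this.appendChild(img);
        this.appendChild(vid);
        this.addEventListener("mousedown", function() {
          img.style.display = "none";
          vid.style.display = "";
          vid.play();
        });
        vid.addEventListener("ended", function() {
          vid.style.display = "none";
          img.style.display = "";
        });
      }
    });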

    To find out more, please visit live-photo-web on GitHub and/or try out the interactive demo!

    Tags: Technical Web
  • Pie in the Sky-Hole [A Pi-Hole in the cloud for ad-blocking via DNS]
    Monday, August 24th 2015

    Inspired by Marco Arment's recent post about blocking advertisements on the web, I decided to explore the same idea. However, while Marco focuses on the annoyance of advertisements, I am interested in the security benefits of removing them. There have been numerous incidents of otherwise respectable websites compromising the security of their users due to the advertisements they include. Searches for "web site hacked 'ad network'" on Google and Bing provide some examples; another is this XSS attack on Troy Hunt's site, which is interesting thanks to the detailed analysis Troy provides. Popular sites of all kinds have been compromised in this way, and one might argue they should be treated as attackers because of the approach used to serve third-party ads.

    Marco's article describes an in-browser solution for ad-blocking, but I prefer something that automatically protects all the machines on my network (at least, while they're using the network; see below). So I set out looking for something that works at the network level and came across Pi-Hole, a DNS-based ad-blocker for the Raspberry Pi. Aside from the fact that I don't own a Pi, this seemed like exactly what I wanted. ;)

    Fortunately, there are no actual dependencies on Pi hardware, so I decided to create my own Pi-Hole on a server in the cloud - thus the name "Sky-Hole". To do so, I opened the Microsoft Azure Portal, created a small virtual machine running Ubuntu Server 15.04, and configured it according to the manual instructions for Pi-Hole (with a few customizations outlined below). Then I updated my wireless router to use Sky-Hole as the DNS server for my home network - and all my devices stopped showing advertisements!


    I used a minimal set of steps to configure the Sky-Hole and list them below so they're easy to reproduce. I made a couple of tweaks to the Pi-Hole process along the way and explain them in turn.

    First, create a virtual machine to run everything on (I've used both Microsoft Azure and Amazon Web Services, but any provider should do). Then, install dnsmasq:

    sudo apt-get -y install dnsmasq
    sudo update-rc.d dnsmasq enable
    sudo mv /etc/dnsmasq.conf /etc/dnsmasq.orig
    sudo nano /etc/dnsmasq.conf

    Configure dnsmasq.conf as follows (replacing "sky-hole" on the last line with the host name of your virtual machine):
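
    # Representative dnsmasq.conf for Sky-Hole (assumes Google's public DNS
    # as the upstream resolver; the addn-hosts/host-record lines are the
    # important ones and are discussed below)
    domain-needed
    bogus-priv
    no-resolv
    server=8.8.8.8
    server=8.8.4.4
    interface=eth0
    cache-size=10000
    addn-hosts=/etc/pihole/gravity.list
    host-record=sky-hole,127.0.0.1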


    The addn-hosts option is meant to be optional, but I needed it because /etc/hosts was not updated automatically. The host-record option was necessary to avoid a "sudo: unable to resolve host" error which showed up whenever I enabled dnsmasq. (Though this may be an artifact of the default virtual machine configuration under Azure.)

    Update 2015-08-30: host-record was similarly necessary on AWS, where the automatically-assigned host name was of the form ip-123-123-123-123.

    Now, download the Pi-Hole script and run it to generate the list of domain names to block:

    sudo curl -o /usr/local/bin/
    sudo chmod 755 /usr/local/bin/
    sudo /usr/local/bin/
    sudo sed -i "s/^[0-9\.]\+\s/0.0.0.0 /g" /etc/pihole/gravity.list

    The last line is my own and replaces the virtual machine's IP address with an unusable address when redirecting undesirable sites. Because I'm not running a web server on the Sky-Hole, this seems like a more appropriate way to block unwanted domain names. (Besides, hostname -I in Azure reports the virtual machine's internal address which is on a private network.)

    Restart dnsmasq to apply the changes:

    sudo service dnsmasq restart

    Now, test things locally via ping, dig, nslookup (or similar) to verify that desirable domain names are returned as-is and undesirable ones are blocked by returning the unusable address. Assuming that's the case, update the virtual machine to accept incoming UDP traffic on port 53 (per the DNS specification) and test again from a different machine. If everything is working as expected, configure your router to use the Sky-Hole's public IP address for DNS resolution. This automatically applies to all devices on the local network and avoids the need to update each one manually.
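
    For example, from the Sky-Hole itself (the domain names below are just examples; the second should be on the blocked list and return the unusable address):

    dig example.com @localhost +short
    dig doubleclick.net @localhost +short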

    Update 2015-08-30: You may also want to enable TCP traffic on port 53 (per RFC 5966).

    Congratulations, you're done!


    • The nice thing about this approach is that it covers all the machines on your network. However, it can only protect machines when they're connected to that network. Taking a phone or tablet elsewhere or using cellular data exempts a device from this kind of protection.
      • So this may be an argument in favor of per-device ad-blocking - though perhaps as a strategy to be used in addition to (rather than instead of) a network-wide approach.
    • When creating the virtual machine, I used the Basic A1 size which would cost about $34.97 per month on Azure (though I don't plan to leave it running very long).
      • I tried the A0 size first (which would have cost $13.39 per month on Azure), but it ran out of memory building the domain list, seemingly due to this known issue.
    • As I note above, I chose not to configure a local web server on my Sky-Hole. While doing so offers interesting benefits, it didn't seem compelling for the purposes of this experiment and I preferred to keep things simple. Should you choose to, directions are available in the Pi-Hole documentation.
    • If you end up using Pi-Hole like this (or on its own) please consider donating to the author, Jacob Salmela, to help support his work.


    I've only been running Sky-Hole for a couple of days, but the usability and performance improvements for some sites are quite noticeable. More importantly, it seems to me the browsing experience is necessarily safer by virtue of removing not just a subset of traffic, but the subset which is most likely to contain unwanted content.

    As an experiment and a learning experience, Sky-Hole has been a successful side-project. I hope others find it interesting or thought-provoking and I welcome comments on improving or enhancing the approach!

    Tags: Miscellaneous Technical Web
  • Not romantically binding [promise-ring wraps Node.js callbacks with native ES6 Promises]
    Monday, July 20th 2015

    JavaScript Promises are a powerful way of working with asynchronous code. They make sequencing operations easy and offer a clear, predictable way to handle errors that might occur along the way. Much has been written about the benefits of Promises and I won't try to repeat it here.

    What I do hope to do is make Promises a slightly more natural part of the Node.js development experience. In version 0.12.* (as well as in io.js), ES6 Promises are natively available. But the standard set of modules (such as File System) still use their original callback-based design and there's a bit of a disconnect between how you might want to write something and how you're able to. Fortunately, most of the Promise libraries that are already available include wrappers to convert callback-based functions into ones that return a Promise. However, most of those libraries assume you'll be using their custom implementation of Promise (from the "olden days" when that was the only option). And while different Promises/A+ implementations are meant to be interoperable, it seems silly to pull in a second Promise implementation when a perfectly good one is already available.

    That's where promise-ring comes in: it's a tiny npm package that provides functions to convert typical callback-based APIs into their Promise-based counterparts using the V8 JavaScript engine's native Promise implementation. Briefly:

    promise-ring is a small, simple library with no dependencies that eases the use of native JavaScript Promises in projects without a Promise library.

    Documentation is available in the README along with runnable samples demonstrating the use of each API. It's all quite simple and exactly what you'd expect. A bonus feature is the wrapAll function which makes it easier to work with modules that expose many different callback-based functions (such as the File System module; see below).
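
    Under the covers, the technique is straightforward; here's a simplified sketch of such a wrapper (not promise-ring's actual implementation):

    // Convert fn(args..., callback) into a function returning a Promise
    function wrap(context, fn) {
      return function() {
        var args = Array.prototype.slice.call(arguments);
        return new Promise(function(resolve, reject) {
          args.push(function(err, result) {
            // Standard Node convention: error first, result second
            if (err) {
              reject(err);
            } else {
              resolve(result);
            }
          });
          fn.apply(context, args);
        });
      };
    }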

    For an example of using promise-ring and Promises to simplify code, here is a typical callback-based snippet to copy a file onto itself:

    var fs = require("fs");
    var file = "file.txt";   // example values
    var encoding = "utf8";
    // Copy a file onto itself using callbacks
    fs.stat(file, function(err) {
      if (err) {
        console.error(err);
      } else {
        fs.readFile(file, encoding, function(errr, content) {
          if (errr) {
            console.error(errr);
          } else {
            fs.writeFile(file, content, encoding, function(errrr) {
              if (errrr) {
                console.error(errrr);
              } else {
                console.log("Copied " + file);
              }
            });
          }
        });
      }
    });

    And here's the same code converted to use Promises via promise-ring:

    var pr = require("promise-ring");
    var fsp = pr.wrapAll(require("fs"));
    // Copy a file onto itself using Promises
    fsp.stat(file)
      .then(function() {
        return fsp.readFile(file, encoding);
      })
      .then(function(content) {
        return fsp.writeFile(file, content, encoding);
      })
      .then(function() {
        console.log("Copied " + file);
      })
      .catch(console.error);

    The second implementation is more concise, easier to follow, and DRY-er. That's the power of Promises! :)

    Find out more by visiting promise-ring on GitHub or promise-ring in the npm gallery.

    Tags: Node.js Technical
  • Lint-free documentation [markdownlint is a Node.js style checker and lint tool for Markdown files]
    Tuesday, May 12th 2015

    I'm a strong believer in using static analysis tools to identify problems and catch mistakes. The Node.js/io.js community has some great options for linting JavaScript code (ex: JSHint and ESLint), and I use them regularly. But code isn't the only important asset - documentation can be just as important to a project's success.

    The open-source community has pretty much standardized on Markdown for documentation which is a great choice because it's easy to read, write, and understand. That said, Markdown has a syntax, so there are "right" and "wrong" ways to do things - and not all parsers handle nuances the same way (though the CommonMark effort is trying to standardize). In particular, there are constructs that can lead to missing/broken text in some parsers but which are not obviously wrong in the original Markdown.

    To show what I mean, I created a Gist of common Markdown mistakes. If you're not a Markdown expert, you might learn something by comparing the source and output. :)

    Aside: The Markdown parser used by GitHub is quite good - but many issues are user error and it can't (yet) read your mind.


    You shouldn't need to be a Markdown expert to avoid silly mistakes - that's what we have computers for. When I looked around for a Node-based linter, I didn't see anything - but I did find a very nice implementation for Ruby by Mark Harrison. I don't tend to have Ruby available in my development environment, but I had an itch to scratch, so I installed it and added a couple of rules to Mark's tool for the checks I wanted. Mark kindly accepted the corresponding pull requests, and all was well.

    Except that once I'd tasted of the fruit of Markdown linting, I wanted to integrate it into other workflows - many of which are exclusively Node-based. I briefly entertained the idea of creating a Node package to install Ruby then use it to install and run a Ruby gem - but that made my head hurt...


    So I prototyped a Node version of markdownlint by porting a few rules over and then ran the idea by Mark. He was supportive (and raised some great points!), so I gradually ported the rest of the rules to JavaScript with the same numbering/naming system to make it easy for people to migrate between the two tools. Mark already had a fantastic test infrastructure and great documentation for rules, so I shamelessly reused both in the Node version. Configuration for JavaScript tools is typically JSON, so the Node version uses a slightly different format than Ruby (though both are simple/obvious). I started with a fully asynchronous API for efficiency, but ended up adding a synchronous version for scenarios where that's more convenient. I strived to achieve functional parity with the Ruby implementation (and continue to do so as Mark makes updates!), but duplicating the CLI was a non-goal (please have a look at the mdl gem if that's what you need).

    If this sounds interesting, please have a look at markdownlint on GitHub. As of this writing, it supports the same set of ~40 rules that the Ruby implementation does - you can read all about them in Mark's fantastic documentation. markdownlint exposes a single API which can be called in an asynchronous or synchronous manner and accepts an options object to identify the files/strings to lint and the set of rules to apply. It returns a simple object that lists the items that were checked along with the line numbers for any violations. The documentation shows all of this and includes examples of calling markdownlint from both gulp and Grunt.
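
    For example, the asynchronous form looks like this (a representative subset of the options):

    var markdownlint = require("markdownlint");
    var options = {
      "files": [ "README.md" ],
      "strings": {
        "intro": "#Heading without a space"
      }
    };
    markdownlint(options, function(err, result) {
      if (!err) {
        // One line per violation: file/string name, line number, rule
        console.log(result.toString());
      }
    });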


    To make sure markdownlint works well, I've integrated it into some of my own projects, including this blog which I wrote specifically to allow authoring in Markdown. That's a nice start, but it doesn't prove markdownlint can handle larger projects with significant documentation written by different people at different times. For that you'd need to integrate with a project like ESLint which has extensive documentation that's entirely Markdown-based.

    So I did. :) Supporting ESLint was one of the motivating factors behind porting markdownlint to Node in the first place: I love the tool and use it in all my projects. The documentation is excellent, but every now and then I'd come across weird or broken text. After submitting a couple of pull requests with fixes, I decided adding a Markdown linter to their test script would be a better way to keep typos out of the documentation. It turns out this was on the team's radar as well, and they - especially project owner Nicholas - were very helpful and accommodating as I introduced markdownlint and tweaked things to satisfy some of the rules.


    At this point, maybe I've convinced you markdownlint works for my own purposes and that it works for some other purposes, but it's likely you have special requirements or would like to "try before you buy". (Which seems an ironic thing to say about free software, but there's a cost to everything, so maybe it's not that unreasonable after all.) Well, I have just the thing for you:

    An interactive markdownlint demo that runs in the browser!

    Although browser support was not (is not!) a goal, the relevant code is all JavaScript with just one dependency (that itself offers browser support) and only two methods that need polyfills (trimLeft/trimRight). So it was actually fairly straightforward (with some help from Browserify) to create a standalone, offline-enabled web page that lets anyone use a (modern) browser to experiment with markdownlint and validate arbitrary content. To make it super easy to get started, I made some deliberate mistakes in the sample content for the demo - feel free to fix them for me. :)


    In summary:

    • Markdown is great
    • It's easy to read and write
    • Sometimes it doesn't do what you think
    • There are tools to help
    • markdownlint is one of them
    • Get it for Ruby or Node
    • Or try it in the browser
    Tags: Node.js Technical Web
  • Extensibility is a wonderful thing [A set of Visual Studio Code tasks for common npm functionality in Node.js and io.js]
    Thursday, April 30th 2015

    Yesterday at its Build conference, Microsoft released the Visual Studio Code editor which is a lightweight, cross-platform tool for building web and cloud applications. I've been using internal releases for a while and highly recommend trying it out!

    One thing I didn't know about until yesterday was support for Tasks to automate common steps like build and testing. As the documentation shows, there's already knowledge of common build frameworks, including gulp for Node.js and io.js. But for simple Node projects I like to automate via npm's scripts because they're simple and make it easy to integrate with CI systems like Travis. So I whipped up a simple tasks.json for Code that handles build, test, and lint for typical npm configurations. I've included it below for anyone who's interested.

    Note: Thanks to metadata, the build and test tasks are recognized as such by Code and easily run with the default hotkeys Ctrl+Shift+B and Ctrl+Shift+T.



      "version": "0.1.0",
      "command": "npm",
      "isShellCommand": true,
      "suppressTaskName": true,
      "tasks": [
          // Build task, Ctrl+Shift+B
          // "npm install --loglevel info"
          "taskName": "install",
          "isBuildCommand": true,
          "args": ["install", "--loglevel", "info"]
          // Test task, Ctrl+Shift+T
          // "npm test"
          "taskName": "test",
          "isTestCommand": true,
          "args": ["test"]
          // "npm run lint"
          "taskName": "lint",
          "args": ["run", "lint"]

    Updated 2015-05-02: Added --loglevel info to npm install for better progress reporting

    Updated 2016-02-27: Added isShellCommand, suppressTaskName, and updated args to work with newer versions of VS Code

    Tags: Node.js Miscellaneous
  • Solving puzzles at 30,000 feet [An iterative solution for the "Is this a binary search tree?" programming problem]
    Tuesday, April 7th 2015

    Sitting on a plane recently looking for a distraction, I recalled a programming challenge by James Michael Hare: Little Puzzlers-Is Tree a Binary Search Tree?. All I had to work with was a web browser, so I used JavaScript to come up with a solution. James subsequently blogged a recursive implementation in C# which is quite elegant. Wikipedia's Binary search tree page uses the same approach and C++ for its verification sample.

    Because I did things a little differently, I thought I'd share - along with a few thoughts:

    /**
     * Determines if a tree of {value, left, right} nodes is a binary search tree.
     * @param {Object} root Root of the tree to examine.
     * @returns {Boolean} True iff root is a binary search tree.
     */
    function isBinarySearchTree(root) {
      var wrapper, node, stack = [{ node: root }];
      while (wrapper = stack.pop()) {
        if (node = wrapper.node) {
          if ((node.value <= wrapper.min) || (wrapper.max <= node.value)) {
            return false;
          }
          stack.push({ node: node.left, min: wrapper.min, max: node.value },
                     { node: node.right, min: node.value, max: wrapper.max });
        }
      }
      return true;
    }
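
    A quick sanity check with a couple of hand-built example trees:

    var validTree = {
      value: 2,
      left: { value: 1, left: null, right: null },
      right: { value: 3, left: null, right: null }
    };
    var invalidTree = {
      value: 2,
      left: { value: 3, left: null, right: null },
      right: null
    };
    console.log(isBinarySearchTree(validTree));   // true
    console.log(isBinarySearchTree(invalidTree)); // false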


    • Tree nodes are assumed to have a numeric value and references to their left and right nodes (both possibly null).
      • I used the name value (vs. data) because it is slightly more specific.
    • I decided on an iterative algorithm because it has two notable advantages over recursion:
      • In the worst case for a tree with N nodes, an iterative solution has bookkeeping for N/2 nodes (when starting to process the leaf nodes of a balanced tree assuming nodes were queued) whereas a recursive solution has bookkeeping for all N nodes (when processing the deepest node of a completely unbalanced tree).
        • Because there are two recursive calls, I don't think tail recursion can be counted on to fix the worst-case behavior.
      • The memory used for bookkeeping by an iterative solution comes from the heap which is generally much larger than the thread stack.
      • To be fair, neither advantage is likely to be significant in practice - but they make good discussion points during an interview. :)
    • The iterative algorithm has a disadvantage:
      • Bookkeeping requires an additional object type (wrapper in the code above) which associates the relevant min and max bounds with pending node instances.
        • ... unless you avoid the wrapper by augmenting the node elements themselves.
          • ... which is quite easy in JavaScript thanks to its dynamic type system.
        • The creation/destruction of wrapper objects creates additional memory pressure.
          • Although these objects are short-lived and therefore low-impact for typical garbage collection algorithms.
    • I intended the code to be concise, so I made use of assignments in conditional expressions.
    • The code uses a stack (vs. a queue) because stacks tend to be simpler than queues - especially when implemented with an array.
    • I made use of the fact that comparing a number to undefined evaluates to false so I could avoid specifying explicit minimum/maximum values (as in the Wikipedia example) or making HasValue checks (as in James's example).
    • If you have a different approach or a suggestion to simplify this one, please share!
    • And note: I'm interested in algorithmic changes, not tweaks like removing extra parentheses. :)
    Tags: Miscellaneous Technical
  • Supporting both sides of the Grunt vs. Gulp debate [check-pages is a Gulp-friendly task to check various aspects of a web page for correctness]
    Tuesday, February 10th 2015

    A few months ago, I wrote about grunt-check-pages, a Grunt task to check various aspects of a web page for correctness. I use grunt-check-pages when developing my blog and have found it very handy for preventing mistakes and maintaining consistency.

    Two things have changed since then:

    1. I released multiple enhancements to grunt-check-pages that make it more powerful
    2. I extracted its core functionality into the check-pages package which works well with Gulp


    First, an overview of the improvements; here's the change log for grunt-check-pages:

    • 0.1.0 - Initial release, support for checkLinks and checkXhtml.
    • 0.1.1 - Tweak README for better formatting.
    • 0.1.2 - Support page-only mode (no link or XHTML checks), show response time for requests.
    • 0.1.3 - Support maxResponseTime option, buffer all page responses, add "no-cache" header to requests.
    • 0.1.4 - Support checkCaching and checkCompression options, improve error handling, use gruntMock.
    • 0.1.5 - Support userAgent option, weak entity tags, update nock dependency.
    • 0.2.0 - Support noLocalLinks option, rename disallowRedirect option to noRedirects, switch to ESLint, update superagent and nock dependencies.
    • 0.3.0 - Support queryHashes option for CRC-32/MD5/SHA-1, update superagent dependency.
    • 0.4.0 - Rename onlySameDomainLinks option to onlySameDomain, fix handling of redirected page links, use page order for links, update all dependencies.
    • 0.5.0 - Show location of redirected links with noRedirects option, switch to crc-hash dependency.
    • 0.6.0 - Support summary option, update crc-hash, grunt-eslint, nock dependencies.
    • 0.6.1 - Add badges for automated build and coverage info to README (along with npm, GitHub, and license).
    • 0.6.2 - Switch from superagent to request, update grunt-eslint and nock dependencies.
    • 0.7.0 - Move task implementation into reusable check-pages package.
    • 0.7.1 - Fix misreporting of "Bad link" for redirected links when noRedirects enabled.

    There are now more things you can validate and better diagnostics during validation. For information about the various options, visit the grunt-check-pages package in the npm repository.


    Secondly, I started looking into Gulp as an alternative to Grunt. My blog's Gruntfile.js is the most complicated I have, so I tried converting it to a gulpfile.js. Conveniently, existing packages supported everything I already do (test, LESS, lint) - though not what I use grunt-check-pages for (no surprise).

    Clearly, the next step was to create a version of the task for Gulp - but it turns out that's not necessary! Gulp's task structure is simple enough that invoking standard asynchronous helpers is easy to do inline. So all I really needed was to factor out the core functionality into a reusable method.

    Here's how that looks:

    /**
     * Checks various aspects of a web page for correctness.
     * @param {object} host Specifies the environment.
     * @param {object} options Configures the task.
     * @param {function} done Callback function.
     * @returns {void}
     */
    module.exports = function(host, options, done) { ... };

    With that in place, it's easy to invoke check-pages - whether from a Gulp task or something else entirely. The host parameter handles log/error messages (pass console for convenience), options configures things in the usual fashion, and the done callback gets called at the end (with an Error parameter if anything went wrong).

    Like so:

    var gulp = require("gulp");
    var checkPages = require("check-pages");

    gulp.task("checkDev", [ "start-development-server" ], function(callback) {
      var options = {
        pageUrls: [
          // ... (list of local page URLs to check)
        ],
        checkLinks: true,
        onlySameDomain: true,
        queryHashes: true,
        noRedirects: true,
        noLocalLinks: true,
        linksToIgnore: [
          // ... (links to skip)
        ],
        checkXhtml: true,
        checkCaching: true,
        checkCompression: true,
        maxResponseTime: 200,
        userAgent: 'custom-user-agent/1.2.3',
        summary: true
      };
      checkPages(console, options, callback);
    });

    gulp.task("checkProd", function(callback) {
      var options = {
        pageUrls: [
          // ... (list of production page URLs to check)
        ],
        checkLinks: true,
        maxResponseTime: 500
      };
      checkPages(console, options, callback);
    });

    As a result, grunt-check-pages has become a thin wrapper over check-pages and there's no duplication between the two packages (though each has a complete set of tests just to be safe). For information about the options above, visit the check-pages package in the npm repository.


    The combined effect is that I'm able to do a better job validating web site updates and I can use whichever of Grunt or Gulp feels more appropriate for a given scenario. That's good for peace of mind - and a great way to become more familiar with both tools!

    Tags: Node.js Technical Web
  • Everything old is new again [crc-hash is a Node.js Crypto Hash implementation for the CRC algorithm]
    Tuesday, January 27th 2015

    Yep, another post about hash functions... True, I could have stopped when I implemented CRC-32 for .NET or when I implemented MD5 for Silverlight. Certainly, sharing the code for four versions of ComputeFileHashes could have been a good laurel upon which to rest.

    But then I started using Node.js, and found one more hash-oriented itch to scratch. :)

    From the project page:

    Node.js's Crypto module implements the Hash class which offers a simple Stream-based interface for creating hash digests of data. The createHash function supports many popular algorithms like SHA and MD5, but does not include older/simpler CRC algorithms like CRC-32. Fortunately, the crc package in npm provides comprehensive CRC support and offers an API that can be conveniently used by a Hash subclass.

    crc-hash is a Crypto Hash wrapper for the crc package that makes it easy for Node.js programs to use the CRC family of hash algorithms via a standard interface.

    With just one (transitive!) dependency, crc-hash is lightweight. Because it exposes a common interface, it's easy to integrate with existing scenarios. Thanks to crc, it offers support for all the popular CRC algorithms. You can learn more on the crc-hash npm page or the crc-hash GitHub page.
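
    Usage is just what you'd expect from a Crypto Hash - something like this sketch (the "crc32" algorithm name comes from the underlying crc package):

    var crcHash = require("crc-hash");
    // Create a CRC-32 hash, stream data through it, then read the digest
    var hash = crcHash.createHash("crc32");
    hash.update("The quick brown fox jumps over the lazy dog");
    console.log(hash.digest("hex"));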


    • One of the great things about the Node community is the breadth of packages available. In this case, I was able to leverage the comprehensive crc package by alexgorbatchev for all the algorithmic bits.
    • After being indifferent on the topic of badges, I discovered Shields.io and its elegance won me over. You can see the five badges I picked near the top of the README on the npm/GitHub pages above.
    Tags: Node.js Technical Web