Say goodbye to dead links and inconsistent formatting [grunt-check-pages is a simple Grunt task to check various aspects of a web page for correctness]
As part of converting my blog to a custom Node.js app, I wrote a set of tests to validate its routes, structure, content, and behavior (using mocha/grunt-mocha-test). Most of these tests are specific to my blog, but some are broadly applicable and I wanted to make them available to anyone who was interested. So I created a Grunt plugin and published it to npm:
grunt-check-pages
An important aspect of creating web sites is validating the structure and content of their pages. The `checkPages` task provides an easy way to integrate this testing into your normal Grunt workflow. By providing a list of pages to scan, the task can:
- Validate all external links point to live content (similar to the W3C Link Checker)
- Validate page structure for XHTML compliance (akin to the W3C Markup Validation Service)
Link validation is fairly uncontroversial: you want to ensure each hyperlink on a page points to valid content.
`grunt-check-pages` supports the standard HTML link types (ex: `<a href="..."/>`, `<img src="..."/>`) and makes an HTTP HEAD request to each link to make sure it's valid.
(Because some web servers misbehave, the task also tries a GET request before reporting a link broken.)
There are options to limit checking to same-domain links, to disallow links that redirect, and to provide a set of known-broken links to ignore.
(FYI: Links in draft elements (ex: `picture`) are not supported for now.)
XHTML compliance might be a little controversial. I'm not here to persuade you to love XHTML - but I do have some experience parsing HTML and can reasonably make a few claims:
- HTML syntax errors are tricky for browsers to interpret and (historically) no two work the same way
- Parsing ambiguity leads to rendering issues which create browser-specific quirks and surprises
- HTML5 is more prescriptive about invalid syntax, but nothing beats a well-formed document
- Being able to confidently parse web pages with simple tools is pleasant and quite handy
- Putting a close '/' on your `img` and `br` tags is a small price to pay for peace of mind :)
Accordingly, `grunt-check-pages` will (optionally) parse each page as XML and report the issues it finds.
grunt.initConfig({
  checkPages: {
    development: {
      options: {
        pageUrls: [
          'http://localhost:8080/',
          'http://localhost:8080/blog',
          'http://localhost:8080/about.html'
        ],
        checkLinks: true,
        onlySameDomainLinks: true,
        disallowRedirect: false,
        linksToIgnore: [
          'http://localhost:8080/broken.html'
        ],
        checkXhtml: true
      }
    },
    production: {
      options: {
        pageUrls: [
          'http://example.com/',
          'http://example.com/blog',
          'http://example.com/about.html'
        ],
        checkLinks: true,
        checkXhtml: true
      }
    }
  }
});
Something I find useful (and outline above) is to define separate configurations for development and production. My development configuration limits itself to links within the blog and ignores some that don't work when I'm self-hosting. My production configuration tests everything across a broader set of pages. This lets me iterate quickly during development while validating the live deployment more thoroughly.
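With a configuration like the one above in place, the task is wired up like any other Grunt plugin (a minimal sketch; the `test` alias name is illustrative, not part of the plugin):

```javascript
// Install first: npm install grunt-check-pages --save-dev
module.exports = function (grunt) {
  grunt.initConfig({ /* checkPages configuration as shown above */ });

  // Load the checkPages task from the installed npm package.
  grunt.loadNpmTasks('grunt-check-pages');

  // Run the development target as part of local testing:
  //   grunt test  ->  runs checkPages:development
  grunt.registerTask('test', ['checkPages:development']);
};
```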
If you'd like to incorporate `grunt-check-pages` into your workflow, you can get it via grunt-check-pages on npm or grunt-check-pages on GitHub.
And if you have any feedback, please let me know!
Footnote: grunt-check-pages is not a site crawler; it looks at exactly the set of pages you ask it to. If you're looking for a crawler, you may be interested in something like grunt-link-checker (though I haven't used it myself).