The blog of dlaa.me

Posts from July 2014

Just because I'm paranoid doesn't mean they're not out to get me [Open-sourcing PassWeb: A simple, secure, cloud-based password manager]

I've used a password manager for many years because I feel that's the best way to maintain different (strong!) passwords for every account. I chose Password Safe when it was one of the only options and stuck with it until a year or so ago. Its one limitation was becoming more and more of an issue: it only runs on Windows and only on a PC. I'm increasingly using Windows on other devices (ex: phone or tablet) or running other operating systems (ex: iOS or Linux), and I was finding myself unable to access my passwords more and more frequently.

To address that problem, I decided to switch to a cloud-based password manager for access from any platform. Surveying the landscape, there appeared to be many good options (some free, some paid), but I had a fundamental concern about trusting important personal data to someone else. Recent, high-profile hacks of large corporations suggest that some companies don't try all that hard to protect their customers' data. Unfortunately, the incentives aren't there to begin with, and the consequences (to the company) of a breach are fairly small.

Instead, I thought it would be interesting to write my own cloud-based password manager - because at least that way I'd know the author had my best interests at heart. :) On the flip side, I introduce the risk of my own mistake or bug compromising the system. But all software has bugs - so "Better the evil you know (or can manage) than the evil you don't". Good news is that I've taken steps to try to make things more secure; bad news is that it only takes one bug to throw everything out the window...

Hence the disclaimer:

I've tried to ensure PassWeb is safe and secure for normal use in low-risk environments, but do not trust me. Before using PassWeb, you should evaluate it against your unique needs, priorities, threats, and comfort level. If you find a problem or a weakness, please let me know so I can address it - but ultimately you use PassWeb as-is and at your own risk.

With that in mind, I'm open-sourcing PassWeb's code for others to use, play around with, or find bugs in. In addition to being a good way of sharing knowledge and helping others, this will satisfy the requests for code that I've already gotten. :)

Some highlights:

  • PassWeb's client is built with HTML, CSS, and JavaScript and is an offline-enabled single-page application.
  • It runs on all recent browsers (that I've tried) and is mobile-friendly, so it works well on phones, too.
  • Entries have a title, the name/password for an account, a link to the login page, and a notes section for additional details (like answers to security questions).
  • PassWeb can generate strong, random passwords; use them as-is, tweak them, or ignore them and type your own.
  • The small server component runs on ASP.NET or Node.js (I provide both implementations and a set of unit tests).
  • Data is encrypted via AES/CBC and only ever decrypted on the client (the server never sees the user name or password); a rough sketch of this pattern follows the list.
  • The code is small, with few dependencies, and should be easy to audit.
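To make the encryption point concrete, here is a rough sketch of the encrypt-before-upload pattern. It is not PassWeb's actual code (that lives in the repository linked below), and it uses Node's built-in crypto module purely for brevity; the iteration count, key size, and salt/IV layout are assumptions made for the example.

  // Illustrative sketch only (not PassWeb's implementation): derive an AES key from
  // the user's password, then encrypt/decrypt with AES-256-CBC so only ciphertext
  // ever reaches the server. Iteration count, key size, and salt/IV handling here
  // are assumptions made for the example.
  var crypto = require("crypto");

  function deriveKey(password, salt) {
    // PBKDF2 stretches the password into a 256-bit key
    return crypto.pbkdf2Sync(password, salt, 10000, 32, "sha256");
  }

  function encrypt(plaintext, password) {
    var salt = crypto.randomBytes(16);
    var iv = crypto.randomBytes(16); // fresh IV for every encryption
    var cipher = crypto.createCipheriv("aes-256-cbc", deriveKey(password, salt), iv);
    var data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
    // Salt and IV are not secret; store them alongside the ciphertext
    return { salt: salt.toString("hex"), iv: iv.toString("hex"), data: data.toString("hex") };
  }

  function decrypt(blob, password) {
    var key = deriveKey(password, Buffer.from(blob.salt, "hex"));
    var decipher = crypto.createDecipheriv("aes-256-cbc", key, Buffer.from(blob.iv, "hex"));
    return Buffer.concat([decipher.update(Buffer.from(blob.data, "hex")), decipher.final()]).toString("utf8");
  }

  // Round trip: the server would only ever see the object returned by encrypt()
  var blob = encrypt("example.com / name: alice / password: hunter2", "master passphrase");
  console.log(decrypt(blob, "master passphrase"));

The idea is that the server stores an opaque blob; the salt and IV travel with the ciphertext, and the password itself never leaves the client.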

For details, instructions, and the code, visit the GitHub repository: https://github.com/DavidAnson/PassWeb

If you find PassWeb interesting or useful, that's great! If you have any thoughts or suggestions, please tell me. If you find a bug, let me know and I'll try to fix it.

Another tool in the fight against spelling errors. [Added HtmlStringExtractor for pulling strings, attributes, and comments from HTML/XML files and URLs]

When I blogged simple string extraction utilities for C# and JavaScript code, I thought I'd covered the programming languages that matter most to me. Then I realized my focus was too narrow; I spend a lot of time working with markup languages and want content there to be correctly spelled as well.

So I dashed off another string extractor based on the implementation of JavaScriptStringExtractor:

HtmlStringExtractor
  • Runs on Node.js.
  • Dependencies via npm: htmlparser2 for parsing, glob for globbing, and request for web access.
  • Wildcard matching includes subdirectories when ** is used.
  • Pass a URL to extract strings from the web.

HtmlStringExtractor works just like its predecessors, outputting all strings/attributes/comments to the console for redirection or processing. But it has an additional power: the ability to access content directly from the internet via URL. Because so much of the web is HTML, it seemed natural to support live extraction - which in turn makes it easier to spell-check a web site and be sure you're including "hidden" text (like title and alt attributes) that copy+paste doesn't cover.
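To give a feel for how little machinery that takes, here is a minimal sketch of the extraction step. It is not the actual HtmlStringExtractor source (that's in the repository), just htmlparser2's streaming callbacks wired up to log text, attribute values, and comments; the sample markup is made up.

  // Minimal sketch of the approach (not the actual HtmlStringExtractor source):
  // htmlparser2 fires callbacks for text, attributes, and comments as it parses,
  // so collecting strings is just a matter of logging from each handler.
  var htmlparser2 = require("htmlparser2");

  var parser = new htmlparser2.Parser({
    ontext: function (text) {
      var trimmed = text.trim();
      if (trimmed) {
        console.log(trimmed); // element text content
      }
    },
    onattribute: function (name, value) {
      if (value) {
        console.log(value); // attribute values ("hidden" text like title and alt)
      }
    },
    oncomment: function (comment) {
      console.log(comment.trim()); // comment text
    }
  });

  parser.write('<img src="logo.png" alt="Company logo"><p title="Greeting">Hello, world!</p><!-- footer markup below -->');
  parser.end();

The real tool layers glob-based file matching and request-based URL fetching on top of this kind of parsing.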

Its HTML parser is very forgiving, so HtmlStringExtractor will happily work with HTML-like languages such as XML, ASPX, and PHP. Of course, the utility of doing so decreases as the language gets further removed from HTML, but for many scenarios the results are quite acceptable. In the specific case of XML, output should be entirely meaningful, filtering out all element and attribute metadata and leaving just the "real" data for review.

In keeping with the theme of "small and simple", I didn't add an option to exclude attributes by name - but you can imagine that filtering out things like id, src, and href would do a lot to reduce noise. Who knows, maybe I'll support that in a future update. :)

For now, things are simple. The StringExtractors GitHub repository has the complete code for all three extractors.

Enjoy!

Aside: As I wrote this post, I realized there's another "language" I use regularly: JSON. Because of its simple structure, I don't think there's a need for JsonStringExtractor - but if you feel otherwise, please let me know! (It'd be easy to create.)
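For what it's worth, a hypothetical JsonStringExtractor really would be tiny; something like the following sketch (not part of the StringExtractors repository), which parses a file and recursively collects its string values.

  // Hypothetical sketch of a JsonStringExtractor (not part of the repository):
  // JSON.parse does the heavy lifting; a recursive walk collects every string value.
  var fs = require("fs");

  function extractStrings(value, results) {
    if (typeof value === "string") {
      results.push(value);
    } else if (value && typeof value === "object") {
      // Covers both objects and arrays (array indices are keys)
      Object.keys(value).forEach(function (key) {
        extractStrings(value[key], results);
      });
    }
    return results;
  }

  extractStrings(JSON.parse(fs.readFileSync(process.argv[2], "utf8")), []).forEach(function (s) {
    console.log(s);
  });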