The blog of dlaa.me

Posts from January 2011

sudo localize & make me-a-sandwich [Free PseudoLocalizer class makes it easy for anyone to identify potential localization issues in .NET applications]

I've previously written about the benefits of localization and shown how to localize Windows Phone 7 applications. The techniques I describe in that post constitute a good workflow that's just as suitable for WPF and Silverlight desktop applications! But even with good processes in place, the way localization is done in the real world has some challenges...

You see, localization can be expensive: hiring people to translate an entire application, re-testing it in the newly supported locale, fixing resulting bugs, etc. So teams often wait until near the end of the release cycle - after the UI has stabilized - to start localizing the application. This is a perfectly reasonable approach, but there are invariably a few surprises - usually some variation of "What do you mean that string is hard-coded in English and can't be translated??". It sure would be nice if teams could do some kind of low-cost localization in order to identify - and fix - oversights like this long before they turn into problems...

Yep - that process is known as pseudo-localization. What pseudo-localization does is create alternate versions of resource strings by using different characters that look similar enough to the original English characters that text is still readable - but obviously "localized". (This is one of those "a picture is worth a thousand words" moments, so please check out the Wikipedia article or bear with me for just a moment...) Additionally, because some languages tend to have longer phrases than English (German, I'm looking at you!), there's often an additional aspect of string lengthening to simulate that and help detect wrapping, clipping, and the like.

Aside: It's important to remember that the character "translations" are chosen exclusively on the basis of how similar they look to the original character and not on the basis of linguistic correctness, cultural influence, or anything like that!
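
That description is easier to follow with code, so here's a minimal sketch of the general idea (this is not PseudoLocalizer's actual implementation, and the replacement table below is made up for illustration):

using System.Text;

static class PseudoLocalizeSketch
{
    // Similar-looking replacements for the lower-case English alphabet; the table is
    // illustrative and deliberately incomplete (a few letters map to themselves)
    private const string NormalCharacters = "abcdefghijklmnopqrstuvwxyz";
    private const string PseudoCharacters = "åƀçðëƒğĥïĵķłmñöþqřšŧüvŵxÿž";

    // "Translate" a string by swapping characters (case is ignored to keep things
    // short) and padding the result to simulate wordier languages like German
    public static string Transform(string value)
    {
        var result = new StringBuilder("[", value.Length + 8);
        foreach (var c in value)
        {
            var i = NormalCharacters.IndexOf(char.ToLowerInvariant(c));
            result.Append((0 <= i) ? PseudoCharacters[i] : c);
        }
        result.Append('!', value.Length / 3);
        result.Append(']');
        return result.ToString();
    }
}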

 

Here's the sample application I created for this post running in its normal English mode. (It's not very sophisticated, but it's good enough for our purposes.) Can you tell if all the strings are properly localizable? Are there any other localization concerns here?

Normal application

 

Pseudo-localization isn't a new concept and there are a variety of tools out there that do a fine job of it. However, none of the ones I found during a quick search appeared to be free, simple to use, and unencumbered by restrictions (i.e., licensing, distribution, etc.). So I thought it would be fun to write my own and share it with the community as open source under the very permissive Ms-PL license. :)

The class I've created is called PseudoLocalizer and it automatically pseudo-localizes standard .NET RESX-based resources. Using RESX-based resources is the recommended approach for building localizable Silverlight and Windows Phone 7 applications, the recommended approach for building localizable Windows Forms applications, and also a perfectly fine approach for building localizable WPF applications.

Aside: The recommended technique for localizing WPF applications is actually something else, but I have some issues with that approach and won't be discussing it here.

 

I've said it's easy to use PseudoLocalizer - all it takes is three simple steps:

  1. Add the (self-contained) PseudoLocalizer.cs file from the sample download to your project.

  2. Add the following code somewhere it will be run once and run early (ex: the application's constructor):

    #if PSEUDOLOCALIZER_ENABLED
        Delay.PseudoLocalizer.Enable(typeof(ProjectName.Properties.Resources));
    #endif
  3. Add PSEUDOLOCALIZER_ENABLED to the list of conditional compilation symbols for your project (via the Project menu, Properties item, Build tab in Visual Studio).

Done! Not only will all localizable strings be automatically pseudo-localized, but images will, too! How does one pseudo-localize an image?? Well, I considered a variety of techniques, but settled on simply inverting the colors to create a negative image. This has the nice benefit of keeping the image dimensions the same (which is sometimes important) as well as preserving any directional aspects it might have (i.e., an image of an arrow pointing left has the same meaning after being "localized").

Aside: I've never seen image pseudo-localization before, but it seems like the obvious next step. Although many images don't need to be localized (ex: a gradient background), some images have text in them or make assumptions that aren't true in all locales (ex: thumbs up means "good"). So it seems pretty handy to know which images aren't localizable! :)
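
For illustration, here's a rough sketch (again, not the actual PseudoLocalizer code, which may differ in details) of inverting a System.Drawing bitmap; GetPixel/SetPixel keep the sketch short, even though they're not the fastest approach:

using System.Drawing;

static class PseudoLocalizeImageSketch
{
    // Produce a "negative" of the original image; the dimensions and any directional
    // meaning are preserved, but the result is obviously not the original
    public static Bitmap InvertColors(Bitmap original)
    {
        var inverted = new Bitmap(original.Width, original.Height);
        for (var y = 0; y < original.Height; y++)
        {
            for (var x = 0; x < original.Width; x++)
            {
                var pixel = original.GetPixel(x, y);
                inverted.SetPixel(x, y, Color.FromArgb(pixel.A, 255 - pixel.R, 255 - pixel.G, 255 - pixel.B));
            }
        }
        return inverted;
    }
}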

 

Okay, let's see how well you did on the localization quiz! Here's the sample application with PseudoLocalizer enabled:

Pseudo-localized application

Hmm... While it's clear most of the text was correctly localizable (and therefore automatically pseudo-localized), it appears the lazy developer (uh, wait, that's me...) forgot to put one of the strings in the RESX file: "Goodbye" is hard-coded in English in the XAML. :( Fortunately, the image is localizable, so it can be changed if necessary. Unfortunately, the fixed-width application window seems to be a little too narrow for longer translations (like German) and is clipping two of the pseudo-localized strings! Nuts, I'm afraid this application may need a bit more work before we can feel good about our ability to ship versions for other languages...

 

[Click here to download the complete source code for PseudoLocalizer and the WPF sample application shown above.]

 

Notes:

  • What I've described in this post works only on WPF; Silverlight has a couple of limitations that prevent what I've done from working exactly as-is. My next blog post will discuss how to overcome those limitations and use PseudoLocalizer on Silverlight, too!

  • The way PseudoLocalizer works so seamlessly is by examining the type of the generated resource class and using (private) reflection to swap in a custom ResourceManager class for the default one. This PseudoLocalizerResourceManager works the same as the default one for loading resources - then it post-processes strings and bitmaps to provide its pseudo-localization effects. Private reflection (i.e., examining and/or modifying the internal data of an object) is generally a bad practice - but I tend to believe it's acceptable here because the target class is simple, its (generated) code is part of the project, and PseudoLocalizer is only ever going to be used as a development tool (i.e., never in a released application).
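
    For the curious, here's a simplified sketch of what that swap might look like (this is not the actual implementation; in particular, the private field name "resourceMan" is an assumption about what the RESX designer generates, and the real PseudoLocalizerResourceManager does much more than decorate strings):

      using System.Globalization;
      using System.Reflection;
      using System.Resources;

      static class PseudoLocalizerEnableSketch
      {
          public static void Enable(System.Type resourcesType)
          {
              // The generated Resources class caches its ResourceManager in a private
              // static field; replace that instance with a pseudo-localizing subclass
              var field = resourcesType.GetField("resourceMan", BindingFlags.NonPublic | BindingFlags.Static);
              field.SetValue(null, new PseudoLocalizingResourceManager(resourcesType));
          }

          // Loads resources normally, then post-processes the strings it returns
          private class PseudoLocalizingResourceManager : ResourceManager
          {
              public PseudoLocalizingResourceManager(System.Type resourceSource) : base(resourceSource) { }

              public override string GetString(string name, CultureInfo culture)
              {
                  var value = base.GetString(name, culture);
                  return (null == value) ? null : "[=== " + value + " ===]";
              }
          }
      }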

  • For readers who want a brief overview of how to use RESX-based resources in WPF, here you go:

    1. Set the Access Modifier to "Public" in the RESX designer so the auto-generated resource property accessors will be accessible to the WPF data-binding system.

    2. Create an instance of the auto-generated RESX class (typically named "Resources") as a WPF-accessible resource:

      <Window.Resources>
          <properties:Resources x:Key="Resources" xmlns:properties="clr-namespace:PseudoLocalizerWPF.Properties"/>
      </Window.Resources>
    3. Create a Binding that references the relevant properties of that resource wherever you want to use them:

      <TextBlock Text="{Binding Path=AlphabetLower, Source={StaticResource Resources}}"/>
  • Because RESX files expose images as instances of System.Drawing.Image and WPF deals with System.Windows.Media.ImageSource, I wrote a simple IValueConverter to convert from the former to the latter. (And it's part of the source code download.) Using BitmapToImageSourceConverter follows the usual pattern for value converters:

    1. Create an instance as a resource:

      <delay:BitmapToImageSourceConverter x:Key="BitmapToImageSourceConverter" xmlns:delay="clr-namespace:Delay"/>
    2. Use that resource in the relevant Binding:

      <Image Source="{Binding Path=Image, Source={StaticResource Resources}, Converter={StaticResource BitmapToImageSourceConverter}}"/>
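
    If you're curious what's inside such a converter, here's a rough sketch of one way it could work (the shipped BitmapToImageSourceConverter may be implemented differently); it simply round-trips the System.Drawing image through a MemoryStream:

      using System;
      using System.Globalization;
      using System.IO;
      using System.Windows.Data;
      using System.Windows.Media.Imaging;

      public class BitmapToImageSourceConverterSketch : IValueConverter
      {
          public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
          {
              var bitmap = value as System.Drawing.Image;
              if (null != bitmap)
              {
                  // Save the System.Drawing image to a stream and load that stream
                  // into a WPF-friendly BitmapImage
                  var stream = new MemoryStream();
                  bitmap.Save(stream, System.Drawing.Imaging.ImageFormat.Png);
                  stream.Position = 0;
                  var imageSource = new BitmapImage();
                  imageSource.BeginInit();
                  imageSource.CacheOption = BitmapCacheOption.OnLoad;
                  imageSource.StreamSource = stream;
                  imageSource.EndInit();
                  return imageSource;
              }
              return null;
          }

          public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
          {
              throw new NotImplementedException();
          }
      }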

 

Because the cost of fixing defects increases (dramatically) as a product gets closer to shipping, it's important to find and fix issues as early as possible. Localization can be a tricky thing to get all the kinks worked out of, so it's helpful to have good tools around to ensure a team is following good practices. With its low cost, simple implementation and friction-free usage model, I'm hopeful PseudoLocalizer will become another good tool to help people localize successfully!

"Might as well face it, you're addicted to blob..." [BlobStoreApi update adds container management, fragmented response handling, other improvements, and enhanced Amazon S3 support by Delay.Web.Helpers!]

As part of my previous post announcing the Delay.Web.Helpers assembly for ASP.NET and introducing the AmazonS3Storage class to enable easy Amazon Simple Storage Service (S3) blob access from MVC/Razor/WebMatrix web pages, I made some changes to the BlobStoreApi.cs file it's built on top of. BlobStoreApi.cs was originally released as part of my BlobStore sample for Silverlight and BlobStoreOnWindowsPhone sample for Windows Phone 7; I'd made a note to update those two samples with the latest version of the code, but intended to blog about a few other things first...

 

Fragmented response handling

BlobStoreOnWindowsPhone sample

That's when I was contacted by Steve Marx of the Azure team to see if I could help out Erik Petersen of the Windows Phone team who was suddenly seeing a problem when using BlobStoreApi: calls to GetBlobInfos were returning only some of the blobs instead of the complete list. Fortunately, Steve knew what was wrong without even looking - he'd blogged about the underlying cause over a year ago! Although Windows Azure can return up to 5,000 blobs in response to a query, it may decide (at any time) to return fewer (possibly as few as 0!) and instead include a token for getting the rest of the results from a subsequent call. This behavior is deterministic according to where the data is stored in the Azure cloud, but from the point of view of a developer it's probably safest to assume it's random. :)

So something had changed for Erik's container and now he was getting partial results because my BlobStoreApi code didn't know what to do with the "fragmented response" it had started getting. Although I'd like to blame the documentation for not making it clear that even very small responses could be broken up, I really should have supported fragmented responses anyway so users would be able to work with more than 5,000 blobs. Mea culpa...

I definitely wanted to add BlobStoreApi support for fragmented Azure responses, but what about S3? Did my code have the same problem there? According to the documentation: yes! Although I don't see mention of the same "random" fragmentation behavior Azure has, S3 enumerations are limited to 1,000 blobs at a time, so that code needs the same update in order to support customer scenarios with lots of blobs. Fortunately, the mechanism both services use to implement fragmentation is quite similar and I was able to add most of the new code to the common RestBlobStoreClient base class and then refine it with service-specific tweaks in the sub-classes AzureBlobStoreClient and S3BlobStoreClient.
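
Conceptually, the new code boils down to an accumulation loop like the sketch below (the delegate and names here are hypothetical; the real implementation is asynchronous and lives in RestBlobStoreClient and its subclasses):

using System;
using System.Collections.Generic;

static class FragmentedResponseSketch
{
    // getPage is a hypothetical callback: given a continuation marker (null for the
    // first request), it returns one "page" of blob names plus the marker needed to
    // fetch the next page (null or empty once the enumeration is complete)
    public static IList<string> GetAllBlobs(Func<string, Tuple<IEnumerable<string>, string>> getPage)
    {
        var allBlobs = new List<string>();
        string marker = null;
        do
        {
            var page = getPage(marker);
            allBlobs.AddRange(page.Item1);
            marker = page.Item2;
        }
        while (!string.IsNullOrEmpty(marker));
        return allBlobs;
    }
}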

Fortunately, the extra work to support fragmented responses is all handled internally and is therefore invisible to the developer. As such, it requires no changes to existing applications beyond updating to the new BlobStoreApi.cs file and recompiling!

 

Container support

While dealing with fragmented responses was fun and a definite improvement, it's not the kind of thing most people appreciate. People don't usually get excited over fixes, but they will get weak-kneed for new features - so I decided to add container/bucket management, too! Specifically, it's now possible to use BlobStoreApi to list, create, and delete Azure containers and S3 buckets using the following asynchronous methods:

public abstract void GetContainers(Action<IEnumerable<string>> getContainersCompleted, Action<Exception> getContainersFailed);
public abstract void CreateContainer(string containerName, Action createContainerCompleted, Action<Exception> createContainerFailed);
public abstract void DeleteContainer(string containerName, Action deleteContainerCompleted, Action<Exception> deleteContainerFailed);

To be clear, each instance of AzureBlobStoreClient and S3BlobStoreClient is still associated with a specific container (the one that was passed to its constructor) and its blob manipulation methods act only on that container - but now it's able to manipulate other containers, too.
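
For example, calling the new methods looks something like the following hypothetical snippet (client is an AzureBlobStoreClient or S3BlobStoreClient instance created elsewhere, and the container name is made up):

// List every container for the account, reporting any failure
client.GetContainers(
    containerNames =>
    {
        foreach (var containerName in containerNames)
        {
            Console.WriteLine(containerName);
        }
    },
    ex => Console.WriteLine("GetContainers failed: " + ex.Message));

// Create a new container, again with an explicit failure handler
client.CreateContainer(
    "newcontainer",
    () => Console.WriteLine("Container created."),
    ex => Console.WriteLine("CreateContainer failed: " + ex.Message));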

Aside: In an alternate universe, I might have made the new container methods static because they don't have a notion of a "current" container the same way the blob methods do. However, what I want more than that is to be able to leverage the existing infrastructure I already have in place for creating requests, authorizing them, handling responses, etc. - all of which take advantage of instance methods on RestBlobStoreClient that AzureBlobStoreClient and S3BlobStoreClient override as necessary. In this world, it's not possible to have static methods that are also virtual, so I sacrificed the desire for the former in order to achieve the efficiencies of the latter. Maybe one day I'll refactor things so it's possible to have the best of both worlds - but for now I recommend specifying a null container name when creating instances of AzureBlobStoreClient and S3BlobStoreClient when you want to ensure they're used only for container-specific operations (and not for blobs, too).

 

Other improvements

Delay.Web.Helpers AmazonS3Storage sample

With containers getting a fair amount of love this release, it seemed appropriate to add a containerName parameter to the AzureBlobStoreClient constructor so it would look the same as S3BlobStoreClient's constructor. This minor breaking change means it's now necessary to specify a container name when creating instances of AzureBlobStoreClient. If you want to preserve the behavior of existing code (and don't want to have to remember that "$root" is the magic root container name for Azure (and previous default)), you can pass AzureBlobStoreClient.RootContainerName for the container name.

In the process of doing some testing of the BlobStoreApi code, I realized I hadn't previously exposed a way for an application to find out about asynchronous method failures. While it's probably true that part of the reason someone uses Azure or S3 is so they don't need to worry about failures, things can always go wrong and sometimes you really need to know when they do. So in addition to each asynchronous method taking a parameter for the Action to run upon successful completion, they now also take an Action<Exception> parameter which they'll call (instead) when a failure occurs. This new parameter is necessary, so please provide a meaningful handler instead of passing null and assuming things will always succeed. :)

I've also added a set of new #defines to allow BlobStoreApi users to easily remove unwanted functionality. Because they're either self-explanatory or simple, I'm not going to go into more detail here (please have a look at the code if you're interested): BLOBSTOREAPI_PUBLIC, BLOBSTOREAPI_NO_URIS, BLOBSTOREAPI_NO_AZUREBLOBSTORECLIENT, BLOBSTOREAPI_NO_S3BLOBSTORECLIENT, and BLOBSTOREAPI_NO_ISOLATEDSTORAGEBLOBSTORECLIENT.

Aside: Actually, BLOBSTOREAPI_PUBLIC deserves a brief note: with this release of the BlobStoreApi, the classes it implements are no longer public by default (they're expected to be consumed, not exposed). This represents a minor breaking change, but the trivial fix is to #define BLOBSTOREAPI_PUBLIC which restores things to being public as they were before. That said, it might be worth taking this opportunity to make the corresponding changes (if any) to your application - but that's entirely up to you and your schedule.

The last thing worth mentioning is that I've tweaked BlobStoreApi to handle mixed-case blob/container names properly. Formerly, passing in upper-case characters for a blob name could result in failures for both Azure and S3; with this change in place, that scenario should work correctly.

 

BlobStore with silly sample data

 

New BlobStore and BlobStoreOnWindowsPhone samples

I've updated the combined BlobStore/BlobStoreOnWindowsPhone download to include the latest version of BlobStoreApi with all the changes outlined above. Although there are no new features in either sample, they both benefit from the new fragmented response support and demonstrate how to use the updated API methods properly.

[Click here to download the complete BlobStore source code and sample applications for both Silverlight 4 and Windows Phone 7.]

 

New Delay.Web.Helpers release and updated sample

The Delay.Web.Helpers download includes the latest version of BlobStoreApi (so it handles fragmented containers) and includes new support for container management in the form of the following methods on the AmazonS3Storage class:

/// <summary>
/// Lists the blob containers for an account.
/// </summary>
/// <returns>List of container names.</returns>
public static IList<string> ListBlobContainers() { ... }

/// <summary>
/// Creates a blob container.
/// </summary>
/// <param name="containerName">Container name.</param>
public static void CreateBlobContainer(string containerName) { ... }

/// <summary>
/// Deletes an empty blob container.
/// </summary>
/// <param name="containerName">Container name.</param>
public static void DeleteBlobContainer(string containerName) { ... }

As I discuss in the introductory post for Delay.Web.Helpers, I'm deliberately matching the existing WindowsAzureStorage API with my implementation of AmazonS3Storage, so these new methods look and function exactly the same for both web services. As part of this release, I've also updated the Delay.Web.Helpers sample page to show off the new container support as well as added some more automated tests to verify it.
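
For example, a hypothetical snippet exercising the new methods (and assuming AccessKeyId and SecretAccessKey have already been set) might look like this:

// Create a bucket, list all buckets for the account, then clean up
AmazonS3Storage.CreateBlobContainer("sample-bucket");
foreach (var containerName in AmazonS3Storage.ListBlobContainers())
{
    Console.WriteLine(containerName);
}
AmazonS3Storage.DeleteBlobContainer("sample-bucket");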

[Click here to download the Delay.Web.Helpers assembly, its complete source code, all automated tests, and the sample web site.]

 

Summary

While I hadn't thought to get back to blobs quite so soon, finding out about the fragmented response issue kind of forced my hand. But all's well that ends well - I'm glad to have added container support to Delay.Web.Helpers because that means it now has complete blob-related feature parity with the WindowsAzureStorage web helper and that seems like a Good Thing. What's more, I got confirmation that the Windows Azure Managed Library does not support Silverlight, so my BlobStoreApi is serving duty as an unofficial substitute for now. [Which is cool! And a little scary... :) ]

Whether you're using Silverlight on the desktop, writing a Windows Phone 7 application, or developing a web site with ASP.NET using MVC, Razor, or WebMatrix, I hope today's update helps your application offer just a little more goodness to your users!

There's a blob on your web page - but don't wipe it off [New Delay.Web.Helpers assembly brings easy Amazon S3 blob access to ASP.NET web sites!]

Microsoft released a bunch of new functionality for its web platform earlier today - and all of it is free! The goodies include new tools like WebMatrix, ASP.NET MVC3 and the Razor syntax, and IIS Express - as well as updates to existing tools like the Web Platform Installer and Web Deploy. These updates work together to enable a variety of new scenarios for the web development community while simultaneously making development simpler and more powerful. I encourage everyone to check it out!

 

Web Helpers...

One of the things I find particularly appealing is the idea of "web helpers": chunks of code that can be easily added to a project to make common tasks easier - or enable new ones! Helpers can do all kinds of things, including integrating with bit.ly, Facebook, PayPal, Twitter, and so on. For an example of how cool they are, here's how easy it is to add an interactive Twitter profile section to any ASP.NET web page [and it's even easier if you don't customize the colors :) ]:

@Twitter.Profile(
    "DavidAns",
    height:150,
    numberOfTweets:20,
    scrollBar:true,
    backgroundShellColor:"#8ec1da",
    tweetsBackgroundColor:"#ffffff",
    tweetsColor:"#444444",
    tweetsLinksColor:"#1985b5")

Which renders as:

Twitter profile search

You can see other examples of useful web helpers (including charting, cryptography, Xbox gamer cards, ReCaptcha, and more) in the ASP.NET social networking tutorial.

 

Aside: While web helper assemblies like System.Web.Helpers and Microsoft.Web.Helpers are obviously easy to use from ASP.NET Razor and ASP.NET MVC, what's neat (and not immediately obvious) is that they don't need to be tied to those technologies. At the end of the day, helper assemblies are normal .NET assemblies and some of their functionality can be used equally well in non-web scenarios!

 

The inspiration...

One of the web helpers that caught my eye when I was looking around is the Windows Azure Storage Helper which makes it easy to incorporate Windows Azure blob and table access into ASP.NET applications. If you happened to be reading this blog a few months ago, you might remember that I already have some experience working with Windows Azure and Amazon S3 blobs in the form of my BlobStore Silverlight application and BlobStoreOnWindowsPhone Windows Phone 7 sample.

When I wrote BlobStore for Silverlight, I didn't see any pre-existing libraries I could make use of (the Windows Azure Managed Library didn't (and maybe still doesn't?) work on Silverlight and there were some licensing concerns with the S3 libraries I came across), so I implemented my own support for what I needed: list/get/set/delete support for blobs on Windows Azure and Amazon Simple Storage Service (S3). But now that we're talking about ASP.NET, web helpers run on the server and the Windows Azure Storage Helper is able to leverage the Windows Azure Managed Library for all the heavy lifting so it can expose things in a clean, web helper-suitable way (via the WindowsAzureStorage static class and its associated methods).

That's good stuff and the WindowsAzureStorage helper seems quite comprehensive - but what about customers who are already using Amazon S3 and can't (or won't) switch to Windows Azure? Well, that's where I come in. :) I'd thought it would be neat to create my own web helper assembly, and this seemed like the perfect opportunity - so I've created the Delay.Web.Helpers web helper assembly by leveraging the work I'd already done for BlobStore. This enables basic list/get/set/delete functionality for S3 blobs without much additional effort and seems like a great way to help out more customers!

 

The API...

Most people don't like having many different ways to do the same thing, so I decided early on that the fundamental API for AmazonS3Storage would be the same as that already exposed by WindowsAzureStorage. That led directly to the following API (the two "secret key" properties have been renamed to match Amazon's terminology, but they're used exactly the same):

namespace Delay.Web.Helpers
{
    public static class AmazonS3Storage
    {
        public static string AccessKeyId { get; set; }
        public static string SecretAccessKey { get; set; }

        public static IList<string> ListBlobs(string bucketName);
        public static void UploadBinaryToBlob(string blobAddress, byte[] content);
        public static void UploadBinaryToBlob(string blobAddress, Stream stream);
        public static void UploadBinaryToBlob(string blobAddress, string fileName);
        public static void UploadTextToBlob(string blobAddress, string content);
        public static byte[] DownloadBlobAsByteArray(string blobAddress);
        public static void DownloadBlobAsStream(string blobAddress, Stream stream);
        public static void DownloadBlobToFile(string blobAddress, string fileName);
        public static string DownloadBlobAsText(string blobAddress);
        public static void DeleteBlob(string blobAddress);
    }
}

And because I wanted to be sure AmazonS3Storage could really be dropped into an application that already used WindowsAzureStorage, I did exactly that: I downloaded the WindowsAzureStorage sample application, performed a find-and-replace on the class name, updated the names of the two "secret key" properties, removed some unsupported functionality, and then verified that the sample ran and worked correctly. (It did!)

 

The enhancements...

While I think the WindowsAzureStorage API works well for listing and reading existing blobs, it's not immediately clear what a blobAddress string is made up of or how to create one, so developers might get a little confused in scenarios that create blobs from scratch... It so happens that the blobAddress structure is ContainerName/BlobName (which I preserve in AmazonS3Storage), but that feels like an implementation detail developers focusing on the upload scenario shouldn't need to know. Therefore, I've also exposed the following set of methods with separate bucketName and blobName parameters (and otherwise the same signatures):

public static void UploadBinaryToBlob(string bucketName, string blobName, byte[] content);
public static void UploadBinaryToBlob(string bucketName, string blobName, Stream stream);
public static void UploadBinaryToBlob(string bucketName, string blobName, string fileName);
public static void UploadTextToBlob(string bucketName, string blobName, string content);
public static byte[] DownloadBlobAsByteArray(string bucketName, string blobName);
public static void DownloadBlobAsStream(string bucketName, string blobName, Stream stream);
public static void DownloadBlobToFile(string bucketName, string blobName, string fileName);
public static string DownloadBlobAsText(string bucketName, string blobName);
public static void DeleteBlob(string bucketName, string blobName);

I'm optimistic this provides a somewhat easier model for developers to work with, but I encourage folks to use whatever they think is best! :)
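
For instance, here's a hypothetical snippet contrasting the two equivalent calling styles (the bucket and blob names are made up):

// blobAddress form: "ContainerName/BlobName" packed into a single string
AmazonS3Storage.UploadTextToBlob("my-bucket/Greeting.txt", "Hello, blob!");

// bucketName/blobName form: the same operation with the pieces kept separate
AmazonS3Storage.UploadTextToBlob("my-bucket", "Greeting.txt", "Hello, blob!");
var greeting = AmazonS3Storage.DownloadBlobAsText("my-bucket", "Greeting.txt");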

Aside: I hadn't previously needed create/list/delete functionality for buckets, so the corresponding methods are not part of today's release. However, I expect them to be rather easy to add and will likely do so at a future date.

 

The testing...

To try to ensure I hadn't made any really dumb mistakes, I wrote a complete automated test suite for the new AmazonS3Storage class. With the exception of verifying each ArgumentNullException for null parameters, the test suite achieves complete coverage of the new code and runs as a standard Visual Studio test project (i.e., here's an example of using web helpers "off the web").

 

The sample...

As much fun as testing is, a sample is often the best way to demonstrate how an API is used in practice - so I also put together a (very) quick Razor-based web site to provide simple list/get/set/delete access to an arbitrary Amazon S3 account/bucket:

Delay.Web.Helpers AmazonS3Storage sample

 

The example...

Using the Delay.Web.Helpers web helper is as easy as you'd expect; here's the piece of code that lists the bucket's contents (which is only as long as it is because it handles invalid credentials and empty buckets):

@if (configured) {
    <p class="Emphasized">
        Contents of bucket <strong>@bucket</strong>:
    </p>
    var blobs = AmazonS3Storage.ListBlobs(@bucket);
    if (blobs.Any()) {
        foreach (var blobAddress in blobs) {
            <p>
                <a href="@Href("DownloadBlob", new { BlobAddress = blobAddress})">@blobAddress.Split('/').Last()</a>
                -
                <a href="@Href("DeleteBlob", new { BlobAddress = blobAddress})" class="Subdued">[Delete]</a>
            </p>
        }
    } else {
        <em>[Empty]</em>
    }
} else {
    <p>
        <strong>[Please provide valid Amazon S3 account credentials in order to use this sample.]</strong>
    </p>
}

The one "gotcha" that's easy to forget (and that causes compile errors when you do!) is the using statement that goes at the top of the .cshtml file and tells Razor where to find the implementation of the AmazonS3Storage class:

@using Delay.Web.Helpers

 

The download...

[Click here to download the Delay.Web.Helpers assembly, its complete source code, all automated tests, and the sample web site.]

To use the Delay.Web.Helpers assembly in your own projects, simply download the ZIP file above, unblock it (click here for directions on "unblocking" a file), extract its contents to disk, and reference Delay.Web.Helpers.dll - or copy the Delay.Web.Helpers.dll and Delay.Web.Helpers.xml files to wherever they're needed (typically the Bin directory of a web site).

Aside: Technically, the Delay.Web.Helpers.xml file is optional - its sole purpose is to provide enhanced IntelliSense (specifically: method and parameter descriptions) in tools like Visual Studio. If you don't care about that, feel free to omit this file!

To play around with the included sample web site, just open the folder Delay.Web.Helpers\SourceCode\SampleWebSite in WebMatrix or Visual Studio (as a web site) and run it.

To have a look at the source code for the Delay.Web.Helpers assembly, tweak it, run the automated tests, or open the sample web site (a slightly different way), just open Delay.Web.Helpers\SourceCode\Delay.Web.Helpers.sln in Visual Studio.

 

The summary...

Being able to repurpose code is a great thing! In this case, all it took to convert my existing BlobStoreApi into an ASP.NET Razor/MVC-friendly web helper was writing a few thin wrapper methods. With a foundation like that, all it takes is some testing to catch mistakes, a quick sample to show it all off, and all of a sudden you've got yourself the makings of a snazzy blog post - and a new web helper! :)

SayIt, don't spray it! [Free program (and source code) makes it easy to use text-to-speech from your own programs and scripts]

Over the holiday break, I was asked to create a program to "speak" (via text-to-speech) simple sentences that were provided on the command-line. None of the available offerings were "just right" for the customer's scenario, so I was asked if I knew of any other options... Although I hadn't used it before, I figured the .NET System.Speech assembly/classes would make solving this task pretty easy, so I decided to give it a quick try.

And about three minutes later, I was done [ :) ]:

using System.Speech.Synthesis;

class Program
{
    static void Main(string[] args)
    {
        (new SpeechSynthesizer()).Speak(string.Join(" ", args));
    }
}

 

Because that was so easy, it felt like cheating; I decided to add support for a few more features to round out the offering and turn the whole thing into a free tool and blog post. The program I've written is called SayIt and is a window-less .NET 4 application to speak arbitrary text from the command line (or from a file) while allowing the user to make simple customizations (such as gender and volume).

Here's the "documentation" that shows up when SayIt is run without any parameters:

Use the command line to tell SayIt what to say and how to say it.

SayIt.exe
    [--Gender Male|Female|Neutral]
    [--Volume Silent|ExtraSoft|Soft|Medium|Loud|ExtraLoud|Default]
    [--Rate ExtraSlow|Slow|Medium|Fast|ExtraFast]
    [--Text <Text.txt>]
    [--SSML <SSML.xml>]
    [<Text to say>]

Examples:
    SayIt.exe Hello world
    SayIt.exe --Gender Female --Text C:\My\File.txt

Version 2011-01-04
http://blogs.msdn.com/b/delay/

 

For fun, I created the following icon (if you don't recognize it, here's a hint):

SayIt icon

 

When you run SayIt with the proper parameters (combine them in any order!), SayIt uses the Windows text-to-speech engine to speak the text using the default output device. It does this without creating or showing a window, so other programs can make use of SayIt without distracting the user. Alternatively, SayIt can be called from batch files to provide status updates for custom scripts and the like. Simple text can be passed directly on the command line, while more complicated (or lengthy) text can be passed in a file (via the --Text option).

Here are a few ideas to get you started:

SayIt Build completed
SayIt Processing file 12 of 25
SayIt You've got mail!
SayIt I'm sorry, Dave. I'm afraid I can't do that.
SayIt Barbra Streisand
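
And because SayIt is just a window-less executable, invoking it from another .NET program is a one-liner (a hypothetical example):

// Launch SayIt to announce progress; no window is created, so the user isn't interrupted
System.Diagnostics.Process.Start("SayIt.exe", "Processing complete");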

 

Aside from the simple gender, volume, and rate customizations that can be done on the command-line, the most likely tweak is fine-tuning the pronunciation of a particular word or words. (Though most sentences are quite understandable by default, some words come out a little garbled!) Fortunately, there's a W3C standard for customizing pronunciation and SayIt supports that standard (via the --SSML parameter): Speech Synthesis Markup Language.

SSML is a simple, XML-based syntax for controlling the behavior of text-to-speech applications like SayIt. I won't go into the gory details here (the SSML specification has everything you need), but I will highlight the phoneme element in the form of a simple sample that's part of the small SayIt test suite:

<!-- IPA pronunciations based on http://en.wiktionary.org/wiki/tomato -->
<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
  You say <phoneme alphabet="ipa" ph="təˈmaɪto">tomato</phoneme>.
  I say <phoneme alphabet="ipa" ph="təˈmeɪto">tomato</phoneme>.
</speak>

 

Although technology hasn't come quite as far as Arthur C. Clarke envisioned in 2001: A Space Odyssey, it's still pretty amazing what the common household computer is capable of. Speech - and other forms of "natural input" - can be a great way to personalize the computing experience and the ease with which SayIt can be incorporated into existing programs and scripts should make it a natural fit for many scenarios. Instead of using an inefficient polling approach to find out when your tasks finish, let the computer tell you - literally! :)

 

[Click here to download the .NET 4 SayIt application, complete source code, and simple tests.]

 

Notes:

  • The default English install of Windows 7 comes with only one "voice", Microsoft Anna, which is female. Therefore, even if you specifically request a male voice (via --Gender Male or SSML), you'll probably still hear Anna. This is not a bug in SayIt. :)
  • SayIt exposes the same volume settings that the System.Speech assembly does (via the PromptVolume enumeration), but you probably won't be able to increase the speech volume because the default setting is "ExtraLoud". This is not a bug in SayIt, either.
  • The speech engine is pretty good about spelling out acronyms like you'd want, but if you ever need to force the issue, just prefix the relevant word/acronym with the '-' character:
    SayIt Register your car at the -DOT.