The blog of dlaa.me

Use it or lose it! [New Delay.FxCop code analysis rule helps identify unused resources in a .NET assembly]

My previous post outlined the benefits of automated code analysis and introduced the Delay.FxCop custom code analysis assembly. The initial release of Delay.FxCop included only one rule, DF1000: Check spelling of all string literals, which didn't seem like enough to me, so today's update doubles the number of rules! :) The new rule is DF1001: Resources should be referenced - but before getting into that I'm going to spend a moment more on spell-checking...

 

What I planned to write for the second code analysis rule was something to check the spelling of .NET string resources (i.e., strings from a RESX file). This seemed like another place misspellings might occur and I'd heard of other custom rules that performed this same task (for example, here's a sample by Jason Kresowaty). However, in the process of doing research, I discovered rule CA1703: Resource strings should be spelled correctly which is part of the default set of rules!

To make sure it did what I expected, I started a new application, added a misspelled string resource, and ran code analysis. To my surprise, the misspelling was not detected... However, I noticed a different warning that seemed related: CA1824: Mark assemblies with NeutralResourcesLanguageAttribute "Because assembly 'Application.exe' contains a ResX-based resource file, mark it with the NeutralResourcesLanguage attribute, specifying the language of the resources within the assembly." Sure enough, when I un-commented the (project template-provided) NeutralResourcesLanguage line in AssemblyInfo.cs, the desired warning showed up:

CA1703 : Microsoft.Naming : In resource 'WpfApplication.Properties.Resources.resx', referenced by name
'SampleResource', correct the spelling of 'mispelling' in string value 'This string has a mispelling.'.

In my experience, some people suppress CA1824 instead of addressing it. But as we've just discovered, they're also giving up free spell checking for their assembly's string resources. That seems silly, so I recommend setting NeutralResourcesLanguageAttribute for its helpful side-effects!
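For reference, opting in is a one-line change (the attribute lives in System.Resources, and the project template already includes a commented-out version); the "en-US" value below is just an example and should match the language of your assembly's neutral resources:

using System.Resources;

[assembly: NeutralResourcesLanguage("en-US")]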

Note: For expository purposes, I've included an example in the download: CA1703 : Microsoft.Naming : In resource 'WpfApplication.Properties.Resources.resx', referenced by name 'IncorrectSpelling', correct the spelling of 'mispelling' in string value 'This string has a single mispelling.'.

 

Once I realized writing a resource spell-checking rule was unnecessary, I decided to focus on a different pet peeve of mine: unused resources in an assembly. In much the same way stale chunks of unused code can be found in most applications, it's pretty common to find resources that aren't referenced and are just taking up valuable space. But while there's a built-in rule to detect certain kinds of uncalled code (CA1811: Avoid uncalled private code), I'm not aware of anything similar for resources... And though it's possible to perform this check manually (by searching for the use of each individual resource), this is the kind of boring, monotonous task that computers are made for! :)

Therefore, I've created the second Delay.FxCop rule, DF1001: Resources should be referenced, which compares the set of resource references in an assembly with the set of resources that are actually present. Any cases where a resource exists (whether it's a string, stream, or object), but is not referenced in code will result in an instance of the DF1001 warning during code analysis.
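For context, here's a minimal sketch of the two kinds of code references DF1001 recognizes (the project namespace and the SampleResource name below are hypothetical):

using System.Resources;

static class ResourceReferenceExamples
{
    // Reference via the automatically-generated, strongly-typed Resources class
    static string FromGeneratedClass()
    {
        return WpfApplication.Properties.Resources.SampleResource;
    }

    // Reference via the lower-level ResourceManager API
    static string FromResourceManager()
    {
        var manager = new ResourceManager(
            "WpfApplication.Properties.Resources",
            typeof(ResourceReferenceExamples).Assembly);
        return manager.GetString("SampleResource");
    }
}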

Aside: For directions about how to run the Delay.FxCop rules on a standalone assembly or integrate them into a project, please refer to the steps in my original post.

As a word of caution, there can be cases where DF1001 reports that a resource isn't referenced from code, but that resource is actually used by an assembly. While I don't think it will miss typical uses from code (either via the automatically-generated Resources class or one of the lower-level ResourceManager methods), the catch is that not all resource references show up in code! For example, the markup for a Silverlight or WPF application is included as a XAML/BAML resource which is loaded at run-time without an explicit reference from the assembly itself. DF1001 will (correctly; sort of) report this resource as unused, so please remember that global code analysis suppressions can be used to squelch false-positives:

[assembly: SuppressMessage("Usage", "DF1001:ResourcesShouldBeReferenced", MessageId = "mainwindow.baml",
    Scope = "resource", Target = "WpfApplication.g.resources", Justification = "Loaded by WPF for MainWindow.xaml.")]
Aside: There are other ways to "fool" DF1001, such as by loading a resource from a different assembly or passing a variable to ResourceManager.GetString. But in terms of how things are done 95% of the time, the rule's current implementation should be accurate. Of course, if you find cases where it misreports unused resources, please let me know and I'll look into whether it's possible to improve things in a future release!

 

[Click here to download the Delay.FxCop rule assembly, associated .ruleset files, samples, and the complete source code.]

 

Unused resources are an unnecessary annoyance: they bloat an assembly, waste time and money (for example, when localized unnecessarily), confuse new developers, and generally just get in the way. Fortunately, detecting them in an automated fashion is easy with DF1001: Resources should be referenced! After making sure unused resources really are unused, remove them from your project - and enjoy the benefits of a smaller, leaner application!

Tags: Technical

Speling misteaks make an aplikation look sily [New Delay.FxCop code analysis rule finds spelling errors in a .NET assembly's string literals]

No matter how polished the appearance of an application, web site, or advertisement is, the presence of even a single spelling error can make it look sloppy and unprofessional. The bad news is that spelling errors are incredibly easy to make - either due to mistyping or because one forgot which of the many, conflicting special cases applies in a particular circumstance. The good news is that technology to detect and correct spelling errors exists and is readily available. By making regular use of a spell-checker, you don't have to be a good speller to look like one. Trust me! ;)

Spell-checking of documents is pretty well covered these days, with all the popular word processors offering automated, interactive assistance. However, spell-checking of code is not quite so far along - even high-end editors like Visual Studio don't tend to offer interactive spell-checking support. Fortunately, it's possible - even easy! - to augment the capabilities of many development tools to integrate spell-checking into the development workflow. There are a few different ways of doing this: one is to incorporate the checking into the editing experience (like this plugin by coworker Mikhail Arkhipov) and another is to do the checking as part of the code analysis workflow (like code analysis rule CA1703: ResourceStringsShouldBeSpelledCorrectly). I'd already been toying with the idea of implementing my own code analysis rules, so I decided to experiment with the latter approach...

Aside: If you're not familiar with Visual Studio's code analysis feature, I highly recommend the MSDN article Analyzing Managed Code Quality by Using Code Analysis. Although the fully integrated experience is only available on higher-end Visual Studio products, the same exact code analysis functionality is available to everyone with the standalone FxCop tool which is free as part of the Microsoft Windows SDK for Windows 7 and .NET Framework 4. (FxCop has a dedicated download page with handy links, but it directs you to the SDK to do the actual install.)
Unrelated aside: In the ideal world, all of an application's strings would probably come from a resource file where they can be easily translated to other languages - and therefore string literals wouldn't need spell-checking. However, in the real world, there are often cases where user-visible text ends up in string literals (ex: exception messages) and therefore a rule like this seems to have practical value. If the string resources of your application are already perfectly separated, congratulations! However, if your application doesn't use resources (or uses them incompletely!), please continue reading... :)

 

As you might expect, it's possible to create custom code analysis rules and easily integrate them into your build environment; a great walk-through can be found on the Code Analysis Team Blog. If you still have questions after reading that, this post by Tatham Oddie is also quite good. And once you have an idea what you're doing, this documentation by Jason Kresowaty is a great resource for technical information.

Code analysis is a powerful tool and has a lot of potential for improving the development process. But for now, I'm just going to discuss a single rule I created: DF1000: CheckSpellingOfAllStringLiterals. As its name suggests, this rule checks the spelling of all string literals in an assembly. To be clear, there are other rules that check spelling (including some of the default FxCop/Visual Studio ones), but I didn't see any that checked all the literals, so this seemed like an interesting place to start.

Aside: Programs tend to have a lot of strings and those strings aren't always words (ex: namespace prefixes, regular expressions, etc.). Therefore, this rule will almost certainly report a lot of warnings when run for the first time! Be prepared for that - and be ready to spend some time suppressing warnings that don't matter to you. :)

 

As I typically do, I've published a pre-compiled binary and complete source code, so you can see exactly how CheckSpellingOfAllStringLiterals works (it's quite simple, really, as it uses the existing introspection and spell-checking APIs). I'm not going to spend a lot of time talking about how this rule is implemented, but I did want to show how to use it so others can experiment with their own projects.

Important: Everything I show here was done with the Visual Studio 2010/.NET 4 toolset. Past updates to the code analysis infrastructure are such that things may not work with older (or newer) releases.

To add the Delay.FxCop rules to a project, you'll want to know a little about rule sets - the MSDN article Using Rule Sets to Group Managed Code Analysis Rules is a good place to start. I've provided two .ruleset files in the download: Delay.FxCop.ruleset which contains just the custom rule I've written and AllRules_Delay.FxCop.ruleset which contains my custom rule and everything in the shipping "Microsoft All Rules" ruleset. (Of course, creating and using your own .ruleset is another option!) Incorporating a custom rule set into a Visual Studio project is as easy as: Project menu, ProjectName Properties..., Code Analysis tab, Run this rule set:, Browse..., specify the path to the custom rule set, Build menu, Run Code Analysis on ProjectName.

Note: For WPF projects, you may also want to uncheck Suppress results from generated code in the "Code Analysis" tab above because the XAML compiler adds GeneratedCodeAttribute to all classes with an associated .xaml file and that automatically suppresses code analysis warnings for those classes. (Silverlight and Windows Phone projects don't set this attribute, so the default "ignore" behavior is fine.)

Assuming your project contains a string literal that's not in the dictionary, the Error List window should show one or more warnings like this:

DF1000 : Spelling : The word 'recieve' is not in the dictionary.

At this point, you have a few options (examples of which can be found in the TestProjects\ConsoleApplication directory of the sample):

  • Fix the misspelling.

    Duh. :)

  • Suppress the instance.

    If it's an isolated use of the word and is correct, then simply right-clicking the warning and choosing Suppress Message(s), In Source will add something like the following attribute to the code which will silence the warning:

    [SuppressMessage("Spelling", "DF1000:CheckSpellingOfAllStringLiterals", MessageId = "leet")]

    While you're at it, feel free to add a Justification message if the reason might not be obvious to someone else.

  • Suppress the entire method.

    If a method contains no user-visible text, but has lots of strings that cause warnings, you can suppress the entire method by omitting the MessageId parameter like so:

    [SuppressMessage("Spelling", "DF1000:CheckSpellingOfAllStringLiterals")]
  • Add the word to the custom dictionary.

    If the "misspelled" word is correct and appears throughout the application, you'll probably want to add it to the project's custom dictionary which will silence all relevant warnings at once. MSDN has a great overview of the custom dictionary format as well as the exact steps to take to add a custom dictionary to a project in the article How to: Customize the Code Analysis Dictionary.

 

Alternatively, if you're a command-line junkie or don't want to modify your Visual Studio project, you can use FxCopCmd directly by running it from a Visual Studio Command Prompt like so:

C:\T>"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\FxCopCmd.exe"
  /file:C:\T\ConsoleApplication\bin\Debug\ConsoleApplication.exe
  /ruleset:=C:\T\Delay.FxCop\Delay.FxCop.ruleset /console
Microsoft (R) FxCop Command-Line Tool, Version 10.0 (10.0.30319.1) X86
Copyright (C) Microsoft Corporation, All Rights Reserved.

[...]
Loaded delay.fxcop.dll...
Loaded ConsoleApplication.exe...
Initializing Introspection engine...
Analyzing...
Analysis Complete.
Writing 1 messages...

C:\T\ConsoleApplication\Program.cs(12,1) : warning  : DF1000 : Spelling : The word 'recieve' is not in the dictionary.
Done:00:00:01.4352025

Or else you can install the standalone FxCop tool to get the benefits of a graphical user interface without changing anything about your existing workflow!

 

[Click here to download the Delay.FxCop rule assembly, associated .ruleset files, samples, and the complete source code.]

 

Spelling is one of those things that's easy to get wrong - and also easy to get right if you apply the proper technology and discipline. I can't hope to make anyone a better speller ('i' before 'e', except after 'c'!), but I can help out a little on the technology front. I plan to add new code analysis rules to Delay.FxCop over time - but for now I hope people put DF1000: CheckSpellingOfAllStringLiterals to good use finding spelling mistakes in their applications!

Tags: Technical

Safe X (ml parsing with XLINQ) [XLinqExtensions helps make XML parsing with .NET's XLINQ a bit safer and easier]

XLINQ (aka LINQ-to-XML) is a set of classes that make it simple to work with XML by exposing the element tree in a way that's easy to manipulate using standard LINQ queries. So, for example, it's trivial to write code to select specific nodes for reading, create well-formed XML fragments, or transform an entire document. Because of its query-oriented nature, XLINQ makes it easy to ignore parts of a document that aren't relevant: if you don't query for them, they don't show up! Because it's so handy and powerful, I encourage folks who aren't already familiar to find out more.

Aside: As usual, flexibility comes with a cost and it is often more efficient to read and write XML with the underlying XmlReader and XmlWriter classes because they don't expose the same high-level abstractions. However, I'll suggest that the extra productivity of developing with XLINQ will often outweigh the minor computational cost it incurs.

 

When I wrote the world's simplest RSS reader as a sample for my post on WebBrowserExtensions, I needed some code to parse the RSS feed for my blog and dashed off the simplest thing possible using XLINQ. Here's a simplified version of that RSS feed for reference:

<rss version="2.0">
  <channel>
    <title>Delay's Blog</title>
    <item>
      <title>First Post</title>
      <pubDate>Sat, 21 May 2011 13:00:00 GMT</pubDate>
      <description>Post description.</description>
    </item>
    <item>
      <title>Another Post</title>
      <pubDate>Sun, 22 May 2011 14:00:00 GMT</pubDate>
      <description>Another post description.</description>
    </item>
  </channel>
</rss>

The code I wrote at the time looked a lot like the following:

private static void NoChecking(XElement feedRoot)
{
    var version = feedRoot.Attribute("version").Value;
    var title = feedRoot.Element("channel").Element("title").Value;
    ShowFeed(version, title);
    foreach (var item in feedRoot.Element("channel").Elements("item"))
    {
        title = item.Element("title").Value;
        var publishDate = DateTime.Parse(item.Element("pubDate").Value);
        var description = item.Element("description").Value;
        ShowItem(title, publishDate, description);
    }
}

Not surprisingly, running it on the XML above leads to the following output:

Delay's Blog (RSS 2.0)
  First Post
    Date: 5/21/2011
    Characters: 17
  Another Post
    Date: 5/22/2011
    Characters: 25

 

That code is simple, easy to read, and obvious in its intent. However (as is typical for sample code tangential to the topic of interest), there's no error checking or handling of malformed data. If anything within the feed changes, it's quite likely the code I show above will throw an exception (for example: because the result of the Element method is null when the named element can't be found). And although I don't expect changes to the format of this RSS feed, I'd be wary of shipping code like that because it's so fragile.

Aside: Safely parsing external data is a challenging task; many exploits take advantage of parsing errors to corrupt a process's state. In the discussion here, I'm focusing mainly on "safety" in the sense of "resiliency": the ability of code to continue to work (or at least not throw an exception) despite changes to the format of the data it's dealing with. Naturally, more resilient parsing code is likely to be less vulnerable to hacking, too - but I'm not specifically concerned with making code hack-proof here.

 

Adding the necessary error-checking to get the above snippet into shape for real-world use isn't particularly hard - but it does add a lot more code. Consequently, readability suffers; although the following method performs exactly the same task, its implementation is decidedly harder to follow than the original:

private static void Checking(XElement feedRoot)
{
    var version = "";
    var versionAttribute = feedRoot.Attribute("version");
    if (null != versionAttribute)
    {
        version = versionAttribute.Value;
    }
    var channelElement = feedRoot.Element("channel");
    if (null != channelElement)
    {
        var title = "";
        var titleElement = channelElement.Element("title");
        if (null != titleElement)
        {
            title = titleElement.Value;
        }
        ShowFeed(version, title);
        foreach (var item in channelElement.Elements("item"))
        {
            title = "";
            titleElement = item.Element("title");
            if (null != titleElement)
            {
                title = titleElement.Value;
            }
            var publishDate = DateTime.MinValue;
            var pubDateElement = item.Element("pubDate");
            if (null != pubDateElement)
            {
                if (!DateTime.TryParse(pubDateElement.Value, out publishDate))
                {
                    publishDate = DateTime.MinValue;
                }
            }
            var description = "";
            var descriptionElement = item.Element("description");
            if (null != descriptionElement)
            {
                description = descriptionElement.Value;
            }
            ShowItem(title, publishDate, description);
        }
    }
}

 

It would be nice if we could somehow combine the two approaches to arrive at something that reads easily while also handling malformed content gracefully... And that's what the XLinqExtensions extension methods are all about!

Using the naming convention SafeGet* where "*" can be Element, Attribute, StringValue, or DateTimeValue, these methods are simple wrappers that avoid problems by always returning a valid object - even if they have to create an empty one themselves. In this manner, calls that are expected to return an XElement always do; calls that are expected to return a DateTime always do (with a user-provided fallback value for scenarios where the underlying string doesn't parse successfully). To be clear, there's no magic here - all the code is very simple - but by pushing error handling into the accessor methods, the overall experience feels much nicer.

To see what I mean, here's what the same code looks like after it has been changed to use XLinqExtensions - note how similar it looks to the original implementation that used the simple "write it the obvious way" approach:

private static void Safe(XElement feedRoot)
{
    var version = feedRoot.SafeGetAttribute("version").SafeGetStringValue();
    var title = feedRoot.SafeGetElement("channel").SafeGetElement("title").SafeGetStringValue();
    ShowFeed(version, title);
    foreach (var item in feedRoot.SafeGetElement("channel").Elements("item"))
    {
        title = item.SafeGetElement("title").SafeGetStringValue();
        var publishDate = item.SafeGetElement("pubDate").SafeGetDateTimeValue(DateTime.MinValue);
        var description = item.SafeGetElement("description").SafeGetStringValue();
        ShowItem(title, publishDate, description);
    }
}

Not only is the XLinqExtensions version almost as easy to read as the simple approach, it has all the resiliency benefits of the complex one! What's not to like?? :)

 

[Click here to download the XLinqExtensions sample application containing everything shown here.]

 

I've found the XLinqExtensions approach helpful in my own projects because it enables me to parse XML with ease and peace of mind. The example I've provided here only scratches the surface of what's possible (ex: SafeGetIntegerValue, SafeGetUriValue, etc.), and is intended to set the stage for others to adopt a more robust approach to XML parsing. So if you find yourself parsing XML, please consider something similar!

 

PS - The complete set of XLinqExtensions methods I use in the sample is provided below. Implementation of additional methods to suit custom scenarios is left as an exercise to the reader. :)

/// <summary>
/// Class that exposes a variety of extension methods to make parsing XML with XLINQ easier and safer.
/// </summary>
static class XLinqExtensions
{
    /// <summary>
    /// Gets the named XElement child of the specified XElement.
    /// </summary>
    /// <param name="element">Specified element.</param>
    /// <param name="name">Name of the child.</param>
    /// <returns>XElement instance.</returns>
    public static XElement SafeGetElement(this XElement element, XName name)
    {
        Debug.Assert(null != element);
        Debug.Assert(null != name);
        return element.Element(name) ?? new XElement(name, "");
    }

    /// <summary>
    /// Gets the named XAttribute of the specified XElement.
    /// </summary>
    /// <param name="element">Specified element.</param>
    /// <param name="name">Name of the attribute.</param>
    /// <returns>XAttribute instance.</returns>
    public static XAttribute SafeGetAttribute(this XElement element, XName name)
    {
        Debug.Assert(null != element);
        Debug.Assert(null != name);
        return element.Attribute(name) ?? new XAttribute(name, "");
    }

    /// <summary>
    /// Gets the string value of the specified XElement.
    /// </summary>
    /// <param name="element">Specified element.</param>
    /// <returns>String value.</returns>
    public static string SafeGetStringValue(this XElement element)
    {
        Debug.Assert(null != element);
        return element.Value;
    }

    /// <summary>
    /// Gets the string value of the specified XAttribute.
    /// </summary>
    /// <param name="attribute">Specified attribute.</param>
    /// <returns>String value.</returns>
    public static string SafeGetStringValue(this XAttribute attribute)
    {
        Debug.Assert(null != attribute);
        return attribute.Value;
    }

    /// <summary>
    /// Gets the DateTime value of the specified XElement, falling back to a provided value in case of failure.
    /// </summary>
    /// <param name="element">Specified element.</param>
    /// <param name="fallback">Fallback value.</param>
    /// <returns>DateTime value.</returns>
    public static DateTime SafeGetDateTimeValue(this XElement element, DateTime fallback)
    {
        Debug.Assert(null != element);
        DateTime value;
        if (!DateTime.TryParse(element.Value, out value))
        {
            value = fallback;
        }
        return value;
    }
}
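For example, an integer-flavored addition to the class above (one of those "additional methods") might look something like this sketch, following the same pattern as SafeGetDateTimeValue:

    /// <summary>
    /// Gets the integer value of the specified XElement, falling back to a provided value in case of failure.
    /// </summary>
    /// <param name="element">Specified element.</param>
    /// <param name="fallback">Fallback value.</param>
    /// <returns>Integer value.</returns>
    public static int SafeGetIntegerValue(this XElement element, int fallback)
    {
        Debug.Assert(null != element);
        int value;
        if (!int.TryParse(element.Value, out value))
        {
            value = fallback;
        }
        return value;
    }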
Tags: Technical

"Sort" of a follow-up post [IListExtensions class enables easy sorting of .NET list types; today's updates make some scenarios faster or more convenient]

Recently, I wrote a post about the IListExtensions collection of extension methods I created to make it easy to maintain a sorted list based on any IList(T) implementation without needing to create a special subclass. In that post, I explained why I implemented IListExtensions the way I did and outlined some of the benefits for scenarios like using ObservableCollection(T) for dynamic updates on Silverlight, WPF, and Windows Phone where the underlying class doesn't intrinsically support sorting. A couple of readers followed up with some good questions and clarifications, which I'd encourage having a look at for additional context.

 

During the time I've been using IListExtensions in a project of my own, I have noticed two patterns that prompted today's update:

  1. It's easy to get performant set-like behavior from a sorted list. Recall that a set is simply a collection in which a particular item appears either 0 or 1 times (i.e., there are no duplicates in the collection). While this invariant can be easily maintained with any sorted list by performing a remove before each add (recall that ICollection(T).Remove (and therefore IListExtensions.RemoveSorted) doesn't throw if an element is not present), it also means there are two searches of the list every time an item is added: one for the call to RemoveSorted and another for the call to AddSorted. While it's possible to be a bit more clever and avoid the extra search sometimes, the API doesn't let you "remember" the right index between calls to *Sorted methods, so you can't get rid of the redundant search every time.

    Therefore, I created the AddOrReplaceSorted method which has the same signature as AddSorted (and therefore ICollection(T).Add) and implements the set-like behavior of ensuring there is at most one instance of a particular item (i.e., the IComparable(T) search key) present in the collection at any time. Because this one method does everything, it only ever needs to perform a single search of the list and can help save a few CPU cycles in relevant scenarios.

  2. It's convenient to be able to call RemoveSorted/IndexOfSorted/ContainsSorted with an instance of the search key. Recall from the original post that IListExtensions requires items in the list to implement the IComparable(T) interface in order to define their sort order. This is fine most of the time, but can require a bit of extra overhead in situations where the items' sort order depends on only some (or commonly just one) of their properties.

    For example, note that the sort order of the Person class below depends only on the Name property:

    class Person : IComparable<Person>
    {
        public string Name { get; set; }
        public string Details { get; set; }
    
        public int CompareTo(Person other)
        {
            return Name.CompareTo(other.Name);
        }
    }

    In this case, using ContainsSorted on a List(Person) to search for a particular name would require the creation of a fake Person instance to pass as the parameter to ContainsSorted in order to match the type of the underlying collection. This isn't usually a big deal (though it can be if the class doesn't have a public constructor!), but it complicates the code and seems like it ought to be unnecessary.

    Therefore, I've added new versions of RemoveSorted/IndexOfSorted/ContainsSorted that take a key parameter and a keySelector Func(T, K). The selector is passed an item from the list and needs to return that item's sort key (the thing that its IComparable(T).CompareTo operates on). Not surprisingly, the underlying type of the keys must implement IComparable(T); keys are then compared directly (instead of indirectly via the containing items). In this way, it's possible to look up (or remove) a Person in a List(Person) by passing only the person's name and not having to bother with the temporary Person object at all!
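To illustrate both updates, here's a rough sketch of how they might be used with a sorted List(Person) like the one above (the exact signatures are in the download):

var people = new List<Person>();
people.AddOrReplaceSorted(new Person { Name = "Ann", Details = "Original" });
people.AddOrReplaceSorted(new Person { Name = "Ann", Details = "Replaces the original Ann" });

// Key-based lookups: pass just the name and a selector instead of a temporary Person
bool found = people.ContainsSorted("Ann", p => p.Name);
int index = people.IndexOfSorted("Ann", p => p.Name);
bool removed = people.RemoveSorted("Ann", p => p.Name);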

 

In addition to the code changes discussed above, I've updated the automated test project that comes with IListExtensions to cover all the new scenarios. Conveniently, the new implementation of AddOrReplaceSorted is nearly identical to that of AddSorted and can be easily validated with SortedSet(T). Similarly, the three new key-based methods have all been implemented as variations of the pre-existing methods and those have been modified to call directly into the new methods. Aside from a bit of clear, deliberate redundancy for AddOrReplaceSorted, there's hardly any more code in this release than there was in the previous one - yet refactoring the implementation slightly enabled some handy new scenarios!

 

[Click here to download the IListExtensions implementation and its complete unit test project.]

 

Proper sorting libraries offer a wide variety of ways to sort, compare, and work with sorted lists. IListExtensions is not a proper sorting library - nor does it aspire to be one. :) Rather, it's a small collection of handy methods that make it easy to incorporate sorting into some common Silverlight, WPF, and Windows Phone scenarios. Sometimes you're forced to use a collection (like ObservableCollection(T)) that doesn't do everything you want - but if all you're missing is basic sorting functionality, then IListExtensions just might be the answer!

When you live on the bleeding edge, be prepared for a few nicks [Minor update to the Delay.Web.Helpers ASP.NET assembly download to avoid a NuGet packaging bug affecting Razor installs]

I had a surprise last week when coworker Bilal Aslam mentioned he was using my Delay.Web.Helpers assembly but wasn't able to install the 1.1.0 version with the ASP.NET Razor "_Admin" control panel. (Fortunately, the previous version (1.0.0) did install and had the functionality he needed, so he was using that one for the time being.) I quickly told Bilal he was crazy because I remembered testing with the NuGet plugin for Visual Studio and knew it installed successfully. At which point he demonstrated the problem for me - and I was forced to admit defeat. :)

Aside: Delay.Web.Helpers is a collection of ASP.NET web helpers that provide access to Amazon Simple Storage Service (S3) buckets and blobs as well as easy ways to create "data URIs". (And eventually more stuff as I get time to add it...)

Naturally, the first thing I did was to repeat my previous testing in Visual Studio - and it worked fine just like I remembered. So I tried with the Razor administration interface and it failed just like Bilal showed me: "System.InvalidOperationException: The 'schemaVersion' attribute is not declared.". Because the previous version (1.0.0) didn't have this problem, I was a little confused; I'd built everything from the same .nuspec file, so it wasn't clear why the Razor/1.1.0 scenario would be uniquely broken.

At that point, I contacted a couple folks on the NuGet team and got a quick answer: for some (short) period of time, the official version of nuget.exe created packages with a schemaVersion attribute on the package/metadata element of the embedded .nuspec file and the presence of this attribute causes the Razor install implementation to fail with the exception we were seeing. I'd created 1.0.0 with a good version of nuget.exe, but apparently created 1.1.0 with the broken version. :(

The team's recommendation was to re-create my packages with the current nuget.exe and re-deploy them to the NuGet servers. I did that and the result is version 1.1.1 of the Delay.Web.Helpers package and its associated Delay.Web.Helpers.SampleWebSite package. "Once bitten, twice shy", so I verified the install in both Visual Studio and Razor now that I know they're different and can fail independently.

Aside: There are no changes to the Delay.Web.Helpers assembly or samples in this release. The only changes are the necessary tweaks to the NuGet metadata for both packages to install successfully under Razor. Therefore, if you've already installed 1.1.0 successfully, there's no need to upgrade.
Further aside: The standalone ZIP file with the assembly, source code, automated tests, and sample web site is unaffected by this update.

 

To sum things up, if you created a NuGet package sometime around early April and you expect it to be installable with the Razor administration panel, I'd highly recommend trying it out to be sure! :)

Something "sort" of handy... [IListExtensions adds easy sorting to .NET list types - enabling faster search and removal, too!]

If you want to display a dynamically changing collection of items in WPF, Silverlight, or Windows Phone, there are a lot of collection classes to pick from - but there's really just one good choice: ObservableCollection(T). Although nearly all the IList(T)/ICollection(T)/IEnumerable(T) implementations work well for static data, dynamic data only updates automatically when it's in a collection that implements INotifyCollectionChanged. And while it's possible to write your own INotifyCollectionChanged code, doing a good job takes a fair amount of work. Fortunately, ObservableCollection(T) does nearly everything you'd want and is a great choice nearly all of the time.

Unless you want your data sorted...

By design, ObservableCollection(T) doesn't sort data - that's left to the CollectionView class which is the officially recommended way to sort lists for display (for more details, please refer to the Data Binding Overview's "Binding to Collections" section). The way CollectionView works is to add an additional layer of indirection on top of your list. That gets sorted and the underlying collection isn't modified at all. This is a fine, flexible design (it enables a variety of other scenarios like filtering, grouping, and multiple views), but sometimes it'd be easier if the actual collection were sorted and the extra layer wasn't present (in addition to imposing a bit of overhead, working with CollectionView requires additional code to account for the indirection).

 

So it would be nice if there were a handy way to sort an ObservableCollection(T) - something like the List(T).Sort method. Unfortunately, ObservableCollection(T) doesn't derive from List(T), so it doesn't have that method... Besides, it'd be better if adding items to the list put them in the right place to begin with - instead of adding them to the wrong place and then re-sorting the entire list after the fact. Along the same lines, scenarios that could take advantage of sorting for faster look-ups would benefit from something like List(T).BinarySearch - which also doesn't exist on ObservableCollection(T).

All we really need to do here is provide custom implementations of add/remove/contains/index-of for ObservableCollection(T) and we'd have the best of both worlds. One way of doing that is to subclass - but that ties the code to a specific base class and limits its usefulness somewhat (just like Sort and BinarySearch for List(T) above). What we can do instead is implement these helper methods in a standalone class and enable them to target the least common denominator, IList(T), and therefore apply in a variety of scenarios (i.e., all classes that implement that interface). What's more, these helpers can be trivially written as extension methods so they'll look just like APIs on the underlying classes!

 

This sounds promising - let's see how it might work by considering the complete IList(T) interface hierarchy:

public interface IList<T> : ICollection<T>, IEnumerable<T>, IEnumerable
{
    T this[int index] { get; set; }         // Good as-is
    int IndexOf(T item);                    // Okay as-is; could be faster if sorted
    void Insert(int index, T item);         // Should NOT be used with a sorted collection (might un-sort it)
    void RemoveAt(int index);               // Good as-is
}
public interface ICollection<T> : IEnumerable<T>, IEnumerable
{
    int Count { get; }                      // Good as-is
    bool IsReadOnly { get; }                // Good as-is
    void Add(T item);                       // Needs custom implementation that preserves sort order
    void Clear();                           // Good as-is
    bool Contains(T item);                  // Okay as-is; could be faster if sorted
    void CopyTo(T[] array, int arrayIndex); // Good as-is
    bool Remove(T item);                    // Okay as-is; could be faster if sorted
}
public interface IEnumerable<T> : IEnumerable
{
    IEnumerator<T> GetEnumerator();         // Good as-is
}
public interface IEnumerable
{
    IEnumerator GetEnumerator();            // Good as-is
}

To create a sorted IList(T), there's only one method that needs to be written (add) and three others that should be written to take advantage of the sorted collection for better performance (remove, contains, and index-of). (Aside: If you know a list is sorted, finding the right location changes from an O(n) problem to an O(log n) problem. Read more about "big O" notation here.) The only additional requirement we'll impose is that the elements of the collection must have a natural order. One way this is commonly done is by implementing the IComparable(T) interface on the item class. Basic .NET types already do this, as do other classes in the framework (ex: DateTime, Tuple, etc.). Because this interface has just one method, it's easy to add - and can often be implemented in terms of IComparable(T) for its constituent parts!

 

So here's what the IListExtensions class I've created looks like:

static class IListExtensions
{
    public static void AddSorted<T>(this IList<T> list, T item) where T : IComparable<T> { ... }
    public static bool RemoveSorted<T>(this IList<T> list, T item) where T : IComparable<T> { ... }
    public static int IndexOfSorted<T>(this IList<T> list, T item) where T : IComparable<T> { ... }
    public static bool ContainsSorted<T>(this IList<T> list, T item) where T : IComparable<T> { ... }
}

You can use it to create and manage a sorted ObservableCollection(T) simply by adding "Sorted" to the code you already have!
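For example, here's a minimal sketch of what that looks like in practice:

var numbers = new ObservableCollection<int>();
numbers.AddSorted(3);
numbers.AddSorted(1);
numbers.AddSorted(2);                    // collection is now 1, 2, 3

int index = numbers.IndexOfSorted(2);    // binary search finds index 1
bool removed = numbers.RemoveSorted(3);  // true; collection is now 1, 2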

 

[Click here to download the IListExtensions implementation and its complete unit test project.]

 

One downside to the extension method approach is that the existing IList(T) methods remain visible and can be called by code that doesn't know to use the *Sorted versions instead. For Contains, IndexOf, and Remove, this is inefficient but will still yield the correct answer; for Add and Insert, however, it's a bug because these two methods are likely to ruin the sorted nature of the list when used without care. Once a list becomes unsorted, the *Sorted methods will return incorrect results because they optimize searches based on the assumption that the list is correctly sorted. Subclassing would be the obvious "solution" to this problem, but it's not a good option here because the original methods aren't virtual on ObservableCollection(T)...

I'm not aware of a good way to make things foolproof without giving up on the nice generality benefits of the current approach, so this seems like one of those times where you just need to be careful about what you're doing. Fortunately, most programs probably only call the relevant methods a couple of times, so it's pretty easy to visit all the call sites and change them to use the corresponding *Sorted method instead. [Trust me, I've done this myself. :) ]

Aside: There's a subtle ambiguity regarding what to do if the collection contains duplicate items (i.e., multiple items that sort to the same location). It doesn't seem like it will matter most of the time, so IListExtensions takes the performant way out and returns the first correct answer it finds. It's important to note this is not necessarily the first of a group of duplicate items, nor the last of them - nor will it always be the same one of them! Basically, if the items' IComparable(T) implementation says two items are equivalent, then IListExtensions assumes they are and that they're equally valid answers. If the distinction matters in your scenario, please feel free to tweak this code and take the corresponding performance hit. :) (Alternatively, if the items' IComparable(T) implementation can be modified to distinguish between otherwise "identical" items, the underlying ambiguity will be resolved and things will be deterministic again.)

 

It's usually best to leverage platform support for something when it's available, so please look to CollectionView for your sorting needs in WPF, Silverlight, and Windows Phone applications. But if you end up in a situation where it'd be better to maintain a sorted list yourself, maybe IListExtensions is just what you need!

Don't shoot the messenger [A WebBrowserExtensions workaround for Windows Phone and a BestFitPanel tweak for infinite layout bounds on Windows Phone/Silverlight/WPF]

One of the neat things about sharing code with the community is hearing how people have learned from it or are using it in their own work. Of course, the more people use something, the more likely they are to identify problems with it - which is great because it provides an opportunity to improve things! This blog post is about addressing two issues that came up around some code I published recently.

 

The Platform Workaround (WebBrowserExtensions)

WebBrowserExtensions on Windows Phone

Roger Guess contacted me a couple of days after I posted the WebBrowserExtensions code to report a problem he saw when using it on Windows Phone 7 with the platform's NavigationService in a scenario where the user could hit the Back button to return to a page with a WebBrowser control that had its content set by the WebBrowserExtensions.StringSource property. (Whew!) Instead of seeing the content that was there before, the control was blank! Sure enough, I was able to duplicate the problem after I knew the setup...

My initial theory was that the WebBrowser was discarding its content during the navigation and not being reinitialized properly when it came back into view. Sure enough, some quick testing confirmed this was the case - and what's more, the same problem happens with the official Source property as well! That made me feel a little better because it suggests a bug with the platform's WebBrowser control rather than my own code. :)

The workaround I came up with for StringSource (and that was kindly verified by Roger) should work just as well for the official Source property: I created an application-level event handler for the Loaded event on the WebBrowser and use that event to re-apply the correct content during the "back" navigation. I updated the Windows Phone sample application and added a new button/page to demonstrate the fix in action.
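Here's a rough sketch of the idea (the _lastStringSource field is hypothetical and simply tracks the most recent value assigned to StringSource, and the sketch assumes the conventional SetStringSource accessor for the attached property; the download contains the real implementation):

private string _lastStringSource;

// Re-applies the HTML string when the WebBrowser reloads after a "back" navigation
private void WebBrowser_Loaded(object sender, RoutedEventArgs e)
{
    var webBrowser = (WebBrowser)sender;
    if (null != _lastStringSource)
    {
        WebBrowserExtensions.SetStringSource(webBrowser, _lastStringSource);
    }
}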

If this scenario is possible with your application, please consider applying a similar workaround!

Aside: Although it should be possible to apply the workaround to the WebBrowserExtensions code itself, I decided that wasn't ideal because of the event handler: the entire StringSource attached dependency property implementation is static, and tracking per-instance data from static code can be tricky. In this case, it would be necessary to ensure the Loaded event handler was added only once, that it was removed when necessary, and that it didn't introduce any memory leaks. Because such logic is often much easier at the application level and because the same basic workaround is necessary for the official WebBrowser.Source property and because it applies only to Windows Phone, it seemed best to leave the core WebBrowserExtensions implementation as-is.
Further aside: This same scenario works fine on Silverlight 4, so it's another example of a Windows Phone quirk that needs to be worked around. (Recall from the previous post that it was already necessary to work around the fact that the Windows Phone WebBrowser implementation can't be touched outside the visual tree.) That's a shame because the scenario itself is reasonable and consistent with the platform recommendation to use NavigationService for everything. The fact that it seems broken for the "real" Source property as well makes me think other people will run into this, too. :(

[Click here to download the WebBrowserExtensions class and samples for Silverlight, Windows Phone, and WPF.]

 

The Layout Implementation Oversight (BestFitPanel)

Eitan Gabay contacted me soon after I posted my BestFitPanel code to report an exception he saw when using one of the BestFitPanel classes as the ItemsPanel of a ListBox at design-time. I hadn't tried that particular configuration, but once I did, I saw the same message: "MeasureOverride of element 'Delay.MostBigPanel' should not return PositiveInfinity or NaN as its DesiredSize.". If you've dealt much with custom Panel implementations, this probably isn't all that surprising... Although coding layout is often straightforward, there can be a variety of edge cases depending on how the layout is done. (For example: only one child, no children, no available size, nested inside different kinds of parent containers, etc..)

In this case, it turns out that the constraint passed to MeasureOverride included a value of double.PositiveInfinity and BestFitPanel was returning that same value. That isn't allowed because the MeasureOverride method of an element is supposed to return the smallest size the element can occupy without clipping - and nothing should require infinite size! (If you think about it, though, the scenario is a little wacky for BestFitPanel: what does it mean to make the best use of an infinite amount of space?)

There are two parts to my fix for this problem. The first part is to skip calling the CalculateBestFit override for infinite bounds (it's unlikely to know what to do anyway) and to Measure all the children at the provided size instead. This ensures all children get a chance to measure during the measure pass - which some controls require in order to render correctly. The second part of the fix is to return a Size with the longest width and height of any child measured when infinite bounds are passed in. Because children are subject to the same rule about not returning an infinite value from Measure, this approach means BestFitPanel won't either and that the Panel will occupy an amount of space that's related to the size of its content (instead of being arbitrary like 0x0, 100x100, etc.).
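In code, the shape of the fix looks roughly like the following sketch (simplified; the panel name is hypothetical and the finite-bounds path, where CalculateBestFit normally runs, is omitted):

public class InfinityTolerantPanel : Panel
{
    protected override Size MeasureOverride(Size availableSize)
    {
        if (double.IsPositiveInfinity(availableSize.Width) ||
            double.IsPositiveInfinity(availableSize.Height))
        {
            // Part one: skip CalculateBestFit and measure every child at the provided size
            // Part two: return the largest child width/height instead of an infinite value
            var desiredSize = new Size(0, 0);
            foreach (UIElement child in Children)
            {
                child.Measure(availableSize);
                desiredSize.Width = Math.Max(desiredSize.Width, child.DesiredSize.Width);
                desiredSize.Height = Math.Max(desiredSize.Height, child.DesiredSize.Height);
            }
            return desiredSize;
        }
        // Finite bounds: the normal CalculateBestFit-based measurement runs here instead
        // (omitted from this sketch - see the downloadable BestFitPanel source)
        return new Size(0, 0);
    }
}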

The combined effect of these changes is to fix the reported exception, provide a better design-time experience, and offer a more versatile run-time experience as well!

All BestFitPanels overlapped

[Click here to download the source code for all BestFitPanels along with sample projects for Silverlight, WPF, and Windows Phone.]

 

The more a piece of code gets looked at and used, the more likely it is that potential problems are uncovered. It can be difficult to catch everything on your own, so it's fantastic to have a community of people looking at stuff and providing feedback when something doesn't work. Thanks again to Roger and Eitan for bringing these issues to my attention and for taking the time to try out early versions of each fix!

I'm always hopeful people won't have problems with my code - but when they do, I really appreciate them taking the time to let me know! :)

"Those who cannot remember the past are condemned to repeat it." [WebBrowserExtensions.StringSource attached dependency property makes Silverlight/Windows Phone/WPF's WebBrowser control more XAML- and binding-friendly]

The WebBrowser control is available in Silverlight 4, Windows Phone 7, and all versions of WPF. It's mostly the same everywhere, though there are some specific differences to keep in mind when using it on Silverlight-based platforms. WebBrowser offers two ways to provide its content: by passing a URI or by passing a string with HTML text:

  1. If you have a URI, you can set the Source (dependency) property in code or XAML or you can call the Navigate(Uri) method from code.

    Aside: It's not clear to me what the Navigate(Uri) method enables that the Source property doesn't, but flexibility is nice, so I won't dwell on this. :)
  2. On the other hand, if you have a string, your only option is to call the NavigateToString(string) method from code.

    XAML and data-binding support for strings? Nope, not so much...

 

I'm not sure why all three platforms have the same limitation, but I suspect there was a good reason at some point in time and maybe nobody has revisited the decision since then. Be that as it may, the brief research I did before writing this post suggests that a good number of people have been inconvenienced by the issue. Therefore, I've written a simple attached dependency property to add support for providing HTML strings in XAML via data binding!

<phone:WebBrowser delay:WebBrowserExtensions.StringSource="{Binding MyProperty}"/>

As you can see above, this functionality is made possible by the StringSource property which is exposed by the WebBrowserExtensions class. It's a fairly simple attached property that just passes its new value on to the WebBrowser's NavigateToString method to do the real work. For everyone's convenience, I've tried to make sure my StringSource implementation works on Silverlight 4, Windows Phone 7, and WPF.
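Conceptually, the implementation boils down to something like this sketch (simplified; the real code also includes the Windows Phone workaround described in the aside below):

public static class WebBrowserExtensions
{
    public static readonly DependencyProperty StringSourceProperty =
        DependencyProperty.RegisterAttached(
            "StringSource",
            typeof(string),
            typeof(WebBrowserExtensions),
            new PropertyMetadata(OnStringSourceChanged));

    public static string GetStringSource(WebBrowser webBrowser)
    {
        return (string)webBrowser.GetValue(StringSourceProperty);
    }

    public static void SetStringSource(WebBrowser webBrowser, string value)
    {
        webBrowser.SetValue(StringSourceProperty, value);
    }

    private static void OnStringSourceChanged(DependencyObject o, DependencyPropertyChangedEventArgs e)
    {
        // Pass the new value on to NavigateToString, which does the real work
        ((WebBrowser)o).NavigateToString((string)e.NewValue);
    }
}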

Aside: The StringSource property is read/write from code and XAML, but does not attempt to detect WebBrowser navigation by other means (and somehow "transform" the results into a corresponding HTML string). Therefore, if you're interleaving multiple navigation methods in the same application, reading from StringSource may not be correct - but writing to it should always work!

Aside: Things are more complicated on Windows Phone because the WebBrowser implementation there throws exceptions if it gets touched outside the visual tree. Therefore, if WINDOWS_PHONE is defined (and by default it is for phone projects), this code catches the possible InvalidOperationException and deals with it by creating a handler for the WebBrowser's Loaded event that attempts to re-set the string once the control is known to be in the visual tree. If the second attempt fails, the exception is allowed to bubble out of the method. This seems to work nicely for the typical "string in XAML" scenario, though it's possible more complex scenarios will require a more involved workaround.

My thanks go out to Roger Guess for trying an early version of the code and reminding me of this gotcha!

 

To prove to ourselves that StringSource behaves as we intend, let's create the world's simplest RSS reader! All it will do is download a single RSS feed, parse it for the titles and content of each post, and display those titles in a ListBox. There'll be a WebBrowser control using StringSource to bind to the ListBox's SelectedItem property (all XAML; no code!), so that when a title is clicked, its content will automatically be displayed by the WebBrowser!
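In XAML terms, the interesting binding is little more than the following sketch (property names like Items, Title, and Description are hypothetical stand-ins for the sample's view model):

<ListBox x:Name="PostsListBox" ItemsSource="{Binding Items}" DisplayMemberPath="Title"/>
<phone:WebBrowser delay:WebBrowserExtensions.StringSource="{Binding SelectedItem.Description, ElementName=PostsListBox}"/>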

 

Here's what it looks like on Silverlight (note that the sample must be run outside the browser because of network security access restrictions in Silverlight):

WebBrowserExtensions on Silverlight

 

And here's the same code running on Windows Phone:

WebBrowserExtensions on Windows Phone

 

And on WPF:

WebBrowserExtensions on WPF

 

[Click here to download the WebBrowserExtensions class and the samples above for Silverlight, Windows Phone, and WPF.]

 

The StringSource attached dependency property is simple code for a simple purpose. It doesn't have a lot of bells and whistles, but it gets the job done nicely and fills a small gap in the platform. You won't always deal with HTML content directly, but when you do, StringSource makes it easy to combine the WebBrowser control with XAML and data binding!

Images in a web page: meh... Images *in* a web page: cool! [Delay.Web.Helpers assembly now includes an ASP.NET web helper for data URIs (in addition to Amazon S3 blob/bucket support)]

Delay.Web.Helpers DataUri sample page

The topic of "data URIs" came up on a discussion list I follow last week in the context of "I'm in the process of creating a page and have the bytes of an image from my database. Can I deliver them directly or must I go through a separate URL with WebImage?" And the response was that using a data URI would allow that page to deliver the image content inline. But while data URIs are fairly simple, there didn't seem to be a convenient way to use them from an ASP.NET MVC/Razor web page.

Which was kind of fortuitous for me because I've been interested in learning more about data URIs for a while and it seemed that creating a web helper for this purpose would be fun. Better yet, I'd already released the Delay.Web.Helpers assembly (with support for Amazon S3 blob/bucket access), so I had the perfect place to put the new DataUri class once I wrote it! :)

Aside: For those who aren't familiar, the WebImage class provides a variety of handy methods for dealing with images on the server - including a Write method for sending them to the user's browser. However, the Write method needs to be called from a dedicated page that serves up just the relevant image, so it isn't a solution for the original scenario.

 

In case you've not heard of them before, data URIs are a kind of URL scheme documented by RFC 2397. They're quite simple, really - here's the relevant part of the specification:

data:[<mediatype>][;base64],<data>

dataurl    := "data:" [ mediatype ] [ ";base64" ] "," data
mediatype  := [ type "/" subtype ] *( ";" parameter )
data       := *urlchar
parameter  := attribute "=" value

It takes two pieces of information to create a data URI: the data and its media type (ex: "image/png"). (Although the media type appears optional above, it defaults to "text/plain" when absent - which is unsuitable for most common data URI scenarios.) Pretty much the only interesting thing you can do with data URIs on the server is write them, so the DataUri web helper exposes a single Write method with five flavors. The media type is always passed as a string (feel free to use the MediaTypeNames class to help here), but the data can be provided as a file name string, byte[], IEnumerable<byte>, or Stream. That's four methods; the fifth one takes just the file name string and infers the media type from the file's extension (ex: ".png" -> "image/png").
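Under the covers there's not much magic; producing a data URI is essentially base-64 encoding plus string concatenation, roughly along these lines (a simplified sketch of what the helper writes):

using System;

static class DataUriSketch
{
    // Formats a byte array and media type as an RFC 2397 data URI
    public static string Create(byte[] data, string mediaType)
    {
        return "data:" + mediaType + ";base64," + Convert.ToBase64String(data);
    }
}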

Aside: Technically, it would be possible for the other methods to infer media type as well by examining the bytes of data. However, doing so would require quite a bit more work and would always be subject to error. On the other hand, inferring media type from the file's extension is computationally trivial and much more likely to be correct in practice.

 

For an example of the DataUri helper in action, here's the Razor code to implement the "image from the database" scenario that started it all:

@{
    IEnumerable<dynamic> databaseImages;
    using (var database = Database.Open("Delay.Web.Helpers.Sample.Database"))
    {
        databaseImages = database.Query("SELECT * FROM Images");
    }

    // ...

    foreach(var image in databaseImages)
    {
        <p>
            <img src="@DataUri.Write(image.Content, image.MediaType)" alt="@image.Name"
                 width="@image.Width" height="@image.Height" style="vertical-align:middle"/>
            @image.Name
        </p>
    }
}

Easy-peasy lemon-squeezy!

 

And here's an example of using the file name-only override for media type inference:

<script src="@DataUri.Write(Server.MapPath("Sample-Script.js"))" type="text/javascript"></script>

Which comes out like this in the HTML that's sent to the browser:

<script src="data:text/javascript;base64,77u/ZG9j...PicpOw==" type="text/javascript"></script>
Aside: This particular example (using a data URI for a script file) doesn't render in all browsers. Specifically, Internet Explorer 8 (and earlier) blocks script delivered like this because of security concerns. Fortunately, Internet Explorer 9 has addressed those concerns and renders as expected. :)
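
Aside for the curious: the media type inference can be as simple as an extension-to-type lookup. The sketch below shows the idea with a handful of common mappings and an assumed "application/octet-stream" fallback - it's illustrative only, not the helper's actual list:

using System;
using System.Collections.Generic;
using System.IO;

public static class MediaTypeSketch
{
    // A few common mappings; a real helper would recognize many more extensions
    private static readonly Dictionary<string, string> Map =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            { ".png", "image/png" },
            { ".jpg", "image/jpeg" },
            { ".gif", "image/gif" },
            { ".js", "text/javascript" },
            { ".css", "text/css" },
            { ".txt", "text/plain" },
        };

    // Infer a media type from a file name's extension (ex: ".png" -> "image/png")
    public static string InferMediaType(string fileName)
    {
        string mediaType;
        return Map.TryGetValue(Path.GetExtension(fileName), out mediaType)
            ? mediaType
            : "application/octet-stream";
    }
}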

 

[Click here to download the Delay.Web.Helpers assembly, complete source code, automated tests, and the sample web site.]

[Click here to go to the NuGet page for Delay.Web.Helpers which includes the DLL and its associated documentation/IntelliSense XML file.]

[Click here to go to the NuGet page for the Delay.Web.Helpers.SampleWebSite which includes the sample site that demonstrates everything.]

 

Data URIs are pretty neat things - though it's important to be aware they have their drawbacks as well. Fortunately, the Wikipedia article does a good job discussing the pros and cons, so I highly recommend looking it over before converting all your content. :) Creating a data URI manually isn't rocket science, but it is the kind of thing ASP.NET web helpers are perfectly suited for. If you're a base-64 nut, maybe you'll continue doing this by hand - but for everyone else, I hope the new DataUri class in the Delay.Web.Helpers assembly proves useful!

Each one is the best - for different definitions of "best" [The BestFitPanel collection of layout containers provides flexible, easy-to-use options for Silverlight, WPF, and Windows Phone applications]

Just over a year ago, a couple of readers asked me about a WPF/Silverlight Panel that arranged things to make "best use" of available space without requiring the developer to set a bunch of stuff up in advance or know how many child elements there would be. Interestingly, this is not a scenario the default Panel implementations handle particularly well...

  • Grid [WPF/SL/WP] is capable of pretty much anything, but requires the developer to explicitly specify how everything lines up relative to the rows and columns they must manually define.

  • StackPanel [WPF/SL/WP] arranges an arbitrary number of items in a tightly-packed line, but overflows when there are too many and leaves empty space when there are too few.

  • Canvas [WPF/SL/WP] provides the ultimate in flexibility, but contains absolutely no layout logic and pushes all that overhead onto the developer.

  • WrapPanel [WPF/SLTK/WPTK] flows its elements "book-style" left-to-right, top-to-bottom, but runs content off the screen when there's not enough room and can size things surprisingly unless you tell it how big items should be.

    Aside: When scrolling content that doesn't fit is acceptable, WrapPanel can be quite a good choice. And if you like the idea, but want something a little more aesthetically pleasing, please have a look at my BalancedWrapPanel implementation... :)
    Further aside: On the other hand, if you're looking for something more like a StackPanel but with multiple columns (or rows), you might instead be interested in my BandedStackPanel implementation.

  • DockPanel [WPF/SLTK] crams everything against the edge of its layout slot and leaves a big "chunk" in the center for whatever element is lucky enough to end up there.

  • UniformGrid [WPF] does okay at sensible layout without a lot of fuss - but its default behavior can leave a lot of blank space and so it's best if you tell it in advance how many items there are.

 

That said, please don't get me wrong: I'm not complaining about the default set of layout containers - I think they're all good at what they do! However, in the context of the original "just do the right thing for me" scenario, none of them quite seems ideal.

So when this question came up before, I mentioned I'd written some code that seemed pretty relevant, but that it was for Windows Forms and therefore didn't map cleanly to the different layout model used by Silverlight and WPF. Soon thereafter, I created a sample project to implement a "best fit" panel for Silverlight and WPF (and got nearly all the code written!) - but then found myself distracted by other topics and never managed to write it up formally...

 

Until now!

Today I'm sharing the three Panel classes I originally wrote for Silverlight and WPF, two abstract base classes they're built on, an extra Panel I wrote just for this post, and a Windows Phone 7 sample application! (Because this code supports Silverlight 3, it works just as well on the phone as on the desktop.) Hopefully the extra goodness in today's release will offset the delay in posting it... :)

 

The foundation for everything is BestFitPanel, an abstract base class that implements MeasureOverride and ArrangeOverride to arrange its children in a grid that's M columns wide and N rows high. What's nice is that the values of M and N are left to subclasses to define by overriding the CalculateBestFit method. Therefore, a subclass only needs to worry about choosing columns/rows and the base class only needs to worry about handling layout.
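
For readers who like to see code, here's a rough sketch of the shape of such a base class. To be clear, BestFitPanelSketch, the CalculateBestFit signature, and the layout details below are my approximation for illustration (the real class differs), and the sketch assumes a finite available size:

using System.Windows;
using System.Windows.Controls;

public abstract class BestFitPanelSketch : Panel
{
    // Subclasses decide how many columns (M) and rows (N) "best" fit the available space
    protected abstract void CalculateBestFit(Size availableSize, int childCount,
                                             out int columns, out int rows);

    protected override Size MeasureOverride(Size availableSize)
    {
        int columns, rows;
        CalculateBestFit(availableSize, Children.Count, out columns, out rows);
        var cellSize = new Size(availableSize.Width / columns, availableSize.Height / rows);
        foreach (UIElement child in Children)
        {
            child.Measure(cellSize);
        }
        return availableSize; // Fill the available space (assumed finite here)
    }

    protected override Size ArrangeOverride(Size finalSize)
    {
        int columns, rows;
        CalculateBestFit(finalSize, Children.Count, out columns, out rows);
        var cellSize = new Size(finalSize.Width / columns, finalSize.Height / rows);
        for (int i = 0; i < Children.Count; i++)
        {
            // Place child i in cell (i % columns, i / columns)
            Children[i].Arrange(new Rect((i % columns) * cellSize.Width,
                                         (i / columns) * cellSize.Height,
                                         cellSize.Width, cellSize.Height));
        }
        return finalSize;
    }
}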

 

MostBigPanel is a BestFitPanel subclass that figures out which values of M and N maximize the length of the smaller dimension (be it width or height) of each item. In other words, it avoids long, skinny rectangles in favor of more evenly proportioned ones.

[Screenshot: MostBigPanel]
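
To give a feel for the kind of decision MostBigPanel makes, here's a sketch of a CalculateBestFit that enumerates candidate column counts and keeps the one whose cells have the largest minimum dimension. It builds on the BestFitPanelSketch above and is an approximation, not the shipped source:

using System;
using System.Windows;

public class MostBigPanelSketch : BestFitPanelSketch
{
    protected override void CalculateBestFit(Size availableSize, int childCount,
                                             out int columns, out int rows)
    {
        columns = rows = 1;
        double bestSmallSide = -1.0;
        int count = Math.Max(1, childCount);
        // Try every candidate column count; the row count follows from the child count
        for (int c = 1; c <= count; c++)
        {
            int r = (int)Math.Ceiling((double)count / c);
            double smallSide = Math.Min(availableSize.Width / c, availableSize.Height / r);
            if (smallSide > bestSmallSide)
            {
                bestSmallSide = smallSide;
                columns = c;
                rows = r;
            }
        }
    }
}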

 

MostFullPanel is a BestFitPanel subclass that maximizes the total area occupied by the Panel's children. Specifically, an arrangement without any empty cells will be preferred over one with an empty cell or two - even if the shape of the resulting items is less balanced.

[Screenshot: MostFullPanel]
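
A MostFullPanel-style calculation could instead minimize the number of empty cells; the tie-breaking rule in this sketch (falling back to the most evenly shaped cells) is purely my assumption:

using System;
using System.Windows;

public class MostFullPanelSketch : BestFitPanelSketch
{
    protected override void CalculateBestFit(Size availableSize, int childCount,
                                             out int columns, out int rows)
    {
        columns = rows = 1;
        int fewestEmptyCells = int.MaxValue;
        double bestSmallSide = -1.0;
        int count = Math.Max(1, childCount);
        for (int c = 1; c <= count; c++)
        {
            int r = (int)Math.Ceiling((double)count / c);
            int emptyCells = (c * r) - count;
            double smallSide = Math.Min(availableSize.Width / c, availableSize.Height / r);
            // Prefer fewer empty cells; break ties in favor of more balanced cells
            if ((emptyCells < fewestEmptyCells) ||
                ((emptyCells == fewestEmptyCells) && (smallSide > bestSmallSide)))
            {
                fewestEmptyCells = emptyCells;
                bestSmallSide = smallSide;
                columns = c;
                rows = r;
            }
        }
    }
}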

 

Sometimes it's nice to optimize for the "shape" of individual items - and for that there's the BestAnglePanel abstract base class, which chooses the combination of M and N that yields items with a diagonal closest to some angle A determined by the GetIdealAngle override.
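
Conceptually, such a base class can override CalculateBestFit once and defer only the target angle to its subclasses - something like this sketch (BestAnglePanelSketch continues the illustrative hierarchy above and is not the shipped code):

using System;
using System.Windows;

public abstract class BestAnglePanelSketch : BestFitPanelSketch
{
    // Subclasses provide the ideal diagonal angle for a cell, in degrees
    protected abstract double GetIdealAngle();

    protected override void CalculateBestFit(Size availableSize, int childCount,
                                             out int columns, out int rows)
    {
        columns = rows = 1;
        double idealAngle = GetIdealAngle();
        double smallestDifference = double.MaxValue;
        int count = Math.Max(1, childCount);
        for (int c = 1; c <= count; c++)
        {
            int r = (int)Math.Ceiling((double)count / c);
            // Angle of a single cell's diagonal, measured up from horizontal
            double angle = Math.Atan2(availableSize.Height / r, availableSize.Width / c) * (180.0 / Math.PI);
            double difference = Math.Abs(angle - idealAngle);
            if (difference < smallestDifference)
            {
                smallestDifference = difference;
                columns = c;
                rows = r;
            }
        }
    }
}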

 

MostSquarePanel is a BestAnglePanel subclass that uses a value of 45° for A and therefore prefers arrangements where items are closest to being square.

[Screenshot: MostSquarePanel]

 

MostGoldenPanel, on the other hand, is a BestAnglePanel subclass that uses a value for A that matches that of a horizontally-oriented golden rectangle. Golden rectangles are said to be among the most aesthetically pleasing shapes, and this class makes it easy to create layouts based around them.

[Screenshot: MostGoldenPanel]
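
Plugging the two angles from above into that sketch, hypothetical subclasses might look like this (45° for squares; atan(1/φ) ≈ 31.7° for a horizontally-oriented golden rectangle):

using System;

public class MostSquarePanelSketch : BestAnglePanelSketch
{
    // A square's diagonal sits at 45 degrees
    protected override double GetIdealAngle() { return 45.0; }
}

public class MostGoldenPanelSketch : BestAnglePanelSketch
{
    // Diagonal of a horizontally-oriented golden rectangle: atan(1/phi), roughly 31.7 degrees
    protected override double GetIdealAngle()
    {
        double phi = (1.0 + Math.Sqrt(5.0)) / 2.0;
        return Math.Atan2(1.0, phi) * (180.0 / Math.PI);
    }
}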

 

Of course, for a given number of children there are only a handful of sensible M and N combinations to choose from, so it's not uncommon for all the implementations above to pick the same values. The interesting differences tend to show up at various "special" sizes where each BestFitPanel selects a different layout. This is why the sample application allows you to enable all the panels at once: the sample content is translucent, so you can see where things differ and how each implementation handles a particular configuration. I made sure the arrangements in the screenshots above were all unique - here's how it looks when they're all shown at once:

[Screenshot: all BestFitPanels overlapped]

 

For a real-world example of BestFitPanel in action, I've adapted the "ImageLoading" sample from my Windows Phone 7 PhonePerformance project to use MostBigPanel (which is what I would have used if I'd written this post beforehand!). If you're not familiar with that sample, it finds all the followers of an arbitrary Twitter account and shows their images. Because it's impossible to know in advance how many followers an account has, trying to use one of the "in-box" Panel implementations is likely to be tricky or require writing code to configure things at run-time. But BestFitPanel makes this scenario easy by automatically showing all the items and optimizing for the most important attribute ("bigness" in this case). Here's the same code/XAML with different numbers of followers (400, 200, and 100) to show how things "just work":

[Screenshots: BestFitPanel with 400, 200, and 100 items]

 

[Click here to download the complete source code for all the BestFitPanels along with sample projects for Silverlight, WPF, and Windows Phone 7.]

 

The concept of a reusable, content-agnostic Panel for layout is tremendously powerful. The "stock" implementations for Silverlight, WPF, and Windows Phone are all quite useful, but sometimes you'll find that writing a custom Panel is the only way to get exactly the layout you're looking for. Fortunately, layout code is pretty straightforward - and classes like BestFitPanel and BestAnglePanel make it even easier. So the next time you're looking for a flexible container that works sensibly without requiring a bunch of prior knowledge or hand-holding, I hope you'll remember this post and consider using a BestFitPanel - or a custom subclass! :)