The blog of dlaa.me

Posts tagged "Technical"

Computing the size of your boat [Sample code to help analyze storage space requirements]

Yesterday I mentioned a quick C# program I wrote to help analyze storage space requirements. There was some interest in how that program worked, so I'm posting the complete source code for anyone to use.

using System;
using System.Collections.Generic;
using System.IO;

class SizeOfFilesCreatedOnDate
{
    private const string outputFileName = "SizeOfFilesCreatedOnDate.csv";

    static void Main(string[] args)
    {
        // Create a dictionary to hold the date/size pairs (sorted for subsequent output)
        SortedDictionary<DateTime, long> sizeOfFilesCreatedOnDate = new SortedDictionary<DateTime, long>();

        // Tally the contents of each specified directory
        // * If no command-line argument was given, default to the current directory
        if (0 == args.Length)
        {
            args = new string[] { Environment.CurrentDirectory };
        }
        foreach (string directory in args)
        {
            AddDirectoryContents(directory, ref sizeOfFilesCreatedOnDate);
        }

        // Output all date/size pairs to a CSV file in the current directory
        using (StreamWriter writer = File.CreateText(outputFileName))
        {
            writer.WriteLine("Date,Size,Cumulative");
            long cumulative = 0;
            foreach (DateTime date in sizeOfFilesCreatedOnDate.Keys)
            {
                long size = sizeOfFilesCreatedOnDate[date];
                cumulative += size;
                writer.WriteLine("{0},{1},{2}", date.ToShortDateString(), size, cumulative);
            }
        }
        Console.WriteLine("Output: {0}", outputFileName);
    }

    private static void AddDirectoryContents(string directory, ref SortedDictionary<DateTime, long> sizeOfFilesCreatedOnDate)
    {
        // Display status
        Console.WriteLine("Scanning: {0}", directory);

        // Tally each child file in the parent directory
        foreach (string file in Directory.GetFiles(directory))
        {
            // Get a FileInfo for the file
            FileInfo fileInfo = new FileInfo(file);

            // Get the creation time of the file
            // * If last write < creation, then the file was moved at least once; use the earlier date
            // * The difference between local/UTC (~hours) is unimportant at this scale (~years); use local
            DateTime date = fileInfo.CreationTime.Date;
            if (fileInfo.LastWriteTime.Date < date)
            {
                date = fileInfo.LastWriteTime.Date;
            }

            // Update the relevant date/size pair
            long size;
            if (!sizeOfFilesCreatedOnDate.TryGetValue(date, out size))
            {
                size = 0;
            }
            sizeOfFilesCreatedOnDate[date] = size + fileInfo.Length;
        }

        // Recursively tally each child directory in the parent directory
        foreach (string childDirectory in Directory.GetDirectories(directory))
        {
            AddDirectoryContents(childDirectory, ref sizeOfFilesCreatedOnDate);
        }
    }
}
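
To try this yourself, save the code to a .cs file and build it with the C# compiler that comes with the .NET Framework. The directory argument below is just a hypothetical example and the output shown is illustrative:

C:\Temp>csc SizeOfFilesCreatedOnDate.cs

C:\Temp>SizeOfFilesCreatedOnDate.exe D:\Mirror
Scanning: D:\Mirror
...
Output: SizeOfFilesCreatedOnDate.csv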

Notes:

  • I wrote this code for a simple one-time purpose, so there's no fancy/friendly user interface.
  • There's also no error-checking. In particular, if it bumps into a directory/file that it doesn't have permission to access, the resulting UnauthorizedAccessException will bubble up and terminate the process. (While this is unlikely to occur when using the program for its intended purpose of examining your data files, it is pretty likely to occur if playing around and pointing it at C:\.)
  • Other than adding comments and support for specifying multiple directories on the command-line, this is the same code I used to generate my chart.
  • The code for handling a last write time earlier than the creation time was something I discovered a need for experimentally: when I considered only creation time, the results reported that none of my files were older than a couple of years. Apparently when I moved stuff around a couple of years ago, the copy to my current drive preserved each file's last write time, but reset its creation time (perhaps because of the FAT->NTFS transition).

Enjoy!

Tags: Technical

"You're gonna need a bigger boat." [A brief look at data storage requirements in today's world]

I've previously blogged about my data storage/backup strategy. Briefly, I've got one big drive in my home server that stores all the data my family cares about: mostly music, pictures, and videos (with a little bit of other stuff for good measure). To protect the data, I've got another equally big external drive that I connect occasionally and use for backups by simply mirroring the content of the internal drive.

As things stand today, the internal drive is 320GB and the external drive is 300GB, but I've hit the wall and am almost out of space to add new files. Looking at hard drive prices these days, the sweet spot (measured in $/GB) seems to be with 500GB drives at about $140 (PATA or SATA). Any smaller than that and the delta from 300GB isn't enough to be interesting - any larger than that and the cost really goes up.

I was already prepared to buy a new drive every year or so to allow for growth, so I was curious if getting a 500GB drive now would do the trick. I wrote a quick program to look at every file I back up and tally the sizes according to the date each file was created. The C# program walks the whole directory tree, sums the sizes by date, and writes out a simple CSV file with the results. The idea is to chart the rate at which I'm adding data in order to predict when I'd next run out of space. (Yes, it's easy to come up with more sophisticated heuristics, but this is really just a back-of-the-envelope calculation and doesn't need to be perfect to be meaningful.)

Last night I opened the CSV file in Excel and charted the data. The resulting chart looks like this:

Data Storage Space (GB)

The blue line represents the cumulative size of the data I had at each point in time (horizontal axis) measured in GB (vertical axis) - you can see that I'm just above 300GB today. The red line is Excel's exponential trend line for the same data - it matches the blue line almost perfectly, so it seems pretty safe to say that my data storage needs are increasing exponentially. I was kind of afraid of that, because it means the 500GB drives I've been considering are likely to fill up within the next 8 months!
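
As a quick sanity check on that estimate: with exponential growth, the time to grow from the current size S to a capacity C is ln(C/S)/r, where r is the growth rate. Here's a hypothetical back-of-the-envelope sketch; the ~6.5%/month rate below is an assumption chosen for illustration (the real rate comes from the trend line), but it lands right around that 8-month figure:

using System;

class MonthsUntilFull
{
    static void Main()
    {
        double currentSizeGB = 300;        // roughly where I am today
        double capacityGB = 500;           // candidate drive size
        double monthlyGrowthRate = 0.065;  // hypothetical ~6.5% growth per month

        // Solve currentSize * e^(rate * t) = capacity for t (months)
        double months = Math.Log(capacityGB / currentSizeGB) / monthlyGrowthRate;
        Console.WriteLine("Approximate months until full: {0:F1}", months);  // ~7.9
    }
}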

Clearly, I need to be prepared to spend more on hard drives than I'd initially planned to - or else I'm going to need to significantly change how I do things. I've got some ideas I'm still considering, but charting this data was a good wake-up call that drive capacity isn't increasing as rapidly as I might like. :)

I think that data storage and backup are issues that will affect all of us pretty soon (if they're not already). Backing up to DVDs doesn't scale well once you need more than 10 or so DVDs, and backing up over the network just doesn't seem practical when you're talking about numbers this large. Even ignoring the need to back up, simply storing all the data you have is rapidly becoming an issue. With downloadable HD movie/TV content becoming popular, high megapixel still/video cameras being commonplace, and fast Internet connections becoming the norm, it seems to me that content is outpacing storage right now.

Here's hoping for a quantum leap in storage technology!

Updated on 2007/03/14: I've just posted the source code for the program I wrote to gather this data.

Tags: Technical

A brief bit 'bout backups [My current backup strategy]

I've seen a few references to backup strategies on blogs and discussion lists lately and thought I'd write a bit about the strategy I recently decided on and implemented. Of course, everyone has their own approach to file management, their own comfort level for security, and their own ideas about what's "best". That's life and I'm not going to try to persuade anyone that my way is better than their way - but I will outline my way in case it's useful for others, too. :)

The setup: My machine is running Windows 2003 Server and I try to keep as much unnecessary stuff off it as possible (no games, no P2P programs, no weird drivers, etc.). Along the same lines, all user accounts on the server are members of the restricted access Users group, not the Administrators group. The machine has one hard drive for storing the operating system and all programs (60 GB) and another hard drive for storing all data (320 GB).

The data drive has a Mirror directory under which all data to be backed up is stored. The Mirror directory is ACLed to allow the Users group read/write access. Non-private subdirectories of it are shared out for read-only access by Users. I have an external USB 2.0 drive enclosure for backing up to (200 GB) that is normally powered off and that I mirror the Mirror directory to every couple of days or so. The external drive is ACLed to allow only members of the Backup Operators group to make changes.

My data consists of the usual personal stuff (email, source code, etc.), all digital photos I've ever taken, all digital video I've ever taken, sentimental stuff (like wedding videos, baby's ultrasound video, etc.), and some of my music collection in WMA Lossless format. Very little data changes day-to-day, so a simple tool like RoboCopy (free with the Windows 2003 Resource Kit) is more than enough to keep the backup directory in sync (use RoboCopy's /MIR switch to make this easy; a sample command appears below). Along with the rest of the data is a file that records the MD5 hash of every file in the backup.

As my data storage needs increase (which they do each time I take a picture or shoot a video!), I'll eventually buy a new large hard drive and swap it for the smaller of the two data drives currently in use. As long as my storage needs don't grow too rapidly, I'm figuring the cost of upgrading to be about $100 each year (that's the cost of a mid-sized drive like the 320 GB I purchased a few months ago). I'm counting on storage capacity to continue increasing like it has so that I'll always be able to buy $100 drives when I need to increase the storage space.
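
For example, the mirroring step can be a single command along these lines (the drive letters here are hypothetical; be aware that /MIR makes the destination exactly match the source, deleting anything that's no longer present on the source side):

D:\>robocopy D:\Mirror F:\Mirror /MIR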

Benefits provided by this approach:

  • All the data I care about is stored in two independent locations, so there's no single point of failure. (Duh, that's why it's a backup.)
  • Hard drive media doesn't suffer from the same "bit rot" problems that can render writable CDs/DVDs unreadable after just a couple of years.
  • The backup drive is completely separate from the primary drive, so if I ever make a mistake and delete something important, I can easily recover it from the backup. (Some RAID-based solutions immediately mirror all changes and therefore don't have this benefit.) Similarly, a destructive virus on my main machine can't immediately destroy all copies of any data.
  • I look over the list of changes whenever I perform the mirroring to the external drive, so I have an additional opportunity to catch accidental deletions, mysterious changes, etc..
  • I have immediate access to all of my data from any machine in my home. If I decide to look at old photos, I can access them just as easily as the photos I took yesterday.
  • All family members store their data under the Mirror directory (via appropriately ACLed shares), so everybody's data is automatically backed up.
  • In the event of a slow-moving catastrophe (ex: a flood) I can easily grab the external backup drive and take it with me wherever I go. All data will be accessible from any other Windows computer in the world.
  • The overall cost was minimal to set up (~$100) and should be minimal to maintain (~$100/year).
  • Data is separate from applications, so I can reinstall or upgrade the operating system whenever I want without worrying about the data itself.
  • User accounts have limited privileges and are therefore less likely to accidentally compromise the machine when reading email or surfing the web.
  • The MD5 hashes mean that it's easy to verify the contents of my backup drive and that I'll be able to detect data corruption problems if they ever happen. (A sketch of one way to compute these hashes appears just after this list.)
  • The backup drive is ACLed so that I can't accidentally delete data on it.
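
Regarding that MD5 point: here's a minimal sketch of one way to compute such hashes with the .NET Framework (the command-line argument and output format are hypothetical; this isn't the exact tool I use):

using System;
using System.IO;
using System.Security.Cryptography;

class HashFiles
{
    static void Main(string[] args)
    {
        using (MD5 md5 = MD5.Create())
        {
            // Hash every file under the directory given on the command line
            foreach (string file in Directory.GetFiles(args[0], "*", SearchOption.AllDirectories))
            {
                using (FileStream stream = File.OpenRead(file))
                {
                    byte[] hash = md5.ComputeHash(stream);
                    Console.WriteLine("{0} {1}", BitConverter.ToString(hash).Replace("-", string.Empty), file);
                }
            }
        }
    }
}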

Problems this approach does not solve:

  • Both drives are at the same physical location, so all data can be lost in the event of a sudden catastrophe (ex: fire, earthquake). Possible mitigation: Set up a third external drive (after the first upgrade) and keep that drive somewhere far away. It may not be big enough to hold everything, but I'm happy to exclude music from the off site backup. Drawback: Inconvenience of updating the off site drive.
  • "Old data" is lost quickly. For example: if I accidentally delete an important file, I need to detect that mistake at the time of the next mirroring or else that file is gone for good. Possible mitigation: Multiple backup drives at staged intervals (ex: 1 week, 1 month, 3 months). Drawback: Cost.
  • A thief who steals the computer or external drive might have access to personal data. Possible mitigation: Encryption. Drawback: Inconvenience of decrypting files to use them and/or backing up EFS keys.
  • This solution may not scale well if my data storage needs increase faster than storage technology does. Possible mitigation: Move to a different backup strategy. Drawback: That strategy will have its own problems.

I think this overview touches on pretty much all of the key points of this strategy. It's obviously not a perfect solution, but it meets most of my requirements and I'm pretty happy with how it's been working out so far. However, I'm always open to improvements - if you have any suggestions, I'd love to hear them!

Tags: Technical

When the GAC makes you gack (Part 2) [How something can be both IN and NOT IN the GAC at the same time]

In Part 1 we investigated a curious tool failure and discovered that it's possible for something to be both IN and NOT IN the GAC at the same time. The results of the investigation so far have been informative, but unrevealing. So let's try another approach...

The Sysinternals Filemon tool should let us see exactly where the tool is looking for vjslib.dll and maybe that will help figure out why it can't be found. Run Filemon, specify a filter of "*tool_name*" to limit the output, then run the program of interest. Filemon will capture a whole bunch of stuff, so save the output to a file where it can be searched more easily. We'll start with a simple string search for "vjslib" to see what turns up:

D:\Temp>findstr vjslib Filemon.LOG
327     6:58:10 PM      GACBlog.exe:212 QUERY INFORMATION       C:\WINDOWS\assembly\GAC_64\vjslib\2.0.0.0__b03f5f7f11d50a3a     PATH NOT FOUND  Attributes: Error
328     6:58:10 PM      GACBlog.exe:212 QUERY INFORMATION       C:\WINDOWS\assembly\GAC_MSIL\vjslib\2.0.0.0__b03f5f7f11d50a3a   PATH NOT FOUND  Attributes: Error
329     6:58:10 PM      GACBlog.exe:212 QUERY INFORMATION       C:\WINDOWS\assembly\GAC\vjslib\2.0.0.0__b03f5f7f11d50a3a        PATH NOT FOUND  Attributes: Error
...

Oooh, that's interesting, there seem to be multiple GACs: GAC_64, GAC_MSIL, and GAC each get probed unsuccessfully. But we recall from Part 1 that gacutil told us vjslib was in the GAC, so what GAC is it in??

C:\WINDOWS\assembly>dir vjslib.dll /s
 Volume in drive C is C
 Volume Serial Number is 0C48-9782

 Directory of C:\WINDOWS\assembly\GAC_32\vjslib\2.0.0.0__b03f5f7f11d50a3a

2006-02-21  11:18 AM         3,661,824 vjslib.dll
               1 File(s)      3,661,824 bytes

     Total Files Listed:
               1 File(s)      3,661,824 bytes
...

Ah-ha! There's a fourth GAC, GAC_32, that contains the required assembly, but that GAC isn't getting checked when the tool is run and so the tool is failing.

At this point we can begin to guess that the problem is unique to my machine because I'm the only person who's trying to run the tool under a 64-bit OS. (Which also explains why most people won't have been able to reproduce this problem with the sample code in Part 1.) Now that we understand the problem a little better, we can pretty easily find more detailed information about what's going on by doing some quick web searching. In this case, Junfeng Zhang's post GAC, Assembly ProcessorArchitecture, and Probing explains the motivation behind multiple GACs.

Okay, so we've figured out what the problem is: the tool is compiled to be architecture-independent and the GAC probing sequence for architecture-independent programs on 64-bit OSes does not include the architecture-dependent 32-bit GAC where vjslib.dll is actually installed. The question now becomes how to fix the tool so that it will run successfully on both 32- and 64-bit OSes.

My first thought was to find out how to modify the probing sequence for the tool in order to include the 32-bit GAC. While that may be possible and that approach may work, a little more thought convinced me it wasn't the right approach to take here. The way I reasoned, if the tool has a dependency on a 32-bit reference, then the tool is not really architecture-independent after all! The problem is that the tool is claiming to be something it's not and that's what's causing it problems. If, when we compiled the tool, we were to specify the platform target of "x86" instead of "Any CPU" in Visual Studio (or using the /platform C# compiler option), then the tool should run successfully because it would naturally probe the 32-bit GAC where vjslib.dll lives.
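
For example, if the tool were rebuilt from the command line, the fix could be as simple as adding the platform switch (the source file name here is hypothetical, and the vjslib reference assumes a copy of the assembly is available to the compiler):

C:\Temp>csc /platform:x86 /reference:vjslib.dll Program.cs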

And, indeed, that simple change fixes the tool, solves the problem, and answers the question of how something can be both IN and NOT IN the GAC at the same time! This investigation was a neat learning experience for me - I hope it's as much fun to read about as it was to experience! :)

Tags: Technical

When the GAC makes you gack (Part 1) [How something can be both IN and NOT IN the GAC at the same time]

I recently had occasion to use a particular tool for the first time and found that it didn't work on my machine. This was weird, because nobody else seemed to have any problems running the same tool on their machines. So I set out to determine what was wrong...

Simplifying things ridiculously for the purposes of this example, I'll note that the tool manipulates ZIP files, has a reference to "vjslib", and is compiled from code that looks something like this:

using java.util.zip;

class Program
{
    static void Main(string[] args)
    {
        new ZipFile("file.zip");
    }
}

(Aside: In case you're wondering what's up with the "java.util.zip" namespace and the reference to "vjslib.dll", I'll suggest that the author of this tool was probably following the recommendations of the article "Using the Zip Classes in the J# Class Libraries to Compress Files and Data with C#" which recommends exactly this approach. You may be aware that .NET 2.0 offers support for compressed streams via the classes in the new System.IO.Compression namespace. However, support for compressed streams is not the same thing as supporting the ZIP file format, so I believe this technique is still relevant.)
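
For what it's worth, a sample like this can be compiled from the command line by passing the J# runtime assembly as a reference; the command below assumes a copy of vjslib.dll is in the current directory or otherwise available to the compiler:

C:\Temp>csc /reference:vjslib.dll Program.cs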

When run on my machine, the tool produces the following output (inadvertent profanity due to default 80-column wrapping of "assembly" removed for your protection):

Unhandled Exception: System.IO.FileNotFoundException: Could not load file or as*
embly 'vjslib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
' or one of its dependencies. The system cannot find the file specified.
File name: 'vjslib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d
50a3a'
   at Program.Main(String[] args)

That's odd, because my computer DOES have the Microsoft Visual J# Version 2.0 Redistributable Package installed as required (it comes with a Visual Studio Team Suite full install). But it's worth checking the GAC (Global Assembly Cache) anyway, just to be sure that vjslib is present there as we expect:

C:\Program Files\Microsoft.NET\SDK\v2.0 64bit>gacutil -l vjslib
Microsoft (R) .NET Global Assembly Cache Utility.  Version 2.0.50727.42
Copyright (c) Microsoft Corporation.  All rights reserved.

The Global Assembly Cache contains the following assemblies:
  vjslib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=x86

Number of items = 1

Yup, it's in there. So why can't it be found by the tool? To try to answer that question, we turn to the Assembly Binding Log Viewer (Fuslogvw.exe). Just run the viewer, enable the "Log bind failures to disk" setting, run the tool again, then refresh the viewer and open the failed binding entry to see the following (abbreviated) output:

*** Assembly Binder Log Entry  (2006-03-23 @ 10:52:44 AM) ***

The operation failed.
Bind result: hr = 0x80070002. The system cannot find the file specified.

...

LOG: Post-policy reference: vjslib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
LOG: GAC Lookup was unsuccessful.
...
LOG: All probing URLs attempted and failed.

Hum, that's really odd... We know vjslib is in the GAC, yet it can't be found in the GAC. My machine is correctly configured, has the necessary components installed, and appears to be working fine in every other respect.

So what's going on here??

(Stay tuned for the exciting answer in Part 2!)

Tags: Technical

An image is now worth two thousand HTML tags [How to: Use ASP.NET's IHttpHandler interface to display a custom image]

In a previous post, I referred to some MSDN sample code and outlined the process of creating an ASP.NET page to display an image instead of HTML. As is often the case, the relevant sample code was written to demonstrate a concept rather than to be "production-ready". In one of the comments for that post, Heath Stewart suggested the use of a .ashx file as a more efficient way to do the same thing. This seems like a great opportunity to learn a little more about ASP.NET, so let's give it a try!

First, a little research is in order - I recommend starting with the documentation for Creating HttpHandlers and following up by learning a little about the corresponding IHttpHandler Interface. Armed with that knowledge, we should be able to convert the modified sample code over to the IHttpHandler interface.

Begin by creating a new file in the web site directory you were already using. I used Visual Studio's "Add New Item" action to add a "Generic Handler" page that was automatically named Handler.ashx. I then pasted in the existing code from my earlier post, tweaked a few minor things, and had a working IHttpHandler in considerably less time than it's taken me to write this post. :)

Here's what I ended up with in my Handler.ashx:

<%@ WebHandler Language="C#" Class="Handler" %>

using System.Drawing;
using System.Drawing.Imaging;
using System.Web;

public class Handler : IHttpHandler
{

    public void ProcessRequest(HttpContext context)
    {
        // Set the page's content type to JPEG files
        context.Response.ContentType = "image/jpeg";

        // Create integer variables.
        int height = 100;
        int width = 200;

        // ...

        // Create a bitmap and use it to create a
        // Graphics object.
        using (Bitmap bmp = new Bitmap(width, height, PixelFormat.Format24bppRgb))
        {
            using (Graphics g = Graphics.FromImage(bmp))
            {

                // ...

                g.Clear(Color.LightGray);

                // ...

            }

            // Save the bitmap to the response stream and
            // convert it to JPEG format.
            bmp.Save(context.Response.OutputStream, ImageFormat.Jpeg);
        }
    }

    public bool IsReusable
    {
        get
        {
            return true;
        }
    }
}

As you can see, the code is almost identical to the initial Page-based implementation! The only thing I think is worth calling out is that I've chosen to return "true" for the IsReusable property. The documentation for the IHttpHandler.IsReusable Property suggests that this property can be used to prevent concurrent use of the instance. Since our simple implementation doesn't make use of any global state, it should be safely reentrant, and therefore returns "true" to help ASP.NET avoid creating unnecessary instances of the class.
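
As a usage note, pages elsewhere on the site can then display the generated image simply by referencing the handler like any other image file; for example (using the Handler.ashx name from above):

<img src="Handler.ashx" alt="Dynamically generated image" />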

There you have it: a simple change that gives us an implementation consuming fewer resources and consequently scaling better on a busy web server. Thanks for the great suggestion, Heath!!

Tags: Technical

Start using using today! [A bit about the IDisposable interface and the using statement]

There's plenty to say about the IDisposable interface and the using statement, but you probably don't have time to read it all (and I don't have time to write it all!), so I'm going to try to keep this short and simple.

First, let's make sure we're all on the same page. If you're not familiar with the relevant concepts, please take a moment to learn about .NET Garbage Collection, the IDisposable interface, the using statement, and the use of objects that implement IDisposable. (If you're a bibliophile, I understand that the book "Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries" (ISBN 0321246756) contains additional material in section 10.3, "Dispose Pattern".)

Now that we're all familiar with the concepts, I'd like to call attention to a few things:

  • Implementation of the IDisposable interface by the author of a class is optional and completely unnecessary for a correctly written class. Even without IDisposable, the Garbage Collector will eventually clean up all of the class's resources (possibly with the help of a Finalize method override).
  • However, without IDisposable, there is no way of controlling *when* a class's resources will be cleaned up. The Garbage Collector (GC) runs only when it needs to, so it could be seconds, minutes, or even hours after you're done using a class before the GC runs and cleans up those resources. Any resources held by that class (like file handles, sockets, SQL connections, etc.) will remain in use until that class is cleaned up by the GC. This can cause unexpected problems when, for example, the user has closed a file in an application, but the application continues holding on to that file and prevents the user from copying it, moving it, etc.. Worse still, such problems will occur "randomly" according to whether or not the GC has run.
  • So it's a good practice to call IDisposable.Dispose whenever you're done using a class that implements IDisposable so that its resources can be freed immediately and deterministically (i.e., predictably). As a side effect, this helps minimize the resource consumption of your application which always makes users happy.
  • However, simply adding a call to Dispose at the end of a block of code isn't a complete solution because it means that Dispose won't get called if an Exception is thrown anywhere within that block of code. If you're going to call Dispose, you want to *always* call Dispose, so you really want the call to Dispose to be within the finally block of a try-finally statement pair.
  • To address this, you can manually add a try-finally pair, *or* you can use the using statement which does so for you! Under the covers, the using statement maps to a try-finally pair which calls Dispose for the specified object within the finally block, but the beauty of the using statement is that it's a simpler, more concise way of doing so that keeps all the relevant code in one place and hides the gory details from view. You can even declare and initialize the object in the using statement itself! (See the documentation for using for examples, or the sketch just after this list.)
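
To make that mapping concrete, here's a rough sketch of the equivalence for a reference type (simplified from what the compiler actually generates; the file name is just an example):

using (StreamReader reader = new StreamReader("data.txt"))
{
    Console.WriteLine(reader.ReadLine());
}

// ...is roughly equivalent to:

{
    StreamReader reader = new StreamReader("data.txt");
    try
    {
        Console.WriteLine(reader.ReadLine());
    }
    finally
    {
        if (reader != null)
        {
            ((IDisposable)reader).Dispose();
        }
    }
}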

With these points in mind, I propose the following guidelines whenever dealing with an object that implements the IDisposable interface:

  • Always call the object's Dispose() method
  • Call Dispose() under all conditions (i.e., within a finally block)
  • Call Dispose() as soon as you're done with the object

Conveniently, the using statement makes it easy to do *all* of these things! The using statement is a simple programming construct that's very readable and that helps your code perform reliably, predictably, and efficiently. It doesn't get much better than that, so if you aren't already, please start using using today!

Tags: Technical

An image is worth a thousand HTML tags [How to: Display a custom image with an ASP.NET web page]

Imagine that you want to generate a custom image for your web site and that the content of the image will be dynamic enough that it's not possible to create the images beforehand. (One example might be an image of a clock displaying the current time.) It sure would be nice if you were able to create that image from an ASP.NET .aspx page whenever it was needed. But .aspx pages have to return HTML code, don't they? As it happens, they don't! While that's the job they perform most of the time, they can actually return any arbitrary data you'd like.

In this case, we want to figure out how to return an image. Like many things in life, it's pretty easy once you know where to look. Specifically, the MSDN documentation for the HttpResponse.OutputStream property is a great place to start. The example on that page shows how to generate a custom JPEG image and return it from an ASP.NET .aspx page. Click the "Copy Code" link, create a new ASP.NET web site in your favorite editor, replace the contents of Default.aspx with the sample code, and view it in your favorite browser. Problem solved! (Well, close enough - customizing that image is left as an exercise to the reader. :) )

Digging a little deeper, we see that the sample code looks something like this:

private void Page_Load(object sender, EventArgs e)
{
    // Set the page's content type to JPEG files
    // and clear all response headers.
    Response.ContentType = "image/jpeg";
    Response.Clear();

    // Buffer response so that page is sent
    // after processing is complete.
    Response.BufferOutput = true;

    // ...
 
    // Create integer variables.
    int height = 100;
    int width = 200;

    // ...

    // Create a bitmap and use it to create a
    // Graphics object.
    Bitmap bmp = new Bitmap(
        width, height, PixelFormat.Format24bppRgb);
    Graphics g = Graphics.FromImage(bmp);

    // ...

    g.Clear(Color.LightGray);
    
    // ...
    
    // Save the bitmap to the response stream and
    // convert it to JPEG format.
    bmp.Save(Response.OutputStream, ImageFormat.Jpeg);

    // Release memory used by the Graphics object
    // and the bitmap.
    g.Dispose();
    bmp.Dispose();

    // Send the output to the client.
    Response.Flush();
}

Were I to use this code in a web page, I might change it to look a little more like this:

private void Page_Load(object sender, EventArgs e)
{
    // Set the page's content type to JPEG files
    // and clear all response headers.
    Response.ContentType = "image/jpeg";
    Response.Clear();

    // ...
 
    // Create integer variables.
    int height = 100;
    int width = 200;

    // ...

    // Create a bitmap and use it to create a
    // Graphics object.
    using (Bitmap bmp = new Bitmap(
        width, height, PixelFormat.Format24bppRgb))
    {
        using (Graphics g = Graphics.FromImage(bmp))
        {

            // ...

            g.Clear(Color.LightGray);

            // ...

        }

        // Save the bitmap to the response stream and
        // convert it to JPEG format.
        bmp.Save(Response.OutputStream, ImageFormat.Jpeg);
    }
}

Why? Well, I made three basic changes that I believe improve upon the original sample:

  1. I removed the use of HttpResponse.BufferOutput and HttpResponse.Flush. There seems to me to be no need to restrict the flow of data to the client in this case (on the contrary, let's allow it to start rendering the image as soon as possible). Unnecessary code is bad code, so it's gone.
  2. I converted the sample to take advantage of the using statement. You can read more about using in MSDN or wait for my next post in which I'll cover this handy statement in more detail.
  3. I dispose of the Graphics object as soon as it's no longer needed. Again, I'll cover the reasons for this in more detail in my post about the using statement.

So there you have it: Custom image generation with ASP.NET. It's easy to do and it leverages all your existing knowledge about graphics under .NET!

Tags: Technical

ASP.Newbie [Learning ASP.NET for fun and profit]

My new group at Microsoft has me learning ASP.NET. While I'm no stranger to .NET (having coded in C# for a few years now), ASP.NET hasn't been something I've used before now. Sure, I've had a few ideas about simple web pages that seem like they could be handy, but I've never had the time (or web server) to play around with ASP.NET. Well, it looks like my time has come...

The good news is that it's easier than ever to get started with ASP.NET. Start by downloading Visual Web Developer Express, a professional-grade development environment that's free for a year! (It even comes with a small, development-friendly web server that will save you from having to install IIS - nice touch!) Then surf on over to http://asp.net/ where you'll find a wealth of information about ASP.NET. Pay particular attention to the Tutorials which provide a great overview of ASP.NET. And while you're there, maybe you'll want to check out the technology preview of Atlas, an ASP.NET extension that's going to make AJAX applications simple to develop.

Lots of good stuff. No money down. What are you waiting for??

Tags: Technical