Critical Development

Language design, framework development, UI design, robotics and more.

Archive for April, 2008

Phidgets Robotics Programming in C#

Posted by Dan Vanderboom on April 21, 2008

Download the Source

Bridgeware is the word you need to know for rapidly prototyping electronics and robotics.  These components bridge programs running on computers with electronic components such as sensors and motors.  There’s a good video about it by Trossen Robotics on YouTube.  I bought my components from Trossen Robotics online and I recommend them (they have a number of how-to videos).

Day 1

I got my Phidgets components in the mail, and went right to work assembling my pan-tilt camera mount, controlled with two servos from a controller board, and connected to my laptop from there by a USB cable.  A tiny Allen wrench was included, but not a Phillips screwdriver, which you will also need.  You can see the components assembled and connected in the picture below.

Photo of robotics components

The black thing in the back right corner is just a USB hub.  This is connected to (on the right) an 8/8/8 controller (with lots of room for more components to plug in), and then to the mini joystick.  The other USB connector plugs into a 4-servo controller, and then to the dual servos in the pan-tilt assembly in the back left of the picture.

On the software side, using the Phidgets .NET API was very easy.  Just instantiate a Servo object, provide an event handler to run when a device is attached, and from there I was able to set each servo position or read the position back.  The only confusing part was realizing that the servo controller board is represented by the Servo class, and the individual servos plugged into that controller are represented by the ServoServo class.  (Was this API written in Bora Bora, perhaps?)  I would have named them ServoController and Servo, respectively, but I got over it and moved on.
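
Here’s roughly what that looks like in code.  This is a minimal sketch written from memory of the Phidgets .NET API, so treat the exact member names (open, waitForAttachment, servos) as assumptions to verify against the documentation:

using System;
using Phidgets;          // Phidgets .NET library
using Phidgets.Events;

class PanTiltDemo
{
    static void Main()
    {
        Servo controller = new Servo();   // the controller board, despite the name

        // Handler to run when the board is attached over USB.
        controller.Attach += delegate(object sender, AttachEventArgs e)
        {
            Console.WriteLine("Attached: " + e.Device.Name);
        };

        controller.open();                 // start listening for the device (assumed API)
        controller.waitForAttachment();    // block until it shows up (assumed API)

        // Each physical servo is a ServoServo in the controller's collection.
        controller.servos[0].Position = 105;   // pan
        controller.servos[1].Position = 90;    // tilt
    }
}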

What you see visualized in the test application (see screenshot below) is a coordinate graph custom control that displays the position of both servos.  When I hooked up the mini joystick, I made the mistake of mapping the joystick position to the graph, but then realized that my graph would be useless unless I was controlling everything with the joystick.  I wanted to script out servo movements and replay the sequences and still have their position represented in the coordinate graph control, so I made that update (and ever since have been unable to calibrate the system to center itself).

Screenshot of the test application

Hooking up the hardware took only 15 minutes, and most of the application involving the Phidgets API took an hour or two at the most, but I spent the rest of the day creating the custom graph control and getting it to translate coordinate systems correctly.  The joystick reports values from 0 to 1000 along each axis, the servos have a position range of -22 to 232 on one axis (and something close to that on the other), and the graph control was 150 pixels wide.  I made it general enough to work with any coordinate ranges on both axes.
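
The translation itself boils down to linear interpolation.  Here’s the general idea as a small sketch (not the actual control code):

// Map a value from one range to another: joystick 0-1000, servo -22 to 232,
// and a 150-pixel-wide graph all become interchangeable.
static double MapRange(double value, double inMin, double inMax,
                       double outMin, double outMax)
{
    return outMin + (value - inMin) * (outMax - outMin) / (inMax - inMin);
}

// Example: the center of a joystick axis maps to the center of the servo range.
double servoPosition = MapRange(500, 0, 1000, -22, 232);   // = 105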

First Impressions

I have to say that it’s really cool to see physical objects interacting with software that you write yourself.  (It reminds me of fun I had with a hydraulic robotic arm I programmed in high school using an Apple IIc and low-level calls to the parallel port, but this is way more powerful.)  The bridgeware components are easy to connect, and the APIs are a breeze to use.  Building the intelligence between inputs and outputs, however, is the really fun and challenging part.  Even though this initial experiment was simple, I can already tell that coordinating a much more complicated set of inputs and outputs will require careful planning and the use of tools such as Microsoft Robotics Studio, which includes the Concurrency & Coordination Runtime and the Decentralized Software Services Protocol.

Now that I’ve gotten my feet wet and have some confidence that I can build, connect, and interface with these components (and have a box full of other goodies like 3-axis accelerometers, light and temperature sensors, sliders, buttons, and switches), I have a bunch of ideas for at least one cool summer project that I hope to announce in the next month or two.

Posted in Custom Controls, Microsoft Robotics Studio, My Software | 10 Comments »

Introduction to Port-Based Asynchronous Messaging

Posted by Dan Vanderboom on April 21, 2008

…using terminology and examples from the Concurrency & Coordination Runtime in C#


Port-Based Messaging

Port-based messaging has been in practice for a long time.  In the realm of low-level electronics, components have always been operating in parallel, hardware interface ports are designed around standards, and messages are posted to those ports, queuing up somewhere until the receiver is ready to process them.  This pattern has worked extremely well in multiple domains for a long time, and its key characteristic is the decoupling of data flow from control flow.

In sequential programs, one statement runs after another, and the behavior is predictable and repeatable each time the program runs.  Concurrency is difficult because you have to consider all possible interleavings of multiple simultaneous tasks, so the overall behavior is nondeterministic.  Depending on the relative timings of concurrent tasks, you could get different results each time if you’re not careful to set the appropriate locks on shared resources.  Port-based messaging architectures isolate islands of state across different execution contexts, and connect them with safe, atomic messages delivered through pre-defined ports.

Basic port-based asynchronous messaging

The posting of a message to a port, as shown in Figure 1, is followed by some handler method that receives and processes messages.  What’s not evident in the diagram, however, is that while data flows into the port, the posting itself is a non-blocking call.  The sender continues on doing other work, taking the time only to queue up a message somewhere.

Queuing is important because, even with large thread pools, we can’t guarantee that a receiver will be listening at the very moment a message arrives.  Letting messages queue up means the receiver doesn’t have to block on a thread to wait.  Instead, the data waits and the thread checks messages on the port when it can.

Port showing message queue

What is a Port?

So what exactly is a port?  A port is a communication end point, but not in the sense of “a web service on a physical server”.  Think much more fine-grained than that, even more fine-grained than methods.  With sequential programming, we commonly use try-catch blocks and handle both the exceptional and non-exceptional results of operations within a single method.  In port-based programming, you post a message to a port, which results in some handler method running on the receiving end, and different results can be sent back on different callback ports depending on the type of message.  Instead of calling a method that returns back to you when it ends, port-based programming is about always moving forward, and unwinding a call stack has very little meaning here.
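
To make this concrete, here’s a minimal CCR sketch: create a dispatcher and queue, register a receiver on a port, and post a message.  I’m recalling the CCR types (Dispatcher, DispatcherQueue, Port<T>, Arbiter) from memory, so consider the exact signatures approximate:

using System;
using Microsoft.Ccr.Core;

class PortDemo
{
    static void Main()
    {
        // A dispatcher with one thread per CPU, and a queue that feeds tasks to it.
        Dispatcher dispatcher = new Dispatcher(0, "demo pool");
        DispatcherQueue taskQueue = new DispatcherQueue("demo queue", dispatcher);

        Port<string> port = new Port<string>();

        // Register a persistent receiver: the handler runs once for each message posted.
        Arbiter.Activate(taskQueue,
            Arbiter.Receive(true, port, message => Console.WriteLine("Received: " + message)));

        port.Post("hello");   // non-blocking; the sender continues immediately
        Console.ReadLine();
    }
}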

Sequence diagram of port-based messaging

We can see in the sequence diagram above (Figure 3) a collection of services that communicate with and depend on each other.  Starting from the top, the left-most service posts a message to port 1, and then goes on to do other work or surrenders its thread back to the dispatcher for other tasks that are waiting to run.  A registered method on port 1 runs, and the logic there needs another service to complete its task, so it posts a message on port 2, and also continues processing without waiting.  The path of solid blue arrow lines traces the message path for normal execution.  If anything goes wrong, an exception can be posted to a different callback port, shown with a red outline in the diagram.

This diagram shows one possible composition of services and data flow.  Port sets, which are simply collections of related ports, are shown as callback receivers in pairs, but they can consist of any number of ports with any mixture of message types, depending on the needs of the system being coordinated.  In this example, if anything goes wrong in the handler methods at ports 2, 5, or 6, an exception message will be routed to port 6, where another handler method can compensate for or report on the error.  Also note that while during startup this system may have to process data at port 4 before the logic at ports 5, 7, and 8 can run, once it gets going there could be activity at many ports concurrently (not just one port per service).

Arbiters, Dispatchers, and DispatcherQueues

Now it’s time to peel away some of the layers of simplification presented so far.  (It may help to have a beer or glass of wine at this point.)

An arbiter is a rule (or set of rules) about when and how to process messages for a specific port (or set of ports).  (It is helpful to think of an arbiter as a data-flow or message-flow coordinator.)  Should messages be pulled off the queue as soon as they arrive?  Should the software wait until 5 messages have arrived before processing them all as a group?  Should messages be checked according to a timer firing every 20 seconds?  Should logic be run only when two ports have messages waiting (a join)?  What logic should be run when one of these conditions occurs?  Can method handlers on three specific ports run concurrently until a message arrives on a fourth port, whose handler must run exclusively, and when done the other three can run again (an interleave)?  These are just a few of the many coordination patterns that can be expressed with different types of arbiters (and hierarchically nested arbiters, which are ingenious).

Arbiters, Dispatchers, and DispatcherQueues

Figure 4 illustrates that an arbiter is associated with a port to monitor and a method to execute under the right conditions.  The logic of the arbiter, depending on its type and configuration, determines whether to handle the message.  It gets its thread from a thread dispatcher, which contains a thread pool.  (Not the same as System.Threading.ThreadPool, though, as there can only be one of those per process.)

The next diagram (Figure 5) could represent a join coordination.  An arbiter waits for messages on two ports, and depending on how it’s defined, it may process messages from one port repeatedly, but as soon as it receives a message on the second port (it may be an exception port, for example), the whole arbiter might tear itself down so that no more handling on those ports will occur.  As you are probably starting to see, composition and attachment of arbiters are key to controlling specific patterns of coordination in arbitrarily powerful and flexible ways.

Multiple DispatcherQueues and complex Arbiters.
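
To give a flavor of how these coordination patterns are expressed, here’s a sketch of a join and an interleave using CCR arbiters.  This is again from memory, so the exact method names and signatures are assumptions:

using System;
using Microsoft.Ccr.Core;

class ArbiterSketches
{
    static void Demo(DispatcherQueue taskQueue)
    {
        // A join: run the handler only when a message is waiting on BOTH ports.
        Port<int> left = new Port<int>();
        Port<int> right = new Port<int>();
        Arbiter.Activate(taskQueue,
            Arbiter.JoinedReceive(false, left, right,
                (int a, int b) => Console.WriteLine(a + b)));

        // An interleave: the concurrent group's handlers run in parallel until a
        // message arrives on stopPort, whose handler runs exclusively and then
        // tears the whole group down so no further handling occurs on these ports.
        Port<string> dataPort = new Port<string>();
        Port<string> stopPort = new Port<string>();
        Arbiter.Activate(taskQueue, Arbiter.Interleave(
            new TeardownReceiverGroup(
                Arbiter.Receive<string>(false, stopPort, s => Console.WriteLine("stopping"))),
            new ExclusiveReceiverGroup(),
            new ConcurrentReceiverGroup(
                Arbiter.Receive<string>(true, dataPort, d => Console.WriteLine(d)))));
    }
}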

In the Concurrency & Coordination Runtime, we can attach and detach these arbiters at runtime; we don’t have to define them statically at compile time.  There has been some criticism of this approach because dynamic arbitration rules are much more difficult to verify formally with analysis, and are therefore difficult to optimize compilation and thread management for, but the advantages of this flexibility are enormous, and the performance (see this paper by Chrysanthakopoulos and Singh) has been very impressive compared to conventional multithreading approaches.  Ultimately, it’s not about whether we can guarantee 100% that nothing will go wrong using only the mathematical models currently in our repertoire, but whether we can be productive with these techniques to release software that meets acceptable quality standards across a broad range of application domains.

I don’t think we’re going to find a better set of strategies to work with anytime soon, and when we’ve pushed this technology hard enough, the tactics will be fine-tuned and we can start absorbing some of these coordination concepts into languages themselves (without sacrificing the dynamism that a library of composable parts provides).  People are going to attempt concurrent programming whether it’s safe or not, and using a library such as the CCR significantly reduces the risk compared to ad hoc multithreading code.

Cooperative Multitasking

When mainstream operating systems like Windows took their first steps to support multi-tasking, cooperative versus preemptive multi-tasking was a common topic.  The idea of an operating system depending on applications to surrender control in a well-behaved way was generally and rightfully considered a bad idea.  Any kind of error or instability in software could easily bring down the entire operating system, and forcing a quickly growing community of software vendors to share nicely wasn’t a realistic option.  Being preemptive meant that the OS could forcefully stop an application from running after giving it a small, measured slice of time, and then switch the thread to a new context where another application could run for another time slice.  Regardless of how poorly applications ran, as benevolent dictator, the OS could ensure a fair scheduling of processor time.

The solution encapsulated in the Concurrency & Coordination Runtime is, on the other hand, a cooperative multi-tasking strategy.  However, because it operates within the local scope of an individual process, and is isolated from other processes in the same OS, its risk of destabilizing the system is nonexistent.  This deep level of cooperation, in fact, is what gives the CCR its great performance.  When used correctly, which George Chrysanthakopoulos (in this video) and his colleagues have brilliantly put within our reach in the CCR library, threads don’t sit around waiting on some resource or for long-running operations to complete; instead, control is freely surrendered back to the thread pool, where it is quickly assigned to a new task.

Finally, by surrendering threads freely instead of holding onto them, a continuous flow of threads through the different activities of the system is maintained, and there is therefore always an abundance of them to handle new messages waiting on ports.  Existing threads are not wasted.  As the Tao Te Ching says:

If you want to accord with the Tao,
just do your job, then let go.

Control & Data Flow: Sequential vs. Concurrent

In sequential programs, stacks are used to unwind method calls and provide return values (return messages), and threads follow the data flow; whereas in port-based programming, threads are managed by one or more thread dispatchers that are capable of maximizing the use of each thread by making it available in a pool and sharing it with many other (potentially unrelated) tasks.  Data flows orthogonally and according to a different coordination strategy than control flow.  This task-thread agnosticism (the opposite of thread affinity) is similar to the statelessness of a web server such as IIS; one or more threads from a large pool are injected into the tasks of processing, rendering, and serving up huge numbers of web pages, after which those threads are recycled back into the thread pool for execution of other tasks, making for a highly concurrent and scalable service platform.

So herein lies the trick: in order to split this coupling between data flow and control flow, a different means is needed to compose the two coordination strategies.  In C# and other popular imperative programming languages, methods implicitly pass thread control along with data arguments (the message), and the use of the stack for method calls asserts constraints on control flow, so making the CCR work involves some interesting patterns.

That’s why port-based programming is hard to get your head around.  It’s such a large shift from common sequential logic and it requires some additional planning (and good visualizations).  It’s obviously important to have a good set of tools for expressing that coordination, a simple set of conceptual primitives that allows us to compose arbitrarily-complex coordination patterns.  These primitives, including Message, Port, PortSet, Dispatcher (thread pool), and others provide the building blocks that we can use to define these relationships.  Once we define what we want to happen with these building blocks, the CCR can make it all happen.
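
One of those interesting patterns is the CCR’s iterator-based handler: write the handler as an iterator over ITask, and each yield return surrenders the thread back to the dispatcher until the awaited message arrives.  The code reads sequentially, but no thread ever blocks.  A sketch from memory of the CCR API:

using System;
using System.Collections.Generic;
using Microsoft.Ccr.Core;

class IteratorSketch
{
    // Each 'yield return' hands the thread back to the dispatcher until the
    // awaited message arrives; no thread sits blocked while waiting.
    static IEnumerator<ITask> ProcessOrder(Port<string> orderPort, Port<decimal> pricePort)
    {
        string order = null;
        yield return Arbiter.Receive(false, orderPort, o => order = o);

        decimal price = 0;
        yield return Arbiter.Receive(false, pricePort, p => price = p);

        Console.WriteLine(order + " costs " + price);
    }

    // Activated with something like:
    // Arbiter.Activate(taskQueue, Arbiter.FromIteratorHandler(() => ProcessOrder(orders, prices)));
}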

This level of coordination is a level beyond the strategies used by most concurrent applications and frameworks in software today, primarily because there hasn’t been a pressing need for it until recently–processors had been growing phenomenally in speed for many years.  Now, however, we’re told that processor speed has plateaued, and that we now have to scale out to scale up, spreading the processing load across multiple cores.  We are very fortunate that the work being done by researchers in fields like robotics can be applied in other service-oriented architectures; this is a testament to the widespread use of the .NET Framework and the fantastic efforts of some very bright individuals.

Where to Find Microsoft Robotics Studio

Robotics Studio is a free download and can be found here, and while it is full of other good stuff, it’s worth checking out for the Concurrency and Coordination Runtime alone.

Posted in Compact Framework, Concurrency, Data Structures, Design Patterns, Microsoft Robotics Studio, Object Oriented Design, Service Oriented Architecture, Software Architecture | Leave a Comment »

Visual Studio Build Bug in CF Controls Project using Generics

Posted by Dan Vanderboom on April 21, 2008

So there I was, minding my own business, creating some custom controls in a reusable framework project.  I needed to apply some design-time attributes, and because I’m working with Compact Framework, I attempted to add my DesignTimeAttributes.xtma file to accomplish this, which I’ve done many times before.

Bang!  I got an error message: “genasm.exe has encountered a problem and needs to close.  We are sorry for the inconvenience.”  In the task list was this error: “Process is terminated due to StackOverflowException.”  Stack overflow?!

I checked (and checked again) to make sure that my .xtma file was properly formatted, which it was, and I simplified it to include only a single DesktopCompatible attribute; being a Boolean, I figured it was hard to mess that up (compared to something trickier like a Designer attribute).  But alas, I was stumped, and for a week or two I gave up and went without the attributes in that project, putting the majority of my controls in a separate project.

Today I decided I was on a mission to isolate the problem and get to the bottom of it.  While I don’t know the root cause (I’ve submitted a bug report to Microsoft), I can reproduce the problem and describe the pitfall so some poor soul can find it while Googling for help.

It happens in Compact Framework projects.  To reproduce it, create a CF 3.5 project (I chose class library), and add a file with the following code:

public class atree<T> where T : atree<T>
{
} 

public class btree : atree<btree>
{
}


Note that if these classes are changed from public to internal, the problem disappears.

Then add a DesignTimeAttributes.xtma file, change nothing in it, and try to build.  Mine looks like this:

<?xml version="1.0" encoding="utf-16"?>
<Classes xmlns="http://schemas.microsoft.com/VisualStudio/2004/03/SmartDevices/XMTA.xsd">
</Classes>


That’s it, and I get the nasty errors described above.  I’m really curious to know what genasm.exe is doing, how it’s using some recursive algorithm to examine somewhat self-referencing generic definitions (I admit that the generics constraint in the code above is a bit on the odd side).

Posted in Compact Framework, Custom Controls, Visual Studio | 7 Comments »

.NET Micro Framework vs. Microsoft Robotics Studio

Posted by Dan Vanderboom on April 12, 2008

The .NET Micro Framework is a compatible subset of the full .NET Framework, similar to how the Compact Framework is a subset.  But the Micro Framework can act as its own real-time operating system (RTOS) instead of loading the tiny CLR in a host operating system, and works with a variety of hardware devices, including those that don’t have their own memory management unit (MMU).  The gist is that embedded applications as well as low-level drivers can now be written in managed code (a huge leap forward), and take advantage of garbage collection, strong typing, and other nice features that are typically absent in embedded systems programming.  This supposedly reduces errors, boosts productivity, and reduces development costs.

I’ve been watching videos like this, reading about the Micro Framework for the past few months, have pre-ordered this book from Amazon, and have been itching to get my hands on a hardware development kit to start experimenting.  The only problem is that the interfaces for these embedded devices are so foreign to me (I2C, GPIO, etc.), and I’m not exactly sure what my options are for assembling components.  What kinds of sensors are available?  Do I have to solder these pieces together or is there a nice modular plug system similar to the SATA, IDE, and PCI connectors on modern computers (but on a smaller scale)?  Do I have to write my own device drivers for anything I plug in, or is there some abstraction layer for certain classes of inputs and outputs?

The other issue that makes me hesitate is the thought of programming without language conveniences like generics on a more resource-constrained device than the Windows Mobile devices I’m already frustrated with (and have been for the past four years).

I’m not saying that I’m not excited about managed code on tiny embedded devices, and I’m not saying I won’t start playing with (and blogging about) this important technology sometime in the next few months, but I’ve discovered another platform for embedded device development with the ability to interact directly with the physical world that offers a much lower technical barrier for entry, so that’s where I’m starting.

What I’m referring to is Microsoft Robotics Studio, which is by all accounts a phenomenal platform for more than just robotics, and seems to overlap somewhat with the Micro Framework’s reach (at the intersection of the computer-digital and physical-analog worlds).  The two critical components of this architecture are the Decentralized Software Services Protocol (DSSP, originally named WSAP) and the Concurrency & Coordination Runtime (CCR).  Make sure you watch this video on the CCR, and note how George emphasizes that “it’s not about concurrency, it’s about coordination”, which I found especially enlightening.  These are highly robust and general purpose libraries, and both of them will run on Compact Framework!  (I was very impressed when I read this.)

Without having studied them both in depth yet, the CCR seems to encroach on the Task Parallel Library (TPL), at least conceptually, offering a much more complete system of thread manipulation and synchronization than the original System.Threading types, all for a vertical industry that has much greater demands of a concurrency framework, which must, for example, deliver massive concurrency and complex coordination rules in a highly-distributed system, all the while handling full and partial failures gracefully.  Some of the patterns and idioms for making concurrency and synchronization operations easy to write and understand are masterfully designed by Henrik Nielsen and George Chrysanthakopoulos (and I thought Vanderboom was a long name!).

The fact that the CCR and DSSP were developed together is significant.  Tasks running in parallel need to be manipulated, coordinated, and tracked regardless of whether they’re running on four processors in one computer or on 256 processors spread across a hundred devices, and distributed services need a dependable way of parallelizing their efforts on individual machines.  This is a synergistic relationship, each subsystem enhancing the elegance and usefulness of the other.  So why not use the CCR in every application instead of developing the TPL?  Are these two teams actively collaborating, or have they been building the two frameworks in isolation?

I also have to make special mention of the decentralized software services, which as a protocol sits on top of HTTP in a RESTful implementation.  It supports composition of services by creating partnerships with other services, defining securely which partnerships are allowed, offering identification and discovery of services, and much more.  In this assembly then, we have a robust library for building distributed, composable, decentralized services-oriented systems with event publication and subscription, with services that can be hosted on desktop and mobile computers alike.  Wow, call me a geek, but that’s really frickin’ cool!  Some of the ideas I have for future projects require this kind of architectural platform, and I’ve been casually designing bits and pieces of such a system (and got so far as getting the communication subsystem working in CF).  Now, however, I might be turning my attention to seeing if I can’t use the Robotics Studio APIs instead.

One thing to be clear about is that Robotics Studio doesn’t support Micro Framework, at least not yet.  I wonder exactly how much value there would be to adding this support.  With hardware options such as this 2.6″ x 2.3″ motherboard with a 1.6 GHz Intel Atom processor, video support, and up to 1 GB of RAM, capable of running Windows XP or XP Embedded, and priced at $95 (or a 1.1 GHz version for $45), we’re already at a small enough form factor and scale for virtually any autonomous or semi-autonomous robotics (or general embedded) applications.  There’s also the possibility for one or more Micro Framework devices to exchange messages with a hardware module that is supported in Robotics Studio, such as the tiny board just mentioned.

Where do we go next?  For me, I just couldn’t resist jumping in with both feet, so I ordered about $500 in robotics gear from Phidgets: USB interface boards, light and temperature sensors, a 3-axis accelerometer (think Wii controller), motors and servos, an RFID reader with tags, LEDs, buttons, knobs, sliders, switches, and more.  I plan to use some of this for projects at CarSpot, and the rest simply to be creative with and have fun.  I also pre-ordered this book, but the publication date is set for June 10th, so by the time I get it, I may have already figured most of it out.

Posted in Compact Framework, Concurrency, Distributed Architecture, Micro Framework, Microsoft Robotics Studio, Service Oriented Architecture, Software Architecture | 2 Comments »

Using Extension Methods to Manipulate Control Bitmaps in Compact Framework

Posted by Dan Vanderboom on April 11, 2008

I’m loving extension methods.  All of the methods that I wish BCL classes had, I can now add.  While I consider it highly unfortunate that we can’t yet add extension properties, events, or static members of any kind, still it’s a great amount of power in terms of making functionality discoverable in ways not possible before.

During the implementation of my Compact Framework application’s MVC framework, I wanted to support displaying views modally.  However, because I use a screen stack of UserControls all hosted in a single master Form object, I lose out on this built-in functionality and found myself needing to create this behavior myself.  One of the difficulties is displaying a view that may not cover every portion of other views beneath it; if the user clicks on one of the views “underneath”, that view gets activated, and if the press lands on a control, that control will handle the event (such as Button.Click).

My solution to the problem is simple.  Take a snapshot of the master form and everything on it, create a PictureBox control that covers the whole form and bring it to front, and set its image to the snapshot bitmap.  Doing this provides the illusion that the user is still looking at the same form full of controls, and yet if they touch any part of the screen, they’ll be touching a PictureBox that just ignores them.  The application is then free to open a new view UserControl on top of that.  When the window is finally closed, the MVC infrastructure code tears down the PictureBox, and the real interface once again becomes available for interaction.
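
In code, the whole trick amounts to something like the following sketch (masterForm is a hypothetical stand-in for the master Form; GetSnapshot and Darken are the extension methods shown below):

// Freeze the current UI behind a click-swallowing PictureBox.
PictureBox overlay = new PictureBox();
overlay.Bounds = masterForm.ClientRectangle;
overlay.Image = masterForm.GetSnapshot().Darken(40);   // frozen, de-emphasized background
masterForm.Controls.Add(overlay);
overlay.BringToFront();

// ...display the modal view UserControl on top of the overlay...

// When the modal view closes, tear the overlay down and restore interactivity.
masterForm.Controls.Remove(overlay);
overlay.Dispose();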

Screenshots before and after screen capture and darkening

In addition, I wanted the ability to emphasize the modal view, so you can see from the picture above that I decided to accomplish this by de-emphasizing the background bitmap.  By darkening the snapshot, the pop-up modal view really does seem to pop out.  The only problem with bitmap manipulation using the Compact Framework library is that it’s extremely slow, but I get around this by using some unsafe code to manipulate the memory region where the bitmap image gets mapped.  (If you’re unfamiliar with the unsafe keyword, don’t worry: this code actually is safe to use.)

Here is the full source code for taking a snapshot of a form (or any control), as well as adjusting the brightness.

using System;
using System.Linq;
using System.Collections.Generic;
using System.Text;
using System.Drawing;
using System.Drawing.Imaging;
using System.Windows.Forms;
using System.Runtime.InteropServices;

public static class ControlBitmapExtensions
{
    [DllImport("coredll.dll")]
    private static extern bool BitBlt(IntPtr hdc, int nXDest, int nYDest, int nWidth, int nHeight,
        IntPtr hdcSrc, int nXSrc, int nYSrc, int dwRop);

    public struct PixelData
    {
        public byte Blue;
        public byte Green;
        public byte Red;
    }

    public static Bitmap GetSnapshot(this Control Control)
    {
        Rectangle rect = new Rectangle(0, 0, Control.Width, Control.Height - 1);
        Bitmap Snapshot = new Bitmap(rect.Width, rect.Height);

        using (Graphics g = Control.CreateGraphics())
        using (Graphics gSnapshot = Graphics.FromImage(Snapshot))
        {
            IntPtr hdcDest = gSnapshot.GetHdc();
            IntPtr hdcSrc = g.GetHdc();

            // 0xCC0020 is the SRCCOPY raster operation: copy the source rectangle as-is.
            BitBlt(hdcDest, 0, 0, rect.Width, rect.Height, hdcSrc, rect.Left, rect.Top, 0xCC0020);

            // Release the device contexts so GDI handles aren't leaked.
            g.ReleaseHdc(hdcSrc);
            gSnapshot.ReleaseHdc(hdcDest);
        }

        return Snapshot;
    }

    public static unsafe Bitmap AdjustBrightness(this Bitmap Bitmap, decimal Percent)
    {
        Percent /= 100;
        Bitmap Snapshot = (Bitmap)Bitmap.Clone();
        Rectangle rect = new Rectangle(0, 0, Bitmap.Width, Bitmap.Height);

        BitmapData BitmapBase = Snapshot.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
        byte* BitmapBaseByte = (byte*)BitmapBase.Scan0.ToPointer();

        // the number of bytes in each row of a bitmap is allocated (internally) to be equally divisible by 4
        int RowByteWidth = rect.Width * 3;
        if (RowByteWidth % 4 != 0)
        {
            RowByteWidth += (4 - (RowByteWidth % 4));
        }

        for (int i = 0; i < RowByteWidth * rect.Height; i += 3)
        {
            PixelData* p = (PixelData*)(BitmapBaseByte + i);

            p->Red = (byte)Math.Round(Math.Min(p->Red * Percent, (decimal)255));
            p->Green = (byte)Math.Round(Math.Min(p->Green * Percent, (decimal)255));
            p->Blue = (byte)Math.Round(Math.Min(p->Blue * Percent, (decimal)255));
        }

        Snapshot.UnlockBits(BitmapBase);
        return Snapshot;
    }

    public static Bitmap Brighten(this Bitmap Bitmap, decimal PercentChange)
    {
        return AdjustBrightness(Bitmap, 100 + PercentChange);
    }

    public static Bitmap Darken(this Bitmap Bitmap, decimal PercentChange)
    {
        return AdjustBrightness(Bitmap, 100 - PercentChange);
    }
}


Because Control is extended by GetSnapshot, and Bitmap is extended by AdjustBrightness, Brighten, and Darken, I can write very clear and simple code like this on the consuming side:

Bitmap bitmap = MyForm.GetSnapshot().Darken(40);

…and voila!  I have a snapshot.  Note that because Darken extends Bitmap, it can now be used with any Bitmap.  As we read this code from left to right, we’re observing a pipeline of transformations.  MyForm is the source data, GetSnapshot is the first step, Darken is the next change, and with more extension methods on Bitmap we could continue to process this in a way that is very natural to think about and construct.

I do have to admit that I cheated a little, though.  Even with the direct memory manipulation with pointers, it still didn’t perform very well on the Symbol and DAP devices I tested on.  So instead of adjusting the brightness of every pixel, I only darken every third pixel.  They’re close enough together that you can’t really tell the difference; however, the closer to 100 percent you darken or brighten an image, the more apparent the trick will be, since two thirds of the pixels won’t be participating.  So it’s good for subtle effects, but I wouldn’t count on it for all scenarios.

This every-third-pixel dirty trick happens in the for loop, where you see i += 3, so go ahead and experiment with it.  Just be careful to keep the increment a multiple of 3 (the byte width of one 24bpp pixel); any other step drifts across the color channels, and you’ll end up with stripes!

Posted in Algorithms, Compact Framework, Object Oriented Design, Problem Modeling, User Interface Design, Windows Forms, Windows Mobile | 5 Comments »

The Visitor Design Pattern in C# 3.0

Posted by Dan Vanderboom on April 9, 2008

I use many common design patterns on a regular basis–composite, MVC/MVP, adapter, strategy, factory, chain of responsibility, etc.–but I’ve never come across a situation where I felt Visitor in the classic definition (GoF) made sense.  I had read about it, but the necessity of defining the interfaces for not only the Visitor classes (that’s not so bad) but also the elements being visited, makes it seem overly complex and therefore tainted for me.  What if you don’t own the source code to the elements, and don’t want to inherit from existing types (if they’re not sealed) just to implement an IVisitedElement interface?

I wanted a less intrusive way of visiting any set of objects, without making any special demands on or assumptions about their types, and I suspected that new features in C# 3.0 would provide a way to make it elegant and terse.  What’s needed in essence is to visit each object in a collection with a common function or object, and to perform some action, transform the object in some way, and/or calculate some end result (usually an aggregation).  Can we do that without having to implement special interfaces or disrupting the code model in place?

For the sake of completeness and to serve as a baseline for other implementations, I’ll show you what the classic Visitor pattern looks like.

UML for Visitor Design Pattern

Here is the code that corresponds to this diagram.

using System;
using System.Collections.Generic;

interface IEmployeeVisitor
{
    void Visit(Employee employee);
    void Visit(Manager manager);
}

interface IVisitorElement
{
    void Accept(IEmployeeVisitor Visitor);
}

class EmployeeCollection : List<Employee>
{
    public void Accept(IEmployeeVisitor Visitor)
    {
        foreach (Employee employee in this)
        {
            employee.Accept(Visitor);
        }
    }
}

class Employee : IVisitorElement
{
    public decimal Income;
    public Employee(decimal income) { Income = income; }

    public virtual void Accept(IEmployeeVisitor Visitor)
    {
        Visitor.Visit(this);
    }
}

class Manager : Employee, IVisitorElement
{
    public decimal Bonus;
    public Manager(decimal income, decimal bonus) : base(income) { Bonus = bonus; }

    public override void Accept(IEmployeeVisitor Visitor)
    {
        Visitor.Visit(this);
    }
}

class SumIncomeVisitor : IEmployeeVisitor
{
    public decimal TotalIncome = 0;
    public void Visit(Employee employee) { TotalIncome += employee.Income; }
    public void Visit(Manager manager) { TotalIncome += manager.Income + manager.Bonus; }
}

class Program
{
    static void Main()
    {
        EmployeeCollection employees = new EmployeeCollection();
        employees.Add(new Employee(100000));
        employees.Add(new Employee(125000));
        employees.Add(new Manager(210000, 35000));

        SumIncomeVisitor visitor = new SumIncomeVisitor();
        employees.Accept(visitor);
        decimal result = visitor.TotalIncome;

        Console.WriteLine(result);
        Console.ReadLine();
    }
}

The first major disadvantage is the amount of plumbing that must be in place, and the two-way dependencies created, between visitors and the objects to be visited.  Though specific types aren’t hard-coded, a conceptual two-way dependency implied by the interfaces’ knowledge of each other requires forethought and special accommodations on both sides from the beginning.  Management of dependencies is always important; how well we do so determines how applications become more complex as they grow.  So whenever possible I ensure that dependencies run in one direction.  This creates natural segmentation and layering, and ensures that components can be pulled apart from each other rather than congealing into something like a tangled ball of Christmas tree lights.

Instead of passing a collection of Employee objects to some calculating Visitor, we tell the Employee to accept a Visitor object, which then just turns around and calls the Visitor.  That by itself seems rather indirect and convoluted.  Visiting a single element isn’t very exciting.  Nothing very interesting happens until you have a whole bunch of things to work with.  So in order to visit a collection, a custom collection type is defined with an Accept method that in turn calls Accept on each Employee.  This custom collection is yet another type we’re required to write when otherwise a List<Employee> or something similar would suffice.  And what happens when your data structure is something other than a basic list?  What if you have a tree of objects you’d like to visit?  Would you then have to implement a tree data structure that is visitor friendly?  How many aggregation types do you want to reinvent with visitation specifically in mind?

The rest of it isn’t so bad.  The SumIncomeVisitor class contains both the processing logic and state for any calculations needed by that Visitor.  One of these is instantiated (another extra step), passed to the collection’s Accept method, and therefore executed against all employees in the collection.  After all objects are visited, the SumIncomeVisitor object contains the final result.  This all works, but seems pretty clunky.  Perhaps the pattern is more interesting if IVisitorElement classes provide more sophisticated Accept implementations.  I can’t think of any examples off-hand, but I’ll be thinking about and looking for them.

The code above is just shy of 80 lines long.  Can we accomplish exactly the same goal with less code, more simply and clearly?

using System;
using System.Collections.Generic;

class Employee
{
    public decimal Income;
    public Employee(decimal income) { Income = income; }
}

class Manager : Employee
{
    public decimal Bonus;
    public Manager(decimal income, decimal bonus) : base(income) { Bonus = bonus; }
}

class Program
{
    static void Main()
    {
        List<Employee> employees = new List<Employee>();
        employees.Add(new Employee(100000));
        employees.Add(new Employee(125000));
        employees.Add(new Manager(210000, 35000));

        decimal TotalIncome = 0;
        employees.ForEach(e => SumEmployeeIncome(e, ref TotalIncome));

        Console.WriteLine(TotalIncome);
        Console.ReadLine();
    }

    static void SumEmployeeIncome(Employee employee, ref decimal TotalIncome)
    {
        TotalIncome += employee.Income;

        if (employee is Manager)
            TotalIncome += (employee as Manager).Bonus;
    }
}

In this code, you’ll notice a few simplifications:
  1. There are no IVisitorElement or IEmployeeVisitor interfaces.
  2. Employee and Manager types exist without any knowledge of or explicit support for being visited.
  3. No custom collection is required, so a basic List<Employee> is used.

In order to make this work, we need the same basic things that we needed before: visiting/processing logic, and a place to store state for that processing.  In the second approach, the state is stored in the TotalIncome variable within the Main method, where the calculation is being requested, and the processing logic is kept in another method of the same class.  I could have declared TotalIncome as a class variable, but I’d really like to restrict any “scratch pad” data used in a calculation to as narrow a scope as possible.  In the classic Visitor pattern, the data is encapsulated with the processing logic.  By calling a method with a secondary ref parameter, I can declare TotalIncome within the Main method and avoid cluttering the class definition with data that’s only relevant to one method’s logic.  This is a lighter-weight, more in-line approach than defining separate types and having to instantiate a Visitor object (Visitor Object vs. Visitor Method).

The actual mechanism for visiting every object is the ForEach method.  The List<T> class includes a very useful ForEach method that allows you to pass in an Action<T> delegate to execute a method for each element.  ForEach can’t take a method with our second ref parameter; it can only accept an Action<T> delegate.  The lambda expression e => SumEmployeeIncome(e, ref TotalIncome) creates an anonymous method that does in fact match Action<T>.  The parameter e is of type Employee because the employees collection is List<Employee>, which means the Employee type is inferred for Action<Employee>.  The anonymous method represented by the lambda then calls SumEmployeeIncome, passing the Employee e object through as well as the TotalIncome state to be transformed on successive calls for each Employee.

Finally, SumEmployeeIncome acts as the Visitor.  Different logic can be performed for different types where inheritance is involved, as it is with this sample, by testing for types using the is operator.  This is in contrast to the dual Visit methods taking Employee and Manager types respectively.  Actually, the classic Visitor pattern could have used the same approach in this regard.
 
Where more complex state is needed for processing, a new Visitor-state type could be created to support the processing, and by using an object for this purpose, it wouldn’t be necessary to declare or pass the parameter by reference.  Another option would simply be to declare multiple ref parameters.
 

The List.ForEach method is awfully nice, but what if you’re working with another data structure, such as an array, an ArrayList, a LinkedList<T>, or even a Tree<T>?  Defining a simple extension method can provide this tool regardless of what kind of collection you’re working with.

using System;
using System.Collections.Generic;

// Extension methods must live in a non-generic static class.
public static class EnumerableExtensions
{
    public static void ForEach<T>(this IEnumerable<T> collection, Action<T> action)
    {
        foreach (T item in collection)
        {
            if (action != null)
                action(item);
        }
    }
}

That’s better.  Now if only that extension method had been defined in the first place, the specific one in the List<T> class wouldn’t be necessary.

There’s an even more succinct way to accomplish the specific example above, using the Sum extension method on IEnumerable<T>.

TotalIncome = employees.Sum(e => (e is Manager) ? (e as Manager).Bonus + e.Income : e.Income);

I don’t mind writing or reading code like this, and as more functional programming constructs are merged into C#, I think it’s important to flex these mental muscles and become familiar with the syntax, but one might argue that this is a little more cryptic than the preceding example.  If the calculation was any more complicated, it would make sense to use a statement lambda with curly braces instead of the shorter expression lambda shown above.  Here’s one way it could be written as a statement lambda:

TotalIncome = employees.Sum(e =>
    {
        decimal result = e.Income;

        if (e is Manager)
            result += (e as Manager).Bonus;

        return result;
    });

You can see that there is more opportunity here to perform other actions and participate in more complex calculations.  This approach is even lighter-weight than the second approach suggested above using a separate named method and external state passed by reference.  The approach you take should depend on the needs and constraints of the situation.  Lighter-weight approaches are good for ad-hoc processing, whereas the heavier approaches make more sense if the visiting logic needs to be reused in other places.

If we don’t need to share state across visitations of objects in a collection, we could simply use extension methods, which is the simplest option of all.  After all, the original intent of the Visitor pattern was to allow us to add functionality to a class without modifying the original element’s type.  According to dofactory.com:

Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates.

Extension methods “add” functionality to existing classes, or at least create a compelling illusion that they do.  Just reference an assembly and import the right namespace to add the operations.  It is possible to share state, but only by doing something like passing in some shared state object to each call.
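
For example, here’s the income calculation from earlier recast as an extension method: no interfaces, no custom collection, and no Visitor object to instantiate.  (TotalCompensation is just a name I’ve picked for illustration.)

public static class EmployeeVisitors
{
    // The "visit" logic lives outside Employee, which needs no special support.
    public static decimal TotalCompensation(this Employee employee)
    {
        decimal result = employee.Income;
        if (employee is Manager)
            result += (employee as Manager).Bonus;
        return result;
    }
}

// Usage: decimal total = employees.Sum(e => e.TotalCompensation());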

If C# had the ability to define a variable that would be static to a defined closure in a method (which it doesn’t), we could use extension methods all the time without any drawbacks.  (I miss the static local variable feature of Turbo Pascal.)

Conclusion

With the use of lambda expressions and extension methods, we’ve been able to cut the amount of code for the Visitor pattern by more than half and found that there was no need to specially prepare the data model classes to support visitation.  While the classic Visitor pattern may have more potential in complex and custom Accept scenarios, in general the need to visit elements of a collection can be better accomplished with the judicious use of available language features in C# than by blindly following classic design patterns without consideration for how relevant they really are.

While I certainly encourage developers to become familiar with common patterns, I also encourage them to think carefully about the code they’re writing, and to ask themselves if it’s as clear and simple as it can be.  As software systems grow in size–sometimes becoming victims of their own success–small inefficiencies and muddied designs can snowball into unmanageability.  Apply a simple solution first, and then add complexity only when necessary.

Doubt and ask why constantly.  Be educated and familiar with the literature, but don’t dogmatically accept everything you read: think for yourself and hone your skills at every opportunity.  While the goals and forces in software tend to remain constant over time, the forms that made sense years ago may become unnecessary with today’s tools.

Posted in Algorithms, Data Structures, Design Patterns, Object Oriented Design, Problem Modeling, Software Architecture | 10 Comments »

VS Macro to Locate Solution Explorer Item – Corrected

Posted by Dan Vanderboom on April 8, 2008

A few weeks ago, I posted an article about a Visual Studio macro (with source code) to locate and select an item in Solution Explorer corresponding to the active document.  I discovered that Solution Explorer loads its UIHierarchy items on demand, so items only become available in the tree structure if their parent node has been expanded.

The key to solving this was a single line of code:

solution.FindProjectItem(DTE.ActiveDocument.FullName).ExpandView()

This forces Solution Explorer to load the necessary items and also expands the tree so that the item becomes visible.  After that, selecting the item becomes possible.

I’ve updated the original article to correct the source code, and from what I can tell from more extensive testing, it seems to work all the time now.

Posted in My Software, Visual Studio, Visual Studio Extensibility | Leave a Comment »

Tree Data Structure Source Code Posted

Posted by Dan Vanderboom on April 3, 2008

[Updated 8/14/2014] The source code for this library can be found here at GitHub. Also check out my blog post announcing it.

For those of you who have been waiting for the source code to the Tree<T> article I wrote, I’m happy to announce that it’s finally available.  I had originally intended to supply some non-trivial examples of its use, but with my super busy schedule at work and otherwise, I’ve had to reduce the scope just to get it out there.  One of my examples is large enough to warrant one or more large articles, so I also didn’t want to just toss them in there without sufficient explanation as to how it all works.

While I work on getting those ready, check out Damon Payne’s article on using a non-binary tree for modeling dependencies among concurrently-running tasks with the Task Parallel Library.  This is a great use of the tree data structure.  It would be interesting to see how that would work for tasks with cross-branch dependencies.  Does the tree become a graph?  Would iteration become a garbage-collection-like traversal?  How would it need to respond to insertion of tasks or modification of dependencies during execution?

My non-binary tree article, which has been updated with a short section just before the conclusion, can be found here.  At the very top of the article is a link to the source code in the form of a Visual Studio 2008 solution.  For those of you with VS2005, you should be able to easily extract the code and create a VS2005 project out of it.  I specifically targeted .NET 2.0 instead of 3.5, and originally tested it against Compact Framework.

I’ll also be doing further development of this library, so if you have any cool feature requests, let me know.

Posted in Algorithms, Concurrency, Data Structures, My Software, Object Oriented Design, Problem Modeling, Software Architecture | 6 Comments »