Critical Development

Language design, framework development, UI design, robotics and more.

Archive for the ‘Concurrency’ Category

Observations on the Evolution of Software Development

Posted by Dan Vanderboom on September 18, 2008

Neoteny in the Growth of Software Flexibility and Power

Neoteny is a biological phenomenon in the development of organisms, observed across multiple generations of a species.  According to Wikipedia, neoteny is “the retention, by adults in a species, of traits previously seen only in juveniles”, and it accounts for many evolutionary shifts, including the human brain’s ability to remain elastic and malleable later in life than the brains of our distant ancestors.

So how does this relate to software?  Software is a great deal like an organic species.  The species emerged (not long ago), incubated in a more or less fragile state for a number of decades, and continues to evolve today.  Each software application or system built is a new member of the species, and over the generations they have become more robust, intelligent, and useful.  We’ve even formed a symbiotic relationship with software.

Consider the fact that software running on computers was at one time compiled to machine language code for a specific processor.  With the invention of platform-independent instruction sets and their associated runtimes performing just-in-time compilation (Java’s JVM and the .NET Framework’s CLR), we’ve delayed the production of machine language code until it’s needed on the target machine.  The compiler produces a slightly more abstract representation of the program logic, and an extra translation step at installation or runtime completes the process and makes the software usable.

With the growing popularity of dynamic languages such as Lisp and Python, and the .NET Framework’s upcoming release of its Dynamic Language Runtime (DLR), we’re taking another step of neoteny.  Instead of a compiler generating instruction byte codes, a “compiler for any dynamic language implemented on top of the DLR has to generate DLR abstract trees, and hand it over to the DLR libraries” (per Wikipedia).  These abstract syntax trees (ASTs), normally an intermediate artifact created deep within the bowels of a traditional compiler (and eventually discarded), are now persisted as compiler output.

Traits previously seen only in juveniles… now retained by adults.  Not too much of a metaphorical stretch!  The question is: how far can we go?  I think the answer depends on the ability of hardware to support the additional “just in time” processing that needs to occur: executing more of the compiler’s tail-end tasks within the execution runtime itself, and giving programming languages greater flexibility and power, until the compilation stages we currently execute at design time almost entirely disappear (to be replaced, perhaps, by new pre-processing tasks).

I remember my Turbo Pascal compiler running on a 33 MHz processor with 1 MB of RAM, and now my cell phone runs at 620 MHz (with a graphics accelerator) and has gigabytes of memory and storage.  And yet with the state of things today, the inclusion of language-specific compilers within the runtime is still quite infeasible.  In the .NET Framework, there are too many potential languages that people might attempt to include in such a runtime: C#, F#, VB, Boo, IronPython, etc.  Trying to cram all of those compilers into a universal runtime that would fit (and perform well) on a cell phone or other mobile device isn’t yet feasible, which is why we have technologies like System.Reflection.Emit (on the full .NET Framework) and Mono.Cecil (which works on Compact Framework as well).  These work at the platform-independent CIL level, and so can interpret and generate programs generically, interact with each other’s components, and so on.  One metaprogramming mechanism can therefore be reused across all .NET languages, and this metalinguistic programming trend is being discussed on the C# and other language design teams.

I’ve just started using Mono.Cecil, chosen because it is cross-platform friendly (and open source).  The API isn’t very intuitive, but because the source is available, and because extension methods can go a long way toward making it more accessible, it’s a great option.  The documentation is sparse, and assembly generation has some performance issues, but it’s a work-in-progress with tremendous potential.  If you’re doing any kind of static analysis or have any need to dynamically generate and consume types and assemblies (to get around language limitations, for example), I’d encourage you to check it out.  A comparison of Mono.Cecil to System.Reflection can be found here.  Another library called LinFu, which performs lots of mind-bending magic and actually uses Mono.Cecil, is also worth exploring.
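To give a flavor of the API, here’s a minimal static-analysis sketch that loads an assembly and walks its types and methods.  Treat it as a sketch against a recent build of Mono.Cecil (the API surface has shifted between versions), and “MyLibrary.dll” is just a placeholder path.

using System;
using Mono.Cecil;

class CecilSketch
{
    static void Main()
    {
        // load the assembly for inspection; none of its code is executed
        AssemblyDefinition assembly = AssemblyDefinition.ReadAssembly("MyLibrary.dll");

        // walk every type in the main module and list its methods
        foreach (TypeDefinition type in assembly.MainModule.Types)
        {
            Console.WriteLine(type.FullName);

            foreach (MethodDefinition method in type.Methods)
                Console.WriteLine("  " + method.Name);
        }
    }
}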

VB10 will supposedly be moving to the DLR to become a truly dynamic language, which, considering VB’s history of support for late binding, makes a lot of sense.  With a dynamic language person on the C# 4.0 team (Jim Hugunin of IronPython), one wonders if C# won’t eventually go the same route, while keeping its strongly-typed feel and IDE feedback mechanisms.  You might laugh at the idea of C# supporting late binding (dynamic lookup), but this is being planned for the language regardless of whether it is classified as static or dynamic.

As the DLR evolves, performance optimizations are being discovered and implemented that may close the gap between pre-compiled and dynamically interpreted languages.  Combine this with manageable concurrent execution, and the advantages we normally attribute to static languages may soon disappear altogether.

The Precipitous Growth of Software System Complexity

We’re truly on the cusp of a precipitous period of growth for software complexity, as an exploding array of devices and diverse platforms around the world connect in an ever-more immersive Internet.  Taking full advantage of parallel and distributed computing environments by solving the challenges of concurrency and coordination, as well as following the trend toward increased integration among software components, is pushing software complexity into new orders of magnitude.  The strategies we come up with for organizing these systems will have to take several key factors into consideration, and we will have to raise the level of abstraction to a point that may be hard for us to imagine with our existing tools and languages.

One aspect that’s clear is the rise of declarative or intention-based syntax, whether represented as XML, Domain Specific Languages (DSLs), attribute decoration, or a suite of new visual modeling editors.  This is in part a consequence of raising the abstraction level, as lower-level libraries are entrusted to solve common problems and take advantage of common opportunities.
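As a tiny illustration of the attribute-decoration flavor, the class below declares what should be true of its data and leaves the how to lower-level framework code.  (MaxLengthAttribute is a made-up example here, not a Framework type.)

// a toy attribute: purely declarative metadata that framework code
// can discover via reflection and enforce however it sees fit
[AttributeUsage(AttributeTargets.Property)]
public class MaxLengthAttribute : Attribute
{
    public int Length { get; private set; }

    public MaxLengthAttribute(int Length)
    {
        this.Length = Length;
    }
}

public class Customer
{
    [MaxLength(40)]  // intent, not procedure
    public string Name { get; set; }
}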

Another is the use of Inversion of Control (IoC) containers and dependency injection in component-based architectures.  These standardize the lifecycle of an application and its components, provide a common environment or ecosystem for those components, and introduce a common protocol for component location, creation, access, and disposal.  This level of consistency is valuable for sharing a common understanding of how to troubleshoot software components.  The more predictable a component’s interaction with the rest of the system, the easier it is to debug and modify; conversely, the more unique a component and its communication system are, the more disparity there is among components, and the more difficult the system is to understand and modify without introducing errors.  If software is a species and applications are individuals, then components are the cells of a system.
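To make that concrete, here’s a minimal constructor-injection sketch (the service and channel names are illustrative, not from any particular container): the component declares its dependency on an abstraction once, in its constructor, and a container or composition root supplies the implementation and manages its lifetime.

public interface IMessageChannel
{
    void Send(string Message);
}

public class TcpMessageChannel : IMessageChannel
{
    public void Send(string Message)
    {
        // open a socket, transmit, etc.
    }
}

public class OrderService
{
    private readonly IMessageChannel Channel;

    // the dependency is visible in exactly one place, so an IoC container
    // can locate, create, and dispose of it by a uniform protocol
    public OrderService(IMessageChannel Channel)
    {
        this.Channel = Channel;
    }

    public void PlaceOrder(string Order)
    {
        Channel.Send(Order);
    }
}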

Even the introduction of functional programming languages into the mainstream over the past couple of years is due, in part, to the ability of those languages to provide more declarative support, more syntactic flexibility, and new ways of dealing with concurrency and coordination issues (such as immutable values) and lightweight, ad hoc data structures (tuples).

Balancing the Forces of Coupling, Cohesion, and Modularity

On a fundamental level, the more independent components are, the less coupled and the more modular and flexible they are.  But the more they can communicate with and benefit from each other, the more interdependent they become.  This interdependence adds cohesiveness and synergy, but also creates stronger coupling to a community of abstractions.

A composition of services has layers and segments of interdependence, and while there are dependencies, these should be dependencies on abstractions (interfaces, not implementations).  Since there will be at least one implementation of each service, and the extensibility exists to build others as needed, dependency is only a liability when the means for fulfilling it are not extensible.  Both sides of a contract need to be fulfilled regardless; service-oriented or component-based designs merely provide a mechanism for each side to implement and fulfill its part of the contract, and ideally the system also provides a discovery mechanism for the service provider to publish its availability so that other components can discover and consume it.

If you think about software components as a hierarchy or tree of services, with services in one layer depending on services closer to the root, it’s easy to see how this simplifies the perpetual task of adding new functionality and revising what exists.  You’re essentially editing an outline: you can move services around, reorganize dependencies easily, and let many of the details of the software’s complexity be absorbed into this easy-to-use outline structure (and its supporting infrastructure).  Systems of arbitrary complexity become feasible, and then relatively routine.  There’s a somewhat steep learning curve to get to this point, but once you’ve crossed it, your opportunities extend endlessly for no additional mental cost, at least not in terms of how to compose your system out of individual parts.

Absorbing Complexity into Frameworks

The final thing I want to mention is that a rise in overall complexity doesn’t mean that the job of software developers necessarily has to become more difficult than it is currently.  With the proper design of components that abstract away the complexity into reusable frameworks with intuitive interfaces, developers at the business logic level don’t need to be aware of the inner complexity, in the same way that software developers are largely absolved of the responsibility of thinking about the processor’s inner workings.  As we build our technology stack higher and higher, like the famed Tower of Babel, we must make sure that it’s organized and structured in a way to support that upward growth and the load imposed upon it… so it doesn’t come crashing down.

The requirements for building components tomorrow will not be the same as they were yesterday.  As illustrated in this account of the effort involved in a feature change at Microsoft, in the future we will also want to consider issues such as tool-assisted refactorability (and patterns that frustrate it, such as “magic strings”) and, due to an explosion of component libraries, the discoverability of types, members, and their use.

A processor can handle any complexity of instruction and data flow.  The trick is in organizing all of this in a way that other developers can understand and work with.

Posted in Compact Framework, Component Based Engineering, Concurrency, Design Patterns, Development Environment, Distributed Architecture, Functional Programming, Mobile Devices, Object Oriented Design, Problem Modeling, Reflection, Service Oriented Architecture, Software Architecture, Visual Studio | 1 Comment »

Concurrency & Coordination With Futures in C#

Posted by Dan Vanderboom on July 3, 2008

A future is a proxy or placeholder for a value that may not yet be known, usually because the calculation is time consuming.  It is used as a synchronization construct, and it is an effective way to define dependencies among computations that will execute when all of their factors have been calculated, or in other words, to construct an expression tree with each node potentially computing in parallel.  According to the Wikipedia article on futures and promises, using them can dramatically reduce latency in distributed systems.
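To make the idea concrete before diving into implementations, here’s how a future reads at the call site, using the Future<T> class developed later in this article (DoOtherWork stands in for whatever else the program has to do):

// construction starts the calculation on another thread immediately
var pi = new Future<string>("pi", () => CalculatePi(10));

DoOtherWork();

// reading Value blocks only if the result isn't ready yet
Console.WriteLine(pi.Value);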

Damon pointed out that the Parallel Extensions library contains a Future<T> class, so I started looking around for examples and explanations of how they work and what the syntax is like, and I ran across a frightening example implementing the asynchronous programming model with Future<T>, as well as going in the other direction, wrapping an APM implementation with Future<T>.  Other articles give pretty good explanations but trivial examples.  From what I gathered briefly, the ContinueWith method for specifying the next step of calculation to process doesn’t seem to provide an intuitive way to indicate that several calculations may depend upon the current one (unless it can be called multiple times?).  Using ContinueWith, you always specify, going forward, the calculation task that depends on the current future object.  It also surprised me a little that Future inherits from Task, because my understanding of a future is that it’s primarily defined as a value-holding object.  But considering that a future really holds an expression that needs to be calculated, making Future a Task doesn’t seem so odd.

So I decided to implement my own Future<T> class before looking at the parallel extensions library too deeply.  I didn’t want to prejudice my solution, because I wanted to make an exercise of it and see what I would naturally come up with.

Though I tried avoiding prejudice, I still wound up characterizing it in my head as a task, and thought that a future would simply be a pair of Action and Reaction methods (both of the Action delegate type).  The Action would execute and could do whatever it liked, including evaluate some expression and store it in a variable.  If the Action completed, the Reaction method (a continuation) would run, and these could be specified using lambdas.  Because I was storing the result in a local variable (result), swallowed up and made accessible with a closure, I didn’t see a need for a Value property in the future and therefore no need to make the type generic.  Ultimately I thought it silly to have a Reaction method, since anything you need to happen sequentially after a successful Action can simply go at the end of the Action method itself.

FutureTask task = new FutureTask(
    () => result = CalculatePi(10),
    () => new FutureTask(
        () => result += "...",
        () => Console.WriteLine(result),
        ex2 => Console.WriteLine("ex2: " + ex2.Message)),
    ex1 => Console.WriteLine("ex1: " + ex1.Message));

The syntax is what I was most concerned with, and as I began playing around with nesting futures to compose my calculation, I started to feel like I was onto something.  After all, it was almost starting to resemble some of the F# code I’ve been looking at, and I took that style of functional composition to be a good sign.  As you can see from the code above, I also include a constructor parameter of type Action<Exception> for compensation logic to run in the event that the original action fails.  (The result variable is a string, CalculatePi returns a string, and so the concatenation of the ellipsis really does make sense here.)

The problem that started nagging me was the thought that a composite computation of future objects might not be definable all in one statement like this, except by building the dependency tree from the bottom up.  You can really only define the most basic factors (the leaf nodes of a dependency tree) at the beginning this way, and then the expressions that depend directly upon those leaf nodes, and so on.  What if you have 50 different starting future values, and you can only proceed with the next step in the calculation once 5 of those specific futures have completed evaluation?  How would you express those dependencies with this approach?

That’s when I started to think about futures as top-down hierarchical data container objects, instead of tasks that have pointers to some next task in a sequence.  I created a Future<T> class whose constructor takes an optional name (to aid debugging), a method of type Func<T> (which is a simple expression, supplied as a lambda in my examples), and finally an optional params list of other Future<T> objects on which that future value depends.

The first two futures in the code below start calculating pi (3.1415926535) and omega (which I made up to be a string of 9s).  They have no dependencies, so they can start calculating right away.  The paren future has two dependencies, supplied as two parameters at the end of the argument list: pi and omega.  You can see that the values pi.Value and omega.Value are used in the expression, which will simply surround the concatenated string value with parentheses and spaces.

var pi = new Future<string>("pi", () => CalculatePi(10));
var omega = new Future<string>("omega", () => CalculateOmega());

var paren = new Future<string>("parenthesize", () => Parenthesize(pi.Value + " < " + omega.Value), pi, omega);

var result = new Future<string>("bracket", () => Bracket(paren.Value), paren);

Finally, the result future has a dependency on the paren future.  This surrounds the result of paren.Value with brackets and spaces.  Because the operations here are trivial, I’ve added Thread.Sleep statements to all of these methods to simulate more computationally expensive work.

[Figure: Dependencies Among Futures]

The program starts calculating pi and omega concurrently, and then immediately builds the paren future, which because of its dependencies waits for completion of the pi and omega futures.  But it doesn’t block the thread.  Execution continues immediately to build the result future, and then moves on to the next part of the program.  When each part of the expression, each future, completes, it will set a Complete boolean property to true and raise a Completed event.  Any attempt to access the Value property of one of these futures will block the thread until it and all of the futures it depends on have completed evaluation.

Furthermore, if an exception occurs in one future, all of the futures that depend on it will no longer attempt to evaluate, and the exceptions will be thrown as part of an AggregateException when accessing the Value property.  This AggregateException contains all of the individual exceptions that were thrown as part of evaluating each future expression.  If both pi and omega fail, result should be able to hand me a list of all Exceptions below it in the tree structure that automatically gets formed.

There are two bits of code I added as icing on this cake.  The first is the use of the implicit operator to convert a variable of type Future<T> to type T.  In other words, if you have a Future<string> called result, you can now pass result into methods where a string parameter is expected, etc.  In the code listing at the end of the article, you’ll notice that I reference pi and omega instead of pi.Value and omega.Value (as in the code snippet above).

public static implicit operator T(Future<T> Future)
{
    return Future.Value;
}

The other helpful bit is an override of ToString, which allows you to hover over a future variable in debug mode and see its name (if you named it), whether it’s Complete or Incomplete, and any errors encountered during evaluation.

public override string ToString()
{
    return Name + ", " + (Complete ? "Complete" : "Incomplete") + (Error != null ? ", Error=" + Error.Message : string.Empty);
}

[Figure: Debug Experience of Future]

What I’d really like to do is have the ability to construct this composite expression in a hierarchical form in the language, with a functionally composed syntax, replacing any parameter T with a Future<T>, something like this:

var result = new Future<string>("bracket", () => Bracket(
    new Future<string>("parenthesize", () => Parenthesize(
        new Future<string>("pi", () => CalculatePi(10))
        + " < " +
        new Future<string>("omega", () => CalculateOmega())
    ))
));

The Bracket and Parenthesize methods both require a string, but I give them an object that will at some point (“in the future”) evaluate to a string.  Another term used for future is promise (there is a distinction in some languages that support both), and you can think in terms of giving those methods the promise that they’ll get a string later, at which time they can proceed with their own evaluation.  This effectively creates lazy evaluation, sometimes referred to as normal-order evaluation.

There are a few problems with this code, however.  First of all, though it’s composed functionally from the top down and returns the correct answer, it takes too long to do it: about 8 seconds instead of 4.  That means it’s processing all of the steps sequentially.  This happens because the future objects we’re handing to the Parenthesize and Bracket methods have to be converted from Future<string> to string before they can be evaluated in the expression, and doing that activates the implicit operator, which executes the Value property getter.  This completely destroys the asynchronous behavior we’re going for, by insisting on resolving the value immediately with the wait built into the Value property.  The string concatenation expression evaluates sequentially one piece at a time, and when that’s done, the next level up evaluates, and so on.

The solution is to declare our futures as factors we depend on at each level, which starts them executing right away due to C#’s order of evaluation, and then to declare the operations we want to perform in terms of those predefined futures.  After a few hours of rearranging definitions, declaration order, and experimenting with many other details (including a brief foray into being more indirect with Func<Future<T>>), this is the working code I came up with:

Future<string> FuturePi = null, FutureOmega = null, FutureConcat = null, FutureParen = null;

var result = new Future<string>(
    () => Bracket(FutureParen),
    (FutureParen = new Future<string>(
        () => Parenthesize(FutureConcat),
        (FutureConcat = new Future<string>(
            () => FuturePi + " < " + FutureOmega,
            (FuturePi = new Future<string>(() => CalculatePi(10))),
            (FutureOmega = new Future<string>(() => CalculateOmega()))
        ))
    ))
);

In F# and other more functional languages, I imagine we could use let statements to define and assign these variables as part of the overall expression, instead of having to define the variables in a separate statement as shown here.

The Future<T> class I wrote works fairly well for exploration and study of futures, the possible syntax to define them, and the ways to access their values, and I’ll share it so that you can experiment with it if you like; but understand that this is (even more so than usual) not production-ready code.  I’m making some very naive assumptions, not taking advantage of any task managers or thread pools, there is no intelligent scheduling going on, and I haven’t tested this in any real-world applications.  With that disclaimer out of the way, here it is, complete with the consuming test code.

using System;
using System.Collections.Generic;
using System.Threading;
using System.Linq;

namespace FutureExpressionExample
{
    class Program
    {
        static void Main(string[] args)
        {
            DateTime StartTime = DateTime.Now;

            Future<string> FuturePi = null, FutureOmega = null, FutureConcat = null, FutureParen = null;

            var result = new Future<string>("bracket",
                () => Bracket(FutureParen),
                (FutureParen = new Future<string>("parenthesize",
                    () => Parenthesize(FutureConcat),
                    (FutureConcat = new Future<string>("concat",
                        () => FuturePi + " < " + FutureOmega,
                        (FuturePi = new Future<string>("pi", () => CalculatePi(10))),
                        (FutureOmega = new Future<string>("omega", () => CalculateOmega()))
                    ))
                ))
            );

            /* Alternative
             
            // first group of expressions evaluating in parallel
            var pi = new Future<string>("pi", () => CalculatePi(10));
            var omega = new Future<string>("omega", () => CalculateOmega());

            // a single future expression dependent on all of the futures in the first group
            var paren = new Future<string>("parenthesize", () => Parenthesize(pi + " < " + omega), pi, omega);
                        
            // another single future expression dependent on the paren future
            var result = new Future<string>("bracket", () => Bracket(paren), paren);

            */

            Console.WriteLine("Do other stuff while calculation occurs...");

            try
            {
                Console.WriteLine("\n" + result);
            }
            catch (AggregateException ex)
            {
                Console.WriteLine("\n" + ex.Message);
            }

            TimeSpan ts = DateTime.Now - StartTime;
            Console.WriteLine("\n" + ts.TotalSeconds.ToString() + " seconds");

            Console.ReadKey();
        }

        static string CalculatePi(int NumberDigits)
        {
            //throw new ApplicationException("Failed to calculate Pi");
            Thread.Sleep(3000);
            return "3.1415926535";
        }

        static string CalculateOmega()
        {
            //throw new ApplicationException("Failed to calculate Omega");
            Thread.Sleep(3000);
            return "999999999999999";
        }

        static string Parenthesize(string Text)
        {
            Thread.Sleep(500);
            return "( " + Text + " )";
        }

        static string Bracket(string Text)
        {
            Thread.Sleep(500);
            return "[ " + Text + " ]";
        }
    }

    public class Future<T> : IDisposable
    {
        public string Name { get; set; }
        public bool Complete { get; protected set; }
        public Exception Error { get; protected set; }

        protected Func<T> Expression { get; set; }

        protected List<Future<T>> Factors;
        protected List<Future<T>> FactorsCompleted;
        protected List<Future<T>> FactorsFailed;

        public event Action<Future<T>> Completed;
        protected void OnCompleted()
        {
            Complete = true;

            if (Completed != null)
                Completed(this);
        }

        private T _Value;
        public T Value
        {
            get
            {
                // block until complete
                while (!Complete)
                {
                    Thread.Sleep(1);
                }

                if (Exceptions.Count > 0)
                    throw new AggregateException(Exceptions);

                return _Value;
            }
            private set { _Value = value; }
        }

        public List<Exception> Exceptions
        {
            get
            {
                var list = new List<Exception>();

                foreach (Future<T> Factor in Factors)
                {
                    list.AddRange(Factor.Exceptions);
                }

                if (Error != null)
                    list.Add(Error);

                return list;
            }
        }

        public static implicit operator T(Future<T> Future)
        {
            return Future.Value;
        }

        // naming a Future is optional
        public Future(Func<T> Expression, params Future<T>[] Factors) : this("<not named>", Expression, Factors) { }

        public Future(string Name, Func<T> Expression, params Future<T>[] Factors)
        {
            this.Name = Name;
            this.Expression = Expression;
            this.Factors = new List<Future<T>>(Factors);
            
            FactorsCompleted = new List<Future<T>>();
            FactorsFailed = new List<Future<T>>();

            foreach (Future<T> Factor in this.Factors)
            {
                if (Factor.Complete)
                    FactorsCompleted.Add(Factor);
                else
                    Factor.Completed += new Action<Future<T>>(Factor_Completed);
            }

            // there may not be any factors, or they may all be complete
            if (FactorsCompleted.Count == this.Factors.Count)
                Expression.BeginInvoke(ReceiveCallback, null);
        }

        private void Factor_Completed(Future<T> Factor)
        {
            if (!FactorsCompleted.Contains(Factor))
                FactorsCompleted.Add(Factor);

            if (Factor.Error != null && !FactorsFailed.Contains(Factor))
                FactorsFailed.Add(Factor);

            Factor.Completed -= new Action<Future<T>>(Factor_Completed);

            if (Exceptions.Count > 0)
            {
                Dispose();
                OnCompleted();
                return;
            }

            if (FactorsCompleted.Count == Factors.Count)
                Expression.BeginInvoke(ReceiveCallback, null);
        }

        private void ReceiveCallback(IAsyncResult AsyncResult)
        {
            try
            {
                Value = Expression.EndInvoke(AsyncResult);
            }
            catch (Exception ex)
            {
                Error = ex;
            }
            
            Dispose();

            // computation is completed, regardless of whether it succeeded or failed
            OnCompleted();
        }

        public void Dispose()
        {
            foreach (Future<T> Factor in Factors)
            {
                Factor.Completed -= new Action<Future<T>>(Factor_Completed);
            }
        }

        // helpful for debugging
        public override string ToString()
        {
            return Name + ", " + (Complete ? "Complete" : "Incomplete") + (Error != null ? ", Error=" + Error.Message : string.Empty);
        }
    }

    public class AggregateException : Exception
    {
        public List<Exception> Exceptions;

        public AggregateException(IEnumerable<Exception> Exceptions)
        {
            this.Exceptions = new List<Exception>(Exceptions);
        }

        public override string Message
        {
            get
            {
                string message = string.Empty;
                foreach (Exception ex in Exceptions)
                {
                    message += ex.Message + "\n";
                }
                return message;
            }
        }
    }
}

Posted in Algorithms, Concurrency, Data Structures, Design Patterns, Functional Programming | 1 Comment »

Control Invoke Pattern Using Lambdas

Posted by Dan Vanderboom on July 1, 2008

In Windows Forms development, there are often times when you need to update the user interface in response to something happening on another thread.  This could be a timer firing every second, or some service running in the background and notifying the client of updated status on its own (background) thread.  If you’ve been working with Windows Forms for any length of time, you’ve doubtless run into exceptions that kindly inform you that you can’t update controls on a thread other than the one that created them.  For our example, we’ll consider the simple case of a timer firing every second and a label displaying the elapsed time, and we’ll introduce a new pattern to make this cleaner and more intuitive.

First we’ll set up our start time and the timer.  I use System.Threading.Timer to ensure that the callback runs on a non-UI thread.

StartTime = DateTime.Now;

Timer = new System.Threading.Timer(new TimerCallback(Timer_Tick));
Timer.Change(0, 1000);

Here’s the event handler that calculates the elapsed time and calls Invoke, as well as the method we’re invoking, which will run on the UI thread, where it can safely update the lblElapsedTime label control.

private void Timer_Tick(object state)
{
    TimeSpan ts = DateTime.Now - StartTime;
    Invoke(new StringDelegate(DisplayText), ts.TotalSeconds.ToString());
}

private void DisplayText(string Text)
{
    lblElapsedTime.Text = Text;
}

In order to make this work, we need the event handler, a separate method to run in the UI thread, and also a delegate that matches the invoked method.

public delegate void StringDelegate(string Text);

This is a lot of moving parts for such a small piece of functionality.  To make matters worse, you may need to repeat this pattern many times, depending on the controls (or combinations thereof) that you need to update.  In a modern multi-threaded application, your class can become littered with these invocations, handlers, and delegates.

What I’d really like to do is call Invoke and pass in an anonymous method (in lambda form).  Unfortunately, Invoke takes a Delegate parameter, which is an abstract type.  This forces you to instantiate a concrete delegate type.  So while we eliminate our need for a separate method, we’re still dependent on our delegate type.

private void Timer_Tick(object state)
{
    TimeSpan ts = DateTime.Now - StartTime;
    Invoke(new StringDelegate(s => lblElapsedTime.Text = s), ts.TotalSeconds.ToString());
}

Or are we?  By using the Action (or MethodInvoker) delegate for all of these invocations, and relying on C# and its support for closures, we can make a reference to the TimeSpan variable from within the anonymous method body, like this:

private void Timer_Tick(object state)
{
    TimeSpan ts = DateTime.Now - StartTime;
    Invoke(new Action(() => lblElapsedTime.Text = ts.TotalSeconds.ToString()));
}

Here it’s contained within a single method, and no additional delegates are necessary to make this work.  That’s pretty nice!  (A similar approach is taken here.)  But it’s still not quite as slick and intuitive as I’d like.  By creating an extension method, adding another overload of Invoke to the Control class, we can allow Invoke to assume a concrete Action delegate type, and write code like this:

private void Timer_Tick(object state)
{
    TimeSpan ts = DateTime.Now - StartTime;
    this.Invoke(() => lblElapsedTime.Text = ts.TotalSeconds.ToString());
}

Now that’s clean.  Just call Invoke and pass in the lambda to execute, or several statements in a code block if you like.  It’s very succinct.  The only part that bothers me is the this keyword that’s required.  Extension methods don’t register in Intellisense, nor do they compile successfully, unless they’re qualified by a variable, this, base, or some other expression followed by a dot.  If we’re really going to push the illusion of extending a class, I think we need to go all the way and make it work without having to explicitly type this.  But it is what it is for now; hopefully this will be fixed in the next version of C# (along with adding support for extension properties, events, and operators).

That being said, here is the extension method to make this work:

public static void Invoke(this Control Control, Action Action)
{
    Control.Invoke(Action);
}

It couldn’t be simpler (thanks to a little help from Mads Torgersen)!  I’ll include the finished program for those of you who want to run and experiment with it; you’ll just need to create the Program.cs with the startup code yourself (a sketch of that startup code follows the listing).  Create a new Windows Forms app and use what’s there, renaming the form listed in Application.Run to match this one.

using System;
using System.Drawing;
using System.Windows.Forms;
using System.Threading;

namespace ControlInvokeLambda
{
    public partial class frmTestInvokeLambda : Form
    {
        private Button btnExit;
        private Label lblElapsedTime;

        // make sure we're using the Threading Timer for our example, which runs on a non-UI thread
        private System.Threading.Timer Timer;
        
        private DateTime StartTime;

        public frmTestInvokeLambda()
        {
            // initialize controls
            
            btnExit = new Button();
            btnExit.Location = new Point(88, 80);
            btnExit.Name = "btnExit";
            btnExit.Size = new Size(118, 26);
            btnExit.TabIndex = 0;
            btnExit.Text = "Exit";
            btnExit.UseVisualStyleBackColor = true;
            btnExit.Click += new EventHandler(this.btnExit_Click);

            lblElapsedTime = new Label();
            lblElapsedTime.AutoSize = true;
            lblElapsedTime.Location = new Point(140, 36);
            lblElapsedTime.Name = "lblElapsedTime";
            lblElapsedTime.Size = new Size(13, 13);
            lblElapsedTime.TabIndex = 1;
            lblElapsedTime.Text = "0";

            AutoScaleDimensions = new SizeF(6F, 13F);
            AutoScaleMode = AutoScaleMode.Font;
            ClientSize = new Size(292, 136);
            Controls.Add(lblElapsedTime);
            Controls.Add(btnExit);
            Name = "frmTestInvokeLambda";
            Text = "Invoke Lambda";
            PerformLayout();

            // remember start time and startup timer

            StartTime = DateTime.Now;

            Timer = new System.Threading.Timer(new TimerCallback(Timer_Tick));
            Timer.Change(0, 1000);
        }

        private void btnExit_Click(object sender, EventArgs e)
        {
            Timer.Change(Timeout.Infinite, Timeout.Infinite);
            Application.Exit();
        }

        private void Timer_Tick(object state)
        {
            TimeSpan ts = DateTime.Now - StartTime;
            this.Invoke(() => lblElapsedTime.Text = ts.TotalSeconds.ToString());
        }
    }

    public static class ControlExtensions
    {
        public static void Invoke(this Control Control, Action Action)
        {
            Control.Invoke(Action);
        }
    }
}
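For reference, here’s a sketch of the Program.cs startup code mentioned above; it’s just the standard startup sequence Visual Studio generates for a Windows Forms app, pointed at this form.

using System;
using System.Windows.Forms;

namespace ControlInvokeLambda
{
    static class Program
    {
        [STAThread]
        static void Main()
        {
            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);

            // run the message loop with our form as the main window
            Application.Run(new frmTestInvokeLambda());
        }
    }
}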

Posted in Algorithms, Concurrency, Design Patterns, Windows Forms | 2 Comments »

New Spin on Spawning Threads

Posted by Dan Vanderboom on June 23, 2008

There are several good libraries (PFX, PLINQ, CCR, etc.) available now or coming out soon that help you write concurrent logic, providing various abstractions and control patterns.  Until I can incorporate these into my Compact Framework applications, however, I find myself spinning up threads in the usual way, and sometimes what I need to run concurrently doesn’t require any fancy coordination.

Here is the thread-spawning pattern I’ve used, and have seen used, countless times:

protected override void OnStart()
{
    Thread t = new Thread(new ThreadStart(Go));
    t.Name = "PrefetchLookupData";
    t.IsBackground = true;
    t.Priority = ThreadPriority.AboveNormal;
    t.Start();
}

private void Go()
{
    var c1 = LookupListDA.CustomerTypes;
    var c2 = LookupListDA.SalesOrderStates;
    var c3 = LookupListDA.VendorTypes;
}

The data layer shown in the example caches each lookup list internally the first time its property getter is accessed, so we can assign them to variables that are never used here and effectively “pre-fetch” our lookup list data while the user is waiting in the main menu, for example.
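Here’s a sketch of the kind of caching getter assumed above (LookupListDA’s internals and LoadListFromDatabase are hypothetical; only the pattern matters): the first access loads and caches the list, and every later access returns the cached copy.

private static List<string> _CustomerTypes;

public static List<string> CustomerTypes
{
    get
    {
        // the expensive load happens once; subsequent reads hit the cache
        if (_CustomerTypes == null)
            _CustomerTypes = LoadListFromDatabase("CustomerType");

        return _CustomerTypes;
    }
}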

Using a lambda statement (anonymous method), we can consolidate this into a single named method, and avoid cluttering our class with a potential plethora of methods that may perform some ad hoc background thread processing.

protected override void OnStart()
{
    Thread t = new Thread(new ThreadStart(() =>
    {
        var c1 = LookupListDA.CustomerTypes;
        var c2 = LookupListDA.SalesOrderStates;
        var c3 = LookupListDA.VendorTypes;
    }));

    t.Name = "PrefetchLookupData";
    t.IsBackground = true;
    t.Priority = ThreadPriority.AboveNormal;
    t.Start();
}

After a little reflection, I came up with a similar pattern that further encapsulates the details, and in doing so created an abstraction called LogicStream.  Other names that would work are ExecutionStream or CodeStream, the central concept being that you’re specifying a sequential set (stream) of instructions that runs in parallel with the stream that defines it.

protected override void OnStart()
{
    new LogicStream("PrefetchLookupData", s =>
        {
            var c1 = LookupListDA.CustomerTypes;
            var c2 = LookupListDA.SalesOrderStates;
            var c3 = LookupListDA.VendorTypes;
        });
}

Like the second example, this approach doesn’t require a separate method, but it’s also completely self-contained: there is no thread manipulation code outside of this code block; any setup we need to do can either be done via parameters (“PrefetchLookupData” names the thread) or by specifying an additional code block for thread configuration (an example is shown below).  Finally, the LogicStream constructor starts the thread immediately, and the thread is a background thread by default, so nothing additional needs to be done.

Here’s a slightly more complicated example, showing how you can configure the thread and pass in multiple parameters.

using System;
using System.Threading;

namespace Demo
{
    class Program
    {
        static void Main(string[] args)
        {
            Thread.CurrentThread.Name = "Main";

            new LogicStream("ThreadX", s =>
            {
                // rename parameters passed into thread
                string ExtraInfo = s.Get<string>(0);
                int StartIndex = s.Get<int>(1);
                int EndIndex = s.Get<int>(2);

                // do work
                for (int x = StartIndex; x <= EndIndex; x++)
                {
                    Console.WriteLine("x = " + x.ToString().PadRight(10) + s.Name);
                    Thread.Sleep(50);

                    // this code has access to the Thread object via the LogicStream
                    if (x > 90)
                        s.Thread.Abort();
                }
            }, "Width Data", 0, 100); // optional parameters passed into the LogicStream

            new LogicStream(t =>
                {
                    // configure thread with complete flexibility
                    t.Name = "ThreadY";
                    t.IsBackground = true;
                    t.Priority = ThreadPriority.Highest;
                    t.SetApartmentState(ApartmentState.MTA);
                },
                s =>
                {
                    for (int y = 0; y < 100; y++)
                    {
                        Console.WriteLine("y = " + y.ToString().PadRight(10) + s.Name.PadLeft(15));
                        Thread.Sleep(50);
                    }
                });

            new LogicStream("ThreadZ", ThreadPriority.Lowest, s =>
                {
                    for (int z = 0; z < 100; z++)
                    {
                        Console.WriteLine("z = " + z.ToString().PadRight(10) + s.Name.PadLeft(23));
                        Thread.Sleep(50);
                    }
                });

            Console.ReadKey();
        }
    }
}

When this program runs, three threads will fire up and start processing, but because all of the worker threads are background threads, the application will exit as soon as the console receives a key press.

The parameters passed into the first thread are constants, which of course could be defined within the LogicStream’s code block itself.  Because anonymous methods in C# create closures when necessary, you could use a variable defined within the method (in this case, Main), or make reference to the method’s argument, args, without needing to pass in any parameters at all.

Finally, here is the LogicStream class to make this work.

using System;

namespace System.Threading
{
    public class LogicStream
    {
        public Thread Thread { get; private set; }
        public object[] Parameters { get; set; }

        protected Action<LogicStream> Action;
        protected long InstanceID;
        protected static long NextInstanceID = 0;

        public LogicStream(Action<LogicStream> Action, params object[] Parameters)
            : this("LogicStream" + InstanceID.ToString(), ThreadPriority.Normal, true, Action, Parameters) { }

        public LogicStream(string Name, Action<LogicStream> Action, params object[] Parameters)
            : this(Name, ThreadPriority.Normal, Action, Parameters) { }

        public LogicStream(string Name, ThreadPriority Priority, Action<LogicStream> Action, params object[] Parameters)
            : this(Name, Priority, true, Action, Parameters) { }

        public LogicStream(string Name, ThreadPriority Priority, bool IsBackground, Action<LogicStream> Action, params object[] Parameters)
        {
            this.Action = Action;
            this.Parameters = Parameters;

            // atomically claim the next ID (a separate read and increment could race)
            InstanceID = Interlocked.Increment(ref NextInstanceID) - 1;

            Thread = new Thread(new ThreadStart(InvokeAction));
            Thread.Name = Name;
            Thread.Priority = Priority;
            Thread.IsBackground = IsBackground;
            Thread.Start();
        }

        public LogicStream(Action<Thread> Config, Action<LogicStream> Action, params object[] Parameters)
        {
            this.Action = Action;

            Thread = new Thread(new ThreadStart(InvokeAction));
            Config(Thread);

            // atomically claim the next ID (a separate read and increment could race)
            InstanceID = Interlocked.Increment(ref NextInstanceID) - 1;

            Thread.Start();
        }

        protected void InvokeAction()
        {
            Action(this);
        }

        public T Get<T>(int ParameterIndex)
        {
            return (T)Parameters[ParameterIndex];
        }
    }
}

It’s very simple, but that’s the best part.  You have the choice to configure the thread properties through various constructor parameters, or by defining an anonymous configuration method that can line up nicely with your anonymous action method.  If you’re lazy and don’t want to name your thread, or if you’re spinning up a bunch of them to perform the same task, you’ll still get a thread name differentiated by an auto-incrementing InstanceID.

So this isn’t anything big, of course.  Just a little syntactic wrapper around Thread and ThreadStart classes, with simple patterns built in for passing and fetching parameters, and defining streams of parallel logic a little more intuitively and concisely.  The trick to concurrent programming is coordination among parallel threads: passing messages (posting to ports), serializing use of certain resources, and so on.

I spent much of yesterday watching all of the CCR videos again and studying that implementation, so I expect to be writing more about that in the near future.

Posted in Algorithms, Concurrency, Design Patterns | 6 Comments »

Introduction to Port-Based Asynchronous Messaging

Posted by Dan Vanderboom on April 21, 2008

…using terminology and examples from the Concurrency & Coordination Runtime in C#

 

Port-Based Messaging

Port-based messaging has been in practice for a long time.  In the realm of low-level electronics, components have always been operating in parallel, hardware interface ports are designed around standards, and messages are posted to those ports, queuing up somewhere until the receiver is ready to process them.  This pattern has worked extremely well in multiple domains for a long time, and its key characteristic is the decoupling of data flow from control flow.

In sequential programs, one statement runs after another, and the behavior is predictable and repeatable.  Concurrency is difficult because you have to consider all possible interleavings of multiple simultaneous tasks, so the overall behavior is nondeterministic.  Depending on the relative timings of concurrent tasks, you could get different results each time if you’re not careful to set the appropriate locks on shared resources.  Port-based messaging architectures isolate islands of state across different execution contexts, and connect them with safe, atomic messages delivered through pre-defined ports.
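For contrast, here’s a minimal sketch of the shared-state discipline that port-based designs let you avoid:

private static int Counter;
private static readonly object SyncRoot = new object();

static void Increment()
{
    // without the lock, two threads can interleave the read-increment-write
    // sequence and lose updates; the lock serializes access to the counter
    lock (SyncRoot)
    {
        Counter++;
    }
}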

[Figure 1: Basic port-based asynchronous messaging]

The posting of a message to a port, as shown in Figure 1, is followed by some handler method receiving and processing the message.  What’s not evident in the diagram, however, is that while data flows into the port, the posting is a non-blocking call.  The sender continues on doing other work, taking the time only to queue up a message somewhere.

Queuing is important because, even with large thread pools, we can’t guarantee that a receiver will be listening at the very moment a message arrives.  Letting messages queue up means the receiver doesn’t have to block a thread waiting for them.  Instead, the data waits, and a thread checks messages on the port when it can.

[Figure 2: Port showing message queue]

What is a Port?

So what exactly is a port?  A port is a communication end point, but not in the sense of “a web service on a physical server”.  Think much more fine-grained than that, even more fine-grained than methods.  With sequential programming, we commonly use try-catch blocks and handle both the exceptional and non-exceptional results of operations within a single method.  In port-based programming, you post a message to a port, which results in some handler method running on the receiving end, and different results can be sent back on different callback ports depending on the type of message.  Instead of calling a method that returns back to you when it ends, port-based programming is about always moving forward, and unwinding a call stack has very little meaning here.
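Here’s roughly what the basic pattern looks like in code, using types from the CCR’s Microsoft.Ccr.Core namespace.  Treat it as a sketch of the shape of the API rather than a definitive sample; Dispatcher and DispatcherQueue are explained a little further below.

using System;
using Microsoft.Ccr.Core;

class PortSketch
{
    static void Main()
    {
        // a Dispatcher owns the thread pool; a DispatcherQueue feeds it tasks
        using (Dispatcher dispatcher = new Dispatcher(0, "demo pool"))
        {
            DispatcherQueue taskQueue = new DispatcherQueue("demo queue", dispatcher);

            Port<string> port = new Port<string>();

            // register a persistent receiver: the handler runs once per message
            Arbiter.Activate(taskQueue,
                Arbiter.Receive(true, port, message => Console.WriteLine(message)));

            // Post is non-blocking: the message queues up and the sender moves on
            port.Post("hello");
            port.Post("world");

            Console.ReadKey();
        }
    }
}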

[Figure 3: Sequence diagram of port-based messaging]

We can see in the sequence diagram above (Figure 3) a collection of services that communicate with and depend on each other.  Starting from the top, the left-most service posts a message to port 1, and then goes on to do other work or surrenders its thread back to the dispatcher for other tasks that are waiting to run.  A registered method on port 1 runs, and the logic there needs another service to complete its task, so it posts a message on port 2, and also continues processing without waiting.  The path of solid blue arrow lines traces the message path for normal execution.  If anything goes wrong, an exception can be posted to a different callback port, shown with a red outline in the diagram.

This diagram shows one possible composition of services and data flow.  Port sets, which are simply collections of related ports, are shown as callback receivers in pairs, but they can consist of any number of ports with any mixture of message types, depending on the needs of the system being coordinated.  In this example, if anything goes wrong in the handler methods at ports 2, 5, or 6, an exception message will be routed to port 6, where another handler method can compensate for or report on the error.  Also note that while during startup this system may have to process data at port 4 before the logic at ports 5, 7, and 8 can run… once it gets going, there could be activity at many ports concurrently (not just one port per service).

Arbiters, Dispatchers, and DispatcherQueues

Now it’s time to peel away some of the layers of simplification presented so far.  (It may help to have a beer or glass of wine at this point.)

An arbiter is a rule (or set of rules) about when and how to process messages for a specific port (or set of ports).  (It is helpful to think of an arbiter as a data flow or message flow coordinator.)  Should messages be pulled off the queue as soon as they arrive?  Should the software wait until 5 messages have arrived before processing them all as a group?  Should messages be checked according to a timer firing every 20 seconds?  Should logic be run only when two ports have messages waiting (a join)?  What logic should be run when one of these conditions occurs?  Can method handlers on three specific ports run concurrently until a message arrives on a fourth port, whose handler must run exclusively, and when done the other three can run again (an interleave)?  These are just a few of the many coordination patterns that can be expressed with different types of arbiters (and hierarchically nested arbiters, which are ingenious).

[Figure 4: Arbiters, Dispatchers, and DispatcherQueues]

Figure 4 illustrates that an arbiter is associated with a port to monitor and a method to execute under the right conditions.  The logic of the arbiter, depending on its type and configuration, determines whether to handle the message.  It gets its thread from a thread dispatcher, which contains a thread pool.  (Not the same as System.Threading.ThreadPool, though, as there can only be one of those per process.)

The next diagram (Figure 5) could represent a join coordination.  An arbiter waits for messages on two ports, and depending on how it’s defined, it may process messages from one port repeatedly; but as soon as it receives a message on the second port (which may be an exception port, for example), the whole arbiter might tear itself down so that no more handling on those ports will occur.  As you are probably starting to see, composition and attachment of arbiters are key to controlling specific patterns of coordination in arbitrarily powerful and flexible ways.

[Figure 5: Multiple DispatcherQueues and complex Arbiters]
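As a taste of the syntax, the simple two-port join mentioned above (run the handler only when both ports have a message waiting) might be expressed like this; again a sketch, reusing the taskQueue from the earlier snippet:

Port<double> widthPort = new Port<double>();
Port<double> heightPort = new Port<double>();

// the handler fires only once both ports have received a message
Arbiter.Activate(taskQueue,
    Arbiter.JoinedReceive(false, widthPort, heightPort,
        (double width, double height) => Console.WriteLine("Area: " + (width * height))));

widthPort.Post(4.0);
heightPort.Post(2.5);   // only now does the handler run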

In the Concurrency & Coordination Runtime, we can attach and detach these arbiters at runtime; we don’t have to define them statically at compile time.  There has been some criticism of this approach, because dynamic arbitration rules are much more difficult to verify formally with analysis, and are therefore difficult to optimize compilation and thread management for; but the advantages of this flexibility are enormous, and the performance (see this paper by Chrysanthakopoulos and Singh) has been very impressive compared to conventional multithreading approaches.  Ultimately, it’s not about whether we can guarantee 100% that nothing will go wrong using only the mathematical models currently in our repertoire, but whether we can be productive with these techniques to release software that meets acceptable quality standards across a broad range of application domains.

I don’t think we’re going to find a better set of strategies to work with anytime soon, and when we’ve pushed this technology hard enough, the tactics will be fine-tuned, and we can start absorbing some of these coordination concepts into languages themselves (without sacrificing the dynamism that a library of composable parts provides).  People are going to attempt concurrent programming whether it’s safe or not, and using a library such as the CCR significantly reduces the risk of ad hoc multi-threading code.

Cooperative Multitasking

When mainstream operating systems like Windows took their first steps to support multi-tasking, cooperative versus preemptive multi-tasking was a common topic.  The idea of an operating system depending on applications to surrender control in a well-behaved way was generally and rightfully considered a bad idea.  Any kind of error or instability in software could easily bring down the entire operating system, and forcing a quickly growing community of software vendors to share nicely wasn’t a realistic option.  Being preemptive meant that the OS could forcefully stop an application from running after giving it a small, measured slice of time, and then switch the thread to a new context where another application could run for another time slice.  Regardless of how poorly applications ran, as benevolent dictator, the OS could ensure a fair scheduling of processor time.

The solution encapsulated in the Concurrency & Coordination Runtime is, on the other hand, a cooperative multi-tasking strategy.  However, because it operates within the local scope of an individual process, and is isolated from other processes in the same OS, its risk of destabilizing the system is nonexistent.  This deep level of cooperation, in fact, is what gives the CCR its great performance.  When used correctly, which George Chrysanthakopoulos (in this video) and his colleagues have brilliantly put within our reach in the CCR library, threads don’t sit around waiting on some resource or for long-running operations to complete; instead, control is freely surrendered back to the thread pool, where it is quickly assigned to a new task.

Finally, by surrendering threads freely instead of holding onto them, a continuous flow of threads through the different activities of the system is maintained, and there is therefore always an abundance of them to handle new messages waiting on ports.  Existing threads are not wasted.  As the Tao Te Ching says:

If you want to accord with the Tao,
just do your job, then let go.

Control & Data Flow: Sequential vs. Concurrent

In sequential programs, the stack is used to wind and unwind method calls and to carry return values (return messages), and threads follow the data flow; in port-based programming, threads are managed by one or more thread dispatchers, which maximize the use of each thread by keeping it in a pool and sharing it among many other (potentially unrelated) tasks.  Data flows orthogonally to control flow, according to its own coordination strategy.  This task-thread agnosticism (the opposite of thread affinity) is similar to the statelessness of a web server such as IIS: one or more threads from a large pool are injected into the tasks of processing, rendering, and serving up huge numbers of web pages, after which those threads are recycled back into the thread pool for execution of other tasks, making for a highly concurrent and scalable service platform.

So herein lies the trick: to break this coupling between data flow and control flow, a different means of composing the two coordination strategies is needed.  In C# and other popular imperative programming languages, methods implicitly pass thread control along with their data arguments (the message), and the use of the stack for method calls imposes constraints on control flow, so making the CCR work involves some interesting patterns.

That’s why port-based programming is hard to get your head around: it’s such a large shift from common sequential logic, and it requires some additional planning (and good visualizations).  It’s obviously important to have a good set of tools for expressing that coordination: a simple set of conceptual primitives that allows us to compose arbitrarily complex coordination patterns.  These primitives, including Message, Port, PortSet, Dispatcher (the thread pool), and others, provide the building blocks we can use to define these relationships.  Once we define what we want to happen with these building blocks, the CCR makes it all happen.
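
As a bare-bones illustration of these primitives (the names and the square-root task are invented for this example): instead of calling a method and letting the stack carry both the data and the thread, we post a message to a Port, and the Dispatcher decides which pooled thread runs the attached handler.

using System;
using Microsoft.Ccr.Core;

class PrimitivesSketch
{
    static void Main()
    {
        // Dispatcher: the thread pool.  DispatcherQueue: feeds tasks to it.
        Dispatcher dispatcher = new Dispatcher(0, "workers");
        DispatcherQueue queue = new DispatcherQueue("work queue", dispatcher);

        // Port: where messages (the data flow) are posted.
        Port<double> input = new Port<double>();

        // PortSet: can carry either a result or a failure message.
        PortSet<double, Exception> result = new PortSet<double, Exception>();

        // A persistent receiver: the handler runs on whichever pooled
        // thread the dispatcher assigns (the control flow).
        Arbiter.Activate(queue,
            Arbiter.Receive(true, input, x =>
            {
                if (x >= 0.0) result.Post(Math.Sqrt(x));
                else result.Post(new ArgumentException("negative input"));
            }));

        Arbiter.Activate(queue,
            Arbiter.Receive(false, (Port<double>)result,
                r => Console.WriteLine("Result: " + r)));

        input.Post(2.0);  // instead of calling a method, post a message

        Console.ReadLine();
    }
}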

This level of coordination is a step beyond the strategies used by most concurrent applications and frameworks in software today, primarily because there hasn’t been a pressing need for it until recently: processors had been growing phenomenally in speed for many years.  Now, however, we’re told that processor speed has plateaued, and that we have to scale out to scale up, spreading the processing load across multiple cores.  We are very fortunate that the work being done by researchers in fields like robotics can be applied in other service-oriented architectures; that it can is a testament to the widespread use of the .NET Framework and the fantastic efforts of some very bright individuals.

Where to Find Microsoft Robotics Studio

Robotics Studio is a free download and can be found here, and while it is full of other good stuff, it’s worth checking out for the Concurrency and Coordination Runtime alone.

Posted in Compact Framework, Concurrency, Data Structures, Design Patterns, Microsoft Robotics Studio, Object Oriented Design, Service Oriented Architecture, Software Architecture | Leave a Comment »

.NET Micro Framework vs. Microsoft Robotics Studio

Posted by Dan Vanderboom on April 12, 2008

The .NET Micro Framework is a compatible subset of the full .NET Framework, similar to how the Compact Framework is a subset.  But the Micro Framework can act as its own real-time operating system (RTOS), instead of loading its tiny CLR within a host operating system, and it works with a variety of hardware devices, including those that don’t have their own memory management unit (MMU).  The gist is that embedded applications, as well as low-level drivers, can now be written in managed code (a huge leap forward) and take advantage of garbage collection, strong typing, and other nice features that are typically absent in embedded systems programming.  This supposedly reduces errors, boosts productivity, and reduces development costs.

I’ve been watching videos like this and reading about the Micro Framework for the past few months, I’ve pre-ordered this book from Amazon, and I’ve been itching to get my hands on a hardware development kit to start experimenting.  The only problem is that the interfaces for these embedded devices are so foreign to me (I2C, GPIO, etc.), and I’m not exactly sure what my options are for assembling components.  What kinds of sensors are available?  Do I have to solder these pieces together, or is there a nice modular plug system similar to the SATA, IDE, and PCI connectors on modern computers (but on a smaller scale)?  Do I have to write my own device drivers for anything I plug in, or is there some abstraction layer for certain classes of inputs and outputs?
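
For what it’s worth, the managed surface itself looks approachable.  Here’s a minimal sketch of GPIO output using the Micro Framework’s Microsoft.SPOT.Hardware namespace; the specific pin below is a placeholder, since real boards map their pins differently:

using System.Threading;
using Microsoft.SPOT.Hardware;

public class Blinker
{
    public static void Main()
    {
        // GPIO in managed code: drive a pin high and low to blink an LED.
        // The pin enum value here is arbitrary and board-dependent.
        OutputPort led = new OutputPort(Cpu.Pin.GPIO_Pin1, false);

        while (true)
        {
            led.Write(true);   // LED on
            Thread.Sleep(500);
            led.Write(false);  // LED off
            Thread.Sleep(500);
        }
    }
}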

The other issue that makes me hesitate is the thought of programming without language conveniences like generics on a more resource-constrained device than the Windows Mobile devices I’m already frustrated with (and have been for the past four years).

I’m not saying that I’m not excited about managed code on tiny embedded devices, and I’m not saying I won’t start playing with (and blogging about) this important technology sometime in the next few months.  But I’ve discovered another platform for embedded device development, one with the ability to interact directly with the physical world and a much lower technical barrier to entry, so that’s where I’m starting.

What I’m referring to is Microsoft Robotics Studio, which is by all accounts a phenomenal platform for more than just robotics, and seems to overlap somewhat with the Micro Framework’s reach (at the intersection of the computer-digital and physical-analog worlds).  The two critical components of this architecture are the Decentralized Software Services Protocol (DSSP, originally named WSAP) and the Concurrency & Coordination Runtime (CCR).  Make sure you watch this video on the CCR, and note how George emphasizes that “it’s not about concurrency, it’s about coordination”, which I found especially enlightening.  These are highly robust and general purpose libraries, and both of them will run on Compact Framework!  (I was very impressed when I read this.)

Without having studied them both in depth yet, the CCR seems to encroach on the territory of the Task Parallel Library (TPL), at least conceptually, offering a much more complete system of thread manipulation and synchronization than the original System.Threading types.  It was built for a vertical industry with much greater demands of a concurrency framework: one that must, for example, deliver massive concurrency and complex coordination rules in a highly distributed system, all while handling full and partial failures gracefully.  Some of the patterns and idioms for making concurrency and synchronization operations easy to write and understand are masterfully designed by Henrik Nielsen and George Chrysanthakopoulos (and I thought Vanderboom was a long name!).

The fact that the CCR and DSSP were developed together is significant.  Tasks running in parallel need to be manipulated, coordinated, and tracked regardless of whether they’re running on four processors in one computer or on 256 processors spread across a hundred devices, and distributed services need a dependable way of parallelizing their efforts on individual machines.  This is a synergistic relationship, each subsystem enhancing the elegance and usefulness of the other.  So why not use the CCR in every application instead of developing the TPL?  Are these two teams actively collaborating, or have they been building the two frameworks in isolation?

I also have to make special mention of the Decentralized Software Services, which as a protocol sits on top of HTTP in a RESTful implementation.  It supports composition of services by creating partnerships with other services, securely defining which partnerships are allowed, offering identification and discovery of services, and much more.  In this assembly, then, we have a robust library for building distributed, composable, decentralized service-oriented systems with event publication and subscription, whose services can be hosted on desktop and mobile computers alike.  Wow, call me a geek, but that’s really frickin’ cool!  Some of the ideas I have for future projects require this kind of architectural platform, and I’ve been casually designing bits and pieces of such a system (I got as far as getting the communication subsystem working in CF).  Now, however, I might be turning my attention to seeing if I can’t use the Robotics Studio APIs instead.

One thing to be clear about is that Robotics Studio doesn’t support Micro Framework, at least not yet.  I wonder exactly how much value there would be to adding this support.  With hardware options such as this 2.6″ x 2.3″ motherboard with a 1.6 GHz Intel Atom processor, video support, and up to 1 GB of RAM, capable of running Windows XP or XP Embedded, and priced at $95 (or a 1.1 GHz version for $45), we’re already at a small enough form factor and scale for virtually any autonomous or semi-autonomous robotics (or general embedded) applications.  There’s also the possibility for one or more Micro Framework devices to exchange messages with a hardware module that is supported in Robotics Studio, such as the tiny board just mentioned.

Where do we go next?  For me, I just couldn’t resist jumping in with both feet, so I ordered about $500 in robotics gear from Phidgets: USB interface boards, light and temperature sensors, a 3-axis accelerometer (think Wii controller), motors and servos, an RFID reader with tags, LEDs, buttons, knobs, sliders, switches, and more.  I plan to use some of this for projects at CarSpot, and the rest simply to be creative with and have fun.  I also pre-ordered this book, but the publication date is set for June 10th, so by the time I get it, I may have already figured most of it out.

Posted in Compact Framework, Concurrency, Distributed Architecture, Micro Framework, Microsoft Robotics Studio, Service Oriented Architecture, Software Architecture | 2 Comments »

Tree Data Structure Source Code Posted

Posted by Dan Vanderboom on April 3, 2008

[Updated 8/14/2014] The source code for this library can be found here at GitHub. Also check out my blog post announcing it.

For those of you who have been waiting for the source code to the Tree<T> article I wrote, I’m happy to announce that it’s finally available.  I had originally intended to supply some non-trivial examples of its use, but with my super busy schedule at work and otherwise, I’ve had to reduce the scope just to get it out there.  One of my examples is large enough to warrant one or more large articles, so I also didn’t want to just toss them in there without sufficient explanation as to how it all works.

While I work on getting those ready, check out Damon Payne’s article on using a non-binary tree for modeling dependencies among concurrently running tasks with the Task Parallel Library.  This is a great use of the tree data structure.  It would be interesting to see how that would work for tasks with cross-branch dependencies.  Does the tree become a graph?  Would iteration become a garbage-collection-like traversal?  How would it need to respond to insertion of tasks or modification of dependencies during execution?
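
Here’s a quick sketch of my own (not from Damon’s article) of one way those questions might play out, with each node counting its unfinished prerequisites.  With only parent-child edges the structure is a tree; add a single cross-branch dependency and it quietly becomes a DAG, with execution amounting to a topological traversal.

using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical dependency-scheduling node: a node runs only after all
// of its prerequisites have completed.
class TaskNode
{
    public string Name;
    public Action Work;
    public List<TaskNode> Dependents = new List<TaskNode>();
    int pending;  // count of unfinished prerequisites

    public void DependsOn(TaskNode prerequisite)
    {
        prerequisite.Dependents.Add(this);
        Interlocked.Increment(ref pending);
    }

    public void Start()  // for root nodes with no prerequisites
    {
        if (pending == 0) Execute();
    }

    void PrerequisiteDone()
    {
        if (Interlocked.Decrement(ref pending) == 0) Execute();
    }

    void Execute()
    {
        ThreadPool.QueueUserWorkItem(delegate
        {
            Work();
            foreach (TaskNode d in Dependents) d.PrerequisiteDone();
        });
    }
}

class Demo
{
    static void Main()
    {
        var a = new TaskNode { Name = "A", Work = () => Console.WriteLine("A") };
        var b = new TaskNode { Name = "B", Work = () => Console.WriteLine("B") };
        var c = new TaskNode { Name = "C", Work = () => Console.WriteLine("C") };
        b.DependsOn(a);
        c.DependsOn(a);
        c.DependsOn(b);  // cross-branch dependency: the tree is now a DAG
        a.Start();
        Console.ReadLine();
    }
}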

My non-binary tree article, which has been updated with a short section just before the conclusion, can be found here.  At the very top of the article is a link to the source code in the form of a Visual Studio 2008 solution.  For those of you with VS2005, you should be able to easily extract the code and create a VS2005 project out of it.  I specifically targeted .NET 2.0 instead of 3.5, and originally tested it against Compact Framework.

I’ll also be doing further development of this library, so if you have any cool feature requests, let me know.

Posted in Algorithms, Concurrency, Data Structures, My Software, Object Oriented Design, Problem Modeling, Software Architecture | 6 Comments »