Critical Development

Language design, framework development, UI design, robotics and more.

Archive for the ‘Data Structures’ Category

The Archetype Language (Part 8)

Posted by Dan Vanderboom on October 1, 2010

Overview

This is part of a continuing series of articles about a new .NET language under development called Archetype.  Archetype is a C-style (curly brace) functional, object-oriented (class-based), metaprogramming-capable language with features and syntax borrowed from many languages, as well as some new constructs.  A major design goal is to succinctly and elegantly implement common patterns that normally require a lot of boilerplate code which can be difficult, error-prone, or just plain onerous to write.

You can follow the news and progress on the Archetype compiler on twitter @archetypelang.

Links to the individual articles:

Part 1 – Properties and fields, function syntax, the me keyword

Part 2 – Start function, named and anonymous delegates, delegate duck typing, bindable properties, composite bindings, binding expressions, namespace imports, string concatenation

Part 3 – Exception handling, local variable definition, namespace imports, aliases, iteration (loop, fork-join, while, unless), calling functions and delegates asynchronously, messages

Part 4 – Conditional selection (if), pattern matching, regular expression literals, agents, classes and traits

Part 5 – Type extensions, custom control structures

Part 6 – If expressions, enumerations, nullable types, tuples, streams, list comprehensions, subrange types, type constraint expressions

Part 7 Semantic density, operator overloading, custom operators

Part 8 – Constructors, declarative Archetype: the initializer body

Part 9 – Params & fluent syntax, safe navigation operator, null coalescing operators

Conceptual articles about language design and development tools:

Language Design: Complexity, Extensibility, and Intention

Reimagining the IDE

Better Tool Support for .NET

Constructors

A constructor in Archetype is a recommended, predefined prototype for instantiating an object correctly.

 

The default parameterless constructor is defined implicitly (it’s defined even if it isn’t written), even if other constructors are defined explicitly. This last part is unlike other languages that hide the parameterless constructor when others are defined.  This will make classes with these default constructors common in Archetype, to more easily support behaviors like serialization and dynamic construction.  When it needs to be hidden, it can be defined with reduced visibility, such as private.
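
To make the motivation concrete, here is a small C# illustration of my own (not from the article): generic construction and reflection-based construction, which most serializers depend on, both require an accessible parameterless constructor.

// Both of these fail (at compile time and at run time, respectively)
// if Customer lacks an accessible parameterless constructor.
T CreateDefault<T>() where T : new()
{
    return new T();
}

var viaGeneric = CreateDefault<Customer>();
var viaReflection = (Customer)Activator.CreateInstance(typeof(Customer));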

 

A constructor is defined with the name new, consistent with how it’s invoked.

 

Let’s start with a very basic class, and build up to more complicated examples.

 

Customer object

{

FirstName string;

LastName string;

}

 

Despite the lack of an explicit constructor, it’s important for Archetype to define constructs that are useful in their default configurations.  You couldn’t get more basic than the Customer class above.  If we want to define the constructor explicitly, we can do so.

 

Customer object

{

FirstName string;

LastName string;

 

new ()

{

// do nothing

}

}

 

Instantiating a Customer object is easy. With the parameterless constructor, parentheses are optional.

 

var dilbert = new Customer;

 

Archetype, like C#, supports constructor initializers:

 

var dilbert = new Customer

{

FirstName = "Dilbert",

LastName = "Smith"

};

 

When you have few parameters and want to compress this call to a single line, the curly braces end up feeling a little too much (too formal?).

 

var dilbert = new Customer { FirstName = "Dilbert", LastName = "Smith" };

 

Archetype supports passing these assignment statements as final arguments of the constructor parameter list, like this:

 

var dilbert = new Customer(FirstName = "Dilbert", LastName = "Smith");

 

As a result, there isn’t much need to define constructors that only set fields and properties to the values of constructor parameters.  Because Archetype has this mechanism for fluidly initializing objects at construction, constructors really only need to be defined when construction of the object is complicated or unintuitive, in which case a supplied construction pattern ensures it’s done correctly.  Our Customer example doesn’t meet those criteria, but if it did, this is one way we could write it:

 

Customer object

{

FirstName string;

LastName string;

 

new (FirstName string, LastName string)

{

this.FirstName = FirstName;

this.LastName = LastName;

}

}

 

To avoid having to qualify FirstName with the this keyword, many people prefer naming their parameters with the first character lower-cased.  That’s an unfortunate compromise.  The public members of a type form an outward-facing API, and I think Pascal casing more naturally respects English grammar; it doesn’t downplay the significance of the first (and most important) word in an identifier just to work around a syntax limitation.

 

But instead of taking sides in a naming convention war, we can solve the problem in the language and remove the need to make any compromise.

 

new (FirstName string, LastName string)

{

set FirstName, LastName;

}

 

This lets us set individual properties named the same as constructor parameters.  It’s flexible enough to set some and consume other parameters differently, but when you want to set all parameters with matching member names, you can use the shortcut set all.  If that’s all the constructor needs to do, we can do away with the curly braces:

 

new (FirstName string, LastName string) set all;

 

If our Customer class contained a BirthDate property, we could use this constructor and pass in an initializer statement as a final parameter.

 

var dilbert = new Customer("Dilbert", "Smith", BirthDate = DateTime.Parse("7/4/1970"));

 

This works with multiple initializers.  Alternatively, we could use an initializer body after the parameter list:

 

var dilbert = new Customer("Dilbert", "Smith")

{

BirthDate = DateTime.Parse("7/4/1970")

};

 

Note how we have two places to supply data to a new object, if needed: the parameter list for simple, short values, and the initializer body for much larger assignments.


Another common construction pattern is for one or more constructors to call another constructor with a default set of properties.  Typically the constructor with the full list of parameters performs the actual work, while the shorter constructors call into the main one, passing in some default values and passing the others through.

 

new (EvaluateFunc Func<T>) new(null, null, EvaluateFunc);

 

new (BaseObject object, EvaluateFunc Func<T>) new(null, BaseObject, EvaluateFunc);

 

new (Name string, BaseObject object, EvaluateFunc Func<T>)

{

set all;

 

// do all the real work…

// …

}

 

Declarative Archetype: The Initializer Body

 

The initializer body mentioned above has a special structure in Archetype.  Member assignment statements can appear side-by-side with value expressions that are processed by a special function called value.  This can be used, among other things, to add items to a collection.  It’s best to see in an example:

 

var dilbert = new Customer

{

FirstName = "Dilbert",

LastName = "Smith",

BirthDate = DateTime.Parse("7/4/1970"),

 

new SalesOrder(OrderCode = "ORD012940"),

 

new SalesOrder

{

OrderCode = "ORD012941",

 

new SalesOrderLine(ItemCode = "S0139", Quantity = 3),

new SalesOrderLine(ItemCode = "S0142", Quantity = 1)

}

};

 

The first three lines of the initializer set members with assignment statements.  The next expression (new SalesOrder …) in the list creates an object, but there’s no assignment.  It returns a value, but where does it go?  Take a look at the value functions below for the answer:

 

Customer object

{

FirstName string;

LastName string;

Orders SalesOrder* = new;

Invoices Invoice* = new;

 

// formatted inline

value (Order SalesOrder) Orders += Order;

 

// formatted with full code block

value (Invoice Invoice)

{

Invoices += Invoice;

}

}

 

A Customer has several collections of things–Orders and Invoices here–and because there are two value functions in the class, any expressions of type SalesOrder or Invoice will be evaluated and their values passed to the appropriate value function. Expressions of other types will trigger a compile-time error.

 

The += and -= operators haven’t been shown before.  Their use is a very natural fit for stream and list types.  The += operator appends an object to a stream, and -= removes the first occurrence of that object.
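
To make this concrete, here is a rough C# sketch of my own showing how the dilbert initializer body above might lower. The actual compiler output isn’t specified in the article; the Lines collection on SalesOrder is assumed, as is the BirthDate property carried over from the earlier Customer example.

var dilbert = new Customer
{
    FirstName = "Dilbert",
    LastName = "Smith",
    BirthDate = DateTime.Parse("7/4/1970")
};

// Each untyped value expression is routed to the matching value function,
// which appends to the backing collection (Orders += Order).
var order1 = new SalesOrder { OrderCode = "ORD012940" };
dilbert.Orders.Add(order1);

var order2 = new SalesOrder { OrderCode = "ORD012941" };
order2.Lines.Add(new SalesOrderLine { ItemCode = "S0139", Quantity = 3 });   // Lines is an assumed member name
order2.Lines.Add(new SalesOrderLine { ItemCode = "S0142", Quantity = 1 });
dilbert.Orders.Add(order2);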

 

This simple addition of a value function in types (classes and structs) gives Archetype the ability to represent hierarchical structures in a clean, declarative way.  Sure it’s always been possible to format expressions similarly, but the syntactic trappings of imperative languages have made this difficult and unattractive at best, and in most real-world cases impractical.

 

When I experimented with creating a Future class, I came up with a pattern in C# to nest structures in a tree for large future expressions, but the need to match parentheses gets in the way and consumes too much attention that’s better focused on the logic itself:

 

Future<string> FuturePi = null, FutureOmega = null, FutureConcat = null, FutureParen = null;

 

var result = new Future<string>("bracket",
    () => Bracket(FutureParen),
    (FutureParen = new Future<string>("parenthesize",
        () => Parenthesize(FutureConcat),
        (FutureConcat = new Future<string>("concat",
            () => FuturePi + " < " + FutureOmega,
            (FuturePi = new Future<string>("pi", () => CalculatePi(10))),
            (FutureOmega = new Future<string>("omega", () => CalculateOmega()))
        ))
    ))
);

 

The difference finally occurred to me between setting a few simple members and defining larger, more structured content (including nested structures), which begs for a way to supply it without carrying closing parentheses down multiple lines or letting them build up into knots that must be carefully counted.  One ends up fidgeting with where to put them, and sometimes there’s no good answer.


Another feature we need to make this declarative notation ability robust is inline variable declaration and assignment.  Notice in the last example how several intermediary structures have variable names defined for them ahead of time, outside the expression. Writing that Future code, I felt it was unfortunate these variables couldn’t be defined inline as part of the expression.  Doing so would allow us to define any kind of structure we might see in XML or JSON, such as this XAML UI code.

 

new Canvas -> LayoutRoot

{

Height = Auto,

Width = Auto,

 

new StackPanel -> sp

{

Orientation = Vertical,

Height = 150,

Width = Auto,

 

Canvas.Top = 10,

Canvas.Left = 20,

 

with Canvas

{

Top = 10,

Left = 20,

},

 

with Canvas { Top = 10, Left = 20 },

 

Loaded += (sender, e)

{

Debug.WriteLine("StackPanel sp.Loaded running");

sp.ResizeTo(0.5 seconds, Auto, 200).Begin();

},

 

LayoutUpdated += HandleLayoutUpdated,

 

new TextBlock

{

FontSize = 18,

Text = "Title"

},

new TextBlock { Text = "Paragraph 1" },

new TextBlock { Text = "Paragraph 2" },

new TextBlock(Text = "Paragraph 3")

}

};

 

A few notes are needed here:

 

·   Wow, this looks a lot like XAML, but much friendlier to developers who have to actually read and edit it!  Yes, good observation.

·  Unlike XAML, every identifier here works with the all-important Rename refactoring, go to definition, find all references to, etc. This is great for reducing the amount of work to find relationships among things and manually update related files.

·  Also unlike XAML, code for event handlers can be defined here. I’m not saying you should cram all of your event handler logic here, but it could come in quite handy at times and I can’t see any reason to disable it. 

·  The with token is a custom operator (see Part 7) that provides access to attached properties through an initializer body. Custom extensions allow you to access these properties with a natural member-access style.

·  It hasn’t been possible to use generic classes in XAML. With UI specified in Archetype, this would be trivial, and I suspect they could be used to good effect in many ways. Of course, in doing this you’d lose support for the designers in VS and Blend, which would be awful.

·  Auto is simply an alias for double.NaN.

·  The -> custom operator in these expressions defines a variable and sets it to the value of the new object. The order of execution is:

1. Evaluate constructor parameters, if any are supplied.

2. Assign the object to the variable defined with ->, if supplied.

3. Set any fields or properties with assignment statements.

4. Evaluate value expressions, if supplied, and call the class’s value function with each one, if a value function has been defined.

5. Invoke any matching value function defined in class extensions.

 

By following this design, the example above can be translated into this C# code by the Archetype compiler:

 

var LayoutRoot = new Canvas()

{

Height = double.NaN,

Width = double.NaN

};

 

var sp = new StackPanel()

{

Orientation = Orientation.Vertical,

Height = 150.0,

Width = double.NaN

};

 

LayoutRoot.Children.Add(sp);

 

sp.SetValue(Canvas.TopProperty, 10.0);

sp.SetValue(Canvas.LeftProperty, 20.0);

 

sp.Loaded += (sender, e) =>
{
    Debug.WriteLine("StackPanel sp.Loaded running");
    sp.ResizeTo(0.5.seconds(), double.NaN, 200.0).Begin();
};

 

sp.LayoutUpdated += HandleLayoutUpdated;

 

sp.Children.Add(new TextBlock() { FontSize = 18, Text = "Title"});

sp.Children.Add(new TextBlock() { Text = "Paragraph 1"});

sp.Children.Add(new TextBlock() { Text = "Paragraph 2"});

sp.Children.Add(new TextBlock() { Text = "Paragraph 3"});

 

var VisualTree = LayoutRoot;

 

Compare the two approaches. The C# code is a typical example of imperative structure building, while the Archetype code is arguably as declarative as XAML, and with many advantages over XAML for developers.


Going back to the Future example, we could rewrite this in Archetype a few different ways.  I’ll present two.  In the first one, value functions are used to receive the future’s evaluation function as well as any Future objects the expression depends on.


new Future<string>("bracket") -> result
{
    () => Bracket(FutureParen),

    new Future<string>("parenthesize") -> FutureParen
    {
        () => Parenthesize(FutureConcat),

        new Future<string>("concat") -> FutureConcat
        {
            () => FuturePi + " < " + FutureOmega,

            new Future<string>("pi") -> FuturePi
            {
                () => CalculatePi(10)
            },

            new Future<string>("omega") -> FutureOmega
            {
                () => CalculateOmega()
            }
        }
    }
}


The shorter approach passes an evaluation delegate in as a parameter.


new Future<string>(() => Bracket(FutureParen)) -> result
{
    new Future<string>(() => Parenthesize(FutureConcat)) -> FutureParen
    {
        new Future<string>(() => FuturePi + " < " + FutureOmega) -> FutureConcat
        {
            new Future<string>(() => CalculatePi(10)) -> FuturePi,
            new Future<string>(() => CalculateOmega()) -> FutureOmega
        }
    }
}

 

The name string parameter is missing from the last example.  This was only for use during debugging.  Now what we have is a very direct description of futures that are dependent on other futures in a dependency graph.
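
For readers who want to experiment with something similar in C# today, here is a minimal sketch of a Future<T> class with roughly the shape the examples above assume: a debug name, an evaluation delegate, and the futures it depends on.  It’s my own guess at the structure, not the author’s actual implementation.

public class Future<T>
{
    private readonly string _name;                 // used only for debugging
    private readonly Func<T> _evaluate;
    private readonly Future<T>[] _dependencies;
    private T _value;
    private bool _hasValue;

    public Future(string name, Func<T> evaluate, params Future<T>[] dependencies)
    {
        _name = name;
        _evaluate = evaluate;
        _dependencies = dependencies;
    }

    public T Value
    {
        get
        {
            if (!_hasValue)
            {
                // Force dependencies first so the evaluation delegate can use their values.
                foreach (var dependency in _dependencies)
                {
                    var ignored = dependency.Value;
                }

                _value = _evaluate();
                _hasValue = true;
            }

            return _value;
        }
    }

    // Lets a Future<T> appear where a T is expected, as in () => FuturePi + " < " + FutureOmega.
    public static implicit operator T(Future<T> future)
    {
        return future.Value;
    }
}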

Summary

Object construction is a crucial part of an object-oriented language, and Archetype offers advanced options for constructing arbitrary object graphs and initializing even complicated state in a single expression.  These fluent, declarative syntax features are ideal for representing structures such as XAML UI, state machines, dependency graphs, and much more.

XAML is a language.  The question this work has me asking is: do we really need a separate language if our general purpose language supports highly declarative syntax? It’s a provocative question without an easy answer, but it seems clear that many DSLs could emerge within a language that so richly supports composition.

With the ability to define arbitrarily complex structures in code—from declarative object graphs to rich functional expressions—it’s hard to think of a situation that would be too difficult to model and build an API or application around.

Posted in Archetype Language, Data Structures, Design Patterns, Language Innovation, Silverlight, User Interface Design, WPF | 2 Comments »

The Archetype Language (Part 7)

Posted by Dan Vanderboom on September 27, 2010

Overview

This is part of a continuing series of articles about a new .NET language under development called Archetype.  Archetype is a C-style (curly brace) functional, object-oriented (class-based), metaprogramming-capable language with features and syntax borrowed from many languages, as well as some new constructs.  A major design goal is to succinctly and elegantly implement common patterns that normally require a lot of boilerplate code which can be difficult, error-prone, or just plain onerous to write.

You can follow the news and progress on the Archetype compiler on twitter @archetypelang.

Links to the individual articles:

Part 1 – Properties and fields, function syntax, the me keyword

Part 2 – Start function, named and anonymous delegates, delegate duck typing, bindable properties, composite bindings, binding expressions, namespace imports, string concatenation

Part 3 – Exception handling, local variable definition, namespace imports, aliases, iteration (loop, fork-join, while, unless), calling functions and delegates asynchronously, messages

Part 4 – Conditional selection (if), pattern matching, regular expression literals, agents, classes and traits

Part 5 – Type extensions, custom control structures

Part 6 – If expressions, enumerations, nullable types, tuples, streams, list comprehensions, subrange types, type constraint expressions

Part 7 Semantic density, operator overloading, custom operators

Part 8 – Constructors, declarative Archetype: the initializer body

Part 9 – Params & fluent syntax, safe navigation operator, null coalescing operators

Conceptual articles about language design and development tools:

Language Design: Complexity, Extensibility, and Intention

Reimagining the IDE

Better Tool Support for .NET

Semantic Density

As an avid reader growing up, I noticed that my knowledge and understanding of a topic grew more easily the faster I read.  Instead of going through a chapter every day or two, which puts weeks or months between the front and back covers, I devoured 200-300 pages in a night, getting through the largest books in a couple days.  And in reading multiple books on a subject back-to-back, it was easier to find relationships and tie together concepts for things that were still so fresh in my memory.

In my study of linguistics, I learned that legends like Noam Chomsky could learn hundreds of languages; the previous librarian at the Vatican could read 97.  Bodmer’s excellent book The Loom of Language attempts to teach 10 languages at once, and it seems that the more languages you learn, the easier it is to pick up others.

What these examples have in common is semantic density.  It might seem from what I’ve said that this would be like drinking from a firehose which only the most gifted could endure, but I would argue that intensely-focused learning puts our minds in a highly alert and receptive condition.  In such a state, being able to draw more connections between statements and ideas, we are better able to comprehend the whole in a holistic, intuitive way.

Code Example

Semantic density is important in code, too.  With a pattern like INotifyPropertyChanged, formatted as I have it below, it takes 12 lines of code, 13 if you separate the backing field from the property with a blank line: in the ballpark of a dozen lines either way.  (This is additional explanation for a feature described in part 2 of this series.)

1:   string _Display;
2:   public string Display
3:   {
4:       get { return _Display; }
5:       set
6:       {
7:           _Display = value;
8:
9:           if (PropertyChanged != null)
10:              PropertyChanged(this, new PropertyChangedEventArgs("Display"));
11:      }
12:  }

Does the ability to inform external code of changes really seem like it should take a dozen lines?  It can be somewhat compressed by defining a SetProperty method:

1:   string _Display;
2:   public string Display
3:   {
4:       get { return _Display; }
5:       set { SetProperty("Display", ref _Display, value); }
6:   }


This chops the line count in half, bringing it down to six lines–seven if you include a space above or below to separate it from other members.  On my monitor, that means I can see about eight property definitions at a time.  Now that’s usually enough, but I’ve written a few custom controls that have upwards of 30 properties.  For a new pair of eyes, getting the gist of that class is going to involve a lot of scrolling, never seeing more than a small slice at a time of a much larger picture.  The frame of time I mentioned earlier in regard to studying a subject is analogous here to the frame of space.  Seeing 20% of a class at any time lends itself to faster grokking than seeing a 2% sliver at a time.  Our minds, marvelous as they are, do have limits.  Lowering semantic density, such as by spreading meaning over large distances or time spans, makes us work harder to accomplish the same task, trying to put all the pieces together, and the differences are often dramatic.
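
For completeness, the SetProperty helper referenced above isn’t shown in the article; a minimal sketch, assuming the class implements INotifyPropertyChanged and exposes a PropertyChanged event, might look like this:

void SetProperty<T>(string propertyName, ref T field, T value)
{
    // Skip the notification if nothing actually changed.
    if (EqualityComparer<T>.Default.Equals(field, value))
        return;

    field = value;

    if (PropertyChanged != null)
        PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
}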

Just as nouns can be modified by adjectives in natural languages, types in Archetype support user-defined type modifiers.  By defining a new type modifier called bindable to encapsulate the INotifyPropertyChanged pattern, we can collapse the above example into a single line:

1:   Display bindable string;

 

I have no problem stacking these properties right on top of one another.  Although expressed in a highly dense form, it’s actually easier to understand at a glance in these three simple tokens than in the half-dozen lines above, which beg for interpretation to assemble their meaning.  Even if we have 30 of these, they’d all fit on one screen, and the purpose of the class as a whole is quickly gathered.

 

One thing I noticed in developing the animation library Animate.NET is how much code I saved: not having to worry about the details of storyboard creation, key frames, and so on.  It allows you to get right to the point of stating your intention.  Often a library like this is enough, but once in a while language extensibility is a much better approach; and when it is, not having the option can be painful and time consuming.

 

Custom Operators & Operator Overloading

As in most languages, Archetype supports two forms of syntax for operations: functions and operators.  Functions are invoked by including a pair of parentheses after their name that contain any arguments to pass in, whereas operators appear adjacent to or between sub-expressions. 

In C#, some operators are available for overloading.  Archetype supports these operator overloads by using the same names for them.  This allows Archetype to use operators defined in C# and to expose supported operators to C# consumers.

However, Archetype goes one step further and allows you to define custom operators.  There are three basic kinds of custom operator:

  1. unary prefix
  2. unary suffix
  3. binary

If we wanted an easy way to duplicate strings in C#, we might define an extension method called Dup, but in Archetype we also have this option:

// "ABC" dup 3 == "ABCABCABC"

binary dup string (left string, right int)

return string.Repeat(left, right)


The expression parser sees “ABC”, identifies it as a string value, and then looks at the next token.  If the dot operator were found (.), it would look for a member of string or an extension member on string, but because the next token isn’t the dot operator, it looks up the token in the operator table.  An operator called dup is defined with a left string argument and a right int argument, matching the expression.  If the operator were more complicated, it would have a curly-brace code block, but because it’s a single return statement, that’s optional.
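
For comparison, the C# extension-method route mentioned above might look like the sketch below. Since string.Repeat doesn’t exist in the BCL, this version builds on Enumerable.Repeat (System.Linq); the Dup name is the hypothetical one from the text.

public static class StringExtensions
{
    // "ABC".Dup(3) == "ABCABCABC"
    public static string Dup(this string value, int count)
    {
        return string.Concat(Enumerable.Repeat(value, count));
    }
}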

Archetype operators aren’t limited to letters, though.  We can also use symbols (but not numbers) in our operator names.  Here’s a “long forward arrow” (compiled with name DashDashGreaterThan) that allows us to write a single function parameter before the function name itself:

// "Hey" --> Console.WriteLine;

binary<T> --> void (left T, right Action<T>)

right(left);


Note that the generic type parameter is attached to the binary keyword.  I arrived at this placement through much experimentation.  Names like --><T> are difficult to read and can be trickier to parse.

There is a special binary operator called adjacent which you can think of as an “invisible operator” capable of inserting an operation between two sub-expressions.  In the following example, two adjacent strings are interpreted as a concatenation of the two.

// "123" "45" == "12345"

binary adjacent string (left string, right string)

return left + right;

With custom operators, what was originally part of the language can now be defined in a library instead.  This greatly simplifies the language.  Just as methods can be shadowed to override them, Archetype will also need some ability to block or override operators that would otherwise be imported along with a namespace.

The next operator we’ll look at is the unary suffix.  The example consists of units of time: minutes and seconds.

// 12 minutes == TimeSpan.FromMinutes(12)

unary suffix minutes TimeSpan (short, int)

return TimeSpan.FromMinutes((int)value);

 

// 3 seconds == TimeSpan.FromSeconds(3)

unary suffix seconds TimeSpan (short, int)

return TimeSpan.FromSeconds((int)value);


With support for extension properties, we could have also said 12.minutes or 3.seconds, which is already better than C#’s 12.minutes() and 3.seconds(), but by defining these tokens as unary suffix operators, we can eliminate even the dot operator and make it that much more fluent and natural to type (without losing any syntactic precision).  Notice how a list of types is provided instead of a parameter list.  Unary operators by definition have only a single argument, but we often want them to operate on several different types.
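
As a point of reference, the C# equivalents mentioned (12.minutes(), 3.seconds()) can be written as ordinary extension methods like the sketch below; the Archetype operators above simply drop the dot and the parentheses. The lower-case method names mirror the article’s usage rather than .NET naming conventions.

public static class TimeExtensions
{
    public static TimeSpan minutes(this int value)    { return TimeSpan.FromMinutes(value); }
    public static TimeSpan seconds(this int value)    { return TimeSpan.FromSeconds(value); }
    public static TimeSpan seconds(this double value) { return TimeSpan.FromSeconds(value); }
}

// 12.minutes()  == TimeSpan.FromMinutes(12)
// 1.5.seconds() == TimeSpan.FromSeconds(1.5)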

Here’s a floating point operator for seconds.

// 2.5 seconds == (2 seconds).Add(0.5 * 1000 milliseconds)

// WholeNumber and Fraction are extension properties on float, double, and decimal

unary suffix seconds TimeSpan (float, double, decimal)

return value.WholeNumber seconds + value.Fraction * 1000 milliseconds;

We can use the adjacent operator on TimeSpans the same way we did for strings above.

// 4 minutes 10 seconds == (4 minutes).Add(10 seconds)

binary adjacent TimeSpan (left TimeSpan, right TimeSpan)

return left.Add(right);

Now let’s combine the use of a few of these operators into a single example.

alias min = minutes;

alias s = seconds;

var later = DateTime.Now + 2 min 15 s;

// Schedule is an extension method on DateTime

later.Schedule

{

// schedule this to run later

}

We can also define our DateTime without assigning its value to a variable.

(DateTime.Now + 10 seconds).Schedule

{

}

 

(DateTime.Now + 10 seconds).Schedule (Repeat=10 seconds)

{

}

Repeat is an optional parameter of Schedule, defaulting to TimeSpan.Zero, meaning “don’t repeat”.
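
The Schedule extension method itself isn’t shown in the article. Purely as an illustration of its assumed shape (my sketch, not the author’s code), it could be built on a threading timer, with a Repeat of TimeSpan.Zero meaning the action runs only once:

public static class DateTimeExtensions
{
    public static void Schedule(this DateTime when, Action action, TimeSpan Repeat = default(TimeSpan))
    {
        var delay = when - DateTime.Now;
        if (delay < TimeSpan.Zero)
            delay = TimeSpan.Zero;

        // A period of -1 ms means "fire once"; otherwise fire repeatedly at the Repeat interval.
        var period = Repeat == TimeSpan.Zero ? TimeSpan.FromMilliseconds(-1) : Repeat;

        // A real implementation would hold onto the timer so it isn't garbage collected.
        var timer = new System.Threading.Timer(_ => action(), null, delay, period);
    }
}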

Additional Notes

When more than one operator is valid in a given position, the most specific operator (in terms of its parameter types) is used.  If there’s any ambiguity or overlap remaining, a compiler error is issued.

Unary operators will take precedence over binary operators, but it hasn’t been determined yet what precedence either one will actually have in relation to all of the other operators, or whether this will be specified in the operator definition.

Because of this design for custom operators, I’ve been able to remove things from the language itself and include them as operator definitions in a library.

Summary

This article provided some deeper explanation of previously covered material and introduced Archetype’s very powerful custom operator declaration syntax.  The next article will cover some special operators built into Archetype, as well as property path syntax, something I came up with a while back to reference identifiers in a way that is impervious to both refactoring and obfuscation.

I’m curious to read your feedback on custom operators in particular, so keep the great comments coming!

Posted in Archetype Language, Composability, Data Structures, Design Patterns, Language Innovation | 1 Comment »

Four Traversal Patterns for Tree<T>

Posted by Dan Vanderboom on April 5, 2010

[Updated 8/14/2014] The source code for this library can be found here at GitHub. Also check out my blog post announcing it.

This is an update to my original article on a non-binary tree data structure.  After receiving multiple requests to complete this, here it is.  The updated source code can be downloaded here.

To illustrate the four possible traversal types, the following tree data structure will be used in all of my examples.

[Figure: the example tree, with root a; a’s children b and c; b’s children d and e; c’s children f and g.]

The enum TreeTraversalType has two possible values: DepthFirst and BreadthFirst.  The other enum involved, TreeTraversalDirection, determines which end of the tree it starts from: TopDown or BottomUp.  There are four possible combinations of these two values, which will each be presented separately.

Depth-First, Top-Down

This traversal strategy involves starting at the root and digging as deep into the tree as possible before backtracking.  The order of nodes yielded by our example tree structure would be a-b-d-e-c-f-g.
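
Here is a sketch of how the depth-first, top-down enumeration could be implemented iteratively with a stack; the TreeNode<T> type and its Children collection are assumptions of mine, and this isn’t necessarily how the library does it internally.

public static IEnumerable<TreeNode<T>> DepthFirstTopDown<T>(TreeNode<T> root)
{
    var stack = new Stack<TreeNode<T>>();
    stack.Push(root);

    while (stack.Count > 0)
    {
        var node = stack.Pop();
        yield return node;                          // parent before children: a-b-d-e-c-f-g

        // Push children in reverse so the leftmost child is yielded first.
        foreach (var child in node.Children.Reverse())
            stack.Push(child);
    }
}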


Depth-First, Bottom-Up

Because we’re moving bottom-up in this case, the first thing we have to do is dive to the bottom of the first branch we find.  We skip past a and b, and find d with no children, so d is the first node we’ll yield.  Its peer e comes next.  From e, we move up the tree to b.  Because a is the root, it has to be the last node yielded, so we dive into the c branch next, yield f and g, then c, and finally a.  The order is: d-e-b-f-g-c-a.


Breadth-First, Top-Down

When we traverse breadth-first, we’re moving through the tree in levels.  With Top-Down, we start with the root a, then move to the second level with b and c, and then finally move to the third level to yield d-e-f-g.
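
A queue gives the breadth-first, top-down order, and as the next section notes, the bottom-up variant can be produced by reversing this sequence. Again, the TreeNode<T> type and Children collection are assumptions of mine.

public static IEnumerable<TreeNode<T>> BreadthFirstTopDown<T>(TreeNode<T> root)
{
    var queue = new Queue<TreeNode<T>>();
    queue.Enqueue(root);

    while (queue.Count > 0)
    {
        var node = queue.Dequeue();
        yield return node;                          // level by level: a, then b-c, then d-e-f-g

        foreach (var child in node.Children)
            queue.Enqueue(child);
    }
}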


Breadth-First, Bottom-Up

The final traversal is the one I had the most trouble with, and I ended up cheating a little by reversing the Breadth-First, Top-Down traversal.


Conclusion

Now, with a call to GetEnumerable on a node object, you can specify which of these four traversal patterns to use.  Here is an example of how that code looks:

foreach (SimpleTreeNode<string> node in 
    root.GetEnumerable(TreeTraversalType.DepthFirst, TreeTraversalDirection.TopDown))
{
    // ...
}

 

Posted in Algorithms, Data Structures | Tagged: | 4 Comments »

Animate.NET: Fluent Animation Library for Silverlight & WPF

Posted by Dan Vanderboom on December 31, 2009

Overview

The basic idea—in Silverlight and WPF—that an animation is just a change in some DependencyProperty over time is simple and powerful.  However, at that level of detail, the API for defining and managing complex animations involves writing a ton of code.  There are code-less animations, of course, such as those created in the Visual State Manager, but when you want to perform really dynamic animations, state-based animations can become impractical or outright impossible.

In response to this, I’ve published my fluent-style code-based animation library for Silverlight and WPF on CodePlex at http://animatedotnet.codeplex.com.  This is an API for making code-based animations intuitive and simple, without having to write dozens or even hundreds of lines of code to create and configure storyboards and keyframes, or to perform repetitive math to calculate alignment, rotation, and other low-level details that distract from the original purpose of the animation.  In one example, I counted over 120 lines of standard storyboard code, and with the abstractions and fluent API I’ve come up with, reduced that down to half a dozen lines of beautiful, pure intent.  As a result, it’s much more readable and faster to write.
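
To give a sense of the boilerplate being abstracted away, here is roughly what resizing a single element (some rect with explicit Width and Height) looks like with raw Storyboard code. This is a simplified sketch of my own, not taken from the article; compare it with the one-line ResizeTo call shown later.

var storyboard = new Storyboard();

var widthAnimation = new DoubleAnimation { To = 150, Duration = TimeSpan.FromSeconds(1.5) };
Storyboard.SetTarget(widthAnimation, rect);
Storyboard.SetTargetProperty(widthAnimation, new PropertyPath("Width"));
storyboard.Children.Add(widthAnimation);

var heightAnimation = new DoubleAnimation { To = 150, Duration = TimeSpan.FromSeconds(1.5) };
Storyboard.SetTarget(heightAnimation, rect);
Storyboard.SetTargetProperty(heightAnimation, new PropertyPath("Height"));
storyboard.Children.Add(heightAnimation);

storyboard.Begin();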

I was initially inspired by Nigel Sampson from his blog article on building a Silverlight animation framework.  The code on his site was a good first step in creating higher-level abstractions, going above DoubleAnimation to define PositionAnimation and RotationAnimation, and I decided to build on top of that, adding other abstractions as well as a fluent-style API in the form of extension methods that hide even those classes.

Concepts

All Animate.NET animations derive from the Animation class, which tracks the UI element being modified, the duration of the animation, and whether it has completed, and fires an event when the animation completes.  It manages building and executing the Storyboard object so you don’t have to.

Subclasses of Animation currently include OpacityAnimation, PositionAnimation, RotationAnimation, SizeAnimation, TransformAnimation, and GroupAnimation.  TransformAnimation is the parent class of RotateAnimation, and in the future ScaleAnimation and TranslateAnimation may also be included.

GroupAnimation is special because it allows you to combine multiple animations.  These groups can be nested and each group can include a wait time before starting (to stagger animations).

The Animate static class includes all of the extension methods that make up the fluent API, and the intention is for this to be the master class for building complex group animations.  Most of these methods come in pairs: you can RotateTo a specific angle or RotateBy relative to your current angle; MoveTo a specific location or MoveBy relative to your current position, etc.

Here’s the list so far:

  • Group and Wait
  • Fade, FadeIn, FadeOut, and CrossFade
  • RotateTo and RotateBy
  • ResizeTo and ResizeBy
  • MoveTo and MoveBy

Examples

Animate.NET can best be understood and appreciated with examples.

Basic Animations

Let’s say you want to resize an element to a new size.  Normally you’d need a storyboard and two DoubleAnimations: one for x and another for y, and for each you’d need to set several properties.  With Animate.NET, you can define and execute your animation beginning with a reference to the element you want to animate:

var rect = new Rectangle()
{
    Height = 250,
    Width = 350,
    Fill = new SolidColorBrush(Colors.Blue)
};
MainStage.Children.Add(rect);
rect.SetPosition(50, 50);

rect.ResizeTo(150, 150, 1.5.seconds()).Begin();


Only a single line of code, the last one, is needed to resize the rect element.

Note the call to Begin.  Without this, the ResizeTo (and all other fluent API calls) will return an object that derives from Animation but will not run.  We can, if needed, obtain a reference to the animation and begin the animation separately, like this:

var anim = rect.ResizeTo(150, 150, 1.5.seconds());
anim.Begin();


This allows us to compose animations into groups and manipulate animations after they’ve started, and is very similar to how LINQ queries are composed and later executed.

You’ll also notice the use of several other extension methods:

  • SetPosition – sets Left and Top currently.  In future versions, you’ll be able to define a registration point for positioning that may be located elsewhere, such as the center of the element.
  • seconds() – along with milliseconds, minutes, etc., allows you to specify a TimeSpan object more fluently.  I saw this in some Ruby code and loved it.  If only the C# team would implement extension properties, it would look even cleaner (eliminate the need for parentheses).
  • Center() and GetCenter() – centers an element immediately, and gets a Point object representing the center of the object respectively.  Not used in these examples, but worth mentioning.

Group Animations

Next I’ll show an example of a group animation using the Animate class’s Group method:

Animate.Group(
    rect.RotateBy(rect.GetCenter(), -90, 1.seconds()),
    rect.FadeOut(1.seconds())
    ).Begin();


This group animation contains two child animations: one to rotate the rectangle 90 degrees counterclockwise, and the other to fade the rectangle out (make it completely transparent).  The method takes a params array, so you can include as many animations as you like.

Because the animations listed are peers in the group, they begin running at the same time.  Often you will want to stagger animations, however.  You can accomplish this with the Wait method, which is the Group method in disguise (it simply includes an additional TimeSpan parameter).

Animate.Group(
    rect.RotateBy(rect.GetCenter(), -90, 1.5.seconds()),
    rect.FadeIn(0.5.seconds()),
    Animate.Wait(1.seconds(),
        rect.FadeOut(0.5.seconds())
        )
    ).Begin();


This animation rotates the rectangle for 1.5 seconds.  During the first 0.5 seconds, it fades in; during the last 0.5 seconds, it fades out.  Only one element, rect, was used in this example, but any number of UI elements can participate.

Animations can be nested and staggered to arbitrary complexity.  Because all animations derive from the Animation class, you can write properties or methods to encapsulate group animations, and assemble them programmatically before executing them.  Because all the ceremony of storyboards and keyframes is abstracted away, it’s very easy to see what’s happening in this code in terms of the end result.

Method Chaining

One of the benefits of a fluent API is the ability to chain together methods that modify a primary object.  For example, the Animation class defines a WhenComplete method that can be used to respond to the completion of an animation.  In the samples project on CodePlex, I create new UI objects at the beginning of each animation, and remove them afterward:

rect.ResizeTo(150, 150, 1.5.seconds())
    .WhenComplete(a =>
    {
        Thread.Sleep(2000);
        MainStage.Children.Remove(rect);
    })
    .Begin();


I pause for a couple seconds after displaying the final result before removing that object from its container.

Extension methods will be used more in the future for this library.  Uses will include modifying the animation to apply easing functions, responding to collision detection (by stopping or reversing), and so on.  This might end up looking something like:

rect.ResizeTo(150, 150, 1.5.seconds())
    .Ease(EasingFunction.Cubic(0.5))
    .StopIf(a => Animate.Collision(GetCollisionObjects()))
    .Begin();
 

Feedback and Future Direction

I’m releasing this as a very early experiment, and I’m interested in your feedback on the library and its API.

What kind of functionality would you like to see added?  Do the method names and syntax feel right?  What major, common animation scenarios have I omitted?  What other kinds of samples would you like to see?

Download the library and samples and give it a try!

Posted in Algorithms, Animation, Composability, Data Structures, Design Patterns, Dynamic Programming, Fluent API, Silverlight, WPF | 5 Comments »

Problems with RIA Services (Feedback for July 2009 CTP)

Posted by Dan Vanderboom on November 9, 2009

RIA Services (new home page) is a collection of tools and libraries for making Rich Internet Applications, especially line of business applications, easier to develop.  Brad Abrams did a great presentation of RIA Services at MIX 2009 that touches on querying, validation, authentication, and how to share logic between the server and client sides.  Brad also has a huge series of articles (26 as I write this) on using Silverlight and RIA Services to build a realistic application.

I love the concept of RIA Services.  Brad and his team have done a fantastic job of identifying the critical issues for LOB systems and have the right idea to simplify those common data access tasks through the whole pipeline from database to UI controls, using libraries, Visual Studio tooling, or whatever it takes to get the job done.

So before I lay down some heavy criticism of RIA Services, take into consideration that it’s still a CTP and that my scenario pushes the boundaries of what was likely conceived of for this product, at least for such an early stage.

Shared Data Model with WPF & Silverlight Clients

The cause of so much of my grief with RIA Services has been my need to share a data model, and access to a shared database, across WPF as well as Silverlight client applications.  Within the constraints of this situation, I keep running into problem after problem while trying to use RIA Services productively.

The intuitive thing to do is: define a single data model project that compiles to a single assembly, and then reference that in my Silverlight and non-Silverlight projects.  This would be a 100% full-fidelity shared data model.  As long as the code I wrote was a subset of both Silverlight and normal .NET Frameworks (an intersection), we could share identical types and write complex validation and model manipulation logic, all without having to constrain ourselves to work within the limitations of a convoluted code generation scheme.  Back when I wrote Compact Framework applications, I did this with great success despite the platform gap, and I didn’t have anything like RIA Services to help.

Incompatible Assemblies

Part of the problem arises because Silverlight assemblies are incompatible with non-Silverlight assemblies.  A lot of what RIA Services is doing is trying to find a way around this limitation: picking up attributes and code files from one project and inserting that code into the Silverlight project with a build action.  This Visual Studio “magic” has been criticized for its weakness in dealing with multiple-solution systems where Visual Studio can’t update the client because it’s not loaded, and I’ve heard there’s work being done to address this, but for my current needs, this magic aspect of it isn’t a problem.  The specifics of how it works, however, are.

Different Data Access APIs

Accessing entities requires a different API in Silverlight via RiaContextBase versus ObjectContext elsewhere.  Complex logic in the model (for validation and other actions against the model) requires access to other entities and therefore access to the current object context, but the context APIs for Silverlight and WPF are very different.  Part of this has to do with Silverlight’s inability to make synchronous calls to the server.

In significantly large systems that I build, I use validation logic such as “this entity is valid if it’s pointing to an entity of a different type that contains a PropertyX value of Y”.  One of my tables stores a tree of data, so I have methods for loading entire subtrees and ensuring that no circular references exist.  For these kinds of tasks, I need access to the data context in basic validation methods.  When I delete nodes from a tree, I need to delete child nodes, so update logic is part of the model that needs to be the same in every client.  I don’t want to define that multiple times for multiple clients.  I like to program very DRY.  In other words, I find myself in need of a shared model.

RIA Services doesn’t provide anything like type equivalence for a shared model, however.  Data model classes in Silverlight inherit from Entity, but EntityObject in WPF.  In the RIA Services domain context, we RaiseDataMemberChanged, but in a normal EF object context, we need to ReportPropertyChanged.  In WPF, I can call MyEntity.Load(MergeOption.PreserveChanges), but in Silverlight there’s no Load method on the entity and no MergeOption enum.  In WPF I can query against context.SomeEntitySet, but in Silverlight you would query against context.GetSomeEntitySetQuery() and then execute the query with another method call.

This chasm of disparity makes all but the simplest shared model logic impractical and frustrating.  The code generation technique, though good in principle, keeps getting in the way.  For example, I have both parameterless and parameterized constructors in my entity classes.  This works great in my WPF client, but when this code is synchronized to my Silverlight client, I get an error because the Silverlight-side entity class is generated in two parts: in the hidden partial class, a parameterless constructor is generated which calls partial method OnCreated; and in the visible partial class, the constructor method I defined on the server is dumped into another file, so I have duplicate constructors.  If I remove the parameterless constructor from the server side, I get an error because my entity class requires a parameterless constructor (and defining a non-default constructor effectively removes the default one from the resulting type unless it’s explicitly defined).  I thought I could define the partial method OnCreated and put my construction logic in there, but the partial method is only defined on the client side.  That means sharing construction logic consists of copying and pasting the OnCreated method across the various clients—far from an ideal solution.

Entity Data Model Required to be in Web Project

Another strategy I attempted was to define the .edmx file and my partial class extensions in a class library, and then reference that from the web project.  I could define the LinqToEntitiesDomainService<MyDataContext>, but sharing entity class code (by generating code in the Silverlight project) isn’t possible unless the .edmx file and partial class extensions are defined in the web project itself.  This would mean that my WPF client would have to reference a web project for data access, which by itself seems wrong.  (Or making a copy of the data model, which is worse.)  It would be better for the WPF client to talk to the same domain service as the Silverlight client, but RIA Services doesn’t give you an option to link that web project to a non-Silverlight project, so again I ran into a brick wall.

So Don’t Do That

The kind of advice I’m getting for this is, “so don’t do that”.  In other words, don’t write complex validation logic in the model or otherwise try to access the data context; don’t write parameterized constructors; don’t aim for 100% type fidelity across all endpoints of a system; don’t try to share data models with Silverlight and non-Silverlight projects, etc.  But I see the potential for RIA Services, so I have to push for these things unless I hear really convincing arguments against them (or compelling alternatives).

Conclusion

The fact that there are different data contexts and data item definitions within those contexts imposes a burden on the developer to use different techniques for each environment, and creates challenges for centralizing data model logic and reusing equivalent logic across different kinds of clients.  My gut feeling is that RIA Services in its current form has some fundamental design flaws that will need to be addressed, taking into consideration systems with a mix of Silverlight, WPF, and other clients, before it becomes a truly robust data access platform.

Posted in Data Structures, Design Patterns, Distributed Architecture, LINQ, RIA Services, Software Architecture | 3 Comments »

Better Tool Support for .NET

Posted by Dan Vanderboom on September 7, 2009

Productivity Enhancing Tools

Visual Studio has come a long way since its debut in 2002.  With the imminent release of 2010, we’ll see a desperately-needed overhauling of the archaic COM extensibility mechanisms (to support the Managed Package Framework, as well as MEF, the Managed Extensibility Framework) and a redesign of the user interface in WPF that I’ve been pushing for and predicted as inevitable quite some time ago.

For many alpha geeks, the Visual Studio environment has been extended with excellent third-party, productivity-enhancing tools such as CodeRush and Resharper.  I personally feel that the Visual Studio IDE team has been slacking in this area, providing only very weak support for refactorings, code navigation, and better Intellisense.  While I understand their desire to avoid stepping on partners’ toes, this is one area I think makes sense for them to be deeply invested in.  In fact, I think a new charter for a Developer Productivity Team is warranted (or an expansion of their team if it already exists).

It’s unfortunately a minority of .NET developers who know about and use these third-party tools, and the .NET community as a whole would without a doubt be significantly more productive if these tools were installed in the IDE from day one.  It would also help to overcome resistance from development departments in larger organizations that are wary of third-party plug-ins, due perhaps to the unstable nature of many of them.  Microsoft should consider purchasing one or both of them, or paying a licensing fee to include them in every copy of Visual Studio.  Doing so, in my opinion, would make them heroes in the eyes of the overwhelming majority of .NET developers around the world.

It’s not that I mind paying a few hundred dollars for these tools.  Far from it!  The tools pay for themselves very quickly in time saved.  The point is to make them ubiquitous: to make high-productivity coding a standard of .NET development instead of a nice add-on that is only sometimes accepted.

Consider just from the perspective of watching speakers at conferences coding up samples.  How many of them don’t use such a tool in their demonstration simply because they don’t want to confuse their audience with an unfamiliar development interface?  How many more demonstrations could they be completing in the limited time they have available if they felt more comfortable using these tools in front of the masses?  You know you pay good money to attend these conferences.  Wouldn’t you like to cover significantly more ground while you’re there?  This is only likely to happen when the tool’s delivery vehicle is Visual Studio itself.  Damon Payne makes a similar case for the inclusion of the Managed Extensibility Framework in .NET Framework 4.0: build it into the core and people will accept it.

The Gorillas in the Room

CodeRush and Resharper have both received recent mention in the Hanselminutes podcast (episode 196 with Mark Miller) and in the Deep Fried Bytes podcast (episode 35 with Corey Haines).  If you haven’t heard of CodeRush, I recommend watching these videos on their use.

For secondary information on CodeRush, DXCore, and the principles with which they were designed, I recommend these episodes of DotNetRocks:

I don’t mean to be so biased toward CodeRush, but it’s the tool I’m personally familiar with, it has a broader range of functionality, and it seems to get the majority of press coverage.  However, those who do talk about Resharper speak highly of it, so I recommend you check out both of them to see which one works best for you.  But above all: go check them out!

Refactor – Rename

Refactoring code is something we should all be doing constantly to avoid the accumulation of technical debt as software projects and the requirements on which they are based evolve.  There are many refactorings in Visual Studio for C#, and many more in third-party tools for several languages, but I’m going to focus here on what I consider to be the most important refactoring of them all: Rename.

Why is Rename so important?  Because it’s so commonly used, and it has such far-reaching effects.  It is frequently the case that we give poor names to identifiers before we clearly understand their role in the “finished” system, and even more frequent that an item’s role changes as the software evolves.  Failure to rename items to accurately reflect their current purpose is a recipe for code rot and greater code maintenance costs, developer confusion, and therefore buggy logic (with its associated support costs).

When I rename an identifier with a refactoring tool, all of the references to that identifier are also updated.  There might be hundreds of references.  In the days before refactoring tools, one would accomplish this with Find-and-Replace, but this is dangerous.  Even with options like “match case” and “match whole word”, it’s easy to rename the wrong identifiers, rename pieces of string literals, and so on; and if you forget to set these options, it’s worse.  You can go through each change individually, but that can take a very long time with hundreds of potential updates and is a far cry from a truly intelligent update.

Ultimately, the intelligence of the Rename refactoring provides safety and confidence for making far-reaching changes, encouraging more aggressive refactoring practices on a more regular basis.

Abolishing Magic Strings

I am intensely passionate about any tool or coding practice that encourages refactoring and better code hygiene.  One example of such a coding practice is the use of lambda expressions to select identifiers instead of using evil “magical strings”.  From my article on dynamically sorting Linq queries, the use of “magic strings” would force me to write something like this to dynamically sort a Linq query:

Customers = Customers.Order("LastName").Order("FirstName", SortDirection.Descending);

The problem here is that “LastName” and “FirstName” are oblivious to the Rename refactoring.  Using the refactoring tool might give me a false sense of security in thinking that all of my references to those two fields have been renamed, leading me to The Pit of Despair.  Instead, I can define a function and use it like the following:

public static IOrderedEnumerable<T> Order<T>(this IEnumerable<T> Source, 
    Expression<Func<T, object>> Selector, SortDirection SortDirection)
{
    return Order(Source, (Selector.Body as MemberExpression).Member.Name, SortDirection);
}

Customers = Customers.Order(c => c.LastName).Order(c => c.FirstName, SortDirection.Descending);

This requires a little understanding of the structure of expressions to implement, but the benefit is huge: I can now use the refactoring tool with much greater confidence that I’m not introducing subtle reference bugs into my code.  For such a simple example, the benefit is dubious, but multiply this by hundreds or thousands of magic string references, and the effort involved in refactoring quickly becomes overwhelming.
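
For context, the string-based Order overload that the lambda version delegates to comes from the linked article and isn’t reproduced here; a reflection-based sketch of it (using the same SortDirection enum as above) might look like this:

public static IOrderedEnumerable<T> Order<T>(this IEnumerable<T> Source,
    string PropertyName, SortDirection SortDirection)
{
    var property = typeof(T).GetProperty(PropertyName);
    Func<T, object> key = item => property.GetValue(item, null);

    return SortDirection == SortDirection.Ascending
        ? Source.OrderBy(key)
        : Source.OrderByDescending(key);
}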

Coding in this style is most valuable when it’s a solution-wide convention.  So long as you have code that strays from this design philosophy, you’ll find yourself grumbling and reaching for the inefficient and inelegant Find-and-Replace tool.  The only time it really becomes an issue, then, is when accessing libraries that you have no control over, such as the Linq-to-Entities and the Entity Framework, which makes extensive use of magic strings.  In the case of EF, this is mitigated somewhat by your ability to regenerate the code it uses.  In other libraries, it may be possible to write extension methods like the Order method shown above.

It’s my earnest hope that library and framework authors such as the .NET Framework team will seriously consider alternatives to, and an abolition of, “magic strings” and other coding practices that frustrate otherwise-powerful refactoring tools.

Refactoring Across Languages

A tool is only as valuable as it is practical.  The Rename refactoring is more valuable when coding practices don’t frustrate it, as explained above.  Another barrier to the practical use of this tool is the prevalence of multiple languages within and across projects in a Visual Studio solution.  The definition of a project as a single-language container is dubious when you consider that a C# or VB.NET project may also contain HTML, ASP.NET, XAML, or configuration XML markup.  These are all languages with their own parsers and other language services.

So what happens when identifiers are shared across languages and a Rename refactoring is executed?  It depends on the languages involved, unfortunately.

When refactoring a C# class in Visual Studio, the XAML’s x:Class value is also updated.  What we’re seeing here is cross-language refactoring, but unfortunately it only works in one direction.  There is no refactor command to update the x:Class value from the XAML editor, so manually changing it causes my C# class to become sadly out of sync.  Furthermore, this seems to be XAML specific.  If I refactor the name of an .aspx.cs class, the Inherits attribute of the Page directive in the .aspx file doesn’t update.

How often do you think someone would want to change a code-behind file for an ASP.NET page, and yet not want to change the Inherits attribute?  Probably not very often (okay, probably NEVER).  This is a matter of having sensible defaults.  When you change an identifier name in this way, the development environment does not respond in a sensible way by default, forcing the developer to do extra work and waste time.  This is a failure in UI design for the same reason that Intellisense has been such a resounding success: Intellisense anticipates our needs and works with us; the failure to keep identifiers in sync by default is diametrically opposed to this intelligence.  This represents a fragmented and inconsistent design for an IDE to possess, hence my hope that it will be addressed in the near future.

The problem should be recognized as systemic, however, and addressed in a generalized way.  Making individual improvements in the relationships between pairs of languages has been almost adequate, but I think it would behoove us to take a step back and take a look at the future family of languages supported by the IDE, and the circumstances that will quickly be upon us with Microsoft’s Oslo platform, which enables developers to more easily build tool-supported languages (especially DSLs, Domain Specific Languages). 

Even without Oslo, we have seen a proliferation of languages: IronRuby, IronPython, F#, and the list goes on.  A refactoring tool that is hard-coded for specific languages will be unable to keep pace with the growing family of .NET and markup languages, and certainly unable to deal with the demands of every DSL that emerges in the next few years.  If instead we had a way to identify our code identifiers to the refactoring tool, and indicate how they should be bound to identifiers in other languages in other files, or even other projects or solutions, the tools would be able to make some intelligent decisions without understanding each language ahead of time.  Each language’s language service could supply this information.  For more information on Microsoft Oslo and its relationship to a world of many languages, see my article on Why Oslo Is Important.

Without this cross-language identifier binding feature, we’ll remain in refactoring hell.  I offered a feature suggestion to the Oslo team regarding this multi-master synchronization of a model across languages that was rejected, much to my dismay.  I’m not sure if the Oslo team is the right group to address this, or if it’s more appropriate for the Visual Studio IDE team, so I’m not willing to give up on this yet.

A Default of Refactor-Rename

The next idea I’d like to propose here is that the Rename refactoring is, in fact, a sensible default behavior.  In other words, when I edit an identifier in my code, I more often than not want all of the references to that identifier to change as well.  This is based on my experience in invoking the refactoring explicitly countless times, compared to the relatively few times I want to “break away” that identifier from all the code that references it.

Think about it: if you have 150 references to variable Foo, and you change Foo to FooBar, you’re going to have 150 broken references.  Are you going to create a new Foo variable to replace them?  That workflow doesn’t make any sense.  Why not just start editing the identifier and have the references update themselves implicitly?  If you want to be aware of the change, it would be trivial for the IDE to indicate the number of references that were updated behind the scenes.  Then, if for some reason you really did want to break the references, you could explicitly launch a refactoring tool to “break references”, allowing you to edit that identifier definition separately.

The challenge that comes to mind with this default behavior concerns code that spans across solutions that aren’t loaded into the IDE at the same time.  In principle, this could be dealt with by logging the refactoring somewhere accessible to all solutions involved, in a location they can all access and which gets checked into source control.  The next time the other solutions are loaded, the log is loaded and the identifiers are renamed as specified.

Language Property Paths

If you’ve done much development with Silverlight or WPF, you’ve probably run into the PropertyPath class when using data binding or animation.  PropertyPath objects represent a traversal path to a property such as “Company.CompanyName.Text”.  The travesty is that they’re always “magic strings”.

My argument is that the property path is such an important construct that it deserves to be a core part of language syntax instead of just a type in some UI-platform-specific library.  I created a data binding library for Windows Forms with its own property path syntax and type, and there are countless non-UI scenarios in which this construct would also be incredibly useful.

The advantage of having a language like C# understand property path syntax is that you avoid a whole class of problems that developers have used “magic strings” to solve.  The compiler can then make intelligent decisions about the correctness of paths, and errors can be identified very early in the cycle.

Imagine being able to pass property paths to methods or return them from functions as first-class citizens.  Instead of writing this:

Binding NameTextBinding = new Binding("Name") { Source = customer1 };

… we could write something like this, have access to the Rename refactoring, and even get Intellisense support when hitting the dot (.) operator:

Binding NameTextBinding = new Binding(@Customer.Name) { Source = customer1 };

In this code example, I use the fictitious @ operator to inform the compiler that I’m specifying a property path and not trying to reference a static property called Name on the Customer class.
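
Until something like that exists in the language, the closest approximation is to walk a lambda’s expression tree and build the dotted path as a string.  Below is a minimal sketch of that idea; the PropertyPathHelper name is hypothetical, and only simple chained member access (plus the usual boxing Convert node for value types) is handled.

public static class PropertyPathHelper
{
    // turns something like c => c.Company.CompanyName.Text into "Company.CompanyName.Text"
    public static string PathOf<T>(Expression<Func<T, object>> selector)
    {
        var segments = new List<string>();
        Expression node = selector.Body;

        // value-type members arrive wrapped in a boxing Convert node; unwrap it
        if (node is UnaryExpression)
            node = ((UnaryExpression)node).Operand;

        while (node is MemberExpression)
        {
            var member = (MemberExpression)node;
            segments.Insert(0, member.Member.Name);
            node = member.Expression;
        }

        return string.Join(".", segments.ToArray());
    }
}

With a helper like that (and using directives for System, System.Collections.Generic, and System.Linq.Expressions), the earlier binding could be written as new Binding(PropertyPathHelper.PathOf<Customer>(c => c.Name)), which survives a Rename refactoring even though it still produces a string at runtime.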

With property paths in the language, we could solve our dynamic Linq sort problem cleanly, without using lambda expressions to hack around the problem:

Customers = Customers.Order(@Customer.LastName).Order(@Customer.FirstName, SortDirection.Descending);

That looks and feels right to me.  How about you?

Summary

There are many factors of developer productivity, and I’ve established refactoring as one of them.  In this article I discussed tooling and coding practices that support or frustrate refactoring.  We took a deep look into the most important refactoring we have at our disposal, Rename, and examined how to get the greatest value out of it in terms of personal habits, as well as long-term tooling vision and language innovation.  I proposed including property paths in language syntax due to its general usefulness and its ability to solve a whole class of problems that have traditionally been solved using problematic “magic strings”.

It gives me hope to see the growing popularity of Fluent Interfaces and the use of lambda expressions to provide coding conventions that can be verified by the compiler, and a growing community of bloggers (such as here and here) writing about the abolition of “magic strings” in their code.  We can only hope that Microsoft program managers, architects, and developers on the Visual Studio and .NET Framework teams are listening.

Posted in Data Binding, Data Structures, Design Patterns, Development Environment, Dynamic Programming, Functional Programming, Language Innovation, LINQ, Oslo, Silverlight, Software Architecture, User Interface Design, Visual Studio, Visual Studio Extensibility, Windows Forms | Leave a Comment »

Why Oslo is Important

Posted by Dan Vanderboom on January 17, 2009

Contrary to common misunderstanding and speculation, the point of Oslo is not to put programming in the hands of business analysts who want to write their own business rules.  Do I think some of that will happen?  Architects and engineers will try everything they can imagine.  Some of them will succeed in specific niches or scenarios, but it won’t replace application or system design, and it will probably be very limited for the foreseeable future.  Oslo is more about dramatically improving the productivity of designers and developers by generalizing common solution patterns and generating more adaptable tools.

PDC Keynote

Much of the confusion around Oslo occurs for two reasons:

  1. Oslo is designed at a higher level of abstraction than most systems today, so its scope is broad and it will have an impact on virtually every product, solution and service across Microsoft.  It’s difficult to get your head around something that big.
  2. Because of its abstract nature, core concepts are defined in terms that are heavily overloaded, like "Model", "Repository", and "Language".  Once you’ve picked up the lingo and can translate Oslo terminology into language you’re already familiar with, both the concept and magnitude of it will become obvious.

Oslo isn’t something completely new; in fact, Oslo borrows from a lot of previous research and even existing model-driven development tools.  Oslo focuses existing technologies and techniques into a coherent and mature vision of development, combining all parts into a more powerful whole, and promises to deliver a supremely adaptable and efficient platform to develop on.

What Is Oslo?

Oslo is a software factory for generating first-class, tool-supported languages out of your declarative specifications.

A factory is a highly organized production facility
that produces members of a product line
using standardized parts, tools and production processes.

-from a review of Software Factories

The product line is analogous to Oslo’s parsers, transform tools, and IDE plugins for new data models and languages (both textual and visual) that you define.  The standardized parts are Oslo’s library components; the tools are the M languages and the Quadrant/Intellipad application; and the processes are shaped by the flow of data through the Oslo tool chain (see the diagram near the end of this article).

With Oslo, you build the custom tools you need to rapidly build or generate software systems.  It’s all about using the right tool for the job, and having a say in how those tools are shaped to obtain the greatest leverage.

As stated at the home page of softwarefactories.com:

We see a capacity crisis looming. The industry continues to hand-stitch applications distributed over multiple platforms housed by multiple businesses located around the planet, automating business processes like health insurance claim processing and international currency arbitrage, using strings, integers and line by line conditional logic. Most developers build every application as though it is the first of its kind anywhere.

In other words, there’s already a huge shortage of experienced, highly-qualified professionals capable of ensuring the success of these increasingly complex systems, and with the need (and complexity) growing exponentially, our current development practices increasingly fall short of the total demand.

Books like Greenfield’s Software Factories have been advocating building at a higher level of abstraction for years, and my initial reaction was to see it as a natural, evolutionary milestone for a highly mature software system.  However, it’s an awful lot of focused development effort to attain such a level of maturity, and not many organizations are able to pull it off given the state of our current development platforms.

It’s therefore fortuitous that Microsoft teams have taken up the challenge of building these abilities into their .NET platform.  After all, that’s where it really belongs: in the framework.

Unexpected Awesomeness

Oslo of course contains a lot of expected awesomeness, but where it will probably have the most impact in terms of developer productivity is with new first-class languages and language tools.  Why?  It first helps to understand the world of data formats and languages.

We’ve had an explosion of data formats–these mini Domain Specific Languages, if you will (especially in the form of complex configuration files).  As systems evolve and scale, and the ways we can configure and compose our application’s behavior continue to grow, at what point do we perceive that configuration graph as the rich language that it becomes?  Or when our user interfaces evolve from Monolithic to Modular to Composite to Granular Composite (or User Composable), at what point does that persistent object graph become our UX DSL (as with XAML in WPF)?

Sometimes we set our standards too low, or are slow to raise them when the time has come to do so.  With XML we get extensibility in defining languages and we think, "If we can parse it, then we can build a tool over it."  I don’t know about you, but I’d much rather work with rich client software–some kind of designer–over a textual data format any day.

But you know how things go: some company like Microsoft builds a whole bunch of cool stuff, driven off some XML configuration, or they unleash something like XAML on which WPF, WF, and more are built.  XAML is great for tools to read and write, and although XML and XAML are textual and not binary and therefore human readable in a text editor (the original intention behind that term), it’s simply not as easy to read as C# or VB.NET.  That’s why we aren’t all rushing to program everything in XAML.

Companies like Microsoft, building from the bottom up, release their platforms well in advance of the thick client user experiences that make them enjoyable to use and which encourage mass adoption.  Their models, frameworks, and applications are so large now that they’re released in massively differentiated stages, producing a technology adoption gap.

By giving that language a syntax other than XML, however, we can approach it in the same way we approach our program logic: in the most human readable and aesthetically-pleasant way we can devise, resembling our programming languages of choice.

Sometimes, the density of data and its structure in our model is such that a visual editor fails to represent that model well.  Source code is a case in point.  You could create a visual designer to visualize flow control, branching logic, and even complex expression building (like the iTunes Smart Playlist), but code in text format is more appropriate in this kind of scenario, and ends up being more efficient with the existing tooling available.  Especially with an IDE like Visual Studio, we’re working with human-millennia of effort that have gone into the great code editing tools we use today.  Oslo respects this need for choice by offering support for building both visual and textual DSLs, and recognizes the fluent definition of new formats and languages as the bridge to the next quantum leap in productivity.

If we had an easy way of defining languages in formats that we developers felt comfortable working with–as we’re comfortable with our general purpose languages and their rich tool support–then we’d be much more productive in the transition between a technology first being released and later having rich tool support over it.  WPF has taken quite a while to be adopted as much as it has, partly due to tool availability and maturity.  Before Expression Blend or Cider designers were released and hand-coding XAML was the only way, those who braved the angle brackets struggled with it.  As I play with Silverlight, I realize how much must still be done in XAML, and how we still struggle.  It’s simply not as nice to work with as my C# code.  Not as rich, and not as strongly tool-supported.

That’s one place Oslo provides value.  With the ability to define new textual and visual DSLs, rigorous verification and validation in a rich set of tools, the promise of Intellisense, colorization of keywords, operators, constants, and more, the Oslo architects recognize the ability to enhance our development experience in a language-agnostic way, raising the level of abstraction because, as they say, the way to solve any technical problem is to approach it at one higher level of indirection.  Unfortunately, this makes Oslo so generalized and abstract that it’s difficult to grasp and therefore to appreciate its immensity.  Once you can take a step back and see how it fits in holistically, you’ll see that it has the potential to dramatically transform the landscape of software development.

Currently, it’s a lot of work to implement all the language services in Visual Studio to give them as rich an experience as we’ve come to expect with C#, VB.NET, and others.  This is a serious impediment to doing this kind of work, so solving the problem at the level of Oslo drastically lowers the barrier to entry for implementing tool-supported languages.  The Oslo bits I’ve seen and played with are very early in the lifecycle for this massive scope of technology, but the more I think about its potential, the more impressed I am with the fundamental concept.  As Chris Anderson explained in his PDC session on MGrammar, MGrammar was an implementation detail, but sometime around June 2007, that feature team realized just how much customers wanted direct access to it and decided to release MGrammar to the world.

Modeling & The Repository

That’s all well and good for DSLs and language enthusiasts/geeks, but primarily perhaps, Oslo is about the creation, exploration, relation, and execution of models in an interoperable way.  In other words, all of the models that are currently used to describe a software system, or an entire IT environment, are either not encoded formally enough to verify or execute, or they’re encoded or stored in proprietary ways that don’t allow interoperability with other models.  A diagram in Visio or PowerPoint documenting network topology, for example, knows nothing about the component architecture or deployment model of the software systems installed and running on that network.

When people usually talk about models, they imagine high-level architecture documents, overviews used to visually summarize work that is much more granular in nature.  These models aren’t detailed, and they normally aren’t kept up to date and in sync with the current design as changes are made.  But modeling in Oslo is not an attempt to make these visual models contain all of the necessary detail, or to develop software with visual tools exclusively.  Oslo simply provides the tools, both graphical and textual, to define and relate many models.  It will be up to the development community to decide how all these tools are ultimately used, which parts of our systems will be specified in a mix of general purpose, domain specific, and visual languages.  Ultimately, Oslo will provide the material and glue to fill the gaps between the high and low level specifications, and unite them into a common, connected, and much more useful set of data.

To grasp what Oslo modeling is really all about requires that we expand our definition of "model", to see the models expressed in our configuration and XAML files, in our applications’ database schemas, in our entity classes, and so on.  As software grows in complexity and becomes more composable, we can use various languages to model its behavior, store that in the repository for runtime execution, inspection, or reuse by other systems.

This funny and clever Oslo video (reminiscent of The Hitchhiker’s Guide to the Galaxy) explains modeling in the broader sense alluded to here.

If we had some universal container for the storage of all different kinds of models, and a standardized way of relating entities across models, we’d be able to do things like impact analysis, where we could see the effect on software systems if someone were to alter the network it was running on; or powerful data mining on the IT execution environment of a business.

Many different tools, with different audiences, will be able to connect into this repository to manipulate aspects of the models that they understand and have access to.  This is just the tip of the iceberg.  We already model so much of what we do in the IT and software worlds, and as we begin adopting business process middleware and orchestration software like BizTalk, there’s a huge amount of value in those models converging and connecting.  That’s where the Oslo Repository comes in.

Oslo provides interoperability among models in the same way that SOA provides interoperability among services.  Not unlike the interoperability we have now among many different languages all sharing the same CLR specification.

Bridging data models across repositories or in a shared repository is a major step forward.  With Windows Azure and Microsoft’s commitment to their online services platform (and considering the momentum of the SaaS movement with Amazon, Google, and others), shared storage and data sets are the future.  (Check out SQL Data Services if you haven’t already, and watch for some exciting announcements coming later this year!)

The Dichotomy of Data vs. Metadata

Jeff Pinkston from the Oslo team aptly reflects the attitude of the group when he scoffs at the categorical difference between data and metadata.  In terms of storing and querying it, serializing and communicating it, and everything else that matters in enterprise software, data is data and there’s no reason not to treat it the same when it comes to architecting a system.  We have our primary models and our secondary models, our shared models and our protected models, but they’re still just models that shape our software’s behavior, and they share all of the same characteristics when it comes to manipulation and access.  It’s their ultimate effect that differs.

It’s worth noting, I think, the line that’s been drawn between code and data in some programming languages and not in others (C# vs. LISP).  A division has been made for the sake of security rather than necessity.  Machine instruction codes are represented in the same sort of binary data and realized in the same digital circuitry as traditional user data.  It’s tempting to keep things locked down and divided, but as languages evolve to become more late bound and dynamic (and as the tools evolve to make this feasible), there will be more need for the manipulation of expression trees and ASTs.  I strongly suspect the lines will blur until they disappear.

Schema and Object Instance Languages

In order to define models, we need a tool.  In Oslo, this is a textual language called MSchema and an editor called Intellipad.  I personally think it’s odd to talk people’s ears off about "model, model, model", and then to use the synonym "schema" to name the language, but all of these names could change before they’re shipped for all we know.

This is a simple example of an MSchema document:

module MyModel
{
    type Person
    {
        LastName : Text;
        FirstName : Text;
    }

    People : Person*;
}

By running this through the "M Compiler", a SQL script is generated that will create the appropriate database objects.  Intellipad is able to verify the correctness of your schema, and what’s really nice is that you don’t even have to specify data types when you start sketching out your model.  Defaults are assumed, and you can get more specific as your model evolves.

MGraph is a language for defining instances of objects, constrained by an MSchema and similar in format.  So MSchema is to MGraph what XSD is to XML.

In this article, Lars Corneliussen explains Microsoft’s vision to make MGraph as common as XML is today.  Take a look at his article to see a side-by-side comparison of the same object represented as XML (POX), JSON, and MGraph, and decide for yourself which you like best (or see below).

MSchema and MGraph are easier and more efficient to read and write than XML.  Their message format resembles typical structured programming languages, and developers are already familiar with these formats.  XML is a fine format for a tool; it’s human readable but not human-friendly.  A C-style language, on the other hand, is much more human-friendly than all of the angle brackets and the redundancy (and verbosity) of tag text.  That narrows down our choice to JSON and MGraph.

In JSON, the property/field/attribute names are delimited by quotation marks, suggesting that the whole structure is a dumb property bag.

{
    "LastName" : "Vanderboom",
    "FirstName" : "Dan"
}

MGraph has a very similar syntax, but its attribute property names are recognized and validated by the parser generated from MSchema, so the quotation marks are unnecessary.  It ends up looking more natural, and a little more concise.

{
    LastName : "Vanderboom",
    FirstName : "Dan"
}

Because MGraph is just a message format, and Microsoft’s service offerings already support multiple message formats (SOAP/POX/JSON/etc.), it wouldn’t disrupt any of their architecture to add an MGraph adapter, and I’ll be shocked if I don’t hear about one in their next release.

Meta-Languages and MGrammar

In the same way that Oslo includes a meta-model because it allows us to define models, it also includes a meta-language because it allows us to define languages (as YACC and ANTLR have done).  However, just as Pinkston doesn’t think data and metadata should be treated differently, it makes sense to think of a language that defines languages as just another language.  There is something Zen about that, where the tools somehow seem to bend back upon themselves like one of Escher’s drawings.

[image: M. C. Escher’s Drawing Hands]

Here is an example language defined by MGrammar in a great article on MSDN called MGrammar in a Nutshell:

module SongSample
{
    language Song
    {
        // Notes
        token Rest = "-";
        token Note = "A".."G";
        token Sharp = "#";
        token Flat = "b";
        token RestOrNote = Rest | Note (Sharp | Flat)?;

        syntax Bar = RestOrNote RestOrNote RestOrNote RestOrNote;
        syntax List(element)
          = e:element => [e]
          | es:List(element) e:element => [valuesof(es), e];

        // One or more bars (recursive technique)
        syntax Bars = bs:List(Bar) => Bars[valuesof(bs)];
        syntax ASong = Music bs:Bars => Song[Bars[valuesof(bs)]];
        syntax Songs = ss:List(ASong) => Songs[valuesof(ss)];

        // Main rule
        syntax Main = Album ss:Songs => Album[ss];

        // Keywords
        syntax Music = "Music";
        syntax Album = "Album";

        // Ignore whitespace
        syntax LF = "\u000A";
        syntax CR = "\u000D";
        syntax Space = "\u0020";

        interleave Whitespace = LF | CR | Space;
    }
}

This is a pretty straightforward way to define a language and generate a parser.  Aside from the obvious keywords to define syntax rules and token patterns (with an alternative and more readable format for regular expressions), the => projection operator allows you to shape the MGraph output according to your needs.

I created two simple languages with MGrammar on the plane trip back to Milwaukee from the PDC in November.  The majority of my time was spent fussing with the editor, Intellipad, and for the last half hour I found it very easy to create a language on the fly, extending and changing it through experimentation quickly and easily.  Projections, which are functional expressions in MGrammar used to shape MGraph output, are the most challenging part.  There are a number of techniques that shape the output graph, so it will be good to see how this is approached in future reference examples.

Surreptitiously announced just before I wrote this, Mike Weinhardt at Microsoft indicated that a gallery of example grammars for MGrammar is being put together, to point to the sample grammars for various languages in addition to grammars that the community develops, and it should be available by the end of this month.  These examples demonstrating how to define languages and write sensible projections, coming from the developers who are putting MGrammar together, will be an invaluable tool for teaching you how to use common patterns (just as 101 LINQ Samples did for LINQ).

As Doug Purdy explained on .NET Rocks: "People who are building a domain specific language, and they don’t want to understand how to build a parser, or they’re not language designers.  Actually, they are language designers.  They design a language, but they actually don’t do the whole thing.  They don’t build a parser.  What they do, they just leverage the XML parser.  And what we’re trying to do is provide a toolset for folks where they don’t have to resort to XML in order to do DSLs."

From the same episode, Don Box said of the DSL session at PDC: "I’ve never seen a session with more geek porn in it."

Don: "It’s like crack for developers.  It’s kind of addictive; it takes over your life."

Doug: "If you want the power of Anders in your hand…"

The Tool Chain

Now that we have a better sense of what’s included in Oslo in terms of languages, editors, and the shared repository, we can look at the relationship among the other pieces, which are manifested in the CTP as a set of command-line tools.  In the future, these will integrate into an IDE, most likely Visual Studio.  (I’d expect Intellipad and Quadrant to merge with Visual Studio, but there’s no guarantee this will happen.)

When you create your model with MSchema, you’ll use m to validate that model and generate a SQL script to create a SQL Server 2008 database schema (yes, it only works right now with SQL Server 2008).  You’ll also use the m command to validate your object graph (written in MGraph) against your schema, and translate that into a set of SQL commands to perform inserts and updates against tables.

With enough models, there’ll be huge value in adding yours to the repository.  If you don’t mind writing MGraph or you generate it automatically with something like an MGraphSerializer class in your code, this may be all you need.
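
That MGraphSerializer class is purely hypothetical, but a naive version of the idea could simply reflect over an object’s public properties and emit MGraph-style text, along the lines of this sketch (no nesting, collections, or escaping handled):

public static class MGraphSerializer
{
    // hypothetical, naive serializer: emits a flat MGraph-style node
    // from an object's public instance properties
    public static string Serialize(object instance)
    {
        var sb = new StringBuilder();
        sb.AppendLine("{");

        PropertyInfo[] properties = instance.GetType().GetProperties(
            BindingFlags.Instance | BindingFlags.Public);

        for (int i = 0; i < properties.Length; i++)
        {
            object value = properties[i].GetValue(instance, null);
            string separator = (i < properties.Length - 1) ? "," : "";
            sb.AppendLine("    " + properties[i].Name + " : \"" + value + "\"" + separator);
        }

        sb.AppendLine("}");
        return sb.ToString();
    }
}

Feeding it new { LastName = "Vanderboom", FirstName = "Dan" } would produce text resembling the MGraph example shown earlier.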

If, on the other hand, you decide you could really benefit by defining your own textual language to use instead of MGraph, you can use MGrammar to define a new language.  This language gets compiled by the mg compiler to create your parser, and the mgx command translates code in your new language into an MGraph, which can then be pulled into your database using m.

This diagram depicts the process:

[diagram: the Oslo tool chain]

Other than these command-line tools, Quadrant is the highly extensible visual tool for exploring models graphically, and Intellipad is a different face on the same shell for defining DSLs with MGrammar and writing DSL code, as well as writing and verifying MSchema and MGraph code.

We should see fairly soon the convergence of these three languages (MGraph, MSchema, and MGrammar) into a single M language.  This makes sense, since what you want to project in your DSL should be something within your model, verified by your schema.  This may ultimately make these projections much easier to write.

We’ll also see this tool chain absorbed into multiple development environments, eventually with rich binding across multiple representations of our model, although this will take longer in Visual Studio.

Languages and Nested Languages

I looked at some MService examples, and I can understand Damon’s concern: although it’s nice to have "operation" as a keyword in a service-oriented language, with more keywords giving you the ability to specify aspects of each endpoint and the communication patterns required, enclosing the business logic within that service language is probably not a good idea.  I took this from Dennis van der Stelt’s blog:

service Service
{
  operation PhotoUpload(stream : Stream) : Text
  {
    .PostUriTemplate = "upload";

    index : Text = invoke DateTime.Now.Ticks.ToString();
    filename : Text = "d:\\demo\\photo\\" + index + ".jpg";
    invoke MService.ServiceHelper.StoreInFile(stream, filename);

    return index;
  }
}

Why not?  You’re defining a general purpose language within the curly braces, one capable of defining variables, assigning values, referencing .NET objects, and calling methods.  But why do you want to learn a new language to write services when the language you’re using right now is already supremely capable of that?  Don’t you already know a good syntax for invoking methods (other than "invoke %method%")?  If instead you simply referenced an assembly, type, and method from an MService script, you could externally turn any .NET method with serializable parameters and return value into a service operation by feeding it this kind of file, without having to recompile, and without having to reinvent the wheel.
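
To illustrate that alternative, the sketch below invokes an existing static method named in some external file via reflection.  The ExternalOperationInvoker class is hypothetical, and a real host would still need to handle parameter serialization and endpoint configuration; the point is only that no new general purpose language is required.

public static class ExternalOperationInvoker
{
    // hypothetical: expose an existing static .NET method as an operation, given
    // assembly, type, and method names read from an external script or config file
    public static object Invoke(string assemblyPath, string typeName,
        string methodName, params object[] args)
    {
        Assembly assembly = Assembly.LoadFrom(assemblyPath);
        Type type = assembly.GetType(typeName, true);
        MethodInfo method = type.GetMethod(methodName, BindingFlags.Public | BindingFlags.Static);

        return method.Invoke(null, args);
    }
}

A call like ExternalOperationInvoker.Invoke("MService.dll", "MService.ServiceHelper", "StoreInFile", stream, filename) would then do what the invoke statement above does, using plain C#.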

The possible exception would be if MGrammar adds the ability (as discussed by speakers at the PDC) to support multiple layers of enclosing languages within other languages.  In other words, you could use MService to define operations and their attributes using its own syntax, and within the curly braces that follow, use the C# or VB.NET parsers to process the logic with the comprehension of a separate language.  There are some neat possibilities here, but I expect the development community to be conservative and hesitant about mixing layers of semantics, as there is an awful lot of room for confusion and complexity.  It may be better to leave different language blocks in separate files or containers, and to allow them to reference each other as .NET assemblies and XML files reference each other today.

However, I wouldn’t get too hung up on the early versions of these new languages, or any one language specifically.  The useful, sensible ones that take real developer needs into account and provide the most value will be adopted, and many more will quickly fall into disuse.  But the overall pattern will be for the emergence of an amazing amount of leverage in terms of improving human comprehension and taking advantage of our ability to manipulate structured, symbolic object graphs to build and verify software systems.

Resources

After a few months of research and many hours of writing, I don’t feel like I’ve even scratched the surface.  But instead of giving you an absolutely comprehensive picture, I’m going to stop here and continue in future articles.  In the meantime, check out the following resources.

For an overview of the development paradigm, look for information on language-oriented programming, including an article I wrote that alludes to how "we will have to raise the level of abstraction to a point that may be hard for us to imagine with our existing tools and languages" due to the "precipitous growth of software complexity".  The "community of abstractions" is the model in Oslo-speak.

For Microsoft-specific content: there were some great sessions at the PDC (watch the recorded videos).  It was covered (with much confusion) on the .NET Rocks! podcast (here and here) as well as on Software Engineering Radio; and there are lots of bloggers talking about their initial experiences with it, such as Shawn Wildermuth, Lars Corneliussen, and of course Chris Sells and Jeff Pinkston.  The clearest and most coherent explanation I’ve heard was from an interview with Ron Jacobs and David Chappell (Ron gave the keynote at MSDN Dev Con, and hosted the ARCast podcast for years).  MSDN has at least 29 videos on the Oslo Developer Center, where there’s a good amount of information, including a FAQ.  There’s also the online guide for MGrammar, MGrammar in a Nutshell, and the Oslo team blog.

If you’re interested in creating DSLs, make sure to keep a look out for details about the upcoming DSL Developers Conference, which is tentatively planned for April 16-17, immediately following the Lang.NET conference (on general purpose languages) on April 14-16.  I’m hoping to be at both this year.  And in case you haven’t heard, Microsoft is planning another PDC Conference for 2009, the first time ever these conferences have run for two consecutive years!  There will no doubt be much more Oslo news and conference material to cover it at the PDC in November.

Pluralsight, an instructor-led training company, now teaches a two-day "Oslo" Fundamentals course (and Don Box’s blog is hosted there).

The best way to learn about Oslo, however, is to dive in and use it.  That’s what I’m doing with my newest system, which needs to be modeled from scratch.  So if you haven’t done so already, download the Oslo SDK (link updated to January 2009 SDK) and introduce yourself to the future of modeling and development!

[Click here for the next article in this Oslo series, on common misconceptions and fallacies about Oslo.]

Posted in Data Structures, Development Environment, Distributed Architecture, Language Extensions, Language Innovation, Metaprogramming, Oslo, Problem Modeling, Service Oriented Architecture, Software Architecture, SQL Data Services, Visual Studio, Windows Azure | 44 Comments »

A KeyedList Implementation: Syntax Subtleties for an Intuitive API

Posted by Dan Vanderboom on November 28, 2008

[This article is a follow-up on the theme of data structures started in my article on a Non-Binary Tree Data Structure.] 

Clear API Design

When designing object models, there are often times when a Dictionary is the best choice for rapid lookup and access of items.  However, when attempting to make those object models as intuitive and simple as possible, sometimes the fact that a dictionary is being used is an implementation detail and needn’t be exposed externally.

Take this model, for example:

class Database
{
    public IEnumerable<Table> Tables;

    public Database()
    {
        Tables = new List<Table>();
    }
}

class Table
{
    public string TableName;

    public Table(string TableName)
    {
        this.TableName = TableName;
    }
}

This is nice and simple.  I can loop through the Tables collection like this:

Database db = new Database();

foreach (var table in db.Tables)
{
    Console.WriteLine(table.TableName);
}

So it's too bad that the consumer end of this has to change when we use a dictionary.  By making a change to our Database type...

class Database
{
    public Dictionary<string, Table> Tables;

    public Database()
    {
        Tables = new Dictionary<string, Table>();
    }
}

... we now have to specify the "Values" collection to get an IEnumerable<T> of the correct type.

foreach (var table in db.Tables.Values)
{
    Console.WriteLine(table.TableName);
}

The consumers of our API may not care about iterating through key-value pairs, but now they have to remember to use this Values property or face the wrath of red squiggles and compiler errors when they forget.  Of course, there’s a pattern we could use to hide this dictionary inner goo from the outside.

class Database
{
    public IEnumerable<Table> Tables { get { return _Tables.Values; } }
    private Dictionary<string, Table> _Tables { get; set; }

    public Database()
    {
        _Tables = new Dictionary<string, Table>();
    }
}

Now from the outside, we can foreach over db.Tables, but inside we can use the Dictionary for fast access to elements by key.

The Need for a Dictionary-List Hybrid

This is an either-or approach: that is, it assumes that the API consumer is better off with an IEnumerable collection and won’t have any need for keyed access to data (or even adding data, in this case).  How can we have the best of both worlds, with the ability to write this kind of code?

Database db = new Database();

foreach (var table in db.Tables)
{
    Console.WriteLine(table.TableName);
}

Table t = db.Tables["Customers"];

This is a hybrid of a Dictionary and a List.  (Don’t confuse this with a HybridDictionary, which is purely a dictionary with runtime-adapting storage strategies.)  It provides an IEnumerable<T> enumerator (instead of the Dictionary’s IEnumerable<KeyValuePair<K, T>>), as well as an indexer for convenient lookup by key.

There’s another aspect of working with dictionaries that has always bugged me:

db.Tables.Add("Vendors", new Table("Vendors"));

This is repetitive, plus it says the same thing twice.  What if I misspell my key in one of these two places?  What I’d really like is to tell my collection which property of the Table class to use, and have it fill in the key for me.  How can I do that?  Well, I know I can select a property value concisely (in a compiler-checked and refactoring-friendly way) with a lambda expression.  So perhaps I can supply that expression in the collection’s constructor.  I decided to call my new collection KeyedList<K, T>, which inherits from Dictionary so I don’t have to do all the heavy lifting.  Here’s how construction looks:

Tables = new KeyedList<string, Table>(t => t.TableName);

Now I can add Table objects to my collection, and the collection will use my lambda expression to fill in the key for me.

Tables.Add(new Table("Vendors"));

How does this work, exactly?  Here's a first cut at our KeyedList class:

public class KeyedList<K, T> : Dictionary<K, T>, IEnumerable<T>
{
    private Func<T, K> LinkedValue;

    public KeyedList()
    {
        LinkedValue = null;
    }

    public KeyedList(Func<T, K> LinkedValue)
    {
        this.LinkedValue = LinkedValue;
    }

    public void Add(T item)
    {
        if (LinkedValue == null)
            throw new InvalidOperationException("Can't call KeyedList<K, T>.Add(T) " +
                "unless a LinkedValue function has been assigned");

        Add(LinkedValue(item), item);
    }

    public new IEnumerator<T> GetEnumerator()
    {
        foreach (var item in Values)
        {
            yield return item;
        }
        yield break;
    }
}

This is still pretty simple, but I can think of one thing that it’s missing (aside from a more complete IList<T> implementation).  With a collection class like this, with tightly-integrated knowledge about the relationship between the key property of an item and the key in the Dictionary, what happens when we change that key property in the item?  Suddenly it doesn’t match the dictionary key, and we have to remember to update this in an explicit separate step in our code whenever this happens.  It seems that this is a great opportunity to forget something and introduce a bug into our code.  How could our KeyedList class track and update this for us?

Unfortunately, there’s no perfect solution.  “Data binding” in .NET is weak in my opinion, and requires our classes to implement INotifyPropertyChanged to participate; even then, we only get notification of the property name that changed (supplied as a string), and have no idea what the old value was unless we store that somewhere ourselves.  Automatically injecting all classes with data binding code isn’t practical, of course, even using AOP (since many BCL classes, for example, reside in signed assemblies).  Hopefully a future CLR will be able to perform some tricks, such as intelligently and dynamically modifying those classes in which other classes’ data binding code has registered interest, so we can have effortless and universal data binding.

Now back to reality.  I want to mention that although my code typically works just as well in Compact Framework as it does in Full Framework, I’m going off the reservation here.  I’m going to be using expression trees, which are not supported in Compact Framework at all.

Expressions

The Expression class (in System.Linq.Expressions) is really neat.  With it, you can wrap a delegate type to create an expression tree, which you can explore and modify, and at some point even compile into a function which you can invoke.  The best part is that lambda expressions can be assigned to Expression types in the same way that they can be assigned to normal delegates.

Func<int> func = () => 5;
Expression<Func<int>> expr = () => 5;

The first line defines a function that returns an int, and a function is supplied as a lambda that returns the constant 5.  The second line defines an expression tree of a function that returns an int.  This extra level of indirection allows us to take a step back and look at the structure of the function itself in a precompiled state.  The structure is a tree, which can be arbitrarily complex.  You can think of this as a way of modeling the expression in a data structure.  While func can be executed immediately, expr requires that we compile it by calling the Compile method (which generates IL for the method and returns Func<int>).

int FuncResult1 = func.Invoke();
int FuncResult2 = func();

int ExprResult1 = expr.Compile().Invoke();
int ExprResult2 = expr.Compile()();

The first two lines are equivalent, as are the last two.  I just wanted to point out here, with the two ways of calling the functions, how they are in fact the same, even though the last line looks funky.

Synchronizing Item & Dictionary Keys

So why do we need expressions?  Because we need to know the name of the property we’ve supplied in our KeyedList constructor.  You can’t extract that information out of a function (supplied as a lambda expression or otherwise).  But expressions contain all the metadata we need.  Note that for this synchronization to work, it requires that the items in our collection implement INotifyPropertyChanged.

class Table : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _TableName;
    public string TableName
    {
        get { return _TableName; }
        set
        {
            _TableName = value;

            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("TableName"));
        }
    }

    public Table(string TableName)
    {
        this.TableName = TableName;
    }
}

This is tedious work, and though there are some patterns and code snippets I use to ease the burden a little, it’s still a lot of work to go through to implement such a primitive ability as data binding.

In order to get at the expression metadata, we’ll have to update our constructor to ask for an expression:

public KeyedList(Expression<Func<T, K>> LinkedValueExpression)
{
    this.LinkedValueExpression = LinkedValueExpression;
}

We’ll also need to define a field to store this, and a property will help to compile it for us.

private Func<T, K> LinkedValue;

private Expression<Func<T, K>> _LinkedValueExpression;
public Expression<Func<T, K>> LinkedValueExpression
{
    get { return _LinkedValueExpression; }
    set
    {
        _LinkedValueExpression = value;
        LinkedValue = (value == null) ? null : _LinkedValueExpression.Compile();
    }
}

Now that the groundwork has been set, we can hook into the PropertyChanged event if it’s implemented, which we do by shadowing the Add method.

public new void Add(K key, T item)
{
    base.Add(key, item);

    if (typeof(INotifyPropertyChanged).IsAssignableFrom(typeof(T)))
        (this[key] as INotifyPropertyChanged).PropertyChanged += new PropertyChangedEventHandler(KeyList_PropertyChanged);
}

One caveat about this approach: our shadowing method Add will unfortunately not be called if accessed through a variable of the base class.  That is, if you assign a KeyedList object to a Dictionary object, and call Add from that Dictionary variable, the Dictionary.Add method will be called and not KeyedList.Add, so synchronization of keys will not work properly in that case.  It’s extremely rare that you’d do such a thing, but I want to point it out regardless.  As inheritor of a base class, I would prefer the derived class be in fuller control of these behaviors, but we work with what we have.  I’ll actually take advantage of this later on in a helper method.
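
To make the caveat concrete, here is a small sketch of the pitfall, reusing the Table class and the TableName-keyed KeyedList from the examples above:

var tables = new KeyedList<string, Table>(t => t.TableName);

// calls KeyedList.Add(K, T), so the PropertyChanged handler gets hooked up
tables.Add("Customers", new Table("Customers"));

// viewing the same object through the base class...
Dictionary<string, Table> plainDictionary = tables;

// ...calls Dictionary.Add(K, T) instead; no handler is hooked up,
// so key synchronization silently won't happen for this item
plainDictionary.Add("Vendors", new Table("Vendors"));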

Finally, the tricky part.  We need to examine our lambda’s expression tree and extract the property or field name from it.  We’ll compare that to the property name reported to us as changed.  The comparison is actually done between two MemberInfo variables, which is possible because reflection ensures that only one MemberInfo object will exist for each member.  The MemberExpression object, which inherits from Expression, possesses a Member property, and the other we get from typeof(T).GetMember.  Here’s what that looks like:

private void KeyList_PropertyChanged(object sender, PropertyChangedEventArgs e)
{
    var lambda = LinkedValueExpression as LambdaExpression;
    if (lambda == null)
        return;

    var expr = lambda.Body as MemberExpression;
    if (expr == null)
        return;

    MemberInfo[] members = typeof(T).GetMember(e.PropertyName, MemberTypes.Property | MemberTypes.Field, BindingFlags.Instance | BindingFlags.Public);
    if (members.Length == 0)
        throw new ApplicationException("Field or property " + e.PropertyName + " not found in type " + typeof(T).FullName);

    MemberInfo mi = members[0];
    if (mi == expr.Member)
    {
        // we don't know what the old key was, so we have to find the object in the dictionary
        // then remove it and re-add it
        foreach (var kvp in KeyValuePairs)
        {
            if ((typeof(T).IsValueType && kvp.Value.Equals(sender)) || (kvp.Value as object) == (sender as object))
            {
                T item = this[kvp.Key];
                Remove(kvp.Key);
                Add(item);
                return;
            }
        }
    }
}

This code makes an important assumption; namely, that a lambda expression will be used, which will contain a single field or property access.  It does not support composite or calculated keys, such as (t.SchemaName + “.” + t.TableName), though it’s possible.  I’m currently working on a method that recursively explores an Expression tree and checks for member access anywhere in the tree, to support scenarios like this.  For now, and for the purpose of this article, we’ll stick to the simple case of single member access.
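
A minimal sketch of that recursive exploration might look like the method below, which just collects every member accessed anywhere in the key expression so a handler could check whether the changed property participates in a composite key.  It would live inside KeyedList<K, T> (alongside using directives for System.Collections.Generic, System.Linq.Expressions, and System.Reflection), and it’s a sketch of the idea only; it is not part of the listing that follows.

private static void CollectMembers(Expression node, List<MemberInfo> members)
{
    if (node == null)
        return;

    if (node is MemberExpression)
    {
        // record the accessed member and keep walking toward the parameter
        var member = (MemberExpression)node;
        members.Add(member.Member);
        CollectMembers(member.Expression, members);
    }
    else if (node is UnaryExpression)
    {
        CollectMembers(((UnaryExpression)node).Operand, members);
    }
    else if (node is BinaryExpression)
    {
        // handles composite keys like (t.SchemaName + "." + t.TableName)
        var binary = (BinaryExpression)node;
        CollectMembers(binary.Left, members);
        CollectMembers(binary.Right, members);
    }
    else if (node is MethodCallExpression)
    {
        var call = (MethodCallExpression)node;
        CollectMembers(call.Object, members);
        foreach (var argument in call.Arguments)
            CollectMembers(argument, members);
    }
    else if (node is LambdaExpression)
    {
        CollectMembers(((LambdaExpression)node).Body, members);
    }
}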

I found that having access to the list of KeyValuePairs was actually useful in my code, and to keep the PropertyChanged handler concise, I added a new KeyValuePairs property to expose the base Dictionary’s enumerator, which you can find in the complete listing of the KeyedList class toward the end of this article.  I now have two iterators; and the way I’ve flipped it around, the default iterator of the base class has become a secondary, named iterator of KeyedList.

Here is a test program to demonstrate the functionality and flexibility of the KeyedList class.

class Program
{
    static void Main(string[] args)
    {
        // use a lambda expression to select the member of Table to use as the dictionary key
        var Tables = new KeyedList<string, Table>(t => t.TableName);
        
        // add a Table object to our KeyList like an old-fashioned Dictionary
        Tables.Add("Customers", new Table("Customers"));

        // add a Table object to our KeyList without explicitly specifying a key
        Tables.Add(new Table("Vendors"));

        // prove that a change in an item's key property updates the corresponding dictionary key
        Table Vendors = Tables["Vendors"];
        Vendors.TableName = "VENDORS";
        Console.WriteLine("Is 'Vendors' a valid dictionary key? " + Tables.ContainsKey("Vendors"));
        Console.WriteLine("Is 'VENDORS' a valid dictionary key? " + Tables.ContainsKey("VENDORS"));

        // prove that the IEnumerable<T> iterator works
        // note that we don't have to loop through Tables.Values
        foreach (var table in Tables)
        {
            Console.WriteLine(table.TableName);
        }

        Console.ReadKey();
    }
}

Complete Source for KeyedList

For your convenience, here is the complete listing of KeyedList.

public class KeyedList<K, T> : Dictionary<K, T>, IEnumerable<T>
{
    private Func<T, K> LinkedValue;

    private Expression<Func<T, K>> _LinkedValueExpression;
    public Expression<Func<T, K>> LinkedValueExpression
    {
        get { return _LinkedValueExpression; }
        set
        {
            _LinkedValueExpression = value;
            LinkedValue = (value == null) ? null : _LinkedValueExpression.Compile();
        }
    }

    public KeyedList()
    {
        LinkedValueExpression = null;
    }

    public KeyedList(Expression<Func<T, K>> LinkedValueExpression)
    {
        this.LinkedValueExpression = LinkedValueExpression;
    }

    public void Add(T item)
    {
        if (LinkedValue == null)
            throw new InvalidOperationException("Can't call KeyedList<K, T>.Add(T) " +
                "unless a LinkedValue function has been assigned");

        Add(LinkedValue(item), item);
    }

    public new void Add(K key, T item)
    {
        base.Add(key, item);

        if (typeof(INotifyPropertyChanged).IsAssignableFrom(typeof(T)))
            (this[key] as INotifyPropertyChanged).PropertyChanged += new PropertyChangedEventHandler(KeyList_PropertyChanged);
    }

    private void KeyList_PropertyChanged(object sender, PropertyChangedEventArgs e)
    {
        var lambda = LinkedValueExpression as LambdaExpression;
        if (lambda == null)
            return;

        var expr = lambda.Body as MemberExpression;
        if (expr == null)
            return;

        MemberInfo[] members = typeof(T).GetMember(e.PropertyName, 
            MemberTypes.Property | MemberTypes.Field, 
            BindingFlags.Instance | BindingFlags.Public);
        if (members.Length == 0)
            throw new ApplicationException("Field or property " + e.PropertyName + " not found in type " + typeof(T).FullName);

        MemberInfo mi = members[0];
        if (mi == expr.Member)
        {
            // we don't know what the old key was, so we have to find the object in the dictionary
            // then remove it and re-add it
            foreach (var kvp in KeyValuePairs)
            {
                if ((typeof(T).IsValueType && kvp.Value.Equals(sender)) 
                    || (!typeof(T).IsValueType && (kvp.Value as object) == (sender as object)))
                {
                    T item = this[kvp.Key];
                    Remove(kvp.Key);
                    Add(item);
                    return;
                }
            }
        }
    }

    public new IEnumerator<T> GetEnumerator()
    {
        foreach (var item in Values)
        {
            yield return item;
        }
        yield break;
    }

    public IEnumerable<KeyValuePair<K, T>> KeyValuePairs
    {
        get
        {
            // because GetEnumerator is shadowed (to provide the more intuitive IEnumerable<T>), 
            // and foreach'ing over "base" isn't allowed,
            // we use a Dictionary variable pointing to "this"
            // so we can use its IEnumerable<KeyValuePair<K, T>>
            Dictionary<K, T> dict = this;
            foreach (var kvp in dict)
            {
                yield return kvp;
            }
            yield break;
        }
    }
}

Conclusion

As with the non-binary Tree data structure I created in this article, I prefer to work with more intelligent object containers that establish tighter integration between the container itself and the elements they contain.  I believe this reduces mental friction, both for author and consumer of components (or the author and consumer roles when they are the same person), and allows a single generic data structure to be used where custom collection classes are normally defined.  Additionally, by using data structures that expose more flexible surface areas, we can often reap the benefits of having powerful lookup features without locking ourselves out of the simple List facades that serve so well in our APIs.

The Achilles Heel of this solution is a weak and un-guaranteed data binding infrastructure combined with lack of support for Expressions in Compact Framework.  In other words, items have to play along, and it’s not platform universal.

Clearly, more work needs to be done.  Event handlers need to be unhooked when items are removed, optimizations could be done to speed up key synchronization, and more complex expressions could be supported without too much trouble.  Ultimately what I’d like to see is a core collection class with the ability to add and access multiple indexes (such as with a database), instead of presuming that a Dictionary with its solitary key is all we can or should use.  The hashed key of a Dictionary seems like a great adornment to an existing collection class, rather than a hard-coded stand-alone structure.  But I think this is a good start toward addressing some of the fundamental shortcomings of these existing approaches, and hopefully demonstrating the value of more intelligent collection and container classes.
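
As one small example of that remaining work, unhooking on removal could be done by shadowing Remove the same way Add is shadowed above; a sketch, intended to sit alongside the other members of KeyedList:

public new bool Remove(K key)
{
    // detach the PropertyChanged handler before the item leaves the collection
    T item;
    if (TryGetValue(key, out item) && item is INotifyPropertyChanged)
        ((INotifyPropertyChanged)item).PropertyChanged -= KeyList_PropertyChanged;

    return base.Remove(key);
}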

Other Implementations

I was at a loss for a while as to what to call this collection class; I considered DictionaryList (and ListDictionary), as well as HashedList, before arriving at KeyedList.  To my amazement, I found several other implementations of the same kind of data structure with the same name, so it must be a good name.  The implementations here and here are more complete than mine, but neither auto-assigns keys with a key-selection function nor updates dictionary keys using data binding, which ultimately is what I’m emphasizing here.  Hint: it wouldn’t be tough to combine what I have here with either of them.

Posted in Algorithms, Data Binding, Data Structures, Design Patterns, Object Oriented Design, Reflection | 5 Comments »

Technology Research Highlights: C5, YIIHAW, and MemJet

Posted by Dan Vanderboom on October 7, 2008

When I’m not designing or writing code, I’m usually doing some kind of research.  Looking for new gadgets, toys, technologies, groundbreaking research, or following my favorite blogs and podcasts.  Because I haven’t been highly focused on any single project lately, I thought I’d share some of the more interesting things I’ve come across.

Peter Sestoft, C5 Generics Collection Library, and YIIHAW

Peter Sestoft is a brilliant professor at the Copenhagen IT University in Denmark, which actually has a student body consisting of a female majority (59%)!  I discovered Sestoft's work while doing research for some new, more powerful collection classes I was working on (Tree<T> and Set<T>).  He was featured on Channel 9 back in January on a show primarily about the C5 Collection Library.

C5 Generics Collection Library

The C5 Collection Library is an extremely powerful and well-designed library, based on earlier Java and Smalltalk collection library designs, and it completely blows away the standard collection classes delivered as part of the .NET Framework.  An earlier version was covered well in a Dr. Dobb's article, but to summarize: C5 is interface-based; provides convenient but hard-to-implement functionality (such as persistent sorted collections and sublist views); offers useful abstractions such as lists, sets, bags, priority queues, double-ended queues, stacks, and dictionaries; and implements these data structures with careful attention to asymptotic complexity.

Unless you’re an expert—as with encryption—if you find yourself creating your own complex data structures, chances are you’re going to do it wrong.  So it’s nice to know that the mathematical experts have taken the time to very carefully design and construct a wide assortment of useful data structures for the rest of us to use, complete with unit tests.  And the best part, in the spirit of true science, C5 has a very liberal open source license, so it can be used in open source and commercial software alike.  It’s reportedly being used by the military, video game shops, web server software, and more.

Sestoft and his colleagues have the excellent habit of providing copious documentation: a book’s worth, in fact!  It can be found online here.

I think it’s an excellent candidate for inclusion in the .NET Framework itself.  I highly recommend that you check it out the next time you need a fancy data structure and are tempted to write it yourself, and that you consider it for optimizing the performance of existing code that relies heavily on data structures (which is almost all code).

Getting the fundamentals right is probably the hardest thing to do.  It’s easy (and tempting) to build as tall as we can, but getting a firm foundation in place is much more important.  That’s why the C5 Collections Library is so amazing.  So what are you waiting for?  Go download it!

YIIHAW Is an Intelligent and High-performing Aspect Weaver

What really gets me excited is Sestoft’s YIIHAW, a high-performance static aspect weaver for .NET.

Aspect Oriented Programming (AOP) is a paradigm for addressing cross-cutting concerns in software.  That is, logic that operates orthogonally to business logic, such as logging, security, configuration, and more.  There are a number of open source aspect weavers available today, and they generally fall into two camps: static or compile-time weavers, and dynamic or runtime weavers.

While incredibly useful, several factors have kept AOP from entering the mainstream.  For one, a good set of tools hasn’t yet emerged to enable easy debugging of aspects.  When we step through lines of code in Visual Studio, the debugger has no way of knowing if or when some other injected piece of code should pick up the thread of execution, since weaving isn’t a natively recognized and supported activity within the debugger.

Another hindrance is performance: aspect weavers typically inject logic (called advice) in the form of method calls at the beginning and/or end of existing methods (the locations are called join points), which are selected with special selector statements (called pointcuts).  The typical trick, as I just alluded to, is to include a new method in the class (called an introduction) and then insert method invocation statements at the beginning and end of the methods to be modified.  This isn't a big deal in many cases, but the additional method calls and the stack manipulation that accompanies them make these techniques impractical in performance-critical scenarios.
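
To make the typical technique concrete, here's a rough sketch of the before-and-after (my own illustration, not YIIHAW's output or the syntax of any particular weaver): advice is introduced as new methods on the class, and calls to them are inserted at the join points of the target method.

using System;

// A hand-written illustration of conventional weaving: the advice lives in
// introduced methods, and the woven method pays for two extra calls per invocation.
public class Account
{
    public decimal Balance { get; private set; }

    // Original method; its entry and exit are the join points.
    public void Withdraw(decimal amount)
    {
        Balance -= amount;
    }

    // Advice methods added to the class by the weaver (an "introduction").
    private void LogEntry(string method) { Console.WriteLine("Entering " + method); }
    private void LogExit(string method) { Console.WriteLine("Leaving " + method); }

    // Woven equivalent: method invocation statements inserted at the
    // beginning and end, which is exactly the overhead discussed here.
    public void WithdrawWoven(decimal amount)
    {
        LogEntry("Withdraw");
        Balance -= amount;
        LogExit("Withdraw");
    }
}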

Data structures play a pivotal role in high-performance algorithms, of course, and Sestoft’s experience in building the C5 Collections Library provided the perfect context in which to consider how some of the specific features of his data structures might adversely affect performance.  For example: his collections fire events to inform any interested parties that a collection had an item inserted, deleted, or modified in some other way, eliminating the need for defining specialized collections that trigger specific actions outside of that collection (see my article on a Tree<T> structure I created that makes use of this).

If the C5 library could start with basic, slimmed-down, highly performant data structures, and then “weave in” aspects to include features such as event notifications only when needed, the problem might be solved.  The problem with the typical approach, however, is that inserting method calls adds as much overhead as, or more than, the simple checks that activate those features in the first place (if the event is not null, invoke it), so he knew he'd have to take a different approach.
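
For comparison, the “simple check” I'm referring to looks something like this (a sketch of mine, not C5 source); the feature's cost when it's compiled in is just a null test and an invocation.

using System;
using System.Collections.Generic;

// Sketch of a collection that always carries event notification:
// the per-operation cost of the feature is the null check and the invoke.
public class NotifyingList<T>
{
    private readonly List<T> items = new List<T>();

    public event Action<T> ItemInserted;

    public void Add(T item)
    {
        items.Add(item);

        // The "if not null, then invoke event" check; a woven-in method call
        // would have to beat this to be worthwhile.
        if (ItemInserted != null)
            ItemInserted(item);
    }
}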

After evaluating the capabilities and measuring the performance of existing open source aspect weavers, Sestoft's group found them all inadequate, and the YIIHAW thesis project was born.  You can find the fascinating thesis (which I just finished reading) here.  In a nutshell, YIIHAW employs an advanced technique called advice inlining, in which CIL instructions are woven directly into their target methods.  There are a surprising number of challenges to this approach, including the need to convert the original method's RET (return) instructions to jumps, remap jump addresses, update the method's stack size, and much more.  The students who worked on this thesis (Rasmus Johansen and Stephan Spangenberg), under Sestoft's direction, did a masterful job of overcoming all of these challenges, the result of which is a highly capable static aspect weaver with no performance penalty!

This is truly amazing, and it enables aspects to be woven into an assembly repeatedly: one for feature A, another for feature B, etc., all with the assurance of type safety and performance equivalent to writing the code by hand with all features included in the first place.

Another positive of the YIIHAW thesis is the light it shines on CIL instructions, their organization and function.  The advice inlining techniques provide a great tutorial for those wanting a deeper understanding of how to manipulate things at the CIL level.  Many who have attempted AOP have either not considered this approach, or else turned away from this path due to the complexity and challenge.  Not so with the YIIHAW team, who faced the problem head on and accomplished something fundamentally great as a result.  The fantastic Mono.Cecil library, despite being poorly documented, is used for analysis and manipulation of CIL code, and several other ideas came to mind as I saw how this tool was put to use.

My only criticism of their application of this technology for the generation of specialized data structures at compile time is that you may find yourself needing different versions of the same data structure in different parts of your application.  In a class that performs analysis of visual data for extracting spatial cues, you may need a Tree with as little overhead as possible (i.e., no event notifications, no serialization capability), and in the user interface, you may need another Tree with those additional characteristics.  By making a single decision at compile/weave time, your options will be limited at runtime.

There’s also the issue of supporting the debugging process, which can’t be dismissed lightly.  But YIIHAW has gone to extreme measures to overcome some of AOP’s largest obstacles; in addition to interceptions (weaving logic into existing methods), YIIHAW also supports introductions (adding new types and members) and type-structure modifications (changing a class’s base class, or making a class implement an interface).

Overall, YIIHAW’s potential is huge, and I believe that it won’t be long before the tooling catches up to the point where AOP becomes a mainstream approach to complex cross-cutting systems problems, simplifying designs and the engineering process overall.

Memjet Printer

Color printing is about to get a whole lot faster.  The Memjet printer prints at 60 pages per minute (ppm) and was developed by Kia Silverbrook, a prolific inventor with over 1400 patents.  Check out the YouTube video to see it in action.

I’ve seen a lot of speculation that Memjet is a hoax, and a technological leap forward this large certainly warrants skepticism.  However, the technology was written about in a very respectable science magazine, one of my favorites: Scientific American, in June 2007.  It’s also been covered in several articles by PC Magazine, for example in May 2007, which mentions the Memjet appearing at the 2007 Global Ink Jet Printing Conference.  (Talk about specific conferences!)

The print heads are page-width, so they don’t have to move back and forth across the page.  The “MEM” in Memjet stands for microelectromechanical, and refers to the fabrication process that involves CMOS chips on a silicon wafer for precision layout.

With ink spraying at 900 million 1-2 picoliter drops per second, drops small enough to dry within a second, and an estimated price tag under $200 (compared to the comparable HP Edgeline at $16,000), waiting around for long documents to print will soon be a thing of the past for everyone.  The first devices are expected to appear in 2009.

Posted in Algorithms, Aspect Oriented Programming, Data Structures, Design Patterns, Research | Leave a Comment »

Concurrency & Coordination With Futures in C#

Posted by Dan Vanderboom on July 3, 2008

A future is a proxy or placeholder for a value that may not yet be known, usually because the calculation is time consuming.  It is used as a synchronization construct, and it is an effective way to define dependencies among computations that will execute when all of their factors have been calculated, or in other words, to construct an expression tree with each node potentially computing in parallel.  According to the Wikipedia article on futures and promises, using them can dramatically reduce latency in distributed systems.

Damon pointed out that the Parallel Extensions library contains a Future<T> class, so I started looking around for examples and explanations of how it works and what the syntax is like, and I ran across a frightening example implementing the asynchronous programming model with Future<T>, as well as one going in the other direction, wrapping an APM implementation with Future<T>.  Other articles give pretty good explanations but trivial examples.  From what I gathered briefly, the ContinueWith method for specifying the next step of the calculation doesn’t seem to provide an intuitive way to indicate that several calculations may depend on the current one (unless it can be called multiple times?).  Using ContinueWith, you’re always specifying forward the calculation task that depends on the current future object.  It also surprised me a little that Future inherits from Task, because my understanding of a future is that it’s primarily defined as a value-holding object.  But considering that a future really holds an expression that needs to be calculated, making Future a Task doesn’t seem so odd.
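
As an aside, here's a minimal sketch of that forward-chaining style, written against Task<TResult> (Future<T> was eventually renamed Task<TResult> in the released Task Parallel Library); CalculatePi here is just a stand-in for a slow computation, and the code is mine rather than anything from the CTP samples.

using System;
using System.Threading;
using System.Threading.Tasks;

class ContinueWithSketch
{
    // Stand-in for a slow computation.
    static string CalculatePi(int digits)
    {
        Thread.Sleep(3000);
        return "3.1415926535";
    }

    static void Main()
    {
        Task<string> pi = Task.Factory.StartNew(() => CalculatePi(10));

        // ContinueWith can in fact be called more than once: each call registers
        // another continuation on the same antecedent, so several calculations
        // can depend on one future. Every dependency is still declared "forward"
        // from the future it depends on, though.
        pi.ContinueWith(t => Console.WriteLine("digits: " + t.Result.Length));
        pi.ContinueWith(t => Console.WriteLine("value: " + t.Result));

        Console.ReadKey();
    }
}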

So I decided to implement my own Future<T> class before looking at the parallel extensions library too deeply.  I didn’t want to prejudice my solution, because I wanted to make an exercise of it and see what I would naturally come up with.

Though I tried avoiding prejudice, I still wound up characterizing it in my head as a task, and thought that a future would simply be a pair of Action and Reaction methods (both of the Action delegate type).  The Action would execute and could do whatever it liked, including evaluating some expression and storing it in a variable.  If the Action completed, the Reaction method (a continuation) would run, and these could be specified using lambdas.  Because I was storing the results in a local variable (result), swallowed up and made accessible by a closure, I didn’t see a need for a Value property in the future and therefore no need to make the type generic.  Ultimately I thought it silly to have a Reaction method, since anything that needs to happen sequentially after a successful Action could simply go at the end of the Action method itself.

FutureTask task = new FutureTask(
    () => result = CalculatePi(10),
    () => new FutureTask(
        () => result += "...",
        () => Console.WriteLine(result),
        ex2 => Console.WriteLine("ex2: " + ex2.Message)),
    ex1 => Console.WriteLine("ex1: " + ex1.Message));

The syntax is what I was most concerned with, and as I started playing around with nesting of futures to compose my calculation, I started to feel like I was onto something.  After all, it was almost starting to resemble some of the F# code I’ve been looking at, and I took that style of functional composition to be a good sign.  As you can see from the code above, I also include a constructor parameter of type Action<Exception> for compensation logic to run in the event that the original action fails.  (The result variable is a string, CalculatePi returns a string, and so the concatenation of the ellipsis really does make sense here.)

The problem that started nagging me was the thought that a composite computation of future objects might not be definable all in one statement like this, except by building the dependency tree from the bottom up.  You can really only define the most basic factors (the leaf nodes of a dependency tree) at the beginning this way, and then the expressions that depend directly upon those leaf nodes, and so on.  What if you have 50 different starting future values, and you can only proceed with the next step in the calculation once 5 of those specific futures have completed evaluation?  How would you express those dependencies with this approach?

That’s when I started to think about futures as top-down hierarchical data container objects, instead of tasks that have pointers to some next task in a sequence.  I created a Future<T> class whose constructor takes an optional name (to aid debugging), a method of type Func<T> (which is a simple expression, supplied as a lambda in my examples), and finally an optional params list of other Future<T> objects on which that future value depends.

The first two futures in the code below start calculating pi (3.1415926535) and omega (which I made up to be a string of 9s).  They have no dependencies, so they can start calculating right away.  The paren future has two dependencies, supplied as two parameters at the end of the argument list: pi and omega.  You can see that the values pi.Value and omega.Value are used in the expression, which will simply surround the concatenated string value with parentheses and spaces.

var pi = new Future<string>("pi", () => CalculatePi(10));
var omega = new Future<string>("omega", () => CalculateOmega());

var paren = new Future<string>("parenthesize", () => Parenthesize(pi.Value + " < " + omega.Value), pi, omega);

var result = new Future<string>("bracket", () => Bracket(paren.Value), paren);

Finally, the result future has a dependency on the paren future.  This surrounds the result of paren.Value with brackets and spaces.  Because the operations here are trivial, I’ve added Thread.Sleep statements to all of these methods to simulate more computationally expensive work.

Dependencies Among Futures

The program starts calculating pi and omega concurrently, and then immediately builds the paren future, which because of its dependencies waits for completion of the pi and omega futures.  But it doesn’t block the thread.  Execution continues immediately to build the result future, and then moves on to the next part of the program.  When each part of the expression, each future, completes, it will set a Complete boolean property to true and invoke a Completed event.  Any attempt to access the Value property of one of these futures will block the thread until it (and all of the futures it depends on) have completed evaluation.

Furthermore, if an exception occurs, all of the futures that depend on it will no longer attempt to evaluate, and the exceptions will be thrown as part of an AggregateException when accessing the Value property.  This AggregateException contains all of the individual exceptions that were thrown as part of evaluating each future expression.  If both pi and omega fail, result should be able to hand me a list of all Exceptions below it in the tree structure that automatically gets formed.

There are two bits of code I added as icing on this cake.  The first is the use of the implicit operator to convert a variable of type Future<T> to type T.  In other words, if you have a Future<string> called result, you can now pass result into methods where a string parameter is expected, etc.  In the code listing at the end of the article, you’ll notice that I reference pi and omega instead of pi.Value and omega.Value (as in the code snippet above).

public static implicit operator T(Future<T> Future)
{
    return Future.Value;
}

The other helpful bit is an override of ToString, which allows you to hover over a future variable in debug mode and see its name (if you named it), whether it’s Complete or Incomplete, and any errors encountered during evaluation.

public override string ToString()
{
    return Name + ", " + (Complete ? "Complete" : "Incomplete") + (Error != null ? "Error=" + Error.Message : string.Empty);
}

Debug Experience of Future

What I’d really like to do is have the ability to construct this composite expression in a hierarchical form in the language, with a functionally composed syntax, replacing any parameter T with a Future<T>, something like this:

var result = new Future<string>("bracket", () => Bracket(
    new Future<string>("parenthesize", () => Parenthesize(
        new Future<string>("pi", () => CalculatePi(10))
        + " < " +
        new Future<string>("omega", () => CalculateOmega())
    ))
));

The Bracket and Parenthesize methods both require a string, but I give them an object that will at some point (“in the future”) evaluate to a string.  Another term for a future is a promise (some languages support both and draw a distinction between them); you can think of it as giving those methods a promise that they’ll get a string later, at which point they can proceed with their own evaluation.  This effectively creates lazy evaluation, sometimes referred to as normal-order evaluation.

There are a few problems with this code, however.  First of all, though it’s composed functionally from the top down and returns the correct answer, it takes too long to do it: about 8 seconds instead of 4.  That means it’s processing all of the steps sequentially.  This happens because the future objects we’re handing to the Parenthesize and Bracket methods have to be converted from Future<string> to string before they can be evaluated in the expression, and doing that activates the implicit operator, which executes the Value property getter.  This completely destroys the asynchronous behavior we’re going for, by insisting on resolving each future immediately with the wait built into the Value property.  The string concatenation expression evaluates sequentially one piece at a time, and when that’s done, the next level up evaluates, and so on.

The solution is to declare our futures as factors we depend on at each level, which start them executing right away due to C#’s order of evaluation, and declare the operations we want to perform in terms of those predefined futures.  After a few hours of rearranging definitions, declaration order, and experimenting with many other details (including a brief foray into being more indirect with Func<Future<T>>), this is the working code I came up with:

Future<string> FuturePi = null, FutureOmega = null, FutureConcat = null, FutureParen = null;

var result = new Future<string>(
    () => Bracket(FutureParen),
    (FutureParen = new Future<string>(
        () => Parenthesize(FutureConcat),
        (FutureConcat = new Future<string>(
            () => FuturePi + " < " + FutureOmega,
            (FuturePi = new Future<string>(() => CalculatePi(10))),
            (FutureOmega = new Future<string>(() => CalculateOmega()))
        ))
    ))
);

In F# and other more functional languages, I imagine we could use let statements to define and assign these variables as part of the overall expression, instead of having to define the variables in a separate statement as shown here.

The Future<T> class I wrote works fairly well for exploration and study of futures and the possible syntax to define them and access their values, and I’ll share it so that you can experiment with it if you like, but understand that this is (even more so than usual) not production-ready code.  I’m making some very naive assumptions, not taking advantage of any task managers or thread pools, there is no intelligent scheduling going on, and I haven’t tested this in any real-world applications.  With that disclaimer out of the way, here it is, complete with the consuming test code.

using System;
using System.Collections.Generic;
using System.Threading;
using System.Linq;

namespace FutureExpressionExample
{
    class Program
    {
        static void Main(string[] args)
        {
            DateTime StartTime = DateTime.Now;

            Future<string> FuturePi = null, FutureOmega = null, FutureConcat = null, FutureParen = null;

            var result = new Future<string>("bracket",
                () => Bracket(FutureParen),
                (FutureParen = new Future<string>("parenthesize",
                    () => Parenthesize(FutureConcat),
                    (FutureConcat = new Future<string>("concat",
                        () => FuturePi + " < " + FutureOmega,
                        (FuturePi = new Future<string>("pi", () => CalculatePi(10))),
                        (FutureOmega = new Future<string>("omega", () => CalculateOmega()))
                    ))
                ))
            );

            /* Alternative
             
            // first group of expressions evaluating in parallel
            var pi = new Future<string>("pi", () => CalculatePi(10));
            var omega = new Future<string>("omega", () => CalculateOmega());

            // a single future expression dependent on all of the futures in the first group
            var paren = new Future<string>("parenthesize", () => Parenthesize(pi + " < " + omega), pi, omega);
                        
            // another single future expression dependent on the paren future
            var result = new Future<string>("bracket", () => Bracket(paren), paren);

            */

            Console.WriteLine("Do other stuff while calculation occurs...");

            try
            {
                Console.WriteLine("\n" + result);
            }
            catch (AggregateException ex)
            {
                Console.WriteLine("\n" + ex.Message);
            }

            TimeSpan ts = DateTime.Now - StartTime;
            Console.WriteLine("\n" + ts.TotalSeconds.ToString() + " seconds");

            Console.ReadKey();
        }

        static string CalculatePi(int NumberDigits)
        {
            //throw new ApplicationException("Failed to calculate Pi");
            Thread.Sleep(3000);
            return "3.1415926535";
        }

        static string CalculateOmega()
        {
            //throw new ApplicationException("Failed to calculate Omega");
            Thread.Sleep(3000);
            return "999999999999999";
        }

        static string Parenthesize(string Text)
        {
            Thread.Sleep(500);
            return "( " + Text + " )";
        }

        static string Bracket(string Text)
        {
            Thread.Sleep(500);
            return "[ " + Text + " ]";
        }
    }

    public class Future<T> : IDisposable
    {
        public string Name { get; set; }
        public bool Complete { get; protected set; }
        public Exception Error { get; protected set; }

        protected Func<T> Expression { get; set; }

        protected List<Future<T>> Factors;
        protected List<Future<T>> FactorsCompleted;
        protected List<Future<T>> FactorsFailed;

        public event Action<Future<T>> Completed;
        protected void OnCompleted()
        {
            Complete = true;

            if (Completed != null)
                Completed(this);
        }

        private T _Value;
        public T Value
        {
            get
            {
                // block until complete
                while (!Complete)
                {
                    Thread.Sleep(1);
                }

                if (Exceptions.Count > 0)
                    throw new AggregateException(Exceptions);

                return _Value;
            }
            private set { _Value = value; }
        }

        public List<Exception> Exceptions
        {
            get
            {
                var list = new List<Exception>();

                foreach (Future<T> Factor in Factors)
                {
                    list.AddRange(Factor.Exceptions);
                }

                if (Error != null)
                    list.Add(Error);

                return list;
            }
        }

        public static implicit operator T(Future<T> Future)
        {
            return Future.Value;
        }

        // naming a Future is optional
        public Future(Func<T> Expression, params Future<T>[] Factors) : this("<not named>", Expression, Factors) { }

        public Future(string Name, Func<T> Expression, params Future<T>[] Factors)
        {
            this.Name = Name;
            this.Expression = Expression;
            this.Factors = new List<Future<T>>(Factors);
            
            FactorsCompleted = new List<Future<T>>();
            FactorsFailed = new List<Future<T>>();

            foreach (Future<T> Factor in this.Factors)
            {
                if (Factor.Complete)
                    FactorsCompleted.Add(Factor);
                else
                    Factor.Completed += new Action<Future<T>>(Factor_Completed);
            }

            // there may not be any factors, or they may all be complete
            if (FactorsCompleted.Count == this.Factors.Count)
                Expression.BeginInvoke(ReceiveCallback, null);
        }

        private void Factor_Completed(Future<T> Factor)
        {
            if (!FactorsCompleted.Contains(Factor))
                FactorsCompleted.Add(Factor);

            if (Factor.Error != null && !FactorsFailed.Contains(Factor))
                FactorsFailed.Add(Factor);

            Factor.Completed -= new Action<Future<T>>(Factor_Completed);

            if (Exceptions.Count > 0)
            {
                Dispose();
                OnCompleted();
                return;
            }

            if (FactorsCompleted.Count == Factors.Count)
                Expression.BeginInvoke(ReceiveCallback, null);
        }

        private void ReceiveCallback(IAsyncResult AsyncResult)
        {
            try
            {
                Value = Expression.EndInvoke(AsyncResult);
            }
            catch (Exception ex)
            {
                Error = ex;
            }
            
            Dispose();

            // computation is completed, regardless of whether it succeeded or failed
            OnCompleted();
        }

        public void Dispose()
        {
            foreach (Future<T> Factor in Factors)
            {
                Factor.Completed -= new Action<Future<T>>(Factor_Completed);
            }
        }

        // helpful for debugging
        public override string ToString()
        {
            return Name + ", " + (Complete ? "Complete" : "Incomplete") + (Error != null ? ", Error=" + Error.Message : string.Empty);
        }
    }

    public class AggregateException : Exception
    {
        public List<Exception> Exceptions;

        public AggregateException(IEnumerable<Exception> Exceptions)
        {
            this.Exceptions = new List<Exception>(Exceptions);
        }

        public override string Message
        {
            get
            {
                string message = string.Empty;
                foreach (Exception ex in Exceptions)
                {
                    message += ex.Message + "\n";
                }
                return message;
            }
        }
    }
}

Posted in Algorithms, Concurrency, Data Structures, Design Patterns, Functional Programming | 1 Comment »