Critical Development

Language design, framework development, UI design, robotics and more.


Animate.NET: Fluent Animation Library for Silverlight & WPF

Posted by Dan Vanderboom on December 31, 2009

Overview

In Silverlight and WPF, the basic idea that an animation is just a change in some DependencyProperty over time is simple and powerful.  At that low level, however, the API for defining and managing complex animations requires a great deal of code.  There are code-less animations, of course, such as those created in the Visual State Manager, but when you want to perform really dynamic animations, state-based animations can become impractical or outright impossible.

In response, I’ve published my fluent-style, code-based animation library for Silverlight and WPF on CodePlex at http://animatedotnet.codeplex.com.  It’s an API for making code-based animations intuitive and simple, without writing dozens or even hundreds of lines of code to create and configure storyboards and keyframes, or to perform the repetitive math for alignment, rotation, and other low-level details that distract from the original purpose of the animation.  In one example, I counted over 120 lines of standard storyboard code; with the abstractions and fluent API I’ve come up with, that shrank to half a dozen lines of beautiful, pure intent.  As a result, it’s much more readable and faster to write.

I was initially inspired by Nigel Sampson’s blog article on building a Silverlight animation framework.  The code on his site was a good first step toward higher-level abstractions, going above DoubleAnimation to define PositionAnimation and RotationAnimation, and I decided to build on top of that, adding other abstractions as well as a fluent-style API in the form of extension methods that hide even those classes.

Concepts

All Animate.NET animations derive from the Animation class, which tracks the UI element being modified, the duration of the animation, and whether it has completed, and which fires an event when the animation completes.  It manages building and executing the Storyboard object so you don’t have to.

Subclasses of Animation currently include OpacityAnimation, PositionAnimation, RotationAnimation, SizeAnimation, TransformAnimation, and GroupAnimation.  TransformAnimation is the parent class of RotationAnimation, and in the future ScaleAnimation and TranslateAnimation may also be added.

GroupAnimation is special because it allows you to combine multiple animations.  These groups can be nested and each group can include a wait time before starting (to stagger animations).

The Animate static class includes all of the extension methods that make up the fluent API, and the intention is for this to be the master class for building complex group animations.  Most of these methods come in pairs: you can RotateTo a specific angle or RotateBy relative to your current angle; MoveTo a specific location or MoveBy relative to your current position, etc.

Here’s the list so far:

  • Group and Wait
  • Fade, FadeIn, FadeOut, and CrossFade
  • RotateTo and RotateBy
  • ResizeTo and ResizeBy
  • MoveTo and MoveBy

Examples

Animate.NET can best be understood and appreciated with examples.

Basic Animations

Let’s say you want to resize an element to a new size.  Normally you’d need a storyboard and two DoubleAnimations: one for x and another for y, and for each you’d need to set several properties.  With Animate.NET, you can define and execute your animation beginning with a reference to the element you want to animate:

var rect = new Rectangle()
{
    Height = 250,
    Width = 350,
    Fill = new SolidColorBrush(Colors.Blue)
};
MainStage.Children.Add(rect);
rect.SetPosition(50, 50);

rect.ResizeTo(150, 150, 1.5.seconds()).Begin();


Only a single line of code, the last one, is needed to resize the rect element.

Note the call to Begin.  Without this, the ResizeTo (and all other fluent API calls) will return an object that derives from Animation but will not run.  We can, if needed, obtain a reference to the animation and begin the animation separately, like this:

var anim = rect.ResizeTo(150, 150, 1.5.seconds());
anim.Begin();


This allows us to compose animations into groups and manipulate animations after they’ve started, and is very similar to how LINQ queries are composed and later executed.

You’ll also notice the use of several other extension methods:

  • SetPosition – currently sets Left and Top.  In future versions, you’ll be able to define a registration point for positioning that may be located elsewhere, such as the center of the element.
  • seconds() – along with milliseconds(), minutes(), and so on, allows you to specify a TimeSpan more fluently (see the sketch after this list).  I saw this in some Ruby code and loved it.  If only the C# team would implement extension properties, it would look even cleaner by eliminating the need for parentheses.
  • Center() and GetCenter() – centers an element immediately, and returns a Point representing the center of the element, respectively.  Not used in these examples, but worth mentioning.
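
The seconds() helper is simple enough to sketch here.  Something along these lines does the trick (a rough approximation of the idea, not the library’s actual code):

public static class TimeSpanExtensions
{
    // Both overloads are needed: extension methods don't apply implicit numeric
    // conversions (int to double) to their receiver, so 1.seconds() and
    // 1.5.seconds() each require a matching overload.
    public static TimeSpan seconds(this int Value)
    {
        return TimeSpan.FromSeconds(Value);
    }

    public static TimeSpan seconds(this double Value)
    {
        return TimeSpan.FromSeconds(Value);
    }
}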

Group Animations

Next I’ll show an example of a group animation using the Animate class’s Group method:

Animate.Group(
    rect.RotateBy(rect.GetCenter(), -90, 1.seconds()),
    rect.FadeOut(1.seconds())
    ).Begin();


This group animation contains two child animations: one to rotate the rectangle 90 degrees counterclockwise, and the other to fade the rectangle out (make it completely transparent).  The method takes a params array, so you can include as many animations as you like.

Because the animations listed are peers in the group, they begin running at the same time.  Often you will want to stagger animations, however.  You can accomplish this with the Wait method, which is the Group method in disguise (it simply includes an additional TimeSpan parameter).

Animate.Group(
    rect.RotateBy(rect.GetCenter(), -90, 1.5.seconds()),
    rect.FadeIn(0.5.seconds()),
    Animate.Wait(1.seconds(),
        rect.FadeOut(0.5.seconds())
        )
    ).Begin();


This animation rotates the rectangle for 1.5 seconds.  During the first 0.5 seconds, it fades in; during the last 0.5 seconds, it fades out.  Only one element, rect, was used in this example, but any number of UI elements can participate.

Animations can be nested and staggered to arbitrary complexity.  Because all animations derive from the Animation class, you can write properties or methods to encapsulate group animations, and assemble them programmatically before executing them.  Because all the ceremony of storyboards and keyframes is abstracted away, it’s very easy to see what’s happening in this code in terms of the end result.
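
For example, a reusable effect can be packaged as an extension method that returns the composed (but not yet started) animation.  Here’s a sketch of the idea using the fluent methods above; the SpinAndFadeOut name is mine, and it assumes the extensions are defined for UIElement:

public static Animation SpinAndFadeOut(this UIElement Element, TimeSpan Duration)
{
    // compose the group; the caller decides when to call Begin
    return Animate.Group(
        Element.RotateBy(Element.GetCenter(), 360, Duration),
        Element.FadeOut(Duration));
}

The whole effect can then be treated as a single animation: rect.SpinAndFadeOut(2.seconds()).Begin();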

Method Chaining

One of the benefits of a fluent API is the ability to chain together methods that modify a primary object.  For example, the Animation class defines a WhenComplete method that can be used to respond to the completion of an animation.  In the samples project on CodePlex, I create new UI objects at the beginning of each animation, and remove them afterward:

rect.ResizeTo(150, 150, 1.5.seconds())
    .WhenComplete(a =>
    {
        // pause briefly before removing the element
        // (note: Thread.Sleep blocks the UI thread for the duration)
        Thread.Sleep(2000);
        MainStage.Children.Remove(rect);
    })
    .Begin();


I pause for a couple seconds after displaying the final result before removing that object from its container.

Extension methods will play a larger role in future versions of this library.  Uses will include applying easing functions, responding to collision detection (by stopping or reversing an animation), and so on.  That might end up looking something like this:

rect.ResizeTo(150, 150, 1.5.seconds())
    .Ease(EasingFunction.Cubic(0.5))
    .StopIf(a => Animate.Collision(GetCollisionObjects()))
    .Begin();
 

Feedback and Future Direction

I’m releasing this as a very early experiment, and I’m interested in your feedback on the library and its API.

What kind of functionality would you like to see added?  Do the method names and syntax feel right?  What major, common animation scenarios have I omitted?  What other kinds of samples would you like to see?

Download the library and samples and give it a try!


Better Tool Support for .NET

Posted by Dan Vanderboom on September 7, 2009

Productivity Enhancing Tools

Visual Studio has come a long way since its debut in 2002.  With the imminent release of Visual Studio 2010, we’ll see a desperately needed overhaul of the archaic COM extensibility mechanisms (to support the Managed Package Framework as well as MEF, the Managed Extensibility Framework) and a redesign of the user interface in WPF, something I’ve been pushing for and predicted as inevitable quite some time ago.

For many alpha geeks, the Visual Studio environment has been extended with excellent third-party, productivity-enhancing tools such as CodeRush and Resharper.  I personally feel that the Visual Studio IDE team has been slacking in this area, providing only weak support for refactoring, code navigation, and Intellisense.  While I understand their desire to avoid stepping on partners’ toes, this is one area where I think it makes sense for them to be deeply invested.  In fact, I think a new charter for a Developer Productivity Team is warranted (or an expansion of that team if it already exists).

Unfortunately, only a minority of .NET developers know about and use these third-party tools, and the .NET community as a whole would without a doubt be significantly more productive if they were installed in the IDE from day one.  It would also help overcome resistance from development departments in larger organizations that are wary of third-party plug-ins, due perhaps to the unstable nature of many of them.  Microsoft should consider purchasing one or both of them, or paying a licensing fee to include them in every copy of Visual Studio.  Doing so, in my opinion, would make them heroes in the eyes of the overwhelming majority of .NET developers around the world.

It’s not that I mind paying a few hundred dollars for these tools.  Far from it!  The tools pay for themselves very quickly in time saved.  The point is to make them ubiquitous: to make high-productivity coding a standard of .NET development instead of a nice add-on that is only sometimes accepted.

Consider, just as one example, watching speakers at conferences code up samples.  How many of them don’t use such a tool in their demonstrations simply because they don’t want to confuse their audience with an unfamiliar development interface?  How many more demonstrations could they complete in the limited time available if they felt comfortable using these tools in front of the masses?  You pay good money to attend these conferences; wouldn’t you like to cover significantly more ground while you’re there?  This is only likely to happen when the tool’s delivery vehicle is Visual Studio itself.  Damon Payne makes a similar case for the inclusion of the Managed Extensibility Framework in .NET Framework 4.0: build it into the core and people will accept it.

The Gorillas in the Room

CodeRush and Resharper have both received recent mention on the Hanselminutes podcast (episode 196 with Mark Miller) and the Deep Fried Bytes podcast (episode 35 with Corey Haines).  If you haven’t heard of CodeRush, I recommend checking out those episodes.

For more background on CodeRush, DXCore, and the principles behind their design, I also recommend several episodes of DotNetRocks.

I don’t mean to be so biased toward CodeRush, but it’s the tool I’m personally familiar with; it has a broader range of functionality, and it seems to get the majority of press coverage.  That said, those who talk about Resharper speak highly of it, so I recommend you try both and see which one works best for you.  But above all: go check them out!

Refactor – Rename

Refactoring code is something we should all be doing constantly to avoid the accumulation of technical debt as software projects and the requirements on which they are based evolve.  There are many refactorings in Visual Studio for C#, and many more in third-party tools for several languages, but I’m going to focus here on what I consider to be the most important refactoring of them all: Rename.

Why is Rename so important?  Because it’s so commonly used, and it has such far-reaching effects.  We frequently give poor names to identifiers before we clearly understand their role in the “finished” system, and even more frequently an item’s role changes as the software evolves.  Failure to rename items to accurately reflect their current purpose is a recipe for code rot, greater maintenance costs, developer confusion, and therefore buggy logic (with its associated support costs).

When I rename an identifier with a refactoring tool, all of the references to that identifier are also updated.  There might be hundreds of references.  In the days before refactoring tools, one would accomplish this with Find-and-Replace, but this is dangerous.  Even with options like “match case” and “match whole word”, it’s easy to rename the wrong identifiers, rename pieces of string literals, and so on; and if you forget to set these options, it’s worse.  You can go through each change individually, but that can take a very long time with hundreds of potential updates and is a far cry from a truly intelligent update.

Ultimately, the intelligence of the Rename refactoring provides safety and confidence for making far-reaching changes, encouraging more aggressive refactoring practices on a more regular basis.

Abolishing Magic Strings

I am intensely passionate about any tool or coding practice that encourages refactoring and better code hygiene.  One example of such a practice is the use of lambda expressions to select identifiers instead of evil “magic strings”.  In my article on dynamically sorting Linq queries, “magic strings” would force me to write something like this:

Customers = Customers.Order("LastName").Order("FirstName", SortDirection.Descending);

The problem here is that “LastName” and “FirstName” are oblivious to the Rename refactoring.  Using the refactoring tool might give me a false sense of security in thinking that all of my references to those two fields have been renamed, leading me to The Pit of Despair.  Instead, I can define a function and use it like the following:

public static IOrderedEnumerable<T> Order<T>(this IEnumerable<T> Source, 
    Expression<Func<T, object>> Selector, SortDirection SortDirection)
{
    // extract the member name from the selector and delegate to the string-based overload;
    // note this assumes a direct member access (value-type members arrive wrapped in a
    // Convert node and would need to be unwrapped first)
    return Order(Source, (Selector.Body as MemberExpression).Member.Name, SortDirection);
}

Customers = Customers.Order(c => c.LastName).Order(c => c.FirstName, SortDirection.Descending);

This requires a little understanding of the structure of expressions to implement, but the benefit is huge: I can now use the refactoring tool with much greater confidence that I’m not introducing subtle reference bugs into my code.  For such a simple example, the benefit is dubious, but multiply this by hundreds or thousands of magic string references, and the effort involved in refactoring quickly becomes overwhelming.

Coding in this style is most valuable when it’s a solution-wide convention.  As long as you have code that strays from this design philosophy, you’ll find yourself grumbling and reaching for the inefficient and inelegant Find-and-Replace tool.  The only time it really becomes an issue, then, is when accessing libraries you have no control over, such as Linq-to-Entities and the Entity Framework, which make extensive use of magic strings.  In the case of EF, this is mitigated somewhat by your ability to regenerate the code it uses.  In other libraries, it may be possible to write extension methods like the Order method shown above.
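
As a rough illustration of that approach (the BindingBuilder class and its For method below are hypothetical, not part of any shipping library), a string-based API like WPF’s Binding can be wrapped so the member name participates in the Rename refactoring:

public static class BindingBuilder
{
    public static Binding For<T>(Expression<Func<T, object>> Selector)
    {
        // value-type members arrive wrapped in a Convert node, so unwrap it first
        Expression body = Selector.Body;
        if (body is UnaryExpression)
            body = ((UnaryExpression)body).Operand;

        return new Binding(((MemberExpression)body).Member.Name);
    }
}

Binding NameTextBinding = BindingBuilder.For<Customer>(c => c.Name);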

It’s my earnest hope that library and framework authors such as the .NET Framework team will seriously consider alternatives to, and an abolition of, “magic strings” and other coding practices that frustrate otherwise-powerful refactoring tools.

Refactoring Across Languages

A tool is only as valuable as it is practical.  The Rename refactoring is more valuable when coding practices don’t frustrate it, as explained above.  Another barrier to the practical use of this tool is the prevalence of multiple languages within and across projects in a Visual Studio solution.  The definition of a project as a single-language container is dubious when you consider that a C# or VB.NET project may also contain HTML, ASP.NET, XAML, or configuration XML markup.  These are all languages with their own parsers and other language services.

So what happens when identifiers are shared across languages and a Rename refactoring is executed?  It depends on the languages involved, unfortunately.

When you rename a C# class in Visual Studio with the Rename refactoring, the x:Class value in the corresponding XAML is also updated.  What we’re seeing here is cross-language refactoring, but unfortunately it only works in one direction.  There is no refactor command to update the x:Class value from the XAML editor, so manually changing it there causes the C# class to fall sadly out of sync.  Furthermore, this support seems to be XAML-specific: if I rename an .aspx.cs class, the Inherits attribute of the Page directive in the .aspx file doesn’t update.

How often do you think someone would want to rename the class in an ASP.NET code-behind file and yet not want to update the Inherits attribute?  Probably not very often (okay, probably never).  This is a matter of having sensible defaults.  When you change an identifier name in this way, the development environment does not respond in a sensible way by default, forcing the developer to do extra work and waste time.  This is a failure in UI design, for the same reason that Intellisense has been such a resounding success: Intellisense anticipates our needs and works with us, while the failure to keep identifiers in sync by default is diametrically opposed to that intelligence.  It represents a fragmented and inconsistent design for an IDE, hence my hope that it will be addressed in the near future.

The problem should be recognized as systemic, however, and addressed in a generalized way.  Making individual improvements in the relationships between pairs of languages has been almost adequate, but I think it would behoove us to take a step back and take a look at the future family of languages supported by the IDE, and the circumstances that will quickly be upon us with Microsoft’s Oslo platform, which enables developers to more easily build tool-supported languages (especially DSLs, Domain Specific Languages). 

Even without Oslo, we have seen a proliferation of languages: IronRuby, IronPython, F#, and the list goes on.  A refactoring tool that is hard-coded for specific languages will be unable to keep pace with the growing family of .NET and markup languages, and certainly unable to deal with the demands of every DSL that emerges in the next few years.  If instead we had a way to identify our code identifiers to the refactoring tool, and indicate how they should be bound to identifiers in other languages in other files, or even other projects or solutions, the tools would be able to make some intelligent decisions without understanding each language ahead of time.  Each language’s language service could supply this information.  For more information on Microsoft Oslo and its relationship to a world of many languages, see my article on Why Oslo Is Important.

Without this cross-language identifier binding feature, we’ll remain in refactoring hell.  I offered a feature suggestion to the Oslo team regarding this multi-master synchronization of a model across languages that was rejected, much to my dismay.  I’m not sure if the Oslo team is the right group to address this, or if it’s more appropriate for the Visual Studio IDE team, so I’m not willing to give up on this yet.

A Default of Refactor-Rename

The next idea I’d like to propose here is that the Rename refactoring is, in fact, a sensible default behavior.  In other words, when I edit an identifier in my code, I more often than not want all of the references to that identifier to change as well.  This is based on my experience of invoking the refactoring explicitly countless times, compared to the relatively few times I want to “break away” an identifier from all the code that references it.

Think about it: if you have 150 references to variable Foo, and you change Foo to FooBar, you’re going to have 150 broken references.  Are you going to create a new Foo variable to replace them?  That workflow doesn’t make any sense.  Why not just start editing the identifier and have the references update themselves implicitly?  If you want to be aware of the change, it would be trivial for the IDE to indicate the number of references that were updated behind the scenes.  Then, if for some reason you really did want to break the references, you could explicitly launch a refactoring tool to “break references”, allowing you to edit that identifier definition separately.

The challenge that comes to mind with this default behavior concerns code that spans solutions that aren’t loaded into the IDE at the same time.  In principle, this could be dealt with by logging the refactoring in a location all of the solutions involved can access, and which gets checked into source control.  The next time the other solutions are loaded, the log is read and the identifiers are renamed as specified.

Language Property Paths

If you’ve done much development with Silverlight or WPF, you’ve probably run into the PropertyPath class when using data binding or animation.  PropertyPath objects represent a traversal path to a property such as “Company.CompanyName.Text”.  The travesty is that they’re always “magic strings”.

My argument is that the property path is such an important construct that it deserves to be a core part of language syntax instead of just a type in some UI-platform-specific library.  I created a data binding library for Windows Forms for which I defined my own property path syntax and type, and there are countless non-UI scenarios in which this construct would also be incredibly useful.

The advantage of having a language like C# understand property path syntax is that you avoid a whole class of problems that developers have used “magic strings” to solve.  The compiler can then make intelligent decisions about the correctness of paths, and errors can be identified very early in the cycle.

Imagine being able to pass property paths to methods or return them from functions as first-class citizens.  Instead of writing this:

Binding NameTextBinding = new Binding("Name") { Source = customer1 };

… we could write something like this, have access to the Rename refactoring, and even get Intellisense support when hitting the dot (.) operator:

Binding NameTextBinding = new Binding(@Customer.Name) { Source = customer1 };

In this code example, I use the fictitious @ operator to inform the compiler that I’m specifying a property path and not trying to reference a static property called Name on the Customer class.

With property paths in the language, we could solve our dynamic Linq sort problem cleanly, without using lambda expressions to hack around the problem:

Customers = Customers.Order(@Customer.LastName).Order(@Customer.FirstName, SortDirection.Descending);

That looks and feels right to me.  How about you?

Summary

There are many factors in developer productivity, and I’ve established refactoring as one of them.  In this article I discussed tooling and coding practices that support or frustrate refactoring.  We took a deep look into the most important refactoring we have at our disposal, Rename, and examined how to get the greatest value out of it in terms of personal habits, as well as long-term tooling vision and language innovation.  I proposed including property paths in language syntax due to their general usefulness and their ability to solve a whole class of problems that have traditionally been solved with problematic “magic strings”.

It gives me hope to see the growing popularity of Fluent Interfaces and the use of lambda expressions to provide coding conventions that can be verified by the compiler, and a growing community of bloggers (such as here and here) writing about the abolition of “magic strings” in their code.  We can only hope that Microsoft program managers, architects, and developers on the Visual Studio and .NET Framework teams are listening.


Strongly-Typed, Dynamic Linq Order Operator

Posted by Dan Vanderboom on August 20, 2009

A Community Solution

I love social technologies like Stack Overflow, where people can collaborate loosely to share knowledge and help get things done.  Stack Overflow does on a large scale what developer blogs like mine have been doing on a smaller scale: creating a community around the sharing of ideas and methods.

Every once in a while, I get some great feedback that includes a fix for one of my bugs, a performance tweak I didn’t realize was possible, or an extension to some library I left unfinished.

This morning I ran into a Stack Overflow question about my dynamic Linq sort, answered by “Ch00k” with a solution that allows compile-time checking of identifier names.  Well done!

(It’s too bad Stack Overflow doesn’t promote the use of real names, so professional developers could better market their skill and reputation.)

My original article (with source code) is here.  All I added to the library was this:

public static IOrderedEnumerable<T> Order<T>(this IEnumerable<T> Source, 
    Expression<Func<T, object>> Selector, SortDirection SortDirection)
{
    return Order(Source, (Selector.Body as MemberExpression).Member.Name, SortDirection);
}

To test it, I used this code:

IEnumerable<Customer> Customers = new Customer[] { new Customer("Dan", "Vanderboom"), new Customer("Steve", "Vanderboom"), 
    new Customer("Tracey", "Vanderboom"), new Customer("Sarah", "Barkelew") };

Customers = Customers.Order(c => c.LastName, SortDirection.Ascending);
Customers = Customers.Order(c => c.FirstName, SortDirection.Descending);

foreach (var cust in Customers)
{
    Console.WriteLine(cust.FirstName + " " + cust.LastName);
}
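
A minimal Customer class along these lines is all this test needs (a sketch; the actual class isn’t shown above):

public class Customer
{
    public Customer(string FirstName, string LastName)
    {
        this.FirstName = FirstName;
        this.LastName = LastName;
    }

    public string FirstName { get; private set; }
    public string LastName { get; private set; }
}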

Now I can refactor these data model classes with a tool and all my dynamic sorting queries will stay in sync!


Oslo: Misconceptions and Fallacies

Posted by Dan Vanderboom on February 1, 2009

In the many conversations and debates I’ve had about Oslo recently–in person, on the phone, through email, on blogs, and in the Oslo forum–I’ve encountered a good amount of resistance to the goals of Oslo.  Some of this is due to misconception and general confusion, some is due to an attachment to one’s current methodology, and some I believe is simply due to a fear of anything new and unknown.  In the course of these conversations, I’ve run across a common set of thoughts or themes which I have attempted to represent faithfully here.

My first article, Why Oslo Is Important, attempted to elucidate the high-level collection of concepts, technologies, and tools flying under the Oslo banner as it exists today, and how I imagine it evolving in the future.  Though it received a lot of interest and appreciation, it also sparked feedback from readers who were still confused or concerned, leading me to believe I had failed to deliver a fully satisfying explanation.

Understandably, Oslo and its target domain are too large to explain or digest in a single article, even a long one.  It’s also too early in the development cycle to be very specific.  So it’s reasonable to suggest that one can’t be totally satisfied until substantial examples and reference applications are built using the Oslo tools.  Fair enough.

This article isn’t going to provide that reference application.  It has the more modest goal of dispelling some of the common misconceptions and fallacies I’ve encountered, presented here along with my responses.  In future articles, as I design and develop my newest software system, I’ll document and publish the how and why of my use of Oslo tools and technologies to provide more specific evidence of their usefulness.  I’ll also provide references to much of the good work being done to solve various problems.

As always, I encourage you to participate and provide feedback: to me, and especially to the Oslo team.  The more brain power we can bring to bear on this problem in the community, the better off the Oslo team will be, and the faster Oslo will evolve to become precisely the set of tools we need to improve our overall development experience, and developer productivity in particular.

“Oslo doesn’t solve any problems that can’t already be solved with existing tools or technologies.”

When the first steam-powered tractors were sold to farmers in the 1860s, traditional ox and horse farmers might have said the same thing.  “This tractor won’t do anything that my ox-plowing method can’t already do.”  This is true, but it’s not an effective argument against the use of the new technology, which was a much faster and more cost-effective method of farming.  The same farmer could harvest much more of his crop in the same amount of time, and as the technology matured and gas-powered engines became available (in the 1880s), the benefit only increased.  The same goes for any high-level language above assembly language, the use of a relational database over a loose collection of files, the display of text on a CRT instead of punched tape to communicate with a user, or any other great technological leap forward that “doesn’t accomplish anything new”.

Then again, it all depends what kind of problems you’re talking about.  If you get specific enough, I’m sure you’ll find plenty in Oslo that’s new, even so early in its development, such as a shared repository for interoperable models and the ability to define new parsers that provide tooling support such as keyword colorization.  Sometimes it’s these little details that act as the glue to pull components together and create substantial value through the synergy that results.  Visual Studio and Intellisense weren’t strictly needed (you can still use Notepad and csc.exe), but Intellisense can quickly answer dozens of questions a day without forcing you to jump out of context and spend a lot of time looking through disconnected documentation.

“We don’t need to know about Oslo or model-driven development because the applications I build are small and specific, or otherwise don’t need to be so general and flexible.”

This may or may not be true for you, but saying that the industry doesn’t need to advance because you don’t perceive a direct benefit to your own development isn’t valid.  The reason your applications can be so simple is the wealth of tools, languages, platforms, frameworks, and libraries that they leverage.  Standing on the shoulders of giants, you might say.

Many of these systems and components can benefit tremendously from a model-driven approach, and if it improves productivity for Microsoft and other third-party developers, that means they’ll be able to spend less time on plumbing and more time building the framework features you care about.  It’s also likely to make all of those APIs cleaner and more consistent, resulting in easier discoverability and fewer headaches for you, the API consumer.  As the .NET Framework and other frameworks and libraries grow ever larger, this will be critical to keeping things organized and under control.

“Oslo is going to force me to model all kinds of things that really aren’t needed in my software.”

The existence of Oslo will not force you to model any characteristics that you aren’t already modeling through other means.  What it will do is provide more options and tools for modeling your software more effectively and more productively.  It will also significantly ease the burden of creating more heavily model-driven software if you decide that’s right for your application or service.

For more information and a clearer definition of what a model is, see this article on the MSDN Oslo Development Center.

“Oslo will impose a workflow on me that doesn’t make sense for my methodology or business.”

Where Oslo fits into your specific workflow will be ultimately determined by you.  This isn’t any different from Entity Framework.  In v1 of EF, the tooling supports the reading of database structure and the generation of entity classes, but there is work being done to support a workflow going in the other direction: that of starting with classes and generating the database.  The Entity Framework itself doesn’t actually care in which direction you want to work; the issue is primarily one of tool support.  Other initiatives such as adding support for POCO indicate that the EF team is listening to feedback from the community and making the necessary changes to achieve broad support of their framework.  I would expect the same from the Oslo team.

Early releases of Oslo will have similar limitations; currently it seems that M can only be used to generate a database structure from MSchema, and that database structure can be read by Entity Framework to generate your entity classes.  Because Microsoft has such a broad audience to satisfy, other workflows will have to be accommodated, such as starting with class files and generating M files and database schemas.  In fact, I’ve submitted feedback to Microsoft’s Connect site to ensure this kind of multi-master synchronization of model representations is considered.

“Putting everything in a database is overkill for my application, so Oslo isn’t relevant to me.”

While the Repository is an important aspect of Oslo, it isn’t required.  Command line tools exist to transform textual input (specified as MGraph, or in your own custom-defined format using a Domain Specific Language) into MGraph output.  There is a separate step to convert this into SQL, or to optionally inject this data directly into the Oslo Repository.

If you don’t want to use the Repository, there are already multiple methods available for instantiating objects directly from this text data, whether it’s read from a file, embedded as a resource, or sent over a network.  Josh Williams (SpankyJ) has published an example showing how to convert DSL text into XAML and instantiate the object graph using an MGraphXamlReader, while Torkel Ödegaard of Coding Instinct wrote an article demonstrating how to write a generic deserializer without using XAML.

Model formats such as CSDL, MSL, and SSDL for EF, or configuration data currently specified for WCF and WF, will all very likely be expressed in some DSL specified with M (there has been talk about these efforts already).  Since applications without database access will need these technologies, it will be impossible to force developers to read this model data from a SQL Server database.

“We already have XML, XSD, and XSLT, so there’s no benefit to having yet another language to specify the same things.”

XSD is used to define formats and languages (such as XAML), and XML is used as a poor man’s one-size-fits-all meta-format for specifying hierarchical data.  While XML is friendly enough to open in text editors, it’s designed more for tools than for human eyes.

Having different languages and formats to represent different kinds of data actually eases human comprehension and authoring ability.  As Chris Anderson said in his Oslo session at PDC, when you’re looking at XML in an editor, what stands out are not what’s important to your domain, but rather what’s important to XML: elements and attributes.

People are using XML to define their DSLs and formats, not because XML is the best representation, but because writing parsers for new formats and languages is just too hard.  Customers had been asking Microsoft for the ability to write these DSLs easily, so it was out of conversations and customer feedback that Microsoft decided to expose these services.

So it’s not a matter of absolutely needing M and the ability to define new languages because of some inability to get work done without them.  Rather, it’s about reducing the amount of mental work required to author our models and increasing our productivity as a result.  It’s also about having powerful transformational tools available to convert all of these formats and languages into a common representation so that the models can interoperate despite their differences, in the same way that .NET languages all compile to a common CIL/MSIL so that many different programming languages can interoperate.  Without this ability, we’d have a separate community of developers for each language instead of one broad group of “.NET developers” who can all share and benefit from each other’s knowledge and libraries.  This has been recognized as such an important advantage that there are efforts underway to compile languages other than Java to JVM bytecode.

The larger the community, the larger our collective pool of knowledge, and the greater reuse we actually achieve.

“Oslo is too general and abstract to be useful to real developers building real systems.”

The idea that generalization can get out of control for a specific problem is valid, in the same way that a problem can be over-analyzed.  But that doesn’t mean that we should stigmatize all general-purpose software, or that we should ignore the growing trend for enterprise software systems to require greater flexibility, user customization support, extensibility, and so on.

The fact is that life on Earth evolves towards greater complexity, and as supporting hardware resources increase and business demands grow, so does software.  Taming that complexity will require rethinking how we approach every aspect of software design and development, including how to model it.

The software development industry is stratified into many layers, from platform development to one-off, command-line utilities.  Some organizations write software to support millions of users, while others deploy specialized applications in-house, but most of us fall somewhere in between.  Oslo seems to be most applicable to enterprise software offered to many customers, including cloud services, but there are subsets of Oslo that will have an impact on a great majority of .NET developers sooner or later.

There’s a lot of thought and work that goes on in our world (and billions of dollars spent) on “pure research” in the sciences (including computer science) that isn’t directly applicable to every John Doe, but without which we wouldn’t have things like nuclear power plants, microwaves, radio, television, satellite communication, or many pharmaceuticals.  The Nobel laureates of the world who have spent their lives studying something so abstract and remote from every day life have contributed massively to the technological progress of our world, and quite often contribute to a better, more sanitary, healthy, and productive society.  Despite the risks and dangers each technology enables, we somehow still make steady progress in terms of reducing chaos and violence.

Without abstract and general technology like general purpose language compilers, which can specify any logic we dream up for any type of application we care to build, we’d be back in the stone age.  The Internet itself is based on communication standards that are so general, they are applicable to any application protocol or service traffic we can devise.

So before dismissing software (or any technology) due to its abstract or general nature, think about where we’d be without them.  Someone has to approach the colossal, abstract, general problems with enough foresight to deliver solutions before they’re too desperately needed; and who better than a huge organization like Microsoft with deep pockets?

Ironically, our ability to define Domain Specific Languages with Oslo gives us the converse power: the ability to define languages and formats that are extremely specific to our purposes and problem domains, and that therefore let us write our specifications without the ceremony and noise that accompany a general purpose language.  This allows us to express our intentions more easily, by saying only what we need to say to get the point across.  So the only reason Oslo must be so general is to provide that interoperability and translation layer across the set of specific formats we define, without us having to work so hard for it.

For different reasons, it reminds me of generics, another general and abstract tool: it’s a complicated feature to implement in a language, but it provides great expressive power.  Generics also don’t provide anything we couldn’t manage to do before in other ways, but I certainly wouldn’t want to go back to programming without them!  In fact, you might say it’s an effective modeling tool.


The Future of Programming Languages

Posted by Dan Vanderboom on November 6, 2008

Two of the best sessions at the PDC this year were Anders Hejlsberg’s The Future of C# and a panel on The Future of Programming.

A lot has been said and written about dynamic programming, metaprogramming, and language syntax extensions, not just academically over the past few decades, but also as a growing buzz among the designers and users of mainstream object-oriented languages.

Anders Hejlsberg

Dynamic Programming

After a scene-setting tour through the history and evolution of C#, Anders addressed how C# 4.0 would allow much simpler interoperation between C# and dynamic languages.  I’ve been following Charlie Calvert’s Language Futures website, where they’ve been discussing these features with the development community early on.  It’s nice to see how seriously they take the feedback they’re getting, and I really think it’s going to have a positive impact on the language as a whole.  Initial thoughts revolved around a new block of code, with syntax like dynamic { DynamicStuff.SomeUndefinedProperty = "whatever"; }.

But at the PDC we saw that dynamic will instead be a type for our dynamic objects, so dynamic member lookup will only be performed on variables declared as such.  Anders’ demo showed off interactions with JavaScript and Python, as well as Office via COM, all without the ugly Type.Missing parameters (optional parameter support also played a part in that).  Other ideas revolved around easing access to Reflection, and to XML nodes, through dynamic lookup.
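
To give a feel for it, here’s a trivial sketch of the new keyword (GetCalculator is a hypothetical stand-in for anything that returns a COM, IronPython, or JavaScript object):

dynamic calc = GetCalculator();
int sum = calc.Add(10, 20);     // member lookup and binding are deferred until runtime
calc.Precision = 2;             // no compile-time check that Precision exists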

Meta-Programming

At the end of his talk, Anders showed a stunning demo of metaprogramming working within C#.  It was an early prototype, so not all language features were supported, but it worked similarly to an Eval function: the code was constructed inside a string and then compiled at runtime.  It was flexible and powerful enough that he could create delegates to functions he Eval’ed up into existence.  Someone in the audience asked how this was different from Lisp macros, to which Anders replied: “This is basically Lisp macros.”

Before you get too excited (or worried) about this significant bit of news, Anders made no promises about when metaprogramming would be available, and he subtly suggested that it may very well be a post-4.0 feature.  As he said in the Future of Programming Panel, however: “We’re rewriting the compiler in managed code, and I’d say one of the big motivators there is to make it a better metaprogramming system, sort of open up the black box and allow people to actually use the compiler as a service…”

Regardless of when it arrives, I hope they will give serious consideration to providing syntax checking of this macro or meta code, instead of treating it blindly at compile-time as a “magic string”, as has so long plagued the realm of data access.  After all, one of the primary advantages of Linq is to enable compile-time checking of queries, to enforce not only strict type checking, but to also more fundamentally ensure that data sources and their members are valid.  The irregularity of C#’s syntax, as opposed to Lisp, will make that more difficult (thanks to Paul for pointing this out), but I think most developers will eventually agree it’s a worthwhile cause.  Perhaps support for nested grammars in the generic sense will set the stage for enabling this feature.
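
Anders’ prototype hasn’t shipped, but you can get a taste of the string-based approach today with CodeDom.  This sketch (my own approximation, not the prototype) compiles a snippet at runtime and binds a delegate to the result:

// uses System, System.CodeDom.Compiler, and Microsoft.CSharp
var code = @"
    public static class Generated
    {
        public static int Add(int a, int b) { return a + b; }
    }";

var provider = new CSharpCodeProvider();
var options = new CompilerParameters { GenerateInMemory = true };
var results = provider.CompileAssemblyFromSource(options, code);

// create a delegate to the freshly compiled method
var method = results.CompiledAssembly.GetType("Generated").GetMethod("Add");
var add = (Func<int, int, int>)Delegate.CreateDelegate(typeof(Func<int, int, int>), method);
Console.WriteLine(add(2, 3));    // prints 5

Of course, the code here is itself a “magic string” with no compile-time checking, which is exactly the concern raised above.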

Language Syntax Extensions

If metaprogramming is about making the compiler available as a service, language extensions are about making the compiler service transparent and extensible.

The majority (but not all) of the language design panel stressed caution in evolving and customizing language syntax and discussed the importance of syntax at length, but they’ve been considering the demands of the development community seriously.  At times Anders vacillated between trying to offer alternatives and admitting that, in the end, customization of language syntax by developers would prevail; and that what’s important is how we go about enabling those scenarios without destroying our ability to evolve languages usefully, avoiding their collapse from an excess of ambiguity and inconsistency in the grammar.

“Another interesting pattern that I’m very fond of right now in terms of language evolution is this notion that our static languages, and our programming languages in general, are getting to be powerful enough, that with all of these things we’re picking up from functional programming languages and metaprogramming, that you can–in the language itself–build these little internal DSLs, where you use fluent interface style, and you dot together operators, and you have deferred execution… where you can, in a sense, create little mini languages, except for the syntax.

If you look at parallel extensions for .NET, they have a Parallel.For, where you give the start and how many times you want to go around, and a lambda which is the body you want to execute.  And boy, if you squint, that looks like a Parallel For statement.

But it allows API designers to experiment with different styles of programming.  And then, as they become popular, we can pick them up and put syntactic veneers on top of them, or we can work to make languages maybe even richer and have extensible syntax like we talked about, but I’m encouraged by the fact that our languages have gotten rich enough that you do a lot of these things without even having to have syntax.” – Anders Hejlsberg

On one hand, I agree with him: the introduction of lambda expressions and extension methods can create some startling new syntax-like patterns of coding that simply weren’t feasible before.  I’ve written articles demonstrating some of this, such as New Spin on Spawning Threads and especially The Visitor Design Pattern in C# 3.0.  And he’s right: if you squint, it almost looks like new syntax.  The problem is that programmers don’t want to squint at their code.  As Chris Anderson has noted at the PDC and elsewhere, developers are very particular about how they want their code to look.  This is one of the big reasons behind Oslo’s support for authoring textual DSLs with the new MGrammar language.

One idea that came up several times (and which I alluded to above) is the idea of allowing nested languages, in a similar way that Linq comprehensions live inside an isolated syntactic context.  C++ developers can redefine many operators in flexible ways, and this can lead to code that’s very difficult to read.  This can perhaps be blamed on the inability of the C++ language to provide alternative and more comprehensive syntactic extensibility points.  Operators are what they have to work with, so operators are what get used for all kinds of things, which change per type.  But their meaning gets so overloaded, literally, that they lose any obvious (context-free) meaning.

But operators don’t have to be non-alphabetic tokens, and new keywords or symbols could be introduced in limited contexts, such as a modifier for a member definition in a type (to appear alongside visibility, overload, override, and shadowing keywords), within a delimited block of code such as an r-value, or within a curly-brace block for new flow control constructs (one of my favorite ideas, and an area most in need of extensions).  Language extensions might also be limited in scope to specific assemblies, importing extensions explicitly, giving library authors the ability to customize their own syntax without imposing a mess on consumers of the library.

Another idea would be to allow the final Action delegate parameter of a function to be expressed as a curly-brace-delimited code block following the function call, in lieu of specifying the parameter within parentheses, and removing the need for a semicolon.  For example, with a method defined like this:

public static class Parallel
{
    // Action delegate defined last, to take advantage of C# syntactic sugar
    public static void For(long Start, long Count, Action Action)
    {
        // TODO: implement
    }
}

…a future C# compiler might allow you to write code like this:

Parallel.For(0, 10)
{
    // add code here for the Action delegate parameter
}

As Dr. T points out to me, however, the tricky part will consist of supporting local returns: in other words, when you call return inside that delegate’s code block, you really expect it to return from the enclosing method, not the one defined by the delegate parameter.  Support for continue or break would also make for a more intuitive fit.  If there’s one thing Microsoft does right, it’s language design, and I have a lot of confidence that issues like this will continue to be recognized and ultimately implemented correctly.  In reading their blogs and occasionally sharing ideas with them, it’s obvious they’re as passionate about the language and syntax as I am.

The key for language extensions, I believe, will be to provide more structured extensibility points for syntax (such as control flow blocks), instead of opening up the entire language for arbitrary modification.  As each language opens up some new aspect of its syntax for extension, a number of challenges will surface that will need to be dealt with, and it will be critical to solve these problems before continuing on with further evolution of the language.  Think of all we’ve gained from generics, and the challenges of dealing with a more complex type system we’ve incurred as a result.  We’re still getting updates in C# 4.0 to address shortcomings of generics, such as issues regarding covariance and contravariance.  Ultimately, though, generics were well worth it, and I believe the same will be said of metaprogramming and language extensions.

Looking Forward

I’ll have much more to say on this topic when I talk about Oslo and MGrammar.  The important points to take away from this are that mainstream language designers are taking these ideas to heart now, and there are so many ideas and options out there that we can and will experiment to find the right combination (or combinations) of both techniques and limitations to make metaprogramming and language syntax extensions useful, viable, and sustainable.
