Critical Development

Language design, framework development, UI design, robotics and more.

Archive for the ‘Distributed Architecture’ Category

Problems with RIA Services (Feedback for July 2009 CTP)

Posted by Dan Vanderboom on November 9, 2009

RIA Services (new home page) is a collection of tools and libraries for making Rich Internet Applications, especially line of business applications, easier to develop.  Brad Abrams did a great presentation of RIA Services at MIX 2009 that touches on querying, validation, authentication, and how to share logic between the server and client sides.  Brad also has a huge series of articles (26 as I write this) on using Silverlight and RIA Services to build a realistic application.

I love the concept of RIA Services.  Brad and his team have done a fantastic job of identifying the critical issues for LOB systems and have the right idea to simplify those common data access tasks through the whole pipeline from database to UI controls, using libraries, Visual Studio tooling, or whatever it takes to get the job done.

So before I lay down some heavy criticism of RIA Services, take into consideration that it’s still a CTP and that my scenario pushes the boundaries of what was likely conceived of for this product, at least for such an early stage.

Shared Data Model with WPF & Silverlight Clients

The cause of so much of my grief with RIA Services has been my need to share a data model, and access to a shared database, across WPF as well as Silverlight client applications.  Within the constraints of this situation, I keep running into problem after problem while trying to use RIA Services productively.

The intuitive thing to do is: define a single data model project that compiles to a single assembly, and then reference that in my Silverlight and non-Silverlight projects.  This would be a 100% full-fidelity shared data model.  As long as the code I wrote was a subset of both Silverlight and normal .NET Frameworks (an intersection), we could share identical types and write complex validation and model manipulation logic, all without having to constrain ourselves to work within the limitations of a convoluted code generation scheme.  Back when I wrote Compact Framework applications, I did this with great success despite the platform gap, and I didn’t have anything like RIA Services to help.

Incompatible Assemblies

Part of the problem arises because Silverlight assemblies are incompatible with non-Silverlight assemblies.  A lot of what RIA Services is doing is trying to find a way around this limitation: picking up attributes and code files from one project and inserting that code into the Silverlight project with a build action.  This Visual Studio “magic” has been criticized for its weakness in dealing with multiple-solution systems where Visual Studio can’t update the client because it’s not loaded, and I’ve heard there’s work being done to address this, but for my current needs, this magic aspect of it isn’t a problem.  The specifics of how it works, however, are.

Different Data Access APIs

Accessing entities requires a different API in Silverlight via RiaContextBase versus ObjectContext elsewhere.  Complex logic in the model (for validation and other actions against the model) requires access to other entities and therefore access to the current object context, but the context APIs for Silverlight and WPF are very different.  Part of this has to do with Silverlight’s inability to make synchronous calls to the server.

In significantly large systems that I build, I use validation logic such as “this entity is valid if it’s pointing to an entity of a different type that contains a PropertyX value of Y”.  One of my tables stores a tree of data, so I have methods for loading entire subtrees and ensuring that no circular references exist.  For these kinds of tasks, I need access to the data context in basic validation methods.  When I delete nodes from a tree, I need to delete child nodes, so update logic is part of the model that needs to be the same in every client.  I don’t want to define that multiple times for multiple clients.  I like to program very DRY.  In other words, I find myself in need of a shared model.

RIA Services doesn’t provide anything like type equivalence for a shared model, however.  Data model classes in Silverlight inherit from Entity, but from EntityObject in WPF.  In the RIA Services domain context, we call RaiseDataMemberChanged, but in a normal EF object context, we call ReportPropertyChanged.  In WPF, I can call MyEntity.Load(MergeOption.PreserveChanges), but in Silverlight there’s no Load method on the entity and no MergeOption enum.  In WPF I can query against context.SomeEntitySet, but in Silverlight you have to query against context.GetSomeEntitySetQuery() and then execute the query with another method call.
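
To make the disparity concrete, here’s a rough sketch of the same query on each side.  The entity and property names are hypothetical, and the calls simply mirror the APIs mentioned above rather than forming a complete, compilable example.

// WPF client, querying an Entity Framework ObjectContext (synchronous):
var results = context.SomeEntitySet
    .Where(e => e.PropertyX == "Y")
    .ToList();

// Silverlight client, querying a RIA Services domain context (asynchronous):
var query = context.GetSomeEntitySetQuery()
    .Where(e => e.PropertyX == "Y");
context.Load(query);    // results arrive later, once the asynchronous load completes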

This chasm of disparity makes all but the simplest shared model logic impractical and frustrating.  The code generation technique, though good in principle, keeps getting in the way.  For example, I have both parameterless and parameterized constructors in my entity classes.  This works great in my WPF client, but when this code is synchronized to my Silverlight client, I get an error because the Silverlight-side entity class is generated in two parts: in the hidden partial class, a parameterless constructor is generated which calls partial method OnCreated; and in the visible partial class, the constructor method I defined on the server is dumped into another file, so I have duplicate constructors.  If I remove the parameterless constructor from the server side, I get an error because my entity class requires a parameterless constructor (and defining a non-default constructor effectively removes the default one from the resulting type unless it’s explicitly defined).  I thought I could define the partial method OnCreated and put my construction logic in there, but the partial method is only defined on the client side.  That means sharing construction logic consists of copying and pasting the OnCreated method across the various clients—far from an ideal solution.
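
Here’s a rough sketch of the generated pattern being described; the entity name is hypothetical and the actual generated code differs in its details.

// Hidden, generated partial class (sketch):
public partial class Customer
{
    public Customer()               // generated parameterless constructor
    {
        OnCreated();
    }

    partial void OnCreated();       // declared only on the client side
}

// Visible partial class, where server-side code is copied in:
public partial class Customer
{
    // If the server also defines a parameterless constructor, copying it here
    // collides with the generated one above, which causes the duplicate constructor error.
    public Customer(string name)    // parameterized constructor from the server
    {
        Name = name;                // Name is assumed to be a generated entity property
    }

    // Sharing construction logic ends up meaning re-implementing OnCreated in every client:
    partial void OnCreated()
    {
        // copied construction logic goes here
    }
}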

Entity Data Model Required to be in Web Project

Another strategy I attempted was to define the .edmx file and my partial class extensions in a class library, and then reference that from the web project.  I could define the LinqToEntitiesDomainService<MyDataContext>, but sharing entity class code (by generating code in the Silverlight project) isn’t possible unless the .edmx file and partial class extensions are defined in the web project itself.  This would mean that my WPF client would have to reference a web project for data access, which by itself seems wrong.  (Or making a copy of the data model, which is worse.)  It would be better for the WPF client to talk to the same domain service as the Silverlight client, but RIA Services doesn’t give you an option to link that web project to a non-Silverlight project, so again I ran into a brick wall.

So Don’t Do That

The kind of advice I’m getting for this is, “so don’t do that”.  In other words, don’t write complex validation logic in the model or otherwise try to access the data context; don’t write parameterized constructors; don’t aim for 100% type fidelity across all endpoints of a system; don’t try to share data models with Silverlight and non-Silverlight projects, etc.  But I see the potential for RIA Services, so I have to push for these things unless I hear really convincing arguments against them (or compelling alternatives).

Conclusion

The fact that there are different data contexts and data item definitions within those contexts imposes a burden on the developer to use different techniques for each environment, and creates challenges for centralizing data model logic and reusing equivalent logic across different kinds of clients.  My gut feeling is that RIA Services in its current form has some fundamental design flaws that will need to be addressed, taking into consideration systems with a mix of Silverlight, WPF, and other clients, before it becomes a truly robust data access platform.

Posted in Data Structures, Design Patterns, Distributed Architecture, LINQ, RIA Services, Software Architecture | 3 Comments »

Multicasting with Silverlight 3 Local Messaging

Posted by Dan Vanderboom on April 29, 2009

[This article and the sample solution included were written with Silverlight 3 Beta.]

The very first thing I did to experiment with Silverlight 3’s new local messaging feature was to create an application with a listener name of “Everyone”, pop up multiple instances of the application, and try sending a message to all of the instances.  I got a nasty HRESULT E_FAIL exception message upon firing up the second instance.  I closed the application and restarted, only to find I got the same error message on the first instance as well (until I rebooted).

The problem was that a listener must have a unique name and I was violating that rule.  There are no groups, and multicasting to multiple receivers in the same group isn’t supported out of the box.  Because I didn’t dispose of the object, that name was never released.  This seems like a design flaw; when a Silverlight application instance ends, the Silverlight runtime should be able to detect that and release this name resource.
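
One way to avoid leaving the name registered is to dispose of the receiver when the application exits.  This is a minimal sketch, assuming the receiver is kept in a field and that LocalMessageReceiver (which implements IDisposable) releases its name registration on Dispose:

private LocalMessageReceiver messageReceiver;    // created and started at launch

private void Application_Exit(object sender, EventArgs e)
{
    // Disposing the receiver releases its listener name registration.
    if (messageReceiver != null)
        messageReceiver.Dispose();
}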

When I heard about this local message passing ability, my first thought was that it would create a neat opportunity, especially in out-of-browser applications, for multiple-window applications.  This would be great for those of us who use multiple monitors, as we could then slide panels around where we wanted them, taking full advantage of our workspace.

My sample application, which you can download here, consists of a TextBox, a Submit button that sends the content of that TextBox to all the other instances, and a TextBlock that displays important events.  The first time the application runs, it identifies itself as the master window.  All subsequent application runs identify themselves as child windows.  Here’s a screenshot of this application running out-of-browser:

Screenshot

To accomplish this, the master window will need to have a well-known name.  I chose MyApp/Master to identify both the application and the window name.  Each of the child windows requires a unique name, so I chose the format MyApp/{guid}.  Once an instance realizes there’s already a master window, it gives itself a child window GUID name and then registers that name with the master window.  When a child instance exits, it unregisters itself with the master instance.  And finally, when the master window exits, it informs all of the child windows (so they can shut down, most likely).

I defined several static members in the App class itself, so they would be visible across pages, and also because I wanted to hook into Application_Exit and needed access from there.

public const string MasterWindowName = "MyApp/Master";
public static string WindowName;
public static Guid WindowID;
public static List<Guid> ChildWindows;
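
With those members in place, here is one way the master/child decision described above could be made at startup.  This is a sketch rather than the sample’s exact code, and it assumes Listen throws a ListenFailedException when the name is already registered.

// Try to claim the well-known master name; if it's taken, become a child window.
var receiver = new LocalMessageReceiver(App.MasterWindowName);
receiver.MessageReceived += MessageReceiver_MessageReceived;

try
{
    receiver.Listen();
    App.WindowName = App.MasterWindowName;
}
catch (ListenFailedException)
{
    // The master name is already registered, so take a unique child name instead.
    App.WindowID = Guid.NewGuid();
    App.WindowName = "MyApp/" + App.WindowID;

    receiver = new LocalMessageReceiver(App.WindowName);
    receiver.MessageReceived += MessageReceiver_MessageReceived;
    receiver.Listen();

    // Register with the master window.
    new LocalMessageSender(App.MasterWindowName).SendAsync("NewWindow:" + App.WindowID);
}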

Hooking into the LocalMessageReceiver’s MessageReceived event, I looked for specific keywords in a protocol I quickly cooked up, and in most cases extracted a parameter by parsing the message string.  These commands are NewWindow, CloseWindow, MasterWindowClosed, and UpdateText.

void MessageReceiver_MessageReceived(object sender, MessageReceivedEventArgs e)
{
    if (e.Message.StartsWith("NewWindow:"))
    {
        if (App.ChildWindows == null)
            App.ChildWindows = new List<Guid>();

        Guid NewWindowID = new Guid(e.Message.Substring("NewWindow:".Length));

        App.ChildWindows.Add(NewWindowID);

        Log("New window detected, id = " + NewWindowID.ToString());

        return;
    }

    if (e.Message.StartsWith("CloseWindow:"))
    {
        var id = new Guid(e.Message.Substring("CloseWindow:".Length));

        if (App.WindowName == App.MasterWindowName)
            App.ChildWindows.Remove(id);

        Log("Closing window, id = " + id.ToString());
        
        return;
    }

    if (e.Message == "MasterWindowClosed" && App.WindowName != App.MasterWindowName)
    {
        Log("Master Window Closed");
        return;
    }

    if (e.Message.StartsWith("UpdateText:"))
    {
        var text = e.Message.Substring("UpdateText:".Length);

        txtName.Text = text;

        // if this is the master window, then distribute to all child windows
        if (App.WindowName == App.MasterWindowName)
            UpdateTextMulticast(text);

        return;
    }
}

As you can see, most of this is simple housekeeping code to track which application instances or windows are currently connected to the master window.  The UpdateText command calls the UpdateTextMulticast method, which looks like this:

private void UpdateTextMulticast(string Text)
{
    foreach (var id in App.ChildWindows)
    {
        var MessageSender = new LocalMessageSender("MyApp/" + id.ToString());
        MessageSender.SendAsync("UpdateText:" + Text);
    }
}

If the window is a child window, clicking Submit sends a message to the master window; if it’s the master window, clicking Submit calls the UpdateTextMulticast method directly.

private void btnSubmit_Click(object sender, RoutedEventArgs e)
{
    if (App.WindowName == App.MasterWindowName)
    {
        UpdateTextMulticast(txtName.Text);
    }
    else
    {
        var MessageSender = new LocalMessageSender(App.MasterWindowName);
        MessageSender.SendAsync("UpdateText:" + txtName.Text);
    }
}

Finally, this is how a window alerts other windows that it’s closing (in App.xaml.cs):

private void Application_Exit(object sender, EventArgs e)
{
    if (WindowName != MasterWindowName)
    {
        var MessageSender = new LocalMessageSender(MasterWindowName);
        MessageSender.SendAsync("CloseWindow:" + WindowID);
    }
    else if (ChildWindows != null)
    {
        foreach (var id in ChildWindows)
        {
            var MessageSender = new LocalMessageSender("MyApp/" + id.ToString());
            MessageSender.SendAsync("MasterWindowClosed");
        }
    }
}

That’s about all there is to it.  Admittedly, having the same Silverlight application act as both master and child window might not be the best arrangement, and it certainly adds a little to the complexity of the code, but the sky is the limit as far as how the new local messaging feature of Silverlight 3 could be used.

What I’d really like to see is some kind of WCF customization that could define WCF services and host them specifically for consumption across this local messaging channel.  Doing so would eliminate the need for defining and parsing a protocol as I’ve done in this example, as WCF could handle the serialization and service method invocation.

Posted in Design Patterns, Distributed Architecture, Silverlight, User Interface Design | 3 Comments »

Why Oslo is Important

Posted by Dan Vanderboom on January 17, 2009

Contrary to common misunderstanding and speculation, the point of Oslo is not to put programming in the hands of business analysts who want to write their own business rules.  Do I think some of that will happen?  Architects and engineers will try everything they can imagine.  Some of them will succeed in specific niches or scenarios, but it won’t replace application or system design, and it will probably be very limited for the foreseeable future.  Oslo is more about dramatically improving the productivity of designers and developers by generalizing common solution patterns and generating more adaptable tools.

PDC Keynote

Much of the confusion around Oslo occurs for two reasons:

  1. Oslo is designed at a higher level of abstraction than most systems today, so its scope is broad and it will have an impact on virtually every product, solution and service across Microsoft.  It’s difficult to get your head around something that big.
  2. Because of its abstract nature, core concepts are defined in terms that are heavily overloaded, like "Model", "Repository", and "Language".  Once you’ve picked up the lingo and can translate Oslo terminology into language you’re already familiar with, both the concept and magnitude of it will become obvious.

Oslo isn’t something completely new; in fact, Oslo borrows from a lot of previous research and even existing model-driven development tools.  Oslo focuses existing technologies and techniques into a coherent and mature vision of development, combining all parts into a more powerful whole, and promises to deliver a supremely adaptable and efficient platform to develop on.

What Is Oslo?

Oslo is a software factory for generating first-class, tool-supported languages out of your declarative specifications.

A factory is a highly organized production facility
that produces members of a product line
using standardized parts, tools and production processes.

-from a review of Software Factories

The product line is analogous to Oslo’s parsers, transform tools, and IDE plugins for new data models and languages (both textual and visual) that you define.  The standardized parts are Oslo’s library components; the tools are the M languages and the Quadrant/Intellipad application; and the processes are shaped by the flow of data through the Oslo tool chain (see the diagram near the end of this article).

With Oslo, you build the custom tools you need to rapidly build or generate software systems.  It’s all about using the right tool for the job, and having a say in how those tools are shaped to obtain the greatest leverage.

As stated at the home page of softwarefactories.com:

We see a capacity crisis looming. The industry continues to hand-stitch applications distributed over multiple platforms housed by multiple businesses located around the planet, automating business processes like health insurance claim processing and international currency arbitrage, using strings, integers and line by line conditional logic. Most developers build every application as though it is the first of its kind anywhere.

In other words, there’s already a huge shortage of experienced, highly-qualified professionals capable of ensuring the success of these increasingly complex systems, and with the need (and complexity) growing exponentially, our current development practices increasingly fall short of the total demand.

Books like Greenfield’s Software Factories have been advocating building at a higher level of abstraction for years, and my initial reaction was to see it as a natural, evolutionary milestone for a highly mature software system.  However, it’s an awful lot of focused development effort to attain such a level of maturity, and not many organizations are able to pull it off given the state of our current development platforms.

It’s therefore fortuitous that Microsoft teams have taken up the challenge of building these abilities into their .NET platform.  After all, that’s where it really belongs: in the framework.

Unexpected Awesomeness

Oslo of course contains a lot of expected awesomeness, but where it will probably have the most impact in terms of developer productivity is with new first-class languages and language tools.  Why?  It first helps to understand the world of data formats and languages.

We’ve had an explosion of data formats–these mini Domain Specific Languages, if you will (especially in the form of complex configuration files).  As systems evolve and scale, and the ways we can configure and compose our application’s behavior continue to grow, at what point do we perceive that configuration graph as the rich language that it becomes?  Or when our user interfaces evolve from Monolithic to Modular to Composite to Granular Composite (or User Composable), at what point does that persistent object graph become our UX DSL (as with XAML in WPF)?

Sometimes we set our standards too low, or are slow to raise them when the time has come to do so.  With XML we get extensibility in defining languages and we think, "If we can parse it, then we can build a tool over it."  I don’t know about you, but I’d much rather work with rich client software–some kind of designer–over a textual data format any day.

But you know how things go: some company like Microsoft builds a whole bunch of cool stuff, driven off some XML configuration, or they unleash something like XAML on which WPF, WF, and more are built.  XAML is great for tools to read and write, and although XML and XAML are textual and not binary and therefore human readable in a text editor (the original intention behind that term), it’s simply not as easy to read as C# or VB.NET.  That’s why we aren’t all rushing to program everything in XAML.

Companies like Microsoft, building from the bottom up, release their platforms well in advance of the thick client user experiences that make them enjoyable to use and which encourages mass adoption.  Their models, frameworks, and applications are so large now that they’re released in massively differentiated stages, producing a technology adoption gap.

By giving that language a syntax other than XML, however, we can approach it in the same way we approach our program logic: in the most human readable and aesthetically-pleasant way we can devise, resembling our programming languages of choice.

Sometimes, the density of data and its structure in our model is such that a visual editor fails to represent that model well.  Source code is a case in point.  You could create a visual designer to visualize flow control, branching logic, and even complex expression building (like the iTunes Smart Playlist), but code in text format is more appropriate in this kind of scenario, and ends up being more efficient with the existing tooling available.  Especially with an IDE like Visual Studio, we’re working with human-millennia of effort that have gone into the great code editing tools we use today.  Oslo respects this need for choice by offering support for building both visual and textual DSLs, and recognizes the fluent definition of new formats and languages as the bridge to the next quantum leap in productivity.

If we had an easy way of defining languages in formats that we developers felt comfortable working with–as we’re comfortable with our general purpose languages and their rich tool support–then we’d be much more productive in the transition between a technology first being released and later having rich tool support over it.  WPF has taken quite a while to be adopted as much as it has, partly due to tool availability and maturity.  Before the Expression Blend and Cider designers were released, hand-coding XAML was the only way, and those who braved the angle brackets struggled with it.  As I play with Silverlight, I realize how much must still be done in XAML, and how we still struggle.  It’s simply not as nice to work with as my C# code.  Not as rich, and not as strongly tool-supported.

That’s one place Oslo provides value.  With the ability to define new textual and visual DSLs, rigorous verification and validation in a rich set of tools, the promise of Intellisense, colorization of keywords, operators, constants, and more, the Oslo architects recognize the opportunity to enhance our development experience in a language-agnostic way.  They are raising the level of abstraction because, as they say, the way to solve any technical problem is to approach it at one higher level of indirection.  Unfortunately, this makes Oslo so generalized and abstract that it’s difficult to grasp and therefore to appreciate its immensity.  Once you can take a step back and see how it fits in holistically, you’ll see that it has the potential to dramatically transform the landscape of software development.

Currently, it’s a lot of work to implement all the language services in Visual Studio to give them as rich an experience as we’ve come to expect with C#, VB.NET, and others.  This is a serious impediment to doing this kind of work, so solving the problem at the level of Oslo drastically lowers the barrier to entry for implementing tool-supported languages.  The Oslo bits I’ve seen and played with are very early in the lifecycle for this massive scope of technology, but the more I think about its potential, the more impressed I am with the fundamental concept.  As Chris Anderson explained in his PDC session on MGrammar, MGrammar was an implementation detail, but sometime around June 2007, that feature team realized just how much customers wanted direct access to it and decided to release MGrammar to the world.

Modeling & The Repository

That’s all well and good for DSLs and language enthusiasts/geeks, but primarily perhaps, Oslo is about the creation, exploration, relation, and execution of models in an interoperable way.  In other words, all of the models that are currently used to describe a software system, or an entire IT environment, are either not encoded formally enough to verify or execute, or they’re encoded or stored in proprietary ways that don’t allow interoperability with other models.  A diagram in Visio or PowerPoint documenting network topology, for example, knows nothing about the component architecture or deployment model of the software systems installed and running on that network.

When people usually talk about models, they imagine high-level architecture documents, overviews used to visually summarize work that is much more granular in nature.  These models aren’t detailed, and they normally aren’t kept up to date and in sync with the current design as changes are made.  But modeling in Oslo is not an attempt to make these visual models contain all of the necessary detail, or to develop software with visual tools exclusively.  Oslo simply provides the tools, both graphical and textual, to define and relate many models.  It will be up to the development community to decide how all these tools are ultimately used, which parts of our systems will be specified in a mix of general purpose, domain specific, and visual languages.  Ultimately, Oslo will provide the material and glue to fill the gaps between the high and low level specifications, and unite them into a common, connected, and much more useful set of data.

To grasp what Oslo modeling is really all about requires that we expand our definition of "model", to see the models expressed in our configuration and XAML files, in our applications’ database schemas, in our entity classes, and so on.  As software grows in complexity and becomes more composable, we can use various languages to model its behavior, store that in the repository for runtime execution, inspection, or reuse by other systems.

This funny and clever Oslo video (reminiscent of The Hitchhiker’s Guide to the Galaxy) explains modeling in the broader sense alluded to here.

If we had some universal container for the storage of all different kinds of models, and a standardized way of relating entities across models, we’d be able to do things like impact analysis, where we could see the effect on software systems if someone were to alter the network it was running on; or powerful data mining on the IT execution environment of a business.

Many different tools, with different audiences, will be able to connect into this repository to manipulate aspects of the models that they understand and have access to.  This is just the tip of the iceberg.  We already model so much of what we do in the IT and software worlds, and as we begin adopting business process middleware and orchestration software like BizTalk, there’s a huge amount of value in those models converging and connecting.  That’s where the Oslo Repository comes in.

Oslo provides interoperability among models in the same way that SOA provides interoperability among services.  Not unlike the interoperability we have now among many different languages all sharing the same CLR specification.

Bridging data models across repositories or in a shared repository is a major step forward.  With Windows Azure and Microsoft’s commitment to their online services platform (and considering the momentum of the SaaS movement with Amazon, Google, and others), shared storage and data sets are the future.  (Check out SQL Data Services if you haven’t already, and watch for some exciting announcements coming later this year!)

The Dichotomy of Data vs. Metadata

Jeff Pinkston from the Oslo team aptly reflects the attitude of the group when he scoffs at the categorical difference between data and metadata.  In terms of storing and querying it, serializing and communicating it, and everything else that matters in enterprise software, data is data and there’s no reason not to treat it the same when it comes to architecting a system.  We have our primary models and our secondary models, our shared models and our protected models, but they’re still just models that shape our software’s behavior, and they share all of the same characteristics when it comes to manipulation and access.  It’s their ultimate effect that differs.

It’s worth noting, I think, the line that’s been drawn between code and data in some programming languages and not in others (C# vs. LISP).  A division has been made for the sake of security rather than necessity.  Machine instruction codes are represented in the same sort of binary data and realized in the same digital circuitry as traditional user data.  It’s tempting to keep things locked down and divided, but as languages evolve to become more late bound and dynamic (and as the tools evolve to make this feasible), there will be more need for the manipulation of expression trees and ASTs.  I strongly suspect the lines will blur until they disappear.
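
C# 3.0’s expression trees already hint at this blurring: a lambda captured as an expression tree is ordinary data that can be inspected and transformed before it’s compiled into executable code.  A minimal sketch:

using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // Code captured as a data structure rather than compiled immediately.
        Expression<Func<int, int>> doubler = x => x * 2;

        // Inspect the tree like any other object graph.
        var body = (BinaryExpression)doubler.Body;
        Console.WriteLine(body.NodeType);      // Multiply
        Console.WriteLine(body.Left);          // x

        // Turn the data back into executable code on demand.
        Func<int, int> compiled = doubler.Compile();
        Console.WriteLine(compiled(21));       // 42
    }
}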

Schema and Object Instance Languages

In order to define models, we need a tool.  In Oslo, this is a textual language called MSchema and an editor called Intellipad.  I personally think it’s odd to talk people’s ears off about "model, model, model", and then to use the synonym "schema" to name the language, but all of these names could change before they’re shipped for all we know.

This is a simple example of an MSchema document:

module MyModel
{
    type Person
    {
        LastName : Text;
        FirstName : Text;
    }

    People : Person*;
}

By running this through the "M Compiler", a SQL script is generated that will create the appropriate database objects.  Intellipad is able to verify the correctness of your schema, and what’s really nice is that you don’t even have to specify data types when you start sketching out your model.  Defaults are assumed, and you can get more specific as your model evolves.

MGraph is a language for defining instances of objects, constrained by an MSchema and similar in format.  So MSchema is to MGraph what XSD is to XML.

In this article, Lars Corneliussen explains Microsoft’s vision to make MGraph as common as XML is today.  Take a look at his article to see a side-by-side comparison of the same object represented as XML (POX), JSON, and MGraph, and decide for yourself which you like best (or see below).

MSchema and MGraph are easier and more efficient to read and write than XML.  Their message format resembles typical structured programming languages, and developers are already familiar with these formats.  XML is a fine format for a tool; it’s human readable but not human-friendly.  A C-style language, on the other hand, is much more human-friendly than all of the angle brackets and the redundancy (and verbosity) of tag text.  That narrows down our choice to JSON and MGraph.

In JSON, the property/field/attribute names are delimited by quotation marks, suggesting that the whole structure is a dumb property bag.

{
    "LastName" : "Vanderboom",
    "FirstName" : "Dan"
}

MGraph has a very similar syntax, but its attribute property names are recognized and validated by the parser generated from MSchema, so the quotation marks are unnecessary.  It ends up looking more natural, and a little more concise.

{
    LastName : "Vanderboom",
    FirstName : "Dan"
}

Because MGraph is just a message format, and Microsoft’s service offerings already support multiple message formats (SOAP/POX/JSON/etc.), it wouldn’t disrupt any of their architecture to add an MGraph adapter, and I’ll be shocked if I don’t hear about one in their next release.

Meta-Languages and MGrammar

In the same way that Oslo includes a meta-model because it allows us to define models, it also includes a meta-language because it allows us to define languages (as YACC and ANTLR have done).  However, just as Pinkston doesn’t think data and metadata should be treated differently, it makes sense to think of a language that defines languages as just another language.  There is something Zen about that, where the tools somehow seem to bend back upon themselves like one of Escher’s drawings.

DrawingHands

Here is an example language defined by MGrammar in a great article on MSDN called MGrammar in a Nutshell:

module SongSample
{
    language Song
    {
        // Notes
        token Rest = "-";
        token Note = "A".."G";
        token Sharp = "#";
        token Flat = "b";
        token RestOrNote = Rest | Note (Sharp | Flat)?;

        syntax Bar = RestOrNote RestOrNote RestOrNote RestOrNote;
        syntax List(element)
          = e:element => [e]
          | es:List(element) e:element => [valuesof(es), e];

        // One or more bars (recursive technique)
        syntax Bars = bs:List(Bar) => Bars[valuesof(bs)];
        syntax ASong = Music bs:Bars => Song[Bars[valuesof(bs)]];
        syntax Songs = ss:List(ASong) => Songs[valuesof(ss)];

        // Main rule
        syntax Main = Album ss:Songs => Album[ss];

        // Keywords
        syntax Music = "Music";
        syntax Album = "Album";

        // Ignore whitespace
        syntax LF = "\u000A";
        syntax CR = "\u000D";
        syntax Space = "\u0020";

        interleave Whitespace = LF | CR | Space;
    }
}

This is a pretty straightforward way to define a language and generate a parser.  Aside from the obvious keywords to define syntax rules and token patterns (with an alternative and more readable format for regular expressions), the => projection operator allows you to shape the MGraph output according to your needs.

I created two simple languages with MGrammar on the plane trip back to Milwaukee from the PDC in November.  The majority of my time was spent fussing with the editor, Intellipad, and for the last half hour I found it very easy to create a language on the fly, extending and changing it through experimentation quickly and easily.  Projections, which are functional expressions in MGrammar used to shape MGraph output, are the most challenging part.  There are a number of techniques that shape the output graph, so it will be good to see how this is approached in future reference examples.

Just before I wrote this, Mike Weinhardt at Microsoft quietly indicated that a gallery of example grammars for MGrammar is being put together, pointing to sample grammars for various languages in addition to grammars that the community develops, and that it should be available by the end of this month.  These examples, demonstrating how to define languages and write sensible projections and coming from the developers who are putting MGrammar together, will be an invaluable tool for learning common patterns (just as 101 LINQ Samples did for LINQ).

As Doug Purdy explained on .NET Rocks: "People who are building a domain specific language, and they don’t want to understand how to build a parser, or they’re not language designers.  Actually, they are language designers.  They design a language, but they actually don’t do the whole thing.  They don’t build a parser.  What they do, they just leverage the XML parser.  And what we’re trying to do is provide a toolset for folks where they don’t have to resort to XML in order to do DSLs."

From the same episode, Don Box said of the DSL session at PDC: "I’ve never seen a session with more geek porn in it."

Don: "It’s like crack for developers.  It’s kind of addictive; it takes over your life."

Doug: "If you want the power of Anders in your hand…"

The Tool Chain

Now that we have a better sense of what’s included in Oslo in terms of languages, editors, and the shared repository, we can look at the relationship among the other pieces, which are manifested in the CTP as a set of command-line tools.  In the future, these will integrate into an IDE, most likely Visual Studio.  (I’d expect Intellipad and Quadrant to merge with Visual Studio, but there’s no guarantee this will happen.)

When you create your model with MSchema, you’ll use m to validate that model and generate a SQL script to create a SQL Server 2008 database schema (yes, it only works right now with SQL Server 2008).  You’ll also use the m command to validate your object graph (written in MGraph) against your schema, and translate that into a set of SQL commands to perform inserts and updates against tables.

With enough models, there’ll be huge value in adding yours to the repository.  If you don’t mind writing MGraph or you generate it automatically with something like an MGraphSerializer class in your code, this may be all you need.
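
No such MGraphSerializer ships with the SDK as far as I know, but a reflection-based sketch shows how little it would take to emit MGraph-shaped text for simple objects; the class name and output details here are assumptions, not part of any shipped Oslo API.

using System;
using System.Linq;
using System.Reflection;

public static class MGraphSerializer
{
    // Walks an object's public properties and emits MGraph-style text.
    // Handles only flat objects with simple property values; a real
    // serializer would deal with nesting, collections, and type mapping.
    public static string Serialize(object instance)
    {
        var properties = instance.GetType()
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .Select(p => string.Format("    {0} : \"{1}\"", p.Name, p.GetValue(instance, null)))
            .ToArray();

        return "{\n" + string.Join(",\n", properties) + "\n}";
    }
}

// Serialize(new Person { LastName = "Vanderboom", FirstName = "Dan" }) produces
// text resembling the MGraph sample shown earlier.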

If, on the other hand, you decide you could really benefit by defining your own textual language to use instead of MGraph, you can use MGrammar to define a new language.  This language gets compiled by the mg compiler to create your parser, and the mgx command translates code in your new language into an MGraph, which can then be pulled into your database using m.

This diagram depicts the process:

image

Other than these command-line tools, Quadrant is the highly extensible visual tool for exploring models graphically, and Intellipad is a different face on the same shell for defining DSLs with MGrammar and writing DSL code, as well as writing and verifying MSchema and MGraph code.

We should see fairly soon the convergence of these three languages (MGraph, MSchema, and MGrammar) into a single M language.  This makes sense, since what you want to project in your DSL should be something within your model, verified by your schema.  This may ultimately make these projections much easier to write.

We’ll also see this tool chain absorbed into multiple development environments, eventually with rich binding across multiple representations of our model, although this will take longer in Visual Studio.

Languages and Nested Languages

I looked at some MService examples, and I can understand Damon’s concern: although it’s nice to have "operation" as a keyword in a service-oriented language, with more keywords giving you the ability to specify aspects of each endpoint and the required communication patterns, enclosing the business logic within that service language is probably not a good idea.  I took this from Dennis van der Stelt’s blog:

service Service
{
  operation PhotoUpload(stream : Stream) : Text
  {
    .PostUriTemplate = "upload";

    index : Text = invoke DateTime.Now.Ticks.ToString();
    filename : Text = "d:\\demo\\photo\\" + index + ".jpg";
    invoke MService.ServiceHelper.StoreInFile(stream, filename);

    return index;
  }
}

Why not?  You’re defining a general purpose language within the curly braces, one capable of defining variables, assigning values, referencing .NET objects, and calling methods.  But why do you want to learn a new language to write services when the language you’re using right now is already supremely capable of that?  Don’t you already know a good syntax for invoking methods (other than "invoke %method%")?  If instead you simply referenced an assembly, type, and method from an MService script, you could externally turn any .NET method with serializable parameters and return value into a service operation by feeding it this kind of file, without having to recompile, and without having to reinvent the wheel.
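
Here’s a sketch of that externalized approach.  The assembly path, type name, and method name would come from an MService-style description rather than being hard-coded, and everything named below is a placeholder.

using System;
using System.Reflection;

public static class OperationDispatcher
{
    // Invokes an existing .NET method described externally, so the logic stays
    // in the general purpose language where it was originally written.
    public static object Invoke(string assemblyPath, string typeName,
                                string methodName, params object[] args)
    {
        Assembly assembly = Assembly.LoadFrom(assemblyPath);
        Type type = assembly.GetType(typeName, true);
        MethodInfo method = type.GetMethod(methodName);

        object instance = method.IsStatic ? null : Activator.CreateInstance(type);
        return method.Invoke(instance, args);
    }
}

// e.g. OperationDispatcher.Invoke(@"MService.dll", "MService.ServiceHelper",
//                                 "StoreInFile", stream, filename);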

The possible exception would be if MGrammar adds the ability (as discussed by speakers at the PDC) to support multiple layers of enclosing languages within other languages.  In other words, you could use MService to define operations and their attributes using its own syntax, and within the curly braces that follow, use the C# or VB.NET parsers to process the logic with the comprehension of a separate language.  There are some neat possibilities here, but I expect the development community to be conservative and hesitant about mixing layers of semantics, as there is an awful lot of room for confusion and complexity.  It may be better to leave different language blocks in separate files or containers, and to allow them to reference each other as .NET assemblies and XML files reference each other today.

However, I wouldn’t get too hung up on the early versions of these new languages, or any one language specifically.  The useful, sensible ones that take real developer needs into account and provide the most value will be adopted, and many more will quickly fall into disuse.  But the overall pattern will be for the emergence of an amazing amount of leverage in terms of improving human comprehension and taking advantage of our ability to manipulate structured, symbolic object graphs to build and verify software systems.

Resources

After a few months of research and many hours of writing, I don’t feel like I’ve even scratched the surface.  But instead of giving you an absolutely comprehensive picture, I’m going to stop here and continue in future articles.  In the meantime, check out the following resources.

For an overview of the development paradigm, look for information on language-oriented programming, including an article I wrote that alludes to how "we will have to raise the level of abstraction to a point that may be hard for us to imagine with our existing tools and languages" due to the "precipitous growth of software complexity".  The "community of abstractions" is the model in Oslo-speak.

For Microsoft specific content: there were some great sessions at the PDC (watch the recorded videos).  It was covered (with much confusion) on the .NET Rocks! podcast (here and here) as well as on Software Engineering Radio; and there are lots of bloggers talking about their initial experiences with it, such as Shawn Wildermuth, Lars Corneliussen, and of course Chris Sells and Jeff Pinkston.  The most clear and coherent explanation I’ve heard was from an interview with Ron Jacobs and David Chappell (Ron gave the keynote at MSDN Dev Con and hosted the ARCast podcast for years).  MSDN has at least 29 videos on the Oslo Developer Center, where there’s a good amount of information, including a FAQ.  There’s also the online guide for MGrammar, MGrammar in a Nutshell, and the Oslo team blog.

If you’re interested in creating DSLs, make sure to keep a look out for details about the upcoming DSL Developers Conference, which is tentatively planned for April 16-17, immediately following the Lang.NET conference (on general purpose languages) on April 14-16.  I’m hoping to be at both this year.  And in case you haven’t heard, Microsoft is planning another PDC Conference for 2009, the first time ever these conferences have run for two consecutive years!  There will no doubt be much more Oslo news and conference material to cover it at the PDC in November.

Pluralsight, an instructor-led training company, now teaches a two-day "Oslo" Fundamentals course (and Don Box’s blog is hosted there).

The best way to learn about Oslo, however, is to dive in and use it.  That’s what I’m doing with my newest system, which needs to be modeled from scratch.  So if you haven’t done so already, download the Oslo SDK (link updated to January 2009 SDK) and introduce yourself to the future of modeling and development!

[Click here for the next article in this Oslo series, on common misconceptions and fallacies about Oslo.]

Posted in Data Structures, Development Environment, Distributed Architecture, Language Extensions, Language Innovation, Metaprogramming, Oslo, Problem Modeling, Service Oriented Architecture, Software Architecture, SQL Data Services, Visual Studio, Windows Azure | 43 Comments »

MSDN Developer Conference in Chicago

Posted by Dan Vanderboom on January 13, 2009

I just got home to Milwaukee from the MSDN Developer Conference in Chicago, about a two-hour drive.  I knew that it would be a rehash of the major technologies revealed at the PDC, which I attended in November, so I wasn’t sure how much value I’d get out of it, but I had a bunch of questions about their new technologies (Azure, Oslo, Geneva, VS2010, .NET 4.0, new language stuff), and it just sounded like fun to go out to Fogo de Chao for dinner (a wonderful Brazilian steakhouse, with great company).

So despite my reservations, I’m glad I went.  I think it also helped that I’ve had since November to research and digest all of this new stuff, so that I could be ready with good questions to ask.  There’ve been so many new announcements, it’s been a little overwhelming.  I’m still picking up the basics of Silverlight/WPF and WCF/WF, which have been out for a while now.  But that’s part of the fun and the challenge of the software industry.

Sessions

With some last minute changes to my original plan, I ended up watching all four Azure sessions.  All of the speakers did a great job.  That being said, “A Lap Around Azure” was my least favorite content because it was so introductory and general.  But the opportunity to drill speakers for information, clarification, or hints of ship dates made it worth going.

I was wondering, for example, if the ADO.NET Data Services Client Library, which talks to a SQL Server back end, can also be used to point to a SQL Data Services endpoint in the cloud.  And I’m really excited knowing now that it can, because that means we can use real LINQ (not weird LINQ-like syntax in a URI).  And don’t forget Entities!
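
For reference, this is roughly what that client-side LINQ looks like with the ADO.NET Data Services client library; the service URI and the Customer type are placeholders, and how completely the operators translate against a SQL Data Services endpoint is exactly the kind of detail still being worked out.

using System;
using System.Data.Services.Client;
using System.Linq;

class Customer
{
    public string Name { get; set; }
    public string State { get; set; }
}

class DataServicesSketch
{
    static void Run()
    {
        var context = new DataServiceContext(new Uri("https://example.com/MyService.svc"));

        // The LINQ provider translates this into the service's URI query syntax.
        var wisconsinCustomers =
            from c in context.CreateQuery<Customer>("Customers")
            where c.State == "WI"
            select c;

        foreach (var customer in wisconsinCustomers)
            Console.WriteLine(customer.Name);
    }
}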

I also learned that though my Mesh account (which I love and use every day) is beta, there’s a CTP available for developers that includes new features like tracking of Mesh Applications.  I’ve been thinking about Mesh a lot, not only because I use it, but because I wanted to determine if I could use the synchronization abilities in the Mesh API to sync records in a database.

<speculation Mode=”RunOnSentence”>
If Microsoft is building this entire ecosystem of interoperable services, and one of them does data storage and querying (SQL Data Services), and another does synchronization and conflict resolution (Mesh Services)–and considering how Microsoft is making a point of borrowing and building on existing knowledge (REST/JSON/etc.) instead of creating a new proprietary stack–isn’t it at least conceivable that these two technologies would at some point converge in the future into a cloud data services replication technology?
</speculation>

I’m a little disappointed that Ori Amiga’s Mesh Mobile wasn’t mentioned.  It’s a very compelling use of the Mesh API.

The other concern I’ve had lately is the apparent immaturity of SQL Data Services.  As far as what’s there in the beta, it’s tables without enforceable schemas (so far), basic joins, no grouping, no aggregates, and a need to manually partition across virtual instances (and therefore to also deal with the consequences of that partitioning, which affects querying, storage, etc.).  How can I build a serious enterprise, Internet-scale system without grouping or aggregates in the database tier?  But as several folks suggested and speculated, Data Services will most likely have these things figured out by the time it’s released, which will probably be the second half of 2009 (sooner than I thought).

Unfortunately, if you’re using Mesh to synchronize a list of structured things, you don’t get the rich querying power of a relational data store; and if you use SQL Data Services, you don’t get the ability to easily and automatically synchronize data with other devices.  At some point, we’ll need to have both of these capabilities working together.

When you stand back and look at where things are going, you have to admit that the future of SQL Data Services looks amazing.  And I’m told this team is much further ahead than some of the other teams in terms of robustness and readiness to roll out.  In the future (post 2009), we should have analytics and reporting in the cloud, providing Internet-scale analogues to their SQL Analysis Server and SQL Reporting Services products, and then I think there’ll be no stopping it as a mass adopted cloud services building block.

Looking Forward

The thought that keeps repeating in my head is: after we evolve this technology to a point where rapid UX and service development is possible and limitless scaling is reached in terms of software architecture, network load balancing, and hardware virtualization, where does the development industry go from there?  If there are no more rungs of the scalability ladder we have to climb, what future milestones will we reach?  Will we have removed the ceiling of potential for software and what it can accomplish?  What kind of impact will that have on business?

Sometimes I suspect the questions are as valuable as the answers.

Posted in ADO.NET Data Services, Conferences, Distributed Architecture, LINQ, Mesh, Oslo, Service Oriented Architecture, SQL Analysis Services, SQL Data Services, SQL Reporting Services, SQL Server, Virtualization, Windows Azure | 1 Comment »

Windows Azure: Internet Operating System

Posted by Dan Vanderboom on October 27, 2008


As I write this during the keynote of PDC 2008, Ray Ozzie is announcing (with a great deal of excitement) a new Windows-branded Internet operating system called Windows Azure.  Tipping their hats to work done by Amazon.com with their EC2 (Elastic Compute Cloud) service, Microsoft is finally revealing their vision for a world-wide, scalable, cloud services platform that looks forward toward the next fifty years.

Windows Azure, presumably named for the color of the sky, is being released today as a CTP in the United States, and will be rolled out soon across the rest of the planet.  This new Internet platform came together over the past few years after a common set of scalable technologies emerged behind the scenes across MSN, Live, MSDN, XBox Live, Zune, and other global online services.  Azure is the vehicle for delivering that expertise as a platform to Microsoft developers around the world.

SQL Server Data Services, now called SQL Services (storage, data mining, reporting), .NET Services (federated identity, workflows, etc.), SharePoint Services, Live Services (synchronization), and Dynamics/CRM Services are provided as examples of services moving immediately to Windows Azure, with many more on their way, both from Microsoft and from third parties such as Bluehoo.com, a social networking technology using Bluetooth on Windows Mobile cell phones to discover and interact with others nearby.

The promise of a globally distributed collection of data centers offering unlimited scalability, automatic load-balancing, intelligent data synchronization, and model-driven service lifecycle management is very exciting.  Hints of this new offering could be seen from podcasts, articles, and TechEd sessions earlier this year on cloud services in general, talk of Software-as-a-Service (SaaS), Software+Services, the Internet Service Bus, and the ill-named BizTalk Services.

The development experience looks pretty solid, even this early in the game.  Testing can be done on your local development machine, without having to upload your service to the cloud, and all of the cloud services APIs will be available, all while leveraging existing skills with ASP.NET, web services, WCF and WF, and more.  Publishing to the cloud involves the same Publish context menu item you’re used to in deploying your web applications today, except that you’ll end up at a web page where you’ll upload two files.  Actual service start time may take a few minutes.

Overall, this is a very exciting offering.  Because of Microsoft’s commitment to building and supporting open platforms, it’s hard to imagine what kinds of products and services this will lead to, how this will catalyze the revolution to cloud services that has already begun.

Posted in Conferences, Distributed Architecture, Service Oriented Architecture, Software Architecture, Virtualization | Leave a Comment »

Observations on the Evolution of Software Development

Posted by Dan Vanderboom on September 18, 2008

Neoteny in the Growth of Software Flexibility and Power

Neoteny is a biological phenomenon of an organism’s development observed across multiple generations of a species.  According to Wikipedia, neoteny is “the retention, by adults in a species, of traits previously seen only in juveniles”, and accounts for many evolutionary shifts, including the human brain’s ability to remain elastic and malleable later in life than those of our distant ancestors.

So how does this relate to software?  Software is a great deal like an organic species.  The species emerged (not long ago), incubated in a more or less fragile state for a number of decades, and continues to evolve today.  Each software application or system built is a new member of the species, and over the generations they have become more robust, intelligent, and useful.  We’ve even formed a symbiotic relationship with software.

Consider the fact that software running on computers was at one time compiled to machine language code for a specific processor.  With the invention of platform-independent instruction sets and their associated runtimes performing just-in-time compilation (Java’s JVM and .NET Framework’s CLR), we’ve delayed the actual production of machine language code until it’s actually needed on the target machine.  The compiler produces a slightly more abstract representation of the program logic, and an extra translation step at installation or runtime is needed to complete the process to make the software usable.

With the growing popularity of dynamic languages such as Lisp, Python, and the .NET Framework’s upcoming release of its Dynamic Language Runtime (DLR), we’re taking another step of neoteny.  Instead of a compiler generating instruction byte codes, a “compiler for any dynamic language implemented on top of the DLR has to generate DLR abstract trees, and hand it over to the DLR libraries” (per Wikipedia).  These abstract syntax trees (AST), normally an intermediate artifact created deep within the bowels of a traditional compiler (and eventually discarded), are now persisted as compiler output.

Traits previously seen only in juveniles… now retained by adults.  Not too much of a metaphorical stretch!  The question is: how far can we go?  And I think the answer depends on the ability of hardware to support the additional “just in time” processing that needs to occur, executing more of the compiler’s tail-end tasks within the execution runtime itself, providing programming languages with greater flexibility and power until the compilation stages we currently execute at design-time almost entirely disappear (to be replaced, perhaps, by new pre-processing tasks.)

I remember my Turbo Pascal compiler running on a 33 MHz processor with 1 MB of RAM, and now my cell phone runs at 620 MHz (with a graphics accelerator) and has gigabytes of memory and storage.  And yet with the state of things today, the inclusion of language-specific compilers within the runtime is still quite infeasible.  In the .NET Framework, there are too many potential languages that people might attempt to include in such a runtime: C#, F#, VB, Boo, IronPython, etc.  Trying to cram all of those compilers into a universal runtime that would fit (and perform well) on a cell phone or other mobile device isn’t yet feasible, which is why we have technologies with approaches like System.Reflection.Emit (on the full .NET Framework), and Mono.Cecil (which works on Compact Framework as well).  These work at the platform-independent CIL level, and so can interpret and generate programs generically, interact with each others’ components, and so on.  One metaprogramming mechanism can therefore be reused across all .NET languages, and this metalinguistic programming trend is being discussed on the C# and other language design teams.
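
As a concrete example of that runtime metaprogramming, here’s a small System.Reflection.Emit sketch that builds and invokes a method on the fly; the names are arbitrary, and libraries like Mono.Cecil apply the same idea to reading and rewriting assemblies offline rather than executing them.

using System;
using System.Reflection;
using System.Reflection.Emit;

class EmitDemo
{
    static void Main()
    {
        // Define a dynamic assembly, module, and type at runtime.
        var assemblyBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(
            new AssemblyName("DynamicSketch"), AssemblyBuilderAccess.Run);
        var moduleBuilder = assemblyBuilder.DefineDynamicModule("MainModule");
        var typeBuilder = moduleBuilder.DefineType("GeneratedType", TypeAttributes.Public);

        // Emit a static method that returns a constant.
        var methodBuilder = typeBuilder.DefineMethod(
            "GetAnswer",
            MethodAttributes.Public | MethodAttributes.Static,
            typeof(int), Type.EmptyTypes);

        var il = methodBuilder.GetILGenerator();
        il.Emit(OpCodes.Ldc_I4, 42);    // push the constant 42
        il.Emit(OpCodes.Ret);           // return it

        // Bake the type and call the generated method through reflection.
        Type generated = typeBuilder.CreateType();
        Console.WriteLine(generated.GetMethod("GetAnswer").Invoke(null, null));    // 42
    }
}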

I’ve just started using Mono.Cecil, chosen because it is cross-platform friendly (and open source).  The API isn’t very intuitive, but because the source is available, and because extension methods can go a long way to making it more accessible, it’s a great option.  The documentation is sparse, and assembly generation has some performance issues, but it’s a work-in-progress with tremendous potential.  If you’re doing any kind of static analysis or have any need to dynamically generate and consume types and assemblies (to get around language limitations, for example), I’d encourage you to check it out.  A comparison of Mono.Cecil to System.Reflection can be found here.  Another library called LinFu, which performs lots of mind-bending magic and actually uses Mono.Cecil, is also worth exploring.

VB10 will supposedly be moving to the DLR to become a truly dynamic language, which, considering VB’s history of support for late binding, makes a lot of sense.  With a dynamic language person on the C# 4.0 team (Jim Hugunin of IronPython fame), one wonders if C# won’t eventually go the same route while keeping its strongly-typed feel and IDE feedback mechanisms.  You might laugh at the idea of C# supporting late binding (dynamic lookup), but support for it is being planned regardless of whether the language as a whole is static or dynamic.
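
Here’s what that late binding looks like in C# 4.0: member lookups on a dynamic variable are deferred until runtime, while the rest of the program stays statically typed.  A minimal sketch:

using System;

class DynamicDemo
{
    static void Main()
    {
        dynamic value = "hello";
        Console.WriteLine(value.Length);    // member lookup resolved at runtime: 5

        value = 42;
        Console.WriteLine(value + 8);       // now binds to integer addition: 50
    }
}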

As the DLR evolves, performance optimizations are being discovered and implemented that may close the gap between pre-compiled and dynamically interpreted languages.  Combine this with manageable concurrent execution, and the advantages we normally attribute to static languages may soon disappear altogether.

The Precipitous Growth of Software System Complexity

We’re truly on the cusp of a precipitous period of growth for software complexity, as an exploding array of devices and diverse platforms around the world connect in an ever-more immersive Internet.  Taking full advantage of parallel and distributed computing environments by solving the challenges of concurrency and coordination, as well as following the trend toward increased integration among software components, is pushing software complexity into new orders of magnitude.  The strategies we come up with for organizing these systems will have to take several key factors into consideration, and we will have to raise the level of abstraction to a point that may be hard for us to imagine with our existing tools and languages.

One aspect that’s clear is the rise of declarative or intention-based syntax, whether represented as XML, Domain Specific Languages (DSLs), attribute decoration, or a suite of new visual modeling editors.  This is in part a consequence of raising the abstraction level, as lower-level libraries are entrusted to solve common problems and take advantage of common opportunities.
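
Attribute decoration is the easiest of these to show in a few lines.  A minimal sketch using the validation attributes in System.ComponentModel.DataAnnotations, on a hypothetical Customer class: the type declares what valid data looks like, and whatever framework consumes that metadata decides how and when to enforce it.

using System.ComponentModel.DataAnnotations;

// Declarative, intention-based syntax: the rules are metadata on the type,
// and the consuming framework, not the developer, enforces them.
public class Customer
{
    [Required]
    [StringLength(50)]
    public string Name { get; set; }

    [Range(0, 120)]
    public int Age { get; set; }
}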

Another is the use of Inversion of Control (IoC) containers and dependency injection in component-based architectures.  These standardize the lifecycle of the application and its components, provide a common environment or ecosystem for all of those components, and introduce a common protocol for component location, creation, access, and disposal.  This level of consistency is valuable for sharing a common understanding of how to troubleshoot software components.  The more predictable a component’s interaction with the rest of the system, the easier it is to debug and modify; conversely, the more unique a component and its communication mechanisms are, the more disparity there is among components, and the more difficult the system is to understand and modify without introducing errors.  If software is a species and applications are individuals, then components are the cells of a system.
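
A minimal constructor-injection sketch (hand-rolled composition, no particular container assumed, and the types invented purely for illustration) shows the standardized pattern: a component declares its dependencies as abstractions, and the composition root or container decides which implementations to supply and manages their creation and disposal.

using System;

public interface IMessageSender
{
    void Send(string message);
}

public class SmtpSender : IMessageSender
{
    public void Send(string message) { Console.WriteLine("SMTP: " + message); }
}

// The component depends only on the abstraction; it never constructs its collaborator.
public class OrderService
{
    private readonly IMessageSender _sender;

    public OrderService(IMessageSender sender)
    {
        _sender = sender;
    }

    public void PlaceOrder(string item)
    {
        _sender.Send("Order placed for " + item);
    }
}

class CompositionRoot
{
    static void Main()
    {
        // The container (here, just the composition root) controls creation and lifetime.
        IMessageSender sender = new SmtpSender();
        var orders = new OrderService(sender);
        orders.PlaceOrder("widget");
    }
}

Swapping SmtpSender for a test double or an alternate transport then becomes a one-line change at the composition root, without touching OrderService, which is exactly the kind of predictability described above.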

Even the introduction of functional programming languages into the mainstream over the past couple of years is due, in part, to the ability of those languages to provide more declarative support, more syntactic flexibility, and new ways of dealing with concurrency and coordination issues (such as immutable values) and lightweight, ad hoc data structures (tuples).
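
A tiny C# illustration of both ideas, assuming .NET 4’s Tuple type: a tuple serves as a lightweight, ad hoc data structure, and readonly fields make a value immutable so it can be shared across threads without locking.

using System;

public sealed class Point
{
    // Immutable after construction: safe to share across threads without locks.
    public readonly double X;
    public readonly double Y;

    public Point(double x, double y) { X = x; Y = y; }

    // "Mutation" returns a new value instead of changing this one.
    public Point Translate(double dx, double dy) { return new Point(X + dx, Y + dy); }
}

class TupleDemo
{
    static void Main()
    {
        // A lightweight, ad hoc pairing without declaring a new type.
        Tuple<string, int> result = Tuple.Create("widgets", 42);
        Console.WriteLine(result.Item1 + ": " + result.Item2);

        Point p = new Point(1, 2).Translate(3, 4);
        Console.WriteLine(p.X + ", " + p.Y);    // prints: 4, 6
    }
}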

Balancing the Forces of Coupling, Cohesion, and Modularity

On a fundamental level, the more independent components are, the less coupled and the more modular and flexible they are.  But the more they can communicate with and benefit from each other, the more interdependent they become.  This adds cohesiveness and synergy, but it also creates stronger coupling to a shared community of abstractions.

A composition of services has layers and segments of interdependence, and while there are dependencies, these should be dependencies on abstractions (interfaces and not implementations).  Since there will be at least one implementation of each service, and the extensibility exists to build others as needed, dependency is only a liability when the means for fulfilling it are not extensible.  Both sides of a contract need to be fulfilled regardless; service-oriented or component-based designs merely provide a mechanism for each side to implement and fulfill its part of the contract, and ideally the system also provides a discovery mechanism for the service provider to publish its availability for other components to discover and consume it.

If you think about software components as a hierarchy or tree of services, with services in one layer depending on services closer to the root, it’s easy to see how this simplifies the perpetual task of adding new functionality and revising what already exists.  You’re essentially editing an outline: you can move services around, reorganize dependencies easily, and let many of the details of the software’s complexity be absorbed into this easy-to-use outline structure (and its supporting infrastructure).  Systems of arbitrary complexity become feasible, and then relatively routine.  There’s a somewhat steep learning curve to get to this point, but once you’ve crossed it, your opportunities extend endlessly for no additional mental cost, at least not in terms of how to compose your system out of individual parts.

Absorbing Complexity into Frameworks

The final thing I want to mention is that a rise in overall complexity doesn’t mean that the job of software developers necessarily has to become more difficult than it is currently.  With the proper design of components that abstract away the complexity into reusable frameworks with intuitive interfaces, developers at the business logic level don’t need to be aware of the inner complexity, in the same way that software developers are largely absolved of the responsibility of thinking about the processor’s inner workings.  As we build our technology stack higher and higher, like the famed Tower of Babel, we must make sure that it’s organized and structured in a way to support that upward growth and the load imposed upon it… so it doesn’t come crashing down.

The requirements for building components tomorrow will not be the same as they were yesterday.  As illustrated in this account of the effort involved in a feature change at Microsoft, in the future we will also want to consider issues such as tool-assisted refactorability (and patterns that frustrate it, such as “magic strings”) and, given the explosion of component libraries, the discoverability of types, members, and their usage.

A processor can handle any complexity of instruction and data flow.  The trick is in organizing all of this in a way that other developers can understand and work with.

Posted in Compact Framework, Component Based Engineering, Concurrency, Design Patterns, Development Environment, Distributed Architecture, Functional Programming, Mobile Devices, Object Oriented Design, Problem Modeling, Reflection, Service Oriented Architecture, Software Architecture, Visual Studio | 1 Comment »

.NET Micro Framework vs. Microsoft Robotics Studio

Posted by Dan Vanderboom on April 12, 2008

The .NET Micro Framework is a compatible subset of the full .NET Framework, similar to how the Compact Framework is a subset.  But the Micro Framework can act as its own real-time operating system (RTOS) instead of loading its tiny CLR inside a host operating system, and it works with a variety of hardware devices, including those that don’t have their own memory management unit (MMU).  The gist is that embedded applications, as well as low-level drivers, can now be written in managed code (a huge leap forward) and take advantage of garbage collection, strong typing, and other nice features that are typically absent in embedded systems programming.  This supposedly reduces errors, boosts productivity, and reduces development costs.

I’ve been watching videos like this, reading about the Micro Framework for the past few months, have pre-ordered this book from Amazon, and have been itching to get my hands on a hardware development kit to start experimenting.  The only problem is that the interfaces for these embedded devices are so foreign to me (I2C, GPIO, etc.), and I’m not exactly sure what my options are for assembling components.  What kinds of sensors are available?  Do I have to solder these pieces together or is there a nice modular plug system similar to the SATA, IDE, and PCI connectors on modern computers (but on a smaller scale)?  Do I have to write my own device drivers for anything I plug in, or is there some abstraction layer for certain classes of inputs and outputs?
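
For simple digital output, at least, the Micro Framework does provide a managed abstraction: the Microsoft.SPOT.Hardware.OutputPort type wraps a GPIO pin.  Here’s a minimal blink sketch, assuming a board that exposes Cpu.Pin.GPIO_Pin0; the pin name is a placeholder, since real boards publish their own pin mappings in their SDKs.

using System.Threading;
using Microsoft.SPOT.Hardware;

public class BlinkDemo
{
    public static void Main()
    {
        // GPIO_Pin0 is a placeholder; real boards expose their own pin enums.
        OutputPort led = new OutputPort(Cpu.Pin.GPIO_Pin0, false);

        while (true)
        {
            led.Write(true);    // drive the pin high
            Thread.Sleep(500);
            led.Write(false);   // drive the pin low
            Thread.Sleep(500);
        }
    }
}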

The other issue that makes me hesitate is the thought of programming without language conveniences like generics on a more resource-constrained device than the Windows Mobile devices I’m already frustrated with (and have been for the past four years).

I’m not saying I’m not excited about managed code on tiny embedded devices, and I’m not saying I won’t start playing with (and blogging about) this important technology sometime in the next few months.  But I’ve discovered another platform for embedded development, one that can interact directly with the physical world and offers a much lower technical barrier to entry, so that’s where I’m starting.

What I’m referring to is Microsoft Robotics Studio, which is by all accounts a phenomenal platform for more than just robotics, and seems to overlap somewhat with the Micro Framework’s reach (at the intersection of the computer-digital and physical-analog worlds).  The two critical components of this architecture are the Decentralized Software Services Protocol (DSSP, originally named WSAP) and the Concurrency & Coordination Runtime (CCR).  Make sure you watch this video on the CCR, and note how George emphasizes that “it’s not about concurrency, it’s about coordination”, which I found especially enlightening.  These are highly robust and general purpose libraries, and both of them will run on Compact Framework!  (I was very impressed when I read this.)

Without having studied them both in depth yet, the CCR seems to overlap with (and conceptually cannibalize) the Task Parallel Library (TPL), offering a much more complete system of thread manipulation and synchronization than the original System.Threading types.  It does so for a vertical industry that places much greater demands on a concurrency framework: it must, for example, deliver massive concurrency and complex coordination rules in a highly-distributed system, all while handling full and partial failures gracefully.  Some of the patterns and idioms for making concurrency and synchronization operations easy to write and understand are masterfully designed by Henrik Nielsen and George Chrysanthakopoulos (and I thought Vanderboom was a long name!).
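
For a flavor of the CCR’s coordination style, here’s a rough sketch using its Port and Arbiter types as I understand the API: messages are posted to a port, and a receiver activated on a dispatcher queue declares what should happen when they arrive.  Consider it an illustration of the programming model rather than production CCR code.

using System;
using Microsoft.Ccr.Core;

class CcrDemo
{
    static void Main()
    {
        // The dispatcher owns the worker threads; the queue schedules tasks onto it.
        var queue = new DispatcherQueue("demo", new Dispatcher());
        var port = new Port<string>();

        // Coordination, not raw threading: declare what happens when a message arrives.
        Arbiter.Activate(queue,
            Arbiter.Receive(true /* persistent */, port,
                message => Console.WriteLine("received: " + message)));

        port.Post("hello");
        port.Post("world");

        Console.ReadLine();   // let the handlers run before the process exits
    }
}

The interesting part is that the handler never touches a thread or a lock directly; the dispatcher decides where and when the work runs.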

The fact that the CCR and DSSP were developed together is significant.  Tasks running in parallel need to be manipulated, coordinated, and tracked regardless of whether they’re running on four processors in one computer or on 256 processors spread across a hundred devices, and distributed services need a dependable way of parallelizing their efforts on individual machines.  This is a synergistic relationship, each subsystem enhancing the elegance and usefulness of the other.  So why not use the CCR in every application instead of developing the TPL?  Are these two teams actively collaborating, or have they been building the two frameworks in isolation?

I also have to make special mention of the decentralized software services, which as a protocol sits on top of HTTP in a RESTful implementation.  It supports composition of services by creating partnerships with other services, defining securely which partnerships are allowed, offering identification and discovery of services, and much more.  In this assembly then, we have a robust library for building distributed, composable, decentralized services-oriented systems with event publication and subscription, with services that can be hosted on desktop and mobile computers alike.  Wow, call me a geek, but that’s really frickin’ cool!  Some of the ideas I have for future projects require this kind of architectural platform, and I’ve been casually designing bits and pieces of such a system (and got so far as getting the communication subsystem working in CF).  Now, however, I might be turning my attention to seeing if I can’t use the Robotics Studio APIs instead.

One thing to be clear about is that Robotics Studio doesn’t support Micro Framework, at least not yet.  I wonder exactly how much value there would be to adding this support.  With hardware options such as this 2.6″ x 2.3″ motherboard with a 1.6 GHz Intel Atom processor, video support, and up to 1 GB of RAM, capable of running Windows XP or XP Embedded, and priced at $95 (or a 1.1 GHz version for $45), we’re already at a small enough form factor and scale for virtually any autonomous or semi-autonomous robotics (or general embedded) applications.  There’s also the possibility for one or more Micro Framework devices to exchange messages with a hardware module that is supported in Robotics Studio, such as the tiny board just mentioned.

Where do we go next?  For me, I just couldn’t resist jumping in with both feet, so I ordered about $500 in robotics gear from Phidgets: USB interface boards, light and temperature sensors, a 3-axis accelerometer (think Wii controller), motors and servos, an RFID reader with tags, LEDs, buttons, knobs, sliders, switches, and more.  I plan to use some of this for projects at CarSpot, and the rest simply to be creative with and have fun.  I also pre-ordered this book, but the publication date is set for June 10th, so by the time I get it, I may have already figured most of it out.

Posted in Compact Framework, Concurrency, Distributed Architecture, Micro Framework, Microsoft Robotics Studio, Service Oriented Architecture, Software Architecture | 2 Comments »