Critical Development

Language design, framework development, UI design, robotics and more.

Why Oslo is Important

Posted by Dan Vanderboom on January 17, 2009

Contrary to common misunderstanding and speculation, the point of Oslo is not to put programming in the hands of business analysts who want to write their own business rules.  Do I think some of that will happen?  Architects and engineers will try everything they can imagine.  Some of them will succeed in specific niches or scenarios, but it won’t replace application or system design, and it will probably be very limited for the foreseeable future.  Oslo is more about dramatically improving the productivity of designers and developers by generalizing common solution patterns and generating more adaptable tools.

PDC Keynote

Much of the confusion around Oslo occurs for two reasons:

  1. Oslo is designed at a higher level of abstraction than most systems today, so its scope is broad and it will have an impact on virtually every product, solution and service across Microsoft.  It’s difficult to get your head around something that big.
  2. Because of its abstract nature, core concepts are defined in terms that are heavily overloaded, like "Model", "Repository", and "Language".  Once you’ve picked up the lingo and can translate Oslo terminology into language you’re already familiar with, both the concept and magnitude of it will become obvious.

Oslo isn’t something completely new; in fact, Oslo borrows from a lot of previous research and even existing model-driven development tools.  Oslo focuses existing technologies and techniques into a coherent and mature vision of development, combining all parts into a more powerful whole, and promises to deliver a supremely adaptable and efficient platform to develop on.

What Is Oslo?

Oslo is a software factory for generating first-class, tool-supported languages out of your declarative specifications.

A factory is a highly organized production facility
that produces members of a product line
using standardized parts, tools and production processes.

-from a review of Software Factories

The product line is analogous to Oslo’s parsers, transform tools, and IDE plugins for new data models and languages (both textual and visual) that you define.  The standardized parts are Oslo’s library components; the tools are the M languages and the Quadrant/Intellipad application; and the processes are shaped by the flow of data through the Oslo tool chain (see the diagram near the end of this article).

With Oslo, you build the custom tools you need to rapidly build or generate software systems.  It’s all about using the right tool for the job, and having a say in how those tools are shaped to obtain the greatest leverage.

As stated on the Software Factories home page:

We see a capacity crisis looming. The industry continues to hand-stitch applications distributed over multiple platforms housed by multiple businesses located around the planet, automating business processes like health insurance claim processing and international currency arbitrage, using strings, integers and line by line conditional logic. Most developers build every application as though it is the first of its kind anywhere.

In other words, there’s already a huge shortage of experienced, highly-qualified professionals capable of ensuring the success of these increasingly complex systems, and with the need (and complexity) growing exponentially, our current development practices increasingly fall short of the total demand.

Books like Greenfield’s Software Factories have been advocating building at a higher level of abstraction for years, and my initial reaction was to see it as a natural, evolutionary milestone for a highly mature software system.  However, it’s an awful lot of focused development effort to attain such a level of maturity, and not many organizations are able to pull it off given the state of our current development platforms.

It’s therefore fortuitous that Microsoft teams have taken up the challenge of building these abilities into their .NET platform.  After all, that’s where it really belongs: in the framework.

Unexpected Awesomeness

Oslo of course contains a lot of expected awesomeness, but where it will probably have the most impact in terms of developer productivity is with new first-class languages and language tools.  Why?  It first helps to understand the world of data formats and languages.

We’ve had an explosion of data formats–these mini Domain Specific Languages, if you will (especially in the form of complex configuration files).  As systems evolve and scale, and the ways we can configure and compose our application’s behavior continue to grow, at what point do we perceive that configuration graph as the rich language it becomes?  Or when our user interfaces evolve from Monolithic to Modular to Composite to Granular Composite (or User Composable), at what point does that persistent object graph become our UX DSL (as with XAML in WPF)?

Sometimes we set our standards too low, or are slow to raise them when the time has come to do so.  With XML we get extensibility in defining languages and we think, "If we can parse it, then we can build a tool over it."  I don’t know about you, but I’d much rather work with rich client software–some kind of designer–over a textual data format any day.

But you know how things go: some company like Microsoft builds a whole bunch of cool stuff, driven off some XML configuration, or they unleash something like XAML on which WPF, WF, and more are built.  XAML is great for tools to read and write, and although XML and XAML are textual and not binary and therefore human readable in a text editor (the original intention behind that term), it’s simply not as easy to read as C# or VB.NET.  That’s why we aren’t all rushing to program everything in XAML.

Companies like Microsoft, building from the bottom up, release their platforms well in advance of the thick client user experiences that make them enjoyable to use and that encourage mass adoption.  Their models, frameworks, and applications are so large now that they’re released in massively differentiated stages, producing a technology adoption gap.

By giving that language a syntax other than XML, however, we can approach it in the same way we approach our program logic: in the most human readable and aesthetically-pleasant way we can devise, resembling our programming languages of choice.

Sometimes, the density of data and its structure in our model is such that a visual editor fails to represent that model well.  Source code is a case in point.  You could create a visual designer to visualize flow control, branching logic, and even complex expression building (like the iTunes Smart Playlist), but code in text format is more appropriate in this kind of scenario, and ends up being more efficient with the existing tooling available.  Especially with an IDE like Visual Studio, we’re working with human-millennia of effort that have gone into the great code editing tools we use today.  Oslo respects this need for choice by offering support for building both visual and textual DSLs, and recognizes the fluent definition of new formats and languages as the bridge to the next quantum leap in productivity.

If we had an easy way of defining languages in formats that we developers felt comfortable working with–as we’re comfortable with our general purpose languages and their rich tool support–then we’d be much more productive in the transition between a technology first being released and later having rich tool support over it.  WPF has taken quite a while to be adopted as much as it has, partly due to tool availability and maturity.  Before Expression Blend or Cider designers were released and hand-coding XAML was the only way, those who braved the angle brackets struggled with it.  As I play with Silverlight, I realize how much must still be done in XAML, and how we still struggle.  It’s simply not as nice to work with as my C# code.  Not as rich, and not as strongly tool-supported.

That’s one place Oslo provides value.  With the ability to define new textual and visual DSLs, rigorous verification and validation in a rich set of tools, the promise of Intellisense, colorization of keywords, operators, constants, and more, the Oslo architects recognize the ability to enhance our development experience in a language-agnostic way, raising the level of abstraction because, as they say, the way to solve any technical problem is to approach it at one higher level of indirection.  Unfortunately, this makes Oslo so generalized and abstract that it’s difficult to grasp and therefore to appreciate its immensity.  Once you can take a step back and see how it fits in holistically, you’ll see that it has the potential to dramatically transform the landscape of software development.

Currently, it’s a lot of work to implement all the language services in Visual Studio to give them as rich an experience as we’ve come to expect with C#, VB.NET, and others.  This is a serious impediment to doing this kind of work, so solving the problem at the level of Oslo drastically lowers the barrier to entry for implementing tool-supported languages.  The Oslo bits I’ve seen and played with are very early in the lifecycle for this massive scope of technology, but the more I think about its potential, the more impressed I am with the fundamental concept.  As Chris Anderson explained in his PDC session on MGrammar, MGrammar was an implementation detail, but sometime around June 2007, that feature team realized just how much customers wanted direct access to it and decided to release MGrammar to the world.

Modeling & The Repository

That’s all well and good for DSLs and language enthusiasts/geeks, but primarily perhaps, Oslo is about the creation, exploration, relation, and execution of models in an interoperable way.  In other words, all of the models that are currently used to describe a software system, or an entire IT environment, are either not encoded formally enough to verify or execute, or they’re encoded or stored in proprietary ways that don’t allow interoperability with other models.  A diagram in Visio or PowerPoint documenting network topology, for example, knows nothing about the component architecture or deployment model of the software systems installed and running on that network.

When people usually talk about models, they imagine high-level architecture documents, overviews used to visually summarize work that is much more granular in nature.  These models aren’t detailed, and they normally aren’t kept up to date and in sync with the current design as changes are made.  But modeling in Oslo is not an attempt to make these visual models contain all of the necessary detail, or to develop software with visual tools exclusively.  Oslo simply provides the tools, both graphical and textual, to define and relate many models.  It will be up to the development community to decide how all these tools are ultimately used, which parts of our systems will be specified in a mix of general purpose, domain specific, and visual languages.  Ultimately, Oslo will provide the material and glue to fill the gaps between the high and low level specifications, and unite them into a common, connected, and much more useful set of data.

To grasp what Oslo modeling is really all about requires that we expand our definition of "model", to see the models expressed in our configuration and XAML files, in our applications’ database schemas, in our entity classes, and so on.  As software grows in complexity and becomes more composable, we can use various languages to model its behavior and store those models in the repository for runtime execution, inspection, or reuse by other systems.

This funny and clever Oslo video (reminiscent of The Hitchhiker’s Guide to the Galaxy) explains modeling in the broader sense alluded to here.

If we had some universal container for the storage of all different kinds of models, and a standardized way of relating entities across models, we’d be able to do things like impact analysis, where we could see the effect on software systems if someone were to alter the network it was running on; or powerful data mining on the IT execution environment of a business.
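To make the impact-analysis idea concrete, here is a minimal Python sketch.  The two toy models, their field names, and the query are all invented for illustration; Oslo’s repository is a SQL Server database, not an in-memory dictionary.

```python
# Two toy models that would normally live in separate silos:
# a network topology model and a deployment model. Relating
# them through shared server names enables impact analysis.

network_model = {
    "web01": {"connected_to": ["db01"]},
    "db01": {"connected_to": []},
}

deployment_model = [
    {"component": "OrderService", "host": "web01"},
    {"component": "InventoryDb", "host": "db01"},
]

def impacted_components(server):
    """Components affected if `server` (or anything reachable from it) fails."""
    # Walk the network graph from the failed server.
    affected_hosts, stack = set(), [server]
    while stack:
        host = stack.pop()
        if host in affected_hosts:
            continue
        affected_hosts.add(host)
        stack.extend(network_model.get(host, {}).get("connected_to", []))
    # Join against the deployment model to find what runs on those hosts.
    return sorted(d["component"] for d in deployment_model
                  if d["host"] in affected_hosts)

print(impacted_components("web01"))  # ['InventoryDb', 'OrderService']
```

The interesting part is not the graph walk but the join across two models that today would have no shared identifiers at all.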

Many different tools, with different audiences, will be able to connect into this repository to manipulate aspects of the models that they understand and have access to.  This is just the tip of the iceberg.  We already model so much of what we do in the IT and software worlds, and as we begin adopting business process middleware and orchestration software like BizTalk, there’s a huge amount of value in those models converging and connecting.  That’s where the Oslo Repository comes in.

Oslo provides interoperability among models in the same way that SOA provides interoperability among services.  Not unlike the interoperability we have now among many different languages all sharing the same CLR specification.

Bridging data models across repositories, or within a shared repository, is a major step forward.  With Windows Azure and Microsoft’s commitment to their online services platform (and considering the momentum of the SaaS movement with Amazon, Google, and others), shared storage and data sets are the future.  (Check out SQL Data Services if you haven’t already, and watch for some exciting announcements coming later this year!)

The Dichotomy of Data vs. Metadata

Jeff Pinkston from the Oslo team aptly reflects the attitude of the group when he scoffs at the categorical difference between data and metadata.  In terms of storing and querying it, serializing and communicating it, and everything else that matters in enterprise software, data is data and there’s no reason not to treat it the same when it comes to architecting a system.  We have our primary models and our secondary models, our shared models and our protected models, but they’re still just models that shape our software’s behavior, and they share all of the same characteristics when it comes to manipulation and access.  It’s their ultimate effect that differs.

It’s worth noting, I think, the line that’s been drawn between code and data in some programming languages and not in others (C# vs. LISP).  A division has been made for the sake of security rather than necessity.  Machine instruction codes are represented in the same sort of binary data and realized in the same digital circuitry as traditional user data.  It’s tempting to keep things locked down and divided, but as languages evolve to become more late bound and dynamic (and as the tools evolve to make this feasible), there will be more need for the manipulation of expression trees and ASTs.  I strongly suspect the lines will blur until they disappear.
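Python already illustrates the blur: its standard ast module exposes a program’s expression tree as ordinary data that can be inspected, rewritten, and then executed.  A small sketch:

```python
import ast

# Parse an expression into a tree: the program is just data
# until we choose to compile and run it.
tree = ast.parse("price * quantity", mode="eval")

# Rewrite every multiplication into an addition by mutating the tree.
for node in ast.walk(tree):
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Mult):
        node.op = ast.Add()

# Compile the rewritten tree and evaluate it like any other code.
code = compile(ast.fix_missing_locations(tree), "<rewritten>", "eval")
print(eval(code, {"price": 10, "quantity": 4}))  # 14
```

The same data/code duality is what makes LISP macros possible, and what expression trees brought to C# 3.0 in a more limited form.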

Schema and Object Instance Languages

In order to define models, we need a tool.  In Oslo, this is a textual language called MSchema and an editor called Intellipad.  I personally think it’s odd to talk people’s ears off about "model, model, model", and then to use the synonym "schema" to name the language, but all of these names could change before they’re shipped for all we know.

This is a simple example of an MSchema document:

module MyModel
{
    type Person
    {
        LastName : Text;
        FirstName : Text;
    }

    People : Person*;
}

By running this through the "M Compiler", a SQL script is generated that will create the appropriate database objects.  Intellipad is able to verify the correctness of your schema, and what’s really nice is that you don’t even have to specify data types when you start sketching out your model.  Defaults are assumed, and you can get more specific as your model evolves.
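To make that translation concrete, here is a toy Python sketch of the kind of schema-to-SQL generation the M Compiler performs.  The dictionary layout, the TYPE_MAP, and the shape of the emitted SQL are my own inventions for illustration; the real compiler emits considerably more (constraints, views, and so on).

```python
# A toy translation from a schema description (like the Person/People
# model above) into a CREATE TABLE script. Only illustrates the idea.

schema = {
    "module": "MyModel",
    "types": {"Person": {"LastName": "Text", "FirstName": "Text"}},
    "extents": {"People": "Person"},
}

# Assumed mapping of M's Text type to a SQL string type.
TYPE_MAP = {"Text": "nvarchar(max)"}

def to_sql(schema):
    """Emit one CREATE TABLE statement per extent in the schema."""
    statements = []
    for extent, type_name in schema["extents"].items():
        fields = schema["types"][type_name]
        columns = ",\n  ".join(
            f"[{name}] {TYPE_MAP[m_type]}" for name, m_type in fields.items()
        )
        statements.append(
            f"CREATE TABLE [{schema['module']}].[{extent}] (\n  {columns}\n);"
        )
    return "\n".join(statements)

print(to_sql(schema))
```

Note how the module becomes a SQL schema name and the extent (People) becomes the table, while the type (Person) only shapes the columns.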

MGraph is a language for defining instances of objects, constrained by an MSchema and similar in format.  So MSchema is to MGraph what XSD is to XML.

In this article, Lars Corneliussen explains Microsoft’s vision to make MGraph as common as XML is today.  Take a look at his article to see a side-by-side comparison of the same object represented as XML (POX), JSON, and MGraph, and decide for yourself which you like best (or see below).

MSchema and MGraph are easier and more efficient to read and write than XML.  Their message format resembles typical structured programming languages, and developers are already familiar with these formats.  XML is a fine format for a tool; it’s human readable but not human-friendly.  A C-style language, on the other hand, is much more human-friendly than all of the angle brackets and the redundancy (and verbosity) of tag text.  That narrows down our choice to JSON and MGraph.

In JSON, the property/field/attribute names are delimited by quotation marks, suggesting that the whole structure is a dumb property bag.

    {
        "LastName" : "Vanderboom",
        "FirstName" : "Dan"
    }

MGraph has a very similar syntax, but its attribute property names are recognized and validated by the parser generated from MSchema, so the quotation marks are unnecessary.  It ends up looking more natural, and a little more concise.

    {
        LastName : "Vanderboom",
        FirstName : "Dan"
    }

Because MGraph is just a message format, and Microsoft’s service offerings already support multiple message formats (SOAP/POX/JSON/etc.), it wouldn’t disrupt any of their architecture to add an MGraph adapter, and I’ll be shocked if I don’t hear about one in their next release.
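The quoted keys are exactly what let any stock JSON parser accept the property bag without a schema; a quick Python check of the snippet above, using only the standard json module:

```python
import json

# JSON requires quoted keys; the parser has no schema, so every
# structure is just a property bag it accepts uncritically.
doc = '{"LastName": "Vanderboom", "FirstName": "Dan"}'
person = json.loads(doc)
print(person["LastName"])  # Vanderboom

# Unquoted keys (as in the MGraph form) are a syntax error in JSON,
# because there is no grammar that would recognize them.
try:
    json.loads('{LastName: "Vanderboom"}')
except json.JSONDecodeError as e:
    print("rejected:", e.msg)
```

MGraph can drop the quotes precisely because a parser generated from MSchema knows which property names are valid.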

Meta-Languages and MGrammar

In the same way that Oslo includes a meta-model because it allows us to define models, it also includes a meta-language because it allows us to define languages (as YACC and ANTLR have done).  However, just as Pinkston doesn’t think data and metadata should be treated differently, it makes sense to think of a language that defines languages as just another language.  There is something Zen about that, where the tools somehow seem to bend back upon themselves like one of Escher’s drawings.


Here is an example language defined by MGrammar in a great article on MSDN called MGrammar in a Nutshell:

module SongSample
{
    language Song
    {
        // Notes
        token Rest = "-";
        token Note = "A".."G";
        token Sharp = "#";
        token Flat = "b";
        token RestOrNote = Rest | Note (Sharp | Flat)?;

        syntax Bar = RestOrNote RestOrNote RestOrNote RestOrNote;
        syntax List(element)
          = e:element => [e]
          | es:List(element) e:element => [valuesof(es), e];

        // One or more bars (recursive technique)
        syntax Bars = bs:List(Bar) => Bars[valuesof(bs)];
        syntax ASong = Music bs:Bars => Song[Bars[valuesof(bs)]];
        syntax Songs = ss:List(ASong) => Songs[valuesof(ss)];

        // Main rule
        syntax Main = Album ss:Songs => Album[ss];

        // Keywords
        syntax Music = "Music";
        syntax Album = "Album";

        // Ignore whitespace
        syntax LF = "\u000A";
        syntax CR = "\u000D";
        syntax Space = "\u0020";

        interleave Whitespace = LF | CR | Space;
    }
}

This is a pretty straightforward way to define a language and generate a parser.  Aside from the obvious keywords to define syntax rules and token patterns (with an alternative and more readable format for regular expressions), the => projection operator allows you to shape the MGraph output according to your needs.
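As a rough illustration of what a parser generated from this grammar does (tokenize, recognize bars of four rests or notes, and project matches into a labeled tree), here is a hand-written Python sketch.  The tuple-based node representation is my own stand-in for MGraph output; the real mg toolchain generates far more machinery.

```python
import re

# Tokens mirroring the grammar's token rules: the keywords, a rest "-",
# and a note A..G with an optional sharp "#" or flat "b" suffix.
TOKEN = re.compile(r"Album|Music|[A-G][#b]?|-")

def parse_album(text):
    """Parse 'Album Music <notes> [Music <notes> ...]' into a labeled
    tree, mimicking the grammar's => projections."""
    tokens = TOKEN.findall(text)
    if not tokens or tokens.pop(0) != "Album":
        raise SyntaxError("expected 'Album'")
    songs = []
    while tokens:
        if tokens.pop(0) != "Music":
            raise SyntaxError("expected 'Music'")
        notes = []
        while tokens and tokens[0] != "Music":
            notes.append(tokens.pop(0))
        if len(notes) % 4 != 0:
            raise SyntaxError("each Bar needs exactly four rests/notes")
        # Project the flat note list into Bars of four, as the
        # 'syntax Bar' rule and its projection would.
        bars = [notes[i:i + 4] for i in range(0, len(notes), 4)]
        songs.append(("Song", bars))
    return ("Album", songs)

print(parse_album("Album Music A B C - Music D# Eb F G"))
```

The point of MGrammar is that you write only the declarative rules above it and get this kind of parser (plus tooling) generated for you.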

I created two simple languages with MGrammar on the plane trip back to Milwaukee from the PDC in November.  The majority of my time was spent fussing with the editor, Intellipad, and for the last half hour I found it very easy to create a language on the fly, extending and changing it through experimentation quickly and easily.  Projections, which are functional expressions in MGrammar used to shape MGraph output, are the most challenging part.  There are a number of techniques that shape the output graph, so it will be good to see how this is approached in future reference examples.

Just before I wrote this, Mike Weinhardt at Microsoft announced that a gallery of example grammars for MGrammar is being put together, pointing to sample grammars for various languages in addition to grammars that the community develops, and it should be available by the end of this month.  These examples, demonstrating how to define languages and write sensible projections and coming from the developers who are putting MGrammar together, will be an invaluable tool for teaching common patterns (just as 101 LINQ Samples did for LINQ).

As Doug Purdy explained on .NET Rocks: "People who are building a domain specific language, and they don’t want to understand how to build a parser, or they’re not language designers.  Actually, they are language designers.  They design a language, but they actually don’t do the whole thing.  They don’t build a parser.  What they do, they just leverage the XML parser.  And what we’re trying to do is provide a toolset for folks where they don’t have to resort to XML in order to do DSLs."

From the same episode, Don Box said of the DSL session at PDC: "I’ve never seen a session with more geek porn in it."

Don: "It’s like crack for developers.  It’s kind of addictive; it takes over your life."

Doug: "If you want the power of Anders in your hand…"

The Tool Chain

Now that we have a better sense of what’s included in Oslo in terms of languages, editors, and the shared repository, we can look at the relationship among the other pieces, which are manifested in the CTP as a set of command-line tools.  In the future, these will integrate into an IDE, most likely Visual Studio.  (I’d expect Intellipad and Quadrant to merge with Visual Studio, but there’s no guarantee this will happen.)

When you create your model with MSchema, you’ll use m to validate that model and generate a SQL script to create a SQL Server 2008 database schema (yes, it only works right now with SQL Server 2008).  You’ll also use the m command to validate your object graph (written in MGraph) against your schema, and translate that into a set of SQL commands to perform inserts and updates against tables.

With enough models, there’ll be huge value in adding yours to the repository.  If you don’t mind writing MGraph or you generate it automatically with something like an MGraphSerializer class in your code, this may be all you need.

If, on the other hand, you decide you could really benefit by defining your own textual language to use instead of MGraph, you can use MGrammar to define a new language.  This language gets compiled by the mg compiler to create your parser, and the mgx command translates code in your new language into an MGraph, which can then be pulled into your database using m.

This diagram depicts the process:


Other than these command-line tools, Quadrant is the highly extensible visual tool for exploring models graphically, and Intellipad is a different face on the same shell for defining DSLs with MGrammar and writing DSL code, as well as writing and verifying MSchema and MGraph code.

We should see fairly soon the convergence of these three languages (MGraph, MSchema, and MGrammar) into a single M language.  This makes sense, since what you want to project in your DSL should be something within your model, verified by your schema.  This may ultimately make these projections much easier to write.

We’ll also see this tool chain absorbed into multiple development environments, eventually with rich binding across multiple representations of our model, although this will take longer in Visual Studio.

Languages and Nested Languages

I looked at some MService examples, and I can understand Damon’s concern: although it’s nice to have "operation" as a keyword in a service-oriented language, with more keywords giving you the ability to specify aspects of each endpoint and the communications patterns required, enclosing the business logic within that service language is probably not a good idea.  I took this from Dennis van der Stelt’s blog:

service Service
{
  operation PhotoUpload(stream : Stream) : Text
  {
    .PostUriTemplate = "upload";

    index : Text = invoke DateTime.Now.Ticks.ToString();
    filename : Text = "d:\\demo\\photo\\" + index + ".jpg";
    invoke MService.ServiceHelper.StoreInFile(stream, filename);

    return index;
  }
}

Why not?  You’re defining a general purpose language within the curly braces, one capable of defining variables, assigning values, referencing .NET objects, and calling methods.  But why would you want to learn a new language to write services when the language you’re using right now is already supremely capable of that?  Don’t you already know a good syntax for invoking methods (other than "invoke %method%")?  If instead you simply referenced an assembly, type, and method from an MService script, you could externally turn any .NET method with serializable parameters and return value into a service operation by feeding it this kind of file, without having to recompile, and without having to reinvent the wheel.
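A minimal Python sketch of that alternative: a declarative description that merely names existing code, resolved by reflection at load time.  The config layout and the operation names here are invented for illustration.

```python
import importlib

# A declarative service description that only *references* existing
# code, instead of embedding logic in a new language. The format is
# hypothetical; the point is that no recompilation is needed to
# expose a different function.
service_config = {
    "operations": {
        # operation name -> (module, function) to expose
        "upper": ("string", "capwords"),
    }
}

def resolve_operations(config):
    """Bind each declared operation to the real function it names."""
    ops = {}
    for name, (module_name, func_name) in config["operations"].items():
        module = importlib.import_module(module_name)
        ops[name] = getattr(module, func_name)
    return ops

ops = resolve_operations(service_config)
print(ops["upper"]("hello oslo"))  # Hello Oslo
```

Changing what the service exposes is then an edit to the description, not to code, which is the externalization argued for above.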

The possible exception would be if MGrammar adds the ability (as discussed by speakers at the PDC) of supporting multiple layers of enclosing languages within other languages.  In other words, you could use MService to define operations and their attributes using its own syntax, and within the curly braces that follow, use the C# or VB.NET parsers to process the logic with the comprehension of a separate language.  There are some neat possibilities here, but I expect the development community to be conservative and hesitant about mixing layers of semantics, as there is an awful lot of room for confusion and complexity.  It may be better to leave different language blocks in separate files or containers, and to allow them to reference each other as .NET assemblies and XML files reference each other today.

However, I wouldn’t get too hung up on the early versions of these new languages, or any one language specifically.  The useful, sensible ones that take real developer needs into account and provide the most value will be adopted, and many more will quickly fall into disuse.  But the overall pattern will be for the emergence of an amazing amount of leverage in terms of improving human comprehension and taking advantage of our ability to manipulate structured, symbolic object graphs to build and verify software systems.


After a few months of research and many hours of writing, I don’t feel like I’ve even scratched the surface.  But instead of giving you an absolutely comprehensive picture, I’m going to stop here and continue in future articles.  In the meantime, check out the following resources.

For an overview of the development paradigm, look for information on language-oriented programming, including an article I wrote that alludes to how "we will have to raise the level of abstraction to a point that may be hard for us to imagine with our existing tools and languages" due to the "precipitous growth of software complexity".  The "community of abstractions" is the model in Oslo-speak.

For Microsoft specific content: there were some great sessions at the PDC (watch the recorded videos).  It was covered (with much confusion) on the .NET Rocks! podcast (here and here) as well as on Software Engineering Radio; and there are lots of bloggers talking about their initial experiences with it, such as Shawn Wildermuth, Lars Corneliussen, and of course Chris Sells and Jeff Pinkston.  The most clear and coherent explanation I’ve heard was from an interview with Ron Jacobs and David Chappell (Ron gave the keynote at MSDN Dev Con and hosted the ARCast podcast for years).  MSDN has at least 29 videos on the Oslo Developer Center, where there’s a good amount of information, including a FAQ.  There’s also the online guide for MGrammar, MGrammar in a Nutshell, and the Oslo team blog.

If you’re interested in creating DSLs, make sure to keep a look out for details about the upcoming DSL Developers Conference, which is tentatively planned for April 16-17, immediately following the Lang.NET conference (on general purpose languages) on April 14-16.  I’m hoping to be at both this year.  And in case you haven’t heard, Microsoft is planning another PDC Conference for 2009, the first time ever these conferences have run for two consecutive years!  There will no doubt be much more Oslo news and conference material to cover it at the PDC in November.

Pluralsight, an instructor-led training company, now teaches a two-day "Oslo" Fundamentals course (and Don Box’s blog is hosted there).

The best way to learn about Oslo, however, is to dive in and use it.  That’s what I’m doing with my newest system, which needs to be modeled from scratch.  So if you haven’t done so already, download the Oslo SDK (link updated to January 2009 SDK) and introduce yourself to the future of modeling and development!

[Click here for the next article in this Oslo series, on common misconceptions and fallacies about Oslo.]

44 Responses to “Why Oslo is Important”

  1. […] by metadouglasp on January 18th, 2009 Dan Vanderboom has a wonderful blog post titled “Why Oslo is Important” that is well worth checking […]

  2. By far the best post on OSLO that I’ve read so far. I look forward to reading future posts. Thanks!

  3. […] Why Oslo Is Important (Dan Vanderboom) – Link of the Day […]

  4. […] Why Oslo is Important – Dan Vanderboom explores Oslo examining why people are confused about Oslo and looking at the what and why of the various components that make up Oslo. […]

  5. […] Why Oslo is important: […]

  6. […] Why Oslo is Important: Dan Vanderboom offers deep insight into the reasoning and goals behind Oslo. […]

  7. […] This article by Dan Vanderboom is one of the best pieces of writing I’ve seen to explain why Oslo is important. Here are a few quotes, but you should take the time to read the whole thing. […]

  8. After watching 4 PDC videos and reading a dozen articles on Oslo, including this one and several from Don Box, I have come to the conclusion that I am stupid, because I still don’t understand the problem that Oslo is trying to solve. Maybe it would be helpful if the people involved would stop writing multi-page abstracts and tool descriptions, and show an actual business problem being solved by Oslo. Because until I see that, I don’t think I’m going to get it.

    • Dan Vanderboom said

      If you’re trying to think of a single problem that isn’t solved and needs to be solved, you’re going to miss it. Oslo offers tools and an approach to solving problems that have already been addressed, but in a much more productive way.

We already model our problem domains: by creating an object-oriented model of classes to represent our domain, by creating tables in a database to persist those business objects, by defining XML schemas and writing XML documents to store hierarchies of data describing application behavior, and so on.

      Somewhat confusingly, Oslo offers a set of tools that do a variety of things that are related only by their shared purpose of making the process of modeling more efficient. The ability of Oslo’s M language to define schemas is not much different from defining an XSD schema file, but it’s easier on the eyes and easier to write. Its ability to define Domain Specific Languages allows us to easily define our own formats. We could have done this before, of course, by writing a language parser, but that’s really hard work, so M gives us rapid development of languages. And the opportunity for enhancing productivity with the use of Domain Specific Languages is well documented.

      The only problem that hasn’t been solved, which Oslo also aims for, is to allow a vast ecosystem of models (defined by many different parties) to interconnect. Currently, data in applications that model different domains all exist in their own data silos, with very limited ability to relate to other data silos. It can be done, but it’s very difficult and so actual relation among disparate application models isn’t common. Think about all of the systems integration work being done, with third party ISVs integrating their software into ERP systems like DynamicsGP and SAP. Each system defines its own API, its own extensibility points, its own data exchange formats and protocols. Oslo-based software will be able to transcend these scenarios with little effort compared to the current state of affairs.

      The hope is for Oslo to create the bridges between applications. In this sense, it’s not a far cry from the SQL Server based WinFS file system originally planned for Windows Vista (and then cancelled). The idea was that if we had structured storage for application data built right into the OS, we could query and join across the data models of multiple applications, share common data more easily, and get tighter integration instead of duplicating everything senselessly. I think Oslo will deliver on the promise originally hoped for in that WinFS product concept. Whether SQL Server (or parts thereof) gets pulled back into the OS is another story, but I wouldn’t be surprised…

  9. Christian Gross said

    When I read comments like:

    >Think about all of the systems integration work being done, with third party ISVs integrating their software into ERP systems like DynamicsGP and SAP. Each system defines its own API, its own extensibility points, its own data exchange formats and protocols. Oslo-based software will be able to transcend these scenarios with little effort compared to the current state of affairs.

    I think, yes it works so long as we all use Oslo, no? And if that is the case, why not have used a single language like Java or .NET and called it a day? And how would one integrate a system built on Oslo with one that has absolutely nothing to do with Oslo?

    >The idea was that if we had structured storage for application data built right into the OS, we could query and join across the data models of multiple applications, share common data more easily, and get tighter integration instead of duplicating everything senselessly.

    I REALLY hope you are kidding here… I have seen this movie before, it was called OLE structured documents. Remember way back, way way back, when the idea behind OLE was to enable the sending of documents that would “automatically” open themselves? Remember the competitor to OLE (can’t think of the name) that flopped and was trying to do this?

    So while I also have watched the videos, read the blog entries and looked at the samples I am at the exact same point as David Nelson… Wondering Huh?

    • Dan Vanderboom said

      Those are totally valid concerns.

      In the case of integration into ERP systems, I’m referring to systems that create their own ecosystem for plugin components. If the host system chooses to use the Oslo repository, third parties could write plugins that fluidly leverage the data structures defined by the host system. This is a semi-private, protected environment among components that are meant to be tightly integrated. I’m not suggesting that we pick up Oslo as a replacement for good SOA practices. But I am suggesting that new avenues and opportunities will present themselves to us.

      The reason I’m thinking along these lines is primarily because of what ADO.NET Data Services is doing. Typically, communication channels are defined as discrete, manually enumerated and coded message endpoints: web service methods and so forth. If you need to grab data in 10 different query shapes, you’d have something like 10 web service methods defined on your back end. If you’re an old-fashioned CRUD fan, you might autogenerate web service methods for CRUD operations for each table and manage it all in code. Or perhaps you don’t do any raw data access across the network, instead requesting complex types which are constructed on the server side, building up some object graph based on method parameters and serializing it back.

      The problem with writing all of these web service methods manually, and making sure more are added when the underlying data structure is supplemented, is that you have that much more code to maintain. That many more moving parts to possibly get wrong and have to debug through. And why do we have these specific web methods? For much of our communication code, what we’re doing is accessing persistent data: querying it, updating it, and saving it back to storage. If there were a universal way of querying data remotely, as there is with Data Services, then I’m guessing we don’t have many methods that need manual writing. Our entire communication backend can shrink profoundly. A complex data access layer of our own isn’t needed in such a scenario.
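      To make that concrete, here’s a minimal sketch (in Python, with invented names and data) of the idea: one generic query operation, parameterized by the caller, standing in for what would otherwise be a pile of hand-written endpoint methods.

```python
# A minimal sketch of "one generic query mechanism instead of N endpoints".
# All names and data here are invented for illustration.

def query(rows, where=None, order_by=None, top=None):
    """Apply a caller-supplied filter, sort, and limit to any record set."""
    result = [r for r in rows if where is None or where(r)]
    if order_by is not None:
        result.sort(key=order_by)
    if top is not None:
        result = result[:top]
    return result

customers = [
    {"name": "Contoso",   "region": "East", "revenue": 120},
    {"name": "Fabrikam",  "region": "West", "revenue": 340},
    {"name": "Adventure", "region": "East", "revenue": 75},
]

# Each of these calls would otherwise be its own hand-coded web service method:
east = query(customers, where=lambda c: c["region"] == "East")
top_earner = query(customers, order_by=lambda c: -c["revenue"], top=1)
```

      This is the same shift that Data Services and REST-style query URLs offer over the wire: the query shape travels with the request, so the back end doesn’t grow a new method per shape.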

      And of course ADO.NET Data Services isn’t the only technology approaching this “general, flexible data access over the network” technology. REST proponents are building their own URL schemes for selecting, filtering, and otherwise querying and updating specific “entities” or database rows or whatever. This is a growing trend, and for good reason: it rids us of our need for an inflexible and code-heavy communication proxy layer in our design.

      Where Oslo-based systems need to integrate with other systems, sensible SOA techniques as well as flexible REST-based data access protocols can be used. Where a community of components can share an Oslo repository, there is some additional benefit. There will be limitations, of course.

      OLE Compound Files (or structured documents) weren’t exactly the same thing, but they were an attempt at integration. It seems like people were trying to make a bigger deal out of it than it really was. I don’t think of it as a failure–at least, it always seemed to work fine for me. It invented the concept of execution hosting containers, and executed logic from different programs in different containers, so that you could stick a view of an Excel document inside a Word document, or vice versa. That was a story of execution integration, of creating a visual mashup of various documents, but the data across documents wasn’t joined or integrated into a usable whole.

      Oslo is about model-driven development, so primarily Oslo is concerned with the models that drive an application. There are visual models depicting architectural components and network boundaries, there are textual models that configure what formats, protocols, and behaviors should be used with WCF communications, there are models that are interpreted to lay out user interfaces, and there are models that describe security constraints across an enterprise. Oslo is aiming to put all of these models into a representation that is interoperable enough to join all of the models, and not only use them to drive software behavior, but also to do interesting and valuable analysis on them. Currently our software models are in a million different formats and “repositories”. Putting them together makes sense.

      Don’t be discouraged by one superficially-similar technology that disappointed you long ago. Look at how long it took Microsoft to tackle Entity Framework, and they’re not all the way there yet. These things, large in scope, take time. Sometimes a couple tries. (Remember ObjectSpaces?) But with game changing technologies like Oslo, I think it will be well worth the journey and effort.

  10. “Currently our software models are in a million different formats and ‘repositories’. Putting them together makes sense.”

    Our software models are in different formats because they have completely different goals and accomplish completely different things. You say that putting them together makes sense as if that were self-evident, but I don’t see the value in it.

    I guess all I can say is that I am unconvinced. Everything publicly available about Oslo is abstract discussion, not real world application. You responded to my criticism of that abstract discussion with yet another abstract discussion. Even the PDC CTP is a collection of half-finished tools and trivial samples. It all still seems like pie in the sky to me. Maybe you have better vision than I do. Or maybe this is yet another technology that Microsoft is really excited about right up until the point when they drop it for The Next Big Thing (did someone mention ObjectSpaces?). Either way, until someone can show me how my job (writing internal LOB apps for a medium sized financial company) will be improved by Oslo, I will remain unconvinced.

    • Dan Vanderboom said

      Our software models aren’t in different formats because they have different goals. A developer doesn’t say, “I need to store config data about X, which has a different purpose from Y, so I’m going to make up a new format, or use TSV or CSV, to make sure these differently-aimed components are stored differently and can’t be related easily.” There’s no purpose to the disarray. Models are stored differently because there’s no interoperability story yet, no coordination with the hope of relating the data for other purposes. The Visio folks make up their format, and they don’t talk to the WCF team making their configuration tool, so ultimately it’s just been a lack of communication and agreement on standards.

      I’m not saying you should be convinced to start using Oslo to write all of your new software, or rewrite what you have. Like you said, Oslo is an unfinished collection of tools and thoughts at this point. Be skeptical! That’s healthy.

      Two of the big examples that came up for this model-driven approach were SharePoint and CRM. These are heavily model-driven applications. Their user interface layouts, validation rules, plugins, lists, and so on are all driven by data stored in a database. This is one use of Oslo, and what these teams have found is that building applications in this data-driven (model-driven) way has provided a lot of benefits, cleaning up patterns that were much messier and more time-consuming to maintain with previous approaches. As software grows in complexity, they suggest that the same approach will be appropriate and useful in many scenarios. This doesn’t make Oslo a panacea, but it does warrant our attention as it matures and grows.

      The repository is one aspect, but I’ve focused more on the side of textual DSLs and the language innovation that is likely to come out of this. I will be blogging more about my initial experiences, which I’m already excited about. Even the small set of tools available today seem to allow me to prototype my system faster than I otherwise would have. More details on that soon.

    • Dan Vanderboom said

      The simple answer why there’s value in putting model data together is the same reason why you put any data in a relational database instead of a loose collection of lists in files. You can join them where they relate, query across models to validate one against another, see where they conflict and pose potential problems. If my WCF configuration is set up in a way that conflicts with security requirements I have of my system, joining this data allows me to more easily detect this before I deploy my software.
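      As a toy illustration of that point (tables and values invented here), putting two “models” in one relational store lets a single join surface the conflict before deployment:

```python
# Two "models" -- a service's transport configuration and a security policy --
# stored in one place, so one query finds where they disagree.
# Tables, columns, and values are all invented for this sketch.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE endpoint_config (service TEXT, transport TEXT)")
db.execute("CREATE TABLE security_policy (service TEXT, required_transport TEXT)")
db.executemany("INSERT INTO endpoint_config VALUES (?, ?)",
               [("Orders", "http"), ("Billing", "https")])
db.executemany("INSERT INTO security_policy VALUES (?, ?)",
               [("Orders", "https"), ("Billing", "https")])

conflicts = db.execute("""
    SELECT c.service FROM endpoint_config c
    JOIN security_policy p ON p.service = c.service
    WHERE c.transport <> p.required_transport
""").fetchall()
# 'Orders' is configured for http while policy requires https.
```

      When the models live in separate silos and formats, that same check means bespoke parsing and glue code; when they share a store, it’s one query.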

  11. Christian Gross said

    I get a very very strong feeling of this being something for Architecture Astronauts…

    And I think the following is very applicable:

    >> The Architecture Astronauts will say things like: “Can you imagine a program like Napster where you can download anything, not just songs?” Then they’ll build applications like Groove that they think are more general than Napster, but which seem to have neglected that wee little feature that lets you type the name of a song and then listen to it — the feature we wanted in the first place. Talk about missing the point. If Napster wasn’t peer-to-peer but it did let you type the name of a song and then listen to it, it would have been just as popular.

    When I look at Oslo, I see, ok interesting, but are you not forgetting that while the model might be similar, the underlying implementation is completely different?

    I write trading systems, and as such have released a trading platform based on Microsoft Excel, messaging, .NET. And time after time I see examples of how Oslo could be used to manage data streams, and react to patterns. You know all nice and dandy. BUT those examples are trivial toys that I can implement in a few minutes. They are missing the guts, which are the details.

    Let me give you an example. I would guess with Oslo you build a trading model. And in this model you have the ability to buy and sell an order. You even put in a stop loss, and profit. All simple, and could be modeled. Here is the twist: with one of my clients, they have informed me that their trades are too big to handle in a single shot. They make $25,000,000 trades and the broker limits them to $5,000,000 per trade.

    So how does Oslo deal with this? It does not fit into the model anymore. So there are two options. Option 1: it is an implementation detail. But even that is wrong, because when an order is chopped the limit value cannot be ensured. In the model you would define market, limit, etc. types of orders. When the model assumes a limit value, it cannot be mathematically enforced. Option 2: the model is adapted to facilitate the chopped order. However, the issue then becomes: what other details do you put into the model? Are we then not resorting to writing code? Because other brokers might have other issues and oddities?

    When I write trading systems these are the kinds of details that I run into day in and day out. And it is the details that drive me bonkers, but also give me the work, because I know these details and understand what it means when a trader says I need to trade 25,000,000.

    I wonder about the following comment you made:

    >>> Even the small set of tools available today seem to allow me to prototype my system faster than I otherwise would have. More details on that soon.

    If I may jab, prototypes are cute, but they don’t get work done… My point is that while people would like to believe that you can model trading, the reality is that any model that you define is actually pretty superficial. Models help us understand the overall architecture, and give us insight on what each piece should be doing. I agree with that. But from what I have seen of the modeling approach I am rather skeptical it can be put into any meaningful detail.

    • Dan Vanderboom said

      There’s nothing wrong with the generality of an application like Groove. Just because they left out a feature that you and many others want doesn’t mean the generality of the application prevents it. I’m not a Groove user, but different kinds of content could be identified and labeled, and a search could be implemented to look through metadata such as song names, movie names, and so on. There may be business or legal reasons why the Groove team doesn’t support those things.

      Different developers work on different kinds of systems. LOB applications are in much demand, but there is also systems programming, OS development, compiler and dev tool development, and in many cases there is a need for great generality and wide applicability, the ability to be flexible and adaptable without experiencing a lot of pain. If you have no need to be anything other than very specific and focused solving well-understood problems, that’s great.

      The model might be similar to what? Models don’t need to be similar to each other to be used in combination, they just need to be formatted and accessed similarly. To give another example, look at a good SOA implementation. You have a .NET service over here, and over there you maybe have a Java service, and they’re communicating because they share an understanding of how to format and parse messages, and have beforehand agreed on a shared protocol (and maybe there’s a process of negotiation to choose from several, but then that negotiation is standardized). Those services may be structured and implemented in completely different ways, having entirely different purposes, but as long as they share a shape and format for messages, they’re able to communicate with and benefit from each other.

      In models it’s very similar. These models are really just occasionally-updated persistent messages, and if each software component that uses a model formats it and stores it differently, then no component can easily combine the models into useful views. Sure, you could write a component to manually read a CSV text file, and then access a database to query a table, and then hook into some API through a DLL to request other model information, and then mash them together in your code. But that’s very expensive code to write in terms of mental cycles and code maintainability.

      Having one place and one structured mechanism to store those models which lends itself naturally to combination (such as a relational database) substantially lowers the barrier to entry for doing that kind of work, and many opportunities that “would have been nice” if it weren’t for all the crazy work required now become trivial and worthwhile. The balance of effort versus reward shifts in favor of reward for many activities we wouldn’t consider today. In that sense, I see Oslo opening more doors.

      If that’s not for everybody, that’s okay, but I have a feeling it will be applicable to a wider audience than you think. I’m not saying it will change the way you develop in v1 out of the box–chances are, like every other new technology, it will evolve over many versions–but in a couple of years, when the tools have evolved more and much more feedback has made its way to Microsoft, I have reason to believe it’s going to be a Really Big Deal. Because of their size, Microsoft can be slow moving in some areas, but they have very large ears (more so lately, with early and frequent code drops). They also have a great reputation for delivering excellent development platforms.

      If you have some beef with Microsoft DevDiv, or .NET Framework, or Visual Studio, I’m sure you have your reasons. Personally I’m pretty happy with the tools I’m using; I just get impatient sometimes waiting for products and features to come out that I need/want.

      Your characterization of Oslo as something you might use to “manage data streams and react to patterns” suggests you’re missing the whole point. Even without Oslo, you must have modeled your trading platform in some way. Maybe that was with UML on a whiteboard, or maybe you did an ER diagram in a CASE tool, or just started creating tables and columns in a database, but that’s all modeling. You talk about the difficulties of writing business rule logic for trading systems, and you seem to suggest that there’s no model you could create to effectively capture the logic you need to make your software system useful. Then how on earth does your system provide benefit now?! Surely you’ve devised some way to represent the knowledge you need to make automated decisions based upon it! Or perhaps what you’re saying is that some decisions in your LOB have to be made by humans, which just means it’s part of your software’s workflow business rules, but then that workflow can be modeled and it’s the management of that workflow that your software is primarily concerned with. Oslo isn’t trying to force your software to make human decisions; and it’s not forcing you to define models that you wouldn’t want to create anyway. It’s just trying to give you more powerful modeling tools to do what you’re already doing, and hopefully to make some new things possible as well.

      If your client’s orders are too big to handle in one trade because the broker limits them to $5 million per trade, that sounds like a simple business rule that their broker enforces and doesn’t have much to do with your software. If you need to add some logic to make exceptions for certain of the broker’s clients (or whatever makes sense in the trading industry), that has to be decided based on some data that gets modeled somehow, with Oslo or in another way.
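      Just to illustrate what I mean by a simple rule, the chopping itself reduces to a few lines. This sketch (figures hypothetical) deliberately says nothing about limit enforcement or market impact, which is exactly where the hard modeling questions live:

```python
# One way the "broker caps each trade at $5M" rule might be expressed as a
# small, testable piece of logic alongside the model. Figures are hypothetical.

def chop_order(total, per_trade_limit):
    """Split an order into broker-sized chunks, preserving the exact total."""
    if total <= 0 or per_trade_limit <= 0:
        raise ValueError("amounts must be positive")
    full, remainder = divmod(total, per_trade_limit)
    chunks = [per_trade_limit] * full
    if remainder:
        chunks.append(remainder)
    return chunks

chunks = chop_order(25_000_000, 5_000_000)   # five trades of $5M
assert sum(chunks) == 25_000_000             # nothing lost in the split
```

      Whether the chopped trades can still honor a limit price, and what the market maker sees, are separate concerns that the model would have to capture explicitly.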

      It sounds like the trading software industry requires deep knowledge of very complex models. So why wouldn’t you like some new tools to make that modeling effort easier for you? Maybe you’re happy with what you have and don’t ever want to look at the new and improved tools. Nobody is going to take that option away from you. Hell, Steve Gibson (author of SpinRite and the man who coined the word “spyware”) continues today to write all of his software in assembly language, and refuses to join the rest of us in the world of high-level languages. And he seems to be doing pretty well for himself. More power to him! I on the other hand enjoy C# 3.0 and LINQ immensely, and believe there’s a lot of advantage there. I get to focus much more on my core problem instead of how every step is implemented at the register access level.

      I don’t see the cuteness of prototypes, and there are different kinds. When I start the design of a new system, or a new major component or module for one, I identify the largest risks first and ensure that certain ideas I have will actually work. If I don’t do this, a whole lot of time can pass with schedules based on assumptions that are actually pretty shaky. Doing a few technical feasibility prototypes gives me great peace of mind and makes scheduling and planning easier. UI prototypes give an early look to end users and thus early feedback to the developers, and give all stakeholders more assurance that the development team is on the right track. There are some throwaway prototypes, and many more that get incorporated directly into production code and evolve from there. So I don’t know what’s “cute” about these experiments and starting-points for software components. They’re a very important part of an agile software project.

      The models that we create are fairly superficial. The abstraction we define of a Person is a mockery of a real Person. But that’s okay: models don’t need to be deeply representative of everything they allude to. They only need to contain the details that our software requires to make the right decisions. You say people “want to believe you can model trading”. What I’m saying is that you have already modeled trading in some superficial way in your software, and that software does provide value to your users. Perhaps what Oslo is saying is, all of your models don’t have to be hard-coded into binaries in the way you store them now, or saved to different formats and storage engines if external. Oslo has many facets, and one is that of a bridge for moving some of that model from imperative code to declarative structures that can be queried, manipulated, and reused in meaningful ways.

      There isn’t a “modeling approach”. There are a million approaches to development, and modeling is an important aspect of every one of them. What’s new isn’t modeling; it’s the heavy use of the word modeling to provoke a new understanding of its role and importance, and to establish a shared definition for future conversation.

  12. Christian Gross said

    I am going to be cynical. In all of this discussion, which was rather lengthy (and I appreciate the in-depth answer), you barely committed a paragraph to the problem that I posed. I actually wanted an Oslo answer on how such a thing could be modeled in Oslo. I proposed two potential ideas, but your answer was:

    >If your client’s orders are too big to handle in one trade because the broker limits them to $5 million per trade, that sounds like a simple business rule that their broker enforces and doesn’t have much to do with your software. If you need to add some logic to make exceptions for certain of the broker’s clients (or whatever makes sense in the trading industry), that has to be decided based on some data that gets modeled somehow, with Oslo or in another way.

    This answer is nowhere near complete. I could cynically say that what you just did was handwave the answer away. The fact that you are saying that it is a simple business rule is a handwave. It is not a simple business rule. It has quite a bit to do with trade execution and trading margins. The real issue here is that when I issue such orders I affect the market, and the market maker within the brokerage and on the market will see these orders coming through.

    I don’t actually expect you to know this, but then again I did not expect a handwave either. What I expected was an answer to how I could use Oslo to solve my problem. And I suppose the answer is no. I don’t expect you to solve the problem or even address it now, that’s ok since you have answered the question of what Oslo does and what use it could be to me. For that I am appreciative.

  13. Christian Gross said


    You said:

    If you have some beef with Microsoft DevDiv, or .NET Framework, or Visual Studio, I’m sure you have your reasons. Personally I’m pretty happy with the tools I’m using;

    For the record, nowhere did I say that I have a beef with the Microsoft .NET Framework or Visual Studio. I actually have no clue as to why you are even saying this. I am even annoyed that you think I happen to be some “freak who hates MS.”

    In fact on the contrary. I am in the market (Financial) and I know Microsoft all too well. I happen to like Office and Excel. That is how I earn my money. And I happen to like Visual Studio and .NET, again how I earn my money. I even at one time was a Regional Director. So if you want to accuse me of something, accuse me of not drinking the Kool-Aid…

    • Dan Vanderboom said

      I said “if you have some beef”, not that you do or that you claimed to. I apologize for using the phrase “have beef”, which is too open to subjective interpretation. I don’t think you’re a Microsoft hater. What I was trying to point out was your heavy cynicism, which seems to go above and beyond skepticism, and your lack of confidence that the Oslo team is working on something with real value, when it seems that you don’t yet have a clear picture of what the technology is all about. The Oslo team is an all-star cast of folks who have worked on great technologies like WCF and the VB compiler, very smart and productive developers with a lot of experience shipping real products that millions of people are using. Nobody’s perfect, but I doubt they’ve all been duped into working on something nebulous, unsubstantial, and useless.

      I also didn’t intend to summarily dismiss the complexity of your software’s business rules. The one I referred to, that of limiting trades to a specific dollar amount, sounds simple as you’ve described it thus far. As complex as our systems get, they are reduced eventually to smaller and simpler problems. Consider that statement a jumping off point for further conversation. If it’s more complex than you described, tell me about it and then I can give you more information about how it could be modeled, and what “modeling” means (to me, at least) in that context.

      Part of this, I believe, is a matter of scale. Huge organizations that deal with large economies of scale deal with issues that are unfamiliar and hard to relate to for much smaller organizations. Microsoft develops software systems on such a massive scale that they’re one of the first to recognize specific limitations of various approaches and opportunities for better consolidation, integration, automated testing, modeling, and every other aspect of software engineering. The problems they need to solve (for example, source control for massive teams) aren’t always the ones that you or I need to be concerned with. On the other hand, by solving these problems for themselves, they tend to encounter a ton of issues, each of which may be encountered by only a handful of customers. With Oslo, and the fullest use of the approach (using the repository, being fully model-driven, etc.), I suspect it won’t be really compelling in the beginning for smaller development organizations without a need for great flexibility, and may never be. The reason I personally get excited about Oslo is because I think subsets of the technology will be useful. That, and I happen to be interested in language design, and I think DSLs have great productivity benefits, whether as a way to create visual designers/editors or textual languages.

      A lot of the conversation around Oslo will likely be at a higher level because it’s so new and we’re looking at it from the business case side first. As the product team moves forward and obtains feedback from the community, the direction will be solidified and the details determined.

      As far as how things are modeled with Oslo specifically, I think a lot of that remains to be seen, and I also think there will be many different approaches supported. For example, today you can write a model in the M textual language. As explained in my article, MSchema is the aspect of the language that lets you define types, members, relationships, calculated values, and extents. For those who work with smaller models and prefer visual editors, we’ll have something akin to the class diagram designer; for those who dislike that approach or who work with much larger data models, working in M will probably be preferred.

      By running your MSchema definition through the M compiler, you can generate the SQL statements to set up your database. I’ve found this already to be handy. In the past I’ve written SQL scripts by hand to create the tables, columns, foreign key relationships, and constraints that I needed for a system. But with M, I have a model description that is immediately self-validating in the editor, and I don’t have to define types for every member right away. I can “sketch in” the concepts that my system works with, and refine them with a business analyst nearby, all the while having useful validation. As my model solidifies, I can add the appropriate data types and generate the SQL I need.

      I find that working with M is more intuitive than SQL. M lets me express what I want, whereas SQL is a more specific implementation language with less intuitive syntax. Even though I’ve had many years of experience writing SQL, I still much prefer to write M, and I only first saw it in November. That by itself is a big deal to me.
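      This isn’t M, of course, but the workflow I’m describing can be sketched in a few lines of Python: a declarative model description (the format here is invented for illustration) compiled down to SQL DDL.

```python
# A sketch of "declarative model in, SQL DDL out". The model format is
# invented here; it only stands in for the role MSchema plays.

model = {
    "Person": {"Id": "INTEGER PRIMARY KEY", "Name": "TEXT NOT NULL", "Age": "INTEGER"},
    "Order":  {"Id": "INTEGER PRIMARY KEY", "PersonId": "INTEGER", "Total": "REAL"},
}

def to_sql(model):
    """Emit one CREATE TABLE statement per declared type."""
    statements = []
    for table, columns in model.items():
        cols = ", ".join(f"{name} {type_}" for name, type_ in columns.items())
        statements.append(f'CREATE TABLE "{table}" ({cols});')
    return "\n".join(statements)

ddl = to_sql(model)
```

      The point isn’t the generator itself; it’s that the model stays a single editable description, and the SQL is a derived artifact you regenerate as the model solidifies.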

      Another specific use, which I’ve been saving for my follow-up article (but will share with you here) is the use of M to facilitate automated unit and integration testing. Because I can easily create a textual DSL, which I’ve done for the current system I’m building, I can create a very intuitive-looking text file of a bunch of data and their relationships to each other. In the past, stuffing data into database tables for testing has been somewhat painful; I would write insert statements, make sure they’re executed in the right order, execute them, then run my integration test. (Or mock a retrieved graph of objects in memory and skip the DB access altogether.) With my DSL, I write a simple and intuitive textual description of the data I want to test, and the MX image tool will generate all the correct SQL insert and update statements for me, stuffing the results into a database. I can then clear the DB, import my MGraph data into the DB (or skip the DB and import directly into memory), and test with this data.
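      The same idea, sketched in Python rather than MGrammar (the line format here is invented): a tiny textual description of test data is parsed and turned into INSERT statements.

```python
# A sketch of the test-data DSL idea: a small line-oriented format
# (invented for this example) becomes SQL INSERT statements.

def dsl_to_inserts(text):
    """Each line 'Table: col=value, col=value' yields one INSERT statement."""
    statements = []
    for line in text.strip().splitlines():
        table, _, fields = line.partition(":")
        pairs = [field.split("=") for field in fields.split(",")]
        cols = ", ".join(key.strip() for key, _ in pairs)
        vals = ", ".join(f"'{value.strip()}'" for _, value in pairs)
        statements.append(f"INSERT INTO {table.strip()} ({cols}) VALUES ({vals});")
    return statements

inserts = dsl_to_inserts("""
Person: Name=Alice, Age=30
Person: Name=Bob, Age=25
""")
```

      A real grammar gives you validation and Intellisense on top of this, but even the toy version shows why a readable data description beats hand-ordered insert scripts.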

      In a similar vein, I can test large parts of the system where the UI isn’t completed yet, by similarly importing data through scripts in my DSL language, and focus on the guts of the system before the UI is done and polished. My UI specialist/developer is very busy and backlogged, and this keeps me developing the back end quickly without having to wait for a good data input mechanism. It’s a powerful generic data import mechanism.

      If using Oslo can shave off an hour of development time here and there, while importing data or defining my database structure, it’s already a win for me, albeit in a small way. I’m excited because this is true even this early in the Oslo lifecycle. I’m excited, because as an amateur language designer, I now have a .NET tool for generating parsers based on a very clean syntax in MGrammar, and with its immediate tool validation and the promise of Intellisense support for new languages, I expect a lot of language innovation to occur in the community that otherwise wouldn’t have. More ideas and more tools = better. They won’t all be good, but it increases the chances that good ideas will emerge, in the same way that WPF doesn’t ensure good UI design (in fact, many horrifying designs will emerge), but it does increase the opportunity for good UI.

      A big part of this requires creativity and an open mind. If you’re overly cynical, you predispose your mind not to see the opportunities. With some imagination and enthusiasm, you can not only find creative ways to use Oslo to benefit your modeling efforts, but also, because Oslo is so fresh, participate in the community that helps shape where this technology is headed, and thereby ensure that it’s of maximum benefit to you. The progress of technology is supposed to be fun. Not everything will make sense, and we can’t incorporate every new tool that comes along, but it’s a good idea IMHO to keep up to date with the new stuff that comes out (especially when it’s such a large effort by the vendor that makes a good portion of your existing development tools), and at least give it a fair chance and try to understand it before dismissing it as useless.

  14. […] also related to this, from a broader perspective, the “Oslo” topics: Why Oslo is Important —  Oslo and the OMG —  Oslo is Love with Chris Sells — Videos on "Oslo" […]

  15. “Nobody’s perfect, but I doubt they’ve all been duped into working on something nebulous, unsubstantial, and useless.”

    If only I could believe that. But I’ve seen it happen too many times before, especially at Microsoft: high-level architects get excited about a new technology without giving enough thought to how it will actually be used. The result is a technology that, while possibly innovative and technologically interesting, can’t pull its own weight in the real world.

    I’m not saying that Oslo will fall into that category. I’m just saying that with all the technologies out there today, I can’t afford to spend time on a technology that may or may not be around a year from now. If Microsoft wants us to get excited about Oslo, they need to give us a reason to be excited, which means they have to show us how it is applicable to the real world, not just to the technology enthusiasts. But no one is doing that.

    The .NET framework itself suffered from a similar problem. For a long time after it was released, it was seen by technology decision makers as a Java imitation with no particular advantage in a business environment, because .NET’s main advocates were only interested in extolling its technological innovations, not its real world applicability. Don’t get me wrong; I’m all for technological innovation. And I hope that Oslo really does have the kind of positive impact that you obviously think it will. But there is no way I am going to put even a minor investment into a new, unfinished, unproven technology without at least some reason to believe that it is going to pay off. And so far, I haven’t found that reason.

    • Dan Vanderboom said

      As far as high-level architects getting excited about a technology without much thought of application… I’m not saying that doesn’t happen, at Microsoft or elsewhere. Obviously it has. But I’ve also seen a few things that help to mitigate some of that.

      One is Microsoft’s growing trend toward earlier and more frequent CTPs and betas, engaging customers and partners in the real world to obtain valuable feedback, instead of developing in a monolithic style in a dark basement, without interacting with real customers until the product is almost released. The Astoria team is a great example of this (early and incremental releases), and the Oslo team seems to be following suit. I hear you saying it’s too early to get involved, and I can understand why most developers can’t justify the investment at so early a stage. But this is in fact the best time to voice our concerns and provide insight that their team may miss, just as early childhood is the most critical time to establish good behavior and a healthy attitude. It’s up to each of us to decide when it makes sense to jump in and get involved, and I recognize my own involvement as being quite early. I just happen to have seen this coming for years now, or at least hoped it was coming.

      The other mitigating factor is that, despite projects “failing” at Microsoft, I keep hearing stories about how knowledge obtained from a canceled product is brought to bear in a new project. As developers and their knowledge are transferred from an initiative that falters to another project that succeeds, it occurs to me that the original investment wasn’t a complete waste after all. In fact, to the extent that the experience makes each developer stronger, and that real problems were solved along the way, you might say that nothing is lost but the expectation to deliver a specific product. Software development is a series of experiments, and sometimes an experiment’s result tells you that something isn’t possible (at least using that method). An experiment that hopes to prove the efficacy of a drug, for example, and fails to do so, still provides valuable knowledge about the nature of medicine. As Thomas Edison purportedly said, “I have not failed 1,000 times. I have successfully discovered 1,000 ways to NOT make a light bulb.” Hopefully we won’t need to stumble so many times to discover the answer we’re looking for, and as a smaller ISV we don’t have as much room for stumbling, but trial and error is part of the process at any scale. We don’t gain without risking, and when we risk we occasionally fall. I try many things that don’t work, or that prove infeasible given the current state of technology, and yet I manage to deliver complex systems successfully.

      I tend to believe that even the geeks who are hyper-focused on the technology instead of end-user needs tend to have a sense of the right direction to go in terms of architecture (though not end-user workflow). I hear a lot about how developers are too focused on the implementation details, terms like SOA and widgets and plugins and so forth, which customers don’t care about, and while I agree that we need to be keenly aware of customer/market needs (or someone driving the project needs to be), it’s entirely appropriate for developers to swim in this terminology and realm of abstractions. Do you really think that guys working in the construction trades shouldn’t be so focused on length of nails, the PSI of nail guns, wrench alloys, or the new materials used in plumbing connectors? As a home owner, I wouldn’t care about any of that, but I hope to hell the construction crew thinks and breathes that stuff! (And believe me, they do, I hear their shop talk all the time.)

      The problem you refer to with the .NET Framework was a failure of marketing and image projection, not of the technology. The developers were concerned with precisely the things we want them to be concerned with: giving us not just some JVM clone, but the ability to have multiple languages communicating with each other and using each other’s libraries seamlessly.

      We need our architects and developers focusing on architecture and design, getting excited about what technology can do, looking for ways to exploit the knowledge and skill they already have to work with. And we also need someone, developer or otherwise, to be aware of actual market demands and customer perspectives. We need marketing people to project the right images. And we need all of these people communicating effectively. The problem is not that any one of these is happening; the problem is when one of these is not happening.


    • Dan Vanderboom said

      I guess the other thing is, the way I see it, Oslo will ultimately become just one vendor’s implementation of a common, emerging pattern of software factories and language workbenches. I don’t see Oslo as some hare-brained idea conceived originally within Microsoft’s walls, but as a resource-laden development team addressing fairly well-studied fields in order to deliver greater expressive power to developers. These dynamic tool experiences run in parallel to the more dynamic languages and platforms that are becoming popular (the DLR, for instance), with their ability to treat program code as a data structure that can be manipulated at runtime (modifying types and so on), providing tremendous expressive power. Some are concerned about all that additional power and worried about not having enough constraint, but the fact is that these languages allow competent developers to be more productive. The cost is some performance, which improves over time with new optimization techniques and the rise of multicore.
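      To make “treating program code as a data structure that can be manipulated at runtime” concrete, here is a small sketch in Python, itself a dynamic language in the same spirit as the DLR languages (this is not DLR code; the `Customer` class and `greeting` method are purely illustrative):

```python
class Customer:
    def __init__(self, name):
        self.name = name

def greeting(self):
    return f"Hello, {self.name}"

# Treat the type itself as a mutable data structure: attach a new
# method at runtime; existing and future instances pick it up immediately.
Customer.greeting = greeting

c = Customer("Alice")
print(c.greeting())  # prints "Hello, Alice"
```

      This is the power-versus-constraint trade-off mentioned above: nothing stops a careless developer from redefining a type out from under the rest of the program, but a competent one can adapt running systems in ways a statically fixed type system doesn’t allow.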

  16. I hear what you’re saying, and I agree with you somewhat. I agree that the initial stumbling of .NET was primarily a failing of marketing. That is something Microsoft has had a problem with throughout its history: they build a better (or at least a different) mouse trap, and then they expect the world to beat a path to their door. But the real world doesn’t work that way; you have to sell it if you want anyone to buy it (literally or figuratively).

    I think where we differ is that you see the failure of marketing as just a problem with image projection, whereas I see it as being just as big as any technical challenge they are trying to overcome. I think it’s great that they are releasing CTPs early in an attempt to get customer feedback early in the process. But what I am saying is that the effort serves no purpose whatsoever if they don’t also give customers a reason to invest their time in testing and providing feedback on an unfinished, unproven technology. Why should I download and install the Oslo CTP, learn M and MSchema, put up with buggy and undocumented tools, and try to figure out how the whole thing fits together, when the world of technology is advancing so quickly and there are so many more mature technologies I could be learning and integrating into my applications? I’m not saying that there isn’t a reason; I’m saying no one is making any attempt to articulate those reasons in a way that anyone who isn’t already sold can understand. Yes, that is a failure of marketing, and it’s a big problem if Microsoft actually wants anyone to pay attention to Oslo.

    • Dan Vanderboom said

      You’re absolutely right: Microsoft needs to sell the idea of Oslo to developers. That’s the intended customer of Oslo, not the end user or owner of that developer’s software. It’s an implementation detail that the customer won’t care about anyway, except in terms of what can be built for them now because of it, or what now is feasible that wasn’t previously. They just want their results.

      If the development team is willing to listen to feedback, why shouldn’t the marketing team listen as well? What would it take to sell us on Oslo? What’s the right image to project? What is the best way to convince developers that a technology is ready to take a chance on and start testing? Or maybe this has already been decided, and it’s just a matter of the product maturing to a predetermined milestone.

      I do see marketing challenges being significantly technical, and worthy of allocating appropriate resources to get right, as you do. If anything, I focus on writing about the technical issues of Oslo because I am myself trying to sell the idea (or my version of it, anyway) and “Technical” happens to be the language the customer speaks.

  18. […] Oleg, and Omer, I’ll have the pleasure to chat about CodeGen, DSL’s, Oslo (check out why it is important btw), and maybe a couple other caffeine-powered kinds of stuff. That’s promising to be… […]

  19. […] I do agree with most of the points of Dan Vanderboom’s Why Oslo is Important. […]

  20. Ben Gillis said

    This doesn’t really tell anyone why Oslo is important. There are some tidbits that do, but they’re buried in such a plethora of verbiage that you could have made this article 1/10 the length it is (and thus 10x more productive? 😉)

    With a title like this and then to read the body of it, it’s understandable why there are negative comments on it. And, that’s a shame, because Oslo is important and that can be stated in a few bullet points to a few paragraphs. Drilling into the tools without that background is of little value.

    > Much of the confusion around Oslo occurs for two reasons:
    > 1. Oslo is designed at a higher level of abstraction than most systems today,
    This makes no sense. Oslo is designed in the same way any common db-backed, parser-driven, automated framework application behaves today. (And this is really #1 reason for confusion: most hand-code their way out of nearly everything. An “agile” shop is not an *automated* shop.)
    > 2. Because of its abstract nature, core concepts are defined in terms that are heavily overloaded
    lol. “Abstract nature”? btw, these terms are not heavily-overloaded to anybody but those w/o experience. Sure, there are various approaches to model-driven development, but hands-on, I haven’t found anyone who can’t understand what a “model”, “repository” and a “language” is.

    Much of the confusion about Oslo depends on whether one is already modeling or not. For those who are, the confusion is generally about what the details and features of Oslo will be by RTM. For those who are not familiar with modeling, this article doesn’t give them a succinct, concise reason why it is important.

    This isn’t meant to flame. I think Oslo is important, and I think it’s important that the right message get out about Oslo. Looking at the next article, most of those questions are due to confusion. An informed group without modeling experience would be asking different questions if they had already been told why Oslo is important.

    • Dan Vanderboom said


      You’re correct that I didn’t constrain the article to answering the question in the title. I intended to cover more of the “what is Oslo” in addition to its importance, so perhaps the title could have been better. In an article this popular, there are bound to be some negative comments, and not all of the comments were negative.

      When I said that Oslo was designed at a higher level of abstraction, I didn’t say it was implemented with more abstract tooling. I meant what I said; in other words, its conceptual design is more abstract than the conceptual design of most LOB software systems. Parsers in general are more abstract in this way, and Oslo introduces a parser generator. Yes, there are other parser generators out there, as I mentioned, and they are all several levels of abstraction above what most developers are used to.

      So when I talk about “abstract nature”, or abstraction in general, I’m referring to a definition of “considered apart from a particular case or instance”. A programming language like C# is abstract as it can be applied to the solution of many problems in many domains. A language and toolset for defining new languages and providing parsers and generators is even more abstracted away from writing programs that deliver value to business.

      The reason I mentioned overloaded terminology is that most developers I’ve talked to wouldn’t have considered XAML or the contents of a configuration file to be included in the definition of “language” as they normally use the word. The definition of “model” is likewise tricky, as some see it only as drawing on a whiteboard or in a lines-and-boxes diagramming tool, but the modeling of our systems also occurs in code, when we create patterns of classes and methods and their interactions. In psychology, we talk about modeling problems in our minds with mental representations, which we manipulate internally to plan our actions, such as when we learn by imitation. More generally, modeling is “the act of representing something”, which is a very open-ended and inclusive definition. When listening to people talk about Oslo, I notice that they tend to talk at cross purposes due to differing definitions of these words.

      So as far as whether people are “already modeling or not”, I would say that all developers are already modeling by writing the code that they do. When you write your data access classes like Customer and SalesOrder, you’re modeling the business problem that you want to solve. Your programming language is a modeling tool. What I tell people about Oslo isn’t that it introduces modeling, but rather that it provides additional modeling tools. By thinking of what we already do as modeling, I think we’ll have a better grasp of what Oslo is, what it has to offer, and therefore why it’s important.
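      To make “your programming language is a modeling tool” concrete, here is an ordinary sketch in Python (the field shapes are hypothetical; only the Customer and SalesOrder names are borrowed from the paragraph above). The types, fields, and relationships constitute a model of the business domain, expressed in a general-purpose language rather than a diagram or an M schema:

```python
from dataclasses import dataclass, field

@dataclass
class Customer:
    name: str

@dataclass
class SalesOrder:
    # The association from order to customer is part of the model,
    # exactly as it would be in a lines-and-boxes diagram.
    customer: Customer
    total: float
    lines: list = field(default_factory=list)

order = SalesOrder(customer=Customer("Contoso"), total=99.95)
print(order.customer.name)  # prints "Contoso"
```

      Seen this way, a modeling platform like Oslo doesn’t introduce modeling so much as offer additional, more declarative tools for expressing models like this one.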

      No article can appease every audience. If you recognize an opportunity to explain an aspect or cover an angle that I and others haven’t, you can always write your own articles on the subject. More explanation is definitely needed.

    • Dan Vanderboom said


      So if I had to provide just a few points describing Oslo’s importance, as you suggested, it would be something like:

      Greater development productivity (the team is targeting 10x), due to:
      - better tools for declaratively expressing intent, and
      - better communication of intent among team members.

  21. […] Why Oslo is Important « Critical Development […]

  22. […] Even without Oslo, we have seen a proliferation of languages: IronRuby, IronPython, F#, and the list goes on.  A refactoring tool that is hard-coded for specific languages will be unable to keep pace with the growing family of .NET and markup languages, and certainly unable to deal with the demands of every DSL that emerges in the next few years.  If instead we had a way to identify our code identifiers to the refactoring tool, and indicate how they should be bound to identifiers in other languages in other files, or even other projects or solutions, the tools would be able to make some intelligent decisions without understanding each language ahead of time.  Each language’s language service could supply this information.  For more information on Microsoft Oslo and its relationship to a world of many languages, see my article on Why Oslo Is Important. […]

  33. […] This article by Dan Vanderboom is one of the best pieces of writing I’ve seen to explain why Oslo is important. Here are a few quotes, but you should take the time to read the whole thing. […]
