Critical Development

Language design, framework development, UI design, robotics and more.

Archive for the ‘User Interface Design’ Category

Dynamic Clipping of Rounded Corners in Silverlight

Posted by Dan Vanderboom on November 6, 2010

I recently came across a nice article by Colin Eberhardt on automatically adjusting clipping bounds whenever the element to be clipped was resized, and I decided to make some small additions to make it work with rounded corners for a project I’m working on. Since Colin was generous enough to share his code, I’m going to share my additions as well.

Colin’s solution involves the creation of an attached property called Clip.ToBounds (a bool) applied to the item whose bounds you want to serve as a clipping path.  When this property is attached, it adds a handler for the element’s SizeChanged event and updates the clipping path whenever the element is resized.  Simple but effective.

I’ve added two more attached properties: Clip.RadiusX and Clip.RadiusY.
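Following the pattern Colin established for Clip.ToBounds, the new values register as ordinary attached dependency properties.  Here’s a minimal sketch of what that looks like inside the Clip class (the shared change-handler name is my assumption, and only RadiusX is shown; RadiusY is registered identically):

public static readonly DependencyProperty RadiusXProperty =
    DependencyProperty.RegisterAttached("RadiusX", typeof(double), typeof(Clip),
        new PropertyMetadata(0.0, OnClipPropertyChanged));

public static double GetRadiusX(DependencyObject obj)
{
    return (double)obj.GetValue(RadiusXProperty);
}

public static void SetRadiusX(DependencyObject obj, double value)
{
    obj.SetValue(RadiusXProperty, value);
}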

Here’s what the end result looks like:

[Screenshot of the ContentView with rounded, clipped corners]

The whole bordered area is a UserControl I created called ContentView.  Its root is a Border, which contains a Grid that is broken into three rows.  The top row contains a ContentHeader control, and the bottom row contains a ContentFooter control.  This is defined in the following code:

<UserControl
    x:Class="ClipRoundedCorners.ContentView"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    xmlns:local="clr-namespace:ClipRoundedCorners"
    mc:Ignorable="d"
    d:DesignWidth="640" d:DesignHeight="480">

    <Border x:Name="LayoutRoot" BorderThickness="2" BorderBrush="White" CornerRadius="10">
        <Grid local:Clip.ToBounds="true" local:Clip.RadiusX="10" local:Clip.RadiusY="10">
            <Grid.RowDefinitions>
                <RowDefinition Height="84"/>
                <RowDefinition/>
                <RowDefinition Height="88"/>
            </Grid.RowDefinitions>
            <local:ContentHeader Margin="0"/>
            <local:ContentFooter Margin="0" d:LayoutOverrides="Width" Grid.Row="2"/>
            <ScrollViewer Margin="0" Grid.Row="1" Background="#FF545454"/>
        </Grid>
    </Border>
</UserControl>


The attached Clip properties are defined on the Grid control. Even though the Border defines its own CornerRadius, it’s the Grid within it that needs to set the clipping path.

Within the Clip class, I’ve updated the ClipToBounds method to set the RadiusX and RadiusY properties of the RectangleGeometry object used to set the clipping path.

private static void ClipToBounds(FrameworkElement Element)
{
    if (GetToBounds(Element))
    {
        Element.Clip = new RectangleGeometry()
        {
            Rect = new Rect(0, 0, Element.ActualWidth, Element.ActualHeight),
            RadiusX = GetRadiusX(Element),
            RadiusY = GetRadiusY(Element)
        };
    }
    else
    {
        Element.Clip = null;
    }
}
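For context, the resize wiring itself lives in the attached property’s change handler, which isn’t reproduced in this post.  A minimal sketch of what it presumably looks like (the handler name is assumed, and a production version would also guard against subscribing to SizeChanged more than once):

private static void OnClipPropertyChanged(DependencyObject d,
    DependencyPropertyChangedEventArgs e)
{
    var element = d as FrameworkElement;
    if (element == null)
        return;

    // Apply the clip immediately, then again whenever the element is resized.
    ClipToBounds(element);
    element.SizeChanged += (sender, args) => ClipToBounds(element);
}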

The complete Visual Studio 2010 (Silverlight 4) demo project can be downloaded here.

Resize your browser to see ContentView resize.  The clipping path updates like you’d expect it to.

Happy Silverlight coding!

Posted in Silverlight, User Interface Design | 3 Comments »

The Archetype Language (Part 8)

Posted by Dan Vanderboom on October 1, 2010

Overview

This is part of a continuing series of articles about a new .NET language under development called Archetype.  Archetype is a C-style (curly brace) functional, object-oriented (class-based), metaprogramming-capable language with features and syntax borrowed from many languages, as well as some new constructs.  A major design goal is to succinctly and elegantly implement common patterns that normally require a lot of boilerplate code which can be difficult, error-prone, or just plain onerous to write.

You can follow the news and progress on the Archetype compiler on twitter @archetypelang.

Links to the individual articles:

Part 1 – Properties and fields, function syntax, the me keyword

Part 2 – Start function, named and anonymous delegates, delegate duck typing, bindable properties, composite bindings, binding expressions, namespace imports, string concatenation

Part 3 – Exception handling, local variable definition, namespace imports, aliases, iteration (loop, fork-join, while, unless), calling functions and delegates asynchronously, messages

Part 4 – Conditional selection (if), pattern matching, regular expression literals, agents, classes and traits

Part 5 – Type extensions, custom control structures

Part 6 – If expressions, enumerations, nullable types, tuples, streams, list comprehensions, subrange types, type constraint expressions

Part 7 – Semantic density, operator overloading, custom operators

Part 8 – Constructors, declarative Archetype: the initializer body

Part 9 – Params & fluent syntax, safe navigation operator, null coalescing operators

Conceptual articles about language design and development tools:

Language Design: Complexity, Extensibility, and Intention

Reimagining the IDE

Better Tool Support for .NET

Constructors

A constructor in Archetype is a recommended, predefined prototype for instantiating an object correctly.

The default parameterless constructor is defined implicitly (it exists even if it isn’t written), even when other constructors are defined explicitly.  This is unlike other languages, which hide the parameterless constructor when others are defined.  Classes with default constructors will therefore be common in Archetype, making behaviors like serialization and dynamic construction easier to support.  When the parameterless constructor needs to be hidden, it can be defined with reduced visibility, such as private.

A constructor is defined with the name new, consistent with how it’s invoked.

Let’s start with a very basic class, and build up to more complicated examples.

Customer object
{
    FirstName string;
    LastName string;
}

Even without an explicit constructor, this class is usable; it’s important for Archetype to define constructs that are useful in their default configurations.  You couldn’t get more basic than the Customer class above.  If we want to define the constructor explicitly, we can do so.

Customer object
{
    FirstName string;
    LastName string;

    new ()
    {
        // do nothing
    }
}

Instantiating a Customer object is easy.  With the parameterless constructor, parentheses are optional.

var dilbert = new Customer;

Archetype, like C#, supports constructor initializers:

var dilbert = new Customer
{
    FirstName = "Dilbert",
    LastName = "Smith"
};

When you have only a few parameters and want to compress this call to a single line, the curly braces end up feeling like a little too much ceremony (too formal, perhaps?).

var dilbert = new Customer { FirstName = "Dilbert", LastName = "Smith" };

Archetype supports passing these assignment statements as final arguments of the constructor parameter list, like this:

var dilbert = new Customer(FirstName = "Dilbert", LastName = "Smith");

As a result, there isn’t much need to define constructors that only set fields and properties to the values of constructor parameters.  Because Archetype has this mechanism for fluidly initializing objects at construction, the only time constructors really need to be defined is when construction of the object is complicated or unintuitive, in which case a supplied construction pattern is a reliable way to ensure it’s done correctly.  Our Customer example doesn’t meet those criteria, but if it did, this is one way we could write it:

Customer object
{
    FirstName string;
    LastName string;

    new (FirstName string, LastName string)
    {
        this.FirstName = FirstName;
        this.LastName = LastName;
    }
}

To avoid having to qualify FirstName with the this keyword, many people prefer naming their parameters with the first character lower-cased.  That’s an unfortunate compromise.  The public members of a type form an outward-facing API, and I think Pascal casing more naturally respects English grammar; the first (and most important) word of an identifier shouldn’t have its significance downplayed by lower-casing it just to get around a syntax limitation.

But instead of taking sides in a naming convention war, we can solve the problem in the language and remove the need to make any compromise.

new (FirstName string, LastName string)
{
    set FirstName, LastName;
}

This lets us set individual properties named the same as constructor parameters.  It’s flexible enough to set some parameters and consume others differently, but when you want to set all parameters with matching member names, you can use the shortcut set all.  If that’s all the constructor needs to do, we can do away with the curly braces:

new (FirstName string, LastName string) set all;

If our Customer class contained a BirthDate property, we could use this constructor and pass in an initializer statement as a final parameter.

var dilbert = new Customer("Dilbert", "Smith", BirthDate = DateTime.Parse("7/4/1970"));

This works with multiple initializers.  Alternatively, we could use an initializer body after the parameter list:

var dilbert = new Customer("Dilbert", "Smith")
{
    BirthDate = DateTime.Parse("7/4/1970")
};

Note how we have two places to supply data to a new object, if needed: the parameter list for simple, short values, and the initializer body for much larger assignments.


Another common construction pattern is for one or more constructors to call another constructor with a default set of values.  Typically the constructor with the full list of parameters performs the actual work, while the shorter constructors call into the main one, passing in some default values and passing the others through.

new (EvaluateFunc Func<T>) new(null, null, EvaluateFunc);

new (BaseObject object, EvaluateFunc Func<T>) new(null, BaseObject, EvaluateFunc);

new (Name string, BaseObject object, EvaluateFunc Func<T>)
{
    set all;

    // do all the real work…
    // …
}

Declarative Archetype: The Initializer Body

The initializer body mentioned above has a special structure in Archetype.  Member assignment statements can appear side-by-side with value expressions that are processed by a special function called value.  This can be used, among other things, to add items to a collection.  It’s best seen in an example:

var dilbert = new Customer
{
    FirstName = "Dilbert",
    LastName = "Smith",
    BirthDate = DateTime.Parse("7/4/1970"),

    new SalesOrder(OrderCode = "ORD012940"),

    new SalesOrder
    {
        OrderCode = "ORD012941",

        new SalesOrderLine(ItemCode = "S0139", Quantity = 3),
        new SalesOrderLine(ItemCode = "S0142", Quantity = 1)
    }
};

The first three lines of the initializer set members with assignment statements.  The next expression in the list (new SalesOrder …) creates an object, but there’s no assignment.  It returns a value, but where does it go?  Take a look at the value functions below for the answer:

Customer object
{
    FirstName string;
    LastName string;
    Orders SalesOrder* = new;
    Invoices Invoice* = new;

    // formatted inline
    value (Order SalesOrder) Orders += Order;

    // formatted with full code block
    value (Invoice Invoice)
    {
        Invoices += Invoice;
    }
}

A Customer has several collections of things (Orders and Invoices here), and because there are two value functions in the class, any expression of type SalesOrder or Invoice will be evaluated and its value passed to the appropriate value function.  Expressions of other types will trigger a compile-time error.

The += and -= operators haven’t been shown before.  Their use is a very natural fit for stream and list types.  The += operator appends an object to a stream, and -= removes the first occurrence of that object.

This simple addition of a value function in types (classes and structs) gives Archetype the ability to represent hierarchical structures in a clean, declarative way.  Sure, it’s always been possible to format expressions similarly, but the syntactic trappings of imperative languages have made this difficult and unattractive at best, and in most real-world cases impractical.

When I experimented with creating a Future class, I came up with a pattern in C# to nest structures in a tree for large future expressions, but the need to match parentheses gets in the way and consumes too much attention that’s better focused on the logic itself:

Future<string> FuturePi = null, FutureOmega = null, FutureConcat = null, FutureParen = null;

var result = new Future<string>("bracket",
    () => Bracket(FutureParen),
    (FutureParen = new Future<string>("parenthesize",
        () => Parenthesize(FutureConcat),
        (FutureConcat = new Future<string>("concat",
            () => FuturePi + " < " + FutureOmega,
            (FuturePi = new Future<string>("pi", () => CalculatePi(10))),
            (FutureOmega = new Future<string>("omega", () => CalculateOmega()))
        ))
    ))
);

It finally occurred to me that there’s a real difference between needing to set a few simple members and defining larger, more structured content, including nested structures.  The latter begs for a way to supply values without carrying the closing parentheses down multiple lines, or letting them build up into knots that must be carefully counted.  One ends up fidgeting with where to put them, and sometimes there’s no good answer.


Another feature we need to make this declarative notation robust is inline variable declaration and assignment.  Notice in the last example how several intermediary structures have variable names defined for them ahead of time, outside the expression.  Writing that Future code, I felt it was unfortunate these variables couldn’t be defined inline as part of the expression.  Doing so would allow us to define any kind of structure we might see in XML or JSON, such as this XAML UI code.

new Canvas -> LayoutRoot
{
    Height = Auto,
    Width = Auto,

    new StackPanel -> sp
    {
        Orientation = Vertical,
        Height = 150,
        Width = Auto,

        // three equivalent ways to set attached properties:
        Canvas.Top = 10,
        Canvas.Left = 20,

        with Canvas
        {
            Top = 10,
            Left = 20
        },

        with Canvas { Top = 10, Left = 20 },

        Loaded += (sender, e)
        {
            Debug.WriteLine("StackPanel sp.Loaded running");
            sp.ResizeTo(0.5 seconds, Auto, 200).Begin();
        },

        LayoutUpdated += HandleLayoutUpdated,

        new TextBlock
        {
            FontSize = 18,
            Text = "Title"
        },
        new TextBlock { Text = "Paragraph 1" },
        new TextBlock { Text = "Paragraph 2" },
        new TextBlock(Text = "Paragraph 3")
    }
};

A few notes are needed here:

•  Wow, this looks a lot like XAML, but much friendlier to the developers who actually have to read and edit it!  Yes, good observation.

•  Unlike XAML, every identifier here works with the all-important Rename refactoring, Go To Definition, Find All References, and so on.  This greatly reduces the work of finding relationships among things and manually updating related files.

•  Also unlike XAML, code for event handlers can be defined here.  I’m not saying you should cram all of your event handler logic in, but it could come in quite handy at times and I can’t see any reason to disable it.

•  The with token is a custom operator (see Part 7) that provides access to attached properties through an initializer body.  Custom extensions allow you to access these properties with a natural member-access style.

•  It hasn’t been possible to use generic classes in XAML.  Specifying UI in Archetype, this would be trivial, and I suspect they could be used to good effect in many ways.  Of course, in doing this you’d lose support for the designers in VS and Blend, which would be awful.

•  Auto is simply an alias for double.NaN.

•  The -> custom operator in these expressions defines a variable and sets it to the value of the new object.  The order of execution is:

1. Evaluate constructor parameters, if any are supplied.

2. Assign the object to the variable defined with ->, if supplied.

3. Set any fields or properties with assignment statements.

4. Evaluate value expressions, if supplied, and call the class’s value function with each one, if a value function has been defined.

5. Invoke any matching value function defined in class extensions.

By following this design, the example above can be translated into this C# code by the Archetype compiler:

var LayoutRoot = new Canvas()
{
    Height = double.NaN,
    Width = double.NaN
};

var sp = new StackPanel()
{
    Orientation = Orientation.Vertical,
    Height = 150.0,
    Width = double.NaN
};

LayoutRoot.Children.Add(sp);

sp.SetValue(Canvas.TopProperty, 10.0);
sp.SetValue(Canvas.LeftProperty, 20.0);

sp.Loaded += (sender, e) =>
{
    Debug.WriteLine("StackPanel sp.Loaded running");
    sp.ResizeTo(0.5.seconds(), double.NaN, 200.0).Begin();
};

sp.LayoutUpdated += HandleLayoutUpdated;

sp.Children.Add(new TextBlock() { FontSize = 18, Text = "Title" });
sp.Children.Add(new TextBlock() { Text = "Paragraph 1" });
sp.Children.Add(new TextBlock() { Text = "Paragraph 2" });
sp.Children.Add(new TextBlock() { Text = "Paragraph 3" });

var VisualTree = LayoutRoot;

Compare the two approaches. The C# code is a typical example of imperative structure building, while the Archetype code is arguably as declarative as XAML, and with many advantages over XAML for developers.


Going back to the Future example, we could rewrite this in Archetype a few different ways.  I’ll present two.  In the first one, value functions are used to receive the future’s evaluation function as well as any Future objects the expression depends on.

new Future<string>("bracket") -> result
{
    () => Bracket(FutureParen),

    new Future<string>("parenthesize") -> FutureParen
    {
        () => Parenthesize(FutureConcat),

        new Future<string>("concat") -> FutureConcat
        {
            () => FuturePi + " < " + FutureOmega,

            new Future<string>("pi") -> FuturePi
            {
                () => CalculatePi(10)
            },

            new Future<string>("omega") -> FutureOmega
            {
                () => CalculateOmega()
            }
        }
    }
}


The shorter approach passes an evaluation delegate in as a parameter.

new Future<string>(() => Bracket(FutureParen)) -> result
{
    new Future<string>(() => Parenthesize(FutureConcat)) -> FutureParen
    {
        new Future<string>(() => FuturePi + " < " + FutureOmega) -> FutureConcat
        {
            new Future<string>(() => CalculatePi(10)) -> FuturePi,
            new Future<string>(() => CalculateOmega()) -> FutureOmega
        }
    }
}

The name string parameter is missing from this last example; it was only for use during debugging.  What we have now is a very direct description of futures that depend on other futures in a dependency graph.

Summary

Object construction is a crucial part of an object-oriented language, and Archetype advances the state of the art with its options for constructing arbitrary object graphs and initializing even complicated state in a single expression.  These fluent, declarative syntax features are ideal for representing structures such as XAML UI, state machines, dependency graphs, and much more.

XAML is a language.  The question this work has me asking is: do we really need a separate language if our general purpose language supports highly declarative syntax? It’s a provocative question without an easy answer, but it seems clear that many DSLs could emerge within a language that so richly supports composition.

With the ability to define arbitrarily complex structures in code—from declarative object graphs to rich functional expressions—it’s hard to think of a situation that would be too difficult to model and build an API or application around.

Posted in Archetype Language, Data Structures, Design Patterns, Language Innovation, Silverlight, User Interface Design, WPF | 2 Comments »

Reimagining the IDE

Posted by Dan Vanderboom on May 31, 2010

Overview

After working in Visual Studio for the past decade, I’ve accumulated a broad spectrum of ideas on how the experience could be better.  From microscopic features like “I want to filter Intellisense member lists by member type” to recognition of larger patterns of conceptual organization and comprehension, there aren’t many corners of the IDE that couldn’t be improved with additional features or, in some cases, a redesign.

To put things in perspective, consider how the Windows Mobile platform languished for years and became stale (or “good enough”) until the iPhone changed the game and raised the bar on quality to a whole new level.  It wasn’t until fierce competition stole significant market share that Microsoft completely scrapped the Windows Mobile platform and started fresh with a complete redesign called Windows Phone 7.  This is one of the smartest things Microsoft has done in a long time.

After many years of incremental evolution, it’s often necessary to rethink, reimagine, and occasionally even start from scratch in order to make the next revolutionary jump forward.

Visual Studio Focus

Integrated Development Environments have been with us for at least the past decade.  Whether you work in Visual Studio, Eclipse, NetBeans, or another tool, there is tremendous overlap in the set of panels available, the flexible layout of those panels, saved workspaces, and add-in infrastructure to make as much as possible extensible.  I’ll focus on Visual Studio for my examples and explanations since that’s the IDE I’m most familiar with, but there are parallels to other IDEs for much of what I’m going to cover.

Visual Components & Flexible Layout

Visual layout is one thing that IDEs do right.  Instead of a monolithic UI, it’s broken down into individual components such as panels, toolbars, toolboxes, main menus and context menus, code editors, designers, and more.  These components can be laid out at runtime with intuitive drag-and-drop operations that visually suggest the end result.

The panels of an IDE can be docked to any edge of another panel, they can be laid on top of another panel to create tab controls, and adjacent panels can be relatively resized with splitters that appear between panels.  After many years of refinement, it’s hard to imagine a better layout system than this.

The ability to save layouts as workspaces in Expression Blend is a particularly nice feature.  It would be nicer still if the user could define triggers for these workspaces, such as “change layout to the UI Designer workspace when the XAML or Windows Forms designers are opened”.

IDE Hosting

Visual Studio and other development tools have traditionally been desktop applications.  In Silverlight 4, however, we now have a framework sufficiently powerful to build a respectable cross-platform IDE.

With features such as off-line, out-of-browser execution, full screen mode, custom context menus, and trusted access to the local file system, it’s now possible for a great IDE to be built and run on Windows, Mac OS X, or Linux, and to allow a developer to access the IDE and their solutions from any computer with a browser (and the Silverlight plug-in).

There are already programming editors and compilers in the cloud.  In episode 562 of .NET Rocks on teaching programming to kids, their guests point out that a subset of the Small Basic IDE is available in Silverlight.  For those looking to build programming editors, ActiPro has a SyntaxEditor control in WPF that they’re currently porting to Silverlight (for which they report seeing a lot of demand).

Ideally such an IDE would be free, or would have a free version available, but for those of us who need high-end tools and professional-level feature sets, imagine how nice it would be to pay a monthly fee for access to an ever-evolving IDE service instead of having to cough up $1,100 or $5,500 (or more) every couple of years.  Not only would costs be conveniently amortized over the span of the tool’s use, but all of your personal preferences would be easily synchronized across all of the computers you use to work in that IDE.

With cloud computing services such as Windows Azure, it would even be possible to off-load compilation of large solutions to the cloud.  Builds that took 30 minutes could be cut down to a few minutes or less by parallelizing build tasks across multiple cores and servers.

The era of cloud development tools is upon us.

Solution Explorer & The Project System

Solution Explorer is one of the most useful and important panels in Visual Studio.  It provides us with an organizational tool for all the assets in our solution, and provides a window into the project system on which core behaviors such as builds are based.  It is through the Solution Explorer that we typically add or remove files, and gain access to visual designers and the ever-present code editor.

In many ways, however, Solution Explorer and the project system it represents are built on an old and tired design that hasn’t evolved much since its introduction over ten years ago.

For example, it still isn’t possible to “add existing folder” and have that folder and all of its contents pulled into a project.  If you’ve ever had to rebuild a project file and pull in a large number of files organized in many nested folders, you have a good idea of how painful an effort this can be.

If you’ve ever tried sharing the same code across multiple incompatible platforms, between Full and Compact Framework, or between Silverlight 3 and Full Framework, you’ve likely run into kludgey workarounds like placing multiple project files in the same folder and including the same set of files, or using a tool like Project Linker.

Reference management can also be unwieldy when you have many projects and references.  How do you ensure you’re not accidentally referencing two different versions of the same assembly from two different projects?  My article on Project Reference Oddness in VS2008, which explores the mysterious and indirect ways references work, is by far one of my most popular articles.  I’m guessing that’s because so many people can relate to the complexity and confusion of managing these dependencies.

“Projects” Are Conceptually Overloaded: Violating the Single Responsibility Principle

In perhaps the most important example, consider how multiple projects are packaged for deployment, such as what happens for the sake of debugging.  Which assemblies and other files are copied to the output directory before the program is executed?  The answer, discussed in my Project Reference Oddness article, is that it depends.  Files that are added to a project as “Content” don’t even become part of the assembly: they’re just passed through as a deployment command.

So what exactly is a Visual Studio “project”?  It’s all of these things:

  • A set of source code files that will get compiled, producing an assembly.
  • A set of files that get embedded in the resulting assembly as resources.
  • A set of deployment commands for loose files.
  • A set of deployment commands for referenced assemblies.

If a Visual Studio project were a class definition, we’d say it violated the Single Responsibility Principle.  It’s trying to be too many things: both a definition for an assembly as well as a set of deployment commands.  It’s this last goal that leads to all the confusion over references and deployment.

Let’s examine the reason for this.

A deployment definition is something that can span not only multiple assemblies, but also additional loose files.  In order to debug my application, I need assemblies A, B, and C, as well as some loose files, to be copied to the output directory.  Because there is no room for the deployment definition in the hierarchy visualized by Solution Explorer, however, I must somehow encode that information within the project definitions themselves.

If assembly A references B, then Visual Studio infers that the output of B needs to be copied to A’s output directory when A is built.  Since B references C, we can infer that the output of C needs to be copied to B’s output directory when B is built.  Indirectly, then, C’s output will get dumped in A’s output directory, along with B’s output.

What you end up with is a pipeline of files that shuffles things along from C to B to A.  Hopefully, if all the reference properties are set correctly, this works as intended and the result is good.  But the logic behind all of this is an implicit black box.  There’s no transparency, so when things get complicated and something goes wrong, it can become impossible to figure it out in a reasonable amount of time (try reading through verbose build output sometime).

At one point, just before writing the article on references mentioned above, I was spending 10 hours or more a week just fighting with reference dependencies.  It was a huge mess, and a very expensive way to accomplish absolutely nothing in terms of providing value to customers.

Deployments & Assemblies

Considering our new perspective on the importance of representing deployments as first-class organizational items in solutions, let’s take a look at what that might look like in an IDE.  Focus on the top-left of the screenshot below.

[Screenshot of the proposed IDE, with deployment definitions shown in the top-left solution hierarchy]

The first level of darker text (“Silverlight Client” and “Cloud Services”) is equivalent to “solution folders” in Visual Studio.  These are labels that can be nested like folders for organizational purposes.  Within each of these areas is a collection of Deployment definitions.  The expanded deployment is for the Shell of our Silverlight application.  The only child of this deployment is a location.

In a desktop application, you might have multiple deployment locations, such as $AppDir$, $AppDir$\Data, or $UserDir$\AppName, each with child nodes representing content to be deployed to those locations.  In Silverlight, however, it doesn’t make sense to deploy to a specific folder since that’s abstracted away from you.  So for this example, the destination is Shell.XAP.

You’ll notice that multiple assemblies are listed.  If this were a web application, you might have a number of loose files as well, such as default.aspx or web.config.  If such files were listed under that deployment, you could double-click one to open and edit in the editor on the right-hand side of the screen.

The nice thing about this setup is the complete transparency: if a file is listed in a deployment path, you know it will be copied to the output directory before debugging begins.  If it’s not listed, it won’t get deployed.  It’s that simple.

The next question you might have is: doesn’t this mean that I have a lot of extra work to manually add each of these assembly files?  Especially when it comes to including the necessary references, nobody wants the additional burden of having to manually drag every needed reference into a deployment definition.

This is pretty easy to deal with.  When you add a reference to an assembly, and that referenced assembly isn’t in the .NET Framework (those are accessed via the GAC and therefore don’t need to be included), the IDE can add that assembly to the deployment definition for you.  Additionally, it would be helpful if all referenced assemblies lit up (with a secondary highlight color) when a referencing assembly was selected in the list.  That way, you’d be able to quickly figure out why each assembly was included in that deployment.  And if you select an assembly that requires a missing assembly, the name of any missing assemblies should appear in a general status area.

What we end up with is a more explicit and transparent way of dealing with deployment definitions separately from assembly definitions, a clean separation of concepts, and direct control over deployment behavior.  Because deployment intent is specified explicitly, this would be a great starting point for installer technologies to plug into the IDE.

In Visual Studio, a project maps many inputs to many outputs, and confuses deployment and assembly definitions.  A Visual Studio “project” is essentially an “input” concept.  In the approach I’ve outlined here, all definitions are “output” concepts; in other words, items in the proposed solution hierarchy are defined in terms of intended results.  It’s always a good idea to “begin with the end in mind” this way.

Multiple Solution Views

In the screenshot above, you’ll notice there’s a dropdown list called Solution View.  The current view is Deployment; the other option is Assembly.  The reason I’ve included two views is because the same assembly may appear in multiple deployments.  If what you want is a list of unique assemblies, that alternative view should be available.

A New Template System

The other redesign required is around the idea of Visual Studio templates.  Instead of solution, project, and project item templates in Visual Studio, you would have four template types: solution, deployment, assembly, and file.  Consider these examples:

Deployment Template: ASP.NET Web Application

  • $AppDir$
    • Assembly: MyWebApp.dll
      • App.xaml.cs
      • App.xaml    (embedded resource)
      • Main.xaml.cs
      • Main.xaml   (embedded resource)
    • File: Default.aspx
    • File: Web.config
    • Folder: App_Data
      • File: SampleData.dat

Solution Template: Silverlight Solution

  • Deployment: Silverlight Client
    • MySLApp.XAP
      • Assembly: MyClient.dll
        • App.xaml.cs
        • App.xaml    (embedded resource)
        • Main.xaml.cs
        • Main.xaml   (embedded resource)
  • Deployment: ASP.NET Web Application
    • $AppDir$
      • Assembly: MyWebApp.dll
        • YouGetTheIdea.cs
      • Folder: ClientBin
        • MySLApp.XAP (auto-copied from Deployment above)
      • File: Default.aspx
      • File: Web.config

Summary

In this article, we explored several features in modern IDEs (Visual Studio specifically), and some of the ways in which imaginative rethinking could bring substantial improvements to the developer experience.  I have to wonder how quickly a large ship like Visual Studio (with 1.5 million lines of mostly C++ code) could turn and adapt to new ideas like this, or whether it makes sense to start fresh without all the burden of legacy.

Though I have many more ideas to share, especially regarding the build system, multiple-language name resolution and refactoring, and IDE REPL tools, I will save all of that for future articles.

Posted in Cloud Computing, Development Environment, Silverlight, User Interface Design, Visual Studio, Windows Azure | Leave a Comment »

Better Tool Support for .NET

Posted by Dan Vanderboom on September 7, 2009

Productivity Enhancing Tools

Visual Studio has come a long way since its debut in 2002.  With the imminent release of 2010, we’ll see a desperately-needed overhauling of the archaic COM extensibility mechanisms (to support the Managed Package Framework, as well as MEF, the Managed Extensibility Framework) and a redesign of the user interface in WPF that I’ve been pushing for and predicted as inevitable quite some time ago.

For many alpha geeks, the Visual Studio environment has been extended with excellent third-party, productivity-enhancing tools such as CodeRush and Resharper.  I personally feel that the Visual Studio IDE team has been slacking in this area, providing only very weak support for refactorings, code navigation, and better Intellisense.  While I understand their desire to avoid stepping on partners’ toes, this is one area I think makes sense for them to be deeply invested in.  In fact, I think a new charter for a Developer Productivity Team is warranted (or an expansion of their team if it already exists).

It’s unfortunately a minority of .NET developers who know about and use these third-party tools, and the .NET community as a whole would without a doubt be significantly more productive if these tools were installed in the IDE from day one.  It would also help to overcome resistance from development departments in larger organizations that are wary of third-party plug-ins, due perhaps to the unstable nature of many of them.  Microsoft should consider purchasing one or both of them, or paying a licensing fee to include them in every copy of Visual Studio.  Doing so, in my opinion, would make them heroes in the eyes of the overwhelming majority of .NET developers around the world.

It’s not that I mind paying a few hundred dollars for these tools.  Far from it!  The tools pay for themselves very quickly in time saved.  The point is to make them ubiquitous: to make high-productivity coding a standard of .NET development instead of a nice add-on that is only sometimes accepted.

Consider just from the perspective of watching speakers at conferences coding up samples.  How many of them don’t use such a tool in their demonstration simply because they don’t want to confuse their audience with an unfamiliar development interface?  How many more demonstrations could they be completing in the limited time they have available if they felt more comfortable using these tools in front of the masses?  You know you pay good money to attend these conferences.  Wouldn’t you like to cover significantly more ground while you’re there?  This is only likely to happen when the tool’s delivery vehicle is Visual Studio itself.  Damon Payne makes a similar case for the inclusion of the Managed Extensibility Framework in .NET Framework 4.0: build it into the core and people will accept it.

The Gorillas in the Room

CodeRush and Resharper have both received recent mention in the Hanselminutes podcast (episode 196 with Mark Miller) and in the Deep Fried Bytes podcast (episode 35 with Corey Haines).  If you haven’t heard of CodeRush, I recommend watching these videos on their use.

For secondary information on CodeRush, DXCore, and the principles with which they were designed, I recommend these episodes of DotNetRocks:

I don’t mean to be so biased toward CodeRush, but it’s the tool I’m personally familiar with, it has a broader range of functionality, and it seems to get the majority of the press coverage.  However, those who do talk about Resharper speak highly of it, so I recommend you check out both to see which one works best for you.  But above all: go check them out!

Refactor – Rename

Refactoring code is something we should all be doing constantly to avoid the accumulation of technical debt as software projects and the requirements on which they are based evolve.  There are many refactorings in Visual Studio for C#, and many more in third-party tools for several languages, but I’m going to focus here on what I consider to be the most important refactoring of them all: Rename.

Why is Rename so important?  Because it’s so commonly used, and it has such far-reaching effects.  It is frequently the case that we give poor names to identifiers before we clearly understand their role in the “finished” system, and even more frequent that an item’s role changes as the software evolves.  Failure to rename items to accurately reflect their current purpose is a recipe for code rot and greater code maintenance costs, developer confusion, and therefore buggy logic (with its associated support costs).

When I rename an identifier with a refactoring tool, all of the references to that identifier are also updated.  There might be hundreds of references.  In the days before refactoring tools, one would accomplish this with Find-and-Replace, but this is dangerous.  Even with options like “match case” and “match whole word”, it’s easy to rename the wrong identifiers, rename pieces of string literals, and so on; and if you forget to set these options, it’s worse.  You can go through each change individually, but that can take a very long time with hundreds of potential updates and is a far cry from a truly intelligent update.

Ultimately, the intelligence of the Rename refactoring provides safety and confidence for making far-reaching changes, encouraging more aggressive refactoring practices on a more regular basis.

Abolishing Magic Strings

I am intensely passionate about any tool or coding practice that encourages refactoring and better code hygiene.  One example of such a coding practice is the use of lambda expressions to select identifiers instead of using evil “magical strings”.  From my article on dynamically sorting Linq queries, the use of “magic strings” would force me to write something like this to dynamically sort a Linq query:

Customers = Customers.Order("LastName").Order("FirstName", SortDirection.Descending);

The problem here is that “LastName” and “FirstName” are oblivious to the Rename refactoring.  Using the refactoring tool might give me a false sense of security in thinking that all of my references to those two fields have been renamed, leading me to The Pit of Despair.  Instead, I can define a function and use it like the following:

public static IOrderedEnumerable<T> Order<T>(this IEnumerable<T> Source, 
    Expression<Func<T, object>> Selector, SortDirection SortDirection)
{
    // Extract the member name from the lambda (c => c.LastName yields "LastName")
    // and delegate to the string-based overload.
    return Order(Source, (Selector.Body as MemberExpression).Member.Name, SortDirection);
}

Customers = Customers.Order(c => c.LastName).Order(c => c.FirstName, SortDirection.Descending);

This requires a little understanding of the structure of expressions to implement, but the benefit is huge: I can now use the refactoring tool with much greater confidence that I’m not introducing subtle reference bugs into my code.  For such a simple example the benefit may seem dubious, but multiply this by hundreds or thousands of magic string references, and the effort involved in refactoring without it quickly becomes overwhelming.
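For reference, the string-based Order overload being delegated to above isn’t shown in this post.  A minimal reflection-based sketch of it might look like the following (the SortDirection enum and the original implementation details are assumptions here):

public static IOrderedEnumerable<T> Order<T>(this IEnumerable<T> Source,
    string PropertyName, SortDirection SortDirection)
{
    // Resolve the property by name once, then sort on its value.
    var Property = typeof(T).GetProperty(PropertyName);
    Func<T, object> KeySelector = item => Property.GetValue(item, null);

    return SortDirection == SortDirection.Descending
        ? Source.OrderByDescending(KeySelector)
        : Source.OrderBy(KeySelector);
}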

Coding in this style is most valuable when it’s a solution-wide convention.  So long as you have code that strays from this design philosophy, you’ll find yourself grumbling and reaching for the inefficient and inelegant Find-and-Replace tool.  The only time it really becomes an issue, then, is when accessing libraries that you have no control over, such as Linq-to-Entities and the Entity Framework, which make extensive use of magic strings.  In the case of EF, this is mitigated somewhat by your ability to regenerate the code it uses.  In other libraries, it may be possible to write extension methods like the Order method shown above.

It’s my earnest hope that library and framework authors such as the .NET Framework team will seriously consider alternatives to, and an abolition of, “magic strings” and other coding practices that frustrate otherwise-powerful refactoring tools.

Refactoring Across Languages

A tool is only as valuable as it is practical.  The Rename refactoring is more valuable when coding practices don’t frustrate it, as explained above.  Another barrier to the practical use of this tool is the prevalence of multiple languages within and across projects in a Visual Studio solution.  The definition of a project as a single-language container is dubious when you consider that a C# or VB.NET project may also contain HTML, ASP.NET, XAML, or configuration XML markup.  These are all languages with their own parsers and other language services.

So what happens when identifiers are shared across languages and a Rename refactoring is executed?  It depends on the languages involved, unfortunately.

When renaming a C# class in Visual Studio, the XAML’s x:Class value is also updated.  What we’re seeing here is cross-language refactoring, but unfortunately it only works in one direction.  There is no refactor command to update the x:Class value from the XAML editor, so manually changing it causes my C# class to become sadly out of sync.  Furthermore, this seems to be XAML-specific.  If I refactor the name of an .aspx.cs class, the Inherits attribute of the Page directive in the .aspx file doesn’t update.

How often do you think someone would want to rename the code-behind class for an ASP.NET page without also wanting to change the Inherits attribute?  Probably not very often (okay, probably NEVER).  This is a matter of having sensible defaults.  When you change an identifier name in this way, the development environment does not respond sensibly by default, forcing the developer to do extra work and waste time.  This is a failure in UI design, for the same reason that Intellisense has been such a resounding success: Intellisense anticipates our needs and works with us, while the failure to keep identifiers in sync by default is diametrically opposed to that intelligence.  This represents a fragmented and inconsistent design for an IDE to possess, thus my hope that it will be addressed in the near future.

The problem should be recognized as systemic, however, and addressed in a generalized way.  Making individual improvements in the relationships between pairs of languages has been almost adequate, but I think it would behoove us to take a step back and take a look at the future family of languages supported by the IDE, and the circumstances that will quickly be upon us with Microsoft’s Oslo platform, which enables developers to more easily build tool-supported languages (especially DSLs, Domain Specific Languages). 

Even without Oslo, we have seen a proliferation of languages: IronRuby, IronPython, F#, and the list goes on.  A refactoring tool that is hard-coded for specific languages will be unable to keep pace with the growing family of .NET and markup languages, and certainly unable to deal with the demands of every DSL that emerges in the next few years.  If instead we had a way to identify our code identifiers to the refactoring tool, and indicate how they should be bound to identifiers in other languages in other files, or even other projects or solutions, the tools would be able to make some intelligent decisions without understanding each language ahead of time.  Each language’s language service could supply this information.  For more information on Microsoft Oslo and its relationship to a world of many languages, see my article on Why Oslo Is Important.

Without this cross-language identifier binding feature, we’ll remain in refactoring hell.  I offered a feature suggestion to the Oslo team regarding this multi-master synchronization of a model across languages that was rejected, much to my dismay.  I’m not sure if the Oslo team is the right group to address this, or if it’s more appropriate for the Visual Studio IDE team, so I’m not willing to give up on this yet.

A Default of Refactor-Rename

The next idea I’d like to propose here is that the Rename refactoring is, in fact, a sensible default behavior.  In other words, when I edit an identifier in my code, I more often than not want all of the references to that identifier to change as well.  This is based on my experience of invoking the refactoring explicitly countless times, compared to the relatively few times I’ve wanted to “break away” an identifier from all the code that references it.

Think about it: if you have 150 references to variable Foo, and you change Foo to FooBar, you’re going to have 150 broken references.  Are you going to create a new Foo variable to replace them?  That workflow doesn’t make any sense.  Why not just start editing the identifier and have the references update themselves implicitly?  If you want to be aware of the change, it would be trivial for the IDE to indicate the number of references that were updated behind the scenes.  Then, if for some reason you really did want to break the references, you could explicitly launch a refactoring tool to “break references”, allowing you to edit that identifier definition separately.

The challenge that comes to mind with this default behavior concerns code that spans across solutions that aren’t loaded into the IDE at the same time.  In principle, this could be dealt with by logging the refactoring somewhere accessible to all solutions involved, in a location they can all access and which gets checked into source control.  The next time the other solutions are loaded, the log is loaded and the identifiers are renamed as specified.

Language Property Paths

If you’ve done much development with Silverlight or WPF, you’ve probably run into the PropertyPath class when using data binding or animation.  PropertyPath objects represent a traversal path to a property such as “Company.CompanyName.Text”.  The travesty is that they’re always “magic strings”.

My argument is that the property path is such an important construct that it deserves to be a core part of language syntax instead of just a type in some UI-platform-specific library.  I built a data binding library for Windows Forms with its own property path syntax and type, and there are countless non-UI scenarios in which this construct would also be incredibly useful.

The advantage of having a language like C# understand property path syntax is that you avoid a whole class of problems that developers have used “magic strings” to solve.  The compiler can then make intelligent decisions about the correctness of paths, and errors can be identified very early in the cycle.

Imagine being able to pass property paths to methods or return them from functions as first-class citizens.  Instead of writing this:

Binding NameTextBinding = new Binding("Name") { Source = customer1 };

… we could write something like this, have access to the Rename refactoring, and even get Intellisense support when hitting the dot (.) operator:

Binding NameTextBinding = new Binding(@Customer.Name) { Source = customer1 };

In this code example, I use the fictitious @ operator to inform the compiler that I’m specifying a property path and not trying to reference a static property called Name on the Customer class.

With property paths in the language, we could solve our dynamic Linq sort problem cleanly, without using lambda expressions to hack around the problem:

Customers = Customers.Order(@Customer.LastName).Order(@Customer.FirstName, SortDirection.Descending);

That looks and feels right to me.  How about you?

Summary

There are many factors of developer productivity, and I’ve established refactoring as one of them.  In this article I discussed tooling and coding practices that support or frustrate refactoring.  We took a deep look into the most important refactoring we have at our disposal, Rename, and examined how to get the greatest value out of it in terms of personal habits, as well as long-term tooling vision and language innovation.  I proposed including property paths in language syntax due to its general usefulness and its ability to solve a whole class of problems that have traditionally been solved using problematic “magic strings”.

It gives me hope to see the growing popularity of Fluent Interfaces and the use of lambda expressions to provide coding conventions that can be verified by the compiler, and a growing community of bloggers (such as here and here) writing about the abolition of “magic strings” in their code.  We can only hope that Microsoft program managers, architects, and developers on the Visual Studio and .NET Framework teams are listening.

Posted in Data Binding, Data Structures, Design Patterns, Development Environment, Dynamic Programming, Functional Programming, Language Innovation, LINQ, Oslo, Silverlight, Software Architecture, User Interface Design, Visual Studio, Visual Studio Extensibility, Windows Forms | Leave a Comment »

Multicasting with Silverlight 3 Local Messaging

Posted by Dan Vanderboom on April 29, 2009

[This article and the sample solution included were written with Silverlight 3 Beta.]

The very first thing I did to experiment with Silverlight 3’s new local messaging feature was to create an application with a listener name of “Everyone”, pop up multiple instances of the application, and try sending a message to all of the instances.  I got a nasty HRESULT E_FAIL exception message upon firing up the second instance.  I closed the application and restarted, only to find I got the same error message on the first instance as well (until I rebooted).

The problem was that a listener must have a unique name and I was violating that rule.  There are no groups, and multicasting to multiple receivers in the same group isn’t supported out of the box.  Because I didn’t dispose of the object, that name was never released.  This seems like a design flaw; when a Silverlight application instance ends, the Silverlight runtime should be able to detect that and release this name resource.
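To make the collision concrete, here’s a minimal sketch of the failing pattern (in the released Silverlight 3 API the name collision surfaces as a ListenFailedException; the beta surfaced it as the raw E_FAIL above):

// Both instances try to claim the same listener name; Listen() fails
// in the second instance because "Everyone" is already registered.
var receiver = new LocalMessageReceiver("Everyone");
receiver.MessageReceived += (s, e) => { /* handle message */ };
receiver.Listen();

// LocalMessageReceiver implements IDisposable; disposing it is what
// releases the name for other instances to claim.
receiver.Dispose();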

When I heard about this local message passing ability, my first thought was that it would create a neat opportunity, especially in out-of-browser applications, for multiple-window applications.  This would be great for those of us who use multiple monitors, as we could then slide panels around where we wanted them, taking full advantage of our workspace.

My sample application, which you can download here, consists of a TextBox, a Submit button that sends the content of that TextBox to all the other instances, and a TextBlock that displays important events.  The first time the application runs, it identifies itself as the master window.  All subsequent application runs identify themselves as child windows.  Here’s a screenshot of this application running out-of-browser:

[Screenshot of the sample application running out-of-browser]

To accomplish this, the master window needs a well-known name.  I chose MyApp/Master to identify both the application and the window name.  Each of the child windows requires a unique name, so I chose the format MyApp/{guid}.  Once an instance realizes there’s already a master window, it gives itself a child window GUID name and then registers that name with the master window.  When a child instance exits, it unregisters itself with the master instance.  And finally, when the master window exits, it informs all of the child windows (so they can shut down, most likely).

I defined several static members in the App class itself, so they would be visible across pages, and also because I wanted to hook into Application_Exit and needed access from there.

public const string MasterWindowName = "MyApp/Master";
public static string WindowName;
public static Guid WindowID;
public static List<Guid> ChildWindows;
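The master-versus-child detection isn’t shown here; one plausible reconstruction (my own sketch, relying on ListenFailedException from the released API) is to claim the master name and fall back to a child GUID when it’s taken:

try
{
    // Claim the well-known master name; in real code, keep the receiver
    // in a field so it isn't garbage collected.
    var receiver = new LocalMessageReceiver(App.MasterWindowName);
    receiver.MessageReceived += MessageReceiver_MessageReceived;
    receiver.Listen();
    App.WindowName = App.MasterWindowName;
}
catch (ListenFailedException)
{
    // The master name is taken, so become a child window.
    App.WindowID = Guid.NewGuid();
    App.WindowName = "MyApp/" + App.WindowID;

    var childReceiver = new LocalMessageReceiver(App.WindowName);
    childReceiver.MessageReceived += MessageReceiver_MessageReceived;
    childReceiver.Listen();

    // Register this window with the master.
    new LocalMessageSender(App.MasterWindowName)
        .SendAsync("NewWindow:" + App.WindowID);
}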

Hooking into the LocalMessageReceiver’s MessageReceived event, I looked for specific keywords in a protocol I quickly cooked up, and in most cases extracted a parameter by parsing the message string.  These commands are NewWindow, CloseWindow, MasterWindowClosed, and UpdateText.

void MessageReceiver_MessageReceived(object sender, MessageReceivedEventArgs e)
{
    if (e.Message.StartsWith("NewWindow:"))
    {
        if (App.ChildWindows == null)
            App.ChildWindows = new List<Guid>();

        Guid NewWindowID = new Guid(e.Message.Substring("NewWindow:".Length));

        App.ChildWindows.Add(NewWindowID);

        Log("New window detected, id = " + NewWindowID.ToString());

        return;
    }

    if (e.Message.StartsWith("CloseWindow:"))
    {
        var id = new Guid(e.Message.Substring("CloseWindow:".Length));

        if (App.WindowName == App.MasterWindowName)
            App.ChildWindows.Remove(id);

        Log("Closing window, id = " + id.ToString());
        
        return;
    }

    if (e.Message == "MasterWindowClosed" && App.WindowName != App.MasterWindowName)
    {
        Log("Master Window Closed");
        return;
    }

    if (e.Message.StartsWith("UpdateText"))
    {
        var text = e.Message.Substring("UpdateText:".Length);

        txtName.Text = text;

        // if this is the master window, then distribute to all child windows
        if (App.WindowName == "MyApp/Master")
            UpdateTextMulticast(text);

        return;
    }
}

As you can see, most of this is simple housekeeping code to track which application instances or windows are currently connected to the master window.  The UpdateText command calls the UpdateTextMulticast method, which looks like this:

private void UpdateTextMulticast(string Text)
{
    foreach (var id in App.ChildWindows)
    {
        var MessageSender = new LocalMessageSender("MyApp/" + id.ToString());
        MessageSender.SendAsync("UpdateText:" + Text);
    }
}

If the window is a child window, the Submit button sends a message to the master window; the master window, when clicking on Submit, calls the UpdateTextMulticast method.

private void btnSubmit_Click(object sender, RoutedEventArgs e)
{
    if (App.WindowName == App.MasterWindowName)
    {
        UpdateTextMulticast(txtName.Text);
    }
    else
    {
        var MessageSender = new LocalMessageSender(App.MasterWindowName);
        MessageSender.SendAsync("UpdateText:" + txtName.Text);
    }
}

Finally, this is how a window alerts other windows that it’s closing (in App.xaml.cs):

private void Application_Exit(object sender, EventArgs e)
{
    if (WindowName != MasterWindowName)
    {
        var MessageSender = new LocalMessageSender(MasterWindowName);
        MessageSender.SendAsync("CloseWindow:" + WindowID);
    }
    else if (ChildWindows != null)
    {
        foreach (var id in ChildWindows)
        {
            var MessageSender = new LocalMessageSender("MyApp/" + id.ToString());
            MessageSender.SendAsync("MasterWindowClosed");
        }
    }
}

That’s about all there is to it.  Admittedly, having the same Silverlight application act as both master and child window might not be the best arrangement, and it certainly adds a little to the complexity of the code, but the sky is the limit as far as how the new local messaging feature of Silverlight 3 could be used.

What I’d really like to see is some kind of WCF customization that could define WCF services and host them specifically for consumption across this local messaging channel.  Doing so would eliminate the need for defining and parsing a protocol as I’ve done in this example, as WCF could handle the serialization and service method invocation.

Posted in Design Patterns, Distributed Architecture, Silverlight, User Interface Design | 3 Comments »

Advanced Customization of a Silverlight ListBox

Posted by Dan Vanderboom on April 13, 2009

[This article and its solution are based on Silverlight 3 Beta and Blend 3 Beta.]

The more I work with Silverlight, the more impressed I am.  Though I do keep running into frustrating situations, I haven’t encountered nearly as many dead ends as I did writing Windows Forms applications.  But where I used to run into dead ends, I now find myself lost in a labyrinth of deeply-composed control hierarchies, dichotomized content controls, and numerous interrelated control and data templates.

But ultimately I can find a way to do what I set out to do.  That’s huge.  If the learning curve is treacherously steep and my solution to a problem is tricky and twisted, I can at least reassure myself that the increased understanding will make for more fluent UI design in the future.  The difficulty of solving these problems comes from the complexity of the UI design itself, the immaturity of Silverlight and its APIs, and my own inexperience working with it.

You can download the finished solution here.

The Goal

When I set out to customize a list control, I didn’t start with the tools of Silverlight.  I sketched out a design that assumed anything would be possible, and decided to figure out later how to implement exactly that (in behavior and layout, not final visuals).  The mock-up below is similar to what I came up with, simplified to include only those elements I’m going to illustrate in this article.

image

The first thing you’ll notice is that the data template renders differently based on the data for that item.  I found a Code Project article by Anil Gupta on doing the same kind of thing.  This turns out to be the easy part.  (The space around and between items isn’t intended to be rendered as such, and was added only to emphasize the separate identities of the item templates.)

I also wanted each item in the list to be expandable, to display more information and to host interactive controls like sliders (shown in the example) for manipulating the underlying data.  Noticing that the expander button and its behavior, as well as the border, are elements common to each of the item templates, I decided that what I was looking at involved a ContentControl.  This new control would contain those common elements and a ContentPresenter, which would be filled by the specific item type template (one for airplane, one for truck, one for boat).  That way, I could build a whole bunch of new templates for new item types and wouldn’t have to worry about placing the button correctly or wiring up its behavior each time.

Though the illustration might suggest that the only difference among the templates is background color, I wanted to be able to completely differentiate them.  The only thing that would be standard would be a collapsed height of 32 to give a nice consistent vertical layout (and for this example, a standard expanded height of 64).  Inside, the controls and their layout could follow any design.

One of my presumptions, unstated at first, was that the width of each item would fill all of the available space: the width of the ListBox minus borders, padding, and the vertical ScrollBar.  This would prove to be the most difficult challenge; I found some discussion of it in the forums, but ultimately I had to find my own way to solve it.

Finally, I wanted to do as much as possible in Blend.  XAML is fine for setting complex bindings and wiring up other things, but for drawing of graphics (editing templates), I wanted to leverage Blend as much as possible.

The Solution

First we need a data model, to know the shape of items that we’re binding to in our ListBox.  I used a simple example of a Vehicle base class and three derived types.  Elsewhere, I instantiate several of each type of vehicle and add them to the ListBox’s ItemsSource collection.

public abstract class Vehicle
{
    public string Manufacturer { get; set; }
    public string Model { get; set; }
    public double Price { get; set; }
}

public class Truck : Vehicle
{
    public bool HasFourWheelDrive { get; set; }
}

public class Boat : Vehicle
{
    public double HullWidth { get; set; }
}

public class Airplane : Vehicle
{
    public int MaxAltitude { get; set; }
}

I then created three UserControls, one for each vehicle type, and called them AirplaneTemplate, BoatTemplate, and TruckTemplate.  I gave each of them a DesignHeight of 64 to represent their expanded state, let their Width be Auto, and set HorizontalAlignment to Stretch.  I set the Height of each of the two Grid rows to 32, to ensure they wouldn’t stretch as the ContentControl hosting this content collapsed.

image

Selecting a Template Based on Item Data

There’s no way that I know of to write an expression in XAML that will bind to a different data template based on item data.  I also know of no way to write code behind a data template.  To get around these limitations, I created a data template called VehicleListDataTemplate that contains a single VehicleItemTemplate custom control which I could write code behind.  This control is a ContentControl, so it’s capable of drawing its own content as well drawing content passed into it.  The content that it supplies itself consists of the common UI elements: the border and the button to toggle the expansion or collapse of the item.

This control is mocked up like so, showing both collapsed and expanded states:

image

The control’s ContentPresenter, set with its Content property, would occupy the same space, although the button would be placed on top to ensure it was clickable.

This was my first custom Silverlight control (other than UserControls), so several things were new to me.  For one, defining a default control template in generic.xaml and writing a separate class file for behavior.  This is what the default template looks like:

<Style TargetType="local:VehicleItemTemplate">
    <Setter Property="Template">
        <Setter.Value>
            <ControlTemplate TargetType="local:VehicleItemTemplate">

                <Grid x:Name="Core" Background="{TemplateBinding Background}"
                      d:DesignHeight="32" Height="{TemplateBinding Height}"
                      d:DesignWidth="312" Width="Auto"
                      VerticalAlignment="Stretch" HorizontalAlignment="Stretch">

                    <Border VerticalAlignment="Stretch" CornerRadius="5,5,5,5"
                            BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="2" />

                    <ContentPresenter
                        VerticalAlignment="{TemplateBinding VerticalContentAlignment}" 
                        HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" 
                        Content="{TemplateBinding Content}"
                        ContentTemplate="{TemplateBinding ContentTemplate}" Margin="0"/>

                    <Button x:Name="Expander" VerticalAlignment="Top" HorizontalAlignment="Right"
                            Margin="0,4,4,0" Width="28" Height="24" 
                            BorderBrush="{TemplateBinding BorderBrush}"/>
                </Grid>
                
            </ControlTemplate>
        </Setter.Value>
    </Setter>
</Style>

And here it’s referenced by the data template:

<DataTemplate x:Key="VehicleListDataTemplate">
    <local:VehicleItemTemplate VerticalAlignment="Top" HorizontalAlignment="Left"
        Background="#0014145D" Margin="0,0,0,0" BorderBrush="#FF5063A5" 
        d:DesignHeight="32" Height="32" d:DesignWidth="430"
        VerticalContentAlignment="Stretch" HorizontalContentAlignment="Stretch" />
</DataTemplate>

So far that’s not too bad.  We have a data template, which refers to VehicleItemTemplate (a ContentControl) that gives our common appearance and hosts a specific vehicle UserControl depending on the item data in question.  I count three layers so far, but unfortunately that isn’t enough.

Let’s take a look at how we set the content:

private void VehicleItemTemplate_Loaded(object sender, RoutedEventArgs e)
{
    var vehicle = DataContext as Vehicle;

    // vehicle will be null when this is executed in the designer
    if (vehicle == null)
        return;

    if (vehicle is Airplane)
        Content = new AirplaneTemplate();
    else if (vehicle is Truck)
        Content = new TruckTemplate();
    else if (vehicle is Boat)
        Content = new BoatTemplate();
}

Pretty simple: the DataContext is our item data object, we can inspect the type to figure out which one it is, and create a new vehicle UserControl of the appropriate matching type to set the Content.

To make it expand and collapse, we first need to get a reference to the button in our template, which is based on the parts I defined for the control.

[TemplatePart(Name = "Core", Type = typeof(FrameworkElement))]
[TemplatePart(Name = "Expander", Type = typeof(ButtonBase))]
public class VehicleItemTemplate : ContentControl

In the template, the Expander part must be some control that inherits from ButtonBase, and which therefore implements a Click event.  When the template is applied to the control at runtime, OnApplyTemplate is run, so we hook into that event there:

public override void OnApplyTemplate()
{
    base.OnApplyTemplate();
    ToggleButton = GetTemplateChild("Expander") as ButtonBase;
    ToggleButton.Click += new RoutedEventHandler(btnToggleSize_Click);
}

private void btnToggleSize_Click(object sender, RoutedEventArgs e)
{
    Duration duration = new Duration(TimeSpan.FromSeconds(0.2));

    Storyboard sb = new Storyboard();
    sb.Duration = duration;

    DoubleAnimation ani1 = new DoubleAnimation();
    ani1.Duration = duration;
    ani1.To = Height == 32 ? 64 : 32;
    Storyboard.SetTarget(ani1, this);
    Storyboard.SetTargetProperty(ani1, new PropertyPath("FrameworkElement.Height"));

    DoubleAnimation ani2 = new DoubleAnimation();
    ani2.Duration = duration;
    ani2.To = Height == 32 ? 64 : 32;
    Storyboard.SetTarget(ani2, Content as Control);
    Storyboard.SetTargetProperty(ani2, new PropertyPath("FrameworkElement.Height"));

    sb.Children.Add(ani1);
    sb.Children.Add(ani2);

    sb.Begin();
}

Now we have an animation that smoothly expands or collapses our item and its content, and because the animations set only the To property (leaving From unset), we avoid jumping from one state to another.  Instead, if we click to expand and then click to collapse again, it will animate from its current position to the desired position.

Setting the Correct Width of ListBox Items

The biggest problem I had was in setting the correct width.  With all Widths set to Auto, each item in the list takes up only as much space as it needs.  Items can be narrower or wider than the ListBox itself, and each one could be rendered at a different width (depending on its template).

The first thing I tried was to set the VehicleItemTemplate’s Width to the ActualWidth of the ListBox.  I didn’t have enough items in my list to see the vertical ScrollBar appear, but even without it, the borders of my item templates were being clipped by the right side of the ListBox, and I could see a gap of several pixels to the left as well as above and below each item.

image

With the default rendering of Silverlight being that nothing is drawn (border widths are zero, brushes are null, etc.), I find it odd that the ListBox assumes I want padding where I haven’t specified any.  After all, if I wanted this, couldn’t I add a Margin to my data template?

I removed the ListBox border, and finally added a ListBoxItem manually to the ListBox in Blend.  Right-clicking on that ListBoxItem, I edited a copy of the control template, shown in the screenshot below:

image

This turns out to be different from the data template defined earlier.  This ListBoxItem template is itself a ContentControl, and its content is my VehicleItemTemplate… (which is another ContentControl that hosts the specific vehicle UserControls…).  See how confusing this can get?  I feel like Alice in Wonderland sometimes, seeing how far the Silverlight hole really goes.  I also wonder why there doesn’t appear to be any way to edit this template without manually creating a ListBoxItem, when it clearly matters even when you’re defining a data template.

It’s also in this ListBoxItem template that you can render visual decorators to indicate various visual states: Normal, MouseOver, Pressed, and so on.  You might expect to handle that in your data template, but that doesn’t seem to be the case.

Anyway, within that ListBoxItem template was this ContentPresenter.

<ContentPresenter x:Name="contentPresenter" 
                  HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" 
                  Margin="{TemplateBinding Padding}" Content="{TemplateBinding Content}" 
                  ContentTemplate="{TemplateBinding ContentTemplate}"/>

The third line shows a Margin being bound to the Padding property.  I removed this Margin altogether, and the gratuitous extra space around my items disappeared, making me happy.

Once you have this custom ListBoxItem template, you need to do two things:

  1. Delete the ListBoxItem you manually added in Blend.  Otherwise you’ll get an error when trying to set the ListBox’s ItemsSource property.
  2. Set the ItemContainerStyle property of the ListBox to point to this new template.  Note that this is different from the ItemTemplate property which sets the data template.

The ListBox XAML should now look something like this:

<ListBox x:Name="VehicleList" HorizontalAlignment="Stretch"
         Margin="20,20,20,20" Width="Auto"
         BorderThickness="2,2,2,2" BorderBrush="#FF99A712"
         ItemTemplate="{StaticResource VehicleListDataTemplate}"
         ItemContainerStyle="{StaticResource ListBoxItemStyle}">
    <ListBox.Background>
        <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
            <GradientStop Color="#FF03021E"/>
            <GradientStop Color="#FF191651" Offset="1"/>
        </LinearGradientBrush>
    </ListBox.Background>
</ListBox>

Now we’re at a point where the item container itself isn’t adding any extra space, so if we go without borders or a vertical ScrollBar, everything fits just right… until the vertical ScrollBar appears.  This is close, but not good enough.  How can we take care of the space taken up by the vertical ScrollBar?

While I was digging through the ListBox template, I noticed that the ScrollViewer control had a property called ViewportWidth, and with some debugging saw that it was not quite as wide as the total ListBox width.  If only I had a reference to the ScrollViewer from within my VehicleItemTemplate ContentControl!  I looked for a while but found nothing.  You can call GetTemplateChild from within a control’s class but not from the outside because it’s a protected method.

I decided to cheat.  I created a new ListBox class that exposed the ScrollViewer as a property.  I felt it was safe to do so because ScrollViewer is a TemplatePart defined in the ListBox’s parts and states contract.

public class MyListBox : ListBox
{
    public MyListBox() : base() { }

    public ScrollViewer ScrollViewer
    {
        get { return GetTemplateChild("ScrollViewer") as ScrollViewer; }
    }
}

I also needed to be able to reference the MyListBox object from each VehicleItemTemplate, so I created a DependencyProperty to store that:

// store a reference to the MyListBox that contains this item
public static readonly DependencyProperty ParentListBoxProperty = DependencyProperty.Register(
    "ParentListBox", typeof(MyListBox), typeof(VehicleItemTemplate), new PropertyMetadata(null));

public MyListBox ParentListBox
{
    get { return GetValue(ParentListBoxProperty) as MyListBox; }
    set { SetValue(ParentListBoxProperty, value); }
}

Next, I set this new ParentListBox property from within the data template I defined earlier (using Element binding, which is new to Silverlight 3):

<DataTemplate x:Key="VehicleListDataTemplate">
    <local:VehicleItemTemplate VerticalAlignment="Top" HorizontalAlignment="Left"
        Background="#0014145D" Margin="0,0,0,0" BorderBrush="#FF5063A5" 
        d:DesignHeight="32" Height="32" d:DesignWidth="430"
        VerticalContentAlignment="Stretch" HorizontalContentAlignment="Stretch" 
        ParentListBox="{Binding ElementName=VehicleList, Mode=OneWay}"/>
</DataTemplate>

Finally, I replaced the ListBox I was using with MyListBox, and in the VehicleItemTemplate_Loaded method, I added the following data binding in code:

// set up data binding:
// ViewportWidth of the ListBox's ScrollViewer tells us how much space we have available
//(ListBox.ActualWidth - borders - scrollbar)
WidthBinding = new Binding();
WidthBinding.Source = ParentListBox.ScrollViewer;
WidthBinding.Path = new PropertyPath("ViewportWidth");
SetBinding(WidthProperty, WidthBinding);

When there are more items than will fit in the ListBox, the vertical ScrollBar appears, and the width of all the item templates shrinks to accommodate it.  When the ListBox itself widens or shrinks, it adjusts.  This seems to produce the perfect fit for items.  If you download the sample source code, you’ll notice I set the page to auto size, so when you resize the browser, the ListBox will grow and shrink along with it, and you can easily test it.  Also, if you have the ListBox almost full and you expand one of the items with the expander button, you’ll see it adjust then as well.

Here is the final product:

image 

After all of that, we can finally rest assured that we’ll never see a horizontal ScrollBar in our ListBox.

Conclusion

There are several ListBox templates we didn’t take full advantage of: the ItemsPanel for the layout of items, the various embedded templates for parts such as ScrollBars, and the different states of the ListBoxItem template.  However, customization of these templates has been covered fairly well by other articles.

Being somewhat new to Silverlight, I’m curious to see how others would have accomplished the same things.  Is there an easier way to do some of this?  Are there some Silverlight API calls I could have used to reference the ListBox’s ScrollViewer, for example?

I spent many hours working out these details.  I hope I can spare some of you the trouble I encountered.  Happy Silverlight developing!

Posted in Custom Controls, Data Binding, Design Patterns, Expression Blend, Silverlight, User Interface Design | 10 Comments »

Spb Mobile Shell: An Alternative Windows Mobile Shell

Posted by Dan Vanderboom on May 6, 2008

Spb Software House has been creating some great Windows Mobile software for a while now, but recently they released their Spb Mobile Shell, and I’m so impressed that I felt the need to tell everyone about it.  The only reason it doesn’t quite rival the iPhone interface is its lack of scope: it doesn’t (yet?) replace all of the built-in applications for managing tasks, calendar items, file exploration, process/task management, and so on.

image image image image

Why am I so enamored with their user interface?  Let me count the ways:

  1. Visual effects such as linear gradients, faded backgrounds for emphasizing foreground popups, fast animated page transitions, and a choice of skin-like color schemes.
  2. Large, touch friendly buttons that eliminate the need to pull out your stylus and poke at impossibly tiny areas of the screen.
  3. A large button bar along the bottom, and the elimination of the start menu along the top and the Windows Mobile menu along the bottom (with only two top-level options).
  4. Selection of phone contacts using photo buttons.
  5. Gesture recognition of finger movements to flip pages.
  6. The ability to customize menus and other settings.

For only $29.95 (or packaged together with three other useful applications for $49.95), Spb Mobile Shell is a steal.  I’ve experimented with a few other shell applications and had problems with glitchy, buggy behavior, but so far this little gem has performed marvelously in all regards.

I hope they eventually provide some way to integrate new applications into their shell.  It would be nice to take advantage of some of their visual layouts, message box pop-up effects, and so on, to provide an interface in my own software that’s consistent with their shell.  Better yet, the Windows Mobile team should throw in the towel with their existing shell and buy this one to use as a guideline to replace theirs.  As Mark Miller so rightly observed in the recent Dot Net Rocks episode “The Science of Great UI”, the Windows Mobile 5 user interface sucks.  Companies like Spb wouldn’t be providing alternatives if this weren’t the case.  From what I’ve seen of Windows Mobile 6, things aren’t looking much better there, either.

Posted in User Interface Design, Windows Mobile | Leave a Comment »

Compact Framework Controls (Part 3): Linear Gradients

Posted by Dan Vanderboom on May 5, 2008

[This article is part of a series that starts in this article and precedes this one here.]

Linear Gradients

Linear gradients are a nice, subtle effect that can turn a boring control into something more interesting and professional looking.  You can use a linear gradient for the entire background of your control, which I’ll demonstrate in this article, or you can paint one or more regions selectively to display a gradient.  A linear gradient is a gradual transition from one color to another, and while you can transition through multiple colors along an axis, going from blue to red to green to white to black if you wanted, I’m going to start simple and create a control that blends between only two colors.  You can also define the line of change along any angle: vertical (as shown in the example below), horizontal, or anything in between.  I’ll show how to draw just the vertical and horizontal linear gradients.

Linear Gradient Example

I’ll be using the control from my previous article, and adding to it, to demonstrate linear gradients.  We’re going to need some new properties to support gradients, so first we need to add a couple enumerations.

public enum RegionFillStyle
{
    SolidColor,
    LinearGradient
}

public enum LinearGradientDirection
{
    Horizontal,
    Vertical
}

And now the new properties.

private RegionFillStyle _FillStyle = RegionFillStyle.SolidColor;
[DefaultValue(RegionFillStyle.SolidColor)]
public RegionFillStyle FillStyle
{
    get { return _FillStyle; }
    set { _FillStyle = value; Refresh(); }
}

private LinearGradientDirection _LinearGradientDirection = LinearGradientDirection.Vertical;
[DefaultValue(LinearGradientDirection.Vertical)]
public LinearGradientDirection LinearGradientDirection
{
    get { return _LinearGradientDirection; }
    set { _LinearGradientDirection = value; Refresh(); }
}

private Color _BackColor2 = Color.White;
public Color BackColor2
{
    get { return _BackColor2; }
    set { _BackColor2 = value; Refresh();  }
}

The goal is to draw a background for our control that is a linear gradient, fading from BackColor to BackColor2.  We’re going to use a technique called interpolation, which is the calculation of new data points based on the values of known data points around them.  In our case, we’re going to be interpolating color values.  We know the color at the top and the bottom (in the case of a vertical gradient), so a pixel halfway between them spatially should have a color value that is halfway between the color values at both ends.  Because colors are manipulated in bitmaps with an RGB scheme (using red, green, and blue components), we actually have three component values that need to be interpolated.

Understanding the Math

If our control is 100 pixels tall, and the color at the bottom is 100 units less blue than at the top, the translation is very simple: as we move down each pixel from top to bottom, we’ll subtract 1 unit of color from the blue value.  The challenge lies in the fact that we can’t assume our height and our color values will line up so nicely.  Furthermore, we have two other colors to deal with, and they may need to change at different rates or in different directions: the color may become slightly more red while simultaneously becoming drastically less green.

So we’re going to need some way of finding the scaling factor between the height or width of the control and the distance in color values for red, green, and blue separately.  In our example of 100 pixels to 100 color units, we have a 1:1 scaling factor.  If we instead had to make a color change of 200 units, we’d have a scaling factor of 1:2, or 1 pixel for 2 units of color change.  Another way to think of this is to say that for every pixel we move along the line, we’re going to increase or decrease our color by 2 units.

double RedScale = (double)Height / (BackColor2.R - BackColor.R);

The RedScale variable divides our height by our gradient’s difference (or change) in redness, and we make the same scaling calculation for green and blue.  This scaling takes increasing and decreasing color changes into account, based on whether the subtraction expression on the right results in a positive or negative number.  As we move along the y axis, we’ll create a color that uses BackColor as a base and adds to it RGB offsets obtained by dividing the current y position by this scaling factor.  Let’s take a look at that code:

Bitmap LinearGradient = null;

if (LinearGradientDirection == LinearGradientDirection.Vertical)
{
    double RedScale = (double)Height / (BackColor2.R - BackColor.R);
    double GreenScale = (double)Height / (BackColor2.G - BackColor.G);
    double BlueScale = (double)Height / (BackColor2.B - BackColor.B);

    LinearGradient = new Bitmap(1, Height);
    for (int y = 0; y < Height; y++)
    {
        int red = BackColor.R + (int)((double)y / RedScale);
        int green = BackColor.G + (int)((double)y / GreenScale);
        int blue = BackColor.B + (int)((double)y / BlueScale);

        Color color = Color.FromArgb(red, green, blue);
        LinearGradient.SetPixel(0, y, color);
    }
}

if (LinearGradient != null)
{
    FillBrush = new TextureBrush(LinearGradient);
}

After calculating our scaling factors, we define a bitmap that’s as tall as our control, but only one pixel wide.  You can see this bitmap being used to create a TextureBrush at the bottom of the code.  This brush will be used to fill the entire area from left to right, copying the bitmap across the entire surface, so there’s no need to make it any wider than a single pixel.  (For horizontal gradients, we do the opposite: create a bitmap as wide as our control but only one pixel tall.)

In our hypothetical 100-pixel-tall control, with a red value that decreases 200 units from top to bottom (and therefore has a scaling factor of -0.5), we calculate each pixel’s redness by dividing y by -0.5.  At pixel y=25, which is 25% of the way down, we get a value of -50, which is 25% of -200.  At pixel y=75, we get 75 / -0.5 = -150.  So we take our original BackColor.R, and add this negative number, which makes it decrease from the base color as desired.
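If the divide-by-a-scaling-factor formulation feels indirect, here’s an equivalent way to write the same per-pixel interpolation (a sketch for illustration, not the control’s actual code; both forms produce the same values):

// at row y of Height rows, each channel has moved y/Height of the way
// from BackColor to BackColor2
double t = (double)y / Height;
int red = BackColor.R + (int)(t * (BackColor2.R - BackColor.R));
int green = BackColor.G + (int)(t * (BackColor2.G - BackColor.G));
int blue = BackColor.B + (int)(t * (BackColor2.B - BackColor.B));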

The only thing we need to do now is to ensure that each of our three color values never goes outside the range of 0 to 255; otherwise Color.FromArgb will throw an exception.  We can do this with the Math class’s Min and Max methods, like this:

int red = Math.Max(Math.Min(BackColor.R + (int)((double)y / RedScale), 255), 0);
int green = Math.Max(Math.Min(BackColor.G + (int)((double)y / GreenScale), 255), 0);
int blue = Math.Max(Math.Min(BackColor.B + (int)((double)y / BlueScale), 255), 0);

Min returns the smaller of two numbers, and since we pass in 255 along with our calculated color, if our calculation is over 255, then the value it will return is 255.  Max does the opposite, and the combination ensures we stay within the valid range.
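Since the Compact Framework’s Math class has no built-in clamp method, it can be worth pulling this pattern into a small helper; this is my own addition, not part of the article’s control code:

// clamps an arbitrary int into the 0-255 range a color channel requires
private static int ClampToByte(int value)
{
    return Math.Max(Math.Min(value, 255), 0);
}

// usage: int red = ClampToByte(BackColor.R + (int)((double)y / RedScale));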

Results

The code sample above only showed the code for a vertical gradient, so here is the complete listing of our control with the logic for horizontal gradients as well.

using System;
using System.Collections.Generic;
using System.Windows.Forms;
using System.Drawing;
using System.Reflection;
using System.ComponentModel;

namespace CustomControlsDemo
{
    public class ClippingControl : Control
    {
        private RegionFillStyle _FillStyle = RegionFillStyle.SolidColor;
        [DefaultValue(RegionFillStyle.SolidColor)]
        public RegionFillStyle FillStyle
        {
            get { return _FillStyle; }
            set { _FillStyle = value; Refresh(); }
        }
        
        private LinearGradientDirection _LinearGradientDirection = LinearGradientDirection.Vertical;
        [DefaultValue(LinearGradientDirection.Vertical)]
        public LinearGradientDirection LinearGradientDirection
        {
            get { return _LinearGradientDirection; }
            set { _LinearGradientDirection = value; Refresh(); }
        }
        
        private Color _BackColor2 = Color.White;
        public Color BackColor2
        {
            get { return _BackColor2; }
            set { _BackColor2 = value; Refresh(); }
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            // define a canvas for the visual content of the control
            Bitmap MyBitmap = new Bitmap(Width, Height);
            Graphics g = Graphics.FromImage(MyBitmap);

            Brush FillBrush = null;
            if (FillStyle == RegionFillStyle.SolidColor)
            {
                FillBrush = new SolidBrush(BackColor);
            }
            else if (FillStyle == RegionFillStyle.LinearGradient)
            {
                Bitmap LinearGradient = null;

                if (LinearGradientDirection == LinearGradientDirection.Horizontal)
                {
                    double RedScale = (double)Width / (BackColor2.R - BackColor.R);
                    double GreenScale = (double)Width / (BackColor2.G - BackColor.G);
                    double BlueScale = (double)Width / (BackColor2.B - BackColor.B);

                    LinearGradient = new Bitmap(Width, 1);
                    for (int x = 0; x < Width; x++)
                    {
                        int red = Math.Max(Math.Min(BackColor.R + (int)((double)x / RedScale), 255), 0);
                        int green = Math.Max(Math.Min(BackColor.G + (int)((double)x / GreenScale), 255), 0);
                        int blue = Math.Max(Math.Min(BackColor.B + (int)((double)x / BlueScale), 255), 0);

                        Color color = Color.FromArgb(red, green, blue);
                        LinearGradient.SetPixel(x, 0, color);
                    }
                }
                else if (LinearGradientDirection == LinearGradientDirection.Vertical)
                {
                    double RedScale = (double)Height / (BackColor2.R - BackColor.R);
                    double GreenScale = (double)Height / (BackColor2.G - BackColor.G);
                    double BlueScale = (double)Height / (BackColor2.B - BackColor.B);

                    LinearGradient = new Bitmap(1, Height);
                    for (int y = 0; y < Height; y++)
                    {
                        int red = Math.Max(Math.Min(BackColor.R + (int)((double)y / RedScale), 255), 0);
                        int green = Math.Max(Math.Min(BackColor.G + (int)((double)y / GreenScale), 255), 0);
                        int blue = Math.Max(Math.Min(BackColor.B + (int)((double)y / BlueScale), 255), 0);

                        Color color = Color.FromArgb(red, green, blue);
                        LinearGradient.SetPixel(0, y, color);
                    }
                }

                if (LinearGradient != null)
                {
                    FillBrush = new TextureBrush(LinearGradient);
                }
            }

            if (FillBrush != null)
            {
                g.FillRectangle(FillBrush, ClientRectangle);
            }

            // draw graphics on our bitmap
            g.DrawLine(new Pen(Color.Black), 0, 0, Width - 1, Height - 1);
            g.DrawEllipse(new Pen(Color.Black), 0, 0, Width - 1, Height - 1);

            // dispose of the painting tools
            g.Dispose();

            // paint the background to match the Parent control so it blends in
            e.Graphics.FillRectangle(new SolidBrush(Parent.BackColor), ClientRectangle);

            // define the custom shape of the control: a trapezoid in this example
            List<Point> Points = new List<Point>();
            Points.AddRange(new Point[] { new Point(20, 0), new Point(Width - 21, 0), 
                new Point(Width - 1, Height - 1), new Point(0, Height - 1) });

            // draw that content inside our defined shape, clipping everything that falls outside of the region;
            // only if the image is much smaller than the control does it really get "tiled" and act like a textured painting brush
            // but our bitmap image is the same size as the control, so we're just taking advantage of clipping
            Brush ImageBrush = new TextureBrush(MyBitmap);

            e.Graphics.FillPolygon(ImageBrush, Points.ToArray());
        }
    }
}

Placing a couple of these controls on a form, I can set one of them to use a solid background (yellow), and the others to use vertical and horizontal linear gradients.

Linear Gradient Control Screenshot

Conclusion

Linear gradients are a great effect to have in your repertoire of techniques.  Compact Framework applications especially tend to be flat and dull, with an unimpressive array of built-in controls, and with more focus on user interfaces like the iPhone’s and those of the cool new HTC touch devices, the desire for fancier interfaces is growing.  As we start to mix operations like polygon clipping and quasi-transparency (presented in the previous article) with linear gradients and other effects, we can put together a bag of tricks for composing beautiful and interesting user experiences.

Posted in Algorithms, Compact Framework, Custom Controls, User Interface Design, Windows Forms | 9 Comments »

Compact Framework Controls (Part 2): Polygon Clipping

Posted by Dan Vanderboom on May 4, 2008

[This article is part of a series that starts in this article.]

My intention is to cover a full spectrum of activities around custom control development, with an emphasis on the compromises, workarounds, and special care that must be taken when developing controls for the Compact Framework.  In my first article, I talked about how most design-time attributes must be applied to control classes and their members, and what some of those attributes mean.  I have a number of articles planned that explore those attributes more, and will go into extending the design-time experience in more depth, but I’m going to take a detour into custom painting of the control surface first, so we have a control to reference and work with in the examples.

Polygon Clipping

If you’re new to creating graphical effects and unfamiliar with the techniques involved, clipping refers to the chopping off of an image based on some kind of border or boundary.  In Windows Forms interfaces, controls are inherently rectangular because clipping occurs automatically at the window’s boundary (which is a shame, considering how this presumption slows rendering; WPF takes good advantage of not making it).  Everything outside the control’s outer shape doesn’t get drawn at all.  You can draw anywhere you want, including at negative coordinates, but only the points that fall within the clipping region will be displayed.

Clipping Illustration

But what if you want to make your control a different shape than the standard rectangle, like an ellipse or a rounded rectangle?  How do you make sure that whatever you draw inside never leaks outside of your defined shape?  In the full .NET Framework, there is a Region property on the Control class that defines these boundaries, and there are several good articles that explain it.  The clipping mask is applied based on that Region’s definition.  In the Compact Framework, the Region property doesn’t exist, and you’re forced to find your own way of defining different shapes.
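For contrast, here’s roughly what the desktop approach looks like: a sketch that compiles only against the full framework, inside a Control subclass (GraphicsPath lives in System.Drawing.Drawing2D).

// full .NET Framework only: assign a Region to give the control a custom shape
var path = new GraphicsPath();
path.AddEllipse(0, 0, Width - 1, Height - 1);
this.Region = new Region(path);   // everything outside the ellipse is clipped away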

The key to this is to understand the Graphics class’s Fill methods.  While FillEllipse and FillRectangle definitely have their uses, I’d like to focus on situations that are a little bit more demanding, such as when you want to represent a more complex shape like a rounded rectangle (with many points) or some kind of UML diagram element.  FillPolygon takes a list of Points, and with them can define the most eccentric and specific of shapes.  By filling a polygon with an image using a TextureBrush, clipping happens automatically as part of the operation.

Let’s take a look at some code to see how we perform each of these steps: preparing and drawing on a bitmap image in memory, defining our shape’s boundaries, and then clipping that image within the specified shape.

using System;
using System.Collections.Generic;
using System.Windows.Forms;
using System.Drawing;
using System.Reflection;

namespace CustomControlsDemo
{
    public class ClippingControl : Control
    {
        protected override void OnPaint(PaintEventArgs e)
        {
            // define a canvas for the visual content of the control
            Bitmap MyBitmap = new Bitmap(Width, Height);

            // create a painting tools object
            Graphics g = Graphics.FromImage(MyBitmap);

            // draw graphics on our bitmap
            g.FillRectangle(new SolidBrush(Color.PaleGoldenrod), ClientRectangle);
            g.DrawLine(new Pen(Color.Black), 0, 0, Width - 1, Height - 1);
            g.DrawEllipse(new Pen(Color.Black), 0, 0, Width - 1, Height - 1);

            // dispose of the painting tools
            g.Dispose();

            // define the custom shape of the control: a trapezoid in this example
            List<Point> Points = new List<Point>();
            Points.AddRange(new Point[] { new Point(20, 0), new Point(Width - 21, 0), 
                new Point(Width - 1, Height - 1), new Point(0, Height - 1) });

            // draw that content inside our defined shape, clipping everything that falls outside of the region;
            // only if the image is much smaller than the control does it really get "tiled" and act like a textured painting brush
            // but our bitmap image is the same size as the control, so we're just taking advantage of clipping
            Brush ImageBrush = new TextureBrush(MyBitmap);
            e.Graphics.FillPolygon(ImageBrush, Points.ToArray());
        }
    }
}

Although this control has a silly shape and doesn’t do much yet, it does illustrate the basics of painting within the bounds of an irregular shape.  As long as we draw on MyBitmap, everything will be properly clipped by the call to FillPolygon.  However, as you can see in the screenshots below, the white background around our custom shape could be a problem.  You can change the BackColor property to match the color of the container it’s on (a Panel control in this case, colored Color.BurlyWood), but really it makes more sense for BackColor to describe the color within our shape.  We’d like the surrounding background to blend in with whatever container the control is sitting in.

first version

We can accomplish this with two simple changes.  First, at some point before the FillPolygon call, we need to fill the entire control’s area with the BackColor property of the parent control.  We will draw using the e.Graphics object, which paints on the whole rectangular control, not our g Graphics object, whose contents get clipped.  Then, instead of hard coding Color.PaleGoldenrod, we can use the BackColor property to specify our fill color.  Here is the changed section of code:

// draw graphics on our bitmap
g.FillRectangle(new SolidBrush(BackColor), ClientRectangle);
g.DrawLine(new Pen(Color.Black), 0, 0, Width - 1, Height - 1);
g.DrawEllipse(new Pen(Color.Black), 0, 0, Width - 1, Height - 1);

// dispose of the painting tools
g.Dispose();

e.Graphics.FillRectangle(new SolidBrush(Parent.BackColor), ClientRectangle);

Now if we set the BackColor to PaleGoldenrod, we’ll get this rendering:

transparent background

Dragging the control off the panel and into the white area will cause the area around the control to paint white, so now you can see how it blends in with whatever background we have as long as it’s a solid color.

In a future article, after I’ve covered how to draw arcs and curves, I will revisit this technique and demonstrate how to draw rectangles with rounded corners.

[This article is part of a series that continues in this article.]

Posted in Algorithms, Compact Framework, Custom Controls, User Interface Design, Visual Studio, Windows Forms, Windows Mobile | 12 Comments »

Using Extension Methods to Manipulate Control Bitmaps in Compact Framework

Posted by Dan Vanderboom on April 11, 2008

I’m loving extension methods.  All of the methods that I wish BCL classes had, I can now add.  While I consider it highly unfortunate that we can’t yet add extension properties, events, or static members of any kind, still it’s a great amount of power in terms of making functionality discoverable in ways not possible before.

During the implementation of my Compact Framework application’s MVC framework, I wanted to support displaying views modally.  However, because I use a screen stack of UserControls that are all hosted in a single master Form object, I lose out on this built-in functionality and found myself needing to create the behavior myself.  One of the difficulties is displaying a view that may not cover every portion of the views beneath it; if the user clicks on one of the views “underneath”, that view gets activated, and if they press on a control within it, that control will handle the event (such as Button.Click).

My solution to the problem is simple.  Take a snapshot of the master form and everything on it, create a PictureBox control that covers the whole form and bring it to front, and set its image to the snapshot bitmap.  Doing this provides the illusion that the user is still looking at the same form full of controls, and yet if they touch any part of the screen, they’ll be touching a PictureBox that just ignores them.  The application is then free to open a new view UserControl on top of that.  When the window is finally closed, the MVC infrastructure code tears down the PictureBox, and the real interface once again becomes available for interaction.
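Here’s roughly what that overlay step looks like, using the GetSnapshot and Darken extension methods defined below.  This is a sketch under my own naming (ShowModalOverlay and HideModalOverlay aren’t part of the framework code):

private PictureBox ModalOverlay;

private void ShowModalOverlay(Form MasterForm)
{
    // freeze the current UI into a darkened snapshot and cover the form with it
    ModalOverlay = new PictureBox();
    ModalOverlay.Image = MasterForm.GetSnapshot().Darken(40);
    ModalOverlay.Bounds = MasterForm.ClientRectangle;
    MasterForm.Controls.Add(ModalOverlay);
    ModalOverlay.BringToFront();    // taps now land on the inert PictureBox
}

private void HideModalOverlay(Form MasterForm)
{
    // tear the overlay down so the real interface is interactive again
    MasterForm.Controls.Remove(ModalOverlay);
    ModalOverlay.Dispose();
    ModalOverlay = null;
}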

Screenshots before and after screen capture and darkening

In addition, I wanted the ability to emphasize the modal view, so you can see from the picture above that I decided to accomplish this by de-emphasizing the background bitmap.  By darkening the snapshot, the pop-up modal view really does seem to pop out.  The only problem with bitmap manipulation using the Compact Framework library is that it’s extremely slow, but I get around this by using some unsafe code to manipulate the memory region where the bitmap image gets mapped.  (If you’re unfamiliar with the unsafe keyword, don’t worry: this code actually is safe to use, though you will need to enable unsafe code in your project’s build settings to compile it.)

Here is the full source code for taking a snapshot of a form (or any control), as well as adjusting the brightness.

using System;
using System.Linq;
using System.Collections.Generic;
using System.Text;
using System.Drawing;
using System.Drawing.Imaging;
using System.Windows.Forms;
using System.Runtime.InteropServices;

public static class ControlBitmapExtensions
{
    [DllImport("coredll.dll")]
    private static extern bool BitBlt(IntPtr hdc, int nXDest, int nYDest, int nWidth, int nHeight,
        IntPtr hdcSrc, int nXSrc, int nYSrc, int dwRop);

    public struct PixelData
    {
        public byte Blue;
        public byte Green;
        public byte Red;
    }

    public static Bitmap GetSnapshot(this Control Control)
    {
        Rectangle rect = new Rectangle(0, 0, Control.Width, Control.Height - 1);
        Graphics g = Control.CreateGraphics();
        Bitmap Snapshot = new Bitmap(rect.Width, rect.Height);
        Graphics gSnapshot = Graphics.FromImage(Snapshot);

        // copy the control's rendered pixels into the bitmap (0xCC0020 is the SRCCOPY raster operation)
        IntPtr hdcDest = gSnapshot.GetHdc();
        IntPtr hdcSrc = g.GetHdc();
        BitBlt(hdcDest, 0, 0, rect.Width, rect.Height, hdcSrc, rect.Left, rect.Top, 0xCC0020);

        // release the device contexts and painting tools
        gSnapshot.ReleaseHdc(hdcDest);
        g.ReleaseHdc(hdcSrc);
        gSnapshot.Dispose();
        g.Dispose();

        return Snapshot;
    }

    public static unsafe Bitmap AdjustBrightness(this Bitmap Bitmap, decimal Percent)
    {
        Percent /= 100;
        Bitmap Snapshot = (Bitmap)Bitmap.Clone();
        Rectangle rect = new Rectangle(0, 0, Bitmap.Width, Bitmap.Height);

        BitmapData BitmapBase = Snapshot.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
        byte* BitmapBaseByte = (byte*)BitmapBase.Scan0.ToPointer();

        // the number of bytes in each row of a bitmap is allocated (internally) to be equally divisible by 4
        int RowByteWidth = rect.Width * 3;
        if (RowByteWidth % 4 != 0)
        {
            RowByteWidth += (4 - (RowByteWidth % 4));
        }

        // each 24bpp pixel is 3 bytes, so stepping 9 bytes at a time touches every third pixel
        for (int i = 0; i < RowByteWidth * rect.Height; i += 9)
        {
            PixelData* p = (PixelData*)(BitmapBaseByte + i);

            p->Red = (byte)Math.Round(Math.Min(p->Red * Percent, (decimal)255));
            p->Green = (byte)Math.Round(Math.Min(p->Green * Percent, (decimal)255));
            p->Blue = (byte)Math.Round(Math.Min(p->Blue * Percent, (decimal)255));
        }

        Snapshot.UnlockBits(BitmapBase);
        return Snapshot;
    }

    public static Bitmap Brighten(this Bitmap Bitmap, decimal PercentChange)
    {
        return AdjustBrightness(Bitmap, 100 + PercentChange);
    }

    public static Bitmap Darken(this Bitmap Bitmap, decimal PercentChange)
    {
        return AdjustBrightness(Bitmap, 100 - PercentChange);
    }
}


Because Control is extended by GetSnapshot, and Bitmap is extended by AdjustBrightness, Brighten, and Darken, I can write very clear and simple code like this on the consuming side:

Bitmap bitmap = MyForm.GetSnapshot().Darken(40);

…and voila!  I have a snapshot.  Note that because Darken extends Bitmap, it can now be used with any Bitmap.  As we read from this code from left to right, we’re observing a pipeline of transformations.  MyForm is the source data, GetSnapshot is the first step, Darken is the next change, and with more extension methods on Bitmap we could continue to process this in a way that is very natural to think about and construct.
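And because each step returns a Bitmap, nothing stops us from chaining further.  For example (a contrived combination, just to show the composition):

Bitmap subtle = MyForm.GetSnapshot().Darken(40).Brighten(15);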

I do have to admit that I cheated a little, though.  Even with the direct memory manipulation with pointers, it still didn’t perform very well on the Symbol and DAP devices I tested on.  So instead of adjusting the brightness on every pixel, I only darken every third pixel.  They’re close enough together that you can’t really tell the difference; however, the closer to 100 percent you darken or brighten an image, the more apparent the illusion will be, since two thirds of the pixels won’t be participating.  So it’s good for subtle effects, but I wouldn’t count on it for all scenarios.

This every-third-pixel dirty trick happens in the for loop, where you see i += 9 (three bytes per pixel, so each step skips ahead three pixels), so go ahead and experiment with it.  Just keep the step a multiple of 3, the size of one pixel; otherwise the pointer will land between pixel boundaries, the color channels will shift, and you’ll end up with stripes!

Posted in Algorithms, Compact Framework, Object Oriented Design, Problem Modeling, User Interface Design, Windows Forms, Windows Mobile | 5 Comments »