Critical Development

Language design, framework development, UI design, robotics and more.

Archive for September, 2009

The Wonders of Aruba

Posted by Dan Vanderboom on September 22, 2009

This morning, after being awoken at 5:30am by a rooster living nearby, I went for a walk to the northern tip of the island, where beautiful homes are surrounded by lush tropical flowers and various palm trees, ferns, and cacti.  Clouds with serious character muddied the early morning sky, and large birds hovered playfully in the air above the homes on J.E. Irausquin Blvd—not covering any ground, simply enjoying the feeling of the strong ocean wind, gliding without effort or purpose, hovering in place just above the tallest trees.

It’s surprising to me that humans consider such aimless delight a luxury.  I’m in Aruba for the month of September in part because I disagree; I think from time to time, it’s an absolute necessity to stay sane and keep a healthy perspective and sense of balance.  When so many of our moments are goal-directed and serious, and as Americans we have less time off work than virtually any other country on Earth, it’s only a matter of time before the intelligence of our own bodies revolts against us in protest, a petition against the undue stress and unrealistic expectations we often have of ourselves.

An hour later, I was following the winding road past Arashi Beach, onto the part of Aruba that isn’t polluted much by light at night, where you can see thousands of stars and galaxies and the colorful dust of the Milky Way.  The road curves back and forth several times and climbs steeply toward the California Lighthouse, where I normally turn around and head back.  Except today, to my surprise, I came across a herd of goats!

I first spotted them on the road and let them cross in front of me.  A baby lagged behind, and I followed as closely as possible to get some better pictures.  When I got within 20 feet, the little one bolted ahead, sprinting over treacherously uneven volcanic rock.  The goats didn’t seem to have any problem running over this terrain, however, nor did they seem to mind me following them around for half an hour.  Here you can see the little one in mid-stride of a dashing pace; notice how well it blends in with the ground’s color.  I also enjoy seeing all the lizards here.  I’ve seen several kinds, and most of them are small, but this large one was hanging out by the pool at the Radisson hotel.

I had the pleasure of going to a huge DJ party called something like Maj 4 Stix.  The DJ rig was enormous, with thick outdoor smoke effects, blasts of fire and bright lights of every color, and thumping dance music.  There were acrobats running in translucent plastic balls in the water that surrounded the dance stage like a moat, and hundreds of people dancing to really great music.

Every few days, I head to Oranjestad, the capital of Aruba, to work.  The best shopping seems to be there, since that’s where the cruise ships stop.  The pictures below are of a shopping area in Oranjestad, and of a church and graveyard in Noord where many people are buried in elaborate above-ground stone tombs.

Finally, here are two pictures of me: one in front of the rock waterfalls at the Radisson Hotel, from a video I made to wish my niece Ava a happy birthday, and a fun picture of me at Confession Club in Palm Beach.

I’ve hiked through the unpopulated countryside of Aruba; I’ve gone to the big parties and night clubs, spent a lot of time tanning on the beaches, enjoyed Dutch food (Cafe Rembrandt is my favorite), and gone on a Jeep tour (through ABC Tours) to the natural pools, the gold mine buildings, and the old Indian-painted caves; and somehow I’ve still managed to be very productive writing software for my current client as well as some personal projects I have in the works.  I don’t often give advice, but I would definitely recommend enjoying life as much as possible while it lasts.  Travel, work remotely, start a business, or do whatever makes sense in your life to follow your dreams, but don’t wait to do it!

Posted in Aruba | 3 Comments »

Filtering with Metadata in the Managed Extensibility Framework

Posted by Dan Vanderboom on September 19, 2009

The Managed Extensibility Framework (MEF) is the new extensibility framework from Microsoft.  Pioneered by Glenn Block in the patterns & practices group, and leveraged by the behemoth Visual Studio 2010, it bears a striking resemblance to my own Inversion of Control (IoC) and Dependency Injection (DI) framework—which led me to have a couple great conversations about IoC with Glenn at Tech Ed 2008 and then again at PDC 2008.

But MEF isn’t really written to be your IoC.  Instead, the IoC engine and DI aspects are implementation details, allowing you to do really no more than “MEF things together”.  The core concept of MEF is to provide very simple and powerful application composability.  Not in the user interface composition sense—for that, see Prism for WPF and Silverlight (explained in MSDN Magazine, September 2008)—but for virtually all other dynamic component assembly needs, MEF is your best friend.

The two things I like most about MEF are its simplicity and its lack of presumption about how it will be used.  Compose collections of strings, single-method delegates, or implementations of complex services.  All you’re doing is importing and exporting things, with little code required to wire things up.
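To make this concrete, here’s a minimal sketch of composing a single exported string, using the API roughly as it stands in the recent previews (the MessageSource and Program names and the "WelcomeMessage" contract are mine):

using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

public class MessageSource
{
    // export a plain string under a named contract
    [Export("WelcomeMessage")]
    public string Message = "Hello from MEF!";
}

class Program
{
    // MEF fills this in during composition
    [Import("WelcomeMessage")]
    public string Welcome;

    static void Main()
    {
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        var container = new CompositionContainer(catalog);

        var program = new Program();
        container.ComposeParts(program);   // satisfies the [Import] above

        Console.WriteLine(program.Welcome);
    }
}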

MEF is currently in its seventh preview release, so expect beta-like quality.  My own experience with it has been very positive, but there are a number of shortcomings in the API.  This article is about a few of them and what can be done to add some much-needed functionality.

System.AddIn vs. MEF

There’s been some confusion with Microsoft coopetition among products with similar aims, and extensibility and composition are no exception.  The AddIn API (team blog) serves a similar purpose to MEF.  (See this two-part MSDN article on System.AddIn: first and second.)  The primary differentiator, from my understanding, is that the AddIn API is a bit more robust and a lot more complicated, and supports such things as isolating extensions in separate AppDomains.

With Visual Studio siding with MEF, it’s personally hard for me to imagine using the AddIn API.  If MEF is flexible and robust enough for Visual Studio, is it really likely to fall short for my own much smaller software systems?  Krzysztof Cwalina suggests they are complementary approaches, but I find that hard to swallow.  Why would I want to use two different extensibility frameworks instead of one coherent API?  If anything, I imagine that the lessons learned from the AddIn API will eventually migrate to MEF.

Daniel Moth notes that with the AddIn API, “there are many design decisions to make and quite a few subtleties in implementing those decisions in particular when it comes to discovering addins, version resiliency, isolation from the host etc.”  A customer of mine was using the AddIn API with a Visual Studio plug-in to manage pipelines, and things were a real mess.  There were a bunch of assemblies, a lot of generated code, and not much clarity or confidence that it was all really necessary.

MEF: Import & ImportMany

In MEF, the Import attribute allows you to inject a value that is exported somewhere else using the Export attribute—typically from another assembly.  There is also an ImportMany attribute, which is useful when you expect several exports that use the same contract.  By defining an IEnumerable<T> field or property and decorating it with the ImportMany attribute, all matching exports will be collected into it.

[ImportMany]
public IEnumerable<IVehicle> Vehicles;

What if you want to filter the exported vehicle types by some kind of metadata, though?  Let’s take a look at the IVehicle contract and some concrete classes that implement the contract.

public interface IVehicle { }

[Export(typeof(IVehicle))]
[ExportMetadata("Speed", "Slow")]
public class ToyotaPrius : IVehicle
{
    public ToyotaPrius() { }
}

[Export(typeof(IVehicle))]
[ExportMetadata("Speed", "Fast")]
public class LamborghiniDiablo : IVehicle
{
    public LamborghiniDiablo() { }
}

The object model isn’t very interesting, but that’s not the point.  What is interesting is that MEF allows us to supply metadata corresponding to our exports.  In this case, my contrived example defines a metadata variable named “Speed”, with two possible values: “Fast” and “Slow”.  The name must be a string, but the value can be anything that’s supported within an attribute, which means string literals and constants, type objects, enum values, and the like.
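If string keys feel too loose, recent previews also support custom metadata attributes: decorate an attribute class with MetadataAttribute and its public properties become metadata keys on the export.  A hedged sketch (VehicleSpeedAttribute is my own invention):

[MetadataAttribute]
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class VehicleSpeedAttribute : Attribute
{
    public VehicleSpeedAttribute(string speed) { Speed = speed; }

    // surfaces as the "Speed" key in ExportDefinition.Metadata
    public string Speed { get; private set; }
}

[Export(typeof(IVehicle))]
[VehicleSpeed("Fast")]
public class LamborghiniDiablo : IVehicle
{
}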

Filtering Imports on Metadata

What if you want to ImportMany for all exports that have a particular metadata value?  Unfortunately, there are no such options in the ImportMany attribute class.

In my scenario, I’ve defined a static factory class called TrafficFactory, which at some imaginary point in the future will be responsible for building a city full of traffic.

public static class TrafficFactory
{
    // type initialization fails without a static constructor
    static TrafficFactory() { }

    public static IEnumerable<IVehicle> SlowVehicles =
        App.Container.GetExportedValues<IVehicle>(metadata => metadata.ContainsKeyWithValue("Speed", "Slow"));

    public static IEnumerable<IVehicle> FastVehicles =
        App.Container.GetExportedValues<IVehicle>(metadata => metadata.ContainsKeyWithValue("Speed", "Fast"));

    public static IDictionary<object, IVehicle> AllVehicles =
        App.Container.GetKeyedExportedValues<IVehicle>("Speed");
}

This is what I want to do, but there is no overload of GetExportedValues that accepts a metadata-dependent predicate function.  Adding one is easy, though.  While we’re at it, we’ll also add the ContainsKeyWithValue extension method, which I borrow from The Code Junky article on MEF container filtering.

public static class IDictionaryExtensions
{
    public static bool ContainsKeyWithValue<KeyType, ValueType>(
        this IDictionary<KeyType, ValueType> Dictionary,
        KeyType Key, ValueType Value)
    {
        return (Dictionary.ContainsKey(Key) && Dictionary[Key].Equals(Value));
    }
}

public static class MEFExtensions
{
    public static IEnumerable<T> GetExportedValues<T>(this CompositionContainer Container,
        Func<IDictionary<string, object>, bool> Predicate)
    {
        var result = new List<T>();

        // scan every part in the catalog for exports matching T's contract
        foreach (var PartDef in Container.Catalog.Parts)
        {
            foreach (var ExportDef in PartDef.ExportDefinitions)
            {
                if (ExportDef.ContractName == typeof(T).FullName)
                {
                    if (Predicate(ExportDef.Metadata))
                        result.Add((T)PartDef.CreatePart().GetExportedValue(ExportDef));
                }
            }
        }

        return result;
    }
}

Now we can test this logic by wiring up MEF and then accessing the two filtered collections of cars, which will each contain a single IVehicle instance.

class App
{
    // exported so dynamically loaded modules can import the container;
    // the field is static so it can be assigned from the static Main
    [Export]
    public static CompositionContainer Container;

    static void Main(string[] args)
    {
        AssemblyCatalog catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        Container = new CompositionContainer(catalog);
        Container.ComposeParts();

        var FastCars = TrafficFactory.FastVehicles;
        var SlowCars = TrafficFactory.SlowVehicles;
    }
}

Voilà!  We have metadata-based filtering.

You’ll also notice that I added an Export attribute to the Container itself.  By doing this, you can Import the container into any module that gets dynamically loaded.  It’s not used in this article, but getting to the container from a module is otherwise impossible without some kind of work-around.  (Thanks for pointing out the problem, Damon.)
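For illustration, a dynamically loaded module might then look something like the following.  This is only a sketch: IModule and ReportingModule are hypothetical names, and if your preview build doesn’t honor the attributed export of the container, registering the instance directly (ComposeExportedValue in recent bits) accomplishes the same thing.

[Export(typeof(IModule))]
public class ReportingModule : IModule
{
    // resolved from the [Export] on App.Container
    [Import]
    public CompositionContainer Container;

    public void Run()
    {
        // the module can query the container for further exports at runtime
        var SlowVehicles = Container.GetExportedValues<IVehicle>(
            metadata => metadata.ContainsKeyWithValue("Speed", "Slow"));
    }
}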

Using Metadata to Assign Dictionary Keys

Let’s take this one step further.  Let’s say you want to import many instances of MEF exported values into a Dictionary, using one of the metadata properties as the key.  This is how I’d like it to work:

public static IDictionary<object, IVehicle> AllVehicles =
    App.Container.GetKeyedExportedValues<IVehicle>("Speed");

Again, the current MEF Preview doesn’t support this, but another extension method is all we need.  We’ll add two, so that one version gives us all exported values and the other allows us to filter that selection based on other metadata.

public static IDictionary<object, T> GetKeyedExportedValues<T>(this CompositionContainer Container,
    string MetadataKey, Func<IDictionary<string, object>, bool> Predicate)
{
    var result = new Dictionary<object, T>();

    foreach (var PartDef in Container.Catalog.Parts)
    {
        foreach (var ExportDef in PartDef.ExportDefinitions)
        {
            if (ExportDef.ContractName == typeof(T).FullName)
            {
                // the metadata value becomes the dictionary key, so each matching
                // export must carry a distinct value for MetadataKey
                if (Predicate(ExportDef.Metadata))
                    result.Add(ExportDef.Metadata[MetadataKey], 
                        (T)PartDef.CreatePart().GetExportedValue(ExportDef));
            }
        }
    }

    return result;
}

public static IDictionary<object, T> GetKeyedExportedValues<T>(this CompositionContainer Container,
    string MetadataKey)
{
    return GetKeyedExportedValues<T>(Container, MetadataKey, metadata => true);
}

Add an assignment to TrafficFactory.AllVehicles in the App.Main method and see for yourself that it works.
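For example, something like this at the end of Main (the output assumes only the two exports defined above, in no guaranteed order):

var AllCars = TrafficFactory.AllVehicles;

foreach (var Pair in AllCars)
    Console.WriteLine("{0}: {1}", Pair.Key, Pair.Value.GetType().Name);

// Slow: ToyotaPrius
// Fast: LamborghiniDiablo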

If you’re using metadata values as Dictionary keys, it’s probably important that you not mess them up.  I recommend using const strings for metadata property names, enum values for the metadata values themselves when it’s possible to enumerate them, and string constants otherwise.
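Here’s a minimal sketch of that convention (the VehicleMetadata and Speed names are mine; note that ExportMetadata’s name argument must remain a string, so a const covers the key while an enum covers the value):

public static class VehicleMetadata
{
    public const string Speed = "Speed";
}

public enum Speed { Slow, Fast }

[Export(typeof(IVehicle))]
[ExportMetadata(VehicleMetadata.Speed, Speed.Fast)]
public class LamborghiniDiablo : IVehicle
{
}

// filtering then compares against the enum instead of a loose string:
public static IEnumerable<IVehicle> FastVehicles =
    App.Container.GetExportedValues<IVehicle>(
        metadata => metadata.ContainsKeyWithValue<string, object>(VehicleMetadata.Speed, Speed.Fast));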

Now go forth and start using MEF!

Posted in Algorithms, Component Based Engineering, Composability, Design Patterns, Visual Studio Extensibility | Tagged: , , , , | 4 Comments »

Better Tool Support for .NET

Posted by Dan Vanderboom on September 7, 2009

Productivity Enhancing Tools

Visual Studio has come a long way since its debut in 2002.  With the imminent release of the 2010 version, we’ll see a desperately-needed overhaul of the archaic COM extensibility mechanisms (to support the Managed Package Framework, as well as MEF, the Managed Extensibility Framework) and a redesign of the user interface in WPF that I’ve been pushing for, and predicted as inevitable, for quite some time.

For many alpha geeks, the Visual Studio environment has been extended with excellent third-party, productivity-enhancing tools such as CodeRush and ReSharper.  I personally feel that the Visual Studio IDE team has been slacking in this area, providing only weak support for refactoring, code navigation, and Intellisense.  While I understand their desire to avoid stepping on partners’ toes, this is one area I think makes sense for them to be deeply invested in.  In fact, I think a new charter for a Developer Productivity Team is warranted (or an expansion of that team if it already exists).

It’s unfortunately a minority of .NET developers who know about and use these third-party tools, and the .NET community as a whole would without a doubt be significantly more productive if these tools were installed in the IDE from day one.  It would also help to overcome resistance from development departments in larger organizations that are wary of third-party plug-ins, due perhaps to the unstable nature of many of them.  Microsoft should consider purchasing one or both of them, or paying a licensing fee to include them in every copy of Visual Studio.  Doing so, in my opinion, would make them heroes in the eyes of the overwhelming majority of .NET developers around the world.

It’s not that I mind paying a few hundred dollars for these tools.  Far from it!  The tools pay for themselves very quickly in time saved.  The point is to make them ubiquitous: to make high-productivity coding a standard of .NET development instead of a nice add-on that is only sometimes accepted.

Consider the perspective of watching speakers at conferences code up samples.  How many of them don’t use such a tool in their demonstrations simply because they don’t want to confuse their audience with an unfamiliar development interface?  How many more demonstrations could they complete in the limited time available if they felt comfortable using these tools in front of the masses?  You pay good money to attend these conferences.  Wouldn’t you like to cover significantly more ground while you’re there?  This is only likely to happen when the tool’s delivery vehicle is Visual Studio itself.  Damon Payne makes a similar case for the inclusion of the Managed Extensibility Framework in .NET Framework 4.0: build it into the core and people will accept it.

The Gorillas in the Room

CodeRush and ReSharper have both received recent mention in the Hanselminutes podcast (episode 196 with Mark Miller) and in the Deep Fried Bytes podcast (episode 35 with Corey Haines).  If you haven’t heard of CodeRush, I recommend checking out these episodes.

For more background on CodeRush, DXCore, and the principles with which they were designed, I also recommend several episodes of DotNetRocks.

I don’t mean to be so biased toward CodeRush, but it’s the tool I’m personally familiar with, it has a broader range of functionality, and it seems to get the majority of press coverage.  However, those who do talk about ReSharper speak highly of it, so I recommend you check out both to see which one works best for you.  But above all: go check them out!

Refactor – Rename

Refactoring code is something we should all be doing constantly to avoid the accumulation of technical debt as software projects and the requirements on which they are based evolve.  There are many refactorings in Visual Studio for C#, and many more in third-party tools for several languages, but I’m going to focus here on what I consider to be the most important refactoring of them all: Rename.

Why is Rename so important?  Because it’s so commonly used, and it has such far-reaching effects.  It is frequently the case that we give poor names to identifiers before we clearly understand their role in the “finished” system, and even more frequently the case that an item’s role changes as the software evolves.  Failure to rename items to accurately reflect their current purpose is a recipe for code rot and greater code maintenance costs, developer confusion, and therefore buggy logic (with its associated support costs).

When I rename an identifier with a refactoring tool, all of the references to that identifier are also updated.  There might be hundreds of references.  In the days before refactoring tools, one would accomplish this with Find-and-Replace, but this is dangerous.  Even with options like “match case” and “match whole word”, it’s easy to rename the wrong identifiers, rename pieces of string literals, and so on; and if you forget to set these options, it’s worse.  You can go through each change individually, but that can take a very long time with hundreds of potential updates and is a far cry from a truly intelligent update.

Ultimately, the intelligence of the Rename refactoring provides safety and confidence for making far-reaching changes, encouraging more aggressive refactoring practices on a more regular basis.

Abolishing Magic Strings

I am intensely passionate about any tool or coding practice that encourages refactoring and better code hygiene.  One example of such a coding practice is the use of lambda expressions to select identifiers instead of evil “magic strings”.  In my article on dynamically sorting Linq queries, the use of “magic strings” would force me to write something like this to dynamically sort a Linq query:

Customers = Customers.Order("LastName").Order("FirstName", SortDirection.Descending);

The problem here is that “LastName” and “FirstName” are oblivious to the Rename refactoring.  Using the refactoring tool might give me a false sense of security in thinking that all of my references to those two fields have been renamed, leading me to The Pit of Despair.  Instead, I can define a function and use it like the following:

public static IOrderedEnumerable<T> Order<T>(this IEnumerable<T> Source,
    Expression<Func<T, object>> Selector, SortDirection SortDirection)
{
    // value-type members arrive wrapped in a Convert node, so unwrap first,
    // then delegate to the string-based overload from the original article
    var Body = Selector.Body.NodeType == ExpressionType.Convert
        ? ((UnaryExpression)Selector.Body).Operand : Selector.Body;
    return Order(Source, ((MemberExpression)Body).Member.Name, SortDirection);
}

Customers = Customers.Order(c => c.LastName).Order(c => c.FirstName, SortDirection.Descending);

This requires a little understanding of the structure of expressions to implement, but the benefit is huge: I can now use the refactoring tool with much greater confidence that I’m not introducing subtle reference bugs into my code.  For such a simple example, the benefit is dubious, but multiply this by hundreds or thousands of magic string references, and the effort involved in refactoring quickly becomes overwhelming.

Coding in this style is most valuable when it’s a solution-wide convention.  So long as you have code that strays from this design philosophy, you’ll find yourself grumbling and reaching for the inefficient and inelegant Find-and-Replace tool.  The only time it really becomes an issue, then, is when accessing libraries that you have no control over, such as Linq-to-Entities and the Entity Framework, which make extensive use of magic strings.  In the case of EF, this is mitigated somewhat by your ability to regenerate the code it uses.  In other libraries, it may be possible to write extension methods like the Order method shown above.

It’s my earnest hope that library and framework authors such as the .NET Framework team will seriously consider alternatives to, and an abolition of, “magic strings” and other coding practices that frustrate otherwise-powerful refactoring tools.

Refactoring Across Languages

A tool is only as valuable as it is practical.  The Rename refactoring is more valuable when coding practices don’t frustrate it, as explained above.  Another barrier to the practical use of this tool is the prevalence of multiple languages within and across projects in a Visual Studio solution.  The definition of a project as a single-language container is dubious when you consider that a C# or VB.NET project may also contain HTML, ASP.NET, XAML, or configuration XML markup.  These are all languages with their own parsers and other language services.

So what happens when identifiers are shared across languages and a Rename refactoring is executed?  It depends on the languages involved, unfortunately.

When refactoring a C# class in Visual Studio, the XAML’s x:Class value is also updated.  What we’re seeing here is cross-language refactoring, but unfortunately it only works in one direction.  There is no refactor command to update the x:Class value from the XAML editor, so manually changing it causes my C# class to become sadly out of sync.  Furthermore, this seems to be XAML specific.  If I refactor the name of an .aspx.cs class, the Inherits attribute of the Page directive in the .aspx file doesn’t update.

How often do you think it is that someone would want to rename the code-behind class for an ASP.NET page, and yet would not want to change the Inherits attribute?  Probably not very common (okay, probably NEVER).  This is a matter of having sensible defaults.  When you change an identifier name in this way, the development environment does not respond in a sensible way by default, forcing the developer to do extra work and waste time.  This is a failure in UI design for the same reason that Intellisense has been such a resounding success: Intellisense anticipates our needs and works with us; the failure to keep identifiers in sync by default is diametrically opposed to this intelligence.  This represents a fragmented and inconsistent design for an IDE to possess, thus my hope that it will be addressed in the near future.

The problem should be recognized as systemic, however, and addressed in a generalized way.  Making individual improvements in the relationships between pairs of languages has been almost adequate, but I think it would behoove us to take a step back and take a look at the future family of languages supported by the IDE, and the circumstances that will quickly be upon us with Microsoft’s Oslo platform, which enables developers to more easily build tool-supported languages (especially DSLs, Domain Specific Languages). 

Even without Oslo, we have seen a proliferation of languages: IronRuby, IronPython, F#, and the list goes on.  A refactoring tool that is hard-coded for specific languages will be unable to keep pace with the growing family of .NET and markup languages, and certainly unable to deal with the demands of every DSL that emerges in the next few years.  If instead we had a way to identify our code identifiers to the refactoring tool, and indicate how they should be bound to identifiers in other languages in other files, or even other projects or solutions, the tools would be able to make some intelligent decisions without understanding each language ahead of time.  Each language’s language service could supply this information.  For more information on Microsoft Oslo and its relationship to a world of many languages, see my article on Why Oslo Is Important.

Without this cross-language identifier binding feature, we’ll remain in refactoring hell.  I offered a feature suggestion to the Oslo team regarding this multi-master synchronization of a model across languages that was rejected, much to my dismay.  I’m not sure if the Oslo team is the right group to address this, or if it’s more appropriate for the Visual Studio IDE team, so I’m not willing to give up on this yet.

A Default of Refactor-Rename

The next idea I’d like to propose here is that the Rename refactoring is, in fact, a sensible default behavior.  In other words, when I edit an identifier in my code, I more often than not want all of the references to that identifier to change as well.  This is based on my experience of invoking the refactoring explicitly countless times, compared to the relatively few times I want to “break away” that identifier from all the code that references it.

Think about it: if you have 150 references to variable Foo, and you change Foo to FooBar, you’re going to have 150 broken references.  Are you going to create a new Foo variable to replace them?  That workflow doesn’t make any sense.  Why not just start editing the identifier and have the references update themselves implicitly?  If you want to be aware of the change, it would be trivial for the IDE to indicate the number of references that were updated behind the scenes.  Then, if for some reason you really did want to break the references, you could explicitly launch a refactoring tool to “break references”, allowing you to edit that identifier definition separately.

The challenge that comes to mind with this default behavior concerns code that spans across solutions that aren’t loaded into the IDE at the same time.  In principle, this could be dealt with by logging the refactoring somewhere accessible to all solutions involved, in a location they can all access and which gets checked into source control.  The next time the other solutions are loaded, the log is loaded and the identifiers are renamed as specified.

Language Property Paths

If you’ve done much development with Silverlight or WPF, you’ve probably run into the PropertyPath class when using data binding or animation.  PropertyPath objects represent a traversal path to a property such as “Company.CompanyName.Text”.  The travesty is that they’re always “magic strings”.

My argument is that the property path is such an important construct that it deserves to be a core part of language syntax instead of just a type in some UI-platform-specific library.  I created a data binding library for Windows Forms with its own property path syntax and type, and there are countless non-UI scenarios in which this construct would also be incredibly useful.

The advantage of having a language like C# understand property path syntax is that you avoid a whole class of problems that developers have used “magic strings” to solve.  The compiler can then make intelligent decisions about the correctness of paths, and errors can be identified very early in the cycle.

Imagine being able to pass property paths to methods or return them from functions as first-class citizens.  Instead of writing this:

Binding NameTextBinding = new Binding("Name") { Source = customer1; }

… we could write something like this, have access to the Rename refactoring, and even get Intellisense support when hitting the dot (.) operator:

Binding NameTextBinding = new Binding(@Customer.Name) { Source = customer1 };

In this code example, I use the fictitious @ operator to inform the compiler that I’m specifying a property path and not trying to reference a static property called Name on the Customer class.

With property paths in the language, we could solve our dynamic Linq sort problem cleanly, without using lambda expressions to hack around the problem:

Customers = Customers.Order(@Customer.LastName).Order(@Customer.FirstName, SortDirection.Descending);

That looks and feels right to me.  How about you?
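Until something like that exists, a lambda-based helper in the spirit of the Order method above can approximate property paths today.  Here’s a minimal sketch (PropertyPath.Of and the Customer type are my own names; the result is still a string underneath, but every segment of the path is now checked by the compiler and visible to Rename):

using System;
using System.Linq.Expressions;
using System.Text;

public static class PropertyPath
{
    // renders a member-access chain like c => c.Company.CompanyName.Text
    // as the string "Company.CompanyName.Text"
    public static string Of<T>(Expression<Func<T, object>> Selector)
    {
        var Body = Selector.Body;

        // value-type members arrive wrapped in a Convert node
        if (Body.NodeType == ExpressionType.Convert)
            Body = ((UnaryExpression)Body).Operand;

        var Path = new StringBuilder();
        var Member = Body as MemberExpression;

        while (Member != null)
        {
            if (Path.Length > 0)
                Path.Insert(0, ".");
            Path.Insert(0, Member.Member.Name);
            Member = Member.Expression as MemberExpression;
        }

        return Path.ToString();
    }
}

// usage with the earlier Binding example:
Binding NameTextBinding = new Binding(PropertyPath.Of<Customer>(c => c.Name)) { Source = customer1 };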

Summary

There are many factors in developer productivity, and I’ve established refactoring as one of them.  In this article I discussed tooling and coding practices that support or frustrate refactoring.  We took a deep look into the most important refactoring we have at our disposal, Rename, and examined how to get the greatest value out of it in terms of personal habits as well as long-term tooling vision and language innovation.  I proposed including property paths in language syntax due to their general usefulness and their ability to solve a whole class of problems that have traditionally been solved with problematic “magic strings”.

It gives me hope to see the growing popularity of Fluent Interfaces and the use of lambda expressions to provide coding conventions that can be verified by the compiler, and a growing community of bloggers (such as here and here) writing about the abolition of “magic strings” in their code.  We can only hope that Microsoft program managers, architects, and developers on the Visual Studio and .NET Framework teams are listening.

Posted in Data Binding, Data Structures, Design Patterns, Development Environment, Dynamic Programming, Functional Programming, Language Innovation, LINQ, Oslo, Silverlight, Software Architecture, User Interface Design, Visual Studio, Visual Studio Extensibility, Windows Forms | Leave a Comment »

Living & Working in Sunny Aruba

Posted by Dan Vanderboom on September 5, 2009

I am thrilled to finally be living in Aruba, at least for the month of September.  This is an experiment in remote working, and an experiment in living outside the United States.  “Why Aruba?” you ask.  Why not?  Aruba has weather that’s perfect for the beach year round, lies safely outside the hurricane belt, and has one of the highest per capita incomes in the Caribbean, making it a very safe and happy place.  In fact, their license plate tag line is “One Happy Island”.

Indeed it is!  Everyone here has been extremely friendly.  The population is ethnically diverse and many languages can be heard.  Residents generally speak Papiamento, Spanish, English, and Dutch, and I often hear German, French, Japanese, and other languages I can’t yet identify.  It seems common for people living here to speak six or more languages.  Being a lover of languages, I hope to pick up as much as I can while I have the opportunity.

I planned many months ahead of time, but found a paucity of information available online and have had to wing it for many aspects of the trip, which just makes it more of an adventure.  Aruban websites are geared toward mainstream tourism and high-profile resort hotel-casinos (many of which are beautiful), but I was looking for longer-term residency, and a bargain at that.  I settled for a cheap room off the beaten path, which cost about the same for a month as a hotel room would for a week.  As it turned out, I was upgraded for free to a nice two-bedroom condominium due to last-minute rescheduling of my original room.  I’m a ten-minute walk from Palm Beach, a two-hour walk from the capital of Oranjestad, and at about 20 miles by 5 miles, Aruba is large enough to keep me busy exploring but small enough to make exploring most of it possible within my month here.

Do You Ever Work?

Yes, I work on projects for several customers while I’m here.  I found a fantastic Dutch cafe with free Internet called Cafe Rembrandt, with a wonderful staff.  I have plugs to power my laptop, and I use Skype or iCall to make calls to customers.  Both of these have iPhone applications.  With them, I pay $0.20 – $0.27 per minute for calls.  Without them, through AT&T (and through SETAR, the cell and wi-fi provider for the island), I’d be paying an outrageous $1.69 per minute.  This limits me to making calls from free Internet hotspots, though I could pay SETAR $70 per month for unlimited access to the wireless network that blankets the popular parts of the island.

From a technical communication perspective, it’s all working well so far.  Because I’m working on smaller projects and my customers are geographically distributed anyway, I’m not running up against many of the hurdles that would appear on larger projects, so it’s a good way to dip one foot into the water without jumping in all the way on day one.  Working side-by-side in person with other members on larger projects is always the highest-bandwidth method of communicating, but remote working scenarios are becoming more and more common and have many benefits.  The only real way to identify the challenges these scenarios impose is to put yourself into them again and again, and deal with the issues as they come up, finding solutions to problems, working around limitations, and exploiting the advantages that decentralization provides.

Getting Around & Communicating

Being an avid runner and hiker, I’ve walked about four hours a day since I’ve been here, pushing myself as I usually do.  The buses, however, are air-conditioned, cheap (about $1.30 per trip just about anywhere), clean, and safe, so I always have an easy way home when I’m completely exhausted.  They’ll go anywhere you need them to, so renting a vehicle is unnecessary, but car rentals are reasonably priced if you want one.  If you do rent, make sure to go to your local AAA and get an International Driver’s License before coming.  Also check AAA and tourist books for coupons, which can take 10–20% off listed rates.

Phones are available for rent, or you can use your existing phone as long as your carrier allows international roaming (you may have to call them to authorize that feature).  AT&T customers need to sign up for the World Traveler plan.  I use mine only for Google Maps navigation and to check email periodically, as the data rates are outrageously expensive if you go over your limit (over $5 per MB).

Wrapping Up

I could write many pages more about my few days here already, but instead I’ll conclude with a few of the pictures I’ve taken from my iPhone.  If you have your own stories about Aruba, or living and working abroad or remotely in general and the lessons you learned, I’d love to hear about them.

Posted in Aruba, Remote Working | 5 Comments »