Critical Development

Language design, framework development, UI design, robotics and more.

Archive for the ‘Conferences’ Category

Cloud Slam ‘09 Conference

Posted by Dan Vanderboom on April 14, 2009

If you’re interested in Cloud Computing, you should consider signing up for Cloud Slam, a very inexpensive four-day virtual conference.  You can attend from the comfort of your home (or local wine or coffee shop), and have access to about 100 hours of sessions for only $52.  It goes from April 20-24.

Speakers include such industry leaders as Stephen Herrod, CTO of VMware; Simon Crosby, CTO of Citrix Systems; Werner Vogels, CTO of Amazon.com; and many more.

I can’t say I’ll see you there, but I’m definitely looking forward to it.  It should be a great source of information for what industry leaders are thinking and where cloud computing is headed.

Posted in Cloud Computing, Conferences | Leave a Comment »

ReMIX09 in Chicago

Posted by Dan Vanderboom on April 1, 2009

I’m taking the train with Mr. Payne down to Chicago for the first stop of the Mix It Up tour, hosted by the CD2 user group.  It starts at 6:30pm today (April 1st, no joke), at:

200 S. Wacker Dr., 15th Floor
Chicago, IL 60606

This is a Microsoft sponsored event to bring the MIX09 technology announcements to those of us who couldn’t make it to MIX.  If you weren’t able to go, you can still watch the recorded MIX session videos.

You can find more information about the Mix It Up tour dates and locations on Damon’s blog.

Posted in Conferences | Leave a Comment »

MSDN Developer Conference in Chicago

Posted by Dan Vanderboom on January 13, 2009

I just got home to Milwaukee from the MSDN Developer Conference in Chicago, about a two-hour drive.  I knew that it would be a rehash of the major technologies revealed at the PDC, which I attended in November, so I wasn’t sure how much value I’d get out of it, but I had a bunch of questions about their new technologies (Azure, Oslo, Geneva, VS2010, .NET 4.0, new language stuff), and it just sounded like fun to go out to Fogo de Chao for dinner (a wonderful Brazilian steakhouse, with great company).

So despite my reservations, I’m glad I went.  I think it also helped that I’ve had since November to research and digest all of this new stuff, so that I could be ready with good questions to ask.  There’ve been so many new announcements, it’s been a little overwhelming.  I’m still picking up the basics of Silverlight/WPF and WCF/WF, which have been out for a while now.  But that’s part of the fun and the challenge of the software industry.

Sessions

With some last minute changes to my original plan, I ended up watching all four Azure sessions.  All of the speakers did a great job.  That being said, “A Lap Around Azure” was my least favorite content because it was so introductory and general.  But the opportunity to drill speakers for information, clarification, or hints of ship dates made it worth going.

I was wondering, for example, if the ADO.NET Data Services Client Library, which talks to a SQL Server back end, can also be used to point to a SQL Data Services endpoint in the cloud.  And I’m really excited knowing now that it can, because that means we can use real LINQ (not weird LINQ-like syntax in a URI).  And don’t forget Entities!
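To make concrete what that buys us, here’s a minimal sketch of querying a Data Services endpoint with the client library and plain LINQ; the endpoint URI and the Customer entity are hypothetical stand-ins, not a real service:

using System;
using System.Data.Services.Client;  // ADO.NET Data Services client library
using System.Linq;

public class Customer
{
    public int CustomerID { get; set; }
    public string Name { get; set; }
}

class Program
{
    static void Main()
    {
        // Hypothetical endpoint; per the session, this could just as well be
        // a SQL Data Services endpoint in the cloud.
        var context = new DataServiceContext(
            new Uri("https://example.com/MyData.svc"));

        // Real LINQ: the query is translated into the service's URI syntax for us.
        var query =
            from c in context.CreateQuery<Customer>("Customers")
            where c.Name.StartsWith("A")
            select c;

        foreach (var customer in query)
            Console.WriteLine(customer.Name);
    }
}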

I also learned that though my Mesh account (which I love and use every day) is beta, there’s a CTP available for developers that includes new features like tracking of Mesh Applications.  I’ve been thinking about Mesh a lot, not only because I use it, but because I wanted to determine if I could use the synchronization abilities in the Mesh API to sync records in a database.

<speculation Mode="RunOnSentence">
If Microsoft is building this entire ecosystem of interoperable services, and one of them does data storage and querying (SQL Data Services), and another does synchronization and conflict resolution (Mesh Services)–and considering how Microsoft is making a point of borrowing and building on existing knowledge (REST/JSON/etc.) instead of creating a new proprietary stack–isn’t it at least conceivable that these two technologies would at some point converge in the future into a cloud data services replication technology?
</speculation>

I’m a little disappointed that Ori Amiga’s Mesh Mobile wasn’t mentioned.  It’s a very compelling use of the Mesh API.

The other concern I’ve had lately is the apparent immaturity of SQL Data Services.  As far as what’s there in the beta, it’s tables without enforceable schemas (so far), basic joins, no grouping, no aggregates, and a need to manually partition across virtual instances (and therefore to also deal with the consequences of that partitioning, which affects querying, storage, etc.).  How can I build a serious enterprise, Internet-scale system without grouping or aggregates in the database tier?  But as several folks suggested and speculated, Data Services will most likely have these things figured out by the time it’s released, which will probably be the second half of 2009 (sooner than I thought).

Unfortunately, if you’re using Mesh to synchronize a list of structured things, you don’t get the rich querying power of a relational data store; and if you use SQL Data Services, you don’t get the ability to easily and automatically synchronize data with other devices.  At some point, we’ll need to have both of these capabilities working together.

When you stand back and look at where things are going, you have to admit that the future of SQL Data Services looks amazing.  And I’m told this team is much further ahead than some of the other teams in terms of robustness and readiness to roll out.  In the future (post 2009), we should have analytics and reporting in the cloud, providing Internet-scale analogues to their SQL Analysis Server and SQL Reporting Services products, and then I think there’ll be no stopping it as a mass adopted cloud services building block.

Looking Forward

The thought that keeps repeating in my head is: after we evolve this technology to a point where rapid UX and service development is possible and limitless scaling is reached in terms of software architecture, network load balancing, and hardware virtualization, where does the development industry go from there?  If there are no more rungs of the scalability ladder we have to climb, what future milestones will we reach?  Will we have removed the ceiling of potential for software and what it can accomplish?  What kind of impact will that have on business?

Sometimes I suspect the questions are as valuable as the answers.

Posted in ADO.NET Data Services, Conferences, Distributed Architecture, LINQ, Mesh, Oslo, Service Oriented Architecture, SQL Analysis Services, SQL Data Services, SQL Reporting Services, SQL Server, Virtualization, Windows Azure | 1 Comment »

The Future of Programming Languages

Posted by Dan Vanderboom on November 6, 2008

Two of the best sessions at the PDC this year were Anders Hejlsberg’s The Future of C# and a panel on The Future of Programming.

A lot has been said and written about dynamic programming, metaprogramming, and language syntax extensions–not just academically over the past few decades, but also as a recently growing buzz among the designers and users of mainstream object-oriented languages.

Anders Hejlsberg

Dynamic Programming

After a scene-setting tour through the history and evolution of C#, Anders addressed how C# 4.0 would allow much simpler interoperation between C# and dynamic languages.  I’ve been following Charlie Calvert’s Language Futures website, where they’ve been discussing these features early on with the development community.  It’s nice to see how seriously they take the feedback they’re getting, and I really think it’s going to have a positive impact on the language as a whole.  Initial thoughts revolved around a new block construct, something like dynamic { DynamicStuff.SomeUndefinedProperty = "whatever"; }.

But at the PDC we saw that dynamic will instead be a type for our dynamic objects, so that dynamic lookup of members is only allowed on variables declared that way.  Anders’ demo showed off interactions with JavaScript and Python, as well as Office via COM, all without the ugly Type.Missing parameters (optional parameter support also played a part in that).  Other ideas revolved around easing Reflection access and providing dynamic access to XML document nodes.
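Based on what was shown, usage looks something like this (a sketch of the announced behavior; the Calculator class is just a stand-in for the JavaScript, Python, or COM objects from the demo):

using System;

class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

class Program
{
    static void Main()
    {
        // 'dynamic' is a static type whose member lookups are deferred to runtime,
        // so dynamic dispatch is confined to variables declared this way.
        dynamic calc = new Calculator();
        int sum = calc.Add(10, 20);   // resolved at runtime; no Type.Missing noise
        Console.WriteLine(sum);       // 30
    }
}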

Meta-Programming

At the end of his talk, Anders showed a stunning demo of metaprogramming working within C#.  It was an early prototype, so not all language features were supported, but it worked similarly to Eval: the code was constructed inside a string and then compiled at runtime.  It was flexible and powerful enough that he could create delegates to functions that he Eval’ed up into existence.  Someone in the audience asked how this was different from Lisp macros, to which Anders replied: “This is basically Lisp macros.”
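Anders’s prototype isn’t public, but the flavor of it, building code in a string, compiling it at runtime, and binding a delegate to the result, can be approximated today with the CodeDom compiler API.  A rough analogue, not the prototype itself:

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class EvalDemo
{
    static void Main()
    {
        // The code to compile lives in a "magic string"...
        string source = @"
            public static class Generated
            {
                public static int Square(int x) { return x * x; }
            }";

        var options = new CompilerParameters { GenerateInMemory = true };
        CompilerResults results =
            new CSharpCodeProvider().CompileAssemblyFromSource(options, source);

        // ...and we bind a delegate to the method we just Eval'ed into existence.
        var method = results.CompiledAssembly.GetType("Generated").GetMethod("Square");
        var square = (Func<int, int>)Delegate.CreateDelegate(typeof(Func<int, int>), method);

        Console.WriteLine(square(7));  // 49
    }
}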

Before you get too excited (or worried) about this significant bit of news, Anders made no promises about when metaprogramming would be available, and he subtly suggested that it may very well be a post-4.0 feature.  As he said in the Future of Programming Panel, however: “We’re rewriting the compiler in managed code, and I’d say one of the big motivators there is to make it a better metaprogramming system, sort of open up the black box and allow people to actually use the compiler as a service…”

Regardless of when it arrives, I hope they will give serious consideration to providing syntax checking of this macro or meta code, instead of treating it blindly at compile-time as a “magic string”, as has so long plagued the realm of data access.  After all, one of the primary advantages of LINQ is to enable compile-time checking of queries: to enforce not only strict type checking, but also, more fundamentally, to ensure that data sources and their members are valid.  The irregularity of C#’s syntax, as opposed to Lisp’s, will make that more difficult (thanks to Paul for pointing this out), but I think most developers will eventually agree it’s a worthwhile cause.  Perhaps support for nested grammars in the generic sense will set the stage for enabling this feature.

Language Syntax Extensions

If metaprogramming is about making the compiler available as a service, language extensions are about making the compiler service transparent and extensible.

The majority (but not all) of the language design panel stressed caution in evolving and customizing language syntax and discussed the importance of syntax at length, but they’ve been considering the demands of the development community seriously.  At times Anders vacillated between trying to offer alternatives and admitting that, in the end, customization of language syntax by developers would prevail; and that what’s important is how we go about enabling those scenarios without destroying our ability to evolve languages usefully, avoiding their collapse from an excess of ambiguity and inconsistency in the grammar.

“Another interesting pattern that I’m very fond of right now in terms of language evolution is this notion that our static languages, and our programming languages in general, are getting to be powerful enough, that with all of these things we’re picking up from functional programming languages and metaprogramming, that you can–in the language itself–build these little internal DSLs, where you use fluent interface style, and you dot together operators, and you have deferred execution… where you can, in a sense, create little mini languages, except for the syntax.

If you look at parallel extensions for .NET, they have a Parallel.For, where you give the start and how many times you want to go around, and a lambda which is the body you want to execute.  And boy, if you squint, that looks like a Parallel For statement.

But it allows API designers to experiment with different styles of programming.  And then, as they become popular, we can pick them up and put syntactic veneers on top of them, or we can work to make languages maybe even richer and have extensible syntax like we talked about, but I’m encouraged by the fact that our languages have gotten rich enough that you do a lot of these things without even having to have syntax.” – Anders Hejlsberg

On one hand, I agree with him: the introduction of lambda expressions and extension methods can create some startling new syntax-like patterns of coding that simply weren’t feasible before.  I’ve written articles demonstrating some of this, such as New Spin on Spawning Threads and especially The Visitor Design Pattern in C# 3.0.  And he’s right: if you squint, it almost looks like new syntax.  The problem is that programmers don’t want to squint at their code.  As Chris Anderson has noted at the PDC and elsewhere, developers are very particular about how they want their code to look.  This is one of the big reasons behind Oslo’s support for authoring textual DSLs with the new MGrammar language.
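As a trivial illustration of that squint-and-it’s-syntax effect (my own sketch, not something from the talks), an extension method plus a lambda can impersonate a new loop construct:

using System;

static class FlowExtensions
{
    // An extension method that, if you squint, reads like a new loop statement.
    public static void Times(this int count, Action<int> body)
    {
        for (int i = 0; i < count; i++)
            body(i);
    }
}

class Demo
{
    static void Main()
    {
        // Reads almost like syntax, but it's only a method call taking a lambda.
        5.Times(i => Console.WriteLine("iteration " + i));
    }
}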

One idea that came up several times (and which I alluded to above) is the idea of allowing nested languages, in a similar way that LINQ comprehensions live inside an isolated syntactic context.  C++ developers can redefine many operators in flexible ways, and this can lead to code that’s very difficult to read.  This can perhaps be blamed on the inability of the C++ language to provide alternative and more comprehensive syntactic extensibility points.  Operators are what they have to work with, so operators are what get used for all kinds of things, which change per type.  But their meaning gets so overloaded, literally, that they lose any obvious (context-free) meaning.

But operators don’t have to be non-alphabetic tokens, and new keywords or symbols could be introduced in limited contexts: as a modifier for a member definition in a type (to appear alongside visibility, overload, override, and shadowing keywords), within a delimited block of code such as an r-value, or in a curly-brace block for new flow control constructs (one of my favorite ideas and an area most in need of extensions).  Language extensions might also be limited in scope to specific assemblies, only importing extensions explicitly, giving library authors the ability to customize their own syntax without imposing a mess on consumers of the library.

Another idea would be to allow the final Action delegate parameter of a function to be expressed as a curly-brace-delimited code block following the function call, in lieu of specifying the parameter within parentheses, and removing the need for a semicolon.  For example, with a method defined like this:

public static class Parallel
{
    // Action delegate defined last, to take advantage of C# syntactic sugar
    public static void For(long start, long count, Action action)
    {
        // TODO: implement
    }
}

…a future C# compiler might allow you to write code like this:

Parallel.For(0, 10)
{
    // add code here for the Action delegate parameter
}

As Dr. T points out to me, however, the tricky part will consist of supporting non-local returns: in other words, when you call return inside that delegate’s code block, you really expect it to return from the enclosing method, not just the anonymous method defined by the delegate parameter.  Support for continue or break would also make for a more intuitive fit.  If there’s one thing Microsoft does right, it’s language design, and I have a lot of confidence that issues like this will continue to be recognized and ultimately implemented correctly.  In reading their blogs and occasionally sharing ideas with them, it’s obvious they’re as passionate about the language and syntax as I am.
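To make that return pitfall concrete, here’s the lambda form that works today (a sketch using the Parallel Extensions API, still a CTP as I write this):

using System;
using System.Threading.Tasks;

class LocalReturnDemo
{
    static void Process()
    {
        Parallel.For(0, 10, i =>
        {
            if (i == 5)
                return;   // exits only this iteration's lambda, not Process()
        });

        // With the proposed block syntax, many developers would expect the
        // 'return' above to have exited Process() before reaching this line.
        Console.WriteLine("Process() ran to completion.");
    }

    static void Main() { Process(); }
}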

The key for language extensions, I believe, will be to provide more structured extensibility points for syntax (such as control flow blocks), instead of opening up the entire language for arbitrary modification.  As each language opens up some new aspect of its syntax for extension, a number of challenges will surface that will need to be dealt with, and it will be critical to solve these problems before continuing on with further evolution of the language.  Think of all we’ve gained from generics, and the challenges of dealing with a more complex type system we’ve incurred as a result.  We’re still getting updates in C# 4.0 to address shortcomings of generics, such as issues regarding covariance and contravariance.  Ultimately, though, generics were well worth it, and I believe the same will be said of metaprogramming and language extensions.

Looking Forward

I’ll have much more to say on this topic when I talk about Oslo and MGrammar.  The important points to take away from this are that mainstream language designers are taking these ideas to heart now, and there are so many ideas and options out there that we can and will experiment to find the right combination (or combinations) of both techniques and limitations to make metaprogramming and language syntax extensions useful, viable, and sustainable.

Posted in Conferences, Design Patterns, Dynamic Programming, Functional Programming, Language Extensions, LINQ, Metaprogramming, Reflection, Software Architecture | 1 Comment »

A First Look at Windows 7

Posted by Dan Vanderboom on October 30, 2008

During the keynote on Tuesday, Ray Ozzie made a point of stressing the advantages of each of three user interaction modes: desktop, phone, and web; and all the speakers’ topics revolved around this central theme.  The key points were that systems which use all three are capable of delivering the greatest value, that providing an integrated experience across all three will become increasingly common, and that Microsoft is committed to encouraging these platforms to work together.  This will be accomplished through Windows Azure (cloud services), Live Services, SQL Services, and so on.  (See my article on Windows Azure for more details.)

Ray Ozzie did a great job presenting.  I’ve seen Bill Gates speak, and while he’s obviously very intelligent, he left a lot to be desired when it came to projecting a presence in front of an audience.  I have a lot of faith in Microsoft under Ray’s direction; he’s obviously a charismatic leader with a passion for technology and the value it provides.  Humorously enough, Ray’s favorite big word during his Tuesday keynote was “nascent”, which became a talking point for several of the speakers: notably Don Box and Chris Anderson.  So let’s talk more about these nascent technologies!


The Taskbar & UAC Improvements

You have to admit, in looking at the taskbar of Windows 7, that it looks a bit more like the Mac’s.  Icons can be reordered by dragging them, similar to Taskbar Shuffle, and text describing the task name is conspicuously absent.  This is a welcome change; with as many windows as I have open throughout the day, this text and the horizontal space it occupies have always been a nuisance, especially as the text becomes squished and abbreviated to the point of uselessness.  The icons can be large or small, and the space between them adjusts depending on how many there are.

Just as multiple windows in the same application can be grouped together in the taskbar today, windows are grouped into these new icon-only buttons.  Hovering over one brings up a submenu in the form of a set of large window thumbnails, and hovering over a thumbnail brings that full window temporarily into view.  You can also close any of these windows from its thumbnail representation (with a small X that appears on hover).  Window grouping sounded like a good idea when it was announced for Vista, but I’ve always found the actual implementation annoying and turn the feature off; Windows 7, however, provides a superior grouping experience that I’m actually looking forward to using.

System tray notifications can be customized, putting the user in complete control over these pop-ups.  You can even control which icons, on an individual basis, appear in your system tray, and which remain hidden.  This is a subtle change, but is nevertheless nice if you have a lot of these icons.

User Account Control (UAC) has also been improved.  A joke was made of the negative feedback they had received about this intrusive feature, and in response, Microsoft has now included a way to control the level of protection UAC provides, similar to the security level settings in Internet Explorer.

Media, Multiple Monitors, and Multi-Touch Support

Media in Windows 7 can be shared across all the computers in your house.  You can play it from Windows Explorer, from Media Player, or from a new lightweight version of Media Player.  Music, videos, and other media can be played to a specific device, such as another computer or your home entertainment system.  A new Explorer context menu option named Play To provides a list of discovered devices.

As a user and strong supporter of multiple monitors, I found the improved support for multiple-monitor scenarios to be among the most exciting announcements.  There have been several enhancements to multiple-monitor setup and configuration, including support for connecting to multiple projectors for presentations, a feature demonstrated and used during the keynote itself.  Another impressive addition is support for multiple monitors in Remote Desktop.  Hopefully this ability will also filter down to Vista and XP.

The multi-touch HP computer, which has been commercially available for about $1,500, was shown off.  Special drivers and updates enable touch for things like scrolling through lists and other containers without applications needing to be aware of the multi-touch hardware, while applications taking advantage of the new Windows 7 multi-touch API can do more, such as pinch gestures for zooming or throwing objects around the screen with simulated physics effects.  The HP computer uses two cameras in the upper corners of the screen, which means it is subject to shadow points that four-camera and (to a greater extent) rear-projected multi-touch panels do not have.  Shadow points occur when one touch happens higher on the screen (within view of the cameras) and a touch or motion occurs below it, obstructed by the higher touch point.  Some of this can be compensated for by algorithms that make good guesses about touch and motion based on position, velocity, and the assumption of continuity of motion, similar to the way our own brains fill in details we’re unable to perceive.

The primary disadvantage of the HP touch computer is the tight coupling between the multi-touch screen and the rest of the computer hardware.  I personally feel this is a bad design, though I’m sure it serves some business need to lock consumers into HP computers this early in the game, when there aren’t many multi-touch monitor options out there.  At the Windows 7 booth on the conference floor, however, they did have a 42-inch LG monitor set up with a multi-touch add-on screen that fit right over the monitor.  The company that manufactures these, NextWindow, has offerings for screens as large as 120 inches, and companies like IdentityMine are already developing software for these units.

Surface

Surface was a big hit.  With multi-person games to play, a photo hunt for attendees at PDC, and hands-on labs to explore multi-touch development, everyone really got into these fun devices, which were scattered all over the conference.  I believe there was one at Tech Ed, but they really went big for PDC.  Unfortunately, these table devices have a price tag of about $13,500.  Compared to open source versions that go for $1,500 (albeit with some assembly required), and other options like the overlay screens offered by NextWindow, I won’t be jumping into Surface development anytime soon.

The application in the photo below is a digital DJ.  Sound loops can be tossed into the center.  Outside the circle, they’re silent.  The closer they get to the center of the blue area, the louder and more dominant they are.  Different styles of music were presented, and here we were playing some funky techno.

[Photo: the digital DJ application running on Surface]

Microsoft Research’s keynote presentation on Wednesday was really fantastic.  In addition to a new version of the World Wide Telescope (which has the potential to change the way students discover, explore, and learn about astronomy), they revealed a new technology (in its early stages) called Second Light, which builds upon Surface and is able to project two images: one that you see on the Surface computer itself, and a second that only becomes visible when another diffuse surface is placed above the Surface table.  This can be a cheap piece of plastic or even some tracing paper.  One example displayed satellite images on the primary surface, and when the tracing paper was moved above it, you could see a map-only view overlaid on top.  The amazing part was that you could raise the second surface off the table, into the air, at virtually any angle, and the Surface computer compensated for the angle by pre-distorting the image.  In addition, when a piece of plastic was used, the infrared camera was able to perceive and interpret touch and multi-touch on the second surface hovering several inches or more above the table!

Devices and Printers

Devices on your network, such as printers and other computers, appear in a nice visual explorer where they can be manipulated.  When a laptop is moved from your home to your office (and back), your default printer will change depending on your current environment.  Will they do the same thing with auto-detection of monitors, including multiple-monitor arrangements?  Let’s hope so!


Performance & Miscellaneous

Windows 7 uses the same kernel and driver model that Windows Server 2008 and Windows Vista use, so there will be no need for a huge reworking of these fundamentals.  However, a great deal of work has gone, and continues to go, into optimizing performance and minimizing resource requirements: for Windows startup, application startup, and much more.  Windows 7 was shown running on a tiny netbook with a 1 GHz processor and 1 GB of RAM.  Try doing that with Vista!


Virtualization is another key technology with many improvements, such as Windows 7’s ability to boot off virtual hard drives, the ability to keep a boot image frozen (see my article on setting up such an environment with VMware), and application virtualization (versus whole-machine virtualization).  Any virtual hard drive file (a .vhd) can be mounted in the Disk Management utility.

Conclusion

Windows 7 promises a lot of great new features.  Whether they’ll be compelling enough for businesses to upgrade from XP and Vista, especially for those that didn’t see the ROI on Vista, remains to be seen.  As a lucky attendee of the PDC, I’ll be running the Windows 7 CTP they gave me to learn all I can about it.  As I discover both its great features and its buggy failures, I’ll be writing more about it.  For the rest of you, it will be available around April 2009, when it reaches beta release.

Other news and discussion can be found on the Engineering Windows 7 blog, and there’s a good article at Ars Technica with screenshots.

Posted in Conferences, Research, Virtualization, Windows 7 | 2 Comments »

Windows Azure: Internet Operating System

Posted by Dan Vanderboom on October 27, 2008


As I write this during the keynote of PDC 2008, Ray Ozzie is announcing (with a great deal of excitement) a new Windows-branded Internet operating system called Windows Azure.  Tipping their hats to work done by Amazon.com with its EC2 (Elastic Compute Cloud) service, Microsoft is finally revealing its vision for a world-wide, scalable, cloud services platform that looks forward toward the next fifty years.

Windows Azure, presumably named for the color of the sky, is being released today as a CTP in the United States, and will be rolled out soon across the rest of the planet.  This new Internet platform came together over the past few years after a common set of scalable technologies emerged behind the scenes across MSN, Live, MSDN, Xbox Live, Zune, and other global online services.  Azure is the vehicle for delivering that expertise as a platform to Microsoft developers around the world.

SQL Server Data Services, now called SQL Services (storage, data mining, reporting), .NET Services (federated identity, workflows, etc.), SharePoint Services, Live Services (synchronization), and Dynamics/CRM Services are provided as examples of services moving immediately to Windows Azure, with many more on their way, both from Microsoft and from third parties such as Bluehoo.com, a social networking technology using Bluetooth on Windows Mobile cell phones to discover and interact with others nearby.

The promise of a globally distributed collection of data centers offering unlimited scalability, automatic load-balancing, intelligent data synchronization, and model-driven service lifecycle management is very exciting.  Hints of this new offering could be seen in podcasts, articles, and TechEd sessions earlier this year on cloud services in general: talk of Software as a Service (SaaS), Software+Services, the Internet Service Bus, and the ill-named BizTalk Services.

The development experience looks pretty solid, even this early in the game.  Testing can be done on your local development machine, without having to upload your service to the cloud, and all of the cloud services APIs will be available, all while leveraging existing skills with ASP.NET, web services, WCF and WF, and more.  Publishing to the cloud involves the same Publish context menu item you’re used to in deploying your web applications today, except that you’ll end up at a web page where you’ll upload two files.  Actual service start time may take a few minutes.

Overall, this is a very exciting offering.  Given Microsoft’s commitment to building and supporting open platforms, it’s hard to imagine what kinds of products and services this will lead to, or how it will catalyze the revolution to cloud services that has already begun.

Posted in Conferences, Distributed Architecture, Service Oriented Architecture, Software Architecture, Virtualization | Leave a Comment »

Going to the Professional Developer Conference (PDC)

Posted by Dan Vanderboom on September 20, 2008

This year will be my first at PDC, and I’m really looking forward to it, especially since it doesn’t come around every year.  I was at Tech Ed in Orlando earlier this year, which focuses on existing and imminent Microsoft products, and went to Dev Connections a few years back, so I’m really excited to check out Los Angeles and what PDC has to offer.  From several hints I’ve heard from Microsoft employees, this should be a very exciting conference, full of good surprises.

If you’re planning on going to PDC this year and are interested in hanging out for a couple drinks (okay, probably quite a few drinks), picking my brain, talking about development and technology in-depth, or just arguing with me on any subject, feel free to leave a comment.  I’m always fascinated by new people, personalities, and perspectives.

In case you can’t make it, I’ll be sure to take lots of notes and pictures to share the highlights of my experience there, and I’ll be bringing my own camera this time (I learned my lesson with disposables at Tech Ed).

Posted in Conferences | Leave a Comment »

TechEd 2008

Posted by Dan Vanderboom on June 9, 2008

If you weren’t able to make it to TechEd this year, you really missed out on a fantastic conference and countless opportunities to explore, learn, meet, and connect.

image

I didn’t bring a digital camera, so I take ultimate responsibility for the results, but I was duped by Kodak’s very misleading marketing when I bought a couple of their disposable “digital” cameras.  I found out while developing the film at Walgreens that they’re actually film cameras.  Apparently Kodak gets away with calling them digital because the price of the camera includes having the pictures burned to a CD, which is a digital object.  I still don’t get how the camera itself can be called digital.  This is dishonest as far as I’m concerned.  Shame on you, Kodak.

So aside from 50 grainy pictures (of memories that are fuzzy to begin with, due to closing the bar every night of the week), it was a great time.  From sessions on robotics and game development, to Carl Franklin jamming on acoustic guitar at the conference center, to meeting and talking with Microsoft employees and others about emerging technologies, and VIP and MVP parties at Charley’s Steakhouse (phenomenal food and service) and House of Blues (thanks Beth!  hi Theresa!), there was something there for every-nerd.

Here’s another bad picture of something I found pretty funny: it’s Windows rebooting on a kiosk at Universal Studios and informing us that we may want to start in safe mode.

[Photo: a Universal Studios kiosk showing Windows rebooting into the safe-mode prompt]

Here’s one more bad picture, this time of me, at Universal Studios, hanging out with Jaws.

[Photo: me at Universal Studios, hanging out with Jaws]

I paid particular attention to, and even took notes on, the presenters’ speaking styles and skill levels, technical competence, confidence, enthusiasm, audience engagement and participation, and humor, as well as the tools they used for zooming, screen annotation, altering UI and font sizes for the audience, etc.  I’ve given some serious thought to submitting proposals for future conferences, and during some of the sessions I couldn’t help but think, “That should be me up there!”

Overall, TechEd has gotten me excited, and sessions often left me wanting to write tons of code and build lots of new programs, from small but useful Pocket PC apps to radical new ideas for libraries, UI frameworks, and robotics control systems.  As The Damonator accurately explained, conferences like TechEd are great for getting you re-energized about development.  It’s been a few years since I was at DevConnections, and I hope I’ll be able to attend these events (PDC later this year, for example) more frequently in the future.

Bill Gates’ Keynote

Bill Gates gave his last public speech as a full-time Microsoft employee Tuesday morning.  I’ve seen some videos of him online, and I wasn’t blown away by his presenting style.  It’s not very smooth, and he doesn’t seem very comfortable going through a rehearsed script.

However, when it came time to answer audience questions, his intelligence shone through in spades.  His answers were insightful, articulate, and substantive, even when the questions were confusing, long-winded, or occasionally really lame.

Robots

Toward the end of Bill Gates’ keynote, a robot rolled out balancing on two wheels, featuring very articulate arms and an LCD screen with a still picture of Steve Ballmer’s head: the Ballmer-bot is $60,000 of hardware, and I can’t even guess the cost of its design and software development.  It balanced on its wheels while the arms extended (changing its center of gravity, which requires compensation), and it announced loudly, “Developers! Developers! Developers!”  Over and over again.  Very funny and well done.  The Ballmer-bot handed Gates his “lifetime Xbox Live membership”.  The only disappointing part was the wire that connected this humanoid robot to some kind of game controller.  Why wasn’t it wireless?  As someone pointed out to me, the last thing they wanted during Gates’ last speech was for this robot to get away from them and launch itself into the crowd, injuring someone.  So it must still be in beta.  🙂

I had a chance to meet and talk with Nicolas Delmasso from SimplySim (located in France).  They are experts in 3D simulation.  SimplySim was involved in creating the simulation environment for Microsoft Robotics Developer Studio, which is based on XNA Game Studio.  SimplySim will likely be working soon on the physics support needed for flight (helicopters, airplanes, etc.), as that has been so frequently requested.  How cool would it be to program autonomous aircraft for search and rescue or firefighting scenarios?  RoboChamps could create some amazing new competitions based on this.

I also attended a session called Software + Services + Robots, which I think is a clever name.  This was about building the RoboChamps competition itself and all of the technology involved, including social/community aspects, Silverlight media content, writing referee services, cameras that can be watched from their website by spectators, and much more.

Session Highlights

There were so many good sessions to attend.  During a few time slots, I found myself annoyed that there wasn’t much to be excited by, but most of the time slots had so many good sessions scheduled simultaneously, it was difficult to pick just one.  In some cases, I didn’t: I went to one for ten or fifteen minutes, and then changed my mind and went to another.

Unity & Prism – Lightweight IoC & WPF Composite Clients

It was during one of these switch-ups that I wound up catching the tail end of one of Glenn Block’s talks on the Unity and Prism libraries.  Unity is a lightweight IoC dependency-injection container that is almost identical to one I created about two years ago while working for Panatrack, and which I have redesigned working for CarSpot.  Unity does support some things that I didn’t have any need for, and I really like its approach: for example, allowing you to plug in your own module loader and module initializer separately.  Prism is the new composite client framework (they’re cautiously calling it a library now, I think) for WPF, though its concepts can be used in other technologies (like Windows Forms) with some additional work.  It’s essentially a redesign and simplification of the same concepts that appeared in the Smart Client Software Factory, and I’m really excited to see support for patterns like MVC and MVP, which I use extensively.  Prism will work with Silverlight (great news!), but neither Unity nor Prism currently supports Compact Framework.  If I end up using one or both of them, I will likely port them to Compact Framework and contribute to the project on CodePlex so that everyone will benefit.
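For anyone who hasn’t seen it, Unity’s registration-and-resolution API looks roughly like this (a minimal sketch; ILogger and OrderService are my own stand-in types):

using System;
using Microsoft.Practices.Unity;

public interface ILogger { void Log(string message); }

public class ConsoleLogger : ILogger
{
    public void Log(string message) { Console.WriteLine(message); }
}

public class OrderService
{
    private readonly ILogger _logger;

    // The container sees this constructor and injects an ILogger automatically.
    public OrderService(ILogger logger) { _logger = logger; }

    public void PlaceOrder() { _logger.Log("Order placed."); }
}

class Program
{
    static void Main()
    {
        IUnityContainer container = new UnityContainer();
        container.RegisterType<ILogger, ConsoleLogger>();

        // Resolve builds OrderService and satisfies its dependency graph.
        var service = container.Resolve<OrderService>();
        service.PlaceOrder();
    }
}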

XNA Game Development on Zune

This was a great lunch session.  Andrew Dunn explained that Game Studio, DirectX, and a few other products are all owned by the XNA brand, and he demonstrated not only how to create a game from the templates installed with Game Studio, but also how to publish the game on Xbox Live so it can be rated and reviewed by other game developers.  I’m definitely going to join the XNA Creators Club so I can play around with this.

Unfortunately, XNA is not supported for Windows Mobile devices.  Zune was chosen primarily because it’s a fixed target, but as Zune runs some version or subset of the Compact Framework, hopefully a Windows Mobile version will emerge sometime in the not-so-distant future.  With 3D accelerator cards and VGA or better screens appearing in awesome new phones like the HTC Diamond, this could be a hot new gaming platform.  Zune is very limited, of course, but it still sounds like a lot of fun, especially knowing that up to 16 Zunes can play together via the built-in Wi-Fi.

Data Visualization Applications with Windows Presentation Foundation

Tim Huckaby did a great job and attempted to break the record for the most demos done in a single presentation.  I don’t know if he accomplished his goal, but he did do a dazzling number of nice demos.  He showed off the cancer research 3D molecule application (which strangely plugs into SharePoint), and had guest presenters walk through an application that allows administration, monitoring, and flexible visualizations of all of the slot machines in various casinos around the world.

My favorite demo, though, was a system that manages tours of the San Diego Zoo, the largest zoo in the world and apparently impossible to see in its entirety in a single day.  Visitors can specify which animals and attractions they’re interested in, and the system will map out a path and plan for them, making sure they see animals at the best times (while the pandas are eating at 2pm, for example).

Hardcore Reflection

I eat, sleep, and breathe reflection, so it was a special treat to see Dustin Campbell’s 400-level session on this topic.  I still wasn’t sure I would learn much, but I’m glad I went.  From dispelling myths about reflection’s performance and memory consumption problems (which were real prior to .NET 2.0), to seeing some (albeit simple) examples of creating dynamic methods and emitting IL, I got a few nuggets of goodness out of this.
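In the spirit of the session (my own minimal example, not one of Dustin’s), here’s a DynamicMethod that emits the IL for an integer Add and wraps it in a delegate:

using System;
using System.Reflection.Emit;

class EmitDemo
{
    static void Main()
    {
        // Define a dynamic method equivalent to: static int Add(int a, int b)
        var dm = new DynamicMethod("Add", typeof(int),
                                   new[] { typeof(int), typeof(int) });

        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);  // push first argument
        il.Emit(OpCodes.Ldarg_1);  // push second argument
        il.Emit(OpCodes.Add);      // add them
        il.Emit(OpCodes.Ret);      // return the sum

        var add = (Func<int, int, int>)dm.CreateDelegate(typeof(Func<int, int, int>));
        Console.WriteLine(add(2, 3));  // 5
    }
}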

Mock Objects & Advanced Unit Testing

I saw a presentation at a local .NET user group about mock objects, specifically with Rhino Mocks.  Typemock was also mentioned, and something peculiar, interesting, and… amazing was happening: I knew C# itself was incapable of doing what the code up on the screen appeared to do, and then someone asked how it worked.  It turns out that Typemock uses the .NET Profiler API to rewrite code as it executes on the desktop CLR.  A similar approach is used for code coverage in NCover.  Because of the dependency on this API, these tools won’t work for Compact Framework software, and so they’re useless to me.

I do have a plan to bring code coverage to Compact Framework, perhaps even plugging into NCover.  I’ll be writing some articles about that this summer, I’m guessing.

Conclusion

Overall, TechEd was a great experience.  I met a lot of interesting people, was inspired with new ideas, and had seriously geeky conversations with some very smart people.  As I took notes for each session, I found myself jotting down specifications for new applications and tools that I’m eager to start working on, and enhancements and new avenues to explore for ongoing projects that I’ve already blogged about.

Posted in Conferences, Personal, Reflection, Robotics | 2 Comments »