Critical Development

Language design, framework development, UI design, robotics and more.

Archive for the ‘Virtualization’ Category

MSDN Developer Conference in Chicago

Posted by Dan Vanderboom on January 13, 2009

I just got home to Milwaukee from the MSDN Developer Conference in Chicago, about a two-hour drive.  I knew it would be a rehash of the major technologies revealed at the PDC, which I attended in November, so I wasn’t sure how much value I’d get out of it, but I had a bunch of questions about the new technologies (Azure, Oslo, Geneva, VS2010, .NET 4.0, new language features), and it just sounded like fun to go out to Fogo de Chao for dinner (a wonderful Brazilian steakhouse, with great company).

So despite my reservations, I’m glad I went.  I think it also helped that I’ve had since November to research and digest all of this new material, so I could be ready with good questions to ask.  There have been so many new announcements that it’s been a little overwhelming.  I’m still picking up the basics of Silverlight/WPF and WCF/WF, which have been out for a while now.  But that’s part of the fun and the challenge of the software industry.

Sessions

With some last minute changes to my original plan, I ended up watching all four Azure sessions.  All of the speakers did a great job.  That being said, “A Lap Around Azure” was my least favorite content because it was so introductory and general.  But the opportunity to drill speakers for information, clarification, or hints of ship dates made it worth going.

I was wondering, for example, if the ADO.NET Data Services Client Library, which talks to a SQL Server back end, can also be used to point to a SQL Data Services endpoint in the cloud.  And I’m really excited knowing now that it can, because that means we can use real LINQ (not weird LINQ-like syntax in a URI).  And don’t forget Entities!
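To make the distinction concrete, here is roughly what the two styles look like side by side; the endpoint URI, the Customers entity set, and the Customer class are all hypothetical names of my own, not anything announced at the conference:

```csharp
// URI-style query against a Data Services endpoint (filter expressed in the URI itself):
//   GET https://myaccount.data.example.net/v1/Customers?$filter=City eq 'Milwaukee'

// The same query through the ADO.NET Data Services client library, in real LINQ:
var context = new DataServiceContext(new Uri("https://myaccount.data.example.net/v1/"));
var customers = from c in context.CreateQuery<Customer>("Customers")
                where c.City == "Milwaukee"
                select c;
```

The client library translates the LINQ expression into that same `$filter` URI under the covers, which is exactly why being able to point it at a cloud endpoint matters.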

I also learned that though my Mesh account (which I love and use every day) is beta, there’s a CTP available for developers that includes new features like tracking of Mesh Applications.  I’ve been thinking about Mesh a lot, not only because I use it, but because I wanted to determine if I could use the synchronization abilities in the Mesh API to sync records in a database.

<speculation Mode="RunOnSentence">
If Microsoft is building this entire ecosystem of interoperable services, and one of them does data storage and querying (SQL Data Services), and another does synchronization and conflict resolution (Mesh Services)–and considering how Microsoft is making a point of borrowing and building on existing knowledge (REST/JSON/etc.) instead of creating a new proprietary stack–isn’t it at least conceivable that these two technologies would at some point converge in the future into a cloud data services replication technology?
</speculation>

I’m a little disappointed that Ori Amiga’s Mesh Mobile wasn’t mentioned.  It’s a very compelling use of the Mesh API.

The other concern I’ve had lately is the apparent immaturity of SQL Data Services.  As far as what’s there in the beta, it’s tables without enforceable schemas (so far), basic joins, no grouping, no aggregates, and a need to manually partition across virtual instances (and therefore to also deal with the consequences of that partitioning, which affects querying, storage, etc.).  How can I build a serious enterprise, Internet-scale system without grouping or aggregates in the database tier?  But as several folks suggested and speculated, Data Services will most likely have these things figured out by the time it’s released, which will probably be the second half of 2009 (sooner than I thought).
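In the meantime, the only option is to aggregate in the application tier: fan the query out to each manually created partition and combine the partial results yourself.  A minimal sketch of that scatter-gather pattern (the partition layout and record shape here are hypothetical, standing in for per-instance service calls):

```python
from collections import defaultdict

# Hypothetical: three "virtual instance" partitions, each holding a slice of
# the data that SQL Data Services can filter but cannot GROUP BY or SUM.
partitions = [
    [{"region": "WI", "total": 120.0}, {"region": "IL", "total": 75.5}],
    [{"region": "WI", "total": 30.0}],
    [{"region": "IL", "total": 99.5}, {"region": "MN", "total": 10.0}],
]

def regional_totals(partitions):
    """Scatter a query to every partition, then GROUP BY region client-side."""
    sums = defaultdict(float)
    for partition in partitions:          # in reality: one service call each
        for record in partition:
            sums[record["region"]] += record["total"]
    return dict(sums)

print(regional_totals(partitions))  # {'WI': 150.0, 'IL': 175.0, 'MN': 10.0}
```

It works, but every query you’d normally express in one SQL statement becomes code you must write, test, and maintain, which is exactly why grouping and aggregates belong in the database tier.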

Unfortunately, if you’re using Mesh to synchronize a list of structured things, you don’t get the rich querying power of a relational data store; and if you use SQL Data Services, you don’t get the ability to easily and automatically synchronize data with other devices.  At some point, we’ll need to have both of these capabilities working together.

When you stand back and look at where things are going, you have to admit that the future of SQL Data Services looks amazing.  And I’m told this team is much further ahead than some of the other teams in terms of robustness and readiness to roll out.  In the future (post 2009), we should have analytics and reporting in the cloud, providing Internet-scale analogues to their SQL Analysis Server and SQL Reporting Services products, and then I think there’ll be no stopping it as a mass adopted cloud services building block.

Looking Forward

The thought that keeps repeating in my head is: after we evolve this technology to a point where rapid UX and service development is possible and limitless scaling is reached in terms of software architecture, network load balancing, and hardware virtualization, where does the development industry go from there?  If there are no more rungs of the scalability ladder we have to climb, what future milestones will we reach?  Will we have removed the ceiling of potential for software and what it can accomplish?  What kind of impact will that have on business?

Sometimes I suspect the questions are as valuable as the answers.


Posted in ADO.NET Data Services, Conferences, Distributed Architecture, LINQ, Mesh, Oslo, Service Oriented Architecture, SQL Analysis Services, SQL Data Services, SQL Reporting Services, SQL Server, Virtualization, Windows Azure | 1 Comment »

A First Look at Windows 7

Posted by Dan Vanderboom on October 30, 2008

During the keynote on Tuesday, Ray Ozzie made a point of stressing the advantages of each of three user interaction modes: desktop, phone, and web; and all the speakers’ topics revolved around this central theme.  The key points were that systems which use all three are capable of delivering the greatest value, that providing an integrated experience across all three will become increasingly common, and that Microsoft is committed to encouraging these platforms to work together.  This will be accomplished through Windows Azure (cloud services), Live Services, SQL Services, and so on.  (See my article on Windows Azure for more details.)

Ray Ozzie did a great job presenting.  I’ve seen Bill Gates speak, and while he’s obviously very intelligent, he left a lot to be desired when it came to projecting a presence in front of an audience.  I have a lot of faith in Microsoft under Ray’s direction; he’s obviously a charismatic leader with a passion for technology and the value it provides.  Humorously enough, Ray’s favorite big word during his Tuesday keynote was “nascent”, which became a talking point for several of the speakers: notably Don Box and Chris Anderson.  So let’s talk more about these nascent technologies!


The Taskbar & UAC Improvements

You have to admit, looking at the taskbar in Windows 7, that it looks a bit more like the Mac’s.  Icons can be reordered by dragging them, similar to Taskbar Shuffle, and the text describing each task name is conspicuously absent.  This is a welcome change; with as many windows as I have open throughout the day, this text and the horizontal space it occupies have always been a nuisance, especially as the text becomes squished and abbreviated to the point of uselessness.  The icons can be large or small, and the space between them adjusts depending on how many there are.

Just as multiple windows in the same application can be grouped together in the taskbar today, windows are grouped into these new icon-only buttons.  Hovering over one brings up a submenu in the form of a set of large window thumbnails.  Hovering over one of these brings that full window temporarily into view.  You can also close any of these windows from its thumbnail representation (with a small X that appears on hover).  Window grouping sounded like a good idea when it was announced for Vista, but I’ve always found the actual implementation annoying and always turn the feature off.  Windows 7, however, provides a superior grouping experience that I’m actually looking forward to using.

System tray notifications can be customized, putting the user in complete control over these pop-ups.  You can even control which icons, on an individual basis, appear in your system tray, and which remain hidden.  This is a subtle change, but is nevertheless nice if you have a lot of these icons.

User Account Control (UAC) has also been improved.  A joke was made of the negative feedback they had received about this intrusive feature, and in response, Microsoft has now included a way to control the level of protection UAC provides, similar to the security levels in Internet Explorer.

Media, Multiple Monitors, and Multi-Touch Support

Media in Windows 7 can be shared across all of the computers in your house.  You can play media from Windows Explorer, from Media Player, or from a new lightweight version of Media Player.  Music, videos, and other media can also be played to a specific device, such as another computer or your home entertainment system.  A new Explorer context menu option named Play To provides a list of discovered devices.

As a user and strong supporter of multiple monitors, I found the improved support for multiple-monitor scenarios to be among the most exciting announcements.  There have been several enhancements to multiple-monitor setup and configuration, including support for connecting to multiple projectors for presentations, a feature demonstrated and used during the keynote itself.  Another impressive addition is support for multiple monitors in Remote Desktop.  Hopefully this ability will also filter down to Vista and XP.

The multi-touch HP computer was shown off, which has been commercially available for about $1,500.  Special drivers and updates enable touch for things like scrolling through lists and other containers without applications needing to be aware of the multi-touch hardware, while applications that take advantage of the new Windows 7 multi-touch API can do more, such as pinch gestures for zooming or throwing objects around the screen with simulated physics effects.  The HP computer uses two cameras in the upper corners of the screen, which means it is subject to shadow points that four-camera and (to a greater extent) rear-projected multi-touch panels do not have.  Shadow points occur when one touch happens higher on the screen (within view of the cameras), and a touch or motion occurs below it, obstructed by the higher touch point.  Some of this can be compensated for by algorithms that make good guesses about touch and motion based on position, velocity, and the assumption of continuity of motion, similar to the way our own brains fill in details we’re unable to perceive.
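Such guessing can be as simple as dead reckoning: while a touch point is occluded, extrapolate from its last known position and velocity until the cameras reacquire it.  A toy illustration of the idea (my own sketch, certainly not HP’s actual algorithm):

```python
def predict_touch(last_pos, velocity, frames_occluded, dt=1/60):
    """Dead-reckon an occluded touch point: assume it keeps moving at its
    last observed velocity until the cameras can see it again."""
    x, y = last_pos
    vx, vy = velocity
    t = frames_occluded * dt
    return (x + vx * t, y + vy * t)

# Touch last seen at (100, 200) px, moving right at 300 px/s, occluded for 6 frames.
print(predict_touch((100.0, 200.0), (300.0, 0.0), 6))  # roughly (130.0, 200.0)
```

A real implementation would also blend the prediction back toward the measured position once the point reappears, rather than snapping, so the motion stays continuous.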

The primary disadvantage of the HP touch computer is the tight coupling between the multi-touch screen and the rest of the computer hardware.  I personally feel this is a bad design, though I’m sure it serves some business need to lock consumers in to HP computers, at least this early in the game, when there aren’t many multi-touch monitor options out there.  At the Windows 7 booth on the conference floor, however, they did have a 42-inch LG monitor set up with a multi-touch add-on screen that fit right over the monitor.  The company that manufactures these, NextWindow, has offerings for screens as large as 120 inches, and companies like Identity Mine are already developing software for these units.

Surface

Surface was a big hit.  With multi-person games to play, a photo hunt for attendees at PDC, and hands-on labs to explore multi-touch development, everyone really got into these fun devices, which were scattered all over the conference.  I believe there was one at Tech Ed, but they really went big for PDC.  Unfortunately, these table devices have a price tag of about $13,500.  So compared to open source versions that go for $1,500 (albeit with some assembly required), and other options like the overlay screens offered by NextWindow, I won’t be jumping into Surface development anytime soon.

The application in the photo below is a digital DJ.  Sound loops can be tossed into the center.  Outside the circle, they’re silent.  The closer they get to the center of the blue area, the louder and more dominant they are.  Different styles of music were presented, and here we were playing some funky techno.

[Photo: the digital DJ application]

Microsoft Research’s keynote presentation on Wednesday was really fantastic.  In addition to a new version of the World Wide Telescope (which has the potential to change the way students discover, explore, and learn about astronomy), they revealed a new technology (in its early stages) called Second Light, which builds upon Surface and is able to project two images: one that you see on the Surface computer itself, and a second that only becomes visible when another diffuse surface is placed above the Surface table.  This can be a cheap piece of plastic or even some tracing paper.  One example displayed satellite images on the primary surface, and when the tracing paper was moved above it, you could see a map-only view overlaid on top.  The amazing part was that you could raise the second surface off the table, into the air, place it at virtually any angle, and the Surface computer compensated for the angle by pre-distorting the image.  In addition, when using a piece of plastic, the infrared camera was able to perceive and interpret touch and multi-touch on a second surface hovering several inches or more above the table!

Devices and Printers

Devices on your network, such as printers and other computers, appear in a nice visual explorer where they can be manipulated.  When a laptop is moved from your home to your office (and back), your default printer will change depending on your current environment.  Will they do the same thing with auto-detection of monitors, including multiple-monitor arrangements?  Let’s hope so!


Performance & Miscellaneous

Windows 7 uses the same kernel and driver model that Windows Server 2008 and Windows Vista use, so there will be no need for a huge reworking of these fundamentals.  However, a great deal of work has gone, and continues to go, into optimizing performance and minimizing resource requirements: for Windows startup, application startup, and much more.  Windows 7 was shown running on a tiny netbook with a 1 GHz processor and 1 GB of RAM.  Try doing that with Vista!


Virtualization is another key technology with many improvements, such as Windows 7’s ability to boot off virtual hard drives, the ability to keep a boot image frozen (see my article on setting up such an environment with VMWare), and application virtualization (versus whole-machine virtualization).  Any virtual hard drive file (a .vhd) can be mounted in the Disk Management utility.
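For the command-line inclined, the Windows 7 version of diskpart adds vdisk commands that do the same thing; the path here is just an example:

```
DISKPART> select vdisk file="C:\VHDs\dev.vhd"
DISKPART> attach vdisk
  ...the .vhd now appears as an ordinary disk with its own drive letter...
DISKPART> detach vdisk
```

I’ve only seen this demonstrated, not exercised it at length, but it suggests .vhd files are becoming first-class citizens in the OS.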

Conclusion

Windows 7 promises a lot of great new features.  Whether they’ll be compelling enough for businesses to upgrade from XP and Vista, especially for those that didn’t see the ROI on Vista, we have yet to see.  As a lucky attendee of the PDC, I’ll be running the Windows 7 CTP they gave me to learn all I can about it.  As I discover both its great features and buggy failures, I’ll be writing more about it.  For the rest of you, it will be available around April 2009 when it reaches beta release.

Other news and discussion can be found on the Engineering Windows 7 blog, and there’s a good article at Ars Technica with screen shots.

Posted in Conferences, Research, Virtualization, Windows 7 | 2 Comments »

Windows Azure: Internet Operating System

Posted by Dan Vanderboom on October 27, 2008


As I write this during the keynote of PDC 2008, Ray Ozzie is announcing (with a great deal of excitement) a new Windows-branded Internet operating system called Windows Azure.  Tipping their hats to the work done by Amazon.com with its EC2 (Elastic Compute Cloud) service, Microsoft is finally revealing its vision for a world-wide, scalable, cloud services platform that looks forward toward the next fifty years.

Windows Azure, presumably named for the color of the sky, is being released today as a CTP in the United States, and will be rolled out soon across the rest of the planet.  This new Internet platform came together over the past few years after a common set of scalable technologies emerged behind the scenes across MSN, Live, MSDN, XBox Live, Zune, and other global online services.  Azure is the vehicle for delivering that expertise as a platform to Microsoft developers around the world.

SQL Server Data Services, now called SQL Services (storage, data mining, reporting), .NET Services (federated identity, workflows, etc.), SharePoint Services, Live Services (synchronization), and Dynamics/CRM Services are provided as examples of services moving immediately to Windows Azure, with many more on their way, both from Microsoft and from third parties such as Bluehoo.com, a social networking technology using Bluetooth on Windows Mobile cell phones to discover and interact with others nearby.

The promise of a globally distributed collection of data centers offering unlimited scalability, automatic load balancing, intelligent data synchronization, and model-driven service lifecycle management is very exciting.  Hints of this new offering could be seen in podcasts, articles, and TechEd sessions earlier this year on cloud services in general: talk of Software as a Service (SaaS), Software+Services, the Internet Service Bus, and the ill-named BizTalk Services.

The development experience looks pretty solid, even this early in the game.  Testing can be done on your local development machine, without having to upload your service to the cloud, and all of the cloud services APIs will be available, all while leveraging existing skills with ASP.NET, web services, WCF and WF, and more.  Publishing to the cloud involves the same Publish context menu item you’re used to in deploying your web applications today, except that you’ll end up at a web page where you’ll upload two files.  Actual service start time may take a few minutes.

Overall, this is a very exciting offering.  Given Microsoft’s commitment to building and supporting open platforms, it’s hard to imagine what kinds of products and services this will lead to, or how it will catalyze the revolution to cloud services that has already begun.

Posted in Conferences, Distributed Architecture, Service Oriented Architecture, Software Architecture, Virtualization | Leave a Comment »

Misadventures in Pursuit of an Immutable Development Virtual Machine

Posted by Dan Vanderboom on May 23, 2008

Problem

Every three to six months, I end up having to rebuild my development computer.  This one machine is not only for development; it also acts as my communications hub (e-mail client, instant messenger, news and blog aggregator), media center, guitar effects and music recording studio, and whatever other roles are needed or desired at the time.  On top of that, I’m constantly installing and testing beta software, technical previews, and other unstable sneak peeks.  After several months of this kind of pounding, it’s a wonder the whole system doesn’t grind to a complete halt.

This is an expensive and tedious operation, not to mention the time lost to poor performance.  It normally takes me a day and a half, sometimes two days, to rebuild my machine with all of the tools I use on a regular or semi-regular basis.  Drivers, SDKs, and applications have to be installed in the correct order; product keys have to be entered and software activated over the Internet and set up the way I want; wireless network access and VPN connections have to be configured; backups have to be made and application data restored once applications are reinstalled; and there is never a good time for all of the downtime.  A developer’s environment can be a very complicated place!

If it’s not error messages and corruption, it’s performance problems that hurt and annoy the most.  There’s a profound difference in speed between a clean build and one that’s been clogged with half a year or more of miscellaneous gunk.  In Visual Studio, for example, it can mean the difference between a 30-second build and three or four minutes of mindless disk thrashing.

Using an immutable development machine means that any viruses you get, or registry or file corruption that occurs–any problems that arise in the state of the machine–never get saved, and therefore disappear the next time you start it up.  It is important, however, that everything within the environment is set up just the way you want it.  If you set up your image without ever opening Visual Studio, for example, you’ll always be prompted with a choice about the style of setup you want, and then you’ll have to wait for Visual Studio to configure itself for the first time, every time.

Still, if you invest a little today in establishing a solid environment, the benefits and savings over the next year or two can be well worth the effort.  As I discovered over the past week and a half, there are a number of pitfalls and dangers involved.  If you’ve considered setting up something similar, I hope the lessons I’ve learned will be able to save you much of the trouble that I went through.

Solution

After listening to Security Now! and several other podcasts over the past couple of years about virtual machines and how they’re being used in software testing to ensure a consistent starting state, I began thinking of how nice it would be if I could guarantee that my development environment would always remain the same.  If I could get all of my tools installed cleanly to maximize performance and stability, and then freeze the machine that way, while still allowing me to change the state of my source code files and other development assets, I might never have to rebuild my computer again.  At least, not until it’s time to make important software upgrades, but then it would be a controlled update, from which I could easily roll back if necessary.

But how can this be done?  How can I create an immutable development environment and still be able to update my projects and save changes?  By storing mutable state on a separate drive, physical or virtual, which is not affected by virtual machine snapshots.  It turns out to be not so simple, but achievable nonetheless, and I’ll explain why and how.
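One way to realize this split in VMWare terms–an assumption on my part about configuration, not something I’ve battle-tested–is to mark the data disk "independent-persistent", which excludes it from snapshots while the system disk continues to participate in them.  The relevant .vmx entries would look something like this, with hypothetical file names:

```
scsi0:0.present = "TRUE"
scsi0:0.fileName = "DevEnvironment-System.vmdk"
scsi0:1.present = "TRUE"
scsi0:1.fileName = "DevEnvironment-Data.vmdk"
scsi0:1.mode = "independent-persistent"
```

Reverting to a snapshot would then reset the system disk (the immutable part) while leaving the data disk’s contents alone.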

If the perfect environment is to be discovered, I have several more criteria.  First, it has to support, or at least not prevent, the use of multiple monitors.  Second, I’d like to synchronize as much application data as possible across multiple computers.  Third, as I do a lot of mobile device development, I need access to USB and other ports for connecting to devices.

Implementation

For data synchronization across machines, I’ve been using Microsoft’s Mesh.com, which is based on FeedSync and led by Ray Ozzie.  Based on my testing over the past two weeks, it actually works very well.  Though it’s missing many of the features you would expect from a mature synchronization platform and toolset, for the purposes of the goals explained in this article, it’s a pretty good choice and has already saved me a lot of time I would otherwise have spent transferring files to external drives and USB keys, or e-mailing myself files and trying to get around file size and content limitations.  If this is the first time you’ve heard of Mesh, make a point of learning more about it, and sign up for the technical preview to give it a test drive!  (It’s free.)
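FeedSync, for those unfamiliar with it, works by decorating ordinary RSS or Atom items with synchronization metadata: a stable item id, an update counter, and a history of who changed the item and when, which is what lets endpoints merge changes and detect conflicts.  A hand-written, purely illustrative item (the id and endpoint names are made up) might look like this:

```xml
<item xmlns:sx="http://feedsync.org/2007/feedsync">
  <title>notes.txt</title>
  <sx:sync id="item-42" updates="2" deleted="false" noconflicts="false">
    <sx:history sequence="2" when="2008-05-21T09:43:33Z" by="laptop"/>
    <sx:history sequence="1" when="2008-05-20T18:30:02Z" by="desktop"/>
  </sx:sync>
</item>
```

Because the sync state rides along inside the feed itself, any two endpoints that can exchange feeds can synchronize, with no central coordinator required.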

Originally I wanted to use Virtual PC to create my development virtual machine, however as of the 2007 version, it still does not support USB access, so immediately I had to rule it out.  I downloaded a demo of VMWare’s Workstation instead, which does support USB and provides a very nice interface and set of tools for manipulating VMs.

The diagram below illustrates the basic system that I’ve created.  You’ll notice (at the bottom) that I have multiple development environments.  I can set these environments up for different companies or software products that each have unique needs or toolsets, and because they’re portable (unlike normal disk images), I can share them with other developers to get them up and running as quickly as possible.

Ideal Development Environment

Partitioning of the hard drive, or the use of multiple hard drives, is not strictly necessary.  However, as I’m working with a laptop and have only a single disk, I decided to partition it due to the many problems I had setting up all of the software involved.  I rebuilt the machine so many times, it became a real hassle to restore the application data over and over again, so putting it on a separate partition allowed me to reformat and rebuild the primary partition without constantly destroying this data.

My primary partition contains the host operating system, which is Windows XP Professional SP3 in my case.  (If you’re using Vista, be aware that Mesh will not work with UAC (user account control) disabled, which I find both odd and irritating.)  The host OS acts as my communication workstation where I read e-mail, chat over messenger, read blogs and listen to podcasts, surf the Internet and save bookmarks, etc.  I always want access to these functions regardless of the development environment I happen to have fired up at the time.

Mesh is installed only on the host operating system.  Installing it on each virtual machine would mean having multiple copies of the same data on the same physical machine, and clearly this isn’t desirable.  I considered using VMWare’s ESXi Server, which allows you to run virtual machines on the bare metal instead of requiring a host operating system, but as I always want my communications and now also synchronization software running, their Workstation product seemed like an adequate choice.  That’s great, because it’s only $189 at the time I’m writing this, as opposed to $495 for ESXi Server.

With the everyday software taken care of in the host OS, the development virtual machines can be set up unencumbered by these things, simplifying the VM images and reducing the number of friction points and potential problems that can arise from the interaction of all of this software on the same machine.  That alone is worth considering this kind of setup.

Setting up my development VM was actually easier than installing the host OS since I didn’t have to worry about drivers.  VMWare Workstation is very pleasant to use, and as long as the host OS isn’t performing any resource intensive operations (it is normally idle), the virtual machine actually runs at “near native speed” as suggested by VMWare’s website.  The performance hit feels similar to using disk encryption software such as TrueCrypt.  With a 2.4 GHz dual-core laptop, it’s acceptable even by my standards.  I’m planning to start using a quad-core desktop soon, and ultimately that will be a better hardware platform for this setup.

Hiccup in the Plan (Part 1)

The first problem I ran into was in attempting to transfer a virtual machine from one computer to another.  Wanting to set up my new super-environment on a separate computer from my normal day-to-day development machine, I set up VMWare on one computer, and when I thought I had the basics completed, I attempted to transfer the virtual machine to my external hard drive.  Because the virtual disk files were larger than the maximum file size of the external drive’s FAT32 file system (4 GB), I was immediately stopped in my tracks.  I tried copying the files over the local network from one computer to the other, but after 30 minutes of copying, Windows kindly informed me that a failure had occurred and the file could not, in fact, be found.  (Ugh.)  Lesson learned: VMWare has an option, when creating a new virtual machine, to break up virtual disks into 2 GB files.  From that point on, I decided not only to use this option, but also to simply build the virtual machines on the actual target computer, just in case.

The next trick with VMWare was in allowing the virtual machine to share a disk with the host operating system.  My first route was to set up a shared folder.  This is a nice feature, and it allows you to select a folder in your host OS to make visible to the virtual machine.  I thought it would be perfect.  However, Visual Studio saw it as non-local and therefore didn’t trust it.  In order to enable Visual Studio to trust it, you have to change machine-level security configuration (in the VM).  There are two ways of doing this: there is a .NET Configuration tool (mscorcfg.msc) with a nice user interface, and then there is the command-line caspol.exe tool with lots of confusing options and syntax to get right.

Navigating to Administrative Tools, I was stumped to find that I didn’t have this nice GUI tool any more.  I’ve fully converted everything to Visual Studio 2008 and no longer work in 2005, so the last time I built my machine, I ran the VS2008 install DVD.  I learned the hard way that Microsoft no longer includes this tool in the new Visual Studio 2008 install DVD.  I Googled around to discover that Microsoft, for reasons unknown, did in fact decide to remove this tool from their installer, and that if I wanted to have it, I would have to install (first) the .NET 2.0 Redistributable, (second) the .NET 2.0 SDK, and (finally) Visual Studio 2008.  This would mean rebuilding the VM… again.  I tried caspol.exe briefly and finally gave up (the example I found in the forums didn’t work), telling myself that I really did want this GUI tool anyway, and if I was going to set up the perfect development environment, it was worth one more rebuild to get it right.
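For anyone who wants to retry the command-line route: the form usually documented for trusting a network location is to add a URL code group under the LocalIntranet zone (group 1.2 in the default machine policy) and grant it FullTrust.  Something like the following, where the share path is VMWare’s standard shared-folder UNC path and should be adjusted to match your setup:

```
caspol -machine -addgroup 1.2 -url "file://\\vmware-host\Shared Folders\*" FullTrust
```

Visual Studio has to be restarted to pick up the policy change.  I haven’t verified this exact incantation against the setup described here, so treat it as a starting point rather than a recipe.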

Several blue screens of death, some puzzling file corruption, and two reinstallations later, I was thinking that the prescribed solution I was attempting wasn’t all it was cracked up to be after all.  Whoever suggested it was messing with me, and was probably crazy anyway.  I did eventually get these components installed and working by simply repeating the procedure, and after using the configuration tool, Visual Studio did seem pretty happy to open the solutions and build them for me.

Until I opened my other solution and tried to build it, that is.  I keep custom controls and designers in a separate solution because of a post-build task that runs and registers the control designers (which is itself an infuriating requirement, but that’s for another article).  Whenever I tried building these projects, I would get an error that access was denied while creating some kind of resource file.  I looked at the properties of the shared folder and saw that the file system claimed to be HGFS instead of NTFS.  HGFS (Host Guest File System) is a proprietary mechanism of VMWare’s that somehow provides an accessibility tunnel to the real underlying storage format, and I don’t know much else about it, but I wouldn’t be surprised if it had something to do with my problem.  Visual Studio does some finicky things, and VMWare certainly does its share of hocus pocus.  Figuring out the possible interaction between them was going to be beyond my voodoo abilities and resources, so I had to find another way around this shared disk situation if I planned on developing custom controls in this environment.

Hiccup in the Plan (Part 2)

My Dell Latitude D830 is four months old.  I requested a Vostro but was absolutely refused by the company I work for, who shall remain nameless.  Supposedly the Latitudes are a “known quantity,” will have fewer problems, and are therefore better for support reasons.  Regardless, the D830 is for the most part a good, fast machine.  This one in particular, however, became a monster during the past week while I was trying to get this new setup working, costing me a full week of lost time and a great deal of frustration.  Every time I thought I had isolated the cause, some other misbehavior appeared to confuse matters.  Each step of troubleshooting and repair seemed reasonable at the time, and yet as new symptoms emerged, the dangling carrot moved just beyond my reach, my own modern reenactment of Sisyphus’s repeated torment.


Not only was I getting Blue Screens of Death several times a day, but CHKDSK would start up before Windows and all kinds of disk problems would be discovered, such as orphaned files and bad indexes.  Furthermore, the same things were happening with the virtual disks, and VMWare reported fragmentation on those disks immediately after installing the operating system and a few tools.  There were folders and files I couldn’t rename or delete, and running the Dell diagnostics software turned up nothing at all.

Finally, since I had a second D830 laptop, the Dell tech suggested swapping hard drives.  After installing my whole environment, plus the VMs, about a dozen times, I really didn’t feel like going through it yet again, but it seemed like a reasonable course of action, so I went through the process once more.  I got almost all the way through without a problem, and finally (with a smile) rebooted my VM and waited for it to come back up, only to watch CHKDSK run and find many pages of problems once again.

Warning: The great thing about Mesh is that you can make changes to your files, such as source code, recompile, and all of those changes shoot up into the cloud in a magical dance of synchronization, and those changes get pushed down to all the other computers in your mesh.  What’s scary, though, is when you have a hard drive with physical defects that corrupts your files.  Those corruptions also get pushed up to the cloud, and this magically corrupts the data on all of the computers in your mesh.  So be aware of this!
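As an illustration of the kind of safety net I wished I’d had, here’s a minimal sketch (my own idea, not a Mesh feature) of a checksum manifest: hash every file in a synced folder, save the hashes, and compare against them later so silent corruption can be caught before it replicates everywhere.

```python
import hashlib
import json
import os

def build_manifest(root):
    """Walk a folder and record a SHA-256 hash for every file in it."""
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                manifest[rel] = hashlib.sha256(f.read()).hexdigest()
    return manifest

def detect_changes(root, saved_manifest_path):
    """Compare the folder's current contents against a saved manifest.

    Returns the set of files whose contents differ from the recorded
    hashes -- legitimately edited or silently corrupted, you decide
    which before you let them sync.
    """
    with open(saved_manifest_path) as f:
        saved = json.load(f)
    current = build_manifest(root)
    return {rel for rel in saved if current.get(rel) != saved[rel]}
```

If a file changes that you didn’t touch, that’s your cue to restore from backup instead of letting the change ride the sync up to the cloud.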

The Value of Offline Backups

Make backups.  Check your code into version control.  Mesh is a great tool for synchronizing data, and initially I thought this would be sufficient for backups of at least some of my data, but it falls short in several ways.

First, Mesh doesn’t version your backups.  Once you make a change and you connect to the Internet, everything gets updated immediately.  If data is accidentally deleted or corrupted, these operations will replicate to the cloud and everywhere in your mesh.  Versioned backups, as snapshots in time, are the only way to ensure that you can recover historical state if things go awry as they did for me.
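A versioned backup can be as simple as a timestamped copy of the folder, pruned to the last few snapshots. This is a minimal sketch of the idea, not a substitute for a real backup tool:

```python
import shutil
import time
from pathlib import Path

def snapshot(source, backup_root, keep=10):
    """Copy `source` into a timestamped folder under `backup_root`,
    then prune the oldest snapshots beyond `keep`.

    Because each snapshot is frozen in time, a corruption that arrives
    later can't overwrite it -- unlike a live sync.
    """
    backup_root = Path(backup_root)
    backup_root.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_root / stamp
    shutil.copytree(source, dest)
    # Timestamped names sort chronologically, so the oldest come first.
    snapshots = sorted(p for p in backup_root.iterdir() if p.is_dir())
    for old in snapshots[:-keep]:
        shutil.rmtree(old)
    return dest
```

Run it on a schedule and you can always roll back to the state before the bad day started.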

Second, Mesh is great for synchronizing smaller, discrete files that either aren’t supplemented with metadata, or whose metadata also lives in files within the same folder structure and gets synchronized along with them.  By the latter, I mean systems such as Visual Studio projects: the source files are referenced by project files, and the project files are referenced by solution files, but these are all small, discrete files themselves that can be seamlessly synchronized.  When I add a file to a project and save, Mesh will update both the added file and the project file.

Application data that doesn’t work well is any kind of monolithic data store, such as a SQL Server database or an Outlook (.pst) data file.  Every time you received an e-mail and your .pst file changed, the whole file would be sent up to Mesh, and if your e-mail files get as large as mine, that would be a problem.  Hopefully plug-ins will be developed in the future that can intelligently synchronize this kind of data through Mesh as well.
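The underlying problem is that a whole-file sync retransmits everything when any byte changes. A block-level scheme, in the spirit of rsync, would hash fixed-size chunks and transfer only the blocks that differ. A rough sketch of the change-detection half (my own illustration, not how Mesh actually works):

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # 64 KB blocks

def chunk_hashes(path):
    """Hash a file in fixed-size blocks.

    Two versions of a large file (say, a .pst) usually differ in only
    a few blocks, so comparing per-block hashes tells a sync engine
    which small pieces actually need to travel.
    """
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(CHUNK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def changed_blocks(old_hashes, new_hashes):
    """Indices of blocks that differ between two versions of a file."""
    length = max(len(old_hashes), len(new_hashes))
    return [i for i in range(length)
            if i >= len(old_hashes) or i >= len(new_hashes)
            or old_hashes[i] != new_hashes[i]]
```

Appending one e-mail to a multi-gigabyte .pst would then cost one or two blocks on the wire instead of the whole file, which is exactly the kind of intelligence a Mesh plug-in could add.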

I’m using and highly recommend Acronis True Image for backups.  It really is a modern, first-rate backup solution.

Conclusion

In the end, Dell came and replaced my motherboard, hard drive, and DVD-RW drive (separate problem!), and I was able to get back to building my immutable development environment.  Instead of using shared folders, VMWare lets you add a hard drive that is actually the physical disk itself, or a partition of it.  Unfortunately, VMWare doesn’t let you take a snapshot of a virtual machine that has such a physical disk mounted.  I don’t know why, and I’m sure there’s a reason, but the situation does suck.  The way I’ve gotten around it is to finish setting up my environment without the additional disk mounted, take a snapshot, and then add the physical disk.  I’ll run with it set up for a day or two, allowing state to change, and then I’ll remove the physical disk from the virtual machine, revert to the latest snapshot, and add the physical disk back in to start it up again.  This back-and-forth juggling of detaching and attaching the physical disk is less than ideal, but ultimately not as bad as the alternative of not having an immutable environment, and I haven’t had the last word quite yet.

I’ll continue to research and experiment with different options, and will work with VMWare (and perhaps Xen) to come up with the best possible arrangement.  And what I learn I will continue to share with you.

Posted in Custom Controls, Development Environment, Uncategorized, Virtualization, Visual Studio | 5 Comments »