Critical Development

Language design, framework development, UI design, robotics and more.

Archive for the ‘SQL Server Compact’ Category

Bad SDK Installation Practices

Posted by Dan Vanderboom on July 8, 2008

Some Compact Framework SDKs (SQLCE, Symbol, OpenNETCF), when installed on a development machine, register a deployment “package”, which is a Visual Studio construct that associates one or more assemblies with a CAB file.  When you add a reference to one of those assemblies and deploy to a device, the entire CAB file is installed instead of the individual assembly.

In Visual Studio 2005, these deployment packages are defined in an XSL file deeply nested in the Application Data folder (the file name is conman_ds_package.xsl).  Though I couldn’t find any reference to this on the Internet when I first played around with it, you can edit this file, and even remove package definitions altogether, which circumvents the mechanism and deploys only the assemblies you explicitly reference, the way you would expect it to behave in the first place.

I’m not recommending this as a general practice, but it sometimes comes in handy: it’s nice to have that level of control over which assemblies are selected and how they get deployed.  My article on the frustrations and oddness with references illustrates some of the problems of “hiding the details” to such an extent that the ultimate behavior is unpredictable and difficult to troubleshoot.

I have so far been unable to find these deployment package definitions in Visual Studio 2008, nor have I found any part of the IDE that allows you to view or edit this.  This is a fundamental design flaw.  If you’re going to raise the level of abstraction, providing a way to make the guts visible and editable is essential for traceability and debugging, especially when the mechanism itself is flawed or is misused by third-party vendors.

For example, there is apparently some room for misinterpretation between where the CAB is located and where Visual Studio actually looks for it when deploying to a device.  When deploying a Symbol DLL file, Visual Studio notices that a Symbol deployment package exists, presumes it’s installed under the current Visual Studio directory (in my case, with Visual Studio 2008, it’s the directory Visual Studio 9.0), and proceeds to deploy from that location.  Unfortunately, the CAB file doesn’t actually exist there, so a deployment error results.

It looks here:

C:\Program Files\Microsoft Visual Studio 9.0\SmartDevices\SDK\Symbol Technologies

However, the SDK was actually installed here:

C:\Program Files\Microsoft Visual Studio 8\SmartDevices\SDK\Symbol Technologies

Error 1: Deployment and/or registration failed with error: 0x8973190d. Error opening file ‘C:\Program Files\Microsoft Visual Studio 9.0\smartdevices\sdk\Symbol Technologies\wce500\armv4i\symbol.all.arm.cab’. Error 0x80070003: The system cannot find the path specified.  (Device Connectivity Component)

Symbol’s .NET SDK v1.4 presumes you are using Visual Studio 2005, and installs itself into the Visual Studio 8.0 directory under Program Files.  This presumption is such an obviously bad practice, it simply shocks me.  They have their own install directory in “C:\Program Files\Symbol Mobility Developer Kit for .NET” where (a little deeper) they keep a copy of the CAB file, which exists independent of any tool that might use it (different versions of Visual Studio, SharpDevelop, etc.).

SQL Server CE / Compact has the same problem, depending on whether it gets installed with Visual Studio or is installed as a separate SDK.  I mentioned SQLCE SDK directory confusions in this article.

So take note: if you’re distributing an SDK, use a little common sense (and consistency) when deciding where to install it and what presumptions you’re making about your audience.  Learn from SQLCE, Symbol, and others what not to do.

Posted in Compact Framework, SQL Server Compact, Visual Studio | Leave a Comment »

Remote SQL Server Compact Queries: Fun with VS2008 & ActiveSync

Posted by Dan Vanderboom on January 5, 2008

I wanted to show you how to use Server Explorer to connect to an ActiveSync-connected Windows Mobile device in order to write queries against a SQL Server Mobile/Compact database, so that I could demonstrate all of the problems I’ve had with that tool and the error messages I’ve encountered, and explain just why I’ve chosen to write a replacement for it.  However, a funny thing happened on the way to the forum…

Trouble at the Start

In setting up for writing this query, I intended to use the emulator, but quickly realized that creating an ActiveSync connection to it presented some difficulties.  In an older version of the Windows Mobile Developer Power Toys, there was a driver that supported ActiveSync connections to an emulator device (which can be found here), but this was targeted for Visual Studio 2003.  When I tried installing it to see if it would work on VS2008, I first got a nasty warning that the publisher couldn’t be verified (isn’t it Microsoft?), and then was told that Vista was rejecting it due to incompatibility.  The new v3.5 of the Power Toys does not include this driver.  Bummer.

Not knowing where my Symbol MC50 is at the moment, I decided to use my WM5 cell phone (an AT&T 8525).  With the restrictive security blocking my attempt to deploy and debug from Visual Studio, I had to use the Device Security Manager (under the Tools menu), which is a very handy tool.  After connecting to the device (after several spontaneous network hiccups), I selected “Security Off”, right-clicked on the connected device, and chose Deploy.  After that, I was able to deploy and debug on the device wirelessly.

A good start, but next I needed to connect via ActiveSync.  I plugged in the USB cable between my cell phone and PC, and nothing happened.  I tried all of the USB ports, rebooted my cell phone and laptop, made sure Windows Mobile Device Center was running on the PC and the ActiveSync window was running on the mobile device, and pressed the “Sync” menu button on the device… but nothing happened.  It was supposedly “Waiting for network”.  I’ve gotten ActiveSync to work with my cell phone many times in the past (though not consistently).

I also tried using the Device Emulator Manager, shown below.  If you right-click, you get a context menu with an option to Cradle, which I hoped would trigger an ActiveSync, but it doesn’t.  Either it’s just not supported, or ActiveSync is angry with my device for some reason.

[Screenshot: Device Emulator Manager]

To Be Continued…

This is typical of the kind of behavior that has given me the opinion that ActiveSync is a piece of junk, and any tool or technology built on top of it will be susceptible to the same unreliability.  I’m very busy and have a lot to get done in a short period of time, and mobile device development is already inherently complicated and slow compared to desktop development.  I just don’t have time to fight with the tools like this.

I will find another device, get it to ActiveSync, and then return to this topic in a future article to demonstrate the remote query tool inside VS2008.  Then I’ll get back to solving the problem with my new product.

Posted in Compact Framework, SQL Server Compact, Visual Studio | Leave a Comment »

Project: Code-Named “SQL Mobile Bridge”

Posted by Dan Vanderboom on December 26, 2007

[Screenshot: SQL Mobile Bridge]

This is not the final name.  But it will be a useful product.  With as much as I’ve been working with rapidly-evolving mobile database schemas lately, I expect to save from 30 minutes to an hour a day in my frequent build-deploy-test cycles.  The lack of a good tool for mobile device database queries causes me a lot of grief.  I know Visual Studio 2008 has something built-in to connect to mobile devices over ActiveSync, but let’s face it: ActiveSync has been a real pain in the arse, and fails more often than it works (my next blog post will detail some of those errors).  I can only connect to one device at a time, and I lose that connection frequently (meanwhile, SOTI Pocket Controller continues to work and communicate effectively).  Plus I have a window constantly bugging me to create an ActiveSync association.

I work on enterprise systems using sometimes hundreds of Windows Mobile devices on a network.  So I don’t want to create an association on each one of those, and getting ActiveSync to work over wireless requires an association, as far as I know.

Pocket Controller or other screen-sharing tools can be used to view the mobile device, and run Query Analyzer in QVGA from the desktop, but my queries get big and ugly, and even the normal-looking ones don’t fit very well on such a small screen.  Plus Query Analyzer on PDAs is very sparse, with few of the features that most of us have grown accustomed to in our tools.  Is Pocket Query Analyzer where you want to be doing some hardcore query building or troubleshooting?

So what would a convenient, time-saving, full-featured mobile database query tool look like?  How could it save us time?  First, all of the basics would have to be there.  Loading and saving query files, syntax color coding, executing queries, and displaying the response in a familiar “Query Analyzer”/”Management Studio” UI design.  I want to highlight a few lines of SQL and press F5 to run it, and I expect others have that instinct as well.  I also want to be able to view connected devices, and to use several tabs for queries, and to know exactly which device and database the active query window corresponds to.  No hunting and searching for this information.  It should also have the ability to easily write new providers for different database engines (or different versions of them).
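
As a rough sketch of that last point about providers (interface, class, and member names here are invented for illustration, not taken from the actual product), each database engine or engine version could sit behind a small provider interface that the desktop tool loads and talks to:

using System.Data;
using System.Data.SqlServerCe;

// Hypothetical sketch: each database engine (or engine version) gets its own
// provider implementation that the query tool can load and use.
public interface IDatabaseProvider
{
    string EngineName { get; }
    IDbConnection CreateConnection(string connectionString);
    DataTable ExecuteQuery(IDbConnection connection, string sql);
}

// A provider for SQL Server Compact 3.5 would wrap SqlCeConnection, for example.
public class SqlCe35Provider : IDatabaseProvider
{
    public string EngineName
    {
        get { return "SQL Server Compact 3.5"; }
    }

    public IDbConnection CreateConnection(string connectionString)
    {
        return new SqlCeConnection(connectionString);
    }

    public DataTable ExecuteQuery(IDbConnection connection, string sql)
    {
        DataTable results = new DataTable();
        using (IDbCommand command = connection.CreateCommand())
        {
            command.CommandText = sql;
            using (IDataReader reader = command.ExecuteReader())
            {
                results.Load(reader);   // buffer the result for display in a grid
            }
        }
        return results;
    }
}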

Second, integration.  It should integrate into my development environments, Visual Studio 2005 & 2008.  It should also give an integrated list of databases, working with normal SQL Server databases as well as mobile servers.  If we have a nice extensible tool for querying our data, why limit it to Windows Mobile databases?

Sometimes it’s the little details, the micro-behaviors and features, the nuances of the API and data model, that define the style and usefulness of a product.  I’ve been paying a lot of attention to these little gestures, features, and semantics, and I’m aiming for a very smooth experience.  I’m curious to know what happens when we remove all the unnecessary friction in our development workflows (when our brains are free to define solutions as fast as we can envision them).

Third, an appreciation for and focus on performance.  Instead of waiting for the entire result to return before marshalling the data back to the client, why not stream it across several rows at a time, as it’s read?  Users could get nearly instantaneous feedback to their queries, even if the full result takes a while to come across the wire.  Binary serialization should be used for best performance; it’s on the roadmap, but coming after v1.0, once I decide whether to build or buy that piece.
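
Here’s a minimal sketch of that streaming idea (names invented; this isn’t the product’s actual remoting code): the device side reads from the data reader and hands rows out in small batches instead of buffering the whole result.

using System.Collections.Generic;
using System.Data;

// Sketch: read rows off the data reader and hand them out in small batches,
// so the client can start displaying results before the full query finishes.
public static class ResultStreamer
{
    public static IEnumerable<List<object[]>> ReadInBatches(IDataReader reader, int batchSize)
    {
        List<object[]> batch = new List<object[]>(batchSize);
        while (reader.Read())
        {
            object[] row = new object[reader.FieldCount];
            reader.GetValues(row);
            batch.Add(row);

            if (batch.Count == batchSize)
            {
                yield return batch;                     // ship this batch to the client now
                batch = new List<object[]>(batchSize);
            }
        }
        if (batch.Count > 0)
            yield return batch;                         // ship the final partial batch
    }
}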

Finally, a highly-extensible architecture that creates the opportunity for additional functionality (and therefore product longevity).  The most exciting part of this project is probably not the query tool itself, but the Device Explorer window, the auto-discovering composite assets it visualizes, and the ability to remotely fetch asset objects and execute commands on them.

The Device identifies (broadcasts) itself and can be interrogated for its assets, which are hierarchically composed to represent what is visualized as an asset tree.  One Device might have some Database assets and some Folder and File assets.  The Database assets will contain a collection of Table assets, which will contain DatabaseRow and DatabaseColumn assets, etc.  In this way, the whole inventory of objects on the device can be interrogated, discovered, and manipulated in a standard way that makes inherent sense to the human brain.  RegistryEntry, VideoCamera, whatever you want a handle to.

This involves writing “wrapper” classes (facades or proxies) for each kind of asset, along with the code to manipulate it locally.  Because the asset classes are proxies or pointers to the actual thing, and because they inherit from a base class that handles serialization, persistence, data binding, etc., they automatically support being remoted across the network, from any node to any other node.  Asset objects are retrieved in a lazy-load fashion: when a client interrogates the device, it actually interrogates the Device object.  From there it can request child assets, which may fetch them from the remote device at that time, or use its locally-cached copies.  If a client already knows about a remote asset, it can connect to and manipulate it directly (as long as the remote device is online).
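
A bare-bones sketch of that lazy-loading behavior might look like this (member names invented for illustration; the real base class would also handle serialization, persistence, and data binding):

using System.Collections.Generic;

// Sketch: an asset proxy only fetches its children from the remote device the
// first time they're requested, and returns the locally-cached copy afterward.
public abstract class Asset
{
    private string name;
    private List<Asset> children;

    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    public IList<Asset> Children
    {
        get
        {
            if (children == null)
                children = FetchChildrenFromDevice();   // remote call, then cached
            return children;
        }
    }

    // Each concrete asset type (Database, Table, Folder, File, RegistryEntry...)
    // knows how to enumerate its own children on the remote device.
    protected abstract List<Asset> FetchChildrenFromDevice();
}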

With a remoting framework that makes shuffling objects around natural, much less message parsing and interpretation code needs to be written.  Normal validation and replication collision logic can be written in the same classes that define the persistent schema.

So what about services?  Where are the protocols defined?  Assets and Services have an orthogonal relationship, so I think that Services should still exist as Service classes, but each service could provide a set of extension methods to extend the Asset classes.  That way, if you add a reference to ServiceX, you will have the ability to access a member Asset.ServiceXMember (like Device.Databases, which would call a method in MobileQueryService).  If this works out the way I expect, this will be my first real use of extension methods.  (I have ideas to extend string and other simple classes for parsing, etc., of course, but not as an extension to something else I own the code for.)  In the linguistic way that I’m using to visualize this: Services = Verbs, Assets = Nouns.  Extension methods are the sticky tape between Nouns and Verbs.

public static AssetCollection<Database> Databases(this Device device) { }
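
Filled out a little (still just a sketch; MobileQueryService, AssetCollection<T>, Database, and Device are the hypothetical types discussed above, and GetDatabases is an assumed method name), the extension method simply forwards to the service:

// Sketch: the noun (Device) gains a convenience member, but the verb logic
// stays in the service class.
public static class MobileQueryServiceExtensions
{
    public static AssetCollection<Database> Databases(this Device device)
    {
        return MobileQueryService.GetDatabases(device);
    }
}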

With an ability to effortlessly and remotely drill into the assets in a mobile device (or any computer, for that matter), and the ability to manipulate them through a simple object model, I expect this to be a significantly productive platform on which to build.  Commands executed against those assets could be scripted for automatic software updates, they could be queued for guaranteed delivery, or they could be supplemented with new commands in plug-in modules that aid in debugging, diagnostics, runtime statistics gathering, monitoring, synchronizing the device time with a server, capturing video or images, delivering software updates, etc.

And if the collection of assets can grow, so can UI components such as context menu items, document windows, and so on, extending and adding to the usefulness of the Device Explorer window.  By defining UI components as UserControls and defining my own Command invocation mechanism, they can be hosted in Visual Studio or used outside of that with just a few adjustments.

More details to come.

Posted in Compact Framework, Linguistics, My Software, Object Oriented Design, Problem Modeling, Software Architecture, SQL Server Compact, User Interface Design | Leave a Comment »

Multithreaded Design: Dedicated Task Threads or Bucket Brigade Strategy?

Posted by Dan Vanderboom on December 17, 2007

A few days ago, I took a dive into building a new software product.  Its aim is mobile developers, such as myself, whose applications access databases on those mobile devices.  After six weeks of development, v1.0 will be released (by the end of January, 2008).  More details on that product to come later.

One of my goals is that its user interface should be very responsive.  When commands take a while to complete, they’ll need to execute on some other thread.  It’s a .NET Framework application, and in that environment, only one thread can update the user interface, so it’s a scarce resource that needs to be protected from doing too much work that doesn’t fit that primary purpose.

I’ve written a fair amount of multithreaded code in the past.  Working with RFID hardware before the “Gen 2” standard, on pre-release Symbol (now Motorola) devices, I figured out how to integrate with the RFID radio at a low level, and only by using several threads was I able to obtain acceptable performance.  After wrestling with issues like race conditions, deadlocks, and other complexities, I have a healthy respect for the amount of planning that’s required to make it work correctly.

My new product consists of three major layers: a user interface layer, a service layer, and a network communication layer.  The Command pattern will be used to publish and service commands.  Actions in the user interface will enqueue command objects to the service layer, where they’ll either be processed immediately or passed to another node in the network for processing remotely.

A simplification of this scenario is illustrated in the sequence diagram below.  Each color represents a separate thread.

[Diagram: DataService multithreaded sequence]

The red user interface thread does minimal work.  When it calls the DataServiceProxy to request data, a command object is enqueued and the thread is allowed to return immediately to the user interface.  The blue line represents one thread in a larger pool of worker threads that grows or shrinks depending on the command queue size, and is dedicated to processing these commands.  In this example, the DataServiceProxy makes a call to the TCPConnection object and returns to the pool.  When a response message arrives, it arrives on a thread owned by the TCPConnection object.  Because we don’t want this thread to “get lost” and therefore be unavailable to process more TCPConnection logic, we enqueue a response object and return control immediately.  Finally, the DataServiceProxy object, using one of its threads, fires a MessageReceived event, which in turn calls Invoke so that it can update the user interface on the only thread that is capable of doing so.
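
Collapsed into a single class for brevity, here’s roughly what that discipline looks like in code (a sketch with invented names, not the product’s actual classes): the UI thread only enqueues work, a pool thread does the slow part, and BeginInvoke hops back onto the UI thread before any control is touched.

using System;
using System.Threading;
using System.Windows.Forms;

// Sketch: keep the UI thread free; do slow work on a pool thread; marshal back.
public class QueryForm : Form
{
    private TextBox resultsBox;

    public QueryForm()
    {
        resultsBox = new TextBox();
        resultsBox.Multiline = true;
        resultsBox.Dock = DockStyle.Fill;
        Controls.Add(resultsBox);
    }

    public void RunQuery(string sql)
    {
        // Called on the UI thread; returns immediately.
        ThreadPool.QueueUserWorkItem(delegate
        {
            // Worker thread: the slow, potentially blocking round trip
            // (a stand-in for the DataServiceProxy/TCPConnection layers).
            string result = ExecuteOnDevice(sql);

            // Never touch controls from here; hop back to the UI thread first.
            BeginInvoke(new MethodInvoker(delegate { resultsBox.Text = result; }));
        });
    }

    private string ExecuteOnDevice(string sql)
    {
        Thread.Sleep(2000);              // simulate network latency
        return "(rows would appear here)";
    }
}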

I like using Sequence diagrams like this to plan multithreaded work.  It clears up issues that you wouldn’t normally be able to visualize clearly.  The first (and worst) mistake would be to allow the UI thread to cross all the way through to the TCPConnection object, making some blocking call while the network connection times out, throws an exception, handles the exception, etc.  Meanwhile, the user is waiting until they can interact with the screen again, waiting for the hourglass to go away so they can get on with their work.  By assigning specific roles to threads, and creating mechanisms by which threads can be kept within specific boundaries, their behavior is easier to predict and manage.

I don’t know a lot about how ThreadPools behave, but I’m going to be looking into that in detail over the next few weeks, and if I come across anything particularly noteworthy, I’ll write about it.  My instinct is to continue to use this pattern of thread role assignment, especially for my new product, but there’s another pattern I’ve been thinking about that may also work well.  A friend of mine at Bucketworks told me a while back about the efficiency of bucket brigades, and about how intensely they’ve been studied and found to create self-organizing and automatically-optimizing lines of workers.

Each Thread Is A Worker

The way I’ve been modeling thread work, I treat each thread like a worker: the UI worker hands off a job description to a DataServiceProxy worker, who then does something with it and passes it off to a TCPConnection worker, who in turn does some work with it and hands it off (over a network in that case).  So what happens when the scenario becomes more complicated than the simple sequence diagram above?  What if I have four or five hand-offs?  Is the assignment of specific threads to specific roles really the best use of those threads?  The UI thread is stuck, but the rest of them could technically move around.  Perhaps they’re devoted to one task at a time, but when a task is completed with nothing in that queue, could the worker thread be moved to perform another task?  So I started thinking about how a bucket brigade strategy may help.

Threads aren’t exactly like people workers, so I have to be careful about the analogy.  It’s probably no big deal processing-wise for threads to be sleeping if they’re not used, but each thread does consume some memory, and there must be some kind of overhead to deallocate and reallocate their thread-local resources.  Would it be better to share a single thread pool among a connected linkage of worker objects, or for each service object to allocate its own thread pool?

Will this work?  Bucket brigades tend to work best when the density of work is high and the amount of time to complete a task is relatively small and constant.  If a task takes too long to complete, the line can more easily become unbalanced.  I’m going to be thinking more about this possibility, and come up with some objective measurements that I can use to compare the approaches and determine when this would be appropriate in a thread-management strategy (if at all).

Bucket Brigades In Business

If you want some background on bucket brigades and their use in manufacturing and assembly lines, check out this paper by John Bartholdi and Donald Eisenstein.  They explain how expensive time-motion studies are done to balance assembly lines in large organizations to maximize throughput, and how this needs to be adjusted or redone when the length or configuration of work changes, or when worker productivity varies.  With bucket brigades, however, there’s less need for planning because the strategy “spontaneously generates the optimal division of work”.  Pretty neat stuff.

The list of companies using bucket brigades is impressive: McGraw-Hill, Subway (yes, the sandwich company), Readers Digest, Ford Customer Service Division, The Gap, Walgreen’s, Radio Shack, etc.

If anyone’s using this kind of strategy to manage work in software systems, I’d love to hear about it.

Posted in My Software, Object Oriented Design, Problem Modeling, Software Architecture, SQL Server Compact, User Interface Design | 1 Comment »

Data Synchronization For Flexible Back-End Integrations

Posted by Dan Vanderboom on December 15, 2007

Data synchronization is some of the most difficult logic to write.  There are many interactions and transformations to express, and many factors to consider.  This article discusses synchronization in the context of integrating with third-party back-end software such as ERP and CRM systems.  If you find yourself responsible for creating and implementing synchronization strategies, it will save you a lot of time to list and consider each of the issues.  I’ve spent the past three years or more thinking about and planning different aspects of synchronization logic, at different stages of product maturity, and conversations occasionally fire up about it, with diagrams being drawn and plans being outlined and reworked.  As we implement it, many aspects have emerged and occasionally surprised us.

Existing & Upcoming Technologies

Remote Data Access (RDA) is pretty nice.  In simple applications, I have no doubt it serves its purpose well.  But when we’re talking about enterprise-scale applications that need complete control over synchronization behavior (such as collision handling), as well as data shaping/transformation, more is needed.  Merge synchronization provides the ability to add some custom collision handling, but it requires schema updates and table schema locks.  This is unfriendly to back-end systems that we integrate into.  We want to have as light a footprint on external systems as possible.  What would happen if our customer’s DynamicsGP database upgrade failed because we were locking the tables with synchronization mechanisms?  Not a great idea.  This is really too bad, because so many of the ugly details are taken care of nicely by SQL Server replication.

Microsoft Synchronization Services looks like a fascinating new unification of synchronization services for the Microsoft platform, but unfortunately it doesn’t look like Windows Mobile will be supporting it for a while.  We needed a solution to deliver to customers now and couldn’t wait around, which is how we came to be involved with this ourselves.

I also needed to make sure that multiple back-end company databases could be supported, or that it could run viably without any back-end system.  How do you easily set up SQL Server merge replication with a growing set of back-end databases?  Can databases be updated easily?  It’s about more than product functionality.  It’s also about the development process and the costs associated with accurately performing upgrades (without losing data or falling into an unusable state).  Handshakes, recovery tactics, and other protocol details can become sophisticated and difficult to troubleshoot.  More about this later.

Data Transformations

What data do you need?  How can you map data from source to destination, along with any transformations in between, in a way that’s as transparent and efficient as possible?  This will depend on whether you use an ORM library or direct SQL access.  Do you store these data replications centrally or in a distributed way?  If distributed, do they themselves get synchronized in some default way?  How easy is this metadata to maintain and refactor?

Security & Performance

If your data isn’t secure, you don’t have a viable enterprise system.  Not only does data need to be encrypted (and those keys managed well), but you need to restrict access to that data based on roles, users, and ultimately permissions.  Sarbanes-Oxley has some strict guidelines that you’ll have to play along with for any of your publicly-traded customers.

Another major concern is performance.  Because mobile devices may be synchronizing over slow connections (cellular modems and cell phones), which can be 50 times slower than typical high-speed connections, synchronization speed is crucial.  You may be pulling tens or hundreds of thousands of rows of data without the right shortcuts in place.  These are the typical ones:

  • Minimizing scope of data vertically (data filtering) and horizontally (data shaping).  Data can be filtered by any other aspect of your data model that makes sense, and because we don’t always need every column on the server, we can store only that subset we absolutely require on the mobile device or other client node.
  • Remoting technology choice.  Binary serialization is much faster than XML, but requires third-party or custom solutions on Compact Framework.
  • Compression of messages between client and server.  Even with the additional processing burden to handle the compression, the overall improvement in throughput is significant (a rough sketch follows this list).
  • Multithreading of synchronization tasks.  Some tasks can be performed in parallel, while some tasks (and in some contexts, whole synchronizations) can be performed while the user is doing something else (where it’s safe, such as sitting idle in the menu).
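
For the compression bullet, here’s a minimal sketch of the idea.  GZipStream here is the desktop framework’s class; the Compact Framework doesn’t ship System.IO.Compression, so the device end would need a third-party or hand-rolled equivalent.

using System.IO;
using System.IO.Compression;

// Sketch: shrink a serialized message before pushing it over a slow cellular link.
public static class MessageCompression
{
    public static byte[] Compress(byte[] message)
    {
        using (MemoryStream output = new MemoryStream())
        {
            using (GZipStream gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(message, 0, message.Length);
            }
            return output.ToArray();    // gzip is closed here, so the stream is complete
        }
    }

    public static byte[] Decompress(byte[] compressed)
    {
        using (MemoryStream input = new MemoryStream(compressed))
        using (GZipStream gzip = new GZipStream(input, CompressionMode.Decompress))
        using (MemoryStream output = new MemoryStream())
        {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = gzip.Read(buffer, 0, buffer.Length)) > 0)
                output.Write(buffer, 0, read);
            return output.ToArray();
        }
    }
}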

Simplifying Data Flow

It’s tempting to let any data object update any other, but in information systems that are critical to the success of a business, every update typically requires some traceability and accountability.  Transactions are used to formally define an addition or update.  In integration scenarios, you usually hand off a message which has second-order effects on other tables in that system when executed.  How this is done is not usually visible to you.

By stereotyping data flows and differentiating a pull of reference data (from an external system to your system or one of its clients) and a push of transaction data (from your system to the external system), the data synchronization challenge can be divided and therefore simplified.  Solving separate push and pull strategies is much easier than trying to implement a true generic merge replication.  Reference data pulls are defined as data-shaped transformations.  Transaction pushes are defined as queued invocations on third-party entry points.  Transactions can have an effect on reference data, which will then be pulled down and therefore update your own system.  Data flow is circular.

Using explicit transactions provides some other advantages.

  • We are able to protect the ERP/CRM from our system.  We can’t assume that the back-end system will accept the messages we send it.  They might be formatted poorly, or may contain invalid data that we were unable to identify or validate.  If our attempt is rejected, we may need to execute some compensating actions.
  • We can provide an approval mechanism to support more complex business workflows, and provide checkpoints before the transactions are submitted or posted.
  • We create an audit trail of data updates for business analysis and system troubleshooting.
  • We can support integrations with different back-end products.  By defining a data model that is abstracted away from any specific back-end system’s model, we have the ability to build integration adapters for each back-end system.  This allows us to transform our generic transactions into the vendor-specific ERP or CRM transactions.  By swapping out the adapters, we can switch our whole integration to another vendor’s system.
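
The last bullet, swapping integration adapters, might look roughly like this skeleton (interface and class names are invented for illustration, not taken from the actual system):

using System.Collections.Generic;

// Sketch: generic transactions are defined against our own abstract model;
// each back-end system gets an adapter that translates them into its native calls.
public class GenericTransaction
{
    public string Type;                                        // e.g. "InventoryAdjustment"
    public Dictionary<string, object> Fields = new Dictionary<string, object>();
}

public interface IBackEndAdapter
{
    // Translate and submit one generic transaction; report whether the back-end
    // accepted it, so compensating actions can be taken when it didn't.
    bool Submit(GenericTransaction transaction);
}

public class DynamicsGPAdapter : IBackEndAdapter
{
    public bool Submit(GenericTransaction transaction)
    {
        // Map our generic fields onto the vendor-specific transaction here
        // (details omitted), then hand it to the back-end's entry points.
        return true;
    }
}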

Location-Based Behavior

Being occasionally disconnected presents some big challenges.  While connected we can immediately submit transactions, which update reference data that we have access to in real time.  When disconnected, however, we may enqueue transactions that should have an impact on reference data.  Because these transactions don’t normally affect reference data until they run on the server, data cached on the disconnected device can become stale, and in many cases useless after a while.

Just as ASP.NET developers learn the page request event lifecycle, it’s easy to see why a transaction (in the journey that it takes) could have different behaviors in different environments and at different stages.  On the handheld, queueing a transaction may need to make an adjustment to reference data so that subsequent workflows can display and use up-to-date data.  The eventual success of that transaction can’t be guaranteed, but for that unit of work, that daily work session, it’s as close to the truth as we can get.
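
As a sketch of that idea (names invented; GenericTransaction is the same illustrative class from the earlier sketch, not a real API): when offline, the transaction is queued for later delivery, and a provisional adjustment is applied to the local cache so the rest of the session sees current numbers.

using System.Collections.Generic;

// Sketch: queue the transaction for later delivery, but adjust the locally-cached
// reference data now so subsequent workflows on the device see up-to-date values.
public class OfflineTransactionQueue
{
    private Queue<GenericTransaction> outbound = new Queue<GenericTransaction>();
    private Dictionary<string, int> localQuantityCache = new Dictionary<string, int>();

    public void SubmitInventoryAdjustment(string itemNumber, int quantityChange)
    {
        GenericTransaction transaction = new GenericTransaction();
        transaction.Type = "InventoryAdjustment";
        transaction.Fields["ItemNumber"] = itemNumber;
        transaction.Fields["QuantityChange"] = quantityChange;
        outbound.Enqueue(transaction);                 // delivered when a connection exists

        // Provisional local effect only; the authoritative value returns with the
        // next reference-data pull after the server actually runs the transaction.
        if (!localQuantityCache.ContainsKey(itemNumber))
            localQuantityCache[itemNumber] = 0;
        localQuantityCache[itemNumber] += quantityChange;
    }
}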

Detecting the state of transactions in the host system you’re integrating into can be tricky, too.  Sometimes when you save or enqueue a transaction in the back-end system, you get immediate changes to its own reference tables, or it may update reference data only once it’s approved and “posted”.  Only experience and experimentation in working with these systems (lots of trial and error) can give you the necessary insight to do it right.

Schema Versioning

You should define your data schema correctly from the beginning, and then never change it.  (Yeah, right.)  Unfortunately, requirements change, scope expands, products follow new paths, and schemas will have to change, too.  This means you’ll need some way of identifying, comparing, and dealing with version conflicts.  Do you define a “lock-down” period during which existing transactions can be submitted to the server but new transactions can’t be started?  What happens if the server gets updated and then a client with the old schema finally connects?  Can you update the schema with the data in place before submitting the transaction, or have you defined a transaction version format converter?  Can schema upgrades deal with multiple schema version hops?  Can they roll back if necessary?  Do you tell your customers “tough luck” in some of those scenarios?
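
For the multiple-hop question, one common approach is to chain single-step upgrades until the schema reaches the target version; a rough sketch (invented names, not a description of any product’s actual mechanism):

using System;
using System.Collections.Generic;
using System.Data;

// Sketch: each step moves the schema exactly one version forward, so an old
// client can be walked up to the current version one hop at a time.
public interface ISchemaUpgradeStep
{
    int FromVersion { get; }
    int ToVersion { get; }
    void Upgrade(IDbConnection database);
}

public static class SchemaUpgrader
{
    public static void UpgradeTo(IDbConnection database, int currentVersion,
                                 int targetVersion, IList<ISchemaUpgradeStep> steps)
    {
        while (currentVersion < targetVersion)
        {
            ISchemaUpgradeStep next = null;
            foreach (ISchemaUpgradeStep step in steps)
            {
                if (step.FromVersion == currentVersion) { next = step; break; }
            }

            if (next == null)
                throw new InvalidOperationException(
                    "No upgrade path from schema version " + currentVersion);

            next.Upgrade(database);
            currentVersion = next.ToVersion;
        }
    }
}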

Summary

This discussion was meant as a rough outline, a preliminary introduction to some of the issues, and is not meant as a comprehensive account of data synchronization in distributed architectures.  Fortunately, there is a manageable core of useful functionality that can be implemented without solving every conceivable problem mentioned above.  There are always opportunities for optimizations and tightening down.  I’ll be returning to this topic many times as I discover and work with new scenarios and design goals.

Posted in Compact Framework, DynamicsGP, Object Oriented Design, Problem Modeling, Software Architecture, SQL Server Compact | 1 Comment »

Visual Studio 2008 – Issues with Compact Framework

Posted by Dan Vanderboom on December 9, 2007

I’ve been developing for the Compact Framework for over three years now, and the road has been long.  So much is missing from CF that I have waited eagerly for v3.5 to arrive.  So, in typical fashion, I downloaded Visual Studio 2008 on November 20th and was up and running with it soon after.  The system I develop is distributed and highly complex, incorporating add-in modules, dependency injection, custom controls, MVC GUI patterns, SQL Server CE/Mobile, custom data synchronization, occasionally-offline behaviors, shared ORM data model across platforms, etc.  At first glance, VS2008 looks good.  It looks like VS2005 with a few subtle changes, but it’s familiar, faster, and seems (so far) more stable.

Windows Mobile 5 R2 SDK – Failed Install (Quietly)

Converting from the VS2005 to the VS2008 project file format was at first problematic.  Though the installation didn’t error out, I got errors converting projects targeting Windows Mobile 5.  When I switched them to Pocket PC 2003, they worked, but this wasn’t going to be an acceptable solution.  I ended up uninstalling and then reinstalling the Windows Mobile 5 R2 SDK, and now I can successfully convert those projects.  The worst part was that, although the projects seemed to upgrade, they couldn’t be read by Visual Studio, so they were unavailable in Solution Explorer, which made it impossible to change them back from within Visual Studio.

SQL Server Mobile – References Updated Automatically

Annoyingly, the conversion from VS2005 to VS2008 projects assumed that I wanted to use SQL Server Compact 3.5 instead of the SQL Server Mobile 3.0 database I was referencing previously.  As much as I’d love to upgrade to the new database, and gain from improved performance as well as the ability to use the TOP keyword and nested subqueries, our ORM tool is bound to v3.0.  Not a big deal to find and correct that problem.

Confused Path to SQL Server Mobile

Adding a reference to SQL Mobile 3.0 on the .NET tab of the Add References dialog points to the VS2005 directory by default… not a good idea.  What happens when I uninstall VS2005?  That assembly goes away, and now my references are broken.

So I copied this directory:

C:\Program Files\Microsoft Visual Studio 8\SmartDevices\SDK\SQL Server\Mobile

… to a “ThirdParty” folder structure under “SQL Server Mobile” that we can reference instead.  This folder gets shared and synchronized among developers.

I had hoped it would be easier to find, located in some shared place that makes sense.  There is no “Mobile” folder here:

C:\Program Files\Microsoft Visual Studio 9.0\SmartDevices\SDK\SQL Server

… which is too bad.  Though you’d think they’d put it somewhere like here instead, so it’s not VS folder dependent:

C:\Program Files\Microsoft.NET\SDK\CompactFramework\v2.0

… or even in one of these places:

C:\Program Files\Microsoft SQL Server 2005 Mobile Edition

C:\Program Files\Microsoft SQL Server Compact Edition

C:\Program Files\Microsoft SDKs

… but it’s not.  I think there is some great collective confusion at MS when it comes to organizing folder structures.  How many SDK directories do you need, anyway?

Build Times – Scary at First, Now Fantastic

After one of the development machines kept getting 20 minute build times on a 20 project solution (when we had 4–7 minute build times in VS2005), we were on the gut-wrenching verge of reverting back to VS2005.  But after some random tinkering, removing and re-adding references, and by deploying all add-in projects instead of referencing them from our entry point application, we were able to bring this down to about one minute.

My laptop is a single core machine, and someone I work with has a dual core.  Because of the hype I’ve read about 2008’s support for dual core builds, I expected his machine to beat the pants off of mine during builds.  Such is not the case.  I still build a bit faster, all other aspects of the machines being pretty comparable.

One thing MS definitely got right was the F5 behavior.  In VS2005, hitting F5 would rebuild our projects every time.  Even if a project would occasionally not get built, MSBuild would still “process” the project for way too long.  Now what we see is MSBuild activity bypassed altogether when a previous build succeeded and no changes have been made subsequently.  This, along with the other performance improvements (and some unexplained voodoo), is saving us somewhere around an hour per day per developer.

Note that the build behavior is not perfect yet.  I still notice that projects that haven’t changed, and where no projects they depend on have changed, are still occasionally getting rebuilt for some reason.  But it’s good enough, and certainly an improvement over what we had, and there’s a possibility that our projects have some buried dependency (I don’t pretend to be an expert in MSBuild).

Not Yet Compact Framework 3.5

Now that our system has been converted to VS2008 and we’ve been working in that environment for a few weeks, the next step is to upgrade from Compact Framework 2.0 to 3.5.

LINQ Headaches

LINQ support in Compact Framework 3.5 deserves its own blog post.

Posted in Compact Framework, SQL Server Compact, Visual Studio | Tagged: , , , | Leave a Comment »