Critical Development

Language design, framework development, UI design, robotics and more.

Archive for December, 2007

Project: Code-Named “SQL Mobile Bridge”

Posted by Dan Vanderboom on December 26, 2007

SQLMobileBridge

This is not the final name.  But it will be a useful product.  With as much as I’ve been working with rapidly-evolving mobile database schemas lately, I expect to save from 30 minutes to an hour a day in my frequent build-deploy-test cycles.  The lack of a good tool for mobile device database queries causes me a lot of grief.  I know Visual Studio 2008 has something built in to connect to mobile devices over ActiveSync, but let’s face it: ActiveSync has been a real pain in the arse, and it fails more often than it works (my next blog will detail some of those errors).  I can only connect to one device at a time, and I lose that connection frequently (meanwhile, SOTI Pocket Controller continues to work and communicate effectively).  Plus I have a window constantly bugging me to create an ActiveSync association.

I work on enterprise systems that sometimes use hundreds of Windows Mobile devices on a network.  So I don’t want to create an association on each one of those, and getting ActiveSync to work over wireless requires an association, as far as I know.

Pocket Controller or other screen-sharing tools can be used to view the mobile device and run Query Analyzer in QVGA from the desktop, but my queries get big and ugly, and even the normal-looking ones don’t fit very well on such a small screen.  Plus Query Analyzer on PDAs is very sparse, with few of the features that most of us have grown accustomed to in our tools.  Is Pocket Query Analyzer where you want to be doing some hardcore query building or troubleshooting?

So what would a convenient, time-saving, full-featured mobile database query tool look like?  How could it save us time?  First, all of the basics would have to be there: loading and saving query files, syntax color coding, executing queries, and displaying the response in a familiar “Query Analyzer”/“Management Studio” UI design.  I want to highlight a few lines of SQL and press F5 to run it, and I expect others have that instinct as well.  I also want to be able to view connected devices, to use several tabs for queries, and to know exactly which device and database the active query window corresponds to.  No hunting and searching for this information.  It should also be easy to write new providers for different database engines (or different versions of them).

Second, integration.  It should integrate into my development environments, Visual Studio 2005 & 2008.  It should also give an integrated list of databases, working with normal SQL Server databases as well as mobile servers.  If we have a nice extensible tool for querying our data, why limit it to Windows Mobile databases?

Sometimes it’s the little details, the micro-behaviors and features, the nuances of the API and data model, that define the style and usefulness of a product.  I’ve been paying a lot of attention to these little gestures, features, and semantics, and I’m aiming for a very smooth experience.  I’m curious to know what happens when we remove all the unnecessary friction in our development workflows (when our brains are free to define solutions as fast as we can envision them).

Third, an appreciation for and focus on performance.  Instead of waiting for the entire result to return before marshalling the data back to the client, why not stream it across as it’s read? — several rows at a time.  Users could get nearly instantaneous feedback to their queries, even if the query takes a while to come fully across the wire.  Binary serialization should be used for best performance, and is on the roadmap, but that’s coming after v1.0, after I decide to build vs. buy that piece.
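
To make the streaming idea concrete, here is a minimal sketch (not the product’s actual code) of a reader loop that yields results a few rows at a time instead of buffering the whole result set; the batch size and the untyped object[] rows are placeholders for whatever the real wire format turns out to be:

    using System.Collections.Generic;
    using System.Data;

    public class StreamingQueryExecutor
    {
        // Read rows in small batches and yield each batch as soon as it fills,
        // rather than buffering the entire result set before responding.
        public static IEnumerable<List<object[]>> ExecuteStreaming(IDbCommand command, int batchSize)
        {
            using (IDataReader reader = command.ExecuteReader())
            {
                List<object[]> batch = new List<object[]>(batchSize);
                while (reader.Read())
                {
                    object[] row = new object[reader.FieldCount];
                    reader.GetValues(row);
                    batch.Add(row);

                    if (batch.Count == batchSize)
                    {
                        yield return batch;   // hand this batch to the caller (and the wire) immediately
                        batch = new List<object[]>(batchSize);
                    }
                }

                if (batch.Count > 0)
                    yield return batch;       // final partial batch
            }
        }
    }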

Finally, a highly-extensible architecture that creates the opportunity for additional functionality (and therefore product longevity).  The most exciting part of this project is probably not the query tool itself, but the Device Explorer window, the auto-discovering composite assets it visualizes, and the ability to remotely fetch asset objects and execute commands on them.

The Device identifies (broadcasts) itself and can be interrogated for its assets, which are hierarchically composed to represent what is visualized as an asset tree.  One Device might have some Database assets and some Folder and File assets.  The Database assets will contain a collection of Table assets, which will contain DatabaseRow and DatabaseColumn assets, etc.  In this way, the whole inventory of objects on the device can be interrogated, discovered, and manipulated in a standard way that makes inherent sense to the human brain.  RegistryEntry, VideoCamera, whatever you want a handle to.

This involves writing “wrapper” classes (facades or proxies) for each kind of asset, along with the code to manipulate it locally.  Because the asset classes are proxies or pointers to the actual thing, and because they inherit from a base class that handles serialization, persistence, data binding, etc., they automatically support being remoted across the network, from any node to any other node.  Asset objects are retrieved in a lazy-load fashion: when a client interrogates the device, it actually interrogates the Device object.  From there it can request child assets, which may fetch them from the remote device at that time, or use its locally-cached copies.  If a client already knows about a remote asset, it can connect to and manipulate it directly (as long as the remote device is online).
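
A rough sketch of how those proxy classes might hang together (these names are illustrative, not the product’s actual API): a common base class owns identity, parent/child structure, and lazy fetching, while concrete assets like Device, Database, and Table only describe themselves.

    using System;
    using System.Collections.Generic;

    public abstract class Asset
    {
        public Guid ID { get; set; }
        public string Name { get; set; }
        public Asset Parent { get; set; }

        private List<Asset> _children;

        // Child assets are fetched from the remote device on first access,
        // then served from the locally-cached copies.
        public IList<Asset> Children
        {
            get
            {
                if (_children == null)
                    _children = FetchChildrenFromRemote();   // hypothetical call into the remoting layer
                return _children;
            }
        }

        protected abstract List<Asset> FetchChildrenFromRemote();
    }

    public class Device : Asset
    {
        // A Device's children would include Database, Folder, File, RegistryEntry,
        // VideoCamera, and any other asset type you want a handle to.
        protected override List<Asset> FetchChildrenFromRemote()
        {
            return new List<Asset>();   // placeholder for actually interrogating the device
        }
    }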

With a remoting framework that makes shuffling objects around natural, much less message parsing and interpretation code needs to be written.  Normal validation and replication collision logic can be written in the same classes that define the persistent schema.

So what about services?  Where are the protocols defined?  Assets and Services have an orthogonal relationship, so I think that Services should still exist as Service classes, but each service could provide a set of extension methods to extend the Asset classes.  That way, if you add a reference to ServiceX, you will have the ability to access a member Asset.ServiceXMember (like Device.Databases, which would call a method in MobileQueryService).  If this works out the way I expect, this will be my first real use of extension methods.  (I have ideas to extend string and other simple classes for parsing, etc., of course, but not as an extension to something else I own the code for.)  In the linguistic way that I’m using to visualize this: Services = Verbs, Assets = Nouns.  Extension methods are the sticky tape between Nouns and Verbs.

public static AssetCollection<Database> Databases(this Device device) { }
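
Fleshed out a little, that signature might live in a static class like the one below.  To be clear, MobileQueryService, GetDatabases, and AssetCollection are the post’s own illustrative names plus my assumptions, not a published API:

    public static class MobileQueryServiceExtensions
    {
        // The Noun (Device) picks up a new member supplied by the Verb (the query service).
        public static AssetCollection<Database> Databases(this Device device)
        {
            return MobileQueryService.GetDatabases(device);   // hypothetical service call
        }
    }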

With an ability to effortlessly and remotely drill into the assets in a mobile device (or any computer, for that matter), and the ability to manipulate them through a simple object model, I expect this to be a significantly productive platform on which to build.  Commands executed against those assets could be scripted for automatic software updates, they could be queued for guaranteed delivery, or they could be supplemented with new commands in plug-in modules that aid in debugging, diagnostics, runtime statistics gathering, monitoring, synchronizing the device time with a server, capturing video or images, delivering software updates, etc.

And if the collection of assets can grow, so can UI components such as context menu items, document windows, and so on, extending and adding to the usefulness of the Device Explorer window.  By defining UI components as UserControls and defining my own Command invocation mechanism, they can be hosted in Visual Studio or used outside of that with just a few adjustments.

More details to come.

Posted in Compact Framework, Linguistics, My Software, Object Oriented Design, Problem Modeling, Software Architecture, SQL Server Compact, User Interface Design | Leave a Comment »

Extreme Development Efficiency

Posted by Dan Vanderboom on December 20, 2007

I feel this overwhelming need to be a highly productive and efficient developer through constant attention and continuous improvement to the processes, techniques, and tools that I use.  Whenever there are bottlenecks in the process, frustrations and annoyances with the tools, or intuitive discomfort with the techniques and patterns that I’m using, I seek to eliminate the largest obstacle in my path to approach perfect fluidity and rapid development.

It’s important to move the big boulders first before filling in with smaller rocks and pebbles.  The big boulders: defining the right goals, collecting the right requirements, solving the right problems, and using the right processes.  If you’re heading in the wrong direction, it doesn’t really matter how fast you’re going.  Misunderstand your customer’s needs, and your chosen programming language won’t mean diddly squat.  The medium-sized rocks: applying the right design patterns and using the right tools for the job.  You can accomplish more with the right 100 lines of code than you can with 2000 lines of a rushed, poorly planned mess that will require constant maintenance and an eventual rewrite.  The small pebbles get really specific.  They may involve using code snippets or IDE plug-ins to more rapidly configure or develop projects, altering your workflow to circumvent predictable social distractions, standardizing on naming conventions, and so on.

I’ve spent a great deal of time studying architecture, design, and agile processes, and I admit that I’m constantly learning new things in these areas.  These things fascinate and stimulate me.  But I’ve been writing code for 24 years (starting with Apple 2 Basic), and I’ve developed an intuitive feel for how to write code, formulate complex algorithms, and design components and their interactions.  Explaining these concepts and techniques to less experienced developers, even with the vocabulary that I’ve acquired over the past few years on these topics, proves to be a great challenge.

But the small pebbles, the little tweaks of efficiency that I make to shave 5 to 10 minutes off each day, are easier to explain, much like the endgame is easier to teach a chess beginner than the middlegame or the opening.  It’s more tactical, and more tangible, because it’s easier to measure and demonstrate.

It also offers a less competitive market to produce tools for, as these optimizations tend to be very specific to the technologies and tools, as well as the developers who use them.

So I’ve spent a lot of effort recently reducing build times, optimizing data synchronization strategies, applying multithreading to minimize application startup times, and developing some tools to improve development speed and reduce risk and errors.  One purpose of these efforts is to attain greater performance in the applications and systems that I build.  The end users will appreciate this effort, but there’s a selfish aim here as well: if I can speed things up to the point where I’m saving 10 minutes a day here, 15 minutes a day there, and more or less in other areas, I can easily recoup an hour or two a day that I would normally spend waiting around for a build to complete, an application to start up, or a query to run.  Per developer.

Development time is very expensive.  If we can minimize the time spent waiting around for things to happen (which is boring anyway), and maximize our productivity (which is exciting and satisfying), everyone will be happier in the end.  Developers will crank out new features and products more quickly, and customers will get their needed functionality sooner.

On the other hand, I’ve made a number of decisions to trade application performance for developer productivity.  My priority is usually to create correctly functioning software first, not with complete disregard to performance, but with performance considerations not automatically being at the forefront.  If the code is 20% slower than it could be, but the development team can refactor and extend the solution 250% faster, sometimes that trade-off is worthwhile.  And I’ve programmed in assembly language, so I’m familiar with the attitude that we have to preserve every byte, and reduce every unnecessary processing cycle we can to get the most out of the hardware.  It’s just not sensible to think this way anymore (except for very specialized functions or embedded systems).

Just as our perspective of hardware and software systems has matured to include concepts like total cost of ownership (TCO) once the system has been deployed into production, so too must we consider all of the ramifications of software design and performance on developer productivity and the flexibility of product evolution, and therefore the total cost of development.  If it’s cheap to build, it might be expensive to support.  In the end, we’ll have to pay for it one way or another.  We need to think about more than what it will cost us now.  What design and process decisions will minimize the cost over the entire lifetime of our products?

Posted in Software Architecture | Leave a Comment »

Visual Studio Add-In: Reference Sentinel

Posted by Dan Vanderboom on December 20, 2007

My article on Project Reference Oddness has been the most popular so far.  I like seeing the blog statistics, and I’m constantly observing the search terms people use to get to my blog.  I guess I shouldn’t have been surprised by the number of people having problems with references and searching for answers online.  It’s a pretty messed up system.

I’ve been contemplating writing an add-in for Visual Studio that would track and manage references, set some rules, and enforce them. I’d like to be able to specify that I use v3.1 of Library X across all projects in a solution. I’d then like to be able to make a change in one place (like a Reference Explorer) and have it update all projects in the solution. If one of the references changes outside of this system (for whatever reason), it should act as a sentinel, detecting the change and changing it back to what you defined in the Reference Explorer.  (It should also notify you that this has happened to help troubleshoot the root cause.)
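
As a rough feasibility sketch, assuming the add-in holds a reference to the IDE’s EnvDTE.DTE object (and leaving rule storage and enforcement aside for now), the auditing half could look something like this:

    using System.Diagnostics;
    using EnvDTE;
    using VSLangProj;

    public static class ReferenceSentinel
    {
        // Walk every project in the open solution and flag references to the named
        // assembly whose version differs from the pinned one.
        public static void Audit(DTE dte, string assemblyName, string requiredVersion)
        {
            foreach (Project project in dte.Solution.Projects)
            {
                VSProject vsProject = project.Object as VSProject;   // null for setup, web, and other project types
                if (vsProject == null)
                    continue;

                foreach (Reference reference in vsProject.References)
                {
                    if (reference.Name == assemblyName && reference.Version != requiredVersion)
                    {
                        // The real add-in would swap the reference back and notify the developer;
                        // the sketch just reports the mismatch.
                        Debug.WriteLine(project.Name + " references " + assemblyName +
                            " v" + reference.Version + " (expected " + requiredVersion + ")");
                    }
                }
            }
        }
    }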

It could also set reference paths.  By saving this information as an extension to the solution file, it could be checked into source control and shared with the rest of your development team.  This should eliminate the warnings and errors you get when you accidentally reference different versions of the same assembly.  It would also reduce the pain of creating a new branch or pulling code down to a new machine and having to spend a bunch of time setting the reference paths over and over again.

Am I missing anything?  What other reference pains could be alleviated?

Posted in Invention-A-Day, My Software, Visual Studio | 3 Comments »

Global ImageList Control – ImageListProxy

Posted by Dan Vanderboom on December 20, 2007

It would be normal for me to have already jumped forward to WPF and to leave Windows Forms behind, but being a Compact Framework developer, I don’t yet have that option.  Though I look forward to Silverlight, this won’t work for occasionally-connected or totally-disconnected mobile applications because Silverlight won’t work without some kind of back-end server.  (Unless a certain company’s Windows Mobile ASP.NET server can support Silverlight, which would be awesome.  Someone in the .NET world needs to compete with Adobe’s FlashLite!)

One of the things I need to do is manage assets, such as graphic files.  Because the applications I write need that commercial, professional spit-and-polish, I am constantly including images in my user interfaces.  Sometimes these are stored in ImageList controls, to feed data-bound list controls and so on.  When you share common images across many forms, many views, and many projects, such as checkboxes, question mark icons, etc., it’s a pain to have to include them over and over again.  Why not have an ImageList-derived component that acts as a proxy to a shared, global image list?  Images could be stored as resources in the project (or a resource assembly), or could be retrieved as files or database objects, but exposed from a single shared object to a collection of form-specific ImageListProxy objects.
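
ImageList itself is sealed, so rather than literally deriving from it, the proxy would probably end up as a small component that fills a form-local ImageList from the shared store.  A minimal sketch, with GlobalImages standing in for whatever global registry (resources, files, or database) sits behind it:

    using System.ComponentModel;
    using System.Drawing;
    using System.Windows.Forms;

    public class ImageListProxy : Component
    {
        private readonly ImageList _local = new ImageList();

        // The form binds its list controls to this local ImageList as usual.
        public ImageList ImageList
        {
            get { return _local; }
        }

        // Copy the requested images out of the shared, global store into the local list.
        public void Bind(params string[] imageKeys)
        {
            foreach (string key in imageKeys)
            {
                Image image = GlobalImages.Get(key);   // hypothetical application-wide image registry
                _local.Images.Add(image);
            }
        }
    }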

The more cool stuff I read about WPF, the more I think that new controls, components, and other simple tools could be included in Windows Forms to accomplish many of the same goals.  To an extent, of course, but definitely worth pursuing.  Windows Forms is going to be with us for a while longer.

Posted in Compact Framework, Invention-A-Day, Object Oriented Design | Leave a Comment »

Invention-A-Day: Social Networking Protocol and Super Client Shell

Posted by Dan Vanderboom on December 18, 2007

There has been an overwhelming explosion of social networking websites in the past several years.  Myspace, MS Live Spaces, Zaadz, Facebook, Flickr, LinkedIn, … the list goes on forever, it seems.  How many of these things do I need to create an account for, anyway?  Like I don’t already have a bunch of e-mail addresses to keep track of—now I need to check my social network sites for messages and connect to people I know (or those I’d like to meet).

I don’t know if this is an invention per se or if I’m just seeing the convergence of a trend, but with the huge amount of overlap in social networking services, I’m envisioning some kind of standardized protocol for publishing personal profiles, sharing group folders and web-page-zones, defining schemas and permissions for contacts and groups, etc.  Just as we’ve standardized on protocols for e-mail, web content management and syndication, file transfers, newsgroup article distribution, P2P content sharing, and other information technologies, I’d like to see a standard emerge for various aspects of what we call “social networking”.  I have many choices when it comes to Bittorrent client software, but each of those clients communicates on the same protocol over the same network, and so they’re interoperable.

This is the age of interoperability, is it not?  I’d like to define my profile once, and publish it to all of the social networking websites out there.  Better yet, give me a thick client with a beautiful cinematic experience WPF interface so that I can synchronize some of that data to use offline, sync it to my Windows Mobile phone, etc.  How many times do we really need to define a set of contacts?  In our thick client e-mail, our web mail, our company CRM, and each of our social networking sites?  How ubiquitous is the need now to share files with individuals or groups, for business or personal reasons?  All of these social technologies are now scattered and fragmented, with no coherent vision of pulling them all together, it seems.

What we need is a Super Client.  Imagine an extensible user interface shell like Outlook or Visual Studio, perhaps based on WPF Acropolis or a similar effort, into which modules can be plugged.  Allow treeviews to be extended to include new nodes and context-menu options, etc., and all of this social technology plugged in together for deeper integration, shared but extensible schemas, and a common user experience.  Wherever I read and write e-mails, that’s where I want to read newsgroups and blogs, listen to podcasts, share photos, tasks, calendar appointments, contacts, Internet folders, and more.

In fact, why not make this an operating system service?  The ability to dock windows with/on/to each other, group them together, load and close them in persistent groups, and launch them from each other through simply-defined associations?

A unified Super Client user interface shell (or OS shell service) plus social protocol.  That’s my invention of the day.  Or my prediction.  Take your pick.

Posted in Invention-A-Day, Problem Modeling, User Interface Design | Leave a Comment »

Multithreaded Design: Dedicated Task Threads or Bucket Brigade Strategy?

Posted by Dan Vanderboom on December 17, 2007

A few days ago, I took a dive into building a new software product.  Its aim is mobile developers, such as myself, whose applications access databases on those mobile devices.  After six weeks of development, v1.0 will be released (by the end of January, 2008).  More details on that product to come later.

One of my goals is that its user interface should be very responsive.  When commands take a while to complete, they’ll need to execute on some other thread.  It’s a .NET Framework application, and in that environment, only one thread can update the user interface, so it’s a rare resource that needs to be protected from doing too much work that doesn’t fit that primary purpose.

I’ve written a fair amount of multithreaded code in the past.  Working with RFID hardware before the “Gen 2” standard, on pre-release Symbol (now Motorola) devices, I figured out how to integrate with the RFID radio at a low level, and only by using several threads was I able to obtain acceptable performance.  After wrestling with the issues like race conditions, deadlocks, and other complexities, I have a healthy respect for the amount of planning that’s required to make it work correctly.

My new product consists of three major layers: a user interface layer, a service layer, and a network communication layer.  The Command pattern will be used to publish and service commands.  Actions in the user interface will enqueue command objects to the service layer, where they’ll either be processed immediately or passed to another node in the network for processing remotely.

A simplification of this scenario is illustrated in the sequence diagram below.  Each color represents a separate thread.

DataService multithreaded sequence

The red user interface thread does minimal work.  When it calls the DataServiceProxy to request data, a command object is enqueued and the thread is allowed to return immediately to the user interface.  The blue line represents one thread in a larger pool of worker threads that grows or shrinks depending on the command queue size, and is dedicated to processing these commands.  In this example, the DataServiceProxy makes a call to the TCPConnection object and returns back to the pool.  When a message does arrive, this happens on a thread owned by the TCPConnection object.  Because we don’t want this thread to “get lost” and therefore be unavailable to process more TCPConnection logic, we enqueue a response object and return control immediately.  Finally, the DataServiceProxy object, using one of its threads, fires a MessageReceived event, which in turn calls Invoke so that it can update the user interface on the only thread that is capable of doing so.

I like using Sequence diagrams like this to plan multithreaded work.  It clears up issues that you wouldn’t normally be able to visualize clearly.  The first (and worst) mistake would be to allow the UI thread to cross all the way through to the TCPConnection object, making some blocking call while the network connection times out, throws an exception, handles the exception, etc.  Meanwhile, the user is waiting until they can interact with the screen again, waiting for the hourglass to go away so they can get on with their work.  By assigning specific roles to threads, and creating mechanisms by which threads can be kept within specific boundaries, their behavior is easier to predict and manage.
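
In code, the hand-offs in the diagram above boil down to a couple of queues, a few dedicated threads, and one Invoke at the very end.  Here is a heavily condensed sketch; Command, Message, MessageEventArgs (an EventArgs subclass), and TCPConnection are stand-ins for the real types, and the pool management that grows and shrinks with queue size is omitted:

    using System;
    using System.Collections.Generic;
    using System.Threading;

    public class DataServiceProxy
    {
        private readonly Queue<Command> _commands = new Queue<Command>();
        private readonly Queue<Message> _responses = new Queue<Message>();
        private readonly TCPConnection _connection;

        public event EventHandler<MessageEventArgs> MessageReceived;

        public DataServiceProxy(TCPConnection connection)
        {
            _connection = connection;
        }

        // UI thread: enqueue the command and return to the user interface immediately.
        public void Execute(Command command)
        {
            Enqueue(_commands, command);
        }

        // TCPConnection's receive thread: enqueue the response and get out fast,
        // so the connection's thread never gets "lost" in our logic.
        internal void OnResponse(Message message)
        {
            Enqueue(_responses, message);
        }

        // Worker threads owned by the proxy drain the two queues.
        private void CommandLoop()
        {
            while (true)
                _connection.Send(Dequeue(_commands));
        }

        private void ResponseLoop()
        {
            while (true)
            {
                Message message = Dequeue(_responses);
                EventHandler<MessageEventArgs> handler = MessageReceived;
                if (handler != null)
                    handler(this, new MessageEventArgs(message));   // the subscriber calls Control.Invoke before touching the UI
            }
        }

        private static void Enqueue<T>(Queue<T> queue, T item)
        {
            lock (queue) { queue.Enqueue(item); Monitor.Pulse(queue); }
        }

        private static T Dequeue<T>(Queue<T> queue)
        {
            lock (queue)
            {
                while (queue.Count == 0) Monitor.Wait(queue);
                return queue.Dequeue();
            }
        }
    }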

I don’t know a lot about how ThreadPools behave, but I’m going to be looking into that in detail over the next few weeks, and if I come across anything particularly noteworthy, I’ll write about it.  My instinct is to continue to use this pattern of thread role assignment, especially for my new product, but there’s another pattern I’ve been thinking about that may also work well.  A friend of mine at Bucketworks told me a while back about the efficiency of bucket brigades, and about how intensely they’ve been studied and found to create self-organizing and automatically-optimizing lines of workers.

Each Thread Is A Worker

The way I’ve been modeling thread work, I treat each thread like a worker: the UI worker hands off a job description to a DataServiceProxy worker, who then does something with it and passes it off to a TCPConnection worker, who in turn does some work with it and hands it off (over a network in that case).  So what happens when the scenario becomes more complicated than the simple sequence diagram above?  What if I have four or five hand-offs?  Is the assignment of specific threads to specific roles really the best use of those threads?  The UI thread is stuck, but the rest of them could technically move around.  Perhaps they’re devoted to one task at a time, but when a task is completed with nothing in that queue, could the worker thread be moved to perform another task?  So I started thinking about how a bucket brigade strategy may help.

Threads aren’t exactly like people workers, so I have to be careful about the analogy.  It’s probably no big deal processing-wise for threads to be sleeping if they’re not used, but each thread does consume some memory, and there must be some kind of overhead to deallocate and reallocate their thread-local resources.  Would it be better to share a single thread pool among a connected linkage of worker objects, or for each service object to allocate its own thread pool?

Will this work?  Bucket brigades tend to work best when the density of work is high and the amount of time to complete a task is relatively small and constant.  If a task takes too long to complete, the line can more easily become unbalanced.  I’m going to be thinking more about this possibility, and come up with some objective measurements that I can use to compare the approaches and determine when this would be appropriate in a thread-management strategy (if at all).

Bucket Brigades In Business

If you want some background on bucket brigades and their use in manufacturing and assembly lines, check out this paper by John Bartholdi and Donald Eisenstein.  They explain how expensive time-motion studies are done to balance assembly lines in large organizations to maximize throughput, and how this needs to be adjusted or redone when the length or configuration of work changes, or when worker productivity varies.  With bucket brigades, however, there’s less need for planning because the strategy “spontaneously generates the optimal division of work”.  Pretty neat stuff.

The list of companies using bucket brigades is impressive: McGraw-Hill, Subway (yes, the sandwich company), Readers Digest, Ford Customer Service Division, The Gap, Walgreen’s, Radio Shack, etc.

If anyone’s using this kind of strategy to manage work in software systems, I’d love to hear about it.

Posted in My Software, Object Oriented Design, Problem Modeling, Software Architecture, SQL Server Compact, User Interface Design | 1 Comment »

Invention-A-Day: Ketchup-And-Mustard-In-One Bottle

Posted by Dan Vanderboom on December 17, 2007

I used to do something called Invention-A-Day, where I’d come up with at least one new gadget, service, business idea, etc., every day.  I’ve been slacking for a few months, but one came to me today.  It’s a ketchup-and-mustard-in-one bottle.  Two compartments for the two liquids, two nozzles, and some ability to open or close each valve.  With both nozzles pointing toward a center point, you could apply both condiments at the same time to your hamburger or brat.

The great thing about Invention-A-Day is that it requires no obligation to implement your idea, and it therefore becomes a fun game that loosens up the mind and encourages creativity.  As long as you believe it’s original, your idea is good.  I can’t count the number of inventions I’ve made where I found out later they already existed.  In fact, that’s how I found my Fretlight guitar.  After inventing it, I guessed that it would be such a great idea that someone had to have invented it already.  I was right, and a good thing, too!

Another advantage is that it gives you practice with design.  The more fun or exciting your idea is, the more likely you’ll pursue thinking about it, figure out how it would work, speculate about the obstacles, and so on.  Try it!  You might enjoy it!

Oh, and if you ever find one of my inventions out there already, post a comment and tell me about it.

Posted in Invention-A-Day | 7 Comments »

Data Synchronization For Flexible Back-End Integrations

Posted by Dan Vanderboom on December 15, 2007

Data synchronization is some of the most difficult logic to write.  There are many interactions and transformations to express, and many factors to consider.  This article discusses synchronization in the context of integrating with third-party back-end software such as ERP and CRM systems.  If you find yourself responsible for creating and implementing synchronization strategies, it will save you a lot of time to list and consider each of the issues.  I’ve spent the past three years or more thinking about and planning different aspects of synchronization logic, at different stages of product maturity, and conversations occasionally fire up about it, with diagrams being drawn and plans being outlined and reworked.  As we implement it, many aspects have emerged and occasionally surprise us.

Existing & Upcoming Technologies

Remote Data Access (RDA) is pretty nice.  In simple applications, I have no doubt it serves its purpose well.  But when we’re talking about enterprise-scale applications that need complete control over synchronization behavior (such as collision handling), as well as data shaping/transformation, more is needed.  Merge replication provides the ability to add some custom collision handling, but it requires schema updates and table schema locks.  This is unfriendly to the back-end systems that we integrate into.  We want to have as light a footprint on external systems as possible.  What would happen if our customer’s DynamicsGP database upgrade failed because we were locking the tables with synchronization mechanisms?  Not a great idea.  This is really too bad, because so many of the ugly details are taken care of nicely by SQL Server replication.

Microsoft Synchronization Services looks like a fascinating new unification of synchronization services for the Microsoft platform, but unfortunately it doesn’t look like Windows Mobile will be supporting it for a while.  We needed a solution to deliver to customers now and couldn’t wait around, which is how we came to be involved with this ourselves.

I also needed to make sure that multiple back-end company databases could be supported, or that it could run viably without any back-end system.  How do you easily set up SQL Server merge replication with a growing set of back-end databases?  Can databases be updated easily?  It’s about more than product functionality.  It’s also about the development process and the costs associated with accurately performing upgrades (without losing data or falling into an unusable state).  Handshakes, recovery tactics, and other protocol details can become sophisticated and difficult to troubleshoot.  More about this later.

Data Transformations

What data do you need?  How can you map from source and destination of data, plus any transformations in between, in a way that’s as transparent and efficient as possible?  This will depend on whether you use an ORM library or direct SQL access.  Do you store these data replications centrally or in a distributed way?  If distributed, do they themselves get synchronized in some default way?  How easy is this metadata to maintain and refactor?

Security & Performance

If your data isn’t secure, you don’t have a viable enterprise system.  Not only does data need to be encrypted (and those keys managed well), but you need to restrict access to that data based on roles, users, and ultimately permissions.  Sarbanes-Oxley has some strict guidelines that you’ll have to play along with for any of your publicly-traded customers.

Another major concern is performance.  Because mobile devices may be synchronizing over slow connections (cellular modems and cell phones), which can be 50 times slower than typical high-speed connections, synchronization speed is crucial.  You may be pulling tens or hundreds of thousands of rows of data without the right shortcuts in place.  These are the typical ones:

  • Minimizing scope of data vertically (data filtering) and horizontally (data shaping).  Data can be filtered by any other aspect of your data model that makes sense, and because we don’t always need every column on the server, we can store only that subset we absolutely require on the mobile device or other client node.
  • Remoting technology choice.  Binary serialization is much faster than XML, but requires third-party or custom solutions on the Compact Framework.
  • Compression of messages between client and server.  Even with the additional processing burden to handle the compression, the overall improvement in throughput is significant (see the sketch after this list).
  • Multithreading of synchronization tasks.  Some tasks can be performed in parallel, while some tasks (and in some contexts, whole synchronizations) can be performed while the user is doing something else (where it’s safe, such as sitting idle in the menu).
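
To make the compression point concrete, here is a hedged sketch of the server side using GZipStream.  System.IO.Compression is not part of the Compact Framework, so the device end would need a third-party or hand-rolled equivalent:

    using System.IO;
    using System.IO.Compression;

    public static class MessageCompressor
    {
        public static byte[] Compress(byte[] payload)
        {
            using (MemoryStream output = new MemoryStream())
            {
                using (GZipStream gzip = new GZipStream(output, CompressionMode.Compress))
                {
                    gzip.Write(payload, 0, payload.Length);
                }
                return output.ToArray();   // safe after the GZipStream has been closed and flushed
            }
        }

        public static byte[] Decompress(byte[] compressed)
        {
            using (MemoryStream input = new MemoryStream(compressed))
            using (GZipStream gzip = new GZipStream(input, CompressionMode.Decompress))
            using (MemoryStream output = new MemoryStream())
            {
                byte[] buffer = new byte[4096];
                int read;
                while ((read = gzip.Read(buffer, 0, buffer.Length)) > 0)
                    output.Write(buffer, 0, read);
                return output.ToArray();
            }
        }
    }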

Simplifying Data Flow

It’s tempting to let any data object update any other, but in information systems that are critical to the success of a business, every update typically requires some traceability and accountability.  Transactions are used to formally define an addition or update.  In integration scenarios, you usually hand off a message which has second-order effects on other tables in that system when executed.  How this is done is not usually visible to you.

By stereotyping data flows and differentiating a pull of reference data (from an external system to your system or one of its clients) and a push of transaction data (from your system to the external system), the data synchronization challenge can be divided and therefore simplified.  Solving separate push and pull strategies is much easier than trying to implement a true generic merge replication.  Reference data pulls are defined as data-shaped transformations.  Transaction pushes are defined as queued invocations on third-party entry points.  Transactions can have an effect on reference data, which will then be pulled down and therefore update your own system.  Data flow is circular.
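
One way to express that split in code is a small adapter contract per back-end system.  The type names below (ReferenceRow, GenericTransaction, TransactionResult) are illustrative only:

    using System;
    using System.Collections.Generic;

    // Each back-end (DynamicsGP or otherwise) gets its own implementation of this
    // contract, so the rest of the system only ever sees the two stereotyped flows.
    public interface IBackEndAdapter
    {
        // Pull: reference data flows from the external system toward our clients,
        // already shaped and filtered for our own data model.
        IEnumerable<ReferenceRow> PullReferenceData(string entityName, DateTime changedSince);

        // Push: queued, explicit transactions flow into the external system's entry points;
        // the result tells us whether to commit the queue item or run compensating actions.
        TransactionResult PushTransaction(GenericTransaction transaction);
    }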

Using explicit transactions provides some other advantages.

  • We are able to protect the ERP/CRM from our system.  We can’t assume that the back-end system will accept the messages we send it.  They might be formatted poorly, or may contain invalid data that we were unable to identify or validate.  If our attempt is rejected, we may need to execute some compensating actions.
  • We can provide an approval mechanism to support more complex business workflows, and provide checkpoints before the transactions are submitted or posted.
  • We create an audit trail of data updates for business analysis and system troubleshooting.
  • We can support integrations with different back-end products.  By defining a data model that is abstracted away from any specific back-end system’s model, we have the ability to build integration adapters for each back-end system.  This allows us to transform our generic transactions into the vendor-specific ERP or CRM transactions.  By swapping out the adapters, we can switch our whole integration to another vendor’s system.

Location-Based Behavior

Being occasionally disconnected presents some big challenges.  While connected, we can immediately submit transactions, which update reference data that we have access to in real time.  When disconnected, however, we may enqueue transactions that should have an impact on reference data.  Because these transactions don’t normally affect reference data until they run on the server, data cached on the disconnected device can become stale, and useless after a while in many cases.

Just as ASP.NET developers learn the page request event lifecycle, it’s easy to see why a transaction (in the journey that it takes) could have different behaviors in different environments and at different stages.  On the handheld, queueing a transaction may need to make an adjustment to reference data so that subsequent workflows can display and use up-to-date data.  The eventual success of that transaction can’t be guaranteed, but for that unit of work, that daily work session, it’s as close to the truth as we can get.

Detecting the state of transactions in the host system you’re integrating into can be tricky, too.  Sometimes when you save or enqueue a transaction in the back-end system, you get immediate changes to its own reference tables; other times it updates reference data only once the transaction is approved and “posted”.  Only experience and experimentation in working with these systems (lots of trial and error) can give you the necessary insight to do it right.

Schema Versioning

You should define your data schema correctly from the beginning, and then never change it.  (Yeah, right.)  Unfortunately, requirements change, scope expands, products follow new paths, and schemas will have to change, too.  This means you’ll need some way of identifying, comparing, and dealing with version conflicts.  Do you define a “lock-down” period during which existing transactions can be submitted to the server but new transactions can’t be started?  What happens if the server gets updated and then a client with the old schema finally connects?  Can you update the schema with the data in place before submitting the transaction, or have you defined a transaction version format converter?  Can schema upgrades deal with multiple schema version hops?  Can it roll back if necessary?  Do you tell your customers “tough luck” in some of those scenarios?
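
As a small, hedged example of how the first of those questions might be answered: client and server exchange schema versions at connect time, and a simple policy decides whether to sync, convert, or refuse:

    using System;

    public enum SyncDecision
    {
        Proceed,              // versions match: synchronize normally
        ConvertThenProceed,   // client schema is behind: run converters/upgrades first
        RefuseServerBehind    // server schema is older than the client's: block until the server is upgraded
    }

    public static class SchemaVersionPolicy
    {
        public static SyncDecision Negotiate(Version clientSchema, Version serverSchema)
        {
            if (clientSchema == serverSchema)
                return SyncDecision.Proceed;

            return clientSchema < serverSchema
                ? SyncDecision.ConvertThenProceed
                : SyncDecision.RefuseServerBehind;
        }
    }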

Summary

This discussion was meant as a rough outline, a preliminary introduction to some of the issues, and is not meant as a comprehensive account of data synchronization in distributed architectures.  Fortunately, there is a manageable core of useful functionality that can be implemented without solving every conceivable problem mentioned above.  There are always opportunities for optimization and tightening things down.  I’ll be returning to this topic many times as I discover and work with new scenarios and design goals.

Posted in Compact Framework, DynamicsGP, Object Oriented Design, Problem Modeling, Software Architecture, SQL Server Compact | 1 Comment »

Panatracker – Mobile Inventory/Asset Tracking System

Posted by Dan Vanderboom on December 14, 2007

For the past three and a half years, I’ve been developing software for Panatrack to track inventory, assets, time and attendance, and more, using primarily Symbol PDAs with integrated barcode and RFID scanners.  This software system is called Panatracker, and it has modules that integrate into DynamicsGP (formerly called Great Plains) as well as modules that can run independently of a back-end system.  Here are a few screen shots of the splash and menu screens:

Panatracker Splash | Panatracker Menu

Pretty, isn’t it?  One of the reasons we consistently win when going up against our competitors is the nice-looking, easy-to-use interface.  We take advantage of the color touch screen, making sure everything is easy to manipulate with a simple touch, without having to pull out the stylus (which is annoying and not very practical in warehouse and retail environments).  The menus have big, easy-to-push buttons (even when wearing gloves, while working in a freezer, for example), and all workflows are streamlined for extreme efficiency and an intuitive feel.  Compare this to old monochrome telnet applications that are all text and provide poor formatting, navigation, and connectivity options.  It’s displayed in the picture above running on a Symbol MC50, but we also deploy to the MC70, the MC9090 (which looks like a gun with its trigger handle), and occasionally other devices.

This has been a labor of love, and it shows: in breadth of functionality, ease of use, modularity, upgradability, extensibility, and much more.  It’s been designed with solid object-oriented principles and is positioned to remain a significant and relevant product for the long run.  It’s built upon a highly reusable framework; new transactions and supporting user interface screens can be plugged into its shell easily.  It operates not only in a connected environment (over a wireless network), but also supports occasionally-connected workflows, using data synchronization so that it can be used when disconnected from the server.  Data is collected out in the field, and then synchronized over a cellular network in a secure manner once a connection is available again.

There is a complete administrative website as well that provides access to reports, centralized configuration and management of all handheld devices, and more.  Here’s a teaser screen shot of that (shrunk down a bit to fit better on the blog page):

Panatracker Website

It is growing so rapidly now (in functionality as well as sales) and is always exciting to work on.  I remember looking at an empty shell, a single menu item, and now we have to use multiple menu levels, grouping our workflows into several modules.  Pretty soon we’ll support so many different supply chain workflows that we’ll run out of menu space!

The retail RFID module of Panatracker was even shown on TV in the news.  It’s about 3:15 into the video clip.

Before Gen 2 of RFID came around, before Symbol’s MC8069R hardware went into mass production, we got our hands on one and I figured out how to integrate with it, in a realistically performant way (using several threads and some clever tricks), when companies many times our size couldn’t figure it out and even came to us for help.  We got several contracts with large companies that I don’t think I can name publicly, but one of them was a major big-box retailer (hint: their/our app was in the news).

But ultimately, RFID is a difficult technology to justify.  Once you’ve waded through the hype, you realize that the engineering requirements are more difficult to satisfy than one would optimistically expect.  Radio waves are easily reflected and distorted by metals, liquids, special surfaces, and magnetic sources.  You can confirm failed writes to tags, but you can’t always confirm a successful write, nor can you easily determine what tag or tags were written to.  If many tags are in the vicinity, it’s difficult to isolate and verify which tags you’re reading.  High-density environments, we call them.

One project I was involved with was based upon a Symbol Apex device (Apex may be a code name and not a model name), which contains a proximity sensor, an accelerometer, and an RFID radio (plus a wireless network radio) running Windows Mobile.  To minimize the damage done to product moved with clamp trucks in warehouses (millions of dollars per year), we would need to detect when the clamp truck slowed down and approached product, activate the RFID scanner, determine the number and stacking configuration of products, look up in a database the correct amount of pressure to apply, and then activate the clamp device to close with the correct pressure.  The largest obstacle and challenge was the chosen device and the lack of leeway and budget for additional sensory devices, which limited our ability to realistically determine the stacking configuration.  How do you count RFID tags when there are likely to be many others within reading range?  Can you come up with a scheme for raising and lowering the radio power settings depending on the mode the device is in (what function it is trying to accomplish)?  How accurately can you model the physical geometry to isolate what you’re aiming at from a fast moving vehicle with a 400–600 MHz processor that is already overburdened with tasks?

Mobile devices are a challenge not just because of their space-constraint-caused limitations, but because they are mobile and therefore involved more in dynamic situations and environments.  A computer that sits stationary on a desk isn’t very interesting, but there are some cell phones that really get around and are starting to be used in some interesting ways.  The trend is growing exponentially.  Mobility is the seed of digital ubiquity, and we all know deep down that’s where we’re headed.  But that’s just futurist speak.  The point is that we’re at the beginning of a very exciting thing.  The technology seems clumsy and immature to me right now, but I have no doubt that we’ll see large strides forward in the near future.  Google’s Android open-source operating system and development platform looks fantastic, and their marketing is brilliant: to give away millions of dollars in programming contests to build applications for it.  I have a feeling this is going to grow a huge community and many good products, and that’s coming from a real .NET-C#-Microsoft fan.

Posted in Compact Framework, DynamicsGP, My Software, User Interface Design | 1 Comment »

Deployment Madness in Compact Framework

Posted by Dan Vanderboom on December 13, 2007

This is a continuation of my article on Project Reference Oddness.

CAB vs. DLL Deployment

Back in Visual Studio 2005 (like that was a long time ago), I noticed an odd behavior with deployments of Compact Framework applications.  Normally when you add a reference to a DLL and you deploy that application, the DLL will be copied to the selected device, whether this is a real device or your emulator.  In some cases, though, Visual Studio will deploy a CAB file.  I searched high and low for a screen to give me visibility into this process, and hopefully for some settings to manipulate these behaviors, but I came up empty.  Finally I broke down and scanned the contents of my entire hard drive for mentions of a particular DLL file, and I found the conman_ds_package.xsl file hidden away in a little nook at:

c:\Users\USERNAME\AppData\Local\Microsoft\CoreCon\1.0

I Googled parts of the path, the filename itself, and found nothing.  (I see there are a few more hits out there now.)

It seems this file keeps track of registered CAB files, and the DLL files they contain.  When you reference a DLL listed in this file, Visual Studio will deploy the CAB file containing that DLL, instead of the DLL directly.  In my case, I wanted to create my own CAB file that contained my application as well as all of its dependencies.  It can be a pain to store and deploy a bunch of CAB files, and to install them in the correct order to avoid warning messages that scare end users.

If you’ve installed a third party library that registers its CAB files in this way and you would rather deploy the DLL files (which is also faster during debug-test cycles), simply go into this XML file and find the PACKAGE node that corresponds to the CAB file, and remove the entire PACKAGE.  Do this carefully, however, and notice that some of these nodes can be fairly large.

Disclaimer: edit this file at your own risk.  This is not a publicly documented area of Visual Studio.  I wish it was, as I think building a tool around it to make it visible and manageable would be nice.  But there you have it.

Bad Paths & Assumptions

I had Visual Studio 2005 installed on my laptop when I decided to install 2008.  I knew I could run them side-by-side, and I wasn’t sure if Visual Studio 2008 was going to work 100% (shouldn’t assume), so I kept them both on for a while.  After getting comfortable with the new version, I decided to uninstall VS2005.  I thought I’d probably need to reinstall it at some point, but it’s been a little corrupt for a while (toolbox shows no icons, can’t view the designer for .aspx pages, etc.), and I was itching to remove it to prepare for a reinstall.  The more I learn to depend on VS2008, the less I want to install the old version.  But I ran into a situation today that really made me scratch my head.

It wouldn’t have come up at all, except that I occasionally like to wipe out a device (even cold booting it).  I deployed an application to the device and got ready to fire up the debugger.  During the deployment of the Symbol Mobility Developer Toolkit, though, I got an error that it couldn’t find a CAB file.  (Yes, it was using the rule defined in the conman_ds_package.xsl file, mentioned above.)  The output window told me that it looked in this location:

C:\Program Files\Microsoft Visual Studio 9.0\SmartDevices\SDK\Symbol Technologies\wce400\armv4

There was no such directory as Symbol Technologies in SDK.  VS2008 and its 9.0 folder are very new, so there had never been such a folder, and I hadn’t installed the Symbol library for quite a while, so what could possibly be pointing to this location?

As I’ve questioned before, why on Earth would you want to store something like a third party library in a folder that is specific to one version of your programming IDE?  This library is equally valuable to any version of the IDE, so it should be stored somewhere on its own… in some sort of Symbol folder, perhaps.

I decided to uninstall the Symbol library and reinstall, hoping it would reinstall itself into the new VS2008 folders for smart devices.  To my dismay, I got an error message: “Unable to install SMDK – Visual Studio not found.”  I guess VS2008 doesn’t count.  The Symbol installer must look for that specific path.

The upside of this is that, by uninstalling the Symbol SMDK, those entries are no longer in the conman_ds_package.xsl file, and now Visual Studio just deploys the DLLs like I wanted it to anyway.  For the Symbol library, this is okay.  But for other third party components, you may really need the installer to go in.  There may be supporting tools, such as external control designers.  It sure would be nice if you as the developer had real control over all of these behaviors, instead of it being hidden away in undocumented files.

I ran across similar issues with SQL Server Mobile 3.0 and SQL Compact 3.5.  First of all, converting projects from VS2005 to VS2008 seems to want to update your mobile SQL Server from 3.0 to 3.5.  We use an object-relational mapping (ORM) library called eXpress Persistent Objects (XPO) by DevExpress, a (for the most part) wonderful ORM that supports the Compact Framework, and XPO seems to target 3.0 for the time being.  When I wiped out my test device, SQL Server Mobile 3.0 disappeared, too.  Then I tried deploying, and instead of deploying that version of SQL Server Mobile, it just skipped that part, ran the application, and gave me an exception message: “Can’t find PInvoke DLL ‘sqlceme30.dll’.”

Why didn’t it deploy that CAB?  Because VS2008 can’t find it.  That CAB is located inside the Visual Studio 2005 Program Files folder.

Sql server mobile 3.0

This is what the folder structure looks like after you’ve uninstalled VS2005.  Good thing some of those folders stick around instead of getting deleted!  But really they should be elsewhere.  When deploying the 3.5 version of SQL Server Compact, a much better source folder is used (under c:\Program Files\):

Sql server 2 and 3.5

Now that makes a little bit more sense, doesn’t it?  Except… we have 2.0 and 3.5, but where is 3.0?  Gosh, I just don’t get it.  How odd!

So my advice to you is this: 1. Don’t try to make sense out of it.  Just accept it and move on.  2. If you need to continue using SQL Server Mobile 3.0, deploy the CAB manually because Visual Studio 2008 doesn’t seem to want to.

Considering that the past few ranting articles are the result of, and reflections on, a single day of development in the world of the Compact Framework, and also considering that this isn’t such an atypical day, this is why I sometimes laugh when companies underestimate the effort and explain that they might go ahead and tackle some CF project on their own.  After all, their developers know the .NET Framework, so how different could the Compact Framework be?  It’s just working with less memory and smaller screen sizes, right?

Good luck with that.  More often than not, these projects fail horribly and they either find experts in the field or cancel the projects altogether.  It’s too bad, because there’s a huge amount of potential with mobile devices.  But the development experience is going to need to get a hell of a lot better before it really takes off.  And Microsoft is going to have to make some pretty bold moves with competitors like Google handing out millions of dollars in prize money to develop on their Android mobile platform.  But I have to say, the Silverlight on CF demos look very promising!  Check it out:

http://blogs.msdn.com/lokeuei/archive/2007/05/03/checkout-silverlight-on-windows-mobile.aspx

http://blogs.msdn.com/robunoki/archive/2007/05/01/silverlight-and-the-compact-framework.aspx

Posted in Compact Framework, Visual Studio | 1 Comment »