Critical Development

Language design, framework development, UI design, robotics and more.

Bad Ass Development Rig

Posted by Dan Vanderboom on August 23, 2008

[The powerful workstation described in this article is now for sale on eBay! Click here to see!]

The Need For Speed

I’m not a gamer, I’m a developer.  When I’m on my computer for eight to ten hours a day, I’m typically not rendering graphics, but rather writing, compiling, and testing code.  The writing part hardly requires any resources, but compiling code completely pegs out one of the processors on my dual core laptop (a 2.4 GHz Dell Latitude D830).  Parallel compilers exist, but C# in Visual Studio is not one of them, and by the sound of things, won’t be for quite some time.  This means that if I’m going to see a significant performance increase of this critical task, I’m going to need the fastest processor I can get (and overclock).
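For what it’s worth, MSBuild 3.5 does have a /maxcpucount (/m) switch that builds independent projects in parallel, even though the C# compiler itself stays single-threaded within each project (and Visual Studio’s in-IDE builds don’t take advantage of it for C#).  Here’s a rough sketch of how a command-line build could be timed with and without it; the MSBuild path and solution name are placeholders, not my actual setup.

```csharp
using System;
using System.Diagnostics;

class BuildTimer
{
    // Times a full command-line rebuild of a solution.
    // The MSBuild path and solution name below are placeholders.
    static TimeSpan TimeBuild(string extraArgs)
    {
        var info = new ProcessStartInfo
        {
            FileName = @"C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe",
            Arguments = @"MySolution.sln /t:Rebuild /nologo /verbosity:quiet " + extraArgs,
            UseShellExecute = false
        };

        var watch = Stopwatch.StartNew();
        using (var build = Process.Start(info))
        {
            build.WaitForExit();
        }
        watch.Stop();
        return watch.Elapsed;
    }

    static void Main()
    {
        // Default: one build node, so independent projects build one after another.
        Console.WriteLine("Serial:   " + TimeBuild(""));

        // /maxcpucount uses one node per core; csc within any single project
        // is still single-threaded either way.
        Console.WriteLine("Parallel: " + TimeBuild("/maxcpucount"));
    }
}
```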

Compiling code is also disk intensive, especially toward the end of a build when output files are written to disk.  I ran some benchmarks of C# builds (in Visual Studio) of SharpDevelop.  I chose this code base because it’s fairly large, similar to my own solutions, and it’s open source so others can repeat our tests.  We tracked utilization of individual processors, disk I/O, etc.
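If you want to repeat that kind of measurement yourself, the Windows performance counters expose the per-core CPU and disk numbers.  The following is just a rough sketch of a sampler you could leave running while a build executes in another window; it’s not the exact harness we used, and the counter instances assume a dual-core machine.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class BuildMonitor
{
    static void Main()
    {
        // Per-core CPU utilization and total physical-disk write throughput.
        var cpu0 = new PerformanceCounter("Processor", "% Processor Time", "0");
        var cpu1 = new PerformanceCounter("Processor", "% Processor Time", "1");
        var disk = new PerformanceCounter("PhysicalDisk", "Disk Write Bytes/sec", "_Total");

        // The first NextValue() call always returns 0, so prime the counters.
        cpu0.NextValue(); cpu1.NextValue(); disk.NextValue();

        for (int i = 0; i < 300; i++)   // roughly five minutes of one-second samples
        {
            Thread.Sleep(1000);
            Console.WriteLine("{0}\tCPU0={1,5:F1}%\tCPU1={2,5:F1}%\tWrites={3:N0} B/s",
                DateTime.Now.ToLongTimeString(),
                cpu0.NextValue(), cpu1.NextValue(), disk.NextValue());
        }
    }
}
```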

Why am I so hell bent on compiling code as fast as possible?  Good question.

[Photo: motherboard]

Micro Development Cycles

Software development consists of nested cycles.  There are organizational cycles that envelop project cycles that envelop several-week development sprints, and at the smallest level, it really all boils down to executing many micro development cycles every day.  You start with a goal such as fixing a bug or implementing a feature, do some generally-informal design in your head, plan out your work (again, typically in your head), write code for a few minutes, compile and fix until the build succeeds, deploy if necessary, test the changes, and repeat this sequence anywhere from 20 to 50 or more times in a productive day.  If you do test driven development, you write your tests before the functional code itself, but the cycle is otherwise essentially the same.

[Diagram: Development Cycle]

Some of these steps take longer than others, and some of them, like designing or thinking about what code to write and where that logic belongs, are creative in nature and can’t be rushed.  But when we work on larger solutions in Visual Studio (and other tools), the time the tools take to perform critical processing (compiling code in this case) can lead to Twiddling Thumb Syndrome (TTS).  This is not only an unfortunate affliction, it’s also one that can cause Tourette-like symptoms, including swearing at one’s computer, at one’s software tools, and banging on things to entertain oneself while waiting for things to finish.  Sometimes, depending on your projects’ interdependencies and other details, build times can shoot up to several minutes (or worse) per build.  For a long time, I was getting build times in the 5-7 minute range, and they only grow as solutions become larger.  Repeat this just 20 times (during which your computer is totally unresponsive) and you’ll quickly get the idea that you’re wasting lots of valuable time (two hours a day!).

Clearly this is unacceptable.  Even if my builds only took a minute, all of the aggregated time spent waiting for progress bars of all kinds (not just compiling) can add up to a significant chunk of wasted time.  In Scott Hanselman’s Ultimate Developer Rig article, which played a part in motivating me to build my own Ultimate Developer Rig, Scott hits the nail on the head:

I don’t want to have time to THINK about what it’s doing while I wait. I wait, in aggregate, at least 15 minutes a day, in a thousand tiny cuts of 10 seconds each, for my computer to finish doing something. Not compile-somethings, but I clicked-a-button-and-nothing-happened-oh-it-was-hung-somethings. Unacceptable. 15 minutes a day is 21.6 hours a year – or three full days – wasted.

I think Scott is being too conservative in his estimate.  It’s easy to waste at least 20-30 minutes a day waiting for general sluggishness, and considerably more when waiting for builds of large solutions.  If you have a computer that’s a few years old, it’s probably worse.  Thirty minutes a day is about 125 hours per year (over 3 weeks), and an hour a day is 6 weeks per year.

Flow = Mental Continuity

Look at it from another perspective.  Even if wasted time isn’t an issue, there’s still a matter of maintaining continuity of thought (and execution).  When we have a plan and are ready to act on it, but are held back behind some bottleneck in the process, we risk losing the fluid flow or mental momentum that we’ve built up.  Often, I have a sequence of pretty good ideas or things I’d like to try, but I end up waiting so long for the first step to finish, that by the time the computer is ready, I’ve lost track of my direction or next step.  This isn’t as much of a problem with long-term planning because those goals and steps tend to be written down, perhaps tracked in some kind of Scrum tool.  But when we’re talking about micro development cycles, a post-it note can be too formal (though when I’m waiting a lot, I do use these).  If we could get near-immediate feedback on our coding choices and reduce the wait time between the execution of tasks, we could maintain this flow better and longer, and our work would benefit from the increased mental continuity.

One analogy is that of reading a programming book.  Some of them are 800-1000 pages or more.  When you read one slowly, say a chapter every other week, it takes so long to read that by the time you finish chapter 10, you have a really hard time remembering what chapter 2 was all about.  But if you focus more and read through the same book in a week, then chapter 2 will still be fresh in your mind when you get to chapter 10, and you’ll be much better able to relate and connect ideas between them.  The whole book sticks in your memory better because all of its content is more cohesive in your mind.

Cost Justification

Scott created a nice computer for the price range he was shooting for, but for my own purposes, I wanted to go with something more extreme.  When I started playing with the numbers, I asked myself what the monthly cost would be for a top-of-the-line, $5,000 to $6,000 power machine.  Spread over 3 years, it comes to only $166 per month.  If you consider the proportion of this cost to the salary of a developer, figure out how much all of our unnecessary wasted time is worth, and realize that this is the primary and constantly-used hardware tool of an engineer, I think it’s very easy to justify.  This isn’t some elliptical trainer that’ll get used for two weeks and then spend the next five years in the garage or the storage shed.  This beast of burden will be put to serious work every day, and will make it easier and more pleasant to get work done.  In an age where we don’t even blink an eye at spending $1,000 on comfortable and ergonomic Herman Miller chairs, I think we’re long overdue for software engineers to start equipping themselves with appropriately-powerful computer hardware.

Compare the cost of a great workstation with the tools of other trades (carpentry, plumbing, automotive repair, etc.) and you’ll find that software development shops like to cut corners and go cheap on hardware because it’s possible to do so, not because it makes the most sense and delivers the greatest possible value.  If you’re in a warehouse and need a forklift, there’s no two ways about it.  But computers are commodities, and though they come in all shapes, sizes, and levels of power, the software you need will normally run on the slowest and most sluggish among them.

Welcome to My Home Office

Welcome to my office.  Since it’s going to appear in the background of many pictures, I thought I’d give a quick tour.  This is my brain dump wall, where many of my ideas first take form.

[Photo: the brain dump wall]

And around the corner from this room is the greatest Jack Daniel’s bar in the world, built by Christian Trauth (with a little bit of help from myself).

[Photo: the Jack Daniel’s bar]

Bad Ass Components

I decided to take a field trip one day, and drove from the Milwaukee area where we live down to Fry’s in Chicago.  This was my first time to a Fry’s.  If you’ve never been to one, just imagine Disney World for computer geeks.  They’re absolutely huge (about 70,000 square feet of computer parts and other electronics).  I bought almost everything I needed there, having ordered a few parts online before this field trip took place.

Here’s what I picked up:

Intel D5400XS “SkullTrail” Motherboard – $575
Intel Core 2 Extreme Processor (QX9775) – $1510 (Tom’s Hardware Review)
  • 3.20 GHz (without overclocking)
  • 1600 MHz FSB
  • 12 MB L2 Cache
ThermalTake Bigwater 760is – $170
  • 2U Bay Drives Liquid Cooling System
Adaptec 5805 RAID Controller – $550
  • 8-Lane PCI Express
  • 512 MB DDR2 Cache
  • Battery Backup
3 Western Digital Velociraptor Hard Drives – $875
  • 900 GB Total
  • 10,000 rpm
  • SATA
8 GB (4 x 2 GB) of PC2-6400 RAM – $400
  • 800 MHz
  • ECC
  • Fully Buffered
GeForce 9800 GTX Video Card – $250
  • PCI Express 2.0
  • SLI Ready
  • 512 MB DDR3
Coolermaster Case – CMStacker 830 SE – $350
  • 1000 Watt Power Supply
  • Lots of Fan Slots
  • Very Modular

Total Damage – $4720

This doesn’t include extra fans (still need to purchase about 11 of them), and the things I already have: a pair of 24 inch monitors, Logitech G15 gaming keyboard (nice for the extra programmable keys), mouse, CD/DVD burner, media card reader, etc.  (When I calculate the cost at $166 per month over 3 years, it’s based on a total price tag of $6,000.)

Building a Bad Ass Development Rig

In the first picture are boxes for the case (and included 1000 Watt power supply), motherboard, video card, memory, and liquid cooling system.  The next two pictures show the motherboard mounted on the motherboard tray, which slides easily into the back of the case.  Notice how huge the video card is (on the right).  It takes up two slots for width, though it doesn’t plug into two slots (I’m not really much of a gamer, so no SLI for me).  The smaller card in the picture on the right is the Adaptec RAID controller.  I chose the slots that I did to maximize airflow; when I first put the graphics card in, it was partially obstructing a fan on the motherboard, so I moved it to the edge.  This blocked a connector on the motherboard, so I ended up moving it again.  Finding the right setup was a matter of trial and error.

[Photos: component boxes, motherboard on its tray, video card, and RAID controller]

Below you can see all the power cables hanging from the case.  They’re wrapped in a strong mesh that keeps the cables bundled together for improved airflow.  On the right, you can see a swinging door with dust filters and empty spaces for four fans (up to 150mm, not included with the case).  Notice the fan on the motherboard tray, and there’s a slot for another one in the roof of the case that you can’t see.  In addition to the fans, the sides, bottom, top, and front all let air pass through for maximum airflow.  The drives on the right are Western Digital Velociraptors: 300 GB and 10,000 rpm.  When set up in a RAID 0 (striping) configuration, they should provide wicked fast disk access, which is important because I’ll be running multiple virtual machines at a time.
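Once the array is up, a quick sequential-write test is an easy sanity check that the striping is actually paying off.  Here’s a rough sketch; the file path and sizes are arbitrary, and write-back caching on the controller will skew the raw numbers.

```csharp
using System;
using System.Diagnostics;
using System.IO;

class DiskWriteTest
{
    static void Main()
    {
        // Write 1 GB in 1 MB chunks to the RAID 0 volume and report MB/sec.
        // Path and sizes are placeholders; pick a file on the array you want to test.
        const string path = @"D:\throughput.tmp";
        const int chunkSize = 1024 * 1024;
        const int chunkCount = 1024;
        var chunk = new byte[chunkSize];
        new Random().NextBytes(chunk);

        var watch = Stopwatch.StartNew();
        using (var stream = new FileStream(path, FileMode.Create, FileAccess.Write,
                                           FileShare.None, chunkSize, FileOptions.WriteThrough))
        {
            for (int i = 0; i < chunkCount; i++)
                stream.Write(chunk, 0, chunk.Length);
        }
        watch.Stop();

        double megabytes = (double)chunkSize * chunkCount / (1024 * 1024);
        Console.WriteLine("{0:F0} MB in {1:F1} s = {2:F0} MB/sec",
            megabytes, watch.Elapsed.TotalSeconds, megabytes / watch.Elapsed.TotalSeconds);

        File.Delete(path);
    }
}
```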

[Photos: power cables, fan door with dust filters, and Velociraptor drives]

Next you can see the modular drive cage, which is easier to install the drives in when it’s removed from the case (a couple screws on each side).  It’s nice that it has a fan on the front.  Overall, I’m very impressed with the case and all the attention to detail that’s gone into its design.  It was extremely easy to work with and reach everything.  It’s been several years since I’ve built a desktop computer, and I remember a lot more frustration when trying to reach inside of cases.  Notice that when I put the drive cage back in, I installed it higher up.  I couldn’t put it any higher because of power cables getting in the way, but I need a place for a CD/DVD burner and maybe a media reader anyway.  I moved it up because I’ll be installing a liquid cooling unit, and the radiator takes up two drive height units (2U).  If there’s any chance of that thing leaking (it better not!), I certainly don’t want water dripping down onto my hard drives.

[Photos: the modular drive cage, removed and reinstalled higher in the case]

Now I’m starting to wire everything up: power and reset switches, power LEDs, USB and Firewire ports, power to the motherboard and video card, power to the hard drives, and the interface cables from the RAID controller to the hard drives.  I start twist-tying the slack on cables and stuff the unused power cables into the ceiling of the case, where they stow away nicely (with more twist ties).  The right-most picture shows some of the other stuff included with the case: an IDE cable bound inside a tubular plastic sheath (for better airflow), SATA cables that I didn’t need because I used the ones that came with the RAID controller, a fan mount for one of the 11 heat sinks on the motherboard (fan not included), and a fun Do-Not-Disturb-style doorknob sign (included with the motherboard) that says “Warning: Noobs Beware.  You will be Pwned.”  And indeed you will be!

[Photos: wiring the case, plus the included accessories]

It’s finally time for some liquid cooling action.  With a motherboard called SkullTrail that was designed for overclocking two LGA 771 processors, and a single QX9775 Core 2 Extreme quad core 3.2 GHz processor (to start with), you better believe I’ll be overclocking this bad ass machine to its limit!  I’ve heard rumors that 4.5 GHz is very manageable, and am hoping to be able to pull off upwards of 5 GHz, but we’ll see how it goes (with another processor, this would total 40 GHz across 8 parallel cores).  So far, the liquid cooling tops all other components in documentation: one user guide and one maintenance guide.  And don’t forget that you can’t take a sip of the cooling fluid, or eat any of the rock candy packets that come with the hard drives.  I know it’s tempting.

[Photo: the liquid cooling kit]

I Hit a Snag

The liquid cooling unit doesn’t fit.  It’s close, and with the motherboard sitting in the way, I debated whether to let the unit hang an inch and a half out of the front of the case.  Not good enough.  Back to the drawing board!

[Photos: the cooling unit refusing to fit in the front drive bays]

The only way the cooling unit would fit flush in the case was to cut out one of the aluminum support beams along the top of the case and insert the cooling unit in the top drive bays.  This would put the liquid above my hard drives, which I was trying to avoid, but I didn’t have any choice at this point.  So we jumped in his car and stopped at his place to pick up a Dremel.  Ten minutes later we were in the garage, case stripped down and upside down, cutting away.  You can see the end result in the photo on the right, which turned out very nicely.

[Photos: cutting the support beam and the finished cutout]

Finally, the liquid cooling system fits flush in the case.  We noticed that the cooling system had a fan speed control rheostat connected to it on a wire, and thought it would be nice if we could expose that through the case somehow, so we drilled a hole and fed it through the top (near the power and reset buttons).  I found a knob that fit on it from a robotics kit I purchased a few months ago, and it even matched the color of the case.  Bonus!  You can see the new knob in the picture below on the right.

[Photos: the installed cooling system and the new fan speed knob]

Almost ready to boot up!  I’m waiting for the processor to arrive, and expect it any day now.  As soon as that comes in, I’ll be writing the next article in this Bad Ass Development Rig series, and we’ll see how much we can get this bad mamma jamma overclocked (without making it unstable, of course).  After that, I’ll be setting up the virtual machine system, all of my development environments, and then we’ll do some serious benchmarking.

16 Responses to “Bad Ass Development Rig”

  1. You bet! Many shops give developers hand-me-down machines, but it’s totally true that having a fast development box helps keep you in flow and on focus.

  2. ferruccio said

    When I looked at the price, I had to smile. There was a saying that the ideal development machine will always cost $5000. I guess that’s still true today.

  3. Doc said

    Hmmm, not sure what the value-add of eight processors is to a single-threaded development system. Seems like if a dual-core processor isn’t able to make any headway when one core is maxed out, then four or eight aren’t likely to do much better, you’ll just have more cores idling in resource lock contention.

    I think the biggest wins are going to come from the RAID — it’ll offload processing from the CPU and optimize disk access times. I don’t think FB-DIMMs will be your friend (who wants more memory latency?), but if you’re running 64-bit Windows then more memory should help a lot.

    Are you going to explore incremental configuration (start with a single CPU, 2G of memory and a single drive connected to the motherboard, and work your way up to the fully-stuffed box)? I’m interested in where the “knees” are on the performance graph.

    Thanks for an interesting article!

  4. Dan Vanderboom said

    Visual Studio can’t make use of eight cores directly, but the operating system can. By allowing the OS to schedule other processes on other cores, the one core that Visual Studio is using will be less burdened with unrelated execution tasks.

    Also, because I’ll be running some Virtual Machine system (something like VMWare ESX Server), and will be running different virtual machines (each set up with a different development or software target environment), the extra processors can be assigned to the various concurrent virtual machines so I can enjoy using those virtual machines without significant performance degradation.

    Just because my primary motivator for performance is Visual Studio (because I use it all day, every day), doesn’t mean I won’t be able to use the extra horsepower in other ways. Developing applications that take advantage of multiple cores is important, and this hardware will give me a good platform for testing my software and ensuring that it can take full advantage of this scale of multi-core system, which will become much more common in the next few years.

I don’t know a ton about memory, and have read the criticism of the mobo’s use of FB-DIMMs. I wonder if the fact that they’re fully buffered will help to offset that, and the 12 MB of L2 cache, in addition to the DDR2 cache on the RAID controller, should help tremendously with performance in general. If much more is running out of memory instead of off disk, and more can be kept close to the processor instead of fetching from RAM so often, the use of FB-DIMMs might be relatively inconsequential.

Yes, we will be collecting performance data with incremental configurations, including benchmarks of my current laptop and Brent’s 4-core system, and we’ll publish that data in a follow-up article. I’m actually starting with a single CPU (at $1500, can you blame me?), but a few months from now, I’m going to add a second CPU along with a second liquid cooling unit, a fourth Velociraptor hard drive for the RAID (I have one empty slot in the drive cage), and possibly more memory. Then I’ll write another article with benchmarks of the new setup.

  5. Pjer said

    Wicked rig man! 🙂

  6. Dave M. said

    This PC looks like it’ll haul during compiles! I’m going to download that open source project and compile it on my machine at work so I can compare my system’s times with yours, once you post the results.

    I am really considering buying a new development machine for home / work use, because I can’t stand staring at the build output any longer.

    I’m surprised that you said you can’t take advantage of multiple cores during compiling. Is that a .NET thing? We only write C++ at our shop, and it will use all available cores.

    I’m also thinking that a rig like yours with lots of RAM (BTW, what OS are you going to run?) and CPU power would be nice, because the Intel tools I have been playing with take a ton of time when crunching results. Many times I’ve had to bail out of the process because it starts swapping to disk…

    Please post the results soon! 🙂

  7. Dan Vanderboom said

    It’s a compiler thing, not a .NET thing. Each compiler is implemented differently. Some, like your C++ compiler, may run in parallel. Others, like C#, unfortunately do not. I’d be interested to see which other languages take advantage of multi-cores: VB.NET, Boo, F#, etc.

    Right now, I’m planning to run some flavor of VMWare. I’m leaning toward ESXi Server, which is now free, and gets installed on the bare metal. This seems like the best bet, since it can virtualize 4-core SMP, whereas their Workstation product only deals with 2-core VMs. But I also need to make sure I have USB 2.0 support (one of the reasons I’ve chosen VMWare instead of Microsoft’s Virtual PC), and the Workstation product’s support for dual monitors is nice (though I can use MaxiVista software for dual monitors even in ESXi Server).

    With this hypervisor as my base layer, I’ll be free to run multiple VMs: XP Pro, Vista, Windows Server, playing with early Windows 7 releases, and I even plan to install OS X to play with some iPhone XCode development. This caliber of system will make running these virtual machines relatively painless.

    I’m having some issues with (what seems like) the video card I bought, so I’ll be troubleshooting that this weekend and will hopefully have this thing powered up and ready to rumble soon for the battery of tests we have lined up.

  8. Dave M. said

I hope that the disk performance is adequate for you with ESX Server. Under VMWare Workstation / Fusion, which I run under Ubuntu / OSX, performance is pretty awful. At work on a RAID 0 array, it didn’t seem that great, either. What RAID type are you going to use? I’m probably going to have to run Windows XP x64 or Vista x64. Unfortunately, I have no experience with either of those OSes. I wonder if you can compile non-64-bit apps when running VS.NET on them.

  9. Dan Vanderboom said

    I ran Workstation for a month or so on a dual core 2.4 GHz Dell Latitude D830, and two concurrent virtual machines (plus the host OS) was tolerable but not great. That’s when I wrote the article on using immutable VMs for development:

    Misadventures in Pursuit of an Immutable Development Virtual Machine

    With three 10,000rpm drives and a nice RAID controller with lots of cache, and expansion for something like 8 drives, I should be able to find the sweet spot with all the resource utilization I actually need. Especially since if I’m doing something disk intensive (like compiling), it’s usually happening on just one of the running machines at a time. I might have a database server VM that gets busy while an application is being tested, but that still will leave me plenty of I/O bandwidth to play around with other stuff while I’m waiting.

  10. Dave M. said

Hmm… well, it will be interesting to see how compiling does on your new rig. If I were to do it again, I’d either run a dev system on the native hardware instead of a VM and use TrueImage to “roll back” my system to a perfect state when necessary, or try out the VMWare disk mode where you use a physical disk rather than a virtual one. I can’t remember what they call that, nor whether it’s still supported. I remember seeing it back in VMWare 2.0, but I don’t recall seeing it on the Linux version I use at home. I imagine that mode would have the best performance.

  11. The two bottlenecks for Visual Studio (C#) are always going to be:

    1) Hard Drive Write Speed
    2) Single Core Processor Speed

    We optimize for a couple of things with dev machines here in the office (with serious dev budgets for custom Casino Applications that have yesterday deadlines). They are pretty simple…

    First, you want MAX number of write heads running on your HDD. Write your own app to see how many files a big build generates and replaces every time. It’s sick. We use 8 Raptors or Velociraptors (Size don’t matter much with 8) attached to a good RAID card. We like the Highpoint RocketRaid 2320’s, but you can get more speed out of a more expensive one. DO NOT USE RAID 0 unless you have source control. You DO have source control, DON’T YOU?

We put an OS on RAID 1 Channels 1 and 2, along with all dev tools. RAID 5 on the remaining 6 bays gets ridiculously fast with Raptors, and provides safety on both arrays. Migrate your data and solution files to the RAID 5 array, and get cooking.

    In my experience building dev boxes on big budgets, this is by far the most important.

2nd item is Processor. Clock speed and max amount of cache on the processor are paramount… Xeon and Itanium both compile very nicely, and depending on the processor you may or may not have some performance loss with multi-core over single, but I prefer multi-core because they tend to keep your whole box in general from lagging… rip a build on one core and the others can kick in a bit so alt-tabbing through 10 additional windows (you all know you do it) doesn’t take forever. There is NOTHING more frustrating than waiting for OUTLOOK to repaint, or some mundane crap like that.

Cooling is important, settings in Visual Studio are important (Tools –> Options –> Projects and Solutions –> Build and Run –> Only build startup projects and dependencies on run), multi-monitors are a MUST. Widescreens are preferred for those long lines of code…

I think this is a pretty good build you have going here. I’d like to see how it stacks up, but I would have suggested a little more cash on the drive I/O and a little less on the mobo/CPU combo for the same money. I think you would have been a little slower and could have upgraded more in the future (try upgrading a RAID 0 array… riiiiiiight) without scrapping the whole rig. ANYTHING with a big drive array can be turned into a pretty sick server for the next go-round without shelling out too much cash.

    Memory could have been maxed out in lieu of cooling crap and overclocking for those virtual machines, but we run big dev servers with single dedicated tasks (gotta love those gamblers) per machine, so I don’t have much experience on those lines… just a guess.

Nice article. People will learn a lot here, which is kind of awesome, and you are DEFINITELY right about the importance of a developer’s time and workflow. I can put down pretty solid dollars and cents on the difference in productivity the first time I told a dev to build his dream machine. It made a HUGE difference in his day… I’d say overall 2x output from his original machine, which was not shabby, just not killer.

  12. Dan Vanderboom said

    Sounds like our findings exactly, though 8 Velociraptors is quite the setup! I was just considering whether I should buy a 4th, since I have the extra slot in my drive cage, and–like you said–adding a drive to RAID 0 means rebuilding the whole image.

And yes of course I use source control! I even intend to do weekly backups to NAS to preserve all my VMs.

I could have gone with a slower processor, indeed, but it’s rare that I build a machine, and I decided to go on the extravagant side for the hell of it. There’s more anticipation and fun in building the Ultimate Rig. If I were outfitting a team of developers, I’d probably opt for some nice Xeons.

    I actually had a problem with the motherboard, replaced that (it now posts and I get video), and now have found that the liquid cooling pump is defective. As soon as I can replace that, I’ll be installing OSes and running benchmarks for a follow-up article.

  13. […] Hardware View Into Building the Bad Ass Development Rig […]

  14. Car Crazy said

Very nice system. Impressed with your selection. Gotta scream even by today’s standards of CPUs and such. I still run a QX9650 at 4 GHz 24 hours a day with 3 GTX 285 OCXs and 4 Velociraptors in RAID 0.

  15. JiNtatsu said

    Nice rig.. Too much for me though.. can’t afford it.. Nice selection..

  16. Cameron said

    That is quite a machine! Something that I suppose I can dream about for awhile. Like a lot of those who commented, it’s not really in my price range. Let me know when you build a cost-effective model (if there is such a thing).
