Tuesday, October 30, 2007

Marginal Utility - IsDebug for .Net 2.0

Unfortunately for my co-workers (and especially the QA department), I play build engineer because we don't have a dedicated build engineer or even a build server. Small company; that's how it goes.

In recent months, I've stepped my game up. Batch files, playboy. I know how ridiculous that sounds - I was this close to having CruiseControl up and running, but I got fed up with the memory footprint it left on my laptop. It's not a trivial matter when I can time out database calls just by having a debugger running at the same time as SQL Server Management Studio with a couple of windows open. Any way you slice it, that shit is s-a-d.

But in the past, I wasn't quite as lazy as I am now. Rather than spend a couple of hours cobbling together a full build script (dear me from 3 years ago - is it really so hard to Google csc.exe?), I did it all by hand from inside Visual Studio. Clean out the destination directory, swap from Debug mode to Release mode, compile, zip it up, yadda yadda.

But wouldn't you know it? It's so simple, and yet I managed to screw it up. Multiple times. I handed QA releases that were compiled in debug mode, or that had assemblies left over from a debug build, and they got clued into it because our app works slightly differently in the two modes (BY DESIGN!!!). How was I to know? DLLs look like DLLs, people.

Eventually, I found a marvelous little utility called IsDebug out on the wild internets. Look at that! You drag them DLLs onto it and it tells you whether they're debug mode or release mode! Total win!

When I finally sat down and knocked out our app's upgrade to .Net 2.0, I was kind of bummed to find that IsDebug didn't work anymore. I recompiled - nothing. Checked file properties - still nothing.

But wait! Internets to the rescue again with the clue-by-four on what new properties I should be examining!
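For anybody else making the jump: under .Net 1.1, the mere presence of a DebuggableAttribute on an assembly meant a debug build, which (as best I can tell) is what the old IsDebug keyed off of. Under 2.0, release builds carry the attribute too, so you have to ask it what it actually says. A sketch of the idea - the file handling and console output here are mine, not the original utility's:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Reflection;

class IsDebugSketch
{
    static void Main(string[] args)
    {
        Assembly asm = Assembly.LoadFile(Path.GetFullPath(args[0]));
        object[] attribs = asm.GetCustomAttributes(typeof(DebuggableAttribute), false);

        // Under .Net 1.1, finding a DebuggableAttribute at all meant "debug build".
        // Under .Net 2.0, release builds get the attribute too, so that old test
        // screams "debug" at everything. You have to inspect its properties.
        if (attribs.Length == 0)
        {
            Console.WriteLine("Release (no DebuggableAttribute at all)");
            return;
        }

        DebuggableAttribute dbg = (DebuggableAttribute)attribs[0];
        Console.WriteLine(dbg.IsJITOptimizerDisabled ? "Debug" : "Release");
    }
}
```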

So I got Jeff's peanut butter in Jim's chocolate and here we are - IsDebug for .Net 2.0.

I've got a compiled version if you're lazy or don't have a compiler (I imagine this utility is especially marginal if you don't), and if you distrust stuff I write because I'm on the internet, or because you know I cut and pasted it together, there's the solution with source code for you too.


Monday, October 29, 2007

DebuggerStepThrough Considered Arrogant

It happened again while working with vintage code today. I'm tracking down a problem, trying to step inside a getter on the object, and all of a sudden I've skipped over it and find myself in the next statement.

I'm a little confused. I drag the execution point back up to the line I was trying to inspect, hit the function key to step into it, and find myself skipped past it again. What the hell?

So now I'm mired in a meta-problem - in order to get to the bottom of why this (the original problem) doesn't work, I have to get to the bottom of why this (stepping into the getter) doesn't work. I look at it sideways for a little bit and nothing looks out of order, so why isn't that breakpoint being hit? The getter is definitely being called.

Oh wait, I've seen this before. [DebuggerStepThrough], the most useless code attribute on the fucking planet. The MSDN reference fails it hard, so if you've never run into this abomination, here's what it does: it tells the debugger that there's nothing of interest here, move along.
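If you've never seen it in the flesh, here's a minimal specimen (the class is invented for illustration):

```csharp
using System.Diagnostics;

// Class-level: the debugger now skips EVERY member in here,
// not just the boring ones.
[DebuggerStepThrough]
public class Customer
{
    private string name;

    public string Name
    {
        get { return name; }  // F11 ("step into") sails right past this
        set { name = value; }
    }
}
```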

In a best-case scenario, you're saving one mouse click or key press to step through that dumb getter. My hero!

But wait! You don't specify the attribute on a property level, you specify it on a class level. If you're a better programmer than I am, you've managed to not screw up public exposure of private instance variables through getters and setters. Kudos.

But an entire class that's bug-free? And not causing any side effects anywhere else in the application?

Even if you manage to pull that miracle out of your hat, will it never have to live side-by-side with code that isn't quite on par with it? You're still causing problems for people because your code doesn't act like everyone else's code does and that looks awful fucking fishy.

So do me a favor - write that gorgeous, airtight code. Write it as correctly as you can the first time.

And leave the fucking [DebuggerStepThrough] out of it.

Wednesday, October 3, 2007

Internet Is The New COBOL

I've been trying to quantify exactly what it is about software as a service that bugs me so much. Yeah, my experiences haven't been all that winning with it so far, but I shouldn't let my anecdotes taint the promise of the paradigm, right?

Like Joel pointed out, computers will continue to roll out faster and with more memory, so don't sweat the micro-optimizations and build more features. No way I disagree with that; Moore's Law keeps on chugging away and today's monster rig is 18 months from now's Dell.

But as computers inexorably get faster, with more memory and bigger hard drives, our line to the outside world remains fairly constant. Putting aside the comparatively minor problem of pretending that software standards ever worked for anything beyond the trivial (can you name one standard that gave you anything approaching wood?), SaaS seems like a plug-ugly mistake for that reason.

  • The success of SaaS is predicated on the use of a scarce resource, the network.

This problem was driven home in performance testing our services. Internally, on a modest (that's a nice way of saying "hand-me-down") application server and database, we were able to push pretty good numbers through. We're feeling good about ourselves and how it's performing, so why not point the load servers at our staging environment up at the production site? It's a managed facility, the hardware isn't vintage, so we're expecting to see some solid throughput.

We start the ramp-up, get about halfway to where we peaked in our internal testing when things start shitting the bed. The load servers start timing out and they drop out of the test.

We saturated our T1.
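The back-of-envelope math on that is grim. Assuming a ~10 KB SOAP envelope per call - my number, pick your own - a full T1 tops out fast:

```csharp
using System;

class T1Math
{
    static void Main()
    {
        const double t1BitsPerSec = 1.544e6;  // a full T1, ignoring protocol overhead
        const int payloadBytes = 10 * 1024;   // assumed ~10 KB SOAP envelope per call

        double callsPerSec = t1BitsPerSec / (payloadBytes * 8);
        Console.WriteLine("{0:F1} calls/sec, tops", callsPerSec);  // prints: 18.8 calls/sec, tops
    }
}
```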

Oh. Network pipes don't double in size every 18 months? So we set off scrambling again. Do we order another T1 or two? They'll take 30-45 days to install, and then we'll be paying for them to lie fallow after we run our few days' worth of tests. Maybe more importantly, we're not sure our clients have any fatter pipes than we do.

Do we find someone with bigger pipes than we've got and tote our load machines over there for a few days? They'll gladly let us set up shop there for a perfectly unreasonable price. Oh. But our connection to our back-end won't work from there so we'll need to be teleconferencing with someone on-site monitoring the servers. That complicates things and we still don't know that performance is a problem.

What we do know is that the network is causing a lot of problems that we can't easily throw more hardware at. When it comes to what a computer can do, the graph trends up and to the right.

When it comes to the stuff backing your service calls, how much shit can you stuff in that five pound sack?

XML is bloated. Really, really bloated. It was designed as a human-readable markup language (it's what puts the ML in XML) but basing communications protocols on it was a dubious decision, hindsight or otherwise. Five pounds.

JSON is less bloated, but JSON parsers aren't as ubiquitous as XML parsers, and business people will object because they can't juggle two acronyms at the same time. Their tech guys don't know JSON either, but they have a sneaking suspicion that it means way more work for them, so you're getting doubly steered back to XML. Six pounds.

You can compress the HTTP that either of them gets shot out over but, like JSON, not all clients are going to be able to deal with compression. Six and a half pounds.
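To put some numbers on the sack, here's the same little order record both ways, raw and gzipped (the record is invented, and GZipStream ships with .Net 2.0):

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class FivePoundSack
{
    static int Gzipped(string s)
    {
        using (MemoryStream ms = new MemoryStream())
        {
            using (GZipStream gz = new GZipStream(ms, CompressionMode.Compress))
            {
                byte[] raw = Encoding.UTF8.GetBytes(s);
                gz.Write(raw, 0, raw.Length);
            }
            // ToArray still works after the stream is closed
            return ms.ToArray().Length;
        }
    }

    static void Main()
    {
        string xml = "<order><id>42</id><customer>ACME</customer><total>19.95</total></order>";
        string json = "{\"id\":42,\"customer\":\"ACME\",\"total\":19.95}";

        // Note: on payloads this tiny, the ~18-byte gzip header can actually
        // make the "compressed" version bigger. The win shows up at scale.
        Console.WriteLine("XML:  {0} bytes raw, {1} gzipped", xml.Length, Gzipped(xml));
        Console.WriteLine("JSON: {0} bytes raw, {1} gzipped", json.Length, Gzipped(json));
    }
}
```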

In college, professors told me that in the Bad Old Days of computing, you didn't own the computers you worked on. You paid oodles of money to lease an IBM rig and keep it running, and even then, it shipped with more hard drive space and more CPUs than were ever turned on - capacity waiting to be unlocked over a phone line that you paid for.

"But professor, that's awful! You pay all that money and you don't even get all the computer that you could be using? And you have to pay for a phone line so their techs can dial in and turn the magic on?"

"No, that's a good thing. You built your applications under constraints and when you ran into a wall because your app was running too slowly or you were running out of disk space, a call and a few hours later, magic happens and your app's running fine and disk space is no longer an issue."

Curiously, Amazon's following IBM's lead with their S3 and EC2 offerings. Need more space? Got it. More computational power? Bingo bango.

God help you if you need more bandwidth to make those calls to S3 or EC2. Not even god can help you if your clients are running into a brick wall because they saturated their pipes calling your services.

Like buying IBM, basing your architecture around a decentralized network of servers with flexibly vast resources won't get you cockpunched by most people for making an impossibly wrong decision, but I'll still hate on you because that's how I do.

  1. We already knew that storage space and computational power were cheap and vast. Amazon has maybe made them more so, but that's nothing new.
  2. For what it is, the pay-as-you-go model isn't awful. You wouldn't consider it if you could build your own disparately-hosted server farm, but you don't got the bankroll to roll like that, which is why you've gone this route.
  3. Wait a fucking second. You knew that the network wasn't going to get any faster and you designed your application around using it extensively anyway?

Congratulations. You've discovered the brave new frontier of decentralized internets architecture and it looks a whole lot like a fucking mainframe.

Web 2.0, meet Web 0.7. Web 0.7, meet UNIVAC.