Wednesday, November 28, 2007

In Praise Of Being Too Clever

Strolling into my local bank the other day to get some cash from the ATM, a little something caught my eye. No, not the ATM, what with its fuzzy, flickering screen - although the fact that my local branch in podunkville has a newer model with a nicer screen than the one at the big bank in the BIG CITY left me scratching my head. No, it was the nameplate on someone's desk.

Director of First Impressions

I know, and I agree with you - what the hell? That job title's too clever by half. But at the same time, it tickles me in all the right places.

It's more pleasant on the ears and brain than "ombudsman." What kind of word is that, anyway? No fair asking the internets or a dictionary. It beats "receptionist," too (which is what they are). For whatever reason, it speaks to a lot of things: a company that takes itself more seriously than I take myself, right down to having egregious nameplates like that out in public view.

It got me to thinking about Peopleware, the anecdote about The Black Team in particular. If you've never read the book and you work with software in any professional fashion, when you finish reading my drivel, go out and get it. Seriously.

But anyway, it was an anecdote about a team of testers at IBM who were so good at their jobs that they were unafraid to loudly announce to the world I am better at my fucking job than you are by dressing all in black. It sounds corny, like a complete non-issue in the age of casual attire, but I remember watching a documentary in high school (not Triumph of the Nerds, but something from around that time) where they interviewed OGs from IBM, who talked about how rigid the culture was - down to the point where, on one guy's first day of work, a co-worker lifted the leg of his pants and critiqued the fact that he didn't have sock garters on. Taken in that light, dressing all in black does take a pretty big pair (but then again, maybe they were all black down to the sock garters).

Sadly, I started wondering to myself - where's my application's director of first impressions? For that matter, where's most software's director of first impressions? Most software out there (the stuff I do for work included) doesn't greet you with a handshake and a smile so much as scowl at you from its office before running back inside and slamming the door shut, hoping you didn't notice it. And if something goes wrong, it leaves you to fend for yourself, maybe throwing you a bone in the form of a wiki or something equally lame.

Why don't most teams developing software take what they do seriously enough that they could do something goofy, like rolling into work dressed like an execution squad when everyone else is in a regimented three-piece suit, and get away with it because they're the shit and everyone knows it? Is "Director of First Impressions" too precious, too clever?

Hell no - I'm jealous. I hope that the software I develop is too precious too.

Tuesday, October 30, 2007

Marginal Utility - IsDebug for .Net 2.0

Unfortunately for my co-workers (and especially the QA department), I play build engineer because we don't have a dedicated build engineer or even a build server. Small company; that's how it goes.

In recent months, I've stepped my game up. Batch files, playboy. I know how retarded that sounds - I was this close to having CruiseControl up and running, but I got fed up with the memory footprint it left on my laptop. It's no trivial matter when I can time out database calls just by having a debugger running at the same time as SQL Server Management Studio with a couple of windows open. Any way you slice it, that shit is s-a-d.

But in the past, I wasn't quite as lazy as I am now. Rather than spend a couple of hours cobbling together a full build script (dear me-from-3-years-ago: is it really so hard to Google csc.exe?), I did it all by hand from inside Visual Studio. Clean out the destination directory, swap from Debug mode to Release mode, compile, zip it up, yadda yadda.

But wouldn't you know it? It's so simple, and yet I managed to screw it up. Multiple times. I handed QA releases that were compiled in debug mode, or that had assemblies left over from debug mode, and they got clued into it (because our app works slightly differently in the two modes (BY DESIGN!!!)). How was I to know? DLLs look like DLLs, people.

Eventually, I found a marvelous little utility called IsDebug out on the wild internets. Look at that! You drag them DLLs onto it and it tells you whether they're debug mode or release mode! Total win!

When I finally sat down and knocked out our app's upgrade to .Net 2.0, I was kind of bummed to find out that IsDebug didn't work anymore. I recompiled, nothing. Checked file properties, still not working.

But wait! Internets to the rescue again with the clue-by-four on what new properties I should be examining!

So I got Jeff's peanut butter in Jim's chocolate and here we are - IsDebug for .Net 2.0.
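If you're curious, the gist of the fix (a minimal sketch of the approach, not the actual code in the download): under .Net 1.x, the mere presence of DebuggableAttribute on an assembly meant a debug build, but the 2.0 compilers stamp that attribute on release builds too, so you have to crack it open and check whether the JIT optimizer was actually turned off.

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Reflection;

    class IsDebugCheck
    {
        static void Main(string[] args)
        {
            if (args.Length == 0)
            {
                Console.WriteLine("usage: IsDebugCheck <assembly.dll>");
                return;
            }

            // Load the assembly passed on the command line.
            Assembly asm = Assembly.LoadFile(Path.GetFullPath(args[0]));
            object[] attrs = asm.GetCustomAttributes(
                typeof(DebuggableAttribute), false);

            // The old heuristic (attribute present == debug) lies under 2.0;
            // the flags inside the attribute tell the real story.
            bool isDebug = attrs.Length > 0 &&
                ((DebuggableAttribute)attrs[0]).IsJITOptimizerDisabled;

            Console.WriteLine(isDebug ? "Debug" : "Release");
        }
    }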

I've got a compiled version if you're lazy or don't have a compiler (I imagine this utility is especially marginal if you don't), and if you distrust stuff I write because I'm on the internet, or because you know that I cut and pasted it together, there's the solution with source code for you too.

HOORAY!

Monday, October 29, 2007

DebuggerStepThrough Considered Arrogant

It happened again working with vintage code today. I'm tracking down a problem, trying to step inside a getter on the object and all of a sudden, I've skipped over it and find myself in the next statement.

I'm a little confused. I drag the execution point back up to the line I was trying to inspect, hit the function key to step into it, and find myself skipped over it again. What the hell?

So now I'm mired in a meta-problem - in order to get to the bottom of why this (the original problem) doesn't work, I have to get to the bottom of why this (stepping into the getter) doesn't work. I look at it sideways for a little bit and nothing looks out of order, so why isn't that breakpoint being hit? The getter's definitely being called.

Oh wait, I've seen this before. [DebuggerStepThrough], the most useless code attribute on the fucking planet. The MSDN reference fails it hard, so if you've never run into this abomination, here's what it does: it tells the debugger that there's nothing of interest here, move along.
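If you've been lucky enough to never hit it, a minimal repro (class and property names invented for illustration):

    using System.Diagnostics;

    // With this attribute in place, "Step Into" on any member of the class
    // sails right past it and lands on the next statement instead.
    [DebuggerStepThrough]
    public class Invoice
    {
        private decimal total;

        // Try to step into this getter. Go ahead, try.
        public decimal Total
        {
            get { return total; }
            set { total = value; }
        }
    }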

In a best-case scenario, you're saving one mouse click or key press to step through that dumb getter. My hero!

But wait! You can't hang the attribute on a property itself - it gets slapped on at the class level. If you're a better programmer than I am, you've managed to not screw up public exposure of private instance variables through getters and setters. Kudos.

But an entire class that's bug-free? And not causing any side effects anywhere else in the application?

Even if you manage to pull that miracle out of your hat, will it never have to live side-by-side with code that isn't quite on par with it? You're still causing problems for people because your code doesn't act like everyone else's code does and that looks awful fucking fishy.

So do me a favor - write that gorgeous, airtight code. Write it as correctly as you can the first time.

And leave the fucking [DebuggerStepThrough] out of it.

Wednesday, October 3, 2007

Internet Is The New COBOL

I've been trying to quantify exactly what it is about software as a service that bugs me so much. Yeah, my experiences haven't been all that winning with it so far, but I shouldn't let my anecdotes taint the promise of the paradigm, right?

Like Joel pointed out, computers will continue to roll out faster and with more memory, so don't sweat the micro-optimizations - build more features. No way I disagree with that; Moore's Law keeps on chugging away, and today's monster rig is 18 months from now's Dell.

But as computers inexorably get faster, gain more memory and grow bigger hard drives, our line to the outside world remains fairly constant. Putting aside the comparatively minor problem of pretending that software standards ever worked for anything beyond the trivial (can you name one standard that gave you anything approaching wood?), SaaS seems like a plug-ugly mistake for exactly that reason.

  • The success of SaaS is predicated on the use of a scarce resource, the network.

This problem was driven home while performance testing our services. Internally, on a modest (that's a nice way of saying "hand-me-down") application server and database, we were able to push pretty good numbers through. We're feeling good about ourselves and how it's performing, so why not point the load servers at our staging environment up on production? It's a managed site and the hardware isn't vintage, so we're expecting to see some solid throughput.

We start the ramp-up, get about halfway to where we peaked in our internal testing when things start shitting the bed. The load servers start timing out and they drop out of the test.

We saturated our T1.
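(To put numbers to it: a T1 is 1.544 megabits per second each way - call it 190 kilobytes per second in practice. If your SOAP envelopes run a hypothetical 5 KB a pop, that pipe tops out at fewer than 40 of them per second. That's the whole ceiling.)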

Oh. Network pipes don't double in size every 18 months? So we set off scrambling again. Do we order another T1 or two? They'll take 30 to 45 days to be installed, and then we'll be paying to have them lie fallow after we run our few days' worth of tests. Maybe more importantly, we're not sure that our clients have any fatter pipes than we do.

Do we find someone with bigger pipes than we've got and tote our load machines over there for a few days? They'll gladly let us set up shop there for a perfectly unreasonable price. Oh. But our connection to our back-end won't work from there so we'll need to be teleconferencing with someone on-site monitoring the servers. That complicates things and we still don't know that performance is a problem.

What we do know is that the network is causing a lot of problems that we can't easily throw more hardware at. When it comes to what a computer can do, the graph trends up and to the right.

When it comes to the stuff backing your service calls, how much shit can you stuff in that five pound sack?

XML is bloated. Really, really bloated. It was designed as a human-readable markup language (it's what puts the ML in XML) but basing communications protocols on it was a dubious decision, hindsight or otherwise. Five pounds.

JSON is less bloated, but JSON parsers aren't as ubiquitous as XML parsers, and business people will object because they can't juggle two acronyms at the same time - and their tech guys don't know JSON but have a sneaking suspicion that it means way more work for them, so you're getting doubly steered back to XML. Six pounds.

You can compress the HTTP that either of them gets shot out over but, like JSON, not all clients are going to be able to deal with compression. Six and a half pounds.
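To put the sack on a scale, a toy comparison (payload invented for illustration; your exact byte counts will vary, but the ordering won't):

    using System;
    using System.IO;
    using System.IO.Compression;
    using System.Text;

    class FivePoundSack
    {
        static void Main()
        {
            // The same record, dressed up two different ways.
            string xml =
                "<member><id>42</id><name>Jimmy</name><plan>HMO</plan></member>";
            string json = "{\"id\":42,\"name\":\"Jimmy\",\"plan\":\"HMO\"}";

            Console.WriteLine("XML:          {0} bytes", Encoding.UTF8.GetByteCount(xml));
            Console.WriteLine("JSON:         {0} bytes", Encoding.UTF8.GetByteCount(json));
            // On a payload this tiny, gzip's header overhead can actually make
            // it BIGGER; compression only starts paying off on real envelopes.
            Console.WriteLine("XML, gzipped: {0} bytes", Gzip(xml).Length);
        }

        static byte[] Gzip(string s)
        {
            using (MemoryStream ms = new MemoryStream())
            {
                using (GZipStream gz = new GZipStream(ms, CompressionMode.Compress))
                {
                    byte[] raw = Encoding.UTF8.GetBytes(s);
                    gz.Write(raw, 0, raw.Length);
                }
                return ms.ToArray(); // legal even after the stream is closed
            }
        }
    }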

In college, professors told me that in the Bad Old Days of computing, you didn't own the computers you worked on. You paid oodles of money to lease an IBM rig and keep it running, and even then, it shipped with more hard drive space and more CPUs than were ever turned on at one time - extra capacity waiting to be switched on remotely (over the phone line that you paid for).

"But professor, that's awful! You pay all that money and you don't even get all the computer that you could be using? And you have to pay for a phone line so their techs can dial in and turn the magic on?"

"No, that's a good thing. You built your applications under constraints and when you ran into a wall because your app was running too slowly or you were running out of disk space, a call and a few hours later, magic happens and your app's running fine and disk space is no longer an issue."

Curiously, Amazon's following IBM's lead with their S3 and EC2 offerings. Need more space? Got it. More computational power? Bingo bango.

God help you if you need more bandwidth to make those calls to S3 or EC2. Not even god can help you if your clients are running into a brick wall because they saturated their pipes calling your services.

Like buying IBM, basing your architecture around a decentralized network of servers with flexibly vast resources won't get you cockpunched by most people for making an impossibly wrong decision, but I'll still hate on you, because that's how I do.

  1. We already knew that storage space and computational power were cheap and vast. Amazon's maybe made them more so, but that's nothing new.
  2. For what it is, the pay-as-you-go model isn't awful. You wouldn't consider it if you could build your own disparately-hosted server farm, but you don't got the bankroll to roll like that which is why you've gone this route.
  3. Wait a fucking second. You knew that the network wasn't going to get any faster and you designed your application around using it extensively anyway?

Congratulations. You've discovered the brave new frontier of decentralized internets architecture and it looks a whole lot like a fucking mainframe.

Web 2.0, meet Web 0.7. Web 0.7, meet UNIVAC.

Saturday, September 29, 2007

Playa Hatin' on Oracle in the 2K7

Growing up a young nerd, I spent a lot of time in my formative years on BBSes. OK, entirely too much time that should have been spent playing in the mud and socializing with people instead of keyboards, but I digress.

Along with BBS doors, which were (are; I have a computer that supported a Telnet-connectable BBS collecting dust) awesome, I regularly found myself getting into inane flame wars about how much WINDOWS SUX LOL and MACS ARE GAME SYSTEMS ROFL to bump up my system credits so that I could spend them downloading t-files.

I'm not normally one to wax nostalgic (FUCK CDS! BRING BACK ACETATE!), but when I get a telephone book-sized requirements document (or hell, one that fits on a double-sided printout), boy howdy do I start to wish for The Good Ol' Days, when Telling You How It Was was the domain of hackers.

I miss those precious t-files and what they connote to me.

Profanity. Dry, cutting wit. No mincing words, no dumbing it down for people who won't (and can't) "get it". A pretty powerful stench of superiority, and you'd better believe that as you pore over the electrons of that tome, you start to feel better than the assholes out there that don't know shit from shinola either. Goddamn, I love those things. I still go back and read the Cult of the Dead Cow from time to time. It can feel like a product of its time, like a zine handed off with a wink and a nod in a suburban parking lot, but it sure as hell ages better than that requirements document - pick a requirements document, any requirements document.

The cDc got its name from a little programmer joke (I think; please don't hack my site, folks, I am 31337 and k-r4d to the bone!!!!) - people would use hexadecimal "magic numbers" to flag specific memory segments so they could find them easily. People figured out that you could spell things with the few letters hex affords you - 0xDEADBEEF is, to my thinking, one of the swankier iterations that hackerdom mustered.
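If you've never seen the trick in action, here's the idea in miniature (C# standing in for the lower-level languages of yore; everything here is invented for illustration):

    using System;

    class MagicNumbers
    {
        // Stamp a recognizable constant at the head of a buffer so it jumps
        // out at you when you're trawling through a raw memory dump.
        const uint Magic = 0xDEADBEEF;

        static void Main()
        {
            byte[] segment = new byte[64];

            // Tag the segment...
            BitConverter.GetBytes(Magic).CopyTo(segment, 0);

            // ...and later, recognize it on sight.
            Console.WriteLine("Found it: {0}",
                BitConverter.ToUInt32(segment, 0) == Magic);
        }
    }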

The fine folks at Oracle, the database company responsible for the ungodly machines required to run Oracle and the $400/hour consultants that are required to "tune" your systems so they don't run like raw ass (does raw ass run?) haven't forgotten about these magic hex codes. I can't speak to whether they've forgotten how to tell the difference between an empty string and a null one, but what do we care for 40 years of relational database theory? There is only one true path to Database Enlightenment and ORACLE IS IT. But seriously, watch those fucking spaces when you work with it.

Why so bitter about Oracle, when I should know better than to get all frothy about a technology that I don't use and that probably has no effect on my life? THEY KILLED MY MOTHERFUCKING MOTHER, MAN! No, they didn't. But I did have to deal with an Oracle salesman once (and if you've ever had to deal with an Oracle salesman, you know it isn't just once) and it's left a foul taste in my mouth ever since.

Which is why I'm so glad to see that the way of the t-file is alive and well - this is some quality-ass hatin' on Oracle right here, folks. A few choice quotes (but really, go read it!):

We are talking libraries of 30 Megabytes and more linked in as well as sitting next to the binary, just in case.

[...]

One can only assume that Oracle uses the Intel compiler because no other compiler would produce efficient enough code to run this behemoth of a binary in acceptable speed.

[...]

And we would like to welcome Oracle Corp. in the year 2007, the century of highly advanced, mixed-case passwords.

When I was young, after getting over wanting to be an astronaut and paleontologist, I wanted to be a guy who dug deep into the cruft of software and systems, ripped the secrets out of them and brought them back to the world.

I never became that guy (and doubt I ever will because I spend too much time playing video games and I'm not that smart), but I am glad to see that there are people out there hacking away and still producing quality t-files. That they're straight hatin' on Oracle is just a triple word bonus.

Now if you'll excuse me, I have to start praying that no one ever looks at my code ever because I'd probably break down sobbing like a big stupid baby if it ever received that kind of brutal scrutiny.

Friday, September 28, 2007

Software as a Service - Oy Vey

Software as a service (SaaS), we need to talk.

You had so much promise. Mashups! Loose coupling! And other buzzwords/phrases that architects, CIOs and developers could somehow all get behind. Tangentially, does anyone still say "the network is the computer"?

I guess it was a pretty cool idea. Rather than having to worry about the boundaries between Widgets Foo and Bar, you just wave your stupid hands, utter things like "SOAP!" and "XML-RPC!!" and presto! Those fuckers are working together perfectly because of the magic of standards-based communications.

No more fugly COM calls. CORBA? It's dead to us. What's old (piping text files between widgets, Unix-style) is new again! Hooray XML!

It's so simple, how can this possibly go wrong?

Glad you asked. You produce a service. The person on the other end - do they know how to consume the service? Do they take into account things like "am I trying to read this file before you've finished streaming it over the tubes to me?" That web services definition that you published - are they really adhering to it? Will those line breaks you put in to make it readable throw a wrench in the works? (Oh, if only I'd saved that godforsaken e-mail chain to forward along to The Daily WTF.)

Let me try putting that another way - you're selling your software as a web service. What happens when, for any reason, people find themselves unable to consume it? If you're like me, you'd like to curse at them for their brazen incompetence and write them off. If you're also like me, you realize pretty quickly that you're cannibalizing your own bottom line by writing off these clueless retards in DRAMATIC FASHION because they're the ones that are paying for your stupid service.

Their problem suddenly becomes your problem. Managing one project at a time is enough of a nightmare; now, in addition to the one you're only barely managing, it's your job to asymptotically manage another - forever approaching control of a project that was never yours to begin with.

But I don't mean to exclusively hate on SaaS - this problem extends, in one form or another, to any product that you sell. At the point that it leaves your hands, no matter how fully-rendered, it enters the payers' hands and no matter how drop-dead simple, how intuitive the interface, someone is going to fuck it up and then your headaches begin.

It's just that when the inevitable fuck-ups happen to be piped over SSL, with proxy servers and firewalls and misbehaving routers between points A and B, life seems a little less rosy.

Monday, September 17, 2007

Zen slapped!

It wasn't a moment of clarity or anything else, it was a moment when what made sense no longer did. I have the same problem with words - for a couple of years, "will" just didn't look right to me; I had this nagging feeling that I'd spelled it wrong when I hadn't.
And in a roundabout way, having read a meta-post on what Steve Yegge has to say today about objects or something (my attention span's too short to find out), suddenly the canon of "objects for your business logic, databases for your data" seems to have been proffered by a madman.
At first I was trying to figure out exactly what encapsulation is being broken - as long as objects are responsible for their own persistence (when asked nicely) and their own retrieval (when supplied a morsel of information), there's no disjoint, right?
But wait a second. What's the "real" object here? We've got business objects and they can be composites of other business objects. But we don't have much to do with our business objects without instantiating them and without eventually persisting them somewhere... at which point, we've got another instance of an object, only now in relational form.
That relational "object" can itself be a composite of sundry other "objects" (tables), unless you're the sort that just tacks another varchar(8000) column on your God Table and calls it a day (and I promise to make fangless threats to beat you to death with Codd's corpse if I ever have to support such a mythical beast).
And tying these two disparate, equally valid (depending on who you ask and when) objects together is what amounts to a domain-specific language (SQL), just so you can get these two different domain objects talking to one another.
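A sketch of the duct tape in question (schema, names and SQL invented for illustration - this is the shape of the thing, not code from anywhere in particular):

    using System;
    using System.Data.SqlClient;

    // Life number one: the business object.
    class Customer
    {
        public Guid Id = Guid.NewGuid();
        public string Name;

        // Asked nicely, it persists itself - by describing life number two,
        // its relational twin, in a second language (SQL).
        public void Save(SqlConnection conn)
        {
            using (SqlCommand cmd = new SqlCommand(
                "UPDATE Customers SET Name = @Name WHERE Id = @Id " +
                "IF @@ROWCOUNT = 0 " +
                "INSERT INTO Customers (Id, Name) VALUES (@Id, @Name)", conn))
            {
                cmd.Parameters.AddWithValue("@Id", Id);
                cmd.Parameters.AddWithValue("@Name", Name);
                cmd.ExecuteNonQuery();
            }
        }
    }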
This feels unnatural to me and I'm not left wondering why things don't work but rather in awe of the fact that they work in spite of being tied together with duct tape and zip ties.
Thankfully, Giles Bowkett came along and dropped a pithy
oodbs ftw
in the mix and I was slapped back into... coherence?
I've never bought into object-oriented databases because I've never taken the time to wrap my head around them (chicken and egg, that). I understand relational databases - I've long since internalized normal forms and I understand how they help me to enforce data integrity and are generally pretty performant (disk size, memory, speed) ways to persist data.
Even if you take size/memory/speed out of the equation, there's a little matter of people making mistakes - I've yet to come up with an object design that worked out of the gate. When I bone it (and I just about always do), it means tacking on more columns, more tables. The stored procedure interface then (probably) changes, and the upstream object (and its calling objects) with it, which is all kinds of ungainly - but all the "legacy" data is still there for the taking. When you refactor an OODB, what happens? I mean, it was persisting an old version of your object, right? Is this something you have to take into account manually somehow (I imagine so)? I'm scared.
Model/view/controller, I'm sorry I doubted you honey. But you have to admit, you are kind of freakish when you take a step back and think about it.

Friday, September 14, 2007

What are you good at?

It's been a confluence of things that's left this question tingling in my head, begging me to ask it of just about everyone at work.
Work's been a firefight - stuff is blowing up and I've been asked to put down what I was working on to help look into it. It's technology that I don't know, code that I don't yet have a mental picture of and unknowns on all sides (our app, our hosting environment, their app, their test methodology, yadda yadda).
It's all finger-pointing and scrambling through the mismanagement playbook. Regularly changing priorities. Micromanagement on a level I didn't think was possible. Scrambling to find consultants who can "help" (from the same consulting company that built a large portion of the app, naturally) and get them put on hot standby even though we haven't been able to pin the problem down yet. More people from outside the organization being brought in to help micromanage. Scheduling people to test round-the-clock, generally frazzled nerves all around.
At some point I took a step back and started to wonder how we ended up in this mess in the first place. It sort of dawned on me that up the food chain, someone failed to ask a simple question that could have saved us all a bunch of time.
What are we good at?
This really is as simple as it sounds. If you're heading up a company, you think in terms of "core competencies." (That's a real term, right?)
You don't over-extend yourself. I can grill a mean steak and I can brew a ferocious cup of coffee, but at the end of the day I'm not going to try to write a cookbook because that'd be even more unreadable than this is.
Unfortunately, that's sort of what the situation we're in feels like. Rather than define and focus on what products we felt we could successfully accomplish in a set period of time, we were told what we needed to get done in that same period of time.
I can see how it's a risky sell - "rather than over-extend my people and produce n 'working' products, I'd like to focus in and produce n-2 pretty high-quality products that I'm confident they'll be able to build, test and release." You're talking about setting us back 2 products that we could be selling. Are you out of your mind?
As a strict value prop (that's "value proposition" for those of you not in the know, another vaguely business term that I'm sure I fucked up using) the value of quality isn't easy to wrap your head around. McDonald's doesn't make the highest quality hamburger but they make bank, right?
Right - and when they've failed to ask themselves what they're good at, the market's reminded them. What ever happened to the gourmet menu that I remember hearing about 15 years ago? The pizzas? The steaks? The lobster rolls (I've heard they have them in New England but it's not like I've gone looking for them because what the shit so maybe they still have them?)? They weren't good at them. They spent a lot of time and money developing them and they ultimately ate those development costs and pulled them off the shelves.
What works for a corporation doesn't work for a developer.
When I got to thinking about what makes an object work for me, something one of my first computer science teachers told me while looking over my code came back and stuck in my head. Paraphrasing what he told me...
You should be able to describe a function in one sentence. Furthermore, when it's time for you to describe that function, the word "and" should not be in that sentence. There should be no semi-colons, no subordinate clauses, no hyphens. If you find yourself using the word "and" to describe your function, you're not describing a function, you're describing two functions.
So I discovered the joy of decomposing functions. I didn't get down and dirty by declaring every variable as final, and I'm by no means writing functional code - I slip all too often and my functions do two things - but I try to refactor them when I'm able to admit to myself that yeah, that really is doing multiple things.
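To put a face on it, a made-up before-and-after (nothing out of our codebase; all names invented):

    using System;

    class Order
    {
        public decimal Total;
        public string CustomerEmail;
    }

    class OrderProcessor
    {
        // Before: "validates the order AND emails the customer a receipt."
        // That "and" is the tell.
        public void ProcessOrder(Order order)
        {
            if (order.Total <= 0)
                throw new ArgumentException("Order total must be positive.");
            Console.WriteLine("Receipt sent to " + order.CustomerEmail);
        }

        // After: two functions, each describable in one "and"-free sentence.
        public void ValidateOrder(Order order)
        {
            if (order.Total <= 0)
                throw new ArgumentException("Order total must be positive.");
        }

        public void EmailReceipt(Order order)
        {
            Console.WriteLine("Receipt sent to " + order.CustomerEmail);
        }
    }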
The code behind the application that's sometimes working like a champ, other times shitting all over the floor... not so much. Giant blocks of code. Twisty turny, deeply nested if blocks. Classes dedicated to re-doing functionality that the .Net framework had built in (if only they had asked Google).
I want to give the codebase one last hug before I put it out of its misery - when I look at it and ask it "what are you good at?", it sighs heavily and shakes its head. It doesn't have to say anything - that look it gives me is all I need to know. I've seen it before, and I'll see it again. Finally it says in a weak little voice, "I'm good at doing whatever they wanted me to do today. I think. What was I doing yesterday? I'm not even sure what I'm doing here."
It's not its fault. It's not their fault. Someone should have asked "what are we good at?" and someone should have responded "not this" and that should have been that.
We were given this weekend off (not that wild horses were going to drag me in for another consecutive day of hair-pulling). Management is good at giving us what we shouldn't even have to ask for. Maybe next time they'll learn to ask us what we can give them instead and we can produce one or two quality products rather than a bunch of garbage and save everyone a lot of agony.
Wishful thinking, right?

Wednesday, September 12, 2007

The Joy of Recursion

I read Why Functional Programming Matters because Raganwald told me to and he's about a jillion times smarter than I am. It was an equally frustrating and enlightening experience for me, but boy howdy has it ever gotten my brain wheels spinning again.
For too long, I think I've forgotten why I got into programming - there's a terrifying amount of stuff to learn. As of late, I've settled into a rut (however productive) of kind of reflexively seeing every problem in terms of objects for the business logic and a relational database to persist the data. It's good and bad - I can make it work, but there's that nagging little voice in the back of my head wondering if I could have done it differently (and the slightly louder voice reminding me to ignore that voice and Do The Simplest Thing That Could Possibly Work and map and filter aren't necessarily it (for me (right now))).
Back in Ye Good Olde Days, I really got a kick out of programming. Moving at my own pace, figuring out what worked and what didn't at my own pace... what could be better? One of the things I figured out back then was that recursion sucks and no one should use it ever. Asking me why that got stuck in my head would be as productive as asking why I didn't go outside on a beautiful day. (Computer's inside.) But the idea that I could get through life without ever touching recursion did stick with me for a few years. In my defense, I had a lot of awful ideas about how programming should and shouldn't work back then. Some things never change, I guess.
After my limited experience in college with Lisp, I chalked it (Lisp/functional programming) up as something destined to die in the halls of academia. After all, object oriented programming has conquered in the marketplace of ideas, right? Well, yeah.
But then again, there's a not-so-small matter of The Multicore Era. I've done my penance with threading and there was little with it that came easy. I still think that there must be companies out there working on C++/Java/.Net compilers that hide the ugliness of working with multiple CPUs/cores, but curiously, the same machinations you go through to make your object oriented code play nice with multiple processors make it start to look... almost functional.
Why FP Matters did a good job of opening my eyes and stretching out corners of my brain that have laid fallow for too long, but it's aggravating. For every idea I can almost wrap my head around, there's a dozen that I can see on the periphery slipping away from me.
I don't want to let those dozens slip away from me quite yet. I'm now slowly working my way through YAHT (Yet Another Haskell Tutorial, a better read than the name would have you believe, honest) and rediscovering my inner child. I'm inept in Haskell - functional programming's still completely foreign to me and the syntax feels awkward. I'm not even up to illiterate in this yet, and I can still feel myself trying to translate OO to FP.
But there's something to be said for rediscovering the Fibonacci sequence and factorial functions in a new light. I shouldn't be beaming this much about a dead academic language. I shouldn't seriously be considering reading Structure and Interpretation of Computer Programs over the winter. I should maybe be making friends with a nice language like F# that I could do something with. How will I ever make money with this?
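For the record, here's the sort of rediscovery I'm talking about - the textbook Haskell definitions in the comments (as I'd write them, anyway), with my day-job C# translation underneath:

    using System;

    class InnerChild
    {
        // factorial 0 = 1
        // factorial n = n * factorial (n - 1)
        static long Factorial(long n)
        {
            return n == 0 ? 1 : n * Factorial(n - 1);
        }

        // fib 0 = 0
        // fib 1 = 1
        // fib n = fib (n - 1) + fib (n - 2)
        static long Fib(long n)
        {
            return n < 2 ? n : Fib(n - 1) + Fib(n - 2);
        }

        static void Main()
        {
            Console.WriteLine(Factorial(10)); // 3628800
            Console.WriteLine(Fib(10));       // 55
        }
    }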
There's no real logic to it, I just feel like the time's right to play with it (don't I have a Wii for that?) and I'm enjoying it. My inner child's even beginning to make peace with recursion.

Monday, August 27, 2007

Am I this clueless about the business?

To give a little background, I work in the health care insurance industry. Grand.
So today we had an exec in from the home office to give a little pep talk or something. Or acknowledge that important people are, in fact, aware that we exist. I'm not entirely sure what the point of it was.
Amidst the flow of coached company lines, it was the unguarded moments that caught me.
One of the thingies that we develop lets people "choose" the benefits they "want." Apparently, customer satisfaction ("Customer Sat" for those of us in the biz) with their benefits is reasonably high among people who have gone through and used the thing.
Which leads to faux pas number one - after reinforcing that people are satisfied with our product, he joked about how unsatisfied employees are with our benefits. This is no joke - my benefits have gotten worse in every way I can think of since getting hired once upon a time. More expensive, less covered, more hassles.
But somehow, giving people more choices about their health care makes them happier about what they end up with. You'll hear it referred to as "consumerism," the great hope of the health care insurance industry, and if you're like me, you'll somehow find yourself pining for the "bad old" days of HMOs. And they were bad, but the new high-deductible garbage foisted on us is worse. But more choices make it better.
To prove this, faux pas number two. The guy's got a daughter headed off to college. She's an art major, and they wanted to make sure she had a laptop for school. What sort of computer do you think they got her? You don't have to be Miss Cleo to guess "an Apple" and be right. But all the choices - how big a screen, how fast a processor, yadda yadda - apparently made him happier about the choice.
All I could and can think is - "yeah, but they fooled you into fooling yourself into thinking there was a choice." The only choice was "Apple" - the rest of it was window dressing. They could have offered a single make and model and he'd have bought it because they deliver a product people love (because the people that work for Apple believe in the product that they deliver (because it's a good product that people love (ad absurdum))).
Furthermore, Apple might not be around to sell jack shit if it weren't for the iPod, which doesn't give you a whole lot in the way of those choices that we apparently demand. A bigger drive, different colors. Not much in the way of choices. When it was first released, it was declared dead in the water because it didn't have features that other MP3 players had. The fact that no one really wanted or needed or used those other features was irrelevant - the iPod just didn't have them. Suckers.
"People don't want to buy a quarter-inch drill. They want a quarter-inch hole!"
- Theodore Levitt
I don't buy the notion that people want to do a whole lot of choosing their health care benefits. If they're like me, all they want to know is that when shit happens, they're not going bankrupt.
Four years ago (when we had insurance from An Insurance Company I Am Not Employed By), I had a migraine that lasted five days. I finally broke down and went to the hospital - they CAT scanned me, found an abnormality, had me back in in a few days for an MRI. Nothing wrong with it, but I should get it checked out regularly just to make sure.
I think I was out all of $10. I was happy with my insurance.
Since getting hit by the truck and getting deluged with mail, I have no such confidence in my health care. The little form letter I got after we got bought out letting me know that I just need to double-check with them before I got an MRI doesn't help matters in that confidence regard.
I see the same godawful tyranny of the amateur in financial planning - I suspect that folks in the industry there are able to trot out study after study proving that people are inexplicably (to me, at least) more satisfied with planning their own 401K. I hate it. Hate it. I don't know this mutual fund from that index fund. What do I know? That I want to retire some day (the sooner, the better) and be able to do it comfortably.
What do I want from health care? To know that if shit happens, I'm covered. I do not want to know in network, out of network, deductible, copay, blah blah blah. More information leaves me less satisfied.
I feel like we're being set to work building shadows. People are monstrously unsatisfied with their health care and a little novelty makes people momentarily happier about it (at least on a survey card) so more novelty will make them even more satisfied! And more information about it (one of the cornerstones of "consumerism" is bombarding people with information that's goddamned near impossible to parse) will make them happier!
I need that quarter-inch hole. If there's any question at all about how I procure that quarter-inch hole, if you make me work to figure out what drill suits my lifestyle, you run the very real risk of me finding out that someone else provides a better, faster, cheaper, stronger drill bit.
I'd like to be more Apple, ignoring what people say they like and giving people what they want and less chasing ghosts on customer satisfaction cards.

Monday, August 20, 2007

Management Jeopardy

"I'll take Unsane At Any Speed for $1000, Alex."

"This defies rational explanation and is a sure-fire hint that you're working at the wrong place. Yes, Dave."

"What is a four hour meeting that starts at 6 PM?"

I'm not smart enough to make this up. I'm pretty sure that meetings that long are in violation of the Geneva Convention to begin with, but having one start well after working hours end on top of that?

To clear up some mitigating factors that weren't...

  • We do not have any off-shore workers, so it's not a matter of having to figure out a way to meet with people halfway around the globe.
  • We are not a huge company.
    • I can't keep track given all the resignations and hirings, but we've got about 40 employees.
    • We are owned by a (much) bigger company.
    • We have one main point of contact with this bigger company (most of the time) who was involved in it.
  • Adding injury to injury, some of the people in the meeting had an 8 AM meeting earlier that day.
    • It wasn't a matter of showing up late in the day for a late meeting.
    • No time's being compensated for this. There was no half day at the end of that tunnel.

In as much of its defense as I can raise, I wasn't in this meeting and the people who were in it say that they really got stuff done. If I was in there, I'd be telling myself that to save my sanity too, so who knows.

Actually, there would be no need to save my sanity. Nobody's going to pay me the kind of money I'd expect to be paid for the insane brand of job where the very idea of a meeting from 6 PM to 10 PM isn't immediately laughed out of the building.

This is, easily, the worst solution for the "we have too many meetings and not enough hours in the day to get all the particulars together" problem that I've ever seen.

Friday, August 17, 2007

Sometimes Foresight is 20/20, Too

I keep trying to come up with really great, really kick-ass ideas for programs that I can sit down and knock out. I want to find that itch that I can conceivably scratch.
I obviously haven't had any.
Every now and then, someone I know will come to me with a "wouldn't it be great if..." idea that they've got, but there's two problems with them. OK, one problem. They're invariably not that great because if they were, I'd have jumped all over it because it felt like that great of an idea and I wouldn't be typing to an empty audience, I'd be dictating this in front of a sold-out crowd at Madison Square Garden because I paid for them to watch me sit here and dictate this tripe to my butler's butler after having built whatever that great idea was.
The second, obviously subordinate, problem there is that I haven't had any great software ideas (of my own or lent to me) that got my juices flowing. I haven't had much more than ideas for the sake of ideas.
Until one fateful day a few weeks ago. I'd gotten back from work and was out on a bike ride to clear my head and stretch my legs out. It was a nice enough late afternoon and climbing the hill, I started to think back to all the silly little programs I wrote when I was a young lad dabbling in BASIC and PASCAL. Terrible "RPG"s and the sort. Cut-rate Ultima Is, minus, well. Everything that would make them playable.
It suddenly became obvious - why don't I write a roguelike in .Net? Since then, I've been kicking around the idea and it's only gotten bigger and, well, dumber. As Google will tell you, the very idea of a roguelike in .Net is a dumb one to begin with. I haven't even started so I couldn't tell you why.
Compounding the shaky base I'd apparently be starting with, some truly awful thoughts have gone through my head.
"Hey, I could write a flexible MVC implementation so that it could be presented by either a web interface or a BBS door!"
Yes, another filthy childish love of mine is olde tyme BBSes and door games. I've tried to come up with a good idea for a door game to produce, but no dice there either.

"Hmm. Since it's online, I could make it multiplayer. I remember reading some design documents for Multihack; how will I handle surreal time? If there's two people on the same dungeon at the same time, how do I handle the dungeon-moves-when-you-say-so aspect of it? Can I fix this?"
Multihack was the multiplayer version of Nethack that never quite saw the light of day. If you've never played a roguelike, the game world doesn't advance until you make your move. You can take a second or a week to make that next move. This presents obvious problems when more than one person inhabits the same level and Player A wants to move NOW while Player B wants to take 10 or 20 minutes to decide whether to read a cursed scroll of levitation or engrave Elbereth or whatever. Oh yeah, I went there.
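For the uninitiated, the single-player contract in miniature (all names mine; this is the property that's so hard to share between players):

    using System;

    class SurrealTime
    {
        static void Main()
        {
            int playerX = 0;
            int monsterX = 10;

            while (playerX != monsterX)
            {
                // Blocks for a second or a week - no game time passes.
                ConsoleKeyInfo key = Console.ReadKey(true);
                if (key.Key == ConsoleKey.RightArrow) playerX++;
                if (key.Key == ConsoleKey.LeftArrow) playerX--;

                // Only after the player acts does the world get its turn.
                monsterX += Math.Sign(playerX - monsterX);
            }

            Console.WriteLine("The grid bug bites! You die...");
        }
    }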
"How do I persist the levels? Hmm. Oh, I know - this feels like a pretty relational data model that I'm working with. I'll just install SQL Server."
At that point, a bigger, louder voice in my head screamed "SQL Server for a fucking roguelike? What the shit are you thinking?" Thankfully, foresight finally reared its beautiful head.
If I may, a few words in my defense.
Day in, day out, I write database-backed web applications. I take detours to write "thick" WinForms apps here and there and then multithread them based off of need rather than just because I can (honest) so I've got a really solid comfort level when it comes to, well, ASP.Net up front, C# codebehinds and middleware and SQL Server backing it all.
That said, I recognize that it's more than OK to step out of your comfort zone from time to time. It wasn't comfortable when I took a step back and started to see that what I was trying to pass off as object-oriented code was, being generous, marginally better procedural code with some trappings of objects tacked on as an afterthought. I'd gotten comfortable writing that way as a student and hadn't progressed much since (and coworkers who'd been there before me and had more experience were writing the same way).
At a certain point that felt logical to me, I stopped with the parameteritis and the basically stateless objects and started writing object-oriented code the way that I thought it should be written, taking care to think about what it was doing and how it would be used now (rather than building things in because you never know). I think I'm a better programmer and my software's better and easier to maintain because of it.
It wasn't comfortable when I got started and it was slow going at first, but I'm happy with where I've ended up.
Now that I find myself at the top of a mountain again, do I try to convince myself that I've climbed to the peak of the world or do I take a look around and see that there are other mountains that I should climb?
My class diagrams (which look every bit like a socially inept teenager scribbled them down) for my roguelike can sit in a pile on my desk for a while. A database for a roguelike feels so impossibly wrong that I can appreciate the fact that it's time for me to stretch a bit and tighten up my laces because I've got some climbing to do.

Sunday, August 5, 2007

Triathlons are hard.

I took part in my first-ever triathlon yesterday, the Top Notch Triathlon up in Franconia, NH.

It sounded so easy - 6.5 miles of biking (3.5 on paved roads, 3 on trails), followed by a half mile swim, then a 2.5 mile run/hike. I'm a fat lazy guy and I'm pretty sure that I could do that in my sleep.

And then I found out that there's a 1,040-foot vertical rise in the cycling part. And 2,280 more feet in the hike. (Thankfully, the swim was pretty level (har har).) I probably should have looked into that before I went out there.

I didn't really have much in the way of goals for the triathlon. You may not have known this about me if you skimmed above, but I'm fat and lazy. Really, all I did to prepare for this was ride my sweet new bike a lot, and that was about it.

I took a look at the numbers (before I noticed the advertised vertical climb) and decided I wasn't even going to bother swimming in preparation, because I'm fat, fat floats, and I could swim all day when I was 10 years old. I wasn't going to run because I don't like to run all that much anymore. The little running I did didn't help, but I'm glad I got a few laps in swimming before I got out in the lake, so I wasn't in for such a huge slow surprise.

The second-biggest surprise of the day was how poorly I did biking - I started at the back of the second pack and was passing suckers left and right on the road portion. I was keeping a pretty leisurely pace - I knew that it was only 6.5 miles, but I also knew that I didn't want to burn myself out cycling. The trail ride was a different story.

I've never ridden in packs off-road. I've ridden with two or three people, but mostly on fire roads or where we had the luxury of spacing out well enough to not interfere with one another, and with people at about the same skill level as me. The trails we were on were double-track in some parts, but mostly single-track with roots and rocks and muck on the other side for passing lanes (where there were any), which meant you had to work considerably harder to pass people. I was trying to conserve energy, so I didn't do a whole lot of passing.

Except I sort of had to, because there were people out there who were really slow and really unsteady. I was not prepared for people slowing down for every mid-sized rock, every root, every mud bank out there; I had to dab a couple of times, and I fell once when someone abruptly braked and cut left across my front wheel for no good reason. It was tough on me because, riding alone, I'm used to keeping the cadence I want to keep and going the speed I like. Out here, I was constricted to an awkward cadence and a slower speed than I would have liked - it wasn't until well into it (maybe a mile left) that the light bulb went on above my head and I realized "hey, maybe I should shift down to an easier gear to keep up the cadence I want, even at this crawling pace," and things got easier at that point.

Then on to the swimming! I knew I'd be slow because I'm in such awful shape that I can't even maintain a freestyle for more than a few hundred yards. Can I blame that one on the broken collarbone and ribs circa 16 months ago even though they don't hurt me? The water was 70 degrees and felt pretty damned nice to get into right after I got off my bike. I was going as slow as I expected to go, but I wasn't gassed by the time I got out of the water, so I figured I was doing pretty OK.


I'm not in as much pain as I look here, just squeegeeing my hair and trying to keep the water from getting in my eyes/contacts. OK, maybe I am as tired as I look. Thankfully, there are no 10 year old girls racing ahead of me in this picture.

I expected there to be some changing station where I could put on my other set of clothes for the third leg, but there wasn't, so I just toweled off a little, dried my feet and put on my socks, strapped on my camelbak, and started up the mountain in my bike shorts. For anyone that was stuck behind me, I'm so very sorry about that.

I'd run a whopping 10 miles in preparation for this, so I figured I'd run as far up the mountain as I could. Once I got a look at the vertical incline of the first hill you take off up, that thought went straight out the window.

The hike was, well. Kind of brutal. At parts it was sand and loose rocks, not a winning combination by any stretch of the imagination. Other parts it was ledge (exposed slabs of rock). All of it was steep, steep, steep. There were a few short legs where it flattened out or had a slight decline, but mostly it was uphill and then some. The early part of it wasn't that bad - I wasn't trying to sprint up there, so I slowed down a bit and chatted with a few people on the way up and that was alright.

By the time I'd made it to the first water station, I was wondering how much further it was. By the time I passed the second water station, the wondering had turned into flat-out worrying. There were other people stopping to take a break from time to time and I wanted to as well, but my legs were starting to tighten up pretty bad and I was worried that if I stopped, I'd cramp up and make my life a hell of a lot tougher.

In my head before the race: no sweat, I'll run as much as I can, hike the rest and sprint to the finish. There was no sprint to the finish. I grabbed a bottle of whatever bottled water they were handing out at the finish line (even though I still had a good 30 oz. of water in my camelbak) because my brain was fairly mush at that point. As in: me, even dumber than usual, which is hard to compute.


I am as tired and confused as I look in this picture. And then some.


The cramps were still creeping up on me, so I stood in the shade for a few minutes sipping my water and started to feel better, so we took the tram back down the mountain to go collect my bike and head back to the starting point so I could change out of my bike shorts and sweaty shirt and check my final time.

I scatterbrained the post-race tie-up activities multiple times, so it was multiple trips back there and eventually they had my time posted - 2 hours, 4 minutes. I finished 195th (198th? again, scatterbrained) out of, uh. I don't know how many people. My number was 318, so I'm employing my advanced mathematics training and figure that I finished in the top 2/3rds of the race which is a gentleman's pass any way you slice it. I remember that I was 178th in the biking and 20th out of 22 in my age division (WHAT ABOUT BMI DIVISION, BIGOTS?), neither of which I'm really proud of. The biking especially I'd like to make excuses for - I didn't know how hard I could push, the cadence thing throwing me for a loop, yadda yadda, but whatever. I wasn't competing, it just seemed like something fun to do. And it was.

Except for that goddamned hike.

Now that I've swum a lake and climbed a mountain, picking up functional programming should be a piece of cake, amiritefolks?

Tuesday, July 24, 2007

Software development - the journalism model

You probably don't know this about me, but I've studied journalism extensively. Back in third grade (for my vast international readership, that's the grade level you're in when you're ~9 years old), I learned journalism well enough that I think I wrote an article or two for my school paper. Maybe I even edited an edition of it - obviously, my journalistic credentials can't be called into question.

We learned that the essence of journalism can be found in the "Six W's" - Who?, What?, When?, Where?, Why? and, uh, hoW?. We also learned that teacher couldn't spell that day.

I humbly submit that these same Six W's should be applied a bit more rigorously to software development and project management, just in case you're trying to figure out whether your project will fail or succeed (and if you're not trying to figure that out, it will fail).

The Important W's

(in no particular order)

  • What (are we doing)?

This one seems so obvious that it should require no explanation, but the flip side of that is that it's so essential that you're dead in the water if you and everyone else on your team can't explain what you're doing. The team should know what you (collectively) are doing in the macro sense and they should have direction in the micro sense.

If you or any of your team can't accurately (and hopefully, clearly and concisely) verbalize just what you're doing, your project's probably well on its way to being necrotic.

If you're a manager, you should be able to define achievable goals in the macro sense and the tasks needed to achieve them in the micro sense. If you haven't and if your people can't either, see you at the unemployment office.

  • Why (are we doing it)?

As a lazy software developer, I pose this question to the project management team a lot more than they'd like me to. It's reflex and I think it does everyone a lot of good. I don't want to try to wield phrases I won't understand in a million years like "cost-value proposition" but they're probably applicable here.

In hackneyed colloquialisms I can wrap my pea brain around: whose itch will your software scratch? Will anyone take one look at it and immediately "get it"? Will it make people spontaneously say "this is fucking cool"? These are all really, really good things - if you know that people will think it's fucking cool, you and your team probably do too, and you'll be excited, take pride in what you're doing, and the end product will reflect it.

Alternatively, will people take one look at what you've done and shake their heads and gasp in disgust at the turd you've fished out of the shitter and slapped on the table?

  • Who (is doing it)?

As in - do you have the people to accomplish your goal on hand already? Or are you expecting that you'll just subcontract this out and everything will work out just fine? Do you already have a team of studs or do you expect scrubs to "step up to the plate" and get things done?

If you don't have faith in your team's ability to accomplish it, listen to that nagging voice in your head and start running for the hills.

Projects can overcome a lack of manpower, but you probably don't have the stomach or the wallet to grunt your way through it. With amorphous direction and unclear goals, even a team of ICFP-certified development gods is going to flounder.

The Unimportant W's

  • How are we doing this?

This applies to those "little things" like IDE, language and framework. I don't mean to underplay the importance of all of these; you'll certainly make your work more difficult (OK, orders of magnitude more difficult) if you choose something pretty inherently unsuited for the task (RoR for an embedded system, C for a web application), but I also think that one of the other W's easily steamrolls it - Who. You put your team of studs up to the task and it stands a reasonable chance of succeeding in spite of itself.

  • Where am I doing this?

This doesn't speak so much to physical location as to metaphysical location. Oh, did I just blow your mind?

By that, I mean things like version control (where the project lives) and build management. I'd lump bug tracking in here too. I know that Bill de hÓra would disagree, and the developer in me wants to agree with him, but then I got to thinking - "hmm... VCS hasn't been around forever, and I'm sure some working projects have teams that still pass around spreadsheets with a list of outstanding bugs on them." More importantly, the Who will overcome it - good people know good practices. If we walk into a shop without version control, we'll set it up. It's not hard to find a bug tracker out on the internets, it's a couple of days to roll your own, and if all else fails, spreadsheets aren't the dumbest way to go about things (OK, they kind of are). All this goes out the window once you hit a certain critical mass of project size, but the Who will take care of that again, as you'll have good people in there who will make you hire build engineers and all that jazz.

  • When

Uh, now? I've lived through multiple dot-bombs, so spare me the "the market just wasn't ready for our innovation" speeches. See Why, jackhole.

Boiling it down to Solomon's Law (YES!), "Code cannot overcome reality." If the reality is that you've got an inept team, are lacking direction, are developing something that nobody wants, it doesn't make one goddamned bit of difference that you've got an A-number-one build system, magical bug tracking, distributed version control and an immaculate code base built on the right language and framework for the job.

No matter how beautiful the package, you're still boxing up a turd.

Saturday, July 21, 2007

Oral Traditions in Software Development

The requirements document will be the death of me. I don't have a whole lot of faith in the requirements document if only for the fact that I've yet to work with one that concisely and accurately expresses the needs and intent of everyone that it represents.
I can't stress this enough: I'm not very smart and I don't have much of an attention span. That's why "concisely" matters so much to me. You can make that requirements document accurate, but in the process you'll probably end up with something resembling the Yellow Pages for Manhattan. It's accurate up until the point where it puts me to sleep (about three legalese-looking paragraphs in) and sooner or later, people will get hip to the fact that I didn't read it.
OK, I'm taking that a bit too far. I do read it as much as I think I need to in order to get the flavor of what I'm developing. People sweat way too much over getting into the down and dirty of how things are supposed to work, pre-chewing the details. For downstream consumers of this product (QA, client-facing folks and beyond) maybe this is a good thing? For me, I just want the broad strokes and I'll be able to fill in most of the blanks.
To fill in the gaps between the broad strokes, there's back and forth where I talk it out with the person who wrote the requirements document, because they know what we need better than I do. This works out great for everyone as we quickly hash out details and get back to what we need to do.
Well, it's great until months later, when QA has a new set of questions that aren't reflected in the original documentation because they didn't sit in on the series of little meetings that development had with the requirements crew. So we'll have another sit-down and chat, and they'll see what the real intent of the functionality is, or I'll go back to the drawing board.
Here's the thing - I really like how this works. Little meetings (30 minutes or so) when needed and when the participants can fit them into their schedules. I don't have to read much of the documentation, which is good for so many reasons, but mainly because people can tell me what they need better than they can write about how things should work.
But here's the other thing - I can see how one would look at this and calculate a bus number of "one." The person who wrote the requirements document leaves the company and all of a sudden, you're back to just that dry document that's a subset of the expertise they brought to the table. The developer leaves the company and all the discussions that the product developer had with them that never made it back into the original document disappear too. People are left to scratch their heads over what's going on with the application, not knowing that this is expected behavior.
Or QA or someone else walks into the application with the expectations laid out by the requirements document and finds that the fantasy of the documents doesn't meet the reality of the application.
I don't like documents.
I like talking things out.
I have no idea how to blend these two competing desires and turn what amounts to an oral tradition into a sustainable development model but if I figure it out, I figure I'll be a billionaire and then it'll be time to settle down to my life's ambition, hunting the most dangerous prey of all - MAN. Just fucking with you, I'll be so rich I'll pay someone to hunt MAN for me.

Monday, July 16, 2007

THE MOST IMPORTANT BLOG POST OF THE YEAR

As the internets tell me Paul Erdos famously told other people, "A mathematician is a device for turning coffee into theorems." I can't find any way to fault that logic. The man does, after all, know from logic and as far as logic goes, that shit is airtight.

My personal spin on this is that "A developer is a device for turning coffee into arable software." This, likewise, is airtight reasoning and to claim otherwise is to beg for your License To Internets to be revoked.

Inexplicably in the wilds of N'Hampsha, it gets hot up here. Sometimes ugly hot - about a month back it was 95 degrees and the relative humidity was at like 180% or something (before you furiously type and complain, remember that Erdos is the super-stud mathematician and I'm the forgettable developer). 9 months of winter and then we get this? So funny I forgot to laugh.

Days like these, I don't want home-brewed hot coffee, I want iced coffee. But who wants to buy iced coffee out at the coffee spots? With very few exceptions, I read "iced coffee" as "the shit that we brewed hours ago and didn't sell and it's too stale and bitter to sell hot, but with a few ice cubes in it the rubes will drink it up." Lemonade from lemons and all that. But seriously - most iced coffee is bitter and stale. I've made my own at home by brewing hot coffee extra strong so when I poured it over lots of ice, it'd even itself out. That works out OK, but I don't really want to turn on the electric kettle and boil water when it's that damned hot out. I just want some tasty iced coffee.

Inexplicably (oh, dissssss), it's the NY Times to the rescue with a recipe for cold-brewed iced coffee. But I've rambled on long enough and you don't need to read their snooty rambling as an addendum to mine (you can if you want), and I don't know why they tell you to dilute the mixture.

For starters, the hardware - they have some retardedly elaborate mason jar + cheesecloth + sifter combination, but fuck that noise. I normally make my coffee in a french press and I figure coffee's not going to get stale sitting out at room temperature (I'll experiment in the name of science on this soon enough, but my past experiments have shown the french press to work quite favorably), so if you don't already have one, maybe it's time to get one. Twenty bucks. Totally worth it. Makes hot and now cold coffee. Win.

So you grind the coffee (the normal amount that you'd brew with) as always.

And then you pour some cold water in (the amount that you'd normally brew the grinds with) and stir it up and, uh. That's sort of it for the steeping step.

THEN... it sits overnight. And in the morning, you push the press down, pour over ice and you win. As advertised, it's not bitter at all (you don't buy shitty roasts, right?) and adding a bit of heavy cream makes it almost candy-sweet. Add a little if you want, but it's pretty damned fine all by itself over ice.

You've read this post, now you can safely turn off the internets because I can't imagine ANYTHING BETTER COMING FROM ANYWHERE ON THE ECHO CHAMBER OF THE INTERTUBES ANYTIME FOR THE REMAINDER OF THE YEAR.

Unless I decide to post again in which case all bets are off.

Tuesday, July 10, 2007

Agile Development - Can We Stop Designing?

Over on some programming forums I frequent, someone posed the following question, which resulted in some heated discussion (from Agilists and others).
"We have a team that tells us that because they have adopted 'agile development' they no longer need to do any design work before writing code.

To me this seems bizarre, I can't fathom how one could build any reasonably complex application without a modicum of design (hell, even pencil sketches of UIs or something), but they claim it works.

Is it reasonable to develop complex applications with zero design work?"

BUFD (Big Up-Front Design) has gotten a hell of a black eye, but damned if I can think of a better way to wrap my head around a project in a business domain that I'm completely unfamiliar with.
Generating documents is a mixed bag - I think they're great for the process you use to create them (collaborating with everyone involved in the project, picking brains, exposing flaws and needs and hopefully constraints early) but I think they're awful for the fact that the end product is a document that's outdated a week into the development process (give or take; the point where the bulk of your time is developing rather than documenting) and at that point, the different partners have vastly different ideas about what the document means.
Developers think it's a rough roadmap (do I update the documentation with the way things will actually work in our system? I'll get to that later... OK, not really. Have a meeting? That gets old REAL fast), while most everyone else (project managers/QA/stakeholders) thinks it's gospel, and that's a sure-fire recipe for friction.
Eschewing the design portion of this and flying by the seat of your pants elevates this from an exercise in friction between developers and, well, everyone else to an exercise in sure-fire catastrophe for everything but the most trivial of implementations.
This isn't to say that reasonably complex applications can't be done without BUFD - I'm sure that they can, but with a big fat caveat: the developers have to have had a reasonable amount of experience with the business/application domain that they're developing for. There you can come away with a working application in the absence of BUFD, but with yet another gotcha: no one has any solid expectations of what to expect when the developers pull away the veil.
If you're working in an "agile" fashion, I guess this means that you're going to have rapid iterations from developer to project managers/stakeholders, at which point they'll yea or nay it and talk about what else needs to be done, which probably means throwing away a fair bit of what you're doing, which means that instead of doing BUFD, you're doing Smaller Bits And Pieces Of Design All Along (SBAPODAA). On the upside, there's that boost of "look what we're getting done". On the downside, there's that "holy crap, I had to throw out another fucking day's work because Joe Blow From Product Management couldn't have told me yesterday that's not how things work?" And on the real big downside, there's the very real possibility that you're going to scuttle the project wholesale, because developers are arrogant and think they know how to do everything so much better, and you end up with a brilliant application that just happens to be absolutely useless for what the stakeholders need it to do.
I'm pretty sure that pair programming doesn't solve that one.

Friday, June 29, 2007

Maggot therapy for code

Showing the project owner what I'd done, she had one big question for me. "You started from scratch?" Accompanied, of course, by an incredulous look. So much for subtext.
But let's back up for a second.
So I spent a few months last year building a bunch of pieces of functionality to support a new business need. Late (but under-budget! (since I was the only person working full-time on it)) and hairy as anything, but it more or less works at least some of the time. Hairy enough that when asked "can it do this?" most of the time I roll my eyes and groan and tell them "it's complex and fragile and I'm not sure, so I'd guess it'll take me a week" - and then sometimes it turns out I'd already built the functionality. I laugh at Peoplesoft for being Rorschach software and look - I've built some of my own.
As time's gone on, I've gone back in and refactored bits and pieces and have new objects that branch out of the old (that are probably better, honest!). I inherited one subset of these associated objects from a recently-departed coworker.
I've made enough bugs of my own in the past, and I worry enough about other people interacting directly with my stuff and making me look like a jackass, that I'd already done code audits of the work, so I knew what I was in for.
Loop unrolling.
Bona fide magic numbers. "OK, so there's codes of 3, 4 and 7 that I can apparently pass down from the presentation tier. These are enumerated in the business tier, right? Guess not. Data tier? No. Oh - they drive branching in the stored procedure. Brilliant."
Leaky abstractions. As in the presentation tier sometimes calling business objects, others the data tier, still others the database. Plus handling business logic in the presentation tier.
Code paths in the business tier and database that were never called - stored procedures and methods that were essentially dead code.
I'd already given the work a vote of no confidence, so when it was handed to me, I went with my gut - tactical nuke.
But there was that nagging voice in the back of my head. "Good job, Dave - way to go with Not Invented Here Syndrome. Worse, you're exhibiting It Has Been Invented Here, But I'm Too Lazy To Read The Fucking Code So I'll Just Do It All Over Again Syndrome." Couldn't I have spent this week doing something more productive, just fixing the bugs that were logged and moving on, rather than re-doing it pretty much from scratch, virtually guaranteeing that I've introduced more in its stead?
I guess there's that, but given all the code smells... and honestly. Loop unrolling? I've only seen shit like that on The Daily WTF. Who does that shit? Magic numbers? Yow. I've sort of taken an ambivalent stance on them in the past, but when "magic number" means "you have to open up the stored procedure to figure out why the presentation logic passes down this value in this circumstance", I have to scratch my head.
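To make the gripe concrete, here's the flavor of the thing - a sketch with names I've invented for the occasion, not the actual code:

// Before: the presentation tier passes a bare 7 down the stack and its
// meaning lives three tiers away, inside a stored procedure's branching.
//     service.Submit(7); // 7 means... go open the sproc and find out

// After: the business tier owns an enumeration, so the name travels
// with the value everywhere it gets passed around.
enum OrderAction {
    Create = 3,
    Update = 4,
    Archive = 7
}

class OrderService {
    public void Submit(OrderAction action) {
        // The database still gets its int, but nobody upstream has to
        // reverse-engineer what it stands for.
        int actionCode = (int)action;
        // ... hand actionCode down to the data tier ...
    }
}

Five minutes of work, and the next poor bastard reading the presentation tier doesn't need a database connection to follow along.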
I referred to it as applying a "tactical nuke" when I checked it in, and that too gave me pause. I mean, Joel says you should never completely rewrite it, so you should never completely rewrite it, right?
In medicine (and thank god I don't know this from personal experience), sometimes wounds become necrotic - the tissue's been so damaged that it's no longer living. At times like these, doctors have to become a bit, shall we say, "inventive." I can't think of any other way to describe applying maggots to a human being for medicinal purposes. From the Wikipedia article on maggot therapy (which I'm not linking because fucking eww):

The maggots have three principal actions reported in the medical literature:

  • debride wounds by dissolving only necrotic, infected tissue;
  • disinfect the wound by killing bacteria; and
  • stimulate wound healing.
So calling it a "tactical nuke" was going way overboard. There were bits and pieces in there that worked pretty well. Not the way I would have done them, but I won't argue with working code that works in a reasonably intuitive fashion. So I left the living pieces and excised the suspect ones, the untenable ones.
Two days later, I ultimately think I did the right thing. Two weeks later, after QA's gotten to bang on it, we'll see how confident I am in my skills as Senior Systems Maggot.

Saturday, June 23, 2007

Management smells: don't ask

Let's suppose you're not a programmer. Then let's further suppose you've never known a programmer... yet you manage them. Where do you go to figure out what makes them tick and how to work (OK, deal) with them?
Imagine that all you knew about programmers you learned from Joel on Software and the Jargon File. OK, the latter's really a stretch because t-files are, like, so old. Between reading the two, one would get the impression that programmers are some sort of super-genius prima-donnas dropped in from another dimension with its own bizarro set of social mores.
But in between the auto-hagiography, you'd find yourself staring gleefully at some gems, like this tidbit taken from the Jargon File's entry on the SNAFU principle.
True communication is possible only between equals, because inferiors are more consistently rewarded for telling their superiors pleasant lies than for telling the truth.
So have you figured out my clinically-developed, painstakingly-researched advice for you? Talk to your people. It shouldn't be that difficult to figure out. As a manager, do you work with people or do you just crack the whip and shit magically appears? I'm guessing you work with people.
No matter how smart or alien they may seem to be (and when you ask them how it's going, you may get an answer in what approaches moon-man), people all generally share the same sort of motivations. Food in the belly, roof over the head, meaningful work.
The meaningful work thing can be a hard sell - at the end of the day, how many people do you know that get PUMPED up for Industry X? You can sell tickets to a rock show, but you have to bribe people into attending that process improvement meeting with the lure of free lunch (and bottomless cups of coffee). But you can become the meaning for them. Ask them how things are going. If there are any problems that they've encountered. What successes they have. Show interest in what they have to show you, even if it's not much.
If you can't find the time in your busy day to show any interest in what they're doing, don't be surprised when they show a concomitant level of care and interest in the product they're creating for you. Huh. Maybe those semantically-challenged scrum maniacs really are on to something with those daily meetings after all?

Tuesday, June 19, 2007

Seeing the code rather than the product

Unintentional irony is never a pretty thing.
A feller out on that there interweb started off so strong - people who write books or knowingly put code out in the public domain for learning purposes should hold themselves to a higher standard. Amen to that, brother! And the comments invariably start filling up with other folks linking from reddit.com who just have to whip 'em out and show the guy that they know micro-optimization like nobody's business, never mind that they're missing the writer's point (make your code correct and readable, especially if people who might not know any better are going to be learning from it).
And then I took a look at what exactly the writer was railing against - an example from an AJAX book about how to validate credit card numbers in Javascript.
If you could take a moment from furiously hacking away at that ZX-80 assembler solution to the problem, let's jump to the point so you can get back to it.
The book's suggesting something completely fucking insane - client-side validation of credit card numbers - and the most offensive thing about that is that the code's cribbed poorly from a Wikipedia entry?
Back in high school, one of my computer science professors graded starter programs by the "cat on the keyboard" test - subject to random input, will your program gracefully handle it or will it shit the bed?
Rule number one-or-so of client-server development (and sorry, Web 2.0-aholics; it's still client-server no matter how semantic or semiotic or other big words I don't understand-ic you try to make your app sound) is: never trust client input. Their cat could be walking on the keyboard or, when there's money on the line, they could be trying to game the system. Validation of any critical data has absolutely no place being performed on the client-side. My two cent AJAXy solution? Make a web service you can call to handle the server-side validation and then make your AJAXy call to that service when appropriate to see if the credit card number's valid to give the user the immediate UI response you crave.
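If you want the server-side half spelled out, here's a minimal sketch of a Luhn check (the algorithm that Wikipedia entry lays out) - the class and method names are mine, and the web service plumbing around it is left to whatever your stack looks like:

// Server-side only - the client never gets to be the authority on this.
public static class CardValidator {
    public static bool PassesLuhn(string cardNumber) {
        if (string.IsNullOrEmpty(cardNumber)) {
            return false;
        }

        int sum = 0;
        bool doubleIt = false;

        // Walk the digits right to left, doubling every second one.
        for (int i = cardNumber.Length - 1; i >= 0; i--) {
            char c = cardNumber[i];
            if (!char.IsDigit(c)) {
                return false; // cat on the keyboard
            }

            int digit = c - '0';
            if (doubleIt) {
                digit *= 2;
                if (digit > 9) {
                    digit -= 9; // same as summing the two digits
                }
            }

            sum += digit;
            doubleIt = !doubleIt;
        }

        return sum % 10 == 0;
    }
}

The AJAXy part stays dumb: it ships the number to the service, the service runs this (and whatever else - expiration dates, blacklists), and the client just renders the verdict.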
Don't get so wrapped up in your code that you lose sight of the product and basic common sense. When the music stops, you don't want to be the guy earnestly deliberating about whether to pound that nail with a shoe or glass bottle.
I still haven't gotten the scuff marks and shards out of my wall.

Monday, June 18, 2007

A love letter to the software QA folks of the world

In our twisted little minds, we're fashioning castles out of dirt, breathing life into the imaginary peasants that inhabit the castle and coming up with new and fascinating ways to teach the little people to exercise functionality. Nothing could be wrong with my castle! I built it myself from scratch! Can't you see all the little people in there grinding away just like I taught them to?
But oh! There be a storm brewing! Eventually we have to put down our magic wands and show people just what we've been up to, and there's the QA folks. Looking everything over with a discerning, non-paternal eye and pointing out that I'm just an idiot playing with dirt, and by the way, that castle you built? The walls aren't up to spec.
Developers and QA operate in an antagonistic relationship. We build things that we're proud of (and if you're not proud of what you're building, you're making their job a hell of a lot easier) and they knock it down. Maybe it's because I'm an entirely reliable source of bugs, but on the whole, it's been mostly mock antagonism. True, I enjoy tagging bugs as "unable to reproduce" or "user error" more than I should, but at the end of the day, I appreciate that there's someone checking what I'm producing and verifying that it's not all wrong.
I'd tell you that I test as much as I can, but if I told you that, then I'd be pretty hard-pressed to explain some of the head-scratchers I've released in the past. Can I blame it on not using test-driven design? Not being agile enough?
Developers get the cool software toys. I've seen a few QA test environments in action and they're pretty execrable, little better than the (super-awesome and totally deserving of the coveted Dave Solomon Seal of Approval) Watir. I can get unnaturally pumped about source control. What else is there to get psyched about in the world of QA software?
We get the cool development methodologies. Test-driven design (jesus christ we're developers pretending to be QA! are we trying to put you folks out of a home?). Agile. Scrum. What do QA folks get? Seriously. I have no idea.
Worst of all, the project timelines. When the specs take too long to get hammered out and development drags on too long because the software's more complex than expected (leaving more nooks and crannies for bugs to fester in), what does your enterprising project manager propose as the solution? Push the release date back? Nah. Just cut the QA cycles short. Sell the sizzle, the quality of the steak be damned.
How many projects will be haunted to their grave by that decision?
So here's to the software QA testers of the world. Despite being the forgotten children of software, viewed as a liability by managers and loathed by developers afraid to eat their own dog food, you somehow manage to persevere and keep the quality up.
Just quit going over my code with a fine-tooth'd comb, willya?

Tuesday, June 12, 2007

Object-oriented management - throw new DevelopmentException();

From what little I can remember of object-oriented design from school, there was next to no attention paid to something that, in my idle thinking, makes OO implementation a whole lot more pleasant - exceptions. There's an analogy to development management in mind here, but to draw it fully I think that I need to lay out how and why I use exceptions in my object design first.

So let's say that you're developing an object. It performs functions that may or may not succeed - how do you communicate failure? At a simple level, there's good old booleans. True/false - the method call succeeded or it didn't.

class Foo {
    public int Bar;

    // Pass/fail is the return value; the reason for failure has to ride
    // along as an out parameter.
    public bool Retrieve(out bool missingRequiredValueBar) {
        missingRequiredValueBar = (this.Bar == 0);

        bool returnValue = false;

        // retrieve stuff!

        return returnValue;
    }

    // Returning false means the data wasn't saved... assuming the caller checks.
    public bool Save() {
        return false;
    }
}

So this does a reasonable job of telling the whole story... in a simple case. But look at that Retrieve method - we've got two booleans to juggle already - pass/fail, plus an error state that we need to return explaining why things fail.


How do we know that the calling method's checking and working with both of those booleans? Boy, it'd sure be bad if they didn't. And what about Save? They know that returning false means that the data wasn't saved, right?


Furthermore, in a more complex case, I don't think it's hard to see (I'd show you some old code I'm ashamed to have written if I could) that our Retrieve() method's going to metastasize into a hideous mass of parameters. The more parameters you have, the better the odds that you won't have or need one in a calling scenario; method overloading can only take you so far.


Maybe you're thinking of another way out of this jam. Divorce the return parameter from the method call and make it another public property, right? So very wrong.


Do that and you're depending on the person calling your class to have intimate knowledge of how things work ("Oh - first I call Retrieve(), then if it fails I check this other property to see why it failed."). That's a best-case scenario and even there, you're not impressing anyone. In a worst-case scenario, your class has been transformed into an API and the developer on the other end is reaching for a decompiler while promising to punch whatever idiot was responsible in the throat for this godawful code that's been foisted upon them. Not that I've ever made that promise. Objects love loose coupling, remember?
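If it helps to see it spelled out, here's roughly what that error-state-as-property design looks like in the wild - every name here is made up by me for illustration:

using System;

class FragileFoo {
    // The "other property" callers are somehow supposed to know about.
    public string LastRetrieveError;

    public bool Retrieve() {
        LastRetrieveError = "Bar was zero.";
        return false;
    }
}

class Caller {
    static void Main() {
        FragileFoo foo = new FragileFoo();
        if (!foo.Retrieve()) {
            // Nothing in Retrieve()'s signature even hints that this
            // property exists, never mind that you're supposed to check
            // it after every failed call.
            Console.WriteLine(foo.LastRetrieveError);
        }
    }
}

That if/check-the-secret-property dance has to be repeated, correctly, by every caller, forever. Good luck with that.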


So let's take another stab at this awful example of a class but use exceptions this time.

using System;

// The custom exception types - nothing fancy, just names that tell the story.
class MissingRequiredDataException : Exception {
    public MissingRequiredDataException(string message) : base(message) { }
}

class FunctionalityNotImplementedException : Exception {
    public FunctionalityNotImplementedException(string message) : base(message) { }
}

class Foo {
    public int Bar;

    public bool Retrieve() {
        // Cessation control: stop the show and tell the caller exactly
        // what they need to change.
        if (this.Bar == 0) {
            throw new MissingRequiredDataException("You need to supply a non-zero value for Bar in order to call Retrieve.");
        }

        bool returnValue = false;

        // do stuff!

        return returnValue;
    }

    public bool Save() {
        // No silent failure - unimplemented functionality blows up obviously.
        throw new FunctionalityNotImplementedException("Save functionality isn't working yet.");
    }
}

A common critique of exceptions (along with the baffling observation that "throwing them is slow") is that they shouldn't be used as flow control. And you know what? I absolutely agree.


But notice - exceptions aren't being used as flow control, they're being used as cessation control. The exception message is unambiguous as to how to resolve the problem. Whether it's being used as an API or simply as a black-box object by a co-worker, it's clear what needs to be changed in the calling class to avoid the exception.


I used to write classes and have little todo comments littered all over the place. It wouldn't be long before I'd get things bootstrapped far enough along to start testing. Something goes wrong upstream and I spend time debugging, only to eventually realize/remember that I hadn't yet implemented the piece of functionality I was trying to use. With exceptions, there's no ambiguity as to the source of the problem.


See how Foo's Save method blows up obviously? Would you prefer spending time figuring out why Foo's data isn't showing up in the database?


If you're not already converted, I imagine that it sounds like we'll be awash in a sea of exceptions, right? While your objects are in development and people are discovering the rough edges, yeah.


But as time goes on, a curious thing starts to happen - you and your co-workers get tired of getting bombarded with exceptions and you start to fix them. Yesterday's exception becomes today's new use case or available error state.


Summing up exceptions, I like them because...



  • Your class knows best when something's gone wrong and should have the ability to call a complete stop to the proceedings
  • You have the ability to provide context to the object's consumer that they obviously need
  • They make it very clear what isn't working with your class as you exercise it
  • The calling class can treat your object as a black box - all it knows is that either their call worked or it blew up and why
  • The calling class has the option to let that exception bubble up to its subsuming classes or to nip it in the bud right there
  • Error states can be misdiagnosed or ignored; exceptions will stick around until the process responsible is fixed
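And to make those last couple of bullets concrete, here's a minimal sketch of the calling side, reusing the Foo from above (Consumer and Main are just my stand-ins for whoever's consuming the object):

using System;

class Consumer {
    static void Main() {
        Foo foo = new Foo();

        try {
            foo.Retrieve(); // Bar defaults to 0, so this blows up...
        } catch (MissingRequiredDataException ex) {
            // ...and we nip it in the bud right here, message and all.
            Console.WriteLine("Fix the caller: " + ex.Message);
        }

        // Or don't catch at all - Save()'s exception bubbles up to
        // whatever subsuming class knows what to do with it.
        foo.Save();
    }
}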

So here's where I make the big leap from merely being a so-so developer to being a clueless, completely inexperienced, hopelessly naive manager-wannabe, but I don't think this is the stupidest assertion I've ever made.


Problems employees face should be treated like exceptions and bubbled up to their manager as soon as possible.


As a manager, you want to leave your employees be for the most part, trusting that the work they're doing is working. In that sense, you're treating them like objects - do you step through every line of code of every class that you call, or do you validate the inputs and outputs and take it as a matter of faith that things are working as expected since it looks OK?


When it comes to developers, there seem to be three ways to handle problems that crop up.



  1. "What problem? Don't worry about development going long, we'll just make up for it by cutting the QA cycle a little short. It's not a big deal, because if a component took longer to implement than expected because of the complexity, you can be sure that there won't be too many problems in the finished product that I'm hurrying to make."
  2. "Holy shit I had to open up a debugger today because Dave's Foo class threw an exception when I tried to Save we are going down in a sinking ship."
  3. "I know that I promised you that Foo in two months' time and I've only been on it for a couple of weeks, but I didn't realize how crufty the surrounding portions are. If I have the time to refactor classes ancillary to the new functionality, it'll take longer but should ultimately be more stable. If not, I can shoehorn it in but it likely isn't going to be pretty or stable."

#1 is probably the most common way. People don't like to throw exceptions because at its heart, you're admitting failure when you do. They don't like to cop to being late or not having features complete because again, admitting failure. At best you'll pull teeth and get some status information to work with. But how do you know how severe the problem is or isn't? How can you trust the status they're giving you? At worst, they'll be completely silent about how things are going until the day before the gold build at which point maybe they'll dig deep into their bag of tricks and pull the 24 Hour Push Of Redemption out. It's too late for you to let anyone up the ladder know at this point. Pull teeth earlier next time.


#2 is the sky-is-falling way - dealing with them, you know that you're going to be awash in a sea of noise. If you listen to the roar, you'll be convinced that the sky is falling too. Then the sun will rise the next morning, and over time you'll learn to take what they say with a big grain of salt - you'll implement processes to deal with this constant stream of exceptions. It's perfectly permissible to catch exceptions if you know what you're doing, but when you know what you're doing, you know that there are times when you have to let them bubble up or add context and re-throw them. A doctor's going to be annoyed dealing with a hypochondriac, but even a hypochondriac genuinely gets sick sometimes. Still, it's preferable to being blind-sided by an error state.


#3 is about the best you can hope for (and what I hope my way is) - they don't overreact to problems, but at the same time, they don't try to sugarcoat any of the big, project-threatening hurdles in the way. Figure out what the blockades are in the process and try to resolve them.


Over time, you'll see trending - what exceptions are my employees raising to me most? Chicken Little's saying that the machines aren't beefy enough to keep up? You can probably back-burner it for a while. The Late-Warning, Silent Type's griping about the compile speed of the machines? Time to freshen up that hardware.


You'll react to the exceptions as you need to and (hopefully) resolve them as you see that you need to and can. You'll implement processes to catch the ones you can, to quash the ones that don't matter and to bubble up the ones that are nightmares.


Then again, I'm a developer who thinks of co-workers as objects and problems that a project faces in terms of exceptions. Worst. Metaphor. Ever?

Thursday, June 7, 2007

WTF exactly is wrong with The Daily WTF's site rename?

Ladies and gentlemen, a new pithy tenet of the software development world has been born into life.

Before, we had to slum it with lame old one-or-two-liners like...

The first 90% of development will take 90% of the time. The last 10% of development will take the other 90%.

Perl is write-only.

Java is the new COBOL.

But that's so pre-web! We need to get with the times and have something that goes down easy in our RSS readers! So we have a new one!

The Daily WTF sucks now that the WTF stands for "Worse Than Failure."

Has the quality of the postings gone down? I don't think so. With new editors there and the 3 posts/day that they're churning out, there's bound to be some that don't quite fire on all cylinders, but I get a chuckle or a sad shake of my head out of something most every day there (still). That said, I can't lie and tell you that I'm some sort of sophisticated gentleman. I play video games and laugh at fart jokes so I obviously don't know from quality, plus I might have licked my old Voltron toys to get them to stick together better when I was a kid so I might be a little (OK, a lot) retarded.

Is there really that much in a name or is there more to it? Obviously, I think there's more to it, and here goes.

When it was The Daily WTF and the WTF stood (spoiler alert!) for What The Fuck, it was nothing more than a freak show. Only in the place of the bearded lady and the world's largest horse, we had the programmer who overloaded booleans so he could enumerate FILE_NOT_FOUND! Ha ha ha! They're so much dumber than I am! Can someone around here give me a big high five because I solved FizzBuzz in Erlang the other day? Paula Bean LOL!
Now that it's Worse Than Failure, could it be that it cuts to the quick of that nagging fear that I've got in the back of my head? Maybe it's in yours too... it says things like "I thought this object model was the bee's knees, but have I gone too far? Can anyone but me support it? Could I have done it a better, simpler way?" Things like "What exactly is the point of all this? There's a metric shit-ton of code and tables, but at the end of the day, does anyone appreciate what I've done?" Things like "Is this what I have to show for the last few years of my life on this?"

Or, to paraphrase Morse, "What hath we wrought?"
It hurts to think critically and realize that the system that you've worked so hard on probably should never have been built in the first place. That those pet classes of yours might look like the Sistine Chapel to you, but to the rest of us they're little more than a house-shaped booby trap constructed out of snot, zip ties and duct tape, waiting to trap and maim us in new and unexpected ways each time we brush up against the walls.

That you've taken a rusty, but perfectly serviceable, old DOS application and re-implemented it as a spanking new web app with all the fixins (AJAX! MVC and so many other patterns! Multi-threaded!). You see a success; your users see that you've architected a monumental clusterfuck so ornery and unusable that they're keeping their 386s around, because all you've succeeded in doing is failing their needs miserably.

That a lot of the time, development feels an awful lot like the Red Queen's Race.

Or maybe I'm missing the point altogether and have no clue what the fuck I'm on about. Has the quality really dropped, are Alex Papadimoulis and his associates sellouts (however that would apply) or should Shakespeare have wondered "what's in an acronym?"

Really, is the world a better, happier place because of what you've done? Are people getting more out of your system than they're putting into it? If your system disappeared tonight, would anyone care tomorrow or the day after that? Is it possible that your successes are such untenable messes that they really are worse than failure?