Sunday, June 19, 2005

Microsoft's 3.0 (or, How I Learned to Stop Worrying and Love The Curve)

It's that major review time of the year at Microsoft. Keyboards are clicking away as folks regale their leads with all the great things they've done.

But is it too late for your review comments to make a difference? You've got to figure that everything is pretty much set within the stack rank in your manager's mind. A while back, I wrote about the stack rank and some advice about owning your career. The stack rank meetings are probably starting now and going on for a month or so. Can your proud lobbying move you up a position or two? Well, it might be important based on where the rating line is drawn for folks in your peer group.

You know after the stack rank something wicked this way comes: The Curve. We grade on a curve. Your rating is based on your ladder level expectations and relative to the performance of your peer group (a peer ordering most likely extracted from your position in the stack rank). Woe unto you if you're a super-star in a super-super-star peer group.

A curve, though? Actually, it's more like three buckets than a curve: bucket 4.0 (A! Sweet!), bucket 3.5 (B. Well, okay), and bucket 3.0 (C! Dang!). I don't bother considering the gold star (4.5) and the platinum star (5.0). And I don't want to bring up 2.5 because I'll get side-tracked.

4.0, 3.5, and 3.0 (oh my). The dreaded 3.0 is really two beasts in one: the well-deserved 3.0 and the trended 3.0. While a manager might be steamed to have a 4.0 person trended down to 3.5, they will go ape-poo-flinging-ballistic over someone trended to 3.0. But we have buckets to fill, and if your product team needs to provide 25% 3.0s, you're going to have to fill that bucket.
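If you want to see how mechanical the whole thing is, here's a toy sketch of forced-distribution bucketing. To be clear: this is my own illustration, not any actual HR tool, and the names, percentages, and cut lines are all made up.

```python
# Toy sketch of forced-distribution bucketing (illustration only, not a real
# HR system). Given a stack rank (best first) and required bucket fractions,
# everyone gets a rating based purely on position in the rank -- merit only
# matters relative to peers.

def apply_curve(stack_rank, distribution):
    """stack_rank: names ordered best-to-worst.
    distribution: list of (rating, fraction) pairs; fractions sum to 1.0."""
    ratings = {}
    n = len(stack_rank)
    start = 0
    for rating, fraction in distribution:
        end = start + round(fraction * n)
        for name in stack_rank[start:end]:
            ratings[name] = rating
        start = end
    # Anyone left over after rounding lands in the last (lowest) bucket.
    for name in stack_rank[start:]:
        ratings[name] = distribution[-1][0]
    return ratings

team = ["Ada", "Grace", "Linus", "Bjarne", "Anders", "Gordon", "Mary", "Don"]
curve = [("4.0", 0.25), ("3.5", 0.50), ("3.0", 0.25)]
print(apply_curve(team, curve))
```

With eight people and a 25 / 50 / 25 curve, the bottom two in the stack rank get 3.0s no matter how strong the team is overall. That's the whole problem in four lines of arithmetic.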

I totally accept that we need a rating system, especially to reward our kick-butt super-contributors who end up doing most of the hard work around here. I have not, however, come to accept the bucketing rating system we employ, especially around the 3.0 review score. Now then, I have met dev managers who have reached acceptance with our system and with doling out the 3.0s. There's a cult-like allure to these folks as they tell me how they are totally behind the peer-relative 3.0 review rating and that's how we do business in managing our performers. They make it sound so calm and simple that I feel like if I could just drink that Kool-Aid, I, too, could stop ripping myself up inside over this.

Some recent comments here about our review system and 3.0:

  • As a lead, it's one of the most painful experiences at work to have to give a review back to someone and say "you worked hard last year and accomplished great stuff...I can't tell you how much I personally appreciate it. However, the review model came back down the chain and the best I could do is get you a 3.5 (or *shudder* a 3.0)". Nothing kills morale faster. It assumes all cogs at all levels are equal, with similar managers and similar circumstances solving similar problems with similar constraints. The truth is, the guy across the hall from you might have worked half as hard and still pulled a 4.0 because of how the model went up and down the chain. A good lead will fight, yell, scream, beg, cajole, and even threaten to get the scores he believes his guys have earned, only to have those scores crapped on by upper management and their curve.
  • I have recently been thinking it might be interesting to score teams on their effectiveness as a whole. If we did team reviews as well as individual reviews, and team scores affected rewards, managers would really scramble to get low performers off their teams. In this system, a 3.0 on a high performing team might get a better bonus and stock award than a 4.0 on a team that did a science project and shipped nothing.

    I'd like to know if the executives who run teams that are woefully behind (Yukon, Longhorn) get 3.0s when their products fail to ship.
  • The argument against: what if you really have seven 4.0 performers but the model says you can only give three 4.0 review scores? Well, if you are a weenie Mgr. you screw four people over... most get pissed but stay anyway and join the ranks of disgruntled employees who are no longer passionate about their work. Work product begins to suffer, crappy products get shipped, who cares any more? If you are a principled Mgr. you take on the system and go to bat for your seven key employees, but invariably you will get shut down and most likely commit a career-limiting move. Remember, you too are inside the bell curve.

Why all the angst over 3.0?

  • Because it's hard to figure out whether this is an earned 3.0 (slacker!) or a trended 3.0 (suckah!). The lead's feedback has to be written the same either way. You really can't go and write, "Garsh. I wanted to give you a 3.5 but they're making me dork you around with this 3.0." Nope. You've got to write harsh feedback saying they are doing okay, but here are the ways they could have done better. This feedback especially sucks when the report thinks they did a 4.0 job, relative to their peers even.
  • It affects your lifetime review score and makes it difficult to transition to a groovy group that is actually getting stuff done (like MSN - yeah, baby, I'll take anything new showing up on our customers' screen).
  • It represents an "eh, you are doing what we asked of you, pretty much" review score. Good, right?
    • We're not "good enough" people. I don't hire "good enough." When I am involved in hiring (the shame), I work hard to hire people smarter than me that I expect to zoom past me one day because they are so awesome.
    • 3.0? You most likely will get zero raise, zero bonus, and zero stock. Good job. Thanks to the increasing cost of living, you're working for less effective pay. And we just told you that you are indeed pretty much doing what we ask of you. What?
    • 3.0s are sticky.
    • If you're in a strong group, you have to rely on attrition to be able to move up (or hope some clunkers get hired). To really succeed, you need to move groups (which might be a bad move for our customers if your contributions to your current group really make an important difference). But getting that other hiring manager interested in a 3.0 performer is tricky.

So you go and follow Mini's advice over the past year and move on all your poor-performing-process-loving-slack-ass-clunkers, get a tight, high-performing team together to take on the world, and then... The Curve comes around and you realize, "Oh, crap! I've got to give out seven 3.0s and all I have are great performers now!"

Yes, it sucks to be you. You're about to flush away 25% of your team's morale for doing the right thing for the company and the shareholders.

So what would you do? Do you think our review system is in need of some tuning to make it equitable to morale and great software development? Or should we all just take the blue pill with a nice swig of Kool-Aid?

Humble suggestions I have include:

  • Increase the resolution on the curve. Instead of our simplistic A / B / C bucketing, bring on the 100-point scale and have a finer curve with appropriate compensation to fit. Still lavishly reward your super contributors, yes. But don't go and bugger someone because they fell just inside the 3.0 line. A 79's compensation should be very close to an 80's. A 70 would then be a strong message that we think you're just squeaking by.
  • Allow high-performing product teams a more gracious curve to fit. And don't make this gracious curve come out of another team's budget as a zero-sum game. Last I checked, we have some pretty good profit that I'm sure would be a proper use of shareholders' money to reward the people that this continued profit depends on.
  • Punish poor-performing teams with a harsher curve. Don't ship? Have a bunch of bugs? Customer concerns not being addressed? Security breaches created by this team? Then maybe 50% of your team gets 3.0s and we want some percentage of 2.5s.
  • Figure out a way to associate poll results with management compensation / management curve shape.
  • Always at least meet regional cost-of-living increases. It's unacceptable for this company to pay someone "less" this year than last and to tell them that they are doing a good job of what we've asked them. It just makes me want to unionize.
  • Consider a split review rating for individual contributors: ladder level specific and peer relative.
  • Bring back the minor review in winter. Summer shouldn't be an all-or-nothing compensation event. When it comes to administration and process, I put just as much work in for the mid-point as I do the major review. It's not worth any cost-saving, company process-wise, to not have a minor review.
  • Figure out a way to avoid the automatic 3.0: "Hi, you're new to the team. 3.0." Here we are, considering how hard it is to hire for Microsoft, and then what happens during your first review: 3.0. No soup for you! We'll give you another year to prove yourself. Suckah!
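On that first suggestion, the finer-resolution curve: here's a toy comparison of bucketed versus smooth compensation. Every number here is invented for illustration; none of it reflects actual Microsoft figures or any real comp model.

```python
# Toy comparison of bucketed vs. finer-grained bonus curves (all percentages
# and cut lines are made up for illustration).

def bucketed_bonus(score):
    """Three-bucket model: fall one point below a line and off the cliff you go."""
    if score >= 80:
        return 0.15   # the "4.0" bucket
    if score >= 55:
        return 0.07   # the "3.5" bucket
    return 0.0        # the "3.0" bucket: zero bonus

def smooth_bonus(score):
    """100-point model: bonus scales with where you actually landed."""
    return max(0.0, 0.15 * (score - 50) / 50)

for score in (79, 80):
    print(score, bucketed_bonus(score), round(smooth_bonus(score), 3))
```

In the bucketed model a 79 gets half the bonus an 80 gets, for a one-point difference in the stack. In the smooth model, a 79's bonus is within a fraction of a percentage point of an 80's, which is exactly the "a 79's compensation should be very close to an 80's" point.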

Usually, I'm no fan of HR tinkering with our review system. They've certainly cheesed me off with all the bizarre review form iterations over the past few years. However, an honestly compensated workforce leads to motivated contributors with high morale, making great features for our customers and giving us a good chance to raise the value of our stock.

Then, if I realize that I'm a super-star on a super-super-star team, I will still put in the extra effort knowing that it is actually going to be recognized and worth something (versus easing back and asking, "Why bother? I'll get the same review no matter what I do.").

That's worth some class-A tinkering to make happen.

I LOVE this company, but I hate The Curve. This is not how the great teams we do have should be rewarded. I certainly feel that if a morale-busting, brain-dead review system goes on too long, we might find ourselves with barely motivated contributors creating mediocre features that may or may not ship...
