Sandi's Rules

One day, Sandi Metz was pressed by a team she was working with to produce a simple set of rules that would lead to better code quality. After consideration, she came up with the following surprisingly numerical guidelines:

  • classes can't grow beyond one hundred lines of code
  • methods can't grow beyond five lines of code
  • methods can take no more than four parameters (hash options count)
  • controllers can only send one message to one object

What emerges from these rules is a pretty pragmatic lens on how to practice object-oriented design with Ruby and Rails without falling into the tarpit of radical approaches. You don't end up needing to worry about fast tests, decoupling from the framework, presenters, conductors, mediators, adapters, ad nauseam.

More critically, you aren't beset with decision fatigue. You don't have to survey the landscape of helper gems and bolt-on approaches to writing Rails applications. You can start with the way Rails wants you to write applications: logic and data encapsulated in models, behavior encapsulated in controller actions, easy sharing of data between actions and views with instance variables, etc.


You start with what Rails wants; when you find yourself violating one of Sandi's rules, then you apply design and refactoring techniques. If a model does too much, extract common behaviors into an encapsulated object. If a controller action knows about too many things, move that into an object and call it from the action. You can go a really long way using only the Replace Method with Method Object refactoring until you get to Kent Beck's notion of a Composed Method.

That bears repeating. In big letters.

Most Rails applications can be made better by relentless application of Method Object and Composed Method.
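
Concretely, here is a minimal sketch of that refactoring, with every name invented for illustration. A long method becomes an object whose single public method is a composed method: each line is one intention-revealing step, and each step fits comfortably inside Sandi's five-line limit.

class PlaceOrder < Struct.new(:cart, :user)
  EmptyCartError = Class.new(StandardError)

  # The public method reads like a table of contents for the work.
  def call
    validate_cart
    charge_user
    notify_user
  end

  private

  def validate_cart
    raise EmptyCartError if cart.items.empty?
  end

  def charge_user
    user.charge(cart.total)
  end

  def notify_user
    OrderMailer.confirmation(user, cart).deliver
  end
end

A controller action then instantiates this one object and calls it, which happens to satisfy the fourth rule, too.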

Thoughtbot has written about what this looks like as it emerges. It doesn't have to involve a lot of pattern names, new frameworks, framework asceticism or half-baked conventions. You write classes and methods when needed. When they get too big, you refactor, rinse, and repeat.


Sandi's book, Practical Object-Oriented Design in Ruby, is great not because it covers new territory or casts a new light on the subjects of object-oriented programming and Ruby. It's great because it shows how to work with the tools provided by OO and Ruby. It's great because it shows how to go from less-good code to more-good code.

It's really great because it's short and easy to read. This is the book that enthusiastic, "enlightened" programmers can hand to the less energetic to help them understand how to write better code. They can do so without worrying that the book is hard to understand or a beast to read. It's as close to a magic pill for clue enhancement as anyone has yet come.


Your application is on fire

Six easy pieces on thinking about sustainable code

Your application is on fire. Something is consuming it, a process converting fresh and shiny code into rigid, opaque legacy code.

Seriously, your application is on fire. You should look into it.

Are you going to fight the fire? Will you keep throwing logs into it, hoping that you can corner it at some point down the line? Do you draw a line in the sand and start containing the fire and eventually suffocating it?


The fire is every change you make to your application in haste. The fire is every design decision that is bolted onto one part of the application without care for how it fits into another part of the application. The fire is every stopgap put in place to fix a short-term problem that isn't revisited once the short-term has elapsed.

Once time is not the most critical concern, you stop and think about how to make the application better in the long term. You consider what worked about the stop-gap solution, how to design your application with care, how to iterate on improving the architecture of your application.

You fight the fire by trading time for quality. Knowing how to solve your problems, knowing what is important about your application and what is consequential, you start to express a design. You write down principles that worked for you and write down an architecture that joins the principles and the design in the middle. That's where your application lives, once it's not on fire.


The reason green field software projects are appealing is that they aren't on fire. A green field is lush with possibility, without the constraint of decisions already made. More so, a green field is not bound by decisions implicitly accepted but never considered.

The unconsidered decision is how the fire starts. A class here, a subsystem there. Soon you've got a system but no design, emergent or otherwise. You've got a ball of mud, but the mud is on fire.

The fire is sustained every time you bolt something else on. It's sustained by implementing the feature or fixing the bug without care for the program as a whole. It's fed by making progress without taking care.


The fire is warm, but eventually it will consume your application. No one wants to work on the burnt-out wrecks inside it.

Everyone can smell it, but no one wants to go near it. Firewalls are erected, teams establish practices that allow them to avoid working on or near the fire.

So it burns on.


One day, you decide enough is enough. No more fuel on the fire. Every change to the application has to meet standards: does this make the code better, does this make the design better? This here is the design; adhere to the whys, use your creativity for the whats and hows.

That day is the day you start thinking for the long term. In the language of economics, it's when you internalize external costs. You accept that the cost of maintaining this application is worthwhile, so you should mitigate that cost, optimizing input effort for output value.

You can't minimize cost by just getting the work done. You can't maximize value by building grand structures. You optimize cost for value by continuously making decisions that mitigate or reduce the fire.


Internal combustion engines are made of fire. Despite being powered by flame, combustion engines are pretty mundane things. They became mundane because they have a design and a standard of engineering that contains that fire, harnesses it, and turns it into a mechanism for converting fuel into energy.

When do we harness and focus our code, instead of letting it burn inefficiently and out of control? The moment we do is when we stop being firefighters and start being the wizards we set out to be when we started writing software.


What makes longevity?

A joke for a late-night variety show monologue may only be funny for one day (e.g. a joke about a celebrity). A newspaper article may lose relevance in days or weeks. A TV show might feel dated a couple years after its run ends (e.g. most problems on Seinfeld could be solved with a smartphone). Computer programs don't fare well over time either (though there are exceptions).

The best songs demonstrate better longevity. Beethoven and Bob Dylan still work today. There will always be something amazing about "Good Vibrations", at least to the trained ear.

Even the rap trope of yelling the year the song was recorded has a timeless quality to it; it serves as a marker for the state of affairs. "Nineteen eighty nine!" is the first thing shouted in Public Enemy's "Fight the Power", marking the historical context for what Chuck D is about to tell us.


Why is this? I suspect the answer lies in the act of making music itself. Hit-factory music aside, i.e. the ear worms you hear on the radio, a musician's goal is to make something expressive. Expressiveness often leads to qualities that give a piece endurance: timelessness, nostalgia, high quality. Expressiveness is less often an objective for jokes, headline journalism, or television.

That enduring quality, it's tricky. It happens in film, television, and books too. But, for me, there's something about music that has a more direct emotional connection. Songs vibrate my eardrum and work their way directly to a part of my brain containing "the feels". I hear a good song and I'm immediately thinking about why I enjoy it so much, what makes it so good, or when I first heard it and connected it to an experience.

Maybe that's why music is such a big deal in our culture. Really, really good music connects in a way beyond "haha that's funny" jokes or "huh, that's interesting" writing.


Is it possible to write expressive non-fiction or an enduring computer program? It seems like the answer is yes, but the answers are outliers. Hofstadter's Gödel, Escher, Bach and Knuth's TeX come to mind. For every GEB or TeX, there are thousands of less interesting works.

Further, the qualities of an enduring, expressive, and yet functional work seem somewhat at odds with the pragmatics of the daily act of writing or programming. A lot more perfectionism, experimentation, and principle goes into these works than the typical news article or application.

And yet, for every "Good Vibrations", there are probably a thousand commercial jingles composed, elevator tunes licensed, ringtones purchased, and bar bands playing "Brickhouse" yet again. Perhaps music is just as prone to longevity as writing, film, or programming but has a far longer timeline on which it's easier to see what really worked. In fifty years, perhaps, if we're lucky, we'll start to learn what is really amazing in film and software.


The downsides of live music

I am a giant music nerd. I listen to a ton of music, I think about music a lot, and I often seek out new music via Twitter and Rdio (🪦RIP). Besides a dislike for showtunes and reggae, I’m a pretty broad listener.

Yet, it is exceedingly rare that I seek out live music. When I do, I’m that concert goer who is only buying tickets for long-established acts. In the past several years, I’ve seen Paul McCartney, Ray Wylie Hubbard, Lucinda Williams, Steely Dan, Ben Folds, and Ryan Adams. The youngest of these started their career twenty years ago.

What’s up with that? Well, I (mostly) don’t like live music. I’ve got reasons.

Performances don't start on time

Having performed in jazz bands, orchestras, stand-up showcases, and improv shows, I've come to accept as axiomatic that performances won't start on time. There are good reasons for this. Everyone wants to get as many people into the seats as possible to make the show better, to improve the audience experience, or simply to make an extra buck.

The reasons that performances at live music venues don't start on time come down to selfishness: the performers didn't arrive on time, or the stage wasn't set up in time. Worse, these have a knock-on effect. Once a performance is behind, it only gets further behind. There's no shortening the break between bands or reducing the time between doors opening and the first band playing.

Which brings us to the most inane reason live music is not on time: selling beer. “Doors open at 8 PM” almost universally means that you can walk in at 8 PM, but you can count on not seeing any live music until 9 PM. The opening act for the opening act is the selling of booze. I’ve got better ways to spend my time than standing around for an hour staking out a spot just so the venue can sell beer.

Standing for a couple hours sucks

Whether it's standing in line for a roller coaster or waiting through beer-time, an opening act, and the changing of the stage, standing around is the worst. Fatigue and boredom set in; you're taken out of the experience of enjoying music played in front of a lot of people. Sore feet and knees do not an enjoyable listening experience make.

Thank you, venues with seats, and thank you, crowds that don’t feel the need to show their enthusiasm by standing upright. You make live music a much more civil, enjoyable experience.

Crowds of people are the worst

Suppose you get a good room, with good sound, and a great performer. You've still got to tend to the other people in the room. The drunk heckler, the people calling out songs, the tall person blocking your view. That's all after you stood in line to get in, waited to go to the bathroom, or put out of mind the guy who lit up next to you.

Opening acts

Opening acts. They're a necessary but inconsistent evil. Sometimes you'll see a really good one. One of the best bands I saw at a Dallas radio station "festival" was on the third stage. One of the worst bands I ever saw was an opener that was sufficiently uncertain of their own skills that the majority of the between-song banter was insults at the audience and counter-heckling gone bad.

I salute events that eschew the opening act and cut straight to the main performer. Give the people what they want.

It’s too loud

I don’t know why, but live music is universally an assault on my ear drums. I've been at concerts where I could feel the music moving inside my pants. That seems a bit excessive.

Beyond the personal discomfort, there’s nothing about loudness that makes music better. If everything is loud, nothing is loud. Sustained loudness is boring.

Short bouts of loudness, though; that's interesting. The juxtaposition of the opening arpeggios of “Wouldn’t It Be Nice” with the wall of sound that follows is really nice. The way “Thunder Road” or Bolero grows into something loud and great is what makes them interesting. The amazing loudness of the opening of Also sprach Zarathustra contrasted with the nearly non-existent quietness of the second movement is genius.

Don’t turn it up because you can. Turn it up because you mean it.

Drums are a lie

Let's talk about the actual music again. In particular, drums. Drums, my friend, are a lie. They do not sound like you think they do. What you hear on the radio and on albums are the results of trained sound engineers using microphone and equalizer tricks to make drums sound decent.

This is problematic for two reasons. First off, drums are really loud in the hands of an enthusiastic player. Often quite a bit louder than your typical amplifier. Thus, it’s guaranteed you’re going to hear a lot more drums than guitars, horns, strings, or vocals at a live music event. I take that back; I guarantee you that you will not be able to hear strings at any music club you ever go to, but I’ll come back to that.

The more problematic aspect of a drum kit is that you’re going to hear raw drums when you go to a live music event. Very few microphone tricks, very little equalizer cleverness; the drums may not even be isolated. That means the snare is going to sound like a can of beans getting hit with a stick. The toms will sound like someone banging on an empty box. The cymbals are going to sound like a mad person beating on pots and pans.

If you’re anything like me, you’re not going to enjoy them drums.

It sounds terrible

Drums are not the only problematic instrument. In my experience, most clubs have very poor sound. Even if it's not too loud, the mix is wrong, you can't hear the melody, you can't hear the singing, or the overall sound is distorted.

Assuming that clubs don’t exist merely to move booze (not a big leap in reasoning, I know), I don’t understand how this is the situation. If you want to be a part of a music scene, a good sound system and someone who can operate it seems like par for the course.

I am happy to note that, if you’re lame like me and only go to see performers who have been around the block dozens of times, you’re going to have a much better listening experience. Less prominent musicians are starting to tour with just one accompanying performer and that person is not playing a drum kit. The A-list performers have really good drummers (Paul McCartney’s drummer is a blast to watch) and the sound engineers on the tour are excellent. This makes for a far more enjoyable, balanced sound.

There's little mystery

This one is rather personal, though I've spoken with musicians who feel the same way. If you know how to make music, watching the performance of music can be boring. A song that you can listen to and quickly pick up the structure and details of isn't all that exciting. Even if it is, you can see the musicians enjoying the performance of the song and just wish you were up there playing and not down here watching.

I do enjoy watching very talented performers do their thing. Someone who mixes music with a good stage show or interesting banter between songs is fun to watch. The Rolling Stones are interesting to watch because Mick Jagger is such a good showman, Charlie Watts seems so apathetic, and Keith Richards is, well, Keef. I’ve really enjoyed seeing Hayes Carll and Lyle Lovett because the stories they tell are great and their banter between songs is amusing.

Genres I don’t know how to make are also fascinating. Hip-hop is not a thing I really know how to make, so that’s fun. Jazz and classical can be fun because I know how they work but didn’t reach the level where I could really make it. My new rule is, whenever Rite of Spring is performed, I need to be there; it’s relatively short (about forty minutes), really awesome, and I’m certain I would not be able to perform it with an orchestra without ruining it for everyone else.


Maybe I’m doing it wrong. Perhaps my heuristics for trying to time a concert so I arrive as the opening act is finishing require tweaking. I should definitely remember to bring earplugs more often. It’s entirely possible I’m just a grumpy guy.

But: I’m not the guy who tells you about the hippest new musical thing. I’m probably not the guy who’s going to catch your favorite band. I’m the guy who goes to see Paul McCartney out of reverence and because my wife and I both like him. I’m the guy who listens to an album as a long-form idea. I’m the guy who wants to understand the history and creation of a thing. That’s just the nerd I am: I understand music over time, not over the course of an evening.

Ed. This originally ran in The Internet Todo List for Enthusiastic Readers. You should check that out. It was pointed out that I’m a bit of an old man. In spirit, this is absolutely true. Also worth noting: I’m going to see Paul McCartney again this week, so I must not entirely hate live music. Human inconsistencies, eh?


Learning from a dropped refactoring

You don’t have to deploy every bit of code you write. In fact, it’s pretty healthy if you throw away some of the code you write before you even think about committing it.

Case in point: I sat down to refactor some Sifter code this weekend. I wanted to experiment with an idea about how Rails apps should be organized. I moved some code around, stepped back, and decided the idea behind my refactoring wasn’t sufficiently well cooked and stashed it instead of committing it.


I think you can follow SOLID principles without radically decoupling your app from Rails, and especially ActiveRecord. Instead, you let AR objects do what they’re good at: representing data. You push behavior, as much as possible, out into other objects. What you should end up with is a handful of models that encapsulate the tables in your database and a bunch of classes that encapsulate the logic and behavior of your application; whether these classes live in app/models or somewhere else is a matter of personal taste.

My goal was to extract an object that encapsulates user authentication behavior; a class that is mostly concerned with conditions and telling other objects to do things. I extracted a class focused on “remember me” session behavior. It looked something like this:

class UserAuthentication < Struct.new(:user)

  def remember_token?
    user.remember_expires_at.present? && (Time.now.utc < user.remember_expires_at)
  end

  def remember_me(offset = 10.years)
    time = offset.from_now.utc
    user.touch_remember_token(time) unless remember_token?
  end

  def forget_me
    user.clear_remember_token!
  end

end

Then, for compatibility, I added this to the model:

  def authentication
    UserAuthentication.new(self)
  end

  delegate :remember_token?, :remember_me, :forget_me, to: :authentication

The principles I’m following are as follows:

  • Don’t expose AR’s API to collaborators. Therefore, UserAuthentication must call methods on User rather than directly update attributes and save records.
  • Encapsulate behavior in non-model classes. Therefore, User shouldn’t know when or how to manipulate “remember me” data, only expose an API that handles the mechanics.

The result: now my extracted class, UserAuthentication, has meaning, but it doesn’t really own any behavior. It’s a bit envious of the User model. That “envy” hints that its behavior really should live in the model.

Further, using delegate means the User model’s API surface area isn’t actually reduced. Granted, this is a stopgap measure. I should really hunt down all invocations of the delegated method and convert them to use UserAuthentication instead.
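
For illustration, here’s what converting one hypothetical call site would look like:

# Before: the delegated method keeps User's API surface wide
user.remember_me if params[:remember_me]

# After: callers deal with the extracted object directly
UserAuthentication.new(user).remember_me if params[:remember_me]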

At this point, it feels like my refactoring isn’t making forward progress. I’ve rearranged things a bit, but I don’t think I’ve made the code that much better.


As I alluded earlier, I decided to stash this refactoring for later contemplation. Maybe next time I should start from the callers of the APIs I want to refactor and drive the changes from there. Perhaps my conjecture about decoupling ActiveRecord data from application behavior needs tuning.

I beat the “ship lots of stuff!” drum a lot, but I’m happy with this result. Learning to strike the balance between shipping the first thing you type out and the thing your future self won’t hate is an undervalued skill. It takes practice, and that’s just what I did here. Plus, I parlayed it into a blog post. Everyone wins!


What I wish I'd known about rewrites

I can’t say enough good things about How to Survive a Ground-Up Rewrite Without Losing Your Sanity. Having been party to a few projects like this, a lot of this advice rings true to me. Let me quote you the good parts!

Burndown

You must identify the business value of the rewrite:

The key to fixing the "developers will cry less" thing is to identify, specifically, what the current, crappy system is holding you back from doing. E.g. are you not able to pass a security audit? Does the website routinely fall over in a way that customers notice? Is there some sexy new feature you just can't add because the system is too hard to work with? Identifying that kind of specific problem both means you're talking about something observable by the rest of the business, and also that you're in a position to make smart tradeoffs when things blow up (as they will).

The danger of unicorn rewrites:

For the Unhappy Rewrite, the biz value wasn't perfectly clear. And, thus, as often happens in that case, everyone assumed that, in the bright, shiny world of the New System, all their own personal pet peeves would be addressed. The new system would be faster! It would scale better! The front end would be beautiful and clever and new! It would bring our customers coffee in bed and read them the paper.

Delivering value incrementally is of the greatest importance:

Over my career, I've come to place a really strong value on figuring out how to break big changes into small, safe, value-generating pieces. It's a sort of meta-design -- designing the process of gradual, safe change.

But “big bang” incremental delivery is accidental waterfall:

False incrementalism is breaking a large change up into a set of small steps, but where none of those steps generate any value on their own. E.g. you first write an entire new back end (but don't hook it up to anything), and then write an entire new front end (but don't launch it, because the back end doesn't have the legacy data yet), and then migrate all the legacy data. It's only after all of those steps are finished that you have anything of any value at all.

Always keep failure on the table:

If a 3-month rewrite is economically rational, but a 13-month one is a giant loss, you'll generate a lot of value by realizing which of those two you're actually facing.

I really wish I’d thought of applying “The Shrink Ray”, an idea borrowed from Kellan Elliott-McCrea:

We have a pattern we call shrink ray. It's a graph of how much the old system is still in place. Most of these run as cron jobs that grep the codebase for a key signature. Sometimes usage is from wire monitoring of a component. Sometimes there are leaderboards. There is always a party when it goes to zero. A big party.
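
A shrink ray doesn’t need to be clever. Here’s a sketch of the kind of script that could run from cron, assuming a made-up LegacyBilling signature to grep for; point the output at whatever draws your graphs.

require "time"

# Count how much of the old system is still in place.
signature = /LegacyBilling/
count = Dir.glob("app/**/*.rb").sum do |path|
  File.readlines(path).grep(signature).size
end

# One line per run, ready for graphing. Zero means a big party.
puts "#{Time.now.utc.iso8601} #{count}"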

Engineer your migration scripts as idempotent, repeatable machines. You’re going to run them a lot:

Basically, treat your migration code as a first class citizen. It will save you a lot of time in the long run.
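
In Rails terms, first-class migration code might look like the sketch below: a backfill that only touches rows that still need migrating, so it’s safe to re-run after a crash or a partial import. The class and column names are invented.

class BackfillRememberTokens
  def run
    # Idempotent: rows that already have a token are skipped entirely.
    User.where(remember_token: nil).find_each do |user|
      user.update!(remember_token: SecureRandom.hex(20))
    end
  end
end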

Finally, you should fear rewrites, but developing the skill to pull them off is huge:

I want to wrap up by flipping this all around -- if you learn to approach your rewrites with this kind of ferocious, incremental discipline, you can tackle incredibly hard problems without fear.

Whenever I talk to people with monolithic applications, slow-running test suites, and an urge to do something drastic, I want to mind-meld the ideas above into their brains. You can turn code around, but it takes time, patience, and a culture of relentless improvement and quality to make it happen.


Look up every once in a while!

Sometimes, I feel conditioned never to look beyond the first ten feet of the earth. Watch where you're going, don't run into things, avoid being eaten by bears. Modern life!

[Image: a Texas sunset]
I see stuff like this out my office window every day. Be jealous.

When I remind myself to look up, there’s so much great stuff. Trees, antennae, water towers, buildings. Airplanes, birds, superheroes. Never mind the visual pollution of smoke, contrails, and billboards. Nifty things, natural and man-made.

Clouds in particular are nifty. They’re almost always changing, even if you look at the same patch of sky. They have pleasing shapes, and just a little bit of texture. Simple pleasure, clouds are.

And sunsets! Hooo boy, those are great. I thought they were overrated for a long time, but boy was I wrong. Colors, dynamics, fading off into darkness. I'm pretty sure sunsets invented the word "poetic".

Ed. This originally appeared in my Internet Todo List for Enthusiastic Thinkers (defunct as of 2023). It's an email thing you can (could) subscribe to. When you do (did), good things come (came) to you, often via email. It's free, and it bears no shilling for other people.


Exemplary documentation: size and purpose

There’s a lot to say about programmer-focused software documentation. It’s more crucial than many developers realize, and so it’s often neglected. Even when it’s not neglected, it’s often an afterthought. I’ve noticed there are three kinds of documentation I’m interested in.

When I first come across some software, I want short and focused examples of how I can use it for my own purposes. I’m not looking for a lot of theory or exposition; show me the benefit. If I can’t quickly see how the software works and makes my life easier, I’m very likely to discard it. In other words, I want shorter, “tweet-sized” documentation that sells me on the sizzle right away.

[Image: screenshot of rbenv's README]
rbenv's old README is a good example. I can see from the screenshot what using rbenv looks like. The bullet points make it easy to know the specifics of what this software is about.

If I come back to some software, I often want to learn the whole thing in one sitting. I want a longer document that I can read through in a serial fashion to learn most or all of the concepts and details about using the code. It should cover the domain ideas of the software, the individual APIs, and how it all works together to make something. To continue the metaphor, I want a well-written, “Instapaper-length” document worthy of reading in a comfy chair.

[Image: the Backbone.js homepage]
The Backbone.js homepage is great as a long-form read. It serves as a reference document and a guide at the same time.

After I start using something, I will often want to return to it to remember how to do specific things or to figure out if a task is possible at all. This is when I lean most on traditional API documentation. One to three paragraphs, easily searched, are the ideal here. Kind of like the “Tumblr-post” of documentation.

I’ve yet to find all three of these qualities in the documentation for a single piece of software. Finding that software has done a really good job at one of them is delight enough. I can’t imagine how excited the world, at large, would be if something were to have all three. There would be a lot of rejoicing.


The Third Shift

In the days of industrial labor, many factories ran three shifts per day. Three eight-hour shifts per day keeps a factory fully utilized and some business major’s spreadsheets happy. Luckily, for many of us, knowledge/thinking oriented businesses don’t usually follow this paradigm. We’re not (often) pressured to pick up a double shift, possibly freeing time to do useful things that we don’t get paid for.

For the ambitious (possible euphemism), this opens up an interesting opportunity: allocating the second shift to one’s own projects. Writing that great book you’ve got inside you, penciling a comic, running your Etsy business on the side, or bootstrapping that web app you’re dreaming about all make a great fit for a second shift. Find time before or after your day job, and then aim for the sky.

I found it easy to take this logic to the next level and think: well, if two shifts work and I can make progress on two things, three shifts might work and then I can do three things! Wake up early, do something awesome. Work the nine to five, do awesome things. Take a couple hours in the evening, do even more awesome things. Seems good, right?

Unfortunately, the third shift is a bandaid over too many projects, and it led me to do lower-quality work across the board.

I need more physical rest and mental space than working on three things affords. Turning down an extra hour of sleep or the bleeping of an alarm clock is a hard bargain. One side project, as it turns out, is plenty.

That said, the third shift is useful as a “turbo button” that I only press when I really mean it and used only for short-term projects that are important to whatever awesome thing I’m trying to do. A couple weeks waking up early to bang out a presentation or longer-form article are good. Sustaining that for a series of projects doesn’t work for me.

In short: ambition is great, but striking a balance with mental and physical rest is better.


A newsletter

So I did this thing where I wrote a newsletter. I’m going to do it again. The first iteration of this publication was a bit like a written late-night variety show. I wrote about interesting articles, or things that interested me. Each “episode” almost always closed with some kind of musically awesome thing I’d found on the internet.

The next iteration of this newsletter will be like a hand-delivered transmogrification of this weblog. I’ll include links to the articles I thought were most special or had a surprising reception. I’ll occasionally write “commentary tracks” on how an article came to be. Each edition will almost certainly end with a musical or pop culture find, because what fun is running a newsletter if I can’t annoy you with my pop culture tastes?

My hope is that you’ll find this interesting. You can take a look at what I’ve written previously and subscribe to this free internet email newsletter at your discretion. I think you might like it.


Web design for busy coders

Here it is: I'm somewhere between horribly afraid and way-too-smart to seriously attempt front-end web work. Browsers are not the software whose bugs I am interested in knowing about.

That said, putting information on the web that doesn't look like utter dross is a kind of required literacy in our field. While bravely dipping my toes back into the front-end waters, I recently found some great tricks. Rediscovered, probably, but I'm not sure where the idea originally came from.

Most important: design in greyscale. Color is hard and can lead to tinkering. My goal is to get in and out of the front-end bits quickly, so tinkering is the enemy. Greyscale is one-dimensional, greatly simplifying matters. Give important information higher contrast and less important information (and "chrome") lower contrast. Now you're done thinking about color.

Almost as important: use a fixed-width font. As a programmer, you look at them every day, so it's a touchpoint of comfort. Pick a font you don't use in your editor all day, just so you can stare at something different for a while. Copy and paste a "font stack" from the aptly named fontstacks. Make important things big and unimportant things small. Now you're done thinking about type.

The key to avoiding browser dragons, it seems, is to skip horizontal layout, i.e. pull quotes, text wrapped around images, etc. It's pretty easy to use CSS if you only run things down the left side of the page. All the depth and despair of CSS is in trying to get things to appear off the left margin. Don't do that. Leave it to people who know how browsers work and how to manage their gnarly bugs. Now you're done thinking about layout.

It's tempting to think you should make your code examples look really nice. Don't worry about it; highlighting code is of marginal value. You'll never be satisfied with how it looks. The human mind is capable of reading code without a rainbow spectrum of colors. Spend time on writing about the code, not on polishing the colors and how it's highlighted.

With all of those things out of the way, your way is clear to think about the really important things. What do you need to say, how do you structure the message, what do you leave out, how do you organize all the information? That's the essence of publishing on the web, not the accidental complexity of making things look interesting.


The gift and the curse of green-field projects

The "green field" in software is a gift and a curse.

On the bright side, you have an opportunity to use the new-shiny. Past wrongs can be righted. You can move quickly, without worry about why some code exists, how it works, or whether you should care about it. Life is good.

The peril I've found is that green field projects are by their nature isolated. They don't have a deployment or monitoring story. They don't spring forth fully integrated with other critical systems. The project probably hasn't proven itself as useful yet.

Letting a green-field project live in isolation too long is the root of lots of problems. I've experienced scope creep, confused expectations, and declining morale that all could have been avoided had I brought a green-field project "into the fold" sooner. But the whole point of a green field is that you don't integrate too soon, lest you spin your wheels on legacy things.

Green field projects are fun and an often welcome change of pace from working within an existing system. However, succeeding on a green field project is just as hard, or harder, than succeeding with a legacy system. It's a different set of trade-offs that each developer has to get good at.


Hypermedia chicken, web browser egg

A lot of the hypermedia philosophy is centered around the idea that API clients should work a lot like web browsers and plain-old Hypertext Markup Language. They should follow hyperlinks, leverage media types, cache data when they can, and intelligently take advantage of the meaning of hyperlinks whenever possible.

Good APIs require good services *and* good clients

The problem is that web browsers are far more capable than the HTTP clients developers use to build API consumers.

Here are some things web browsers have become pretty good at:

  • GET and POST requests
  • Following links and redirects
  • Discovering data structures via HTML forms and submitting data using that schema
  • Using headers to negotiate content types
  • Caching data when possible and expiring those caches
  • Handling streaming data and chunked responses
  • A bunch of stuff I'm probably forgetting

Here are some things the HTTP client in your typical standard library is good at:

  • GET, POST, PUT, and DELETE requests
  • Following redirects (if you manage to set the right options)

What's at play here is a chicken and egg problem. Client developers can't build on hypermedia principles until they are working at the level of hypermedia abstractions. Arming them only with HTTP requests and the ability to choose their encoding and schema poison is too low level.

Protocols like HAL or Collection+JSON could light a path to solving this problem. Rather than dealing in pure data, services and consumers work with data annotated with hypermedia-like semantics for traversing data structures and creating new data. If these protocols can gain traction, it's "simply" a matter of getting HTTP clients into widespread use (read: standard libraries in stable releases of your-favorite-programming-language) that are as good at HTTP as web browsers are. At that point, API providers and API consumers could start using hypermedia principles to build APIs for those who aren't interested in the mechanics of hypermedia.
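
To make that concrete, here's a sketch of the kind of client code a HAL-style protocol enables. The endpoint and the "next" link relation are invented for the example; the point is that the client follows hyperlinks the service hands it rather than constructing URLs itself.

require "json"
require "net/http"
require "uri"

# Visit every page of a paginated HAL collection by following links.
def each_page(entry_url)
  url = entry_url
  while url
    document = JSON.parse(Net::HTTP.get(URI(url)))
    yield document
    # HAL keeps hyperlinks under "_links"; a missing "next" ends the walk.
    url = document.dig("_links", "next", "href")
  end
end

each_page("https://api.example.com/issues") do |page|
  # Resources conventionally live under "_embedded".
  puts page.fetch("_embedded", {}).keys
end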

Until then, it seems to me that putting hypermedia principles front-and-center is suboptimal. It's akin to telling someone who wants to use your website that they need to understand MVC patterns first. It's only going to discourage and confuse them.


How to understand Saturday Night Live

Saturday Night Live is a changing thing. It’s not new like it was in the seventies, it’s not a powerhouse like it was in the nineties, and it may not be the training camp for NBC sitcoms anymore. Despite that, it’s still a big dog in the worlds of comedy and pop culture. Every time I hear or read “SNL was better when…”, I cringe a little. As far as I can tell, this isn’t true.

Everyone’s got their favorite cast. Ferrell/Shannon/Sanz/Oteri, Fey/Poehler/Rudolph, Hartman/Carvey/Myers. It seems largely to depend on whenever you started watching SNL or when you were a teenager or in college. So when someone says “SNL isn’t relevant any more”, I mentally substitute “I liked SNL better with the cast I watched first.”

Over the years of watching and reading about SNL, I’ve come to understand that the show is very much about the people on camera, but Lorne Michaels is the show. The only time the show has been in consistent decline was when Michaels wasn’t around in the early eighties. For the past thirty years, claiming SNL is on the decline has seemed more of a sport than a rational argument.

Since Michaels' return, the show is subject to fractal cycles. Each night, some skits kill and some skits bomb. Generally, the front of the show is better than the back; if you stay tuned after “Weekend Update”, you should count yourself a long-time fan, willing to see some weird and/or flat skits, or asleep on the couch.

If you zoom out to look at how a season flows, you’ll again find shows that are really great and some that aren’t. My theory is that this entirely depends on the quality of the host. A mediocre host seems to bring middling material out of the writers and performers. A good or high-profile host seems to bring good-but-not-great material and pretty good performances. One of the darling hosts, like Alec Baldwin or the more recent Jon Hamm, brings out the A-game material from the writers, and the performers play up to the occasion.

Interestingly, musical guests can bring a certain electricity too. Paul Rudd is a capable host, but pairing him with Paul McCartney led to a show that was pretty electric. I defy you to watch that episode and tell me SNL just isn’t as good as it used to be.

Zooming out to look at successive seasons, you see the same sort of up-and-down. Will Ferrell’s first season was good, but not great. He definitely left at his sketch comedy peak, and the show was briefly weaker for it. But right on his heels came the Fey/Poehler/Rudolph powerhouse. There’s an ebb and flow as casts come together, hammer out a few good seasons, and then move on to other stages.


That’s how I understand SNL. Perhaps I’m seeing the show through my own cognitive biases. I think it’s still a relevant benchmark of American pop culture.


Context is data to burst your bubbles

Designing with context:

Context is a slippery topic that evades attempts to define it too tightly. Some definitions cover just the immediate surroundings of an interaction. But in the interwoven space-time of the web, context is no longer just about the here and now. Instead, context refers to the physical, digital, and social structures that surround the point of use.

Great design is built around people, not devices or software. Applying responsive design or native app UX is a tool, not a solution. Instead, we should design software that solves a problem for a real person (not a power-user or one of our colleagues) given the devices available to them and the context of use they’re in.

A high information density display is no good to a parent trying to get their kids out the door. Documentation based on video tutorials is no good for someone riding a bus. A developer troubleshooting a service bottleneck needs to know more than the average response time.

As both designers of user experiences and developers of software, we need to get away from the desk and out amongst those we’re building for. It’s too easy to build for ourselves and our friends. We need to consider how others approach and use what we make. Armed with that context, we can design a solution for everyone, and not just those we share a bubble with.


Hyperthreading illustrated

I'm fond of saying hyperthreading is a lie. It's true though; a dual-core processor with hyperthreading is nowhere near as awesome as four real cores. That's more provocative than useful, so let me draw you some pictures.

If you zoom way out, a single core, dual cores, and a single hyperthreaded core look like this:

[Image: a drawing comparing a single core, dual cores, and a single hyperthreaded core]

Note how the hyperthreaded core is really a single core with an extra set of registers and instruction/stack pointers. The reason hyperthreading is a lie is you can't actually run four processes, or even four threads, at the same time. At best, you can run two threads with two others ready in the wings.

I'm a dilettante of processor architecture at best, but I think I can explain why chip designers would do this.

[Image: a drawing of a hyperthreaded core]

My best guess as to why you would design and release a hyperthreaded core is to increase the number of instructions you can retire (i.e. mark as completely executed) per clock cycle. Instructions retired per cycle is one of the primary metrics processor architects use for judging a design.

The enemies of instructions retired per clock cycle are memory accesses and branch mispredictions. When a processor has to go access a cache line or, worse, something out in main memory, it has nothing to do but wait. When a branch (i.e. a conditional or loop) is incorrectly speculatively executed (how awesome is it that processors start executing code paths before they even know if it's the right thing to do?), it ends up in the same predicament. Cache misses and branch mispredictions are at best a recipe for some overhead, and at worst a recipe for waiting around on main memory.

Hyperthreading attempts to solve this problem by keeping extra programs (threads or processes) waiting in the wings, ready to start executing as soon as another pauses due to needing something from main memory. This means our execution units, the things that actually do math and execute the logic of a program, are (almost) always utilized and retiring instructions. And that gets us to our happy place of a higher instructions retired per clock cycle.
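
Here's a loose analogy in Ruby, emphatically not a simulation of real silicon: treat sleep as a memory stall, and notice that two stall-prone workers overlap their waiting when run concurrently, the way a hyperthreaded core hides one thread's memory latency behind another thread's work.

require "benchmark"

STALL = 0.01 # stand-in for a trip out to main memory

serial = Benchmark.realtime do
  2.times { 5.times { sleep STALL } }
end

overlapped = Benchmark.realtime do
  workers = 2.times.map { Thread.new { 5.times { sleep STALL } } }
  workers.each(&:join)
end

# The overlapped run takes roughly half as long: one worker's stalls
# hide behind the other's, so more work "retires" per second.
puts "serial: #{serial.round(3)}s, overlapped: #{overlapped.round(3)}s"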

Why not just throw more execution units in and have real cores ready to work? I'm not sure; there must be something about how modern processor pipelines work that I don't know which makes it too expensive to implement. That said, hyperthreading (as I understand it) is a pretty clever hack for improving the efficiency of a processor.


TextMate's beautiful and flawed extension mechanism

This is about how TextMate’s bundle mechanism was brilliant, but subtly flawed. However, to make that point, I need to drag you through a dichotomy of developer tools.

Composition vs. context in developer tools

What's the difference between a tool that developers work with and a tool developers often end up working against? Is there a useful distinction between tools that seem great at first, but end up loathed as time goes on? Neal Ford has ideas. Why Everyone (Eventually) Hates (or Leaves) Maven:

I defined two types of extensibility/programmability abstractions prevalent in the development world: composable and contextual. Plug-in based architectures are excellent examples of the contextual abstraction. The plug-in API provides a plethora of data structures and other useful context developers inherit from or summon via already existing methods. But to use the API, a developer must understand what that context provides, and that understanding is sometimes expensive.

Composable systems tend to consist of finer grained parts that are expected to be wired together in specific ways. Powerful exemplars of this abstraction show up in *-nix shells with the ability to chain disparate behaviors together to create new things.

Ford identifies Maven and IDEs like Eclipse as tools that rely on contextual extension to get developers started on specific tasks very quickly. On the other hand, a composable tool exchanges task-oriented focus for greater adaptability.

Contextual systems provide more scaffolding, better “out of the box” behavior, and contextual intelligence via that scaffolding. Thus, contextual systems tend to ease the friction of initial use by doing more for you. Huge global data structures sometimes hide behind inheritance in these systems, creating a huge footprint that shows up in derived extensions via their parents. Composable systems have less implicit behavior and initial ease of use but tend to provide more granular building blocks that lead to more eventual power.

And thus, the crux of the biscuit:

Contextual tools like Ant and Maven allow extension via a plug-in API, making extensions the original authors envisioned easy. However, trying to extend it in ways not designed into the API range in difficulty from hard to impossible...

Contextual tools are great right up to the point you hit the wall of the original developer’s imagination. To proceed past that point requires a leap of one or two orders of magnitude in effort or complexity to achieve your goal which the original developer never intended.

Bundles are beautiful

Ford wrote this as a follow-on to a piece Martin Fowler wrote about how one extends their text editor. It turns out that the extension models of popular text editors, such as VIM and Emacs, are more like composable systems than contextual, plug-in systems.

All of this is an extremely elaborate setup for me to sing the praises of TextMate. Amongst the many things it got very right, TextMate brilliantly walked the line between a nerdy programmer’s editor and an opinionated everyday tool for a wide range of developers. It did this by exposing its extension mechanism through two tools that every developer knows: scripts and regular expressions.

To add functionality to TextMate, you make a bundle. A bundle is a convention for laying out a directory such that TextMate knows the difference between, say, a template and a syntax definition. This works because developers know how to put things in the right folder. There were only ever five or so folders you needed to know about, so this was a simple mechanism that didn’t become a burden.

To tell TextMate how to parse a language and do nifty things like folding text, you wrote a bunch of regular expressions. If I recall, there were really only a few placeholders to wedge in these regular expressions. This worked great, as most languages, though the “serious” tools use lexers and parsers, are amenable to low-fidelity comprehension with a series of naive pattern matches. The downside was that languages that didn’t look like C were sometimes odd to work with.

In my opinion, the real beauty of TextMate’s bundles was that all of the behavioral enhancement was handled with shell scripts. Plain-old Unix. You could write them in Ruby, Python, bash, JavaScript, whatever fit your fancy. As long as you could read environment variables and output text (or even HTML), you could make TextMate do new things. This led to an absolute explosion of functionality provided by the community. It was a great thing.
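
For flavor, here's a sketch of what one of those commands looked like. TM_SELECTED_TEXT was among the environment variables TextMate exposed; the sort-the-selected-lines behavior is just an example.

#!/usr/bin/env ruby
# A bundle command: read the editor's context from the environment,
# print the replacement text to standard output.
selection = ENV["TM_SELECTED_TEXT"] || STDIN.read
puts selection.lines.sort.join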

Downfall

Interestingly enough, TextMate is essentially a runtime for bundles. This is how VIM and Emacs are structured as well. TextMate just put a nicer interface around that bundle runtime. However, the way it did so was its downfall, at least for me.

Recall, a few hundred words ago, the difference between composable and contextual extensions. A contextual extension is easy to get going, but comes up short when you imagine something the creator of the extension point didn’t imagine. The phenomenal thing about TextMate was how well it chose the extension points and how much further those extension points took the program than what came before it. I’d estimate that about 90% of what TextMate ever needed to do, you could do with bundles. But the cost to find that last 10%, it was brutal.

Eventually, I bumped up against this limitation with TextMate. I wanted split windows, I wanted full-screen modes, I wanted better ctags integration. No one could add those (when TextMate wasn’t open source, circa 2010) because they required writing Objective-C rather than using TextMate’s extension mechanism. And so, I ended up on a different editor (after several months of wandering in a philosophical desert).

The moral of the story

If possible, you should choose a composable extension mechanism (a full-blown language, probably) and use that extension mechanism to implement your system, a la Vimscript/VIM and elisp/Emacs. If you can’t do that, you can get most of the benefit by providing a plugin API, but you have to choose the extension points really, really well.


Austin's startup vibe

It's different from other towns. What's the Difference between Austin and San Francisco?:

Austin offers you more options, but greater variety means that, on the whole, Austinites don’t focus as intensely as in San Francisco. Austin’s defining characteristic (part of its slacker culture) is a belief that intensity isn’t always the best thing. Austin believes in variety and moderation. This affects the startup community. Austin, the city, will let you pick and choose from its buffet line, and then admire the smorgasbord you put together. Your lifestyle is a work of art in Austin, and I think the culture rewards you for how you live as much as what you do, often more so.

In my few visits to San Francisco, I’ve found that I cannot wrap my Texan brain around that town. Trying to really understand its startup culture with just a few visits to the city and Palo Alto is similar folly. But I did notice the intensity that SF has. It’s not a bad way to describe the town.

That said, I think this is a pretty decent encapsulation of Austin. Austin is a slower town (slower even than Dallas) and revels in the variety of activities available to its people. The Austin tech community is more about smaller groups and individuals too. It’s not (always) about aim-for-the-stars startups or working for large companies using large technology from large vendors. It’s as much about a few people on a team or individuals hacking something out while enjoying their city, family, and friends.

Obviously, I dig that a lot.

Update: I should mention that, while it’s popular to write Austin off as a slacker town, there are a lot of people dedicated to their work and craft here. It’s not all tacos and BBQ. The events I go to most often are frequented by people who use their evenings to learn something new or talk shop while they’re making something. That is, I think, the most important factor of a startup community: the more people who are putting their evenings into making things, the more likely those things will end up awesome and grow into a business-like organism.


Senior VP Jean-Luc Picard, of the USS Enterprise (Alpha Quadrant division)

If you’re working from the Jean-Luc Picard book of management, a nice little Twitter account of Picard-esque tips on business and life, we can be friends. Consider:

Picard management tip: Be willing to reevaluate your own behavior.

And:

Picard diplomacy tip: Fighting about economic systems is just as nonsensical as fighting about religions.

But I’m not so sure about this one:

Picard management tip: Shave.

If you’re playing from home, the fictional characters that have most influenced my way of thinking are The Ghostbusters (all of them) and Jean-Luc Picard. I also learned everything I need to know about R&B from The Blues Brothers.


SoundCloud, micro-services, and software largeness

From a monolithic Ruby on Rails app to the JVM: how SoundCloud has transitioned to a hybrid approach, with Ruby apps intermingling with Scala and Clojure apps. I think some of their ideas about what is idiomatic Rails and how to operate Ruby are not exactly on center. But their approach to the problem of a large Rails app is right on: break it up into “micro-services” that, if you don’t like the code, you can rewrite quickly.

Lest you fear this is yet another “Rails doesn’t scale!” deck, they do make a key observation. “Rails, PHP, etc. are a very good choice to start something”. Once you get past “starting” and “growing” to “successful and challenging”, you’ll face the same level of challenge no matter what you choose: Ruby or Java, MySQL or Riak. All the technologies we have today are challenged when they grow large.

So don’t let applications and services get large. Easy to say; hard, but worthwhile, to practice.