Exemplary documentation: size and purpose

There’s a lot to say about programmer-focused software documentation. It’s more crucial than many developers think, yet it is often neglected. Even when it’s not neglected, it’s often an afterthought. I’ve noticed there are three kinds of documentation I’m interested in.

When I first come across some software, I want short and focused examples of how I can use it for my own purposes. I’m not looking for a lot of theory or exposition; just show me the benefit. If I can’t quickly see how the software works and makes my life easier, I’m very likely to discard it. In other words, I want shorter, “tweet-sized” documentation that sells me on the sizzle right away.

rbenv's README

rbenv’s old README is a good example. I can see from the screenshot what using rbenv looks like. The bullet points make it easy to know the specifics of what this software is about.

If I come back to some software, I often want to learn the whole thing in one sitting. I want a longer document that I can read through in a serial fashion to learn most or all of the concepts and details about using the code. It should cover the domain ideas of the software, the individual APIs, and how it all works together to make something. To continue the metaphor, I want a well-written, “Instapaper-length” document worthy of reading in a comfy chair.

Backbone.js homepage

The Backbone.js homepage works great as a long-form read, serving as a reference document and a guide at the same time.

After I start using something, I will often want to return to it to remember how to do specific things or to figure out if a task is possible at all. This is when I lean most on traditional API documentation. One to three paragraphs, easily searched, are the ideal here. Kind of like the “Tumblr-post” of documentation.

I’ve yet to find all three of these qualities in the documentation for a single piece of software. Finding that a piece of software has done a really good job at even one of them is delight enough. I can’t imagine how excited the world at large would be if something nailed all three. There would be a lot of rejoicing.

The Third Shift

In the days of industrial labor, many factories ran three shifts per day. Three eight-hour shifts per day keep a factory fully utilized and some business major’s spreadsheets happy. Luckily for many of us, knowledge- and thinking-oriented businesses don’t usually follow this paradigm. We’re not (often) pressured to pick up a double shift, possibly freeing time to do useful things that we don’t get paid for.

For the ambitious (possible euphemism), this opens up an interesting opportunity: allocating the second shift to one’s own projects. Writing that great book you’ve got inside you, penciling a comic, running your Etsy business on the side, or bootstrapping that web app you’re dreaming about all make a great fit for a second shift. Find time before or after your day job, and then aim for the sky.

I found it easy to take this logic to the next level and think: well, if two shifts work and I can make progress on _two_ things, three shifts might work and then I can do _three_ things! Wake up early, do something awesome. Work the nine-to-five, do awesome things. Take a couple hours in the evening, do even more awesome things. Seems good, right?

Unfortunately, the third shift is a band-aid over too many projects, and it led me to do lower-quality work across the board.

I need more physical rest and mental space than working on three things affords. Turning down an extra hour of sleep when the alarm clock starts bleeping is a hard bargain. One side project, as it turns out, is plenty.

That said, the third shift _is_ useful as a “turbo button” that I only press when I really mean it, and only for short-term projects that are important to whatever awesome thing I’m trying to do. A couple of weeks of waking up early to bang out a presentation or a longer-form article is fine. Sustaining that pace across a series of projects doesn’t work for me.

In short: ambition is great, but striking a balance with mental and physical rest is better.

Web design for busy programmers

Here it is: I’m somewhere between horribly afraid and way-too-smart to seriously attempt front-end web work. Browsers are not the software whose bugs I am interested in knowing about.

That said, putting information on the web that doesn’t look like utter dross is a kind of required literacy in our field. While bravely dipping my toes back into the front-end waters, I recently found some great tricks. Rediscovered, probably, but I’m not sure where the idea originally came from.

Most important: design in greyscale. Color is hard and can lead to tinkering. My goal is to get in and out of the front-end bits quickly, so tinkering is the enemy. Greyscale is one-dimensional, which greatly simplifies matters. Give important information higher contrast and less important information or “chrome” lower contrast. Now you’re done thinking about color.

Almost as important: use a fixed-width font. As a programmer, you look at fixed-width fonts every day, so they’re a touchpoint of comfort. Pick a font you don’t use in your editor all day, just so you can stare at something different for a while. Copy and paste a “font stack” from the aptly named fontstacks. Make important things big and unimportant things small. Now you’re done thinking about type.

The key to avoiding browser dragons, it seems, is to skip horizontal layout, i.e. pull quotes, text wrapped around images, etc. It’s pretty easy to use CSS if you only run things down the left side of the page. All the depth and despair of CSS is in trying to get things to appear off the left margin. So don’t do that. Leave it to people who know how browsers work and how to manage their gnarly bugs. Now you’re done thinking about layout.

It’s tempting to think you should make your code examples look really nice. Don’t worry about it; highlighting code is of marginal value. You’ll never be satisfied with how it looks, and the human mind is perfectly capable of reading code without a rainbow spectrum of colors. Spend time on writing about the code, not on polishing the colors and how it’s highlighted.

With all of those things out of the way, you’re free to think about the really important things: what do you need to say, how do you structure the message, what do you leave out, how do you organize all the information? That’s the essence of publishing on the web, not the accidental complexity of making things look interesting.

Lessons from premature design

Lessons from Premature Abstractions Illustrated. I’ve run afoul of all three of these:

Make sure you have someone on the team or externally available that will keep the critical, outside look at the project, ready to scream and shout if things turn bad.

Don’t let your technical solution influence your design decisions. It’s the tool that needs to fit the job, not the other way round.

Don’t build abstractions as long as you have no proven idea on how the levels below that abstraction will look like.

I could have used an outside, trusted voice to gently reel me in when I went off into the unproductive weeds. Someone to ask “how will this help the team in two weeks?”, someone to point out ideas that might be great but have only achieved greatness in my head. A person who asks questions because they want me to succeed, not because they’re trying to take me down a notch.

I have rushed into implementing the first idea in my head. Sometimes I’ve convinced myself that my first idea is the best, despite knowing I need to review it from more angles. I’ve jumped into projects with a shiny new tool and a bunch of optimism, only to cut myself on a sharp edge later on.

I’ve built systems that look fine on their own, but don’t fit into the puzzle around them. I’ve isolated myself building up that system, afraid to figure out how to fit my system into the puzzle in a useful way. I’ve used mocks and stubs to unintentionally isolate myself from the real system.

Basically, these are all really good ways to paint yourself into a corner. It seems like being in a corner with a shiny new system/tool/abstraction would be nice. Unfortunately, my experience is that once you have to make sense of that abstraction in a team, things get dicey.

It’s dangerous to run a software project on your own! Take a friend.

Semantics/Empathy

People argue about words all the time. In the past two weeks, I’ve participated in and watched nerds unproductively trying to convince each other that they are using the words bijection, hypermedia, and dependency injection incorrectly. Nerds easily fall into this trap because many of us are fascinated by knowledge, sharing that knowledge, and teaching that knowledge.

Arguing about words is fun. Arguing about words is practically useless.

Semantics are good

Words are a tricky business. An overused, overloaded, or ambiguous word isn’t particularly useful. “Synergize”, “web-scale”, or “rockstar” are mush words that don’t convey much meaning anymore. It’s tempting to think that encouraging others to be judicious in their use of words and mind the specific context and meaning of their statements could move the needle in making the world better.

On the other hand, human interaction is fidgety. We all have differing experiences, so the way we think and feel about things can vary wildly. You might say “we should pivot our business”, remembering the time you did so and took the company in a much better direction. I might hear you say “pivot” and think about all the abuses of the word in startup discourse or all the companies that have “pivoted” and still failed. Even though we are thinking of the same definition of “pivot”, we are thinking different things.

Semantics are good for getting two people in the same mental ballpark. I can say “web framework” and expect you to know I’m not talking about dogs, tacos, coffee, or compilers. You and I may differ on what a web framework is and what it does, but at least we’re both thinking of things that help developers build web-based applications. We may not be talking about the same thing, but we’re close.

This is why I think strong semantics are interesting, but not a silver bullet. Very rarely have I solved a problem by applying stronger semantics to the words used in the discussion of the problem. Never have I solved a problem by telling someone they are using the wrong semantics and that they should correct themselves.

We can argue about words all we want, but it’s not getting us any closer to solving the real problem: the one we started talking about before we decided to have a side argument over the meaning of a word.

Empathy is better

Empathy is a better tool. When someone misuses a word, I stop myself and think, “OK, let’s allow that one to slide. What are they really trying to say?” Rarely does someone misuse a word on purpose. It’s more likely they know it in a different context; discovering that context and matching it to your own is how the conversation moves forward.

If you say “we need to pivot our web commerce company to a web framework consultancy”, I may not know precisely what you mean by “pivot”, “web framework”, or “consultancy” but I can get on the same page with you. You think we need to change directions and that some services-oriented business based on helping people build web applications is the way to move forward. Armed with that, I can ask you questions about why we need to change directions, what that web framework looks like, or how we would change ourselves to a services-oriented company. It’s not as important that you get the words right; it’s important that we find a way to talk about the same thing.

Words are fun, but what’s useful is to figure out what the other person is thinking or feeling and talk to that. Telling someone they’re wrong creates tension, and it isn’t productive anyway. I’d rather talk about how we can make better programs or better understand our world than quibble over the meanings of a few words.

Words are a lossy representation; they can’t possibly connote the full meaning and nuance of any idea of interesting size. Don’t get caught up in skirmishes over the marginally important details of semantics. Use words to show others what you’re thinking and guide them towards your understanding of the problem and a proposed solution.

Reflecting on Ruby releases

Ruby 1.8 brought us a couple changes that made many kinds of metaprogramming easier, plus a whole bunch of library additions that made Ruby feel more “grown up”. Without seeking external libraries, one could write Ruby to solve many problems developers face in commonplace jobs. I wasn’t around for Ruby 1.6, but I’ve been thinking of Ruby 1.8 as a transition from “better Perl or Java” to “better Smalltalk”.

Ruby 1.9 brought us features that make some functional programming idioms easier. Lambdas, i.e. anonymous functions, require less syntax and are better defined. Enumerators make it possible to use features of Enumerable, itself a very functional-esque feature, in more places. Symbol-to-proc makes it easier to pass methods around as blocks, another FP-esque practice. I might say that Ruby 1.9 is the “better MatzLisp” version of Ruby.
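
To make that concrete, here’s a quick sketch of those three features using nothing but core Ruby 1.9:

# Stabby lambdas: terser than lambda { |a, b| a + b }
add = ->(a, b) { a + b }
add.call(1, 2)            # => 3

# Enumerators: any Enumerable can become an external iterator
letters = %w(a b c).each  # => #<Enumerator: ["a", "b", "c"]:each>
letters.next              # => "a"

# Symbol-to-proc: pass a method around as a block
%w(a b c).map(&:upcase)   # => ["A", "B", "C"]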

Ruby 2.0 is bringing us features that, on the surface, make it easier for Rails to extend the Ruby language via ActiveSupport. I think that’s too shallow a reading. The new tools in Ruby 2.0 (excepting the highly controversial refinements) make it easier to cleanly add functionality to Ruby’s core objects and library. Reducing the cost of extending the core makes it possible for more libraries and applications to judiciously make high-leverage additions to the lower levels of Ruby. That seems like a pretty good thing.
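
Module#prepend, new in 2.0, is a good example of those tools. Here’s a minimal sketch of layering behavior onto a core class; the LoudCapitalize module is invented for illustration:

module LoudCapitalize
  def capitalize
    super + "!"
  end
end

class String
  prepend LoudCapitalize  # sits in front of String in the ancestor chain
end

"hello".capitalize        # => "Hello!", and super still reaches the built-in method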

I can’t find a source for this, but I could have sworn I once read that all programming is language design. It was probably related to Lisp, where you’re arguably directly manipulating the AST much of the time. If the changes in Ruby 2.0 can take us closer to this level of program design, where we think more about building language up to the problem domain instead of objects and mechanism, sign me up.

Design for test vs. design for API

How many design considerations are there in an almost trivial method? Let’s look at two of them. Consider this code:

def publish!
  self.update_attributes(created_at: Time.now)
end

If you’ve been studying OO design and the SOLID principles, using TDD as a practice to guide you towards those ideas, there’s a missing piece here. The reference to Time is a dependency that should be injected. In Ruby, it’s really easy for us to fix that:

def publish!(time=Time.now)
  self.update_attributes(created_at: time)
end

I suspect a lot of TDDers would instinctively write the second version first, skipping the first by force of habit. But let’s stop and think about what drives us to want the second version.

The strength of the second version is that it is designed for test. If we need to test how this model behaves when it is published at night, or on a leap day, or the day before Arbor Day, injecting the time object makes that easier.
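
For example, a minitest-flavored sketch of a test that cares about the exact moment of publishing; Post stands in for whatever model owns publish!:

require "minitest/autorun"

class PublishTest < Minitest::Test
  def test_publishing_at_the_stroke_of_midnight
    post = Post.new
    midnight = Time.new(2013, 1, 1, 0, 0, 0)

    post.publish!(midnight)

    assert_equal midnight, post.created_at
  end
end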

There are some other test-focused design directions this method could take. We could create our own object whose role is to hand out timestamps, which would allow us to reasonably stub out the time reference instead of injecting it. I’ll bet there are other approaches lurking out there as well.
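
One way that could look; Clock here is a made-up role, not a library class:

# A tiny object whose only job is handing out timestamps.
class Clock
  def self.now
    Time.now
  end
end

def publish!
  self.update_attributes(created_at: Clock.now)
end

# In tests, stub Clock.now to return a known time rather than
# threading a time value through the method signature.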

I want to look at another set of design considerations. I could design this code for testability, which often leads me to code that follows the SOLID principles, which in turn leads to decoupled code that is easier to change later. To many people, that’s a good thing.

However, there’s another lens I can look through: API design. How does this method hold up as a piece of behavior that developers will leverage?

Strictly speaking, the TDD’d version is a more complicated API. Even adding one optional parameter to a method carries “mass”. Consider documenting the parameter-less version:

Publishes the current post. The created_at timestamp is set to the current time. Returns the created_at timestamp.

For numbers’ sake, that’s 17 words. More importantly, it reads linearly. Now let’s look at the dependency-injected version:

Publishes the current post. By default, created_at is set to the current time. Optionally, callers may pass in a Time object, or any object that returns a Time object when sent the now message. The created_at column is set according to that Time value. Returns the value of the created_at timestamp.

This one is 51 words, roughly three times as many. More importantly, the explanation is no longer linear. There’s a default, easy case where I don’t care about the timestamp. Then there’s a clever case where I do care about the timestamp. In real API documentation, I’d need to specify when and why I’d want to use that clever case and what it looks like.

There’s some further potential trouble lurking in this API. What if a caller passes in the wrong kind of Time object? What if sending the now message raises an exception? Those are important parts of the API too, both for specifying behavior and for the experience of developers using this API and troubleshooting it when things go wrong.
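
If those edge cases matter, the method grows heavier still. A hedged sketch of what defensive handling might look like:

def publish!(time = Time.now)
  raise ArgumentError, "expected a Time, got #{time.class}" unless time.is_a?(Time)
  self.update_attributes(created_at: time)
end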

My point is, that optional argument is starting to look rather weighty. Adding the code is pretty trivial; the possible interactions with the optional argument and its support cost are where it gets expensive. Like many things, it’s a trade-off.

I won’t claim to know which of these is better. Honestly, I think it comes down to a subjective view on what’s important: test design, or API design. This is where I can’t make a bold-sounding prognosis. I believe that design, even of code, is about deciding what to leave out. Everyone has to decide what to leave out for themselves.

Declaring coupling

A lot of discussions on software design end up focusing on dependencies and coupling. In short, hell is dependencies and the coupling they produce. It’s a tricky problem because it’s hard to look at some program text and see all of its dependencies; some of them require intelligence to recognize.

In Ruby, we don’t have very good ways to declare a class’s dependencies, and we have no way at all to declare its couplings. We can describe a project’s dependencies with a Gemfile, or a file’s dependencies with require statements. The trick is that these specifications often explode in complexity. Requiring Ruby’s thread library brings in thread-safe data structures like queues and condition variables. Requiring ActiveRecord brings in a world of dependencies and causes a number of behavioral changes to Ruby that some consider impolite.
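
A rough illustration of how little a require tells you about what you just pulled in; the exact constants and core extensions vary with your Ruby and gem versions:

require "thread"         # also defines Queue, SizedQueue, ConditionVariable, ...
work = Queue.new         # ...which are now visible to every file in the process

require "active_record"  # drags in a large graph of gems and, via ActiveSupport,
"".blank?                # core extensions like String#blank? appear everywhere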

In some tinkering with Clojure this weekend, I was struck by how the ns function is more effective at declaring both dependencies and couplings, and at restricting the possible distress those qualities may bring. Consider this snippet from my weekend project:

(ns hrq.routes
  (:use compojure.core
        hrq.core)
  (:require [compojure.route :as route]
            [compojure.handler :as handler]
            [ring.middleware.params :as params]))

I have pedantic quibbles with ns, but I like what’s happening here. This file can only use the functions in compojure.core and hrq.core with no namespace qualifications. This file can only use the functions from compojure.route, compojure.handler and ring.middleware.params when they are qualified with the proper prefix. So now I have a very good idea of what code this particular file depends on and where I should look to find behavior that this file is subject to.

To a lesser extent, I have a good guess about what state this file depends on. If there are dynamically scoped variables (pardon me if those are the wrong Clojure/Lisp words) in the dependencies declared for this file, I would need to care about them. If those files are pure behavior (i.e. referentially transparent pure functions), I have nothing to worry about.

Clojure isn’t perfect in this regard; it does allow mutations and state changes outside of functions. It’s not strictly referentially transparent like Haskell is. The tradeoff is worthwhile, in my opinion. Admit some possible coupling in exchange for ease of building typical programs.

I’m not sure that Clojure is inherently superior to Ruby in this regard. It’s possibly a momentary cultural advantage, a reaction by those who were burned by expansive, implicit dependencies in Ruby and other languages. That said, it’s a good example of Clojure’s considered separation of concerns solving problems that are quite thorny in other languages.

Intermediate variables, organizing OO, meeting Grinders half way

I work with Dave Copeland at LivingSocial, but not on the same team. Maybe someday I’ll fix that, but for now I learn a lot from his writings. Herein, a few things worth checking out yourself.

If you ever need to read my code, you’ll eventually come to suspect I have a particular dislike for intermediate variables. You’ll come to suspect this through finding lots of uses of inject and tap, two Ruby methods not everyone is on good terms with. You can imagine I’d side with Dave on the subject of Tap versus intermediate variables. You’d be right, but Dave says it so well, you should read his take on the joy of tap. He also shows how to annoy people with tap-like constructs in other languages. If you’re into combinators, Reg Braithwaite has written about tap in terms of Kestrels.
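
For the unfamiliar, tap yields its receiver to a block and then returns the receiver, which makes it a drop-in for the assign-then-return dance. A small sketch; User and its role attribute are hypothetical:

# With an intermediate variable:
def signup(params)
  user = User.new(params)
  user.role = :member
  user.save!
  user
end

# With tap, the new User is yielded to the block and then returned:
def signup(params)
  User.new(params).tap do |user|
    user.role = :member
    user.save!
  end
end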

I’ve been learning a lot about how to think about organizing a non-trivial object-oriented system this year. Gary Bernhardt is doing some fantastic work explaining a hybrid imperative/object/functional system. If you don’t have time to dive into Gary’s entire backlog (it’s worth it to find the time), Dave covers some similar ground describing the only four types of classes in your OO system. Think of these as a post-hoc observation on how many systems seem to evolve: Record objects take root, Service objects reveal themselves (often intertwined amongst other objects), Builders are sprinkled throughout, and there are a few classes hanging out that you wish you’d made immutable. These are handy guides for thinking about and refactoring an existing design. That said, I think it would be overkill to start a design with these archetypes. Caveat: some developers will really dislike organizing a system this way; tread carefully.
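
As a rough sketch of those archetypes; the names and domain are invented, and Dave’s write-up is the authoritative version:

# Record: mostly data, thin behavior
Payment = Struct.new(:amount_cents, :currency)

# Service: one job, little to no state of its own
class PaymentProcessor
  def process(payment)
    # talk to the payment gateway...
  end
end

# Builder: assembles other objects so callers don't have to
class PaymentBuilder
  def build(params)
    Payment.new(params.fetch(:amount_cents), params.fetch(:currency, "USD"))
  end
end

# The class you wish were immutable: freeze it and return new values instead
class Money
  attr_reader :cents

  def initialize(cents)
    @cents = cents
    freeze
  end

  def +(other)
    Money.new(cents + other.cents)
  end
end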

I’ve written about the virtues of The Grinder. I know a lot of non-Grinders wish that the Grinder knew more about how to take the code they’ve made work and improve it so that it is more malleable in the future. Making it Right: Technical Debt vs. Slop sets out a good mindset on how this can happen. Think before you type, write a test, make it work, and then tidy it up with future malleability in mind. From there, where non-Grinders need to meet Grinders in the middle is in shrinking the feedback cycle. When tests are too much effort to write or take too long to run, Grinders fall back to their old habits. When making it right involves too many intermediate steps with nothing to show for it, Grinders move on to the next thing. When non-Grinders learn to be less precious with their work, and Grinders learn to take a moment to round off the sharp corners on theirs, you end up with a much stronger team. Fight for it.

Why I’m down on hypermedia containers

In response to my hypermedia opinions, Mike Kelly said:

These two seem to conflict: “In my opinion, abstract container formats aren’t useful.” and “Just use JSON”. People normally talk about “generic” media types, but they don’t have to [be] a “container” at all, they can simply add conventions for linking. Having conventions for this stuff is useful because it allows us to build tooling around it, if everyone reinvents the wheel in their own way then we can’t build re-usable code. For a similar reason, “specifying your own custom MIME types” is not a good idea – there’s also the time cost associated with doing that. If you use something like hal+json you avoid that cost, and can concentrate on establishing your API’s workflows via link relations.

The logic behind the madness goes like this: abstract containers aren’t solving a problem I currently have, so adopting one adds cost without adding benefit. In the end, I’m building an API to provide functionality, not advocacy.

As of summer 2012, there are ideas like HAL, JSON collections, etc., and specifications of those ideas, but very few implementations. As a service provider, I wouldn’t actually get any benefit out of using those formats. The convention, within my own API, that fields ending in _url are links is sufficient. I’d actually end up net negative, because I’d have to explain how, e.g., HAL works and support client developers seeking to understand how to work with it. Anyone building against my API would likely end up having to write their own HAL code, so they don’t benefit much either.
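
Concretely, the convention looks something like this; the fields and URLs are made up:

require "json"

post = {
  title:        "Declaring coupling",
  published_at: "2012-08-01T12:00:00Z",
  author_url:   "https://api.example.com/authors/42",        # anything ending in _url
  comments_url: "https://api.example.com/posts/7/comments"   # is a link to follow
}

puts JSON.generate(post)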

I’ve decided to use JSON because providing an API that returns HTML and expects users to scrape it via selectors is extremely confusing to developers. Keep in mind, not everyone is savvy to the latest development trends. To them, an API means an HTTP service that returns XML or JSON. If I were to embrace HTML as a response type, I’m again stuck with explaining a new concept to client developers.

If it’s not obvious yet, one of my main principles in adopting hypermedia is to avoid educating developers on hypermedia as much as possible. I’m in the game of providing a useful API, not a system that shows off the possibilities of hypermedia and how deeply committed I am to its theories.

Finally, I’ve chosen to craft my own content types because I need some kind of contract with client developers that tells them what kind of data they can expect to see, plus documentation that expresses that contract in a way that is easy for humans to understand. An RFC-style specification that states what a content type MUST/MAY/SHOULD include is exactly the right kind of abstraction. It allows me to update the specification with a version identifier and specify how each revision changes in terms of what data is available. Further, I found a content type to be the most tractable solution for specifying the input formats supported by PUT and POST endpoints. None of the abstract containers that existed as of August 2012 fit my needs for specifying links, structure, and how to submit data via POST/PUT/PATCH.
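
A sketch of what that can look like in practice; the vendor strings are invented, and the Rails-style registration is only one way to wire them up:

# Each revision of the post representation gets its own media type.
POST_V1 = "application/vnd.example.post.v1+json"
POST_V2 = "application/vnd.example.post.v2+json"

# In a Rails-flavored API, the type could be registered and negotiated on:
#   Mime::Type.register POST_V2, :post_v2
#
# Clients then ask for the revision they were built against:
#   Accept: application/vnd.example.post.v2+json
#   Content-Type: application/vnd.example.post.v2+json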

Honestly, link traversal and machine-to-machine interaction are not the pain I’m feeling. What I want is the simplest possible API that allows potential client developers to understand what they can do via our API and how to do it. Further, I want it to be possible, even if it’s not magical or easy, to change the API in the future so I’m not constrained by its current design. I feel no need to apply all of the hypermedia principles to make something useful; I can cherry-pick some of them and still achieve the goal of an API that is stable but not set in stone.