2012
Tool agnosticism is good for you
When it comes to programming editors, frameworks, and languages, you’re likely to take one of three stances: marry, boff, or kill. There are tools that you want to use for the rest of your life, tools that you want to moonlight or tinker with, and tools you never want to use again in your life.
Tool antagonism, ranting about the tools you want to kill, is fun. It plays well on internet forums. It goes nicely with beer and friends. But on a team that isn’t using absolutely terrible tools, it’s a waste of time.
Unless your team is bizarrely like-minded, it’s likely some disagreement will arise along the lines of editors, frameworks, and languages. I’ve been that antagonistic guy. We can’t use this language, it has an annoying feature. We can’t use this framework, it doesn’t protect us from an annoying kind of bug. I’ve learned these conversations are a waste of social capital and unnecessarily divisive. Don’t be that guy.
There are usually a few tools that are appropriate to a given task. I often have a preference and choose specific tools for my personal projects. There are other tools that I’m adept at using or have used before, but find minor faults with. I’ve found it far better to accept that other, non-preferred tool if it’s already in place or others have convictions about using it. Better to put effort into making the project or product better than to spin wheels on programming arcana.
When it comes to programmer tools, rational agnosticism beats antagonism every time. Train yourself to work amazingly well with a handful of tools, reach adeptness with a few others, and learn how to think with as many tools as possible.
Rails 4, all about scaling down?
To some, Rails is getting big. It brings a lot of functionality to the table. This makes apps easier to get off the ground, especially if you aren’t an expert. But as apps grow, it can lead to pain; there’s so much machinery in Rails, it’s likely you’re abusing something. It’s easy to look at other, smaller tools and think the grass is greener over there.
Rails 4 might have answers to this temptation.
On the controller side of things, it seems likely that some form of Strobe’s Rails extensions will find their way into Rails, making it easier to create apps (or sub-apps) that are focused on providing an API and eschew the parts of ActionPack that aren’t necessary for services. The thing I like a lot about this idea is that it covers a gap between Sinatra and Rails: you can prototype your app with all the conveniences of Rails, then strip out the parts you don’t need as the app grows and you pare it down to lean services. You could certainly still rewrite services in Sinatra, Grape, or Goliath, but it’s nice to have an option.
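To make the idea concrete, here’s a sketch using what Rails 3 already ships, not Strobe’s actual extensions: a hypothetical controller built on ActionController::Metal that pulls in only the pieces a JSON service needs.

# A lean service controller; route to it like any other controller
class StatusController < ActionController::Metal
  include ActionController::Rendering
  include ActionController::Renderers::All # provides render :json

  def show
    render :json => { 'status' => 'ok' }
  end
end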
On the model side of things, people are, well, modeling. Simpler ways to use ActiveModel with ActionPack will appear in Rails 4. The components the DataMapper team is working on, in the form of Virtus, seem really interesting too. If you want to get started now, you can check out ActiveAttr, sort of the bonus-track version of ActiveModel. Chris Griego’s put a lot of solid thought into this; you definitely want to check out his slides on models everywhere; they’re lurking in your controllers, your requests, your responses, your API clients, everywhere.
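If you want a taste of what that looks like in code, here’s a minimal sketch of the kind of plain-Ruby model ActiveModel already makes possible; the SignupForm class is hypothetical:

require 'active_model'

# A plain Ruby object that plays nicely with ActionPack form helpers
# and validations; no database required
class SignupForm
  extend ActiveModel::Naming
  include ActiveModel::Conversion
  include ActiveModel::Validations

  attr_accessor :name, :email

  validates :email, :presence => true

  def initialize(attributes = {})
    attributes.each { |key, value| send("#{key}=", value) }
  end

  # Form helpers ask this to decide between POST and PUT
  def persisted?
    false
  end
end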
In short, my best guess on Rails 4, right now, is that it will continue to give developers a curated set of choices and frameworks to get their application off the ground. It will add options to grow your application’s codebase sensibly once it’s proven out.
What I know, for sure, is that the notion of Rails 4 seems really strange to me. How fast time flies. Uphill, both ways.
This silver bullet has happened before and it will happen again
Today it's Node. Before it was Rails. Before it was PHP. Before it was Java. From Cogs Bad:
There’s a whole mindset - a modern movement - that solves things in terms of working out how to link together a constellation of different utility components like noSQL databases, frontends, load balancers, various scripts and glue and so on. It’s not that one tool fits all; it’s that they want to use all the shiny new tools. And this is held up as good architecture! Good separation. Good scaling.
I’ve fallen victim to this mindset. Make everything a Rails app, solve all the problems with Ruby, store all the data in distributed databases! It’s not that any of these technologies are wrong; it’s just that they might not yet be right for the problem at hand.
You can almost age generations of programmers like tree rings. I’m of the PHP/Rails generation, though I started in the Linux generation. A few years ago, I thought I could school the Java generation. But it turns out, I’ve learned a ton from them, even when I was a bit of a hubristic brat. The Node generation of developers will teach me more still, both in finding new virtuous paths and in going down false paths so I don’t have to follow them.
That said, it would be delightful if there was a shortcut to get these new generations past the “re-invent all the things!” phase and straight into the “make useful things and constructive dialog about how they’re better” phase.
Bootstrap, subproject, and document your way to a bigger team
Zach Holman's slides, Ruby Patterns from GitHub's Codebase, cover the patterns GitHub uses to scale their team:
Your company is going to have tons of success, which means you'll have to hire tons of people.
My favorites:
- Every project gets a script/bootstrap for fetching dependencies, putting data in place, and getting new people ready to go ASAP. This script comes in handy for CI too.
- Try new techniques by deploying them only to team members at first. The example here was auto-escaping markup. They started with it enabled only for staff, instead of turning it on for everyone and feeling the hurt.
- Build projects within projects. Inevitably, areas of functionality get so complex or generic that they want to be their own thing. Start by partitioning these things into lib/some_project, document it with a README in lib/some_project, and put the tests in test/some_project. If you need to share it across apps or scale it differently someday, you can pull those folders out and there you go.
- Write internal, concise API docs with TomDoc; see the sketch after this list. Most things only need 1-3 lines of prose to explain what’s going on. Don’t worry about generating browseable docs, just look in the code. I heart TomDoc so much.
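For the unfamiliar, here’s the flavor of TomDoc, adapted from the canonical example in the TomDoc spec:

# Public: Duplicate some text an arbitrary number of times.
#
# text  - The String to be duplicated.
# count - The Integer number of times to duplicate the text.
#
# Examples
#
#   multiplex('Tom', 4)
#   # => 'TomTomTomTom'
#
# Returns the duplicated String.
def multiplex(text, count)
  text * count
end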
These ideas really aren’t about patterns, or scaling the volume of traffic your business can handle. They’re about scaling the size of your team and getting more effectiveness out of every person you add.
Write more manpages
Every program, library, framework, and application needs documentation of some sort; this much is uncontroversial. How much documentation, what kinds of documentation, and where to put that documentation are the questions that often elicit endless prognostication.
When it comes to documentation aimed at developers, there’s a spectrum. On one end, there’s zero documentation, only code. On the other end of the spectrum are literate programs: the code is intertwined with the documentation, and the language is equally geared towards marking up a document and translating ideas into machine-executable code.
Somewhere along this spectrum exists a happy ideal for most programmers. Inline API docs à la JavaDoc, RDoc, and YARD have been popular for a while. Lately, tools like docco and rocco have raised enthusiasm for “semi-literate programming”. There’s also a lot of potential in projects exhaustively documenting themselves in their Cucumber features, as vcr does.
All of these tools couple code with documentation, per the notion that putting them right next to each other makes it more likely documentation gets updated in sync with the code. The downside to this approach is that code gets ‘noised up’ with comments. Often this is a fair trade, but it occasionally makes navigating a file cumbersome.
It happens that Unix, in its age-old sage ways, has been storing its docs out-of-line with the relevant code for years. They’re called manpages, and they mostly don’t suck. Every C API on a modern Unix has a corresponding manpage that describes the relevant functions and structures, how to use it, and any bugs that may exist. They’re actually a pretty good source of info.
Scene change.
It so happens that Ryan Tomayko is a Unix aficionado and wrote a tool that is even better for writing manpages than the original Unix tooling. It’s called ronn, and it’s pretty rad: you write Markdown, and you get bona fide UNIX manpages plus manpage-styled HTML.
Perhaps this is a useful way to write programmer-focused docs. Keep docs out of the code, put them in manpages instead, push them to GitHub Pages. Code stays focused, docs still look great.
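A sketch of the build step as a Rake task, assuming ronn’s command-line flags and a man/ directory full of ronn sources:

# Build roff and HTML manpages from ronn sources in man/
desc 'Build manpages with ronn'
task :man do
  sh 'ronn --roff --html man/*.ronn'
end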
I took John Nunemaker’s scam gem and put this idea to the test. Here’s what the manpage looks like, with the default styling provided by ronn:

Here’s the raw ronn document:
No Ruby files were harmed in the making of this documentation.
It took me about ninety minutes to put this together. Probably 33-50% of that time was simply tinkering with ronn and making sure I was writing in the style that is typical of manpages. So we’re talking about forty-five minutes to document a mixin with seven methods. For pretty good looking output and simple tooling, that’s a very modest investment.
The potential drawbacks are the same as any kind of documentation; it could fall out-of-sync with the production code. Really, I think this is more of a workflow issue. Ideally, every commit or merged branch is reviewed to make sure that relevant docs are updated. As a baseline, the release workflow for a project should include a step to make sure docs are up-to-date.
In short, I have one datapoint that tells me that ronn is a pretty great way to generate programmer-oriented documentation. I’ll keep tinkering with it and encourage other developers to do the same.
How to make a CIA spy, and other anecdotes
And the hilariously incompetent, such as the OSS operative whose cover was so far blown that when he dropped into his favorite restaurant, the band played “Boo! Boo! I’m a Spy.”
Interesting, new-to-me tidbits on what goes into making CIA spies, what they actually do in the field, and how the practitioners of spycraft have changed over the years. The bad news: spy recruitment doesn’t exactly work like in Spies Like Us. The good news: the CIA and its spying is closer to “just as bad/inept as you’d think” than “as diabolical as a James Bond villain”.
Own your development tools, and other cooking metaphors
Noel Rappin encourages all of us to use our development tools efficiently. If your editor or workflow isn’t working for you, get a new tool and learn to use it.
I’ve been working with another principle lately: minimize moving parts. I used to spend time setting up tools like autotest, guard, or spork. But it ended up that I spent too much time tweaking them or, even worse, figuring out exactly how they were working.
I’ve since adopted a much simpler workflow. Just a terminal, a text editor, and some scripts/functions/aliases/etc. for running the stuff I do all the time. I take note when I’m doing something repeatedly and figure out how I can automate it. Besides that, I don’t spend much time thinking about my tools. I spend time thinking about the problem in front of me. It makes a lot of sense, when you think about it.
I say you should “own” your tools and minimize moving parts because you should understand how they all work together and how they might change the behavior of your code. If you don’t own your tools in this way, you’ll end up wasting time debugging someone else’s code, i.e. a misbehaving tool. That’s just a waste of time; when you come across a tool that offends in this way, put aside a time block to fix it, or discard it outright.
Automated code goodness checking with cane
A few nights ago, I added Xavier Shay’s cane to Sifter. It was super simple, and cane runs surprisingly fast. Cane is designed to run as part of CI, but Sifter doesn’t really have an actual CI box. Instead, I’ve added it to our preflight script, which tells us whether deploying is a good idea or not based on our spec suite. Now that preflight script can tell us if we’ve regressed on code complexity or style as well. I’m pretty pumped about this setup.
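For reference, here’s roughly what the wiring looks like; the Rake task follows cane’s README, though the thresholds here are made-up numbers:

begin
  require 'cane/rake_task'

  desc 'Run cane to check quality metrics'
  Cane::RakeTask.new(:quality) do |cane|
    cane.abc_max = 10 # flag methods whose ABC complexity exceeds 10
    cane.add_threshold 'coverage/covered_percent', :>=, 90 # hypothetical coverage floor
  end
rescue LoadError
  warn 'cane not available, quality task not provided.'
end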
Next step: add glib comments on failure. I’m thinking of something like “Yo, imma let you finish but the code you’re about to deploy is not that great.”
What kind of HTTP API is that?
An API Ontology: if you were curious about what the differences between RPC, SOAP, REST, and hypermedia APIs are, but were afraid to ask. In my opinion, this is not prescriptive; I don't think there's anything inherently wrong with using any of these, except SOAP. Sometimes an RPC or a simple GET is all you need.
On rolling one's own metrics kit
On instrumenting Rails, custom aggregators, bespoke dashboards, and reinventing the wheel; 37signals documents their own metrics infrastructure. They’re doing some cool things here:
- a StatsD fork that stores to Redis; for most people, this is way more sensible than the effort involved in installing Graphite, let alone maintaining it
- storing aggregated metrics to flat files; it’s super-tempting to overbuild this part, but if flat files work for you, run with it
- leaning on ActiveSupport notifications for instrumentation; I’ve tinkered with this a little and it’s awesome, I highly recommend it if you have the means (see the sketch after this list)
- building a custom reporting app on top of their metric data; anything is better than the Graphite reporting app
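If you haven’t tried ActiveSupport notifications yet, here’s a minimal sketch; the event name and the Metrics reporter are hypothetical stand-ins for whatever you already use:

require 'active_support/notifications'

# Subscribe once at boot; the block receives timing info and the payload
ActiveSupport::Notifications.subscribe('render_widget.myapp') do |name, start, finish, id, payload|
  duration_ms = (finish - start) * 1000
  Metrics.timing('myapp.render_widget', duration_ms) # hypothetical reporter
end

# Instrument any block of work in application code
ActiveSupport::Notifications.instrument('render_widget.myapp', :widget_id => 42) do
  # ... render the widget ...
end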
More like this, please.
One could take issue with them rolling this all on their own, rather than relying on existing services. If 37signals were a fresh new shop, one would have a point. Building out metrics infrastructure, even today with awesome tools like StatsD, can turn into an epic time sink. However, once you’ve been around for several years and thought enough about what you need to measure and act on your application, rolling your own metrics kit starts to make a lot of sense. It’s fun too!
Of course, the important part is they’re measuring things and operating based on evidence. Whether you roll your own metrics infrastructure or use something off the shelf like Librato or NewRelic, operating on hard data is the coolest thing of all.
Whither code review and pairing
Jesse Storimer has great thoughts on code review and pairing. You Should be Doing Formal Code Review:
Let’s face it, developers are often overly confident in their work, and telling them that something is done wrong can be taken as a personal attack. If you get used to letting other people look at, and critique, your code then disidentification becomes a necessity. This also goes vice versa, you need to be able to talk about the code of your peers without worrying about them taking your critiques as a personal attack. The goal here is to ensure that the best code possible makes it into your final release.
I struggle with this so much, on both the giving and receiving side. When I’m reviewing code, I find myself holding back so as not to come off as saying the other person’s code is awful and offensive. On the receiving side, I often get frustrated and feel like a huge impediment has been put in front of my ability to ship code. In reality, neither is the case. Whether I’m the reviewer or the reviewee, the other party is simply trying to get the best code possible into production.
Jesse has further great points: review helps you avoid shortcuts, encourages you to review your own code (my favorite), and makes for better code.
More recently, Jesse’s pointed out that pairing isn’t necessarily a substitute for code review: “…pairing is heavyweight and rare. Code review is lightweight and always.”
In my experience, pairing is great for cornering a problem and figuring out what the path to the solution is. Pairing is great for bringing people into the fold of a new team or project. Review is great for enforcing team standards and identifying wholly missing functionality. Review is sometimes great for finding little bugs, as is pairing.
Neither pairing nor code review is a silver bullet for better software, but when a team applies them well, really awesome things can happen.
Hip-hop for nerds: "Otis"
(Ed. Herein, I attempt to break down a current favorite of mine, “Otis” by Jay-Z and Kanye West, in terms familiar and interesting to nerds, specifically of the nerd and/or comedy persuasion.)
“Otis” is a song arranged and performed by two best pals, Jay-Z and Kanye West. It opens with a sample of Otis Redding (hence the title) singing “Try a little tenderness”. Opening with a sample like this tells us two things:
- Misters Z and West enjoy the music of Mr. Redding enough that they were compelled to include it in their own music.
- The gentlemen are also well connected and affluent, as not just anyone can afford to sample a legend like Redding in their music.
A digression: sampling in hip-hop is one of its key characteristics and is of particular interest to nerds. It is a way that we can connect with the artist through “nerding out”, finding what it is that they respect and listen to. It is also a bit of a recursive structure; “Otis” samples Otis, Otis borrowed from gospel and blues, blues and gospel borrowed from traditional songs, etc. Finally, sampling is a recombinant form; in “Otis”, there is a verbatim sample in the opening bars, but the sample devolves to a looped beat in the middle of the song and a mere sound effect at the end.
As Misters Z and West enter the song proper, the rappers trade verses about their affluence (“New watch alert, Hublot’s / Or the big face Rollie I got two of those”), the recursive (again, nerdy) deception they use to evade the paparazzi (“They ain’t see me ‘cuz I pulled up in my other Benz / Last week I was in my other other Benz”), a conflicting verse about how they would seek the paparazzi out (“Photo shoot fresh, looking like wealth / I’m ‘bout to call the paparazzi on myself”), more boasting of their affluence and skill (“Couture level flow, it’s never going on sale / Luxury rap, the Hermes of verses”), and such.
This song features a video, so we wouldn’t be properly doing a nerd dissection of it if we were to neglect that. It opens with our heroes approaching a Maybach sedan with a saw and a blow torch. Following a “car modification montage”, it appears the doors have been removed from the car and the front end of the car has been placed on the back, and vice versa. Another display of affluence, with perhaps a touch of hipster irony thrown in.
The video follows with various shots of our heroes rapping and driving the Maybach through an abandoned dock or airfield. Our heroes are in the front seat of the car and there are four models in the back seat, one seated precariously atop the one in the middle as our heroes make dangerous-looking maneuvers in the car. At one point it appears they will lose a model through the door-less side of the car. At multiple points, it appears the boobs of the models might break free of their loose-fitting shirts. It should be noted that the appearance of a possible free boob could be considered quite progressive for a hip-hop music video.
The Maybach is, in my opinion, the most difficult-to-interpret signal the song and video send. Are we to understand that Misters Z and West are so affluent they can afford to put down six figures on the purchase and massively impractical modification of a high-end luxury car? Perhaps they had a spare one lying around and felt it would be a better use to destroy it than to let it sit. Or perhaps this was a vehicle for a clever tax deduction?
Mr. West's CPA: You're going to owe a lot of tax on this purchase of your other-other Benz, 'Ye.
Mr. West: What if I were to use it in a music video for the purposes of promoting my upcoming album?
Mr. West's CPA: Well then you could depreciate it at 50% this year and 25% for the next two years, but you're still going to owe a lot.
Mr. West: If I were to take it to a chop shop and have them put the ass-end of the car on the front and turn the doors into wings, could I depreciate it faster?
Mr. West's CPA: Throw some models in the back seat, and it *just might work*!
(Ed. as it turns out, the vehicle was to be auctioned and the proceeds donated to charity)
The other enigma of the video is the presence of comedian Aziz Ansari. Mr. Ansari has documented (Ed. hilariously) his friendship with Mr. West. Thus it is not shocking to see him appear in the video. He appears for only an instant, and his appearance marks the absence of the models in the rest of the video. Perhaps, we are to believe, Mr. Ansari is the pumpkin that the models turn into after some deadline has passed for Misters Z and West.
Despite, or perhaps because of, its mysteries, I find “Otis” a fantastic piece of hip-hop production. The sample is well chosen and deconstructed, the verses are interesting (if mentally unchallenging), and the video is engaging to watch. I would easily rank it amongst the top songs of recent memory, were I one to make lists of top songs.
Stand on the shoulders of others' REST mistakes
Like all API design, putting a REST API on your app is tricky business that most people learn through lots of mistakes. So stand on the shoulders of other people's mistakes! Thus, REST worst practices:
In the REST world, the resource is key, and it’s really tempting to simply look at a Django model and make a direct link between resources and models — one model, one resource. This fails, though, as soon as you need to provide any sort of aggregated resource, and it really fails with highly denormalized models. Think about a Superhero model: a single GET /heros/superman/ ought to return all his vital stats along with a list of related Power objects, a list of his related Friend objects, etc. So the data associated with a resource might actually come out of a bunch of models. Think select_related(), except arbitrary.
Mistaking the app’s internal model for what API users want to work with was the mistake I made on the first API I wrote.
Any big API is going to need to have dedicated servers that just serve API applications: the performance characteristics of large-scale APIs are so different from web apps in general that they almost always require separately-tuned servers.
This is how I prefer to roll my APIs lately. At the least, they should be a separate set of controllers. If you can extract a completely different application, even better.
Represent dat API
Rails is missing an abstraction when it comes to building REST APIs, in my opinion. Requests route through controllers; controllers call models or services to obtain the right objects. And then…you try to bang a JSON object together with an ERB template? It gets awkward quickly.
There’s a lot of experimentation in the wild attempting to figure out what works well here. You can bang out a bunch of presenter classes. You can describe and compose representations. You can go resource oriented.
I came across one yesterday that immediately caught my eye. You could just use lambda to implement a bunch of functions that present, decorate, or map objects from one representation to another. To borrow an example:
# Define a base representation
UrlsPresenter = lambda do
{
'self' => "#{Gauges.api_url}/me",
'gauges' => "#{Gauges.api_url}/gauges",
'clients' => "#{Gauges.api_url}/clients",
}
end
# Compose the base representation with more data
UserPresenter = lambda do |user|
{
'id' => user.id,
'email' => user.email,
'name' => user.name,
'urls' => UrlsPresenter.call
}
end
# Pass an object to the presenter and convert it to JSON
UserPresenter[user].to_json
I love that this one adds no machinery and no state. Input, function, output. With just lambda, you can describe a bunch of transformations and string them together into meaningful and interesting pipelines. I’m experimenting with this now, hoping to find an interesting way that functional programming approaches can make it simpler to build APIs with Rails.
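For instance, a collection endpoint can reuse the single-object presenter; this UsersPresenter is a hypothetical extension of the example above:

# Compose presenters: map the single-user presenter over a collection
UsersPresenter = lambda do |users|
  {
    'users' => users.map { |user| UserPresenter.call(user) },
    'urls'  => UrlsPresenter.call
  }
end

UsersPresenter[User.all].to_json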