When to Class.new

In response to Why metaprogram when you can program?, an astute reader asked for an example of when you would want to use Class.new in Ruby. It’s a rarely needed method, but really fun when faced with a tasteful application. Herein, a couple ways I’ve used it and an example from the wild.

Dead-simple doubles

In my opinion, the most “wholly legitimate” frequent application of Class.new is in test code. It’s a great tool for creating test doubles, fakes, stubs, and mocks without the weight of pulling in a framework. To wit:

TinyFake = Class.new do

  def slow_operation
    "SO FAST"
  end

  def critical_operation
    @critical = true
  end

  def critical_called?
    @critical
  end

end

tiny_fake = TinyFake.new
tiny_fake.slow_operation
tiny_fake.critical_operation
tiny_fake.critical_called? == true

TinyFake functions as a fake and as a mock. We can call a dummy implementation of slow_operation without worrying about the snappiness of our suite. We can verify that a method was called in the verification section of our test method. Normally you would only do one of these things at a time, but this shows how easy it is to roll your own doubles, fakes, stubs, or mocks.

The thing I like about this approach over defining classes inside a test file or class is that it’s all scoped inside the method. We can assign the class to a local and keep the context for each test method small. This approach is also really great for testing mixins and parent classes; define a new class, add the desired functionality, and test to suit.
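
That mixin-testing trick looks something like this (Shoutable and its host class are invented for illustration):

```ruby
# A mixin we want to test in isolation. 'Shoutable' is a made-up example.
module Shoutable
  def shout
    "#{name.upcase}!"
  end
end

# The host class exists only for this test, scoped to a local variable.
host = Class.new do
  include Shoutable

  # Provide the one method the mixin depends on.
  def name
    "gert"
  end
end

host.new.shout # => "GERT!"
```

No constant is defined, so nothing leaks from one test method into the next.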

DSL internals

Rack and Resque are two examples of libraries that expose an API based largely on writing a class with a specific entry point. Rack middlewares are objects with a call method that generates a response based on an environment hash and whatever downstream app or middleware they wrap. Resque expects the classes that work through enqueued jobs to define a perform method.

In practice, putting these methods in a class is the way to go. But, hypothetically, we are way too lazy to type class/end, or perhaps we want to wrap a bunch of standard instrumentation and logging around a simple chunk of code. In that case, we can write ourselves a little shortcut:

module TinyDSL

  def self.performer(&block)
    c = Class.new
    c.class_eval { define_method(:perform, block) }
    c
  end

end

Thingy = TinyDSL.performer { |*args| p args }
Thingy.new.perform("one", 2, :three)

This little DSL gives us a shortcut for defining classes that implement whatever contract is expected of performer objects. From this humble beginning, we could mix in modules to add functionality around the performer, or we could pass a parent class to Class.new to make the generated class inherit from another class.
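
For instance, a hypothetical extension of TinyDSL might take a parent class and hand it to Class.new; the LoggedPerformer parent here is invented:

```ruby
module TinyDSL
  # Hypothetical variant of performer that accepts a superclass.
  def self.performer(parent = Object, &block)
    c = Class.new(parent)
    c.class_eval { define_method(:perform, block) }
    c
  end
end

# An invented parent class that wraps perform with logging.
class LoggedPerformer
  def perform_with_logging(*args)
    puts "performing with #{args.inspect}"
    perform(*args)
  end
end

Thingy = TinyDSL.performer(LoggedPerformer) { |*args| args.length }
Thingy.new.perform_with_logging("one", 2, :three) # logs, then returns 3
```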

That leads us to the sort-of shortcoming of this particular application of Class.new: if the unique function of performer is to wrap a class around a method (for instance, as part of an API exported by another library), why not just subclass or mixin that functionality in the client application? This is the question you have to ask yourself when using Class.new in this way and decide if the metaprogramming is pulling its weight.

How Class.new is used in Sinatra

Sinatra is a little language for writing web applications. The language specifies how HTTP requests are mapped to blocks of Ruby. Originally, you wrote your Sinatra applications like so:

get('/') { [200, {"Content-Type" => "text/plain"}, "Hello, world!"] }

Right before Sinatra 1.0, the team added a cleaner way to build and compose applications as Ruby classes. It looks the same, except it happens inside the scope of a class instead of the global scope:

class SomeApp < Sinatra::Base

    get('/') { [200, {"Content-Type" => "text/plain"}, "Hello, world!"] }

end

It turns out that the former is implemented in terms of the latter. When you use the old, global-level DSL, it creates a new class via Class.new(Sinatra::Base) and then class_evals a block into it to define the routes. Short, clever, effective: the best sort of Class.new.
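
A drastically simplified sketch of that trick (not Sinatra’s actual source; BaseApp and DefaultApp are stand-ins) looks like this:

```ruby
# A toy class-level DSL, standing in for Sinatra::Base.
class BaseApp
  def self.routes
    @routes ||= {}
  end

  def self.get(path, &block)
    routes[path] = block
  end

  def self.call(path)
    routes.fetch(path).call
  end
end

# The "global" DSL: an anonymous subclass created via Class.new,
# with top-level route definitions evaluated into it.
DefaultApp = Class.new(BaseApp)

def get(path, &block)
  DefaultApp.class_eval { get(path, &block) }
end

get("/") { [200, { "Content-Type" => "text/plain" }, "Hello, world!"] }
DefaultApp.call("/") # => [200, {"Content-Type"=>"text/plain"}, "Hello, world!"]
```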


So that’s how you might see Class.new used in the wild. As with any metaprogramming or construct labeled “Advanced (!)”, the main thing to keep in mind, when you use it or when you set upon refactoring an existing usage, is whether it is pulling its conceptual weight. If there’s a simpler way to use it, do that instead.

But sometimes a nail is, in fact, a nail.


The year of change that was 2011

The year is winding down, and it’s time to reflect on the 2011 that was. The year took me into the dark abyss of the American housing market and back out the other end. Somehow I ended up in Austin, a little lighter in the pocket-book to show for it. It saw the most exciting, and ultimate, year of that which was Gowalla. I got in pretty good shape, and then got into pretty mediocre shape. I read a lot, coded a lot, wrote a bit, and learned a lot about everything. ‘Twas a tough year, but all’s well that ends well, or so they say.

Things I read

The most interesting fiction I read this year was Neuromancer. This one was probably more jaw-dropping before The Matrix came out, but it was still interesting. That said, it reads like the Cliffs Notes version of Neal Stephenson.

The best non-fiction I read was Gödel’s Proof. It’s a short, clear explanation of the ideas behind his incompleteness proof. Even if you’re mediocre at math and proofs, like myself, this one will stick.

The best technical book I consumed this year was Smalltalk Best Practice Patterns. First off, I love Kent Beck’s concise but powerful writing style. Second off, this book is like discovering that someone wrote down a really good theory of the elements of software decades ago and no one told you about it. Third, you should get a copy of this book; look for the used ones.

Things I made this year

I piled a lot of the things I learned about infrastructure and shipping software at Gowalla into Mixing a Persistence Cocktail. How to think about scaling, how to ship incrementally, overcoming THE FEAR. It’s all there.

I spent a large chunk of my time at work on Chronologic. I presented on it too. Then I open sourced it. We deployed it at Gowalla and it held up, but not without some rough spots. I presented on those too.

I did a lot of open source tinkering this year. A lot of it is half-baked, but at least it’s out there. That was a major personal goal for the year, so I’m glad I at least stuck my neck out there, even if I’m not rolling in kudos. Yet!

Things I wrote this year

Modulo a summer lull, I ended up doing a good bit of writing this year. The crowd favorites were Why metaprogram when you can program?, The Current and Future Ruby Platform, Cassandra at Gowalla, and Your Frienemy, the ORM. My personal favorites were The ear is connected to the brain, Post-hoc career advice for twenty-something Adam, and How to listen to Stravinsky’s Rite of Spring.

Of course, working at Gowalla this year was quite the ride. I wrote about that too, sometimes rather obliquely. Relentless shipping, The pitfalls of growing a team, The guy doing the typing makes the call, Skip the hyperbole, Sleep is the best, and Don’t complain, make things better were all borne of things I learned over the course of the year.


If I had to pithily summarize the year, I’d tie it together under change. Change is good, challenging, frustrating, and inevitable. Better to change than not, though!


Four essential topics of 2011, in charts

The Year In 4 Charts: Planet Money does an excellent job collecting four economic charts (themselves chosen from three collections of best-of charts). I’m a dilettante as far as economics goes, but these charts do a great job of rolling up what seemed to have been the essential stories of the year.


A picture can nullify a thousand talking points, no?


Making a little musical thing

After software development, music is probably the thing I know the most about. My brain is full of history, trivia, and a modest bit of practical knowledge on how to read notation and make music come out. That said, I haven’t really practiced music in several years. I’ve been busy nerding out on other things, and I’ve grown a bit lazy. Too lazy to find people to play with, too lazy for scales, too lazy to even tune a stringed instrument. Very, very lazy.

Long story short, I’ve been wanting to get back into music lately, but I want to learn something new. Something entirely mysterious to me. Given my recent fascination with hip-hop, I’m eager to try my hand at making the beats that form the musical basis of the form.

There are a lot of priors to cover (tinkering with various sequencers, drum machines, and synthesizers; steeping myself in sample culture; listening to the actual music and understanding its history), but I just made a short, mediocre little beat and put it on the internet. Herein, I reflect on making that little musical thing:

  • I’m sure that, if I get serious about this, I’ll need real software like Ableton or Logic. But for my tinkering, it turns out GarageBand is sufficient. The included software instruments aren’t amazing or even idiomatic samples (no TR808, no “Apache” break included), but with a little bit of tinkering, they produce results.
  • Laying down a drum track that is little more than a fancy click track helps to get started. GarageBand has a handy feature where you can define a number of bars as a loop and then record multiple takes, review them, and discard the takes you don’t want.
  • What an app lacks in samples you can make up in effects. Throwing a heavy dose of echo and a ridiculous helping of reverb made an otherwise pedestrian drum track way more interesting.
  • I didn’t go into this with anything in my head that I wanted to make real. For the drum track, I ended up with a pretty typical beat. A little quantization made it sound better and more interesting than it really is. This process, manual input with some computer-assisted tweaking, produced way better results than the iOS drum machines I’ve used in the past.
  • Tapping out the bass-line took a little more time than the drums. I didn’t have anything “standard” in my head, so I doodled a bit. This is where the “takes” gizmo in GarageBand came in really handy. Record a bunch of things, decide which one is most interesting, clean it up a little, throw an effect or two on it to make it more interesting, on to the next track.
  • In retrospect, lots of effects is maybe a crutch. I don’t have enough taste yet to tell.
  • With the drums and bass down, it’s time to adorn the track with a melody or interesting hit for effect. I added one subtle thing, but couldn’t think of anything I liked that was worth making prominent. If I were actually trying to use this beat for something, I’d keep digging. But for my first or second beat, it’s not a big deal.

I wanted to jot down my thoughts because I’d like to write more about making and understanding music, but also because I keep meaning to write down what I find challenging and interesting as I start from a “beginner’s mind” in some craft or skill. And so I did.

You’re six hundred words into this thing now, so I’ll reward you, if we could call it a reward, with “An Beat”.


Crafting lightsabers, uptime the systems, a little Clojure

Herein, some great technical writings from the past week or two.

Crafting your editor lightsaber

Vim: revisited, on how to approach Vim and build your very own config from first principles. My personal take on editor/shell configurations is that it’s way better to have someone else maintain them. Find something like Janus or oh-my-zsh, tweak the things it includes to work for you, and get back to doing what you do. That said, I’m increasingly tempted to craft my own config, if only to promote the fullness and shine of my neck beard.

Uptime all the systems

Making the Netflix API More Resilient lays out the system of circuit breakers, dashboards, and automatons Netflix uses to proactively maintain API reliability in the face of external failures. Great ideas for anyone maintaining a service that needs to stay online.

List All of the Riak Keys, on the trickiness of SELECT * FROM all_the_things-style queries in Riak, or any distributed database, really. The short story is that these kinds of queries are impractical and not something you can do in production. The longer story is that there are ways to work around it with clever use of indexes and data structures. Make sure you check out the Riak Handbook from the same author.

A little bit of Clojure

Introducing Knockbox introduces a Clojure library for dealing with conflict resolution in data stored in distributed databases like Riak. If you’re working with any database that leaves you wondering what to do when two clients get in a race condition, these are the droids you’re looking for. I would have paid pretty good money to have known about this a few months ago.

Clojure’s Mini-languages is a great teaser on Clojure if, like me, you’ve tinkered with it before but are coming back to it. This is particularly useful if you’ve seen some Lisp or Scheme before, but are slightly confused by what’s going on with all the non-paren characters that appear in your typical Clojure program. Having taken a recent dive into the JVM ecosystem, I have to say there’s a lot to like in Clojure. If your brain understands static types but thinks better in dynamic types (mine does), give this a look.


I occasionally post links with shorter comments, if you’d like a slightly more-frequent dose of what you just read.


A short routine for making awesome things

I’ve said all this stuff before, but I came across some nice writing that highlights people doing it. I’m repeating it because it’s important stuff.

Step one, get on that grind. Making things is about consistently making progress. Consistently making progress is about showing up every day and moving the ball forward. Progress can take different forms, and sometimes won’t even feel like progress at all. The crux of the biscuit is to make the time to do the things that need doing in order to produce the thing you’re excited about making.

Questlove, band leader of The Roots and pretty much my favorite music nerd of all time, spends most of his waking hours thinking about, rehearsing, or performing his music. A typical day for him is 11 AM - 7 PM at 30 Rock rehearsing for Late Night with Jimmy Fallon or writing new music, 8 PM - 2 AM spent performing or DJing, and late nights winding down by studying their performance from that day’s show or doing some crate digging (cool kid speak for listening to obscure stuff in your record collection).

Step two, simplify. You just can’t devote the mental energy to awesome stuff if your brain is going in multiple directions. Close as many of the social medias, chats, emails, and alarm klaxons as possible. If you’re an organized person, clear your workspace; if you’re a clutter person, just roll with your clutter[1]. And, of course, think critically about what you’re consuming and using. If a tool, book, TV show, or application isn’t pulling its weight helping you do or think awesome things, show it the door.

Matt Gemmell on simplicity:

More importantly, I also believe in simplifying my life, offline and online, to let me focus on doing what I want to do - whether that’s writing code, writing words, or helping other people with their work. To do that, I have to reduce the ambient noise.

Step three, stop. Think. You can’t grind and simplify all the time. Your brain needs room to breathe. If you ever wondered why you do your best thinking and problem solving in your dreams or in the shower, I’ll tell you why: those places have no computers, TVs, or internet. Every week, you need to get away from your computers, music, and distractors. Go someplace novel and interesting; a coffeeshop, a park, a busy boulevard, a quiet trail, whatever makes your brain happy. Take a notebook or whatever you can physically think on. Now use that time to take apart what you’re working on, think about how it works, and figure out how to make it work better.

Jacob Gorban on thinking time:

In this state, we may become so reactive to the tasks that need to get done that we just don’t stop, take a step back and reflect on the whole situation. We may just forget to think deeply, strategically about the business and even about the work tasks themselves.

Your brain will thank you for the chance to stop and think. You’ll feel better when you remove the extra crap that’s distracting you. You’ll glow inside when you put the time in every day to make things and end up with something awesome.

[1] Sorry, I’m not a clutter person, I can’t help you here.


Quality in the inner loop

Quality in Craftsmanship:

In software, this means that every piece of code and UI matters on its own, as it’s being crafted. Quality takes on more of a verb-like nature under this conception: to create quality is to care deeply about each bit of creation as it is added and to strive to improve one’s ability to translate that care into lasting skills and appreciable results.

When I wrote on “quality” a few months ago, I was thinking of it as an attribute one would use to describe the outer loop of a project. Do a bunch of work, locate areas that need more quality, put a few touches on those areas or note improvements for the next iteration, and ship it.

But what Brad is describing is putting quality into the inner loop. Work attains “the quality” as it is created, rather than as a secondary editing or review step. Little is done without considering its quality.

I’m extrapolating a bit from the letter of what Brad has written here, but that’s because I’ve been lucky enough to work with him. Indeed Brad’s work is of consistently high quality. Hopefully he’ll write more specifics about how quality code is created in the future (hint, Brad), and how much it relates to Christopher Alexander’s “quality without a name”.


Why metaprogram when you can program?

When I sought to learn Ruby, it was for three reasons. I’d heard of this cool thing called blocks, and that they had a lot of great use cases. I read there was this thing called metaprogramming and it was easier and more practical than learning Lisp. Plus, I knew several smart, nice people who were doing Ruby so it was probably a good thing to pay attention to. As it turns out, I will never go back to a language without the first and last. I can’t live without blocks, and I can’t live without smart, kind, fun people.

Metaprogramming requires a little more nuance. I understand metaprogramming well enough to get clever with it, and I understand it well enough to mostly understand what other people’s metaprogramming does. I still struggle with the nomenclature (eigenclass, metaclass, class Class?) and I often fall back to trial and error or brute-force tinkering to get things working.

On the other hand, I think I’ve come far enough that I can start to smell out when metaprogramming is done in good taste. See, every language has a feature that is terribly abused because it’s the cool, clever thing in the language: operator overloading in Scala, monadic everything in Haskell, XML in Java, and metaprogramming in Ruby.

Adam’s Handy Guide to Metaprogramming

This guide won’t teach you how to metaprogram, but it will teach you when to metaprogram.

I want you to think twice the next time you reach for the metaprogramming hammer. It’s a great tool for building developer-friendly APIs, little languages, and using code as data. But often, it’s a step too far. Normal, everyday programming will do you just fine.

There are two principles at work here.

Don’t metaprogram when you can just program

Exhaust all your tricks before you reach for metaprogramming. Use Ruby’s mixins and method delegation to compose a class. Dip into your Gang of Four book and see if there isn’t a pattern that solves your problem.

Lots of metaprogramming is in support of callback-oriented programming. Think “before”/”after”/”around” hooks. You can do this by defining extension points in the public API for your class and mixing other modules into the class that implement logic around those public methods.
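
One way to sketch that without metaprogramming (Job and Timed are invented names): the extension point is an ordinary public method, and a mixed-in module wraps it with super:

```ruby
class Job
  # The public extension point.
  def perform
    "did the work"
  end
end

# An "around" hook implemented as a plain module; no hooks machinery.
module Timed
  def perform
    start = Time.now
    result = super # call down to the wrapped implementation
    @elapsed = Time.now - start
    result
  end

  attr_reader :elapsed
end

class TimedJob < Job
  include Timed # Timed sits between TimedJob and Job in the lookup chain
end

job = TimedJob.new
job.perform # => "did the work"
job.elapsed # => a small Float
```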

Another common form is configuring an object or framework. Think about things that declare models, connections, or queries. Use method chaining to build or configure an object that acts as a parameter list for another method or object.
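
A sketch of that idea, with invented names: each chained method mutates and returns self, and the finished object acts as a parameter list:

```ruby
# A chainable configuration object; all names here are made up.
class QueryConfig
  attr_reader :options

  def initialize
    @options = {}
  end

  def limit(n)
    @options[:limit] = n
    self # return self so calls can chain
  end

  def order(field)
    @options[:order] = field
    self
  end
end

config = QueryConfig.new.limit(10).order(:created_at)
config.options # => {:limit=>10, :order=>:created_at}
```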

Use the weakest form of metaprogramming possible

Once you’ve exhausted your patterns and static Ruby tricks, it’s time to play a game: how little metaprogramming can you do and get the job done?

Various forms of metaprogramming are weaker or stronger than others. The weaker ones are harder to screw up and less likely to require a deep understanding of Ruby. The stronger ones have trade-offs that require careful application and possibly need a lot of explanation to newcomers to your codebase.

Now, I will present to you a partial ordering of metaprogramming forms, in order of weak to strong. We can bicker on their specific placement, but I’m pretty certain that the first one is far better to use frequently than the last.

  • Blocks - I hesitate to call this a form of metaprogramming. But, it is sometimes abused, and it is sometimes smart to use blocks instead of tricks further down this list. That said, if you find yourself needing more than one block parameter to a method, you should consider a parameter object that holds those blocks instead.
  • Dynamic message send on a static object - You set a symbol on an object and later it will send that symbol as a method selector to an object that doesn’t change at runtime. This is weak because the only thing that varies is the method that gets called. On the other hand, you could have just used a block.
  • Dynamic message send on a dynamic object - You set a symbol and a receiver object, at some point they are combined into a method call. This is stronger than the previous form because you’ve got two points of variability, which means two things to hunt down and two more things to hold in your brain.
  • Class.new - I love this method so much. But, it’s a source of potential hurt when trying to understand a new piece of code. Classes magically poofing into existence at runtime makes code harder to read and navigate with simple tools. At the very least, have the civility to assign classes created this way to a constant so they feel like a normal class. Downsides, err, aside, I love this method so much, having it around is way better than not.
  • define_method - I like this method a lot too. Again, it’s way better to have it around than not. It’s got two modes of use, one gnarly and one not-so-bad. If you look at how it’s used in Rails, you’ll see a lot of instances where it’s passed a string of code, sometimes with interpolations inside said string. This is the gnarly form; unfortunately, it’s also faster on MRI and maybe other runtimes. There is another form, where you pass a block to define_method and the block becomes the body of the newly defined method. This one is far easier to read. Don’t even ask me the differences in how variables are bound in that block; Evan Phoenix and Wilson Bilkovich tried to explain it to me once and I just stared at them like a yokel.
  • class_eval - We’re getting into the big guns of metaprogramming now. The trick with class_eval is that it’s tricky to understand exactly which class (the metaclass or the class itself) the parameters to class_eval apply to. The upside is that’s mostly a write-time problem. It’s easy to look at code that uses class_eval and figure out what it intends to do. Just don’t put that stuff in front of me in an interview and expect me to tell you where the methods land without typing the damn thing into IRB.
  • instance_eval - Same tricks as class_eval. This may have simpler semantics, but I always find myself falling back to tinkering with IRB, your mileage may vary. The one really tricky thing you can do with instance_eval (and the class <<some_obj trick) is put methods on specific instances of an object. Another thing that’s better to have around than not, but always gives me pause when I see it or think I should use it.
  • method_missing - Behold, the easiest form of metaprogramming to grasp and thus the most widely abused. Don’t feel like typing out methods to delegate or want to build an API that’s easy to use but impossible to document? method_missing that stuff! Builder objects are a legitimate use of method_missing. Everything else requires deep zen to justify. Remember: friends don’t let friends write objects that indiscriminately swallow messages.
  • eval - You almost certainly don’t need this; almost everything else is better off as a weaker form of metaprogramming. If I see this, I expect that you’re doing something really, really clever and therefore have a well-written justification and a note from your parents.
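
To make a couple of the weaker forms above concrete, here are small sketches (all names invented) of a dynamic send on a static object and of the block form of define_method:

```ruby
# Dynamic message send on a static object: only the selector varies.
class Notifier
  def initialize(channel)
    @channel = channel # :email or :sms, chosen at runtime
  end

  def notify(message)
    public_send(@channel, message)
  end

  def email(message)
    "emailing: #{message}"
  end

  def sms(message)
    "texting: #{message}"
  end
end

Notifier.new(:sms).notify("hi") # => "texting: hi"

# define_method with a block: the block becomes the method body.
class Settings
  %i[host port].each do |name|
    define_method(name) { @config.fetch(name) }
  end

  def initialize(config)
    @config = config
  end
end

Settings.new(host: "localhost", port: 6379).port # => 6379
```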

Bonus principle!

At some point you will accidentally type “meatprogram” instead of “metaprogram”. Cherish that moment!


It’s OK to write a few more lines of code if they’re simple, concise, and easy to test. Use delegation, decorators, adapters, etc. before you metaprogram. Exhaust your GoF tricks. Read up on SOLID principles and understand how they change how you program and give you much of the flexibility that metaprogramming provides without all the trickery. When you do resort to trickery, use the simplest trickery you can. Document it, test it, and have someone review it.
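
For instance, the stdlib’s SimpleDelegator gets you decorators with no metaprogramming of your own (Order and TaxedOrder are invented examples):

```ruby
require "delegate"

class Order
  def total
    100
  end
end

# A decorator: unhandled messages pass through to the wrapped object,
# and super inside an override falls through to it as well.
class TaxedOrder < SimpleDelegator
  def total
    (super * 1.08).round
  end
end

TaxedOrder.new(Order.new).total # => 108
```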

When it comes to metaprogramming, it’s not about how much of the language you use. It’s about what the next person to see the code whispers under their breath. Don’t let your present self make future enemies.


Modern Von Neumann machines, how do they work?

Modern Microprocessors - A 90 Minute Guide!. If you didn't find a peculiar joy in computer architecture classes or the canonical tomes on the topic by Patterson and Hennessy, this is the thing for you. It's a great dive into how modern processors work, what the design challenges and trade-offs are, and what you need to know as a software developer.

Totally unrelated: when I interned at Texas Instruments, my last project was writing tests for a pre-silicon DSP. Because there were no test devices, I had to run my code against a simulator. It simulated several million gates of logic and output the result of my program as the wires that come out of the processor registers. This was fun, again in a way peculiar to my interest, at the time, in being a hardware designer/driver hacker. Let me tell you, every debugging tool you will ever see is better than inspecting hex values coming out of registers.

Anyway, these programs ran super slow; each run took about an hour. One day I did the math and figured out the simulator was basically running at 100 Hz. Not kilohertz or megahertz. One hundred hertz. So, yeah. In the snow, uphill, both ways.


Changing legacy code, made less painful

Rescuing Legacy Code by Extracting Pure Functions. Come across strange, pre-existing code. Decide you need to change it. Follow the pattern described herein. Apply TDD afterwards. I so wish someone had shown me this technique years and years ago. Also, Composed Method (from Smalltalk Best Practice Patterns) is so great, I can't even put it into words.


Cassandra at Gowalla

Over the past year, I’ve done a lot of work making Cassandra part of Gowalla’s multi-prong database strategy. I recently spoke at Austin on Rails on this topic, doing a sort of retrospective on our adoption of Cassandra and what I learned in the process. You can check out the slide deck, or if you’re a database nerd like me, dig into the really nerdy details below.

Why does Gowalla use Cassandra?

We have a few motivations for using Cassandra at Gowalla. First off, it’s become our database of choice for applications with relatively fixed query patterns that, for us to succeed, need to handle a rapidly growing dataset. Cassandra’s read and write paths are optimized for these kinds of applications. It’s good at keeping the hot subset of a database in memory while keeping queries that require hitting disk pretty quick too.

Cassandra is also great for time-oriented applications. Any time we need to fetch data based primarily on some sort of timestamp, Cassandra is a great fit. It’s a bit unique in this regard, and that’s one of the main reasons I’m so interested in Cassandra.

Cassandra is a Dynamo-style database, which yields some nice operational aspects. If a node goes down overnight, we don’t take an availability hit; the ops people can sleep through the night and fix it later. The Cassandra developers have also done a great job of eliminating the cases where one needs to restart an entire Cassandra cluster at once, resulting in downtime.

When does Gowalla not use Cassandra?

I don’t think Cassandra is all that great for iterating on prototypes. When you’re not sure what your data or queries will end up looking like, it’s hard to build a schema that works well with Cassandra. You’re also unlikely to need the strengths that a distributed, column-oriented database offers at that stage. Plus, there aren’t any options for outsourced Cassandra right now, and early-stage applications/businesses rarely want to devote expertise to hosting a database.

Applications that don’t grow data quickly, or that can fit their entire dataset in memory on a pair of machines, don’t play to Cassandra’s strengths either. Given that you can get a machine with a few dozen gigabytes of memory for the cost of rent in the valley, sometimes it does pay out to scale vertically instead of horizontally as Cassandra encourages.

Cassandra applications at Gowalla

We have a handful of applications going that use Cassandra:

  • Audit: Stores ActiveRecord change data to Cassandra. This was our training-wheels trial project where we experimented with Cassandra to see if it was useful for us. It was incrementally deployed using rollout and degrade. Worked well, so we proceeded.
  • Chronologic: This is an activity feed service, storing the events and timelines in Cassandra. It started off life as a secondary index cache, but became a system of record in our latest release. It works great operationally, but the query/access model didn’t always jibe with how web developers expected to access data.
  • Active stories: We store “joinability” data for users at a spot so we can pre-merge stories and prevent proliferation of a bunch of boring, one-person stories. This was built by Brad Fults and integrated in one pull request a few weeks before launch. The nice thing about this one was that it was able to take advantage of Cassandra’s column expiration and fit really nicely into Cassandra’s data model.
  • Social graph caches: We store friend data from other systems so we can quickly list/suggest friends when they connect their Gowalla profile to Facebook or Twitter. This started life on Redis, but the data was growing too quickly. We decoupled it from Redis and wrote a Cassandra backend over a few days. We incrementally deployed it and got Redis out of the picture within two weeks. That was pretty cool.

What worked?

  • Stable at launch. A couple weeks before launch, I switched to “devops” mode. Along with Adam McManus, our ops guy, we focused on tuning Cassandra for better read performance and to resolve stability problems. We ended up bringing in a DataStax consultant to help us verify we were doing the right things with Cassandra. The result of this was that, at launch, our cluster held up well and we didn’t have any Cassandra-related problems.
  • Easy to tune. I found Cassandra interesting and easy to tune. There is a little bit of upfront research in figuring out exactly what the knobs mean and what the reporting tools are saying. Once I figured that out, it was easy to iteratively tweak things and see if they were having a positive effect on the performance of our cluster.
  • Time-series or semi-granular data. Of the databases I’ve tinkered with, Cassandra stands out in terms of modeling time-related data. If an application is going to pull data in time-order most of the time, Cassandra is a really great place to start. I also like the column-oriented data model. It’s great if you mostly need a key-value store, but occasionally need a key-key-value store.
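
The “key-key-value” idea can be pictured as a two-level hash: a row key points at a set of named columns, sorted by column name (which is why time-ordered data fits so well). A plain-Ruby sketch of the model, not the actual Cassandra API; the names are invented:

```ruby
# Cassandra's column-oriented model, sketched as nested Ruby hashes.
# Outer key: row key. Inner keys: column names, often timestamps for
# time-series data, since columns are stored sorted by name.
timelines = {
  "user:jane" => {
    "2011-08-01T09:00" => "checked in at Moon Tower",
    "2011-08-01T12:30" => "checked in at Franklin BBQ"
  }
}

# Key-value access: fetch a whole row.
row = timelines["user:jane"]

# Key-key-value access: fetch one column out of a row.
row["2011-08-01T09:00"]  # => "checked in at Moon Tower"
```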

What would we do differently next time?

  • Developer localhost setups. We started using Cassandra in the 0.6 release, when it was a giant pain to set up locally (XML configs). It’s better now, but I should have put more energy into helping the other developers on our team get Cassandra up and working properly. If I were to do it again, I’d probably look into leaning on the install scripts the cassandra gem includes, rather than Homebrew and a myriad of scripts to hack the Cassandra config.
  • Eventual consistency and magic database voodoo. Cassandra does not work like MySQL or Redis. It has different design constraints and a relatively unique approach to those constraints. In advocating and explaining Cassandra, I think I pitched it too much as a database nerd and not enough as “here’s a great tool that can help us solve some problems”. I hope that CQL makes it easier to put Cassandra in front of non-database nerds in terms that they can easily relate to and immediately find productivity.
  • Rigid query model. Once we got several million rows of data into Cassandra, we found it difficult to quickly change how we represented that data. It became a game of “how can we incrementally rejigger this data structure to have these other properties we just figured out we want?” I’m not sure that’s a game you can easily win at with Cassandra. I’d love to read more about building evolvable data structures in Cassandra and see how people are dealing with high-volume, evolving data.

Things we’ll try differently next time

  • More like a hash, less like a database. Having developed a database-like thing, I have come to the conclusion that developers really don’t like them very much. ActiveRecord was hugely successful because it was so much more effective than anything previous to it that tried to make databases just go away. The closer a database is to one of the native data structures in the host language, the better. If it’s not a native data structure, it should be something they can create in a REPL and then say “magically save this for me!”
  • Better tools and automation. That said, every abstraction leaks. Once it does, developers want simple and useful tools that let them figure out what’s going on, what the data really looks like, tinker with it, and get back to their abstracted world as quickly as possible. This starts with tools for setting up the database, continues through interacting with it (database REPL), and for operating it (logging, introspection, etc.) Cassandra does pretty well with these tools, but they’re still a bit nerdy.
  • More indexes. We didn’t design our applications to use secondary indexes (a great feature) because they didn’t exist just yet. I should have spent more time integrating this into the design of our services. We got bit a lot towards the end of our release cycle because we were building all of our indexes in the application and hadn’t designed for reverse indexes. We also designed a rather coarse schema, which further complicated ad-hoc querying, which is another thing non-database-nerds love.
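
To make the “more like a hash” point concrete, here is a toy sketch of that idea: an object that looks like a hash to the developer and “magically saves” every write. The MagicHash name and the Marshal-to-disk scheme are invented for illustration; a real tool would need concurrency control, error handling, and much more.

```ruby
require 'tmpdir'

# A toy "magically save this for me" hash. Looks like a Hash to the
# developer; every write is persisted to disk with Marshal.
class MagicHash
  def initialize(path)
    @path = path
    @data = File.exist?(path) ? Marshal.load(File.binread(path)) : {}
  end

  def [](key)
    @data[key]
  end

  def []=(key, value)
    @data[key] = value
    File.binwrite(@path, Marshal.dump(@data))
  end
end

path = File.join(Dir.mktmpdir, "users.db")
db = MagicHash.new(path)
db["user:1"] = { :name => "Shauna" }

# A fresh instance fetches {:name => "Shauna"} back from disk.
MagicHash.new(path)["user:1"]
```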

What’s that mean for me?

Cassandra has a lot of strengths. Once you get to a scale where you’re running data through a replicated database setup and some kind of key-value database or cache, it makes sense to start thinking about Cassandra. There are a lot of things you can do with it, and it lets you cheat in interesting ways. Take some extra time to think about the data model you build and how you’ll change it in the future. Like anything else, build tools for yourself to automate the things you do repeatedly.

Don’t use it because you read a blog post about it. Use it because it fits your application and your team is excited about using it.


Sleep is the best

Sleep deprivation is not a badge of honor:

This is why I’ve always tried to get about 8 1/2 hours of sleep. That seems to be the best way for me to get access to peak mental performance. You might well require less (or more), but to think you can do with 6 hours or less is probably an illusion. Worse, it’s an illusion you’ll have a hard time bursting. Sleep-deprived people often vastly underestimate the impact on their abilities, studies have shown.

Like David, I put a high value on sleep. I go out of my way to make sure I get my seven hours. If I don’t, my brain gets messy and less useful, plus the attendant stubbornness and crankiness of being short on sleep.

Figure out how much sleep you need every night and make sure you get it. You’ll do much better work for it.

Also: naps are fantastic.


Pass interference: can't live with it, can't live without it.

Bill Barnwell on revamping defensive penalties. Pass interference is tough business in the NFL. It's one of the easiest calls to get wrong on the field (besides the myriad of missed holding calls), but the easiest to fix with a slow-motion camera. It's too easy for both sides to game it as well. There are some good ideas in here, but I think just making pass interference calls and non-calls reviewable is a simple first step.


Growing a culture

I previously noted that adding people to a team is tricky, doing so quickly doubly so. A nice discussion popped up around how to do so effectively. So, to cover the other side of the team-growing coin, here are some ideas on what helps when adding people to your team:

  • When you integrate people, do it purposefully and deliberately. (Jeff Casimir)
  • Grow the team slowly. Pair the new person with a mentor. Task the new person with a cultural, process, or technological change that the team agrees upon as part of the recruiting and hiring process. (Myself)
  • Pairing can help. Jeff mentioned pairing in the context of teachers. If you’re already doing pairing, I bet it helps a lot of these team growth issues.
  • Document your culture (Jeff), present said document as new people join the team. Even better, document your culture online as part of your team’s outward face and recruiting efforts (Brian Doll). Works great for GitHub.
  • Announce the hire with an interview-style announcement rather than a short bio (Brian Doll).
  • Go over the top when celebrating bringing on a new team member (Jeff).
  • Jeff noted that in education, they have the advantage that all new people start at the same time in August. You can use this to batch celebrate/integrate new team members.
  • Never stop the process of integrating your new team members (Brian). When you stop, people notice. As the saying goes, if it hurts, do it more.
  • Job titles can be a cancer (Brian). If you’re constantly bringing on “senior developers”, what is there to celebrate?
  • The E-Myth Revisited is mostly about entrepreneurship (Jeff), but it devotes a lot of space to focusing on roles instead of jobs. This makes it easier to bring people on with less focus on titles and more on what they will actually do. Brian notes that roles are great for lowering your bus number and encouraging team ownership of the product.

Culture is hard

Looking at all of these ideas, it strikes me that maybe it’s not adding to a culture that’s tricky; maybe it’s defining and maintaining a culture that’s really challenging. I often find it difficult to draw the line between the personalities on a team and the explicit and implicit culture that is the aggregate of those personalities and their actions. Getting a bunch of people on the same page and deciding what the culture is would prove challenging, as would any activity involving a group of people.

Subtract the notion of adding new people to a team, and the above ideas are all about defining and maintaining a culture. That’s something worth thinking about as you start a team. What do you value, how do you present yourself, how do you get stuff done? Once those questions are answered, you have a starting point for your culture. Then it’s a matter of “gardening” that culture so that everyone, new team members and veterans alike, learns it and evolves it.


Thanks to Brian and Jeff for a great conversation; they both get internet gold stars. I’m just the guy who curated it and typed it all in later.


The pitfalls of growing a team

Premature Ramp-up, Martin Fowler on the perils of building up a development team too quickly: loss of code cohesion, breakdown of communication, plus the business costs of on-boarding. The problem I'm more concerned with, when growing a software team, is maintaining culture.

Adding a new person to a team is a process of integrating the new person’s unique good qualities to the team’s existing culture. It’s critical to use their prior experiences to clean up the sharp edges of the existing team practice without accidentally integrating new sharp edges. It’s a careful balancing act of taking advantage of the beginner’s mind and cultural indoctrination. Both sides have to give and take.

If you grow too quickly, it’s very easy for this balancing act to get, well, out of balance. The new people are only indoctrinated and the team doesn’t learn, or the new people don’t understand the team and go about doing whatever they felt was successful at their previous gig.

It’s common to focus on the difficulty of recruiting a team, but finding a culture match and growing that culture is equally, if not more, challenging.


A food/software change metaphor

Are You Changing the Menu or the Food? Incremental change, the food metaphor edition. It's about software and startups. But food too. Think "software" when he says "food". Just read it, OK?


How do you devop?

I’m a sucker for a good portmanteau. “Devops” is a precise, but not particularly rewarding, concatenation of “development” and “operations”. What it lacks in sonic fun, it makes up for in describing something that’s actually going on.

For example, the tools that developers build for themselves are taking cues from the scripts that the operations team cobbles together to automate their work. In the bad old days, you manually configured a server after it was racked up. Then there was a specific load-out of packages, a human-readable script to work from, a disk image to restore from, or maybe even a shell script to execute. Today, you can take your pick from configuration management systems that make the bootstrap and maintenance of large numbers of servers a programmatic matter.
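
The “programmatic matter” bit is the interesting part: instead of a runbook, you declare the desired state of a server in code. A toy, Chef-flavored sketch; the Recipe class is invented for illustration, and real tools like Chef and Puppet do vastly more:

```ruby
# A toy configuration-management DSL: declare packages, then "converge".
class Recipe
  def initialize(&block)
    @packages = []
    instance_eval(&block)  # run the declaration block in this instance
  end

  def package(name)
    @packages << name
  end

  # A real tool would inspect the system and install what's missing;
  # here we just report the actions we would take.
  def converge
    @packages.map { |name| "install #{name}" }
  end
end

recipe = Recipe.new do
  package "nginx"
  package "postgresql"
end

recipe.converge  # => ["install nginx", "install postgresql"]
```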

It’s not just bringing up new servers that developers are dabbling in. Increasingly, I run across developers who are really, really interested in logging everything, using operational metrics to guide their coding work, and running the deploys themselves. In some teams, the days of “developers versus operations” and throwing bits over walls are over. This is a good thing.

You devop and don’t know it

Even if you don’t know Chef or Puppet, even if you never once ssh into a database server, even if you never use the #devop hashtag or attend a like-marketed conference, you’re probably dabbling in operations. You, friend, are so devops, and you don’t even know it.

You use a tool or web app to look at the request rate of your application or the latency of specific URLs and you use that information to decide where to focus your performance efforts. You watch the errors and exceptions that your app encounters and valiantly fix them. Browsers request images, scripts, and stylesheets from your site and you work to make sure they load quickly, the site draws as soon as possible, and users from diverse continents are well served. You run deploys yourself, you build an admin backend for your app, you automate the processes needed to keep the business going. You consult with operations about what infrastructure systems are working well, what could improve, and what tools might serve everyone better.

All of these things skirt the line between development and operations. They’re signs of diversifying your skillset, better helping the team, and taking pride in every aspect of your work. You can call it devops if you want, but I hope you’ll consider it just another part of making awesome stuff.


The Current and Future Ruby Platform

Here we are, in the waning months of 2011. Ruby and its ecosystem are a bit of an incumbent these days. It’s a really great language for a few domains. It’s got the legs to become a useful language for a couple of other domains. There are a few domains where I wouldn’t recommend using it at all.

Ruby’s strong suit

Ruby started off as a strong scripting language. The first thing that attracted non-tinkerers was a language with the ease-of-hacking found in Perl with the nice object-oriented features found in Java or Python. If you see code that uses special globals like $! and $: or weird constants like ARGF and __DATA__ and it mostly lacks classes and methods, you’re looking at old-fashioned scripting code.
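
A tiny sketch of that old-fashioned style, for flavor; no classes or methods, just special globals ($: is the load path, $! holds the last raised exception):

```ruby
# Old-school Ruby scripting: special globals, no classes or methods.
$: << File.expand_path("lib")       # $: is the load path

message = nil
begin
  Integer("not a number")           # raises ArgumentError
rescue ArgumentError
  message = $!.message              # $! holds the current exception
  warn "oops: #{message}"           # print the complaint to stderr
end
```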

As Ruby grew, it got a niftier way of doing object-oriented programming. Developers started to appreciate it in the same places they might use Java or Smalltalk. A few of the bravest started building production systems using a nice object-oriented language without the drawbacks of a high-maintenance type system (Java) or the isolation of an image (Smalltalk). This code ends up looking a little like someone poking Ruby with their Java brain; they’re not using the language to its fullest, but they’re not abusing it either.

Out of the OO crowd exploded the ecosystem of web frameworks. There were a few contenders for a while, but then Rails came and sucked the air out of the competitive fire. For better or worse, nearly everyone doing web stuff with Ruby was doing Rails for a few years. This yielded buzz, lots of hype, some fallings out, some useful forward progress in the idioms of software development, and a handful of really great businesses. At this point in Ruby’s life, its interesting properties (metaprogramming, blocks, open classes) were stretched, broken, and put back together with a note pointing out that some ideas are too clever for practical use.
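
A minimal taste of one of those properties, open classes: any class, core ones included, can be reopened and modified at runtime. The shout method is invented purely for illustration:

```ruby
# Open classes: reopen String and bolt a new method on at runtime.
# Powerful, and exactly the sort of thing that is easy to overdo.
class String
  def shout
    upcase + "!"
  end
end

"ship it".shout  # => "SHIP IT!"
```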

As Ruby took off and more developers started using it, there was a need for integration with other systems. Thus, lots of effort was put into projects to make Ruby a part of the JVM, CLR, and Cocoa ecosystems. Largely, they delivered. At the end of 2011, you can use Ruby to integrate with and distribute apps for the JVM and OS X, and maybe even Windows. This gave Ruby credibility in large “enterprisey” shops and somewhat freed Ruby from depending on a single implementation. The work to make this happen is non-trivial and thankless but hugely important even if you never touch it; when you see one of these implementers, thank, hug, and/or bribe them.

Ruby could go to there

WARNING Prognostication follows WARNING, your crystal ball is possibly different than mine

Scala, a hybrid functional/object-oriented language for the JVM, is a hot thing these days. A lot of people like that it combines the JVM, the best ideas of object-oriented programming, and then swizzles in some accessible and useful ideas from the relatively untapped lore of functional programming (FP). So it goes, Ruby already does one or two of these things, depending on how you count. The OO part is in the bag. Enumerable exposes a lot of the same abstractions that lie at the foundation of FP. If you’re using JRuby, you’re getting many of the benefits of the JVM, though Scala does one better in this regard right now. Someone could come along and implement immutable, lazy data structures and maybe a few combinators and give Ruby a really good FP story.
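
For instance, Enumerable already speaks the FP basics of map, filter, and fold:

```ruby
# Map, filter, and fold, straight out of Enumerable.
squares = (1..10).map { |n| n * n }             # map
evens   = squares.select(&:even?)               # filter
total   = evens.reduce(0) { |sum, n| sum + n }  # fold
total  # => 220
```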

Systems programming is traditionally the domain of C and C++ developers, with Java and Go starting to pick up some mindshare. Think infrastructure services like web servers, caches, databases, message brokers, and other daemon-y things. When you’re hacking at this level, control over memory and execution is king. Access to good concurrency and network primitives is also important. Ruby doesn’t do a great job of providing all of these right now, and Matz’s implementation might never rank highly here. However, one of the promising aspects of Rubinius is that they’re trying very hard to do well in terms of performance, concurrency, and memory management. If Rubinius can deliver on those efforts, offer easily hacked trapdoors to lower level bits, and encourage the development of libraries for network and concurrent programming, Ruby could easily turn into a good solution for small-to-medium sized infrastructure projects.

Distributed systems are sort of already in Ruby’s wheelhouse and sort of a stretch for Ruby. On the one hand, most Ruby systems already run over a combination of app servers and queue workers, storing data in a hodgepodge of browser caches, in-heap caches, and databases. That’s a distributed application, and it’s handy to frame one’s thinking about building an application in terms of the challenges of a distributed system: shared state is hard to manage, failure cases are weird and gnarly, bottlenecks and points of failure are all over the place. What you don’t see Ruby used for is implementing the infrastructure underneath distributed applications. Hadoop, Zookeeper, Cassandra, Riak, and doozerd all rely on both the excellent concurrency and network primitives of their respective platforms and on the reliability and performance those platforms provide. Again, given some more progress on Ruby implementations and good implementations of abstractions for doing distributed messaging, state management, and process supervision, Ruby could be an excellent language to get distributed infrastructure projects off the ground.

Unlikely advances for Ruby

Embedded systems, those that power your video game consoles, TVs, cars, and stereos, rely on promises that Ruby has trouble keeping. C is king here. It provides the control, memory footprint, and predictability that embedded applications crave. Rite is an attempt to tackle this domain. The notion of a small, fast subset of Ruby has its appeal. However, developers of embedded systems typically hang out on the back of the adoption curve and are pretty particular about how they build systems. Ruby might make in-roads here, but it needs a killer app to achieve the success it currently enjoys in application development.

Mobile apps are an explosive market these days. Explosive markets go really well with Ruby (c.f. “web 2.0”, “AJAX”, “the social web”), but mobile is different. It’s dominated by vendor ecosystems. Largely, you’ve got iOS with Objective-C and Cocoa, and Android with Java and, err, Android. Smart developers don’t tack too far from what is recommended and blessed by the platform vendor. There are efforts to make Ruby play well here, but without vendor blessing, they aren’t likely to get a lot of traction.

Place your bets, gentlemen

Tackling the middle tier (object/functional, distributed/concurrent, and systems programming) is where I think a lot of the really promising work is happening. Ruby 1.9 is good enough for many kinds of systems programming and has a few syntactic sugars that make FP a little less weird. JRuby offers integration into some very good libraries for doing distributed and concurrent stuff. Rubinius has the promise to make those same libraries possible on Ruby.

Really sharpening the first tier (thinking about how to script better, getting back to OO principles, fine tuning the web development experience, improving JRuby’s integration story) is where Ruby is going to grow in the short term. The ongoing renaissance, within the Ruby community, of Unix idioms and OO design is moving the ball forward; it feels like we’re building on better principles than we were just two years ago. The people who write Ruby will likely continue to assimilate old ideas, try disastrous new ones, and trend towards adopting better ways of building increasingly large applications.

When it comes to Ruby, go long on server-based applications, hedge your bets on systems infrastructure, and short anything that involves platforms with restricted resources or vendor control.


Your frienemy, the ORM

When modeling how our domain objects map to what is stored in a database, an object-relational mapper often comes into the picture. And then, the angst begins. Bad queries are generated, weird object models evolve, junk-drawer objects emerge, cohesion goes down and coupling goes up.

It’s not that ORMs are a smell. They are genuinely useful things that make it easier for developers to go from an idea to a working, deployable prototype. But it’s easy to fall into the habit of treating them as a top-level concern in our applications.

Maybe that is the problem!

What if our domain models weren’t built out from the ORM? Some have suggested treating the ORM, and the persistence of our objects themselves, as mere implementation details. What might that look like?

Hide the ORM like you’re ashamed of it

Recently, I had the need to build an API for logging the progress of a data migration as we ran it over many millions of records, spitting out several new records for every input record. Said log ended up living in PostgreSQL¹.

Visions of decoupled grandeur in my head, I decided that my API should not leak its databaseness out to the user. I started off trying to make the API talk directly to the PostgreSQL driver, but I wasn’t making much progress down that road. Further, I found myself reinventing things I would get for free in ActiveRecord-land.

Instead, I took a principled plunge. I surrendered to using an AR model, but I kept it tucked away inside the class for my API. My API makes several calls into the AR model, but it never leaks that ARness out to users of the API.

I liked how this ended up. I was free to use AR’s functionality within the inner model. I can vary the API and the AR model independently. I can stub out, or completely replace the model implementation. It feels like I’m doing OO right.

Enough of the suspense, let’s see a hypothetical example

A User model. Everyone has a name, a city, and a URL. I can do all this in my sleep, right?

I start by defining an API. Note that all it knows is that there is some object called Model that it delegates to.

class User
  attr_accessor :name, :city, :url

  def self.fetch(key)
    Model.fetch(key)
  end

  def self.fetch_by_city(key)
    Model.fetch_by_city(key)
  end

  def save
    Model.create(name, city, url)
  end

  def ==(other)
    name == other.name && city == other.city && url == other.url
  end

end

That’s a pretty straightforward Ruby class, eh? The RSpec examples for it aren’t elaborate either.

describe User do

  let(:name) { "Shauna McFunky" }
  let(:city) { "Chasteville" }
  let(:url) { "http://mcfunky.com" }

  let(:user) do
    User.new.tap do |u|
      u.name = name
      u.city = city
      u.url = url
    end
  end

  it "has a name, city, and URL" do
    user.name.should eq(name)
    user.city.should eq(city)
    user.url.should eq(url)
  end

  it "saves itself to a row" do
    key = user.save
    User.fetch(key).should eq(user)
  end

  it "supports lookup by city" do
    user.save
    User.fetch_by_city(user.city).should eq(user)
  end

end

Not much coupling going on here either. Coding in a blog post is full of beautiful idealism, isn’t it?

“Needs more realism”, says the critic. Obliged:

  class User::Model < ActiveRecord::Base
    set_table_name :users

    def self.create(name, city, url)
      super(:name => name, :city => city, :url => url)
    end

    def self.fetch(key)
      from_model(find(key))
    end

    def self.fetch_by_city(city)
      from_model(where(:city => city).first)
    end

    def self.from_model(model)
      User.new.tap do |u|
        u.name = model.name
        u.city = model.city
        u.url = model.url
      end
    end

  end

Here’s the first implementation of an actual access layer for my user model. It’s coupled to the actual user model by names, but it’s free to map those names to database tables, indexes, and queries as it sees fit. If I’m clever, I might write a shared example group for the behavior of whatever implements create, fetch, and fetch_by_city in User::Model, but I’ll leave that as an exercise to the reader.

To hook my model up when I run RSpec, I add a moderately involved before hook:

  before(:all) do
    ActiveRecord::Base.establish_connection(
      :adapter => 'sqlite3',
      :database => ':memory:'
    )

    ActiveRecord::Schema.define do
      create_table :users do |t|
        t.string :name, :null => false
        t.string :city, :null => false
        t.string :url
      end
    end
  end

As far as I know, this is about as simple as it gets to bootstrap ActiveRecord outside of a Rails test. So it goes.

Let’s fake that out

Now I’ve got a working implementation. Yay! However, it would be nice if I didn’t need all that ActiveRecord stuff when I’m running isolated unit tests. Because my model and data access layer are decoupled, I can totally do that. Hold on to your pants:

require 'active_support/core_ext/class'

class User::Model
  cattr_accessor :users
  cattr_accessor :users_by_city

  def self.init
    self.users = {}
    self.users_by_city = {}
  end

  def self.create(name, city, url)
    key = Time.now.tv_sec
    hsh = {:name => name, :city => city, :url => url}
    users[key] = hsh
    users_by_city[city] = hsh
    key
  end

  def self.fetch(key)
    attrs = users[key]
    from_attrs(attrs)
  end

  def self.fetch_by_city(city)
    attrs = users_by_city[city]
    from_attrs(attrs)
  end

  def self.from_attrs(attrs)
    User.new.tap do |u|
      u.name = attrs[:name]
      u.city = attrs[:city]
      u.url = attrs[:url]
    end
  end

end

This “storage” layer is a bit more involved because I can’t lean on ActiveRecord to handle all the particulars for me. Specifically, I have to handle indexing the data in not one but two hashes. But, it fits on one screen and it’s in memory, so I get fast tests at not too much overhead.

This is a classic test fake. It’s not the real implementation of the object; it’s just enough for me to hack out tests that need to interact with the storage layer. It doesn’t tell me whether I’m doing anything wrong like a mock or stub might. It just gives me some behavior to collaborate with.

Switching my specs to use this fake is pretty darn easy. I just change my before hook to this:

  before { User::Model.init }

Life is good.

Now for some overkill

Time passes. Specs are written, code is implemented to pass them. The application grows. Life is good.

Then one day the ops guy wakes up, finds the site going crazy slow, and sees that there are a couple hundred million users in the system. That’s a lot of rows. We’re gonna need a bigger database.

Migrating millions of rows to a new database is a pretty big headache. Even if it’s fancy and distributed. But, it turns out changing our code doesn’t have to tax our brains so much. Say, for example, we chose Cassandra:

require 'cassandra/0.7'
require 'active_support/core_ext/class'

class User::Model

  cattr_accessor :connection
  cattr_accessor :cf

  def self.create(name, city, url)
    generate_key.tap do |k|
      cols = {"name" => name, "city" => city, "url" => url}
      connection.insert(cf, k, cols)
    end
  end

  def self.generate_key
    SimpleUUID::UUID.new.to_guid
  end

  def self.fetch(key)
    cols = connection.get(cf, key)
    from_columns(cols)
  end

  def self.fetch_by_city(city)
    expression = connection.create_index_expression("city", city, "EQ")
    index_clause = connection.create_index_clause([expression])
    slices = connection.get_indexed_slices(cf, index_clause)
    cols = hash_from_slices(slices).values.first
    from_columns(cols)
  end

  def self.from_columns(cols)
    User.new.tap do |u|
      u.name = cols["name"]
      u.city = cols["city"]
      u.url = cols["url"]
    end
  end

  def self.hash_from_slices(slices)
    slices.inject({}) do |hsh, (k, columns)|
      column_hash = columns.inject({}) do |inner, col|
        column = col.column
        inner.update(column.name => column.value)
      end
      hsh.update(k => column_hash)
    end
  end
end

Not nearly as simple as the ActiveRecord example. But sometimes it’s about making hard problems possible even if they’re not mindless retyping. In this case, I had to implement ID/key generation for myself (Cassandra doesn’t implement any of that). I also had to do some cleverness to generate an indexed query and then to convert the hashes that Cassandra returns into my User model.
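
For the key generation piece, Ruby’s standard library can also do the job if SimpleUUID isn’t around; a sketch using SecureRandom from the 1.9 stdlib:

```ruby
require 'securerandom'

# Generate a random (version 4) UUID string to use as a row key.
def generate_key
  SecureRandom.uuid
end

key = generate_key
key  # e.g. "f81d4fae-7dec-4d0a-a765-00a0c91e6bf6" (36 characters)
```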

But hey, look! I changed the whole underlying database without worrying too much about mucking with my domain models. I can dig that. Further, none of my specs need to know about Cassandra. I do need to test the interaction between Cassandra and the rest of my stack in an integration test, but that’s generally true of any kind of isolated testing.

This has all happened before and it will all happen again

None of this is new. Data access layers have been a thing for a long time. Maybe institutional memory and/or scars have prevented us from bringing them over from Smalltalk, Java, or C#.

I’m just sayin’, as you think about how to tease your system apart into decoupled, cohesive, easy-to-test units, you should pause and consider the idea that pushing all your persistence needs down into an object you later delegate to can make your future self think highly of your present self.


  1. This ended up being a big mistake. I could have saved myself some pain, and our ops team even more pain, if I’d done an honest back-of-the-napkin calculation and stepped back for a few minutes to figure out a better angle on storage. ↩


Relentless Shipping

Relentless Quality is a great piece. We should all strive to make really fantastic stuff. But I think there’s a nuance worth observing here:

Sharpen the edges, polish the surface and make it shine.

I’m afraid that some people are going to read more than Kneath intends here. Quality does not mean perfection. Perfection is the enemy of shipping. Quality is useless if it doesn’t ship. Quality is not an excuse for not shipping.

Quality is a subjective, amorphous thing. To you, it means the fit and finish. To me, it means that all the bugs have been eliminated and possible bugs thought about and excised. Even to Christopher Alexander, quality isn’t nailed down; he refers to good buildings as possessing the “quality without a name”.

To wit, this shortcoming is pointed out in the original essay:

Move fast and break things, then move fast and fix it. Ship early, ship often, sacrificing features, never quality.

Scope and quality are sometimes at odds. Schedules and quality are sometimes at odds. There may come a time when you have to decide between shipping, maintaining quality, and including all the features.

The great thing about shipping is that if you can do it often enough, these problems of slipping features or making sacrifices in quality can fade away. If you can ship quickly, you can build features out, test them, and put that quality on them in an iterative fashion. Shipping can’t cure all ills, but it can ease many of them.

Kneath is urging you to maintain quality; I’m urging you to ship some acceptable value of quality and then iterate to make it amazing. Relent on quality, if you must, so you can ship relentlessly.