bitly's nsq has some good ideas

bitly/nsq:

NSQ is a realtime message processing system designed to operate at bitly's scale, handling billions of messages per day.

It promotes distributed and decentralized topologies without single points of failure, enabling fault tolerance and high availability coupled with a reliable message delivery guarantee.

No SPOFs and reliable message delivery, without relying on something like ZooKeeper, is a big claim. They have some novel approaches to these problems.

First, they run an intermediary daemon, nsqlookupd, between the producers/consumers and the actual queues. These daemons monitor all the available queue servers and tell clients what to connect to, so applications are never configured with the addresses of actual queue servers. They then run multiple lookup daemons, which are stateless and don’t need to agree with each other for the system to operate properly.
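
Here’s a rough sketch of what that discovery step looks like from a client’s point of view, assuming nsqlookupd’s HTTP lookup endpoint and a hypothetical pair of lookup daemon addresses (the exact response shape varies by version, so treat the parsing as illustrative):

    require "net/http"
    require "json"
    require "uri"

    # Hypothetical addresses for a redundant set of nsqlookupd daemons.
    LOOKUPD_ADDRESSES = %w[lookupd-1.example.com:4161 lookupd-2.example.com:4161]

    # Ask every lookup daemon which queue servers currently carry the topic.
    # Any one healthy daemon is enough; they don't need to agree.
    def discover_producers(topic)
      LOOKUPD_ADDRESSES.flat_map do |addr|
        begin
          uri = URI("http://#{addr}/lookup?topic=#{topic}")
          body = JSON.parse(Net::HTTP.get(uri))
          producers = body["producers"] || body.dig("data", "producers") || []
          producers.map { |p| "#{p["broadcast_address"]}:#{p["tcp_port"]}" }
        rescue StandardError
          [] # a dead lookupd is fine; try the others
        end
      end.uniq
    end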

Reliable message delivery is provided with at-least-once delivery semantics. They require all consumers to de-duplicate messages or restrict themselves to idempotent operations. Not exactly legacy friendly, as many applications are coded with the assumption of a closed, one-shot world. But. Idempotence: I highly recommend it if you have the means.
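
To make that concrete, here’s a minimal sketch of a consumer-side guard; the message shape, the ID store, and charge_customer are all hypothetical, and a real deduplication store would be shared and durable rather than an in-process Set:

    require "set"

    # Toy in-memory record of handled message IDs. In production this would be
    # something durable and shared: a database unique constraint, a key with a TTL, etc.
    PROCESSED_IDS = Set.new

    def handle_message(message_id, payload)
      # At-least-once delivery means this can run twice for the same message.
      return :duplicate unless PROCESSED_IDS.add?(message_id)

      charge_customer(payload) # hypothetical downstream work; ideally idempotent itself
      :processed
    end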

If you need to prevent losing messages due to the FBI stealing your servers, which is something you definitely need to account for, you can set up redundant pairs of servers and rely on deduplication/idempotence to make sure you’re only processing messages once, even if you consume them multiple times.

In summary: lots of good ideas here. Perhaps some of them could be applied to how people are using Resque?


Invent the right thing

You have to invent the right thing. Some things you might invent:

  1. A solution to a problem. Nothing novel, just an answer for a question. Eg. any Rails/Django/etc. application.

  2. An application of some existing technologies in a novel way. Eg. integrating a library to avoid inventing your own solution.

  3. An incrementally better, specific kind of mouse trap. Eg. building on top of existing infrastructure to solve a problem better than any existing solutions.

  4. An entirely new kind of mousetrap. Eg. building wholly new infrastructure because you face a high quality, unique problem that you are imminently required to solve.

Inventing the wrong thing means you’re operating at the wrong level. If you’re too high, you’re spinning your wheels on problems you hope to have. If you’re too low, you’re spinning your wheels on building something that isn’t sufficient to solve your problems. If you’re at the right level, you’re mostly solving problems you actually face and not solving too many coincidental problems.

This doesn’t mean new problems shouldn’t be tackled and new technologies shouldn’t be invented. It applies mostly to reinventing wheels. That is, a project starts at level 1, not level 3 or 4. Apply a technology and improve it before you push the edge. In fact, you must push the limits of an extant technology before level 4 is the right answer. No skipping allowed.

Don’t let imposter syndrome lead you to the wrong technology decision. I’ve tried to build at the wrong level in the past because I felt like I had to fit in with the level of what others working on larger systems were building. Wrong answer.

It’s OK to build a scooter instead of a spaceship if all you need to do is go pick up the mail.


A better shared space

Remote teams are hard. Not impossible hard, but running uphill hard. It’s hard because people are used to interacting face-to-face. Given the opportunity, they’ll interact with those around them rather than those in virtual spaces.

The trick, I think, is to make a better shared space for a remote/local team than the physically shared space they already have. A space that is just as fluid, fun, and useful as a physical space and available anytime, everywhere is more compelling because it affords its occupants (aka team members) more hours in their day (no commuting, flexible hours) and permits all sorts of non-traditional work locations (coffee shops, trains, sofas at home, a summer trip to Europe).

Decoupling work from location and time is a big deal. I hope more companies, in software and outside of it, attempt to solve it.


I got Clojure stacks

Here’s a Sunday afternoon hack. It’s a “stack” machine implemented in Clojure. I intended for it to be a stack machine, no airquotes, but I got it working and realized what I’d really built was a machine with two registers and instructions that treat those two registers as a stack. Pretty weird, but it’s not bad for a weekend hack.

I’m going to break my little machine down and highlight things that will feel refreshingly different to someone, like me, who has spent the past several years in object-oriented languages like Ruby. What follows are observations; I’m still very new to Clojure, despite familiarity with the concepts, so I’ll pass on making global judgements.

Data structures as programs as data

I’ve seen more than one Rubyist, myself included, say that code-as-data, a concept borrowed from Lisp’s syntax, is possible and regularly practiced in Ruby. DSLs and class-oriented little languages accomplish this, to some degree. In my experience, this metaprogramming is really happening at the class level, using the class to hold data that dynamic code parses to generate new behaviors.

In contrast, Clojure is a Lisp, so programs really are data. To wit, this is the crux of my stack machine; the actual stack machine program is a Clojure data structure that in turn specifies some Clojure functions to execute:

    (def program
      [['mpush 1]
       ['mpush 2]
       ['madd]
       ['mpush 4]
       ['msub]
       ['mhalt]])

    (run program)

If you’ve never looked at Clojure or Lisp code, just squint and I bet you’ll keep up. This snippet defines a global variable, of sorts, program, whose value is a vector of vectors (think an Array of Arrays) specifying the instructions in my stack machine program. In short, this program pushes two values onto the stack, 1 and 2, adds them, pushes another value, 4, subtracts 4 from the result of the addition, and then halts, which prints out the current state of the “stack” registers.

I’ve got a function named run which takes all these instructions, does some Clojure things, then hands them off to instruction functions for execution.

Some familiar idioms

Let’s look at run. It’s really simple.

    (defn run [instructions]
      (reduce execute initial-state instructions))

This function takes one argument, instructions, a Clojure collection (generally called a seq; this one in particular is a vector). Clojure has an amazing library of functions that operate on collections, just as Ruby has Enumerable. In fact, reduce in Clojure is the same idea as inject in Ruby (reduce is even an alias for inject in Ruby!). The way I’m calling it says “iterate over the instructions collection, calling execute with the accumulated state and each item; on the first iteration, use initial-state as the initial value of that accumulator”.
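
If it helps, that one-liner translates to Ruby roughly like this (a sketch using snake_cased versions of the Clojure names):

    instructions.inject(initial_state) { |state, inst| execute(state, inst) }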

initial-state is another global variable whose value is a mapping (in Ruby, a hash) that maintains the state of the machine. It has two keys, op-a and op-b, representing my two stack-ish registers.

    (def initial-state
      {:op-a nil :op-b nil})

Now you’d expect to find an execute function that takes the accumulated state plus a value and returns a new version of that state, just like the block you’d pass to Ruby’s inject. And here that function is:

    (defn execute [state inst]
      (let [fun (ns-resolve *ns* (first inst))
            params (rest inst)]
        (apply fun [params state])))

This one might require extra squinting for eyes new to Clojure. execute takes two arguments: the current state of the stack machine, state, and the instruction to execute, inst. It then uses let to create local bindings based on the function’s parameters. I use Clojure’s ns-resolve to turn a quoted symbol (quoting, in Lisp, means escaping a name so the evaluator doesn’t try to evaluate it) into a function reference. Because the instruction is of the form [instruction-name arg arg arg ...], I use first and rest to split the instruction into the function name, bound to fun, and the argument list, bound to params.

The meat of the function “applies” the function I resolved in the let block to the params I extracted out of the instruction, plus the current machine state. Think of apply like send in Ruby; it’s a way to call a function when you have a reference to it.

The sharp reader would now start searching for a bunch of functions, each of which implements an instruction for our stack machine. And so…

Some boilerplate arrives

Here is the implementation for mpush, madd, and mhalt:

    (defn mpush [params state]
      (let [a (state :op-a)
            b (state :op-b)
            v (first params)]
        {:op-a v :op-b a}))

    (defn madd [params state]
      (let [a (state :op-a)
            b (state :op-b)]
        {:op-a (+ a b) :op-b nil}))

    (defn mhalt [params state]
      (println state))

Each instruction takes some arguments and the state of the machine. They do some work and return a new state of the stack machine. Easy, and oh-so-typically functional!

These instructions are where I’d introduce something clever-ish in Ruby. That let where the register values are extracted feels really boilerplate-y. In Ruby, I know what I would do about that: a method taking a block, probably.

I’m not sure how I’d clean this up in Clojure. A macro, a function abstraction? I leave it as an exercise to the reader, and to myself, to find something that involves less copypasta each time a new instruction is implemented.


I found some pleasant surprises in this foray into Clojure:

  • Building programs from bottom-up functions in a functional language is at least as satisfying as doing the same with a TDD loop in an object-oriented language. It is just as conducive to dividing a problem into quickly solved blocks and then putting the whole thing together. It does, however, lack a repeatable verification artifact as a secondary output.
  • At first I was a little skeptical of the fact that Clojure mappings (hashes) can be treated as data structures, by passing them to functions, or as functions, by calling them with a key as the argument to look up. In practice, this is a really awesome thing, and it’s a nice way to write one’s own abstractions as well. There’s something to using higher-order functions more prevalently than Ruby does.
  • The JVM startup isn’t quick in absolute terms, but at this point it’s faster than almost any Rails app, and many pure Ruby apps, to boot. Damning praise for the JVM and Ruby alike, but the take-away is that I never felt distracted or out-of-flow due to waiting around on the JVM.

Bottom line: there’s a lot to like in Clojure. It’s likely you’ll read about more forays into Clojure in this space.


Faster, computer program, kill kill!

Making code faster requires insight into the particulars of how computers work. Processor instructions, hardware behavior, data structures, concurrency; it’s a lot of black art. Here are a few things to read on the forbidden lore of fast programs:

Fast interpreters are made of machine sympathy. Implementing Fast Interpreters. What makes the Lua interpreter, and some JavaScript interpreters, so quick. Includes assembly and machine code details. Juicy!

Lockless data structures, the easy way. A deep dive into lock-free data structures in Java. How do those fancy Java concurrent libraries work? Fancy processor instructions! Great read.

Now is an interesting time to be a bottleneck. Your bottleneck is dead. Hardware, particularly IO, is advancing such that bottlenecks in code are exposed. If you’re running on physical hardware, especially if you have solid-state disks, your bottleneck is probably language-bound or CPU-bound code.

Go forth, read a lot, measure twice (beware the red herrings!), and make faster programs!


When to Sinatra, when to Rails

On Rails, Sinatra, and picking the right tool for the job. Pedro Belo, of Heroku fame, finds Rails is way better for pure-web apps and Sinatra is way better for pure-API apps. Most of it comes down to this: Rails has better tooling, and Sinatra is better for scratching itches, which happens a lot more in APIs than in applications. I’m not ready to pronounce this the final word, but what he’s saying lines up with much of my experience.

That said, you can get pretty far with a Rails API by segregating it from your application. That is, your app controllers inherit from ApplicationController and your API controllers inherit from ApiController. This keeps the often wildly different needs of applications and APIs nice and distinct.
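
A minimal sketch of that split, with a hypothetical Widget resource and modern-Rails conventions standing in for the details:

    # app/controllers/api_controller.rb
    # Web controllers inherit from ApplicationController; API controllers inherit
    # from this, so API-only concerns (no CSRF/session baggage, JSON errors) live
    # in one place.
    class ApiController < ActionController::Base
      protect_from_forgery with: :null_session

      rescue_from ActiveRecord::RecordNotFound do
        render json: { error: "not found" }, status: :not_found
      end
    end

    # app/controllers/api/widgets_controller.rb
    class Api::WidgetsController < ApiController
      def show
        render json: Widget.find(params[:id])
      end
    end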


Common sense code checks

Etsy’s Static Analysis for PHP. This isn’t as complicated as you might think. While Facebook’s HipHop is used, and is quite sophisticated, a lot of this is just common sense. Trigger code reviews when oft-misused functions are used or when functions that involve security things are introduced.
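
The simplest version of that check is barely more than a grep over a diff; here’s a hypothetical sketch (the pattern list and what you do with the flags are up to you):

    # Flag additions in a diff that call functions worth a second pair of eyes.
    RISKY_PATTERNS = {
      /\beval\(/       => "eval: code injection risk",
      /\bextract\(/    => "extract: clobbers local variables",
      /mysql_query\(/  => "raw query: check for SQL injection",
    }

    def review_flags(diff_text)
      diff_text.each_line.with_index(1).flat_map do |line, lineno|
        next [] unless line.start_with?("+")
        RISKY_PATTERNS.map { |pattern, why| "line #{lineno}: #{why}" if line =~ pattern }.compact
      end
    end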

This stuff is great for an intern or new team member to get a quick win with. So next time you bring someone onto your team, why not turn them loose on these kinds of quick, big wins?


Designing for Concurrency

A lot is made of how difficult it is to write multi-threaded programs. No doubt, it is harder than writing a CRUD application or your own testing library. On the other hand, it’s not as difficult as writing a database or a 3D graphics engine. The point is, it’s worth learning how to do. Setting aside the hubris, and accepting that your program will have bugs that require discipline to track down, is an enabling step toward learning to write multithreaded programs.

I haven’t seen much written about the experience of writing a concurrent program and how one designs classes and programs with the rules of concurrency in mind. So let’s look at what I’ve learned about designing threaded programs so far.

The headline is this: only allow objects in consistent states and don’t rely on changing state unless you have to. Let’s first look at a class that does not embody those principles at all.

class Rectangle
  attr_accessor :width, :height

  def orientation
    if width > height
      WIDE
    else
      TALL
    end
  end

  WIDE = "WIDE".freeze
  TALL = "TALL".freeze
end

Just for fun, mentally review that code. What are the shortcomings, what could go wrong, what would you advise the writer to change?

For our purposes, the first flaw is that new Rectangle objects are in an inconsistent state. If we create an object and immediately call orientation, bad things will happen. If you’re typing along at home:

begin
  r = Rectangle.new
  puts r.orientation
rescue
  puts "whoops, inconsistent"
end

The second flaw is that our object allows bad data. We should not be able to do this:

r.width = 100
r.height = -20
puts r.orientation

Alas, we can. The third flaw is that we could accidentally share this object across threads and end up messing up the state in one thread because of logic in another thread. This sort of bug is really difficult to track down, so designing our objects so it can’t happen is highly desirable. We want to make this sort of code safe:

r.height = 150
puts r.orientation

When we modify width or height on a rectangle, we should get back an entirely new object.

Let’s go about fixing each of these flaws.

Encapsulate object state with Tell, Don’t Ask

The first flaw in our Rectangle class is that it isn’t guaranteed to exist in a consistent state. We go through contortions to make sure our databases are consistent; we should do the same with our Ruby objects too. When an object is created, it should be ready to go. It should not be possible to create a new object that is inconsistent.

Further, we can solve the second flaw by enforcing constraints on our objects. We use the “Tell, Don’t Ask” principle to ensure that when users of Rectangle change its state, they don’t get direct access to that state. Instead, they must pass through guards that protect it.

All of that sounds fancy, but it really couldn’t be simpler. You’re probably already writing your Ruby classes this way:

class Rectangle
  attr_reader :width, :height

  def initialize(width, height)
    @width, @height = width, height
  end

  def width=(w)
    raise "Negative dimensions are invalid" if w < 0
    @width = w
  end

  def height=(h)
    raise "Negative dimensions are invalid" if h < 0
    @height = h
  end

  def orientation
    if width > height
      WIDE
    else
      TALL
    end
  end

  WIDE = "WIDE".freeze
  TALL = "TALL".freeze
end

A lot of little things have changed in this class:

  • The constructor now requires the width and height arguments. If you don’t know the width and height, you can’t create a valid rectangle, so why let anyone get confused and create a rectangle that doesn’t work? Our constructor now encodes and enforces this requirement.
  • The width= and height= setters now enforce validation on the new values. If the constraints aren’t met, a rather blunt exception is raised. If everything is fine, the setters work just like they did in the old class.
  • Because we’ve written our own setters, we use attr_reader instead of attr_accessor.

With just a bit of code, a little explicitness here and there, we’ve now got a Rectangle whose failure potential is far smaller than the naive version. This is simply good design. Why wouldn’t you want a class that is designed not to silently blow up in your face?

The crux of the biscuit for this article is that now we have an object with a narrower, explicit interface. If we need to introduce a concurrency mechanism like locking or serialization (i.e. serial execution), we have some straightforward places to do so. An explicit interface, the specific messages an object responds to, opens up a world of good design consequences!
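
For example, if serialized access ever became necessary, the explicit setters are the natural seam for a lock. A minimal sketch of that idea (the Rectangle in this article doesn’t need it yet):

require "thread"

class Rectangle
  def initialize(width, height)
    @lock = Mutex.new
    @width, @height = width, height
  end

  def width=(w)
    raise "Negative dimensions are invalid" if w < 0
    @lock.synchronize { @width = w }
  end

  # height= and orientation would wrap their work in the same lock.
end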

Lean towards immutability and value objects whenever possible

The third flaw in the naive Rectangle class is that it could accidentally be shared across threads, with possibly hard to detect consequences. We can get around that using a technique borrowed from Clojure and Erlang: immutable objects.

class Rectangle
  attr_reader :width, :height

  def initialize(width, height)
    validate_width(width)
    validate_height(height)
    @width, @height = width, height
  end

  def validate_width(w)
    raise "Negative dimensions are invalid" if w < 0
  end

  def validate_height(h)
    raise "Negative dimensions are invalid" if h < 0
  end

  def set_width(w)
    self.class.new(w, height)
  end

  def set_height(h)
    self.class.new(width, h)
  end

  def orientation
    if width > height
      WIDE
    else
      TALL
    end
  end

  WIDE = "WIDE".freeze
  TALL = "TALL".freeze
end

This version of Rectangle further extracts the validation logic into separate methods so we can call it from the constructor and from the setters. But, look more closely at the setters. They do something you don’t often see in Ruby code. Instead of changing self, these setters create an entirely new Rectangle instance with new dimensions.

The upside to this is, if you accidentally share an object across threads, any changes to the object will result in a new object owned by the thread that initiated the change. This means you don’t have to worry about locking around these Rectangles; in practice, sharing is, at worst, copying.

The downside to this approach is that you could end up with a proliferation of Rectangle objects in memory. This puts pressure on the Ruby GC, which might cause operational headaches further down the line. Clojure gets around this by using persistent data structures that safely share their internal structure, reducing memory requirements. Hamster is one attempt at bringing such “persistent” data structures to Ruby.
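
As a taste of what a persistent structure feels like, here’s a rough sketch with Hamster; I’m assuming its put-style API from memory, so treat the method names as illustrative:

require "hamster"

rect  = Hamster::Hash[width: 100, height: 50]
wider = rect.put(:width, 200) # returns a new hash; rect is untouched

rect[:width]  # => 100
wider[:width] # => 200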

Let’s think about object design some more. If you’ve read up on domain-driven design, you probably recognize that Rectangle is a value object. It doesn’t represent any particular rectangle. It binds a little bit of behavior to a domain concept our program uses.

That wasn’t so hard, now was it?

I keep trying to tell people that, in some ways, writing multithreaded programs is as simple as applying common object-oriented design principles. Build objects that are always in a sensible state, don’t allow twiddling that state without going through the object’s interface, use value objects when possible, and consider using immutable value objects if you’re starting from scratch.

Following these principles drastically reduces the number of states you have to think about and thus makes it easier to reason about how the program will run with multiple threads and how to protect data with whatever form of lock is appropriate.


Cardinal sins

It is conceivable that a really good machine can learn our hash algorithm really well, but in the case of string hashing we still have to walk some memory to give us reasonable assurance of unique hash codes. So there's performance sin #1 violated: never read from memory.
Avoiding Hash Lookups in a Ruby Implementation, on the quest to eliminate the use of ad-hoc hashes inside JRuby. I love that a runtime’s cardinal sin is reading from memory. It makes avoiding random database lookups in web applications look like a walk in the park.

On the other hand, consider how much fun it is to write compilers; their cardinal sin is a conditional, or anything else that would stall the processor pipeline. If that seems pedestrian, then consider the cardinal sin of a processor designer: anything that takes longer than one clock cycle, or about half a billionth of a second if you’re keeping score at home.


Three application growth stories

First you grow your application, then you grow your organization, and then you get down to the metal and eke out all the performance you can.

Evolution of SoundCloud’s Architecture: this is how you grow an application without eating the elephant too soon. I would love to send this back to Adam from two years ago. Note that they ended up using RabbitMQ as a broker instead of Resque as a job queue. This nuanced position put them in a pretty nice place, architecturally.

Addicted to Stable is equal parts “hey, you should automate things and use graphs/monitoring to tell you when things break” and “look at all of GitHub’s nifty internal tools”. Even though I’ve seen the latter a few times already, I like my pal John Nunemaker’s peek into how it all comes together.

High Performance Network Programming on the JVM explains how to choose network programming libraries for the JVM, covers some pros and cons to know about, and lays out a nice conceptual model for building network services. Seems like this is where you want to start once you reach the point where your application needs to serve tens of thousands of clients concurrently.

I’m going to keep posting links like these until, some day, I feel like I’m actually doing it right. Until then, stand on other people’s shoulders, learn from experience.


My inner dialog while coding


I’m a bit of a sailor when I’m wrangling my own creations.


Hello, you beautiful fixed-width font

Pitch. Not quite a programmer’s font, but holy cow is it gorgeous.

I love the thought put into this type; the creator actually tried to recreate the artifacts of type created by physically striking paper. Turned out that took away from the font, but it’s delightful that he went that deep in considering what a fixed-width font should feel like.

The history of fixed-width, typewriter-esque fonts is fantastic too. Even if you’re not typography-curious like myself, you should read the whole thing and not just look at the fantastic specimens.


One part mechanics, one part science

One black-and-white perspective on building software is that part of it is about mechanics and part of it is about science. The mechanics part is about wiring things up, composing smaller solutions into bigger ones, and solving any problems that arise in the process. The science part is taking problems that don’t fit well into the existing mechanisms and making a new mechanism that identifies and solves all the right puzzles.

You could look at visual and interaction design in the same way. The mechanical part is about using the available assets and mechanisms to create a visual, interactive experience on screens that humans interact with. The science is about solving a problem using ideas that people already understand or creating an idea that teaches people how to solve a problem.

The mechanical case is about knowing tools, when to use them, and how they interact with each other. The scientific case is about holding lots of state and puzzles in your head and thinking about how computers or people will interact with the system.

I’ve observed that people end up all along the spectrum. Some specialize in mechanics, others in science. The rare person who can work adeptly on both sides, even if they’re not the best at either discipline, is really fun to watch.


Know a feedback loop

TDD is one way to create a feedback loop for building your application. Spiking code out and then stabilizing it is another:

For most people, TDD is a mechanism for discovery and learning. For some of us, if we can write an example in our heads, our biggest areas of learning probably lie elsewhere. Since ignorance is the constraint in our system, and we’re not ignorant about much that TDD can teach us, skipping TDD allows us to go faster. This isn’t true of everything. Occasionally the feedback is on some complex piece of business logic. Every time I’ve tried to do that without TDD it’s stung me, so I’m getting better at working out when to do it, and when it’s OK to skip it.

TDD helps me a lot when I have an idea what the problem looks like. Spiking out a prototype and backfilling tests helps me when I don’t know what the problem looks like.

You’re possibly different in how you approach problems. If you’re flying more by the seat of your pants, or you aren’t including the composition and organization of the code in your feedback loop, I will probably insist you work on something that isn’t in the core layers of the application. That’s fine, though; as long as you have some feedback loop that nudges you towards discovering and solving the core problem, we’re cool.


Constructive teamwork is made of empathy

We nerds are trained from an early age to argue on the internet, hone our logical skills, and engage with people based on data instead of empathy.

twitter.com/gotascii/…

It’s so hard to divorce reason, emotion, and making progress on a project. Letting a logical inconsistency go is harder than forcing someone to see the flaw in their reasoning. Getting angry or worked up feels more powerful than a supportive attitude. There are so many disasters to avoid that it’s hard not to force everyone to listen to all the things you’ve been burned by previously and how you want to avoid them at all costs.

Take a deep breath. Fire up your empathy muscles. Figure out how to say “yes” to the work of your teammates while using your experience to guide that work to an even better place. This is what they call “constructive teamwork”.


Futures, Features, and the Enterprise-D

A future is a financial instrument (a thing you invest in) where you commit today to a price for something you’ll receive tomorrow. The market price could go up or down by tomorrow, but you’re locked into today’s price. Price goes up, you profit; price goes down, you eat the difference.

A feature is a thing that software does. For our purposes, we’ll say it’s also work that enables a feature: setting up CI, writing tests, refactoring code, adding documentation, etc. The general idea behind software development is that you should gain more time using a feature than the time you spent implementing it.

The Enterprise-D is a fictional space ship in the Star Trek: The Next Generation universe. It can split into two spaceships and is pretty well armed for a ship with an exploratory mission.


Today, Geordi and Worf (middle management) are recalibrating the forward sensor array. It takes them most of the day, but they get the job done. Captain Picard is studying ancient pan-flutes of the iron-age Vulcan era. Data (an android), as an experiment on his positronic net, is trying to learn how to tell an Aristocrat joke.

Tomorrow, in a series of events no one could predict, our friends find themselves in a tense situation with a Romulan Bird of Prey. Luckily, Worf detected it minutes before it decloaked, thanks to the work he and Geordi had performed the day before. This particular Bird of Prey is carrying ancient Romulan artifacts dating back to their own iron age. Amazingly, Picard is able to save the day by translating the inscriptions, which aren’t too different from Vulcan pan-flutes, and prevents an ancient doomsday weapon from consuming the Bird of Prey and Enterprise alike.

Data’s Aristocrat joke is never used. That’s good, because this is a family show.


Our friends on the Enterprise are savvy investors who look at their efforts in terms of risk and reward. They each invest time today into an activity (an instrument, in financial terms) which they may or may not use tomorrow. We can say that if they end up using the instrument, it pays off. We can then measure the pay-off of that instrument by assigning a value to the utility of that instrument. If the value of the instrument exceeds the time they invested in “acquiring” it, there is a profit.

Geordi and Worf’s investment was clearly a profit-bearing endeavor. Few other uses of their time, such as aligning the warp crystals or practicing Klingon combat moves, could have detected an invisible ship before it uninvisibles itself. In Wall Street terms, Geordi and Worf are getting the fat bonus and bottle of Bollinger champagne.

Picard’s investment seems less clear cut. It did come in handy in this particular case, but it probably wasn’t the only activity that would have saved the day. He could have belted out some Shakespeare or delegated to one of his officers to reconfigure the deflector dish. We’ll mark Picard as even for the day.

Data totally blew this one. His Aristocrat joke went unused. Even if he had used it, the best outcome would be that it’s a lame, sterile groaner that only ends up on a DVD extras reel. Data is in the red.

In terms of futures, we can say that the price of working on the forward sensor array went up, the price of pan-flute research was largely unchanged, and the price of Aristocrat jokes plummeted. Our friends on the Enterprise implicitly decided what risks are the most important to them and hedged against three of them. Some of them even came out ahead!


I’m working on software. Today, I can choose to do things on that software. I could 1) start adding a new feature, 2) shore up the test suite, or 3) get CI set up and all-green. Respectively, these are futures addressing 1) the risk of losing money due to missing functionality, 2) losing money because adding features takes too long to get right, or 3) losing money because things are broken or not communicated in a timely manner.

Like our Enterprise episode, it’s hard to value these futures. If I deliver the feature tomorrow and it generates more money than the time I put into implementing, testing, and deploying the code, we’re looking at a clear profit. Revenue minus expense equals profit, grossly speaking.

Shoring up the test suite might make another feature easier to implement. It might give me the confidence to move code around to facilitate that feature. It could tell me when I’ve broken some code, or when some code is poorly designed and holding me back. But these values are super hard to quantify. Did I save two hours on some feature because I spent one hour on the test suite yesterday? Tricky question!

Chore-ish tasks, like standing up a CI server or centralizing logs, are even harder to quantify. Either one of these tasks could save hours and days of wasted time due to missed communication or troubleshooting an opaque system. Or, they might not pay off at all for weeks and months.


I’m going to start writing down what I worked on every day, guess how many hours I spent on it, and then revisit each task weekly or monthly to guess if it paid out. Maybe I’ll develop an intuition for risk and reward for the things I work on. Maybe I’ll just end up with a mess of numbers. Almost certainly, I will seem pretty bookish and weird for tracking these sorts of things.

You should look bookish and weird too. Let me know what you find. I’ll write up whatever we figure out. Maybe there’s something to this whole “finance” thing besides nearly wrecking the global economy!


The test-driven astronaut

Don't Make Your Code "More Testable", make the design of your program better. Snappy test suites are all the rage, but that misses the point of even writing tests: create a feedback loop to know when your program works and when your program is organized well. Listen carefully to the whispers in your code; if you're spending all your time writing tests or shuffling code instead of adding features, improving features, or shipping features, then you're falling for the siren song of the test-driven astronaut.


Simplicators for sanity

For those rainy days when integrating with a not-entirely sane system is getting you down:

A Simplicator introduces a new seam into the system that did not exist when the service's byzantine API was used directly. As well as helping us test the system, I've noticed that this seam is ideal for monitoring and regulating our systems' use of external services. If a widely supported protocol is used, we can do this with off-the-shelf components.

The Simplicator is a component that lives outside the architecture of your system. It exports a sane interface to your system. You test it separately from your system. Its only purpose in life is to deal with the insanity of others.
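
In code, a Simplicator can be as humble as a small class (or a tiny proxy service) that speaks your domain on one side and the vendor's byzantine API on the other. A hypothetical sketch, with made-up endpoint and field names:

    require "net/http"
    require "json"
    require "uri"

    # The rest of the system only ever sees #quote_for; the vendor's envelope
    # format, magic field names, and flaky responses stay behind this wall.
    class ShippingSimplicator
      def initialize(base_url)
        @base_url = base_url
      end

      # Returns a quote in cents, or nil if the vendor is having one of those days.
      def quote_for(weight_in_grams, destination_zip)
        uri = URI("#{@base_url}/rate?wt=#{weight_in_grams}&dst=#{destination_zip}")
        response = JSON.parse(Net::HTTP.get(uri))
        response.dig("rateResponse", "quote", "amountCents")
      rescue StandardError
        nil
      end
    end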

Hell is other people’s systems; QED this is a heavenly idea.


Smelly obsessions

Get Rid of That Code Smell - Primitive Obsession:

Think about it this way: would you use a string to represent a date? You could, right? Just create a string, let’s say "2012-06-25" and you’ve got a date! Well, no, not really – it’s a string. It doesn’t have semantics of a date, it’s missing a lot of useful methods that are available in an instance of Date class. You should definitely use Date class and that’s probably obvious for everybody. This is exactly what Primitive Obsession smell is about.

Rails developers can fall into another kind of obsession: framework obsession. Rails gives you folders for models, views, controllers, etc. Everything has to be one of those. Logic is shoehorned into models instead of put in objects unrelated to persistence. Controller methods and helpers grow huge with conditionals and accreted behavior.
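
As a small, hypothetical example of resisting that pull, take a discount rule that has accreted onto an ActiveRecord model and move it into a plain object that knows nothing about persistence:

    # Before: the rule lives on the persistence class.
    class Order < ActiveRecord::Base
      def discount
        total > 100 && customer.loyal? ? total * 0.1 : 0
      end
    end

    # After: a plain Ruby object owns the rule; the model just delegates.
    class DiscountPolicy
      def initialize(total, loyal_customer)
        @total, @loyal = total, loyal_customer
      end

      def amount
        @total > 100 && @loyal ? @total * 0.1 : 0
      end
    end

    class Order < ActiveRecord::Base
      def discount
        DiscountPolicy.new(total, customer.loyal?).amount
      end
    end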

This is partially an education and advocacy problem. Luckily, folks like Avdi Grimm, Corey Haines, Gary Bernhardt, and Steve Klabnik, amongst others, are spreading the word of how to use object oriented principles to design Rails applications without obsessing over the constructs in the Rails framework.

The second part is practice. Once you’ve educated yourself and bought into the notion that a Rails app isn’t all Rails classes, you’ve got to practice and struggle with the concepts. It won’t be pretty the first time; at least, it wasn’t for me. But with time, I’ve come to feel far better about how I design applications using both Rails principles and object-oriented principles.


How to think about organizing folders: don't.

Mountain Lion’s New File System:

Folders tend to grow deeper and deeper. As soon as we have more than a handful of notions, or (beware!) more than one hierarchical level of notions, it gets hard for most brains to build a mental model of that information architecture. While it is common to have several hierarchy levels in applications and file systems, they actually don’t work very well. We are just not smart enough to deal with notional pyramids. Trying to picture notional systems with several levels is like thinking three moves ahead in chess. Everybody believes that they can, but only a few skilled people really can do it. If you doubt this, prove me wrong by telling me what is in each file menu in your browser…

A well-considered essay on the non-recursive design of folders in iCloud, how people think about organizing documents, the emotions of organizing documents, and how it comes together in an app like iCloud. Great reading.