Breaking My Habits For Editing Programs

I’m a Unix guy, by upbringing. My first formative experiences in software development were on an early, Linux 1.x version of Debian. I’d used Windows, but always came back to Linux. When OS X got good enough around 10.2, I switched to something that didn’t require so much tinkering, so I could make more useful stuff.

Software development on Unix has long skewed toward tools, languages, and text editors. IDEs and browsers on Unix are a messy, foreign thing (just like everything else in Unix). Thus, I’ve long favored the terminal-and-editor style of development.

I’ve decided that now is the time for me to try something different. I like text editors and directly manipulating text, but I can see why some people feel naked without an IDE. The ability to pop up a level and make a broader-stroked transformation to a program is appealing. Having code navigation and semantic awareness baked in has lots of potential.

I’ve probably said grumbly things about RubyMine in the past, but I think now is the time to give it a go. Worst thing that could happen is that I don’t like it and I go back to the infinite tinkering of Emacs or the 85% perfect experience of TextMate.

I’ll let you know how it goes.


I originally wrote that a few months ago, at the apex of my editor neurosis.

I did give RubyMine a try, and I like some parts of it. Its code navigation is pretty nice, it does an admirable job of integrating with the unique ecosystem of tools that a Ruby developer uses to manage their environment, and it does an excellent job of grokking TDD with test/unit and RSpec. RubyMine is a step in the right direction. I suspect that if I had muscle memory for IntelliJ, it would be the way to go.

But, I have muscle memory for TextMate and Emacs, and I have an affinity for being close to my tools. RubyMine felt one step disconnected from both my muscle memory and my tools. That’s quite an accomplishment; most IDEs feel several steps removed from the tools and seem to discourage developing finger-memory in favor of menu-memory. I’ll probably give RubyMine another try in a year to see how it’s coming along. But in the meantime, it’s great to see that there is a vendor out there tackling the challenge that is tools for Ruby.


A rambling, regurgitated thought on process

Elevator pitch: I’ve found that if you want to divert a productive team into an hour or two of semi-fruitless banter, ask how the team should use Git, Pivotal Tracker, and Capistrano to manage incoming work, verify it, and deploy it to production. In reality, you should ignore all the corner-cases and figure out what will enable you to push really small chunks of work with great frequency.

Ed. What follows isn’t novel, but it was a useful change in perspective for me, so I decided to share.

I’ve been thinking a bit about software processes lately. Despite great variation in how they tell you to do it, most processes seem to focus on doing more stuff faster.

Lately, the notion of doing less has attracted a lot of interest. Lean startups are the new-new thing and Getting Real is the old new thing; both preach getting more done and delivering value by doing less and analyzing the results more.

There are two kinds of “do less” a software developer can engage in. In the past I’ve been a little too focused on how I can take on fewer responsibilities from other parties: literally doing less by scoping down features, putting off decisions, and focusing on things that seem like they really matter. I sometimes feel like I’ve become too eager to do less, making myself something of a cranky coder/slacker. But I digress.

Recently, I’ve been trying to tackle doing less in my habits of creating software. How can I write less code to implement a feature, not in the minimalist sense, but in the “how do I just get it to kinda work” sense? How can I take less time between starting something and getting some form of it out in the wild? How can I make my code less coupled so there are fewer changes to make when I decide it needs to do something else? How can I make it less coupled to data storage so that putting it out requires less deployment effort? How can I make changes that are less likely to cause long-term regressions? How can I make it less effort to roll back bad changes?

When I look through the lens of accomplishing more by doing less, a lot of popular software methodology seems like dead weight. Rather than trying to find a process that addresses every team member’s own scars and affections, both perceived and imaginary, it seems most useful to imagine the smallest ruleset that won’t result in uncontrollable entropy and put it into action. If something starts to hurt, imagine the simplest new rule and put it into play.

The goal, as stated above, is to get to the point where you make really tiny, maybe imperceptible changes, and push them really frequently. Everything that stands in the way is the enemy.


Rails' Next Top Model

Of all the new and reimagined code in Rails 3, ActiveModel and ActiveRelation rank among the most interesting to me. I’m excited that they potentially lower the bar to implementing one’s own data layer. If you’ve got some custom backend or datastore, writing a nice API around it has previously been quite the endeavor. Getting it working is one thing; making it pleasant for application programmers to use is another thing entirely. ActiveModel and ActiveRelation have been extracted from ActiveRecord and make the task of building one’s own model layer far easier.
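
For a flavor of the ActiveModel side, here’s a minimal sketch, assuming nothing but the active_model gem; the Invoice class is a made-up example, not something from Rails:

require 'active_model'

class Invoice
  include ActiveModel::Validations # brings in valid?, errors, and the validation macros

  attr_accessor :number
  validates_presence_of :number
end

invoice = Invoice.new
invoice.valid?          # => false
invoice.errors[:number] # => ["can't be blank"]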

My presentation for RailsConf 2010 focused on what ActiveModel and ActiveRelation provide: how one can use them to write cleaner code in domain models, how to make your models feel more like ActiveRecord objects, and how to use ActiveRelation to build your own persistence layers.
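
And for a taste of the ActiveRelation (Arel) side, a sketch of its relational algebra; the users table and its columns here are hypothetical:

require 'arel'

users = Arel::Table.new(:users)
query = users.where(users[:name].eq('bram')).project(users[:id])
query.to_sql # => roughly: SELECT "users"."id" FROM "users" WHERE "users"."name" = 'bram'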

I hope you find the slides educational. Further, I’ve posted the examples on GitHub so you can play along at home. If you’re particularly interested in ActiveRelation, I hope you’ll find the examples useful as a starting point to using that library.


Make More Awesome

Over the past year, I’ve been trying to create more stuff, some of which I hope will turn out awesome. Largely this is an ongoing sort of thing: I try things, I learn a little, I keep at it. Some things I try work, others don’t. I try to make a habit out of the things that have worked out well, and I make a note of things that seem to help me get out of a funk when I’m not making as much as I’d like or having trouble putting in the hours I think are necessary to make cool stuff.

I first distilled these ideas into a talk I did at RubyConf 2009 on having more fun while coding. But I didn’t realize it at the time; I was just sharing some ideas about how to have fun. For me, one of the ways to have more fun is to make more time to have fun. But that’s just the beginning of making more awesome. I found I had to make more time, and develop a bunch of other habits.

For Big Design 2010, I honed the ideas, habits, and tricks I’ve found useful to me into a presentation on how to get off the couch, start making more things, and make some of those things awesome. I hope you’ll find it useful and perhaps start making lots of awesome stuff.


The art of making a useful todo list

I have a tenuous relationship with todo lists. Rather than helping me focus on getting stuff done, they usually only give me something to tinker with and feel better about having slightly mitigated my procrastination.

I’ve used Kinkless, OmniFocus, and TaskPaper. The latter is helping more, due somewhat to its more spartan nature. But mostly, as the years have gone on, I’ve simply grown wiser and more pragmatic about the extent to which a todo list app can improve my life.

In that wisdom, I’ve eschewed working from my todo list much lately. Instead, I’ve just glanced at it a few times a week to make sure I’m not forgetting something crucial.

For no particular reason, I went back to working from a list last weekend. To my surprise, it worked out for the better. Things got done, a feeling of accomplishment was achieved, stress levels were down, greater relaxation was had.

I suspect this success was due to a confluence of factors:

  • I made sure my list was minimally aspirational; everything on the list was something I could achieve in less than an hour
  • The list was focused on what I needed to accomplish; if something was nice-to-have, I did it in the rest periods between items on the list
  • A little bit of novelty can’t hurt; working from a list after a few weeks not doing so is perhaps just different enough to yield temporary productivity gains

All that said, I’m starting to think there’s an art to making one’s list of things to do on any given day. Something that is achievable, but moves the ball. Something not too aspirational, but worth doing. Fresh, but with some long-standing items that feel good to knock off.

Perhaps a good todo list is equal parts excitement, tedium, and accomplishment.


A Personal Journey in Editing Programs

Over the past year or so, I’ve spent a bit of time tinkering with text editors. It feels like I woke up one morning and was simply dissatisfied with what I was currently using. It certainly wasn’t disgust, because I largely like the tools I use. But I felt it was likely there were better tools out there, and that I should give them an honest try. The grass on the other side might be greener.

So I went through liaisons with VIM and a couple with Emacs. The first left me feeling disconnected, like I had ideas but I couldn’t even make them appear on the screen, let alone run. My experiences with Emacs are better, but have always felt somewhat awkward. I suspect that at some point, I could be a full-blown Emacs person, but right now, I’m just an aspirational Emacs guy.

I’ve used a myriad of editors in my time. I started with jed, a small-ish Emacs clone, then graduated to full-on Emacs. By way of viper, I migrated to VIM. Then I picked up BBEdit, because I wanted more direct manipulation of text, and less of a never-ending learning curve. I was pretty quick to jump on TextMate, seeing a great fusion of the Mac and Unix aesthetics.

I still think TextMate is as close as it gets to an ideal situation. And yet, it falls short enough that I tinker with VIM and Emacs, two editors that can easily be labeled powerful but extremely lacking in the visual and conceptual aesthetics departments.

I suspect my mismatch with most of these editors is due to something about my habits, the way I like to work with programs, and the kinds of programs I work with. Emacs’ notions of major and minor modes don’t play well with markup files that have three different languages embedded within. VIM is efficient for those who have internalized it and hostile to everyone else. TextMate is easy to get started with and pleasant to extend up to a point, but seems to have fallen victim to its creator’s perfectionism.

For a long time, I’ve adhered to the philosophy that one should choose one text editor and learn as much about it as possible. Increasingly, I’m starting to think that there is no panacea, no "one true editor". TextMate is great for working on web apps and easy to extend. Emacs is really wonderful for functional programming languages, especially those with REPLs and languages that are LISP-shaped. And some kinds of development demand an environment more like Smalltalk browsers than text editors.

My journey toward editing bliss probably won’t end anytime soon. In the future, I suspect it’s going to be a spectrum of tools depending on the work I’m doing. It could be that the days of personal text-editing monoculture are over.


A Brief Survey of the History of Editing Programs

Software developers spend a lot of time working with code. Over the past half-century of doing so, we’ve invented a lot of mechanisms that make that task easier and reduce the amount of friction between an idea and a computer executing that idea.

A quick review of the approaches to working with code reveals some areas where tool makers have devoted a lot of effort:

  • Editors — we’ve gone from paper tape to punch cards, from flipping switches on a panel to editing files one line at a time, and finally to the point where (most of us) work with one or more files by directly manipulating the text with a combination of keystrokes. When you get down to it, the experience of editing code in Emacs, TextMate, Visual Studio, or Eclipse is quite similar.
  • Environments — many software developers today use some kind of full-screen tool that puts a friendly face on all the tools they need to create, edit, and deploy their software. These days, it is most commonly an IDE like Visual Studio or Xcode. But way back in the day, Smalltalk people used a clever piece of technology called a browser, and it’s not too much of a stretch to call Emacs a Lisp browser.
  • Tools — we’ve come a long way in fifty years. We started writing machine code directly, then we grew assemblers, linkers, and compilers. Today, few developers will go through their career without using a debugger, profiler, lint tool, or applying an automated refactoring. There are a lot of development tasks that can be offloaded directly to the computer, leaving the programmer free to worry about important things like why anyone would ever choose three-space tabs.

Finally, let’s not forget that the past half-century of software development has seen more than its fair share of programming languages. These languages express a myriad of ideas and the ones that combine them nicely and support their users well have left a lasting impression in the evolution of how we express our ideas so that computers can run them.

It seems we are nearing an inflection point with regard to how we use tools to create programs. As the keyboard-and-mouse give way to the display-and-finger(s), there’s an opportunity to interact with and modify programs in new ways. I suspect that the future of editing programs has something to do with merging editors with environments and making the tools as pervasively used as typing is today.


Who are we that make software?

We who spend all of our time in front of a computer involved in the production of software are often quick to pigeon-hole ourselves. You probably self-classify as a developer or designer, maybe an engineer or artist if you got a college degree and think highly of it.

But like many other things, it’s all messy now. I’d say I spend sixty percent of my time doing general “developer” stuff, twenty-five percent doing something one could approximately call “engineering”, and split the rest between marketing, business, and design.

Does self-identifying with any one of these roles limit how we think or approach doing what we do?

Sloppy classifications

Consider these heuristics for placing people into categories:

  • You build things that face other people
  • You are making things that are constrained by rulesets defined largely by Newtonian mechanics
  • You are making things where trade-offs between aesthetics and affordances are made
  • Other people build things on top of your things

None of these is useful on its own. Were you to offer any one as a definition of what an engineer or designer is, you could probably get some heads nodding; there’s something appealing about each of these statements. But none of them provides a pleasing definition of, or guideline for, when you’re doing engineering, development, or design.

Part of the answer to these classifications is that we all do everything. Developers strive to build software that fits within the aesthetic of the code around it or their own personal aesthetic. Designers operate within the limitations of human perception and cognition. Engineers are constrained by both of these but will throw either out in a heartbeat to improve upon the efficiencies that are important to the project at hand.

We’re all hybrids

The notion of developing designers and designing developers is by no means new.

Consider Kent Beck, renowned for his work building and thinking about the process of building software. He often talks about the design of software, considering trade-offs, aesthetics, and affordances just like a designer does. But he’s also been spending a lot of time recently iterating on businesses, trying out new ideas, and writing about the process and essence of converting an idea into a sustainable business.

Or consider Shaun Inman. He’s writing games as a one-man show. He splits his time between producing the music, drawing pixel art, and coding up collision detection systems. That’s a pretty neat cocktail of talents.

If you’ve ever bikeshedded a design discussion or suggested how a feature might work, you’re a hybrid. Ever refer to yourself as a specializing generalist? That’s a hybrid.

Directed thought

If you’ve self-classified one way or the other, there are little things you might do that have large effects on your thinking. You socialize with those who are similarly classified, use the tools of that classification, and concern yourself with the classic problems that consume those working in your area of specialization.

If you’re not careful, you could box yourself in too much, become too specialized. While there are opportunities for well-chosen, tightly-focused specializations, they are few and far between. Specializing generalists are the order of the day.

Where do we end up if we acknowledge that we’re all hybrids now? Suppose you’re aiming for a balance of sixty percent developer, twenty percent engineer, and twenty percent designer. Is it worth going whole-hog learning Emacs or Photoshop? Or is it better to learn less-capable but lower-learning-curve tools like TextMate and Acorn? Should such a person concern themselves with the details of brand design and the implementation of persistent data structures, or is it more important to grasp those topics in a conversational manner?

Is it a better use of Shaun Inman’s time to dissect a Mahler symphony, do an expansive study of pixel art, or review the mechanisms Quake III used for detecting collisions? Is it a better use of Kent Beck’s time to build software and write about that process, to talk to people and integrate their problems into his way of developing software, or iterate on business ideas and share those experiences?

Here's the motivational part

So now that all of this is forehead-smackingly clear (right?!), where do we go from here? Personally, I’m using the idea to guide how much effort I put into teaching myself new tricks. I probably won’t go on a six-month algorithms kick anytime soon, but I might spend six months learning the pros and cons of various database systems or application frameworks. I’d love to spend a month just tinkering with typography, color, layout, and other visual design topics. I probably won’t sweat it if Emacs or Photoshop don’t integrate into my daily work too well, or prove impenetrable to my mind, since those tools imply workflows that aren’t top priority to me.

But that’s me; where should you go? If you don’t already have a good idea of what kind of hybrid you are, start noting how much time you spend on various sorts of tasks and think about whether you’d like to do more or less of them. Then, start taking action to realize a course correction.

You can be whatever kind of hybrid developer you want; it’s just a matter of putting in the time and effort.


Those who think with their fingers

In the past couple of years, I’ve discovered an interesting way to think about programming problems. I try to solve them while I’m away from the computer. Talking through a program, trying to hold all the abstractions in my head, thinking about ways to re-arrange the code to solve the problem at hand whilst walking. The key to this is that I’m activating different parts of my brain to try and solve the problem. We literally think differently when we talk than when we write or type. Sometimes, the notion you need is locked up in other parts of your brain, just waiting for you to activate them.[1]

But sometimes, when I’m doing this thinking, there is something I really miss: the feel of the keyboard under my fingers and the staccato of typing. If there’s an analog to “I love the smell of napalm in the morning”, it’s “I love the sound of spring-loaded keys making codes”.

With the release of the iPad, it’s quite likely that a large percentage of the population can start to eschew the traditional keyboard and pointer that have served them in such a mediocre fashion for so long. On the other hand, you can take my keyboard from my cold dead hands. I really like typing, I’m pretty good at it, and I feel like I get a lot done, pretty quickly, when I’m typing.

Last year, I decided I would give other text editors a try. I stepped out from my TextMate happy place to try VIM. I knew this part of the experiment wasn’t going to work when, after what felt like enough reading, tutorials, and re-learning of VIM, I sat down to tap out some code. And…nothing. I felt like I was operating the editor instead of letting the code flow from my brain, through my fingertips, onto the display. It was as if I had to work through a secondary device in order to produce code.

Sometimes it seems that developers think with their fingers. I’m not sure what the future of that is. We’ve created environments for programming that are highly optimized for using ten fingers nearly simultaneously. How will that translate to new devices that focus on direct manipulation with at most two or three fingers at a time? Will new input mechanisms like orientation and acceleration take up the slack?

Will we finally let go of the editor-compiler-runtime triumvirate? Attempts to get us out of this rut in the past have proven to be folly. I’m hoping this time around the externalities at the root of past false starts are identified and the useful essence is extracted into something really compelling.

In the meantime, it’s going to be fun trying the different ways to code on an iPad as designers and developers try new ways to make machines do our bidding.

1 If this intrigues you, read Andy Hunt’s Pragmatic Thinking and Learning; it’s excellent!


Kindly cogs in unpleasant machines

Some of the most vilified companies are poorly regarded because of the way they treat their own customers. Think about people complaining about AT&T’s service in New York City or people put on hold for hours by their electric company during a power disruption. Instead of treating these problems as real, telephone and electrical companies treat them as items in a queue to be dealt with as quickly as possible.

And thus, systems are set up that put a premium on throughput. Rewards are given to those who prevent customers from taking the time of the real experts who might fix a problem. Glib voice menus serve as a layer of indirection before you even reach a human. Service disruptions often bear a message tantamount to saying “we know we are not giving you the service we promised, buzz off and wait until we manage to fix it.”

Despite all this, sometimes you come upon a real jewel. Someone who really helps you, who cares about what’s going on. That special person who doesn’t care so much about their average call time, but who takes the time to get you to a happy place.

These are good actors working within a rotten system; kindly cogs in a vicious machine. I could call up AT&T and talk to any number of nice, well-meaning and empathetic people. Sometimes they are empowered and can fix the problem I face. Just as frequently, they want to help but the system they operate in prevents them from doing so, either because it would take too much time or because it is deemed too expensive to put the decision in the hands of those answering the phones.

When I describe them as cogs, it’s almost literal. Though manufacturing in the US is in serious decline, manufacturing-style management is not. Managers routinely and without irony describe people they might hire as “resources” that they can “utilize”. If there were a machine that could pop out customers whose problems had been resolved, managers would “utilize” those “resources” in the same way. Indeed, the majority of information systems attempt to do just this.

My point is that we regard a company like AT&T, Microsoft, Walmart, or Coca-Cola as a homogeneous thing with its own will, priorities, and personality. But companies aren’t homogeneous, because people aren’t.

Here’s to those kindly cogs. Thanks for making our interactions with these unpleasant behemoths just a little less daunting.


A quick RVM rundown

(It so happens I’m presenting this at Dallas.rb tonight. Hopefully it can also be useful to those out in internetland too.)

RVM gives you three things:

  • an easy way to use multiple versions of multiple Ruby VMs
  • the ability to manage multiple independent sets of gems
  • more sanity

First, let's install RVM:

  • gem install rvm
  • rvm-install
  • follow the directions to integrate with your shell of choice

Now, let's install some Rubies:

  • rvm list known will show us all the released Rubies that we can install (more on list)
  • rvm list rubies will show which Rubies we have locally installed
  • rvm install ree-1.8.7 gives me the latest release of the 1.8.7 branch of Ruby Enterprise Edition
  • rvm install jruby will give me the default release for JRuby
  • rvm use jruby will switch to JRuby
  • rvm use ree will give me Ruby Enterprise Edition
  • rvm use ruby-1.8.6 will give me an old and familiar friend
  • rvm use system will put me back wherever my operating system left me

The other trick that RVM gives us is the ability to switch between different sets of installed gems:

  • Each Ruby VM (JRuby, Ruby 1.9, Ruby 1.8, REE) has its own set of gems. This is a fact of life, due to differing APIs and, you know, underlying languages.
  • rvm use ruby-1.9.1 gives you the default Ruby 1.9 gemset
  • rvm use ruby-1.9.1%acme gives you the gemset for your work with Acme Corp (more on using gemsets)
  • rvm use ruby-1.9.1%wayne gives you the gemset for your work with Wayne Enterprises
  • rvm use ree%awesome gives you the gemset for your awesome app
  • You can export and import gemsets. This can come in handy to bring new people onboard. No longer will they have to sheepishly install gems on their first day as they work through dependencies you long since forgot about.
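
Putting those together, a session might go like this (the gemset names are the hypothetical ones from above):

rvm install ree-1.8.7     # compile and install REE
rvm use ree-1.8.7%acme    # switch to REE with the Acme Corp gemset
gem install rails         # lands in the acme gemset only
rvm use ree-1.8.7%wayne   # a clean slate for Wayne Enterprises work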

I also promised you some extra sanity:

  • RVM knows how to compile things, put Rubygems and rake in place, even apply patches and pull from specific tags. You can do more important things, like watch The View or read an eleven-part series on pre-draft analysis for the Cowboys.
  • RVM lets you isolate different applications you're working on. Got one app that doesn't play nice with Rails 2.x installed? No problem, create a gem environment for that! Stuck in the spider-web of Merb dependencies? Isolate it in its own environment.
  • RVM makes multi-platform testing and benchmarking easy. You can easily run your test suite or performance gizmo on whatever Rubies you have installed.
  • RVM makes it easy to tinker with esoteric patchlevels and implementations. For instance, feel free to tinker with MagLev or the mput branch of MRI.

A couple other things RVM tastes great with:

  • Using homebrew to manage packages instead of MacPorts
  • Not using sudo to install your gems
  • Managing your dotfiles on GitHub

The imperfection of our tools

I enjoy a well-crafted application. I place a high value on attention to detail, have opinions on what design elements make an application work, and try to empathize with the users of applications I’m involved in creating. Applications with a good aesthetic, a few novel but effective design decisions, and sensible workflow find themselves in my Mac’s dock. Those that don’t, do not.

The applications I observe fellow creators using to create often don’t fit into their environment. They don’t fit into the native look-and-feel. They ignore important idioms. Their metaphors are imperfect, the conceptual edges left unfinished.

In part I notice this because as creators we tend to live in a few different applications, and time reveals most shortcomings. But in part, I notice this because the applications are in fact flawed. Flawed to the point that, given my opening words, you would think I would refuse to use them. And indeed, I refuse to use many of the applications that others find completely acceptable for making the same kinds of things I do.

Increasingly, it seems the applications that people who create things live in offer a disjoint user experience. I’m thinking of visual people living in Photoshop or Illustrator or developers living in Emacs or Terminal.app. We use these applications because they best allow us to make what we want and get in our way only a little bit. But, it’s a tenuous relationship at best.

What’s this say about what we’re doing and the boundaries that we operate along? Would we accept the same kinds of shortcomings in say, a calendar application or a clock widget, if those were central to our workflow? That is, is there something about the creative process that leads us to accept sub-perfect tools? Is it inevitable that someone seeking to make new things will find their tools imperfect? Is the quest for ever-more perfect tools part of how we grow as makers?

I hate closing with a bunch of questions, but this piece is but an imperfect tool for discovering an idea.

Ed. Closing could use some work.


Warning: politics

Embedded within the migraine that is American politics are some very interesting ideas. Economics, markets, ethics, freedom, equality, education, transportation, and security are all intriguing topics. Recently, I figured out that the headache comes not from the people or from trying to make the ideas work, but from the politics itself. Getting a majority of people to agree on anything is a giant coordination problem. When you throw in fearmongering, power struggles, a critically wounded media, and the fact that most people would rather not think deeply about any of this, you end up with the major downer we face today.

All that said, here are some pithy one-liners about politics:

  • If I were part of the Democratic leadership, I’d be wondering how you take the high road in a race to the bottom. And win.

  • If I were a Republican, I’d be wondering how to dig myself out of this giant hole I made by winning a race to the bottom.

  • If I were a libertarian, I’d be wondering how to convince people that the Tea Party is different from what I believe in.

  • If I were a leader of the Tea Party, I’d be wondering what I’m going to do when someone who claims to be a part of the Tea Party blows up a building or goes nuts with an assault rifle.

  • If I were a politician, I’d wonder how much I have to compromise my values and what I really wanted to accomplish but still get enough votes to keep my job.

  • If I were skeptical of climate change due to human activity, I’d be wondering how I’m going to find a spaceship, because this line of reasoning leads to the conclusion that the Earth is about to become very inhospitable.

  • If I were a nihilist, I’d wonder…nothing.

There, have I offended everyone?


Goodbye, gutbombs

Last March my wife and I joined a gym, started working out with a trainer, started trying to eat better, and thusly set out to improve our health. Amazingly, we’ve stuck with it (after two previous failed attempts in years past) and are both in much better shape than we’ve been in for quite some time.

One of my personal reasons for doing this was what I’d been hearing about the correlation between working out, eating better, and brain function. Lots of people far better-read on the subject than me have been saying that if you eat better and exercise more, your brain will work better.

I’ve noticed this first hand. The day after my first serious run, my mind was in overdrive. I had lots of great ideas, I worked through them quickly, and I didn’t procrastinate when it came to exploring or realizing them.

Today, I had the opposite experience. I went out for a rather large Tex-Mex lunch. Lots of starch. I got home and took a nap, as is often my wont. Usually I wake up ready to get back to work after my naps. But today was different. My brain was thoroughly sluggish. My body’s energy was going towards digestion, not thought.

I guess this is something of a break-up letter for me. You see, I’ve long enjoyed the large, starchy lunch. But I’m not sure I can put up with it anymore. If it’s a choice between starchy, tasty lunches and a high-functioning brain, I’m going to have to choose my brain.

Sorry, lunch-time gutbombs. We had a good run, but I’m going to have to quit you for a while.


Give attribute_mapper a try

(For the impatient: skip directly to the `attribute_mapper` gem.)

In the past couple months, I’ve worked on two different projects that needed something like an enumeration, but in their data model. Given the ActiveRecord hammer, they opted to represent the enumeration as a has-many relationship and use a separate table to represent the actual enumeration values.

To a man with an ORM, everything looks like a model

So, their code ended up looking something like this:

class Post < ActiveRecord::Base
  belongs_to :status
end

class Status < ActiveRecord::Base
  has_many :posts
end

From there, the statuses table is populated either from a migration or by seeding the data. Either way, they end up with something like this:

# Supposing statuses has a name column
Status.create(:name => 'draft')
Status.create(:name => 'reviewed')
Status.create(:name => 'published')

With that in place, they can fiddle with posts as such:

post.status = Status.find_by_name('draft')
post.status.name # => 'draft'

It gets the job done, sure. But, it adds a join to a lot of queries and abuses ActiveRecord. Luckily…

I happen to know of a better way

If what you really need is an enumeration, there’s no reason to throw in another table. You can just store the enumeration values as integers in a database column and then map those back to human-friendly labels in your code.
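
In plain Ruby, that mapping is just a hash; a throwaway sketch (not the gem’s API) to show the idea:

STATUSES = { :draft => 1, :reviewed => 2, :published => 3 }

STATUSES[:draft]   # => 1, the integer that goes in the column
STATUSES.invert[1] # => :draft, the label your code works with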

Before I started at FiveRuns, Marcel Molina and Bruce Williams wrote a plugin that does just this. I extracted it and here we are. It’s called attribute_mapper, and it goes a little something like this:

class Post < ActiveRecord::Base
  map_attribute :status, :to => {
    :draft => 1,
    :reviewed => 2,
    :published => 3
  }
end

See, no extra table, no need to populate the table, and no extra model. Now, fiddling with posts goes like this:

post.status = :draft
post.status # => :draft
post.read_attribute(:status) # => 1

Further, we can poke the enumeration directly like so:

Post.statuses # => { :draft => 1, :reviewed => 2, :published => 3 }
Post.statuses.keys # => [:draft, :reviewed, :published]

Pretty handy, friend.

Hey, that looks familiar

If you’ve read Advanced Rails Recipes, you may find this eerily familiar. In fact, recipe #61, “Look Up Constant Data Efficiently”, tackles a similar problem. Indeed, I’m migrating a project away from that approach. Well, partially: I’m leaving two models in place where the “constant” model, Status in this case, has actual code on it; that sorta makes sense, though I’m hoping to find a better way.

But, if you don’t need real behavior on your constants, attribute_mapper is ready to make your domain model slightly simpler.


Just For Fun

This year was my fourth RubyConf. I’ve always come away from RubyConf energized and inspired. But, I’ve yet to follow through on that in a way I found satisfying. I have a feeling I’m not alone in that camp.

This was the first year I’ve given a presentation at RubyConf. At first, I intended to use this watershed-for-me opportunity to ask whether Ruby was still fun. There have been a number of “drama moments” since my first RubyConf; I thought it might be worth getting back to my early days of coding with Ruby, when I was exploring and having a great time turning my brain inside out.

As I started researching, it turned out that there are a lot of people having fun with Ruby. Some are doing things like writing games, making music or just tinkering with languages. Others are doing things that only some of us consider fun. Things like hacking on serious virtual machines, garbage collection, and asynchronous IO frameworks.

So, back to my talk. I saw my failure to harness the motivation from previous RubyConfs as an opportunity to line up some tactics to make sure that after the conference, I was able to create awesome things, contribute them back to the community, and enjoy every minute of it.

Thus, I came up with a sort of “hierarchy of open source developer needs”. At the bottom is enjoyment; there’s little sense doing open source work if you’re not having fun. Once you’re having fun, you probably want to figure out how to find more time for making codes. Once you’re making more codes, you want to figure out how to get people interested in using your stuff. I’ve taken these three needs and identified several tactics that help me when I find myself in a rut or unable to produce. Call them patterns, practices, whatever; for me, they’re just tricks I resort to when the code isn’t flowing like I want it to.

The talk I ended up with is equal parts highlighting people in the Ruby community who are having fun and highlighting ways to enjoy making things and contributing them back to whatever community you happen to be part of. I hope I avoided sounding too much like a productivity guru and kept it interesting for the super-technical RubyConf crowd.

If all of this sounds interesting to you, grab the slides (which are slightly truncated, no thanks to Keynote) or watch the recording from the conference itself.


I wrote the proposal for this talk right after Why disappeared himself. His way of approaching code is what inspired me to write a talk about getting back to coding for fun. “Just for Fun” starts with a tribute to Why the Lucky Stiff. The sense of fun and playfulness that Why had is important to the Ruby community. I’ve tried to highlight some of his most interesting playful pieces. And in the end, I can’t say “thanks” enough. Why has inspired me a lot and I’m glad I got to meet him, experience him and learn through his works.

Even if you don’t take a look at my presentation, I strongly urge you to give a look at some of Why’s works and let them inspire you. My favorites are Potion and Camping.



The Kindle's sweet spot

Given all the hubbub about Kindles, Nooks and their utility, I thought this bears repeating to a wider audience:

The Kindle is great for books that are just a bag of words, but falls short for anything with important visuals.

I’ve really enjoyed reading on my Kindle over the past year. You can’t beat it for dragging a bunch of books with you on vacation or for reading by the poolside. That said, I don’t use it to read anything technical with diagrams or source code listings. I certainly wouldn’t use it to read anything like Tufte, which is exactly why his books aren’t available on the Kindle. Where the Kindle shines is with pop-science books like Freakonomics and Star Wars novels1.

If you love books and reading, the Kindle is a nice addition to your bibliophilic habit, but it’s no replacement for a well-chosen and varied library.

1 Did I say that out loud? Crap.


Testing declarative code

I’m a little conflicted about whether and how one should write tests for declarative code. Let’s say I’m writing a MongoMapper document class. It might look something like this:

class Issue
  include MongoMapper::Document

  key :title, String
  key :body, String
  key :created_at, DateTime
end
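
Those key declarations give the document typed accessors; a quick, hypothetical usage:

issue = Issue.new(:title => 'Leaky faucet', :body => 'Drip, drip.')
issue.title # => "Leaky faucet"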

Those key calls. Should I write a test for them? In the past, I’ve said “yes” on the principle that I was test-driving the code and I needed something to fail in order to add code. Further, the growing ML-style-typing geek within me likes that writing tests for this is somewhat like constructing my own wacky type system via the test suite.

A Shoulda-flavored test might look something like this:

class IssueTest < Test::Unit::TestCase
  context 'An issue' do
    should_have_keys :title, :body, :created_at
  end
end

Ignoring the recursive rathole that I’ve now jumped into, I’m left with the question: what use is that should_have_keys? Will it help someone better understand Issue at some point in the future? Will it prevent me from naively breaking the software?

Perhaps this is the crux of the biscuit: by adding code to make certain those key calls are present, have I addressed the inherent complexity of my application, or have I imposed complexity of my own?

I’m going to experiment with swinging back towards leaving these sorts of declarations alone. The jury is still out.


Texas is its own dumb thing

Southern American English

OK, here’s the deal. Wikipedia has it all wrong.

  1. Texas is not part of the South. Texas is its own unique thing. Sure we have dumbasses, but they are our dumbasses, wholly distinct from your typical Southern dumbass.
  2. In Texas, the way you refer to “you all” is “ya’ll”; it’s a contraction of “ya all”.
  3. They neglected to mention the idiomatic pronunciation of words like “oil” (ah-wllllll) or “wash” (warsh).

Please take it under consideration: Wikipedia is edited by a lot of damn Oklahomans.