Who are we that make software?
We who spend all of our time in front of a computer involved in the production of software are often quick to pigeon-hole ourselves. You probably self-classify as a developer or designer, maybe an engineer or artist if you got a college degree and think highly of it.
But like many other things, it’s all messy now. I’d say I spend sixty percent of my time doing general “developer” stuff, twenty-five percent doing something one could approximately call “engineering”, and split the rest between marketing, business, and design.
Does self-identifying with any one of these roles limit how we think or approach doing what we do?
Sloppy classifications
Consider these heuristics for placing people into categories:
- You build things that face other people
- You are making things that are constrained by rulesets defined largely by Newtonian mechanics
- You are making things where trade-offs between aesthetics and affordances are made
- Other people build things on top of your things
None of these are useful at all. Were you to provide any one as a definition of what an engineer or designer is, you could probably get some heads nodding. So there’s something appealing about each of these statements. But none of them provide a pleasing definition or guideline for when you’re doing engineering, development, or design.
Part of the answer to these classifications is that we all do everything. Developers strive to build software that fits within the aesthetic of the code around it or their own personal aesthetic. Designers operate within the limitations of human perception and cognition. Engineers are constrained by both of these but will throw either out in a heartbeat to improve upon the efficiencies that are important to the project at hand.
We’re all hybrids
The notion of developing designers and designing developers is by no means new. A few examples:
Consider Kent Beck, renowned for his work building and thinking about the process of building software. He often talks about the design of software, considering trade-offs, aesthetics, and affordances just like a designer does. But he’s also been spending a lot of time recently iterating on businesses, trying out new ideas, and writing about the process and essence of converting an idea into a sustainable business.
Or consider Shaun Inman. He’s writing games as a one-man show. He splits his time between producing the music, drawing pixel art, and coding up collision detection systems. That’s a pretty neat cocktail of talents.
If you’ve ever bikeshedded a design discussion or suggested how a feature might work, you’re a hybrid. Ever refer to yourself as a specializing generalist? That’s a hybrid.
Directed thought
If you’ve self-classified one way or the other, there are little things you might do that have large effects on your thinking. You socialize with those who are similarly classified, use the tools of that classification, and concern yourself with the classic problems that consume those working in your area of specialization.
If you’re not careful, you could box yourself in too much, become too specialized. While there are opportunities for well-chosen, tightly-focused specializations, they are few and far between. Specializing generalists are the order of the day.
Where do we end up if we acknowledge that we’re all hybrids now? Suppose you’re aiming for a balance of sixty percent developer, twenty percent engineer, and twenty percent designer. Is it worth going whole-hog learning Emacs or Photoshop? Or is it better to learn less-capable but lower learning-curve tools like TextMate and Acorn? Should such a person concern themselves with the details of brand design and the implementation of persistent data structures, or is it more important to grasp those topics in a conversational manner?
Is it a better use of Shaun Inman’s time to dissect a Mahler symphony, do an expansive study of pixel art, or review the mechanisms Quake III used for detecting collisions? Is it a better use of Kent Beck’s time to build software and write about that process, to talk to people and integrate their problems into his way of developing software, or iterate on business ideas and share those experiences?
Here's the motivational part
So now that all of this is forehead-smackingly clear (right?!), where do we go from here? Personally, I’m using the idea to guide how much effort I put into teaching myself new tricks. I probably won’t go on a six-month algorithms kick anytime soon, but I might spend six months learning the pros and cons of various database systems or application frameworks. I’d love to spend a month just tinkering with typography, color, layout, and other visual design topics. I probably won’t sweat it if Emacs or Photoshop don’t integrate into my daily work too well, or prove impenetrable to my mind, since those tools imply workflows that aren’t top priority to me.
But that’s me; where should you go? If you don’t already have a good idea of what kind of hybrid you are, start noting how much time you spend on various sorts of tasks and think about whether you’d like to do more or less of them. Then, start taking action to realize a course correction.
You can be whatever kind of hybrid developer you want, it’s just a matter of putting in the time and effort.
Those who think with their fingers
In the past couple of years, I’ve discovered an interesting way to think about programming problems. I try to solve them while I’m away from the computer. Talking through a program, trying to hold all the abstractions in my head, thinking about ways to re-arrange the code to solve the problem at hand whilst walking. The key to this is that I’m activating different parts of my brain to try and solve the problem. We literally think differently when we talk than when we write or type. Sometimes, the notion you need is locked up in other parts of your brain, just waiting for you to activate them.[1]
But sometimes, when I’m doing this thinking, there is something I really miss: the feel of the keyboard under my fingers and the staccato of typing. If there’s an analog to “I love the smell of napalm in the morning”, it’s “I love the sound of spring-loaded keys making codes”.
With the release of the iPad, it’s quite likely that a large percentage of the population can start to eschew the traditional keyboard and pointer that have served them in such a mediocre fashion for so long. On the other hand, you can take my keyboard from my cold dead hands. I really like typing, I’m pretty good at it, and I feel like I get a lot done, pretty quickly, when I’m typing.
Last year, I decided I would give other text editors a try. I stepped out from my TextMate happy place to try Vim. I knew this part of the experiment wasn’t going to work when, after what felt like enough reading, tutorials, and re-learning of Vim, I sat down to tap out some code. And…nothing. I felt like I was operating the editor instead of letting the code flow from my brain, through my fingertips, onto the display. It was as if I had to operate through a secondary device in order to produce code.
Sometimes it seems that developers think with their fingers. I’m not sure what the future of that is. We’ve created environments for programming that are highly optimized for using ten fingers nearly simultaneously. How will that translate to new devices that focus on direct manipulation with at most two or three fingers at a time? Will new input mechanisms like orientation and acceleration take up the slack?
Will we finally let go of the editor-compiler-runtime triumvirate? Attempts to get us out of this rut in the past have proven to be folly. I’m hoping this time around the externalities at the root of past false starts are identified and the useful essence is extracted into something really compelling.
In the meantime, it’s going to be fun trying the different ways to code on an iPad as designers and developers try new ways to make machines do our bidding.
1 If this intrigues you, read Andy Hunt’s Pragmatic Thinking and Learning; it’s excellent!
Kindly cogs in unpleasant machines
Some of the most vilified companies are poorly regarded because of the way they treat their own customers. Think about people complaining about AT&T’s service in New York City or people put on hold for hours by their electric company during a power disruption. Instead of treating these problems as real, telephone and electrical companies treat them as items in a queue to be dealt with as quickly as possible.
And thus, systems are set up that put a premium on throughput. Rewards are given to those who prevent customers from taking the time of the real experts who might fix a problem. Glib voice menus serve as a layer of indirection before you even reach a human. Service disruptions often bear a message tantamount to saying “we know we are not giving you the service we promised, buzz off and wait until we manage to fix it.”
Despite all this, sometimes you come upon a real jewel. Someone who really helps you, who cares about what’s going on. That special person who doesn’t care so much about their average call time, but who takes the time to get you to a happy place.
These are good actors working within a rotten system; kindly cogs in a vicious machine. I could call up AT&T and talk to any number of nice, well-meaning and empathetic people. Sometimes they are empowered and can fix the problem I face. Just as frequently, they want to help but the system they operate in prevents them from doing so, either because it would take too much time or because it is deemed too expensive to put the decision in the hands of those answering the phones.
When I describe them as cogs, it’s almost literal. Though manufacturing in the US is in serious decline, manufacturing-style management is not. Managers routinely and without irony describe people they might hire as “resources” that they can “utilize”. If there were a machine that could pop out customers whose problems had been resolved, managers would “utilize” those “resources” in the same way. Indeed, the majority of information systems attempt to do just this.
My point is that we regard a company like AT&T, Microsoft, Walmart, or Coca-Cola as a homogeneous thing with its own will, priorities, and personality. But companies aren’t homogeneous, because people aren’t.
Here’s to those kindly cogs. Thanks for making our interactions with these unpleasant behemoths just a little less daunting.
A quick RVM rundown
(It so happens I’m presenting this at Dallas.rb tonight. Hopefully it can also be useful to those out in internetland too.)
RVM gives you three things:
- an easy way to use multiple versions of multiple Ruby VMs
- the ability to manage multiple independent sets of gems
- more sanity
First, let's install RVM:
- `gem install rvm`
- `rvm-install`
- follow the directions to integrate with your shell of choice
Now, let's install some Rubies:
- `rvm list known` will show us all the released Rubies that we can install (more on list)
- `rvm list rubies` will show which Rubies we have locally installed
- `rvm install ree-1.8.7` gives me the latest release of the 1.8.7 branch of Ruby Enterprise Edition
- `rvm install jruby` will give me the default release for JRuby
- `rvm use jruby` will switch to JRuby
- `rvm use ree` will give me Ruby Enterprise Edition
- `rvm use ruby-1.8.6` will give me an old and familiar friend
- `rvm use system` will put me back wherever my operating system left me
The other trick that RVM gives us is the ability to switch between different sets of installed gems:
- Each Ruby VM (JRuby, Ruby 1.9, Ruby 1.8, REE) has its own set of gems. This is a fact of life, due to differing APIs and, you know, underlying languages.
- `rvm use ruby-1.9.1` gives you the default Ruby 1.9 gemset
- `rvm use ruby-1.9.1%acme` gives you the gemset for your work with Acme Corp (more on using gemsets)
- `rvm use ruby-1.9.1%wayne` gives you the gemset for your work with Wayne Enterprises
- `rvm use ree%awesome` gives you the gemset for your awesome app
- You can export and import gemsets. This can come in handy to bring new people onboard. No longer will they have to sheepishly install gems on their first day as they work through dependencies you long since forgot about.
Some other handy things to peruse:
I also promised you some extra sanity:
- RVM knows how to compile things, put Rubygems and rake in place, even apply patches and pull from specific tags. You can do more important things, like watch The View or read an eleven part series on pre-draft analysis for the Cowboys.
- RVM lets you isolate different applications you're working on. Got one app that doesn't play nice with Rails 2.x installed? No problem, create a gem environment for that! Stuck in the spider-web of Merb dependencies? Isolate it in its own environment.
- RVM makes multi-platform testing and benchmarking easy. You can easily run your test suite or performance gizmo on whatever Rubies you have installed.
- RVM makes it easy to tinker with esoteric patchlevels and implementations. For instance, feel free to tinker with MagLev or the mput branch of MRI.
A couple other things RVM tastes great with:
- Using homebrew to manage packages instead of MacPorts
- Not using `sudo` to install your gems
- Managing your dotfiles on GitHub
The imperfection of our tools
I enjoy a well-crafted application. I place a high value on attention to detail, have opinions on what design elements make an application work, and try to empathize with the users of applications I’m involved in creating. Applications with a good aesthetic, a few novel but effective design decisions, and sensible workflow find themselves in my Mac’s dock. Those that don’t, do not.
The applications I observe fellow creators using to create often don’t fit into their environment. They don’t fit into the native look-and-feel. They ignore important idioms. Their metaphors are imperfect, the conceptual edges left unfinished.
In part I notice this because as creators we tend to live in a few different applications, and time reveals most shortcomings. But in part, I notice this because the applications are in fact flawed. Flawed to the point that, given my opening words, you would think I would refuse to use them. And indeed, I refuse to use many of the applications that others find completely acceptable for making the same kinds of things I do.
Increasingly, it seems the applications that people who create things live in offer a disjoint user experience. I’m thinking of visual people living in Photoshop or Illustrator or developers living in Emacs or Terminal.app. We use these applications because they best allow us to make what we want and get in our way only a little bit. But, it’s a tenuous relationship at best.
What’s this say about what we’re doing and the boundaries that we operate along? Would we accept the same kinds of shortcomings in say, a calendar application or a clock widget, if those were central to our workflow? That is, is there something about the creative process that leads us to accept sub-perfect tools? Is it inevitable that someone seeking to make new things will find their tools imperfect? Is the quest for ever-more perfect tools part of how we grow as makers?
I hate closing with a bunch of questions, but this piece is but an imperfect tool for discovering an idea.
Ed. Closing could use some work.
Warning: politics
Embedded within the migraine that is American politics are some very interesting ideas. Economics, markets, ethics, freedom, equality, education, transportation, and security are all intriguing topics. Recently, I figured out that the headache comes not from the people or from trying to make the ideas work, but from the politics. Getting a majority of people to agree on anything is a giant coordination problem. When you throw in fearmongering, power struggles, a critically wounded media, and the fact that most people would rather not think deeply about any of this, you end up with the major downer that we face today.
All that said, here are some pithy one-liners about politics:
- If I were part of the Democratic leadership, I’d be wondering how you take the high road in a race to the bottom. And win.
- If I were a Republican, I’d be wondering how to dig myself out of this giant hole I made by winning a race to the bottom.
- If I were a libertarian, I’d be wondering how to convince people that the Tea Party is different from what I believe in.
- If I were a leader of the Tea Party, I’d be wondering what I’m going to do when someone who claims to be a part of the Tea Party blows up a building or goes nuts with an assault rifle.
- If I were a politician, I’d wonder how much I have to compromise my values and what I really wanted to accomplish but still get enough votes to keep my job.
- If I were skeptical of climate change due to human activity, I’d be wondering how I’m going to find a spaceship, because this line of reasoning leads to the conclusion that the Earth is about to become very inhospitable.
- If I were a nihilist, I’d wonder…nothing.
There, have I offended everyone?
Goodbye, gutbombs
Last March my wife and I joined a gym, started working out with a trainer, started trying to eat better, and set out to improve our health. Amazingly, we’ve stuck with it (after two previous failed attempts in years past) and are both in much better shape than we’ve been in for quite some time.
One of my personal reasons for doing this was what I’d been hearing about the correlation between working out, eating better, and brain function. Lots of people who read way more into this than I do had been saying that if you eat better and exercise more, your brain will work better.
I’ve noticed this first hand. The day after my first serious run, my mind was in overdrive. I had lots of great ideas, I worked through them quickly, and I didn’t procrastinate when it came to exploring or realizing them.
Today, I had the opposite experience. I went out for a rather large Tex-Mex lunch. Lots of starch. I got home and took a nap, as is often my wont. Usually I wake up ready to get back to work after my naps. But today was different. My brain was thoroughly sluggish. My body’s energy was going towards digestion, not thought.
I guess this is something of a break-up letter for me. You see, I’ve long enjoyed the large, starchy lunch. But, I’m not sure I can put up with it anymore. If it’s a choice between starchy, tasty lunches and a high-functioning brain, I’m going to have to choose my brain.
Sorry, lunch-time gutbombs. We had a good run, but I’m going to have to quit you for a while.
Give attribute_mapper a try
(For the impatient: skip directly to the `attribute_mapper` gem.)
In the past couple months, I’ve worked on two different projects that needed something like an enumeration, but in their data model. Given the ActiveRecord hammer, they opted to represent the enumeration as a has-many relationship and use a separate table to represent the actual enumeration values.
To a man with an ORM, everything looks like a model
So, their code ended up looking something like this:
```ruby
class Post < ActiveRecord::Base
  belongs_to :status
end

class Status < ActiveRecord::Base
  has_many :tickets
end
```
From there, the statuses table is populated either from a migration or by seeding the data. Either way, they end up with something like this:
```ruby
# Supposing statuses has a name column
Status.create(:name => 'draft')
Status.create(:name => 'reviewed')
Status.create(:name => 'published')
```
With that in place, they can fiddle with posts as such:
```ruby
post.status = Status.find_by_name('draft')
post.status.name # => 'draft'
```
It gets the job done, sure. But, it adds a join to a lot of queries and abuses ActiveRecord. Luckily…
I happen to know of a better way
If what you really need is an enumeration, there’s no reason to throw in another table. You can just store the enumeration values as integers in a database column and then map those back to human-friendly labels in your code.
Before I started at FiveRuns, Marcel Molina and Bruce Williams wrote a plugin that does just this. I extracted it and here we are. It’s called attribute_mapper, and it goes a little something like this:
```ruby
class Post < ActiveRecord::Base
  map_attribute :status, :to => {
    :draft => 1,
    :reviewed => 2,
    :published => 3
  }
end
```
See, no extra table, no need to populate the table, and no extra model. Now, fiddling with posts goes like this:
```ruby
post.status = :draft
post.status # => :draft
post.read_attribute(:status) # => 1
```
Further, we can poke the enumeration directly like so:
```ruby
Post.statuses # => { :draft => 1, :reviewed => 2, :published => 3 }
Post.statuses.keys # => [:draft, :reviewed, :published]
```
Pretty handy, friend.
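If you’re curious how the trick works underneath, the core idea fits in a few lines of plain Ruby. This is only an illustrative sketch of integer-to-symbol mapping; the class and method names here are made up for the example and are not `attribute_mapper`’s actual implementation:

```ruby
# Sketch of the attribute-mapping idea: persist an integer,
# expose a symbol. Illustrative only, not attribute_mapper's code.
class SketchPost
  STATUSES = { :draft => 1, :reviewed => 2, :published => 3 }

  attr_reader :status_id # stands in for the integer database column

  def status=(label)
    @status_id = STATUSES.fetch(label) # raises on an unknown label
  end

  def status
    STATUSES.key(@status_id) # reverse lookup: 1 => :draft
  end
end

post = SketchPost.new
post.status = :draft
post.status    # => :draft
post.status_id # => 1
```

The gem wires the same mapping into ActiveRecord’s attribute machinery so the integer lands in your existing column.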
Hey, that looks familiar
If you’ve read Advanced Rails Recipes, you may find this eerily familiar. In fact, recipe #61, “Look Up Constant Data Efficiently” tackles a similar problem. And in fact, I’m migrating a project away from that approach. Well, partially. I’m leaving two models in place where the “constant” model, Status in this case, has actual code on it; that sorta makes sense, though I’m hoping to find a better way.
But, if you don’t need real behavior on your constants, attribute_mapper is ready to make your domain model slightly simpler.
Just For Fun
This year was my fourth RubyConf. I’ve always come away from RubyConf energized and inspired. But, I’ve yet to follow through on that in a way I found satisfying. I have a feeling I’m not alone in that camp.
This was the first year I’ve given a presentation at RubyConf. At first, I had intended to use this watershed-for-me opportunity to ask whether Ruby was still fun. There’s been a number of “drama moments” since my first RubyConf; I thought it might be worth getting back to my early days of coding with Ruby, when I was exploring and having a great time turning my brain inside out.
As I started researching, it turned out that there are a lot of people having fun with Ruby. Some are doing things like writing games, making music or just tinkering with languages. Others are doing things that only some of us consider fun. Things like hacking on serious virtual machines, garbage collection, and asynchronous IO frameworks.
So, back to my talk. I saw my failure to harness the motivation from what I’d seen at previous RubyConfs as an opportunity to line up some tactics to make sure that, after the conference, I was able to create awesome things, contribute them back to the community, and enjoy every minute of it.
Thus, I came up with a sort of “hierarchy of open source developer needs”. At the bottom is enjoyment; there’s little sense doing open source work if you’re not having fun. Once you’re having fun, you probably want to figure out how to find more time for making codes. Once you’re making more codes, you want to figure out how to get people interested in using your stuff. I’ve taken these three needs and identified several tactics that help me when I find myself in a rut or unable to produce. Call them patterns, practices, whatever; for me, they’re just tricks I resort to when the code isn’t flowing like I want it to.
The talk I ended up with is equal parts highlighting people in the Ruby community who are having fun and highlighting ways to enjoy making things and contributing them back to whatever community you happen to be part of. I hope that I avoided sounding too much like a productivity guru and kept it interesting for the super-technical RubyConf crowd.
If all of this sounds interesting to you, grab the slides (which are slightly truncated, no thanks to Keynote) or watch the recording from the conference itself.
I wrote the proposal for this talk right after Why disappeared himself. His way of approaching code is what inspired me to write a talk about getting back to coding for fun. “Just for Fun” starts with a tribute to Why the Lucky Stiff. The sense of fun and playfulness that Why had is important to the Ruby community. I’ve tried to highlight some of his most interesting playful pieces. And in the end, I can’t say “thanks” enough. Why has inspired me a lot and I’m glad I got to meet him, experience him and learn through his works.
Even if you don’t take a look at my presentation, I strongly urge you to give a look at some of Why’s works and let them inspire you. My favorites are Potion and Camping.
Some other things I mentioned in my talk as interesting or fun:
- Greg Borenstein’s code, writings and tumblings
- Project Euler
- Marc-Andre Cournoyer’s codes and book
- Philip Kromer’s Wukong
The Kindle's sweet spot
Given all the hubbub about Kindles, Nooks and their utility, I thought this bears repeating to a wider audience:
The Kindle is great for books that are just a bag of words, but falls short for anything with important visuals.
I’ve really enjoyed reading on my Kindle over the past year. You can’t beat it for dragging a bunch of books with you on vacation or for reading by the poolside. That said, I don’t use it to read anything technical with diagrams or source code listings. I certainly wouldn’t use it to read anything like Tufte, which is exactly why his books aren’t available on the Kindle. Where the Kindle shines is with pop-science books like Freakonomics and Star Wars novels1.
If you love books and reading, the Kindle is a nice addition to your bibliophilic habit, but it’s no replacement for a well-chosen and varied library.
1 Did I say that out loud? Crap.
Testing declarative code
I’m a little conflicted about how and if one should write test code for declarative code. Let’s say I’m writing a MongoMapper document class. It might look something like this:
```ruby
class Issue
  include MongoMapper::Document

  key :title, String
  key :body, String
  key :created_at, DateTime
end
```
Those `key` calls. Should I write a test for them? In the past, I’ve said “yes” on the principle that I was test driving the code and I needed something to fail in order to add code. Further, the growing ML-style-typing geek within me likes that writing tests for this is somewhat like constructing my own wacky type system via the test suite.
A Shoulda-flavored test might look something like this:
```ruby
class IssueTest < Test::Unit::TestCase
  context 'An issue' do
    should_have_keys :title, :body, :created_at
  end
end
```
Ignoring the recursive rathole that I’ve now jumped into, I’m left with the question: what use is that should_have_keys? Will it help someone better understand Issue at some point in the future? Will it prevent me from naively breaking the software?
Perhaps this is the crux of the biscuit: by adding code to make certain those key calls are present, have I addressed the inherent complexity of my application or have I imposed complexity?
I’m going to experiment with swinging back towards leaving these sorts of declarations alone. The jury is still out.
Texas is its own dumb thing
OK, here’s the deal. Wikipedia has it all wrong.
- Texas is not part of the South. Texas is its own unique thing. Sure we have dumbasses, but they are our dumbasses, wholly distinct from your typical Southern dumbass.
- In Texas, the way you refer to “you all” is “ya’ll”; it’s a contraction of “ya all”.
- They neglected to mention the idiomatic pronunciation of words like “oil” (ah-wllllll) or “wash” (warsh).
Please take it under consideration: Wikipedia is edited by a lot of damn Oklahomans.
Curated awesome, the 1st
A bumpy subway wall, loving things for their Unix-y qualities, Kurt Vonnegut looking dapper, the final movement of Dvorak’s Ninth Symphony (originally his fifth), and a music video by Talib Kweli that makes me want to go get my hair cut. Oh, and I can’t leave out the connection between prototyping physical things and applications operating on large data, Ben Scoffield’s take on database taxonomy and a screed on reading one book per week.
(Editor’s note: I recently took to using Tumblr again. For a while, I’ve been curating interesting stuff here. But Tumblr has evolved into a really fantastic application for doing this. So, my policy going forward is to post my stuff here and curate other people’s awesome stuff over there. That said, I’ll probably do “best-of” posts, like this one, to keep you interested and informed.)
Chance Encounter
Another meme-ish “film” by yours truly. This time, the idea is you do something in five seconds, plus a two second intro and a one second outro. Here’s what I came up with:
This is an adaptation of possibly my favorite improvised joke. I deploy this joke when a conversation is interrupted by some disturbance or noisy distraction. Right before the conversation is going to continue, I say “…and that’s how I met the president and the pope on the same day.” Works pretty well.
Funny aside: I found out about this on the Vimeo site, thinking there was a competition this weekend. Turns out, it was last weekend. Figures.
Dallas could get a pedestrian bridge
Trinity gift is $10 million for pedestrian bridge. Catering to pedestrians, in Dallas? Surely you jest!
I’ll just sit here and quietly hope that the plans for an urban park around the Trinity aren’t derailed by everything that is politics.
Birthing Born to Run
The birth of Born To Run. On the creation and evolution of the song and album. Great read for Bruce-o-philes.
Representing time in our programs
The time problem is not easy to see in today's mainstream languages because there are no constructs that make time explicit. It is implicit in the system. We don't even know that's what we're doing when we use locks to try to make this work.
I’ve been thinking about how we represent time in programs for a while. The problem is that concurrent programs are all about time, but mostly, we only use two mechanisms to represent it in our programs.
The semi-explicit way is through locks. When we insert locks around some bit of code, we are giving hints to the system that things should only proceed in a certain order. This ordering gives us a notion of time, but it’s not horribly comforting.
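To make that concrete, here’s a minimal Ruby sketch (nothing domain-specific assumed) of a lock expressing an ordering constraint. The mutex pins down one tiny slice of time, that no two increments may overlap, but says nothing about the ordering of anything else in the program:

```ruby
require 'thread'

counter = 0
lock = Mutex.new

threads = 10.times.map do
  Thread.new do
    1_000.times do
      # The lock imposes a partial order: increments cannot
      # interleave. Everything outside it remains unordered.
      lock.synchronize { counter += 1 }
    end
  end
end
threads.each(&:join)

counter # => 10000
```

Without the lock, the read-modify-write of `counter += 1` could interleave across threads and lose updates; with it, the system only knows as much about time as we remembered to tell it.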
The completely implicit way we represent time in our programs is by ordering calls to functions and the lines of code within those functions. Line 10 always executes before line 11, etc.
The problem, as Rich Hickey (who has some fantastic ideas about this time stuff) frames it in the passage I quoted, is that time is managed manually and implicitly. When you start writing large concurrent programs, this falls apart. We need better constructs to deal with it.
Think of it like the shift from unstructured programming to structured programming to object-oriented programming. At first we just had a long code listing; no functions, just line after line of code. This became mentally untenable, so we shifted to structured, procedural programming. But some of our data was global and it was often hard to tell what functions belong to what data. So we moved to object-oriented programming and encapsulation.
Hopefully, Rich Hickey, Simon Peyton-Jones, and other functional programming folks can lead us to a nice way to structure our programs around time. I’m eager to have my brain melted by what they conjure up.
Ain't talkin' 'bout the man
Here’s a fun game. “The Government”:
Try something. Every time somebody complains about the evils or failings of “the government,” strike out “the government” and see what results.
Often, simply striking out “government” reveals a completely different, and far more useful, commentary.