These are computers, I know this

An encouraging thing happened to me last year. I was faced with a mystery involving how a bit of application code was interacting with ActiveRecord. It seemed like we were calling ActiveRecord properly, but the query wasn’t coming out quite right. In years past, this would have likely stymied me; productivity lost! But this time, I had the gumption to dive into the mystery and the tooling to help me navigate the murky waters of an object-relational mapper’s internals. Armed with a little bit of confidence, the ability to click on a method call to jump into its definition, and a little bit of experimenting in a Rails console, I figured out the problem. Success!

Now, when I’m faced with a weird situation, I tell myself “these are computers, I know this!”1 and dive in.

This feeling is due to a few ways I leveled up my skills over the past few years. A little improvement in my tooling helped me acquire the “quickly jump to the definition of this function/method/etc.” skill. The big level up was the confidence that I could figure this mystery out, that there was likely an easy explanation lurking just behind the curtain, and that if I had the gumption to pursue it, I could figure all this out.

I feel like any developer of any background and experience can level up these skills!

I. Finding courage in cartoon foxes and stick figures

Over the past few years, I have been able to dive into more curious bugs, behaviors, and domain logic because I was encouraged by off-the-wall, esoteric forms of technical discourse. It probably started off years ago with the silly cartoon foxes of Why’s Poignant Guide to Ruby. Check it out if you haven’t!

Fifteen years ago (yikes!), when Ruby was gaining momentum and Rails was, to most people, a demo screencast, this approach to teaching a language was controversial. “Programming is serious!”, some would say. They claimed there was no room for flippant catchphrases like “chunky bacon,” sketchy cartoons, or programs not meant for “production-fortified commercial codebases”. Turns out, they were wrong — some folks find playful texts a much easier way to learn deep topics like Haskell, Erlang, or even economics.

Fast forward to now, and folks like Julia Evans, the Base CS podcast, and illustrated.dev are once again chipping away at the notion that computers are all serious business requiring a stiff upper lip and stereotypically masculine dedication to mathematical rigidity. I have learned more about datacenter networking and containerized deployment from Evans’ stick figures than from any manual page, reference doc, or even the classic textbooks of W. Richard Stevens.

In other words: These are computers. I know this. And from there, I can figure out almost anything.

II. Knowing I’ve solved bigger mysteries than this

Allow me to get self-involved for a moment.

I recharge my batteries not with side hustles or open source contributions2 but by tinkering with side projects and learning new technologies. I’ve long had a growth mindset. I’ve benefitted a lot by turning that energy and curiosity into something I could apply when I get stuck on less esoteric work mysteries like legacy-to-me code and framework code.

I was fortunate to study computer science in university, and a little lucky that the program at my university was very average. My courses pushed me to figure out topics I would have otherwise skipped or found too intimidating, like discrete math, computer architecture, or compiler construction. I came out with the ability to teach myself the topics I found interesting and immediately practical, like Linux, programming languages, and how to actually build software.

In my twenties, when I was still full of energy and had some margin time to pursue interesting ideas, I took more time to self-learn things which challenged or intrigued me. I learned how Ruby works, the basics of Haskell, and went deeper into databases and distributed systems than many developers outside of mega-corporations ever need to.

By my early thirties, I could look at a wide spectrum of technologies and feel confident (perhaps unearned) that I could “figure it out” if necessary in a professional context. Out-of-order computer architectures, database index and query strategies, distributed consensus, managed runtime trade-offs, or implementing binary addition from first principles all felt like big challenges, but something I could participate in a discussion of, if not attempt to implement on my own. A growth mindset in my twenties, learning tricky topics on my own, and a few of the courses I didn’t think I’d use in college all paid off!

Having these topics “under my belt” makes tackling many (but not all) challenges feel achievable. I feel like this is down to the curiosity to learn a bunch of topics on my own and the optimism that I’ve learned plenty of tricky topics and can learn more. Of all the things I would encourage a younger-me to continue doing, challenging myself to figure out big, audacious mysteries is amongst the most important.

III. Believing there are no evil spirits

In code, there are no boogeymen or little demons conspiring to confuse me. The vast majority of the logic and behavior of any computer, program, or system thereof is explainable. While it’s tempting and enjoyable to ascribe personalities, stories, motivations, and drama to inanimate systems, they do not actually exist.

Most systems are linear, predictable, and some kind of deterministic. Things don’t happen magically; they only happen for reasons I don’t yet understand. There are no evil demons or spirits, only processes or circumstances which my mental model does not yet accommodate.

The corollary to this is that there are very few mysteries I can’t solve with sufficient time and determination. The solution might be weird, completely different from what I first thought, not what I’d hoped to learn, or involve inputs outside what I considered the domain of the problem. But the answer exists!

It’s tempting to say “this job just dies overnight and we restart it” as though that were nature and we have no agency over the process. But, I totally do have control over the process and can look into why it’s dying overnight! The only thing stopping me is me, and finding some time to learn. Given necessity and a time box3, I can figure it out or eliminate variables that aren’t the answer.

It’s also tempting to think “well, I call this method and then the framework does some magic? and I get the value I want back, most of the time”. Like I said before: there is no magic, only things I don’t yet know about. When I find myself uttering this, I know it’s time to roll up some courage, gumption, and sleeves, then dive into the framework to figure out how it makes the magic happen.
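Even without an IDE, Ruby itself can point at where a method is defined; `Method#source_location` is my first move in a console when I want to see behind the curtain. A minimal sketch (the `Order` class and its method are made up for illustration):

```ruby
# Ruby can report where any Ruby-defined method lives, which is a
# handy first step when chasing "framework magic" from a console.

class Order
  # A made-up method standing in for some mysterious framework call.
  def total
    42
  end
end

# source_location returns [file, line] for methods defined in Ruby...
file, line = Order.instance_method(:total).source_location
puts "Order#total lives at #{file}:#{line}"

# ...and nil for methods implemented in C, like String#upcase.
p String.instance_method(:upcase).source_location  # => nil
```

From there, it’s a short hop to opening the file the framework method lives in and reading the allegedly magical code.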

Some unpredictable or surprising behavior is very deep. Not every mystery is worth my time to resolve. That’s what time boxes are for! When this happens, my goal is to remove as many “suspicious spirit” stories as possible. The more logic and facts I bring to explaining this behavior, the better equipped I am to actually figure it out the next time I look into it.

These are computers and software; I know this.

IV. Enhancing my thinking with tools

I eschewed integrated development environments for a long time; they were slower and less capable than more focused text editors with smaller, Unix-style language integrations. But, computers are faster, designers of IDEs are more tasteful, and we now live in a world where language runtimes are just as influential as linters, test runners, and build tools. Perhaps now is the time for a smarter, integrated development environment.

It’s essential, to me, that whatever I’m using to write and edit the code is fast enough to keep up with my thinking. Beyond my own ability to type and make up names, the important criteria are all about enhancing my ability to think. TextMate first did this with vastly improved file navigation and language-specific snippets and expansions that helped me hold less syntax and boilerplate in my head. Vim, then Atom, helped me lay source files out side-by-side, like I would with sheets of paper, so I could think about related things in a limited-but-helpful spatial ordering.

Now, the tool that is enhancing my thinking is RubyMine. Its ability to “take me to the definition of the method/variable/class/etc.” under my cursor is much easier to use than setting up equivalent tools that integrate with Emacs, Vim, etc. So in the moment of perplexing code, I’m able to jump into the code at the center of the mystery and figure out what’s going on.

In this case: these are computers, they know me. ;)

V. Mystery. Learn. Repeat.

Pulling it all together: I’m often faced with mysteries in the course of development work. It often takes courage and the confidence that I’ve tackled deep topics before to go down the rabbit hole. Once I’m down the rabbit hole, it’s important to remind myself that most systems are linear and have logical inputs and outputs; there are no philosophical daemons mischievously manipulating results to confuse me. Automated tools for navigating source code are a huge boon throughout the process. All together, I stand a pretty good chance of tackling mysteries.

The common link between Ms. Evans and Mr. Stiff, makers of cartoon-y programming literature, is broad curiosity about the craft of programming and optimism that no topic is “off limits”, “too deep”, or “requires credentials” for us to learn. That’s a great mindset we can all benefit from!

Thanks to Marie Chatfield, Kelsey Huse, and Brian Ray for giving me tremendous feedback on this draft.

  1. There’s a scene in the original Jurassic Park where a young heroine does some computer stuff. In a crucial moment, with dinosaurs about to eat the whole cast, she sits down in front of the computer system which can save them, recognizes it, utters “This is Unix, I know this!”, and proceeds to save the day.
  2. Though I wish I could do that too!
  3. Time boxing is working on a task for a fixed time, e.g. 30 minutes. Either you finish it, decide not to keep going, or have a better idea of how to break it down so you can finish it.

Things makes a nice landing pad

One of the better productivity ideas I’ve seen over the years is using some app as a landing pad for all the random ideas, recommendations, and notes I come across in the moment. I’ve been using Things for this lately, and it’s surprisingly effective while remaining stress free.

Here’s what my Inbox/landing pad looks like right now:

An inbox in Things with four items
In our modern times, Inbox is another word for procrastination

I’ve accumulated a handful of links to read, link to, watch, or write about. Some of these I’ll look at in the morning and check off as I go. If something hangs around in the Inbox too long, it probably needs to go into a project somewhere. Usually, I read the thing or watch the video, check it off, and feel more productive about the day.

Most importantly, I’m capturing ideas or links whether they’re Really Great or Just Okay and then dealing with them later. If an idea is really special, I start filling it in right away in Things, either in the Inbox or as part of a project. I also do this when I’m actually working on a task. Rather than try to balance Things, Bear, Goodnotes, etc., I jot down ideas and progress as I work in Things.

A task in Things with a note describing programming research
Thinking into a text box: an essential skill of contemporary times.

This running list of ideas, code, and further links may become a note in Bear, a blog post, a tweet, etc. Things isn’t perfect here; it doesn’t understand Markdown, and it’s not a sophisticated note-taking app. But it’s quick and it works great, so it checks the boxes. As I’m wrapping up the task, I will probably end up copy/pasting the whole task and notes into someplace for future reference.

So that works for me, right now!

Automotive function determines form

I generally think function should have a strong influence on form, if not determine the form outright. I like to use cars as an example of this, but I’m having trouble reconciling the past of “function over form” with the future.

Back in the days of peak car culture (the 1960s), the Jaguar E-Type was (and still is) considered one of the finest looking cars ever produced:

The Jaguar E-Type. Possibly the best possible shape for a car, ever.

I’m taken by that long, long hood. And, as per my principles, that is function defining form. The car had an inline-6 cylinder engine and later a V-12 engine. Both very long engines. Further, the E-Type is a sports car, and sports cars of the era were all rear-wheel drive. That means the engine has to be mounted along the length of the car. All that adds up to requiring a very long hood. And, it turns out, the designers working on the form did a fine job accommodating that function!

I’m not particularly taken by modern Lamborghinis, but the Aventador serves nicely as another example of function determining form (at least in my imagination):

The Lamborghini Aventador. An ode to mechanical aggression.

The Aventador is also a sports car and also contains a rather large V-12 engine, this time mounted behind the driver. What transpired in the decades between the E-Type and the Aventador is a whole lot of technological development. Where the E-Type is a masterpiece in blended lines, the Aventador is a cacophony of gizmos and dingers on the outside of the car, particularly the back. Most of those slats and ducts serve the function of a) cooling the car’s engine, turbochargers, or brakes, b) shaping air flow around the car to reduce drag or increase grip in corners, and/or c) making the car “look pricey and fast”. I like to think that an initial design concept had more gracefully blended curves, but the engineering director put the kibosh on it because it would prevent exposing some kind of cooling duct or aerodynamic surface in a crucial spot. Function (going fast, looking pricey) determines form.

I’m increasingly convinced the Toyota Prius, alongside the Tesla Model S, will be thought of as the pivot point from oil to renewable culture. This shape will be part of that story:

The Toyota Prius. It is a car, with a shape.

Electric cars require efficiency throughout. Lower weight, skinny tires to reduce friction, and low aerodynamic drag. The last, I fear, is the function that will lead us to extremely boring “aero-lump” forms. Most electric/self-driving car designs are going for “sitting room on wheels” as function, and there are only so many low-drag forms that can take. None of them “exciting”.

The E-Type and Aventador are, to my eye, pleasing forms by function, or perhaps by nostalgia. But the Prius (and even the Model S/3 that have followed it) school of design has not yet generated deeply pleasing forms.

In principle, I still like the idea of form following function. In my heart, for the future of car-based transportation, I’m a little worried about the outcomes.

Social media in the morning? Whichever.

Austin Kleon recommends skipping the news/social media/blinky lights in the morning. I’ve found this sometimes works great for me, and sometimes not! I’m a morning person, so I’ve got that going for me. If I’m already in a groove, have ideas about what to write or code on, and jump in first thing, this advice works out for me.

When I’m in a rut, or returning from a vacation and out of the groove, I need something to kickstart the process. I can often find that somewhere in the buzz of people computing with words on blogs or Twitter.

On the other hand, less thoughtful inputs don’t get me going. Instagram, nope. Daily news articles or op-eds raise my blood pressure, but don’t get me creating in the right direction. Seeing what’s up with cars, racing, or video games is interesting, but doesn’t get me to the point of making.

Getting over the hurdle from “waking up” to “making stuff” to “in a groove” is so difficult. Often that takes a little outside stimulus. Equally often, I just need to keep going.

Blogging, like writing, is challenging

The thing which makes blogging difficult is not engagement, analytics, finding just the right theme, curating to a newsletter, managing comments, finding reach after the demise of Google Reader, etc.

The hardest part is showing up, every day, writing. The hardest part is writing! The second hardest thing is hitting the publish button on a regular basis, not necessarily every day.

Deciding what to write about is pretty tricky too. And not falling prey to “hmm, this idea really deserves a nine-part, 15,000-word treatment, probably in eBook form”. And hitting Publish even when you’re not sure.

So I’m trying to blog (most/many) days in November. Which is easier than writing a whole book! The roadblocks look pretty similar, though.

Currently intriguing: Toby Shorin

I’m currently intrigued by, and not entirely sure what to do with, the ideas of Toby Shorin. Particularly, Jobs To Be Done and The Desire for Full Automation. The thread of design thinking, the “needs” of technology, capitalism, and social systems runs throughout. Milkshakes are perfect for commutes, jobs are as varied as chores, biological functions, and societal norms. Existing in the system of the world, the system and our job within it defining us. What capitalism desires of people and society, the need for automation therein. Whether automation of tedium liberates or restricts us. Has the agency of capital (the excess money in the emergent system we live in) already turned us into automatons for its purposes? How does automation and purpose square with religion? 🤔🤔🤔

The paradox of event sourcing

The hardest part for me is knowing when to use this. It creates a lot of friction for a small application, but all applications start small. Moving to an event-sourced architecture when your application (and team) is no longer small feels like a big undertaking that could be hard to justify.

Dave Copeland, Event Sourcing in the Small

Once an application is big enough to need it, it’s already hard to introduce it. But, it’s too much trouble to start an application with this architecture. Maybe this is a corollary to “most things are easy/workable on small teams/applications”?

A few problems that Dave ran into building a small event-sourced data model were in deriving the domain models (he called them projections) from the event data model. It’s possible that there’s a sweet balance point between rolling this kind of data flow behavior by hand and building an entire framework around capturing events that are transformed for various consumers to their specific domain model needs. I haven’t seen it yet.
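To put concrete names to the vocabulary, here’s a minimal, hand-rolled sketch of the pattern in Ruby. Everything in it (the event types, the balance projection) is invented for illustration, not taken from Dave’s post:

```ruby
# Minimal event sourcing: the append-only event log is the source of
# truth, and projections fold the log into the read models consumers need.

Event = Struct.new(:type, :amount)

class EventStore
  def initialize
    @log = []
  end

  # Events are only ever appended, never updated in place.
  def append(event)
    @log << event
    event
  end

  def events
    @log.dup
  end
end

# A projection: replay the log to derive a domain model (here, a balance).
def balance(events)
  events.sum do |e|
    case e.type
    when :deposited then e.amount
    when :withdrawn then -e.amount
    else 0
    end
  end
end

store = EventStore.new
store.append(Event.new(:deposited, 100))
store.append(Event.new(:withdrawn, 30))
puts balance(store.events)  # => 70
```

Even in this toy, you can see where the friction comes from: every new read model means another projection replaying the same log, and changing an event’s shape means reconciling every projection that consumes it.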

I haven’t kept up with Datomic, but the interesting thing about it a few years ago was that it was sort of event sourcing as a database. Data producers stored events in it (in a format that strongly resembled RDF triples). Consumers used data flow queries to define how to transform and scope that data to their needs. It also had a pretty sweet time-travel story. (I’m always a sucker for a good time-travel story.)

If well-considered boundaries and excellent operational tooling are the enabling factors of a services architecture, what are the enabling factors of an event-modeled architecture?

Reclaim the hacker mindset

There was a time when the hacker mindset was about something nice.

They’ve adopted a hacking mindset. They translate this clever, ethical, enjoyable, excellence-seeking behaviour to their everyday lives. See? Hacking is a mindset, not a skillset. When you seek, in your everyday life, to deliberately find opportunities to be clever, ethical, to enjoy what you are doing, to seek excellence, then you’re hacking.

Not enriching a few people. Not replacing everyone else’s bad things with differently-bad treadmills. Not crushing 20-hour days, the latest programming hype, or whatever Paul Graham/Peter Thiel are saying. The orange website ethos, as one might say.

Enjoyable. Ethical. Seeking excellence to reshape the world into something better for everyone’s everyday life.

No topic is off-limits

My favorite thing about software development is the breadth and depth of the profession. On the one hand, there’s a ton to learn about computer science, programming languages, operating systems, databases, user interface, networking, and so on. On the other hand, there’s even more to learn about math, payments, sociology, team dynamics, finance, commerce, linguistics, business, design, etc. Pretty much the whole world around us!

Some folks tell you topics are off-limits. “Front-end developers don’t need to know databases”. “Back-end developers don’t need to know design”. “You only need to know Linux if you’re doing dev-ops”. “The humanities are a waste of your time.”

Those folks are wrong. 😡

You can pick up whatever ideas you want. You can study a topic at any depth. Choose your own specialization. Learn whatever you want, however you want.

Maybe you want to know just enough Fourier math to understand how imaging and audio systems work. Maybe you’re so hungry for clever math you work the problem sets from a college course. Either way is fine!

Several years ago I wanted to understand the jargon and mechanics of economics and finance. So, I listened to a bunch of podcasts, read a few books, and consistently read a magazine. I can throw around words like “negative externalities” or “financial instrument” now, but I’m no expert. I’m cool with that. I’m just here to understand the shape of things, not to become a professional.

Point is, all of these ideas could come in handy under the very large tent that is software development. Go learn economics, databases, design, or whatever. The more you know, the more likely you are to create a connection between adjacent ideas.

Beyond the languages, the libraries, and all the hype cycles, the ability to understand domains of knowledge is what sets great developers apart from good ones. And none of that knowledge, whether technical or otherwise, is off limits!

Problem solvers

We could be problem-solving technologists. We could avoid getting wrapped up in programmer elitism and tribal competition.

We might solve more problems that way!

We can still find joy in certain technologies. We can still ply our trade in solving meta-problems with those technologies while solving increasingly interesting problems with the technology.

We might have more fun and worry less about the hype treadmill!

We’d have more mental space to consider how we’re solving problems. We could communicate better with our teammates and customers.

We might consider whether the thing we’re building is right for the world we live in!

Postmodernism rules everything around me

Greater Los Angeles – Geoff Manaugh. Remember when an iPhone had trouble with cellular reception if you put your fingers in the wrong place and a response that was overblown and taken out of context was “you’re holding it wrong”? Los Angeles is a city which you cannot hold wrong. It is so vast and varied that everyone belongs in some way and yet everyone can be alone in some way. It’s not about where you came from or what you did, but what you’re making of it right now. The idea of moving to LA is daunting, but at least it’s a bit romantic.

Corporate Background Music Is Taking Over Every Part of Our Lives – Sophie Haigney. Apparently there’s a whole post-job/career industry of making and (royalty-free) licensing of music to play in the commercial spaces where we do our consumer society thing. Previously we would have called this Muzak, which was also the name of a company, which is also still a thing.

What Kurt Vonnegut’s “Slaughterhouse-Five” Tells Us Now – Salman Rushdie. My favorite phrase, “So it goes” is a bit more gallows than I remembered:

I had not remembered, until I reread “Slaughterhouse-Five,” that that famous phrase “So it goes” is used only and always as a comment on death. Sometimes a phrase from a novel or a play or a film can catch the imagination so powerfully—even when misquoted—that it lifts off from the page and acquires an independent life of its own. “Come up and see me sometime” and “Play it again, Sam” are misquotations of this type. Something of this sort has also happened to the phrase “So it goes.” The trouble is that when this kind of liftoff happens to a phrase its original context is lost. I suspect that many people who have not read Vonnegut are familiar with the phrase, but they, and also, I suspect, many people who have read Vonnegut, think of it as a kind of resigned commentary on life. Life rarely turns out in the way the living hope for, and “So it goes” has become one of the ways in which we verbally shrug our shoulders and accept what life gives us. But that is not its purpose in “Slaughterhouse-Five.” “So it goes” is not a way of accepting life but, rather, of facing death. It occurs in the text almost every single time someone dies, and only when death is evoked.

It may be impossible to stop wars, just as it’s impossible to stop glaciers, but it’s still worth finding the form and the language that reminds us what they are and calls them by their true names. That is what realism is.

Slaughterhouse-Five is not my favorite Vonnegut novel (Cat’s Cradle is; let’s hear it for Bokononism), but it’s certainly the most consequential and the one I get the most out of re-reading (or have ever re-read?). I had no idea Hitchhiker’s Guide to the Galaxy is so intertwined with it (which I also could stand to re-read).