Life's Easy Mode

This morning I walked a half mile, not too far, to a neighborhood coffee shop. I had two breakfast tacos and a sweet-flavored latte.

I can choose to walk, and take a Sunday morning (really, a whole weekend) to myself because I went to college, fooled around with computers a bunch, and happened upon a time of tremendous income growth for people who fooled around with computers a lot.

On the way, I walked down a well-maintained and safe sidewalk in a neighborhood in the middle of teardowns and gentrification. At one point, a small branch had grown over the sidewalk. Not big enough to force a detour, just the right size to push away.

But then, like a miracle, the wind blew just so and pushed the branch out of my way. It was like nature’s automatic sliding door.

Seems that’s a pretty good way of summing up the Easy Mode of Life that is being a professional white guy.


Doubt mongering

Doubt mongering. It’s a thing that happens because egos are fragile. Some doubts I’ve heard or uttered myself in the past month:

  • That sounds like building a dependency manager, and look how great those are in JavaScript!
  • Swagger is an IDL and I had bad experiences with IDLs when using SOAP and/or Thrift so we probably shouldn’t use Swagger.
  • Microservices sound like microkernels, and those never took off.

They’re FUD and they work off cognitive biases. When someone’s trying to vent, angle into a conversation, or show how smart they are, doubt mongering can happen.

Some of us are more prone to doubt mongering than others. I’m probably more prone to it than I realize. Writing this is making me cringe inside a little.

What irks me is that I often have to pause to separate the doubt mongering from the little bit of insight inside of it.

Say we’re talking about Swagger, for example. Most human endeavors are flawed. It’s perfectly legitimate to say “not all uses of IDLs have succeeded” and “let’s learn from past experience”. That’s a useful insight!

But it’s not okay to do so in a way that takes the energy out of the conversation. It’s not okay to do so in a way that makes someone feel less smart for suggesting something. It’s not okay to derail. Don’t be a gumption trap.

I still have to remind myself to Yes, And conversations that need a historical context. This isn’t a silver bullet and has its own nuances of application, but at least it’s not a Hard No. It preserves the energy and gumption in a group, rather than sapping it.


NASA: robots everywhere! Military: nuke the moon!

NASA (2014 funding: $17 billion) has sent man to the moon and robots all over the solar system. The military (2015 funding: unfathomable) wanted to nuke the moon. Maybe we could throw more cash at NASA and less at the military-industrial complex?


What about event sourcing?

I was chatting about Event Sourced data models with a pal last week. He’s really taken by the idea, excited that it might be the “next big thing” in data modeling. Regrettably, I have an adverse reaction to “next big thing” thinking, so I pointed out that Event Sourced data models are more complex than the equivalent third-normal form data models. Tooling and education, I said, need to catch up before Event Sourcing can achieve broad impact.

(Before I proceed, I need to put forth a lament of vocabulary. Events, in this context, are not fine-grained language constructs like in a continuation-passing-style asynchronous system. They are business events, a sale or page impression, or technical events, a request or cache hit. These are not callbacks.)
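
To make that concrete before moving on, here’s a minimal sketch of events as the source of truth; the event shapes are made up. The row a third-normal form table would store is just a fold over the event history:

Event = Struct.new(:type, :data)

events = [
  Event.new(:account_opened,  { id: 1, balance: 0 }),
  Event.new(:funds_deposited, { id: 1, amount: 50 }),
  Event.new(:funds_withdrawn, { id: 1, amount: 20 }),
]

# The "current" value a normalized table would hold is derived by
# replaying the whole history, in order.
balance = events.reduce(0) do |total, event|
  case event.type
  when :account_opened  then event.data[:balance]
  when :funds_deposited then total + event.data[:amount]
  when :funds_withdrawn then total - event.data[:amount]
  end
end

balance # => 30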

That said, there are a few threads to pull from Event Sourcing that look like possible trends:

  • Integration via event logs using something like Kafka. The low hanging fruit is to replace background jobs with messages on a Kafka stream. The next step is to think about messaging as reading from a database’s replication log.
  • Intermediate storage of historical event records in Hadoop. Once applications are publishing messages on changes to their data, you can slurp up each topic (one per domain model) into a Hadoop table. Then…
  • ETL of event logs in place of some messaging/REST integrations. Instead of querying another system or implementing a topic consumer, periodically query the event data in Hadoop. Transform it if necessary and load it into another application’s database. LinkedIn has extensive tooling for this and it seems like they have done their homework.
  • Data and databases modeled around the passage of time. Event Sourcing is sort of like introducing the notion of accounting to database records. We can go a step further and model our data such that we can travel forward or back in time, not just recalculate from the past. Git has a model of time. Datomic is modeled on time.
  • Event Sourcing as an extension of third-normal form. We still need normalized data models, and we still need the migration, ORM, and reporting tooling built on top of them. Event Sourcing gives us an additional facet to our data. Now, instead of just having the data model, we have the causality that created it. (If you’re curious, probably the enabling technology for storing all that causality is the diminishing cost of storage, adoption of append-only data structures, and data warehouses.)
  • Synchronization streams instead of REST for disconnected clients. When you store the events that brought data to where it is, and you have a total ordering on those events, you can keep disconnected applications up to date by sending them the events they’ve missed. This is way better than clever logic for querying the central database to update state without squashing local state. Hand-wavy analogy: think Git instead of SQLite (both are wonderful software).

In particular, synchronization is where things started clicking for me. Hat tip to David Nolen’s talk on Om Next (start at 17:12) for this. As we continue building native and mobile web apps that are frequently disconnected, we may need an additional tool to augment resource-based workflows. In the same way that Event Sourcing is perhaps something we build as an extension of third-normal form data models, I’ll bet event logs as APIs will pop up more often. We may even see event logs entirely usurp resource workflows. Why consume a log and implement updates via REST when you could write a log producer and ship new events off to the server?
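
A total ordering makes the catch-up protocol almost embarrassingly small. A sketch, with a hypothetical in-memory log:

def events_since(log, last_seen_offset)
  log.select { |entry| entry[:offset] > last_seen_offset }
end

log = [
  { offset: 1, event: :funds_deposited },
  { offset: 2, event: :funds_withdrawn },
  { offset: 3, event: :funds_deposited },
]

# A client that last saw offset 1 replays offsets 2 and 3 locally, with
# no clever reconciliation queries against the central database.
events_since(log, 1)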

The developer impedance mismatch I’m finding with message logs is request-reply thinking. There’s a temptation to recreate REST semantics in Kafka topics. If a consumer fails to process a message, does it stop processing entirely, skip the message, or discard it? Does it notify another consumer via a separate topic, or does it phone home to its developers via an error notification? I haven’t found a satisfying answer, but I suspect it’s a matter of time, education, and tooling.
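
One possible answer, sketched with the ruby-kafka gem (the topic names and the process method are made up): skip-and-record, where failures get shunted to a dead-letter topic for humans to triage while the stream keeps flowing.

require "kafka"

kafka    = Kafka.new(["localhost:9092"], client_id: "orders-consumer")
consumer = kafka.consumer(group_id: "orders")
consumer.subscribe("orders.events")

consumer.each_message do |message|
  begin
    process(message.value) # domain logic, assumed defined elsewhere
  rescue StandardError
    # Don't halt the consumer, don't silently drop the message: record
    # it on a separate topic and move on.
    kafka.deliver_message(message.value, topic: "orders.dead-letter")
  end
end

I’m not claiming that’s the answer, just the shape of one.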


Encapsulation is a tradeoff too

Better understand Encapsulation. I can’t 😍 this article enough:

An individual programmer has fixed limits on how quickly they can build up instructions and later on how quickly they can correct problems. A highly-effective team can support and extend a much larger code base than the sum of its individuals, but eventually the complexity will grow beyond their abilities. There is always some physical maximum after which the work becomes excessively error prone or consistently slower or both. There is no getting around complexity, it is a fundamental limitation on scale.

Useless datapoint: my personal maximum is around three thousand lines of code, or 4–6 weeks of clean-slate effort.

So maybe I need to start encapsulating once I reach that limit?

To get the most out of encapsulation, the contents of the box must do something significantly more than just trivially implement an interface. That is, boxing off something simple is essentially negative, given that the box itself is a bump in complexity. To actually reduce the overall complexity, enough sub-complexity must be hidden away to make the box itself worth the effort.

This has been bugging me for a while. Encapsulation is treated as an unquestionable good by many developers. To question encapsulation, it seems, is to argue the opposite: that design isn’t worthwhile.

But it’s a tradeoff! Introducing encapsulation incurs a temporary increase in the net complexity of a system. Over the course of a tactical refactoring of methods and classes, the increased complexity is only observable by one or two developers doing the work.

But, if services are encapsulation (they are!), then rearranging the pieces will leave you paying for the increased complexity for days, weeks, months. Now the encapsulation takes on real costs: the risk of completing it, the burden of explaining to others what you’re doing, etc. That encapsulation better be worth it and not just a hunch!

For example, one could write a new layer on top of a technology like sockets and call it something like ‘connections’, but unless this new layer really encapsulates enough underlying complexity, like implementing a communications protocol and a data transfer format, then it has hurt rather than helped. It is ‘shallow’. What this means is that for any useful encapsulation, it must hide a significant amount of complexity, thus there should be plenty of code and data buried inside of the box that is no longer necessary to know outside of it. It should not leak out any of this knowledge. So a connection that seamlessly synchronizes data between two parties (how? We don’t know) correctly removes a chunk of knowledge out of the upper levels of the system. And it does it in a way that it is clear and easy to triage problems as being ‘in’ or ‘out’ of the box.
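
To restate the quote in code: a sketch of a shallow box next to one that actually buries complexity (the classes are hypothetical, in the spirit of the quote’s connection example):

require "socket"
require "json"

# Shallow: the box renames the socket but hides nothing. Callers still
# think in terms of sockets, bytes, and framing.
class Connection
  def initialize(socket)
    @socket = socket
  end

  def transmit(bytes)
    @socket.write(bytes)
  end
end

# Deeper: serialization and framing live inside the box, so callers stop
# knowing about transports and wire formats entirely.
class SyncedChannel
  def initialize(host, port = 9000)
    @socket = TCPSocket.new(host, port) # the transport is an internal detail
  end

  def publish(record)
    payload = JSON.generate(record)
    @socket.write([payload.bytesize].pack("N") + payload) # length-prefixed frame
  end
end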

My experience is that encapsulation, if it happens at all, starts off shallow. Real encapsulation, where a developer can treat the box as truly opaque, never needing to peek inside to understand its mechanisms or whether a problem is ‘in’ or ‘out’ of it, is rare. It takes the best designers of software to achieve it.

We should all be so bold as to attempt building encapsulations of that quality, but not so proud to think that we succeed at it even half the time.

In little programs, encapsulation isn’t really necessary, it might help but there just isn’t enough overall complexity to worry about. Once the system grows however, it approaches the threshold really fast. Fast enough that many software developers ignore it until it is way too late, and then the costs of correcting the code becomes unmanageable.

I feel like prefactoring a program or architecture only increases the complexity growth rate of small systems. A dominant factor in complexity is communication and coordination cost. If you start off with ten classes instead of three, or three services instead of one, you haven’t tripled your complexity, you’ve squared it (or worse).

I’m all for minimal solutions and fighting to keep things small, but not at the cost of incurring large coordination overhead.

To build big systems, you need to build up a huge and extremely complex code base. To keep it manageable you need to keep it heavily organized, but you also need to carve out chunks of it that have been done and dusted for the moment, so that you can focus your current efforts on moving the work forward. There are no short-cuts available in software development that won’t harm a project, just a lot of very careful, dedicated, disciplined work that when done correctly, helps towards continuing the active lifespan of the code.

Emphasis mine. In a successful system, size and complexity are nearly unavoidable. Almost every “best practice” and “leading edge approach” we know of is contextual and expresses trade-offs. Thus I’m left agreeing that the unsatisfying, hand-wavy craft of “careful, dedicated, disciplined work” is the principle most likely to generate code that improves (rather than regresses) over its lifetime.


Bridging design and development with data

Programming and designing with Pure UI:

The process involved, among other things, creating a new UI, ditching the dependency on Flash in favor of HTML5 and introducing new functionality…The particular way in which I implemented it led me to some interesting insights around the growing convergence of the designer and programmer roles…The fundamental idea I want to discuss is the definition of an application’s UI as a pure function of application state.

This pulls together three threads:

  • that design and development are duals in a deep way
  • thinking in data structures is useful even if you aren’t using gobs of parentheses (i.e. Lisp)
  • removing resistance to experimenting with software behavior, in this case by describing behavior with data structures instead of conditionals in code, yields good things (see also Bret Victor)

Medium-term bet: Facebook, through tools like React(-Native), continues to push tasks that were previously outside of “text editors”, such as visual design and animations, into things-resembling-code via the function-of-state paradigm that React is sneaking into people’s brains.
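
The core of “UI as a pure function of application state” fits in a few lines. A sketch in Ruby rather than React, with a made-up todo-list state:

# Same state in, same markup out. Changing behavior means changing data,
# not threading more conditionals through the render path.
def render(state)
  items = state[:todos].map do |todo|
    "<li>#{todo[:done] ? "[x]" : "[ ]"} #{todo[:title]}</li>"
  end
  "<ul>#{items.join}</ul>"
end

render(todos: [{ title: "ship it", done: false }])
# => "<ul><li>[ ] ship it</li></ul>"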

(Also, the use of a fixed-width font in the page design there is 💯)


Microservices in context

An interview with John Allspaw, on Etsy infrastructure and operations:

For example, a good friend of mine runs and has run an electronic trading exchange. You could imagine his goals and constraints when designing an electronic trading exchange are very different than, say, Facebook. Facebook might be very different architecturally because they have different constraints than Amazon. And Amazon might be different than even Etsy.

When you have a conversation that unnecessarily paints the discussion as, “Are you micro-services or are you a monolith?” then it wipes away all of the context-specificity, and you actually have no real way of talking in specifics.

Compared to the previous buzzword, SOA, what does microservices mean? As far as I can tell, it’s two things:

  • A Rorschach test. What do you see in this buzzword? What does it say to you?
  • A signaling mechanism. I’m most likely to hear about microservices from those trying to distinguish themselves from those other people who write code that doesn’t share their values.

Context-specificity is the important part. I’ve been reading David Byrne’s How Music Works and he spends the first chapter entirely on how the performance venue (a savannah, a noisy club, an austere concert hall) puts its mark on the music that is performed there (percussion oriented, loud and compressed, or quiet and precise).

In architecture, context is also king. Building and deploying services is different at Heroku, Netflix, Facebook, and the place where you work. You can build services of varying size and complexity anywhere on any stack. What the team, culture, and organization prefers is the real determinant.

I find it useful to read about other people’s service architectures to learn what works elsewhere. Even better if they describe the context they built that service architecture in. But it is always foolish cargo-culting to attempt to replicate another team’s architecture without the team and organizational context in which it was born.


When we model

I’ve observed a few levels of modeling (i.e. thinking about a problem and describing it in concepts plus data structures) that software developers do in the wild:

  • structural modeling: describing the structure of the problem domain and representing it directly in code, probably using the concepts your ORM or data layer provides
  • operational modeling: evolving a structural model to include the operations and workflows that interact with the structural models
  • deep modeling: evolving an operational model to include language that describes how the model, problem domain, and solution domain interact and describe each other

A structural model is what happens in a “just ship it” culture. If you’re lucky, you might start thinking about an operational model as you convert that just-ship-it app into an ecosystem of services connected by APIs and messaging.
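
Roughly, the gap between the first two levels looks like this Rails-flavored sketch (the domain, and InvoiceMailer especially, are made up):

# Structural: the shapes your ORM hands you, lifted straight from the
# problem domain.
class Invoice < ActiveRecord::Base
  has_many :line_items
end

# Operational: the workflow that acts on those structures becomes a
# first-class model of its own, instead of hiding in a controller.
class IssueInvoice
  def initialize(invoice)
    @invoice = invoice
  end

  def call
    @invoice.update!(issued_at: Time.now)
    InvoiceMailer.issued(@invoice).deliver_later
  end
end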

Any of these models could poof into existence at a higher level. That is, a team could pop out an operational or deep model of a system on their first try. This is even more likely if it’s their second or third take on a problem domain.

Some ideas for kinds of even-higher level modeling that high-functioning teams perform: error-case modeling, coordinated system modeling, social modeling, migration modeling.

And, let’s not even speak of metamodeling :P


Word processors, still imitating typewriters

Right after we finish ridding the world of “floppy-disk-to-save” icons, I propose we remove this bit of obtuse skeuomorphism from the default view in word processors like Google Docs:

[Image: the margin-setting thingy on a typewriter. Who uses this anymore?]

I vaguely remember using one of these to adjust margins and such on a real typewriter once. It’s possible I used one to eke out an extra page in a school report during junior high. Since then? Wasted screen space!

Act like a modern device, word processors. Hide that stuff in a menu somewhere!


Ideas for Twittering better

When it comes to Twitter, things can get out of hand fast. Setting aside the hostile environment some people face when they participate in Twitter (which is setting aside a doozy!), it helps to have a few defense mechanisms for what is appearing in your stream.

Most importantly, I evaluate each potential follow by the rule of “smart and happy”. Which doesn’t mean smart, angry people are automatically off the list. But, they have to show a really unique intelligence to get past my emotional filter. I made a graphic to boil down my “should follow?” decision:

[Image: how to decide to follow someone on Twitter.]

Non-brilliant and happy? Probably in! Brilliant and happy? Probably in! Smart with a little bit of edge? Maybe. Just angry? No thanks.

Information overload, confirmation bias, and overwhelming negativity are also handy things to manage. I do a few things to keep my head above water and a not-too-dismal outlook on life:

  • Don't worry about keeping up. It's impossible. That's OK!
  • When I have stuff that needs doing, shut it down. The tweets will go on without me.
  • Follow people with a perspective different from your own.
  • Keep a private list for high signal-to-noise follows. Good friends and people whose ideas I don't want to miss end up here.
  • But follow a lot more people as a firehose of interesting and diverse voices.
  • When on vacation: don't even care about Twitter. Disconnect as much as possible.

I hope one of these ideas can help you Twitter better!


"Everybody Wants to Rule the World", too much of its time

I really dislike “Everybody Wants to Rule the World” by Tears for Fears because it’s a perfectly written song that sounds exactly like the year it was recorded, 1985. Five years earlier, it would have sounded mildly seventies-ish and been great. Five years later, it would have had a little more grit and sounded very late-eighties.

What I’m saying is, if I could un-invent certain musical sounds, the bass on that track would appear on the list.


Hype curve superpositions

It seems, these days, that technologies can exist in multiple phases of the hype curve, simultaneously. Two data points I read this weekend:

  • Node, which I personally place somewhere between the "trough of disillusionment" and the "plateau of productivity", is in the "exceptional exuberance" phase for the author of Monolithic Node.js
  • Ruby, which I personally place in the "plateau of productivity" phase, is in the "trough of disillusionment" for the author of The Ruby Community: The Next Version

In short, I strongly disagree with both of these opinions. But I think that’s not the useful datapoint here. The takeaway is that both viewpoints can exist simultaneously, in their own context, and not be entirely wrong.


Raising all boats

It’s easy to complain about PHP. For instance, why didn’t they choose ☃ as their namespace resolution operator?! As a developer with lofty opinions, I’m not a big fan of PHP. To me, it’s an argument against allowing accretion to determine the design of a system. I don’t think it’s controversial to call the PHP language, core library, and ecosystem “inconsistent” and “a matter of curious histories”. A language feature here, a library function there, year over year, and you’ve got a “quaint” design. Yes, those are scare-quotes.

Whenever I feel a big rant about PHP shortcomings approaching, I try to remember a few important facets of its success:

  • PHP made programming web applications accessible to a lot of people for whom writing CGIs with Perl, Python or Java servlets was overwhelming. Myself included!
  • You still can’t beat the simplicity of PHP’s deployment model: acquire commodity web hosting, upload source files, and done.
  • Due to its accessibility and ease of deployment, a whole new kind of person started building stuff with code. Jason Kottke called part of this Liberal Arts 2.0. Less mathy programming, more craftsy.

Fast forward to today. PHP is still doing fine, though lots of people switched to Ruby or Python many moons ago, depending on personality type. And lots of those have since moved on to other things. The technology hype curve is an overlapping, ongoing thing.


Of those that switched, many ended up with JavaScript, in the guise of browser-side frameworks or server-side Node (and its ilk). I think there’s a huge opportunity here. JS is not without flaws, like PHP. But it’s sort of backed into really broad reach: embedded, games, applications, mobile, probably more that I don’t even know about. That could make it compelling for an even less math-y demographic of people building stuff with computers.

And yet, there is no single JS community. There’s browser people, there’s server people. The future may hold mobile, gaming, and device people. That creates dissonance and some uphill battles.

But maybe that’s the really cool part. The JavaScript communities will have to slog uphill a bit to make accessible the previously intimidating domains of mobile apps, games, and embedded software. And that could raise the boat for people who aren’t building web apps but could be building software.


Functions about nothing

The tricky thing about decomposing code into abstractions is you end up with “functions about nothing”. You’ve probably seen one of these: a method or function with a really vague name, glommed into a utility or enumerations junk drawer. It’s probably innocuous, but as you’re reading code, it takes you out of your flow and forces you to think in the abstract instead of the concrete.
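
A made-up caricature of the species:

module Utils
  # Handle... how? Values of what? The name is true of almost any code,
  # so it tells the reader nothing concrete.
  def self.handle_values(values, mode = :default)
    values.compact.map { |v| mode == :default ? v : v.to_s }
  end
end

Utils.handle_values([1, 2, nil])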

It’s easy to guess how these things happen. Successive refactoring iterations end up pulling business logic into a pile of predicates and side-effects and a separate pile of abstractions. We feel pretty good about ourselves at the end of the refactoring and write a fancy blog post about it!

The rub is when we come back to read the code later. It’s easy to find the abstraction first and get side-tracked figuring out why it exists, the context in which it was created, and when we might use it again. This is better than predicates and side-effects interwoven. But it’s still a problem.

I don’t have a salve for this. I just wanted to put the phrase “functions about nothing” on the internet. [SLAP BASS OUTRO RIFF PLAYS HERE]


Missing the big picture for the iterations

I.

Driving in Italy is totally unlike driving in America. For one thing, there are very often no lane markers. Occasionally a 1.5 lane road is shared by two cars moving in opposite directions. Even if there were lane markers, it’s doubtful Italian drivers would heed them. Italian traffic flows like water, always looking for shortcuts, ways to squeeze through, and running around temporary obstacles. For an American, driving in a big Italian city is a white-knuckle affair.

My conjecture is that the unspoken rule of Italian drivers is “never break stride”. Ease in and out of lanes, blend in at traffic circles. There’s almost a body language to Italian driving by which you can tell when someone is going to merge into your lane, when a motorbike may swerve in front of you, or when a tiny delivery van is going to blow past you on a two-lane road.

II.

Start with the result. I find myself mired in optimizing for short-term results that I can incrementally build upon. This is a fine tactic, especially when getting started. It’s a nice way to show progress quickly and keep making progress when rhythm matters.

But, it’s a tactic. To make a musical analogy, it’s how you write a song, not how you write a whole album. At some point I need a strategy, a bigger idea. I need a result in mind.

III.

I love to tinker with new technology. The grass is always greener with new languages, libraries, tools, etc. I’ve learned a lot this way, and kept up with the times. I’ve got lots of surface-level experience with lots of things. But increasingly I want more experience with deeply accomplishing or understanding something.

IV.

Driving in Italy was extremely jarring for me at first. It closely resembled chaos. Eventually, I got used to it, at small and medium scales. (But never drive in Rome/Milan). Now, I sort of miss driving in Italy, at least the good parts. I miss the freedom to overtake other drivers without having to swerve through lanes, and I miss not stopping at traffic signals any time there’s an intersection.

Maybe this is a reminder, for me, that getting out of my routine (American driving) isn’t so bad. Worth the initial shock. Maybe my routines, my tactics, my tool/library/language novelty seeking, were helping me along as much as constraining me.

Maybe the big picture result, not the iteration, is the thing and how you get there (highly ordered American driving or seemingly unordered Italian driving) is of less consequence.


Aliens through the eyes of boys

On screening Aliens for a slumber party of 11 year old boys:

"I like the way this looks," one said. "It's futuristic but it's old school. It's almost steampunk." "This is like Team Fortress 2," another remarked. "Dude, shut up, this was made like 20 years before Team Fortress 2," said the kid next to him. "This is, like, every science fiction movie ever made," another said, as Ripley operated the power loader for the first time.

I love works of culture that bisect their genre. There were symphonies before Beethoven, and symphonies after Beethoven. There were comedies before Animal House and comedies after Animal House. For action and sci-fi action movies, there were movies before Die Hard and Aliens, and there are movies after.

In all of these cases, the pieces that come after are a wholly different ballgame, because the piece bisecting the genre changed it so completely.


It's not your fault if your tools confuse you

I. Pet Peeve #43: Complaining About Frameworks

The whole point of a framework is that you trade one or more axes of freedom in how you structure your program so that you can move faster writing the meaningful (unique) part of that program. To wit, the first definition I learned of a framework goes something like:

A framework is a library that takes over the main() function of your program and provides a higher-level entry point for calling your code.

Most programs would have a boring main() anyway, so this is a great tradeoff in most cases.
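
The difference in call direction, sketched with Sinatra as a familiar Ruby example:

# Library style: you own the entry point and call into the code.
require "json"
settings = JSON.parse('{"port": 4567}')

# Framework style: the framework owns the entry point and calls you.
# `get` registers a handler; you never write the request loop yourself.
require "sinatra"

get "/" do
  "hello"
end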

And yet:

  • Some programs aren’t boring.
  • Some programs demand more control.
  • Some developers crave choice and accept the burden of picking each library and choosing how to wire them together.
  • There’s a bit of a hero complex about rolling your own framework from libraries.

Where we end up is that I notice people complaining about frameworks. I think most of these are proxy complaints about someone else choosing a framework they have to live with. To their credit, some complainers are actively trying to make a better thing too. Kudos to them. To the idle complainers: cut it out.

II. I complain about a framework

RSpec and minitest are both frameworks in the sense that your entry point is a special class with special methods. RSpec is more frameworky in that you don’t actually write classes or methods. You use a language-y DSL to define test cases that share some aspect of scope. You can go pretty crazy in this manner. Or you can not go crazy; RSpec accommodates either style.

Lately I’ve been wrestling with test cases written using at least three kinds of RSpec scope and it’s driving me a little crazy. They look something like this:

describe "POST /something" do
  before do
    @thingy = somethings(:alices)
    post :something, thingy: @thingy
  end

  it { expect(assigns[:something].attributes).to eq(@thingy) }
  it { is_expected.to render(:created) }
  it { is_expected.to respond_with(:json) }
end

If scoping is about answering the question “what methods are available to me on this line of code”, this code seems to have four kinds of scope:

  1. Inside the describe block, we can call RSpec methods
  2. Inside the before block we can call RSpec expectations and rspec-rails controller methods (but that is implicit!)
  3. Inside the it blocks we can call RSpec expectations
  4. When we call is_expected, we can reference…I’m not sure at all

It is at this point I try and take my own medicine and make a constructive observation rather than yelling “this sucks and someone should feel bad about it!” into the wind (i.e. Twitter).

III. What’s an RSpec for?

I didn’t like RSpec at first (circa 2008), then I really liked it (circa 2010), and now I’m sort of ambivalent. The question of what RSpec exists to solve has evolved too. At first it was “hey, BDD!” and then it was a less underscore-y way to build tests and now I think it’s a tool for writing tests in a style that some prefer and some don’t. In short, the Ruby community is figuring this out and kind of storming through the awkward parts.

That awkward part is how I think my example test came to be so unclear. Without doing a deep archaeological dig on the code, I’m guessing this code had three phases:

  1. Originally written according to the RSpec vogue at the time, which was to use it one-liners as a forcing function on the constraint that tests should have only one assertion. (Which in retrospect I think is a rule about the functionality of the method under test and not a guideline about how to write tests. This isn’t the first time developers have adhered to the letter of a rule and missed the spirit entirely.)
  2. Use the amazingly great transpec tool to translate RSpec 2.x style to RSpec 3.x style without having to spend months carefully transitioning a large test suite from one API to another. Mostly this works out great, but you get some awkward translation, like the is_expected part.
  3. Use the newer RSpec 3.x style expect syntax for assertions, introducing two ways to say the same thing with the intended long-term benefit of using the clearer expect style everywhere.

I don’t blame RSpec for any of these phases. You can easily swap out the names of libraries and concepts for any other language or library and find a similar story buried in any chunk of code that’s been around for more than a year and worked on by more than one person. It’s a thing that happens to code. I’m not even sure people should feel bad about it. Mindful of cleaning it up over time, yes. Throw it all in the bin and start over, no.

My first temptation is to say that using it one-liners is a smell. They are nice to scan through but tricky to write and trickier still to change. But I can see where a series of well-intentioned code changes compresses many structurally similar test cases down to nearly-declarative (nearly!) one-liners without much duplicate typing. I can imagine a high-functioning team writing their own matchers, carefully using one-liners, and succeeding. So this one is a word of warning and not a smell.
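
For contrast, here’s the earlier example collapsed into a single explicit scope, a sketch using standard rspec-rails matchers:

describe "POST /something" do
  it "assigns the posted thingy and responds with created JSON" do
    thingy = somethings(:alices) # fixture accessor, as in the original
    post :something, thingy: thingy

    expect(assigns[:something].attributes).to eq(thingy)
    expect(response).to have_http_status(:created)
    expect(response.content_type).to include("json")
  end
end

One test, one block, one kind of scope to reason about, at the cost of those scannable one-liners.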

The real smell, I think, is that it’s really easy to have very different scopes adjacent to each other in an RSpec test file. Further, not all scope-introducing constructs look the same! describe/context introduce one kind of scope, it introduces another kind of scope, let/subject/before introduce three similar but different kinds of scope, and expect/is_expected look the same but have different scopes as well.

Even smellier is that I’m making this list from an empirical understanding and not by examining the implementation or experimenting with reduced test cases.

IV. What are you gonna do about it?

I’m probably going to leave that code alone. Wait for a muse to strike at the same time I need to make wholesale changes.

People who use RSpec should feel fine about themselves. People who contributed to RSpec should feel great about themselves. People who struggle with figuring out scope in RSpec should take solace that the best of us find this stuff confusing and frustrating at times. Developers not in one of these camps should take my advice: globals are bad, but a bunch of weird scoping is not great either. Everyone else can smile and nod.


Organize your Gemfile by function and coupling

Most Gemfiles I see are either unordered (just throw new gems in, wherever!) or alphabetically ordered. A while back, I reordered the Sifter Gemfile, ordering by difficulty of removing the dependency and grouping by functional area. Thus organized, it came out a bit like this:

  • Framework: Rails, Rack, the mysql2 driver, JSON, Rake. We will basically use these forever.
  • Gems for our vendors: Postmark, Braintree, Skylight, Bugsnag. We'll use these as long as we're using their respective service.
  • Partner integrations: GitHub, OmniAuth, etc. Most of these aren't maintained by the partner, and we'd have to drop the integration to drop the gem.
  • Extensions: jquery-rails, delayed-job, will_paginate, etc. We could stop using these if we cared a lot, but we're pretty committed to them.
  • File uploading: carrierwave, fog, wand, rmagick. We'd have to overhaul our file attachment support if we wanted to remove these.
  • Sphinx support: thinking-sphinx and ts-delayed-delta. We'd have to overhaul our search to remove these.
  • Admin: bootstrap, etc. Things we could easily remove next time we overhaul our admin, which isn't really a big deal if we decide to.
  • Gems for building assets: jquery-ui-rails, therubyracer, execjs, coffee-rails, sass, compass, etc. We'd have to remove uses of the underlying tool (CoffeeScript, SCSS, etc.) if we wanted to drop one of these.
  • Development gems: pry, ffaker, byebug, better_errors, spring, rack-mini-profiler. We can switch these out if we want.
  • Testing gems: rspec-rails, capybara, etc. Again, if we change tools we can change these.
  • Deployment: capistrano. This could go in the framework section, seems unlikely we'd overhaul our deploy scripts away from Capistrano.

This organization makes it easy to know where to add a new dependency. More importantly, we can better understand how much we depend on a gem and the level of effort to remove it if we need to.
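
In Gemfile form, the idea comes out something like this excerpt (a sketch of the organization, not Sifter’s actual file):

source "https://rubygems.org"

# Framework: we will basically use these forever.
gem "rails"
gem "mysql2"

# Vendors: needed as long as we use the respective service.
gem "bugsnag"
gem "skylight"

# Extensions: committed, but removable with real effort.
gem "will_paginate"
gem "delayed_job"

# Development and testing: swappable if we change tools.
group :development do
  gem "pry"
  gem "better_errors"
end

group :test do
  gem "rspec-rails"
  gem "capybara"
end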


Programming advice for a younger me

How to get better at programming without even programming:

  1. Accept, in your heart and mind, that the languages, libraries, and tools that you use to write programs may not be good for other people, other teams, or other problems.
  2. Search for deeper understanding of approaches to programming that seem strange or incorrect to you. Don't look for wrongness in what someone else is doing or what you're thinking.
  3. You will come across scenarios that challenge principles 1) and 2). When you do, say what you can to help but err on the side of not making things worse; let go of what you can't control.

Bonus: be a kind person.


Leadership, pick a size

Like fast food or coffee at Starbucks, maybe team leadership comes in three sizes.

Extra large leadership. “This is what you’re doing. Make it happen, let me know when it’s done.” The old school, top-down approach. This is what I thought all management was like many moons ago. In my experience this is only useful when people need to think only about what they’re doing, not why they’re doing it or how they do it. Leadership sets the work, lays out the exact process for doing that work, and closely monitors the work as it’s done. See also: Taylorism.

Large leadership. “Here are some things that need doing. Do one of them, keep me posted.” This is where I think cohesive teams of knowledge workers should aim to be. Leadership lays out the tasks to be done, makes the work clear, and gets out of the way. Luckily this is a lot more frequent lately, though probably not so much in part-time labor. Leadership still monitors the flow of work, but not as overtly as in Taylorism.

Medium leadership. “Do what you think is most important. Let me know how to remove obstacles.” I suspect most leaders really want to get here, but are constrained, in perception or in reality, by some organizational detail. It’s definitely where I want to be in leadership style. High trust in the team, high trust in the leadership. Lots of unicorn dust.

Oddly enough, I think some teams and projects require large leadership and some require medium leadership. Some people want a little structure to their work, some projects require well-defined milestones to rally around. Other people can effectively guide themselves to useful outcomes, other projects are less about the milestone and more about the journey.