On Code Review:
Bias to small, digestible review requests. When possible, break your large refactor down into smaller, easier-to-reason-about changes that can be reviewed in sequence (or, better still, orthogonally). When your review request gets bigger than about 400 lines of code, ask yourself whether it can be compartmentalized. If everyone is efficient at reviewing code as it is published, there’s no advantage to batching small changes together, and there are distinct disadvantages. The most dangerous outcome of a large review request is that reviewers are unable to sustain focus over many lines, and the code isn’t reviewed well, or at all.
This has made code review of big features way more plausible on my current team. Large work is organized into epic branches, which are broken into review branches that are reviewed individually. This makes the final merge and review far more tractable.
Your description should tell the story of your change. It should not be an automated list of commits. Instead, you should talk about why you’re making the change, what problem you’re solving, what code you changed, what classes you introduced, how you tested it. The description should tell the reviewers what specific pieces of the change they should take extra care in reviewing.
This is a good start for a style guide à la git commits!
Itamar Turner-Trauring, Incremental results: how to succeed at large software projects:
- Faster feedback…
- Less unnecessary features…
- Less cancellation risk…
- Less deployment risk…
👏 👏 👏 👏 👏 read the whole thing, Itamar’s tale is well told.
Consider: incremental approaches consist of taking a large scope and finding smaller, still-valuable scopes inside it. Risk is 100% proportional to scope. Time-to-deliver grows as scope grows. Cancellation and deployment risk grow as time-to-deliver grows. It’s not quite math, but it is easy to demonstrate on a whiteboard, should you ever need to persuade someone who wants large scope, low risk, and low time-to-delivery.
A little bit of fan reflection on Transportation in Tomorrowland and how to revitalize it:
When you visit Disneyland in California, how do you feel when you walk down Main Street, U.S.A. and turn right to enter Tomorrowland? I mostly feel a combination of sadness and frustration when I walk through Tomorrowland, primarily due to the misplaced, pathway-clogging Astro Orbiter and the vacant, rotting PeopleMover track. And while fantasy space travel is well represented in Tomorrowland (Buzz Lightyear Astro Blasters, Star Tours, and Space Mountain), any semblance of tangible ways of pondering, dreaming about, and honoring humankind’s achievements and the wonders of the future is long gone. It’s as if Disneyland, like seemingly so much of the world, gave up on an optimistic view of the future, too.
Personally, I’d copy/paste the Magic Kingdom PeopleMover over to Disneyland, keep the monorail as-is, bring back the motor boat cruise as some kind of “see the world from a personal-sized yacht” thing, and reimagine Autopia as pure-electric autonomous cars that are integrated with pedestrian, bicycle, and commercial traffic in a way that is less car-centric than our current world. The five-second pitch: the future of transportation is global and interconnected.
A great video explainer on how computers and creative destruction are different this time. Why Automation is Different this Time. Hint: we need better ideas than individualism, “markets”, and supply-side economics for this to end well. Via Kottke.
The idea behind Facebook’s Relay is to write declarative queries, put them next to the user interaction code that uses them, and compose those queries. It’s a solid idea. But this snippet about Relay Modern made me chuckle:
The teams realized that if the GraphQL queries instead were statically known — that is, they were not altered by runtime conditions — then they could be constructed once during development time and saved on the Facebook servers, and replaced in the mobile app with a tiny identifier. With this approach, the app sends the identifier along with some GraphQL variables, and the Facebook server knows which query to run. No more overhead, massively reduced network traffic, and much faster mobile apps.
Relay Modern adopts a similar approach. The Relay compiler extracts colocated GraphQL snippets from across an app, constructs the necessary queries, saves them on the server ahead of time, and outputs artifacts that the Relay runtime uses to fetch those queries and process their results at runtime.
How many meetings did they need before they renamed this from “GraphQL stored procedures” to “Relay Modern”?
(FWIW, I worked on a system that exposed stored procedures through a web service for client-side interaction code. It wasn’t too bad, setting aside the need to hand-write SQL and XSLT.)
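The persisted-query idea described above is small enough to sketch. This is a hypothetical illustration, not Relay’s actual implementation: queries are registered server-side by hash at build time, and at runtime the client sends only the identifier plus variables.

```python
import hashlib

# Illustrative sketch of persisted queries (not Relay's real code).
# Build step: a statically known query is hashed and stored server-side.
QUERY = "query UserName($id: ID!) { user(id: $id) { name } }"

def register(store, query):
    """Save the full query text under a stable identifier."""
    query_id = hashlib.sha256(query.encode()).hexdigest()
    store[query_id] = query
    return query_id

def resolve(store, query_id, variables):
    """Server side: recover the full query from the tiny identifier."""
    query = store[query_id]  # KeyError would mean an unknown persisted query
    return {"query": query, "variables": variables}

store = {}
qid = register(store, QUERY)            # shipped in the app instead of the query text
payload = resolve(store, qid, {"id": "42"})
```

The app now transmits a fixed-length hash instead of the whole query document, which is where the network savings come from.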
Jeremy Johnson, It’s time to get a real watch, and an Apple Watch doesn’t count:
…watches are one of the key pieces of jewelry I can sport, and while many have no clue what’s on my wrist, those that do… well do. And they are investments. Usually good purchases will not only last forever (with a little love and care), but go up or retain most of their value over time.
When pal Marcos started talking to me about watches, I realized they checked all the boxes cars do, but at a fraction of the price. If cars check your boxes, look into watches. Jeremy’s intro will get you started without breaking the bank.
Fourteen Months with Clojure. Dan McKinley on using Clojure to build Skyliner, an AWS automation platform:
The tricky part isn’t the language so much as it is the slang.
Also, the best and worst part of Clojure:
When the going gets tough, the tough use maps
This is probably better now that spec and Schema are popular. Before, when they were mysterious maps full of Very Important State, reading Clojure code (or any kind of Lisp) was pretty challenging.
Make sure you stick around for the joke about covariance and contravariance. Those type theories, hilarious!
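The “mysterious map” problem isn’t unique to Clojure. A rough Python analogy (dicts standing in for Clojure maps, a dataclass standing in for a spec/Schema declaration; the deploy example is mine, not from the post):

```python
from dataclasses import dataclass

# Before: a bare map. Callers must guess which keys exist and which matter.
deploy = {"env": "prod", "region": "us-east-1", "dry-run": False}

# After: a named, documented shape, akin to declaring a spec or Schema.
@dataclass
class Deploy:
    env: str
    region: str
    dry_run: bool = False

d = Deploy(env="prod", region="us-east-1")
```

The data is the same either way; the declared shape is what makes the code legible to the next reader.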
When your project reaches midlife and tasks start taking noticeably longer, that’s the time to refactor. Not to radically decouple your software or erect onerous boundaries. Refactor to prepare the code for the next feature you’re going to build. Ron Jeffries, Refactoring — Not on the backlog!
Simples! We take the next feature that we are asked to build, and instead of detouring around all the weeds and bushes, we take the time to clear a path through some of them. Maybe we detour around others. We improve the code where we work, and ignore the code where we don’t have to work. We get a nice clean path for some of our work. Odds are, we’ll visit this place again: that’s how software development works.
Check out his drawings telling the story of a project evolving from a clear lawn to one overwhelmed with brush. Once your project is overwhelmed with code slowing you down, don’t burn it down. Jeffries says we should instead use whatever work is next to do enabling refactorings that make that work easier.
Locality is such a strong force in software. What I’m changing this week I will probably change next week too. Thus, it’s likely that refactoring will help the next bit of project work. Repeat several times and a new golden path emerges through your software.
Don’t reach for a new master plan when the effort to change your software goes up. Pave the cow paths through whatever work you’re doing!
The TTY demystified. Learn you an arcane computing history, terminals, shells, UNIX, and even more arcanery! Terminal emulators are about the most reliable, versatile tools in my not-so-modern computing toolkit. It’s nice to know a little more about how they work, besides “lots of magic ending in -TY”, e.g. teletypes, pseudo-terminals, session groups, etc.
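The pseudo-terminal plumbing the article explains is easy to poke at from Python’s standard library. A minimal sketch, assuming a Unix-like OS:

```python
import os

# The kernel hands us a connected master/slave pair: the same kind of
# pseudo-terminal a terminal emulator allocates for your shell.
master_fd, slave_fd = os.openpty()

# Bytes written on the "terminal" (master) side become input on the
# slave side, after passing through the kernel's line discipline.
os.write(master_fd, b"hello\n")
line = os.read(slave_fd, 1024)

os.close(master_fd)
os.close(slave_fd)
```

In the default canonical mode, that `read` doesn’t return until the line discipline sees a newline, which is exactly the line-buffering behavior the article walks through.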
Clinton Dreisbach: A favorite development tool: direnv. I’ve previously used direnv to manage per-project environment variables. It’s easy to set up and use for this. I highly recommend it! But I’d never thought of using it to define per-project shell aliases as Clinton does. Smart!
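For the environment-variable use, a minimal `.envrc` sketch. The variable names and the `bin/` trick are illustrative; `export` and direnv’s `PATH_add` helper are standard:

```shell
# Hypothetical .envrc for a project (direnv loads this on `cd`).

# Per-project environment variables:
export DATABASE_URL="postgres://localhost/myapp_dev"
export APP_ENV="development"

# Per-project "aliases": keep small scripts in ./bin and put it on PATH,
# so e.g. ./bin/serve can be run as plain `serve` inside this project.
PATH_add bin
```

Run `direnv allow` once after editing, and the variables load and unload automatically as you enter and leave the directory.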