Juniors/seniors and incremental/vision development

The ability to focus on one concern at a time is the mark of a senior developer. It takes experience to ignore other factors as noise. It takes time to learn how to avoid tripping on distractions and side-quests.

Ben Nadel, Only Solve One Problem at a Time:

This lesson hits me hard in the feels because when I reflect on my own work history, some of the biggest mistakes that I’ve made in my career revolve around trying to solve multiple problems at the same time. Perhaps one of the most embarrassing examples of this is when I was attempting to learn OOP (Object Oriented Programming) on a new project. This was two problems: I had to build a new application; and, I tried to do it in a new programming paradigm.

Needless to say, the project ended up coming in months late and was a horrible mess of hard-to-maintain code. Trying to solve two problems at the same time ended in disaster.

Nearly universal advice for developers of all experience levels!

The trick for juniors is that they’re always learning more than one thing at a time, often by accident. They want to build a feature, but it requires a new library, which means learning the library. They go to start up a development server, but then something weird happens with Unix. It’s the essential challenge of being a junior – they’re just getting started, so they’re always learning a couple of things at a time.

Perversely, a senior who can see the whole feature/change in their head is sometimes tempted to push the whole thing through in one (large) change. They’re tempted to make the entire thing happen in one outburst of crisp thinking.

Developers who have learned to avoid pitfalls and gotchas sometimes have to relearn how to work incrementally. Juniors (frequently) don’t have this problem. If they don’t work incrementally, they won’t make progress at all! (Caution: juniors that try to work like the seniors they see around them will fall into this trap.)

That said, juniors and seniors both tend to struggle with:

  • deciding when to focus vs. when to jump out of a rabbit hole
  • building their own feedback loops with tests/compilers, jigs/scaffolding
  • imagining and applying constraints
  • using first-principles thinking to reduce the overwhelming possibility space of programming

These skills don’t come from experience alone. One has to decide to apply them and then build up experience using them to keep development on track, focused, and effective.


Rails generators are underrated

Every experienced Rails developer should pick up Garrett Dimon’s Frictionless Generators. Generators are an often-overlooked piece of the Rails puzzle, and the book shines a light on getting the most out of them in your Rails projects. It’s an amazing-looking digital object, too; the illustrations are great.

(Before I go further, it’s worth noting that Garrett Dimon sent me a review copy, acted on my feedback, and is generally a pal.)

You should customize generators and write your own

Conventions, in the form of assumptions that are woven into the framework’s code but also exist as a contract of sorts, are essential to Rails. Following those conventions wouldn’t work nearly as well if creating a new model, resource, etc. weren’t made easy and consistent by our friends rails generate resource/model/etc. These generators have been present in Rails since the beginning. I’ve used them hundreds of times and thought nothing of it.

A few years ago, I realized that generators had become a public API in Rails and that re-use was encouraged. The guide is a good start. I was able to knock out a simple generator for a novel, but conventional, part of our application. In the process, I realized a couple of things.

No one likes boilerplate and tedium. Generators automate the tedium of boilerplate. This is particularly helpful to less experienced developers, who are likely a bit overwhelmed trying to quickly comprehend decades of Rails evolution and what is, from their perspective, legacy code.

Rails developers are under-utilizing this leverage. Every system that makes it past a few thousand lines of code (or several months in use) develops bespoke conventions. These are easier to follow if you can reduce the mental burden to “when you add a new thingy, run rails g thingy”. Added bonus: starting new conceptual pieces in your app from generators encourages consistency, itself an under-appreciated sustaining element in long-lived Rails applications.
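To make that concrete, here’s roughly the shape of a bespoke generator (the “thingy” naming is hypothetical; the structure follows the standard Rails generator conventions):

# lib/generators/thingy/thingy_generator.rb
class ThingyGenerator < Rails::Generators::NamedBase
  source_root File.expand_path("templates", __dir__)

  # `rails g thingy invoice` renders the template below into the app,
  # with helpers like file_name and class_name available inside it.
  def create_thingy_file
    template "thingy.rb.tt", "app/thingies/#{file_name}_thingy.rb"
  end
end

The template at lib/generators/thingy/templates/thingy.rb.tt is plain ERB, so whatever boilerplate the convention requires lives in exactly one place.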

Luckily, someone was thinking the same thing I was…

Garrett knows more about generators than anyone I know

The Rails guides for generators are okay. They whet the curiosity and appetite. But, they aren’t particularly deep. When I first tinkered with generators, I mostly learned by reading the code for rails generate model/resource/etc.

Frictionless Generators does not require one to jump right into reading code. It opens with ideas on the possibilities of developing with custom generators. Then, it moves on to the practicalities of writing one’s own generator and crafting a good developer experience. From there, it’s down the rabbit hole: learning about the APIs for defining generators, implementing the file-creation logic therein, writing help and documentation for them, generating files from templates, and customizing the built-in Rails generators.
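That last bit, customizing the built-in generators, is something most teams can adopt in an afternoon. A typical tweak (my illustration, not an excerpt from the book) lives in config/application.rb:

# config/application.rb
config.generators do |g|
  g.test_framework :rspec   # generate RSpec files instead of the default Minitest ones
  g.helper false            # skip app/helpers stubs nobody fills in
  g.assets false            # skip per-controller stylesheet/javascript stubs
end

With that in place, every rails generate resource/model/etc. produces files that already match how the team actually works.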

Garrett goes into as much depth on generators as any other technical author I know, on any topic. Did you know you can make HTTP requests and use their responses in generators? I did not, but Garrett does! Did you know that generators apply the same kind of “oh, yeah, that’s common sense” convention for copying files from generators into applications? I did not, but Garrett does! I don’t think I’d use all these ideas on every generator, but I like the idea that I can return to Frictionless Generators should I have an idea and wonder how the existing, low-friction APIs can help me.

Further, Garrett offers frequent insights into the developer experience and leverage of building generators. On building generators for “fingertip feeling” so developers can easily (and frequently!) use them:

I like to aim for no more than one value argument and one collection argument to keep generators easier to use. Everything else becomes an option.

On approaching generators as high-leverage side-quests:

Remember that the ease of use is generally inversely proportional to the length of your documentation. Assistance first. Brevity second. Content third. Or, be helpful first, concise second, and thorough third. That said, there are a few other categories of information that can be helpful.


For me, a good technical book balances presentation of technical information, the right amount of detail, and wisdom on how to apply the two in practical ways. Garrett succeeds at striking that balance while keeping things moving and easy to follow.

In short, recommended! Rails generators are underrated, whether you’re aware of their existence or not. Smart teams are customizing generators and writing their own bespoke generators. There’s a book on this now, which you should check out if any of this resonated.


“Yes, and” despite pessimistic engineering intuitions

As engineers, we often face the consequences of shallow ideas embraced exuberantly. Despite those experiences, we should try to solve future problems instead of re-litigating problems-past.

Engineers put too much value on their ability to spot flaws in ideas. It’s only worth something in certain situations.

— Thorsten Ball, 63 Unpopular Opinions

Don’t be edge-case Eddie, won’t-scale Walter, legacy code Lonnie, or reasons Reggie. At least try to think around those issues and solve the problem.

This is very much a note to my previous self(s).

Pay attention to intuitive negative emotion…If you’ve been asked for quick estimates a bunch, you might have noticed that sometimes the request triggers negative emotions: fear, anxiety, confusion, etc. You get that sinking feeling in the pit of your stomach. “Oh crap”, you think, “this is not going to be easy.” For me (and most experienced software engineers), there are certain kinds of projects that trigger this feeling. We’re still pattern matching, but now we’re noticing a certain kind of project that resists estimation, or a kind of project that is likely to go poorly.

– Jacob Kaplan-Moss, The art of the SWAG

Jacob recommends noting how intuition and emotion are natural and not entirely negative influences on the process of evaluating ideas. The trick, he says, is to pause and switch to deeply thinking through the idea (or estimate) you’re presented with.

This, again, is very much a note to my previous self(s).

Now, if you’ll excuse me, I need to get back to brainstorming and estimating this time machine, so I can deliver this advice to my former self.


Building software is great

…even if some days working in corporations or under unwanted pressure makes it considerably less fun.

I also just don’t especially want to stop thinking about code. I don’t want to stop writing sentences in my own voice. I get a lot of joy from craft. It’s not a universal attitude toward work – from what I can tell, Gen Z is much more anti-work and ready to automate away their jobs – but I’ve always been thankful that programming is a craft that pays a good living wage. I’d be a luthier, photographer, or, who knows, if those jobs were as viable and available. But programming lets you write and think all day. Writing, both code and prose, for me, is both an end product and an end in itself. I don’t want to automate away the things that give me joy.

– Tom MacWright, The One About AI

What a great distillation of what makes working on software great! It’s an opportunity to think all day, earning a good wage doing so. Sometimes, to make something of value. Even more rarely, to make something of lasting value. Most of all, to be challenged every day. On the good days, it’s the future we were promised!


Careers are non-linear

David Hoang, Should managers be technical?:

Career development looks more like unlocking attributes for a different subclass in a role-playing game, than picking a distinct class that can never change. It’s not a path. It’s a collection of skills and attributes focused on certain outcomes. Applying foundational skills is heavily contingent on your role and responsibility.

👍🏻 Careers, management or not, aren’t straight lines. The skills you need for your career aren’t a tree with one root. You can skip between various skill trees, if you like! You can go deep, but wide is an option too. The more you know, the more you can delegate!

You should check out David’s newsletter too.

A wise person from a Destiny 2 Slack:

I guess when you’re done with the main quest, you go back and do side quests

Careers (and lives) are non-linear. Occasionally their trajectories don’t make sense. They may even outright disappoint, in the moment. The silver lining is, they give us unique skills and experience that someone in the world wants if only we can find them. 📈


Sidestep process by sharing tangible progress

Nat Bennett:

Cannot overstate the value of regularly delivering working software.

My single most effective software dev habit is to start with a walking skeleton – a “real” if very stubbed out program that can be deployed on its real infrastructure, receive real calls, visited for real etc. – because of what this does for non-programming stakeholders.

When they see a real working thing and then they see that thing get meaningful improvements they tend to chill way out and get much easier to work with.

You can save a week of effort on process with a couple hours of sharing tangible progress.

Related: you can save a week of planning with a couple hours of programming. You can save a week of programming with a couple hours of planning.
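For a sense of how small a walking skeleton can start (a sketch of the idea, not Nat’s actual setup), in Ruby it might begin as a one-file Rack app that deploys to real infrastructure and answers a real health check:

# config.ru: the entire "application", ready for `rackup` or a real deploy
require "rack"

app = proc do |env|
  case env["PATH_INFO"]
  when "/health"
    [200, { "content-type" => "text/plain" }, ["ok"]]
  else
    # Every real feature is still a stub, but stakeholders can visit it today.
    [200, { "content-type" => "text/html" }, ["<h1>Coming soon</h1>"]]
  end
end

run app

Everything after that is a visible improvement to a thing that already works.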


Zawinski's law, updated

Every program attempts to expand until it can

  • read email (the original)
  • invite a friend
  • check off tasks in a list
  • record consent to receiving cookies or storing data in the United States (GDPR)
  • store an audit trail (SOC2)

My summer at 100 hertz

Is there a lost art to writing code without a text editor, or even a (passable) computer? It sounds romantic, I’ve done it before, I tried it again, and…it was not that great. 🤷🏻‍♂️


Tooling has improved for ambitious software developers

Tools for working on software in the large have improved a lot since I last considered them ten years ago.


Top of Mind No. 5

Like everyone (it seems), I’m exploring how large language model copilots/assistants might change how I work. A few of the things that have stuck with me:

My take right now: GitHub Copilot is the real deal and helpful, today. On the low-end, it’s the most useful autocomplete I’ve used. On the high-end, it’s like having a pair who has memorized countless APIs (with a somewhat fallible memory) but can also type in boilerplate bits of code quickly, so I can edit, verify, and correct them.

I don’t expect the next few products I try to hit that mark. But, I suspect I will have a few LLM-based tools in my weekly rotation by the end of the year.


Err the Blog, revisited

Before there was GitHub, there was Err the Blog. Chris Wanstrath and PJ Hyett wrote one of the essential blogs of the early Rails era. Therein, many of the idioms and ideas we use to build Rails apps today were documented or born.

I’d figured this site was offline, as are most things from the mid-2000s. Lo and behold, it’s still online in its pink-and-black glory. Lots of nostalgia on my part here.


Make code better by writing about it

Writing improves thoughts and ideas. Doubly so for thoughts and ideas about code.

Writing, about software or otherwise, is a process wherein:

  1. thoughts and ideas are clarified
  2. ideas are transferred to colleagues
  3. culture (of a sort) is created by highlighting what’s essential and omitting what’s transient

Documenting code, as a form of writing, is a process wherein:

  1. the concepts and mechanics in the code are clarified
  2. what’s in our head today is made available to our teams (and ourselves) in the future
  3. culture happens by highlighting what’s intended and what’s “off the beaten path” when working with this codebase

I suspect that open source is (often) of higher quality than bespoke, proprietary software because it has to go through the crucible of documentation. Essentially, there’s a whole other design activity you have to go through when publishing a library or framework that internal/bespoke software does not.

I can’t objectively verify this. Subjectively, when I have made the time to write words about a bit of code I wrote, it has resulted in improving the design of the code along the way. Or, I better understand how I might design the code in the future.

Writing is a great tool for designing code. Use it more often!


Turn the pages. Read the code. Hear the words.

“Turn every page. Never assume anything. Turn every goddamned page.” — Robert Caro, Working

So goes the wisdom super-biographer Robert Caro received from a mentor when he was an investigative reporter in New York. Caro went on to apply this energy and depth to a sprawling biography of New York master builder Robert Moses and four volumes on the life of LBJ.

I like the energy, determination, and purpose of Caro’s advice. In his writing, Caro takes a maximalist1 perspective2. He looks to understand the system. Caro read every original document in every archive he could find (“turning the pages”) to ensure he fully grasped the context of historical events and didn’t miss any detail that might change their interpretation. Caro tries to load the whole state of his subject into his head and notes. Only then does he start writing his expansive biographies.

1. Read the code

Building software benefits from the same energy and determination displayed by Caro. As I’m working on a project, I flip between code I’m writing, code I’m using, and adjacent systems. Once I’ve read enough to have the problem and system in my head, I can move through the system and answer questions about it with that “everything is at my fingertips” feeling3. Fantastic.

Recommended: read third-party libraries, frameworks, standard/core/prelude libraries. Read demo code. Find inspiration, learn something new, and build the muscle for reading code with confidence.

2. Hear the words

When I’m really listening, avoiding the urge to think through what I’ll say next, I’m doing my best work as a coach or mentor. When I really hear and understand what a colleague is trying to accomplish or solve, it’s a bigger win for everyone.

Subsequently, I can switch to brainstorm or solution mode. Not before.

Recommended: literally listen to what they’re saying. Especially for the leaders and managers out there. Get the context needed to understand what they’re thinking. Ask clarifying questions until you’re sure you understand what they’re thinking. Don’t start responding until you’re sure you understand the context and the kind of response4 expected of you.

3. Build a model

Reading (words and code) and listening are great ways to build a mental model of how an organization, system, team, or project works. That said, a model is mostly predictions, not rules or hard-won truths.

To verify your model, you have to get hands-on at some point. A model is likely invalid unless it’s been applied hands-on to the system in question. Make a change or propose an updated process. See what happens, what breaks, who pushes back. Building (with code, with words) upon the model will evolve your understanding and predictions in ways that further reading or listening will not.

Recommended: turn reading words, reading code, and listening into a model of how your code-base, team, or organization works together. Apply that model to a problem you’re facing and use the results to improve your predictions on what actions will produce welcome outcomes. Rinse and repeat.

4. Go forth and deeply understand a system

With due credit to Robert Caro, I suggest doing more than “turning the pages”:

“Read the code. Read every damn function or module you’re curious about.” — me, a few months ago

"Listen to what they're saying. Hear every damn word until they're done talking." — me, several weeks ago

Next time you think “I need more context here” or “that seems like magical code/thinking”, go deeper. Take 15 minutes to read the code, or listen 15 seconds longer to your conversation-mate.

Turn the pages. Read the code. Hear the words.

  1. Aside from Working, quoted above, all of his books are doorstoppers.
  2. Extensively aided, it should be noted, by further research and organization by his wife.
  3. Aka Fingerspitzengefühl
  4. e.g. commiserating, brainstorming, un-blocking, taking action, etc.

Offloading fast operations in Ruby by data structure

Noteflakes: A Compositional Approach to Optimizing the Performance of Ruby Apps — the idea is to offload “inner-loop”-type operations from Ruby to C-extensions. The clever twist is this happens via data-structure-as-language. Ruby being Ruby, you can wrap a DSL around the data structure generation to reduce the context switch from Ruby to offloaded operations.
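A toy illustration of the shape of the idea (hypothetical names, not Noteflakes’ actual API): a small DSL builds a plain data structure describing the work, and that structure is what gets handed to native code.

# The DSL produces a nested array, the "language" a C extension would evaluate
# in a tight loop instead of dispatching back into Ruby for every element.
class Program
  attr_reader :ops

  def initialize(&block)
    @ops = []
    instance_eval(&block)
  end

  def select(field)
    @ops << [:select, field]
  end

  def map(field)
    @ops << [:map, field]
  end

  def sum
    @ops << [:sum]
  end
end

program = Program.new do
  select :active
  map    :amount
  sum
end

p program.ops
# => [[:select, :active], [:map, :amount], [:sum]]
# A native extension (hypothetically, something like FastOps.run(program.ops, rows))
# would walk this structure without re-entering Ruby per row.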

There’s precedent for the approach: if you squint, it’s not unlike offloading the math for computer graphics or machine learning to a GPU. That said, the speed-up is unlikely to be as dramatic.

I hope to hear more of this approach in the future!

Adjacent: “it’s wild how much of the 2021 programming ecosystem is declarative data structures evaluated by recursive functions.”


Code minutiae, October 23, 2017

For some reason, identifier schemes that are globally unique, coordination-free, somewhat humanely representable, and efficiently indexed by databases are a thing I really like. Universally Unique Lexicographically Sortable Identifier (ulid, for humans) is one of those things. Implementations are available for dozens of languages! They look like this: 01ARZ3NDEKTSV4RRFFQ69G5FAV.
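For a feel of how those 26 characters come about, here’s a toy generator (a sketch of the spec, not the API of any particular gem): a 48-bit millisecond timestamp followed by 80 random bits, encoded in Crockford base32.

require "securerandom"

CROCKFORD32 = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def toy_ulid(time = Time.now)
  # 48-bit timestamp (ms) in the high bits, 80 random bits in the low bits.
  value = ((time.to_f * 1000).to_i << 80) | SecureRandom.random_number(2**80)
  # 26 base32 characters cover 130 bits; the top bits are zero-padded.
  (0...26).map { |i| CROCKFORD32[(value >> (125 - i * 5)) & 0x1F] }.join
end

puts toy_ulid   # prints a 26-character identifier shaped like the example above

Because the timestamp occupies the most significant bits, plain lexicographic sorting of the strings orders them by creation time.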

Paul Ford’s website is twenty years old. For maybe half that time I’ve been extremely jealous of how well he writes about technology without being dry and technical. When I grow up, I’ll write like that!

How Awesome Engineers Ask For Help. So much good stuff there, I can’t quote it. There’s something in there for new and experienced engineers alike. In particular: don’t give up, actively participate in the process of getting unstuck, take and share notes, give thanks afterwards.

The best time to work on your dotfiles is on weekends between high-intensity project pushes at work. No better time to do some lateral thinking and improving of your workflow. Feels good, man.


You must be this tall to ride the services

If I were trying to convince myself to extract a (micro)service, today, I’d do it like this. First I’d have a conversation with myself:

  • you are making tactical changes slightly easier at the expense of making strategic changes quite hard; is that really the trade-off you're after?
  • you must have the operational acumen to provision and deploy new services in less than a week
  • you must have the operational acumen to instrument, monitor, and debug how your applications interact with each other over unreliable datacenter networks
  • you must have the design and refactoring acumen to patiently encapsulate the service you want to build inside your current application until you get the boundaries just right; only then does it make sense to start thinking about pulling a service out

I would reflect upon how most of the required acumen is operational and wonder if I’m trying to solve a design problem with operational complexity. If I still thought that operational complexity was worthwhile, I’d then reflect upon how close the code in question was to the necessary design. If it wasn’t, I would again kick the can down the road; if I can’t refactor the code when it’s objects and methods, there’s little hope I can refactor it once it’s spread across two codebases and interacting via network calls as API endpoints, clients, data formats, etc.

If, upon all that reflection, I was sure in my heart that I was ready to extract a service, it’d go something like this:

  • try to encapsulate the service in question inside the current app
  • spike out an internal API just for that service; this API will become the client contract (see the sketch after this list)
  • wrap an HTTP API around the encapsulation
  • make sure I have an ops buddy who can help me at every provisioning and deployment step, especially if this sort of thing is new and a monolith is the status quo
  • test the monolith calling itself with the new API
  • trial deploy the service and make some cross-cutting changes (client and server) to make sure I know the change process
  • start transferring traffic from the monolith to the service
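Here’s the kind of internal API I have in mind for that second step (a sketch with made-up names, and an in-memory hash standing in for real persistence): callers depend on the contract, never on the would-be service’s internals.

module Billing
  # The value object callers get back; no persistence types cross the boundary.
  Invoice = Struct.new(:id, :order_id, :total_cents, :status, keyword_init: true)

  # The client contract. If Billing is ever extracted, an HTTP-backed client
  # implements these same methods and the rest of the app doesn't change.
  class Client
    def initialize
      @store   = {}   # stand-in for the real persistence layer
      @next_id = 0
    end

    def create_invoice(order_id:, total_cents:)
      invoice = Invoice.new(id: (@next_id += 1), order_id: order_id,
                            total_cents: total_cents, status: "open")
      @store[invoice.id] = invoice
    end

    def fetch_invoice(id)
      @store.fetch(id)
    end
  end
end

client  = Billing::Client.new
invoice = client.create_invoice(order_id: 42, total_cents: 10_00)
client.fetch_invoice(invoice.id).status  # => "open"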

In short, I still don’t think service extraction is as awesome as it sounds on paper. But, if you can get to the point of making a Modular Monolith, and if you can level up your operations to deal with the demands of multiple services, you might successfully pull off (micro)services.


How methodical and quality might keep up with fast and loose

I’ve previously thought that a developer moving fast and coding loose will always outpace a developer moving methodically and intentionally. Cynically stated, someone making a mess will always make more mess than someone else can clean up or produce offsetting code of The Quality.

I’ve recently had luck changing my mindset to “make The Quality by making the quantity”. That is, I’m trying to make more stuff that expresses some aspect of The Quality I’m going for. Notably, I’m not worrying too much about whether I have An Eternal Quality or A Complete Expression of the Quality. I’m a lot less perfectionist and am doing more experiments with my own style to match the code around me.

I now suspect that, given the first two developers, it’s possible to make noticeably more Quality by putting little bits of thoughtfulness throughout the code. Unless the person moving fast and loose is actively undermining the quality of the system, they will notice the Quality practices or idioms and adopt them. Code review is the first line of defense: a chance to pump the brakes and inform someone moving a little too fast/loose that there’s a Quality way to do what they’re after without slowing down too much.

Sometimes, I’m an optimist.


One step closer to a good pipeline operator for Ruby

I’ve previously yearned for something like Elm and Elixir’s |> operator in Ruby. Turns out, this clever bit of concision is in Ruby 2.5:

object.yield_self {|x| block } → an_object
# Yields self to the block and returns the result of the block.

class Object
  def yield_self
    yield(self)
  end
end

I would prefer then or even | to the verbosely literal yield_self, but I’ll take anything. Surprisingly, both of my options are legal method names!

class Object

  def then
    yield self
  end

  def |
    yield self
  end

end

require "pathname"

__FILE__.
 then { |s| Pathname.new(s) }.
 yield_self { |p| p.read }.
 | { |source| source.each_line }.
 select { |line| line.match /^\W*def ([\S]*)/ }.
 map { |defn| p defn }

However, | already has 20+ implementations, either of the mathematical logical-OR variety or of the shell piping variety. Given the latter, maybe there’s a chance!

Next, all we need is:

  • a syntax to curry a method by name (which is in the works!)
  • a syntax to partially apply said curry

If those two things make their way into Ruby, I can move on to my next pet feature request: a module/non-global namespace scheme ala Python, ES6, Elixir, etc. A guy can dream!


Strange Loop 2017

I was lucky enough to attend Strange Loop this year. I described the conference to friends as a gathering of minds interested in programming esoterica. The talks I attended were appropriately varied: from very academic slides to illustrated hero’s journeys, from using decomposed mushrooms to create materials to programming GPUs, from JavaScript to Ruby. Gotcha, that last one was not particularly varied.

In short, most of the language-centric conferences I’ve been to in the past were about “hey, look at what I did with this library or weird corner of the language”, though the most recent Ruby/Rails conferences are more varied than this. By comparison, Strange Loop was more about “I did this thing that I’m excited about, and it’s a little brainy but not intimidating, and also I’m really excited about it.”

Elm Conf 2017

I started the weekend off checking out the Elm community. I already think pretty highly of the language. I would certainly use it for a green-field project.

Size, excitement, and employment-wise, Elm is about where Ruby was when I joined the community in 2005. Lots of excited folks, a smattering of employed folks, and a good technical/social setup for growth.

A nice thing about the community is that there is no “other” that Elm is set against. Elm code often needs to interface with JavaScript to get at functionality like location or databases, so they don’t turn their nose up at it. It’s a symbiotic relationship. Further, most Elm developers probably come from JavaScript, so it’s a pretty friendly relationship. This is a nice shift from the tribalism of yore.

It’s also exciting that Elm is already more diverse than Ruby was at the same point in its growth/inflection curve. Fewer dudes, more beginners, and none of the “pure Ruby” sort of condescension towards Rails and web development.

Favorite talks:

  • “Teaching Elm to Beginners” (no talk video), Richard Feldman. Using Elm at work requires teaching Elm to beginners. Teaching is a totally different skill set, disjoint from programming. When answering a question, introduce as few new concepts as possible. Find the most direct path to helping someone understand. It’s unimportant to be precise, include lots of details, or be entertaining when teaching. You can avoid types and still help students build a substantial Elm program.
  • If Coco Chanel Reviewed Elm, Tereza Sokol: Elm as seen through the lens of high and low fashion. Elm is a carefully curated, slow releasing collection of parts ala Coco Chanel. It is not the hectic variety of an H&M store.
  • Accessibility with Elm, Tessa Kelly: Make accessible applications by enforcing view/DOM helpers with functional encapsulation and types. Your program won’t compile if you forget an accessibility annotation. A pretty good idea!
  • “Mogee, or how we fit Elm in a 64x64 grid”, Andrew Kuzmin: A postmortem on building games with Elm. Key insight: work on the game, not on the code or engine. Don’t frivolously polish code. Use entity-component-system modeling. Build sprite/bitmap graphics in WebGL by making one pixel out of two triangles.

The majority of the talks referenced Elm creator Evan Czaplicki’s approach to designing APIs. He is humble enough that I don’t think this will backlash the way DHH’s opinions did with Rails.

By far the biggest corporate footprint in the community and talks was NoRedInk. Nearly half of the talks were by someone at the company.

Most practical talks from StrangeLoop

Types for Ruby: it seems like they’ve implemented a full-blown type system for Ruby. It’s got all the gizmos and gadgets you might expect: unions, generics, gradual typing. It applies all its checks at runtime though, and they didn’t say if it does exhaustive checking, so I’m not sure how handy it would be in the way that e.g. Elm or Flow are. On my list of things to check out later.

Level up your concurrency skills with Rust. Learning Rust’s concepts for memory and concurrency safety, i.e. resources, ownership, and lifetimes, can help you program in any language. Putting concurrency into a system is refactoring for out-of-orderness and most likely a retrofit of the underlying structure. Rust models memory like a resource, the way file handles or network sockets are modeled by the operating system. Rust resource borrowing in summary: if you can read it, no one else can write it; if you can write it, no one else can read or write it; borrows are checked at compile time, so there is no runtime overhead/cost.

GPGPU programming with Metal. Your processor core has a medium-sized arithmetic logic unit and a giant control unit (plus as much memory/cache as they can spare). A GPU is thousands of arithmetic logic units. Besides drawing amazing pictures, you can use all those arithmetic logic units to train/implement a neural network, do machine vision or image processing, run machine learning algorithms, and do any kind of linear algebra or vector calculus. Work is sent to the GPU by loading data/state into buffers, converting math instructions to GPU code and loading that into GPU buffers, and then letting the GPU go wild executing it.

Seeking a better culture and organization of open source maintainership (no talk video). Projects are getting smaller, more fragmented, and attracting no community (ed. the unintended consequence of extreme modularity?) Bitcoin and Ethereum have very little backing despite the astronomical amounts of money in the ecosystem. We need a new perspective on funding open source work. Consumption of open source has won, but production of open source is still in a pretty bad place.

How to be a compiler. Knitting is programming; you can even compile between knitting description pseudo-languages. The speaker implemented Design by Numbers, a Processing predecessor, as a transpiler to SVG.

Random cool things people are really doing

Measuring and optimizing tail latency. Activating instrumentation and “slow-path” techniques on live web requests that run so long they will fall into the 99th percentile. Switch processor voltage to “power up” a processor that’s running a slow request so it will finish faster, e.g. switch a core from low power/500MHz mode to high power/2GHz mode.

Really using functional ideas of composition and state in production, consumer-facing applications (e.g. the NY Times) and using ML-style type checkers with JavaScript (e.g. Flow and Elm).

My two favorite talks by far: Making digital art with JavaScript, WebGL, vdom and immutability. Scraping/querying/aggregating image data from various space missions (e.g. Jupiter and Pluto flybys).

Facebook stopped using datacenter routers and started building their own servers that program the networking chips a router would use from CentOS, basically giving them programmable routers that deploy the way you’d update infrastructure like Nginx or memcached. I wonder when/if treating network devices as software will scale down to your typical large company?

Strange Loop takeaways

  • a conference of diverse backgrounds and experiences is a better one
  • my favorite talks told a hero’s journey story through illustrations
  • folks in this sphere of technology are taking privacy and security very seriously, but the politics of code, e.g. user safety and information war, were not particularly up there in the talks I went to (probably by self-selection)
  • way more people are doing machine learning applications than I’d realized; someone said off-hand that we’d “emerged from the AI winter in 2012” and that struck me as pretty accurate
  • everyone gets the impostor syndrome, even conference speakers and wildly successful special effects and TV personalities like Adam Savage

If you get the chance, you should go to Strange Loop!


exa in 30 seconds

What is it? exa is ls reimagined for modern times, in Rust. And more colorfully. It is nifty, but not life-changing. I mostly still use ls, because muscle memory is strong and it’s basically the only mildly friendly thing about Unix.

How do I do boring old ls things?

Spoiler alert: basically the same.

  • ls -a: exa -a
  • ls -l: exa -l
  • ls -lR: exa -lR

How do I do things I rarely had the gumption to do with ls?

  • exa -rs created: simple listing, sort files reverse by created time. Other options: name, extension, size, type, modified, accessed, created, inode
  • exa -hl: show a long listing with headers for each column
  • exa -T: recurse into directories ala tree
  • exa -l --git: show git metadata alongside file info