Context is data to burst your bubbles
Context is a slippery topic that evades attempts to define it too tightly. Some definitions cover just the immediate surroundings of an interaction. But in the interwoven space-time of the web, context is no longer just about the here and now. Instead, context refers to the physical, digital, and social structures that surround the point of use.
Great design is built around people, not devices or software. Responsive design and native app UX are tools, not solutions. Instead, we should design software that solves a problem for a real person (not a power-user or one of our colleagues) given the devices available to them and the context of use they’re in.
A high information density display is no good to a parent trying to get their kids out the door. Documentation based on video tutorials is no good for someone riding a bus. A developer troubleshooting a service bottleneck needs to know more than the average response time.
As both designers of user experiences and developers of software, we need to get away from the desk and out amongst those we’re building for. It’s too easy to build for ourselves and our friends. We need to consider how others approach and use what we make. Armed with that context, we can design a solution for everyone, and not just those we share a bubble with.
Hyperthreading illustrated
I'm fond of saying hyperthreading is a lie. It's true though; a dual hyperthreaded core is nowhere near as awesome as four real cores. That's more provocative than useful, so let me draw you some pictures.
If you zoom way out, a single core, dual cores, and a single hyperthreaded core look like this:

Note how the hyperthreaded core is really a single core with an extra set of registers and instruction/stack pointers. The reason hyperthreading is a lie is you can't actually run four processes, or even four threads, at the same time. At best, you can run two threads with two others ready in the wings.
I'm a dilettante of processor architecture at best, but I think I can explain why chip designers would do this.

My best guess as to why you would design and release a hyperthreaded core is to increase the number of instructions you can retire (i.e. mark as completely executed) per clock cycle. Instructions retired per cycle is one of the primary metrics processor architects use for judging a design.
The enemies of instructions retired per clock cycle are memory accesses and branch mispredictions. When a processor has to go access a cache line or, worse, something out in main memory, it has nothing to do but wait. When a branch (i.e. a conditional or loop) is incorrectly speculatively executed (how awesome is it that processors start executing code paths before they even know if it's the right thing to do?), the processor ends up in the same predicament. Cache misses and branch mispredictions are at best a recipe for some overhead, and at worst a recipe for waiting around on main memory.
Hyperthreading attempts to solve this problem by keeping extra programs (threads or processes) waiting in the wings, ready to start executing as soon as another stalls waiting on main memory. This means our execution units, the things that actually do math and execute the logic of a program, are (almost) always utilized and retiring instructions. And that gets us to our happy place of more instructions retired per clock cycle.
Why not just throw more execution units in and have real cores ready to work? I'm not sure; there must be something about how modern processor pipelines work that I don't know, which makes it too expensive to implement. That said, hyperthreading (as I understand it) is a pretty clever hack for improving the efficiency of a processor.
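To make the latency-hiding idea concrete, here's a deliberately crude toy model in Scala. This is nothing like real silicon; it just models two hardware contexts sharing one core's execution units, where a stalled context steps aside instead of leaving the units idle.

```scala
// Toy model of SMT latency hiding (illustrative only, not real hardware).
// Two contexts share one set of execution units; when one stalls on a
// simulated cache miss, the other keeps retiring instructions.
case class Context(name: String, var stalledFor: Int = 0)

object SmtSketch extends App {
  val contexts = Seq(Context("T0"), Context("T1"))
  var retired = 0

  for (cycle <- 1 to 20) {
    contexts.foreach(c => if (c.stalledFor > 0) c.stalledFor -= 1)
    // Issue from the first context that isn't waiting on memory.
    contexts.find(_.stalledFor == 0) match {
      case Some(c) =>
        retired += 1
        if (cycle % 7 == 0) c.stalledFor = 5 // simulate a cache miss
      case None => // both stalled: the execution units idle this cycle
    }
  }

  println(s"Retired $retired instructions in 20 cycles")
}
```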
TextMate's beautiful and flawed extension mechanism
This is about how TextMate’s bundle mechanism was brilliant, but subtly flawed. However, to make that point, I need to drag you through a dichotomy of developer tools.
Composition vs. context in developer tools
What's the difference between a tool that developers work with and a tool developers often end up working against? Is there a useful distinction between tools that seem great at first, but end up loathed as time goes on? Neal Ford has ideas. Why Everyone (Eventually) Hates (or Leaves) Maven:
I defined two types of extensibility/programability abstractions prevalent in the development world: composable and contextual. Plug-in based architectures are excellent examples of the contextual abstraction. The plug-in API provides a plethora of data structures and other useful context developers inherit from or summon via already existing methods. But to use the API, a developer must understand what that context provides, and that understanding is sometimes expensive.
Composable systems tend to consist of finer grained parts that are expected to be wired together in specific ways. Powerful exemplars of this abstraction show up in *-nix shells with the ability to chain disparate behaviors together to create new things.
Ford identifies Maven and IDEs like Eclipse as tools that rely on contextual extension to get developers started with specific tasks very quickly. On the other hand, a composable tool exchanges task-oriented focus for greater adaptability.
Contextual systems provide more scaffolding, better “out of the box” behavior, and contextual intelligence via that scaffolding. Thus, contextual systems tend to ease the friction of initial use by doing more for you. Huge global data structures sometimes hide behind inheritance in these systems, creating a huge footprint that shows up in derived extensions via their parents. Composable systems have less implicit behavior and initial ease of use but tend to provide more granular building blocks that lead to more eventual power.
And thus, the crux of the biscuit:
Contextual tools like Ant and Maven allow extension via a plug-in API, making extensions the original authors envisioned easy. However, trying to extend it in ways not designed into the API range in difficulty from hard to impossible...
Contextual tools are great right up to the point you hit the wall of the original developer’s imagination. To proceed past that point requires a leap of one or two orders of magnitude in effort or complexity to achieve your goal which the original developer never intended.
Bundles are beautiful
Ford wrote this as a follow-on to a piece Martin Fowler wrote about how one extends their text editor. It turns out that the extension models of popular text editors, such as VIM and Emacs, are more like composable systems than contextual, plug-in based systems.
All of this is an extremely elaborate setup for me to sing the praises of TextMate. Amongst the many things it got very right, TextMate brilliantly walked the line between a nerdy programmer’s editor and an opinionated everyday tool for a wide range of developers. It did this by exposing its extension mechanism through two tools that every developer knows: scripts and regular expressions.
To add functionality to TextMate, you make a bundle. A bundle is a conventional directory layout; TextMate knows the difference between a template and a syntax definition by which folder it lives in. This works because developers know how to put things in the right folder. There were only ever five or so folders you needed to know about, so this was a simple mechanism that didn’t become a burden.
To tell TextMate how to parse a language and do nifty things like folding text, you wrote a bunch of regular expressions. If I recall, there were really only a few placeholders to wedge those regular expressions into. This worked great: though the “serious” tools use lexers and parsers, most languages are amenable to low-fidelity comprehension via a series of naive pattern matches. The downside was that languages that didn’t look like C were sometimes odd to work with.
In my opinion, the real beauty of TextMate’s bundles was that all of the behavioral enhancement was handled with shell scripts. Plain-old Unix. You could write them in Ruby, Python, bash, JavaScript, whatever fit your fancy. As long as you could read environment variables and output text (or even HTML), you could make TextMate do new things. This led to an absolute explosion of functionality provided by the community. It was a great thing.
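For flavor, here's the shape of such a command, as a hypothetical sketch. Real bundle commands were usually Ruby or bash; this one is in Scala purely to underline the “whatever fit your fancy” point. TM_SELECTED_TEXT is one of the environment variables TextMate hands to command scripts.

```scala
// A TextMate-style command: read the editor's environment, write the
// result to stdout, and the editor does something useful with it.
object CountWords extends App {
  val selection = sys.env.getOrElse("TM_SELECTED_TEXT", "")
  val words = selection.split("\\s+").count(_.nonEmpty)
  println(s"$words words selected")
}
```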
Downfall
Interestingly enough, TextMate is essentially a runtime for bundles. This is how VIM and Emacs are structured as well. TextMate just put a nicer interface around that bundle runtime. However, the way it did so was its downfall, at least for me.
Recall, a few hundred words ago, the difference between composable and contextual extensions. A contextual extension is easy to get going with, but comes up short when you imagine something the creator of the extension point didn’t. The phenomenal thing about TextMate was how well it chose its extension points and how much further those extension points took the program than what came before it. I’d estimate that about 90% of what TextMate ever needed to do, you could do with bundles. But the cost of that last 10% was brutal.
Eventually, I bumped up against this limitation with TextMate. I wanted split windows, I wanted full-screen modes, I wanted better ctags integration. No one could add those (when TextMate wasn’t open source, circa 2010) because they required writing Objective-C rather than using TextMate’s extension mechanism. And so, I ended up on a different editor (after several months of wandering in a philosophical desert).
The moral of the story
If possible, you should choose a composable extension mechanism (a full-blown language, probably) and use that extension mechanism to implement your system, à la Vimscript/VIM and elisp/Emacs. If you can’t do that, you can get most of the benefit by providing a plugin API, but you have to choose the extension points really, really well.
Austin's startup vibe
It's different from other towns. What's the Difference between Austin and San Francisco?:
Austin offers you more options, but greater variety means that, on the whole, Austinites don’t focus as intensely as in San Francisco. Austin’s defining characteristic (part of its slacker culture) is a belief that intensity isn’t always the best thing. Austin believes in variety and moderation. This affects the startup community. Austin, the city, will let you pick and choose from its buffet line, and then admire the smorgasbord you put together. Your lifestyle is a work of art in Austin, and I think the culture rewards you for how you live as much as what you do, often more so.
In my few visits to San Francisco, I’ve found that I cannot wrap my Texan brain around that town. Trying to really understand its startup culture with just a few visits to the city and Palo Alto is similar folly. But I did notice the intensity SF has; it’s not a bad way to describe the town.
That said, I think this is a pretty decent encapsulation of Austin. Austin is a slower town (slower even than Dallas) and revels in the variety of activities available to its people. The Austin tech community is more about smaller groups and individuals too. It’s not (always) about aim-for-the-stars startups or working for large companies using large technology from large vendors. It’s as much about a few people on a team or individuals hacking something out while enjoying their city, family, and friends.
Obviously, I dig that a lot.
Update: I should mention that, while it’s popular to write Austin off as a slacker town, there are a lot of people dedicated to their work and craft here. It’s not all tacos and BBQ. The events I go to most often are frequented by people who use their evenings to learn something new or talk shop while they’re making something. That is, I think, the most important factor of a startup community: the more people who are putting their evenings into making things, the more likely those things will end up awesome and grow into a business-like organism.
Senior VP Jean-Luc Picard, of the USS Enterprise (Alpha Quadrant division)
If you’re working from the Jean-Luc Picard book of management (a nice little Twitter account of Picard-esque tips on business and life), we can be friends. Consider:
Picard management tip: Be willing to reevaluate your own behavior.
And:
Picard diplomacy tip: Fighting about economic systems is just as nonsensical as fighting about religions.
But I’m not so sure about this one:
Picard management tip: Shave.
If you’re playing from home, the fictional characters that have most influenced my way of thinking are The Ghostbusters (all of them) and Jean-Luc Picard. I also learned everything I need to know about R&B from The Blues Brothers.
SoundCloud, micro-services, and software largeness
From a monolithic Ruby on Rails app to the JVM: how SoundCloud has transitioned to a hybrid approach, with Ruby apps intermingling with Scala and Clojure apps. I think some of their ideas of what is idiomatic Rails and how to operate Ruby are not exactly on center. But their approach to the problem of a large Rails app is right on: break it up into “micro-services” that, if you don’t like the code, you can rewrite quickly.
Lest you fear this is yet another “Rails doesn’t scale!” deck, they do make a key observation. “Rails, PHP, etc. are a very good choice to start something”. Once you get past “starting” and “growing” to “successful and challenging”, you’ll face the same level of challenge no matter what you choose: Ruby or Java, MySQL or Riak. All the technologies we have today are challenged when they grow large.
So don’t let applications and services get large. Easy to say; hard, but worthwhile, to practice.
Those Who Make, by hand
Those Who Make is a series about people who craft. Physical things, by hand, that don’t come out the same every time. I love watching people make things, and I doubly love hearing their passion for whatever it is they’re making. Even more enlightening, this is a very international series. It’s not all hipster shops in San Francisco, Portland, and Brooklyn; it’s everywhere.
This is delightful stuff.
[vimeo www.vimeo.com/58998157 w=500&h=250]
How coffee is made in a colorful shop in another country, shot in the “Vimeo style” (is this a thing?): that will always get me.
Thoughts on "Being a Senior Engineer"
On Being a Senior Engineer made the rounds late last year. Before I finished reading it, I felt it was pointing me down a path I hadn't realized was there but needed to go down. It's the kind of "yes, this!" writing that I often end up ineptly giving people a link to without the ability to explain why they should care or how amazing it is.
I chewed on the original article for a few months, following the links, re-reading it. Basically, I've been trying to completely consume this idea of the responsibilities and abilities of a mature engineer. Below are a bunch of quotes that struck a chord with me, plus follow-up ideas.
I expect a “senior” engineer to be a mature engineer.
Mature engineers seek out constructive criticism of their designs.
Here's an example of how I try to apply this: attempt to hold all the options (designs, causes, etc.) in your head. This is doubly important if you have identified a design or project plan as infeasible but it still appeals to those who don't have the whole thing in their head. Empathy and understanding of other points of view are crucial.
Being able to write a Bloom Filter in Erlang, or write multi-threaded C in your sleep is insufficient. None of that matters if no one wants to work with you.
A thousand times yes! I have often felt that internet culture lionizes those who are quick and merciless in cutting down those who don't agree or can't code as prodigiously as the original poster. A senior/mature engineer is not solely defined by typing the most code per day.
Be the engineer that everyone wants to work with.
Please, if you ever see me not being that engineer, tell me!
…they have a responsibility to others to make themselves interpredictable. In general, mature engineers are comfortable with working within some nonzero amount of uncertainty and risk.
This is from a section on making estimates. It's hard to make estimates because they feel like binding contracts. If you're working with the right people, it's OK; they're not a contract. Make a guess and help others you work with understand the level of entropy involved in your project reaching a milestone at a specific date.
This code looks good, I’m proud of myself. I’ve asked other people to review it, and I’ve taken their feedback. Now: how long will it last before it’s rewritten? Once it’s in production, how will its execution affect resource usage? How much do I expect CPU/memory/disk/network to increase or decrease? Will others be able to understand this code? Am I making it as easy as I can for others to extend or introspect this work?
- The only time is runtime, but a lot of developers focus on the static, build-time properties of their code.
- As a corollary, developers become the experts at the "hypothetical" of their code, and the ops team become the experts at the "practical" of their code. This isn't a good division of labor.
Generosity of spirit is one of our core engineering values, but also a primary responsibility of our Staff Engineer position, a career-level position. These engineers spend the time to make sure that more junior or new engineers unfamiliar with the tech or processes we have not only understand what they are doing, but also why they are doing it.
I've found it challenging that I'm so far removed from the struggles of a junior developer that in some ways I don't even comprehend them anymore. Trying to help those who have come up through Hungry Academy, even just a little, has paid dividends in understanding "junior" programmers and more experienced developers who don't have my experiences.
They know that they work within a spectrum of ideal and non-ideal, and are OK with that. They are comfortable with it because they strive to make the ideal and non-ideal in a design explicit.
Again: hold all the things in your head, even though you take only one path. For now. It's software; you can and will change your mind.
Further: write software such that doing the right thing is easy, the wrong thing is hard, and amending the shortcomings is possible at a later time.
Being empathetic in this sense means having the ability to view the project from another person’s perspective and to take that into consideration in your own work.
Hold all the people, and their conflicting goals, in your head too. Isn't engineering fun?
…never go to your boss with a complaint about anything without at least one (ideally more than one) suggestion for a solution. Even demonstrating that you’ve tried working the problem on your own and came up empty-handed is better than an empty complaint.
There will always be things that suck. Complaining about them feels good! Proposing, advocating, and working on solutions is better.
The issue with cognitive biases is that we can be blissfully unaware of when we are interpreting data with our own brains in ways that defy empirical data, and can have a surprising effect on how we get work done and work on teams.
For every time I wonder what cognitive bias I'm currently exhibiting, I'm sure there's two more times when I have no idea. His list of biases is well worth reading into.
Ten Commandments of Egoless Programming. Yes.
How people feel about technologies, technical decisions, and technical directions is just as important (if not more) than the facts about the details.
People are irrational. Work with it. People have scars earned from bad experiences. Deal with it. Everyone has succeeded in different ways and made the right and wrong inferences from it. Listen when people talk and speak to what they are excited and concerned about.
Feynman's mess of jiggling things
Richard Feynman, in the process of explaining rubber bands:
[youtube https://www.youtube.com/watch?v=baXv_5z7HVY&w=420&h=315]
The world is a dynamic mess of jiggling things, if you look at it right!
This simplification delights and amuses me. The great thing is its fractal truth: you can observe our lives at many levels and conclude that they are dynamic jiggling messes.
The Rite of March
INT. OFFICE: A team of enthusiastic young folk rush to get their “game changing” app ready for SXSW. A cacophony of phone calls, typing, and organizing swag.
EXT. PATIO: A team of folks that have done the SXSW ritual before look at their calendar, note it’s almost the middle of March, and shrug. They go back to drinking a tasty beverage and working at their own pace.
The double-tap
I use Alfred because I believe that my computer should be practically unusable to other people who try to use it. My goal is to put the things I use frequently close at hand. Conversely, the things I use rarely should be accessible without cluttering my most common workflows.
Last week, I came up with a way to bring the two or three applications I use all the time very close to hand. Ladies and gentlemen, I present to you the double tap:

Since I use VIM inside a terminal several hours a day, I want really quick access to iTerm 2. My thumb just happens to sit near the command key all day. Ergo, assigning a key to quickly switch to the terminal makes a lot of sense.
But it gets even better! Alfred knows about double-taps of the control, alt, and command keys. So you can assign an application to each of those keys and really quickly switch back and forth between them. It’s pretty rad.
My experience is that this works exactly how I’d want it to 80% of the time. A couple times a day, I will start to chord a different key combo and mysteriously end up in iTerm. It’s not disruptive, just a little odd at first, and I keep going about my business.
If you use Alfred with the Powerpack and love your keyboard, you should definitely start using double-taps.
Computers do what we tell them to, except when we give up
We tell ourselves, “a computer only does what we tell it to.” But, when it comes down to it, if we aren’t getting the result we want out of the computer, we often give in and do whatever it is the computer wants us to do.
I’m fascinated by this phenomenon. Novices do it when they’re confused, even a little afraid they may have done something wrong. Experts do it when they’re frustrated and upset that the computer is preventing them from doing whatever it is they actually wanted to do.
What’s it say about our increasingly dependent relationship with computers? At what point do we give up on our own goal and do what the computer wants so we can make progress? Is it really computers we’re giving in to, or the dysfunction of the relationship between the designer, the developer, and the user of a computer?
A maxim you could conduct your modern life by: beware technologists bearing a solution, lest it become another chore you have to tend to.
Twitter's optimizations
Data point: a few of the infrastructure pieces out of Twitter have been implemented in low-level, heavy-metal C, and they’re optimizing individual machines instead of the overall architecture. Today, twitter/fatcache, a memcached-on-SSDs:
To understand why network connected SSD makes sense, it is important to understand the role distributed memory plays in large-scale web architecture. In recent years, terabyte-scale, distributed, in-memory caches have become a fundamental building block of any web architecture. In-memory indexes, hash tables, key-value stores and caches are increasingly incorporated for scaling throughput and reducing latency of persistent storage systems. However, power consumption, operational complexity and single node DRAM cost make horizontally scaling this architecture challenging. The current cost of DRAM per server increases dramatically beyond approximately 150 GB, and power cost scales similarly as DRAM density increases.
It’s fascinating to observe Twitter’s architectural growth from the outside. They quickly exceeded the capacity of typical MySQL setups, then of Ruby and Rails, then memcached alone. They’ve got distributed filesystems, streaming distributed processing pipelines, and distributed databases. Now they’re optimizing down to the utilization of their hardware, taking advantage of the memory-like latencies of SSDs. When you start caring about power and the size of your index entries, you’ve reached a whole new level of Maslow’s hierarchy of scaling.
If trends continue and Twitter is a leader in how large-scale distributed systems are implemented, watch out. Twitter led many of us to Scala, ZooKeeper, and their own inventions like Storm and Finagle. Gird your programming and scaling fashion loins, because you’re about to learn a lot more about malloc, ERRNO, and processor architecture than you ever wanted to know!
Stella by Starlight
My latest weekend project, called "Stella by Starlight" after a Charles Mingus recording, was to build an analytics-style dashboard for looking at random metrics and events generated by a faked-out backend. I started with these rules for myself:
- Write every 30-60 minutes. If I don't keep myself honest in journaling this, I'll never get around to writing about it.
- Use Scala for the backend service. More on this in a moment.
- Learn D3.js. This is a primary goal of this exercise.
- Maybe use Coffeescript. More on this too, below.
And now, my notes from periodic progress reports. I've annotated them with GitHub commits at each step; you can also peek at the whole repo.
Get started with Scalatra
Per my motivation to use Scala, I figured I'd start with Scalatra. It integrates with Akka, which seems like a great thing, sticks pretty close to the Sinatra style of web app construction, and seems like it's probably approachable by a Scala beginner.
Strike 1 is that you need to use this giter8 tool. Luckily, it's available via Homebrew, so it's not a blocker.
Strike 2 is all the work needed to set up a CoffeeScript gizmo to work with Scalatra. I've been blocked on SBT-related stuff before, and the instructions don't match the style of SBT project that giter8 generated.
So I have a decision to make: should I plow forward with Scala and drop CoffeeScript or make a strategic retreat to the comfortable land of Ruby where there's probably a thing that will automatically compile CoffeeScript every time I hit a page?
For this weekend, I'm going to spend my effort getting better with Scala instead of CoffeeScript. The latter's domain is browsers, a domain I have chosen not to optimize myself for.
First commit: I have no idea what I'm doing.
Boilerplates
With one decision down, I need to figure out what my first real milestone is. It seems like a spike/prototype project like this requires two setup steps before real work can begin:
- Put all the project boilerplate in place. Get the backend service running and responding to requests. Decide on any front-end boilerplate I'm going to use (Twitter Bootstrap, et al.) and put it where the server will hand it off to a browser.
- Get your feedback loop working. Make a trivial change to the backend app, make sure it appears in the front-end. Make a front-end change and make sure everything changes properly.
Once I've got these two nailed down, I'm ready to actually iterate on the idea in my head.
I've got the Scalatra part of this working, and just fetched Bootstrap. Now I just need to get its boilerplate working and I can start actually working on an analytics dashboard.
Templates and cargo cults
So Scala can manipulate XML as a language-level thing. This is both terrifying and, in the case of emitting HTML inline within a Scalatra action, useful. But the limit is that not-quite-valid XML, even perfectly reasonable HTML, will cause your Scala program to flat-out not compile. Ergo, I decided it was time to bite the bullet and move my HTML bits into an actual template (commit).
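For reference, the inline style looks something like this sketch (the route and markup are made up):

```scala
// Inline XML literals in a Scalatra action: this compiles because the
// markup is well-formed XML. A bare, unclosed <br> would break the build.
get("/") {
  <html>
    <body>
      <h1>Stella by Starlight</h1>
    </body>
  </html>
}
```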
The move turned out to be pretty easy. Read the Scalatra view docs, skim the actual Scalate (the template engine Scalatra uses) docs, and you're mostly good to go. The only catch was that I had pre-generated HAML-style templates laying around, which were causing a weird error about title not being defined. Once I figured out I had cruft laying around and killed it dead, all was pretty good.
I cargo culted all the CSS from a Twitter Bootstrap example and ended up with something decent looking. Note to past-self: HTML and CSS are terrible, but things like Bootstrap will make it reasonably possible to put up a decent-looking app quickly without needing a designer or browser-bug expert.
The change loop for Scalatra is nice and quick when doing front-end work. The SBT feature that watches the filesystem for changes and automatically runs tasks on change is pretty handy and, IMO, a better place for that functionality than in something like Guard.
Let there be charts
Now I want to get Cubism in place. At first glance, I thought I was going against the grain here. Cubism has a slight tendency towards using Graphite or Cube as the metric source. However, the demo page for Cubism shows some charts using random data.
Peeking at the source showed me the way to creating a data source that isn't pulling from Graphite or Cube (commit). This saved me the effort of trying to reverse engineer the Graphite/Cube query APIs before I could make any progress at all.
This points to an important lesson of prototyping: when in doubt, steal from the example that looks like what I want to do. It's totally OK to cargo-cult things into your system at this point. Later on, I can come back and apply software engineering and craftsmanship. In the present, I want to make progress on exploring my idea.
Random numbers as a service
Now I want to emit some random numbers via JSON from the service. This ended up being a not-so-tough journey into actual Scala. Turns out the JSON support in Scalatra is pretty straightforward. I had to take a side trip through JodaTime, a library I'd heard about before but never worked with directly. Of course, that resulted in some temporary Maven confusion, but all was well in the end (commit).
I was pleased by how quickly one can go about emitting JSON from a Scalatra action. What you do is write a case class (somewhat analogous to a Ruby struct, but with more tricks up its sleeve) for the structure you're going to convert to JSON. Then you return one or more instances of that class from your action and the library handles the rest. All of this mostly made sense when I read the examples and converted them to my own thing, so I guess the basics of Scala are starting to stick. Happy!
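Pieced together from the Scalatra docs, the shape of it is something like the sketch below; the servlet and the Metric case class are my stand-ins, not the project's actual code.

```scala
import org.scalatra.ScalatraServlet
import org.scalatra.json.JacksonJsonSupport
import org.json4s.{DefaultFormats, Formats}

// One case class per JSON shape you want to emit.
case class Metric(timestamp: Long, value: Double)

class MetricsServlet extends ScalatraServlet with JacksonJsonSupport {
  protected implicit lazy val jsonFormats: Formats = DefaultFormats

  before() {
    contentType = formats("json") // render responses as JSON
  }

  get("/metrics") {
    // Return instances of the case class; the library does the rest.
    Seq(Metric(System.currentTimeMillis, math.random))
  }
}
```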
Better random numbers, an attempt
I wanted to generate more realistic data from the service. I figured I would port the random metric generator from the Cubism example's JavaScript to Scala. This would make it easier for me to grok the timeline windowing scheme that Cubism uses.
It ended up that porting this algorithm was a bit trickier than I thought. Oddly enough, you can paste the crux of the algorithm from JavaScript into Scala and it looks like valid Scala. However, doing so gave me compiler errors that took me a little while to work out. Basically, the algorithm expects to work with doubles, but the compiler infers integers if you specify a default value such as start = 0. Adding type annotations to the declarations resolved all of this; with the types worked out, keeping the Scala compiler happy was much easier.
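In miniature, the inference trap looks like this:

```scala
var start = 0          // inferred as Int, not Double
// start = start + 0.5 // error: type mismatch; found: Double, required: Int

var drift: Double = 0  // annotated, so the arithmetic stays in doubles
drift = drift + 0.5    // fine
```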
It turned out that the Cubism example, as I cargo culted it, passes timestamp strings to the service. It was getting late and the first few things I tried to parse timestamps in Scala didn't work out, so I decided to call it there (embarrassingly broken commit).
How'd I do?
On the bright side, I didn't get hung up on Maven dependencies, I roughed my way through the Scala type system, and I had a pleasant experience with Scalatra and Cubism. On the downside, I didn't get to streaming events to the browser from the server and I couldn't quite get random metrics flowing from the server into the browser.
These weekend hacks are like that. I learn about things I expected to learn about, and I learn about entirely different things too. I didn't expect to find myself pressing ahead with Scala, but doing so was its own kind of educational fun.
The nice thing about these weekend hacks is that they're just that: a hack over the weekend. It's not a big project that I am responsible for afterwards. But it's still enough progress that I can write about it here and share it on GitHub. That feels productive. Learning plus productivity feels really good!
Don't isolate yourself
As a remote developer, it's tempting to create an environment where all you do is focus on churning out the code you're paid to write. Minimal email distractions, no noise, meetings and chats only when you want them. Seems pretty ideal on paper!
I've found the exact opposite. Checking out of a team like that, even if I'm fulfilling all my duties, robs me of valuable context. It's handy to know what other people are working on, when they're succeeding, and how they're learning from failures. It might not directly relate to my work, but it helps to stay aware of the environment into which your work fits.
I recently "turned on the floodgate" for the development organization around me. In our GitHub install, I picked one or two projects from each development team to follow. Since most teams use a pull-request workflow, I get a few dozen emails per day that give me the chance to peek into the cadence of a team's work. This fills in context I miss in Campfire or your typical email broadcast.
My job as a developer isn't to know all the things going on; I'm not suggesting you keep close tabs on every project. Instead, I'm trying to keep my finger on the pulse of colleagues on other teams. I find myself better prepared to help them out and make my own projects fit in with where the organization as a whole needs to go.
Adam’s Law of Redis
No matter how many times you tell everyone to not use KEYS, there remains a non-empty set of people who think they can use KEYS.
You can’t use KEYS because it has to look at every key in the database. Even if you use a prefix pattern to narrow the scope.
Don’t use KEYS. If that means you need to redesign your schema, you have no choice but to redesign your schema.
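A sketch of what that redesign can look like, using the Jedis client from Scala (all the names here are made up): maintain your own index of keys in a set, and enumerate that instead of the whole keyspace.

```scala
import redis.clients.jedis.Jedis
import scala.collection.JavaConverters._

object SessionStore extends App {
  val redis = new Jedis("localhost", 6379)

  // On write, record the key's id in an index set alongside the data.
  def saveSession(id: String, payload: String): Unit = {
    redis.set(s"sessions:$id", payload)
    redis.sadd("sessions:index", id)
  }

  // On read, consult the index: SMEMBERS costs O(size of this one set),
  // not O(every key in the database) like KEYS does.
  saveSession("42", "some payload")
  val ids = redis.smembers("sessions:index").asScala
  ids.foreach(id => println(redis.get(s"sessions:$id")))
}
```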
Thoughts on (Programming) Scala
On a whim, I flew through Programming Scala this weekend. I’ve had the book for a while, and actively tried to read it before. But this time, it stuck.
All the ideas in Scala are fascinating for a language nerd. It’s the best instance I know of, so far, where ideas from object-oriented and functional programming are combined intentionally and at scale to produce a language that developers use on a day-to-day basis. For a language nerd like me, it’s fun to see how all those ideas play out together.
That said, there is a lot of language lawyering. Having to write a chapter on scoping and public/protected/private rules in OO seems like a demoralizing thing for the authors to tackle. And all those hybrid OO/FP ideas come at a conceptual cost; it seems like there’s a lot to know. I’ve noted before that I’m very interested to see how Scala does in the marketplace of minds. It’s a very large language, but I think it’s large in a way that is already familiar to developers. So it could end up that Scala isn’t a great beginner language, but is fine for someone who already knows one FP and one OO language.
I should note that this isn’t my first Scala rodeo. I’ve tried, at various times, to tinker and hack on little projects or simply to grok other people’s code. The blocker on these previous attempts is that I, personally, am sbt-challenged. Whenever I’ve tried to compile projects or add dependencies to my own, I end up in an sbt-shaped trough of disillusionment. Part of this is my ongoing war of attrition with Maven. Part of this is, well, I’m not sure yet. I should note that I can mostly make Leiningen, also Maven-based, work. So it’s not entirely Maven’s fault.
Most interesting to me is that Scala could have the versatility of Ruby, wherein one can grow a program from a script, to a message-based program, to a hybrid OO/functional system, to a multi-machine distributed program. You can’t say this about other JVM languages like Java or Clojure. The JVM is a gift and a curse. It makes Scala and Java impractical for scripts, due to startup time. But once your program is somewhat grown-up, Hotspot and the JVM’s excellent concurrency features come in quite handy.
More specific to the book, it cleared up some ideas I’d previously found confusing:
- What’s a method call/operator overloading? It’s an object, a dot or space, and then a method/operator name.
- Implicit methods/views; if you declare methods with the implicit keyword and they have the right type signature, the compiler will use them to coerce objects to your own types, giving you many of the benefits of something like Ruby’s open classes (see the sketch after this list).
- How functions, maps, and ad-hoc data structures that are typical in Ruby map to actual types in Scala; lots of things get converted to Function and Tuple objects by the compiler, which makes sense when you think about it in an ML-ish everything-is-strongly-typed way.
- Internal DSLs feel weird, but parser combinators for external DSLs seem like they would be great.
- for-comprehensions; I guess I’ve read enough about them in Clojure now that they make sense in Scala. It’s worth noting that Scala’s for-comprehensions feel simpler than Clojure’s.
- Self-type annotations; I’ve seen this all over in Scala code and didn’t quite understand what was going on. It sure does have an odd name.
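Here’s a minimal sketch of an implicit view, my own example rather than the book’s:

```scala
object ImplicitViewSketch extends App {
  class Shout(s: String) {
    def shout: String = s.toUpperCase + "!"
  }
  // With this definition in scope, the compiler converts a String to a
  // Shout wherever a .shout call needs one. That covers a lot of what
  // Ruby's open classes get used for.
  implicit def stringToShout(s: String): Shout = new Shout(s)

  println("textmate".shout) // prints TEXTMATE!
}
```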
And some things are still confusing to me:
- Type bounds, variance; when will I need these?
- Linearization of object hierarchies; rules, I don’t like learning them!
- Tail-calls/trampolines; the JVM makes this a headhurt.
- Path-dependent types; not sure when I’d really need this, but it’s good to know about.
- Anything that’s a band-aid over type-erasure; again, the JVM is sometimes a headhurt.
I don’t have any projects that imminently need the things that Scala provides. Further, I think imposing Scala on a team that’s already succeeding at Ruby or Python is a stretch. You have to be in a place where you really need low, predictable latencies to accept the tradeoff of working with a much larger language.
That said, it’s totally a reasonable choice as a way to get yourself onto the JVM; if Clojure isn’t your thing, Scala probably is. Even if neither are your thing, don’t be a wuss; read some code in either language and expand your mind to reduce your headhurt.
Lessons from premature design
Lessons from Premature Abstractions Illustrated. I’ve run afoul of all three of these:
- Make sure you have someone on the team or externally available that will keep the critical, outside look at the project, ready to scream and shout if things turn bad.
- Don’t let your technical solution influence your design decisions. It’s the tool that needs to fit the job, not the other way round.
- Don’t build abstractions as long as you have no proven idea on how the levels below that abstraction will look like.
I could have used an outside, trusted voice to gently reel me in when I went off into the unproductive weeds. Someone to ask “how will this help the team in two weeks?”, someone to point out ideas that might be great but have only achieved greatness in my head. A person who is asking questions because they want me to succeed, not because they’re trying to take me down a notch.
I have rushed into implementing the first idea in my head. Sometimes I’ve convinced myself that my first idea is the best, despite knowing I need to review it from more angles. I’ve jumped into projects with a shiny new tool and a bunch of optimism, only to cut myself on a sharp edge later on.
I’ve built systems that look fine on their own, but don’t fit into the puzzle around them. I’ve isolated myself building up that system, afraid to figure out how to fit my system into the puzzle in a useful way. I’ve used mocks and stubs to unintentionally isolate myself from the real system.
Basically, these are all really good ways to paint yourself into a corner. It seems like being in a corner with a shiny new system/tool/abstraction would be nice. Unfortunately, my experience is that once you have to make sense of that abstraction in a team, things get dicey.
It’s dangerous to run a software project on your own! Take a friend.
Semantics/Empathy
People argue about words all the time. In the past two weeks, I've participated and watched as nerds unproductively tried to convince each other that they are incorrectly using the words bijection, hypermedia, and dependency injection. Nerds easily fall into this trap because many of us are fascinated by knowledge, sharing that knowledge, and teaching that knowledge.
Arguing about words is fun. Arguing about words is practically useless.
Semantics are good
Words are a tricky business. An overused, overloaded, or ambiguous word isn't particularly useful. "Synergize", "web-scale", or "rockstar" are mush words that don't convey much meaning anymore. It's tempting to think that encouraging others to be judicious in their use of words and mind the specific context and meaning of their statements could move the needle in making the world better.
On the other hand, human interaction is fidgety. We all have differing experiences, so the way we think and feel about things can vary wildly. You might say "we should pivot our business", remembering the time you did so and took the company in a much better direction. I might hear you say "pivot" and think about all the abuses of the word in startup discourse or all the companies that have "pivoted" and still failed. Even though we are thinking of the same definition of "pivot", we are thinking different things.
Semantics are good for getting two people in the same mental ballpark. I can say "web framework" and expect you to know I'm not talking about dogs, tacos, coffee, or compilers. You and I may differ on what a web framework is and what it does, but at least we're both thinking of things that help developers build web-based applications. We may not be talking about the same thing, but we're close.
This is why I think strong semantics are interesting, but not a silver bullet. Very rarely have I solved a problem by applying stronger semantics to the words used in the discussion of the problem. Never have I solved a problem by telling someone they are using the wrong semantics and that they should correct themselves.
We can argue about words all we want, but it's not getting us any closer to solving the real problem. The problem we started talking about before we decided to have a side argument on the meaning of a word.
Empathy is better
Empathy is a better tool. When someone misuses a word, I stop myself and think, "OK, let's allow that one to slide. What are they really trying to say?" Rarely does someone misuse a word on purpose. It's more likely they know it in a different context; discovering that context and matching it to your own is how the conversation moves forward.
If you say "we need to pivot our web commerce company to a web framework consultancy", I may not know precisely what you mean by "pivot", "web framework", or "consultancy" but I can get on the same page with you. You think we need to change directions and that some services-oriented business based on helping people build web applications is the way to move forward. Armed with that, I can ask you questions about why we need to change directions, what that web framework looks like, or how we would change ourselves to a services-oriented company. It's not as important that you get the words right; it's important that we find a way to talk about the same thing.
Words are fun, but what's useful is to figure out what the other person is thinking or feeling and talk to that. Telling someone they're wrong creates tension, and it's not productive anyway. I'd rather talk about how we can make better programs or better understand our world than quibble over the meanings of a few words.
Words are a lossy representation; they can't possibly connote the full meaning and nuance of any idea of interesting size. Don't get caught up in skirmishes about the marginally important details of semantics. Use words to show others what you're thinking and guide them towards your understanding of the problem and a proposed solution.
Reflecting on Ruby releases
Ruby 1.8 brought us a couple changes that made many kinds of metaprogramming easier, plus a whole bunch of library additions that made Ruby feel more "grown up". Without seeking external libraries, one could write Ruby to solve many problems developers face in commonplace jobs. I wasn't around for Ruby 1.6, but I've been thinking of Ruby 1.8 as a transition from "better Perl or Java" to "better Smalltalk".
Ruby 1.9 brought us features that make some functional programming idioms easier. Lambdas, i.e. anonymous functions, require less syntax and are better defined. Enumerators make it possible to use features of Enumerable, itself a very functional-esque feature, in more places. Symbol-to-proc makes it easier to pass methods around as blocks, another FP-esque practice. I might say that Ruby 1.9 is the "better MatzLisp" version of Ruby.
Ruby 2.0 is bringing us features that, on the surface, make it easier for Rails to extend the Ruby language via ActiveSupport. I think that's too shallow a reading. The new tools in Ruby 2.0 (excepting the highly controversial refinements) make it easier to cleanly add functionality to Ruby's core objects and library. Reducing the cost of extending the core makes it possible for more libraries and applications to judiciously make high-leverage additions to the lower levels of Ruby. That seems like a pretty good thing.
I can't find a source for this, but I could have sworn I once read that all programming is language design. It was probably related to Lisp, where you're arguably directly manipulating the AST much of the time. If the changes in Ruby 2.0 can take us closer to this level of program design, where we think more about building language up to the problem domain instead of objects and mechanism, sign me up.