A food/software change metaphor
Are You Changing the Menu or the Food? Incremental change, the food metaphor edition. It's about software and startups. But food too. Think "software" when he says "food". Just read it, OK?
How do you devop?
I’m a sucker for a good portmanteau. “Devops” is a precise but not particularly rewarding concatenation of “development” and “operations”. What it lacks in sonic fun, it makes up for in describing something that’s actually going on.
For example, the tools that developers build for themselves are taking cues from the scripts that the operations team cobbles together to automate their work. In the bad old days, you manually configured a server after it was racked up. Then there was a specific loadout of packages, a human-readable script to work from, a disk image to restore from, or maybe even a shell script to execute. Today, you can take your pick from configuration management systems that make the bootstrap and maintenance of large numbers of servers a programmatic matter.
It’s not just bringing up new servers that developers are dabbling in. Increasingly, I run across developers who are really, really interested in logging everything, using operational metrics to guide their coding work, and running the deploys themselves. In some teams, the days of “developers versus operations” and throwing bits over walls are over. This is a good thing.
You devop and don’t know it
Even if you don’t know Chef or Puppet, even if you never once ssh into a database server, even if you never use the #devops hashtag or attend a similarly branded conference, you’re probably dabbling in operations. You, friend, are so devops, and you don’t even know it.
You use a tool or web app to look at the request rate of your application or the latency of specific URLs, and you use that information to decide where to focus your performance efforts. You watch the errors and exceptions that your app encounters and valiantly fix them. Browsers request images, scripts, and stylesheets from your site, and you work to make sure they load quickly, the site draws as soon as possible, and users from diverse continents are well served. You run deploys yourself, you build an admin backend for your app, you automate the processes needed to keep the business going. You consult with operations about what infrastructure systems are working well, what could improve, and what tools might serve everyone better.
All of these things skirt the line between development and operations. They’re signs of diversifying your skillset, better helping the team, and taking pride in every aspect of your work. You can call it devops if you want, but I hope you’ll consider it just another part of making awesome stuff.
The Current and Future Ruby Platform
Here we are, in the waning months of 2011. Ruby and its ecosystem are a bit of an incumbent these days. It’s a really great language for a few domains. It’s got the legs to become a useful language for a couple of other domains. There are a few domains where I wouldn’t recommend using it at all.
Ruby’s strong suit
Ruby started off as a strong scripting language. The first thing that attracted non-tinkerers was a language with the ease-of-hacking found in Perl plus the nice object-oriented features found in Java or Python. If you see code that uses special globals like $! and $: or weird constants like ARGF and DATA, and it mostly lacks classes and methods, you’re looking at old-fashioned scripting code.
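To make that concrete, here’s a hypothetical sketch of the old scripting style: special globals, top-level methods, no classes in sight. (ARGF and the DATA constant need a file or stdin to read from, so they only appear in comments here.)

```ruby
# Old-fashioned Ruby scripting: lean on the special globals.
# ($0 is the program name; $: is the library load path.
# ARGF would stream whatever files were named on the command line.)
puts "running #{$0}, load path has #{$:.size} entries"

# $! holds the most recently raised exception inside a rescue.
def parse_port(str)
  Integer(str)
rescue ArgumentError
  warn "bogus port #{str.inspect}: #{$!.message}"
  8080
end

puts parse_port("3000")  # => 3000
puts parse_port("lots")  # warns on stderr, falls back to 8080
```

Nobody would hold this up as exemplary design, but for a fifty-line tool it gets the job done with very little ceremony.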
As Ruby grew, it got a niftier way of doing object-oriented programming. Developers started to appreciate it in the same places they might use Java or Smalltalk. A few of the bravest started building production systems using a nice object-oriented language without the drawbacks of a high-maintenance type system (Java) or the isolation of an image (Smalltalk). This code ends up looking a little like someone poking Ruby with their Java brain; they’re not using the language to its fullest, but they’re not abusing it either.
Out of the OO crowd exploded the ecosystem of web frameworks. There were a few contenders for a while, but then Rails came and sucked the air out of the competitive fire. For better or worse, nearly everyone doing web stuff with Ruby was doing Rails for a few years. This yielded buzz, lots of hype, some fallings out, some useful forward progress in the idioms of software development, and a handful of really great businesses. At this point in Ruby’s life, its interesting properties (metaprogramming, blocks, open classes) were stretched, broken, and put back together with a note pointing out that some ideas are too clever for practical use.
As Ruby took off and more developers started using it, there was a need for integration with other systems. Thus, lots of effort was put into projects to make Ruby a part of the JVM, CLR, and Cocoa ecosystems. Largely, they delivered. At the end of 2011, you can use Ruby to integrate with and distribute apps for the JVM and OS X, and maybe even Windows. This gave Ruby credibility in large “enterprisey” shops and somewhat freed Ruby from depending on a single implementation. The work to make this happen is non-trivial and thankless but hugely important even if you never touch it; when you see one of these implementers, thank, hug, and/or bribe them.
Ruby could go to there
WARNING: Prognostication follows. Your crystal ball is possibly different than mine.
Scala, a hybrid functional/object-oriented language for the JVM, is a hot thing these days. A lot of people like that it combines the JVM, the best ideas of object-oriented programming, and then swizzles in some accessible and useful ideas from the relatively untapped lore of functional programming (FP). So it goes, Ruby already does one or two of these things, depending on how you count. The OO part is in the bag. Enumerable exposes a lot of the same abstractions that lie at the foundation of FP. If you’re using JRuby, you’re getting many of the benefits of the JVM, though Scala does one better in this regard right now. Someone could come along and implement immutable, lazy data structures and maybe a few combinators and give Ruby a really good FP story.
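The Enumerable claim is easy to see in practice. This sketch (with made-up data) chains select, map, and reduce, which are FP’s filter, map, and fold by other names:

```ruby
# Enumerable already speaks much of FP's vocabulary: filtering,
# mapping, and folding without any explicit loops or mutation.
prices = { "coffee" => 3.50, "bagel" => 2.25, "juice" => 4.00 }

# select/map/reduce chain like composed functions
splurge_total = prices.
  select { |_, price| price > 3.0 }.
  map    { |_, price| price }.
  reduce(0.0) { |sum, price| sum + price }

puts splurge_total # => 7.5
```

What Ruby lacks out of the box, as noted, is the immutable and lazy data structures that would round out the story.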
Systems programming is traditionally the domain of C and C++ developers, with Java and Go starting to pick up some mindshare. Think infrastructure services like web servers, caches, databases, message brokers, and other daemon-y things. When you’re hacking at this level, control over memory and execution is king. Access to good concurrency and network primitives is also important. Ruby doesn’t do a great job of providing all of these right now, and Matz’s implementation might never rank highly here. However, one of the promising aspects of Rubinius is that they’re trying very hard to do well in terms of performance, concurrency, and memory management. If Rubinius can deliver on those efforts, offer easily hacked trapdoors to lower level bits, and encourage the development of libraries for network and concurrent programming, Ruby could easily turn into a good solution for small-to-medium sized infrastructure projects.
Distributed systems are sort of already in Ruby’s wheel house and sort of a stretch for Ruby. On the one hand, most Ruby systems already run over a combination of app servers and queue workers, storing data in a hodgepodge of browser caches, in-heap caches, and databases. That’s a distributed application, and it’s handy to frame one’s thinking about building an application in terms of the challenges of a distributed system: shared state is hard to manage, failure cases are weird and gnarly, bottlenecks and points of failure are all over the place. What you don’t see Ruby used for is implementing the infrastructure underneath distributed applications. Hadoop, Zookeeper, Cassandra, Riak, and doozerd all rely on both the excellent concurrency and network primitives of their respective platforms and on the reliability and performance those platforms provide. Again, given some more progress on Ruby implementations and good implementations of abstractions for doing distributed messaging, state management, and process supervision, Ruby could be an excellent language to get distributed infrastructure projects off the ground.
Unlikely advances for Ruby
Embedded systems, those that power your video game consoles, TVs, cars, and stereos, rely on promises that Ruby has trouble keeping. C is king here. It provides the control, memory footprint, and predictability that embedded applications crave. Rite is an attempt to tackle this domain. The notion of a small, fast subset of Ruby has its appeal. However, developers of embedded systems typically hang out on the back of the adoption curve and are pretty particular about how they build systems. Ruby might make in-roads here, but it needs a killer app to achieve the success it currently enjoys in application development.
Mobile apps are an explosive market these days. Explosive markets go really well with Ruby (c.f. “web 2.0”, “AJAX”, “the social web”), but mobile is different. It’s dominated by vendor ecosystems. Largely, you’ve got iOS with Objective-C and Cocoa, and Android with Java and, err, Android. Smart developers don’t tack too far from what is recommended and blessed by the platform vendor. There are efforts to make Ruby play well here, but without vendor blessing, they aren’t likely to get a lot of traction.
Place your bets, gentlemen
Tackling the middle tier (object/functional, distributed/concurrent, and systems programming) is where I think a lot of the really promising work is happening. Ruby 1.9 is good enough for many kinds of systems programming and has a few syntactic sugars that make FP a little less weird. JRuby offers integration into some very good libraries for doing distributed and concurrent stuff. Rubinius has the promise to make those same libraries possible on Ruby.
Really sharpening the first tier (thinking about how to script better, getting back to OO principles, fine-tuning the web development experience, improving JRuby’s integration story) is where Ruby is going to grow in the short term. The ongoing renaissance, within the Ruby community, of Unix idioms and OO design is moving the ball forward; it feels like we’re building on better principles than we were just two years ago. The people who write Ruby will likely continue to assimilate old ideas, try disastrous new ones, and trend towards adopting better ways of building increasingly large applications.
When it comes to Ruby, go long on server-based applications, hedge your bets on systems infrastructure, and short anything that involves platforms with restricted resources or vendor control.
Your frienemy, the ORM
When modeling how our domain objects map to what is stored in a database, an object-relational mapper often comes into the picture. And then, the angst begins. Bad queries are generated, weird object models evolve, junk-drawer objects emerge, cohesion goes down and coupling goes up.
It’s not that ORMs are a smell. They are genuinely useful things that make it easier for developers to go from an idea to a working, deployable prototype. But it’s easy to fall into the habit of treating them as a top-level concern in our applications.
Maybe that is the problem!
What if our domain models weren’t built out from the ORM? Some have suggested treating the ORM, and the persistence of our objects themselves, as mere implementation details. What might that look like?
Hide the ORM like you’re ashamed of it
Recently, I had the need to build an API for logging the progress of a data migration as we ran it over many millions of records, spitting out several new records for every input record. Said log ended up living in PostgreSQL.[1]
Visions of decoupled grandeur in my head, I decided that my API should not leak its database-ness out to the user. I started off trying to make the API talk directly to the PostgreSQL driver, but I wasn’t making much progress down that road. Further, I found myself reinventing things I would get for free in ActiveRecord-land.
Instead, I took a principled plunge. I surrendered to using an AR model, but I kept it tucked away inside the class for my API. My API makes several calls into the AR model, but it never leaks that ARness out to users of the API.
I liked how this ended up. I was free to use AR’s functionality within the inner model. I can vary the API and the AR model independently. I can stub out, or completely replace the model implementation. It feels like I’m doing OO right.
Enough of the suspense, let’s see a hypothetical example
A User model. Every user has a name, a city, and a URL. We can all do this in our sleep, right?
I start by defining an API. Note that all it knows is that there is some object called Model that it delegates to.
class User
  attr_accessor :name, :city, :url

  def self.fetch(key)
    Model.fetch(key)
  end

  def self.fetch_by_city(key)
    Model.fetch_by_city(key)
  end

  def save
    Model.create(name, city, url)
  end

  def ==(other)
    name == other.name && city == other.city && url == other.url
  end
end
That’s a pretty straightforward Ruby class, eh? The RSpec examples for it aren’t elaborate either.
describe User do
  let(:name) { "Shauna McFunky" }
  let(:city) { "Chasteville" }
  let(:url)  { "http://mcfunky.com" }

  let(:user) do
    User.new.tap do |u|
      u.name = name
      u.city = city
      u.url = url
    end
  end

  it "has a name, city, and URL" do
    user.name.should eq(name)
    user.city.should eq(city)
    user.url.should eq(url)
  end

  it "saves itself to a row" do
    key = user.save
    User.fetch(key).should eq(user)
  end

  it "supports lookup by city" do
    user.save
    User.fetch_by_city(user.city).should eq(user)
  end
end
Not much coupling going on here either. Coding in a blog post is full of beautiful idealism, isn’t it?
“Needs more realism”, says the critic. Obliged:
class User::Model < ActiveRecord::Base
  set_table_name :users

  def self.create(name, city, url)
    super(:name => name, :city => city, :url => url)
  end

  def self.fetch(key)
    from_model(find(key))
  end

  def self.fetch_by_city(city)
    from_model(where(:city => city).first)
  end

  def self.from_model(model)
    User.new.tap do |u|
      u.name = model.name
      u.city = model.city
      u.url = model.url
    end
  end
end
Here’s the first implementation of an actual access layer for my user model. It’s coupled to the actual user model by names, but it’s free to map those names to database tables, indexes, and queries as it sees fit. If I’m clever, I might write a shared example group for the behavior of whatever implements create, fetch, and fetch_by_city in User::Model, but I’ll leave that as an exercise to the reader.
To hook my model up when I run RSpec, I add a moderately involved before hook:
before(:all) do
  ActiveRecord::Base.establish_connection(
    :adapter  => 'sqlite3',
    :database => ':memory:'
  )

  ActiveRecord::Schema.define do
    create_table :users do |t|
      t.string :name, :null => false
      t.string :city, :null => false
      t.string :url
    end
  end
end
As far as I know, this is about as simple as it gets to bootstrap ActiveRecord outside of a Rails test. So it goes.
Let’s fake that out
Now I’ve got a working implementation. Yay! However, it would be nice if I didn’t need all that ActiveRecord stuff when I’m running isolated unit tests. Because my model and data access layer are decoupled, I can totally do that. Hold on to your pants:
require 'active_support/core_ext/class'

class User::Model
  cattr_accessor :users
  cattr_accessor :users_by_city

  def self.init
    self.users = {}
    self.users_by_city = {}
  end

  def self.create(name, city, url)
    key = Time.now.tv_sec
    hsh = {:name => name, :city => city, :url => url}
    users[key] = hsh
    users_by_city[city] = hsh
    key
  end

  def self.fetch(key)
    attrs = users[key]
    from_attrs(attrs)
  end

  def self.fetch_by_city(city)
    attrs = users_by_city[city]
    from_attrs(attrs)
  end

  def self.from_attrs(attrs)
    User.new.tap do |u|
      u.name = attrs[:name]
      u.city = attrs[:city]
      u.url = attrs[:url]
    end
  end
end
This “storage” layer is a bit more involved because I can’t lean on ActiveRecord to handle all the particulars for me. Specifically, I have to handle indexing the data in not one but two hashes. But, it fits on one screen and it’s in memory, so I get fast tests for not much overhead.
This is a classic test fake. It’s not the real implementation of the object; it’s just enough for me to hack out tests that need to interact with the storage layer. It doesn’t tell me whether I’m doing anything wrong like a mock or stub might. It just gives me some behavior to collaborate with.
Switching my specs to use this fake is pretty darn easy. I just change my before hook to this:
before { User::Model.init }
Life is good.
Now for some overkill
Time passes. Specs are written, code is implemented to pass them. The application grows. Life is good.
Then one day the ops guy wakes up, finds the site going crazy slow, and sees that there are a couple hundred million users in the system. That’s a lot of rows. We’re gonna need a bigger database.
Migrating millions of rows to a new database is a pretty big headache. Even if it’s fancy and distributed. But, it turns out changing our code doesn’t have to tax our brains so much. Say, for example, we chose Cassandra:
require 'cassandra/0.7'
require 'active_support/core_ext/class'

class User::Model
  cattr_accessor :connection
  cattr_accessor :cf

  def self.create(name, city, url)
    generate_key.tap do |k|
      cols = {"name" => name, "city" => city, "url" => url}
      connection.insert(cf, k, cols)
    end
  end

  def self.generate_key
    SimpleUUID::UUID.new.to_guid
  end

  def self.fetch(key)
    cols = connection.get(cf, key)
    from_columns(cols)
  end

  def self.fetch_by_city(city)
    expression = connection.create_index_expression("city", city, "EQ")
    index_clause = connection.create_index_clause([expression])
    slices = connection.get_indexed_slices(cf, index_clause)
    cols = hash_from_slices(slices).values.first
    from_columns(cols)
  end

  def self.from_columns(cols)
    User.new.tap do |u|
      u.name = cols["name"]
      u.city = cols["city"]
      u.url = cols["url"]
    end
  end

  def self.hash_from_slices(slices)
    slices.inject({}) do |hsh, (k, columns)|
      column_hash = columns.inject({}) do |inner, col|
        column = col.column
        inner.update(column.name => column.value)
      end
      hsh.update(k => column_hash)
    end
  end
end
Not nearly as simple as the ActiveRecord example. But sometimes it’s about making hard problems possible even if they’re not mindless retyping. In this case, I had to implement ID/key generation for myself (Cassandra doesn’t implement any of that). I also had to do some cleverness to generate an indexed query and then to convert the hashes that Cassandra returns into my User model.
But hey, look! I changed the whole underlying database without worrying too much about mucking with my domain models. I can dig that. Further, none of my specs need to know about Cassandra. I do need to test the interaction between Cassandra and the rest of my stack in an integration test, but that’s generally true of any kind of isolated testing.
This has all happened before and it will all happen again
None of this is new. Data access layers have been a thing for a long time. Maybe institutional memory and/or scars have prevented us from bringing them over from Smalltalk, Java, or C#.
I’m just sayin’, as you think about how to tease your system apart into decoupled, cohesive, easy-to-test units, you should pause and consider the idea that pushing all your persistence needs down into an object you later delegate to can make your future self think highly of your present self.
[1] This ended up being a big mistake. I could have saved myself some pain, and our ops team even more pain, if I’d done an honest back-of-the-napkin calculation and stepped back for a few minutes to figure out a better angle on storage.
Relentless Shipping
Relentless Quality is a great piece. We should all strive to make really fantastic stuff. But I think there’s a nuance worth observing here:
Sharpen the edges, polish the surface and make it shine.
I’m afraid that some people are going to read more into this than Kneath intends. Quality does not mean perfection. Perfection is the enemy of shipping. Quality is useless if it doesn’t ship. Quality is not an excuse for not shipping.
Quality is a subjective, amorphous thing. To you, it means the fit and finish. To me, it means that all the bugs have been eliminated and possible bugs thought about and excised. Even to Christopher Alexander, quality isn’t nailed down; he refers to good buildings as possessing the “quality without a name”.
To wit, this shortcoming is pointed out in the original essay:
Move fast and break things, then move fast and fix it. Ship early, ship often, sacrificing features, never quality.
Scope and quality are sometimes at odds. Schedules and quality are sometimes at odds. There may come a time when you have to decide between shipping, maintaining quality, and including all the features.
The great thing about shipping is that if you can do it often enough, these problems of slipping features or making sacrifices in quality can fade away. If you can ship quickly, you can build features out, test them, and put that quality on them in an iterative fashion. Shipping can’t cure all ills, but it can ease many of them.
Kneath is urging you to maintain quality; I’m urging you to ship some acceptable value of quality and then iterate to make it amazing. Relent on quality, if you must, so you can ship relentlessly.
The guy doing the typing makes the call
Everyone brings unique perspective to a team. Each person has learned from successes and failures. There is a spectrum of things that are highly valued and that are strongly avoided and each team member is a different point on that spectrum.
It’s easy to bikeshed decisions. Everyone should feel free to share their ideas if they have something useful and constructive to contribute. High-functioning teams share assets and liabilities, so naturally they should share and discuss ideas.
That said, teams don’t exist for rhetorical indulgence. They exist to get shit done. Teams have to get all the ideas on the floor, decide what is practical, and move on to the next thing.
If there isn’t an outstanding consensus, the tie breaker is simple: the person who ends up doing the work makes the call. That’s not to say they should go cowboy and do whatever they want; they should use their knowledge of the “situation on the ground” to figure out what is most practical. With responsibility comes the right to pick a resolution.
It’s worth repeating: the guy doing the typing makes the decision.
How to listen to Stravinsky's Rite of Spring
Igor Stravinsky’s The Rite of Spring is an amazing piece of classical music. It’s one of the rare pieces that was really revolutionary in its time. But in our time, almost one hundred years on, it doesn’t sound that different.
Music has moved on. We are used to the odd times of “Take Five” and the dissonant horns of a John Williams soundtrack. Music that offends the status quo is nothing new.
To enjoy Rite of Spring in its proper context, you have to forget all that. Put yourself in the shoes of a Parisian in 1913, probably well off. You probably just enjoyed a Monet and a coffee. But your world is changing. Something about workers revolting. A transition from manual labor to mechanical labor.
Now imagine yourself at the premiere of this new ballet from Russia. Being a Parisian, you’re probably expecting something along the lines of Debussy or perhaps Berlioz.
Instead, you get mild dissonance and then total chaos. The changing time signatures, the dissonance, the subject of virgin sacrifice. You’d probably riot too!
Skip the hyperbole
Hyperbole is a tricky thing. In a joke, it works great. It’s the foundation of a tall tale (TO BRASKY!). But in a conversation of ideas, it can backfire.
The trick about humans is that we rarely know exactly what the humans around us are thinking. Do they agree with what I’m saying? Are my jokes bombing? Is this presentation interesting or is the audience playing on their phones?
So the trick with hyperbole is that I might make an exaggerated statement to move things along. But the other people in the conversation might think I actually mean what I said. Maybe they understand the thought behind the hyperbole, but maybe I end up unintentionally derailing the conversation. More times than I can remember, I’ve said something bold to move things along and it totally backfired. Hyperbole backfired.
Nothing beats concise language.
Practical words on mocking
Practical Mock Advice is practical:
Coordinator objects like controllers are driven into existence because you need to hook two areas of your application together. In Rails applications, you usually drive the controller into existence using a Cucumber scenario or some other integration test. Most of the time controllers are straightforward and not very interesting: essentially a bunch of boilerplate code to wire your app together. In these cases the benefit of having isolated controller tests is very little, but the cost of creating and maintaining them can be high.

It includes the standard description of how to use mocks with external services. But more interesting are his ideas and conclusions on when to mock, how to mock caching implementations, and how to mock controllers/presenters/coordinator objects.

A general rule of thumb is this: If there are interesting characteristics or behaviors associated with a coordinator object and it is not well covered by another test, by all means add an isolated test around it and know that mocks can be very effective.
Locking and how did I get here?
I've got a bunch of browser tabs open. This is unusual; I try to have zero open. Except right now. I'm digging into something. I'm spreading ephemeral papers around on my ephemeral desk and trying to make a concept less ephemeral, at least in my head.
It all started with locking. It's a hard concept, but some programs need it. In particular, applications running across multiple machines connected by imperfect software and unreliable networks need it. And this sort of thing ends up being difficult to get right.
I've poked around with this before. Reading the code of some libraries that implement locking in a way that might come in handy to me, I check out some documentation that I've seen referenced a couple times. Redis' setnx command can function as a useful primitive for implementing locks. It turns out getset is pretty interesting too. Ohm, redis-objects and adapter-redis all implement locking using a combination of those two primitives. Then I start to dig deeper into Ohm; there's some interesting stuff here. Activity feeds with Ohm is relevant to my interests. I've got a thing for persistence tools that enumerate their philosophy. Nest seems like a useful set of concepts too.
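For a feel of the setnx/getset pattern those libraries build on, here's a sketch. FakeRedis and acquire_lock are my own hypothetical constructions so the example stays self-contained; in real life you'd point this at a redis client and worry about clock skew between machines.

```ruby
# Tiny in-memory stand-in exposing just the two Redis primitives
# the lock pattern needs: SETNX (set-if-absent) and GETSET.
class FakeRedis
  def initialize; @data = {}; end

  def setnx(key, val)
    return false if @data.key?(key)
    @data[key] = val
    true
  end

  def getset(key, val)
    old = @data[key]
    @data[key] = val
    old
  end

  def get(key); @data[key]; end
end

# Try to take the lock; returns true if we now hold it.
def acquire_lock(redis, key, ttl = 10)
  expires_at = Time.now.to_i + ttl + 1
  return true if redis.setnx(key, expires_at)

  # Lock is held. If its timestamp has expired, race to take it
  # over with GETSET; we won only if the old value was still stale.
  if redis.get(key).to_i < Time.now.to_i
    old = redis.getset(key, expires_at)
    return old.to_i < Time.now.to_i
  end
  false
end

redis = FakeRedis.new
acquire_lock(redis, "migration-lock") # => true
acquire_lock(redis, "migration-lock") # => false, already held
```

The GETSET step is the subtle part: it closes the window where two clients both notice an expired lock and both think they grabbed it.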
I'm mentally wandering here. Let's rewind back to what I'm really after: a way to do locking in Cassandra. There's a blog post I came across before on doing critical sections in Cassandra, but it uses ZooKeeper, so that's cheating. Then I get distracted by a thing on HBase vs. Cassandra and another perspective on Cassandra that mentions but does not really focus on locking.
And then, paydirt. A wiki page on locking in Cassandra. It may be a little rough, and might not even work, but it's worth playing with. Turns out it's an adaptation of an algorithm devised by Leslie Lamport for implementing locking with atomic primitives. It uses a bakery as an analogy. Neat.
Then I get really distracted again. I remember doozer, a distributed consensus gizmo developed by Blake Mizerany at Heroku. I get to reading its documentation and come across the protocol spec, which has an intriguing link to a Plan 9 manpage on the Plan 9 File Protocol. That somehow drives me to ponder serialization and read about TNetstrings.
At this point, my cup overfloweth. I've got locking, distributed consensus, serialization, protocols, and philosophies all on my mind. Lots of fun intellectual fodder, but I'll get nowhere if I don't stick my nose into one of them exclusively and really try to figure out what it's about. So I do. Fin.
Refactor to modules, for great good
Got a class or model that’s getting a little too fat? Refactor to Modules. I’ve done this a few times lately, and I’ve always liked the results. Easier to test, easier to understand, smaller files. As long as you’ve got a tool like ctags to help you navigate between methods, there’s no indirection penalty either.
That said, I’ve seen code that is overmodule’d. But, that almost always goes along with odd callback structures that obscure the flow-of-control. As long as you stick to Ruby’s method lookup semantics, it’s smooth sailing.
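To make the refactoring concrete, here's a minimal, hypothetical sketch: cohesive method groups move out of a fat class into modules, and the class shrinks to state plus a list of included behaviors. All the names here are made up.

```ruby
# Each module holds one cohesive slice of behavior; in a real app
# each would live in its own small, separately testable file.
module Naming
  def display_name
    "#{first_name} #{last_name}"
  end
end

module Linking
  def profile_path
    "/users/#{first_name.downcase}"
  end
end

# The class itself is now just data plus a manifest of behaviors.
class User
  include Naming
  include Linking

  attr_reader :first_name, :last_name

  def initialize(first_name, last_name)
    @first_name = first_name
    @last_name = last_name
  end
end

u = User.new("Shauna", "McFunky")
u.display_name # => "Shauna McFunky"
u.profile_path # => "/users/shauna"
```

Because this uses plain include, method lookup stays boring and predictable, which is exactly the property that keeps the extraction from becoming the over-moduled mess described above.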
ZeroMQ inproc implies one context
I’ve been tinkering with ZeroMQ a bit lately. Abstracting sockets like this is a great idea. However, the Ruby library, like sockets in general, is a bit light on guidance, and the error messages aren’t of the form “Hey dummy, you do it in this order!”
Here’s something that tripped me up today. ZeroMQ puts everything into a context. If you’re doing in-process communication (e.g. between two threads in Ruby 1.9), you need to share that context.
Doing it right:
# Create a context for all in-process communication
>> ctx = ZMQ::Context.new
# Set up a request socket (think of this as the client)
>> req = ctx.socket(ZMQ::REQ)
# Set up a reply socket (think of this as the server)
>> rep = ctx.socket(ZMQ::REP)
# Like a server, the reply socket binds
>> rep.bind('inproc://127.0.0.1')
# Like a client, the request socket connects
>> req.connect('inproc://127.0.0.1')
# ZeroMQ only knows about strings
>> req.send('1')
=> true
# Reply/server side got the message
>> p rep.recv
"1"
=> "1"
# Reply/server side sends response
>> rep.send("urf!")
=> true
# Request/client side got the response
>> req.recv
=> "urf!"
Doing it wrong:
# Create a second context
>> ctx2 = ZMQ::Context.new(2)
# Create another client
>> req2 = ctx2.socket(ZMQ::REQ)
# Attempt to connect to a reply socket, but it doesn't
# exist in this context
>> req2.connect('inproc://127.0.0.1')
RuntimeError: Connection refused
from (irb):16:in `connect'
from (irb):16
from /Users/adam/.rvm/rubies/ruby-1.9.2-p180/bin/irb:16:in `'
I believe what is happening here is that each ZMQ::Context gets a thread pool to manage message traffic. In the case of in-process messages, the threads only know about each other within the confines of a context.
And now you know, roughly speaking.
Booting your project, no longer a giant pain
So your app has a few dependencies. A database here, a queue there, maybe a cache. Running all that stuff before you start coding is a pain. Shutting it all down can prove even more tedious.
Out of nowhere, I find two solutions to this problem. takeup seems more streamlined; clone a script, write a YAML config. foreman is a gem that defines a Procfile format for defining your project’s dependencies. Both handle all the particulars of starting your app up, shutting it down, etc.
I haven’t tried either of these because, of course, they came out the same week I bit the bullet and wrote a shell script to automate this on my projects. But I’m very pleased that folks are scratching this itch, and I hope I’ll have no choice but to start using one when it reaches critical goodness.
Ruby's roots in AWK
AWK-ward Ruby. One man Unix wrecking squad Ryan Tomayko reflects on aspects of Ruby that arguably grew from AWK more than Perl. Great archaeology, but also a good gateway drug to understanding how awk is a useful tool. Only recently have I started to really grok awk, but it’s super handy for ad-hoc data munging in the shell.
Humankind's genius turned upon itself
When We Tested Nuclear Bombs. An absolutely fantastic collection of photos from the nuclear test program. Beautiful to look at, terrifying to contemplate the ramifications in context. It’s harrowing to think that one of science’s greatest achievements could undo so much of science’s achievement.
Burpees and other intense workouts
But when pressed, he suggested one of the foundations of old-fashioned calisthenics: the burpee, in which you drop to the ground, kick your feet out behind you, pull your feet back in and leap up as high as you can. “It builds muscles. It builds endurance.” He paused. “But it’s hard to imagine most people enjoying” an all-burpees program, “or sticking with it for long.”
I'm having trouble deciding whether I should say good things about burpees. I only do a handful at a time, usually as part of a series of movements. They're not so bad if you start with just a few and work up from there.
Burpees aside, it’s interesting to see opinions on what the most useful exercise movements are. I’m really glad I don’t need to start doing butterflies though.
Post-hoc career advice for twenty-something Adam
No program was ever made better by one developer scoffing at another. Computer science does not move forward with condescending attitudes. Success in software isn’t the result of looking down your nose or wagging your finger at others.
And yet, if you observe from the outside, you’d think that we all live in a wacky world of wonks, one where it’s not the facts, but how violently you repeat your talking points that matters the most. The Javascript guys do this in 2011, the Ruby guys did it in 2005, the .NET people before that in 2002, and on down the line.
Civility isn’t always what gets you noticed, but if you don’t have an outsized ability to focus on technical problems for tens of hours, it sure helps. You’re not the most brilliant developer on the planet, but you like to make people laugh, and you like to hang around those who are smarter than you. That’s not the recipe for a solid career in programming, but it’s a good bridge to get you from the journeyman side of the river over to the side where people think you might know what you’re doing.
Once you reach the other side, it’s a matter of putting in the hours, doing the practice, learning things, and always challenging yourself. Work with the smartest people you can, and push yourself to make something better every day. Grind on that enough and you’ll get to the point where you really know what you’re doing.
Then, you close the loop. You were civil, you didn’t piss too many people off. They are eager to hear about the awesome and exciting things you did. So tell them. Even if you don’t think it’s all that awesome, some will know that you’ve got the awesome in you and that it will come out eventually. Some of them aren’t your mom!
This is what some call a successful career. It’s not so bad, but it’s not exactly the extravagant lifestyle you imagined when you were twenty. On the plus side, you do roughly the same things on a daily basis as you did back then, which isn’t so bad. Being an adult turns out to be pretty alright.
At some point, you write this advice to yourself on your weblog, except in the second person. Hopefully someone younger, perhaps on the precipice of idolizing a brilliant asshole, will read it and take a more civil path. Maybe you’ll get to work with them someday. Let’s hope it’s not too awkward.
Don't complain, make things better
notes on “an empathetic plan”:
Worse is when the people doing the complaining also make software or web sites or iPhone applications themselves. As visible leaders of the web, I think there are a lot of folks who could do a favor to younger, less experienced people by setting an example of critiquing to raise up rather than critiquing to tear down.
Set agreement to maximum. If you’re complaining on Twitter just to make yourself feel better, keep in mind that some of us are keeping score.
If you’re a well-known web or app developer who complains a lot on Twitter about other people’s projects, I am very likely talking about you. You and I both know that there are many reasons why something works a certain way or why something in the backend would affect the way something works on the front-end.
Don’t waste your time griping and bringing other people down. Spend your time making better things.
Perfection isn't sustainable
When an interesting person is momentarily not-interesting, I wait patiently. When a perfect organization, the boring one that's constantly using its policies to dumb things down, is imperfect, I get annoyed. Because perfect has to be perfect all the time.
More and more, I think perfection is the biggest enemy of those who want to ship awesome things. Iteration can lead to moments of perfection, but perfection is not sustainable over time.
Using Conway's Law for the power of good
Michael Feathers isn’t so quick to place negative connotations on Conway’s Law. Perhaps it’s not so much that organizations don’t communicate well, the traditional reading of Conway’s Law. Maybe as organizations grow, people tend to only communicate frequently with a few people and those interactions end up defining the API layers.
I’ve been thinking about this a bit lately. It’s possible there’s something to be said about using Conway’s Law to your advantage when building service-based shearing layers. Some parts of your application should evolve quickly, others require more stability. Some iterate based on user and conversion testing, others iterate as TDD or BDD projects. You can discover these layers by observing team interactions and using Conway’s Law to define where the APIs belong.