The ability to focus on one concern at a time is the mark of a senior developer. It takes experience to ignore other factors as noise. It takes time to learn how to avoid tripping on distractions and side-quests.
This lesson hits me hard in the feels because when I reflect on my own work history, some of the biggest mistakes that I’ve made in my career revolve around trying to solve multiple problems at the same time. Perhaps one of the most embarrassing examples of this is when I attempted to learn OOP (Object Oriented Programming) on a new project. This was two problems: I had to build a new application, and I tried to do it in a new programming paradigm.
Needless to say, the project ended up coming in months late and was a horrible mess of hard-to-maintain code. Trying to solve two problems at the same time ended in disaster.
Nearly universal advice for developers of all experience levels!
The trick for juniors is, they’re always learning more than one thing at a time, often by accident. They want to build a feature, but it requires a new library, and so they have to learn the library. They go to start up a development server, but then something weird happens with Unix. It’s the essential challenge of being a junior – they’re just getting started, so they’re always learning a couple of things at a time.
Perversely, a senior who can see the whole feature/change in their head is sometimes tempted to push the whole thing through in one (large) change. They’re tempted to make the entire thing happen in one outburst of crisp thinking.
Developers who have learned to avoid pitfalls and gotchas sometimes have to relearn how to work incrementally. Juniors (frequently) don’t have this problem. If they don’t work incrementally, they won’t make progress at all! (Caution: juniors that try to work like the seniors they see around them will fall into this trap.)
That said, juniors and seniors both tend to struggle with:
deciding when to focus vs. when to jump out of a rabbit hole
building their own feedback loops with tests/compilers, jigs/scaffolding
imagining and applying constraints
using first-principle thinking to reduce the overwhelming possibility space of programming
These skills don’t come with simple experience. One has to decide to apply them and then build up experience using them to keep development on-track, focused, and effective.
Every experienced Rails developer should pick up Garrett Dimon’s Frictionless Generators. Generators are an often overlooked piece of the Rails puzzle. The book shines a light on getting the most out of generators in your Rails projects. It’s an amazing-looking digital object, too; the illustrations are great.
(Before I go further, it’s worth noting that Garrett Dimon sent me a review copy, acted on my feedback, and is generally a pal.)
You should customize generators and write your own
Conventions, in the form of assumptions that are woven into the framework’s code but also exist as a contract of sorts, are essential to Rails. Following those conventions would not work well if creating a new model, resource, etc. weren’t made easy and consistent by our friends rails generate resource/model/etc. These generators have been present in Rails since the beginning. I’ve used them hundreds of times and thought nothing of it.
A few years ago, I realized that generators had become a public API in Rails and that re-use was encouraged. The guide is a good start. I was able to knock out a simple generator for a novel, but conventional, part of our application. In the process, I realized a couple of things.
No one likes boilerplate and tedium. Generators automate the tedium of boilerplate. This is particularly helpful to less experienced developers, who are likely a bit overwhelmed with trying to comprehend decades of Rails evolution and legacy code (from their perspective) quickly.
Rails developers are under-utilizing this leverage. Every system that makes it past a few thousand lines of code (or several months in use) develops bespoke conventions. These are easier to follow if you can reduce the mental burden to “when you add a new thingy, run rails g thingy”. Added bonus: starting new conceptual pieces in your app from generators encourages consistency, itself an under-appreciated sustaining element in long-lived Rails applications.
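To make that concrete, here’s a minimal sketch of a bespoke generator. The names (thingy, the template path) are hypothetical, mine for illustration, not from any real app:

# lib/generators/thingy/thingy_generator.rb: a minimal, hypothetical sketch.
require "rails/generators"

class ThingyGenerator < Rails::Generators::NamedBase
  source_root File.expand_path("templates", __dir__)

  # `rails g thingy widget` renders templates/thingy.rb.tt
  # into app/thingies/widget.rb.
  def create_thingy_file
    template "thingy.rb.tt", "app/thingies/#{file_name}.rb"
  end
end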
Luckily, someone was thinking the same thing I was…
Garrett knows more about generators than anyone I know
The Rails guides for generators are okay. They whet the curiosity and appetite. But, they aren’t particularly deep. When I first tinkered with generators, I mostly learned by reading the code for rails generate model/resource/etc.
Frictionless Generators does not require one to jump right into reading code. It opens with ideas on the possibilities of developing with custom generators. Then, it moves onto the practicalities of writing one’s own generator and crafting a good developer experience. From there, it’s down the rabbit hole: learning about the APIs for defining generators, implementing the file-creation logic therein, writing help and documentation for them, generating files from templates, and customizing the built-in Rails generators.
Garrett goes into as much depth on generators as any other technical author I know, on any topic. Did you know you can make HTTP requests and use their responses in generators? I did not, but Garrett does! Did you know that generators apply the same kind of “oh, yeah, that’s common sense” convention for copying files from generators into applications? I did not, but Garrett does! I don’t think I’d use all these ideas on every generator, but I like the idea that I can return to Frictionless Generators should I have an idea and wonder how the existing, low-friction APIs can help me.
Further, Garrett offers frequent insights into the developer experience and leverage of building generators. On building generators for “fingertip feeling” so developers can easily (and frequently!) use them:
I like to aim for no more than one value argument and one collection argument to keep generators easier to use. Everything else becomes an option.
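In generator-API terms, that guideline might look something like this (my hypothetical sketch, not an excerpt from the book):

# Hypothetical: one value argument (name, provided by NamedBase), one
# collection argument (fields), and everything else demoted to an option.
class ThingyGenerator < Rails::Generators::NamedBase
  argument :fields, type: :array, default: [], banner: "field field"

  class_option :skip_tests, type: :boolean, default: false
end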
On approaching generators as high-leverage side-quests:
Remember that the ease of use is generally inversely proportional to the length of your documentation. Assistance first. Brevity second. Content third. Or, be helpful first, concise second, and thorough third. That said, there are a few other categories of information that can be helpful.
For me, a good technical book balances presentation of technical information, the right amount of detail, and wisdom on how to apply the two in practical ways. Garrett succeeds at striking that balance while keeping things moving and easy to follow.
In short, recommended! Rails generators are underrated, whether you’re aware of their existence or not. Smart teams are customizing generators and writing their own bespoke generators. There’s a book on this now, which you should check out if any of this resonated.
As engineers, we often face the consequences of shallow ideas embraced exuberantly. Despite those experiences, we should try to solve future problems instead of re-litigating problems-past.
Engineers put too much value on their ability to spot flaws in ideas. It’s only worth something in certain situations.
Don’t be edge-case Eddie, won’t-scale Walter, legacy-code Lonnie, or reasons Reggie. At least try to think around those issues and solve the problem.
This is very much a note to my previous self(s).
Pay attention to intuitive negative emotion… If you’ve been asked for quick estimates a bunch, you might have noticed that sometimes the request triggers negative emotions: fear, anxiety, confusion, etc. You get that sinking feeling in the pit of your stomach. “Oh crap”, you think, “this is not going to be easy.” For me (and most experienced software engineers), there are certain kinds of projects that trigger this feeling. We’re still pattern matching, but now we’re noticing a certain kind of project that resists estimation, or a kind of project that is likely to go poorly.
Jacob recommends noting how intuition and emotion are natural and not entirely negative influences on the process of evaluating ideas. The trick, he says, is to pause and switch to deeply thinking through the idea (or estimate) you’re presented with.
This, again, is very much a note to my previous self(s).
Now, if you’ll excuse me, I need to get back to brainstorming and estimating this time machine, so I can deliver this advice to my former self.
…even if some days working in corporations or under unwanted pressure makes it considerably less fun.
I also just don’t especially want to stop thinking about code. I don’t want to stop writing sentences in my own voice. I get a lot of joy from craft. It’s not a universal attitude toward work – from what I can tell, Gen Z is much more anti-work and ready to automate away their jobs – but I’ve always been thankful that programming is a craft that pays a good living wage. I’d be a luthier, photographer, or, who knows, if those jobs were as viable and available. But programming lets you write and think all day. Writing, both code and prose, for me, is both an end product and an end in itself. I don’t want to automate away the things that give me joy.
What a great distillation of what makes working on software great! It’s an opportunity to think all day, earning a good wage doing so. Sometimes, to make something of value. Even more rarely, to make something of lasting value. Most of all, to be challenged every day. On the good days, it’s the future we were promised!
Other days, it’s a bit much. Corporations and all their baggage will get ya down. Deadlines, communication, and coordination are how one makes big things, but they have their drawbacks. They (can) drain all the energy and excitement out of making something.
There are jobs that sound exciting from the outside or on paper. Driving race cars and being around motorsport sounds exciting! But it’s probably a lot of toil, intense competition, and very little invention. Imagineering at Disney is likely immensely rewarding when an idea makes it all the way to the real world or a theme park, every several years. Between those years, it’s likely equal parts frustration and the friction of working at a giant company.
So, for me, building software it is. Even on the days when deadlines and coordination have got me down. Thinking things through to put a bit of the magic of software into the world balances it all out.
Career development looks more like unlocking attributes for a different subclass in a role-playing game, than picking a distinct class that can never change. It’s not a path. It’s a collection of skills and attributes focused on certain outcomes. Applying foundational skills is heavily contingent on your role and responsibility.
👍🏻 Careers, management or not, aren’t straight lines. The skills you need for your career aren’t a tree with one root. You can skip between various skill trees, if you like! You can go deep, but wide is an option too. The more you know, the more you can delegate!
I guess when you’re done with the main quest, you go back and do side quests
Careers (and lives) are non-linear. Occasionally their trajectories don’t make sense. They may even outright disappoint, in the moment. The silver lining is, they give us unique skills and experience that someone in the world wants if only we can find them. 📈
Cannot overstate the value of regularly delivering working software.
My single most effective software dev habit is to start with a walking skeleton – a “real” if very stubbed out program that can be deployed on its real infrastructure, receive real calls, visited for real etc. – because of what this does for non-programming stakeholders.
When they see a real working thing and then they see that thing get meaningful improvements they tend to chill way out and get much easier to work with.
You can save a week of effort on process with a couple hours of sharing tangible progress.
Related: you can save a week of planning with a couple hours of programming. You can save a week of programming with a couple hours of planning.
0. An archaic summer, even by the standards of the late 1990s
The summer of 2002 was my last semester interning at Texas Instruments. I was tasked with writing tests verifying the next iteration of the flagship DSP for the company, the ‘C6414[1]. This particular chip did not yet exist; I was doing pre-silicon verification.
At the time, that meant I used the compiler toolchain for its ‘C64xx predecessors to build C programs and verify them on a development board (again, with one of the chip’s predecessors) for correctness. Then, I shipped the same compiled code off to a cluster of Sun machines[2] and ran the program on a gate-level simulation of the new chip, based on the hardware definition language (VHDL, I think) of the not-yet-physically-existent chip[3].
The output of this execution was a rather large file (hundreds of megabytes, IIRC) that captured the voltage levels through many[4] of the wires on the chip[5]. Armed with digital analyzer software (read: it could show me if a wire was high or low voltage, i.e., whether its value was 0 or 1 in binary), and someone telling me how to group together the wires that represented the main registers on the chip, I could step through the state of my program by examining the register values one cycle at a time.
Beyond the toolchain and workflow, which is now considered archaic and generally received by younger colleagues as an “in the snow, uphill, both ways” kind of story, I faced another complication. If you work out, from first principles, what “running a gate-level simulation of a late-90’s era computer chip on late-90’s era Sun computers” implies, you’ll realize that at a useful level of fidelity this kind of computation is phenomenally expensive.
I had plenty of time to contemplate this, and one day I did. Programs that took about a minute to compile, load into a development board, and execute ran over the course of an hour on the simulator. Handily, the simulator output included wall-clock runtime and the number of cycles executed. So I divided one by the other and came to a rough estimate that the simulator ran programs at less than 100 Hz; the final silicon’s clock speed was expected to hit 600-700 MHz, if I recall correctly.
I was not very productive that summer[6]! But it does return me to the point of this essay: the time when it was better to think through a program than to write it and see what the computer thought of it was not that great.
1. Coding “offline” sounds romantic, was not that great
I like to imagine software development in mythological halls like Bell Labs and Xerox PARC worked like this:
You wrote memos on a typewriter, scribbled models and data structures in notebooks, or worked out an idea on a chalkboard.
Once you worked out that your idea was good, with colleagues over coffee or in your head, you started writing it out in what we’d call, today, a low-level language like C or assembly or gasp machine code.
If you go far back enough, you turn a literally handwritten program into a typed-in program on a very low-capacity disk or a stack of punch cards.
By means that had disappeared well before I started programming, you conveyed that disk or stack of cards to an operator, who mediated access to an actual computer and eventually gave you back the results of running your program.
The further you go back in computing history, the more this is how the ideas must have happened. Even in the boring places and non-hallowed halls.
I missed the transition point by about a decade[7], as I was starting to write software in the late nineties. Computers were fast enough to edit, compile, and run programs such that you could think about programming nearly interactively. The computational surplus reached the point that you could interact with a REPL, shell, or compiler fast enough to keep context in your head without distraction from “compiler’s running, time for coffee”[8].
In an era of video meetings and team chats and real-time docs, this older way of working sounds somewhat relaxing. 🤷🏻♂️ Probably I’m a little romantic about an era I didn’t live through, and it was a bit miserable. Shuffled or damaged punch cards, failed batch jobs, etc.
I like starting projects away from a computer, in a notebook or whiteboard. Draw all the ideas out, sketch some pseudocode. Expand on ideas, draw connections, scratch things out, make annotations, draw pictures. So much of that is nigh impossible in a text editor. Linear text areas and rectangular canvases, no matter how infinite, don’t allow for it.
But, there’s something about using only your wits and hands to wrestle with a problem. A different kind of scrutiny comes from writing down a solution and examining it. Scratching a mistake out, scribbling a corner case to the side. Noticing something you’ve glossed over, physically adjusting your body to center it in your field of vision, and thinking about it more deeply. I end up wondering if I should do more offline coding.
2b. Ways of programming “offline” that weren’t that great in 2023
Sketching out programs/diagrams on an iPad was not as good as I’d hoped. Applications that hint at the hardware’s paper- and whiteboard-like promise exist. But it’s still a potential future, not a current reality. The screen is too small, drawing lag exists (in software, at least), and the tactility isn’t quite right.
Writing out programs, long-hand, on paper or on a tablet is not great. I tried writing out Ruby to a high level of fidelity[9]. Despite Ruby’s potential for concision, it was still like drinking a milkshake through a tiny straw. It felt like my brain could think faster than I could jot code down, and it made me a little dizzy to boot.
Writing pseudocode was a little more promising. Again, stylus and tablet isn’t the best, but it’s fine in a pinch. By hand on paper/notebooks/index cards is okay if you don’t mind scratching things out and generally making a mess[10]. Writing out pseudocode in digital notes, e.g. in fenced code blocks within a Markdown document, is an okay way to get short code out and stay in a “thinking” mindset rather than a “programming” one.
3. Avoid the computer until it’s time to compute some things
To reduce it absurdly: we use programming to turn billions of arithmetic operations into a semblance of thinking, in the form of designs, documents, source files, presentations, diagrams, etc. But rarely is the computer the best way of arriving at that thinking. More often, it’s better to step away from the computer and scribble or draw or brainstorm.
(Aside: the pop culture of workspaces and workflows and YouTube personalities and personal knowledge management and “I will turn you into a productivity machine!” courses widely misses the mark on actually generating good ideas, and on how poorly that process fits into a highly engaging social media post, image, or video.)
Find a reason to step away from the computer
Next time you’re getting started on a project, try grabbing a blank whiteboard/stack of index cards/sheet of paper. Write out the problem and start scribbling out ideas, factors to consider, ways you may have solved it before.
Even better: next time you find yourself stuck, do the same.
Better still: next time you find a gnarly problem inside a problem you previously thought you were on your way to solving, grab that blank canvas, step away from the computer, and dump your mental state onto it. Then, start investigating how to reduce the gnarly problem to a manageable one.
[1] The TMS320C64x chips were unique in that they resembled a GPU more than a CPU. They had multiple instruction pipelines which the compiler or application developer had to try to get the most out of. It didn’t work out in terms of popularity, but it was great fun to think about. ↩
[2] …in a vaguely-named and ambiguously-located datacenter somewhere. Cloud computing! ↩
IDEs are better, faster, and have excellent navigation/search features. Full-text search is now somewhat syntax aware and able to index and quickly query large codebases. Tools like Sourcegraph exist on the high end and ripgrep on the low end.
AI assistants/copilots can wear the hat of “better autocomplete” today and may wear the “help me understand this code” or “help me write a good PR/commit message” hat later. I’m skeptical about the wisdom of handing off the latter to a program, but we’ll see how it goes.
Applications have made tail -f a better and more legible experience. Somehow, exception trackers and performance monitoring tools don’t seem to have evolved much over the past ten years. This is perhaps a result of market/product consolidation more than an indicator that the category is tapped out.
It’s hard for me to say how much language is creating leverage for working on software in the large. Ruby and JavaScript were prominent in my daily work ten years ago and are still prominent today. Both have evolved gradual type systems that might make it easier to hold a large program in an individual’s head productively. Both gradual type systems are going through the “trough of disillusionment” phase of the technology hype cycle. I’m cautiously optimistic that some kind of static analysis, whether it’s linters or type checkers, will make writing Ruby and JavaScript less haphazard.
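For a concrete taste of what “gradual” means here, a minimal Sorbet sketch (assuming the sorbet-runtime gem; my illustration, not from the original post):

# A minimal gradual-typing sketch, assuming the sorbet-runtime gem.
require "sorbet-runtime"

class Greeter
  extend T::Sig

  # Checked at runtime by sorbet-runtime; `srb tc` can also check it statically.
  sig { params(name: String).returns(String) }
  def greet(name)
    "Hello, #{name}!"
  end
end

puts Greeter.new.greet("world")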
Notably, deploying software doesn’t seem to have improved much at all for the individual developer. Heroku, in its prime, is still the gold standard. Perversely, Heroku sometimes fails to meet this mark. The options free of lock-in for such a service are limited-to-nascent[2].
In short, tooling has made it easier for fewer people to maintain and enhance larger software. With luck, the options for doing so without paying a monthly tithe to dozens of vendors will improve over the next decade!
Like everyone (it seems), I’m exploring how large language model copilots/assistants might change how I work. A few of the things that have stuck with me:
My take right now: GitHub Copilot is the real deal and helpful, today. On the low-end, it’s the most useful autocomplete I’ve used. On the high-end, it’s like having a pair who has memorized countless APIs (with a somewhat fallible memory) but can also type in boilerplate bits of code quickly, so I can edit, verify, and correct them.
I don’t expect the next few products I try to hit that mark. But, I suspect I will have a few LLM-based tools in my weekly rotation by the end of the year.
Before there was GitHub, there was Err the Blog. Chris Wanstrath and PJ Hyett wrote one of the essential Rails blogs of the early Rails era. Therein, many of the idioms and ideas we use to build Rails apps today were documented or born.
I’d figured this site was offline, as are most things from the mid-2000s. Lo and behold, it’s still online in its pink-and-black glory. Lots of nostalgia on my part here.
There was a Big Moment here, in my career and in Ruby/Rails, when an abnormal density of smart people came together in a moment (if not a community) and the basis of my career took shape. It was good times, even if we didn’t yet have smartphones or doom-scrolling.
Allow me to reflect on what I found as I went through the archives.
How we built things back then
Rails circa 2.x would be quite an unfamiliar game to the modern developer. REST conventions for CRUD controllers had just taken hold and were not yet the canonical way to structure Rails code. There was a lot of experimentation, and many of the solutions we take for granted today (namely, dependencies) were still very much unsolved back then.
DRY Your Controllers — err.the_blog – ideas about CRUD controllers before CRUD controllers were the thing in Rails (2.0 I think?). That said, if you were to write this now… I'd have issues with that. 😆
My Rails Toolbox — err.the_blog – this probably represented the state of the art for building Rails in its time… 17 years ago. 👴
Vendor Everything — err.the_blog – I followed this approach on my first Rails app. It was a pretty good way to keep things going for one, enthusiastic developer at the time. But RubyGems, Bundler, etc. are far better than vendor’ing these days. And, one of the crucial leverage points for working in the Rails space.
How we built things back now
Some things stay the same. For example: the need to fill in the gaps Rails’ conventions leave for organizing your app, the enhancement of Ruby via ActiveSupport, and the lack of a suitable approach to view templates that satisfies writing code, testing code, and building front-ends.
Organize Your Models — err.the_blog – early memories of attempting to organize files in a Rails 1.2 app despite numerous headwinds presented by Rails itself. (IMO, organizing a Rails app by folder+namespace has really only started to work after Rails 6.0).
Rails Rubyisms Advent — err.the_blog – a love letter to ActiveSupport's extensions to the Ruby language. Many of these are in the Ruby language now, thankfully! ActiveSupport (still) rubs some folks the wrong way, but it remains one of my favorite things about Rails.
View Testing 2.0 — err.the_blog – amazingly, there's still no good story here. It's all shell games; write e2e tests instead of unit tests, use object-like intermediaries instead of ERB templates, etc.
How we stopped building things that way
Rails has always had flawed ideas that need re-shaping or removing over time. Mostly in making ActiveRecord as good of an ecosystem participant as it is a query-generation API.
with_scope with scope — err.the_blog – ActiveRecord scopes are way better now! I think with_scope remains, at least in spirit, in the Rails router API.
ActiveRecord Variance — err.the_blog – wherein our heroes discover inconsistencies in AR's find* APIs and patch their way to more predictable operation thereof.
How I was even more excited about Ruby
Err the Blog was not first on the Rails hype wave of the mid-2000’s. But, it was consistently one of the best. Every time a new post was published, I knew it was worth making time to read. I learned a lot about my favorite things about Ruby from Err: writing little languages and Enumerable.
Pennin' a DSL — err.the_blog – I could not read enough posts on building DSLs in my early Ruby days. It was the feature I was excited about in Ruby. Thankfully, it's a lot easier to do 'macro magic' in Ruby these days. And, hooking into the idiomatic ways to write Rails-style declarative bits is much better now.
Writing improves thoughts and ideas. Doubly so for thoughts and ideas about code.
Writing, about software or otherwise, is a process wherein:
thoughts and ideas are clarified
ideas are transferred to colleagues
culture (of a sort) is created by highlighting what’s essential and omitting what’s transient
Documenting code, as a form of writing, is a process wherein:
the concepts and mechanics in the code are clarified
what’s in our head today is made available to our teams (and ourselves) in the future
culture happens by highlighting what’s intended and what’s “off the beaten path” when working with this codebase
I suspect that open source is (often) of higher quality than bespoke, proprietary software because it has to go through the crucible of documentation. Essentially, there’s a whole other design activity you have to go through when publishing a library or framework that internal/bespoke software never faces.
I can’t objectively verify this. Subjectively, when I have made the time to write words about a bit of code I wrote, it has resulted in improving the design of the code along the way. Or, I better understand how I might design the code in the future.
Writing is a great tool for designing code. Use it more often!
“Turn every page. Never assume anything. Turn every goddamned page.” — Robert Caro, Working
So goes the wisdom super-biographer Robert Caro received from a mentor when he was an investigative reporter in New York. Caro went on to apply this energy and depth to write a sprawling biography on real estate developer Robert Moses and four volumes on the life of LBJ.
I like the energy, determination, and purpose of Caro’s advice. In his writing, Caro takes a maximalist[1] perspective[2]. He looks to understand the system. Caro read every original document in every archive he could find (“turning the pages”) to ensure he fully grasped the context of historical events and didn’t miss any details that might change the interpretation of events. Caro tries to load the whole state of his subject into his head and notes. Only then does he start writing his expansive biographies.
1. Read the code
Building software benefits from the same energy and determination displayed by Caro. As I’m working on a project, I flip between code I’m writing, code I’m using, and adjacent systems. Once I’ve read enough to have the problem and system in my head, I can move through the system and answer questions about it with that “everything is at my fingertips” feeling[3]. Fantastic.
Recommended: read third-party libraries, frameworks, standard/core/prelude libraries. Read demo code. Find inspiration, learn something new, and build the muscle for reading code with confidence.
2. Hear the words
When I’m really listening, avoiding the urge to think through what I’ll say next, I’m doing my best work as a coach or mentor. When I really hear and understand what a colleague is trying to accomplish or solve, it’s a bigger win for everyone.
Subsequently, I can switch to brainstorm or solution mode. Not before.
Recommended: literally listen to what they’re saying. Especially for the leaders and managers out there. Get the context needed to understand what they’re thinking. Ask clarifying questions until you’re sure you understand what they’re thinking. Don’t start responding until you’re sure you understand the context and the kind of response[4] expected of you.
3. Get hands-on
To verify your model, you have to get hands-on at some point. A model is likely invalid unless it’s been applied hands-on to the system in question. Make a change, propose an updated process. See what happens, what breaks, who pushes back. Building (with code, with words) upon the model will evolve your understanding and predictions in ways that further reading or listening will not.
Recommended: turn reading words, reading code, and listening into a model of how your code-base, team, or organization work together. Apply that model to a problem you’re facing and use the results to improve your predictions on what actions will produce welcome outcomes. Rinse and repeat.
4. Go forth and deeply understand a system
With due credit to Robert Caro, I suggest doing more than “turning the pages”:
“Read the code. Read every damn function or module you’re curious about.” — me, a few months ago
"Listen to what they're saying. Hear every damn word until they're done talking." — me, several weeks ago
Next time you think “I need more context here” or “that seems like magical code/thinking”, go deeper. Take 15 minutes to read the code, or listen 15 seconds longer to your conversation-mate.
Turn the pages. Read the code. Hear the words.
[1] Aside from his biography quote above, all of his books are doorstoppers. ↩
[2] Extensively aided, it should be noted, by further research and organization by his wife. ↩
Noteflakes: A Compositional Approach to Optimizing the Performance of Ruby Apps — the idea is to offload “inner-loop”-type operations from Ruby to C-extensions. The clever twist is this happens via data-structure-as-language. Ruby being Ruby, you can wrap a DSL around the data structure generation to reduce the context switch from Ruby to offloaded operations.
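As I understand it, the gist is something like this hedged sketch (my illustration, not Noteflakes’ actual API), with a pure-Ruby interpreter standing in for the C extension:

# The hot loop is described as plain data; a native extension could walk
# this structure in one go instead of re-entering Ruby per operation.
PROGRAM = [
  [:each_line],     # stream lines from the input
  [:grep, /error/], # keep matching lines
  [:count]          # reduce to a count
].freeze

def run(program, input)
  program.reduce(input) do |acc, (op, arg)|
    case op
    when :each_line then acc.each_line
    when :grep      then acc.grep(arg)
    when :count     then acc.count
    end
  end
end

puts run(PROGRAM, "error: a\nok: b\nerror: c\n") # => 2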
There’s precedent for the approach: if you squint, it’s not unlike offloading the math for computer graphics or machine learning to a GPU. That said, the speed-up is unlikely to be as dramatic.
I hope to hear more of this approach in the future!
For some reason, identifier schemes that are globally unique, coordination-free, somewhat humanely representable, and efficiently indexed by databases are a thing I really like. Universally Unique Lexicographically Sortable Identifier (ulid, for humans) is one of those things. Implementations are available for dozens of languages! They look like this: 01ARZ3NDEKTSV4RRFFQ69G5FAV.
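The layout is what makes them sortable: 48 bits of millisecond timestamp up front, 80 random bits behind, all in Crockford base32. Here’s a toy sketch of that layout (use one of the real implementations for actual work):

# Toy ULID: 48-bit millisecond timestamp + 80 random bits, encoded as
# 26 characters of Crockford base32 (which excludes I, L, O, and U).
require "securerandom"

CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def toy_ulid
  value = ((Time.now.to_f * 1000).to_i << 80) | SecureRandom.random_number(2**80)
  (0..25).map { |i| CROCKFORD[(value >> ((25 - i) * 5)) & 0x1F] }.join
end

puts toy_ulid # => a 26-character string shaped like the one above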
Paul Ford’s website is twenty years old. For maybe half that time I’ve been extremely jealous of how well he writes about technology without being dry and technical. When I grow up, I’ll write like that!
How Awesome Engineers Ask For Help. So much good stuff there, I can’t quote it. There’s something in there for new and experienced engineers alike. In particular: don’t give up, actively participate in the process of getting unstuck, take and share notes, give thanks afterwards.
The best time to work on your dotfiles is on weekends between high-intensity project pushes at work. No better time to do some lateral thinking and improving of your workflow. Feels good, man.
If I were trying to convince myself to extract a (micro)service, today, I’d do it like this. First I’d have a conversation with myself:
you are making tactical changes slightly easier at the expense of making strategic changes quite hard; is that really the trade-off you're after?
you must have the operational acumen to provision and deploy new services in less than a week
you must have the operational acumen to instrument, monitor, and debug how your applications interact with each other over unreliable datacenter networks
you must have the design and refactoring acumen to patiently encapsulate the service you want to build inside your current application until you get the boundaries just right and only then does it make sense to start thinking about pulling a service out
I would reflect upon how most of the required acumen is operational and wonder if I’m trying to solve a design problem with operational complexity. If I still thought that operational complexity was worthwhile, I’d then reflect upon how close the code in question was to the necessary design. If it wasn’t, I would again kick the can down the road; if I can’t refactor the code when it’s objects and methods, there’s little hope I can refactor it once it’s spread across two codebases and interacting via network calls as API endpoints, clients, data formats, etc.
If, upon all that reflection, I was sure in my heart that I was ready to extract a service, it’d go something like this:
try to encapsulate the service in question inside the current app
spike out an internal API just for that service; this API will become the client contract (see the sketch after this list)
wrap an HTTP API around the encapsulation
make sure I have an ops buddy who can help me at every provisioning and deployment step, especially if this sort of thing is new and a monolith is the status quo
test the monolith calling itself with the new API
trial deploy the service and make some cross-cutting changes (client and server) to make sure I know the change process
start transferring traffic from the monolith to the service
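For the first two steps, a sketch under hypothetical names (Billing and its method are illustrative, mine, not from any real app):

# Encapsulate the candidate service behind a plain-Ruby boundary inside the
# monolith; this internal API is what later becomes the HTTP client contract.
module Billing
  Receipt = Struct.new(:account_id, :amount_cents, keyword_init: true)

  # The only entry point the rest of the monolith may call.
  def self.charge(account_id:, amount_cents:)
    # ...persistence, queuing, etc. stay hidden behind the boundary.
    Receipt.new(account_id: account_id, amount_cents: amount_cents)
  end
end

# Callers stay ignorant of the internals:
receipt = Billing.charge(account_id: 42, amount_cents: 1_500)

If the boundary holds up under real use inside the monolith, wrapping an HTTP API around it is mostly mechanical.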
In short, I still don’t think service extraction is as awesome as it sounds on paper. But, if you can get to the point of making a Modular Monolith, and if you can level up your operations to deal with the demands of multiple services, you might successfully pull off (micro)services.
I’ve previously thought that a developer moving fast and coding loose will always outpace a developer moving methodically and intentionally. Cynically stated, someone making a mess will always make more mess than someone else can clean up or produce offsetting code of The Quality.
I’ve recently had luck changing my mindset to “make The Quality by making the quantity”. That is, I’m trying to make more stuff that expresses some aspect of The Quality I’m going for. Notably, I’m not worrying too much about whether I have An Eternal Quality or A Complete Expression of the Quality. I’m a lot less perfectionist and doing more experiments with my own style to match the code around me.
I now suspect that, given the first two developers, it’s possible to make noticeably more Quality by putting little bits of thoughtfulness throughout the code. Unless the person moving fast and loose is actively undermining the quality of the system, they will notice the Quality practices or idioms and adopt them. Code review is the first line of defense to pump the brakes and inform someone moving a little too fast/loose that there’s a Quality way to do what they’re after without slowing down too much.
object.yield_self {|x| block } → an_object
# Yields self to the block and returns the result of the block.
class Object
def yield_self
yield(self)
end
end
I would prefer then or even | to the verbosely literal yield_self, but I’ll take anything. Surprisingly, both of my options are legal method names!
class Object
def then
yield self
end
def |
yield self
end
end
require "pathname"

# Read this source file and print every line that defines a method,
# exercising all three spellings of "pipe self into a block" along the way.
__FILE__.
  then { |s| Pathname.new(s) }.
  yield_self { |p| p.read }.
  | { |source| source.each_line }.
  select { |line| line.match /^\W*def ([\S]*)/ }.
  map { |defn| p defn }
However, | already has 20+ implementations, either of the mathematical logical-OR variety or of the shell piping variety. Given the latter, maybe there’s a chance!
Next, all we need is:
a syntax to curry a method by name (which is in the works!)
a syntax to partially apply said curry
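For reference, a sketch of what exists today: Proc#curry and Method#curry already handle partial application; it’s only the dedicated syntax that’s missing.

# Partial application with Proc#curry.
add = ->(a, b) { a + b }
increment = add.curry[1] # fix the first argument
p increment[41]          # => 42

# "Currying a method by name" today goes through Object#method.
join = File.method(:join).curry(2) # explicit arity; File.join is variadic
under_www = join["/var/www"]
p under_www["index.html"] # => "/var/www/index.html"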
If those two things make their way into Ruby, I can move on to my next pet feature request: a module/non-global namespace scheme ala Python, ES6, Elixir, etc. A guy can dream!
I was lucky enough to attend Strange Loop this year. I described the conference to friends as a gathering of minds interested in programming esoterica. The talks I attended were appropriately varied: from very academic slides to illustrated hero’s journeys, from using decomposed mushrooms to create materials to programming GPUs, from JavaScript to Ruby. Gotcha, that last one was not particularly varied.
In short, most of the language-centric conferences I’ve been to in the past were about “hey, look at what I did with this library or weird corner of the language”, though the most recent Ruby/Rails conferences are more varied than this. By comparison, Strange Loop was more about “I did this thing that I’m excited about and it’s a little brainy but not intimidating and also I’m really excited about it.”
Elm Conf 2017
I started the weekend off checking out the Elm community. I already think pretty highly of the language. I would certainly use it for a green-field project.
Size, excitement, and employment-wise, Elm is about where Ruby was when I joined the community in 2005. Lots of excited folks, a smattering of employed folks, and a good technical/social setup for growth.
A nice thing about the community is that there is no “other” that Elm is set against. Elm code often needs to interface with JavaScript to get at functionality like location or databases, so they don’t turn their nose up at it. It’s a symbiotic relationship. Further, most Elm developers are probably coming from JavaScript, so it’s a pretty friendly relationship. This is a nice shift from the tribalism of yore.
It’s also exciting that Elm is already more diverse than Ruby was at the same point in its growth/inflection curve. Fewer dudes, more beginners, and none of the “pure Ruby” sort of condescension towards Rails and web development.
Favorite talks:
“Teaching Elm to Beginners” (no talk video), Richard Feldman. Using Elm at work requires teaching Elm to beginners. Teaching is a totally different skill set, disjoint from programming. When answering a question, introduce as few new concepts as possible. Find the most direct path to helping someone understand. It’s unimportant to be precise, include lots of details, or be entertaining when teaching. You can avoid types and still help students build a substantial Elm program.
If Coco Chanel Reviewed Elm, Tereza Sokol: Elm as seen through the lens of high and low fashion. Elm is a carefully curated, slow releasing collection of parts ala Coco Chanel. It is not the hectic variety of an H&M store.
Accessibility with Elm, Tessa Kelly: Make accessible applications by enforcing view/DOM helpers with functional encapsulation and types. Your program won’t compile if you forget an accessibility annotation. A pretty good idea!
“Mogee, or how we fit Elm in a 64x64 grid”, Andrew Kuzmin: A postmortem on building games with Elm. Key insight: work on the game, not on the code or engine. Don’t frivolously polish code. Use entity-component-system modeling. Build sprite/bitmap graphics in WebGL by making one pixel out of two triangles.
The majority of the talks referenced Elm creator Evan Czaplicki’s approach to designing APIs. He is humble enough that I don’t think this will backlash the way DHH’s opinions did with Rails.
By far the biggest corporate footprint in the community and talks was NoRedInk. Nearly half of the talks were by someone at the company.
Most practical talks from Strange Loop
Types for Ruby: it seems like they’ve implemented a full-blown type system for Ruby. It’s got all the gizmos and gadgets you might expect: unions, generics, gradual typing. It applies all its checks at runtime though, and they didn’t say if it does exhaustive checking, so I’m not sure how handy it would be in the way that e.g. Elm or Flow are. On my list of things to check out later.
Level up your concurrency skills with Rust. Learning Rust’s concepts for memory and concurrency safety, i.e. resources, ownership, and lifetimes, can help you program in any language. Putting concurrency into a system is refactoring for out-of-orderness and most likely a retrofit of the underlying structure. Rust models memory like a resource, much as file handles or network sockets are modeled by the operating system. Rust resource borrowing in summary: if you can read it, no one else can write it; if you can write it, no one else can read or write it; borrows are checked at compile time so there is no runtime overhead/cost.
GPGPU programming with Metal. Your processor core has a medium-sized arithmetic logic unit and a giant control unit (plus as much memory/cache as they can spare). A GPU is thousands of arithmetic logic units. Besides drawing amazing pictures, you can use all those arithmetic logic units to train/implement a neural network, do machine vision or image processing, run machine learning algorithms, and do any kind of linear algebra or vector calculus. Work is sent to the GPU by loading data/state into buffers, converting math instructions to GPU code and loading that into GPU buffers, and then letting the GPU go wild executing it.
Seeking a better culture and organization of open source maintainership (no talk video). Projects are getting smaller, more fragmented, and attracting no community (ed. the unintended consequence of extreme modularity?) Bitcoin and Ethereum have very little backing despite the astronomical amounts of money in the ecosystem. We need a new perspective on funding open source work. Consumption of open source has won, but production of open source is still in a pretty bad place.
How to be a compiler. Knitting is programming; you can even compile between knitting description pseudo-languages. Implemented Design by Numbers, a Processing predecessor, as a transpiler to SVG.
Random cool things people are really doing
Measuring and optimizing tail latency. Activating instrumentation and “slow-path” techniques on live web requests that run so long they will fall into the 99th percentile. Switch processor voltage to “power up” a processor that’s running a slow request so it will finish faster, e.g. switch a core from low power/500MHz mode to high power/2GHz mode.
a conference of diverse backgrounds and experiences is a better one
my favorite talks told a hero’s journey story through illustrations
folks in this sphere of technology are taking privacy and security very seriously, but the politics of code, e.g. user safety and information war, were not particularly up there in the talks I went to (probably by self-selection)
way more people are doing machine learning applications than I’d realized; someone said off-hand that we’d “emerged from the AI winter in 2012” and that struck me as pretty accurate
everyone gets the impostor syndrome, even conference speakers and wildly successful special effects and TV personalities like Adam Savage
If you get the chance, you should go to Strange Loop!
What is it? exa is ls reimagined for modern times, in Rust. And more colorfully. It is nifty, but not life-changing. I mostly still use ls, because muscle memory is strong and it’s basically the only mildly friendly thing about Unix.
How do I do boring old ls things?
Spoiler alert: basically the same.
ls -a: exa -a
ls -l: exa -l
ls -lR: exa -lR
How do I do things I rarely had the gumption to do with ls?
exa -rs created: simple listing, sort files reverse by created time. Other options: name, extension, size, type, modified, accessed, created, inode
exa -hl: show a long listing with headers for each column
exa -T: recurse into directories ala tree
exa -l --git: show git metadata alongside file info
Now, I attempt to write in the style of a tweetstorm. But about code. For my website. Not for tweets.
For a long time, we have been embracing specialization. It’s taken for granted even more than capitalism. But maybe not as much as the sun rising in the morning.
From specialization comes modularization, inheritance, microservices, pizza teams, Conway's Law, and lots of other things we sometimes consider as righteous as apple pie.
Specialization comes at a cost though. Because a specialized entity is specific, it is useless out of context. It cannot exist except for the support of other specialized things.
Interconnectedness is the unintended consequence of specialization. Little things depend on other things.
Those dependencies may prove surprising, fragile, unstable, chaotic, or create a bottleneck.
Specialization also requires some level of infrastructure to even get started. You can't share code in a library until you have the infrastructure to import it at runtime (dynamic linking) or resolve the library's dependencies (package managers).
The expensive open secret of microservices and disposable infrastructure is that you need a high level of operational acumen to even consider starting down the road.
You’re either going to buy this as a service, buy it as software you host, or build it yourself. Whichever way you go, you’re going to pay for this decision right in the budget.
On the flip side is generalization. The grand vision of interchangeable cogs that can work on any project.
A year ago I would have thought this was as foolish as microservices. But the ecosystems and tooling are getting really good. And, JavaScript is getting good enough and continues to have the most amazing reach across platforms and devices.
A year ago I would have told you generalization is the foolish dream of the capitalist who wants to drive down his costs by treating every person as a commodity. I suspect this exists in parts of our trade, but developers are generally rare enough that finding a good one is difficult enough, let alone a good one that knows your ecosystem and domain already.
Generalization gives you a cushion when you need to help a short-handed team get something out the door. You can shift a generalist over to take care of the dozen detail things so the existing team can stay focused on the core, important things. Shifting a generalist over for a day doesn’t get you 8 developer hours, but it might get you 4 when you really need it.
Generalization means more people can help each other. Anyone can grab anyone else and ask to pair, for a code review, for a sanity check, etc.
When we speak of increasing our team's bus number, we are talking about generalizing along some axis. Ecosystem generalists, domain knowledge generalists, operational generalists, etc.
On balance, I still want to make myself a T-shaped person. But, I think the top of the T is fatter than people think. Or, it's wider than it is tall, by a factor of one or two.
Organizationally, I think we should choose the tools and processes we use carefully, so that we don’t end up where only one or two people can do something. That creates fragility and overhead where it doesn’t yield any benefit.
Sometimes, programmers like to disparage “magical code”. They say magical code is causing their bugs, magical code is offensive to use, magical code is harder to understand, we should try to write “less magical” code.
“Wait, what’s magic?”, I hear you say. That’s what I’m here to talk about! (Warning: this post contains an above-average density of “air quotes”, ask your doctor if your heart is strong enough for “humorous quoting”.)
I can start to understand why a bit of code is frustratingly magical to me by categorizing it. (Hi, I’m Adam, I love categorizing things, it’s awful.)
“Mathemagical” code escapes my understanding due to its foundation in math and my lack of understanding therein. I recently read Purely Functional Data Structures, which is a great book, but the parts on proving e.g. worst-case cost for amortized operations on data structures are completely beyond my patience or confidence in math. Once Greek symbols enter the text, my brain kinda “nope!”s out.
“Metamagic” is hard to understand due to its use of metaprogramming. Code that generates code inside code is a) really cool and b) a bit of a mind exploder at first. When it works, it’s glorious and not “magical”. When it falls short, it’s a mess of violated expectations and complaints about magic. PSA: don’t metaprogram when you can program.
“Sleight of hand” makes it harder for me to understand code because I don’t know where the control flow or logic goes. Combining inheritance and mixins in Ruby is a good example of control-flow sleight-of-hand. If a class extends Foo, includes Bar, and all three define a method do_the_thing, which one gets called (trick question: all of them; trick follow-up question: in what order!)? The Rails router is a good example of logical sleight-of-hand. If I’m wondering how root to: "some_controller#index" works and I have only the Rails sources on me, where would I start looking to find that logic? For the first few years of Rails, I’d dig around in various files before I found the trail to that answer.
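To spoil the trick question, a sketch (Foo, Bar, and do_the_thing are the hypothetical names from above; Widget is mine). With super calls, all three definitions run, in the order given by the ancestor chain:

module Bar
  def do_the_thing
    puts "Bar"
    super
  end
end

class Foo
  def do_the_thing
    puts "Foo"
  end
end

class Widget < Foo
  include Bar

  def do_the_thing
    puts "Widget"
    super
  end
end

Widget.new.do_the_thing      # prints Widget, Bar, Foo
p Widget.ancestors.first(3)  # => [Widget, Bar, Foo]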
“Multi-level magic schemes” is my new tongue-in-cheek way to explain a tool like tmux. It’s a wonderful tool for those of us who prefer to work in (several) shells all day. I’m terrified of when things go wrong with it, though. To multiplex several shells into one process while persisting that state across user sessions requires tmux to operate at the intersection of Unix shells, process trees, and redrawing interfaces to a terminal emulator. I understand the first two in isolation, but when you put it all together, my brain again “nope!”s out of trying to solve any problems that arise. Other multi-level magic schemes include object-relational mappers, game engines, operating system containers, and datacenter networking.
I can understand magic and so can you!
I’m writing this because I often see ineffective reactions to “magical” code. Namely, 1) identify code that is frustrating, 2) complain on Twitter or Slack, 3) there is no step 3. Getting frustrated is okay and normal! Contributing only negative energy to the situation is not.
Instead, once I find a thing frustrating, I try to step back and figure out what’s going on. How does this bit of code or tool work? Am I doing something that it recommends against or doesn’t expect? Can I get back on the “golden path” the code is built for? Can I find the code and understand what’s going on by reading it? Often some combination of these side quests puts me back on my way and out of frustration’s way.
Other times, I don’t have time for a side quest of understanding. If that’s the case, I make a mental note that “here be dragons” and try to work around it until I’m done with my main quest. Next time I come across that mental map and remember “oh, there were dragons here!”, I try to understand the situation a little better.
For example, I have a “barely tolerating” relationship with webpack. I’m glad it exists, it mostly works well, but I feel its human factors leave a lot to be desired. It took a few dives into how it works and how to configure it before I started to develop a mental model for what’s going on such that I didn’t feel like it was constantly burning me. I probably even complained about this in the confidence of friends, but for my own personal assurances, attached the caveat of “this is magical because it’s unfamiliar to me.”
Which brings me to my last caveat: all this advice works for me because I’ve been programming for quite a while. I have tons of knowledge, the kind anyone can read and the kind you have to win by experience, to draw upon. If you’re still in your first decade of programming, nearly everything will seem like magic. Worse, it’s hard to tell what’s useful magic, what’s virtuous magic, and what’s plain-old mediocre code. In that case: when you’re confronted with magic, consult me or your nearest Adam-like collaborator.
Bias to small, digestible review requests. When possible, try to break down your large refactor into smaller, easier to reason about changes, which can be reviewed in sequence (or better still, orthogonally). When your review request gets bigger than about 400 lines of code, ask yourself if it can be compartmentalized. If everyone is efficient at reviewing code as it is published, there’s no advantage to batching small changes together, and there are distinct disadvantages. The most dangerous outcome of a large review request is that reviewers are unable to sustain focus over many lines, and the code isn’t reviewed well or at all.
This has made code review of big features way more plausible on my current team. Large work is organized into epic branches which have review branches which are individually reviewed. This makes the final merge and review way more tractable.
Your description should tell the story of your change. It should not be an automated list of commits. Instead, you should talk about why you’re making the change, what problem you’re solving, what code you changed, what classes you introduced, how you tested it. The description should tell the reviewers what specific pieces of the change they should take extra care in reviewing.
There’s plenty of room to criticize JavaScript as a technology, language, and community(s). But, when I’m optimistic, I think the big things JavaScript as a phenomenon brings to the world are:
amazing reach: you can write JS for frontends, backends, games, art, music, devices, mobile, and domains I'm not even aware of
a better on-ramp for people new to programming: the highly motivated can learn JS and not worry about the breadth of languages they may need to learn for operations, design, reporting, build tooling, etc.
lots of those on-ramps: you could start learning JS to improve a spreadsheet, automate something in Salesforce, write a fun little Slack bot, etc.
In short, JavaScript increases the chances someone will level up their career. Maybe they’ll continue in sales or marketing but use JS as a secret weapon to get more done or avoid tedium. Maybe it gives them an opportunity to try programming without changing job functions or committing to a bootcamp program.
Bottom line: JavaScript, like Ruby and PHP before it, is the next thing that’s improving the chances non-programmers become programmers and reach the life-improving salary and career trajectories software developers have enjoyed for the past decade or two.
One of my friends has been working on a sort of community software for several years now. Uniquely, this software, Uncommon, is designed to avoid invading and obstructing your life. From speaking with Brian, it sounds like people often mistake this community for a forum or a social media group. That’s natural; we often understand new things by comparing or reducing them to old things we already understand.
The real Quality Uncommon is trying to embody is that of a small dinner party. How do people interact in these small social settings? How can software provide constructive social norms like you’d naturally observe in that setting?
I’m currently reading How Buildings Learn (also a video series). It’s about architecture, building design, fancy buildings, un-fancy buildings, pretty buildings, ugly buildings, etc. Mostly it’s about how buildings are suited for their occupants or not, and whether those buildings can change over time to accommodate the current or future occupants. The main through-lines of the book are 1) function dictates form and 2) function is learned over time, not specified.
A building that embodies the Quality described by How Buildings Learn uses learning and change over time to become better. A building with the Quality answers 1) How does one design a building such that it can allow change over time while meeting the needs and wants of the customer paying for its current construction? and 2) How can the building learn about the functions its occupants need over time so that it changes at a lower cost than tearing it down and starting a new building?
Bret Victor has bigger ideas for computing. He seeks to design systems that help us explore and reason about big problems. Rather than using computers as blunt tools for the busywork of our day-to-day jobs, we should build systems that help all of us think creatively at a higher level than we do now.
Software that embodies that Quality is less like a screen and input device and more like a working library. Information you need, in the form of books and videos, line the walls. Where there are no books, there are whiteboards for brainstorming, sharing ideas, and keeping track of things. In the center of the room are wide, spacious desks; you can sit down to focus on working something through or stand and shuffle papers around to try and organize a problem such that an insight reveals itself. You don’t work at the computer, you work amongst the information.
Over the years, many hours in front of a computer have afforded me the gift of keyboarding skills. I’ve put in the Gladwellian ten thousand hours of work and it’s really paid off. I type fairly quickly, somewhat precisely, and often loudly.
Pursuant to this great talent, I’ve optimized my computer to have everything at hand when I’m typing. I don’t religiously avoid the mouse, but I do seek out more ways to use the keyboard to get stuff done quickly and with ease. Thanks to tools like Alfred and Hammerspoon, I’ve achieved that.
With the greatest apologies to Bruce Springsteen:
Well I got this keyboard and I learned how to make it talk
As in every good documentary on accomplished performers, there’s a dark side to this keyboard computering talent I possess. There are downsides to my keyboard-centric lifestyle:
I sometimes find it difficult to step back and think. Rather than take my hands off the keyboard, it’s easier to switch to some other app. In the moment, it feels like I’m still making progress, but really I’m distracting myself.
Even when I don't need to step back and think, it's easy for me to switch over to another app and distract myself with social media, team chat, etc.
Being really, really good at keyboarding is almost contrary to Bret Victor's notion of using computers as tools for thinking rather than self-contained all-doing virtual workspaces.
Thus I often find I need to push the keyboard away, roll my chair back, and read, write, or just sit with the problem to do the deep thinking.
All that said, when I am in the zone, my fingers dance over this keyboard, I think with my fingers, and it’s great.
Since I started writing web applications around 1999, there’s been an ever-present boundary around what you can do in a browser. As browsers have improved, the line in the sand has moved. Browsers are more enabling now, but equally annoying.
Applications running on servers in a datacenter (Ruby, Python, PHP) can’t:
interact with a user’s local data (for largely good reasons)
make interesting graphics
hop over to a user’s computer(s) without tremendous effort (i.e. people running and securing their own servers)
Browser applications can’t:
store data on the user’s computer in useful quantities (this recently became less true, though the newer storage APIs aren’t widely used yet; see the sketch after this list)
compute very hard; you can’t run the latest games or intense math within a browser
hang around on a user’s computer; browsers are a sandbox of their own, built around ephemeral information retrieval and not long-term functionality
present themselves for discovery in various vendor app stores (iTunes, Play, Steam, etc.)
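On that storage point: the thing that “recently became less true” is, presumably, APIs like IndexedDB, which let a page keep structured data on the user’s machine. Here’s a minimal sketch in TypeScript; the database, store, and record names are hypothetical:

```typescript
// Open (and on first run, create) a small client-side database.
function openDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open("notes-db", 1);
    // Runs only when the database is created or its version bumps.
    request.onupgradeneeded = () => {
      request.result.createObjectStore("notes", { keyPath: "id" });
    };
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// Persist a record; it survives reloads and browser restarts.
async function saveNote(note: { id: string; body: string }): Promise<void> {
  const db = await openDb();
  const tx = db.transaction("notes", "readwrite");
  tx.objectStore("notes").put(note);
  await new Promise<void>((resolve, reject) => {
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}
```

The data outlives the page, but it still lives inside the browser’s sandbox, which is the larger point of the list above.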
It may seem like native applications are the way to go. They can do all the things browsers cannot. But!
native apps are more susceptible to the changing needs of their operating-system host; they may go stale (look outdated) or outright stop working after several years
often struggle to find a sustainable mechanism for exchanging a user’s money for a developer’s time; part of that is the royalty model platforms and stores demand, part of that is the difficulty inherent in making a business of any application
cannot exceed the resources of one user’s computer, except for a few very high-end professional media production applications
In practice, this means there’s a step before building an application where I figure out where some functionality lives. “This needs to think real hard, it goes on a very specific kind of server. This needs to store some data so it has to go on that other server. That needs to present the data to the user, so I have to bridge the server and browser. But we’d really like to put this in app stores soooo, how are we going to get something resembling a native app without putting a lot of extra effort into it?”
There are entirely good reasons that this dichotomy has emerged, but it’s kinda dumb too. In other words, paraphrasing Churchill:
Browsers are the worst form of cross-platform development, except for all the others.
I miss the blogging scene circa 2001-2006. This was an era of near-peak enthusiasm for me. One of those moments where a random rock was turned over and what lay underneath was fascinating, positive, energizing, captivating, and led me to a better place in my life and career.
As is noted by many notable bloggers, those days are gone. Blogs are not quite what they used to be. People, lots of them!, do social media differently now.
Around 2004, amidst the decline of peer-to-peer technologies, I had a hunch that decentralized technology was going to lose out to centralization. Lo and behold, Friendster then MySpace then Facebook then Twitter made this real. People, I think, will always look to a Big Name first and look to run their own infrastructure nearly last.
In light of that, I still think the lost infrastructure of social media is worth considering. As we stare down the barrel of a US administration that is likely far less benevolent in its use of an enormous propaganda and surveillance mechanism, should we swim upstream against the ease of centralization and decentralize again?
Consider this chart identifying community-run and commercially run infrastructure that used to exist and what, in some cases, has succeeded it:
[Image: Connective tissue, then and now]
I look over that chart and think, yeah a lot of this would be cool to build again.
Would people gravitate towards it? Maybe.
Could it help pop filter bubbles and counter social sorting, fake news, and eroding trust? It doesn’t seem worth doing if it can’t.
Do people want to run their identity separate from the Facebook/Twitter/LinkedIn behemoths? I suspect what we saw as a blog back then is now a “pro-sumer” application: a low-cost way for writers, analysts, and creatives to establish themselves.
Maybe Twitter and Facebook are the perfect footprint for someone who just wants to blow off some steam about their boss, politics, or a fellow parent? It’s OK if people want to express their personality and opinions in someone else’s walled garden. I think what we learned in 2016, though, is that walled gardens are problematic beyond mere commercialism.
That seems pessimistic. And maybe missing the point. You can’t bring back the 2003-6 heyday of blogging a decade later. You have to make something else. It has to fit the contemporary needs and move us forward. It has to again capture the qualities of fascinating, positive, energizing, captivating, and leading to a better place.
I hope we figure it out and have another great idea party.