Like everyone (it seems), I’m exploring how large language model copilots/assistants might change how I work. A few things that have stuck with me:
- Use LLMs to reduce the cost of doing things, thereby increasing ambition. That is, reducing cost increases demand.
- Use LLM prompting to think through and design a new piece of program functionality. If you can manage to write a generic prompt, free of proprietary information, you’ve given many programmers a wiser pairing partner than they might normally have.
- Use LLMs as a flexible tool for thinking through problems or solving them outright. GPT-4 is like rolling a REPL, a junior developer, and a conversational partner into one very flexible toolkit.
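As a concrete illustration of that last point, here’s a minimal sketch of driving GPT-4 as a REPL-style conversational partner. It assumes the OpenAI Python SDK and an `OPENAI_API_KEY` in the environment; the helper name and prompts are my own illustration, not a prescribed workflow.

```python
# Minimal sketch: a chat model as a REPL-ish thinking partner.
# Assumes the OpenAI Python SDK (>= 1.0) and OPENAI_API_KEY in the environment;
# the model name and prompts here are illustrative.
from openai import OpenAI

client = OpenAI()

history = [{"role": "system",
            "content": "You are a patient pairing partner for a programmer."}]

def ask(prompt: str) -> str:
    """Send one turn and keep the running conversation, REPL-style."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("Sketch a design for a rate limiter using a token bucket."))
print(ask("Now list the edge cases I should test."))
```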
My take right now: GitHub Copilot is the real deal, and it’s helpful today. At the low end, it’s the most useful autocomplete I’ve used. At the high end, it’s like having a pairing partner who has memorized countless APIs (with a somewhat fallible memory) and can type out boilerplate quickly, so I can edit, verify, and correct it.
I don’t expect the next few products I try to hit that mark. But I suspect I’ll have a few LLM-based tools in my weekly rotation by the end of the year.