In recent months, I’ve been doing a fair number of interviews about my new book, Slow Productivity. I’m often asked during these conversations about the potential impact of artificial intelligence on the world of knowledge work.
I don’t talk much about AI in my book, as it focuses more on advice that individuals can put into place right now to escape busyness and find a more sustainable path toward meaningful accomplishment. But it’s a topic I do think a lot about in my role as a computer scientist and digital theorist, as well as in my recent journalism for The New Yorker (see, for example, this and this).
With this in mind, I thought I would share three current thoughts about the intersections of AI and office productivity…
First, the large language model tools drawing the bulk of the attention at the moment, including ChatGPT, Claude, and Gemini, will not, on their own, revolutionize knowledge work productivity.
Language models can help speed up administrative tasks. For example, you can use them to write initial drafts of an email or fix the language on an email you wrote quickly. They can also create a summary of what was discussed in a long chat transcript or help you brainstorm ideas.
This is useful, but not necessarily transformative. Other technologies have previously sped up the execution of administrative tasks (think: every major breakthrough of the personal computer revolution), but speeding up these tasks has a way of inducing even more such tasks to fall into their slipstream. The result is less a new productivity utopia than an even more intense level of freneticism.
(There are some interesting exceptions here. These models’ ability to produce ready-to-use computer code and bespoke images, as well as fully automate certain customer support interactions, could lead to immediate disruption in certain fields.)
Second, the real impact will come when artificial intelligence tools gain the ability to plan, including future prediction and the simulation of other minds. As I reported for The New Yorker (and summarized here), this will involve combining language models with other types of (non-neural network) models, like those used to explore moves in game-playing programs.
Such multi-strategy systems could go beyond speeding up administrative tasks and instead fully automate them. For example, instead of helping you draft an email, such a program might respond to an email entirely on your behalf. This would be a game changer.
(Mustafa Suleyman has argued that the real Turing Test that matters is whether a given AI can go off and earn $100,000 for you on the internet. I would argue the test that’s more relevant — and consequential — is whether an AI can empty your inbox.)
Third, while it’s true that the major players in this space are certainly working on these types of planning-enabled systems, there’s no need to wait for these new technologies to improve our professional lives. We can achieve their promised revolutions right now, not with fancy computer programs, but with new common sense rules and processes for how we manage workloads and how we communicate about our efforts. I don’t need a trillion-parameter model to empty my inbox if I can prevent my inbox from getting so full in the first place.
My new book, this newsletter, and my podcast, among many other sources, all contain practical ideas for achieving such an overhaul of knowledge work. None of these ideas depend on radical new tools. They rely instead on new perspectives and common sense. Put another way: we don’t need transformer-based neural networks to revolutionize work; we just need a willingness to try something new.
#####
Speaking of books, my longtime friend and collaborator Scott Young has a fantastic new title out this week called Get Better at Anything: 12 Maxims for Mastery. I think Scott has written the ultimate (and approachable) guide to getting better at the stuff that matters. If you’re interested in engineering a deeper life, you need this book in your proverbial toolkit. (Learn more.)