The Work Assistant
An Endangered Species Worth Saving?
by edward tenner
edward tenner, a frequent contributor to the Review, is a research affiliate of the Smithsonian Institution and Rutgers University, and author of Why the Hindenburg Had a Smoking Lounge: Essays in Unintended Consequences (American Philosophical Society Press, 2025).
Published May 5, 2026
The good news is that chatbots have passed a key threshold. At least for writers and researchers, ChatGPT and Claude are now delivering answers to complex questions with verifiable sources in a minute or less — a far cry from the hallucinations of two years ago.
Consider health care ethics and policy. After reading in The New York Times that fully 5 percent of all deaths in Canada are legally sanctioned suicide, I was curious about the economics of medically assisted dying. Did the Canadian health care system lose or make money on the practice? Once I probably would have had to search at least three or four databases — government reports, professional journals, magazines and newspapers — to put together an answer. But in moments, my paid ChatGPT subscription discovered estimates ranging from C$35 million to C$145 million, with links to sources. Equally important, it analyzed the limitations of statistics on this topic.
It did, in other words, what a human assistant or I personally would have done, but in a tiny fraction of the time and at no cost beyond my subscription. (AI data centers’ strain on electrical grid capacity may add to my monthly electric bill, though I would have to pay that whether or not I used the service.)
Is it good news that so much mental tedium has at last been automated? I’m not sure, since I can see the argument from the perspective of assistants not eager to be displaced by digital circuitry. In any event, the conflict may be moot. The grants that pay for academic assistants in the humanities and social sciences are endangered.
I became an assistant myself over 50 years ago, shut out of my fantasy teaching career by the crisis of the academic job market in my specialty, German history. One of my graduate teachers at the University of Chicago, Bill McNeill, rescued me after I lost a community college teaching position. My assignment was searching the French- and German-language literature in the history of medicine and public health for raw material for McNeill’s book in progress on pandemics in history. His book, by the way, was published as Plagues and Peoples, and is still in print today.
Little did I know that the project was a turning point in my life. First, it showed me how the history of science, medicine and technology could not be divorced from the general history that I had studied — a retrospectively obvious point, but one glaringly absent from my education. Second, it let me experience how a best-selling (but scholarly) general-interest book is put together, again a skill they don’t teach in grad school, perhaps because so few professors have mastered it.
A second assistantship paid me to edit a paper comparing the Canadian and American health care systems. Why were we, and why are we still, struggling to achieve universal affordable health care? I discovered that it’s not only because powerful interests stood in the way; there were (and still are) significant cultural differences even among neighboring nations.
More important, I learned how to edit readable academic writing. I was graciously listed as co-author, and the boost helped me launch a career in science book acquisition at a university press, a wonderful refuge from the cruel economic realities of academic humanities.
Today the health care research role might be filled by a chatbot, unless the journal forbids the use of AI for that purpose. I am less sure about the potential role of AI in multilingual historical bibliography of the kind I undertook, but the machines are probably on their way to domination.
In turn, my own human assistants in publishing and historical research have benefited (I hope) from their year or two with me, not because I am necessarily a joy to work for, but because of the challenge of the assignments. One became an editor at Apple and rode the high-tech wave; she now has her own foundation. Another became a professor of anthropology; still others, a greeting card entrepreneur and a published Einstein scholar.
But the biggest beneficiary of the assistant system was me. Assistantships are a way to build friendships across generations; Bill McNeill remained a mentor and advisor.
The question of what we lose when AI assistants can do everything that entry-level staff can do, only better and faster, is commonly framed in terms of unemployment and the short-circuiting of junior jobs. So far, though, it appears that, apart from well-publicized layoffs dubiously justified by management invoking AI as force majeure, the real effect of chatbots is longer and harder work by junior staff struggling to hold back the advancing silicon army.
The traditional assistant was like the apprentice sushi chef in Frans de Waal’s The Ape and the Sushi Master, performing grunt work while carefully observing the skills of the expert and gradually being allowed to imitate them — a practice that de Waal observed originated among non-human primates. (Think “Monkey see, monkey do!”) Learning things the hard way may seem inefficient, but it often does lead to a deeper understanding of the work at hand. And in the corporate sector, this period of assistance, observation and imitation helps to build organizational culture.
This is true in the professions as well. As the technology management guru Matthew Beane has written:
[S]ince residents are slower and make more mistakes than an experienced surgeon would, … surgeons are opting to cut residents out of the action. Before, residents might operate for four hours during a 4.5-hour procedure. In my nationwide data, their robotic average time hovered in the 10- to 15-minute range. And residents got less operating time in 88% to 92% of cases. In this situation, we end up with much-less-capable surgeons.
Likewise, even (or especially) the most prestigious law schools do not prepare their graduates for an attorney’s most typical work, drafting contracts. That traditionally has been done under the supervision of partners and senior associates. So, when LLMs prepare the contracts, a vital learning experience may be lost.
Finally, artificial intelligence is threatening access to the kind of invaluable experience I enjoyed when forced to pivot. Ironically, this is showing up in academic computer science itself. The AI researcher Ariel Rosenfeld recently recalled the value of his own time as a grad student struggling with assignments from his faculty supervisors. In an essay in Science, he notes the irony that a generation of senior professionals now faces: “Personally, I am seriously tempted not to take a chance on a novice for my new project — which means today, I probably wouldn’t recruit my younger self.”