49 Comments
Riley D. Choquette's avatar

I think these questions are ancient. I love this quote from Ecclesiastes (the same book that says “there is nothing new under the sun”): “One who watches the wind will not sow, and the one who looks at the clouds will not reap…. In the morning sow your seed, and at evening do not let your hand rest, because you don’t know which will succeed, whether one or the other, or if both of them will be equally good.”


I like the idea that an ancient farmer would ask the same type of question, or even a more modern one. What’s the point of my work if it might rain? Should I even learn how to drive oxen if tractors are coming? In most any case, the one who has already started is better off when the chips fall where they may. I love your point that rising grad students are better suited than their advisors to unlock AI’s potential. And I totally resonate with your point about the interface between technical knowledge and human intuition/experience. I think AI just removes access to the technical knowledge as a limiting reactant; now human intuition and real value creation can be the main focus.

Thanks for sharing.

Lomklal's avatar

This counterintuitively made me more anxious...

David Bessis's avatar

Thanks for the blunt comment!

That wasn't the intended effect, even though I intentionally tried to stay "sentiment-neutral" rather than "comforting".

I've made some edits to the final paragraphs to better reflect my (cautiously optimistic) perspective, let me know if this makes any difference.

Lucas's avatar

Much of the public discourse is conducted by "doomer" people. I think we need more pieces like yours.

Mike Mellor's avatar

Au contraire, I think that public discourse on AI is dominated by boomers.

Like Horace, I would leave it twenty years, but we don't have that long to evaluate the impact of AI.

Lucas's avatar

By "doomers" I mean the "we are cooked" type of people.

Mike Mellor's avatar

So far the evidence points that way.

Jameson Graber's avatar

Fantastic piece. The first thing I appreciate is that you care about why we do research. I, too, have encountered the attitude "well it's fun for me so who cares if it's useful" more frequently than I ever imagined I would. To me, it's distressing. The first question I ask myself before I commit to anything is whether it's the right thing to do.

Then I really appreciate how you insist that to find the answer to that question, we have to look inward. This isn't always natural. External checks seem like they should be more reliable. And I guess it would be unwise to completely ignore them. But ultimately, nothing creative can ever come from looking at currently established incentives.

I also like how you acknowledge the ambiguity in the current state of specialization. This isn't an easy problem to think about! It's all the more frustrating when we hear administrators tell us how much they want "interdisciplinary" research, as if a few words of encouragement is all it takes to produce revolutionary ideas that transcend disciplines. I don't know how to think properly about specialization. If I had never started by thinking about a very narrow set of problems, I never would have even started into research. But the older I get, the more I wonder if I can ever answer a "real" question.

Mike Mellor's avatar

In my country we used to have the Council on Scientific and Industrial Research. I met a guy who worked there who couldn't believe that he was given all these fantastic toys to play with, and he was paid as well!

Prof. Gavin Brown's avatar

Sent to my lab group

MD's avatar

About gap-filling: While I was studying for my driving license test, I somehow found a bachelor thesis that started (paraphrasing) "People have studied the kinds of mistakes students make when learning to drive. But nobody has yet compared the mistakes they make with their instructors to the mistakes they make in the final state exams, so let's do it!"

In that moment I realised that not only will we never answer every question, we probably shouldn't. Finding something better than that has been a project ever since.

Mark Frankel's avatar

You are right that the crisis of intellectual work began before AI. Academia often rewards publishing and specialization more than real understanding. Your distinction between “original research” and “valuable research” is important. A machine can produce correct papers, but it cannot decide why a question matters. You also reframe the student’s fear well. The danger is not simple replacement. AI removes the safety of routine intellectual labor and forces people to justify why they are thinking at all.

Where I think you are mistaken is your location of the human advantage. You place it in sincerity and personal interest. But caring about a topic does not make it socially valuable. AI will increasingly form hypotheses and evaluate arguments. The real human role is judgment and responsibility. Intellectual work survives because society needs accountable decision makers, not just curious thinkers.

David Bessis's avatar

Thanks. My point is about intellectual sincerity, which is more specific than sincerity alone and more rigorous than personal interest. Maybe we’re not hearing the same nuance in this expression, but in my view the intellectual sincerity of a trained academic should (hopefully) have a strong component of judgment and responsibility.

E.g. for me: my intellectual sincerity convinced me to shift from proving theorems (the socially agreed job description of a pure mathematician) to writing about mathematical cognition (a less immediately recognized activity), because I sincerely felt that this was more "interesting" and "important", at least from my subjective perspective: my natural curiosity was there and I decided to trust my gut. My retrospective analysis is that, yes, the social value of my work increased as a result of this shift, which was guided by my (trained and value-aligned) intellectual instinct.

David Bessis's avatar

The short version could be: at a certain point in my career, I decided that I had proven myself enough as a creative intellectual and it was time to trust my intellectual instinct, even if it led me to a place where the value creation would be less immediately recognized.

Mark Frankel's avatar

I think I understand your nuance better. You are describing something more disciplined than “follow your passion.” A trained mind develops a sense for which questions are alive and which are sterile, and your shift was a redirection of rigor, not an escape from it.

My concern is that internal instinct cannot be the safeguard. The issue is not whether a thinker feels a question is important, but whether the world can correct him when he is wrong. AI removes scarcity of technical competence, so inwardly validated inquiry alone will not hold value.

The pressure AI creates is externalization. Intellectual work matters when it shapes learning, decisions, or real outcomes outside the thinker. Your move worked not only because you trusted your gut, but because it connected to how people actually think and learn. That standard is stricter than intellectual sincerity.

David Bessis's avatar

Yes, it's true that there is something stricter than intellectual sincerity. There are dimensions of openness, empathy, integrity, value-centricity, etc...

(Not sure I'm able to articulate the full list without making it long and intimidating ;) )

I hope the young readers this piece is intended for will get the main idea, which is about not being afraid of going where they secretly feel the true value lies, even if the institution pushes them in another direction, as in the end the most intellectually sincere and ambitious path might be the least risky one.

svengineer99's avatar

Thank you David for sharing this wonderful essay; I especially enjoyed the quotes from Larry McEnerney and Rainer Maria Rilke, and the further reading and thinking those led to.

Daniel Aronoff's avatar

This is apposite and wonderful. It evokes the realization that a machine that answers all known (and in the case of an LLM, publicly recorded) questions cannot know of questions that have never been asked or answered (though it may discover novel analogies inside the convex hull of existing documents). Over the course of history it is precisely this, which "stands on the shoulders of the ancestors," that constitutes creativity and leads to progress in every domain. I don't see the current application of linear algebra trained by the chain rule on the corpus of previously written documents fundamentally changing that.

svengineer99's avatar

Also, answering a question is not the same as answering it correctly, contextually, with appropriate provisos and disclaimers, or perhaps questioning the question, context, etc...

Considering 'Why Most Published Research Findings Are False' (Ioannidis, 2005), the answer from any statistical inference machine where such publications constitute the best of the training data might also be taken with a large grain of salt. And while that may historically apply less to the field of mathematics than others, I wonder if it may become less so as AI contributions and reviews enter the mix.

Lindsay Meisel's avatar

I watched the full Larry McEnerny lecture on your recommendation, and I'm so glad I did. 

My favorite part was when he took issue with researchers justifying their work by claiming that it "closes a gap" in our knowledge—as if the body of knowledge were a finite thing, like a puzzle, and you're just looking for the missing pieces. But knowledge, he argues, is nothing like a puzzle; rather, it's infinite and shifting and defined (and re-defined) by whoever makes the strongest case that something is relevant and valuable and worthy of being called knowledge.

All of which seems relevant to your point here!

David Bessis's avatar

Thanks! It's one of the few videos that had a profound influence on me, and I systematically recommend it to my friends.

Nathan Desdouits's avatar

Hello David,

Thank you for this great piece. It resonated deeply with my own (short) experience in academia, and with my insecurities toward AI and its impact on intellectual jobs. The most striking part, to me, is your blunt description of academia being "a by-product of the academic technostructure" - it's the first time I see this particular diagnostic on academia's woes, and it totally fits the symptoms.

I believe I am extremely lucky to have the right blend of experience, intellectual acuity and socio-economic capital to be able to be just barely on the "right side" of the AI revolution, the "driver seat". Yet looking at the incredible pace of AI development I still have deep anxiety that this winning cocktail will only net me a limited lead time before I get to be obsolete like everyone else... And then, what's next?

To me, here's the big question: when the time has come when everyone is playing on an even field in terms of technical mastery, what proof do we have that the focus on sincerity and value will actually be what brings food on the table for intellectuals?

David Bessis's avatar

Thank you Nathan!

Two things:

- First, it won't be an even field in terms of mastery. The coming mastery will remove some legacy "technical" aspects, but create new ones, just like modern mathematicians manipulate higher level abstractions but eschew computational aspects that used to be central in their practice. The playing field will never be level and there will always be some experts, even if their role is limited to prompting AI systems and analyzing outputs, and helping with their alignment.

- Second, there's no reason to conflate "intelligence", whatever it means, with the social functions enabled by "intelligence". Humans are social animals and being an intellectual is a social role before being a skillset.

Nobody knows how human organization will evolve in the future, but at this point there is zero referenceable example where humans have let go of their social nature.

The focus on sincerity and value is a strategy for intellectual creativity, it increases the likelihood of success but doesn't guarantee it. It's always been hard, there's always been starving intellectuals and it will continue, but on the other hand there's always been successful ones.

Do you seriously think people are going to only converse with AI, and won't be interested in a smart perspective on what AI is telling them, from a person who is substantially more competent and legitimate than them on the subject?

That would be a radical anthropological transformation, not just a technological one. I'm not saying I'm 100% sure it won't happen some day, but it would be an exaggeration to say it's the only possible scenario.

Gnaminin efraim tresor Blay's avatar

It's a good sensation when I read you. I learn a lot.

Jostein Iversen's avatar

Thank you for your deep, profound letter addressing a contemporary concern for many aspiring to pursue intellectual journeys. I find your insights and learnings motivating. I hope to find time to properly follow the links you have shared and digest them with pressure and time. Find questions that have real value and trim the plant, so it can develop into your cardiovascular roots.

Aditya Chandrasekhar's avatar

A true AGI can automate anything, including great research. By definition.

David Bessis's avatar

Sure, but defining omnipotent entities doesn't make them exist overnight.

Your argument feels theological to me.

Joe Dubsky's avatar

You know what I find most confusing... assuming the onset of true AGI, why are we so specifically worried about intellectual work?

In theory, a form of AI with agency, robust common sense, continuous learning, and whatever else comes with AGI (at this point why not throw in consciousness too???) won’t ‘take over’ just intellectual work, but the scope of the jobs it could target are so immense and vast I do not understand the worry about intellectual/academic work specifically.

If this form of intelligence is even truly possible, which I do have my own doubts about, why stop here? Why not assume we converge to a quasi-WALL-E world?

As a (young) PhD student myself, it seems like intellectual work would be one of the LAST targets. To form and ask truly new and important questions is something LLMs couldn't possibly dream about. The gap may seem to be closing, but there's still a jump to be made that seems so consequentially gigantic that even AGI, with all the aforementioned traits, would struggle to make it. This is all my dull and ill-researched interpretation.

Kafka Kat's avatar

Yes, courage and honesty are antidotes to the anxiety over technical mastery.

If one hopes to achieve any accomplishments by doing only "half-ass" labour, (s)he is bound to feel overwhelmed by AI. In a sense, you'll suffer from actual imposter status plus the imposter syndrome. I like that you make a distinction between the originality of the merely novel and transformative originality.

Einstein prescribes the history and philosophy of science: "A knowledge of the historic and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering. This independence created by philosophical insight is—in my opinion—the mark of distinction between a mere artisan or specialist and a real seeker after truth."

Thank you David for writing.