June 13, 2025

Good Medicine: Dr. Bob Wachter

Dr. Bob Wachter breaks down realism in medicine

As I discussed in our inaugural post, with each publication, we’re going to focus on an attribute that seems essential for Good Medicine. And this week’s theme is realism. I thought this was a particularly appropriate place to start because we are living in a time of unprecedented technological change. Patients have access to infinite information. New drugs and devices are constantly being brought to market. Some of them even work! And we as physicians are challenged to stay up to date so we can provide sage, considered advice about what to do.

This environment requires striking the right balance between hype and skepticism so we can deliver the best advice to our patients. Put another way, it requires realism. And I can’t think of anyone better than Dr. Bob Wachter to provide a framework for how to be the physician who guides us through these choppy waters of change.

Though he needs no introduction, Bob is chair of the Department of Medicine at UCSF. He has published half a dozen books on topics across health care, including quality, safety, ethics, and AI, as well as more than 250 articles. Hundreds of thousands of people follow him for his insightful takes across a variety of platforms, including LinkedIn and X (the app formerly known as Twitter). In full disclosure, he is an advisor to Roon.

So, let’s dive in.

Realism regarding your own expertise

Bob has been famous in academic medicine for a long time. But his internet fame really took off over the last decade, exploding thanks to his takes during the pandemic. Before Covid hit, Wachter told us, he had about 15,000 followers on Twitter. What about after? His follower count swelled to almost 300,000 people - a roughly twentyfold increase. I was curious: how was he able to capture so many people’s hearts and minds during such a terrifying time, and what can other medical communicators learn from him?

Wachter told me a few things. First, people were “desperate for trusted sources” during the pandemic, he said. And he knew that his experience as a general internal medicine physician set him up nicely to be an effective interpreter of Covid data. Wachter, of course, is not a virologist and certainly had not spent a career studying infectious disease or vaccines. But as he told us, “I’m a generalist,” and it’s his job to “know something about a lot of things.”

As a deep subspecialist in surgical neuro-oncology, I found this statement striking. We’re so often in our niche silos in medicine, and the job of the generalist seemed impossibly tough as I listened to Bob describe his responsibility as a provider and steward of trusted medical information. Sure, there’s incredible value in deep subspecialty expertise, but increasingly our society and our profession require people, like Bob, who are master integrators - people who can vacuum up insights from virologists and vaccinologists and interpret their findings for the broader public. And essential to Bob’s success was a sense of realism in his shared insights. He was straightforward in his communications about what he did not know, and this lent him a high degree of authenticity and trust with the broader public. Authenticity and realism, core skills for any practicing physician, also happen to be key attributes of influencers on other social media apps. Notably, he didn’t have to dance, sing, or be obnoxious to have reach. Bob, it turns out, was just being Bob.

Realism about fighting misinformation

Bob also articulated that the ability to persuade in the public sphere has its limits. People inclined to trust science found him to be a useful resource, but those who gravitated to misinformation were unlikely to have been truly swayed by his tweets. Complicating matters, Bob told us that the person putting out misinformation has a “huge advantage” over physicians like himself in getting a message disseminated. That said, he found that one of the best tactics for fighting misinformation during the pandemic was sharing personal stories. He would talk about what he was doing, or what his family was doing, to stay safe. He would talk about whether or not he’d eat in restaurants, or whether he’d fly. That put the concept of risk - which can be hard for humans to truly conceptualize - in terms that most people could understand. If Bob Wachter is choosing not to eat at restaurants, given everything he’s seeing and reading, then perhaps I should avoid it, too.

That is a hugely important insight, one that is widely overlooked and that we’ve written about before: storytelling is powerful, and people are mostly interested in other people. Facts and figures should be part of the story, but not the only thing that medical communicators share. It’s important to put medical information in context. “I am a human and I am making a choice,” is how Wachter described his approach to education around Covid risk. “And people found that very useful.”

This reflects a core problem in our society today. Free speech is one of our bedrock principles, but its consequences include an infinite array of media sources, many of which exploit users with misinformation for financial and/or political gain. A hard problem without an easy solution. But as Bob has shown us, the solution is not to wish away the problem. Rather, all doctors need to learn to communicate effectively in our public squares with messages that are both educational and compelling. If misinformation is the poison, then we have a responsibility to be the antidote.

Realism and AI

In Bob’s book “The Digital Doctor,” published in 2015, he astutely noted the disconnect between computerized systems like the Electronic Health Record (EHR) and the value they generated for physicians. He remarked, “It felt like all we were doing was feeding this thing and getting very little useful intelligence.” Fast forward to today, and we’re on track to deploy technology that really could impact the physician workforce - a real “game changer” in his words. For example, Bob correctly anticipated the rise of digital scribes - those note-taking and documentation apps that ambiently listen to patient-doctor conversations and generate a note for a physician to review. From what we’ve both seen, these technologies are already quite good and inch us closer to true “keyboard liberation.”

I suspect these technologies forecast a rapidly approaching future in which every doc in the world has a capable, low-cost assistant to help with documentation, coding, and suggested billing - a true era of both keyboard liberation and assistance. And let’s not forget those soul-crushing activities like scheduling, appointment reminders, prior authorizations, insurance appeals, and more. (It’s likely that insurers will develop their own AI, so it will become a battle of AI bots! I imagine a hilarious, escalating, never-ending feud between bots arguing over the need for a PET scan. But I digress...) In these areas, Bob and I agree that AI has the potential to be most immediately useful.

On diagnostic capabilities, we both felt this will take a lot more time to get right. For one thing, so much of the medical record exists outside the EHR. Conversations that never get documented, social and cultural factors that shape a particular recommendation, and other context can markedly influence decision making. For this reason, physicians can’t be asleep at the wheel when relying on these technologies if we’re going to deliver high-quality, patient-centered care. On that point, Bob was skeptical about an immediate practical impact on patient care. Just as the initial deployment of health IT brought many unintended consequences, we’ll likely see the same in this new era of AI.

What else might we worry about? First, to me it’s an open question whether these technologies will actually allow physicians to spend more time with patients in traditional health care settings. Does improved productivity just lead to more patient throughput, with the same poor patient satisfaction? Or maybe we’ll spend the same amount of time seeing patients, but the quality of that time will be much improved. The other worry Bob and I spoke about was the potential deskilling of the physician workforce. If more and more of the clinical history taking and reasoning is delegated to AI, how will our quality as doctors evolve? On this, Bob was more optimistic, suggesting that a similar uproar likely accompanied the mass adoption of calculators. Yes, we’re all worse mathematicians today, but we’re doing far more complex math. No one is pining for the days of the abacus!

For the foreseeable future, it’s probably best to think of AI not as our medical co-pilot but as our chief of staff. Why not co-pilot? As Bob pointed out, “Humans tend to trust the computer in some ways more than themselves.” And we “stink” at being reliable quality control when we’re placed at the final sign-off of a semi-automated loop. This reflects the well-known phenomenon of automation bias, which has been widely studied across many fields. It’s why the chief-of-staff analogy makes more sense to me: we can have agents that help us be the best, most productive versions of ourselves, but we still need to set a north star while actively assessing quality and performance.

One interesting point needs to be said: the AI we have today is the worst it’s ever going to be. Yes, various studies have shown that AI gets things totally wrong a not-insignificant percentage of the time. Yes, the data sets that train our models are biased and full of errors. But what are we comparing it to? We humans commit medical errors too, often serious ones. The point is that AI is poised only to get better, and its impact on medicine will be profound in ways we can’t yet anticipate. (A recent study in Nature Medicine, for example, demonstrated a significant reduction in all-cause mortality when an AI alerted a physician to a concerning finding on an electrocardiogram.) All of this reminds me of the head-scratching around the “Internet” back in the ’90s. The applications subsequently built on the internet were unpredictable, but they’re ultimately what made it fascinating and essential. And that’s where AI applications are headed.

Our conversation on AI in health care left me with many more questions than answers. While the productivity benefits of AI seem easy to wrap your mind around, I’m curious whether AI can actually be sold against the true cost of the work it replaces. In certain fields like radiology, I think the required physician workforce will eventually be much smaller than it is today. But on the patient care side, it’s an open question how impactful AI will be against our current cost structure in health care. For example, even if I could replace much of my clinical support staff with an AI agent, I can’t really operate on more patients, since the time it takes to operate is fixed and I’ll always need to see the patient before and after an operation. Moreover, I’ll still need people in the office to answer very specific patient questions and assess patients in person. Most importantly, those same people offer comfort to patients, which doesn’t seem easily replaced.

Realism about medical education

I personally think it’s fascinating to consider what happens to medical education in the age of AI. Do I really need to remember the formula for the fractional excretion of sodium? Or can I just ask Siri to tell me the value for the particular patient I’m seeing? So much of medical expertise seems to come from the imprinting of information through study and experience. Most skilled clinicians just instantly know what to do after collecting the relevant details (and can argue against the alternative scenarios). Wachter is doing a lot of thinking about how medical education will change and is in fact chairing a commission on how AI should shape medical school. As with the calculator example, I think he’s right that we’ll probably be slightly worse at rote recall but better clinicians. And since these tools are already out there for young doctors to use, it’s much better to teach physicians how to use them responsibly. That will require some grounding for doctors in the basics of how these models are trained. And maybe it will produce an even better doctor in the end. Why? Because every doc will need to run an AI recommendation through a plausibility filter, and developing that filter will require significant upleveling of expertise. Because guess who’s responsible if an AI generates a wrong recommendation? Not OpenAI. You, Dr. Jones. You. How’s that for realism?
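For reference - and to underscore the point - here’s the formula in question, the standard one every intern once had to memorize:

FENa (%) = 100 × (urine Na × plasma Cr) / (plasma Na × urine Cr)

Worth understanding, certainly. Worth memorizing when an assistant can recall it on demand? That’s the question.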

His one book recommendation?

I think you learn so much by asking people what they read outside of medicine. Bob highly recommended Ethan Mollick’s new book “Co-Intelligence,” which is based on his brilliant Substack, One Useful Thing. Mollick studies innovation, entrepreneurship and the true potential for AI. We’ve already downloaded it to our Kindle.

Dr. Wachter on Roon and online

I love watching great doctors explain things. Here are some of Bob’s best videos on Roon, where he lends his expertise to our dementia app.


I also loved this recent podcast he did with Dr. Abraham Verghese and Dr. Eric Topol.

Well, that’s it, folks. I hope you enjoyed the very first dose of Good Medicine. We have an incredible lineup already booked for upcoming editions, but we’d welcome your suggestions for who you’d like to see us interview. Keep an eye on Roon and on our LinkedIn, Instagram, and blog for more highlights and the audio from our interview with Dr. Wachter. You can follow him on X at @Bob_Wachter.

Dr. Rohan Ramakrishna is a Professor of Neurosurgery at Weill Cornell Medicine and one of the founders of Roon.