
How bioengineering professor Russ Altman uses AI

Published 1 day ago · 5 minute read

This Q&A is part of a new series exploring how Stanford experts are using artificial intelligence in their personal and professional lives. These candid conversations aim to shed light on the practical, everyday choices people are making as AI becomes increasingly integrated into work, research, and home life. This series does not reflect official university policy or guidance.


Russ Altman, a Stanford bioengineering professor who’s been working with AI since the 1980s, shares why he still writes his own letters of recommendation, how AI helped him shop for a camera lens, and why he remains optimistic about the benefits of AI while worrying about its impact on the current generation.

 

First, I subscribe to whatever AI chatbot my AI-sophisticated students in data science, informatics, and AI research recommend. So I have switched a couple of times to different vendors based on who is doing the best at a given time. 

I can write quickly and effectively (my own opinion, but I think true), and so I am better and faster at good writing than AI – for now, though that may change. So I have used it to create some outlines or initial text, but then I heavily fix it up. (By the way, these answers were NOT created with AI!) I use it probably a bit more in my personal life than at work. But I have asked it for summaries of technical areas that I am not familiar with, and had it comment on some text that I wrote where I wanted a quick second opinion. My personal life does not currently depend on generative AI. My professional life does depend on generating novel AI technologies for discovery in biology and medicine, but that is as a builder, not a user.

 

For research, I wrote a one-page grant summary and asked for strengths and weaknesses. It was useful but a bit superficial, though it did raise a couple of criticisms that I hadn’t thought of.

I chaired a university committee that advised the provost about how to approach both the promise and threats of AI in education, research, and administration at Stanford.

In teaching, I allow AI in my classes, but students must disclose its use (such as the prompt and how they processed the output afterwards). Many students are using it, and they are not always up front about acknowledging it, even when it is allowed and disclosure is all that is required. I know colleagues are worried that students will not learn how to write, and therefore how to think, and this is a very valid concern. I do not know precisely what to do about it, but I am trying to prepare students for the future while also making sure they learn things.

In terms of personal use, I find it very useful. I recently bought a camera lens for my photography hobby and used AI extensively to understand the pros/cons of different products and understand which would likely fit my requirements. It was very, very useful, since there were hundreds of product reviews it could use to give me advice. In that case, I had already done a fair amount of research without AI, so I could tell it was giving me good info. 

 

Any personal letters and letters of recommendation. I want them to be 100% in my voice because they are so critical for making the world go round (decisions about careers, promotions, etc.). Others disagree with me, and the tools may become better. I do not accept suggestions for text messages or emails, for the most part, and just write them myself.

 

I am worried about ensuring that we educate students (kindergarten through college) appropriately and that we don’t create a generation of less skilled thinkers. This generation is at particular risk because those before it were taught how to read, write, and think without AI. In 10-20 years, we will have this all figured out (I hope!), but this is the generation that could be harmed the most if we don’t figure things out.

I am most optimistic about the impact of AI on scientific and engineering discovery, and particularly what it can do to help discovery in biology and medicine, with associated advances in treatment. There is a lot of potential for exciting advances in materials science, energy, sustainability, and other important areas, too.

I worry about a worsening digital divide, and so we want to make sure AI is available to all and that it improves society – aims that are compatible with the mission of Stanford HAI.

 

I worked on AI in the 1980s, when it was not very powerful. Now it is very, very powerful and I am still optimistic. I think it will define the intellectual agenda for educators and researchers for the next 10-20 years, at least. I am not worried about existential threats, but I am worried that people will not take control of AI and will let a few “deciders” make too many decisions. Decisions about AI should be made at a societal level and with some transparency.
