Today we're sitting down with Peter Lee, head of Microsoft Research. Peter and a number of MSR colleagues, myself included, have had the privilege of working to evaluate and experiment with GPT-4 and support its integration into Microsoft products.
Peter has also deeply explored the potential application of GPT-4 in health care, where its powerful reasoning and language capabilities could make it a useful copilot for practitioners in patient interaction, managing paperwork, and many other tasks.
Welcome to AI Frontiers.
I'm going to jump right in here, Peter. You and I have known each other now for several years, and one of the values I believe you and I share is around societal impact, and specifically creating spaces and opportunities where science and technology research can have the maximum benefit to society. In fact, this shared value is one of the reasons I found coming to Redmond to work with you an exciting prospect.
Now, in preparing for this episode, I listened again to your conversation with our colleague Kevin Scott on his podcast around the idea of research in context. The world has changed a little bit since then, and I just wonder how that idea of research in context finds you in the current moment.
Peter Lee: It's such an important question, and, you know, research in context, I think the way I explained it before is about inevitable futures. You try to think about what will definitely be true about the world at some point in the future. It might be a future just one year from now or maybe 30 years from now. But if you believe that, you think about what's definitely going to be true about the world and then try to work backwards from there.
And I think the example I gave in that podcast with Kevin was, well, 10 years from now, we feel very confident as scientists that cancer will be a largely solved problem. But aging demographics on multiple continents, particularly North America but also Europe and Asia, are going to give tremendous rise to age-related neurological disease. And so knowing that, that's a very different world than today, because today most medical research funding is focused on cancer research, not on neurological disease.
And so what are the implications of that change? And what does that tell us about what kinds of research we should be doing? The research is still very future oriented. You're looking ahead a decade or more, but it's situated in the real world. Research in context. And so now if we think about inevitable futures, well, it's looking increasingly inevitable that very general forms of artificial intelligence at or possibly beyond human intelligence are coming. And maybe very quickly, you know, like in much, much less than 10 years, maybe much less than five years.
And so what are the implications for research and the kinds of research questions and problems we should be thinking about and working on today? That just seems so much more disruptive, so much more profound, and so much more challenging for all of us than the cancer and neurological disease thing, as big as those are.
I was reflecting a little bit on my research career, and I realized I've lived through one aspect of this disruption five times before. The first time was when I was still an assistant professor in the late 1980s at Carnegie Mellon University, and Carnegie Mellon, as well as several other top universities' computer science departments, had a lot of really fantastic research on 3D computer graphics.
It was a really big deal. Ideas like ray tracing, radiosity, and silicon architectures for accelerating these things were being invented at universities, and there was a big academic conference called SIGGRAPH that would draw hundreds of professors and graduate students to present their results. And then by the early 1990s, startup companies started taking these research ideas and founding companies to try to make 3D computer graphics real. One notable company that got founded in 1993 was NVIDIA.
You know, over the course of the 1990s, this ended up being a triumph of fundamental computer science research, now to the point where today you literally feel naked and vulnerable if you don't have a GPU in your pocket. Like if you leave your home, you know, without your mobile phone, it feels bad.
And so what happened is there's a triumph of computer science research, let's say in this case in 3D computer graphics, that ultimately resulted in a fundamental infrastructure for life, at least in the developed world. In that transition, which is just a positive outcome of research, it also had some disruptive effect on research.
You know, in 1991, when Microsoft Research was founded, one of the founding research groups was a 3D computer graphics research group, among the first three research groups for MSR. At Carnegie Mellon University and at Microsoft Research, we don't have 3D computer graphics research anymore. There had to be a transition and a disruptive impact on researchers who had been building their careers on this. Even with the triumph of things, when you're talking about the scale of infrastructure for human life, it moves entirely out of the realm of fundamental research. And that's happened with compiler design. That was my area of research. It's happened with wireless networking; it's happened with hypertext and, you know, hyperlinked document research, with operating systems research, and all of these things have become things that you depend on all day, every day as you go about your life. And they all represent just majestic achievements of computer science research. We are now, I believe, right in the midst of that transition for large language models.
Llorens: I wonder if you see this particular transition, though, as qualitatively different, in that those other technologies are ones that blend into the background. You take them for granted. You mentioned that I leave the house every day with a GPU in my pocket, but I don't think of it that way. Then again, maybe I have some kind of personification of my phone that I'm not thinking of. But certainly, with language models, it's a foreground effect. And I wonder if you see something different there.
Lee: You know, it's such a good question, and I don't know the answer, but I agree it feels different. I think in terms of the impact on research labs, on academia, on the researchers themselves who've been building careers in this field, the effects might not be that different. But for us, as the users and consumers of this technology, it certainly does feel different. There's something about these large language models that seems more profound than, let's say, the movement of pinch-to-zoom UX design, you know, out of academic research labs into our pockets. This might get into this big question about, I think, the hardwiring in our brains: when we interact with these large language models, even though we know consciously that they aren't, you know, sentient beings with feelings and emotions, our hardwiring forces us; we can't resist feeling that way.
I think it's a deep kind of thing that we evolved, you know, in the same way that when we look at an optical illusion, we can be told rationally that it's an optical illusion, but the hardwiring in our visual perception is something no amount of willpower can overcome to see past the illusion.
And similarly, I think there's a similar hardwiring that, you know, draws us to anthropomorphize these systems, and that does seem to put it into the foreground, as you've put it. Yeah, I think for our human experience and our lives, it does seem like it'll feel, and your term is a good one, more in the foreground.
Llorens: Let's pin some of these thoughts, because I think we'll come back to them. I'd like to turn our attention now to the health aspect of your current endeavors and your path at Microsoft.
You've been eloquent about the many challenges around translating frontier AI technologies into the health system and into the health care space in general. In our interview, [LAUGHS] actually, when I came here to Redmond, you described the grueling work that would be needed there. I'd like to talk a little bit about those challenges in the context of the emergent capabilities that we're seeing in GPT-4 and this wave of large-scale AI models. What's different about this wave of AI technologies relative to those systemic challenges in the health space?
Lee: Yeah, and I think to be really correct and precise about it, we don't know that GPT-4 will be the difference maker. That still has to be proven. I think it really will, but it has to actually happen, because we've been here before, where there's been so much optimism about how technology can really help health care and advance medicine, and we've just been disappointed over and over again. You know, I think those challenges stem from maybe a little bit of overoptimism, or what I call irrational exuberance. As techies, we look at some of the problems in health care and we think, oh, we can solve those. You know, we look at the challenges of reading radiological images and measuring tumor growth, or we look at the problem of ranking differential diagnosis options or therapeutic options, or we look at the problem of extracting billing codes out of an unstructured medical note. These are all problems we think we know how to solve in computer science. And then in the medical community, they look at the technology industry and computer science research, and they're dazzled by all the snazzy, impressive-looking AI and machine learning and cloud computing that we have. And so there is this incredible optimism coming from both sides that ends up feeding into overoptimism, because the actual challenges of integrating technology into the workflow of health care and medicine, of making sure that it's safe, and of getting that workflow altered to really harness the best of the technology capabilities we now have end up being really, really difficult.
Furthermore, when we get into the actual application of medicine, so that's in diagnosis and in developing therapeutic pathways, those happen in a highly fluid environment, which in a machine learning context involves a lot of confounding factors. And those confounding factors end up being really important, because medicine today is predicated on a precise understanding of causes and effects, of causal reasoning.
Our best tools right now in machine learning are basically correlation machines. And as the old saying goes, correlation is not causation. And so if you take a classic example, like does smoking cause cancer, you need to take account of the confounding effects and know for certain that there's a cause-and-effect relationship there. And so there have always been these kinds of issues.
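The confounding problem Lee mentions is easy to demonstrate numerically. The sketch below is my own illustration, not from the discussion: a hidden variable `z` drives both `x` and `y`, so the two are strongly correlated even though neither causes the other.

```python
import random

random.seed(0)

# Hypothetical confounder z drives both x and y; x does not cause y.
n = 1000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.3) for zi in z]
y = [zi + random.gauss(0, 0.3) for zi in z]

def corr(a, b):
    # Plain Pearson correlation coefficient.
    m = len(a)
    ma, mb = sum(a) / m, sum(b) / m
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

# Strongly positive, close to 1, despite no causal link between x and y.
print(corr(x, y))
```

A "correlation machine" sees only `x` and `y` and reports a strong relationship; causal reasoning requires knowing about, and adjusting for, `z`.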
When we're talking about GPT-4, I remember I was sitting next to Eric Horvitz the first time it got exposed to me. Greg Brockman from OpenAI, who is amazing, and actually his whole team at OpenAI is just spectacularly good, was giving a demonstration of an early version of GPT-4, codenamed Davinci 3 at the time, and he was showing, as part of the demo, the ability of the system to solve biology problems from the AP biology exam.
And it, you know, gets, I think, a score of 5, the maximum score, on that exam. Of course, the AP exam is a multiple-choice exam, so it was making those multiple choices. But then Greg was able to ask the system to explain itself. How did you come up with that answer? And it would explain, in natural language, its answer. And what jumped out at me was that in its explanation, it was using the word "because."
"Well, I think the answer is C, because, you know, when you look at this aspect of the statement of the problem, this causes something else to happen, then that causes some other biological thing to happen, and therefore we can rule out answers A and B and E, and then because of this other factor, we can rule out answer D, and all the causes and effects line up."
And so I turned immediately to Eric Horvitz, who was sitting next to me, and I said, "Eric, where is that cause-and-effect analysis coming from? This is just a large language model. This should be impossible." And Eric just looked at me, shook his head, and said, "I don't know." And it was just this mysterious thing.
And so that is just one of a hundred aspects of GPT-4 that we've been studying over the past, now, more than half a year that seem to overcome some of the problems that have been blockers to the integration of machine intelligence in health care and medicine, like the ability to actually reason and explain its reasoning in these medical scenarios, in medical terms. That, plus its generality, just seems to give us a lot more optimism that this could finally be the very significant difference maker.
The other aspect is that we don't have to focus squarely on that clinical application. We've discovered that, wow, this thing is really good at filling out forms and reducing paperwork burden. It knows how to apply for prior authorization for health care reimbursement. That's part of the crushing kind of administrative and clerical burden that doctors are under right now.
This thing just seems to be great at that. And that doesn't really impinge on life-or-death diagnostic or therapeutic decisions. Those tasks happen in the back office. And those back-office functions, again, are bread and butter for Microsoft's businesses. We know how to interact and sell and deploy technologies there, and so, working with OpenAI, it seems like, again, there's just a ton of reasons why we think it could really make a big difference.
Llorens: Every new technology has opportunities and risks associated with it. This new class of AI models and systems, you know, they're fundamentally different because they're not learning a specialized function mapping. There were many open problems with even that kind of machine learning in various applications, and there still are, but instead, this has a general-purpose kind of quality to it. How do you see both the opportunities and the risks associated with this kind of general-purpose technology in the context of health care, for example?
Lee: Well, I think one thing that has drawn an unfortunate amount of social media and public media attention are those times when the system hallucinates or goes off the rails. Hallucination, which actually isn't a very good term, is, for listeners who aren't familiar with the idea, the problem that GPT-4 and other similar systems can have sometimes where they make stuff up, fabricate information.
You know, over the many months now that we've been working on this, we've witnessed the steady evolution of GPT-4, and it hallucinates less and less. But what we've also come to understand is that that tendency seems to be related to GPT-4's ability to be creative, to make informed, educated guesses, to engage in intelligent speculation.
And if you think about the practice of medicine, in many situations, that's what doctors and nurses are doing. And so there's kind of a fine line here between the desire to make sure this thing doesn't make errors and its ability to operate in problem-solving scenarios. The way I would put it is, for the first time, we have an AI system where you can ask it questions that don't have any known answer. It turns out that that's incredibly useful. But now the question, and the risk, is: can you trust the answers that you get? One of the things that happens is GPT-4 has some limitations, particularly ones that can be exposed fairly easily in mathematics. It seems to be very good at, say, differential equations and calculus at a basic level, but I've found that it makes some strange and elementary errors in basic statistics.
There's an example from my colleague at Harvard Medical School, Zak Kohane, where he uses standard Pearson correlation kinds of math problems, and it seems to consistently forget to square a term and make a mistake. And then what's fascinating is that when you point out the mistake to GPT-4, its first impulse sometimes is to say, "Uh, no, I didn't make a mistake; you made a mistake." Now, that tendency to kind of accuse the user of making the error doesn't happen much anymore as the system has improved, but in many medical scenarios where there's this kind of problem solving, we've gotten into the habit of having a second instance of GPT-4 look over the work of the first one, because it seems to be less attached to its own answers that way, and it spots errors very readily.
So that whole story is a long-winded way of saying that there are risks, because we're asking this AI system, for the first time, to tackle problems that require some speculation, require some guessing, and may not have precise answers. That's what medicine is at its core. Now the question is to what extent can we trust the thing, but also, what are the methods for making sure the answers are as good as possible. One approach we've fallen into the habit of is having a second instance. And, by the way, that second instance ends up being really useful for detecting errors made by the human doctor as well, because the second instance doesn't care whether the answers were produced by man or machine. And so that ends up being important. But now, moving away from that, there are bigger questions that, as you and I have discussed a lot at work, Ashley, pertain to this phrase responsible AI, which has been a research area in computer science. And that term, I think you and I have discussed, doesn't feel apt anymore.
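The "second instance" habit Lee describes can be sketched as a simple review loop. This is a minimal illustration of the pattern, not how the team actually wired it up; `chat` is a hypothetical stand-in for any chat-completion client, passed in as a parameter, and the prompts are invented for the example.

```python
def answer_with_review(question, chat):
    """Ask one model instance for an answer, then have a fresh
    instance critique it. `chat(messages)` is a hypothetical
    stand-in for a chat-completion API call returning a string."""
    # First instance attempts the problem.
    draft = chat([{"role": "user", "content": question}])
    # A fresh second instance reviews the draft; because it is less
    # attached to the answer, it tends to flag errors more readily.
    critique = chat([
        {"role": "system",
         "content": "You are a careful reviewer. Check the proposed "
                    "solution for errors and report any you find."},
        {"role": "user",
         "content": f"Question: {question}\n\nProposed solution: {draft}"},
    ])
    return draft, critique

# Usage with a stubbed client standing in for a real model call:
def stub_chat(messages):
    return "stub reply"

draft, critique = answer_with_review("What is 2 + 2?", stub_chat)
```

The same loop works unchanged when the draft comes from a human clinician rather than a model, which is the point Lee makes about catching human errors too.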
I don't know if it should be called societal AI or something like that, and I know you have opinions about this. You know, it's not just errors and correctness. It's not just the chance that these things might be goaded into saying something harmful or promoting misinformation. There are bigger issues about regulation; about job displacements, perhaps at societal scale; about new digital divides; about haves and have-nots with respect to access to these things. And so there are really these bigger looming issues that pertain to the idea of risks of these things, and they affect medicine and health care directly as well.
Llorens: Certainly, this topic of trust is multifaceted. You know, there's trust at the level of institutions, and then there's trust at the level of individual human beings who have to make decisions, tough decisions, about where, when, and whether to use an AI technology in the context of a workflow. What do you see in terms of health care professionals making those kinds of decisions? Any obstacles to adoption that you'd see at the level of those kinds of independent decisions? And what's the way forward there?
Lee: That's the crucial question of today, right now. There is a lot of discussion about to what extent and how, for medical uses, GPT-4 and its ilk should be regulated. Let's just take the United States context, but there are similar discussions in the UK, Europe, Brazil, Asia, China, and so on.
In the United States, there's a regulatory agency, the Food and Drug Administration, the FDA, and it has authority to regulate medical devices. There's a category of medical devices called SaMDs, software as a medical device, and the big discussion over the past, I'd say, four or five years has been how to regulate SaMDs that are based on machine learning, or AI. Steadily, there's been more and more approval by the FDA of medical devices that use machine learning, and I think the FDA in the United States has been getting closer and closer to actually having a fairly robust framework for validating ML-based medical devices for clinical use. As far as we've been able to tell, those emerging frameworks don't apply at all to GPT-4. The methods for doing the clinical validation don't make sense and don't work for GPT-4.
And so a first question to ask, even before you get to whether this thing should be regulated, is: if you were to regulate it, how on earth would you do it? Because it's basically putting a doctor's brain in a box. And so, Ashley, if I take a doctor, let's take our colleague Jim Weinstein, you know, a great spine surgeon, and we put his brain in a box, and I give it to you and ask you, "Please validate this thing," how on earth do you think about that? What's the framework for that? It's possible that regulators will react and impose some rules, but I think it would be a mistake, because my fundamental conclusion in all of this is that, at least for the time being, the rules of engagement need to apply to human beings, not to the machines.
Now the question is, what should doctors and nurses and, you know, receptionists and insurance adjusters and all the people involved, hospital administrators, what are their guidelines, and what is and isn't acceptable use of these things? And I think those decisions aren't a matter for the regulators; rather, the medical community itself should take ownership of developing those guidelines and those rules of engagement, and encourage, and if necessary find ways to impose, maybe through medical licensing and other certification, adherence to those things.
That's where we're at today. Someday in the future, and in fact we're actively encouraging universities to create research projects that would try to find frameworks for clinical validation of a brain in a box, and if those research projects bear fruit, they might end up informing and creating a foundation for regulators like the FDA to define a new kind of medical device. I don't know what you'd call it, AI MD, maybe, where you could actually relieve some of the burden from human beings and instead have some version of a validated, certified brain in a box. But until we get there, you know, I think it's really on human beings to develop and monitor and enforce their own conduct.
Llorens: I think doing research on some of these questions around test and evaluation, around assurance, is going to be at least as interesting as developing the models themselves, for sure. [LAUGHS]
Lee: Yes. By the way, I want to take this opportunity to commend Sam Altman and the OpenAI folks. I feel like you and I and other colleagues here at Microsoft Research are in an extremely privileged position to get very early access, especially to try to flesh out and get some early understanding of the implications for really critical areas of human development like health and medicine, education, and so on.
The instigator was really Sam Altman and crew at OpenAI. They saw the need for this, and they really engaged with us at Microsoft Research to dive deep, and they gave us a lot of latitude to explore deeply in as honest and unvarnished a way as possible. I think that's important, and I'm hoping that as we share this with the world, there will be an informed discussion and debate about things. I think it would be a mistake for, say, regulators or anyone to overreact at this point. This needs study. It needs debate. It needs careful consideration, just to understand what we're dealing with here.
Llorens: Yeah, what a privilege it's been to be anywhere near the epicenter of these developments. Just briefly, back to this idea of a brain in a box. One of the super interesting aspects of that is that it's not a human brain, right? So some of what we might intuitively think about when you say brain in the box doesn't really apply, and it gets back to this notion of test and evaluation. If I give a licensing exam, say, to the brain in the box and it passes with flying colors, had that been a human, there would have been other underlying assumptions about the intelligence of that entity that aren't explicitly tested in the exam, and those, combined with the knowledge required for the certification, make you fit to do some job. It's just fascinating; there are ways in which the brain that we can currently conceive of as being an AI in that box underperforms human intelligence, and ways in which it overperforms it.
Verifying and assuring that brain in that box, I think, is going to be just a really fascinating challenge.
Lee: Yeah. Let me acknowledge that there are probably going to be a lot of listeners to this podcast who will really object to the idea of "brain in a box," because it crosses the line of anthropomorphizing these systems. And I acknowledge that there's probably a better way to talk about this. But I'm intentionally being overdramatic by using that phrase just to drive home the point of what a different beast this is when we're talking about something like clinical validation. It's not the kind of narrow AI, not like a machine learning system that gives you a precise signature of a T-cell receptor repertoire. There's a single right answer to those problems. In fact, you can freeze the model weights in that kind of machine learning system, as we've done collaboratively with Adaptive Biotechnologies in order to get an FDA approval as a medical device, as an SaMD. This is something much more stochastic. The model weights matter, but they're not the fundamental thing.
There's an alignment of a self-attention network that's in constant evolution. And you're right, though, that it's not a brain in some really essential ways. There's no episodic memory. It's not learning actively. And so, I guess, to your point, it's just a different thing. The big important point I'm trying to make here is that it's also just different from all the previous machine learning systems that we've tried and successfully inserted into health care and medicine.
Llorens: And to your point, all the thinking around various kinds of societally important frameworks is trying to catch up to that previous generation and is not yet even aimed really adequately, I think, at these new technologies. You know, as we start to wrap up here, maybe I'll invoke Peter Lee, the head of Microsoft Research, again, [LAUGHS] kind of where we started. This is a watershed moment for AI and for computing research more broadly. In that context, what do you see next for computing research?
Lee: Of course, AI is just looming so large, and Microsoft Research is in a weird spot. You know, I talked before about the early days of 3D computer graphics, the founding of NVIDIA, and the decade-long industrialization of 3D computer graphics, going from research to just, you know, pure technical infrastructure of life. With respect to AI, this flavor of AI, we're sort of at the nexus of that. And Microsoft Research is in a really interesting place, because we are at once contributors to all the research that's making what OpenAI is doing possible, alongside, you know, great researchers and research labs around the world. We're also part of the company, Microsoft, that wants to make this, with OpenAI, part of the infrastructure of everyday life for everybody. So we're part of that transition. And I think for that reason, Microsoft Research will be very focused on the major threads in AI; in fact, we've identified five major AI threads.
One we've talked about, which is this kind of AI in society and its societal impact, which also encompasses responsible AI and so on. One that our colleague here at Microsoft Research Sébastien Bubeck has been advancing is this notion of the physics of AGI. There has always been a great thread of theoretical computer science in machine learning. But what we're finding is that that style of research is increasingly applicable to trying to understand the fundamental capabilities, limits, and trend lines for these large language models. And you don't get hard mathematical theorems anymore, but it's still mathematically oriented, just like the physics of the cosmos and of the Big Bang and so on; hence, physics of AGI.
There's a third aspect, which is more about the application level. And we've been, I think in some parts of Microsoft Research, calling that costar or copilot, you know, the idea of how is this thing a companion that amplifies what you're trying to do every day in life? How can that happen? What are the modes of interaction? And so on.
And then there is AI4Science. And, you know, we've made a big deal about this, and we still see tremendous, and mounting, evidence that these large AI systems can give us new ways to make scientific discoveries in physics, in astronomy, in chemistry, biology, and the like. And that, you know, ends up being just really incredible.
And then there's the core nuts and bolts, what we call model innovation. Just a little while ago, we released new model architectures, one called Kosmos, for doing multimodal kinds of machine learning, classification, recognition, and interaction. Earlier, we did VALL-E, you know, which, just from a three-second sample of speech, is able to pick up your speech patterns and replicate speech. Those are in the realm of model innovations, and they will keep happening.
The long-term trajectory is that at some point, if Microsoft and other companies, OpenAI and others, are successful, this will become a fully industrialized part of the infrastructure of our lives. And I would expect the research on large language models specifically to start to fade over the next decade. But then whole new vistas will open up, and that's on top of all the other things we do in cybersecurity, in privacy and security, in the physical sciences, and on and on. For sure, it's just a very, very special time in AI, especially along those five dimensions.
Llorens: It will be really interesting to see which aspects of the technology sink into the background and become part of the foundation, which ones remain up close and foregrounded, and how those aspects change what it means to be human in some ways, and maybe what it means to be intelligent. Fascinating discussion, Peter. Really appreciate the time today.
Lee: It was really great to have a chance to chat with you about these things, and, as always, just great to spend time with you, Ashley.