0:14 - Unidentified Speaker 
Hi, everyone. Can you all hear me? Anybody?

0:28 - D. B. 
Yeah. Thanks. All right.

1:15 - E. G. 
This group has been going almost three years.

1:18 - D. B. 
Yeah, 145 meetings is getting pretty close to three years. I think we started in February. How's the weather in your neck of the woods?

1:33 - E. G. 
We just finished a huge snowstorm. Everything's closed today.

1:40 - D. B. 
My wife and I took a hike in the snow earlier, and it was several inches deep.

1:53 - Multiple Speakers 
You've got more snow than we have. Maybe.

1:56 - D. B. 
Yeah, I'm in the Louisville area, and we got a bunch of snow too.

2:01 - A. B. 
It was like, what, eight to 10 inches or so this last week. Oh, it was last week.

2:07 - D. B. 
So I think, because there was another storm that went from west to east. I was actually in Baltimore for something several days ago, and it hit Baltimore pretty bad too. I think the storm you had was the same one, exactly. And we're getting some more today, like six more inches today.

2:34 - E. G. 
The snow ended a few hours ago, but it was going pretty heavy for almost 24 hours. Yeah, we've got nothing here, it's just cold. What town are you in again? Hamden, it's one town over from Bangor.

2:52 - D. B. 
Hamden, yeah, okay.

2:54 - E. G. 
Yeah, we're just directly south of Bangor. Yeah.

2:59 - D. B. 
All right.

3:00 - M. M. 
Daniel, can you use your car? Did you go somewhere by car, or is it too difficult to drive?

3:12 - Unidentified Speaker 
Me?

3:13 - D. B. 
Yeah, you said it.

3:15 - M. M. 
Well, they did plow the road earlier today. But we live on a hill.

3:21 - D. B. 
There's a little part of the hill that is a little dicey. So I haven't been out in the car, no.

3:30 - M. M. 
Me neither. I'm stuck here at the house, OK? But I hope that tomorrow will be OK. Yeah, did they plow your road? No, no. Everything is in snow.

3:42 - D. B. 
It'll be okay.

3:44 - E. G. 
Time to build snowmen.

3:49 - M. M. 
Yes, yes, we can.

3:53 - D. B. 
Well, I guess we can go ahead and get started. Okay, so one of our regular features then is going to be about these master's projects where they're going to be using AIs to write a book or an equivalent website. And I see that most of the folks are not here, but Lamont is.

4:34 - L. G. 
Welcome, Lamont.

4:35 - Unidentified Speaker 
Hello, I'm here.

4:37 - D. B. 
Do you have anything to any new thoughts or anything to report, ask about, progress, anything like that? You're muted. You're muted.

4:49 - L. G. 
Over the last two weeks, I've been trying to figure out a good platform or framework I could use to attempt a multi-agent approach. I started with something called Agent Zero, but it's kind of more automated: you can say, hey, you can have multiple agents, but the workflow isn't as easy to define. I was wondering if anyone had experience using a multi-agent platform where you could specify the number of agents or the role or task of each agent.

5:27 - D. B. 
Who was the person on our group that was doing their PhD on that?

5:33 - Unidentified Speaker 
Ishman.

5:33 - M. M. 
Ishman, my student, and another one, they gave a presentation on multi-agent systems. Atif also is using, oh, I don't remember the name right now. Maybe I can search and give the link.

5:49 - D. B. 
You don't remember his first name or anything like that?

5:54 - M. M. 
No, no, no. I don't remember the name of the tool that they are using.

6:01 - Multiple Speakers 
Oh, who's the student?

6:03 - M. M. 
Multi-agent using LLM agents, and they assign the roles of the different agents. Maybe I can find the link right now. What was the name of the student? The name of the student is Atif. Another one is Ishman. He gave a presentation here. Samin, all of my students, we love this.

6:32 - D. B. 
Well, I'll just make a mention then.

6:35 - M. M. 
Yeah, I will try even right now to find the name of the tool.

6:39 - L. G. 
And I'll try to research a little bit more with the information you gave me as well. But yeah, that's kind of what I was thinking. There were several ways we could go. I'm trying to do the background literature review to see whether it's better to look at it like: if you use X number of agents, are there measures of efficiency or quality for how that turns out? Or is it better to look at it from the number of fact methods, where you have facts and how they fall into the data quality framework? You could use a lot of different ways to measure the effectiveness. I was thinking: how would you measure the effectiveness of a multi-agent team inside of this project?

7:24 - M. M. 
Like how, what would that look like, right? It depends on the task.

7:28 - Multiple Speakers 
Yeah.

7:28 - Unidentified Speaker 
Yeah.

7:28 - M. M. 
Your task: efficiency is related to your objectives, your cost function or loss function, whatever the task is.

7:40 - Multiple Speakers 
Yeah, so what I had done with mine is actually pass it into another LLM and have that other LLM evaluate it.

7:49 - M. M. 
Yeah.

7:49 - E. G. 
So that way you create the cross-pollination. And then at that point, I'd have the second LLM identify inefficiencies or deficiencies in the product, and that would be my input to the previous LLM for improvement. Van, what is that called, when you have one LLM validate the other LLM?

8:26 - M. M. 
When one LLM validates another, I don't know.

8:31 - E. G. 
Van knows everything. Yeah, another fallacy bites the dust.

8:37 - D. B. 
OK, well, I think maybe offline, Lamont, if you send me an email, we need to get started with finding out who these students are so you can talk to them, email them and ask them or talk to them or something. So just send an email to me and Dr. M., and then she'll send email to the students, and then we'll get an interchange going.

9:12 - L. G. 
Will do. Thank you, Dr.

9:14 - D. B. 
M. and Dr.

9:16 - E. G. 
Burleigh. OK. Very good. It's called meta-evaluation.

9:19 - L. G. 
Thanks, E.

9:20 - Unidentified Speaker 
Oh, OK. Yeah. Dan, you've just crushed me. It was bound to happen.

9:27 - E. G. 
Yeah. Okay.
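
[Editor's note: a minimal sketch of the meta-evaluation loop E. G. describes, where one LLM drafts, a second LLM flags deficiencies, and the critique is fed back for revision. The call_llm helper and model names are hypothetical placeholders, not any specific vendor API.]

```python
# Hypothetical helper: sends a prompt to some LLM and returns its reply.
# Wire this to whatever client (OpenAI, local model, etc.) you actually use.
def call_llm(model: str, prompt: str) -> str:
    raise NotImplementedError("connect to your LLM client of choice")

def meta_evaluate(task: str, rounds: int = 2) -> str:
    """One LLM drafts; a second LLM critiques; the first revises."""
    draft = call_llm("writer-model", f"Complete this task:\n{task}")
    for _ in range(rounds):
        critique = call_llm(
            "evaluator-model",
            f"Identify inefficiencies or deficiencies in this response to "
            f"the task '{task}':\n{draft}",
        )
        draft = call_llm(
            "writer-model",
            f"Revise the response below to address this critique.\n"
            f"Critique:\n{critique}\n\nResponse:\n{draft}",
        )
    return draft
```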

9:28 - D. B. 
And then there's J., who is really focusing on agents, prompt engineering with agents, but he hasn't been here in a few weeks. I don't know if he'll ever come back. But if he does, we have some things to talk about that he's done with them. And Nash is not here. And then we had a member who was assigned by the university to a group to talk about AI in universities, a kind of cross-university group to advise our campus, but she hasn't been here in several weeks either, so we may not be able to get an update from her. She was a professor in the math and statistics department.

10:34 - E. G. 
Can I ask a question here? Sure. On section five, you guys heard, we had discussed this before, but AI is becoming so prevalent at the university level. A person applied for a graduate program, and they were declined because they used AI in their, I guess, statement of purpose.

11:03 - D. B. 
Statement of purpose.

11:05 - Unidentified Speaker 
Yeah.

11:06 - E. G. 
Is my mic working now?

11:10 - D. D. 
Yes, yes, yes.

11:11 - D. B. 
Yeah, so the university there was screening these letters. I mean, I get emails from students that are AI generated, and they're usually pretty good, but they didn't write them themselves, right?

11:31 - Unidentified Speaker 
They had an AI do it.

11:33 - E. G. 
If somebody was applying to the, say, literature or the arts department, I would agree. But if they're applying to the sciences, I'd be more concerned if they didn't.

11:45 - D. B. 
Well, for a statement of purpose, you want something that comes from the heart, you know?

11:51 - E. G. 
Well, you have the idea come from the heart.

11:55 - V. W. 
But this may be a case of the university disqualifying itself as an appropriate place for that person to study.

12:03 - D. B. 
And that's what I'm getting at.

12:06 - V. W. 
And that's because we're now living in the age of the sovereign citizen, where the person is more in the driver's seat with regard to knowledge expansion.

12:15 - D. B. 
But then how do you determine whether it did come from the heart? Because you could get an AI to generate one that sounds good.

12:25 - E. G. 
That's when you interview them and ask what drove them to it. If they don't have the backstory to support it, you're able to tell right away.

12:41 - V. W. 
Right away.

12:42 - Unidentified Speaker 
Yeah.

12:42 - D. B. 
Yeah, back to interviewing. I mean, some universities do interview prospective PhD students. Most don't.

12:51 - Multiple Speakers 
I know I did when I applied to UALR.

12:56 - E. G. 
I went through the interview process. I sat down and explained, as far as what drove me: why did I want to come to UALR? I mean, I had used a lot of the tools, in fact Grammarly, to make sure that I formed it well. But when I sat down, what drove me to it was easy to explain, based on my history, my work history, based on discussion.

13:27 - D. B. 
You must be thinking about a different university, because we don't interview PhD students.

13:32 - E. G. 
Oh, no. In fact, when I applied, I remember I came there, and I sat in one of your classes.

13:38 - D. B. 
That was my interview. Oh, okay. Yeah, I interviewed.

13:41 - Multiple Speakers 
That was your request, right? It wasn't a requirement for the...

13:45 - D. B. 
Oh, no, no, no.

13:46 - E. G. 
It wasn't a requirement, but it was my request, so that way I could get in. I went several different times, and I had to go through, and I met with Dr.

14:01 - D. B. 
Talbert. Okay. So you had trouble getting in, huh? Getting admitted?

14:07 - E. G. 
Yeah, I think it was because of my background and what I was able to commit to it. Since most of my work has been in coding, was I up to the task of writing? And that's what I think he was trying to gauge: at this point, I'm going to be doing less of the technical and more of the writing.

14:41 - D. B. 
OK. All right.

14:47 - Unidentified Speaker 
Okay, anything else anyone would like to bring up before we go to the video?

14:54 - D. B. 
All right, so we're up to minute 22 and 15 seconds, and I'm going to have to unshare the screen and then share it again, optimized for video sharing. So, stop sharing. The software is CrewAI, but I sent the presentations to Daniel too.

15:24 - M. M. 
Let me put it in the chat. But it's very good. They use it a lot.

15:39 - L. G. 
Oh, awesome.

15:40 - M. M. 
Thank you so much. CrewAI. They use it a lot. Maybe you should find the link, too.

15:48 - Multiple Speakers 
OK. What is it? It's CrewAI, I think?

15:51 - M. M. 
Yeah, crewai.com. Yes. And I will send you the presentation right now. It looks like they have a relationship with NVIDIA.

16:01 - L. G. 
Thank you. Yeah. Yeah.

16:03 - M. M. 
Because we participate in several hackathons with NVIDIA. And it's good, you know, for students. And we suggest this also, together with RAG, to hospitals. They like it. So you will see the presentation, Daniel, I'm sending it right now. Oh, this is it.

16:24 - D. B. 
What's the tool? Is there a tool that's being presented there?

16:30 - M. M. 
Yes. CrewAI.

16:33 - D. B. 
How do you spell it?

16:37 - E. G. 
I sent it to the chat.

16:41 - D. B. 
Oh, C-R-E-W-A-I dot com.

16:44 - E. G. 
Say it again. C-R-E-W-A-I dot com.
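
[Editor's note: for readers following along, a minimal sketch in the style of CrewAI's documented quickstart, showing how roles and tasks are assigned to individual agents, which is the capability Lamont asked about. Parameter names follow the public docs, but defaults change between versions, so treat this as illustrative.]

```python
from crewai import Agent, Task, Crew

# Two agents with explicitly assigned roles.
researcher = Agent(
    role="Researcher",
    goal="Gather accurate background material on the chapter topic",
    backstory="A careful fact-finder who cites sources",
)
writer = Agent(
    role="Writer",
    goal="Turn research notes into clear book sections",
    backstory="An editor who values plain, accurate prose",
)

# Each task is bound to one agent; the crew runs them in order.
research = Task(
    description="Collect key facts about the assigned chapter topic",
    expected_output="A bulleted list of sourced facts",
    agent=researcher,
)
draft = Task(
    description="Write a draft section from the research notes",
    expected_output="A draft section of about 500 words",
    agent=writer,
)

crew = Crew(agents=[researcher, writer], tasks=[research, draft])
print(crew.kickoff())
```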

16:53 - M. M. 
Yeah. I sent you the presentation, Daniel, to your UALR address.

17:00 - Unidentified Speaker 
Yeah, that's good. All right.

17:04 - D. B. 
Thank you. And did you send it to Lamont also? No. What is the email?

17:14 - M. M. 
I have to find the email.

17:19 - D. B. 
Lamont, what's your email?

17:24 - M. M. 
Okay, I will, I will. It's LA, did you get his email?

17:38 - Multiple Speakers 
Yeah, yes, I got the email too. Thank you so much.

17:50 - D. B. 
I appreciate it. You're welcome. Yeah, that's good.

17:54 - M. M. 
You're welcome. Very good software. If you have problems, my students can help. All right. So I'm trying to adjust it.

18:06 - D. B. 
We're about 10, maybe 15 seconds overlapped from last time. So that'll work. Do I need to turn up the sound level?

18:21 - Unidentified Speaker 
Let me do that.

18:23 - E. G. 
Yeah, I was actually turning it up on my side.

18:30 - D. B. 
OK, let's try this one more time.

18:35 - Unidentified Speaker 
How's that? Perfect. OK.

18:37 - E. G. 
So it adds another 617 million parameters to the network, meaning our count so far is a little over a billion.

18:47 - Unidentified Speaker 
A small, but not wholly insignificant fraction of the 175 billion that we'll end up with in total. As the very last mini-lesson for this chapter, I want to talk more about the softmax function, since it makes another appearance for us once we dive into the attention blocks. The idea is that if you want a sequence of numbers to act as a probability distribution, say a distribution over all possible next words, then each value has to be between 0 and 1, and you also need all of them to add up to 1. However, if you're playing the deep learning game, where everything you do looks like matrix vector multiplication, the outputs that you get by default don't abide by this at all. The values are often negative or much bigger than 1, and they almost certainly don't add up to 1. Softmax is the standard way to turn an arbitrary list of numbers into a valid distribution in such a way that the largest values end up closest to 1, and the smaller values end up very close to zero. That's all you really need to know. But if you're curious, the way that it works is to first raise e to the power of each of the numbers, which means you now have a list of positive values, and then you can take the sum of all

19:55 - D. B. 
those positive values and divide each term by that sum, which normalizes it into a list that adds up to 1. Thoughts or questions? I mean, I guess you can make any set of positive numbers add up to 1 if you divide them all by the sum. So that's that. And you can make them all positive by using them as exponents of e. All right.
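
[Editor's note: a concrete version of the computation the video describes, in NumPy: exponentiate each score, then divide by the sum so the outputs lie between 0 and 1 and add up to 1. Subtracting the max first is a standard numerical-stability trick and does not change the result.]

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    # e^(x - max) avoids overflow; the shift cancels in the division.
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

scores = np.array([2.0, 1.0, 0.1, -1.5])  # raw, unnormalized outputs
probs = softmax(scores)
print(probs)        # every value between 0 and 1
print(probs.sum())  # 1.0
```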

20:23 - E. G. 
Yeah, but what I think they're trying to say is that by doing that, you'd end up with a distribution that may have a few at the top end and a few at the bottom end. But softmax, by using e raised to the value, basically says that if it's a larger number, it's going to make it huge, and if it's a smaller number, it's going to make it tiny, so that when we look at it, it has a clear winner and a clear loser.

20:58 - D. B. 
So you think this calculation has a kind of skewing effect or something? Yeah. You know how, with residuals...

21:08 - E. G. 
Outliers tend to have... More weight than they deserve, yeah. Yeah, you square it, so that way it has a bigger impact on the equation.

21:20 - V. W. 
So you have to wonder what kind of distortion is introduced into the process by the choice of softmax as your normalization function.

21:30 - E. G. 
Well, your winner is still your winner and your loser is still your loser. Softmax only highlights it.

21:37 - V. W. 
But the idea of negative values could mean, in a semantic direction space, don't go in that direction, instead of just minimizing your likelihood of going in that direction.

21:52 - V. W. 
sort of tending to differentiate the result more clearly.

21:55 - E. G. 
Yeah, but when you're doing something like this, don't you only select the highest value?

22:02 - V. W. 
Sure, but then the question is, to what degree have you distorted the highest value? Like for this 0.61 direction, we choose that for the next word. Now, everything we do is predicated on that choice we've just made. But maybe the 0.21 direction, had it not been skewed, would have also been a viable candidate direction to consider. Okay. Just a thought. Notice that if one of the numbers in the input is meaningfully bigger than the rest, then in the output, the corresponding term dominates the distribution. So if you were sampling from it, you'd almost certainly just be picking the maximizing input. But it's softer than just picking the max, in the sense that when other values are similarly large, they also get meaningful weight in the

22:56 - Unidentified Speaker 
distribution, and everything changes continuously as you continuously vary the inputs. In some situations, like when ChatGPT is using this distribution to create a next word, there's room for a little bit of extra fun by adding a little extra spice into this function, with a constant T thrown into the denominator of those exponents.

23:14 - D. B. 
We call it the temperature, since it vaguely resembles the role of temperature in certain thermodynamics equations. Any thoughts on that?

23:23 - E. G. 
I use temperature often in the LLMs that I'm doing.

23:29 - D. B. 
So it changes the skewing; at least at the upper end, it separates them more, right? A low temperature will separate them more.

23:45 - V. W. 
It's a hack on top of the choice of softmax. Yeah.

23:51 - D. B. 
But if T were very high, say infinity, then they'd all be e to the zero, which is one, and all the words would have the same...

24:05 - V. W. 
That's a great observation.

24:10 - D. B. 
So what you want is: a low T will cause the most highly ranked word to be further distinguished, more dramatically distinguished, from the second most highly ranked word. Except at absolute zero, all hell breaks loose. Well, if T is zero, you mean... of course, then you're dividing by zero. But if T is infinitesimal, then I guess probably one is going to end up with a

24:43 - Multiple Speakers 
value of one and all the others are going to end up as zero.

24:48 - D. B. 
And also you introduce an opportunity for numerical instability to begin to dominate the situation: floating-point overflow, et cetera.
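
[Editor's note: the same softmax sketch with the temperature T the video introduces, dividing the exponents by T. It reproduces the limits discussed above: a very large T pushes every exponent toward e^0 = 1, flattening the distribution, while a tiny T puts essentially all the weight on the maximum.]

```python
import numpy as np

def softmax_t(logits: np.ndarray, T: float) -> np.ndarray:
    exps = np.exp((logits - np.max(logits)) / T)
    return exps / exps.sum()

scores = np.array([2.0, 1.0, 0.1, -1.5])
print(softmax_t(scores, T=100.0))  # nearly uniform
print(softmax_t(scores, T=1.0))    # plain softmax
print(softmax_t(scores, T=0.01))   # essentially one-hot on the max
```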

24:56 - E. G. 
Dr. M., what do you use? Because when I do it, I'm using a temperature of 0.5.

25:03 - M. M. 
I've even gone as far... But in the sense of these large language models, the temperature that they use inside is how precise you want your answer to be, how much information you want in your answer, or how little. This is the meaning of the temperature in the language models. This temperature probably has an influence, but how they understand the temperature when you're using ChatGPT is how deep you want your answer to go or how short it needs to be. So if you always pick the most likely next word, it wouldn't matter what the temperature is.

25:54 - D. B. 
But if you sort of decide you don't always want to go with the most likely word, sometimes the second most likely word, then a high temperature will allow you to pick words with similar heats, similar probabilities, similar values, whereas a low temperature will sort of force you to only pick the highest one and ignore the rest. Okay, so let's run with that for a second.

26:23 - V. W. 
Let's say you've chosen a temperature such that two values are considered the same when they weren't before. Now you have two possible outcomes in your computation. And then at the next word, you have four possible outcomes. And so then you have this plethora of possible conversations, all of which may have some validity. How do you then collect those back together to give yourself a sense of not only what you gained, but what you might have missed?

26:58 - E. G. 
I think you're looking at it as branching forward into multiple realms. As soon as it picks the next word, everything behind it is done, and it's off running on that thread.

27:12 - V. W. 
Exactly. Which introduces a determinism that is false, because you may have gone the wrong direction for data quality reasons.

27:21 - D. D. 
Well, that's the motivation that you build in with temperature.

27:26 - E. G. 
So as you change the temperature, the perturbation, the variability is now brought into the equation. Daniel?

27:33 - D. B. 
I thought Daniel had a comment.

27:36 - D. D. 
So when I think of the temperature, I like to think of it as creativity. So if I want to constrain the large language model, to keep it from going off on its own, I like to reduce that temperature.

27:55 - M. M. 
Yeah, a more specific answer or a more wild answer. This is what the temperature is doing if you use it in ChatGPT. I'm sure that Daniel is still using a lot of

28:14 - Multiple Speakers 
temperature.

28:14 - D. B. 
What if you used a high temperature and tried it a number of different times, so you get multiple passages? You run it 10 times, you get 10 different passages, and with a high temperature they're all pretty different. Does it make sense to take that ensemble and try to look for a commonality? Like, if you're doing scientific reasoning, getting 10 different creative answers may not give you the outcome that you want.

28:51 - E. G. 
We are looking for a specific answer. With a high temperature you've got 10 different answers, but in the sciences we're looking for one answer. So the perturbation of creativity at that point... For instance, if I'm scanning a document for certain terms that relate to certain cancers, I'm looking for that term. I don't want it to be creative at all. So at that point, I use a low temperature. Now, if I'm going to do some creative writing, I'd like to see where it goes.
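
[Editor's note: one way to act on D. B.'s ensemble question is the self-consistency pattern: sample several high-temperature answers and keep whatever they agree on. A minimal sketch, assuming a hypothetical sample_answer helper that returns one completion per call; real uses usually normalize the answers before voting.]

```python
from collections import Counter

# Hypothetical helper: one high-temperature completion per call.
def sample_answer(prompt: str, temperature: float) -> str:
    raise NotImplementedError("connect to your LLM client")

def self_consistency(prompt: str, n: int = 10) -> str:
    """Sample n diverse answers and return the most common one."""
    answers = [sample_answer(prompt, temperature=1.0) for _ in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    return best
```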

29:35 - V. W. 
It kind of has a multiverse aspect to it: once we've committed ourselves to a reasoning direction, we instantly fork off a multiverse copy that went the other direction and landed somewhere else. And so in scientific reasoning, we want as few multiverses as possible.

29:54 - E. G. 
I got one for you to pickle your brain there, Van. Imagine threading, where each thread went off in its own multiverse and every subsequent thread went off on its own multiverse. Imagine the different perturbations that you can get from that.

30:11 - V. W. 
Well, this actually is closer to home than you might think because one of my students is looking at using AI machine learning techniques to handle classic combinatorial math problems. So if you think about combinatorial problems, you always think of those as unapproachable because you haven't gone through all the combinations. And yet machine learning appears to give us a powerful leverage against the combinatorial explosion that occurs in these kinds of problems. So then if we take this multiverse point of view, we reintroduce the possibility that we haven't addressed the combinatorial problem correctly. And it's almost like a fundamental delay dilemma that we face.

30:59 - E. G. 
Well, the combinatorial problems you're talking about are these branches of brute force. Now, with qubits, you turn that on its ear, because you can keep state of multiple branches going off at once.

31:17 - V. W. 
Yeah, that's a money shot right there. That's a great statement, because you've actually reconciled the reintroduction of combinatorial complexity by immediately resolving it with the next horizon of computing. That is like Heifetz playing the freaking violin, man. All right.

31:38 - D. B. 
And the effect is that when t is larger, you give more weight to the lower values, meaning the distribution is a little bit more uniform. And if t is smaller, then the bigger values will dominate more aggressively.

31:55 - Unidentified Speaker 
Where in the extreme, setting t equal to zero means all of the weight goes to that maximum value. For example, I'll have GPT-3 generate a story with the seed text, once upon a time there was a, but I'm going to use different temperatures in each case. Temperature zero means that it always goes with the most predictable word. And what you get ends up being kind of a trite derivative of Goldilocks. A higher gives it a chance to choose less likely words, but it comes with a risk. In this case, the story starts out a bit more originally about a young web artist from South Korea, but it quickly degenerates into nonsense. Technically speaking, the API doesn't actually let you pick a temperature bigger than 2. There is no mathematical reason for this. It's just an arbitrary constraint imposed, I suppose, to keep their tool from being seen generating things that are too nonsensical. So if you're curious the way this animation is actually working is I'm taking the 20 most probable next tokens that GPT-3 generates, which seems to be the maximum they'll give me, and then I tweak the probabilities based on an exponent of one-fifth.

33:04 - D. B. 
Any comments or questions? All right, well, he said GPT-3 will give him up to 20 words, but I guess that means you have to do an API call, because you can't do that with the stock LLM.

33:24 - D. D. 
I've constrained the model and then run my prompt multiple times through the API to compare the results. And if my request is complex, then I will never get an identical answer out of multiple runs. Even with the temperature down low, it's still variable. Language is so diverse, you can get to the same result said in a different way. The higher the temperature, I could see how it could get way off topic.

34:19 - Multiple Speakers 
Multi-verse warning, multi-verse warning, red flag ahead. Yeah.

34:22 - D. D. 
What temperature are you using? Even with it at zero, you're getting different answers.

34:28 - E. G. 
Oh, try it. Yeah.

34:29 - D. D. 
If you've got access to the API, and it's simple, like if I say do this one thing, then I can probably run it. I have run simple things, like make this class in Python, and I'll tell it exactly what to do, and it'll do it every time. But if it's very complex, if it has to take multiple steps, then it'll take the steps in a different order. It will make slight variations. But yeah, I did that. It's in one of my papers that I just did.
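
[Editor's note: a sketch of the repeatability check D. D. describes, using the OpenAI Python client; the model name is a placeholder, and any chat model works the same way. Run one prompt several times at temperature 0 and count the distinct outputs.]

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = "Write a Python class with one method that reverses a string."

outputs = []
for _ in range(5):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute your model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    outputs.append(resp.choices[0].message.content)

# Even at temperature 0, complex prompts can yield slight variations.
print(len(set(outputs)), "distinct outputs out of", len(outputs))
```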

35:12 - E. G. 
Can you email me that? Because I actually tested something like that with Databricks. I asked it to do something, and I used the same term, and it came back with the exact same answer.

35:29 - D. D. 
Yeah, I'll find that. I'll find that paper and email it to you.

35:36 - E. G. 
Thank you, sir. You bet.

35:38 - D. B. 
As another bit of jargon, in the same way that you might call the components of the output of this function probabilities, people often refer to the inputs as logits. Or some people say "LOH-jits," some people say "LOG-its"; I'm going to say logits.

35:58 - Unidentified Speaker 
So for instance, when you feed in some text, you have all these word embeddings flow through the network, and you do this final multiplication with the unembedding matrix. Machine learning people would refer to the components in that raw, unnormalized output as the logits for the next-word prediction. A lot of the goal with this chapter was to lay the foundations for understanding the attention mechanism, Karate Kid wax-on-wax-off style. You see, if you have a strong intuition for word embeddings, for softmax, for how dot products measure similarity, and also the underlying premise that most of the calculations have to look like matrix multiplication with matrices full of tunable parameters, then understanding the attention mechanism, this cornerstone piece in the whole modern boom in AI, should be relatively smooth. For that, come join me in the next chapter. Any comments on that? Last little bit?

36:56 - V. W. 
Well, I have a comment. Since it's almost 4:39, we're at the scary part of the seminar hour. You know, J. H. said this last week that quantum computing is still 10 to 20 years off, which I found really disappointing, because he keeps his finger on the pulse of advances like we saw with Google's Willow. So I'm a little concerned about that, because we should be getting there faster. Another thing is that AI was used as a planning tool for the immolation of a Tesla in front of a Trump hotel, which was also disappointing. And then the AI didn't seem to do much to help LA quench its fires. So I think this has been a rough week for us on the computational horizons.

37:45 - E. G. 
I think humans can really screw up what the best planning could put out there.

37:51 - M. M. 
Yeah, that's true. That's true. Not a really good week. Yeah. All right.

37:56 - D. B. 
Let's see. I guess we will just look through this preview of the next chapter, which we'll start on next time.

38:37 - Unidentified Speaker 
Okay, I'll say we finished.

38:43 - V. W. 
I think I need to hit the LLMs and ask what our horizon is for the quantum computing advances that we're waiting on: long coherence times, maximal agreement between computations, and things like that. Dan had a great question in his Back to the Future class: wisdom of the hive, when will we see flying cars? When will we see these various advances we've all been waiting on? And 2049 turned out to be the hot number in the class that I was sitting in on. But I'd be curious what the LLM has to say on what the real progress we're going to experience is.

39:38 - E. G. 
Go back and look at what they anticipated would be available by now. We were going to have these huge flying cars, everything would be copacetic. Back then, and this is one of my favorites, the car manual would tell you how to set the gaps on the valves. Today, we tell people not to drink the contents of the car battery.

40:10 - V. W. 
That was a hard right turn for me, E., but I get you. Gapping spark plugs used to be a really important thing to do.

40:22 - Multiple Speakers 
No, what I'm saying is we can ask it, but if we look back 25 or 30 years ago, what were we going to have today?

40:36 - V. W. 
It was just wrong. But the tool for more comprehensive reasoning is now with us. So one could argue that our event horizon might more closely resemble what we're going to get, because we're drawing from a larger corpus of knowledge than just, you know, pop-sci type of stuff.

40:58 - D. B. 
OK, well, folks, I guess we can call it a day.

41:07 - V. W. 
E., do you want to grab the paper out of chat?

41:16 - Unidentified Speaker 
Oh, yes. Let me do that. Thank you. Thanks, D. Yes, sir.

41:27 - D. B. 
chat successfully? Yes, sir.

41:29 - Unidentified Speaker 
Okay.

41:29 - E. G. 
All right, folks.

41:31 - D. B. 
Well, thanks again, and we'll see you next time.

41:34 - Multiple Speakers 
See you guys. Bye. Have a good weekend.

41:37 - M. M. 
Thank you.

41:38 - L. G. 
Thank you.