Artificial Intelligence Study Group
Fri, Aug 1, 2025
0:01 - R. S.
you.
0:54 - D. B.
Hi everyone.
0:55 - Unidentified Speaker
Hello.
0:55 - R. S.
Hey, A.
1:06 - D. B.
A, are you there? Oh, hey, yes, sir.
1:09 - A. B.
How you doing? Hi. Yeah, I got back this afternoon.
1:13 - D. B.
That's why I canceled all the meetings until this one. I figured I was sure to be back by.
1:20 - A. B.
Yeah. No, all good. All good. I've had a good trip.
1:24 - D. B.
Yeah, we took a trip to Baxter State Park in Maine.
1:28 - A. B.
It was fantastic. Yeah, it was really good.
1:31 - D. B.
But I got pretty tired. I'm getting pretty old.
1:35 - A. B.
Yeah.
1:36 - Unidentified Speaker
Awesome.
1:37 - D. B.
Glad you had a good trip. All right. Yeah. OK. Well, we'll give another minute. Hello, V.
1:54 - V. K. (CARTI)
Good afternoon. Hi.
1:58 - D. B.
Well, if we have another small group today, which it was looking like it might be, we'll just continue doing our readings of abstracts and things to evaluate our next reading. So I guess we can go ahead and get started.
2:24 - V. K. (CARTI)
Have a good weekend.
2:27 - Unidentified Speaker
Hello?
2:28 - D. B.
Did someone say something?
2:30 - Unidentified Speaker
All right. Anyone have any announcements, updates, or questions?
2:36 - D. B.
Then D. has agreed to do a demo on how he generates the transcripts of these meetings. It turns out he was not feeling well last week. But he's not here right now, so we'll just do it when he comes back, whenever he's available. And E. also says they're still working on the slides and they still want to present them. So again, when possible, we'll do it. And V. is happy to demo his wind tunnel system, I think, anytime. But he's not here now, so we'll put that on the queue as well. And finally, E. S., who's a professor in the psychology department, is still willing to meet with us and give us her opinion of this book, and she's trying to schedule when to do it. She said whenever we want her to do it, she'll do it, so I just have to schedule a time.
3:46 - D. D.
Hello, D. Hello. Hi.
3:48 - D. B.
And again, I'm planning on, we got a bunch of master's projects available, so when the semester starts, I'll send out an email to the master's students and see if anyone wants to do any of these. An essential prerequisite is that they're free to meet with us on Fridays at four. If they're not willing to do that, they can find another project.
4:16 - A. B.
What else? Yeah, go ahead. I would like to, at some point, as I get closer to defense this semester, maybe do a dry run, if that's possible here. Absolutely.
4:25 - D. B.
We've done that many times, and I recommend it. In fact, your defense probably should be one of the meetings here, if your committee members can make it. Oh, OK. Yeah.
4:35 - Unidentified Speaker
If they can't, we'll do a rehearsal, and then your defense will be some other time.
4:40 - D. B.
But I always try, whenever there's an AI-related defense or something like that that I'm involved with, I always say, well, how about Friday at four? Doesn't always work, but you know. Okay, well, D., do you feel like telling us about your interaction process? Yeah, yeah, I could do that.
5:04 - D. D.
Yeah, that'd be great.
5:06 - D. B.
We'll go ahead and do that, then. All right.
5:10 - D. D.
I'm gonna go ahead and unshare the screen so you can share and go right ahead. Find the share button. Yes, that's the share. I don't have anything to share just yet, so I'm just going to share my whole screen. Yeah, I don't pick it in my computer. No picking. All right.
5:43 - D. B.
Can I make a comment from the peanut gallery about your, no, I'm just kidding.
5:51 - D. D.
Go ahead. So, for anybody that has no idea what's going on: Read AI makes a transcript. It can identify who's speaking and what they're saying, and it builds a full transcript of the entire meeting, and you can download it as a text document. So what I'm doing is taking that transcript and bringing it to the AI, and having the AI anonymize it so that it just puts our initials instead of our names. Then anybody can read the transcript, and they really don't know who was there.
6:50 - D. B.
As a footnote, D. sends me the anonymized transcripts and I add them to the minutes, so you can go to the website and check the minutes and see what the transcript was, but it's publicly viewable to the world. So it's anonymized.
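D.'s workflow (a saved system prompt, temperature 0, a large max-tokens value, run against GPT-4o in the dashboard) could equally be driven from a script, as he mentions later with the Python wrapper. A minimal sketch of the request such a script would build; the system-prompt wording, model tag, and token limit here are assumptions for illustration, not D.'s actual saved prompt:

```python
# Hypothetical stand-in for D.'s saved "clear names" prompt.
SYSTEM_PROMPT = (
    "Replace every full name in the transcript with that person's "
    "initials. Change nothing else. Make no other comments."
)

def build_anonymize_request(transcript: str,
                            model: str = "gpt-4o",
                            max_tokens: int = 16384) -> dict:
    """Build a chat-completion payload with the settings from the demo:
    temperature 0 (so the model never rewords what people said) and a
    large max_tokens so the whole transcript comes back in one response."""
    return {
        "model": model,
        "temperature": 0,   # deterministic: don't "improve" anyone's quotes
        "top_p": 1,
        "max_tokens": max_tokens,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": transcript},
        ],
    }

# Actually sending it would need the openai package and an API key, e.g.:
#   from openai import OpenAI
#   resp = OpenAI().chat.completions.create(**build_anonymize_request(text))
```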
7:06 - D. D.
So I think it's this one I go to. So I go here and I log in to the API. Oh there's something in my way I can't see though. Oh there it is.
7:27 - Unidentified Speaker
Okay. And this is the ChatGPT API, and this is the dashboard.
7:36 - D. D.
I go in here and I have to find the prompt, and you can see it's not super... I think it's called "clear names."
7:57 - D. B.
So as a point of information, these prompts are all, you've saved a bunch of your old prompts so you can reuse them?
8:05 - D. D.
Yeah, some of these are for my research. Some of these are just stuff that I practiced with. Let's see if I can find this thing. Oh, it's here.
8:17 - R. R.
So do you actually do some prompts to get this code? Is that what you're doing?
8:23 - D. D.
I'm sorry, say that again, sir.
8:27 - R. R.
No, don't call me sir, just call me R.
8:32 - D. D.
R, okay. Yeah.
8:34 - R. R.
Do you use prompts and do you create code based on the prompts? Is that what you're doing?
8:43 - D. D.
Well, I mean, I could do it through, like, a Python wrapper, and there's other ways besides Python. I could access the API through an API key and just have a script or something. But no, I just do it here. It's kind of like another chat interface. Over here is where I have the system properties.
9:07 - Unidentified Speaker
Yeah. All right. I'm sorry.
9:09 - R. R.
This is the system prompt. I'll just read it.
9:12 - D. D.
I don't know if you guys can read that well. Yeah, yeah.
9:16 - R. R.
You can read it well?
9:18 - Unidentified Speaker
Everybody OK with it? Yeah. OK. I just want to read it bigger.
9:23 - D. D.
I'm sorry, what?
9:24 - R. R.
I just made my screen bigger. OK, yeah. And so, this is ChatGPT-4o. I think it's the one I finally decided on because, OK, this is an hour-long meeting.
9:44 - D. D.
Approximately; it's 45 minutes some weeks and an hour some weeks. But there can be a lot of communication, and so the text can get long. So you need a model with a very large context window. And I would like to say I have tested this on smaller models that I run locally on a Linux machine next to me, and they can't handle the job. They probably could if I got creative about breaking it up, you know, maybe sending it to the model in multiple prompts. But what I have to do is make sure that I've got it where it'll save the temperature at zero. And this is V4 of this prompt; that means this is the fourth version. I have three other versions of this same thing, but this is the best. But it will not save the max tokens, and so I'm going to turn that all the way up.
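D. notes that the smaller local models might cope if the transcript were broken up and sent in multiple prompts. A sketch of one way to do that split, breaking only at line boundaries; the roughly-four-characters-per-token heuristic and the default budget are assumptions:

```python
def chunk_transcript(text: str, max_tokens: int = 2048) -> list[str]:
    """Split a transcript into pieces that each fit a small context
    window, breaking only at line boundaries so no utterance is cut
    mid-line. Uses the rough heuristic of ~4 characters per token."""
    budget = max_tokens * 4          # approximate character budget
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and len(current) + len(line) > budget:
            chunks.append(current)   # close out the full chunk
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks

# Each chunk would then be sent as its own prompt, in order,
# instead of typing "please finish" after a truncated response.
```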
10:54 - D. B.
I have a question, D. Can you show us the previous versions and maybe explain what went wrong with them? OK. I mean, if you don't mind.
11:07 - D. D.
I don't mind. Thank you.
11:09 - R. R.
That's a good question. Thank you.
11:12 - D. D.
I think people are always wondering how to make good prompts. Okay, so let me see what I got here. Version one is an older version of ChatGPT, and it was saved with a temperature of one. This right here of course could be adjusted, but it only goes up to 4,096, which is not enough. And that temperature kills it. The reason it kills it is because the model is a generating model, and the temperature is its creativity. Okay. So what it likes to do, and sometimes it doesn't do it much, very subtle, is change what was recorded in the transcript. It will reword what somebody said to try to say it better for them.
12:11 - R. R.
Okay. Which I don't like that.
12:13 - D. D.
I think that's a bad distortion of the data, if I take the data and change it and say that somebody said something that they didn't. So I've learned that I have to change the temperature. So this version was a bust, right? So next version, let's see what I did.
12:34 - D. B.
How do you change the temperature?
12:36 - D. D.
Okay, so it's just in the settings.
12:39 - D. B.
see this little thing right here? Yeah.
12:42 - D. D.
It's just right here next to the model. You change the model, change the temperature.
12:47 - Unidentified Speaker
And the temperature, this is version 2, the temperature is all the way down and saved at 0.
12:54 - D. D.
So this one will work. And this can go up to the 4,096. But what happens is it'll only go through like a third of the transcript, and then I'll have to type in there, please finish, and then it'll do another third. Well, that token limit looks different than your last version; I thought the last version had a lot more token input. Well, that was version four. Version one had the same token limit.
13:36 - A. B.
I'll go back. Sorry.
13:38 - D. D.
No, no, no, it's fine. So this right here is version one. The temperature was saved, and this right here goes to 4,096. And then the same with version two: I think I got the temperature right, but I still didn't have the context window that I wanted. Then I got to version 3, and I think version 3 is actually similar to version 4. Let me just check really quick.
14:13 - Unidentified Speaker
I thought I saw a much bigger number in the token.
14:18 - A. B.
You did.
14:18 - Unidentified Speaker
It might be here. Yeah, that's it.
14:21 - A. B.
This one right here will go all the way.
14:26 - D. D.
Let me see, this is my default. So let me see if there's a difference. 16,384. OK, so what happened here was, when I got to version 3 and I saw that I got the temperature saved where I wanted and I had the context window, I saved it. But then whenever I went to use it, the context window returned back to the default, and I had to come and manually set it. So that's why I'm on version four. This model right here is the best I can do, but every time I start, I have to set the max tokens manually. And then I have to find a transcript.
15:20 - Unidentified Speaker
Let me see if I can find the transcript here. Here we go. Here's one, a raw transcript. Oh, look, I have all these others open.
15:39 - D. D.
And you can see that this tells you who's in the room. Here's Dr. M. I guess this is the one I wasn't in. I couldn't come last week, I had a medical problem. Oh, no, I was here, so this must be the week before that. So you can see it's got everybody's name in it. That was it here in the mail. I'm going to select this and copy it, just copy the text. And then I'm just going to paste it in the chat. Now, if everything goes right... I wish I could get rid of this.
16:37 - Unidentified Speaker
OK, there it's gone. This right here. And I didn't say that.
16:45 - D. D.
I bet you the recording, I didn't say that.
16:47 - Unidentified Speaker
But that's what's in the transcript. The Read AI frequently attributes things to the wrong person. Yeah.
16:55 - D. D.
But, uh, yeah. If you can imagine, though: if the one AI makes mistakes and then this other AI decides to change it, it could get pretty convoluted. So let's see if I can run this. Theoretically, if everything's set up right, this will run the entire transcript. The first time I was able to run the entire transcript was when I sent the transcripts earlier this week. I think it was earlier this week. That was the first time that I got it to do the whole thing: one, because this context window is so large; two, it's a good model; and the temperature's turned down low. And when the temperature's all the way at zero, it doesn't try to create anything. It only does what it's told.
18:00 - D. B.
I noticed also that when people were talking, like when we talk about G. H., it'll change it to GH, which is what you told it to do.
18:13 - Unidentified Speaker
Yeah, that's what I told it today. That's right. This is R.
18:19 - D. D.
Can I ask you some dumb questions?
18:22 - R. R.
Are you talking to me? Yes. Yes, you sure can. Okay. All right. Temperature, tokens, and I saw another dimension there. I think I understand what tokens are in this context, but could you explain? Yeah, please. Thank you. Explain what those dimensions are: temperature, tokens, top P, store logs.
18:53 - D. D.
I didn't hear the last part.
18:57 - R. R.
The store logs, the last one.
19:01 - Unidentified Speaker
Yeah.
19:03 - D. D.
The store logs are some part of the API. I don't know what they are exactly, but I could probably find that out. Okay, so the temperature. The temperature is described as the model's creativity. It's not done yet. Whenever the temperature is high... I'll demonstrate that. As soon as it's done and we get finished with this, I'll just have it do it again, except with the temperature turned up.
19:47 - D. B.
I'd like to describe temperature in a more under-the-hood way. Think of generative AI as picking the most likely next word. If the temperature is zero, it will, in fact, pick the most likely next word. If the temperature is higher than zero, it will, with some probability, pick a word that's not the most likely, but maybe the second most likely, or the third most likely, or the 10th most likely, with declining probability.
20:18 - Unidentified Speaker
I think that you're describing top P.
20:21 - D. B.
Oh. OK. So temperature is.
20:24 - D. D.
I do. I think that you're describing top P.
20:29 - D. B.
So luckily, we live in the world of AI now.
20:35 - D. D.
And so I can ask: what is the temperature for a large language model? Oh, looks like somebody is a bad typist, or spelled it wrong, or maybe both, who knows. So: in the context of large language models, the temperature parameter controls the randomness of the model's output. That sounds like what you're saying. This parameter influences how predictable or how creative the model's output is. Low temperature results in more deterministic and predictable outputs, where the model favors the most probable words; this is useful for tasks requiring accuracy, such as summarization or translation. Medium temperature provides a balance between randomness and predictability and is often used as a default for generating text. High temperature increases randomness. All right, so let's go back.
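The definition read aloud above can be made concrete: under the hood, temperature divides the raw next-token scores (logits) before the softmax, so low temperature sharpens the distribution toward the top token and high temperature flattens it. A toy sketch with made-up logits (temperature 0 is special-cased as plain argmax in real samplers, since dividing by zero is undefined):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw next-token scores into probabilities. Dividing the
    logits by the temperature is how 'creativity' is implemented:
    T < 1 sharpens toward the top token, T > 1 flattens the spread."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 1.0]                  # toy scores for three tokens
cold = softmax_with_temperature(logits, 0.1)  # near-greedy: top token dominates
hot = softmax_with_temperature(logits, 2.0)   # flatter: tail tokens gain mass
```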
21:47 - D. B.
I have a question about the temperature. If a student wants to use an AI to hand in their homework, do their homework, if they make the temperature high or higher, then would an AI detector be less able to detect that it was written by an AI?
22:12 - D. D.
The only thing that I know to do is to form a couple of hypotheses and null hypotheses and do a bunch of tests. We'd get a sample of your students and say, OK, now you don't do your homework, and you do your homework.
22:35 - D. B.
All right, what is time?
22:36 - D. D.
It makes sense what you're saying.
22:40 - A. B.
Because a higher temperature would produce a more randomized output. Therefore, those detectors that are trained on finding, I guess, language that sounds more deterministic probably wouldn't catch that, right? Yeah, how else would it be able to detect AI?
23:01 - D. D.
So, top P for a large language model: in the context of large language models, top P, also known as nucleus sampling, is a setting that determines which tokens (words or subword units) the model considers when generating a response. It is a way to control the randomness and diversity of the generated text. Here's how top P works. Probability distribution: when a large language model generates, it first calculates the probability of each possible next token in its vocabulary. And so it doesn't sound a terribly lot different than temperature, does it? Cumulative probability threshold: you set a top P value between 0 and 1 (or null, which defaults to 1). This value represents a cumulative probability threshold. Selecting the nucleus: the model sorts the tokens by their probability in descending order and takes the smallest set of high-probability tokens whose combined cumulative probability is at least equal to the top P threshold. This set of tokens is referred to as the nucleus. Sampling: the large language model then samples the next token only from this selected nucleus of tokens.
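The nucleus-selection steps just read out translate almost line for line into code. A sketch over a toy distribution (the probabilities are made up for illustration):

```python
def top_p_filter(probs, top_p):
    """Return the 'nucleus': indices of the smallest set of tokens,
    taken in descending probability order, whose cumulative probability
    reaches top_p. Sampling then happens only inside this set
    (renormalized), which is what cuts off the long tail of junk tokens."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    nucleus, cum = [], 0.0
    for i in order:
        nucleus.append(i)
        cum += probs[i]
        if cum >= top_p:      # smallest set reaching the threshold
            break
    return nucleus

probs = [0.50, 0.30, 0.15, 0.04, 0.009, 0.001]  # toy next-token distribution
```

With top_p = 0.5 only the single most likely token survives; raising top_p admits progressively less likely tokens into the pool.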
24:31 - D. B.
So I'm going to jump in. If you set the temperature too high, it gives you some really weird, senseless, gibberish results. That one model that we tried last time, yes; I don't know about this model. But if you set the top P to disregard those really low-probability next words, then the output could be fairly random, because it's a high temperature, but it won't do something like give a Chinese word, or mash words together, or something like that, because those are too low probability. Yeah.
25:15 - A. B.
It's like the temperature is telling you how surprising the results can be, how surprising you're allowing the results to be. But then the top P is more like, well, how far down in the likelihood are you allowing the model to go?
25:31 - D. D.
Yeah. So really with a top P closer to one, it's actually giving it a little more creative output options. Right.
25:40 - Unidentified Speaker
Yeah.
25:40 - A. B.
So he's onto something here.
25:42 - D. D.
Let's go back to our thing. Now let me see if I can see if this is a good place to go.
25:53 - A. B.
So then, to D.'s point, if you had a high temperature and a high top P, that would probably not spike up in a detector or something, right?
26:06 - D. D.
So this is the logs of when I've been working on these transcripts here. So you can see that it's got, so I imagine that if I uncheck that box, and I took away your picture. Was it R that asked the question originally about the settings? Yep. Yeah. And so I think that this just says, like, see where I've been typing in, please finish, please finish, please finish. Those were different times, you know, different, different times, maybe the same day, maybe not the same day where I was asking the model to, you know, finish.
26:50 - A. B.
When you're doing it, lower token input.
26:52 - D. D.
along here and I found the Eureka moment, right? When it'll do the whole thing. That's with the higher token input setting, right?
27:03 - A. B.
Yeah, that's with that.
27:04 - Unidentified Speaker
This turned my token back down. Now, let's turn the temperature up.
27:10 - D. D.
Now, my clipboard may still have this in here. It does. So we'll just turn the temperature all the way up. The top P is all the way up. The token count, we don't necessarily need the whole thing; I think it was suggesting somewhere around 2,000, so we'll go down to there. We don't need the whole thing done. But let's see what it gives us here. This is with the temperature up, the top P up, and a kind of low token count. Now we're starting to get symbols that are maybe Chinese.
27:55 - Unidentified Speaker
You got a mix.
27:58 - D. D.
I see Korean, I see. Maybe, yeah, maybe Korean. Which that would be more, that would be more.
28:10 - A. B.
Expanding out the top P would probably lead to more of that stuff, right? You have like different languages.
28:17 - Unidentified Speaker
Well, we're at max, I think, on top.
28:20 - A. B.
That's what I'm saying, like the fact that we did that, that's probably a result more of the top P that brings in these other languages. I think so, yeah. Allowing...
28:31 - D. D.
I'm going to count that as a second to Dr. B.'s theory. Well, you could try the same high temperature, but with a lower top P. Yeah. Might turn down the token count, because this thing is getting long.
28:48 - A. B.
Look, it put an emoji in there.
28:51 - D. D.
And it would be interesting to see if any of these words make sense. You know, this might actually be, you know, witty. But I can't rate it.
29:08 - D. B.
I wonder if a space counts as a token because it seems like sometimes it's deleting spaces.
29:17 - A. B.
I know the timestamps are gone.
29:20 - D. D.
It's totally gotten rid of those. I found it hard to believe that it would even do this. I didn't see that one. There's a mailbox right there. "Order something coulda Dr. Trotter long miss contact her savory point where services or Zitzer dent." Yeah, it's just, you know, maybe the AI knows what it's saying, or maybe it's picking very low-probability words frequently. Yeah, very low probability, that is a true story. I think we're going to have to go to a version 5 and turn that top P down. So let's turn it all the way down, just to the extreme. Let's leave it at this; it's a little bit lower than what I had. And we'll turn the temperature all the way back up, paste in the transcript, and see what it does now.
30:25 - A. B.
I don't see any other language. Well, I think it's only picking the top probability word now.
30:29 - D. B.
Right now that the top P is set down, exactly.
30:32 - A. B.
You could do the same thing with a temperature of 0.
30:36 - D. B.
Well, it wasn't the temperature. I was curious.
30:40 - A. B.
On the last go, we had a very high top P. And we saw a bunch of different language references and stuff. And that's why I was wondering if that was contributing to that. So then we split the top P down.
30:54 - D. D.
Let's see if we can get this.
30:57 - Unidentified Speaker
Yeah.
30:57 - D. B.
All right.
30:58 - D. D.
Let's take a look at this.
31:01 - D. B.
Does anyone want to ask D. to try some other configuration there?
31:07 - D. D.
Okay. This is the one we're looking at right here. So this is identical: conference room. Speaker two, hi everyone. Unidentified, hello. Dr. M., hello everyone. Hello everybody, hello. Happy Friday. All right, looks like it's holding up. V. got changed. Look at that.
31:45 - D. B.
I mean, it's because it's... That's why I have to, yeah.
31:50 - D. D.
That's why I have to, yeah.
31:52 - D. B.
If you set the top P to, you know, maybe 0.8 or something, then it'll only pick the top-probability possibilities. If you have a high temperature and a top P that's not too close to one, it'll pick a lot of random words, but reasonable random words. So this is kind of a double whammy; I mean, there's two ways of controlling it. Okay, so can I turn the top P down?
32:33 - D. D.
You get the same result if you turn the temperature all the way down and set the top P to 0.8, right?
32:45 - D. B.
All right, so it's at top P 0.8.
32:51 - D. D.
Alright, hold on, I need to...
33:38 - A. B.
Hello. Hello. Hi. Hello. Hello.
33:43 - D. D.
Yep. Everybody happy. All right. Can you go to that?
33:50 - A. B.
The 0:43 timestamp with Dr. B.? Yep, I was just trying to find a longer one to see if there's more of a discrepancy with one. 0:43, those things are pretty close. All right, make the top P 0.9 or 0.95. Do 0.95.
34:24 - D. B.
If you do 0.95, that'll keep out all those foreign-language words and stuff. Probably 0.99 even would get rid of most of that stuff.
34:40 - D. D.
Yeah, it looks like that it's holding.
34:45 - D. D.
At Tom.
34:47 - D. B.
All right, try 0.99. I think it's pretty sensitive.
34:58 - Unidentified Speaker
Yeah, it's good enough, whatever, 0.8, 0.98, whatever. Whoa.
35:08 - D. D.
Where did this go?
35:13 - Unidentified Speaker
Control V.
35:18 - A. B.
I didn't cut it.
35:29 - D. D.
That's good enough. I don't know. I'm just guessing.
35:39 - D. B.
Your guess is as good as mine. Let's try it.
35:41 - D. D.
Let's see if we can get 90.
36:07 - D. B.
So 0:43, against the one we just applied. Yeah, it's a pretty similar thing. And it was at 1.00 that we ran it and we got, like, the foreign words. That's so interesting.
36:35 - D. D.
Yeah.
36:36 - A. B.
It's like, we have to do 0.99 now.
36:41 - D. D.
I mean, let's look at two. This is a pretty, this is a pretty good long one right here.
36:52 - Unidentified Speaker
245.
36:52 - Unidentified Speaker
Let's look at 245. All right. Okay. I found the video.
36:57 - D. D.
We'll get it. Let me go back and show you. Well, it deleted a space after "today." Oh, it deleted a couple of them; it's deleting a lot of spaces.
37:20 - D. B.
It's deleting all the spaces after the periods. I don't think it did that before. Or some of the spaces. Yeah, so it's down to the one-space rule instead of two spaces.
37:43 - D. D.
I think they changed that. It used to be, yeah, it used to be you're supposed to put two spaces after and then somebody finally came along and said, stop it, it's crazy.
38:07 - A. B.
The Oxford comma people.
38:10 - Unidentified Speaker
We're done.
38:11 - D. D.
We're going to quit wasting all this space. We got to save those bits. All right, so we're going to do the top P. Just one little tweak: temperature all the way up.
38:41 - Unidentified Speaker
Hey, everybody.
38:43 - D. D.
Do you guys want to see the large language models that I have through Ollama? Do we have time? We're almost out of time.
38:57 - Unidentified Speaker
Maybe another time I could show you.
39:03 - A. B.
Sorry, is this something where you did some one-shot retraining up front?
39:11 - D. D.
No, I just downloaded models with Ollama on another computer that I can access through here.
39:18 - A. B.
Oh, okay.
39:19 - D. D.
Local large language models that I can access through a chat UI similar to this one.
39:26 - A. B.
Oh, through Ollama, gotcha. I don't know if y'all have seen that.
39:37 - A. B.
Yeah, I mean, it's interesting, and pretty responsive. I would like to demonstrate it sometime, whenever Dr. B. can let me; I'll show you all what I did.
39:51 - D. D.
It's very exciting. Kind of, uh, well, really it was E. He kind of pushed me in this direction to do it.
40:00 - A. B.
And then I just went ahead and did it.
40:04 - D. B.
Now, what is it?
40:06 - D. D.
Tell me about it. So I downloaded three models, three large language models. I think they're 30-something-billion-parameter models. One of them is a coder, and the other ones are just chat.
40:34 - Unidentified Speaker
But I have the models I'm running.
40:39 - D. D.
I put them on a Linux machine because I have two 3060 graphics cards, and that gets me 24 gigs of VRAM. I'm sorry, not regular RAM; I have a lot more regular RAM. But in VRAM I have enough to run, I think, 30-something-billion-parameter models. So I have three, and I think I have one of the DeepSeek models, and then a Qwen coder and a Qwen chat.
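The setup D. describes (models pulled with Ollama on a Linux box, reached from another computer) is typically accessed over Ollama's HTTP API, which listens on port 11434 by default. A sketch using only the standard library; the model tag and the sampling options are assumptions, not D.'s actual configuration:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_local_request(prompt: str,
                        model: str = "qwen2.5:32b",  # hypothetical model tag
                        temperature: float = 0.0,
                        top_p: float = 0.99) -> dict:
    """Payload for Ollama's /api/generate endpoint. The same two
    sampling knobs from the demo are passed under 'options'."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,   # return one JSON object instead of a token stream
        "options": {"temperature": temperature, "top_p": top_p},
    }

def run_local(prompt: str) -> str:
    """Send the request to a locally running Ollama server."""
    data = json.dumps(build_local_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Point `OLLAMA_URL` at the Linux machine's address instead of localhost to reach it from another computer on the network, as D. does.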
41:21 - D. B.
Yeah, we just let me and do it, we'll schedule it.
41:26 - D. D.
I'm ready.
41:26 - D. B.
Maybe we can not do it next week, just because we did something similar this week.
41:33 - Unidentified Speaker
OK.
41:33 - D. D.
Maybe the week after. Just tell me when, and everything's set up and ready to go.
41:39 - D. B.
OK, what should I call this demo? Give it a title or something.
41:44 - D. D.
Local Large Language Models? Are you talking about the future demo? Yeah, the one you've just been talking about. Yeah. Uh, just, you know, local large language models.
41:57 - D. B.
They're just local.
41:59 - Unidentified Speaker
Yeah.
41:59 - A. B.
So, circling back to this topic: no big changes as far as, like, words from other languages or anything like that. So this is that 0.99; it's the one with top P at 0.99. Yeah, that's fascinating. Can we try it one more time at 1.00, just to see if that's what it was?
42:29 - Unidentified Speaker
Yeah.
42:29 - A. B.
See if it wasn't like some glitch. Yeah, right.
42:32 - D. D.
Like, it's so random, just for one thing. And I imagine that the temperature probably has a play in it somewhere. I mean, my directive is pretty tight, though, right? You know, it's a setting that they have. Oh yeah, I gotta redo this. Okay: temperature all the way up, top P is set at one.
43:27 - Unidentified Speaker
Give it the push through.
43:35 - D. D.
It's at one. 1.00. That's interesting.
43:39 - Unidentified Speaker
So there must be something in that one percentage point; that must be where it all drops off, and there's just a bunch of junk.
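The cliff the group observed between top P 0.99 and 1.00 is consistent with how nucleus sampling interacts with a long tail: thousands of junk tokens can each be individually negligible yet jointly hold a percent of the mass, and a high temperature inflates exactly that tail. A toy illustration with made-up numbers:

```python
def nucleus_size(probs, top_p):
    """Number of tokens kept by top-p filtering (descending order)."""
    cum, kept = 0.0, 0
    for p in sorted(probs, reverse=True):
        kept += 1
        cum += p
        if cum >= top_p:
            break
    return kept

def heat(probs, temperature):
    """Re-temper a distribution: equivalent to scaling the underlying
    log-probabilities by 1/temperature, then renormalizing."""
    powered = [p ** (1.0 / temperature) for p in probs]
    total = sum(powered)
    return [q / total for q in powered]

# Toy vocabulary: four sensible tokens, plus 10,000 junk tokens that
# together hold only 1% of the mass, a stand-in for the foreign-script
# and emoji tokens that showed up in the demo.
sensible = [0.60, 0.25, 0.10, 0.04]
junk = [0.01 / 10_000] * 10_000
probs = sensible + junk

# top_p = 0.99 keeps only the four sensible tokens; top_p = 1.00
# keeps the entire tail. Then, at temperature 2, the junk tokens
# collectively end up with the majority of the sampling mass.
hot = heat(probs, 2.0)
```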
43:49 - A. B.
I guess that's, hmm. Can the temperature be above 2?
43:53 - D. B.
I mean, I know the slider bar only goes up to 2, but maybe using the API it can go higher.
44:03 - D. D.
I don't think so. I mean, I think probably if you put it higher, it would probably default to whatever default is.
44:15 - A. B.
Yeah, that's really fascinating on the top view though.
44:21 - Unidentified Speaker
I see.
44:22 - D. D.
The highest temperature is two, is what they're showing here.
44:27 - Unidentified Speaker
I'm sure that that's it. I mean, I bet it doesn't.
44:39 - D. D.
I mean, I'm sure there is a way though.
44:47 - A. B.
But yeah, I don't know.
44:52 - D. D.
I mean, I would not have thought in a million years that... did I close that down? Oh, I didn't, okay. I would not have thought that just a hundredth off the top P would make the difference, no matter what my temperature was.
45:14 - A. B.
I would have never thought that.
45:17 - D. B.
It's actually... I mean, I'm going to claim that they haven't figured out this interface very well, because of what it did. For it to demonstrate this well here: now we're bringing in this direct prompt. This instruction is supposed to be followed, right? Yeah. So you should check periodically to make sure that it's doing that.
45:55 - D. D.
It's supposed to make no other comments.
46:00 - Unidentified Speaker
It's supposed to just do this one thing.
46:05 - D. D.
You can't honestly say that the system prompt might not have something to do with it, some factor. I don't know if we have time, but if we've got time, I say we put the temperature at one.
46:36 - D. B.
Put the top P at 0.5.
46:40 - D. D.
This probably isn't super scientific. I'm just going to delete that, and I'll delete this again, though it probably still remembers what happened previously. I don't know, but let's just see. Is there anything notable here?
47:07 - A. B.
Seems like a pretty normal. It does look normal.
47:20 - Unidentified Speaker
When it stops, I'll check it. Yeah.
47:30 - A. B.
I was kind of curious, I know we're kind of out of time, but with the high top P: if you still selected the highest top P and then a zero temperature, I wonder if it would still do the same thing. I'm thinking it would, right? As far as the references to the other languages and symbols and stuff, because that seems to be the factor that's going and grabbing all of these. You want to do a high top P and a zero temperature?
48:04 - D. D.
Right. Yeah.
48:05 - A. B.
That's the way I do it.
48:07 - D. D.
Oh, I thought you had a high temperature the last time we ran it.
48:13 - A. B.
Okay. You want to do a zero top and a high temperature?
48:17 - D. D.
No, no.
48:18 - A. B.
I was thinking the other way. I thought you had it high temperature and high top P the last time.
48:27 - D. D.
Last time that we ran it, yeah, that was the one that was super confusing. If the top P is one and the temperature is high, it just gives us garbage. But I haven't really been changing the top P. So this one is zero top P; the top P is zero.
49:00 - D. B.
Okay, so what is this right here?
49:08 - D. D.
We 34... that just... three four... V's not... yeah, V's not doing this thing today. Oh, V's not doing this thing. Yeah, that's actually pretty good. Oh, we had a crash issue, a computer crash, remember? Yeah, so it really just had everything to do with the top P the whole time. It's interesting. Yeah. And so in order to compensate for the top P being at one, you have to turn the temperature all the way down to zero. But wait, so did... okay. So we did do that.
50:07 - A. B.
So remind me what this output is at. Is this just both in the middle?
50:16 - D. D.
Okay, so which one do you want to do? So you said that we did temperature at zero and top P at one. Yeah, that's what I do; that's how I turn them in, right there. And that doesn't result in any odd symbols. The first time I went through it, there were just some slight changes, with the temperature at one. OK.
50:48 - A. B.
And so that's when I started changing the temperature to zero.
50:54 - D. D.
I saw just, I mean, some subtle changes that really didn't change a lot. And then we discovered that if you turn the temperature all the way up, it just gives you some kind of, you know, gibberish.
51:14 - A. B.
Yeah. And, uh, that makes sense. Okay. Yeah.
51:18 - D. D.
But really, in order to offset the top P of one, the temperature has to be low, very low. And it's really the top P at one that we saw. I mean, you know, there would be a lot more; we'd have to really check it, because like I said, when the temperature is one, I saw where it just kind of... you wouldn't even notice.
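The interaction being described here can be sketched in code. This is a minimal, hypothetical illustration of the two sampling knobs, not the actual internals of any particular chatbot: temperature rescales the scores before the softmax (low temperature sharpens the distribution, temperature zero is greedy), while top P keeps only the smallest set of highest-probability tokens whose cumulative probability reaches P. With top P at one, nothing is trimmed, so only a very low temperature keeps the output from wandering into unlikely tokens.

```python
import math
import random

def sample_token(logits, temperature=1.0, top_p=1.0, rng=random):
    """Toy sketch of temperature + nucleus (top-p) sampling.

    logits: dict mapping token -> raw score.
    """
    if temperature == 0:
        # Temperature zero means greedy decoding: always pick the argmax.
        return max(logits, key=logits.get)

    # Temperature rescales logits before the softmax: low T sharpens,
    # high T flattens the distribution.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    m = max(scaled.values())
    exps = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}

    # Top-p: keep the smallest set of highest-probability tokens whose
    # cumulative probability reaches top_p; with top_p = 1 nothing is cut.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cum = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break

    # Renormalize over the surviving tokens and sample one.
    z = sum(p for _, p in kept)
    r = rng.random() * z
    for tok, p in kept:
        r -= p
        if r <= 0:
            return tok
    return kept[-1][0]

logits = {"the": 5.0, "a": 3.0, "xyzzy": 0.5}
# Temperature 0 is deterministic even with top_p = 1:
print(sample_token(logits, temperature=0, top_p=1.0))   # -> "the"
# A low top_p trims unlikely tokens even at high temperature;
# here only the top token survives the cutoff:
print(sample_token(logits, temperature=2.0, top_p=0.5))  # -> "the"
```

This matches the observation in the discussion: at top P of one the whole vocabulary stays in play, so a high temperature makes rare "garbage" tokens reachable, while turning the temperature down to zero offsets that by collapsing the choice to the single most likely token.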
51:53 - Unidentified Speaker
I mean, if you just, you know, weren't looking at it. No, this has been really helpful.
51:59 - Unidentified Speaker
Like I said, I've never got under the hood and tested in this way.
52:04 - D. D.
So, um, I learned something today. Yeah.
52:08 - A. B.
Do you just have a normal... do you have, like, the $20 paid account with, um, ChatGPT?
52:14 - D. D.
So how this works, all right, so this is not normal, no.
52:18 - A. B.
Let me see if I can get back to that.
52:24 - D. D.
So this right here. When I get to here, this is $20 a month, or whatever the subscription is, right? But when you have this subscription, you can put money in the cloud, so to speak. And I think I lose money every year. Or I lost money the first year. I put $20 on there, and I did a lot of stuff. I mean, I fine-tuned some models, like three or four models I've done, just all kinds of experiments and stuff. And I ended up, I think they kept like $12 or $13, and made me pay another amount of money, whatever it was, $20 or something, after a year. So it's not very expensive to be in the API.
53:19 - A. B.
So even if you have a paid account, you still have to pay a little bit extra to get into the API.
53:31 - D. D.
That's what I've heard now. So, oh, I got an email from P. saying that P. now has the API, but I don't know exactly where it is; I haven't found it. It's one of the things that I was going to do. But my subscription to P., I was paying whatever their subscription was, and I thought it was just a great deal, and then they lowered it. They actually sent me an email saying you can lower your subscription to $8. And so I have access to all this stuff and tons of models, and it's like $8 a month. I mean, you know, AI right now is the candy of the internet. I would really like to have subscriptions to all of them. These right here, this one right here, this is a really good model. And this is a free plan.
54:44 - A. B.
It's really good.
54:48 - D. D.
And I think G. is pretty good. This is a free plan. I don't know up to how much, but I just get them and test them out. Just random topics or whatever. They seem pretty good. Yeah, I use the A. one a bit too.
55:22 - A. B.
I need to do some more with G. I did this one right here.
55:30 - D. D.
This was talking about, like, a teacher grading app. I was at the nursing home visiting my dad. And he just kind of sits there and hangs out. So I just went on here and constructed this thing with it. And I was just amazed. You know, this is kind of the high-level engineering part of building an app. But I mean, I got the makings of the documentation to get started. I think, given enough time, I could build this whole app. And then, once I have it engineered, I could just go into the pair programming. I've read a book where the guy suggests that you use something like Copilot and then something like ChatGPT. If Copilot gives you something that you don't understand, you run it in ChatGPT, and ChatGPT tells you what that code is doing. You can still be the overlord of what's happening, but you have somebody that's actually writing the code for you and somebody that's actually explaining the code to you. You're just directing and managing the operation. It's way more efficient, way more efficient. It requires access to two large language models, and at least one coder. And I think they're saying that there are actually models built just for coding that might be better than ChatGPT. But I guess I'm done with that presentation. I will gladly do another one at any time.
57:48 - D. B.
Do another one. You know, I actually don't know if I'll be available next week. I'm traveling back to Little Rock around then, but I'll let you know if it's canceled. But anyway, instead of doing two in a row, I just won't make it.
58:07 - D. D.
We'll put a break in the middle somehow.
58:10 - D. B.
Okay, definitely happy to do the next one too. I think these kinds of practical, hands-on demos are going to be popular, because people, even experts, are trying to figure out how best to use this stuff.
58:23 - D. D.
All right, well, thanks. Did we lose R?
58:28 - A. B.
I think so.
58:29 - D. B.
Oh, that other R. That is the other R.
58:33 - D. D.
Yeah, I see you there. I see you there. Your mic is off, if you're talking to us.
58:40 - D. B.
It's on. OK, folks. Well, thanks. Thanks to D., and we'll see you all next time, hopefully next week.
58:49 - A. B.
All right, guys. Take care. Have a good weekend. Bye.