Friday, November 8, 2024

11/8/24: Discussions about on-campus events and their implications

    Machine Learning Study Group

Welcome! We meet from 4:00-4:45 p.m. Central Time. Anyone can join. Feel free to attend any or all sessions, or ask to be removed from the invite list; we have no wish to send unneeded emails, of which we all certainly get too many. 
 Contacts: jdberleant@ualr.edu and mgmilanova@ualr.edu

Agenda & Minutes (137th meeting, Nov. 8, 2024)

Table of Contents
* Agenda and minutes
* Read.ai transcript

Agenda and minutes
  • Announcements, updates, questions, etc.?
    • The campus has assigned a group to participate in the AAC&U AI Institute's activity "AI Pedagogy in the Curriculum." I.U. is on it and gave us an update.
    • A demo of the real-time use of AI to create the Doppler effect interactive animation, and perhaps other demos, will be scheduled as soon as convenient for R.M. and V.W.
    • There was an event related to National Cybersecurity Awareness Month. JK attended and shared thoughts leading to discussion.
    • Here is another upcoming event:

      Register now for the Ark-AHEAD Fall (Virtual) Workshop: Helping Students (and Faculty) Harness the Power of Generative AI for Good, not Evil



      Presenter: Liz McCarron, EdD, MBA, ACC, CALC
      Webinar: Nov. 14, 2024
      9:30 a.m.-Noon

      Students quickly adopted Generative AI, but faculty have been slower to get on board. Worried about cheating, many schools banned the technology. But
      this can hurt neurodiverse students who have adopted GenAI at a higher rate than neurotypical peers. This session will help beginners learn what
      GenAI is and what it is not, what it can do and what it can’t. Attendees will gain a basic understanding of how ChatGPT works and its key features, 
      capabilities, and limitations. Attendees will also experience creating and refining prompts. We will discuss the ethical implications of using GenAI and
      how to create assignments that help students use GenAI responsibly. Join us and get inspired to experiment with GenAI to help your students and yourself.

    • Anything else?
  • Here is the latest on readings and viewings:
    • Next we will work through chapter 5: https://www.youtube.com/watch?v=wjZofJX0v4M. We got up to 15:50, but it has been a while, so we will start from the beginning next time we work on this video! (When sharing the screen, we need to click the option to optimize for sharing a video.)
    • We can work through chapter 6: https://www.youtube.com/watch?v=eMlx5fFNoYc
    • We can work through chapter 7: https://www.youtube.com/watch?v=9-Jl0dxWQs8
    • Computer scientists win the Nobel Prize in Physics! https://www.nobelprize.org/uploads/2024/10/popular-physicsprize2024-2.pdf got an evaluation of 5.0 for a detailed reading.
    • We can evaluate https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10718663 for reading & discussion.
    • Chapter 6 recommends material by Andrej Karpathy, https://www.youtube.com/@AndrejKarpathy/videos for learning more.
    • Chapter 6 recommends material by Chris Olah, https://www.youtube.com/results?search_query=chris+olah
    • Chapter 6 recommended https://www.youtube.com/c/VCubingX for relevant material, in particular https://www.youtube.com/watch?v=1il-s4mgNdI
    • Chapter 6 recommended Art of the Problem, in particular https://www.youtube.com/watch?v=OFS90-FX6pg
    • LLMs and the singularity: https://philpapers.org/go.pl?id=ISHLLM&u=https%3A%2F%2Fphilpapers.org%2Farchive%2FISHLLM.pdf (summarized at: https://poe.com/s/WuYyhuciNwlFuSR0SVEt). 6/7/24: vote was 4 3/7. We read the abstract. We could start it any time. We could even spend some time on this and some time on something else in the same meeting. 

Read.ai Transcript

Fri, Nov 8, 2024

0:31 - E.G. 
Nope, told him the opposite.

1:35 - D.B. 
I'm sure you'll be glad to know.

1:40 - J.K. 
I'm actually in NVIDIA RAPIDS (rapids.ai) on another screen right now.

1:46 - Multiple Speakers 
It is phenomenal.

1:50 - Unidentified Speaker 
OK.

1:51 - J.K. 
I am definitely going to have to talk to doctor.

1:58 - E.G. 
Talk to Dr. M. I'm taking her class. It's a good series they're putting on, huh?

2:10 - Multiple Speakers 
Oh, it's phenomenal.

2:12 - E.G. 
It's just off-the-chart good. Wow.

2:18 - D.B. 
You don't usually hear that.

2:23 - R.S. 
What course is this?

2:26 - E.G. 
This one is on parallel data loading and distribution, tracking gradient descent, log loss, stochastic, uh, leveling of data.

2:44 - J.K. 
So, I mean, they're in the business of selling chips.

2:54 - E.G. 
Yeah, but they also built, uh, an engine that uses their chips for data science.

3:06 - D.B. 
Okay. Welcome back, A.

3:09 - A.B. 
Hey, sorry about that.

3:12 - D.B. 
I don't know if it was your end or my end, but something was not right.

3:25 - Unidentified Speaker 
All good. No, thanks much. OK.

3:30 - V.W. 
Looks like we got the whole crew here.

3:37 - J.K. 
All right, I'm trying to remember how to start the AI transcriber here.

3:46 - D.B. 
I got a message saying the meeting was being recorded. I have no idea where that came from.

3:58 - V.W. 
Well, I did allow read AI into the meeting. Oh, that works.

4:07 - Multiple Speakers 
No, it's not my jam. I appreciate it, though.

4:16 - J.K. 
Oh, yeah, that does sound good.

4:22 - D.B. 
But Zoom is supposed to have a, and we've used it, and I can't find it.

4:35 - V.W. 
I guess we'll have to fire up the old LLM. And, you know, not that we need to, but is anyone uncomfortable with having the transcriptions?

4:47 - D.B. 
I don't usually post it. I mean, actually, last week I posted the transcription, but I deleted all the people's names from it.

4:57 - V.W. 
I think they're tremendously useful because some of the conversations that we've had in this group border on the historical, and it would be nice to go back to them if we ever want to write either historical or technical works based on conversations that we've had. Also, so that we can attribute all the folks that have participated, which have been numerous.

5:21 - D.B. 
Yeah, well, start summarizing. Okay, maybe that's the button. All right. Well, let's see where we are. Well, let's go ahead and get started. A couple of inputs from different people. So one thing is that the campus here has assigned a body to participate in the American Association of Colleges and Universities; they have an AI Institute about AI pedagogy. And I.U., who's present today, is involved in that. I think she's the representative from the STEM College. But I'll let her tell us something about this and what's going on, just so we can get an update on the news.

6:19 - I.U. 
OK. And this will be really short and sweet, because we're just now getting started. We attended the institute online back in September, and that had standard conference presentations about different kinds of AI, and what AI is for people who weren't familiar with those kinds of things, and sharing about how some institutions were using AI in various forms. And every institution that sent a group was tasked with coming up with what they called an action plan, for UALR in our case. And what we decided was we didn't really know enough about where UALR was currently, what was actually going on on campus. We have a draft version of a survey to go out to both faculty and students. We are working on the IRB protocol to get IRB approval for this, and it will ask about what people think about AI in general and its use in education, and try to find out what people are actually doing, or allowing their students to do, in their classes using AI, if anything, and then get input back from students on how they are using AI, both in their classes and outside of class, and try to establish a baseline, and then decide where we want to go from there by talking to the institution as a whole. Does the institution want to come up with some sort of statement about AI? Do we want to ask faculty to put a section in their syllabus about AI and what students are allowed to do with it, if anything? Do we want to set up an in-house conference kind of thing, or training, maybe through Atlee, where people who are using AI in any form in their classes come and tell other people about it? But like I say, the survey is what's being developed right now, to find out what's actually happening on campus: faculty and students, and administrators, as far as that goes. OK. Thank you.

9:17 - D.B. 
Yep. Any questions about this?

9:19 - A.B. 
I just had a question: in this meeting or whatnot, are they discussing any potential tools to, I don't know, potentially curb use of AI in certain academic contexts? I've read up on some of these things, and I guess from what I've read, none of them do a great job of actually detecting it.

9:46 - I.U. 
That was the consensus of the conference. Any place that it came up, people were saying that you certainly were not going to want to put yourself in the position of trying to call students out on using AI if you had told them not to in, say, a composition class or something, because the detection tools were not at that point yet.

10:14 - A.B. 
Got it. Yep. Makes sense.

10:16 - V.W. 
I left a couple of comments in the chat. Short version is survey sounds like a good idea. And I enumerated why.

10:27 - I.U. 
And if anybody in this group has comments that they want to make to our group as a whole, or if you just want to make comments to me about something, feel free to shoot me an email or give me a phone call or let me know that you'd like to chat. And I'm happy to get input from any and everybody.

10:55 - D.B. 
I've got a couple of, oh, go ahead.

10:58 - D.D. 
I was just gonna ask, V., did you say put something in the chat.

11:04 - V.W. 
Yes. Um, oh wait. And it was sent.

11:08 - Multiple Speakers 
Thank you for noting the glitch. I'll try again because it went to, uh, reading notes dot, uh, AI.

11:15 - V.W. 
And let me just fix that real quick. Thanks for checking on that, D.

11:21 - D.D. 
Thank you. Quality control is everything.

11:24 - V.W. 
I'm going to be the fly in the ointment here.

11:37 - E.G. 
AI is becoming so pervasive, and the ability to detect it changes from week to week. Sometimes it's harder to buck or go against this tide than it is to embrace it, but put it in constraints. One of the things that we did when I was at Florida Cancer is AI was actively used, but we would work with the individuals, the employees, to make sure that they got the right context, the whole prompt engineering piece. It might be wise for us, instead of fully negating it, to create the assignment such that, and this is going to be a lot of work for the instructors, it pushes the envelope. Because we know some places where they hallucinate. Instead of assigning them the answer, cause a situation where the AI hallucinates, have them come back, and have them explain why the AI and human answers differ.

13:28 - D.B. 
Yeah, how about if you say: get the AI to say something incorrect? Well, then they have to know what's correct, right? Only if they know.

13:42 - Multiple Speakers 
That's a lot of work for the instructor, if the instructor isn't using AI to build the curriculum.

13:49 - E.G. 
True, true. And I made an assumption that that was the case.

13:54 - V.W. 
I think those of us here are kind of getting used to having this new set of clothes on, this new set of coveralls. It's so far-reaching in its implications that the whole idea of detecting whether someone is cheating or not is just a waste of time relative to what you could be doing in instructing people how to prompt, how to interpret what they prompt, and how to pit multiple AIs and LLMs against each other to make sure you're getting a consensus view that's relatively safe. Those skills take up so much of the instructional time that the whole idea of cheating is just, it's not even worth the work, it's just a waste. And one of the things that I loved about this is...

14:50 - E.G. 
...telling them to use certain models. If they can do prompt engineering, the document size is a cornerstone, but also a rabbit hole. It's not buckets, it's tokens. So if you pass a document larger than a certain size, the token limit is going to cause it to fail. You can actually design your problem statement to force them into where most models will fail.
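
What E.G. is describing can be checked mechanically. A minimal sketch, assuming the tiktoken tokenizer library; the 8,000-token budget is illustrative rather than any particular model's real limit:

```python
# Estimate whether a document will blow past a model's context window.
# The 8,000-token budget below is illustrative, not a real model's limit.
import tiktoken

def fits_in_context(text: str, budget: int = 8_000) -> bool:
    enc = tiktoken.get_encoding("cl100k_base")
    n_tokens = len(enc.encode(text))
    print(f"{n_tokens:,} tokens against a {budget:,}-token budget")
    return n_tokens <= budget

# A deliberately oversized "document", per E.G.'s point about designing
# problem statements that push models past their token limits.
doc = "The quick brown fox jumps over the lazy dog. " * 5_000
if not fits_in_context(doc):
    print("Too large: the model will truncate, ignore, or fail on it.")
```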

15:34 - J.K. 
I really agree with what V. said about instructors integrating AI, as far as teaching this new tool. Recently, I had a professor reach out from my alma mater, UCA, where I did undergrad. She's kind of heading the effort to guide AI adoption within the writing department. And I built her a custom prompt where an instructor can put in their curriculum outline or their learning objectives, and then the AI comes back with suggestions on how to integrate prompt engineering. So there's going to be, I think, a conversation about just, what is the role of higher education? How do we equip people for this new world of work? And I mean, the first step is adopting it ourselves as curriculum designers and things like that. But I really think that an organization that cracks down on the most powerful tool that exists right now for learning, or for doing work, is just going to produce graduates who are competing with people who did learn how to use it, which is scary to say. I mean, I understand that this is kind of a bleeding-edge thing. But I think a lot of professors are going to have to use similar strategies, where they're really thoughtfully saying: I teach X; how do I integrate AI into this? And UCA is doing a really good job. I'm excited that we have this survey going out and that we're making progress on this.

17:23 - I.U. 
And people were already saying at the conference that they were hearing back from employers and grad schools and places wanting to know what their graduates were going to know about AI, what was out there, and its ethical uses and so on. And they were making that exact same point: that we're insane if we think we can say anymore, you can't use AI at all. That was not going to fly. And I know a little bit more than some people, because I've tried to come to these meetings and some other ones and listen to people who use it a lot and know what's going on. But when I go talk to other people across campus, one of the very first things I get as a question is: how am I going to detect when my students are using it to cheat? It's almost the first question from a decent number of people, which is one of the reasons we want the survey out there. We want to know how prevalent that is. This survey is not asking people for names, but what we'd like to know eventually is who is using it and can come and tell other people about it, and how to get the people whose focus is "how do I get them not to cheat with it?" to broaden their perspective and think about how they could use it in their classes in ways they hadn't even thought about before.

19:26 - V.W. 
This reminds me a little bit of that line in The Matrix, after Trinity's crashed through the building and Agent Smith and his cohorts are talking, and they're saying to the police who have arrived on the scene, your men are already dead. The cat is so far out of the bag now that we're more in a situation of how do we manage attrition. Nicholas Negroponte said the internet treats censorship like damage and routes around it. And I see the same thing for the educated student in the 18-to-24 age range: they very quickly will make decisions about whether or not the curriculum is meeting their intellectual and, hopefully, workforce needs. And if they decide that it's not, they're in a situation where they've not only been given the internet to learn just what they need to know, but they've been given AI, which can generate all the world's knowledge at a moment's notice. So the people that are going to come out ahead are the people who have learned to appropriate the meta-level tools and use them to engineer work product in a way that exceeds that of their contemporaries. It used to be how strong you were, and then it used to be how smart you were. But then it became how fast you learn. And now it's how fast you appropriate the new generation of tools. And young people are really good at figuring out where to focus their attention based on what's going to give them the most leverage. So if they perceive a generational gap in meeting those needs, we'll simply see enrollments drop, and we won't be able to explain the empty classrooms until we wring our hands after the fact and find out that we weren't prioritizing the right thing, a precedent of which we have from this last week.

21:25 - D.D. 
So, yeah, that's my thing with AI detection. When we try to make tools to detect AI, most models have a number of false positives. So if we rely on tools to check to see if somebody's cheating, we're going to capture people that aren't cheating and say that they are.
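
A worked example of D.D.'s base-rate point; all the rates below are invented for illustration:

```python
# Even a seemingly accurate detector flags many innocent students once
# base rates are taken into account. All numbers here are made up.
n_students = 1_000
p_cheat = 0.10               # assumed share of submissions using AI
sensitivity = 0.90           # detector catches 90% of real AI use
false_positive_rate = 0.05   # detector wrongly flags 5% of honest work

true_flags = n_students * p_cheat * sensitivity                 # 90 students
false_flags = n_students * (1 - p_cheat) * false_positive_rate  # 45 students
share_innocent = false_flags / (true_flags + false_flags)

print(f"{true_flags + false_flags:.0f} students flagged; "
      f"{share_innocent:.0%} of them did nothing wrong")  # about a third
```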

21:51 - D.B. 
And that is no good. Yep.

21:55 - Unidentified Speaker 
Moralizing.

21:56 - I.U. 
I. had a comment. No, I agreed.

22:00 - E.G. 
I think it's going to be by vertical, too. Using it in literature, I don't know enough about literature to know the ins and outs, but in technology, um, when I mentor, uh, people on my team, I don't ask them how to do something.

22:25 - Multiple Speakers 
I ask them the ins and outs of the whys, because anybody can look in a book, or go on

22:35 - E.G. 
ChatGPT, and look up the what real easy. Understanding the why and the history of it tends to be a little rarer, because ChatGPT will look at how they picked it up and start spewing or citing references. I want people to explain to me: why do I need to do this? When I sat in an interview the other day, I said I could sit down and look at a book and tell you the whats and how to do something. What I bring to the table is why we need to do it this way, and what the impact of these types of decisions is. That's something AI has yet to really synthesize in a technological environment.

23:24 - D.B. 
I have a few techniques that I've developed for use in teaching that try to, you know, deal with this situation. Let me give you a quick show of a couple of them. What you see here is a homework assignment from one of my classes. You can see the question is: read the following, answer the questions at the end. Well, you might say, all you have to do is paste all this stuff that you see on the screen into ChatGPT, then paste in the questions, and ChatGPT will answer it. But what I did here is this white-background text is actually an image of a document. They can't paste it in. I can't highlight it; you know, I'm trying to highlight it right now, and it won't highlight, because it's an image. So essentially the student can't paste this into ChatGPT; they have to read it.
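
A minimal sketch of the render-as-image technique D.B. demonstrates, assuming the Pillow imaging library; the text and file name are placeholders (and, as V.W. notes next, OCR defeats this quickly, so it only raises the effort bar):

```python
# Render an assignment's reading as an image so it cannot be selected
# and pasted into a chatbot. Uses Pillow's default bitmap font; swap in
# a real .ttf via ImageFont.truetype() for print quality.
from PIL import Image, ImageDraw, ImageFont

def text_to_image(text: str, path: str = "homework.png") -> None:
    img = Image.new("RGB", (800, 1000), "white")
    draw = ImageDraw.Draw(img)
    draw.multiline_text((20, 20), text, fill="black",
                        font=ImageFont.load_default())
    img.save(path)

text_to_image("Read the following and answer the questions at the end.\n"
              "(...the full reading would go here...)")
```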

24:15 - V.W. 
And then chat GPT doesn't know why, doesn't know the background for the questions. With respect and great reverence and all that, this, your goose is already cooked. And let me tell you why.

24:26 - Multiple Speakers 
Any student that owns a Mac has access to the default image program called Preview.

24:31 - D.B. 
They can OCR it, I know that.

24:33 - V.W. 
They can OCR it just by scraping with their mouse from left to right. So it turns this into a 15-second errand.

24:40 - D.B. 
Yeah, well, it makes it a little harder, you know; they have to go through the extra step. And plus, the students who are clever enough to do that, maybe they're not the ones who are more likely to cheat. Anyway, here's the next method. See, it says click to show question one? Okay, well, that means you can't paste question one into ChatGPT. You have to click to show it, and it shows up one word at a time in the same spot on the screen: Rapid Serial Visual Presentation. I'll show it.
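
A rough sketch of the Rapid Serial Visual Presentation idea; the classroom version would be script in the quiz page, but this terminal version shows the timing mechanism, with an arbitrary words-per-minute rate:

```python
# Show a question one word at a time in the same spot, so there is never
# a full sentence on screen to copy. The rate is arbitrary.
import sys
import time

def rsvp(question: str, wpm: int = 180) -> None:
    delay = 60.0 / wpm
    for word in question.split():
        sys.stdout.write("\r" + " " * 40 + "\r" + word)  # overwrite in place
        sys.stdout.flush()
        time.sleep(delay)
    print()

rsvp("Explain why the observed frequency shifts as the source moves.")
```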

25:16 - Unidentified Speaker 
I'm gonna click it. One, two, three, click.

25:20 - E.G. 
That's a good one, but that would That would drive me nuts.

25:25 - Multiple Speakers 
That would drive me nuts.

25:27 - V.W. 
That takes five minutes longer to crack than the previous one.

25:31 - D.D. 
You're doing some really mean things.

25:33 - V.W. 
If my students are not using AI in the course, I'm going to kick them out. That could go into English lit or something. So this is how the question is presented.

25:44 - D.B. 
A clever student might say, well, I'm going to view the page source and get the question from there. But I encrypted it.

25:53 - V.W. 
So if they go to the- The person has to retain the thought.

25:59 - D.B. 
They'll find the question, but it will be encrypted.
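
D.B. doesn't say which encryption scheme he used; here is a minimal stand-in sketch of the idea, using XOR-plus-base64 obfuscation (enough to deter copy-paste from the page source, not a determined attacker):

```python
# Keep the question unreadable in the page source; the page's script
# would decode it only when the student clicks. The key is a placeholder.
import base64

KEY = b"course-secret"

def obfuscate(question: str) -> str:
    raw = question.encode()
    xored = bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(raw))
    return base64.b64encode(xored).decode()

def reveal(blob: str) -> str:
    xored = base64.b64decode(blob)
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(xored)).decode()

blob = obfuscate("What assumption does the derivation make about the medium?")
print(blob)          # all that a view-source would show
print(reveal(blob))  # decoded only at click time
```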

26:02 - D.D. 
So they- But I know how to type. I mean, I can type everything that you typed in just a few minutes.

26:11 - E.G. 
I mean- Yeah, I mean, you can- You might've made them mad.

26:16 - Multiple Speakers 
That's- Yeah, you're just pissing people off here.

26:20 - D.B. 
Don't make me mad.

26:21 - V.W. 
But it's a pain in the neck.

26:24 - D.B. 
It makes it harder to cheat.

26:27 - D.D. 
It is a pain in the neck.

26:30 - J.K. 
The most cynical part of me, as someone studying instructional design, is just saying that trying to crack down on cheating on tasks that AI can complete better than we can is kind of solving the wrong problem, in my opinion. I've thought for a while that tests and papers, I mean, those have been the assessments for a long time, but I don't think they're solving the right problem. It really comes down to: how do we change what we teach so that we have better ways of understanding whether people are actually retaining information?

27:12 - D.B. 
That brings me to my third method here. This is one of my courses. I say, here's how you should answer this question: you tell ChatGPT to teach you using Socratic questioning. So here's my prompt; this is an example, and I show them this example. I say, teach me about ethics using Socratic questioning. That means the computer, the AI, asks you the questions, and you answer them. Here I described Socratic questioning: you ask me questions to guide my learning. And then I copy-pasted an article I wanted ChatGPT to teach me about. Okay, and then ChatGPT obliged; it says, let's explore ethics using the Socratic method, I'll ask you a series of questions. It asked me a question. I thought it through and answered it.

28:07 - Unidentified Speaker 
It asked me a follow up question. I tried to answer it. All right, it adds another.

28:15 - D.B. 
So basically, I've directed the students to use ChatGPT to teach them. And then I say, for this homework, you turn in the transcript of you answering ChatGPT's questions. And it's working pretty well. I mean, I don't know how- That's a good one. Is ChatGPT asking the right questions? Is the student thinking through the answer, or are they just- I like that one.
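
A minimal sketch of the Socratic-tutor homework D.B. describes, assuming the openai Python client and an OPENAI_API_KEY in the environment; the model name and prompt wording are illustrative, not his exact prompt:

```python
# The AI asks the questions, the student answers; the student turns in
# the resulting transcript. Model name and wording are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

INSTRUCTION = (
    "Teach me about the article below using Socratic questioning: ask me "
    "one question at a time to guide my learning, wait for my answer, then "
    "follow up. Do not lecture and do not give away answers."
)
article = "(...paste the assigned article here...)"

messages = [{"role": "user", "content": f"{INSTRUCTION}\n\n{article}"}]
while True:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    question = reply.choices[0].message.content
    print(question)
    messages.append({"role": "assistant", "content": question})
    answer = input("> ")  # the student's turn
    if answer.strip().lower() == "done":
        break
    messages.append({"role": "user", "content": answer})
```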

28:41 - E.G. 
You know, I can't enforce that, but it seems to be working okay. Yeah, you're turning it on its ear, but imagine, like, 55 years ago, okay, people were against encyclopedias, Encyclopedia Britannica. We wanted you to research through multiple books, multiple resources, in libraries. People went to Encyclopedia Britannica and were able to find all of the information they needed in one spot; they were able to refer to that. We had to change our teaching mechanism. Granted, we're going back more than a couple of years, but we had that same paradigm shift that we have today.

29:25 - V.W. 
I've got a more radical assertion: that we no longer test students on retention, because that problem is solved and lives in the cloud. Instead, we test students on producing a viable solution to a novel problem in a specified timeframe. So you give them a problem, you say, I don't care how you solve it, but the timer's ticking, and I'm going to test you based on the viability of the solution that you provide me. And that statement is general enough that whether or not the person retains it is moot. It's because we no longer retain. Yes, it's good to have general knowledge; some of us have read the whole Encyclopedia Britannica as an exercise. But still, it's the ability to create solutions rapidly. And that means that what we view as general knowledge is the knowledge about getting and obtaining the knowledge that's relevant at the point of application, rather than somehow training a person over two decades of their life to be ready for any problem that they might encounter, because that model of education has now collapsed. Provided they keep making GPUs. Okay. Well, other discussion or updates?

30:42 - D.B. 
We're looking forward at some point to a demo of real-time use of AI to create the Doppler effect animation. Read and I have started putting something together.

31:00 - V.W. 
We're trying for next week now. She has an important talk on Thursday, a very public talk, but provided that doesn't interfere, we're hoping to have something put together. But here's what's interesting: we've audited everything that we did to facilitate the demonstrations, the back study, and all the things we did. And we found a couple of interesting statistics: AI is 99 times faster than a human, and generates 57 times as much relevant content per unit time. We didn't know that before. And we have now done enough of this detailed deep diving that we have statistics on the size of the response, the size of the query, the time of the query drafting, and the time it took the person to create it, measuring typing speeds, measuring thinking speeds. And we got a lot of information toward quantifying just how valuable it is. We also found out something that complements the work of J.K., who is really an innovator in multi-agent AI; I still want to sit at his feet and understand that in a deeper way. We found that for one problem we were trying to solve for the Doppler demo, we had what was the equivalent of a 10-megabyte prompt that was given to Claude 200k. And we figured out that some of the prompt wasn't read, because it was in an image format. So we've now converted all those images into text. We did that using automation that reads equations off an image and produces the equivalent Microsoft Word text, with all the subscripts, matrix notation, product forms, summation forms. We've now got that automated, so we don't have to manually enter those equations; we can just provide an image of equations from a paper and automatically get them in text form, which is much more amenable to simulation. When we did that and rewrote our prompt, it dropped from 10 megabytes to half a megabyte. And moreover, it's pure text now, which makes it much more likely that more of it will be utilized in producing the answer that we got. So while J. has innovated in this parallelism of multi-agent prompting, we like to feel that we are innovating in the monolithic, longish prompt that fully primes the AI to get exactly on the page that we're on for demonstrating a given phenomenon, in this case the Doppler effect. Okay. Yeah. I don't want to put pressure on Read.
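
A sketch of the kind of bookkeeping V.W. describes, with placeholder numbers rather than the group's actual measurements:

```python
# Log (characters produced, seconds taken) for the human and the AI on
# the same task, then compute the leverage ratios. Numbers are made up.
def chars_per_second(chars: int, seconds: float) -> float:
    return chars / seconds

human = {"chars": 1_200, "seconds": 3_600}  # an hour of human drafting
ai = {"chars": 18_000, "seconds": 540}      # nine minutes of AI generation

speedup = human["seconds"] / ai["seconds"]
content_ratio = chars_per_second(**ai) / chars_per_second(**human)

print(f"AI finished {speedup:.1f}x faster")
print(f"AI produced {content_ratio:.1f}x as much content per unit time")
```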

33:36 - D.B. 
I mean, I'm not giving a grade on this or anything. So if you can't do it next week, just do it the week after or something like that. Oh, that might work better.

33:47 - V.W. 
We'll see. Okay.

33:48 - D.B. 
I just know that people here would love to see a real-time demo showing how AI can be used.

33:55 - Multiple Speakers 
Yeah, we all love this stuff.

33:57 - V.W. 
It's so fascinating to do these real-time problem solutions, and also have people think about how they would use it, turn around and show what they did, and maybe a twist on the same problem. For example, multi-agent versus giant sequential prompt.

34:13 - D.B. 
I thought that the Doppler effect animation was particularly of interest because it was simple enough that people would understand it. I think a lot of people are much more hazy about the deep aspects of antenna projection of radio waves. But the Doppler effect, everybody knows something about how it works, but the animation was really quite nice.

34:40 - V.W. 
Well, one thing we've noticed is that the antenna gain patterns look like orbitals in quantum mechanics. And so that's been kind of revelatory, posing a link that looks promising. So I wouldn't dismiss those other things completely. But anyway, whatever the case, doing the postmortem on how they were constructed has proven pretty useful, because now we can make more analytic and quantifiable statements about the degree of leverage that AI is producing in terms of time and

35:14 - D.B. 
in terms of that level of effort. Okay. One, this is G. here.

35:19 - Multiple Speakers 
I think even I was really fascinated and impressed by the demo last week by Read. Will it be possible for you to share the prompts that you have created?

35:29 - V.W. 
That is what we are working on. We're not only wanting to share the prompts, we want to share the meta information that led to that being the prompt we constructed. Because it's that backstory which is most enabling for a person to take their problem and put it in the AI context.

35:48 - Multiple Speakers 
And it's got the little tedium that you have to do a little back work, but it's got the big windfall that once you own that in your head, you can then say, well, I think I would do it this way. And you go off with your own idea, which is exactly the way to get good results.

36:07 - D.B. 
Fantastic. Thank you so much. Okay. There was an event recently, in the past week, about cybersecurity awareness. Anyway, J. was there and was kindly willing to tell us what it was. Yeah. Are you guys able to see me? Or do I need to swap my camera?

36:27 - J.K. 
Yes, you look great. Okay, thank you. Also, I apologize, earlier I was talking to someone and didn't realize my mic was muted, so hopefully that's not in the transcript. But yeah, I went to this cybersecurity event on campus. It was really cool. The CIO of UALR, Dr., I know I'm going to butcher his last name, but Ergdon. No, no, it was Dr.

36:54 - D.B. 
Ergdon. I'll double check that.

36:56 - J.K. 
He only said his name at the beginning of the presentation, so I'm sorry that I am butchering his name right now. But it was a really good talk about generative AI. And I had shown up because I have been losing sleep. I know we talk about this in our group, where we get to a point in the meeting where we talk about how powerful it is, and then we talk about how concerned we are about how it can be applied. One of the things that I've been really concerned about: I've been working on a curriculum for what I call cyborg thinking. It's kind of what we're talking about here: how do you give people the tools to view AI as a cognitive extension, something where you have a skill and it complements those skills. And the curriculum is coming together really well. It teaches everything that I know how to do as far as multi-agent work and prompting. But the issue I've run into, and I wrote a paper, just a really short white paper, and I'm happy to share that with the group, is that I've grown concerned about two different scenarios as I've been thinking about people using these techniques, using multi-agent setups to boost efficiency. The first of those is just: I can't code. I don't write code, but I've put multiple AIs in the same chat, told some of them they were developers and some of them they were product managers, and it can write better Python in five minutes than some people can in a week. People have been doing this for a while. And so my concern now is that if anyone can put a group of experts into a chat and then either produce a deliverable or plan strategically, it raises the concern of: how do you deal with bad actors? It's concerning that a person with average skills who knows how to use these AI tools is capable of basically doing what would take a team. And on that front, I mean, there are lots of positive applications; we can do rapid disaster planning, things like that. But I'm growing more concerned about how we address a lone wolf being able to act as a pack, basically. There's that. And then there's also just the problem that if you use a multi-agent team to optimize a flawed system, all those flaws scale too. If you have a multi-agent team working at a poultry plant, and every year you lose three people in the machinery, and you scale up that operation without addressing the underlying flaws, you get a 300% more efficient poultry plant, but you increase the number of people in the machines to nine. So there are a lot of things that I'm realizing as we explore multi-agent systems, as we see these really drastic productivity gains. The paper that I wrote specifically addresses cybersecurity, and it's this idea that we've secured the internet to be safe from things as smart as people, but I'm starting to worry about really, really powerful multi-agent teams being able to be leveraged in ways that our current systems aren't designed for. So those are just things on my mind, and I go into greater depth in the paper. But it's this conversation that I think we need to have: we marvel at the efficiency of AI, but we have to be really careful when we think about, well, if everyone has an assistant on demand that we can prompt to have any body of knowledge, that's terrifying. I mean, on some level, some percentage of users are going to be able to multiply themselves in ways that are really unsettling. 
And then there's also just the fact that if you optimize a flawed system for growth without addressing some of the other problems first, you scale up problems to where it's just unmaintainable. You basically kill the host by making something more productive that has these drawbacks.
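
A minimal sketch of the multi-agent pattern J.K. describes: several role-prompted model instances sharing one running transcript. It assumes the openai client; the roles, model name, task, and number of rounds are all illustrative:

```python
# Several role-prompted agents take turns over a shared transcript, so
# each one sees and reacts to all prior turns. Everything here is a
# placeholder: roles, model, task, and number of rounds.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

ROLES = {
    "product manager": "You are a product manager. Critique scope and requirements.",
    "developer": "You are a senior Python developer. Propose and refine code.",
    "reviewer": "You are a code reviewer. Point out bugs, risks, and edge cases.",
}

transcript = "Task: write a script that deduplicates a CSV by email address."

for _ in range(2):  # a couple of rounds of discussion
    for name, persona in ROLES.items():
        reply = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": persona},
                {"role": "user", "content": transcript},
            ],
        )
        transcript += f"\n\n[{name}] {reply.choices[0].message.content}"

print(transcript)
```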

41:24 - Y.I. 
So I, I go ahead.

41:25 - J.K. 
No, sorry. My daughter was trying to tell me something.

41:29 - Y.I. 
I was not on mute. Yeah, no problem.

41:32 - J.K. 
I'll ask my question later. I have questions.

41:35 - V.W. 
J., we have a precedent for dealing with the exact problem that you described. And it's fascinating to me, because I wrote a blog about it a couple of years ago. I was concerned about school shootings; I was concerned about the availability of weapons whose ability to harm far outscaled the need for a single person, perhaps, to defend themselves, unless they were being attacked by a mob or something. And in the process of writing this blog, I thought, well, what if everybody had a hydrogen bomb in their garage? How would that affect society? How would the notion of mutually assured destruction at the neighborhood, community, state, and national levels affect how we enacted and thought about gun policy? So I would argue that the problem you just described is identical to gun policy and Second Amendment rights, even though, personally, I don't like quarreling about Second Amendment rights. But I do like thinking about school shootings and who they might negatively impact, because I feel like that needs some careful thinking. So anyway, the risk that you define falls into two categories: the number of potential victims, and the impact on persons and property at large. So then you have to compute the probability that a given exploit that somebody cooks up in this pack of wolves, as a single actor, will exceed the value of that technology to society. And so you could ask the AI: is it worth it to us to have you helping us, given that this could happen? You could pose this as a thought experiment to the AI and see what the AI says. And then you have to worry about: does the AI have a conflict of interest? Because you used a viral analogy, in which, well, humans start to look like food to me, so what can I do to answer their question properly?

43:30 - J.K. 
I'm just riffing. Yeah. Well, I think this gets to just bigger questions. I mean, we talk about God-tier AI within the next couple of years. And I see these things, and, expecting some God-tier AI, I mean, the more powerful you build something, the harder it is to control. And so I think a lot about these super-advanced AIs in the context of: it's like ants building a person to help them dig tunnels, and planning to be able to bribe the human with food, despite a magnitude greater intelligence. And I mean, the fact that an AI is going to outlive us, this kind of God-tier artificial general intelligence, it's laughable to me in a lot of ways.

44:23 - V.W. 
So the AI is going to invent chocolate covered ants. Yeah.

44:27 - Multiple Speakers 
We just get to the point where it's just like, it's unnerving to build something smarter than us.

44:35 - J.K. 
That's something that I've come back to in a lot of conversations with people. And I have no simple answers. In the paper I wrote, I discuss some adaptive AI cybersecurity measures. But the problem is, it's just very unsettling, the idea that a person can write a prompt that kind of unpacks itself. You just tell it: do this task until you're successful; change your strategy every time you're thwarted. How do you counter those kinds of things? And so the paper itself is just kind of a jumping-off point to say, we need to be thinking about this. We marvel at the optimization of processes, and I've got a healthy respect for what multiple AIs can do. I mean, I've seen it time and time again: as I'm running these multi-agent simulations, they're working together, and you see emergence, emergent behaviors you hadn't anticipated, as they're interacting and things like that. And so I put that paper together, and I attended that talk, because I feel like I've seen that there aren't many things a team of AIs can't do to assist a user. And there are just some users that we don't want to assist. And so that's just kind of an ethical quandary. I mean, we've gotten to the point where Anthropic now has an API available that allows AI to interact with the browser, like type text, interact with the web page, and stuff like that. We're rapidly approaching a point where a single person can get a lot of things done. And how do, I mean, how do our current watchdog systems approach a person who is capable of acting as entire teams of people at

46:56 - Y.I. 
scale whenever they leverage this tech?

46:59 - Multiple Speakers 
I got one. Working just real quick with the defense department.

47:03 - V.W. 
So if we take your argument in the context of the shooter and the AI kind of getting the upper hand, and now we're going to unleash that to certain privileged defense departments, that's going to tend to create concentrations of power in the hands of relatively few ungovernable people.

47:20 - J.K. 
Yeah. I mean, I also just think about how the things that we produce as weapons often have a habit of sticking around long after a conflict. There's this kind of dystopian idea of: we create an AI that's capable of locking down a certain perimeter of space against intruders. And I mean, it's the same thing with minefields, where humans bury these things in order to secure territory. What happens when parts of the world become uninhabitable, just because they're inhabited by AI designed to keep people out of them? So, I mean, AI applications in war, that stuff kind of makes me sick to my stomach sometimes. But yeah, it's odd that we're having to have these discussions so quickly after this thaw of the AI winter. It's just, I mean- It's worse than you think, J.

48:19 - E.G. 
Are you familiar with the neuro rat?

48:21 - J.K. 
That sounds familiar, but could you elaborate?

48:26 - E.G. 
Russia was able to install a neural interface into a rat, plugging it into their AI system. Look up "neuro rat"; I get a feed on scary changes now. And the argument is, it is now the smartest rat in the world.

48:57 - J.K. 
I want you to think about this.

49:00 - Multiple Speakers 
Black Plague 2.0, rats strike back.

49:02 - J.K. 
Yeah, I mean, the craziest thing to me is, we talk about things like the singularity and brain chips and things like that, but you don't need to be hardwired into any of this in order to leverage the thinking. That's one of the things that I've come back to as I was working on this curriculum: once you develop the mindset of, AI is an extension of me, all I need to do is convey problems and have it come up with solutions and things like that. It really is a different level of thinking that's possible if you offload a lot of it to AI. And again, if I need a tool done in a day, I can have AI write the code to automate it. It's the agility that comes with viewing AI as a collaborator, knowing how to break problems down and then distribute them to different members of a team. It's startling. I'm an optimist, an AI optimist. I think there are a lot of places within healthcare and education and things like that where we will see drastic improvements for society. But it scares me. It really scares the hell out of me that if you have an agenda that... And I mean, going back to the Second Amendment thing, I just think about the line from The Dark Knight: some people just want to watch the world burn. There are lots of people for whom the goal that they would optimize with AI is antisocial. And it's going to get to the point of: how do we identify when people are working with teams of AIs to accomplish things? We get into kind of an arms race, where we really want the people who keep us safe to have access to these tools, because otherwise, I mean, to borrow the Second Amendment analogy, the tech is out there to convert a lot of semi-automatic handguns into fully automatic, and now the police have to deal with people who have way more firepower than before they'd put these little pieces of technology to work. So those are the things that are really on my mind right now. As, again, we see that small groups of people are capable of large impact, there's going to be a magnifying effect, where if every person is capable of 10x productivity, there are going to be some people that we wish were less productive.

52:01 - V.W. 
So, just as a note: I like to do self-reflection, so I can go back and audit what we've talked about and think about the ramifications long-term. I've noticed that at about 4:39 every week, for the past five or six weeks, we've had scary time, which is where we start having today's scary AI thoughts. I'm just seeing that as a pattern, just as a note. I think it's healthy and it's good, but, you know, I think there's a balance between fear and forward progress. And I think the future belongs to the courageous, in either case.

52:41 - D.D. 
Yeah, I'm, you know, it's similar to, go ahead.

52:48 - Y.I. 
Similar to, you know, the analogy that we learned. I've lost him.

52:57 - D.D. 
We lost your feed, Y. We can't hear you anymore.

53:05 - R.S. 
It looks like he's driving.

53:07 - V.W. 
He's driving through a shadow right now, probably.

53:10 - D.D. 
I was just going to say, anytime you get bad people, you know, with tools, they do bad things. Yeah.

53:17 - V.W. 
There could be a self-extinguishing side to this. My wife has noted that a lot of gang warfare occurs at the seams of cities, where there are two different groups with two different competing interests, but sometimes, when these forces go to war, they go to war with themselves, resulting in fewer of the bad actors. I'm not saying that as a necessarily positive or negative thing. I'm just saying it seems to be a phenomenon that sometimes bad acting is self-extinguishing. For example, the virus writers that get viruses on their own computers that destroy all their carefully written, handcrafted viral work. And they deserve it.

54:01 - D.B. 
Yeah, completely. Okay. We're kind of out of time. So next time, if we have demos or anything, we'll do that. If not, we'll continue with the chapter five video. Yeah.

54:18 - V.W. 
Realistically, I don't think we're going to be able to do the kind of job we want to do for this group next week, but I think the week after is definitely within the realm of possibility. Just for planning purposes.

54:32 - D.B. 
Okay. Then we'll work through the chapter five video next time. We got partway through it earlier. We might want to redo it.

54:39 - Multiple Speakers 
I don't know. It's been so long since we did it. Start from the beginning. I'll see. I'll think about it.

54:45 - V.W. 
And I put up a video link to Rob Miles' stop button problem, which was echoing some of the stuff J. was saying. If you want to watch it, it's short and sweet and very interesting. And it was done in 2017, and completely anticipated where we are right this moment.

55:04 - Unidentified Speaker 
All right.

55:06 - I.U. 
Well, thanks, everyone.

55:09 - Unidentified Speaker 
We'll see you next time. Bye, guys. Bye. See you. Thank you. Bye-bye. Hello?

55:25 - Y.I. 
Hello. Sorry, my phone froze and I did not know what to do. Oh, did everybody leave?

55:33 - D.B. 
Yeah, we're kind of done. Okay, all right. It was a good topic.

55:38 - Y.I. 
And I started speaking, and the temperature of my phone went very high. Oh, and I just could not do anything.

55:48 - D.B. 
And then it came back.

55:50 - Y.I. 
And then I could not get in to mute myself properly.

55:56 - Multiple Speakers 
Well, we'll see you next time.

56:02 - Unidentified Speaker 
Thank you. Bye. Hey, V.
