Friday, April 11, 2025

4/11/25: General discussion

 Artificial Intelligence Study Group

Welcome! We meet from 4:00-4:45 p.m. Central Time. Anyone can join. Feel free to attend any or all sessions, or ask to be removed from the invite list; we have no wish to send unneeded emails, of which we all certainly get too many.
Contacts: jdberleant@ualr.edu and mgmilanova@ualr.edu

Agenda & Minutes  (158th meeting, April 11, 2025)

Table of Contents
* Agenda and minutes
* Appendix 1: Syllabus of new proposed 4000/5000 level applied AI course
* Appendix 2: Transcript (when available)

Agenda and Minutes
  • Announcements, updates, questions, presentations, etc. as time allows
    • Today: viewing and discussion.
    • Fri. April 18 at 3:00 p.m. (an hour earlier than our usual meeting time!) GS PhD defense, Optimizing Small AI Models for Biomedical Tasks Through Efficient Knowledge Transfer from Large Domain Models.
    • Th April 24, 2-4 p.m. AI TED talks in the EIT Auditorium. Free to attend. (Want to present? Form to sign up.)
    • Fri. April 25: YP will informally present his draft AI course outline and welcomes comment. See Appendix 1 below.
    • TE is in the informal campus faculty AI discussion group. SL: "I've been asked to lead the DCSTEM College AI Ad Hoc Committee. ... We’ll discuss AI’s role in our curriculum, how to integrate AI literacy into courses, and strategies for guiding students on responsible AI use." 
      • The group met last Wednesday and meets every two weeks.
    • Want to fill out a survey? From an email:

      "On behalf of the AI Integration Team, I would like to invite you to participate in an AI Campus Climate Survey. The team, led by Curriculum and Special Projects Coordinator Nathan Holloway, is seeking information about the campus community's experiences, perceptions, and expectations regarding the integration of artificial intelligence (AI). Insights from this survey will inform the development of implementation strategies, programming, and policies to effectively integrate AI across academic, administrative, and student support functions.

      Survey Link: https://ualr.qualtrics.com/jfe/form/SV_e5up1YI8n1X73Ku

      Deadline: Friday, May 2nd

      Thank you for helping shape the future of UA Little Rock!"

  • We did the Chapter 6 video, https://www.youtube.com/watch?v=eMlx5fFNoYc, up to time 13:08. We can start there next time.
  • Schedule back burner "when possible" items:
    • Anyone read an article recently they can tell us about?
    • If anyone else has a project they would like to help supervise, let me know.
    • (2/14/25) An ad hoc group, organized by ES, is forming on campus for people to discuss AI and the teaching of diverse subjects. It would be interesting to hear from someone in that group at some point to see what people are thinking and doing regarding AIs and their teaching activities.
    • The campus has assigned a group to participate in the AAC&U AI Institute's activity "AI Pedagogy in the Curriculum." IU is on it and may be able to provide updates now and then.
  • Here is the latest on future readings and viewings
    • https://transformer-circuits.pub/2025/attribution-graphs/biology.html#dives-refusals 
    • https://transformer-circuits.pub/2025/attribution-graphs/methods.html
      (Biology of Large Language Models)
    • We can work through chapter 7: https://www.youtube.com/watch?v=9-Jl0dxWQs8
    • https://www.forbes.com/sites/robtoews/2024/12/22/10-ai-predictions-for-2025/
    • Prompt engineering course:
      https://apps.cognitiveclass.ai/learning/course/course-v1:IBMSkillsNetwork+AI0117EN+v1/home
    • https://arxiv.org/pdf/2001.08361

Appendix 1: New proposed 4000/5000 level applied AI course

In today's AI-driven world, professionals across all levels—graduate, undergraduate, and PhD students—must develop a comprehensive understanding of AI technologies, business applications, and governance frameworks to remain competitive. The Applied AI for Functional Leaders course is designed to bridge the gap between AI innovation and responsible implementation, equipping students with technical skills in AI development, strategic business insights, and expertise in governance, compliance, and risk management.

 

With industries increasingly relying on AI for decision-making, automation, and innovation, graduates with AI proficiency are in high demand across finance, healthcare, retail, cybersecurity, and beyond. This course offers hands-on training with real-world AI tools (Azure AI, ChatGPT, LangChain, TensorFlow), enabling students to develop AI solutions while understanding the ethical and regulatory landscape (NIST AI Risk Framework, EU AI Act).

 

Why This Course Matters for Students:

• Future-Proof Career Skills – Gain expertise in AI, ML, and Generative AI to stay relevant in a rapidly evolving job market.
• Business & Strategy Integration – Learn how to apply AI for business growth, decision-making, and competitive advantage.
• Governance & Ethics – Understand AI regulations, ethical AI implementation, and risk management frameworks.
• Hands-on Experience – Work on real-world AI projects using top industry tools (Azure AI, ChatGPT, Python, LangChain).

Why UALR Should Adopt This Course Now:

• Industry Demand – AI-skilled professionals are a necessity across sectors, and universities must adapt their curricula.
• Cutting-Edge Curriculum – A balanced mix of technology, business strategy, and governance makes this course unique.
• Reputation & Enrollment Growth – Offering a governance-focused AI course positions UALR as a leader in AI education.
• Cross-Disciplinary Impact – AI knowledge benefits students in business, healthcare, finance, cybersecurity, and STEM fields.

By implementing this course, UALR can produce graduates ready to lead in the AI era, making them highly sought after by top employers while ensuring AI is developed and used responsibly and ethically in business and society.


Applied AI (6 + 8 Weeks Course, 2 Hours/Week)

A 5-month Applied Artificial Intelligence course outline tailored for techno-functional, functional, or technical leaders, integrating technical foundations, business use cases, and governance frameworks.

 

This can be split into a 6-week certification plus an additional 8-week for-credit course with an actual use case.

I have also leveraged insights from leading universities such as Purdue’s Applied Generative AI Specialization and UT Austin’s AI & ML Executive Program.

Balance: 1/3 Technology | 1/3 Business Use Cases | 1/3 Governance, Compliance & AI Resistance

Module 1: Foundations of AI and Business Alignment (Weeks 1-4)

• Technology: AI fundamentals, Machine Learning, Deep Learning
• Business: Industry Use Cases, AI for Competitive Advantage
• Governance: AI Frameworks, Risk Management, Compliance

• Week 1: Introduction to AI for Business and Leadership
  • Overview of AI capabilities (ML, DL, Generative AI)
  • Business impact: AI-driven innovation in finance, healthcare, and retail
  • Introduction to AI governance frameworks (NIST, EU AI Act)
• Week 2: AI Lifecycle and Implementation Strategy
  • AI model development, deployment, and monitoring
  • Case study: AI adoption in enterprise settings
  • AI governance structures and risk mitigation strategies
• Week 3: Key AI Technologies and Tools
  • Supervised vs. Unsupervised Learning
  • Python, Jupyter Notebooks, and cloud-based AI tools (Azure AI Studio, AWS SageMaker)
  • Governance focus: AI compliance and regulatory challenges
• Week 4: AI for Business Growth and Market Leadership
  • AI-driven automation and decision-making
  • Case study: AI-powered business analysis and forecasting
  • Compliance focus: Ethical AI and responsible AI adoption





 

Module 2 (Weeks 5-8)

• Technology: NLP, Computer Vision, Reinforcement Learning
• Business: AI in business functions – Marketing, HR, Finance
• Governance: Bias Mitigation, Explainability, AI Trust

• Week 5: Natural Language Processing (NLP) & AI in Customer Experience
  • Sentiment analysis, text classification, and chatbots
  • Business case: AI in customer service (chatbots, virtual assistants)
  • Governance focus: Privacy and data security concerns (GDPR, CCPA)
• Week 6: AI for Operational Efficiency
  • Business use cases: AI for fraud detection, surveillance, manufacturing automation
  • Compliance focus: AI security and adversarial attacks
• Week 7: Reinforcement Learning & AI in Decision-Making
  • Autonomous systems, robotics, and self-learning models
  • Business case: AI-driven investment strategies and risk assessment
  • Resistance focus: Overcoming corporate fear of AI adoption
• Week 8: AI in Marketing, HR, and Business Optimization
  • AI-driven personalization, recommendation engines
  • Business case: AI in recruitment, talent management
  • Compliance focus: AI bias mitigation and fairness in hiring




 

Module 3: AI Governance, Compliance & Ethics (Weeks 9-12)

• Technology: Secure AI Systems, Explainability
• Business: Regulatory Compliance, AI Risk Management
• Governance: Responsible AI, Transparency, Algorithm Audits

• Week 9: AI Governance Frameworks & Global Regulations
  • NIST AI Risk Management, ISO/IEC 23894, EU AI Act
  • Industry-specific regulations (HIPAA for healthcare AI, SEC for AI in finance)
  • AI governance tools (audit logs, explainability reports)
• Week 10: AI Explainability & Bias Management
  • Interpretable AI techniques
  • Case study: Bias in AI hiring systems and credit risk models
  • Business responsibility in AI model transparency
• Week 11: AI Security, Privacy, and Risk Management
  • Secure AI model deployment strategies
  • Governance: AI trust frameworks (e.g., IBM AI Fairness 360)
  • Case study: Managing AI risks in cloud-based solutions
• Week 12: AI Resistance and Corporate Change Management
  • Strategies for AI adoption in enterprises
  • Business case: AI integration in legacy systems
  • Ethics: Impact of AI on jobs, social responsibility, and legal liabilities




 

Module 4: AI Strategy, Implementation, and Future Trends (Weeks 13-16)

• Technology: AI Product Development
• Business: AI Implementation, Enterprise AI Strategy
• Governance: AI Regulatory Compliance & Future Legislation

• Week 13: Overview of AI Deployment and Scalability
  • Deploying AI models on the cloud (Azure AI Studio, AWS, GCP)
  • Business case: Scaling AI solutions in enterprise environments
  • Compliance: AI model monitoring, drift detection
• Week 14: AI for Competitive Advantage & Industry-Specific Applications
  • AI in industry, e.g., supply chain, autonomous vehicles, healthcare diagnostics
  • Case study, e.g., AI-driven drug discovery and logistics optimization
  • Compliance: AI liability and regulatory accountability
• Week 15: AI Governance and Responsible Innovation
  • Innovating with AI, e.g., in financial services (algorithmic trading, fraud detection)
  • Ethics: Ensuring fairness and avoiding discrimination in AI models
  • Risk assessment frameworks for enterprise AI adoption
• Week 16: The Future of AI: Trends, Risks & Opportunities
  • Generative AI (DALL-E, ChatGPT, LangChain applications)
  • AI and Web3, decentralized AI governance
  • Case study: AI-powered governance in blockchain ecosystems




 

Module 5: Capstone Project & Final Presentations (Weeks 17-20; the capstone process starts in Weeks 7-8)

• Technology: Hands-on AI Application Development
• Business: AI Use Case in Industry
• Governance: Compliance Strategy & Ethical AI

• Weeks 17-19: AI Capstone Project
  • Develop an AI-driven business solution with governance compliance
  • AI application areas: business analytics, customer engagement, fraud detection
  • Report: governance strategy and AI risk mitigation plan
• Week 20: Final Project Presentations & Certification
  • Peer review and feedback
  • Industry guest panel discussion on AI’s role in future business strategies
  • Course completion certification

Tools & Technologies Covered:

• AI Development: Python, TensorFlow, PyTorch, Scikit-learn, GenAI models
• Cloud AI Platforms: Azure AI Studio, AWS AI Services, GCP Vertex AI
• NLP & Generative AI: ChatGPT, DALL-E, LangChain, BERT, Stable Diffusion
• AI Governance & Risk: SHAP, LIME, AI fairness toolkits
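The explainability toolkits listed above (SHAP, LIME) share a common idea: probe a model locally and attribute its output to each input feature. As a rough illustration of that idea only, not of either library's actual API, here is a minimal pure-Python sketch; the model, its feature names, and its weights are all hypothetical stand-ins for illustration.

```python
# Minimal sketch of the perturbation idea behind explainability tools
# such as LIME/SHAP: nudge each input feature slightly and observe how
# the model's output changes. The "model" below is a hypothetical
# linear credit-risk score, not a trained classifier.

def model(features):
    # Hypothetical score: weights are made up for illustration only.
    w = {"income": 0.5, "debt": -0.8, "age": 0.1}
    return sum(w[k] * v for k, v in features.items())

def local_attribution(model, point, eps=1e-4):
    """Finite-difference sensitivity of the model at one input point."""
    base = model(point)
    attributions = {}
    for name in point:
        perturbed = dict(point)
        perturbed[name] += eps  # nudge one feature at a time
        attributions[name] = (model(perturbed) - base) / eps
    return attributions

applicant = {"income": 3.0, "debt": 1.5, "age": 0.4}
attrib = local_attribution(model, applicant)
# For a linear model these sensitivities recover the weights:
# debt pushes the score down, income pushes it up.
```

Real toolkits add sampling, local surrogate models, and game-theoretic weighting on top of this basic probe-and-attribute loop, but the classroom intuition is the same.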

 

Appendix 2: Transcript

AI Discussion Group
Fri, Apr 11, 2025

1:19 - Unidentified Speaker
Hi, everyone.

1:22 - D. B.
So let's give folks another minute or so.

1:35 - Unidentified Speaker
Hello, Van. Hey, hello, hello. Yeah, so.

1:47 - D. B.
How's it going?

1:54 - M. M.
I missed one meeting.

2:02 - Unidentified Speaker
Busy, busy. Okay. Well.

2:14 - Unidentified Speaker
I guess we can go ahead. OK, so some announcements. Today, there's no special program or anything. So we're going to finish viewing that video and discussing it.

2:32 - D. B.
And if we don't even have time to go on to something else, we'll see. Next Friday. Not this Friday, but next Friday, a student is going to give his defense, and he's going to do it at three. So it may well go past an hour. So we'll just meet at three instead of four next week. So I'll have to send out an announcement about that. And it is an AI-related defense. Then the following Thursday, on April 24th, there's going to be those AI TED Talks in the auditorium of the EIT building. And so anybody can go. Encouraged to go. People want you to go. I'll probably try to catch some of them. I teach at, well, I could bake it, actually. We'll see how it goes. And if you want to present, fill out a form. It's probably too late to sign up for this one, but they're going to have another one.

3:45 - Unidentified Speaker
And that's a Thursday.

3:46 - D. B.
The next day, Friday, that's two weeks from today, Y. will present his draft AI course outline, which we can comment on. And I've got that in appendix one down below just so I don't lose it. But yeah, they've been pushing this course, which he will be kind of developing syllabus for. Yeah.

4:08 - M. M.
And it's a 4,000, 5,000 level course.

4:11 - D. B.
I don't know who's, I guess we don't have any 4,000 level students in information science, but I guess we do in computer science. So, so we'll see how that goes. It'll be an applied AI course, not a theory course.

4:29 - M. M.
Yeah, at least advertised already. So we have the topics. Yeah.

4:35 - D. B.
I would think something like that could be a general education course for freshmen to take. But I don't know that this course is going to be like that, but it could be. Anyway, T. is not here. Oh, T. Hello, T. He is here. T., are you there?

5:01 - Unidentified Speaker
Yeah.

5:01 - M. M.
Yes. Hey.

5:03 - Unidentified Speaker
Good.

5:03 - T. E.
So has this group that you've joined met at all, that you know of? Well, they actually met Wednesday. And it was interesting. One of the things that they're discussing is AI and writing. And so, I thought that was kind of an interesting topic as well. You know, I struggle because of my parents. So, Dr. S., is that right? S., S.

5:44 - Unidentified Speaker
S.

5:44 - T. E.
She, her concern is students doing research, you know, and just letting AI crank out everything, or like tech, CBT writing, writing you know she says and she has you know some students that try to do that and they were like no no we need you to so they're struggling with AI and like in writing in terms of you know academic academic style research papers and that kind of thing so I thought it was an interesting discussion.

6:25 - D. B.
Yeah, so this is, I mean, these are not technical faculty, particularly, it's just people who are, you know, students in their class are using AI, and their classes are using AI, and maybe they're using AI, and how do you, how do we make it educational?

6:45 - T. E.
That's right. Yeah, so that was the first meeting I've been able to attend so far. The next one will be in two weeks. So they meet every other week. And there was some discussion on whether they want to continue to meet throughout the summer or break until next fall.

7:09 - D. B.
I don't know that they've decided completely yet, but

7:13 - T. E.
How many people were in the call? Yeah.

7:17 - D. B.
It was a pretty good sized group.

7:20 - T. E.
maybe there was only four or five that really talked, you know, just contributed to the discussion, but there were others, you know, listening. And so they had a pretty good sized group.

7:38 - D. B.
Question.

7:38 - M. M.
Did they show some tools that, for example, Van shared with us, wonderful tools and every they create a new. Did they share some tools? Well, it's interesting.

7:52 - T. E.
One of the instructors that she struggles with this, what she does is she uses Google Docs for her assignments. And then that way it's kind of like a live document. She can tell when students update the document, you know, because they share it and she can see revisions to the document. And so that's kind of one way that she kind of mitigates the use of AI. But this is not AI.

8:20 - M. M.
Google Docs document. Well, yeah, I think that's, I mean, I don't think Google, they were using Google Docs to actually generate the content of the paper is what I'm saying.

8:33 - T. E.
She was using it to track, you know, when the students were updating the paper itself, or the contents itself. There is no reasoning.

8:43 - M. M.
Right now, they can do reasoning, and there are very interesting articles about exactly which model, what kind of reasoning, maybe V. can help me out.

8:59 - T. E.
So I remember reading some articles.

9:03 - M. M.
They talk about AI and even use AI, I mean, without actually using AI. Yeah, yeah, that's a good point. I'm not sure. You remember the articles about which model, what kind of reasoning? Theory of mind and stuff like this; there are articles right now. Mm-hmm. And then they're gonna invite, there's someone in the College of Business that does AI for, I guess, the business department.

9:43 - T. E.
I can't recall his name. I can get these notes and pass them on to you guys. I'm sorry I don't have them with me right now. She's got notes that we have access to for the group discussion. I can send those along. Yeah, there's a faculty member over, I believe, in the business school that they're going to invite to share some of the, it's like you said, Dr. Berleant, they're not, you know, they're in a different field from information science and computer science. So, you know, they're just using the tools. They don't really, you know, understand how, you know, the inner workings of it, you know, like a computer scientist. I don't want to come across as sounding...

10:37 - M. M.
But people, they don't need to be computer scientists. Yeah, right. Marketing are using constantly.

10:43 - M. M. | T. E.
Yeah.

10:44 - Unidentified Speaker
Yeah.

10:45 - M. M.
Yeah, I'll send that. I'll send that information along.

10:49 - T. E.
There's a, they have a, I think they keep it in Blackboard somewhere and I can send that and you can, you can look at all the notes and Please, please do this.

11:04 - M. M. | T. E.
Okay. Yes, yes.

11:06 - M. M.
Yesterday, I participated in several meetings, of course, about how people in marketing are using. So they show super grow, for example, tool that can do from video to the block text. So, and even I mean, the writers, if they are concerned about how correct is the information, they can learn from these tools and check. Yeah.

11:39 - T. E.
The information is correct or not, but I don't know.

11:43 - M. M.
Super Grow, I can show you.

11:46 - T. E.
I like it. Yeah. There was some discussion about, so like, as we all know, if you use ChatGPT to say, write me a literature review for these sources, the sources that ChatGPT gives may or may not be accurate, right? I mean, they may or may not exist. And so they were talking about how important, you know, that's a lot, that's one of the tells that they use when a student turns something in, if it's been AI generated or not, that if the references, they go to look up a reference and it doesn't actually exist or they can't find it.

12:31 - V. W.
That was more of a problem a year ago than it is now.

12:36 - M. M.
Exactly. It's gotten a lot better.

12:38 - V. W.
And if you front load your query with a lot of related information and quality sources, then it really filters out the bottom.

12:47 - M. M.
It's perfect. It's perfect now. I mean, what they're talking about is B. mentioned a year ago. Okay. Not even the week ago. Yeah. The rate of change is just stunning right now.

13:01 - T. E.
And one of the things, B., that they brought up was they want to be able for the students to find their voice in their writing. And I thought that was interesting because we just had that discussion last week about that.

13:18 - V. W.
That's interesting. Yeah.

13:19 - Unidentified Speaker
Yeah.

13:20 - M. M.
exactly this what I am talking about super grow. They have options to write in your style and they give examples. The guy give examples from his emails. He say, I never write like I never write, but I have my emails. The emails put the emails that 10 emails that he's doing. And the essay or blog is coming with his style, probably more limited vocabulary, but it's his style. It's his style. So this is done.

13:59 - V. W.
And the later versions of ChatGPT are, they will log all your interaction with them and begin to customize, not only according to your personal profile, which is nice to get started, but you can ask it, tell me what my themes have been over the last few weeks, and it will tell you fairly accurately. And you can say, what is something about myself that everybody else knows, but that I have no idea of? And it will tell you that too.

14:30 - M. M. | V. W.
It can be very revealing.

14:32 - M. M.
Yeah, it's more and more intelligent than people believe. It's getting crazy good.

14:36 - A. B.
Well, it's interesting too, I did just an experiment to say, hey, give me a visual of what you think I look like and other things. And it actually gave like a, like it knew around the age I was. And, you know, just based on questions that pulled in some other, you know, like, you know, a bookshelf and all this, it was really, I was kind of blown away.

14:59 - V. W.
I was like, wow, it's pretty close. That'd be cool.

15:02 - M. M.
I'd like to see that. Yeah. Yeah. This is, we have to try. Yes. So, and in reality, people can learn more, not less. There are people afraid that the students, if they are curious and adaptive, and the curiosity is something you have to develop.

15:19 - V. W.
And two years ago, we were just becoming suspect of the emergent properties, that there seemed to be things beyond predicting the next word that were taking place in our discourse with the chatbots. And now that's gone way beyond any of our original expectations. A. B. and I are doing a project, Euler Deep Dive, which is a set of number theory problems that are cataloged. And we're, instead of just solving the problem, we're saying, well, we'd like you to solve this problem. And then we would like you to generalize the problem. And then we'd like you to solve the general case that you just generated. And then we'll say, we'd like you to explore idiomatic expressions in the problem and do a certain combination of operations, like add, sub, mul, div, or int, diff, log, exp. And then it'll do those eight problems. And then we'll say, well, we'd like a problem that gives accessibility to the math that we're talking about. And we'd like that as a demonstration, then it generates that. And so in all this, it's going way beyond any initial expectation we had of emergent properties. It's now like, just sit back and, as you can see, in my case, have your hair blown off. So, uh, there you go.

16:40 - D. B.
Well, something I started doing in one of my classes is, you know, they were just handing in so much stuff that was AI generated. I said, write it by hand and upload the picture of your handwritten page. And is that just like going, like saying you can't use a car, so you better ride in on a horse?

17:00 - T. E.
That's kind of going backwards. That's kind of counterintuitive, it seems like, but yeah.

17:05 - V. W.
Yeah, the degree to which it's leveled the educational playing field is now everyone can talk to somebody who knows everything. And so all of our jobs have just been rinsed away instantly. And we're just having to every week try to find something that lets us reclaim our dignity for a day.

17:25 - D. B.
Well, I think with the handwriting thing, I think some of the students still generated by AI and transcribed by hand, which is such a terrible waste of time. I give them credit for it, but it's their own punishment that they spent all their time copying by hand instead of learning something.

17:45 - V. W.
There actually could be value in that, because if an AI generates enough text, you can lose track of what it generated. And by manually pushing it through your nervous system, by transcribing it, you're at least compelling yourself to have additional rehearsals of the information so that you can go about pretending that you'd written it.

18:07 - T. E.
So when you see that it's AI generated, Dr. B., you should have them rewrite it five times.

18:19 - M. M.
In Louisville, when I went, the main concern, I have a very good friend, R., he's very famous in the negative aspect of accepting AI. He's not rejecting, but he says that people will lose jobs. Why are you not talking about this in your talks? So he's more concerned that after three years, he even says not five, three years, five years, people will not have a job, particularly our students. And the question was, he asked me actually two years ago, the same question. What do you think that these tools cannot do? I say they are not at the physical level. Agentic AI is, right now, yes, but the physical level is not so developed. So I think personally, but I don't know. I need your opinion. What do you think? I mean, do you believe that in three years people... I think we're morphing to a changed job description rather than an elimination of jobs.

19:34 - V. W.
With everybody having access to infinite smarts, or the smartest people in the room, or the smartest people in the world, or the compilation of the smartest people in the room, this ever-escalating scope that we have, I think it's going to change society, where not knowing used to introduce delay into every manufacturing and service process. And now that delay is being reduced or rinsed away. One programmer is now doing the work of 10, or in our experiment with Read AI, 100. And that leads to things just happening that should have happened, but that took too long. So now, instead of being the person that does the work, you are the robot repair person for the robot that does the work. So everyone's job is just escalating because everybody can go to the AI and say, hey, what's wrong with the AI? What's wrong with the code? What's wrong with the robot? And it says, okay, instead of it being three days or a lost weekend or a month, you just get it fixed that afternoon, and you're good to go.

20:41 - A. B.
Yeah, no, I agree. I do think it'll speed us up in some ways. But like, I do think there will be real, real impacts to jobs. I can give an example. So right now, I have an automation team, and we build bots, right? And we have, you know, some of these currently are just kind of linear steps doing claims processing. These jobs have already been offshored. They've been offshored for 20 years, right? So these things that we took out of the United States and moved over to India because we needed to get them done at a fraction of the cost. Well, now as we advance these automation capabilities using, what, agentic AI or large action models and those sorts of things, if we have these bots that can essentially sit on our systems and learn actions, you know, through all this data, and then start to do the thing themselves, like that's going to be a complete game changer for, you know, back-office sorts of activities like claims processing or, you know, just the number of types of things that we do around that. So, yeah, I do think it'll be real transformative.

21:47 - V. W.
But in the information age, if you'll forgive me for saying this, the claims process is sort of a manual labor sort of activity where somebody's sitting in a chair at a desk on their phone, eight hours a day, processing human queries in a way that is consistent with company policy, blah, blah, blah. If a robot replaces that mundane job, that person is now freed up to do the kind of thing that is more interesting to them, also generating revenue.

22:16 - A. B. | V. W.
Right. But yeah, while like, so we don't do that in the United States, but like I said, that job is offshore in India, that we pay a fraction of the cost for. I don't know what those people do, right, after that's automated, right?

22:31 - A. B.
They might not have more creative jobs that they can fall back to.

22:35 - M. M.
So, I mean, yeah. Yeah, there is a concern, V., that not everybody is adaptive to change. This is the real concern.

22:43 - M. M. | V. W.
Yeah, there is a Luddite component to this, that when the looms come through, to weave the fabrics, we just better destroy all the looms that weave fabrics and we're left with no looms for a while.

22:58 - D. B.
So supposing we automate with AIs and robots and so on. It's going to increase economic productivity per person, and therefore it's going to increase overall economic output of the country. But the problem is distributing that new wealth. If the people who are put out of a job don't get some of it, then it's increasingly, you know, inequitably distributed, leading to... This is the San Francisco Silicon Valley gentrification problem.

23:27 - V. W.
Yet the overall economic output is greater.

23:30 - D. B.
So if it was distributed, everyone would be richer.

23:33 - V. W.
But instead, it's the pooling of resources at the top and the development of a class of haves and have-nots at a level we've never before seen. Seen, because those who are not adaptable and able to shift positions get whiplashed into oblivion.

23:52 - M. M.
This is what is his concern, and not only his. Many people talk about this concern, exactly, that some people will be behind, they cannot adapt, and universal income is coming.

24:09 - Unidentified Speaker
I don't like this.

24:12 - D. B.
The distribution of wealth is a social decision. It's not decided by the AIs or the automation. It's a social decision made by people.

24:26 - D. D.
If you make a tool that makes a person hyperproductive, then that eliminates all the people that they don't need anymore because of the productivity. There's going to be a lot of people, I think, that lose their job over the next decade that are going to have to retool and figure a new place into the market.

24:55 - T. E.
So do you think that's sector-wide? Are you talking about software development?

25:00 - D. D.
It'll be spread out.

25:02 - M. M.
Everywhere, everywhere.

25:03 - D. D.
Because the AI is an expert. And so they'll be able to you know, increased productivity. So one person can now do the work of 10, let's say like V. said, you know, so there's nine people.

25:17 - V. W.
One of V.'s students said it's archery in the morning and pottery in the evening. It's going to create a lot of free time for people to do things they've never pursued because they were so under the thumb of long hours.

25:32 - Unidentified Speaker
Yeah.

25:32 - D. D.
Now's the time to move for 32-hour work weeks.

25:36 - Unidentified Speaker
Yeah.

25:37 - V. W.
Yeah. It's time to strengthen those unions.

25:40 - D. B. | T. E.
In the early days of the country, you know, most people worked making food, growing it or farming or whatever. Now, as of a number of years ago, only 2% in this country were involved with producing food.

25:58 - D. B.
Well, what are all the others doing?

26:03 - A. B.
Right.

26:04 - D. B.
Yeah, I personally think there will be some kind of curbing of it, right?

26:13 - A. B.
So, for example, there are already lawsuits over using AI in healthcare-related use cases. There was a case where a company auto-denied certain insurance authorizations, and it resulted in a big lawsuit. And I think the government will start to put restrictions on what can and can't be automated, and where you can and can't use AI. That's a big discussion right now in healthcare: what's going to happen. And for context, our company does a lot of work implementing Medicaid through different states, and these Medicaid states are basically putting parameters in their contracts on how you can use AI, if you're using it at all. So I do think they will guardrail this stuff, and I don't think it'll be as broad-brush as we might think, just because of government intervention. I could be wrong, but...

27:19 - V. W.
Well, you know, in terms of government intervention, we've kind of slipped through the cracks. Fortunately, we've got government in chaos now, and one of our co-leaders of government is very pro-AI. So it is in our interest that some of the people who are in leadership right now are either sympathetic to AI or in some cases doing build-out of large AI data centers. But if that changed to a more Luddite perspective where people felt threatened by AI, and certainly there are people that do, then that could work against the progress that we're currently enjoying with this boom. It's like the industrial revolution on steroids for machine learning. Right. Yeah, exactly.

28:04 - A. B.
And who's to say, too, that the government won't tax those things. We're seeing all these tariffs and whatnot. The government could, in theory, try to charge businesses or whatnot that want to use those to automate jobs. I feel like the possibilities are endless.

28:22 - V. W.
Yeah, government responses to imagined threats that please a certain constituency or social interest group: they can take irrational actions that appear to appease the imagined threat but are really bad for society and people overall.

28:37 - A. B.
Like this whole thing we're talking about with tariffs, the government's trying to intervene to bring jobs back. Whether that's a good thing or not, I'll leave that discussion aside. But the government could make those sorts of decisions to try to limit how broadly companies can automate jobs or use these technologies.

28:59 - D. B.
Keep in mind that this AI is infiltrating the entire world, and there are lots of different countries with different governments that are going to make different decisions and have potentially different results. There's experimentation on the horizon, let's put it that way. Oh, I didn't mean to end the discussion.

29:24 - T. E.
Yep, you said it all. We're doomed. So I did have a couple of thoughts. I've been around a while in terms of software development, and there was a big concern many years ago when companies started outsourcing or offshoring their development work. I guess that did impact some jobs, but it didn't really last very long; those jobs eventually came back, or at least most of them, I think.

30:02 - V. W.
I think I might contend with that, because if you look at the offshoring of electronic hardware, silicon chips used to be made domestically, and then from the 70s through now it's all been Singapore, Indonesia, East Asia producing all the chips. In some cases we've lost the ability to manufacture fairly ordinary chips; we offshore that to TSMC, as in the case of GPUs. So offshoring has left us not having the ability to construct those things, but instead just being middle managers overseeing it, and that has taken place in the service sector, in the hardware sector, and, as you're mentioning, the software sector. And given D.'s talent at predicting the future with his book on doing the same, I'd be really interested to hear what he sees as the Moore's law of social deterioration that comes from offshoring all these activities. Well, I mean, the progress of technology is a different problem from the ups and downs of social evolution.

31:15 - D. B.
But since you asked, I'm a big fan of P. T.'s model of harmony and disharmony in the progress of a civilization like America. He predicted the current political situation back in around 2010. He said around 2020 would be rough, and he was right.

31:37 - V. W.
And you noted a couple years ago that in American elections, things tend to swing wide. They oscillate between fairly wide boundaries. But then other people will say, well, there is no left in America. We're just a middle that thinks it's a left and then an ultra right.

31:57 - D. B.
Well, let's not talk about politics. Another topic. You said P. T.

32:02 - A. B.
That was interesting.

32:03 - D. B.
Yeah, he's all over. The best thing to do is go to YouTube and watch some of his shorter interviews. P.? P. T., yeah. T-U-R-C-H-I-N. Okay, yeah, I can check.

32:17 - T. E.
And the other thought I had, which is more of a daydream than a thought: could I create an AI that could make money for me by doing AI things? In other words, as a software developer, could I create an AI to build software for consulting for companies, and it would be making money for me, if that makes sense. There are like five people who are in that position.

32:52 - V. W.
I believe that you could make the AI in terms of designing the algorithm, but for you to easily marshal warehouses full of GPUs over several months to do the training that you would need could be a harder thing to swing because you'd have to pitch for the funds to do that. And the first thing they would ask is, is anybody else doing that? And if the answer is yes, that's going to make it difficult to compete in that arena unless you're incredibly well infused with funding for training your AI.

33:26 - M. M.
So it's sort of a privilege. Target the product, whatever it is.

33:31 - V. W.
Yeah, target, that's a good point. Yeah, marketing.

33:34 - M. M.
Target an application of AI to specific devices.

33:38 - D. B. | V. W.
How about giving it all the stock market data and having it mine it, and then have it play the stock market for you automatically?

33:49 - M. M.
Well, there is an article about somebody predicting the next Lotto numbers. Billions of dollars just from predicting the pattern.

33:59 - V. W.
Well, certain people that have access to insider information can anticipate change, or respond extremely quickly to it, in such a way that other people, who have secondary access to late-breaking information, are not on a footing to move fast enough to generate the revenue that's possible from current sentiment. For example, you can do autocorrelation, where you examine the likely tendency of one stock to follow another, giving you leading indicators. But that doesn't account for sentiment analysis, where in one press conference everything people believed to be true is no longer true, and you get a 2000-point swing in the market. And I think this is why, even though we don't want to talk about politics, because we don't want any social toxicity to emerge, we need to be aware of the influence of politics in terms of its ability to limit our opportunity landscape for doing what we think we should do technically with this new ability we have.
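[Editor's note: the lead-lag "one stock follows another" idea V. describes can be sketched in a few lines. The series, the two-day lag, and the `lagged_correlation` helper below are all invented for illustration; nothing here reflects an actual analysis discussed in the meeting.]

```python
import numpy as np

def lagged_correlation(leader, follower, max_lag=5):
    """Correlate the follower series against the leader shifted by each lag."""
    results = {}
    for lag in range(1, max_lag + 1):
        # follower today vs. leader `lag` steps earlier
        a = leader[:-lag]
        b = follower[lag:]
        results[lag] = float(np.corrcoef(a, b)[0, 1])
    return results

# Toy example: follower mimics leader with a 2-day delay plus noise.
rng = np.random.default_rng(0)
leader = rng.normal(size=500)
follower = np.roll(leader, 2) + rng.normal(scale=0.1, size=500)

corrs = lagged_correlation(leader, follower, max_lag=4)
best_lag = max(corrs, key=corrs.get)  # the lag with the strongest correlation
```

A high correlation at one particular lag is the "leading indicator" pattern; as V. notes, this kind of statistical signal says nothing about sudden sentiment shifts.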

35:04 - Unidentified Speaker
Okay, what else?

35:06 - D. B.
If you want to fill out a form about campus climate in terms of AI, here's the link.

35:21 - D. B.
Anyone doing the survey? Comments?

35:23 - A. B.
Just curious, does the university have a formal stance on AI use, rights and wrongs, and that sort of stuff? I don't know that I've ever seen any of that. Have they created some guidelines?

35:42 - M. M.
I think they're working on this, and maybe they've published it already.

35:48 - A. B.
Well, I guess what I'm saying is, as professors, when you're having your students write and such, has a policy been developed? How to tackle it? Are we for or against it? It seems like it might just be at your discretion. I think the latter. I think it's still up to people's discretion.

36:14 - D. B.
People are talking, but no, there aren't many conclusions, you know, at the university level. Yeah.

36:20 - A. B.
Yeah, it's interesting. I read some article about a PhD student somewhere that got kicked out of his program, or something to that effect. I was trying to pull it up. I think it generated a lawsuit, because he was saying he didn't do it. Interesting story. And what is this form, D.?

36:46 - M. M.
Oh, I don't know.

36:48 - D. B.
It's the AI integration team of the university, and they want to do a campus climate survey. So there's sort of an answer to A.: there is a group on campus called the AI integration team. I don't know what they do other than send out surveys and ask people around campus.

37:13 - D. D.
The use of AI in academics is really just an ethics question, you know: are you going to try to take credit for something the AI did, or are you going to take credit for what you did?

37:29 - V. W.
It's also a Turing test thing: if you give someone a task and they complete the task, provided the choices were moral and sustainable, do you care what tools they employed in order to accomplish the task? And once they've accomplished the task, and they initiated the process by which the task was completed, why shouldn't they avail themselves of the most leverage possible, especially if doing that continues to improve their ability to do so in other work they might have to do? What about a calculator?

38:06 - D. D. | M. M.
and teaching math.

38:07 - D. D.
If you can teach a sixth grader how to use a calculator, does that mean they're forever lost on how to do math with a pencil? Well, look at it another way.

38:19 - V. W.
If, as a result of using the calculator, they're doing more calculations than they would have ever done otherwise, correctly, are they not learning, by the process of using the calculator, what the correct answer is? And let me escalate this slightly. Say you have a table of integrals, and you can teach a student to memorize a table of integrals. I suppose that's a normal enterprise, and we've all done that. But then being able to say, okay, you have to integrate this volume or this moment of inertia to stabilize the spacecraft, and I can now turn it around in seconds rather than hours or days: why shouldn't we be training people to do that? Because then the landscape of mathematics that they see is much greater, because now they know the correct answer to the integral, and they can see the plot of the original function and the integrated function, and they can work it.

39:14 - D. D.
They learn. The process of learning, though, that's the part where you have to be able to do it without the calculator. Then you know how to do it.

39:24 - V. W.
Well, there's integrals that are on the border of my ability to do.

39:30 - D. D. | V. W.
And so I would immediately say, what was the answer to that?

39:34 - V. W.
What was the set of steps that took place to accomplish that computation? In the process of accomplishing the job and getting it done correctly and verifiably, I now have something that is at or past the edge of my ability to produce on short notice. So why is it even an issue for discussion? I take your point, though, that one needs to know how to ask the right question, how to specify that it is an integral, and how to recognize those things. That's pretty important. But I would also argue that you learn that by using the calculator, or the symbolic manipulation program, or the symbolic geometry program, like I did learning calculus with Geometry Expressions. Think about the seconds of decision making we spend forcing someone to do rote work versus using a tool. We're tool users as much as anything. I mean, we can all dig in the dirt with a stick to grow our crops, and we could wash our food and our clothes in the Arkansas River, and then all our time would be spent doing those things. But somehow we've abstracted those daily mundane things away to the point where we can operate at a more erudite level.
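[Editor's note: the workflow V. describes, letting a symbolic tool produce an integral and then verifying it by differentiating back, can be sketched with SymPy. The particular integrand is the editor's choice, not one discussed in the meeting.]

```python
import sympy as sp

x = sp.symbols('x')

# An integral at the edge of comfortable hand computation.
f = x**2 * sp.exp(-x)

# The tool produces the antiderivative and a definite integral instantly.
F = sp.integrate(f, x)                  # antiderivative
area = sp.integrate(f, (x, 0, sp.oo))   # definite integral over [0, oo); equals 2

# The student can verify the tool's answer by differentiating back:
check = sp.simplify(sp.diff(F, x) - f)  # 0 if F is correct
```

The verification step is the point: the learner still has to recognize the problem as an integral and confirm the answer, which is the skill D. argues must survive the tool.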

40:59 - M. M.
Exactly. That's a good point. Good analogy. All right.

41:03 - D. B.
Well, I'd like to move on to our next regular item: the master's degree projects, currently one project involving using AI to write a book. I'm wondering if E. could update us on the latest on that project? Of course.

41:22 - E. T.
Actually, I've enjoyed the conversations going on.

41:25 - Unidentified Speaker
As an educator, I have the same problems with my students using AI.

41:32 - E. T.
But at the same time, I couldn't agree more with Dr. V.: why don't we use these available tools if the world is going that way? So anyway, I really enjoyed the conversation, and thank you for your opinions. About my project, I'm still working on completing the book; that was pretty much what I have been doing this week as well. I also have a question for Dr. M. I tried to complete the course, and unfortunately, I'm not sure if it's because of my firewall or virus blocker thingies, whatever, but it kept blocking me from logging into the virtual environment to complete the course. Which course?

42:28 - M. M.
What is the name of the course?

42:33 - E. T.
I'm looking it up right now.

42:37 - M. M.
You can always create a new account if this is...

42:43 - E. T.
There's no issue with the account. It's "Building LLM Applications With Prompt Engineering." Yeah, it always works for me. I mean, I can see the video, but it requires me to log into the virtual environment and complete the rest of the hands-on part there, and for some reason it kept blocking me, saying that this is a malicious website and stuff like that. I'm on the NVIDIA server right now.

43:17 - M. M.
You can create a new account and use the same discount code.

43:23 - V. W.
You might have to put a pinhole in your firewall that permits that address to be interacted with at that level, because your own browser or machine environment may be limiting you because of antivirus software or anti-exploit software.

43:41 - M. M.
So it may be something in between.

43:44 - V. W.
There's a middle man, a man in the middle, that's preventing you from being authorized.

43:50 - E. T.
And it's important that that be solved. Do you use Chrome? I do. Chrome is...

43:56 - M. M.
No, but maybe V. is correct. Yeah, I think that's the issue.

44:01 - E. T. | M. M.
I'll look into that. And if I cannot find another way, I'll create another account.

44:07 - V. W.
And another trick M. alludes to: if you switch browsers, try Firefox, try Safari if you're on a Mac, or even Microsoft Edge, and see if you're able to authenticate in that different browser environment. If you are, you might be able to clear your cache on Chrome or something like that and get it to work.

44:30 - M. M. | V. W.
This happened to me. Yeah. I switched the browser and that was okay.

44:35 - Unidentified Speaker
Okay.

44:35 - E. T.
I'll do that.

44:37 - D. B. | M. M.
Browser or new account, so that's the suggestion. But if she changes the browser, this will be okay.

44:46 - Unidentified Speaker
Yeah.

44:47 - M. M.
Yeah, it might be the browser.

44:50 - E. T.
I mean, I usually use Chrome. I don't know.

44:54 - M. M.
It's just a habit, I guess. But there is a security issue, and for Microsoft, maybe not. So, anything else?

45:05 - D. B.
Anything anyone wants to bring up? Well, again, next week we're going to meet at 3 p.m.

45:20 - Unidentified Speaker
and it'll be a defense, so it'll be a different meeting.

45:25 - M. M.
And then we'll go from there.

45:28 - D. B.
So thanks everyone for attending and we'll go ahead and- Thank you.

45:33 - D. D.
Thank you so much. Thank you.

45:35 - M. M.
Have a good weekend. Thanks.

46:08 - D. B.
Thanks.