Artificial Intelligence Study Group
- Announcements, updates, questions, presentations, etc. as time allows
- Today: viewing and discussion. Also:
AI Tech Talks - invitation to present or attend - from:
Marla Johnson
TechLaunch is hosting an afternoon of AI Ted Talks on April 24, from 2:00-4:00 in the EIT auditorium. We would like to invite you to participate either through attendance or by presenting work that you are doing here at UA Little Rock that utilizes AI.
You can use this form to sign up to be a presenter or attend.
Please share with your students, who are also welcome to sign up to present. Presentation slots are limited so please sign up quickly.
If you are just interested in seeing what others are doing on campus, please join us as we try to ignite discussions and opportunities for collaboration across campus.
Marla Johnson | Tech Entrepreneur-in-Residence
c: 501.551.0095 | e: mkjohnson@ualr.edu
- Today and tomorrow: JOLT hackathon at the Tech Park building on Main St. (https://www.venturecenter.co/entrepreneurs/programs/jolt/). Anyone can go.
- Fri. April 11: viewing and discussion.
- Fri. April 18 at 3:00 p.m. (an hour earlier than our usual meeting time!) GS PhD defense, Optimizing Small AI Models for Biomedical Tasks Through Efficient Knowledge Transfer from Large Domain Models.
- Thu. April 24, 2-4 p.m.: AI TED Talks in the EIT Auditorium. Sign up to present via the form above.
- Fri. April 25: YP will informally present his draft AI course outline and welcomes comment. See Appendix 1 below.
- JH update: What are AI lifecycles and governance? (E.g., https://www.ibm.com/products/watsonx-governance/model-governance.)
- SDLC=Software Development Life Cycle
- TE is in the informal campus faculty AI discussion group. SL: "I've been asked to lead the DCSTEM College AI Ad Hoc Committee. ... We'll discuss AI's role in our curriculum, how to integrate AI literacy into courses, and strategies for guiding students on responsible AI use."
- Recall the master's project that some students are doing and need our suggestions about:
- Suppose a generative AI like ChatGPT or Claude.ai were used to write a book or content-focused website about a simply stated task, like "how to scramble an egg," "how to plant and care for a persimmon tree," "how to check and change the oil in your car," or any other question like that. Interact with an AI to collaboratively write a book, or an informationally near-equivalent website, about it!
- Is this good? https://spectrum.library.concordia.ca/id/eprint/993284/
- ET: Growing vegetables from seeds.
- Proposal drafted.
- Committee: DB, RS, MM.
- Going through the prompt engineering course recommended by MM; planning to use it extensively.
- Course is at: https://apps.cognitiveclass.ai/learning/course/course-v1:IBMSkillsNetwork+AI0117EN+v1/home
- Got outputs of 4,000+ words.
- Gemini writes well compared to ChatGPT.
- We did the Chapter 6 video, https://www.youtube.com/watch?v=eMlx5fFNoYc, up to time 13:08. We can start there next time.
- Schedule back burner "when possible" items:
- If anyone else has a project they would like to help supervise, let me know.
- (2/14/25) An ad hoc group, organized by ES, is forming on campus for people to discuss AI and the teaching of diverse subjects. It would be interesting to hear from someone in that group at some point to see what people are thinking and doing regarding AIs and their teaching activities.
- The campus has assigned a group to participate in the AAC&U AI Institute's activity "AI Pedagogy in the Curriculum." IU is on it and may be able to provide updates now and then.
- Here is the latest on future readings and viewings
- https://transformer-circuits.pub/2025/attribution-graphs/biology.html#dives-refusals
- https://transformer-circuits.pub/2025/attribution-graphs/methods.html (Biology of Large Language Models)
- We can work through chapter 7: https://www.youtube.com/watch?v=9-Jl0dxWQs8
- https://www.forbes.com/sites/robtoews/2024/12/22/10-ai-predictions-for-2025/
- Prompt engineering course: https://apps.cognitiveclass.ai/learning/course/course-v1:IBMSkillsNetwork+AI0117EN+v1/home
- https://arxiv.org/pdf/2001.08361
- Computer scientists win Nobel prize in physics! https://www.nobelprize.org/uploads/2024/10/popular-physicsprize2024-2.pdf received an evaluation of 5.0 for a detailed reading.
- Neural Networks, Deep Learning: The basics of neural networks, and the math behind how they learn, https://www.3blue1brown.com/topics/neural-networks
- LangChain free tutorial, https://www.youtube.com/@LangChain/videos
- We can evaluate https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10718663 for reading & discussion.
- Chapter 6 recommends material by Andrej Karpathy, https://www.youtube.com/@AndrejKarpathy/videos for learning more.
- Chapter 6 recommends material by Chris Olah, https://www.youtube.com/results?search_query=chris+olah
- Chapter 6 recommended https://www.youtube.com/c/VCubingX for relevant material, in particular https://www.youtube.com/watch?v=1il-s4mgNdI
- Chapter 6 recommended Art of the Problem, in particular https://www.youtube.com/watch?v=OFS90-FX6pg
- LLMs and the singularity: https://philpapers.org/go.pl?id=ISHLLM&u=https%3A%2F%2Fphilpapers.org%2Farchive%2FISHLLM.pdf (summarized at: https://poe.com/s/WuYyhuciNwlFuSR0SVEt).
6/7/24: vote was 4 3/7. We read the abstract. We could start it any time. We could even spend some time on this and some time on something else in the same meeting.
Appendix 1: New proposed 4000/5000 level applied AI course
With industries increasingly relying on AI for decision-making, automation, and innovation, graduates with AI proficiency are in high demand across finance, healthcare, retail, cybersecurity, and beyond. This course offers hands-on training with real-world AI tools (Azure AI, ChatGPT, LangChain, TensorFlow), enabling students to develop AI solutions while understanding the ethical and regulatory landscape (NIST AI Risk Framework, EU AI Act).
Why This Course Matters for Students:
✔ Future-Proof Career Skills – Gain expertise in AI, ML, and Generative AI to stay relevant in a rapidly evolving job market.
✔ Business & Strategy Integration – Learn how to apply AI for business growth, decision-making, and competitive advantage.
✔ Governance & Ethics – Understand AI regulations, ethical AI implementation, and risk management frameworks.
✔ Hands-on Experience – Work on real-world AI projects using top industry tools (Azure AI, ChatGPT, Python, LangChain).
Why UALR Should Adopt This Course Now:
✔ Industry Demand – AI-skilled professionals are a necessity across sectors, and universities must adapt their curricula.
✔ Cutting-Edge Curriculum – A balanced mix of technology, business strategy, and governance makes this course unique.
✔ Reputation & Enrollment Growth – Offering a governance-focused AI course positions UALR as a leader in AI education.
✔ Cross-Disciplinary Impact – AI knowledge benefits students in business, healthcare, finance, cybersecurity, and STEM fields.
By implementing this course, UALR can produce graduates ready to lead in the AI era, making them highly sought after by top employers while ensuring AI is developed and used responsibly and ethically in business and society.
Applied AI (6 + 8 Weeks Course, 2 Hours/Week)
5-month Applied Artificial Intelligence course outline tailored for techno-functional, functional or technical leaders, integrating technical foundations, business use cases, and governance frameworks.
This can be split into a 6-week certification plus an additional for-credit course built around an actual use case.
I have also leveraged insights from leading universities such as Purdue’s Applied Generative AI Specialization and UT Austin’s AI & ML Executive Program.
Balance: 1/3 Technology | 1/3 Business Use Cases | 1/3 Governance, Compliance & AI Resistance
Module 1: Foundations of AI and Business Alignment (Weeks 1-4)
✔ Technology: AI fundamentals, Machine Learning, Deep Learning
✔ Business: Industry Use Cases, AI for Competitive Advantage
✔ Governance: AI Frameworks, Risk Management, Compliance
· Week 1: Introduction to AI for Business and Leadership
o Overview of AI capabilities (ML, DL, Generative AI)
o Business impact: AI-driven innovation in finance, healthcare, and retail
o Introduction to AI governance frameworks (NIST, EU AI Act)
· Week 2: AI Lifecycle and Implementation Strategy
o AI model development, deployment, and monitoring
o Case study: AI adoption in enterprise settings
o AI governance structures and risk mitigation strategies
· Week 3: Key AI Technologies and Tools
o Supervised vs. Unsupervised Learning
o Python, Jupyter Notebooks, and cloud-based AI tools (Azure AI Studio, AWS SageMaker)
o Governance focus: AI compliance and regulatory challenges
· Week 4: AI for Business Growth and Market Leadership
o AI-driven automation and decision-making
o Case study: AI-powered business analysis and forecasting
o Compliance focus: Ethical AI and responsible AI adoption
Module 2: AI Applications in Business Functions (Weeks 5-8)
✔ Technology: NLP, Computer Vision, Reinforcement Learning
✔ Business: AI in business functions - Marketing, HR, Finance
✔ Governance: Bias Mitigation, Explainability, AI Trust
· Week 5: Natural Language Processing (NLP) & AI in Customer Experience
o Sentiment analysis, text classification, and chatbots
o Business case: AI in customer service (chatbots, virtual assistants)
o Governance focus: Privacy and data security concerns (GDPR, CCPA)
· Week 6: AI for Operational Efficiency
o Business use cases: AI for fraud detection, surveillance, manufacturing automation
o Compliance focus: AI security and adversarial attacks
· Week 7: Reinforcement Learning & AI in Decision-Making
o Autonomous systems, robotics, and self-learning models
o Business case: AI-driven investment strategies and risk assessment
o Resistance focus: Overcoming corporate fear of AI adoption
· Week 8: AI in Marketing, HR, and Business Optimization
o AI-driven personalization, recommendation engines
o Business case: AI in recruitment, talent management
o Compliance focus: AI bias mitigation and fairness in hiring
Module 3: AI Governance, Compliance & Ethics (Weeks 9-12)
✔ Technology: Secure AI Systems, Explainability
✔ Business: Regulatory Compliance, AI Risk Management
✔ Governance: Responsible AI, Transparency, Algorithm Audits
· Week 9: AI Governance Frameworks & Global Regulations
o NIST AI Risk Management, ISO/IEC 23894, EU AI Act
o Industry-specific regulations (HIPAA for healthcare AI, SEC for AI in finance)
o AI governance tools (audit logs, explainability reports)
· Week 10: AI Explainability & Bias Management
o Interpretable AI techniques
o Case study: Bias in AI hiring systems and credit risk models
o Business responsibility in AI model transparency
· Week 11: AI Security, Privacy, and Risk Management
o Secure AI model deployment strategies
o Governance: AI trust frameworks (e.g., IBM AI Fairness 360)
o Case study: Managing AI risks in cloud-based solutions
· Week 12: AI Resistance and Corporate Change Management
o Strategies for AI adoption in enterprises
o Business case: AI integration in legacy systems
o Ethics: Impact of AI on jobs, social responsibility, and legal liabilities
Module 4: AI Strategy, Implementation, and Future Trends (Weeks 13-16)
✔ Technology: AI Product Development
✔ Business: AI Implementation, Enterprise AI Strategy
✔ Governance: AI Regulatory Compliance & Future Legislation
· Week 13: Overview of AI Deployment and Scalability
o Deploying AI models on cloud (Azure AI Studio, AWS, GCP)
o Business case: Scaling AI solutions in enterprise environments
o Compliance: AI model monitoring, drift detection
· Week 14: AI for Competitive Advantage & Industry-Specific Applications
o AI in industry: e.g., supply chain, autonomous vehicles, healthcare diagnostics
o Case study: e.g., AI-driven drug discovery and logistics optimization
o Compliance: AI liability and regulatory accountability
· Week 15: AI Governance and Responsible Innovation
o Innovating with AI: e.g., financial services (algorithmic trading, fraud detection)
o Ethics: Ensuring fairness and avoiding discrimination in AI models
o Risk assessment frameworks for enterprise AI adoption
· Week 16: The Future of AI: Trends, Risks & Opportunities
o Generative AI (DALL-E, ChatGPT, LangChain applications)
o AI and Web3, decentralized AI governance
o Case study: AI-powered governance in blockchain ecosystems
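The "drift detection" item in Week 13 can be made concrete with a small hand-rolled sketch. This is purely illustrative (the data, the 10-bin layout, and the 0.25 alert threshold are assumptions for the example, not course material): it computes the Population Stability Index (PSI), a simple signal that a live feature distribution has drifted away from its training-time baseline.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live
    sample: a common, simple drift signal for deployed models."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def bucket_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Floor each fraction so log() never sees zero.
        return [max(c / len(sample), 1e-6) for c in counts]

    e = bucket_fractions(expected)
    a = bucket_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]            # training-time values
live_ok = [i / 100 for i in range(100)]             # same distribution
live_shifted = [0.5 + i / 200 for i in range(100)]  # shifted distribution

print(round(psi(baseline, live_ok), 3))    # 0.0 -> no drift
print(psi(baseline, live_shifted) > 0.25)  # True -> above a common alert threshold
```

In production this statistic would be tracked per feature over rolling time windows; monitoring tools package the same arithmetic behind dashboards and alerts.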
Module 5: Capstone Project & Final Presentations (Weeks 17-20; capstone process starts in Week 7/8)
✔ Technology: Hands-on AI Application Development
✔ Business: AI Use Case in Industry
✔ Governance: Compliance Strategy & Ethical AI
· Weeks 17-19: AI Capstone Project
o Develop an AI-driven business solution with governance compliance
o AI application areas: Business analytics, customer engagement, fraud detection
o Report: Governance strategy and AI risk mitigation plan
· Week 20: Final Project Presentations & Certification
o Peer review and feedback
o Industry guest panel discussion on AI’s role in future business strategies
o Course completion certification
Tools & Technologies Covered:
· AI Development: Python, TensorFlow, PyTorch, Scikit-learn, GenAI models
· Cloud AI Platforms: Azure AI Studio, AWS AI Services, GCP Vertex AI
· NLP & Generative AI: ChatGPT, DALL-E, LangChain, BERT, Stable Diffusion
· AI Governance & Risk: SHAP, LIME, AI fairness toolkits
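The explainability toolkits listed above (SHAP, LIME) are model-agnostic attribution methods. As a rough illustration of the underlying idea only (the toy model and data are invented for this sketch, and this is not how SHAP itself is computed), here is permutation importance in plain Python: shuffle one feature's column and measure how much accuracy drops.

```python
import random

def permutation_importance(model, X, y, feature, trials=20, seed=0):
    """Mean drop in accuracy when one feature's column is shuffled:
    a simple model-agnostic attribution signal."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(model(r) == t for r, t in zip(rows, y)) / len(y)

    base = accuracy(X)
    drops = []
    for _ in range(trials):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        shuffled = [row[:feature] + [v] + row[feature + 1:]
                    for row, v in zip(X, col)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / trials

# Toy model that only looks at feature 0; feature 1 is pure noise.
model = lambda row: row[0] > 0.5
X = [[i / 10, random.random()] for i in range(10)]
y = [row[0] > 0.5 for row in X]

print(permutation_importance(model, X, y, feature=0))  # positive: used feature
print(permutation_importance(model, X, y, feature=1))  # 0.0: ignored feature
```

SHAP and LIME refine this intuition with principled attributions (Shapley values, local surrogate models), but the question they answer is the same: how much does each input actually move the model's output?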
Appendix 2: Transcript
AI Discussion Group
Fri, Apr 4, 2025
0:00 - T.E.
use it responsibly and ethically.
0:04 - D.B.
So, yeah. All right, well, we'll start off in a minute or so. I am sharing my screen, right?
0:18 - Unidentified Speaker
Yep.
0:20 - H.J.
Yes.
0:21 - Unidentified Speaker
All right, well, why don't we go ahead and get started.
0:39 - D.B.
So welcome back to the AI study group and announcements for today. We're going to focus on viewing the discussion as there's time for it. Also, there's an announcement that's been going around about a request for AI TED Talks to be done locally. And M.J. is sponsored by the tech launch unit at UA Little Rock. She's the entrepreneur-in-residence, and she's with us here today. So hello, Ms. J. Thank you for attending. Yeah, thank you for inviting me.
1:25 - H.J.
I definitely want to be a part of this group and study and learn with you all.
1:30 - D.B.
Well, great. Do you want to tell us a little bit about this? I mean, I've kind of printed the text here, but if you'd like to just kind of give a quick elevator pitch about it, Go ahead, L.J.
1:45 - H.J.
So when I started, which was right before Thanksgiving here at UA Little Rock as tech entrepreneur in residence, one of my goals was to develop some corporate support services around AI and upskilling in AI for the companies in the region. Is the Build Good AI Institute. And we have started on campus with some students and faculty members to begin building some AI products for use on campus, just so you know. So as part of this, I've been talking to a lot of people across the campus to see what they were doing with AI. And every day, I learned about something amazing happening in the education department, in nursing, every department, art and graphic design. So we decided to, B.B. is my boss. He's the vice provost of graduate and studies and research. And we decided to throw this AI TED Talks program. We announced it yesterday morning. We have 13 people already who want to present. We only have eight slots. So we will be. We will have to do it again. We will need to be doing it again. And I already know three other people who want to present. So the idea is really just for us to share on campus what people are doing research wise or product development wise or what their interest is, how they're using it. So that's what I'm going to be sending out to the presenters that are selected for this one. We are kind of using the TED Talks, kind of 10 minutes, really high level, you know, don't get into the weeds too much, you know, focus on the value and who it helps. And that's what we'll be doing. And I hope all of you attend. I hope we have a huge attendance. We are, the provost has asked that for this first one, we just do it for UALR students and faculty, but the idea is that we can also help our presenters over time to make really compelling presentations about their research and work. And then we will open up another TED Talks for the business community, for the community at large. So that's what we're doing. Cool. All right.
4:25 - D.B.
Well, folks, it'll be on April 24 from 2 to 4. Make a mention of that, April.
4:32 - H.J.
Yeah, and you have a link, I think, to sign up, don't you?
4:37 - D.B.
But I can put it in the chat if you don't. Yeah, let me just.
4:43 - H.J.
Now, if you don't, if you have, if you're doing it and you don't have a UALR.edu email address, you won't be able to sign up to attend. So let me know if that happens. Like Y.P., I don't know if you're going to be to attend but you know if you want to attend and you don't have a ualr.edu email address yet let me know so that I can get you on there.
5:08 - D.B.
Okay and of course anyone who knows about it can you know can go to the can go to the EIT auditorium nobody's gonna check your student ID anything like that.
5:18 - H.J.
Now don't tell people that.
5:19 - Y.P.
Oh maybe I shouldn't have said that.
5:22 - H.J.
Okay, so you guys do have a link. I'll put it in the chat just to make sure. Okay, here's a form. Yep, that's the link. Okay, but I probably should make it a real entry here, you know. Yep, you can sign up to be a presenter or just attend. And just so you know, if you do want to present, even though I just told you that we already have 13 people asking to present, still go ahead, because we'll be figuring out who could present at another date for anybody who we won't be able to fit into this first one. And yes, you do have to sign up.
6:09 - D.B.
Okay. I nominate B. I think he could give a TED Talk if he wanted to do, if he wanted to prepare one. Yeah, I had to remove the word TED Talk from my actual flyer and turn it to tech talk. Oh, it's probably a copyrighted trademark.
6:42 - H.J.
Yeah, it is. Okay. So it's now tech talk instead of a TED talk.
6:50 - D.D.
Then they're all going to be called TED talks just like refrigerator and all the rest of them.
7:00 - D.B.
Or maybe a TE I don't know. Should I change it? I don't want to interfere with anybody's copyrights.
7:09 - Unidentified Speaker
Don't worry about it.
7:11 - H.J.
I've changed it on everything that's out there, and it'll be fine.
7:17 - D.B. | H.J.
I'm not worried about it. All right.
7:20 - D.B.
OK, so next week, we're going to do more or less like this week. We're going to have viewing and discussion nothing special scheduled. Friday, April 18th, that's in two weeks, G.S. is going to give his PhD defense, which is an AI dissertation he's working on. And it'll be at three o'clock instead of four o'clock. So we're just going to change this meeting to three o'clock in two weeks instead of four o'clock. And he actually lives in Australia, so it's really early in the morning for him.
7:59 - D.B.
And then on the 24th, oh, that'll be the TED Tech Talks. And then on the 25th, we're going to meet and Y.P. will talk about his AI draft of his AI course outline that he's going to be teaching or leading the teaching of in the fall.
8:24 - Y.P.
Yes, sir.
8:28 - D.B.
Okay, so J.H. is here and she's doing some research on AI life cycles and governance. And I thought, wow, gee, maybe people would like to know what an AI life cycle is and what AI governance is. So J., if you could say a few words about that, that would be great.
8:49 - J.H.
Sure. I'm J.H. I am a second semester PhD student with Dr. B. working on a systematic literature review. So I have spent my career in technical security and data privacy roles for large high stakes distributed systems. Previously, I did Android remote, like medical monitoring devices, then went to security knowledge graphs. And for the past few years, I've been working at Meta in a role on the messaging infrastructure, which is infrastructure used by Facebook Messenger, Instagram Direct, and then a little bit of WhatsApp as well. And what I've discovered these past few years is that large generative AI models and features that are sort of owned and developed by Meta have really broken everything I knew previously in my career about managing the security and privacy of large systems throughout the lifecycle, which that means from sort of inception, systems design to the development, deployment, and post-deployment monitoring phases. So I am doing a systematic literature review on literature from the CIS domain. I did narrow it. On approaches to managing security and privacy risks that organizations can use who are creating and deploying generative AI models. Okay, so what's the difference between an AI lifecycle and a software lifecycle?
10:33 - D.B.
What is the difference between an AI?
10:37 - J.H.
Well, I mean, I don't have an answer prepared for that, but in my experience, it's a lot harder and there's a lot different challenges than what I was used to in the past. And a pretty painful example of that is in the past when I was working with cloud infrastructure, we would burn down, we had a programmatic job running to just recreate our cloud devices once a week so that they didn't accrue vulnerabilities. And in the generative AI data infrastructure space, I'm dealing with much different challenges. Especially due to some regulatory changes that are really providing consumers with the right to revoke their consent to processing their data at any time, which is really challenging to honor quickly when you have thousands and thousands of downstream assets that are used for training different AI features. So it's just much different, but I think that hopefully by the end of my dissertation, I can definitively answer that question.
11:43 - D.B.
Does anyone else have any questions for J.?
11:48 - E.G.
I do. As somebody who's spent decades in SDLC, one of the things that we're able to understand is all of the code is evident. When you go to production, you can see everything with LLMs. Assuming what you're talking about is LLMs versus defined models using arithmetic representation of data. So in LLMs, the training is hidden from you. So it's kind of like black box. That must make it more difficult, because the nuances associated to that black box is out of your sphere of control, where in an SDLC, you control every aspect right down to your test-driven development, your defensive programming approach, your dry catch blocks. Yeah, very true. That's a really good point.
12:51 - J.H.
And I think it's an especially interesting contrast with the regulatory pressures we're facing to have extreme transparency in for end consumers. It's kind of hard to have transparency when you can't see it.
13:08 - T.E.
So that brings up a good question, E. In terms of audit, like a lot of times at the company I work for, the auditors want to see, you know, the piece of code that you use for authentication and that kind of stuff. So if that's hidden, you know, how are you going to deal with auditors in that situation, yeah.
13:33 - E.G.
To me, it opens a whole new can of worms because, as you said, for audits we provide the code, we show where our delineation is. So you have your development, your test, your QA, your production environments, but with an LLM you don't have that delineation. It's across the board. How do you encapsulate it, isolate it, test it?
14:07 - D.B.
I mean, you know, there's this whole big AI research focus on explainable AI because nobody knows what's really going on and how it really does what it does. You know, the apparent intelligence is an emergent property. It's not something that was designed in. Right.
14:32 - J.H.
And part of my job involves helping teams of lawyers explain some of these products to regulators through the course of things like RFIs and similar. And one thing I can say that we're doing is a lot more of our answers revolve around the sort of robust post-deployment environments and guardrails and thresholds and reference models because you can't explain everything but you can explain what you are doing to to make sure things remain safe in production and that's probably the way that audits are going to go.
15:10 - H.J.
I just shared with you guys an article that I read that I shared with several others at UA Little Rock and it's from Forbes and it showed up I think I think, Sunday this week. And it's about what they're predicting in terms of an AI product manager, as opposed to a regular product manager. So you might find it a little bit interesting. They have a section specifically about what is required. And it's not as specific as dealing with the lifecycle. But it is specifically about managing a product over time that's AI. Things are specifically different about that than a regular software product. So not only what skills are required for that product manager, but also how you're dealing more with model performance, acceptable rates of error, understanding new types of user interaction, and management of risk and cost trade-offs. So I just found it a really interesting article, and so I just popped it in there. I just copied it so you wouldn't have to deal with the ads.
16:24 - Y.P.
But you can also go get it.
16:27 - D.B.
It works. I'll go ahead and add that to our list of potential future readings, just when someone suggests something, I like to do that. I have a lot.
16:41 - D.B. | Y.P.
J., this is Y.
16:43 - Y.P.
A couple of thoughts came By the way, this is a great topic to discuss and research on and with your experience of Facebook, I think you can get a lot of value to what you're doing. So, NIST, then CIPP, IEEE, there are various institutions that are already doing a lot of interesting work in this area. And are you considering collaborating? I don't know whether collaborating is the right word, but ensuring that these standards that these formal institutions are setting up and a potential regulation that may come up from the government standpoint. How are you thinking about these frameworks, regulations, some states already have regulations around governance of AI. How are you thinking or correlating what you're doing to what is already there or maybe there? Yeah, that's a really great question.
17:55 - J.H.
Like many folks in engineering careers, I have a little bit of problem with scope creep. So I think that I am probably keeping for the systematic literature review, of this project, I think that I'm probably going to try to not dive too deep into standards because it is absolutely a rabbit hole. I think there is a very, very clear need for public interest technology in this space and some engagement there. But I think that would probably be more of a phase two thing.
18:30 - Y.P.
And I personally think it's super critical, especially with the differences between regulations right now to have standardized requirements such as NIST.
18:42 - J.H.
Really great point, thank you. No problem.
18:46 - D.B.
Any other questions, anything, discussion points for J.?
18:51 - H.J.
Well J., in your work are you working with any specific companies kind of to talk about their governance and kind of use that as an example?
19:05 - J.H.
I'm not working with any companies outside of my employer at this time. Meta does have an open source LLM called Llama. So I think that would probably be the best sort of sandbox environment for the research once I get past the systematic literature review, since that is something that can be shared with the public. Test it in a real world environment.
19:33 - H.J.
You want some help with that?
19:36 - E.G. | J.H.
I run Ollama locally, but you need a larger card to run it unless you use one of the very, very small models.
19:47 - E.G.
I could show you how to go through that.
19:51 - Unidentified Speaker
Yeah.
19:51 - E.G.
Or V., I think V., you do the same thing.
19:56 - Unidentified Speaker
Right.
19:57 - H.J.
I'm on the board of a company, and I'm on their AI and cybersecurity committee, where we're currently coming up with our AI governance rules.
20:13 - J.H.
So it's a very interesting process. That's fantastic.
20:18 - Y.P.
So J., when you say AI lifecycle, because of your employer or I don't know whether you have any restrictions, will you be thinking AI lifecycle as it pertains only to Lama or are you going to other models? Is it going to be independent of model or structures or are you focusing on your employer's models?
20:50 - J.H.
I personally think that what is needed would be research into governance practices that are agnostic of model, but I think there's also an opportunity to apply some of this research and gain some data within the context of the open source model.
21:08 - Y.P.
Got it. Sorry if I'm taking you off track, but some questions popping in, which may help you also to focus. Thank you for answering. Great questions.
21:20 - Unidentified Speaker
Anything else?
21:21 - D.B.
Okay, well, let's move on to the next item, which is another sort of standard item we handle weekly. And so earlier, I had asked for master's students interested in using an AI to write a book to do that as their project. And E.T. is doing that. And E.T., if you could give us an update on what you're doing and what problems you're having or not having on your book, that'd be great.
21:54 - E.T.
As of this week, I have mostly worked on completing the chapters of my book. So honestly, I haven't tried anything new. My goal is to finish the book as soon as possible, within the time frame of the next week or so. And then I'm planning to take my book and ask a small group of my co-workers to review the book without telling them it's AI-made and get their feedback on it. And I'm planning to add that to my project. And once I get their feedback, I will tell them that this is AI-made, and I'll get their feedback according to, you know, knowing that it's AI-made. So I'm thinking it's going to be interesting to get their reaction if they don't realize it's AI-made, or if they will actually realize that it is AI-made. So yes, as of now, that's my plan for the next week.
23:04 - Unidentified Speaker
Cool.
23:05 - D.B.
All right. Any questions for E.T.?
23:08 - V.W.
E.T., have you done anything to keep the voice that your book is written in, in your native way of expressing yourself so that it will be harder for the casual reader to pick up on the cues that this was written by AI? One thing I noticed is that when text is generated, it'll drift off of my voice and I have to continually be bringing the AI back into my way of speaking and the idiomatic expressions that I use that are signature for me. And I've gotten, I read a lot of AI generated material on Quora and I can usually pick up fairly soon. And the first thing that I pick up is that there's a lot of detail articulated in this answer that would not be articulated by a person of normal intelligence. Of normal way of expressing themselves, of even highly technically trained people will not be in breadth and depth as thorough as the typical AI response is. And I'm wondering if, as you do your controlled experiment with your colleagues, your single-blind experiment, if you've taken any measures to prevent the obvious tells that telegraph that this was written by AI out of your work, and have it more in your voice? Yes, I do.
24:35 - E.T.
I'm sorry, go ahead. No, go ahead. So I do try that most of the time. As you said, it is very obvious that it's AI generated for majority of the texts. Anytime I use AI for this project as well and for my general use with AI, I do try to get it to back to my style. But again, I've never tried it before. I've never tried getting AI-generated text or as long as a book, not short text. But I do try to keep it in my way instead of just putting something and asking AI to complete because it sounds very artificial and it sounds very robotic.
25:26 - V.W.
If I could make a small suggestion, because I'm sure your work's going to be good no matter what: I found that, for all the burden of writing that AI offloads from us, if we then go back over the writing and re-articulate the very same statements in our own voice, with our own quips, our own anecdotes, our own background, it not only makes it more pleasant to read, but it humanizes it enough that people will accept that this content is really me, because I've only used the AI to build the edifice, not to populate the building, as it were. And so I guess the point I'm trying to make is that the real workload AI offloads from us is creating the initial structure, which we can then rather effortlessly go through and say: Oh, wait, that's not me. Oh, wait, I could correct this. Oh, wait, this is a nuance I don't really want to communicate, or this is one that I do want to communicate. So for very little work, you can re-articulate it as your own work, and then you take ownership of it again. So it's like: we lose our voice, we steal our voice back. And now the combination of the two is an amplified version of ourselves, rather than a substitution of ourselves by some higher-order intelligence.
26:56 - E.G.
Dan, I'm going to get rid of you here just a little bit. Dan, I think you're being very generous in that when you talked about the AI creating the edifice or the building, I was thinking, well, what you do when you add your own kind of style to it, you're really just painting it. So the AI is building the building and you're just putting a coat of paint on it.
27:19 - V.W.
And Dan, I just have no problem with that.
27:24 - Unidentified Speaker
Okay.
27:25 - E.G.
Ernie? Well, that may be the case for neurotypical people, but for neurodivergent people, you'll find that AI responses tend to align more with how a neurodivergent person would communicate. So the gap, the distance traveled from an AI response, is going to differ based on how they're approaching it. I don't know why it's doing a thumbs up.
28:01 - V.W.
I'll give you a thumbs up. Okay. I, I think that the neurotypical versus neurodivergent topic interacting with AI is almost a topic that's too wonderful for me. I think it's above my pay scale. The thing that does cause me joy is that whether neurotypical or neurodivergent, we can tailor content to most match our learning and communication styles, or in the case of translation, when we want to translate from one world of perception to another, we now have leverage to do that we never had before, and that makes me super happy because I want everybody to be able to consume what I do. And I want to be able to consume what everyone else does. And I have neurodivergent people that I follow because they fascinate me in the skill sets that they express. And so I want to make sure that I'm communicating effectively both ways. Sometimes it's a translation and sometimes it's too close for comfort. So there you go.
29:06 - E.G.
No, I fully agree. All I was trying to allude to is the amount of work for a neurodivergent person to have it sound in their voice is far less distance to travel because we tend to be...
29:23 - H.J.
Hey, E.T., are you experimenting with putting in prompts that try to have it write like another writer, like D.P., or like a particular person? Have you played around with that?
29:38 - E.T.
We have.
29:39 - E.G.
My favorite is actually to have it write at a sixth-grade reading level, because most business documents have to be written that way. I read a book a while ago that said they have to be at a sixth-grade reading level. Which I find somewhat comical.
30:05 - H.J.
What about you, E.T., in your work?
30:09 - E.T.
Yes, I did try those prompts. I actually finished a course on cognitive AI using different types of prompts: using personas, using different styles of prompt engineering. And yes, that was one of them. So I do use a specific style. I haven't tried sixth grade, but I try to keep my writing as engaging as possible while giving specific details on how to garden, how to grow the seeds. The other thing I am planning: I'm a public school teacher, so I have English literature teachers. One group will be the literature and literacy teachers, who will review and give feedback. And I have a couple of people who are really good at gardening. I'm planning to get their feedback on the details, to check whether the AI created some false information, or whether the knowledge and information the AI gave is actually useful.
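The persona-style prompting described here can be sketched as a simple template. This is a minimal illustration, not any particular course's or tool's method; the persona, grade level, and wording are all assumptions:

```python
def persona_prompt(persona: str, grade_level: int, task: str) -> str:
    """Build a persona-and-readability prompt for a chat model.

    The instruction wording below is an illustrative assumption,
    not a fixed API or a known-optimal prompt.
    """
    return (
        f"You are {persona}. Write at roughly a grade-{grade_level} "
        f"reading level. Keep the author's own idioms and anecdotes, "
        f"and avoid the exhaustive detail typical of AI answers.\n\n"
        f"Task: {task}"
    )

# Example: a gardening-book persona at a sixth-grade level
prompt = persona_prompt("a veteran home gardener", 6,
                        "Explain how to start tomato seeds indoors.")
print(prompt)
```

The same template can be pointed at a different persona or grade level to compare how much the model's "tells" change across styles.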
31:24 - T.E.
So that's another question. Do you think there would be any value in running your book through AI detection software, just to see if it would detect whether it was AI-generated or not? Or is that technology really not where it needs to be at this point? I'm not sure.
31:49 - E.T.
I've never thought about it, but I can definitely add that to my project as well.
31:56 - D.B.
It probably would say it was AI-generated. I mean, based on, you know, my TAs running students' homework through AI checkers.
32:05 - T.E.
Right, I've heard good things about those, but I've also heard that they're not so accurate.
32:12 - D.B.
Well, I've had students complain, you know: I did it myself, I don't care what the AI checker said, I did it myself. Whether to believe them or not, who knows.
32:24 - V.W.
I'd like to get back to an assertion that E. made, that you want it in a sixth-grade voice for a certain audience. And I want to slightly challenge that. E. actually proposed a great hypothesis and follow-on experiment: in the statements he made subsequent to that, he made assertions that I would accept just because he's earnest and I trust his opinion, but I'd also like to test scientifically to see if they're borne out, and the degree to which they're borne out. And I notice we tend to, in these meetings, make assertions to each other without factually knowing if they're true. We just project them as true because of our own opinion. And I think we're living in an age where we can really put that to the test, to create a level of rigor in our communication that did not exist beforehand, because we all trust each other, we all come from different backgrounds, and we all assert opinions that may or may not be based in fact, including my own.
33:30 - E.G.
I'm putting some links in the chat. I actually read a book many, many years ago, but these are articles that are more recent that allude to this. And it varies: sixth grade is the common figure, but some say sixth, seventh, and eighth. But here are the ones that say sixth grade.
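The reading-level figures being traded here can be estimated mechanically with the Flesch-Kincaid grade formula. This is a minimal sketch; the vowel-group syllable counter is a crude heuristic, so the scores are rough estimates rather than authoritative:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of vowels; every word gets at least 1.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

# Short, plain sentences score low; long, polysyllabic ones score high.
print(fk_grade("The cat sat on the mat."))
print(fk_grade("Notwithstanding the aforementioned considerations, "
               "the proposed legislation obfuscates its consequences."))
```

Running a draft chapter through a function like this gives a quick check of whether it actually lands near the sixth-grade target.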
33:51 - V.W.
We have this internal clock or register that tells us whether or not we are communicating with our audience. And we use different cues to pick up on that, the prosody of their voice, their expressions on their face, their body language. And it would be great to have an indicator to know when we're landing and when we're not. Just for the purposes of communicating what's on our mind, much less in persuasive context where we're trying to get funded for this or impressed by that. So, yeah.
34:28 - E.G.
Well, that's the rub here, because neurodivergent people aren't able to pick up on those cues. So what you have to do is basically use a blanket approach. In this setting, we tend to use terms and words that aren't actively used in business. Therefore, we're able to know that we're speaking to an audience of academics who are able to understand what we mean without having to look for facial expressions or listen to intonations in responses.
35:04 - V.W.
When I watched J.R. in the hearings for his approval to be the head of the Supreme Court, one thing that I took away from it, more than anything he said, was that with each statement he made, he would survey the audience of Congresspersons and analyze whether or not what he had just said had registered, not just in terms of being heard, but in the way he made each statement and assessed the reaction to it. And I thought, OK, this is a guy who's clearly ascended to the highest position in the courts of the land. And he has learned not only to judge by the letter of the law, he's learned to judge by the spirit of the situation, exactly what's going on. And I was impressed by that more than any of the specifics. And I set aside any political opinions I may carry for or against that appointment.
36:12 - H.J. | E.G.
That was one of the things that they identified: looking at everybody's body language, how their arms are, whether they're leaning forward, their breathing, whether or not they're listening to you or just waiting to ask their own question.
36:36 - V.W.
There's a YouTube contributor who's quite famous for helping people be successful in communicating, from a TED talk perspective. And he'll talk about things that you wouldn't even think of in terms of the method of engaging your audience. One thing he said that really influenced me recently is that our webcams can influence people's perception of us. And I'll put it like this. If I'm sitting back from my webcam, I'm at what you would call a public perception spacing. My head size relative to my body size implies a psychological distance. But if I sit closer to my webcam, I'm beginning to intrude into a personal space that begins to have other psychological overtones, and I want to be careful about appearing familiar when I'm not. And so he talked about how there is a proper escalation of closeness as we grow close to people, as we engage with them, as we talk about detailed sorts of things; there's a proper spacing that we should have. And the moral of his story was: don't put your webcam so close that you're intruding into someone's personal space, because they'll be freaking out and you won't realize it. And you won't be heard, because their defenses, their photon shields, will have gone up. So I'm trying to be sensitive to issues like that. And I can't say that I'm mastering it, but I'm trying.
38:02 - D.B.
Somebody got cut off. Was that you, E.? Somebody was going to say something a minute ago.
38:08 - E.T.
No, I was just going to say that I think the awesome thing about these AI tools is that, when I studied journalism, you know, it was not clear for me.
38:18 - E.T. | H.J.
And when I was a teacher, I didn't really know what it meant to communicate at a fifth-grade or sixth-grade level. You know, we have a lot of vocabulary that's very specialized, especially, you know, scientists and a lot of the people who may be on this call. And we think we're communicating, but we're not, because the people we're talking to just don't share our vocabulary. And so I came up with a plan, by the way, for this course I took at MIT on AI for product design and development.
38:51 - H.J.
My business plan involved using these tools to explain the impact of legislation, along with other kinds of, you know, crowdsourcing of content around legislation.
39:03 - E.T.
So proposed legislation, well, what does that really mean?
39:07 - E.T. | H.J.
What will it really do?
39:09 - E.T.
How will it really impact me?
39:12 - H.J.
So I was basically exploring the possibility of using these tools to help people understand the impact that legislation would have on their lives. And a lot of that has to do with the legalese language that's in legislation, you know, getting rid of that, somehow translating it into something that humans can understand, so that we can actually share an opinion about it, as opposed to only lobbyists and whoever having any opinion about it. What I learned is that there's a lot of resistance from lawyers and lobbyists, because there's a lot in legal language that, when you try to put it at a fifth or sixth or seventh grade reading level, changes the meaning of the law. So they really were struggling with this. But I do still think it's a really useful endeavor. It probably has to be combined with other data points, and with people weighing in on, for example, whether or not the proposed legislation actually does what it says it's going to do, and what the intent is of the person trying to get that legislation passed. So those were some of the data points that I was recommending we pick up on legislation.
40:43 - D.B. | E.G.
Because I think a lot of these people try to obfuscate, to hide, to slip something in so that it confuses the reader.
40:50 - H.J.
By the way, so do scientists. And mathematicians, notoriously.
40:53 - V.W.
There was a recent thing I voted on, an Arkansas bill, and in the middle of my vote, I lifted up my voice to the assistants in the polling place and said: can you explain to me exactly what it means if I vote yes on this proposed amendment? Because sitting here, I am unable to parse the consequence of a yes or a no on what should be a text I'm able to consume. And it was so cleverly, intricately written that no matter what position you took in your mind, you could find a flip-side way to look at it. It was just stunningly difficult. And I imagine other people may have experienced this as well.
41:37 - H.J.
And of course, as we know, these titles are written purposely to obfuscate what they're going to do. And to, you know, they're written as persuasive, you know, headlines.
41:48 - D.B.
Right, right. E., are you in the cybersecurity high school teachers certificate program?
41:54 - E.T.
Is that, or were you? Yes, sir. Yes, sir. I've completed that and I'm continuing with my master's.
42:01 - Unidentified Speaker
Gotcha.
42:02 - D.B.
Okay, good. Yeah, we have quite a number of people who do that.
42:07 - E.T.
Yeah. D., I just shared something.
42:11 - Y.P.
We are at the Technology Park, Tech Park on Main Street. There's a cyber Jolt event happening today, tomorrow, and the day after, for people who are interested in hearing about security. I think they're going to cover AI security; I heard J. talking about AI governance. There are going to be some CISOs and other people coming here and talking. If you are interested, you can check it out. It's called Jolt, it starts today at five, and the final presentations are on Sunday.
42:48 - V.W. | Y.P.
Just wanted to announce it, and I'm already there. You may want to check in the chat that you're addressing the notice to everyone, because I'm not seeing it in the chat, and I'm looking forward to it.
43:04 - H.J.
I guess you posted it to me, but I don't think you posted it to everyone.
43:11 - Y.P.
I can copy it and paste it to everyone.
43:14 - T.E.
OK, sorry about that.
43:16 - Y.P.
Yeah, my son's on his way down there right now, actually.
43:21 - D.B. | H.J.
Interesting. Excellent.
43:22 - H.J.
So I don't see the link, though, Y.
43:25 - Y.P.
I'll send it. I'll send it.
43:28 - H.J.
OK. Yeah, just make sure it's everyone and not just me. Did you have that great photograph of you and your son?
43:45 - V.W. | Y.P.
And he's smart.
43:48 - H.J.
There are some other events that I'd love to share with you guys. I went Monday night.
43:59 - Y.P.
I don't know.
44:01 - H.J.
I don't think any of you went on Monday to the meetup in Little Rock down at Flying Saucer for people interested in talking about AI.
44:15 - D.B.
It was very cool.
44:17 - Unidentified Speaker
Yes.
44:17 - D.B. | Y.P.
Okay, so where is this? Did you get the link? And I'll send the photo also.
44:28 - Y.P.
Here you go. This event is at Tech Park on Main Street. Yeah. M.J., I did go to that event on Monday, but I had to leave early because of my son's award ceremony. But I met a few people there. There were about 30 people there, and they're going to start doing it every month.
45:00 - H.J.
And I'll make sure that I, as I get information about it, that I share it with this group. I don't know if you all are in Little Rock. I don't know where everybody is.
45:18 - V.W.
It would be great if you could share it.
45:23 - Y.P.
And the majority of people there were people who are kind of interested in AI, but not especially technical; I did not see a lot of technical interaction. They were mainly marketing and other people who are interested in learning about AI.
45:49 - H.J.
Some of my former employees and former competitors were there, and they're definitely using AI in their jobs. I met one in a bank who's saving his bank $75,000 a year by using an AI tool. I met a woman who's a data scientist doing interesting things in her job. And I also met a guy who helps manage a PE, you know, a private equity company's portfolio, and evaluates their AI companies. You're what M.G. calls a connector in his book, The Tipping Point.
46:39 - V.W.
Yeah. I like that a lot. It makes me happy. I like, I think that's where the magic happens.
46:47 - T.E. | V.W.
That's what brings life into it.
46:49 - V.W.
You mentioned your friend works for a company, a bank maybe, that you said was saving $73,000 a year.
46:59 - H.J.
Yeah. $75,000.
46:59 - Unidentified Speaker
Yeah.
47:00 - T.E.
$75,000. Was that to replace a human person?
47:03 - H.J.
It wasn't, actually. But it did involve marketing, though. He is in the marketing department of his bank. But he used to be a developer for my company, which was called Aristotle. So I had an internet company, and I hired him out of high school. And he taught himself how to program in several languages, and now he's dabbling in AI.
47:28 - T.E.
Oh, so Aristotle has been around a long time, a long time. Yeah, I'm sure.
47:34 - H.J.
I know, I just, like, tipped you off to how old I am.
47:38 - T.E.
So there was a person who worked in computing services, L.J., who I think did some work with you guys way back in the day.
47:52 - H.J.
I don't know if you remember.
47:55 - T.E.
B.J. or B.J.? L.J.
47:59 - H.J.
Can you spell that? L-O-N.
48:01 - T.E.
L-O-N, J. Does not sound, does not ring a bell for me.
48:08 - H.J.
In our 20 plus years of history, we had over 250 employees.
48:13 - T.E.
Yeah, I think he was basically like a contractor, not really a contractor, but he worked at UALR and he would help you all out with different things.
48:29 - H.J.
Yeah, because we didn't know what we were doing. We had a lot of consultants helping us set up our network.
48:43 - D.B.
There were no classes on internet networking at the time. Well, everyone, I think we've reached the point of adjournment. Great discussion. Thanks for attending and hopefully we'll see you back next week. Bye everyone.