Artificial Intelligence Study Group
- Announcements, updates, questions, presentations, etc. as time allows
- Today: CM will informally present. His "prospective [PhD] topic involves researching the perceptions and use of AI in academic publishing."
- Fri. March 21: DD will informally present. His topic will be NLP requirements analysis in the age of AI.
- Recall the masters project that some students are doing and need our suggestions about:
- Suppose a generative AI like ChatGPT or Claude.ai was used to write a book or content-focused website about a simply stated task, like "how to scramble an egg," "how to plant and care for a persimmon tree," "how to check and change the oil in your car," or any other question like that. Interact with an AI to collaboratively write a book or an informationally near-equivalent website about it!
- LG: Thinking of changing to "How to plan for retirement." (2/14/25)
- Looking at the CrewAI multi-agent tool, http://crewai.com, but it is hard to customize; now looking at the LangChain platform, which federates different AIs. They call it an "orchestration" tool. (A minimal sketch of this idea appears after this list.)
- MM has students who are leveraging agents and LG could consult with them
- ET: Growing vegetables from seeds. (2/21/25)
- Found an online course on prompt engineering
- It was good, helpful!
- Course is at: https://apps.cognitiveclass.ai/learning/course/course-v1:IBMSkillsNetwork+AI0117EN+v1/home
- Got 4,000+ word count outputs
- Gemini: writes well compared to ChatGPT
- Plan to make a website, integrating things together.
- VW: you can ask AIs to improve your prompt and suggest another prompt.
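As a rough illustration of the "orchestration" idea mentioned above, here is a minimal sketch of routing one prompt to different model providers through LangChain's common interface. This is not from the meeting; the packages (langchain-openai, langchain-anthropic) are real, but the model names and the retirement-book prompt are illustrative assumptions.

```python
# Minimal LangChain "orchestration" sketch: one prompt, routed to
# whichever model provider is requested, behind a common interface.
# Assumes API keys are set in the environment; model names are
# illustrative, not a recommendation.
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template(
    "Draft a one-paragraph chapter outline for a book on: {topic}"
)

models = {
    "openai": ChatOpenAI(model="gpt-4o-mini"),
    "anthropic": ChatAnthropic(model="claude-3-5-sonnet-latest"),
}

def draft_outline(topic: str, provider: str = "openai") -> str:
    """Send the same prompt to the chosen provider and return its text."""
    chain = prompt | models[provider]  # LCEL: pipe the prompt into a model
    return chain.invoke({"topic": topic}).content

# Example: draft_outline("how to plan for retirement", provider="anthropic")
```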
- We are up to 19:19 in the Chapter 6 video, https://www.youtube.com/watch?v=eMlx5fFNoYc and can start there.
- Schedule back burner "when possible" items:
- If anyone else has a project they would like to help supervise, let me know.
- (2/14/25) An ad hoc group, organized by ES, is forming on campus for people to discuss AI and the teaching of diverse subjects. It would be interesting to hear from someone in that group at some point to see what people are thinking and doing regarding AIs and their teaching activities.
- The campus has assigned a group to participate in the AAC&U AI Institute's activity "AI Pedagogy in the Curriculum." IU is on it and may be able to provide updates now and then.
- Here is the latest on future readings and viewings
- We can work through chapter 7: https://www.youtube.com/watch?v=9-Jl0dxWQs8
- https://www.forbes.com/sites/robtoews/2024/12/22/10-ai-predictions-for-2025/
- Prompt engineering course: https://apps.cognitiveclass.ai/learning/course/course-v1:IBMSkillsNetwork+AI0117EN+v1/home
- https://arxiv.org/pdf/2001.08361
- Computer scientists win Nobel prize in physics! https://www.nobelprize.org/uploads/2024/10/popular-physicsprize2024-2.pdf got an evaluation of 5.0 for a detailed reading.
- Neural Networks, Deep Learning: The basics of neural networks, and the math behind how they learn, https://www.3blue1brown.com/topics/neural-networks
- LangChain free tutorial: https://www.youtube.com/@LangChain/videos
- We can evaluate https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10718663 for reading & discussion.
- Chapter 6 recommends material by Andrej Karpathy, https://www.youtube.com/@AndrejKarpathy/videos for learning more.
- Chapter 6 recommends material by Chris Olah, https://www.youtube.com/results?search_query=chris+olah
- Chapter 6 recommended https://www.youtube.com/c/VCubingX for relevant material, in particular https://www.youtube.com/watch?v=1il-s4mgNdI
- Chapter 6 recommended Art of the Problem, in particular https://www.youtube.com/watch?v=OFS90-FX6pg
- LLMs and the singularity: https://philpapers.org/go.pl?id=ISHLLM&u=https%3A%2F%2Fphilpapers.org%2Farchive%2FISHLLM.pdf (summarized at: https://poe.com/s/WuYyhuciNwlFuSR0SVEt).
- 6/7/24: vote was 4 3/7. We read the abstract. We could start it any time. We could even spend some time on this and some time on something else in the same meeting.
Appendix 1: New proposed 4000/5000 level applied AI course
With industries increasingly relying on AI for decision-making, automation, and innovation, graduates with AI proficiency are in high demand across finance, healthcare, retail, cybersecurity, and beyond. This course offers hands-on training with real-world AI tools (Azure AI, ChatGPT, LangChain, TensorFlow), enabling students to develop AI solutions while understanding the ethical and regulatory landscape (NIST AI Risk Framework, EU AI Act).
Why This Course Matters for Students:
✔ Future-Proof Career Skills – Gain expertise in AI, ML, and Generative AI to stay relevant in a rapidly evolving job market.
✔ Business & Strategy Integration – Learn how to apply AI for business growth, decision-making, and competitive advantage.
✔ Governance & Ethics – Understand AI regulations, ethical AI implementation, and risk management frameworks.
✔ Hands-on Experience – Work on real-world AI projects using top industry tools (Azure AI, ChatGPT, Python, LangChain).
Why UALR Should Adopt This Course Now:
✔ Industry Demand – AI-skilled professionals are a necessity across sectors, and universities must adapt their curricula.
✔ Cutting-Edge Curriculum – A balanced mix of technology, business strategy, and governance makes this course unique.
✔ Reputation & Enrollment Growth – Offering a governance-focused AI course positions UALR as a leader in AI education.
✔ Cross-Disciplinary Impact – AI knowledge benefits students in business, healthcare, finance, cybersecurity, and STEM fields.
By implementing this course, UALR can produce graduates ready to lead in the AI era, making them highly sought after by top employers while ensuring AI is developed and used responsibly and ethically in business and society.
Applied AI (6 + 8 Weeks Course, 2 Hours/Week)
5-month Applied Artificial Intelligence course outline tailored for techno-functional, functional, or technical leaders, integrating technical foundations, business use cases, and governance frameworks.
This can be split into a 6-week certification plus an additional for-credit course built around an actual use case.
I have also leveraged insights from leading universities such as Purdue’s Applied Generative AI Specialization and UT Austin’s AI & ML Executive Program.
Balance: 1/3 Technology | 1/3 Business Use Cases | 1/3 Governance, Compliance & AI Resistance
Module 1: Foundations of AI and Business Alignment (Weeks 1-4)
✔ Technology: AI fundamentals, Machine Learning, Deep Learning
✔ Business: Industry Use Cases, AI for Competitive Advantage
✔ Governance: AI Frameworks, Risk Management, Compliance
· Week 1: Introduction to AI for Business and Leadership
o Overview of AI capabilities (ML, DL, Generative AI)
o Business impact: AI-driven innovation in finance, healthcare, and retail
o Introduction to AI governance frameworks (NIST, EU AI Act)
· Week 2: AI Lifecycle and Implementation Strategy
o AI model development, deployment, and monitoring
o Case study: AI adoption in enterprise settings
o AI governance structures and risk mitigation strategies
· Week 3: Key AI Technologies and Tools
o Supervised vs. Unsupervised Learning (see the code sketch after this module)
o Python, Jupyter Notebooks, and cloud-based AI tools (Azure AI Studio, AWS SageMaker)
o Governance focus: AI compliance and regulatory challenges
· Week 4: AI for Business Growth and Market Leadership
o AI-driven automation and decision-making
o Case study: AI-powered business analysis and forecasting
o Compliance focus: Ethical AI and responsible AI adoption
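The Week 3 distinction between supervised and unsupervised learning can be made concrete in a few lines of scikit-learn. A hedged classroom-style sketch, not course material; the iris dataset and model choices are stand-ins.

```python
# Sketch for Week 3: the same data, used two ways.
# Supervised learning fits features to known labels; unsupervised
# learning finds structure without any labels.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: labels guide the fit, so held-out accuracy is measurable.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: no labels; the model only proposes cluster assignments.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("first 10 cluster assignments:", clusters[:10])
```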
Module 2: AI Technologies in Business Functions (Weeks 5-8)
✔ Technology: NLP, Computer Vision, Reinforcement Learning
✔ Business: AI in business functions - Marketing, HR, Finance
✔ Governance: Bias Mitigation, Explainability, AI Trust
· Week 5: Natural Language Processing (NLP) & AI in Customer Experience
o Sentiment analysis, text classification, and chatbots (see the sketch after this module)
o Business case: AI in customer service (chatbots, virtual assistants)
o Governance focus: Privacy and data security concerns (GDPR, CCPA)
· Week 6: AI for Operational Efficiency
o Business use cases: AI for fraud detection, surveillance, manufacturing automation
o Compliance focus: AI security and adversarial attacks
· Week 7: Reinforcement Learning & AI in Decision-Making
o Autonomous systems, robotics, and self-learning models
o Business case: AI-driven investment strategies and risk assessment
o Resistance focus: Overcoming corporate fear of AI adoption
· Week 8: AI in Marketing, HR, and Business Optimization
o AI-driven personalization, recommendation engines
o Business case: AI in recruitment, talent management
o Compliance focus: AI bias mitigation and fairness in hiring
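Week 5's sentiment-analysis bullet can be illustrated with an off-the-shelf pipeline. A hedged sketch assuming the Hugging Face transformers package; the default model the pipeline downloads is whatever the library ships, so treat this purely as a demo.

```python
# Sketch for Week 5: off-the-shelf sentiment analysis, the building
# block behind the customer-service use cases above.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default model

reviews = [
    "The support chatbot resolved my issue in two minutes.",
    "I waited an hour and the agent never answered.",
]
for review, result in zip(reviews, sentiment(reviews)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}.
    print(f"{result['label']:>8} ({result['score']:.2f})  {review}")
```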
Module 3: AI Governance, Compliance & Ethics (Weeks 9-12)
✔ Technology: Secure AI Systems, Explainability
✔ Business: Regulatory Compliance, AI Risk Management
✔ Governance: Responsible AI, Transparency, Algorithm Audits
· Week 9: AI Governance Frameworks & Global Regulations
o NIST AI Risk Management, ISO/IEC 23894, EU AI Act
o Industry-specific regulations (HIPAA for healthcare AI, SEC for AI in finance)
o AI governance tools (audit logs, explainability reports)
· Week 10: AI Explainability & Bias Management
o Interpretable AI techniques (see the SHAP sketch after this module)
o Case study: Bias in AI hiring systems and credit risk models
o Business responsibility in AI model transparency
· Week 11: AI Security, Privacy, and Risk Management
o Secure AI model deployment strategies
o Governance: AI trust frameworks (e.g., IBM AI Fairness 360)
o Case study: Managing AI risks in cloud-based solutions
· Week 12: AI Resistance and Corporate Change Management
o Strategies for AI adoption in enterprises
o Business case: AI integration in legacy systems
o Ethics: Impact of AI on jobs, social responsibility, and legal liabilities
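For Week 10's interpretable-AI bullet, the SHAP toolkit (listed under "Tools & Technologies" below) can attribute a model's predictions to individual features. A sketch under assumed stand-in data and model (a bundled regression dataset, chosen for API simplicity rather than the hiring or credit case studies), not a prescribed course exercise.

```python
# Sketch for Week 10: per-feature explanations with SHAP.
# Stand-in data and model (diabetes regression), not a course dataset.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])  # shape: (100, n_features)

# Which features push individual predictions up or down, the kind of
# artifact an "explainability report" for an algorithm audit might include.
shap.summary_plot(shap_values, X.iloc[:100])
```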
Module 4: AI Strategy, Implementation, and Future Trends (Weeks 13-16)
✔ Technology: AI Product Development
✔ Business: AI Implementation, Enterprise AI Strategy
✔ Governance: AI Regulatory Compliance & Future Legislation
· Week 13: Overview of AI Deployment and Scalability
o Deploying AI models on cloud (Azure AI Studio, AWS, GCP)
o Business case: Scaling AI solutions in enterprise environments
o Compliance: AI model monitoring, drift detection (see the PSI sketch after this module)
· Week 14: AI for Competitive Advantage & Industry-Specific Applications
o AI in industry, e.g., supply chain, autonomous vehicles, healthcare diagnostics
o Case study, e.g., AI-driven drug discovery and logistics optimization
o Compliance: AI liability and regulatory accountability
· Week 15: AI Governance and Responsible Innovation
o Innovating with AI, e.g., financial services (algorithmic trading, fraud detection)
o Ethics: Ensuring fairness and avoiding discrimination in AI models
o Risk assessment frameworks for enterprise AI adoption
· Week 16: The Future of AI: Trends, Risks & Opportunities
o Generative AI (DALL-E, ChatGPT, LangChain applications)
o AI and Web3, decentralized AI governance
o Case study: AI-powered governance in blockchain ecosystems
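Week 13's "drift detection" bullet has a simple, widely used instance: the Population Stability Index (PSI), which scores how far a feature's live distribution has moved from its training-time distribution. A sketch with synthetic data; the 0.1/0.25 thresholds are common rules of thumb, not a standard.

```python
# Sketch for Week 13: Population Stability Index (PSI) for drift detection.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI = sum((p_act - p_exp) * ln(p_act / p_exp)) over shared bins."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    p_exp, _ = np.histogram(expected, bins=edges)
    p_act, _ = np.histogram(actual, bins=edges)
    # Counts to proportions, clipped away from zero to keep the log finite.
    p_exp = np.clip(p_exp / p_exp.sum(), 1e-6, None)
    p_act = np.clip(p_act / p_act.sum(), 1e-6, None)
    return float(np.sum((p_act - p_exp) * np.log(p_act / p_exp)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)  # distribution at training time
live_scores = rng.normal(0.6, 1.0, 10_000)   # shifted live distribution
# Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major drift.
print(f"PSI = {psi(train_scores, live_scores):.3f}")
```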
Module 5: Capstone Project & Final Presentations (Weeks 17-20; capstone process starts in Week 7/8)
✔ Technology: Hands-on AI Application Development
✔ Business: AI Use Case in Industry
✔ Governance: Compliance Strategy & Ethical AI
· Weeks 17-19: AI Capstone Project
o Develop an AI-driven business solution with governance compliance
o AI application areas: Business analytics, customer engagement, fraud detection
o Report: Governance strategy and AI risk mitigation plan
· Week 20: Final Project Presentations & Certification
o Peer review and feedback
o Industry guest panel discussion on AI’s role in future business strategies
o Course completion certification
Tools & Technologies Covered:
· AI Development: Python, TensorFlow, PyTorch, Scikit-learn, GenAI models
· Cloud AI Platforms: Azure AI Studio, AWS AI Services, GCP Vertex AI
· NLP & Generative AI: ChatGPT, DALL-E, LangChain, BERT, Stable Diffusion
· AI Governance & Risk: SHAP, LIME, AI fairness toolkits
Appendix 2: Transcript
Fri, Mar 7, 2025
0:00 - D. B.
review the questions people asked or whatever. The recording, I didn't mean a video recording, it's a transcript. Probably an imperfect transcript.
0:10 - C. M.
I don't know, I was looking back through some of the past meeting notes and it seems to do a pretty decent job of it. Oh, good.
0:21 - D. B.
I noticed sometimes it gets the, it mistakes who's speaking. But the actual text is pretty good.
1:05 - D. B.
Okay, well, why don't we go ahead. So welcome to the Artificial Intelligence Study Group. And today, C. M. will informally present his prospective PhD topic. Next week, we'll kind of go back to our usual program of reading or viewing videos and talking about them and other standards, kind of repeat topics. But then the week after, D. D. will informally present his PhD topic. So with that, I'm gonna unshare my screen so that C. can share his and he'll tell us about his PhD topic. Sounds good, thank you.
1:52 - C. M.
Take me one moment here to pull it up. Okay, is that visible for you all now?
1:59 - Unidentified Speaker
Yes.
2:00 - C. M.
Okay, so before I get into the actual write-up that I have for the dissertation topic, a little bit of background about me. I do not come from a computer and information science background. I actually come more from a library sciences background, and I am now pushing into the field of information quality and picking up the computer elements kind of as I go. I work for the government in the field of information management, essentially a government librarian of sorts, and my master's is in library and information science. So much more on the collection and categorization and curation of information, more so than how to construct or develop models. So it's much more on that side of things. So I'll kind of push through the background here pretty quickly. It's pretty straightforward stuff as far as an introduction, just talking about the proliferation of generative AI tools, particularly after it exploded back in 2022 after ChatGPT 3.5 was released. The myriad of capabilities that emerged in the six to 12 months following, and also how that kind of spurred a new, I guess, generation of questions about generated content for research purposes. Some of the pros and cons of these tools and potential uses and applications. A very simple search of generative AI in Google Scholar, looking just at articles published since 2023, returns over 47,000 results. And if we connect that generative AI and academia specifically, it still returns over 14,000 results. So the amount of content coming out within the academic community is pretty substantial. That said, most of what I have found in my research to this point has been what the capabilities are and how it's used. There's been a little bit about the perception of these tools and how they're used by the person themselves versus how they're used by third parties, but most of those types of studies are typically at smaller institutions or at the undergraduate level, so it's not specifically looking at academic publication by PhD students or faculty at institutions. So what I'm really wanting to dig into here is within that larger conversation that's happening at that higher level. So it's not necessarily at the undergraduate level and those sorts of things. So one of the challenges that I run into here is it's just going to be looking at a brief window of perspective, capturing the shifting views of utilizing these AI tools for research. In the computer and information science community, it seems that we've been much more forward-leaning into accepting those tools, whereas other fields and academic disciplines may not be as accepting of generative AI or even LLMs in general in assisting with research. So I want to kind of look at both what the perceptions are of, if I use it, is it going to be something that's credible? Do I need to do a lot more digging on? Or if it's used by another party, do I need to have increased scrutiny? So the types of questions that I'm really wanting to dig into with my research is just kind of outlined here. In what ways do PhD candidates and university faculty use these tools? Then shifting into what are the challenges using those tools, including hallucinations, misattribution, ethical concerns. And do those concerns differ between academic disciplines? So any kind of questions up to that point about direction?
Because this is very much in an early stage, and I even have a quick survey that I'm in the rough draft phase of putting together asking about these perceptions, but I hadn't seen a lot published at this level, but it's something that it could have fallen between my gaps. So part of wanting to present this today is seeing what types of things have you all seen about perceptions of AI use, particularly when it's pointed at publication that I might not have seen. So I don't know if anyone has any thoughts on that.
7:26 - A. B.
Just a thought, like, does it, I don't know, from the perspective of, cause you, like, I think this happened with like the New York times or something, right. Where their, their information was kind of embedded within an LLM and kind of getting, you know, used without reference and that sort of stuff. Is that like one of the potential ethical things that we would encounter here? If we, you know, in mass use them is that, you know, sometimes things aren't getting referenced or that, that sort of stuff in the right way. It could be referenced.
7:54 - C. M.
it could be hesitancy towards attributing the use of AI in research. I know a lot of journal publications currently are in the early phases of creating their own AI guidelines, but a lot of that's still in flux. So as that is in flux, what should the expectations be for attributing AI? Is it a footnote? Is it, I know a couple of journals said that you need actually have a separate acknowledgments portion for where you're citing AI so that it's not just looking like you produced everything on your own, that you're actually, it's not a co-author, which is something that we saw last year coming out a couple of times where someone actually wrote, put the AI tool in as a co-author in their paper. So it's not to that level, but are we attributing it in any way and how are we attributing that material? Another question that this raises is if there are particular knowledge domains, again, computer and information science is more forward-leaning into adopting these tools, are they just able to outproduce other knowledge domains? And in doing so, if, say, there's 100 computer and information science journal articles coming out for the same time frame that 10 in another discipline are coming out, future LLMs and generative AI tools, is their learning being shifted towards a particular academic discipline? And how does that impact the biases that may exist or even the writing style of tools as they emerge? That's a pretty interesting issue.
9:38 - D. B.
I never thought of that. You're suggesting if one academic field publishes a lot more than another, maybe it'll have more, it'll have more impact on what the LLM learns about how to write academic style stuff. Yeah, and good. Well, just piggyback off your thoughts there.
9:57 - A. B.
Because like, you know, experiencing this firsthand, right? Where it's like, hey, you know, in computer science, it's very, you know, code driven, right? And come up with it, you can come up with a new idea and kind of in your in your domain. And, you know, okay, with a brand new concept or something. Right. And I think that's what you see a lot in computer science research, but I've kind of experienced that firsthand where it's like, well, Hey, you know, I got this idea, you know, this domain I'm in like, Hey, what are some other ideas that I can use? And then ask, you know, working with chat GPT, maybe, you know, get some working code that actually implements it. Right. But which, uh, you know, kind of really, you know, opens up the possibilities, whereas, you know, in a more of a Arts and literature or something like that, or like, you don't, you don't necessarily have that, that that creative tool, right, to kind of, you know, smash ideas together and come up with something new, I guess.
10:53 - C. M.
Well, and kind of this next question goes to the quantity of what's being produced by these different disciplines. So another layer of it is there has been some degree of hesitation, again, not only to acknowledge, but potentially to utilize another person's work within your own research and to cite it and include that if it has a lot of AI generated content. So I'm wanting to ask questions about whether or not when you come across a source, if it is acknowledging the use of those generative AI tools, are people in particular domains within academia hesitant to include those sources in their own papers. Because it's not just a quantity issue, it's also, depending on how these models learn, if it's being cited 10 times versus 1,000 times, there might be more weight put towards a particular publication. So we're looking not only at what's being produced, but the human interaction of, am I willing to cite this article? Am I willing to, again, essentially give it that rubber stamp through my own research. So that's kind of the next point and what that's really getting down to. The next one's pretty standard. Do PhD candidates and university faculty draw a line between use of those types of tools in coursework versus publication? This more, again, I'm looking at a snapshot, so if myself or someone else were to reproduce this research in five years, does that shift? In what ways does it shift? Again, this is getting back to the numbers and rapid adoption. We're already a couple of years out from this initial kind of generative AI boom. So we're going to start seeing some of those early departures across different fields of who has adopted and who hasn't. And in some ways, this type of research might be a little late. But then again, I haven't seen a lot of this out there.
13:10 - D. B.
In the past, it's been pretty much accepted that especially authors from foreign countries would have various grammatical glitches and spelling mistakes in their manuscripts that are even published and may be influential because their English is not perfect. But I'm wondering if now the expectation is that people will do spell checks and grammar checks and kind of style upgrades using AIs. And you don't even have to acknowledge it because the AI isn't writing, it's just catching mistakes. And I think maybe the expectation is that authors should not have those kinds of problems anymore. If there are such problems, then it just shows they didn't use the AI tools appropriately.
13:58 - C. M.
I think that's going to get a little bit more challenging moving forward though, especially if we look at Microsoft products now, it used to be that they had basic editorial tools built in. Well, now we have Copilot built in. So whereas it might've just been looking at grammatical errors or improving the conciseness of language, if you open up Word and if you set it up right, you can actually have it start assisting in the generation of, okay, what should the next sentence be? What types of things should I be adding into my write-up here. So is that really the researcher doing all that work, or are they relying too heavily on that generative AI assistance? Again, within the computer and information science realm, it looks like we've been pushing more towards accepting all of these tools as soon as they come out. I mean, Copilot for the use in research or presentations seems like it's going 100 miles an hour.
14:58 - C. M.
Whereas, again, I come from a library science background.
15:01 - C. M.
When I'm working in a library, working with history majors or literature majors, they don't want to see really any use, at least in the spheres that I've been in, of these generative AIs in the production of products or in the creation of products.
15:20 - D. B.
My perception is that if something reads like it was generated from an AI, AI, like if you read something and it just sort of hits you that it reads like it's AI generated, I will tend to discount it. At least if it's not obvious to me that it was written by an AI, then I'm willing to accept that an AI might have been used. But if it sort of looks obvious to me that it's AI generated, then I will lose some respect for it. I don't know how other people feel.
15:58 - Multiple Speakers
And that kind of gets to the next point is right now a lot of that assessment is based off of, okay, does it seem to make sense?
16:07 - C. M.
Is the language correct? Is it citing things properly? Does it seem like it's actually written by a human versus an AI? But as these tools evolve and get better and better, some researchers are concerned about even if it reads like a human wrote it, if I acknowledge that I used AI in the production of, or in the generations of questions, or in the brainstorming phase of my research, if I just put that acknowledgement in, like some of these journals are wanting, is that going to immediately impact the perception of my own product? We're calling the question, well, what part of this was my own contribution versus the AI's contribution. And that's a lot of what I'm seeing right now in the journals. And they're back and forth about how do we want acknowledgment to look.
16:59 - D. B.
I recently submitted a paper and the journal had a requirement like that. And I did use some AI to draft. I fed in the whole paper and said, draft me an abstract. And then I revised it by hand, but I couldn't get the abstract to the point where an AI checker would register it as 0% AI. I got it partway, and I did revise it, but I couldn't erase that taint, I guess, is what it felt like. And so I felt I had to admit that I used AI in the required acknowledgments. So I'll show you.
17:40 - C. M.
I'm going to share my screen. I'll stop sharing here so that you can show it then.
17:49 - A. B.
Okay.
17:50 - D. B.
Those AI checkers too are pretty well known to produce a lot of false positives, right?
17:58 - Unidentified Speaker
Yeah, that too. It'll take me a minute to share screen. Okay.
18:04 - D. B.
So this is the journal, it's called Technological Forecasting and Social Change, and this is what you're supposed to do, you know?
18:17 - C. M.
Well, and there it's specifically pointing to improving the process of readability and those sorts of things. Other journals, from what I've seen so far, their policies may be more open to the use of generating content with the AIs, again, as long as it's acknowledged, whereas others don't want any AI use whatsoever, which kind of begs the question, well, does that include things like Grammarly, or does that include, or do we even consider that AI at that point? And I don't necessarily know that I would consider that generative AI, because it's just correcting some of those issues. But the journals may have differences of opinion.
19:11 - D. B.
If you're asking how people who publish papers feel about it, I was uncomfortable with it, but I did do what it said. And I did make a statement like this, whatever. I don't remember exactly what I said, but I said, yeah, I used it to help draft some initial passages, which I then revised. I don't know. I said something like that. And that's what it said I have to do. Now, I'm wondering if maybe reviewers aren't going to like that. Did I kind of shoot myself in the foot by admitting it?
19:48 - C. M.
I don't know. Right. And even if it's not a reviewer, if someone else is doing research in a similar field, will that impact, if this journal puts out there that you did use AI to some degree, will that impact other people wanting to cite your research? Because it does have that AI tag to it. And that is going to depend very much so on the specific academic discipline.
20:13 - D. B.
Well, I'm going to unshare my screen since you can share yours again. So your research questions are really about how do people feel about this, not how well does AI work or something like that.
20:33 - C. M.
Right. I don't come from necessarily a technical background, so what has more informed my direction on research is almost more like a library use study. So, how are people engaging with these tools and how do they feel about other people using these tools is kind of the first half of it.
20:53 - D. B.
I'd be curious to know how, you know, maybe some people here who have published papers view this in terms of their personal opinions or personal reservations, because that's kind of what you're interested in, right? We got a few people here who have published.
21:13 - J. H.
I personally view it as a positive because it could eliminate bias in how individuals introduce and understand the scope of sources, sort of creating a more level and inclusive playing ground for thought and innovation on top that playing field. You know, I think that ultimately what we need is probably unified guidance for the ethical use of AI and academia, and that may vary by field. But I think that that would be something really interesting to get to in the next few years.
21:53 - D. B.
Maybe, C., maybe, you know, in view of what J. just said, Maybe your type of work could help inform academic professional organizations to generate policies and ethical guidelines. They don't know what people are thinking, and they need to in order to generate guidelines that make sense to the people involved. Right.
22:18 - C. M.
And so the first, there are kind of two prongs to the proposed research. The first is essentially a survey of researchers and people that have submitted things for publication to see what their concerns and perceptions are. The other side of the research is going to be looking at journals that have high turnover. So looking at like Cabell's journal catalog to see the top journals out there, who has an AI policy, and then comparing those policies against each other to see what those standards currently are, how they differ across disciplines, and then inform just general researchers kind of what they're looking at when they go to publish in a particular arena. One of the potential contact points that I have to get the initial survey out that I was looking into is actually the American Library Association. They're very well connected across academia across different domains, and they're struggling with a lot of these same questions. So as far as getting it out there and getting a response, I thought they might be a really good foot in the door, and I've been a member of the American Library Association for a few years now.
23:43 - D. B.
Anyone else have any comments or opinions, questions? Yeah. Well, I had another question.
23:48 - A. B.
I know it's just seems like this really from the angle of like kind of, writing the, you know, written research or whatnot, but what about from the perspective of just kind of like aggregating, you know, research like source material, right? So like, for example, I know they, the library department here had showed off a, uh, this, uh, this additional kind of feature in the in the, the library database where it's essentially taking a prompt, so if I have a topic that I want to look into and essentially parses that and then translates it into a bunch of library database queries and then pulls the results and then summarizes them for you and so forth. Then I personally have used them to take maybe a long article that I'm not really sure if it's got what I need in it, but I'll feed it through a LLM to get a quick summary to to say, hey, is this even something that I want to research?
24:50 - C. M.
Let me actually really quickly, I wasn't initially going to pull it up here because it's in a very, very early draft phase. We'll see if it comes up. OK. So I've started putting together just a... At this point, it's more of a brain dump of things that I was thinking could lead to questions. And one of the first questions that I do have here is, if you've used generative AI tools in your research, in what way are you using it? It's everything from brainstorming, gathering and summarizing, editorial work, creating visualizations, summarizing content, both for within the paper, but also for abstracts, potentially a journal for the publication. So it has quite a bit there. That doesn't necessarily mean that it's complete; it doesn't necessarily cover every option. But that seems to be kind of getting at what you're talking about there. Yeah. No, that makes sense. Yep.
25:58 - Unidentified Speaker
Yep.
25:59 - C. M.
So that type of question is definitely something that I'm going to be asking in the survey.
26:08 - C. M.
And then not only if people are doing it, but what is the perception from a third party? If you see that someone else has, again, AI acknowledgement, does it really matter if they used it during the brainstorming portion of that research? Maybe, maybe not. Is it more potentially problematic if they're using it to summarize content for an introduction or for a conclusion paragraph? Those types of things might be a little bit more, there might be some wariness about the content there. So it's not only seeing how people are using it, but at each of these steps, what is the perception of your own use and your confidence in using it for that? But also, are you confident in someone else using it for that.
26:56 - Unidentified Speaker
To mention, and thank you, Dr.
27:04 - M. M.
V., because he created very good links. If you check these links, you will see you can use it for many different tasks. So I just want to say thank you, V., and remember everybody to go and check the links. They have summarization, you have everything for research, and V. W. probably can add a little information more. Well, thanks, Dr. M.
27:38 - V. W.
It was really inspired by your providing NVIDIA special purpose links that we can all use to improve our work, and that kind of got me thinking about that whole area. And so I have 81 tools from LLMs to generative AI tools, to image tools, to text to speech, blah, blah, blah. And if you go to WDV, WDV dot com slash capital A I T lowercase O O L S A I tools, it'll bring up the 81 topic areas. And the purpose of it was to be a shortcut for people who are really trying to make fast tracks of trying different LLMs or generative AIs for their particular field. I will say on the library science side of this, that a graduate student I've been working with, R. M., has made enormous strides in developing pre-dissertation appendices. This was inspired by D. B.'s policy that when he gets a new PhD student, he first has them do a literature survey to not only find the lay of the land of the literature, but to begin to kind of level up your understanding of the whole field. And so R. has created this incredibly detailed appendix. It's now up to 15 appendices and all of the specific areas of her dissertation research. And she's also compiled references that are associated with each of the now 15 sections, excuse me. And so it's terrific because here's the unintended consequence of the payoff that directly relates to library science. Now, whenever she or I are prompting LLMs to generate code for this topic area, we take a stripped-down version of all the appendices and we load them in the front end and saying, we're getting ready to ask you some questions to code an application or a demonstration. Please review this material that's been compiled to contextualize your response to us. So that's prompt number one is this giant brain dump of everything we've collected about the field that fits within the token length of the LLM, 200K for Claude level models. And then the second prompt is here are some applications that we have written that are technically rigorous and graphically beautiful that satisfy our requirements for interactivity and explainability and demonstration of the particular phenomena that we're studying. And so our second prompt is here's work we've already programmed successfully through many shots. And here's the look and feel that we want on the result. And then the third prompt is whatever the heck it is we're currently doing in the context of those first two prompts. And so recently with the incursion of Claude 3.7 Sonnet and ChatGPT 4.5, we now have the LLMs coming back and saying, your token cost per query is going up. Do you want to process this query and use more of your points? And it gives us three choices, small, medium, and large. So we peg it and we say, we want to use all our points per query to have you satisfy this. So now instead of giving us 100 lines of code, we're getting 1,500 lines of code, 2,000 lines of code, and developing these tools in many fewer iterations or shots, as we call them. And so like in the last two evenings, we've generated 2,500 lines of code. And it works out, if you say that a good programmer does 50 lines a day, working out to a month per day of equivalent work that we accomplish using basically a library science approach that D. asks his PhD students to use, which is a great thing. So that's it.
31:55 - D. B.
Any other comments or questions for C.? Or discussion issues? So this is Y.
32:05 - Y. P.
First, great topic, and I missed some of the comments in the middle, but one group that might be interested in this topic is lawmakers. So when they are making laws, and right now there is no official act in the U.S. There are some states that have it, but how do people feel about use of generative AI? Now, I know there is focus on certain sectors, but I think that is going to play an important role in building even regulations, policies, or rules. So that thought came to my mind, I wanted to just share, in case.
32:58 - J. H.
I think that's actually really good feedback and I would actually encourage you to look at the draft Brazil legislature which is sort of the most fleshed out look at IP and copyright issues and it also has a specific carve out for copyright within non-commercial domains so it seems like they are sort of for academic use of AI and other potential public interest use cases. That's a very good point.
33:33 - C. M.
I hadn't thought about the legal side of things terribly much, so I'll definitely look into that.
33:41 - J. H.
And what was that resource that you mentioned? So the Brazil draft legislation, I've got it open in a tab here. It's the major Brazil draft legislation. Is it written in Portuguese? I found an English version the other day. So it's bill number 2338, and it's called the Proposed AI Bill.
34:19 - D. B.
All right. Any other comments? Anybody?
34:24 - M. M.
Jose wants to talk. Those resources, I think they're very interesting.
34:29 - D. B.
Hello, everyone. The only problem I found with the resources with AI in use is that they don't focus on application.
34:38 - J. A. O.
They focus on how to maintain accountability, fairness, and how to train the model itself, rather than how to apply it. And it's well known that in most universities, the decision is made by the professor of the subject rather than being a mandatory use in all education. And that's creating a big gap, I think, in the universities that allow the use and actually teach students how to check, fact check the prompts, fact check the facts that the LLMs are throwing. So also, I think that's one of the things that hasn't been actually studied yet, the mandatory use of it.
35:22 - M. M.
But many universities, they have a contract already with OpenAI, many universities, so it's allowed to use it. Then the specific names to give, I just don't remember right now, but I think at least 20 or 30. But universities already have a contract with Open...
35:50 - Multiple Speakers
Yeah, they have invested a lot in the application itself, but they have not yet found a way to make it used, like really used in all classes.
36:01 - J. A. O.
It's just for the moment is to the decision of each professor in each subject. It's not like we need to teach this. It's more like if you want, you can use it. If you don't want, You cannot use it, like a free choice, more of it.
36:20 - M. M.
We can check this, yeah, we can check this. Right.
36:24 - J. A. O.
There's actually a study, I will check it, that I think it's 87% of the universities, they allow to use, but when you see the professors, like 97%, they don't allow it. Even though it's allowed, they don't want it in the class. Big contradictory thing because it's allowed but it's not used.
36:48 - M. M.
97 is a huge percentage. Yeah, and even they have like big contracts.
36:53 - J. A. O.
It was like 500 million that they invested and they are not using even 10% of it.
37:01 - M. M.
Exactly, because they invested, they have contracts, they have with OpenAI contracts. And I'm sure that this University of Arizona was the first one and I didn't, and I work with this university in California also, they have a contract. Well, that's not good, not good. They will start, don't worry about this, they will start. This is like internet, you know, not everybody was using, but after one or two years everybody will use it. Or maybe they use it, but they don't say that they're using the classes.
37:46 - V. W.
Another issue that comes up along the library science slash exploding AI axis is the fact that things are moving so fast that former notions of releasing information, like publication cycles that require review, the use of the post office, the use of slow email, these are giving way to major releases of tools every week or so that are changing the landscape. So then the question is, how do you vet and distribute knowledge content that's been generated at a rate that matches the rate of change of the area? Because we want to distribute our results, we want to distribute our findings, we want to bring people along as V. W. mentioned to make sure that everybody's being able to participate to the maximum degree that they can in this revolution. And it's like the drinking from the fire hose has just become too much, because we want other people to drink from the fire hose and not everybody is a willing subject.
39:03 - Y. P.
C., this is Y. again. I have one more question. Are you considering any psychological dimensions? Will you be also considering how people think, how people behave, how they act, trust factor, all these dimensions into your research? I'm sorry, I was having a little bit of trouble.
39:31 - C. M.
It seems you're breaking up there, so I didn't quite get your question there.
39:36 - Y. P.
I'm sorry, I'll switch off my video. So are you also considering the psychological dimensions into your research, like how people feel, their trust and their fears and all those things into your research? A little bit.
39:50 - C. M.
I'm not going to be able to go into a deep, deep dive in that, but it definitely will have a touch on trust. And confidence and capabilities. So that absolutely would come through in the research.
40:05 - M. M.
But Y. asked mostly about mental issues and psychology. And there is Y. P. very good study about the stress, accepting the new technology. And I really recommend you, I don't remember the name of this book, but I like the book. So the stress It is involved, obviously, and everybody has a level of stress to accept or not accept it, you know? So you're completely right. Not everybody will accept it and not everybody will feel comfortable. Because there is also, yesterday, very good from M., what is learning and what is survival mode, okay? So if you are in the survival mode, means that you are in stress, actually you cannot learn, okay? So this is the tricky part, that you can make people really like it, love it, to be able to learn. Otherwise, they're in stress, so they're not capable. They're in the survival mode. Watch the video, and there are books like this, but it's very good. Good question, and we will talk additional because I like this. I don't, I mean, maybe more of you can comment about everybody's learning and everybody express distress. We express this or feel this way when we started the beginning, this video conferences, remember? We don't know what kind of tool Zoom or Team Google. Yeah, so we express this stress over there. And on the top of this now is coming with the AI and more stress and more. But we have survived and we seek to thrive.
42:05 - V. W.
Do you have a link to this, V?
42:08 - Multiple Speakers
If you have a link to this content area, you could throw it in the chat. It would be nice for us to review.
42:16 - M. M.
I would love to, but right now I don't remember. Try for next time.
42:22 - V. W.
Okay, thanks. Yeah, I will try it.
42:26 - M. M.
It's very good. And maybe Y. can add a lot more, because this is his area.
42:35 - Unidentified Speaker
Yes, ma'am.
42:36 - Y. P.
Y. can add this.
42:38 - M. M.
But this is what we need to include. I'm thinking about this way, because AR augmenting people give us the more capabilities, but at the same time, they are helping us or they damage us? It's your kind of your question too, because maybe in one moment they damage. And this is what they discuss, that they don't give us enough cognitive capabilities or stuff like this. Yeah, they're interesting studies. Very interesting. I love this very much. All right.
43:19 - D. B.
I'd like to go on and talk about one more thing here, if I can get my screen to behave. So some master's degree students are working on using AIs to write books or equivalent informational websites. And one of, at least one of them is here today. E. T. is here. And I wonder if you could give us a brief update on what you're doing. And if you have any questions or need any guidance on how to proceed, anything like that.
44:08 - E. T.
Hello. I want to say I started, I'm sorry. Lastly, actually, about, it wasn't last week but the week before, I talked about the prompt engineering course. So I finally finished that course and I can say that I really enjoyed using some of the prompts in that course. It really helped me to refine more information. The best one that I liked is the Nova system. And it gives a different expert's perspective to analyze the problem and give each expert's guidance on the problem. So I'm definitely planning to use more and refine more information using the Nova system. And I also checked out Dr. V.'s link and I was amazed, honestly, with the number of AIs, because at this point, I lost count of the AIs. And it's really nice to have one place to be able to reach all of those AIs. And the variety is, again, amazing. There are very different AIs for almost everything.
45:36 - Unidentified Speaker
OK.
45:37 - D. B.
How is your proposal going?
45:39 - E. T.
I have good progress on that.
45:43 - E. T.
It's still not completed, unfortunately. The school I work at is going very busy at this point. We're going towards the end of the year and the testing time. So unfortunately, I couldn't finish it yet. But I have a good progress on that.
46:03 - D. B.
OK. All right. I don't think anybody else in the group is here. OK, any other thoughts or comments before we adjourn? Next time, I guess we'll view the video that we've been viewing and talk about it.
46:30 - Unidentified Speaker
All right, well, thanks.
46:33 - D. D.
for being here and we'll see you next time. See you guys. Bye. Take care.
46:42 - Y. P.
Take care. Thank you.