Artificial Intelligence Study Group
- Announcements, updates, questions, presentations, etc. as time allows
- Anyone go to the Wednesday, Feb. 26 presentation on AI by Windstream? Thanks, MM and YP, for briefing us.
- When possible: briefing from VK on the AI content of a healthcare data analytics conference he attended in Florida (informal): the CDO Healthcare Exchange (Fort Lauderdale)
- Fri. March 7: CM will informally present. His "prospective [PhD] topic involves researching the perceptions and use of AI in academic publishing."
- Fri. March 21: DD will informally present. His topic will be NLP requirements analysis and the age of AI.
- News: new freshman level AI course! See details in the appendix below.
- News: https://acceleratelearning.stanford.edu/story/the-future-is-already-here-ai-and-education-in-2025 (From TE).
- Recall the masters project that some students are doing and need our suggestions about:
- Suppose a generative AI like ChatGPT or Claude.ai was used to write a book or content-focused website about a simply stated task, like "how to scramble an egg," "how to plant and care for a persimmon tree," "how to check and change the oil in your car," or any other question like that. Interact with an AI to collaboratively write a book or an informationally near-equivalent website about it!
- LG: Thinking of changing to "How to plan for retirement." (2/14/25)
- Looking at the CrewAI multi-agent tool, http://crewai.com, but it is hard to customize; now looking at the LangChain platform, which federates different AIs. They call it an "orchestration" tool.
- MM has students who are leveraging agents, and LG could consult with them.
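The "orchestration" idea mentioned above can be sketched in plain Python. This is a conceptual toy, not LangChain's or CrewAI's actual API; the backend names and stand-in functions are made up for illustration:

```python
# Conceptual sketch of what an "orchestration" layer does: route each step
# of a task to a different AI backend and chain the outputs together.
# The backends here are stand-in functions, not real model APIs.

def outline_model(prompt):
    # stand-in for a model that is good at outlining
    return f"OUTLINE({prompt})"

def drafting_model(prompt):
    # stand-in for a model that is good at long-form drafting
    return f"DRAFT({prompt})"

BACKENDS = {"outline": outline_model, "draft": drafting_model}

def orchestrate(task, steps):
    """Run `task` through a pipeline of named backends, feeding each
    step's output into the next -- the core of an orchestration tool."""
    result = task
    for step in steps:
        result = BACKENDS[step](result)
    return result

print(orchestrate("how to plan for retirement", ["outline", "draft"]))
# -> DRAFT(OUTLINE(how to plan for retirement))
```

In a real orchestration framework, each backend would be a call to a different hosted model, and the pipeline definition would handle retries, memory, and tool use.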
- ET: Growing vegetables from seeds. (2/21/25)
- Found an online course on prompt engineering
- It was good, helpful!
- Course is at: https://apps.cognitiveclass.ai/learning/course/course-v1:IBMSkillsNetwork+AI0117EN+v1/home
- Got outputs of 4,000+ words
- Gemini: writes well compared to ChatGPT
- Plan to make a website, integrating things together.
- VW: you can ask AIs to improve your prompt and suggest another prompt.
- We are up to 19:19 in the Chapter 6 video, https://www.youtube.com/watch?v=eMlx5fFNoYc and can start there.
- Schedule back burner "when possible" items:
- If anyone else has a project they would like to help supervise, let me know.
- (2/14/25) An ad hoc group, organized by ES, is forming on campus for people to discuss AI and the teaching of diverse subjects. It would be interesting to hear from someone in that group at some point to see what people are thinking and doing regarding AIs and their teaching activities.
- The campus has assigned a group to participate in the AAC&U AI Institute's activity "AI Pedagogy in the Curriculum." IU is on it and may be able to provide updates now and then.
- Here is the latest on future readings and viewings
- We can work through chapter 7: https://www.youtube.com/watch?v=9-Jl0dxWQs8
- https://www.forbes.com/sites/robtoews/2024/12/22/10-ai-predictions-for-2025/
- Prompt engineering course: https://apps.cognitiveclass.ai/learning/course/course-v1:IBMSkillsNetwork+AI0117EN+v1/home
- https://arxiv.org/pdf/2001.08361
- Computer scientists win Nobel prize in physics! https://www.nobelprize.org/uploads/2024/10/popular-physicsprize2024-2.pdf got an evaluation of 5.0 for a detailed reading.
- Neural Networks, Deep Learning: The basics of neural networks, and the math behind how they learn, https://www.3blue1brown.com/topics/neural-networks
- LangChain free tutorial, https://www.youtube.com/@LangChain/videos
- We can evaluate https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10718663 for reading & discussion.
- Chapter 6 recommends material by Andrej Karpathy, https://www.youtube.com/@AndrejKarpathy/videos for learning more.
- Chapter 6 recommends material by Chris Olah, https://www.youtube.com/results?search_query=chris+olah
- Chapter 6 recommended https://www.youtube.com/c/VCubingX for relevant material, in particular https://www.youtube.com/watch?v=1il-s4mgNdI
- Chapter 6 recommended Art of the Problem, in particular https://www.youtube.com/watch?v=OFS90-FX6pg
- LLMs and the singularity: https://philpapers.org/go.pl?id=ISHLLM&u=https%3A%2F%2Fphilpapers.org%2Farchive%2FISHLLM.pdf (summarized at: https://poe.com/s/WuYyhuciNwlFuSR0SVEt).
- 6/7/24: vote was 4 3/7. We read the abstract. We could start it any time. We could even spend some time on this and some time on something else in the same meeting.
Appendix 1: Details on new freshman level AI course
CPSC 1380: Artificial Intelligence Foundations
Course Description
Credit Hour(s): 3
Description: This course introduces key principles and practical applications of Artificial Intelligence. Students will examine central AI challenges and review real-world implementations, while exploring historical milestones and philosophical considerations that shed light on the nature of intelligent behavior. Additionally, the course investigates the diverse types of agents and provides an overview of the societal impact of AI applications.
Prerequisites: None
Course Learning Objectives
Upon successful completion of this course, students will be able to:
· Describe the Turing test and the “Chinese Room” thought experiment.
· Differentiate between optimal reasoning/behavior and human-like reasoning/behavior.
· Differentiate the terms: AI, machine learning, and deep learning.
· Enumerate the characteristics of a specific problem related to Artificial Intelligence.
Learning Activities
· Overview of AI Challenges and Applications - Introduces central AI problems and highlights examples of successful, recent AI applications.
· Historical and Philosophical Considerations in AI – Discusses historical milestones in AI and the philosophical issues that underpin our understanding of artificial intelligence.
· Exploring Intelligent Behavior
o The Turing Test and Its Limitations
o Multimodal Input and Output in AI
o Simulation of Intelligent Behavior
o Rational Versus Non-Rational Reasoning
· Understanding Problem Characteristics in AI
o Observability: Fully Versus Partially Observable Environments
o Agent Dynamics: Single versus Multi-Agent Systems
o System Dynamics: Deterministic versus Stochastic Processes
o Temporal Aspects: Static versus Dynamic Problems
o Data Structures: Discrete versus Continuous Domains
· Defining Intelligent Agents - Explores definitions and examples of agents (e.g., reactive vs. deliberative).
· The Nature of Agents
o Degrees of Autonomy: Autonomous, Semi-Autonomous, and Mixed-Initiative Agents
o Decision-Making Paradigms: Reflexive, Goal-Based, and Utility-Based Approaches
o Decision Making Under Uncertainty and Incomplete Information
o Perception and Environmental Interactions
o Learning-Based Agents
o Embodied Agents: Sensors, Dynamics, and Effectors
· AI Applications, Growth, and Societal Impact - Provides an overview of AI applications and discusses their economic, societal, and ethical implications.
· Practical Analysis: Identifying Problem Characteristics - Engages students in exercises to practice identifying key characteristics in example environments.
Tentative Course Schedule
Subject to change at the discretion of instructor.
Week
Topics
Learning Activities
1
Course Introduction & Overview of AI Problems
· Overview of central AI challenges
· Examples of recent successful applications
· Lecture introducing course objectives and structure
· Reading assignment on current AI trends
2
Philosophical Issues and History of AI
· Examination of philosophical issues in AI
· Overview of AI’s historical evolution
· Student presentations summarizing key course takeaways
· Course review session and Q&A in preparation for the final assessment
3
What is Intelligent Behavior? I – The Turing Test and Beyond
· The Turing test and its flaws
· Introduction to related philosophical debates (e.g., Chinese Room)
· Lecture with historical context
· Small-group discussion on Turing test limitations
· Reading assignment on classic AI thought experiments
4
What is Intelligent Behavior? II – Multimodal I/O & Simulation
· Multimodal input and output in AI
· Simulation of intelligent behavior
· Demonstration of multimodal systems (videos/demos)
· Lab session: Explore a simple simulation environment
· Reflective writing: How does simulation approximate intelligence?
5
Intelligent Behavior: Rational vs. Non-Rational Reasoning
· Comparison of optimal (rational) decision-making and human-like (non-rational) behavior
· In-class debate on the merits of optimality vs. human-like reasoning
· Case study analysis
6
Problem Characteristics I – Observability and Agent Interactions
· Fully vs. partially observable environments
· Single vs. multi-agent systems
· Group workshop: Analyze example environments for observability and interaction challenges
7
Problem Characteristics II – Determinism, Dynamics, and Discreteness
· Deterministic vs. stochastic systems
· Static vs. dynamic and discrete vs. continuous problem spaces
· Hands-on group exercise: Map out characteristics of a provided problem scenario
· Group discussion on design implications
8
Defining Agents: Reactive and Deliberative
· What constitutes an agent
· Examples of reactive versus deliberative agents
· Interactive lecture with in-class examples
· Group exercise: Classify agents from provided case studies
9
Nature of Agents I – Autonomy and Decision-Making Models
· Autonomous, semi-autonomous, and mixed-initiative agents
· Reflexive, goal-based, and utility-based decision frameworks
· Interactive exercise: Design a decision-making framework for a hypothetical agent
· Group presentations of frameworks
10
Nature of Agents II – Decision Making Under Uncertainty & Perception
· Handling uncertainty and incomplete information
· The role of perception and environmental interactions in agent behavior
· Lab: Experiment with a simple decision-making simulation
· Group discussion on sensor integration challenges
11
Nature of Agents III – Learning and Embodiment
· Overview of learning-based agents
· Embodied agents: Sensors, dynamics, and effectors
· Group lab: Explore embodied agent models using simulation tools
· Group discussion on design trade-offs
12
AI Applications, Growth, and Impact
· Survey of AI applications across industries
· Economic, societal, ethical, and security implications
· Case study analysis: Evaluate the societal impact of an AI application
· Group discussion on ethical dilemmas and future trends
13
Deepening Understanding Through Application
· Practice identifying problem characteristics in real/simulated environments
· Additional examples on the nature of agents
· Extended discussion on AI’s broader impacts
· Interactive workshop: Analyze a complex AI scenario in small groups
· Peer review of group findings
· Hands-on exercises using simulation tools or provided datasets
In today's AI-driven world, professionals across all levels—graduate, undergraduate, and PhD students—must develop a comprehensive understanding of AI technologies, business applications, and governance frameworks to remain competitive. The Applied AI for Functional Leaders course is designed to bridge the gap between AI innovation and responsible implementation, equipping students with technical skills in AI development, strategic business insights, and expertise in governance, compliance, and risk management.
With industries increasingly relying on AI for decision-making, automation, and innovation, graduates with AI proficiency are in high demand across finance, healthcare, retail, cybersecurity, and beyond. This course offers hands-on training with real-world AI tools (Azure AI, ChatGPT, LangChain, TensorFlow), enabling students to develop AI solutions while understanding the ethical and regulatory landscape (NIST AI Risk Framework, EU AI Act).
Why This Course Matters for Students:
· Future-Proof Career Skills – Gain expertise in AI, ML, and Generative AI to stay relevant in a rapidly evolving job market.
· Business & Strategy Integration – Learn how to apply AI for business growth, decision-making, and competitive advantage.
· Governance & Ethics – Understand AI regulations, ethical AI implementation, and risk management frameworks.
· Hands-on Experience – Work on real-world AI projects using top industry tools (Azure AI, ChatGPT, Python, LangChain).
Why UALR Should Adopt This Course Now:
· Industry Demand – AI-skilled professionals are a necessity across sectors, and universities must adapt their curricula.
· Cutting-Edge Curriculum – A balanced mix of technology, business strategy, and governance makes this course unique.
· Reputation & Enrollment Growth – Offering a governance-focused AI course positions UALR as a leader in AI education.
· Cross-Disciplinary Impact – AI knowledge benefits students in business, healthcare, finance, cybersecurity, and STEM fields.
By implementing this course, UALR can produce graduates ready to lead in the AI era, making them highly sought after by top employers while ensuring AI is developed and used responsibly and ethically in business and society.
Applied AI (6 + 8 Weeks Course, 2 Hours/Week)
A 5-month Applied Artificial Intelligence course outline tailored for techno-functional, functional, or technical leaders, integrating technical foundations, business use cases, and governance frameworks.
This can be split into a 6-week certification plus an additional for-credit course with an actual use case.
I have also leveraged insights from leading universities such as Purdue’s Applied Generative AI Specialization and UT Austin’s AI & ML Executive Program.
Balance: 1/3 Technology | 1/3 Business Use Cases | 1/3 Governance, Compliance & AI Resistance
Module 1: Foundations of AI and Business Alignment (Weeks 1-4)
· Technology: AI fundamentals, Machine Learning, Deep Learning
· Business: Industry Use Cases, AI for Competitive Advantage
· Governance: AI Frameworks, Risk Management, Compliance
· Week 1: Introduction to AI for Business and Leadership
o Overview of AI capabilities (ML, DL, Generative AI)
o Business impact: AI-driven innovation in finance, healthcare, and retail
o Introduction to AI governance frameworks (NIST, EU AI Act)
· Week 2: AI Lifecycle and Implementation Strategy
o AI model development, deployment, and monitoring
o Case study: AI adoption in enterprise settings
o AI governance structures and risk mitigation strategies
· Week 3: Key AI Technologies and Tools
o Supervised vs. Unsupervised Learning
o Python, Jupyter Notebooks, and cloud-based AI tools (Azure AI Studio, AWS SageMaker)
o Governance focus: AI compliance and regulatory challenges
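The Week 3 distinction between supervised and unsupervised learning can be sketched in a few lines of NumPy. This is a minimal illustration, not course material; the tiny 1-D dataset is made up:

```python
import numpy as np

# Tiny 1-D dataset, made up for illustration: two clumps of values.
X = np.array([1.0, 1.2, 0.8, 5.0, 5.3, 4.7])

# --- Supervised: labels are GIVEN, so we learn a rule from them. ---
y = np.array([0, 0, 0, 1, 1, 1])          # known class for each point
centroids = np.array([X[y == k].mean() for k in (0, 1)])

def classify(x):
    # nearest-centroid classifier fit from labeled data
    return int(np.argmin(np.abs(centroids - x)))

# --- Unsupervised: no labels; discover the two clumps ourselves. ---
def kmeans_1d(data, iters=10):
    c = np.array([data.min(), data.max()])  # crude initialization
    for _ in range(iters):
        assign = np.argmin(np.abs(data[:, None] - c[None, :]), axis=1)
        c = np.array([data[assign == k].mean() for k in (0, 1)])
    return c

print(classify(1.1))               # -> 0 (near the low clump)
print(kmeans_1d(X))                # cluster centers near [1.0, 5.0]
```

The supervised half uses the labels directly; the unsupervised half recovers essentially the same two groups without ever seeing them.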
· Week 4: AI for Business Growth and Market Leadership
o AI-driven automation and decision-making
o Case study: AI-powered business analysis and forecasting
o Compliance focus: Ethical AI and responsible AI adoption
Module 2: AI Technologies in Business Functions (Weeks 5-8)
· Technology: NLP, Computer Vision, Reinforcement Learning
· Business: AI in business functions - Marketing, HR, Finance
· Governance: Bias Mitigation, Explainability, AI Trust
· Week 5: Natural Language Processing (NLP) & AI in Customer Experience
o Sentiment analysis, text classification, and chatbots
o Business case: AI in customer service (chatbots, virtual assistants)
o Governance focus: Privacy and data security concerns (GDPR, CCPA)
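As a taste of the Week 5 topics, sentiment analysis in its very simplest form can be done with a hand-made word lexicon. This is a toy sketch only; the word lists are made up, and real systems would use a trained model:

```python
# Toy sentiment classifier: count hits against tiny hand-made
# positive/negative lexicons (illustrative only, not production NLP).
POSITIVE = {"good", "great", "helpful", "excellent"}
NEGATIVE = {"bad", "poor", "useless", "broken"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("The chatbot was helpful and great"))  # -> positive
```

Chatbots and text classifiers covered in this week build on the same idea of mapping text to a decision, just with learned representations instead of fixed word lists.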
· Week 6: AI for Operational Efficiency
o Business use cases: AI for fraud detection, surveillance, manufacturing automation
o Compliance focus: AI security and adversarial attacks
· Week 7: Reinforcement Learning & AI in Decision-Making
o Autonomous systems, robotics, and self-learning models
o Business case: AI-driven investment strategies and risk assessment
o Resistance focus: Overcoming corporate fear of AI adoption
· Week 8: AI in Marketing, HR, and Business Optimization
o AI-driven personalization, recommendation engines
o Business case: AI in recruitment, talent management
o Compliance focus: AI bias mitigation and fairness in hiring
Module 3: AI Governance, Compliance & Ethics (Weeks 9-12)
· Technology: Secure AI Systems, Explainability
· Business: Regulatory Compliance, AI Risk Management
· Governance: Responsible AI, Transparency, Algorithm Audits
· Week 9: AI Governance Frameworks & Global Regulations
o NIST AI Risk Management, ISO/IEC 23894, EU AI Act
o Industry-specific regulations (HIPAA for healthcare AI, SEC for AI in finance)
o AI governance tools (audit logs, explainability reports)
· Week 10: AI Explainability & Bias Management
o Interpretable AI techniques
o Case study: Bias in AI hiring systems and credit risk models
o Business responsibility in AI model transparency
· Week 11: AI Security, Privacy, and Risk Management
o Secure AI model deployment strategies
o Governance: AI trust frameworks (e.g., IBM AI Fairness 360)
o Case study: Managing AI risks in cloud-based solutions
· Week 12: AI Resistance and Corporate Change Management
o Strategies for AI adoption in enterprises
o Business case: AI integration in legacy systems
o Ethics: Impact of AI on jobs, social responsibility, and legal liabilities
Module 4: AI Strategy, Implementation, and Future Trends (Weeks 13-16)
· Technology: AI Product Development
· Business: AI Implementation, Enterprise AI Strategy
· Governance: AI Regulatory Compliance & Future Legislation
· Week 13: Overview of AI Deployment and Scalability
o Deploying AI models on cloud (Azure AI Studio, AWS, GCP)
o Business case: Scaling AI solutions in enterprise environments
o Compliance: AI model monitoring, drift detection
· Week 14: AI for Competitive Advantage & Industry-Specific Applications
o AI in industry, e.g., supply chain, autonomous vehicles, healthcare diagnostics
o Case study, e.g., AI-driven drug discovery and logistics optimization
o Compliance: AI liability and regulatory accountability
· Week 15: AI Governance and Responsible Innovation
o Innovating with AI, e.g., in financial services (algorithmic trading, fraud detection)
o Ethics: Ensuring fairness and avoiding discrimination in AI models
o Risk assessment frameworks for enterprise AI adoption
· Week 16: The Future of AI: Trends, Risks & Opportunities
o Generative AI (DALL-E, ChatGPT, LangChain applications)
o AI and Web3, decentralized AI governance
o Case study: AI-powered governance in blockchain ecosystems
Module 5: Capstone Project & Final Presentations (Weeks 17-20; process starts in Week 7/8)
· Technology: Hands-on AI Application Development
· Business: AI Use Case in Industry
· Governance: Compliance Strategy & Ethical AI
· Weeks 17-19: AI Capstone Project
o Develop an AI-driven business solution with governance compliance
o AI application areas: Business analytics, customer engagement, fraud detection
o Report: Governance strategy and AI risk mitigation plan
· Week 20: Final Project Presentations & Certification
o Peer review and feedback
o Industry guest panel discussion on AI’s role in future business strategies
o Course completion certification
Tools & Technologies Covered:
· AI Development: Python, TensorFlow, PyTorch, Scikit-learn, GenAI models
· Cloud AI Platforms: Azure AI Studio, AWS AI Services, GCP Vertex AI
· NLP & Generative AI: ChatGPT, DALL-E, LangChain, BERT, Stable Diffusion
· AI Governance & Risk: SHAP, LIME, AI fairness toolkits
Appendix 3: Transcript
3:29 - D. B.
Welcome everybody.
3:31 - R. S.
Okay, so here's what we got.
3:38 - D. B.
So there was a Windstream presentation on what they're doing with AI, on campus on Wednesday, and the chair, Dr. P., was pushing it pretty hard. She's sending out a lot of emails and everything.
4:06 - M. M.
Did anyone go to it? Yes, we went there.
4:10 - D. B.
Yeah. Can you brief us on what happened?
4:14 - M. M.
Y. probably can do better. I liked it very much. Y. can add more. They have everything. They have inspections with computer vision. They have virtual assistants with chatbots. They create your own chatbot, and they have augmented reality. So the three aspects that our teams here are also working on, they cover them. Y., you can add more.
4:46 - Multiple Speakers
Yes, ma'am.
4:47 - D. B.
And do you want to ask the question first before I... Okay, so the question was just, you know, what did you think of that AI presentation by Windstream on Wednesday? And what did they brief us on?
5:07 - Y. P.
Okay, so that presentation was very broad and for a general audience: right from what their team is made up of, like what kind of skill set they have on the team, helping us understand what kind of team you really need to have a successful project in an enterprise. Then they also spoke about what kinds of tools and methodologies they are using to make their AI implementation successful. And then after that, they started talking about their use cases, as Dr. V. explained, those three use cases. And then they gave a demo on how those use cases actually function in their environment. And they have built a GPT, an internal GPT, to answer a lot of questions, like for customer success, where they have connected a lot of internal data sources like Snowflake and other document repositories and a few other areas. It seems that they have also built a RAG model. So they are doing a lot of interesting things. It was a good session. They also answered a lot of questions. What I like is the fact that, you know, there are people in industry here locally who are doing those kinds of things. And then they have some other plans also. They also spoke about what they want to do, so on and so forth. Does anybody have any questions for Y. or V.?
7:19 - D. B.
Some students from this group also participated.
7:23 - M. M.
I think J. was there. Anyway, I'm excited to see a local company with a lot of AI, actually, not a little. And two of my students, master's students, graduated and they are working for this Windstream.
7:48 - Multiple Speakers
Okay, is it a good source of employment for our foreign master's students?
7:54 - D. B.
Yes, yes.
7:55 - M. M.
They hire our students, they hire. Good to know. And remember, they offer a mentoring group that will work with our students. So it's a good opportunity. I think it was excellent. I appreciate L. P.'s help.
8:18 - D. B.
She was pushing it pretty hard, yeah.
8:22 - M. M.
Yeah, yeah, it was really good.
8:25 - A. B.
It's a communications company, right?
8:28 - M. M.
Windstream? Yes, telecommunications, yes. But they have, like they say, 8,000 employees.
8:34 - A. B.
Ah, 20,000, not to be surprised.
8:38 - Multiple Speakers
20,000? Yes.
8:39 - M. M.
How many in Little Rock?
8:41 - D. B.
I have no idea.
8:43 - M. M.
But he mentioned that most of them, at least in the AI group, are outside Little Rock. So this is why we're pushing them to start working with our students. I mean, why only with people from outside? Yeah, that's right. Cool.
9:06 - Y. P.
All right.
9:07 - D. B.
We liked it very much. They probably have a recording. L. always records.
9:11 - M. M.
Yeah, there is a recording.
9:13 - D. B.
I didn't post it because it has a password, but if someone wants to see the recording and doesn't have it, I can try to hunt down the email with the recording. Please ask L.
9:25 - M. M.
Yes. Ask Dr. P. Yeah. So this is good for our group, because it's really very, very encouraging to know that people are using AI heavily, in all aspects. They even have several applications in computer vision: they teach what kinds of devices, they have different kinds of devices, object recognition, and after that another object recognition task. So with deep learning models, large language models, and augmented reality, all three aspects that are important are covered. That's right.
10:09 - Multiple Speakers
Yeah, very good.
10:10 - M. M.
Excellent, I will say. Please ask for recording. Two hours. We spent more than two hours. All right.
10:20 - D. B.
Well, another briefing I'd like to get at some point. V. went to an AI and healthcare data analytics conference in Florida, and I'm hoping when he comes back, he'll be able to just kind of tell us informally what happened, how it was, was it any good, and so on.
10:42 - M. M.
He's here now. Oh. Yeah, I'm here.
10:45 - Multiple Speakers
Oh, hi.
10:46 - D. B.
V., which one did you go to?
10:48 - V. K. ( )
This is the CDO Healthcare Exchange. Is that the one in Tampa?
10:54 - E. G.
No, this was at Fort Lauderdale.
10:57 - V. K. ( )
OK.
10:58 - E. G.
Did you go for something?
11:01 - V. K. ( )
No, H.
11:02 - E. G.
H. is in Las Vegas this time.
11:06 - V. K. ( )
That's in Las Vegas. OK.
11:09 - Unidentified Speaker
Yeah.
11:10 - V. K. ( )
H., you get like 100,000 people coming over there. Yeah, I usually try to do that one.
11:19 - Multiple Speakers
Yeah. I did that.
11:22 - V. K. ( )
Like six and a half years back, I stopped going to conferences, and suddenly this one. I went to this, and Dr. B. asked me to come back and give some feedback on the conference. Basically, this conference is invitation only. There were only 60 people from healthcare, hospitals big and small, all kinds, and there were people from the payer side and also the provider side. Most of them were chief data officers and chief analytics officers. There were some other leaders also, but in total it was only 100 people, including the sponsors. It's more like a knowledge share, people coming from all these organizations, talking about all the innovative things they're doing on their end. Everything was on AI. AI is not an option anymore; it's become the thing in all the organizations. They've gone two, three years doing things on that. They talked about a lot of data-driven automation using AI and generative AI. Ambient listening was one thing that came up. We are using ambient listening at Kataev; basically, we are right now at the stage of trying to implement it through the EMR, electronic medical records. It's called DeepScribe. When the physician is talking to the patient, it records the whole conversation and creates a clinical note; that's all just ambient listening. So they talked about that, and they talked about automation, whether it's necessary, and also the regulatory challenges. I know A. is from the payer side and I'm from the provider side, but basically there are a lot of policy changes that keep happening, and there's a lot of payer contracting that happens. So they talked about how to address those challenges and how to make data and AI governance work. They were heavy on data governance. That was a big topic. A lot of people discussed how to implement it.
So data governance has a lot of pieces: data, tech, service management, security, BI, data quality, and all that. Data fluency, literacy, data stable, data catalog, all these topics came in. And also the policy engine, which I mentioned: IBM had a product, and they're working on a product where they can link the medical evidence and the policy changes. I don't know how far they have practically gone with this, but that was another thing. It will definitely be helpful for organizations, but we want to see the practical implementation of that. And they talked about data literacy. When I came back and gave an update about AI, a lot of people were saying that we just hear the word AI, but we don't know where to start or what we actually need to do, and all that. So training, like having a one-pager document, or having some AI consult, or data cam, all these discussions came up. And there were a couple of case studies people came up with. One was from L. Health, Florida. Probably E. will be aware of it. They came and talked about how they signed up for a performance model and were bleeding badly, paying millions of dollars to CMS because they were not performing well, and then how they embraced AI to support patient care, improve outcomes, and increase efficiency. That was a big success for them: they moved from one star to five stars from 2022 to 2024.
16:04 - E. G.
One of the things I had done for both C. and F. C. was, in their models, they have to have patient outcomes, because for Medicare and Medicaid part of their reimbursement is based on patient outcomes. Also quality of billing.
16:29 - A. B.
Quality of billing.
16:31 - E. G.
And so what we had done, actually, by building out these models, is we were able to look at the billing and compare that to the doctor's notes to ensure that we had the supporting documentation. And we were able to basically save tens of millions of dollars through that mechanism. And what we had done at that point, in the billing aspect of it, is, as the doctor puts in the notes, automatically put out the billing codes and then have the doctor approve them at the end. So that way it would minimize any physician error. Looking at protocols, that was done through CDS: we went through a CDS system, looked at which providers were having better outcomes, and started modifying protocols based on that. It also identifies things that a lot of times the doctors miss. So it's able to look at and interrogate a patient record to identify a gotcha or, what we call them, concerns: event identifiers.
18:00 - A. B.
Yeah, we kind of have a similar use case we do where we take, we're on the payer side, so we will validate claims against records. So anyway, we'll, we have, essentially like code level policies that have to be, you know, checked off to say, hey, this code or DRG is supported. But then it's essentially kind of linking to the medical record and then summarizing to say, you know, and using Gen AI, it's a vendor solution where we partner with the vendor to work on. But it's essentially saying, hey, yeah, this code or DRG is supported in the medical record because in XYZ, and it'll kind of spell it out in, you know, human language, and a nurse or coder would then review that and validate that it's appropriate or not, and then update if there was some error or hallucination and whatnot, and then we'll get sent over to an MD to get kind of final stamp.
19:00 - V. K. ( )
Yeah. Yeah, that's good. Awesome. So basically, as both have mentioned, that's how they were able to change the provider behavior so that they could get to five stars; that was one of them. And another one was from Moffitt Cancer Center. They're quite big, actually. They have 680,000 cancer patients. So they developed a platform for different shared services, like analytics, an analytic microscope, revenue, and all that, which is very robust, and it was a multi-year journey for them. They started in 2010, and they talked about how that kind of did a big disruption in their organization. So heavy on AI, with different people having different techniques. They were talking about their success.
20:07 - A. B.
Sharing quite a bit. I've gone to some of these others, and thanks for the referral. I apologize I couldn't make it, but I appreciate you sending the referral over. But I've gone to some of these conferences in the past too, where it's like everybody's real closed-mouthed. They don't want to share too much, because there's kind of like intellectual property and trade secrets and that sort of stuff.
20:29 - Multiple Speakers
Yeah, that's why this was good. Actually, you get lost. That's why I said H. is like 100,000.
20:35 - V. K. ( )
You get lost; everybody's trying to sell something.
20:38 - A. B.
Right.
20:39 - V. K. ( )
Here, it's only a very focused group of 60 people who are not shying away from sharing their knowledge, what they do, at least on the surface level. And the Moffitt presentation was really good. They were heavy on the clinical side. And we asked them to share the slides, but because of privacy, they couldn't share the slides. Otherwise, I could have shown the big picture of how they were doing the whole analytics in their own language. Oh, wow.
21:15 - Unidentified Speaker
That's great.
21:16 - Multiple Speakers
Yeah. V., I have a question.
21:19 - Y. P.
Was there any talk about interoperability, data sharing?
21:23 - V. K. ( )
Yes, there was a panel discussion on that, actually: how they can overcome the challenges of interoperability, and how they can share data not just within the organization but with other providers and so on. There was a panel discussion on that, actually.
21:49 - Y. P.
Okay. All right.
21:51 - D. B.
Well, thanks for briefing us, everyone. We've got a couple of scheduling announcements. Next Friday, C. M., who is visiting us today, will informally present his prospective PhD topic, which involves the perceptions and use of AI in academic publishing. I'm looking forward to that. Then on March 21, D. D. will also informally present; his topic will be NLP (Natural Language Processing) requirements analysis and the age of AI, which is related to his dissertation research as I understand it. I also got news that the computer science department is proposing a new freshman-level AI course, and I got the syllabus for it. I thought we could take a look and make sure we all know the stuff in it. While I bring it up, B., do you have any comments?
22:56 - D. B.
Anything? Not really. I remember they offer machine learning.
23:00 - M. M.
I didn't see this. Yeah, we were asked to- I have no idea.
23:06 - Multiple Speakers
We were asked to approve the new course form or something like that. Anyway, here's what they're talking about.
23:15 - D. B.
Freshman level. I thought we'd just kind of read through it like we normally read through things. Why don't we take a look at this and see if there are any comments. Is it already approved? No, I think it's not approved yet, but it's proposed. It probably will be approved.
23:45 - E. G.
Yeah, this looks more like a survey.
23:49 - D. B.
Yeah, yeah, it would be kind of applied, like, you know, how do you make a prompt? How do you use ChatGPT? Things like that, as opposed to, you know, getting into the theory of it.
24:07 - Unidentified Speaker
Yeah.
24:07 - E. G.
Well, when I see "examine the central AI challenges and review real-world implementations while exploring historical milestones," this tells me it's a survey. Yeah.
24:20 - E. G.
All right.
24:21 - Multiple Speakers
This doesn't.
24:23 - E. G.
I mean, right now we are, and this is kind of late in the game, but we are in the industrial boom of AI. We're getting new models every day; 4.5 is out now with ChatGPT.
24:44 - V. W.
And Claude Sonnet 3.7 was released yesterday by Anthropic.
24:49 - E. G.
Exactly. And then you have China coming out and saying, well, we can do the same thing on hardware with one-tenth the capability. We're building power stations and contributing to global warming to accommodate this. This is the same thing that happened in the industrial age. We are in the midst of it right now, and this is a survey. Yeah, it looks academic, in a very academic kind of way.
25:25 - D. B.
All right, well, the course learning objectives. Let's take a look at those and see if there are any comments.
25:36 - M. M.
History of AI. Kind of like E. says, it's a little bit late, I think.
25:45 - Unidentified Speaker
Yeah.
25:46 - M. M.
Better late than never, so.
25:48 - E. G.
True, but a lot of the time the kids who are going to be interested in this already know it. Exactly, exactly.
25:56 - M. M.
They're already using it. Exactly. They're already users, so how do we impress them? All right, well, you know, I used to know what the Chinese room thought experiment was, but I forgot.
26:08 - A. B.
Can anyone... Is this the one where you pass messages to someone in a room, and they're essentially just, you know, it's the perception that they really know what's going on, but they don't really; they're just passing messages?
26:30 - V. W.
And that's essentially what AI is doing.
26:33 - Multiple Speakers
Right. It's a, it's a symbol.
26:35 - V. W.
It appears to be, because it's, yeah, a symbol-transfer version of the Turing test.
26:41 - Unidentified Speaker
Yeah. Yes.
26:41 - M. M.
And actually, people can respond in different languages, you know, without speaking the language or understanding the meaning. J. wants to add something. J.?
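The Chinese room idea discussed above can be sketched in a few lines of code: a rulebook maps incoming symbols to replies, and the "operator" understands nothing. The rulebook entries here are invented purely for illustration.

```python
# Chinese Room as pure symbol manipulation: the "person in the room"
# follows a rulebook mapping input symbols to output symbols.
# No meaning is involved anywhere in the lookup.

RULEBOOK = {
    "你好": "你好！",        # greeting in -> greeting out
    "你会中文吗": "会。",    # "do you speak Chinese?" -> "yes."
}

def room_reply(symbols: str) -> str:
    """Return the rulebook's reply; the operator understands nothing."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # default: "please repeat"

print(room_reply("你好"))  # looks like understanding, is only lookup
```

From the outside, the replies look fluent; inside, it is only table lookup, which is exactly the point of the thought experiment.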
26:54 - E. G.
Now, about the only thing that really jumps out at me is "differentiate between optimal reasoning and behavior and human-like reasoning and behavior," because as we approach general AI, we'll start blurring those lines, which I think is going to happen within the next few years.
27:21 - V. W.
Or as we discovered in the pandemic, the decision you think you're going to have to make in two weeks is the decision you should be making today because time is really compressing the whole industrial age.
27:40 - E. G.
In fact, there was an article about the impact of AI on global warming.
27:48 - V. W.
Yeah, at NVIDIA, the engineers have a saying, boiling the oceans.
27:53 - E. G.
Yeah, I mean, in fact, NVIDIA (M. M., you must have seen this): there was such a shortage of the 5090 cards because the chips were anticipated to go out to these AI farms. But since China came out with this, NVIDIA lost the sales of a lot of those chips, and now they're turning them into 5090 chips for new cards out next month. Yeah, but they've recovered now.
28:30 - M. M.
DeepSeek also used NVIDIA chips. And actually, I saw very good research in quantum computing. Like I told you, I'm doing this quantum computing work. They do a lot of research with several universities on quantum computing, and this is really, really powerful.
28:49 - D. B.
So if DeepSeek can do the same AI processing at one-tenth the hardware cost, that means if they spend the same on hardware, they could get ten times the computing.
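D. B.'s back-of-envelope point can be written out explicitly. The numbers below are just the claimed ratio from the discussion, not verified figures: if compute costs one-tenth as much per unit, a fixed budget buys ten times the compute.

```python
# Back-of-envelope version of the "one-tenth the cost" argument.
# claimed_cost_ratio is the claimed relative cost per unit of compute.

claimed_cost_ratio = 0.1      # claimed: 1/10th the cost per unit of compute
budget = 1.0                  # normalized hardware budget

baseline_compute = budget / 1.0                 # compute at reference cost
deepseek_compute = budget / claimed_cost_ratio  # compute at claimed cost

print(deepseek_compute / baseline_compute)      # -> 10.0
```

The same arithmetic, of course, is only as good as the claimed cost ratio, which the next comment questions.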
29:02 - V. W.
I think we need to do a check on the one-tenth-of-the-cost claim, because there's a lot of evidence that NVIDIA chips were routed through Singapore to provide the DeepSeek folks a lot more computing capability than they were claiming, because they had to avoid export sanctions. So I think we're living in a bit of a murky area, because there's a certain amount of processing that has to be done to handle trillion-token large language models, and it looks like DeepSeek was claiming one thing but doing another. So I think we ought to hedge our bets on that. Certainly the stock markets rebounded a little from the initial news of it apparently being very cheap. So I think we're in some murky territory.
29:52 - D. D.
I heard that what they did was prompt ChatGPT and then use that. That was the distillation argument, right? Yeah, the distillation argument.
30:07 - M. M.
Distillation, yeah.
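The distillation argument mentioned above has a standard technical form: a "student" model is trained to match a "teacher" model's softened output distribution rather than hard labels. A minimal sketch, with all numbers invented for illustration:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, optionally softened."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's soft targets."""
    p = softmax(teacher_logits, temperature)   # teacher's softened outputs
    q = softmax(student_logits, temperature)   # student's softened outputs
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [3.0, 1.0, 0.2]
print(distillation_loss(teacher, [2.9, 1.1, 0.1]))  # student close to teacher
print(distillation_loss(teacher, [0.0, 0.0, 3.0]))  # student far from teacher
```

Minimizing this loss pulls the student toward the teacher's behavior, which is why repeatedly prompting a stronger model and training on its outputs counts as distillation.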
30:08 - E. G.
Well, that may be, and in truth I wouldn't trust anything that came out of China. But right now, even with the models we started with, GPT keeps coming out with new stuff almost daily. We are making huge leaps.
30:31 - V. W.
Claude Sonnet 3.7 will write programs that are about five to eight times longer per turn, or per shot, than the previous versions did. I asked it to build me a medical informatics system that shows a cell-level pharmacological view, a patient-centric view, and a physician-centric view, and kabam: you've got a console you can populate with these things. Then I did another project and was getting similarly high performance. It was so overwhelming, it was like being promoted to CEO without really knowing how to be a CEO; there are skills that people who manage very large projects have, and I'd like to think I have those skills. But as these tools amplify our ability, my relationship with the cyber-organism is starting to feel a little disproportionate. It's getting really big, and I'm having to start looking up to it instead of being horizontal with it, even though, at this writing, it's still doing what I ask it to do.
31:47 - E. G.
I've got 45 years of programming experience. I passed it some junior programmer's code, and, I think it was ChatGPT o1-mini, it produced quality senior-level developer code from that junior developer's code. It built all the test cases for test-driven development; it abstracted the ORMs. The ability to generate high-quality code is now, as V. identified, democratized. You're working at CEO level with a lollipop education. That's a good metaphor.
32:42 - V. W.
Another thing: the Industrial Revolution came up when we were talking earlier. In Dickensian England they used to have warehouses full of guys with quill pens doing the books, hundreds of people just doing the math to push the bank calculations through each day. Then a hundred and fifty years went by, Lotus 1-2-3 came and went, Excel came along, and now that's all washed away; that whole warehouse is one person. Then in our era, we had warehouses full of programmers at Google and Facebook and Microsoft, the analogs of their Dickensian counterparts, and they are now being rinsed away because, as the work we did earlier with Read's showed, we're getting somewhere between 50 and 100 times the productivity in lines of usable code. And now HP in Santa Clara and Autodesk, who makes the Fusion 360 3D modeler, have had massive layoffs, hundreds of people. And S. B. has come on today and said he wants all the Google engineers working 60 hours a week, coding as hard as they can, to work themselves out of a job, the last part being my paraphrase. So it's going to be harder and harder to get people to work 60 hours a week if they know they're basically burning their bridge at both ends as they do so.
34:29 - Multiple Speakers
So let's give J. an opportunity. J. wants to add something, please.
34:33 - J. O.
So I think we're not even looking at the full picture right now. For example, all the coding, those repetitive tasks: I'm not astonished that AI is going to be better than us at them, and at doing mathematics; that's actually something that's going to happen. All the GPUs accelerating, the binaries programmed by AI, without limitation, because, you know, we don't know how it will develop if you leave it open. I think the problem here is that, just in training toward AGI, we might create something that can act, speak, or listen and do everything that we do as humans, but doesn't actually feel any emotion related to that.
35:25 - E. G.
That's something we've been doing in clinical decision support systems for decades. We've built out infrastructure to support it. Now, granted, the physicians still make the decisions, but what it does is take the emotion out. In fact, on LinkedIn, I just wrote a paper, let me pull it up real quick, a review of healthcare and AI, how we're pouring old wine into new bottles. But we're taking the emotion, the concern, out of the treatment of patients.
36:09 - Multiple Speakers
I think that bench-to-bedside is becoming more bench and less bedside.
36:16 - E. G.
Exactly. And that's being forced on them not by the physicians but by the health payers, by identifying possible cost reductions, right, wrong, or indifferent. If you're able to identify, say, that this is an opportunity, it gets implemented. Now, granted, the physician still has to make a lot of the decisions, but you know they're going to go with something that works.
36:55 - Y. P.
Right. I also have a response on the emotion thing. If you think about what happened in Congress a few years ago with Facebook, M. Z. talking about it: Facebook does understand people's emotions and sends information accordingly. Negative emotion is supplemented with negative feeds; positive information is supplemented with positive feeds. The other thing I want to say is that I am doing something with mental health and AI, and for that, understanding emotions is fundamental to solving the problem. So yes, it cannot cry; maybe the physical manifestations may not happen in AI. But based on conversation, based on a lot of things, AI is now able to understand the emotions, sentiments, and feelings of the human being it's interacting with. I wanted to share that. One more thing: I'm going to talk on March 14; you had the calendar. It just came to me today, and I'll send you the details, but it's mainly around skills. I'm still trying to confirm the topic with Dr. P.; it's the curriculum session she holds every month. I think she wants me to speak mainly on the skills and requirements for students, but once I have clarity, maybe Monday or Tuesday, I'll send you the details. It is on the 14th of March, and I think it is the same time the colloquium is normally held, during the day. Yeah, I'll let you know. Yes, 11 o'clock.
38:44 - M. M.
Yeah, please send us the link.
38:46 - Y. P.
Yes. By the way, the Windstream people said what they expect from new students or new employees: creativity, OK, an open mind and creativity, because programming skills and technical skills alone are not enough.
39:05 - M. M.
By the way, I wrote an article many years ago with a psychology professor about the Johari window, something that answers J.'s question. You need to communicate with agents like you communicate with people. The Johari window is about the management of communication between people, like we're communicating right now to understand each other. There are four quadrants, you know; there's something that people know and see about you that you don't see yourself. It's also important.
39:45 - V. W.
Open, blind, hidden, and unknown.
39:48 - Unidentified Speaker
Exactly.
39:48 - M. M.
Blind spot, blind spot.
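The four Johari quadrants just named can be laid out as a 2x2 grid: what is known or unknown to yourself, crossed with what is known or unknown to others. A minimal sketch using the standard quadrant names:

```python
# The Johari window as a 2x2 lookup:
# (known to self, known to others) -> quadrant name.

JOHARI = {
    (True,  True):  "open",     # known to self and to others
    (False, True):  "blind",    # others see it, you don't (blind spot)
    (True,  False): "hidden",   # you know it, others don't
    (False, False): "unknown",  # neither party sees it
}

def quadrant(known_to_self: bool, known_to_others: bool) -> str:
    """Name the Johari quadrant for a given (self, others) awareness pair."""
    return JOHARI[(known_to_self, known_to_others)]

print(quadrant(False, True))  # the blind spot M. M. mentions
```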
39:51 - J. O.
I totally agree with all of the comments. And I know that, for example, AI is getting better and better at recognizing emotions: for example, face recognition, an eyebrow going up meaning frustration, and stuff like that. I'm not saying AI cannot recognize human emotion. I'm saying that when we introduce it in healthcare, maybe we have to understand that it's just a value computed from our input. It is not feeling, or responding as if it feels care for the patient; it's only a response, a value. That's the thing that is actually... J., I agree with you.
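J. O.'s point, that "emotion recognition" can be just a number computed from input with no feeling anywhere in the pipeline, can be made concrete with even the crudest sentiment scorer. The tiny lexicon below is invented for the example; real systems are far richer, but the point stands: the output is a value, not an emotion.

```python
# A toy lexicon-based sentiment scorer: the system emits a number
# derived from input features. Nothing in the pipeline "feels" anything.

LEXICON = {"happy": 1, "great": 1, "sad": -1, "frustrated": -1}

def sentiment_score(text: str) -> int:
    """Sum the valence of known words; unknown words contribute zero."""
    return sum(LEXICON.get(word, 0) for word in text.lower().split())

print(sentiment_score("happy great day"))        # -> 2
print(sentiment_score("so sad and frustrated"))  # -> -2
```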
40:45 - Y. P.
I'm not disagreeing with you; I agree with you 100%. We cannot replace natural intelligence with AI. It's still artificial intelligence, but it is a tool that may help us in solving mental health and other issues, like the examples people gave earlier. It gives a clue. When it comes to coding, also, it is suggesting something; somebody still has to review it. And the second thing is that, without a human being's interference, it is suggesting that these people are sad and sending them something to consume when they are sad, or suggesting that these people are happy and sending happy content. So, absolutely, I am in agreement with you: we have to understand the difference between humans and technology, and it has always to be considered a tool. Just to add to what you said real quick, Y.: we can't replace walking with a 737, but in some situations it really comes in handy.
41:52 - V. W.
Exactly.
41:53 - Y. P.
I want to add to two points Dr. V. mentioned, which are actually very important. The first point she mentioned, about the event and what companies expect from students: she mentioned soft skills more than technology skills. One of the things the speaker mentioned (and there was an event yesterday also, by the former CIO of Walmart) is that what students need to know is what they are solving: the business. Both presenters who spoke yesterday and the day before said that is more important than knowing any coding language at this point in time. That was eye-opening for, I think, many people there. I want to reiterate that, as we help students, candidates in my case when we recruit: that is one thing we are highly expecting. I'll give an example from a recent discussion: the definition of a full-stack developer. It used to be, I know front-end technologies, I know React, and I know Python on the back end. But now, with generative AI writing 70 to 80 percent of the code (as E. mentioned, generative AI is able to create fantastic, beautiful code), the full-stack definition is: do you understand the business? Developers might actually extend their skills to being an analyst, being the interface between themselves and the people who are actually solving problems or creating opportunities for the business. That thin layer in the middle is diminishing, and the expectations of developers are moving more toward application. So that is one thing I wanted to add to what Dr. V. said. And Dr. V., if I'm saying something wrong, please correct me.
44:09 - M. M.
Oh, no, this is completely correct.
44:12 - Y. P.
And this is missing in our educational system, OK?
44:15 - M. M.
Because we concentrate on technical programming skills, stuff like this, but we don't teach people to think. They're not thinking.
44:24 - E. G.
That's why, when I said I passed in the program, it was actually the structure of the program to solve a specific business problem. But, J., one of the things that you may not have experienced that older people have: we've seen things that we've been somewhat reluctant to adopt. Right now, we're adopting AI so fast that I think the aspect of emotion may be trained out of the humans so that we can get better information. We had WebMD for a while. I think the emotional aspect, the emotional connection, the sentiment analysis may be something that will go away. That'll be weeded out of us. Spot on. Spot on.
45:29 - Multiple Speakers
That's what I was trying to say. Yeah. All right.
45:35 - D. B.
So, just kind of moving down here: learning activities. We've actually seen the material up above, but we haven't seen these. So these are the learning activities in the course. Any comments on these?
45:53 - V. W.
You know, something that sticks out to me, just scanning these, is how we are slowly transitioning toward how we can equip robotic agents, however they're manifested, with this intelligent behavior. I was telling my wife about the PC, the Internet, and now the AI revolution, having lived through all three of them. Now it seems we're going to be bumping up against the robotic revolution, with these new large language models that can build an application space that allows them to compartmentalize and execute tasks, both virtually and in physical space. So I'm really interested in how this large language model revolution is going to give way, in a few days, weeks, or months, to a corresponding robotic revolution. We're bordering on having sentience down, and once we get the motor nervous system and the task-organization part under control, that brain compartment, we're pretty much good to go on highly functional robots.
47:05 - A. B.
Yeah, there's a kind of buzz around large action models, and that's kind of up that alley. I do some automation in my work, bot work, mostly objective steps: build a bunch of linear steps and have the bots repeat those steps through different scenarios. But this idea with large action models is that you have listening devices picking things up, and in the same way that large language models learn on transformers, you'd do something similar there, where it's essentially learning to figure out tasks, and then bam, you can deploy large robotic apprenticeships.
47:58 - V. W.
Because the real frontier of the future is "go fold the laundry." AI already does better music composition and better artistic composition than a lot of people, but it still can't fold a load of laundry or do the dishes. That's coming a bit, but the acquisition of motor skills through reinforcement learning techniques and others is going to be the linchpin for enabling that part of the revolution to happen.
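The reinforcement-learning idea V. W. points at can be shown in miniature: an agent on a one-dimensional line learns, from trial and reward alone, to walk toward a goal. Real motor-skill learning is vastly richer; the environment, rewards, and hyperparameters below are invented for illustration.

```python
import random

# Tabular Q-learning on a 5-state line: start at 0, goal at 4.
# The agent gets +1 at the goal and a small step cost otherwise.

random.seed(0)
GOAL, N_STATES, ACTIONS = 4, 5, (-1, +1)          # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):                              # episodes of trial and error
    s = 0
    while s != GOAL:
        # epsilon-greedy action choice (10% exploration)
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)     # walls at both ends
        r = 1.0 if s2 == GOAL else -0.01          # reward only at the goal
        # standard Q-learning update (alpha=0.5, gamma=0.9)
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += 0.5 * (r + 0.9 * best_next - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)  # the learned "motor skill": step right from every state
```

Scaling this trial-and-error loop from a 5-state line to high-dimensional joint control is, in essence, the hard part of the robotics problem being discussed.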
48:25 - E. G.
I can speak to that, because MIT, I think it was last weekend or the weekend before, took their robots through kind of a maze, but it's not just a maze. It has soft areas where they'll fall down, so the robot has to manage balancing issues, and things where it has to look at timing. Think of it like a gauntlet, right? And it has to run the gauntlet.
48:59 - V. W.
Run the gauntlet.
49:01 - E. G.
And it looks at robots that can actually test something and then run it, recognizing where a failure may occur, so it knows to check its steps when it's changing terrain. Exactly.
49:20 - V. W.
It's funny, you know, if we look at the Boston Dynamics robots and their evolution: from the pogo stick, M. R.'s original work, to the dog that now opens the door for the other dog to go through, all autonomous, with power supplies now small enough, with lithium batteries, to be tractable; to the gymnastic Boston Dynamics robot; to the loading and warehousing Boston Dynamics robots. Those guys are approaching the "go fold the clothes" mountain pretty rapidly. And when we fuse the Boston Dynamics level of robotic evolution with the large language model, now large action model, dynamic, that's going to be worth getting some popcorn for.
50:09 - E. G.
All right.
50:10 - D. B.
Anybody who hasn't had a chance to contribute much have a comment on this? All right. Well, it's more learning tasks.
50:21 - A. B.
Go ahead.
50:22 - D. B.
No, this is just the freshman course. Any thoughts on this part of it, these learning activities?
50:34 - E. G.
Well, on the previous section: at Georgia Tech, we had a whole course on stochastic processes and a whole course on deterministic processes. I cannot imagine they're going to learn more than just a definition of some of these things. Yeah, it's a lot here.
50:57 - Multiple Speakers
But a lot of the fluff is being weeded out so that the distinction between freshman, sophomore, junior, senior, grad student is disappearing.
51:05 - V. W.
Because if you bring in a well-prepared set of STEM students who have good thinking skills and say, we're going to solve this problem along this line of reasoning with these AI tools, you may get work that's competitive with much higher-level classes.
51:22 - E. G.
Yeah, and I think they'll be well beyond this. As I mentioned earlier, somebody coming in who'd be interested in this already knows most of it.
51:33 - V. W.
And they've already had a year or two running LLMs to do all their routine stuff. So they know how to get things done. All right.
51:42 - D. B.
Any comments on this part here? Well, actually, we're at a pretty good stopping point.
51:51 - Unidentified Speaker
All right, so there you go. The freshman level course proposed.
51:56 - D. B.
Let's see if there's anything else here. Oh, so T. E., who's been here a few times, sent me this link. He said that at Stanford they had a conference, or a series of conferences, on AI and education. So yeah, AI and education is becoming a big topic, and a lot of people are getting interested in it. Let's see, I. is here, and my recollection is she joined a UALR or intercollegiate, inter-university group discussing AI in education? Yes. Can you maybe brief us briefly on what's happening there? Anything interesting about it?
52:53 - I. U.
Nothing has happened since the last time we talked, at the very end of last semester, except that the survey to go out to the campus, about how faculty are using AI, and I think some selected students maybe, has gone to the IRB for approval. Okay.
53:15 - D. B.
Well, you know, the UALR has just started a discussion group for AI in pedagogy, and that's being started by L. S. in the psychology department, and I'm not in it.
53:28 - I. U.
Yes, and they are meeting on a day and at a time that I couldn't meet, but I've talked to her about it and will stay in touch with her, so we can keep up to date on what they're talking about, as part of our committee as well.
53:49 - D. B.
All right well thank you for updating us and I'm gonna go back to if I can find where we are.
53:58 - Y. P.
D., one more thing I wanted to share today: I'm also giving input on applied AI education. So I don't know.
54:09 - Unidentified Speaker
Hello. Yeah, go ahead. Yeah.
54:12 - Y. P.
So there's an applied AI course, and it will be more practical in nature than technical. I was having a conversation with Dr. W. last week about how application of technology is becoming more and more important, and also more available because of generative AI and other things. So I'm giving input on that. The other thing is that the business-use seminars in the lectures yesterday and the day before are actually reinforcement of that coursework. So it's more of an applied AI course, and it will be in coordination with more business people, so it will be like a techno-functional coursework.
55:07 - D. B.
Would you be interested in informally presenting the curriculum or the syllabus to this group at some point? Because I've seen that you do have a syllabus that you've proposed for the course.
55:21 - Y. P.
Yeah, I will get back to you, because I don't know where the final version is. Once I have the final version, I will ask if I can share it.
55:33 - D. B.
Well, I'm sure you can share it, because I've got a copy of it somewhere that was sent around by email to everybody.
55:41 - Y. P.
Right, for review, and it is getting updated. So that's why I said I can share the old version.
55:47 - D. B.
OK.
55:47 - Y. P.
People are giving input, and I will try to get the final version and share it, either at the next meeting or the meeting after next. OK, sounds great.
55:57 - D. B.
All right. Well, we had a pretty good group here today, but I don't see any of the master's students, and we're kind of out of time. So I guess we'll adjourn. And I'll put a little note here that we ended here.
56:22 - Unidentified Speaker
And thanks, everybody, for attending.
56:25 - Y. P.
and we'll see you next time.
56:30 - Unidentified Speaker
Thank you. Have a good weekend. Take care. Bye, everyone. Bye, guys. Bye. Bye.
56:43 - N. M.
That was a really good meeting.
56:48 - Unidentified Speaker
Yeah.
56:49 - D. B.
OK.