Agenda & Minutes
- Welcome to the 24th meeting, July 22, 2022.
- Updates/news/inputs/comments?
- Readings, viewings, etc.
- Source to read/view in depth.
- https://e2eml.school/transformers.html: "Transformers From Scratch." We read up to the subsection "Sequence Completion" and can resume from there, but we are taking a break from it for now.
- We finished viewing https://www.youtube.com/watch?v=-QH8fRhqFHM. Next time we will discuss reading the accompanying article at https://jalammar.github.io/illustrated-transformer. Then we can vote on another video, https://www.youtube.com/watch?v=TQQlZhbC5ps, and eventually circle back and finish "Transformers From Scratch" if we want to.
- Sources to scan to see if we want to read more. Please send in more suggestions for readings. We can read/view the first paragraph or minute or so of each and assess it: should we read it in more depth? 5=strongly agree, 4=agree, 3=neutral, 2=disagree, 1=strongly disagree.
- https://jalammar.github.io/illustrated-transformer/
- https://machinelearningmastery.com/rectified-linear-activation-function-for-deep-learning-neural-networks/
- https://www.youtube.com/watch?v=4Bdc55j80l8 (The A.I. Hacker - Michael Phi), a transformer video.
- The Dutch Tax Authority Was Felled by AI—What Comes Next? https://spectrum.ieee.org/artificial-intelligence-in-government
- Anticipative Video Transformer, https://facebookresearch.github.io/AVT/.
- "Deep learning—a first meta-survey of selected reviews across scientific disciplines, their commonalities, challenges and research impact," https://peerj.com/articles/cs-773. We read the abstract. It is not clear whether we should continue reading material from it. Any opinions/thoughts/comments?
- "Attention is all you need," https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf. Attention is all you need, A Vaswani, N Shazeer, N Parmar… - Advances in neural …, 2017 - proceedings.neurips.cc … Cited by 35,980
- Featured resource: short and long videos.
- A longer video (44 min., but the last 10 minutes, about a negative result, can be skipped): https://www.youtube.com/watch?v=HfnjQIhQzME. We watched up to 16:00. However, this is a bit ahead of where we are, so we'll put it on hold.
- Some quantum computing references we could read as needed:
- Quantum crossing threshold (free): https://www.nature.com/articles/s41586-021-04273-w
- Crossing threshold in silicon: https://www.nature.com/articles/s41586-021-04182-y
- Three-qubit donor processor in Si: https://www.nature.com/articles/s41586-021-04292-7
- Potential future readings that we have assessed in earlier meetings.
- 2021 Turing Award lecture paper: https://dl.acm.org/doi/pdf/10.1145/3448250. We read the first two paragraphs. Vote was 4.5 to read more.
- DALL-E 2 - how it works: https://www.youtube.com/watch?v=F1X4fHzF4mQ. Should we read/view more of this? Vote was 4.
- Explainable AI as a reading/discussion topic: https://en.wikipedia.org/wiki/Explainable_artificial_intelligence. 6/24/22: vote was 4.0, based on reading up to, but not including, the last paragraph of the Goals section.
- 6/10/22: vote was 4.0 on the following article, coauthored by Timnit Gebru: https://dl.acm.org/doi/pdf/10.1145/3442188.3445922
- Ni et al., Learning Good State and Action Representations via Tensor Decomposition, https://arxiv.org/abs/2105.01136. We read the title and 1st sentence. Vote to read more was 4 out of 5.
- We read through the fifth paragraph of https://www.marktechpost.com/2022/03/07/an-introduction-to-saliency-maps-in-deep-learning, then voted on the priority of reading more of it. Vote: 3.67 out of 5.
- CNN basics: https://towardsdatascience.com/the-most-intuitive-and-easiest-guide-for-convolutional-neural-network-3607be47480. We previously read 2 paragraphs. Read more? Vote was 3.6 out of 5.
- https://www.youtube.com/watch?v=BolevVGJk18. This introduces Jonschkowski, Brock, Learning State Representations with Robotic Priors. Should we try the first paragraph(s) of the paper? Vote was 3.6 out of 5.
- https://en.wikipedia.org/wiki/Markov_decision_process. Should we read/view more of this? Vote was 3 1/5 out of 5.
- 6/10/22: vote was 3.0 on the following article. https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/.
- Brooks, R., 2017, Seven Deadly Sins of AI Prediction, in serveinfo\AIstudyGroup. Vote was 2.6 out of 5.
- Readings/videos we have finished.
- The Narrated Transformer Language Model, Jay Alammar, https://www.youtube.com/watch?v=-QH8fRhqFHM, 7/22/22.