Agenda & Minutes
- Welcome to the 11th meeting.
- Any updates/news/inputs/comments?
- Readings, viewings, etc.:
- We can read/view the first paragraph or minute of several sources, then vote to pick one to cover in more depth. To choose the next reading:
- Rate "Should we read/view more of this?" on a scale of 1-5:
- 5=strongly agree, 4=agree, 3=neutral, 2=disagree, 1=strongly disagree.
- Repeat the process for another article.
- After going through a few articles this way, we can pick the one with the best voting result to read more from.
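The vote averaging above (e.g. 3.67 or 4 3/8 out of 5) is just the arithmetic mean of the 1-5 ratings. A minimal sketch of that calculation; the function name and sample votes are illustrative, not from the minutes:

```python
def average_vote(votes):
    """Average a list of 1-5 ratings (5=strongly agree ... 1=strongly disagree)."""
    if not votes:
        raise ValueError("no votes cast")
    if any(not 1 <= v <= 5 for v in votes):
        raise ValueError("votes must be between 1 and 5")
    return sum(votes) / len(votes)

# Hypothetical example: three members vote 5, 4, 4.
print(round(average_vote([5, 4, 4]), 2))  # 4.33
```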
- Last time we read through the fourth paragraph of https://www.marktechpost.com/2022/03/07/an-introduction-to-saliency-maps-in-deep-learning/. Previously we had read the fifth paragraph and voted on the priority of reading more of it. Vote: 3.67 out of 5.
- CNN basics: https://towardsdatascience.com/the-most-intuitive-and-easiest-guide-for-convolutional-neural-network-3607be47480. We previously read 2 paragraphs. Read more? Vote was 3.6 out of 5.
- https://e2eml.school/transformers.html: "Transformers From Scratch." We previously read through the 2nd paragraph. This time we will read the 3rd paragraph and vote on continuing with the document in the future. Vote was 4 3/8 out of 5.
- https://www.youtube.com/watch?v=BolevVGJk18. This introduces Jonschkowski, Brock, Learning State Representations with Robotic Priors. Should we try the first paragraph(s) of the paper? Vote was 3.6 out of 5.
- Ni et al., Learning Good State and Action Representations via Tensor Decomposition, https://arxiv.org/abs/2105.01136. We read the title and 1st sentence. Vote to read more was 4 out of 5.
- Brooks, R., 2017, Seven Deadly Sins of AI Prediction, in serveinfo\AIstudyGroup. Vote was 2.6 out of 5.
We got to here.
- https://en.wikipedia.org/wiki/Markov_decision_process.
- MM would like to step us through some of the resources available from NVIDIA.
- MM suggests explainable AI as a reading/discussion topic.
- MM suggests https://www.youtube.com/watch?v=4Bdc55j80l8&ab_channel=TheA.I.Hacker-MichaelPhi as a transformer video.
- 2021 Turing Award lecture paper: https://dl.acm.org/doi/pdf/10.1145/3448250
- Anticipative Video Transformer, https://facebookresearch.github.io/AVT/.
- "Deep learning—a first meta-survey of selected reviews across scientific disciplines, their commonalities, challenges and research impact," https://peerj.com/articles/cs-773. We read the abstract. It is not clear whether we should continue reading material from it. Any opinions/thoughts/comments?
- "Attention Is All You Need," A. Vaswani, N. Shazeer, N. Parmar, et al., NeurIPS 2017, https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf. Cited by 35,980.
- We have read through section 3, so we could start with section 3.1 next time we look at it.
- Featured resource: Short and long videos -
- A longer video (44 min., but the last 10 minutes, about a negative result, can be skipped): https://www.youtube.com/watch?v=HfnjQIhQzME&authuser=1. We watched up to time 16:00. However, this is a bit ahead of where we are, so we'll put it on hold.
- Some quantum computing references we could read as needed (from VW):
  - Quantum crossing threshold (free): https://www.nature.com/articles/s41586-021-04273-w
  - Crossing threshold in silicon: https://www.nature.com/articles/s41586-021-04182-y
  - Three-qubit donor processor in Si: https://www.nature.com/articles/s41586-021-04292-7