Agenda & Minutes
- Welcome to the 14th meeting.
- Any updates/news/inputs/comments?
- Readings, viewings, etc.:
- Sources to read/view in more depth.
- https://e2eml.school/transformers.html: "Transformers From Scratch." We read up to "First order sequence model" so next time we will start there.
- Remember that the vote for this document was 4 3/8 out of 5; that priority can always be updated as we progress through it.
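- Since next time we start at "First order sequence model," here is a minimal preview sketch, assuming the article's "first order" means a simple Markov chain: count which token follows which, and normalize the counts into next-token probabilities (the toy corpus below is invented for illustration).

```python
from collections import defaultdict

def first_order_model(tokens):
    """Count adjacent-token transitions and normalize them into
    next-token probabilities (a first-order Markov chain)."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    model = {}
    for prev, nxts in counts.items():
        total = sum(nxts.values())
        model[prev] = {tok: c / total for tok, c in nxts.items()}
    return model

# Invented toy corpus, in the spirit of the article's command examples.
tokens = "check the program check the battery run the program".split()
model = first_order_model(tokens)
print(model["the"])  # e.g. "program" is twice as likely as "battery"
```

The model only looks one token back, which is exactly the limitation the article uses to motivate richer sequence models.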
The meeting ended here.
- Potential future readings that we have assessed in earlier meetings.
- In one earlier meeting we read through the fourth paragraph of https://www.marktechpost.com/2022/03/07/an-introduction-to-saliency-maps-in-deep-learning/; in another we read the fifth paragraph, then voted on the priority for reading more of it. Vote: 3.67 out of 5.
- CNN basics: https://towardsdatascience.com/the-most-intuitive-and-easiest-guide-for-convolutional-neural-network-3607be47480. We previously read 2 paragraphs. Read more? Vote was 3.6 out of 5.
- https://www.youtube.com/watch?v=BolevVGJk18. This introduces Jonschkowski, Brock, Learning State Representations with Robotic Priors. Should we try the first paragraph(s) of the paper? Vote was 3.6 out of 5.
- Ni et al., Learning Good State and Action Representations via Tensor Decomposition, https://arxiv.org/abs/2105.01136. We read the title and 1st sentence. Vote to read more was 4 out of 5.
- Brooks, R., 2017, Seven Deadly Sins of AI Prediction, in serveinfo\AIstudyGroup. Vote was 2.6 out of 5.
- We can read/view the first paragraph/minute or so of different sources, assessing for each whether to go over it in more depth. To assess each one, vote on: Should we read/view more of this? 5=strongly agree, 4=agree, 3=neutral, 2=disagree, 1=strongly disagree.
- https://en.wikipedia.org/wiki/Markov_decision_process.
- MM suggests explainable AI as a reading/discussion topic.
- MM suggests https://www.youtube.com/watch?v=4Bdc55j80l8&ab_channel=TheA.I.Hacker-MichaelPhi as a transformer video.
- 2021 Turing Award lecture paper: https://dl.acm.org/doi/pdf/10.1145/3448250
- Anticipative Video Transformer, https://facebookresearch.github.io/AVT/?fbclid=IwAR1RurSM33v8baN10H9JCX_dvVNtscydsLupaB8NMgKOmNIPjIwD3XO2vOA.
- "Deep learning—a first meta-survey of selected reviews across scientific disciplines, their commonalities, challenges and research impact," https://peerj.com/articles/cs-773. We read the abstract. It is not clear whether we should continue reading material from it. Any opinions/thoughts/comments?
- "Attention Is All You Need," A. Vaswani, N. Shazeer, N. Parmar, et al., Advances in Neural Information Processing Systems, 2017, https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.
- We have read through section 3 so we could start with 3.1 next time we look at it.
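- The per-source scores above (3.6, 3.67, 4, ...) are group averages of individual votes on the 1-5 scale; a minimal sketch of the arithmetic, with invented ballots (not the group's actual votes):

```python
# Hypothetical individual votes on the 1-5 agreement scale,
# invented for illustration only.
votes = [5, 4, 3, 4, 4, 2]

# The group's score for a source is the mean of the individual votes.
average = sum(votes) / len(votes)
print(round(average, 2))  # 3.67 for these particular votes
```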
- Featured resource: short and long videos.
- A longer video (44 min., but you can skip the last 10 minutes about a negative result): https://www.youtube.com/watch?v=HfnjQIhQzME&authuser=1. We watched up to time 16:00. However, this is a bit ahead of where we are, so we'll put it on hold.
- Some quantum computing references we could read as needed:
- Quantum crossing threshold (free): https://www.nature.com/articles/s41586-021-04273-w
- Crossing threshold in silicon: https://www.nature.com/articles/s41586-021-04182-y
- Three-qubit donor processor in Si: https://www.nature.com/articles/s41586-021-04292-7