Agenda & Minutes
- Welcome to the 25th meeting, July 29, 2022.
- Schedule note: we will skip the next two weeks, and the next meeting will be August 19.
- Updates/news/inputs/comments?
- This website has been reorganized; see the bulleted links to its pages in the right column.
- Readings, viewings, etc.
- Thanks to VK for suggesting some videos to vote on.
- Sources to scan today.
- This time we considered the article associated with last week's video, https://jalammar.github.io/illustrated-transformer. We read the first section, "A High-Level Look." Should we read it in more depth? 5=strongly agree, 4=agree, 3=neutral, 2=disagree, 1=strongly disagree. Vote to read more was 4.
- Then we watched up to 4:40 of another video, https://www.youtube.com/watch?v=TQQlZhbC5ps. Vote to finish was 4.
- We ended here but next time could consider additional sources:
- Read another section of "Transformers From Scratch," https://e2eml.school/transformers.html, and decide if we want to circle back and finish it. We read up to the subsection "Sequence Completion" earlier and can read on from there if we wish.
- We can discuss and vote on other sources, as time allows. See the list on the page of sources.
- Readings/videos we have finished.
- The Narrated Transformer Language Model, Jay Alammar, https://www.youtube.com/watch?v=-QH8fRhqFHM, finished 7/22/22.
- Other things from before this list of finished sources was created.
Agenda & Minutes
- Welcome to the 24th meeting, July 22, 2022.
- Updates/news/inputs/comments?
- Readings, viewings, etc.
- Source to read/view in depth.
- https://e2eml.school/transformers.html: "Transformers From Scratch." We read up to the subsection "Sequence Completion" and can read from there, but are taking a break from it...
- We finished viewing https://www.youtube.com/watch?v=-QH8fRhqFHM. We will discuss reading the accompanying article at https://jalammar.github.io/illustrated-transformer next time. Then we can vote on another video, https://www.youtube.com/watch?v=TQQlZhbC5ps, and eventually circle back and finish "Transformers From Scratch" if we want to.
- Sources to scan to see if we want to read more. Please send in more suggestions for readings. We can read/view the first paragraph/minute or so of each, assessing each. Should we read it in more depth? 5=strongly agree, 4=agree, 3=neutral, 2=disagree, 1=strongly disagree.
- https://jalammar.github.io/illustrated-transformer/
- https://machinelearningmastery.com/rectified-linear-activation-function-for-deep-learning-neural-networks/
- https://www.youtube.com/watch?v=4Bdc55j80l8&ab_channel=TheA.I.Hacker-MichaelPhi as a transformer video.
- The Dutch Tax Authority Was Felled by AI—What Comes Next? https://spectrum.ieee.org/artificial-intelligence-in-government
- Anticipative Video Transformer, https://facebookresearch.github.io/AVT/?fbclid=IwAR1RurSM33v8baN10H9JCX_dvVNtscydsLupaB8NMgKOmNIPjIwD3XO2vOA.
- "Deep learning—a first meta-survey of selected reviews across scientific disciplines, their commonalities, challenges and research impact," https://peerj.com/articles/cs-773. We read the abstract. It is not clear whether we should continue reading material from it. Any opinions/thoughts/comments?
- "Attention Is All You Need," A. Vaswani, N. Shazeer, N. Parmar, et al., Advances in Neural Information Processing Systems, 2017, https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.
- Featured resource: Short and long videos -
- Some quantum computing references we could read as needed:
  - Quantum crossing threshold (free): https://www.nature.com/articles/s41586-021-04273-w
  - Crossing threshold in silicon: https://www.nature.com/articles/s41586-021-04182-y
  - Three-qubit donor processor in Si: https://www.nature.com/articles/s41586-021-04292-7
- Potential future readings that we have assessed in earlier meetings.
- 2021 Turing Award lecture paper: https://dl.acm.org/doi/pdf/10.1145/3448250. We read the first two paragraphs. Vote was 4.5 to read more.
- Vote on: DALL-E 2 - how it works: https://www.youtube.com/watch?v=F1X4fHzF4mQ. Should we read/view more of this? Vote was 4.
- Explainable AI as a reading/discussion topic: https://en.wikipedia.org/wiki/Explainable_artificial_intelligence. 6/24/22: vote was 4.0, based on reading up to but not including the last paragraph of the Goals section.
- 6/10/22: vote was 4.0 on the following article coauthored by Timnit Gebru. https://dl.acm.org/doi/pdf/10.1145/3442188.3445922
- Ni et al., Learning Good State and Action Representations via Tensor Decomposition, https://arxiv.org/abs/2105.01136. We read the title and first sentence. Vote to read more was 4 out of 5.
- We have read through the fifth paragraph of https://www.marktechpost.com/2022/03/07/an-introduction-to-saliency-maps-in-deep-learning, then voted on the priority for reading more of it. Vote: 3.67 out of 5.
- CNN basics: https://towardsdatascience.com/the-most-intuitive-and-easiest-guide-for-convolutional-neural-network-3607be47480. We previously read two paragraphs. Read more? Vote was 3.6 out of 5.
- https://www.youtube.com/watch?v=BolevVGJk18. This introduces Jonschkowski and Brock, Learning State Representations with Robotic Priors. Should we try the first paragraph(s) of the paper? Vote was 3.6 out of 5.
- https://en.wikipedia.org/wiki/Markov_decision_process. Should we read/view more of this? Vote was 3 1/5.
- 6/10/22: vote was 3.0 on the following article. https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/.
- Brooks, R., 2017, Seven Deadly Sins of AI Prediction, in serveinfo\AIstudyGroup. Vote was 2.6 out of 5.
- Readings/videos we have finished.
- The Narrated Transformer Language Model, Jay Alammar, https://www.youtube.com/watch?v=-QH8fRhqFHM, 7/22/22.
Agenda & Minutes
- Welcome to the 23rd meeting, July 15, 2022.
- Updates/news/inputs/comments
- Readings, viewings, etc.
- Source to read/view in depth.
- https://e2eml.school/transformers.html: "Transformers From Scratch." We read up to the subsection "Sequence Completion" and can read from there, but first...
- ...let us take a break by going to another transformer source, https://www.youtube.com/watch?v=-QH8fRhqFHM, which is about the article https://jalammar.github.io/illustrated-transformer/. We got to 14:58 / 29:29, and voted 4 1/3 to finish it next time. Then we can vote on another one, https://www.youtube.com/watch?v=TQQlZhbC5ps, and eventually circle back to the document if we want to.
- Sources to scan to see if we want to read more. Please send in more suggestions for readings. We can read/view the first paragraph/minute or so of each, assessing each. Should we read it in more depth? 5=strongly agree, 4=agree, 3=neutral, 2=disagree, 1=strongly disagree.
- https://jalammar.github.io/illustrated-transformer/
- https://machinelearningmastery.com/rectified-linear-activation-function-for-deep-learning-neural-networks/
- https://www.youtube.com/watch?v=4Bdc55j80l8&ab_channel=TheA.I.Hacker-MichaelPhi as a transformer video.
- The Dutch Tax Authority Was Felled by AI—What Comes Next? https://spectrum.ieee.org/artificial-intelligence-in-government
- Anticipative Video Transformer, https://facebookresearch.github.io/AVT/?fbclid=IwAR1RurSM33v8baN10H9JCX_dvVNtscydsLupaB8NMgKOmNIPjIwD3XO2vOA.
- "Deep learning—a first meta-survey of selected reviews across scientific disciplines, their commonalities, challenges and research impact," https://peerj.com/articles/cs-773. We read the abstract. It is not clear whether we should continue reading material from it. Any opinions/thoughts/comments?
- "Attention Is All You Need," A. Vaswani, N. Shazeer, N. Parmar, et al., Advances in Neural Information Processing Systems, 2017, https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.
- Featured resource: Short and long videos -
- Some quantum computing references we could read as needed:
  - Quantum crossing threshold (free): https://www.nature.com/articles/s41586-021-04273-w
  - Crossing threshold in silicon: https://www.nature.com/articles/s41586-021-04182-y
  - Three-qubit donor processor in Si: https://www.nature.com/articles/s41586-021-04292-7
- Potential future readings that we have assessed in earlier meetings.
- 2021 Turing Award lecture paper: https://dl.acm.org/doi/pdf/10.1145/3448250. We read the first two paragraphs. Vote was 4.5 to read more.
- Vote on: DALL-E 2 - how it works: https://www.youtube.com/watch?v=F1X4fHzF4mQ. Should we read/view more of this? Vote was 4.
- Explainable AI as a reading/discussion topic: https://en.wikipedia.org/wiki/Explainable_artificial_intelligence. 6/24/22: vote was 4.0, based on reading up to but not including the last paragraph of the Goals section.
- 6/10/22: vote was 4.0 on the following article coauthored by Timnit Gebru. https://dl.acm.org/doi/pdf/10.1145/3442188.3445922
- Ni et al., Learning Good State and Action Representations via Tensor Decomposition, https://arxiv.org/abs/2105.01136. We read the title and first sentence. Vote to read more was 4 out of 5.
- We have read through the fifth paragraph of https://www.marktechpost.com/2022/03/07/an-introduction-to-saliency-maps-in-deep-learning, then voted on the priority for reading more of it. Vote: 3.67 out of 5.
- CNN basics: https://towardsdatascience.com/the-most-intuitive-and-easiest-guide-for-convolutional-neural-network-3607be47480. We previously read two paragraphs. Read more? Vote was 3.6 out of 5.
- https://www.youtube.com/watch?v=BolevVGJk18. This introduces Jonschkowski and Brock, Learning State Representations with Robotic Priors. Should we try the first paragraph(s) of the paper? Vote was 3.6 out of 5.
- https://en.wikipedia.org/wiki/Markov_decision_process. Should we read/view more of this? Vote was 3 1/5.
- 6/10/22: vote was 3.0 on the following article. https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/.
- Brooks, R., 2017, Seven Deadly Sins of AI Prediction, in serveinfo\AIstudyGroup. Vote was 2.6 out of 5.
Agenda & Minutes
- Welcome to the 22nd meeting, July 8, 2022.
- Updates/news/inputs/comments
- VK paper status? Still waiting for publication.
- Readings, viewings, etc.
- Source to read/view in depth.
- https://e2eml.school/transformers.html: "Transformers From Scratch." We read up to the subsection "Sequence Completion" and can start there next time. Or we could switch, perhaps temporarily, to another transformers article, such as https://jalammar.github.io/illustrated-transformer/; maybe we could spend 15 minutes on each and then decide. The accompanying video, https://www.youtube.com/watch?v=-QH8fRhqFHM, is good too. VK and I agreed that we should watch the video next and then return to the article (or the other article). The original vote was 4 3/8 out of 5 for this document, though we can always revote as we progress through it.
- Sources to scan to see if we want to read more. Please send in more suggestions for readings. We can read/view the first paragraph/minute or so of each, assessing each. Should we read it in more depth? 5=strongly agree, 4=agree, 3=neutral, 2=disagree, 1=strongly disagree.
- https://jalammar.github.io/illustrated-transformer/
- https://machinelearningmastery.com/rectified-linear-activation-function-for-deep-learning-neural-networks/
- https://www.youtube.com/watch?v=4Bdc55j80l8&ab_channel=TheA.I.Hacker-MichaelPhi as a transformer video.
- The Dutch Tax Authority Was Felled by AI—What Comes Next? https://spectrum.ieee.org/artificial-intelligence-in-government
- Anticipative Video Transformer, https://facebookresearch.github.io/AVT/?fbclid=IwAR1RurSM33v8baN10H9JCX_dvVNtscydsLupaB8NMgKOmNIPjIwD3XO2vOA.
- "Deep learning—a first meta-survey of selected reviews across scientific disciplines, their commonalities, challenges and research impact," https://peerj.com/articles/cs-773. We read the abstract. It is not clear whether we should continue reading material from it. Any opinions/thoughts/comments?
- "Attention Is All You Need," A. Vaswani, N. Shazeer, N. Parmar, et al., Advances in Neural Information Processing Systems, 2017, https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.
- Featured resource: Short and long videos -
- Some quantum computing references we could read as needed:
  - Quantum crossing threshold (free): https://www.nature.com/articles/s41586-021-04273-w
  - Crossing threshold in silicon: https://www.nature.com/articles/s41586-021-04182-y
  - Three-qubit donor processor in Si: https://www.nature.com/articles/s41586-021-04292-7
- Potential future readings that we have assessed in earlier meetings.
- 2021 Turing Award lecture paper: https://dl.acm.org/doi/pdf/10.1145/3448250. We read the first two paragraphs. Vote was 4.5 to read more.
- Vote on: DALL-E 2 - how it works: https://www.youtube.com/watch?v=F1X4fHzF4mQ. Should we read/view more of this? Vote was 4.
- Explainable AI as a reading/discussion topic: https://en.wikipedia.org/wiki/Explainable_artificial_intelligence. 6/24/22: vote was 4.0, based on reading up to but not including the last paragraph of the Goals section.
- 6/10/22: vote was 4.0 on the following article coauthored by Timnit Gebru. https://dl.acm.org/doi/pdf/10.1145/3442188.3445922
- Ni et al., Learning Good State and Action Representations via Tensor Decomposition, https://arxiv.org/abs/2105.01136. We read the title and first sentence. Vote to read more was 4 out of 5.
- We have read through the fifth paragraph of https://www.marktechpost.com/2022/03/07/an-introduction-to-saliency-maps-in-deep-learning, then voted on the priority for reading more of it. Vote: 3.67 out of 5.
- CNN basics: https://towardsdatascience.com/the-most-intuitive-and-easiest-guide-for-convolutional-neural-network-3607be47480. We previously read two paragraphs. Read more? Vote was 3.6 out of 5.
- https://www.youtube.com/watch?v=BolevVGJk18. This introduces Jonschkowski and Brock, Learning State Representations with Robotic Priors. Should we try the first paragraph(s) of the paper? Vote was 3.6 out of 5.
- https://en.wikipedia.org/wiki/Markov_decision_process. Should we read/view more of this? Vote was 3 1/5.
- 6/10/22: vote was 3.0 on the following article. https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/.
- Brooks, R., 2017, Seven Deadly Sins of AI Prediction, in serveinfo\AIstudyGroup. Vote was 2.6 out of 5.
Agenda & Minutes
- Welcome to the 21st meeting, July 1, 2022.
- Updates/news/inputs/comments
- VK paper status? Still waiting for publication.
- Readings, viewings, etc.
- Source to read/view in more depth.
- https://e2eml.school/transformers.html: "Transformers From Scratch." We read up to "To see how a neural network layer can create these pairs, ..." in the section "Second order sequence model as matrix multiplications" and decided to keep reading from there next time. The original vote was 4 3/8 out of 5 for this document, though we can always revote as we progress through it (which we effectively did today).
- Sources to scan to see if we want to read more. Please send in more suggestions for readings. We can read/view the first paragraph/minute or so of each, assessing each. Should we read it in more depth? 5=strongly agree, 4=agree, 3=neutral, 2=disagree, 1=strongly disagree.
- https://www.youtube.com/watch?v=4Bdc55j80l8&ab_channel=TheA.I.Hacker-MichaelPhi as a transformer video.
- The Dutch Tax Authority Was Felled by AI—What Comes Next? https://spectrum.ieee.org/artificial-intelligence-in-government
- Anticipative Video Transformer, https://facebookresearch.github.io/AVT/?fbclid=IwAR1RurSM33v8baN10H9JCX_dvVNtscydsLupaB8NMgKOmNIPjIwD3XO2vOA.
- "Deep learning—a first meta-survey of selected reviews across scientific disciplines, their commonalities, challenges and research impact," https://peerj.com/articles/cs-773. We read the abstract. It is not clear whether we should continue reading material from it. Any opinions/thoughts/comments?
- "Attention Is All You Need," A. Vaswani, N. Shazeer, N. Parmar, et al., Advances in Neural Information Processing Systems, 2017, https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.
- Featured resource: Short and long videos -
- Some quantum computing references we could read as needed:
  - Quantum crossing threshold (free): https://www.nature.com/articles/s41586-021-04273-w
  - Crossing threshold in silicon: https://www.nature.com/articles/s41586-021-04182-y
  - Three-qubit donor processor in Si: https://www.nature.com/articles/s41586-021-04292-7
- Potential future readings that we have assessed in earlier meetings.
- 2021 Turing Award lecture paper: https://dl.acm.org/doi/pdf/10.1145/3448250. We read the first two paragraphs. Vote was 4.5 to read more.
- Vote on: DALL-E 2 - how it works: https://www.youtube.com/watch?v=F1X4fHzF4mQ. Should we read/view more of this? Vote was 4.
- Explainable AI as a reading/discussion topic: https://en.wikipedia.org/wiki/Explainable_artificial_intelligence. 6/24/22: vote was 4.0, based on reading up to but not including the last paragraph of the Goals section.
- 6/10/22: vote was 4.0 on the following article coauthored by Timnit Gebru. https://dl.acm.org/doi/pdf/10.1145/3442188.3445922
- Ni et al., Learning Good State and Action Representations via Tensor Decomposition, https://arxiv.org/abs/2105.01136. We read the title and first sentence. Vote to read more was 4 out of 5.
- We have read through the fifth paragraph of https://www.marktechpost.com/2022/03/07/an-introduction-to-saliency-maps-in-deep-learning, then voted on the priority for reading more of it. Vote: 3.67 out of 5.
- CNN basics: https://towardsdatascience.com/the-most-intuitive-and-easiest-guide-for-convolutional-neural-network-3607be47480. We previously read two paragraphs. Read more? Vote was 3.6 out of 5.
- https://www.youtube.com/watch?v=BolevVGJk18. This introduces Jonschkowski and Brock, Learning State Representations with Robotic Priors. Should we try the first paragraph(s) of the paper? Vote was 3.6 out of 5.
- https://en.wikipedia.org/wiki/Markov_decision_process. Should we read/view more of this? Vote was 3 1/5.
- 6/10/22: vote was 3.0 on the following article. https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/.
- Brooks, R., 2017, Seven Deadly Sins of AI Prediction, in serveinfo\AIstudyGroup. Vote was 2.6 out of 5.