MIT 18.065 Matrix Methods in Data Analysis, Signal Processing, and Machine Learning, Spring 2018
7. Eckart-Young: The Closest Rank k Matrix to A
13. Randomized Matrix Multiplication
14. Low Rank Changes in A and Its Inverse
15. Matrices A(t) Depending on t, Derivative = dA/dt
16. Derivatives of Inverse and Singular Values
17. Rapidly Decreasing Singular Values
18. Counting Parameters in SVD, LU, QR, Saddle Points
19. Saddle Points Continued, Maxmin Principle
20. Definitions and Inequalities
21. Minimizing a Function Step by Step
22. Gradient Descent: Downhill to a Minimum
23. Accelerating Gradient Descent (Use Momentum)
24. Linear Programming and Two-Person Games
25. Stochastic Gradient Descent
26. Structure of Neural Nets for Deep Learning
27. Backpropagation: Find Partial Derivatives
30. Completing a Rank-One Matrix, Circulants!
31. Eigenvectors of Circulant Matrices: Fourier Matrix
32. ImageNet is a Convolutional Neural Network (CNN), The Convolution Rule
33. Neural Nets and the Learning Function
34. Distance Matrices, Procrustes Problem
35. Finding Clusters in Graphs
36. Alan Edelman and Julia Language
An Interview with Gilbert Strang on Teaching Matrix Methods in Data Analysis, Signal Processing, and Machine Learning