Professor: Satyen Kale

Apr 2017

I'd heard horror stories about other ML professors at Columbia, but Satyen turned out to be a dream. I believe he used Daniel Hsu's syllabus and materials throughout the course, but his style was quite different. While Daniel focuses heavily on the math and the proofs and (I would assume) hopes that intuition about the mechanisms of the ML models follows, Satyen would focus heavily on visuals and step-by-step walkthroughs of everything before touching on the math. He hadn't taught before, but I feel like coming from industry was a boon because he seemed comfortable going through the same concepts a couple of times to make sure everyone got it. He managed to make almost everything (even kernels!) readily digestible, and made me feel confident in being able to implement models later in the homeworks. My only gripe with him was the mathematical rigor of the exams. He lifted them straight from Daniel, who spends _his_ classes delving into the math, so the dissonance between Satyen's style and the first midterm was jarring. When students pointed this out on Piazza, he made an effort to include more proof-based questions on the homeworks to give us exposure to the type of questions we might expect on the final. I still didn't do too well on the final, but neither did anyone else.

May 2016

Prof. Kale is a nice guy, but as a first-time professor, he is not very good at lecturing. His explanations of the conceptual stuff are very clear, but when it comes to the derivations and proofs, I don't think anyone in the class really understood what was going on, which sucks because the exams are all about math. His strategy for explaining math seems to be to click through the slides as quickly as possible. The good part is that Prof. Kale takes student feedback _extremely_ seriously, so after the midterm he included more math questions on the homework so that we could have some preparation. (Some would argue that he takes feedback seriously to a fault, in that there are too many extensions.) As much as I am complaining, this course did a good job of introducing me to a wide variety of machine learning techniques, which is exactly what an introductory course should do. It demystified machine learning by showing that most of the algorithms are intuitive and simple. Realizing this also gave me more confidence when doing math by teaching me to look past the weird symbols and see that the ideas behind them are also simple.