I was lucky enough to attend NIPS 2017 this month. Since it was in my home city, I didn't have to travel at all! There is a lot to discuss, but here is a summary of some key topics and the takeaways I think are important to know about.
The live photo style-transfer demos were a fun distraction, and the poster on text style transfer was great too.
Ali Rahimi gave a test-of-time award talk about the lack of theoretical understanding, interpretability, and rigor in modern machine learning.
The DNN vanguard responded defensively. It became an "emperor has no clothes" moment and a running debate throughout the conference.
Many posters and talks addressed the idea of "fairness" as it is understood in the social sciences. This was best illustrated by Kate Crawford's keynote talk and by a ProPublica article about ML bias in law enforcement.
There are fundamental questions, at the level of "what is a probability?", that lead to confusion and to conflated uses of the word "bias". When subgroups behave differently in all available data, how do you prevent models from being misunderstood or applied in unfair ways in practice?
There are clever approaches to promoting fairness, but I think the most important work in this area came from Weinberger's group, on calibration and fairness. It compares different approaches to mitigating bias and proves when a calibrated score can, and cannot, also satisfy error-rate fairness constraints across groups.
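Here is a minimal sketch of the tension that work formalizes (this is my own illustration, not code from the paper; the Beta score distributions and the 0.5 decision threshold are assumptions I chose for the example): a risk score can be perfectly calibrated within each group and still produce very different false-positive rates when the groups' score distributions, and hence base rates, differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def calibration_gap(scores, outcomes, bins=5):
    """Mean absolute gap between predicted and observed rates per score bin."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    gaps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (scores >= lo) & (scores < hi)
        if mask.any():
            gaps.append(abs(scores[mask].mean() - outcomes[mask].mean()))
    return float(np.mean(gaps))

def false_positive_rate(scores, outcomes, threshold=0.5):
    """Fraction of true negatives flagged by thresholding the score."""
    negatives = ~outcomes
    return float((scores[negatives] >= threshold).mean())

n = 20_000
# Two hypothetical groups with different score distributions (hence different
# base rates); outcomes are drawn as Bernoulli(score), so both groups are
# calibrated by construction.
scores_a = rng.beta(2, 5, n)   # group A: mostly low scores
scores_b = rng.beta(5, 2, n)   # group B: mostly high scores
outcomes_a = rng.uniform(size=n) < scores_a
outcomes_b = rng.uniform(size=n) < scores_b

print(calibration_gap(scores_a, outcomes_a))      # small: A is calibrated
print(calibration_gap(scores_b, outcomes_b))      # small: B is calibrated
print(false_positive_rate(scores_a, outcomes_a))  # low
print(false_positive_rate(scores_b, outcomes_b))  # much higher
```

Equalizing the false-positive rates here would mean giving up calibration in at least one group, and that kind of trade-off is exactly what the paper makes precise.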
On a lighter note, my favorite paper was also one of the conference's best-paper award winners: Safe and Nested Subgame Solving for Imperfect-Information Games.