Good advice, but I think it should say more about the distinction between the training objective and the true objective. For classic machine learning problems, like speech recognition or face detection, the two were so close that we didn't even notice there was a difference. Now, though, ML models are being trained to predict clicks or other proxies for "engagement", and those proxies can diverge wildly from the humane objectives we want in our products. In these cases it's really important to understand the gap between what you really want and what you can actually encode into an objective function.
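To make that gap concrete, here's a minimal sketch (all names hypothetical, not from the article): the only thing the optimizer ever sees is a proxy loss over observed clicks, while the objective we actually care about never appears in the code at all.

```python
import numpy as np

def proxy_loss(click_probs, observed_clicks):
    """Binary cross-entropy on clicks -- the *training* objective."""
    eps = 1e-12
    p = np.clip(click_probs, eps, 1 - eps)
    return -np.mean(observed_clicks * np.log(p)
                    + (1 - observed_clicks) * np.log(1 - p))

# What we'd actually like to optimize (purely notional -- there is no label
# for it in the training data, so it can't be part of the loss):
# true_objective = long_term_user_satisfaction(recommendations)

clicks = np.array([1, 0, 1, 1, 0])
preds = np.array([0.9, 0.2, 0.7, 0.6, 0.4])
print(proxy_loss(preds, clicks))  # the only number the optimizer ever sees
```

Everything downstream, from model selection to A/B tests, tends to get judged against the proxy, which is exactly why the divergence is easy to miss.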