Recently, Peterson et al. provided evidence of the benefits of training a computer vision model on probabilistic soft labels generated from crowd annotations, showing that such labels improve the model's performance on unseen data. In this paper, we generalize these results by showing that training with soft labels is an effective way to use crowd annotations across several other AI tasks beyond the one studied by Peterson et al., and that it remains competitive with state-of-the-art methods for learning from crowdsourced data.
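To make the idea concrete, the following is a minimal, illustrative sketch (not the implementation used in this paper or by Peterson et al.) of how crowd annotations can be turned into a probabilistic soft label and used as a training target; the vote counts, class names, and functions here are hypothetical.

```python
import math

def soft_label(vote_counts):
    """Normalize crowd vote counts into a probability distribution (soft label)."""
    total = sum(vote_counts)
    return [c / total for c in vote_counts]

def cross_entropy(target, predicted, eps=1e-12):
    """Cross-entropy of model probabilities against a target distribution."""
    return -sum(t * math.log(max(p, eps)) for t, p in zip(target, predicted))

# Hypothetical example: 10 annotators labeled one image across 3 classes.
votes = [7, 2, 1]            # e.g. 7 voted "cat", 2 "dog", 1 "fox"
target = soft_label(votes)   # -> [0.7, 0.2, 0.1]

# Training minimizes cross-entropy against the full vote distribution
# rather than against a one-hot majority label [1.0, 0.0, 0.0].
pred = [0.6, 0.3, 0.1]       # model's predicted class probabilities
loss = cross_entropy(target, pred)
```

The design choice this illustrates is that the soft target preserves annotator disagreement (here, 30% of the probability mass lies outside the majority class), which a one-hot label would discard.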