In this talk we introduce Bayesian Optimization as an efficient way to tune machine learning model parameters, especially when each evaluation of a parameter configuration is time-consuming or expensive. Models built with deep learning frameworks such as MXNet and TensorFlow are notoriously expensive to train, and they expose many tunable parameters, including hyperparameters, architecture choices, and data pre-processing settings, that can have a large impact on the efficacy of the model. We will motivate the problem with several example applications using open-source deep learning frameworks and open datasets. We will then compare the results of Bayesian Optimization to standard techniques like grid search, random search, and expert tuning, and show that Bayesian Optimization allows you to get better results in fewer evaluations.
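To make the idea concrete, here is a minimal sketch of a Bayesian Optimization loop using scikit-optimize's gp_minimize, one of several open-source implementations. The search space, objective function, and evaluation budget below are illustrative assumptions, not the exact setup from the talk; in a real pipeline the objective would train a model and return a validation metric.

```python
# A minimal sketch of Bayesian Optimization with scikit-optimize.
# The objective and search space are illustrative stand-ins for a
# real training-and-validation loop.
from skopt import gp_minimize
from skopt.space import Real, Integer

# Hypothetical search space: learning rate and number of hidden units.
space = [
    Real(1e-5, 1e-1, prior="log-uniform", name="learning_rate"),
    Integer(32, 512, name="hidden_units"),
]

def objective(params):
    learning_rate, hidden_units = params
    # In practice: train the model with these settings and return a
    # validation metric to minimize (e.g., validation loss). A cheap
    # synthetic surrogate stands in for that expensive step here.
    validation_loss = (learning_rate - 1e-3) ** 2 + (hidden_units - 128) ** 2 / 1e4
    return validation_loss

# Because each objective call is expensive, the Gaussian-process
# surrogate chooses where to evaluate next, balancing exploration of
# uncertain regions against exploitation of promising ones.
result = gp_minimize(objective, space, n_calls=25, random_state=0)
print("Best parameters:", result.x)
print("Best validation loss:", result.fun)
```

The key contrast with grid or random search is that each new evaluation point is chosen using everything learned from the previous evaluations, which is why Bayesian Optimization typically needs far fewer training runs to find a good configuration.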