Deep Probabilistic Modelling with Pyro | Chi Nhan Nguyen

Learn how to build robust and reliable deep learning models with probabilistic programming in Pyro, a probabilistic language built on top of PyTorch.

Key takeaways
  • Neural networks are great, but they have limitations. They perform poorly on out-of-sample examples, are overconfident, and don’t provide good uncertainty estimates.
  • Uncertainty is important for making informed decisions, so a proper uncertainty assessment of predictions is crucial.
  • Deep probabilistic modeling involves modeling uncertainties in neural networks.
  • Probabilistic neural networks provide better calibration of uncertainties and can be used for classification, regression, and sequence prediction.
  • Pyro is a probabilistic programming language built on top of PyTorch that provides high-level abstractions to sample from many different probability distributions.
  • Probabilistic models can help obtain robust uncertainties, which is important for critical decisions.
  • Monte Carlo dropout and variational inference are methods for approximating intractable distributions.
  • Gaussian processes are generative models that can be used for regression and classification.
  • Uncertainty estimates provide expected variances for predictions, which enables more efficient planning.
  • Adversarial attacks can be used to test the robustness of neural networks.
  • Softmax is not a reliable way to model uncertainty in neural networks.
  • Probabilistic models can be used for time series prediction, sequence prediction, and image classification.
  • A good model should have a low variance in its predictions.
  • A confidence interval is a subjective way of representing uncertainty.
  • Uncertainty can be represented using the KL divergence, which measures how closely one distribution approximates another.
  • Variational inference is a method for approximating intractable distributions.
  • The prior distribution represents the prior knowledge of a problem.
  • Bayes’ Rule is used to update the prior distribution based on new data.
  • The likelihood function represents the probability of the observed data given the model parameters.
  • The posterior distribution represents the updated belief about the model parameters after new data is observed.
  • In probabilistic modeling, we infer probability distributions conditioned on the observed data.
  • Deep probabilistic modeling can be used for healthcare, biology, and more complex systems.
  • It’s important to sample from a probability distribution that is closely related to the specific Gaussian process being modelled.
  • Variational inference can be used to approximate intractable distributions and can be more computationally efficient than Monte Carlo methods.
  • Autograd, PyTorch’s automatic differentiation engine, computes the gradients that variational inference uses to optimize its objective.
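
The Bayes’ Rule takeaways above (prior, likelihood, posterior) can be made concrete with a conjugate Beta-Bernoulli update. This is a minimal plain-Python sketch of the idea, not code from the talk; the function names are illustrative:

```python
# Conjugate Beta-Bernoulli update: a minimal sketch of Bayes' Rule.
# Prior Beta(alpha, beta); observing `heads` successes and `tails`
# failures yields the posterior Beta(alpha + heads, beta + tails).

def update_beta(alpha, beta, heads, tails):
    """Return the posterior Beta parameters after observing the data."""
    return alpha + heads, beta + tails

def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Start from a uniform prior Beta(1, 1), then observe 7 heads and 3 tails.
prior = (1.0, 1.0)
posterior = update_beta(*prior, heads=7, tails=3)
print(posterior)              # (8.0, 4.0)
print(beta_mean(*posterior))  # 0.6666666666666666
```

The posterior mean (2/3) sits between the prior mean (1/2) and the empirical frequency (7/10), illustrating how Bayes’ Rule blends prior knowledge with observed data.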
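
The KL divergence mentioned above has a closed form for two univariate Gaussians, which is the quantity variational inference penalizes when fitting a Gaussian approximation. A short sketch (the function name is illustrative):

```python
import math

def kl_gaussian(mu_q, sigma_q, mu_p, sigma_p):
    """Closed-form KL(q || p) for univariate Gaussians:
    log(sigma_p / sigma_q) + (sigma_q^2 + (mu_q - mu_p)^2) / (2 sigma_p^2) - 1/2
    """
    return (math.log(sigma_p / sigma_q)
            + (sigma_q**2 + (mu_q - mu_p)**2) / (2 * sigma_p**2)
            - 0.5)

# Identical distributions: the divergence is zero.
print(kl_gaussian(0.0, 1.0, 0.0, 1.0))  # 0.0
# A mean-shifted approximation incurs a positive penalty.
print(kl_gaussian(1.0, 1.0, 0.0, 1.0))  # 0.5
```

Note the asymmetry: KL(q || p) and KL(p || q) generally differ, which is why variational inference’s choice of direction matters.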
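
Monte Carlo dropout, listed above as an approximation method, keeps dropout active at prediction time and reads the spread of repeated stochastic passes as uncertainty. The toy below applies the idea to a single linear layer in plain Python; a real implementation would use PyTorch modules, and all names here are illustrative:

```python
import random
import statistics

def predict_with_dropout(x, weights, p_drop=0.5, rng=random):
    """One stochastic forward pass: each weight is dropped with
    probability p_drop and survivors are rescaled by 1 / (1 - p_drop)."""
    kept = [w * (0.0 if rng.random() < p_drop else 1.0 / (1.0 - p_drop))
            for w in weights]
    return sum(w * xi for w, xi in zip(kept, x))

def mc_dropout_predict(x, weights, n_samples=1000, seed=0):
    """Monte Carlo dropout: mean and variance over stochastic passes."""
    rng = random.Random(seed)
    samples = [predict_with_dropout(x, weights, rng=rng)
               for _ in range(n_samples)]
    return statistics.mean(samples), statistics.variance(samples)

mean, var = mc_dropout_predict([1.0, 2.0], [0.5, -0.3])
# mean is close to the deterministic prediction 0.5*1 - 0.3*2 = -0.1,
# while var > 0 quantifies the model's predictive uncertainty.
```

The rescaling keeps the expected prediction equal to the deterministic one, so the extra information from the repeated passes is entirely in the variance.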
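
The takeaway that softmax is not a reliable way to model uncertainty is easy to demonstrate: softmax normalizes whatever logits it receives, so even an arbitrary out-of-distribution input can yield near-certain class probabilities. A minimal sketch:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Logits a network might emit on an input it has never seen anything like:
probs = softmax([8.0, 1.0, 0.5])
# The first class still receives over 99% "confidence" -- softmax
# confidence says nothing about whether the input was in-distribution.
```

This is exactly the overconfidence problem the first takeaway refers to, and what probabilistic treatments of the weights aim to fix.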