A Hands-on Introduction to Physics-informed Machine Learning

Published on Jun 16, 2021

2021.05.26 Ilias Bilionis, Atharva Hans, Purdue University
Table of Contents below.

This video is part of NCN's Hands-on Data Science and Machine Learning Training Series which can be found at: https://nanohub.org/groups/ml/handson...

Can you make a neural network satisfy a physical law? There are two main types of such laws: symmetries and ordinary/partial differential equations. I will focus on differential equations in this short presentation. The simplest way to bake information about a differential equation into a neural network is to add a regularization term to the loss function used in training. I will explain the mathematics of this idea. I will also discuss applications of physics-informed neural networks spanning a range from solving differential equations for all possible parameterizations in one sweep (e.g., solving for all boundary conditions at once), to calibrating differential equations using data, to design optimization. Then we will work on a hands-on activity that shows you how to implement these ideas in PyTorch. I assume some familiarity with how conventional neural networks are trained (stochastic gradient descent).
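
The central trick above is short enough to sketch in code. Here is a minimal PyTorch illustration (my own sketch under simple assumptions, not the presenters' code or the nanoHUB tool): the ODE du/dx = -u on [0, 1] with u(0) = 1 becomes a loss with one term penalizing the equation residual at random collocation points and one penalizing the boundary-condition mismatch.

import torch

torch.manual_seed(0)

# Represent the unknown solution u(x) with a small fully connected network.
u = torch.nn.Sequential(
    torch.nn.Linear(1, 32),
    torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(u.parameters(), lr=1e-3)

for step in range(5000):
    # Random collocation points: the "stochastic" part of the gradient descent.
    x = torch.rand(64, 1, requires_grad=True)
    ux = u(x)
    # du/dx via automatic differentiation.
    dudx = torch.autograd.grad(ux, x, torch.ones_like(ux), create_graph=True)[0]
    # Physics term: the residual du/dx + u should vanish inside the domain.
    residual = (dudx + ux).pow(2).mean()
    # Boundary term: enforce u(0) = 1 as a penalty.
    boundary = (u(torch.zeros(1, 1)) - 1.0).pow(2).mean()
    loss = residual + boundary
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained network should approximate the exact solution u(x) = exp(-x).

The same pattern carries over to the PDE examples in the talk: only the residual (or energy) term of the loss changes.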

The nanoHUB tool "A Hands-on Introduction to Physics-Informed Neural Networks" is used in this hands-on tutorial and can be found on nanoHUB.org at: https://nanohub.org/tools/handsonpinns

Also, you need to know the basics of PyTorch to follow along. Going over this tutorial should be sufficient: https://pytorch.org/tutorials/beginne...

This presentation and additional downloads can be found on nanoHUB.org at: https://nanohub.org/resources/35060

Table of Contents:
00:00 A Hands-on Introduction to Physics-informed Machine Learning
01:57 Objective
02:08 Reminder - What are neural networks?
03:09 Reminder - How do we train neural networks?
04:28 Reminder - How do we train neural networks?
06:28 Illustrative Example 1: Solving an ODE
07:15 From ODE to a loss function
09:35 Solving the problem with stochastic gradient descent
10:59 Results (Part of Hands-on activity)
11:32 Illustrative Example 2: Solving an elliptic PDE
11:40 From PDEs to a loss function - Integrated squared approach
12:57 From PDEs to a loss function - Energy approach
14:36 I can already solve ODEs/PDEs. Why is this useful?
15:14 Illustrative Example 3: Solving PDEs for all possible parameterizations
16:31 Representing the solution of the PDE with a DNN
17:05 From PDEs to a loss function - Energy approach
18:02 One network for all kinds of random fields
18:19 One network for all kinds of random fields
19:03 What are the applications of this?
22:11 What is the catch?
24:04 Hands-on activity led by Atharva Hans
24:09 Demonstration
41:37 Q&A
