Exp: Use Multi-objective learning #26
base: main
Conversation
Experiment Dual Cone method on DRM
Dual Cone method for PINN + DRM
Exp: Verify dual-cone optimization helps PINNs + soft BC loss
snapshot of weights of gradients
PINN with
@liruipeng Please take a look. It's ready. The main change is that the loss function now has two outputs.
Fix CI package and conflict
    u = model.get_solution(x)
    loss = loss_func(u, mesh.u_ex)
-   return loss
+   return loss, [loss,]
major change
This implementation provides an API for multi-objective learning: each training step computes the gradient of every loss term separately and applies an aggregated update $A(\nabla loss_1, \ldots, \nabla loss_k)$, where $A$ is an aggregator that combines the gradients.
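A minimal sketch of how such an aggregator-based step could look, assuming the loss function returns `(total_loss, [loss_1, ..., loss_k])` as in the diff above; `constant_aggregator` and `multi_objective_step` are hypothetical names for illustration, not the repository's actual API:

```python
import torch

def constant_aggregator(grads, weights=(1.0, 500.0)):
    # Hypothetical constant-weight aggregator: A(g_1, ..., g_k) = sum_i w_i * g_i.
    return sum(w * g for w, g in zip(weights, grads))

def multi_objective_step(model, losses, optimizer, aggregator=constant_aggregator):
    # `losses` is the list of individual objectives returned by the new loss API.
    params = [p for p in model.parameters() if p.requires_grad]
    per_objective_grads = []
    for loss in losses:
        # One backward pass per objective, keeping the gradients separate so the
        # aggregator (rather than plain summation) decides how to combine them.
        grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
        per_objective_grads.append(
            [g if g is not None else torch.zeros_like(p) for g, p in zip(grads, params)]
        )
    optimizer.zero_grad()
    for i, p in enumerate(params):
        p.grad = aggregator([per_objective_grads[k][i] for k in range(len(losses))])
    optimizer.step()
```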
For $loss = 1.0\, loss_{drm} + 500\, loss_{bc}$:
python pinn_1d.py --levels 1 --epochs 10000 --lr 1e-4 --activation gelu --sweeps 1 --hidden_dims 256 256 256 --high_freq 3 --loss_type 1 --bc_weight 1 --nx 2000 --aggregator "Constant" 1. 500.
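Assuming the Constant aggregator simply applies these fixed weights to the per-objective gradients, this run should match standard weighted-sum training, since $1.0\,\nabla loss_{drm} + 500\,\nabla loss_{bc} = \nabla(1.0\, loss_{drm} + 500\, loss_{bc})$.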
For Dual Cone optimization
This optimizes the PINN loss and the boundary loss at the same time.
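A minimal sketch of one way to keep the combined update inside the dual cone of both gradients (a PCGrad-style projection; not necessarily the exact dual-cone update used in this PR). The hypothetical `dual_cone_combine` operates on flattened gradient vectors of the PINN residual loss and the boundary loss:

```python
import torch

def dual_cone_combine(g_pinn, g_bc):
    # g_pinn, g_bc: flattened gradients of the PDE-residual loss and the BC loss.
    dot = torch.dot(g_pinn, g_bc)
    if dot >= 0:
        # No conflict: the plain sum already has a non-negative inner product
        # with both gradients, i.e. it lies in their dual cone.
        return g_pinn + g_bc
    # Conflict: project each gradient onto the orthogonal complement of the other.
    g_pinn_proj = g_pinn - dot / g_bc.norm() ** 2 * g_bc
    g_bc_proj = g_bc - dot / g_pinn.norm() ** 2 * g_pinn
    # The sum of the projections satisfies d . g_pinn >= 0 and d . g_bc >= 0,
    # so both losses are (to first order) non-increasing along the update.
    return g_pinn_proj + g_bc_proj
```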
Monitor weights
Add --monitor_aggregator
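A sketch of what the monitoring could record, assuming the aggregator is wrapped so that the effective weight given to each objective's gradient is snapshotted every step; `MonitoredAggregator` and its weight definition are illustrative, not the repository's implementation:

```python
import torch

class MonitoredAggregator:
    """Wraps an aggregator and records per-objective weights at every step."""

    def __init__(self, aggregator):
        self.aggregator = aggregator
        self.history = []  # one list of per-objective weights per training step

    def __call__(self, grads):
        combined = self.aggregator(grads)
        # Diagnostic "weight" of objective k: component of the combined update
        # along g_k, normalised by ||g_k||^2 (zero means g_k was ignored).
        weights = [float(torch.dot(combined.flatten(), g.flatten())
                         / (g.norm() ** 2 + 1e-12)) for g in grads]
        self.history.append(weights)
        return combined
```

The recorded history can then be dumped or plotted to produce the snapshot of gradient weights mentioned above.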
TorchJD package for the Jacobian descent method
Many ways to combine gradients: