Investigating Spectral Bias on Functional Autoencoder #14
Exp: Functional autoencoder has spectral bias. The data is generated by a cosine function with multiple radial speeds.
Copy post on Teams: The preliminary result shows that the functional encoder has spectral bias. This might not be surprising, since it is still a DNN approach. I believe we can play some tricks like in MultiDNN or multi-step training. For example, a cascading functional encoder, where each AE learns to recover the residual function left by the previous AE. The stages can share the same structure (like a residual NN) or differ. The ultimate goal is to produce a fixed-dimensional latent factor of the function, to do operator learning with DeepONet.
Commits:
* wip: problem
* add m_score matrixes
* add regularization on norm
* use more basis
The functional autoencoder cannot extrapolate to different frequencies.
Objective
This work investigates a functional autoencoder that encodes and decodes functional data. It is a neural approach to functional principal component analysis. The data are randomly generated functions on an infinite-dimensional domain. I suspect the functional autoencoder suffers from the same spectral bias issue observed in neural-network regression.
Method
The functional autoencoder uses a basis-function decomposition to encode a function:
Suppose we have a function $u: \mathbb{R}\to \mathbb{R}$, and we want to decompose $u$ into

$$u(x) \approx \hat{u}(x) = \sum_{i=1}^{n} c_i \phi_i(x),$$
where the basis functions $\phi_i$ for $i=1,\cdots,n$ are given by a neural network $NN: \mathbb{R} \times \Omega \to \mathbb{R}$ with parameters $\theta \in \Omega$.
The score (coefficient) $c_i$ is given by

$$c_i = \langle u, \phi_i \rangle = \int u(x)\, \phi_i(x)\, dx;$$
the integral is approximated by the trapezoidal rule. The parameter $\theta$ is optimized by minimizing the reconstruction loss

$$\min_{\theta} \; \lVert u - \hat{u} \rVert_{L^2}^2.$$
Additionally, the orthonormality of the basis functions is induced by a penalty term in the loss function, e.g. $\lambda \sum_{i,j} \left( \langle \phi_i, \phi_j \rangle - \delta_{ij} \right)^2$.
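A minimal runnable sketch of this method, assuming PyTorch, a fixed grid on $[0, 1]$, and cosine training data with multiple frequencies as in the experiment; `BasisNet`, `encode`, and `decode` are illustrative names rather than this repository's actual API, and the penalty weight is a placeholder:

```python
import torch
import torch.nn as nn

class BasisNet(nn.Module):
    """NN_theta: maps a point x in R to the n basis values phi_1(x), ..., phi_n(x)."""
    def __init__(self, n_basis: int, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, n_basis),
        )

    def forward(self, x):               # x: (m, 1) grid locations
        return self.net(x)              # (m, n) basis values

def encode(u, phi, x):
    """Scores c_i = <u, phi_i>, the integral approximated by the trapezoidal rule."""
    # u: (batch, m) function values, phi: (m, n) basis values, x: (m,) grid
    return torch.trapezoid(u.unsqueeze(-1) * phi.unsqueeze(0), x, dim=1)  # (batch, n)

def decode(c, phi):
    """Reconstruction u_hat = sum_i c_i phi_i, evaluated on the grid."""
    return c @ phi.T                    # (batch, m)

# one training step on a batch of cosines with multiple radial speeds
m, n, batch = 128, 8, 32
x = torch.linspace(0.0, 1.0, m)
freqs = torch.randint(1, 6, (batch, 1)).float()
u = torch.cos(2 * torch.pi * freqs * x)                   # (batch, m)

model = BasisNet(n)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

phi = model(x.unsqueeze(-1))                              # (m, n)
u_hat = decode(encode(u, phi, x), phi)

# Gram matrix <phi_i, phi_j>; the penalty pushes it toward the identity
gram = torch.trapezoid(phi.unsqueeze(2) * phi.unsqueeze(1), x, dim=0)
loss = ((u - u_hat) ** 2).mean() + 1e-2 * ((gram - torch.eye(n)) ** 2).sum()
loss.backward()
opt.step()
```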
Related work
Functional encoding for operator learning: once trained, the input $x$ can be at arbitrary resolution (not good for DeepONet) and irregularly spaced (not good for FNO), as long as the inner product is estimated accurately. The complexity of encoding is $O(n)$, and high-dimensional integrals can be approximated by Monte Carlo estimation.
Functional encoding is a process that describes an infinite-dimensional domain with a finite-dimensional one. The Basis Operator Network uses a functional encoder to derive the latent vector of the target/input function. The advantage of this encoding method is that the sensor points can be placed arbitrarily.
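As a concrete illustration of the Monte Carlo estimate mentioned above, here is a small NumPy sketch for a higher-dimensional domain; the stand-in basis functions and the helper `mc_scores` are assumptions for illustration, not code from this repository:

```python
import numpy as np

def mc_scores(u_vals, phi_vals, volume=1.0):
    """Monte Carlo estimate of c_i = integral of u(x) * phi_i(x) over the domain.

    u_vals:   (m,)   function values at m uniformly sampled sensor points
    phi_vals: (m, n) basis values at the same points
    volume:   Lebesgue measure of the integration domain
    """
    return volume * (u_vals[:, None] * phi_vals).mean(axis=0)   # (n,) scores

# example on [0, 1]^4: irregular (random) sensor locations are fine here
rng = np.random.default_rng(0)
x = rng.uniform(size=(10_000, 4))                           # m sensor points
u_vals = np.cos(3 * np.pi * x[:, 0])                        # a sample input function
phi_vals = np.cos(np.pi * x @ rng.integers(1, 5, (4, 8)))   # 8 stand-in basis functions
c = mc_scores(u_vals, phi_vals)                             # cost linear in n
```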
Multiscale
The MultiscaleDNN approach can be applied to help mitigate the spectral bias. There are two ways of doing this:
Sequential layers: the autoencoder is stacked in $l$ layers (or the same encoder is applied sequentially); the first autoencoder produces $\hat{u}_1$, and the second learns to recover the residual, $\hat{u}_2 \approx u-\hat{u}_1$. The multilevel structure can also use input scaling to shift the frequency spectrum, or customized activation functions for frequency modulation. The drawback of this structure is that the stages must be applied sequentially (a sketch follows this list).
Horizontal layers: use the scaled series $[x, \ldots, nx]$ as input and output multiple basis functions. The tricky part is how to keep the basis functions orthonormal.
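Below is a minimal sketch of the sequential/cascading idea, where each stage fits the residual left by the previous ones; `fit_stage` is a hypothetical per-stage training routine, and the loop is a design sketch under that assumption rather than the implemented method:

```python
def cascade_fit(u, stages, fit_stage):
    """Train l autoencoders where stage k fits the residual left by stages 1..k-1.

    u:         (batch, m) target functions sampled on a grid
    stages:    list of l untrained autoencoders (same or different architectures)
    fit_stage: callable (ae, target) -> reconstruction; trains one stage to completion
    """
    residual = u
    parts = []
    for ae in stages:
        u_hat_k = fit_stage(ae, residual)   # stage k learns u - (u_hat_1 + ... + u_hat_{k-1})
        parts.append(u_hat_k)
        residual = residual - u_hat_k       # what remains for the next, higher-frequency stage
    return sum(parts)                       # final reconstruction: u_hat_1 + ... + u_hat_l
```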
ToDo
- [ ] To complete the implementation
- [ ] To research