Abstracts
Robust Iteration Methods for Non-linear PDEs
Florin Adrian Radu (University of Bergen, Norway)
Robust iterative splitting schemes for linear or nonlinear coupled PDEs (I and II)
In this work we consider solvers for fully coupled partial differential equations (PDEs) based on splitting schemes. There are many relevant applications behind such systems, e.g. water and soil pollution, CO2 storage, enhanced geothermal energy extraction, and nuclear waste management. In the first part we will consider coupled linear PDEs and introduce robust iterative splitting solvers; their convergence (stabilization) and optimization will be discussed. As an example we consider the Biot equations.
In the second part we will consider nonlinear, fully coupled PDEs. Splitting and linearization techniques will be discussed. Different nonlinear Biot models and reactive transport in unsaturated porous media will be used as examples.
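As a concrete illustration of the linearization techniques in this family, the sketch below applies an L-scheme-type iteration to a 1D nonlinear reaction-diffusion problem: Newton's Jacobian is replaced by a constant stabilization parameter L that dominates the derivative of the nonlinearity, trading speed for robustness. The problem, mesh, and parameter values are illustrative choices, not taken from the talks.

```python
import numpy as np

# Minimal 1D sketch of an L-scheme linearization for -u'' + b(u) = f,
# u(0) = u(1) = 0, with b(u) = u**3.  All data are illustrative.
n = 99                        # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2   # finite-difference Laplacian
f = 10 * np.sin(np.pi * x)

b = lambda u: u**3
L = 5.0                       # must dominate b' = 3u^2 on the solution range

u = np.zeros(n)
for k in range(100):
    # L-scheme step: (L*I + A) u_new = L*u + f - b(u)
    u_new = np.linalg.solve(L * np.eye(n) + A, L * u + f - b(u))
    if np.linalg.norm(u_new - u, np.inf) < 1e-10:
        break
    u = u_new
print(f"converged in {k} iterations, max|u| = {np.abs(u).max():.3f}")
```

Unlike Newton's method, this iteration contracts globally whenever L is chosen large enough, at the price of a linear (rather than quadratic) convergence rate.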
Martin Vohralik (Inria, France)
Adaptive iterative approximation in nonlinear PDEs
Nonlinear partial differential equations (PDEs) are omnipresent in the description of physical phenomena. Their numerical approximation, however, poses numerous difficulties. In this talk, we describe a holistic approach for finding a discrete (piecewise polynomial) approximation of the unknown exact solution with error below a given desired tolerance and at the expense of minimal computational cost. The central tool is a posteriori error estimation: estimates that assess the error at each stage of a numerical simulation, on each time step, on each spatial mesh, on each iterative regularization step, on each iterative linearization step, and on each iterative algebraic solver step. The estimates give a guaranteed upper bound (reliability), are mathematically equivalent to the error (efficiency), and distinguish the different error components such as the temporal, spatial, regularization, linearization, and algebraic solver ones. For model nonlinear problems, it is possible to obtain estimates that are provably robust, i.e., of quality independent of the strength of the nonlinearities; robustness with respect to the final time is ensured for time-dependent problems. The developed theory encompasses all standard numerical methods (finite elements, finite volumes, mixed finite elements, discontinuous Galerkin schemes, polytopal discretization schemes), standard time-stepping schemes, any iterative regularization, iterative linearizations such as Zarantonello, Picard, Kačanov, Newton, the M-scheme, or the L-scheme, and any iterative linear algebraic solver. The approach is not a novel scheme but rather consists in the efficient use of existing building blocks.
We characterize the cost of our adaptive algorithms as the cumulative sum of the number of degrees of freedom of the given numerical method over all time steps, spatial meshes, and all regularization, linearization, and algebraic solver steps; by construction, this scales as the CPU time. For model steady problems, a particular theoretical effort has been dedicated to rigorously proving that the derived adaptive algorithms indeed achieve a convergence rate that is optimal with respect to this computational cost, or, in other words, that the rate of decrease of the error with respect to this cost cannot be improved.
The subject is an interplay between analysis of partial differential equations, numerical analysis, and numerical linear algebra. Efficient computer implementation, assessment on academic benchmarks, and applications to environmental problems with nonsmooth and degenerate nonlinearities like the geological sequestration of CO2 will be discussed.
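The interplay of error components can be caricatured in a few lines: an inexact Newton solver whose inner algebraic iteration is stopped as soon as the algebraic residual falls below a fraction gamma of the outer nonlinear residual, so that no accuracy is wasted over-solving the linear systems. In the sketch below, plain residual norms stand in for the a posteriori estimators of the talk; the test problem and all tolerances are illustrative.

```python
import numpy as np

def cg(M, rhs, tol):
    """Plain conjugate gradient for M x = rhs, stopped once ||rhs - M x|| <= tol."""
    x, r = np.zeros_like(rhs), rhs.copy()
    p = r.copy()
    while np.linalg.norm(r) > tol:
        Mp = M @ p
        alpha = (r @ r) / (p @ Mp)
        x += alpha * p
        r_new = r - alpha * Mp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

# illustrative nonlinear system: finite differences for -u'' + u^3 = f on (0, 1)
n = 50
h = 1.0 / (n + 1)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = 10 * np.sin(np.pi * np.linspace(h, 1 - h, n))
F = lambda u: A @ u + u**3 - f            # nonlinear (linearization) residual
J = lambda u: A + np.diag(3 * u**2)       # Jacobian, symmetric positive definite

u, gamma = np.zeros(n), 0.1
for k in range(50):
    res = np.linalg.norm(F(u))
    if res < 1e-8:
        break
    # adaptive stopping: solve the linear system only to gamma * (outer residual)
    u = u + cg(J(u), -F(u), gamma * res)
print(f"{k} Newton steps, final residual {np.linalg.norm(F(u)):.2e}")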
Ben Gharbia I., Ferzly J., Vohralík M., Yousef S., Semismooth and smoothing Newton methods for nonlinear systems with complementarity constraints: Adaptivity and inexact resolution, J. Comput. Appl. Math. 420 (2023), 114765.
Ern A., Vohralík M., Adaptive inexact Newton methods with a posteriori stopping criteria for nonlinear diffusion PDEs, SIAM J. Sci. Comput. 35 (2013), A1761–A1791.
Févotte F., Rappaport A., Vohralík M., Adaptive regularization, discretization, and linearization for nonsmooth problems based on primal-dual gap estimators, Comput. Methods Appl. Mech. Engrg. 418 (2024), 116558.
Haberl A., Praetorius D., Schimanko S., Vohralík M., Convergence and quasi-optimal cost of adaptive algorithms for nonlinear operators including iterative linearization and algebraic solver, Numer. Math. 147 (2021), 679–725.
Mitra K., Vohralík M., A posteriori error estimates for the Richards equation, Math. Comp. 93 (2024), 1053–1096.
Mitra K., Vohralík M., Guaranteed, locally efficient, and robust a posteriori estimates for nonlinear elliptic problems in iteration-dependent norms. An orthogonal decomposition result based on iterative linearization, HAL Preprint 04156711, 2023.
Scientific Machine Learning
Christoph Brune (University of Twente, the Netherlands)
Felix Dietrich (TU Munich, Germany)
Talk 1: Learning dynamical systems from data
Dynamic processes have been modelled successfully for hundreds of years, often using ordinary or partial differential equations. Using data-driven methods, these processes can now also be inferred directly from measurements. In this talk, we will survey my group's work in this direction. We will discuss learning differential equations on reduced spaces and how to utilize numerical integration schemes to train neural networks for stochastic dynamics, and close with an alternative view on system identification through the Koopman operator framework.
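To make the flavour of such methods concrete, here is a minimal sketch of inferring a dynamical system from trajectory data: a linear-in-parameters dictionary model stands in for the neural networks of the talk, and a forward-Euler one-step relation turns snapshots into training targets for a single least-squares solve. The Van der Pol system and all sizes are illustrative assumptions.

```python
import numpy as np

def rk4_step(f, x, h):
    k1 = f(x); k2 = f(x + h/2*k1); k3 = f(x + h/2*k2); k4 = f(x + h*k3)
    return x + h/6*(k1 + 2*k2 + 2*k3 + k4)

# generate training data from a reference system (Van der Pol oscillator)
vdp = lambda x: np.array([x[1], (1 - x[0]**2) * x[1] - x[0]])
h, N = 0.01, 2000
X = np.empty((N, 2)); X[0] = [2.0, 0.0]
for i in range(N - 1):
    X[i + 1] = rk4_step(vdp, X[i], h)

# dictionary of monomials up to degree 3 in (x1, x2)
def theta(Y):
    x1, x2 = Y[:, 0], Y[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1*x2,
                            x1**2, x2**2, x1**2*x2, x1*x2**2, x1**3, x2**3])

# forward-Euler targets: (X[i+1] - X[i]) / h  approximates  f(X[i])
W, *_ = np.linalg.lstsq(theta(X[:-1]), (X[1:] - X[:-1]) / h, rcond=None)
f_learned = lambda x: theta(x[None, :])[0] @ W

# roll the learned model forward and compare with the reference trajectory
x = X[0].copy()
for i in range(N - 1):
    x = rk4_step(f_learned, x, h)
print("end-point error:", np.linalg.norm(x - X[-1]))
```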
Talk 2: Random feature methods and their applications
In the second talk, we discuss a sampling scheme for a specific, training-data-dependent probability distribution of the parameters of feed-forward neural networks that removes the need for iterative updates of the hidden parameters. Once these have been chosen at random from the constructed distribution, only a single linear problem must be solved to obtain a fully trained network. Such networks fall in the class of random feature models, but their hidden parameters now depend on the training data. They are provably dense in the continuous functions and have a convergence rate in the number of neurons that is independent of the input dimension. Using sampled neurons as basis functions in an ansatz allows us to effectively construct models for regression and classification tasks, create recurrent networks, construct neural operators, and solve partial differential equations. In computational experiments, the sampling scheme outperforms iterative, gradient-based optimization by several orders of magnitude in both training speed and accuracy. We will discuss benefits and drawbacks of the approach, as well as future directions regarding new network architectures.
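A minimal sketch of the idea, under simplifying assumptions: each hidden weight/bias pair is constructed from a sampled pair of training points, so that the neuron's transition region lies where the data lives, and the only trained parameters are the outer coefficients of a single linear least-squares solve. The exact scaling used below is a simplification for illustration, not the talk's construction.

```python
import numpy as np

# data-dependent sampled network for 1D regression (illustrative toy problem)
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 400)[:, None]
y = np.sin(2 * x[:, 0]) + 0.5 * x[:, 0]

m = 200                                    # number of hidden neurons
i, j = rng.integers(0, len(x), (2, m))     # sample pairs of training points
d = x[j] - x[i]
d[np.abs(d) < 1e-8] += 1e-3                # guard against coincident pairs
W = (1.0 / d).T                            # steep neurons where points are close
b = -(W * x[i].T)[0]                       # place each transition at x_i
H = np.tanh(x @ W + b)                     # hidden features, shape (400, m)

# the only trained parameters: one linear least-squares solve
c, *_ = np.linalg.lstsq(H, y, rcond=None)
print("training RMSE:", np.sqrt(np.mean((H @ c - y) ** 2)))
```

No gradient descent on the hidden layer occurs anywhere: sampling replaces the nonconvex inner optimization, and the outer solve is convex.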
Geometric Integration
Mathieu Desbrun (Inria, France)
Talk 1:
Exploiting the "unreasonable effectiveness" of geometry in computing
It has been repeatedly noted in computational science that the quality of computing tools ultimately boils down to properties of a fundamentally geometric or topological nature. This talk will describe our approach to computing through the lens of geometry to offer a versatile and efficient toolbox for a variety of applications, from shape processing to tangent vector field editing, variational mechanics, non-linear dimensionality reduction, and even matrix preconditioners. Through a series of examples, we will point out how a strong grasp of classical differential geometry paired with a good understanding of the typical computational constraints in research and industry can bring forth novel theoretical and practical foundations for general-purpose computations. The importance of preserving differential geometric properties in the discrete setting will be a recurring theme throughout the talk, demonstrating the value of geometry in computations.
Talk 2:
A Discrete Exterior Calculus of Bundle-Valued Forms
Exploring novel geometry-driven discretizations of continuous equations or physical models is often fraught with unexpected difficulties. This talk discusses the development of structure-preserving discretizations of the exterior calculus of differential forms with values in a vector bundle over a combinatorial manifold equipped with a connection. Compared to their scalar-based counterparts, which admit a well-established discretization via cochains, bundle-valued forms (e.g., with values in the group of rotation matrices) present numerous difficulties when one tries to properly define a discrete counterpart to them and to the exterior covariant derivative operator acting on them. We show, however, that the use of specifically selected local frame fields allows the construction of a discrete exterior covariant derivative of bundle-valued forms that not only satisfies the well-known Bianchi identities in this discrete realm, but also converges to its smooth equivalent under mesh refinement. If time allows, I will mention other ongoing projects where geometry-driven discretization is sought after.
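For contrast, the well-established scalar case mentioned above fits in a few lines: forms are cochains, and the exterior derivative is a signed incidence matrix, so d∘d = 0 holds exactly at the discrete level. The two-triangle mesh below is an illustrative toy; the bundle-valued setting of the talk requires the frame-field machinery it describes.

```python
import numpy as np

# Scalar discrete exterior calculus on a toy mesh: vertices 0..3, oriented
# edges as vertex pairs, oriented faces as loops of edges.
edges = [(0, 1), (1, 2), (2, 0), (1, 3), (3, 2)]
faces = [((0, 1), (1, 2), (2, 0)), ((1, 3), (3, 2), (2, 1))]

d0 = np.zeros((len(edges), 4))             # exterior derivative: vertices -> edges
for k, (a, b) in enumerate(edges):
    d0[k, a], d0[k, b] = -1.0, 1.0

d1 = np.zeros((len(faces), len(edges)))    # exterior derivative: edges -> faces
for k, loop in enumerate(faces):
    for (a, b) in loop:
        if (a, b) in edges:
            d1[k, edges.index((a, b))] = 1.0
        else:                              # edge traversed against its orientation
            d1[k, edges.index((b, a))] = -1.0

print("d1 @ d0 =\n", d1 @ d0)              # zero matrix: the discrete d∘d = 0
```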
Elena Celledoni (NTNU, Norway)
Talk 1: Deep learning and numerical analysis
Deep neural networks have recently been interpreted as discretisations of an optimal control problem subject to an ordinary differential equation constraint. This (discrete) optimal control point of view on neural networks offers an interpretation of deep learning from a numerical analysis perspective and opens the way to mathematical insight [9, 7, 2].
We show how classical stability results for ODEs can be used to construct contractive neural network architectures; thus, neural networks can be designed with guaranteed stability properties. This can be used to ensure robustness against adversarial attacks and to obtain convergent “Plug-and-Play” algorithms for inverse problems in imaging [3, 6, 11].
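The design principle can be illustrated numerically: taking each layer to be an explicit Euler step of the gradient flow x' = -Wᵀ tanh(Wx + b) yields a non-expansive map for a small enough step size, so distances between inputs cannot grow through the network. The sketch below uses random weights and illustrative sizes; it shows the principle rather than the trained architectures of the cited papers.

```python
import numpy as np

# A residual network whose layers are Euler steps of a contractive gradient flow.
rng = np.random.default_rng(1)
dim, layers, h = 10, 50, 0.1
Ws = rng.standard_normal((layers, dim, dim)) / np.sqrt(dim)
bs = rng.standard_normal((layers, dim))

def forward(x):
    for W, b in zip(Ws, bs):
        x = x - h * W.T @ np.tanh(W @ x + b)   # one explicit Euler step
    return x

# distances between two inputs do not grow through the network
x, y = rng.standard_normal((2, dim))
print("input distance :", np.linalg.norm(x - y))
print("output distance:", np.linalg.norm(forward(x) - forward(y)))
```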
In the second part of the talk, we consider extensions of these ideas to the manifold-valued case, leading to a classical stability analysis of geometric integrators on Riemannian manifolds. In particular, we will discuss B-stability of the backward Euler method on Riemannian manifolds [1], and conditional stability of the explicit Euler method on manifolds of constant sectional curvature [8].
Talk 2: Shape analysis and structure preservation
Shape analysis is a framework for treating complex data and obtaining metrics on spaces of data. Examples are spaces of unparametrized curves, time signals, surfaces, and images. In this talk we discuss structure preservation for classifying, analysing, and manipulating shapes. A computationally demanding task when estimating distances between shapes, e.g. in object recognition, is the computation of optimal reparametrizations. This is an optimisation problem on the infinite-dimensional group of orientation-preserving diffeomorphisms [5]. We will discuss useful geometric properties in this context, e.g. reparametrization invariance of the distance function, and inherent geometric structure of the data, e.g. Lie group structure [4]. Another interesting set of related problems arises when learning dynamical systems from (human motion) data [10].
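One standard construction in this literature is the square root velocity transform (SRVT), q = c′/√|c′|, under which simultaneous reparametrization of two curves leaves their L² distance unchanged; comparing shapes then amounts to optimizing over reparametrizations of one curve. The sketch below shows the transform on illustrative curves and deliberately skips the costly optimal-reparametrization step discussed above.

```python
import numpy as np

t = np.linspace(0, 1, 500)
dt = t[1] - t[0]

def srvt(c):
    v = np.gradient(c, t, axis=0)                     # velocity c'(t)
    return v / np.sqrt(np.linalg.norm(v, axis=1))[:, None]

def l2_dist(a, b):
    return np.sqrt(np.sum((a - b) ** 2) * dt)         # discrete L2 norm

circle = np.column_stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
ellipse = np.column_stack([1.5 * np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
print("SRVT distance, circle vs ellipse:", l2_dist(srvt(circle), srvt(ellipse)))

# same shape under a different parametrization: the distance stays nonzero
# until an optimal reparametrization is found -- the expensive step above
s = 0.5 * (t + t**2)
circle2 = np.column_stack([np.cos(2 * np.pi * s), np.sin(2 * np.pi * s)])
print("SRVT distance, same shape reparametrized:",
      l2_dist(srvt(circle), srvt(circle2)))
```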
References
[1] M. Arnold, E. Celledoni, E. Çokaj, B. Owren, D. Tumiotto (2024), B-stability of numerical integrators on Riemannian manifolds, Journal of Computational Dynamics.
[2] M. Benning, E. Celledoni, M. J. Ehrhardt, B. Owren, C.-B. Schönlieb (2019), Deep learning as optimal control problems: models and numerical methods, Journal of Computational Dynamics, 6(2):171–198.
[3] E. Celledoni, M. J. Ehrhardt, C. Etmann, R. I. McLachlan, B. Owren, C.-B. Schönlieb, F. Sherry (2021), Structure preserving deep learning, European Journal of Applied Mathematics.
[4] E. Celledoni, M. Eslitzbichler, A. Schmeding (2016), Shape analysis on Lie groups with applications in computer animation, J. Geom. Mech., 8(3):273–304.
[5] E. Celledoni, H. Glöckner, J. Riseth, A. Schmeding (2023), Deep neural networks on diffeomorphism groups for optimal shape reparameterization, BIT Numerical Mathematics, 63(4):1–38.
[6] E. Celledoni, D. Murari, B. Owren, C.-B. Schönlieb, F. Sherry (2023), Dynamical systems’ based neural networks, SIAM J. Sci. Comput.
[7] W. E (2017), A Proposal on Machine Learning via Dynamical Systems, Commun. Math. Stat., 5:1–11.
[8] M. Ghirardelli, B. Owren, E. Celledoni, Conditional stability of the Euler method on Riemannian manifolds, arXiv:2503.09434v2.
[9] E. Haber, L. Ruthotto (2017), Stable architectures for deep neural networks, Inverse Problems, 34(1).
[10] M. D. Hansen, E. Celledoni, B. K. Tapley, Learning mechanical systems from real-world data using discrete forced Lagrangian dynamics, arXiv:2505.20370v1.
[11] F. Sherry, E. Celledoni, M. J. Ehrhardt, D. Murari, B. Owren, C.-B. Schönlieb (2024), Designing stable neural networks using convex analysis and ODEs, Physica D: Nonlinear Phenomena, 463.