2023 Spring Meeting

Participants of the 2023 Spring Meeting at TU Eindhoven

On Wednesday, May 31, 2023, the Dutch-Flemish Scientific Computing Society organized its annual Spring Meeting. This year it took place at Eindhoven University of Technology. A mix of young and senior researchers was invited to present their research.

Participation, including lunch, was free of charge. This year we had 46 registrations.

TU Eindhoven

Filmzaal, Zwarte Doos (building 4 on the map), is a 10-minute walk from NS station Eindhoven Centraal.


The spring meeting is organized yearly by the Dutch-Flemish Scientific Computing Society (SCS), this year in cooperation with Eindhoven University of Technology.

Organizing committee: Barry Koren (TU Eindhoven) and Martine Anholt (CWI, Secretary SCS).

Support for this meeting has been obtained from Centrum Wiskunde & Informatica (CWI) and TU Eindhoven.

More pictures can be found here.


Links to the talks will be shared soon.

Program 2023


Registration, coffee and tea


Svetlana Dubinkina, VU Amsterdam


Emil Løvbak, KU Leuven


Pascal den Boef, TU Eindhoven


Coffee and tea


Philipp Horn, TU Eindhoven


Jonas Thies, TU Delft


Group picture




Mariya Ishteva, KU Leuven


Fang Fang, TU Delft


Coffee, tea and refreshments


Anne Eggels, Sioux Technologies


Wim Vanroose, University of Antwerp




Speakers Spring Meeting SCS 2023

Svetlana Dubinkina, VU Amsterdam
Svetlana Dubinkina is an associate professor at VU Amsterdam. She works on the development and analysis of numerical methods to reduce uncertainty in predictions and estimations. The applications she has been working on are climate predictions and subsurface oil-reservoir estimations. She explores multidisciplinarity not only between mathematics and climatology but also within mathematics itself: statistical equilibrium mechanics, shadowing, optimal transport, and PCA, to name a few.
Since 2022, Svetlana has chaired the Dutch association of Women in Mathematics (https://www.ewmnetherlands.nl), whose mission is to support women mathematicians in their careers.

Wim Vanroose, University of Antwerp
Wim Vanroose is professor of Applied Mathematics at the University of Antwerp, where he works on numerical methods for large-scale complex systems. His group developed pipelined Krylov methods, in which communication and computation are overlapped, leading to better scalability on supercomputers. He is now focusing his research on combining Krylov methods and optimization methods.

He co-founded two companies. Motulus.aero provides optimization software to the airline industry and Polygonal introduces shape optimization techniques in the textile and fashion industry.


Anne Eggels, Sioux Technologies
Anne Eggels is working at Sioux Technologies as a Mathware Designer. Her PhD project was at the Centrum Wiskunde & Informatica on uncertainty quantification with dependent input data, mainly applied to offshore wind farms. At Sioux Technologies, she works on industrial problems regarding computational physics and optimization.
Fang Fang, TU Delft
Dr. Fang Fang obtained a PhD in Computational Finance from TU Delft in 2010, based on the innovation of “the COS method”. Since 2021 she has been working for TU Delft as a part-time assistant professor. She is also a senior quant consultant and a modelling expert, with 14 years of hands-on experience in pricing-model validation and risk-model development at Tier-1 financial institutions in the Netherlands.
Her research interest lies in improving numerical methods and models for 1) risk quantification and allocation, 2) derivative pricing, and 3) time-series prediction. Courses she teaches/moderates include Computational Finance (MSc), Advanced Credit Risk Management (a MOOC jointly prepared by TU Delft and Deloitte) and Introduction to Credit Risk Management (a MOOC by TU Delft).

Mariya Ishteva, KU Leuven
Mariya Ishteva is an assistant professor at KU Leuven, Department of Computer Science, working on tensor methods and their applications in representing, modeling, and extracting information from complex data. Her main research interests are in the fields of (multi)linear algebra, system identification, machine learning, data mining, and optimization. Previously, she studied and worked in four different but complementary domains (Computer Science, Mathematics, Engineering, and Machine Learning/Data Mining) and in four different countries (Bulgaria, Germany, Belgium, and the USA).


Jonas Thies, TU Delft
Jonas Thies has a Bachelor's degree in Computational Engineering (Erlangen, 2003), a Master's in Scientific Computing (KTH Stockholm, 2006), and a PhD in Applied Mathematics (Groningen, 2011). He spent two years at the Center for Interdisciplinary Mathematics in Uppsala, after which he moved to Cologne as a scientific employee of the German Aerospace Center (DLR) in software technology. There he led a research group on parallel numerics from 2017 to 2021. Since June 2021 he has been an Assistant Professor at the Delft High Performance Computing Center (DHPC).
Pascal den Boef, TU Eindhoven
Pascal den Boef is a PhD student in the COMPAS project at Eindhoven University of Technology. The focus of his studies is model reduction of nonlinear dynamic models, with applications in virtual sensing and design optimization. Of special interest are thermo-mechanical models of automotive components. He received his MSc degree in Electrical Engineering in 2019 from the same university, with a thesis on system identification of LPV systems. Besides his PhD studies, he co-founded two companies: Drebble provides consultancy in the area of control engineering, and Hawkeye Recognition develops computer vision solutions for edge devices.
Philipp Horn, TU Eindhoven
Philipp Horn is a PhD student in the UNRAVEL project at TU Eindhoven. His research currently focuses on structure-preserving neural networks for Hamiltonian systems. He obtained his B.Sc. degree in Simulation Technology from the University of Stuttgart, followed by a double master's program in Simulation Technology at the University of Stuttgart and Industrial and Applied Mathematics at TU Eindhoven. After his studies he briefly held a position as Junior Researcher at DIFFER in Eindhoven, researching structure-preserving neural-network surrogate models for fusion simulation.
Emil Løvbak, KU Leuven
Emil obtained a BSc degree in Computer Science and Electrical Engineering and an MSc degree in Mathematical Engineering from KU Leuven. After four years as a PhD Fellow of the Research Foundation Flanders, he is currently a researcher in the NUMA group at KU Leuven. His research areas cover multilevel Monte Carlo methods, stochastic optimization, and kinetic equations.

Abstracts Spring Meeting SCS 2023

Svetlana Dubinkina,
VU Amsterdam

Projected ensemble data assimilation
Data assimilation is broadly used in atmosphere and ocean science to correct model error by periodically incorporating information from measurements (e.g., satellites) into the mathematical model. Both linear and nonlinear data-assimilation methods propagate an ensemble of multiple solutions (using different initial conditions with the same numerical model) to approximate the evolution of the probability distribution function (PDF) of plausible states. Linear data assimilation assumes the PDF is Gaussian, while nonlinear data assimilation makes no assumptions about the PDF. However, the existing nonlinear data-assimilation methods are not used in high-dimensional models, as they require a computationally unfeasible ensemble size due to the curse of dimensionality: an ensemble of small size is then unable to reduce the error of the estimate. A typical remedy for the curse of dimensionality is distance-based localization, which reduces the model-state dimension by taking into account only a few numerical cells of the model state near each observation. Even though distance-based localization reduces the error substantially for both linear data-assimilation methods such as the ensemble Kalman filter and nonlinear data-assimilation methods such as particle filtering, linear methods still considerably outperform nonlinear methods in linear and quasi-linear regimes. We propose a further dimension reduction based on projection. We analyze the proposed projected ensemble Kalman filter and projected particle filter in terms of error propagation. The numerical results show a considerable error decrease when used with small ensemble sizes.
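For readers unfamiliar with the ensemble framework these filters build on, a minimal sketch of a stochastic ensemble Kalman filter analysis step is given below. The dimensions, observation operator, and deliberately biased prior are illustrative assumptions; the projection step proposed in the talk is not shown.

```python
import numpy as np

rng = np.random.default_rng(42)

n, m, N = 8, 4, 50            # state dim, obs dim, ensemble size
H = np.eye(m, n)              # observe the first m state components
R = 0.05 * np.eye(m)          # observation-error covariance

truth = rng.standard_normal(n)
y = H @ truth + rng.multivariate_normal(np.zeros(m), R)

# Forecast ensemble, deliberately biased away from the truth
# (columns are ensemble members)
E = (truth + 2.0)[:, None] + rng.standard_normal((n, N))

# Sample covariance of the forecast ensemble
A = E - E.mean(axis=1, keepdims=True)
P = A @ A.T / (N - 1)

# Kalman gain and perturbed-observation analysis step
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T
E_a = E + K @ (Y - H @ E)

prior_err = np.linalg.norm(E.mean(axis=1) - truth)
post_err = np.linalg.norm(E_a.mean(axis=1) - truth)
```

The analysis pulls the ensemble mean toward the observation in the observed components; with N much smaller than n, the sample covariance degrades, which is what localization and projection address.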

Wim Vanroose,
University of Antwerp

Krylov-Simplex, Residual Subspace QPAS and other subspace methods for inverse problems and constrained optimization
Many large-scale inverse problems in data science are formulated as an optimization problem with an objective that is a combination of the 2-norm, max-norm and 1-norm. Examples are Tikhonov regularisation, Lasso or Elastic-net problems. The max-norm also appears in model-calibration of neural networks.
Krylov methods are linear algebra algorithms used to solve extremely large unconstrained optimization problems in industry and science, such as fluid flow or mechanical vibrations. They are easy to parallelise and scale on the largest supercomputers. They work by projecting the large problem onto a small subspace. By choosing the basis vectors in a special way, the projected problem becomes tridiagonal and can be solved by simple recurrences, e.g. conjugate gradients. Similarly, for a general matrix, the problem becomes a small least-squares problem with Hessenberg structure that can be solved by Givens rotations, leading to the GMRES algorithm.
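The short recurrence mentioned above is what makes conjugate gradients so compact. A textbook sketch, with an assumed random SPD test matrix (not the pipelined variants developed in Antwerp):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=1000):
    """Textbook CG for a symmetric positive definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # new direction, A-conjugate to the old ones
        rs = rs_new
    return x

# Small SPD test system
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30 * np.eye(30)   # shifted to guarantee positive definiteness
b = rng.standard_normal(30)
x = conjugate_gradient(A, b)
residual = np.linalg.norm(b - A @ x)
```

Each iteration needs only one matrix-vector product and a few inner products; it is those global inner products whose communication the pipelined variants hide.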
In this talk we discuss what happens when we project inverse problems that combine 1-, 2-, or infinity-norms onto a subspace of specially chosen basis vectors. The resulting projected problems are small linear programming (LP) and quadratic programming (QP) problems that can be solved very rapidly using rank-one updates. We discuss the convergence theory, which shows many similarities to Krylov convergence theory.
We give examples from inverse problems. We also discuss how these techniques can be used to accelerate column generation, a well-known technique for solving large-scale planning problems.

Anne Eggels,
Sioux Technologies
Applications of mathematics to industrial problems
Mathematics, and especially mathematical modelling, is a very useful tool to improve the world around us. At Sioux Mathware, we combine scientific knowledge with a pragmatic focus on engineering and operations research.
One clear example of this is high-tech systems, which can always be improved to be faster, more accurate, more robust, and more autonomous. The complexity of these systems, together with a fast development cycle, makes computational physics a challenging topic. It creates understanding of physical processes and of the behavior of individual components in a larger system.
During this talk, I will show some examples of applications and give more details on a specific project.
Fang Fang,
TU Delft
A Novel Fourier-cosine method for risk quantification and allocation of credit portfolios
Credit risk quantification and allocation in the factor-copula model framework underlies various practical applications in the banking industry. The popular numerical method in the industry is Monte Carlo (MC) simulation, which not only takes a considerable amount of computational time for large portfolios, but also fails to return reliable results for risk allocation at a standard high quantile like 99.9%. We present a novel Fourier-cosine method, which serves both as a fast solver for portfolio-level risk quantification and as an accurate numerical method for risk allocation, filling a niche in the literature. The key insight is that, compared to directly estimating the portfolio loss distribution, it can be much more efficient to solve for the characteristic function (ch.f.) instead, after which the ch.f. can be inverted to recover the cumulative distribution function (CDF) semi-analytically via an extension of the popular Fourier-cosine (COS) method from the field of option pricing. We therefore name this method the COS method. As for the allocation of risk measures, we show that, via Bayes' law, the original problem can be transformed into the evaluation of a conditional CDF, which can again be solved following the same insight. A theoretical proof of the error convergence is also provided, which justifies the stability and accuracy of this method in recovering CDFs of discrete random variables in general. For real-sized portfolios, the calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation in the two-factor set-up. A Gaussian copula and a Gaussian-t hybrid copula are taken as examples to illustrate the flexibility of the method regarding copula choices; Value-at-Risk, Expected Shortfall (ES), and Euler allocation of ES are the risk metrics selected for testing.
The potential application scope is wide: Economic Capital for Banking Book, Default Risk Charge for Trading Book, valuation of credit derivatives, etc.
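The core inversion idea can be sketched in a few lines: recover a distribution on a truncated interval from its characteristic function by a cosine expansion. The toy example below recovers the density of a standard normal (whose ch.f. is known in closed form); the credit-portfolio ch.f. and the CDF/allocation machinery of the talk are not reproduced here.

```python
import numpy as np

def cos_density(phi, a, b, N, x):
    """COS expansion: approximate a density on [a, b] from its ch.f. phi."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    # Fourier-cosine coefficients; the k = 0 term is halved
    F = 2.0 / (b - a) * np.real(phi(u) * np.exp(-1j * u * a))
    F[0] *= 0.5
    return F @ np.cos(np.outer(u, x - a))

# Toy example: standard normal, phi(u) = exp(-u^2 / 2)
phi = lambda u: np.exp(-0.5 * u**2)
x = np.array([0.0, 1.0])
f = cos_density(phi, a=-10.0, b=10.0, N=128, x=x)
# f approximates the standard normal pdf at x, with spectral accuracy
```

Because the coefficients come directly from ch.f. evaluations, no transform of the loss distribution itself is ever needed, which is the source of the speed-up over simulation.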
Mariya Ishteva,
KU Leuven
Decoupling multivariate functions using tensors
While linear functions are well understood, defining nonlinear multivariate vector functions, reducing their complexity, and increasing their interpretability remain challenging.
We propose a decomposition of nonlinear functions [1], which can be viewed as a generalization of the singular value decomposition. In this decomposition, univariate nonlinear mappings replace the simpler scaling performed by the singular values. We discuss the computation of the decomposition, which is based on tensor techniques. We also mention applications in nonlinear system identification. Recent extensions of this decomposition allow for its wider applicability, e.g., for neural network compression.

[1] P. Dreesen, M. Ishteva, and J. Schoukens. Decoupling multivariate  polynomials using first-order information and tensor decompositions. SIMAX, 36:864--879, 2015.
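The structure of such a decoupled representation can be illustrated with a toy evaluation of f(x) = W g(V^T x), where the internal mapping g acts through univariate branches only, in analogy with the scaling step of the SVD. The dimensions and branch functions below are arbitrary illustrative choices, not taken from [1].

```python
import numpy as np

V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])          # 2 x 3 input mixing matrix
W = np.array([[1.0, 2.0, 0.5],
              [0.0, 1.0, 1.0]])          # 2 x 3 output mixing matrix
branches = [np.tanh, np.square, np.sin]  # univariate nonlinearities g_i

def f(x):
    """Decoupled form: each branch sees one scalar z_i = v_i^T x."""
    z = V.T @ x
    return W @ np.array([g(zi) for g, zi in zip(branches, z)])

x = np.array([0.5, -0.2])
y = f(x)
```

The interpretability gain is visible even in this toy: the full 2-to-2 nonlinear map is explained by three scalar curves plus two linear maps.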
Jonas Thies,
TU Delft
Scaling the Memory Wall for Sparse Iterative Solvers
Krylov subspace methods for solving sparse linear and eigenvalue problems are nowadays at the core of many simulations across disciplines. On High Performance Computers at any scale, the fact that their core operations (sparse matrix-vector and BLAS1 operations) need to be executed in sequence for optimal numerical behavior is a limiting factor.
We discuss how sparse matrix polynomials can be evaluated in a cache-efficient way on multi-core CPUs. We then demonstrate how the availability of such fast polynomial evaluations may influence the design and choice of iterative solvers and preconditioners in practical applications.
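The basic building block is evaluating a matrix polynomial times a vector using only sparse matrix-vector products. A plain Horner-style sketch in SciPy (without the cache-blocking optimizations that are the subject of the talk, and with an assumed 1D Laplacian test matrix):

```python
import numpy as np
import scipy.sparse as sp

def poly_matvec(coeffs, A, v):
    """Evaluate p(A) @ v by Horner's rule, p(t) = c0 + c1 t + ... + cd t^d.
    Uses only matrix-vector products; p(A) is never formed."""
    y = coeffs[-1] * v
    for c in reversed(coeffs[:-1]):
        y = A @ y + c * v
    return y

# 1D Laplacian as a sparse test matrix
n = 100
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
v = np.ones(n)
y = poly_matvec([1.0, -0.5, 0.25], A, v)   # p(t) = 1 - 0.5 t + 0.25 t^2

# Reference: the explicitly assembled polynomial (only feasible for small n)
P = 1.0 * sp.eye(n) - 0.5 * A + 0.25 * (A @ A)
```

In this naive form each matrix-vector product streams A from memory once; the cache-efficient kernels discussed in the talk fuse the d products so A is read far fewer times.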
Pascal den Boef,
TU Eindhoven

Stochastic Gradient Descent for Optimization of Large-Scale Dynamic Systems
Problems involving optimization of dynamic systems are encountered in areas such as model reduction, control design, and design optimization. Often, the considered dynamics are large-scale (e.g., Finite Element Method (FEM) models), while the number of optimization variables stays relatively small (e.g., compact controllers are required to achieve real-time performance). To render the large-scale problem numerically tractable, model reduction is applied and the optimization is then performed in a reduced space. However, the solution to the reduced problem does not generally match the solution to the original problem. To avoid this mismatch, inspiration can be drawn from work on optimization of large-scale static problems, which is intensively studied in the field of deep learning. There, the vast amount of data renders exact computation of gradients intractable. A successful solution is Stochastic Gradient Descent (SGD), which substitutes exact gradients with stochastic estimates obtained by evaluating the cost function on randomly sampled subsets of the data. The practical success of SGD is supported by theoretical convergence guarantees. In this work, an extension of SGD to large-scale dynamic optimization problems is proposed. Its main features are: 1) a novel stochastic algorithm to minimize the H2-norm of a large-scale dynamic system; and 2) probabilistic convergence guarantees to the solution of the large-scale optimization problem, without ever evaluating the exact gradient. The method is demonstrated on several numerical examples.
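The static analogue from deep learning referred to above, plain minibatch SGD on a least-squares loss, can be sketched as follows. The problem data, batch size, and step size are illustrative assumptions; this is not the H2-norm algorithm of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimize mean((X w - y)^2) over w
n_samples, n_features = 1000, 5
X = rng.standard_normal((n_samples, n_features))
w_true = rng.standard_normal(n_features)
y = X @ w_true

def loss(w):
    return np.mean((X @ w - y) ** 2)

w = np.zeros(n_features)
lr, batch = 0.05, 32
loss0 = loss(w)
for _ in range(500):
    idx = rng.integers(0, n_samples, size=batch)          # random minibatch
    grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / batch   # stochastic gradient
    w -= lr * grad
```

The full gradient is never computed; each step uses 32 of the 1000 samples, which is exactly the cost structure the dynamic extension mimics by sampling the system response instead of the data.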

Philipp Horn,
TU Eindhoven

Structure-Preserving Neural Networks for Hamiltonian Systems
When solving Hamiltonian systems using numerical integrators, preserving the symplectic structure is crucial. We analyze whether the same is true if neural networks (NNs) are used. To include the symplectic structure in the topology of the NNs, we formulate a generalized framework for two well-known NN topologies and discover a novel topology that outperforms all others. We find that symplectic NNs generalize better and give more accurate long-term predictions than physics-unaware NNs.
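The classical-integrator version of this observation is easy to reproduce: for a harmonic oscillator, symplectic Euler keeps the energy error bounded while explicit Euler drifts. This is a standard textbook experiment, independent of the NN topologies of the talk.

```python
import numpy as np

# Harmonic oscillator H(q, p) = (p^2 + q^2) / 2
def energy(q, p):
    return 0.5 * (p**2 + q**2)

dt, steps = 0.05, 2000
q_e = q_s = 1.0
p_e = p_s = 0.0

for _ in range(steps):
    # Explicit Euler: both updates use the old state (not symplectic)
    q_e, p_e = q_e + dt * p_e, p_e - dt * q_e
    # Symplectic Euler: the momentum update uses the new position
    q_s = q_s + dt * p_s
    p_s = p_s - dt * q_s

E0 = energy(1.0, 0.0)
drift_explicit = abs(energy(q_e, p_e) - E0)
drift_symplectic = abs(energy(q_s, p_s) - E0)
```

The one-term difference in the update rule is precisely the kind of structural constraint the symplectic NN topologies build into their layers.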

Emil Løvbak,
KU Leuven

Adjoint Monte Carlo particle methods with reversible random number generators
When solving optimization problems constrained by high-dimensional PDEs, Monte Carlo particle methods are often the only practical approach to simulate the PDE. Unfortunately, these methods introduce noise in the computed particle distributions and, as a consequence, evaluations of the objective function. Through an adjoint-based approach, we can compute the corresponding gradient down to machine precision through a similar Monte Carlo simulation. However, this approach requires retracing the particle trajectories from the constraint simulation, backwards in time, when computing the gradient. When storing these paths for large-scale simulations, one quickly runs into memory issues. In this talk, we solve these memory issues by regenerating particle trajectories backward in time. To do so, we reverse the pseudorandom number generator used to generate the paths in the constraint simulation. After describing our reversible approach, we demonstrate how it outperforms prior approaches on some concrete test-problems.