Werkgemeenschap Scientific Computing

Abstracts Woudschoten Conferentie 2008

Numeriek oplossen van hoog-dimensionale problemen (Numerical solution of high-dimensional problems)

Martin J. Mohlenkamp, Ohio University
Computing in high dimensions with sums of separable functions

Nearly every numerical analysis algorithm has computational complexity that scales exponentially in the underlying physical dimension, a phenomenon dubbed the Curse of Dimensionality. I will present a method to bypass this curse, based on representing functions of many variables as sums of separable functions. We will first consider what kinds of functions can be well represented in this way, and what these representations look like. Then we will consider what algorithms are needed to compute with functions in this representation.

Martin J. Mohlenkamp, Ohio University
Approximating the wavefunction of the multiparticle Schrödinger equation

The multiparticle Schrödinger equation is the basic governing equation in quantum mechanics. Its solution, called a wavefunction, is a function of many variables and is constrained to be antisymmetric under exchange of these variables. I will describe a Green's function iteration to construct the wavefunction, and our method to represent the wavefunction as a sum of separable functions. We will then go into selected details of the algorithm, such as the use of antisymmetric inner products and the incorporation of the potential operators.

Ronald Cools, Katholieke Universiteit Leuven
The approximation of multivariate integrals

A cubature formula is an approximation of a multivariate integral by a weighted sum of function values. Several criteria are used to construct such approximations. The best-known criterion is probably that of (algebraic) degree, indicating that the approximation is exact for polynomials up to that degree. The type of rules that receives most attention nowadays is lattice rules. In the 1970s and 1980s many cubature formulas were constructed for low-dimensional standard regions.
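A cubature formula of this kind can be made concrete with a small, assumed example (a sketch, not taken from the talk): the two-point product Gauss-Legendre rule on the square [-1, 1]^2, which is a cubature formula of algebraic degree 3.

```python
import itertools
import math

# Assumed illustration: a 2-d product Gauss-Legendre rule. With 2 points
# per axis it is a cubature formula of algebraic degree 3: it integrates
# every polynomial of degree <= 3 over [-1, 1]^2 exactly.

nodes = [-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0)]  # 1-d Gauss points
weights = [1.0, 1.0]                                   # 1-d Gauss weights

def cubature(f):
    """Weighted sum of function values approximating the integral of f over [-1,1]^2."""
    return sum(wx * wy * f(x, y)
               for (x, wx), (y, wy) in itertools.product(zip(nodes, weights), repeat=2))

# Degree check: the rule reproduces the exact integrals of low-degree monomials.
assert abs(cubature(lambda x, y: 1.0) - 4.0) < 1e-12        # area of the square
assert abs(cubature(lambda x, y: x**2) - 4.0 / 3.0) < 1e-12
assert abs(cubature(lambda x, y: x**2 * y)) < 1e-12
```

Product rules like this one also illustrate the cost problem: with 2 points per axis, d dimensions already require 2^d function evaluations.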
Several theories were developed for cubature formulas of algebraic degree. In practice both turned out to be very limited. In recent years some old methods were taken up again and, simply because computers became more powerful, new results were obtained. Even so, progress for two and three dimensions and standard regions such as the cube or simplex was rather small. We will sketch the fundamentally different approaches used to construct cubature formulas of algebraic degree, emphasizing their merits and limitations.

In recent years the focus of research on multivariate integration has moved to higher and higher dimensions. A few decades ago, Monte Carlo methods reigned there without competition. Recently the impact of quasi-Monte Carlo methods has increased. These methods are developed with a totally different quality criterion in mind, and for hypercubes only. Based on the name, many people still believe these are stochastic methods, some variant of Monte Carlo methods. Quasi-Monte Carlo methods are, however, fully deterministic methods, using points that are designed to be `better than random', aiming at faster convergence. Meanwhile quasi-Monte Carlo methods have shown that for some types of problems they are to be preferred. The fact that they are developed for hypercubes can be worked around: using suitable transformations, quasi-Monte Carlo methods can also be used for simplices and for the entire space. We will draw the attention of the audience to these recent trends, emphasizing a particular class of methods known as lattice rules.

Ronald Cools, Katholieke Universiteit Leuven
Lattice rules for multivariate integration

In this talk I will focus on lattice rules and study them from two perspectives. From the first perspective, they are integration rules exact for some space of trigonometric functions. This corresponds with the view in older texts, which say that lattice rules are for integrating periodic functions.
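As a small, assumed example of such a rule (a sketch, not from the talk): a rank-1 lattice rule with n points and generating vector z is the equal-weight sum of function values over the points {i z / n mod 1}, and it integrates suitable trigonometric polynomials exactly.

```python
import math

# Assumed illustration: a rank-1 lattice rule on [0,1]^d with n points and
# generating vector z uses the points x_i = (i * z / n) mod 1, i = 0..n-1,
# all with equal weight 1/n.

def lattice_points(n, z):
    d = len(z)
    return [tuple((i * z[j] / n) % 1.0 for j in range(d)) for i in range(n)]

def lattice_rule(f, n, z):
    return sum(f(x) for x in lattice_points(n, z)) / n

# 2-d example with the Fibonacci lattice n = 89, z = (1, 55), applied to a
# smooth periodic integrand whose exact integral over [0,1]^2 is 1.
def f(x):
    return (1.0 + math.sin(2.0 * math.pi * x[0])) * (1.0 + math.sin(2.0 * math.pi * x[1]))

approx = lattice_rule(f, 89, (1, 55))
assert abs(approx - 1.0) < 1e-6  # exact up to rounding for this trigonometric integrand
```

The points are fully deterministic; the design effort goes into choosing the generating vector z so that the rule kills as many low-frequency trigonometric modes as possible.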
If one wants to apply them to integrate non-periodic functions, one first needs a periodising transformation to make the integrand periodic. Construction of such rules is done for low dimensions only.

From the other perspective, lattice rules are just a set of low-discrepancy points. They are then constructed to minimise, e.g., the worst-case error in some reproducing kernel Hilbert space. Construction of lattice rules using this criterion can nowadays be done extremely fast, for hundreds and even thousands of dimensions.

Christoph Schwab, ETH Zurich
Sparse Adaptive Tensor FEM for Operator Equations with Stochastic Data [pdf]

Christoph Schwab, ETH Zurich
Convergence Rates of Stochastic Galerkin FEM for Elliptic SPDEs [pdf]

Bio-wiskunde (Bio-mathematics)

Spencer Sherwin, Imperial College London
Arteries and Algorithms: Reduced modelling of cardiovascular networks

Flow in the arterial network exerts numerous effects on the vessels by virtue of the stresses it imposes on them and the mass and heat it transports. The biological and mechanical interactions in the vessels involve complex multi-scale coupling between fluid dynamics, vascular mechanics and vascular biology. The largest scale of this system is that of the pulse wave mechanics, with wavelengths of the order of 5-10 m. Pulse waves are generated at the heart as blood is ejected into the compliant arteries. These waves are then propagated and reflected throughout the bifurcating network of arteries. The large wavelength of these pulses compared to the diameter of the vessels makes the system amenable to reduced modelling. In this presentation we will start by discussing the historical modelling of pulse waves in the cardiovascular system, dating back to the work of Euler in 1775. A series of subsequent mathematical developments, including computational modelling techniques, now allows for a more complete solution of the wave propagation in the larger arterial vessels.
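One common form of such a reduced one-dimensional model (a sketch under standard assumptions; not necessarily the exact formulation used in the talk) evolves the cross-sectional area A(x, t) and the flow rate Q(x, t) along each vessel:

```latex
\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0, \qquad
\frac{\partial Q}{\partial t}
  + \frac{\partial}{\partial x}\!\left( \alpha \frac{Q^2}{A} \right)
  + \frac{A}{\rho} \frac{\partial p}{\partial x}
  = - K_R \frac{Q}{A},
```

where the pressure p is linked to A through an algebraic wall law, \rho is the blood density, \alpha is a momentum-flux correction coefficient and K_R a friction parameter. The system is hyperbolic, which is what produces the propagating and reflecting pulse waves described above.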
However, analysing the wave dynamics in large bifurcating networks, where model parameters and boundary conditions are often uncertain, highlights the current challenges we face in applying this type of modelling to clinically relevant problems.

Spencer Sherwin, Imperial College London
Arteries and Algorithms: Fluid dynamics and mixing in arterial geometries

After introducing the current state of the art in modelling pulse wave propagation in the arterial system, in this presentation we shall discuss how mathematical and computational modelling can be applied to simulate the complex fluid dynamics and mixing that arise in regions of the arterial geometry, such as bifurcations, that are associated with the occurrence of arterial disease. Over the last decade, advances in medical imaging have permitted computational flow modelling to be applied in a variety of anatomically correct geometries. Whilst such analysis can generate "complex" flow features, it does not always provide much understanding of the fundamental features of the fluid mechanics under physiological conditions. Therefore we will instead use anatomical geometries to motivate a series of idealised models which encapsulate much of the pertinent fluid mechanics and to which we can apply analysis techniques.

Jean-Frederic Gerbeau, INRIA Paris-Rocquencourt
Fluid-structure interaction problems in the cardiovascular system

This talk will address various computational issues related to fluid-structure interaction problems in the cardiovascular system. We will focus on the interaction between the artery wall and the blood, and on the simulation of cardiac valves. We will in particular address the design of robust and efficient coupling algorithms. Significant progress has been made in this area in recent years, in particular owing to a better theoretical understanding of the underlying difficulties. In spite of this progress, many important issues remain open. We will address some of them.
We will also propose a framework to include general constraints, such as multibody contact or kinematic constraints, in fluid-structure simulations.

Jean-Frederic Gerbeau, INRIA Paris-Rocquencourt
Numerical simulation of the electrical activity of the heart

We present the basic material needed to model and compute realistic electrocardiograms with partial differential equations (models based on cellular automata will not be considered). We use the so-called bidomain equations to model the electrical activity of the heart and a Laplace equation for the torso. Various modelling assumptions will be discussed, for example the ionic activity of the cell membranes, the relevance of cell heterogeneity, the fibre orientation and the coupling conditions with the torso. Potential applications of these simulations will also be presented.

Luca Formaggia, Politecnico di Milano
The interplay of different models for the simulation of blood flow in the cardiovascular system

Blood flow in the human cardiovascular system is highly complex. Different numerical models have been introduced, differing in the level of detail they can capture, their computational cost and, of course, their range of applicability. In recent years several efforts have been made to couple these models, so as to be able to simulate large parts of the system, if not the whole of it, with the desired level of detail at acceptable computing cost. In this lecture we will give an overview of these techniques and present some results.

Luca Formaggia, Politecnico di Milano
Defective boundary conditions for the Navier-Stokes equations

An issue which arises when computing blood flow in an artery with a three-dimensional model is that on some boundary sections we often have at our disposal only averaged quantities (flow rate, mean pressure, etc.).
These have to be properly imposed as boundary data on the system of partial differential equations under consideration (typically the Navier-Stokes equations, possibly coupled with a model for the vessel wall dynamics). We will present some numerical techniques that have been developed to this end.
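One standard way of exploiting such an averaged datum (a sketch of the flow-rate case via a Lagrange multiplier, under assumptions; not necessarily among the specific techniques presented in the talk) is to impose on each boundary section \Gamma_i only the constraint

```latex
\int_{\Gamma_i} \mathbf{u} \cdot \mathbf{n} \, \mathrm{d}\gamma = Q_i ,
```

enforced weakly by augmenting the Navier-Stokes variational formulation with a multiplier term \lambda_i \int_{\Gamma_i} \mathbf{v} \cdot \mathbf{n} \, \mathrm{d}\gamma; the prescribed flow rate Q_i is then met without over-specifying a full velocity profile on the section.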