Research Areas
THE DEPARTMENT HAS ACTIVE RESEARCH IN THE FOLLOWING MAIN DOMAINS
FACULTY WITH RESEARCH IN THIS FIELD: J. Chipalkatti, A. Clay, S. Cooper, S. Kirkland, D. Krepski, T. Kucera, S. Sankaran, Yang Zhang, R. Craigen
Fermat’s last theorem says that the equation x^n + y^n = z^n has no solutions in positive integers x, y, z when n > 2; its proof was completed by Andrew Wiles more than 350 years after the problem first appeared. The significance of the proof, however, went far beyond settling this long-standing conjecture: it illustrated a profound connection between the arithmetic of certain geometric objects (elliptic curves) and objects from number theory (automorphic forms). Relations of this spirit are at the heart of many of the key questions and conjectures in modern number theory, including the Birch–Swinnerton-Dyer conjecture and its generalizations and the Langlands programme.
Shimura varieties are geometric objects that encode a wealth of number-theoretic information; they form a bridge between the arithmetic-geometric and automorphic worlds, and are therefore particularly useful in understanding the links between these two domains. Of particular interest in the department is a broad system of conjectures, at present still quite mysterious, relating arithmetic cycles on some Shimura varieties to the Fourier coefficients of automorphic forms.
With many intriguing problems and wide-ranging applications in areas such as coding theory, computer science, statistics, and physics, Commutative Algebra is a riveting field to work in. Of particular interest in Commutative Algebra is the study of algebraic structures via numerical invariants of objects living in geometric and combinatorial settings. The rich overlap between Commutative Algebra, Combinatorics, and Geometry empowers us to solve problems using different viewpoints.
A simple example of the intersection of Algebra and Geometry is the use of equations to describe lines in the plane. Deep geometric information is encoded in numerical invariants of point sets. For example, an invariant called the Hilbert function, which is a sequence of vector space dimensions, can be used to determine configuration data about the point set (such as detecting when some of the points must lie on a line).
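As a hedged toy illustration (not the department's research code), the Hilbert function of a finite point set can be computed numerically: its value in degree d is the rank of the matrix of monomials of degree at most d evaluated at the points, and it already distinguishes collinear from non-collinear triples in degree one.

```python
# Toy illustration: the Hilbert function of a finite point set in the plane,
# computed as the rank of the matrix of monomials evaluated at the points.
import numpy as np

def hilbert_function(points, max_deg=3):
    """HF(d) = dimension of the space of polynomial functions of degree <= d on the points."""
    values = []
    for d in range(max_deg + 1):
        # all monomials x^a * y^b with a + b <= d
        monomials = [(a, b) for a in range(d + 1) for b in range(d + 1 - a)]
        M = np.array([[x**a * y**b for (a, b) in monomials] for (x, y) in points], dtype=float)
        values.append(np.linalg.matrix_rank(M))
    return values

collinear   = [(0, 0), (1, 0), (2, 0)]
general_pos = [(0, 0), (1, 0), (0, 1)]
print(hilbert_function(collinear))    # [1, 2, 3, 3]
print(hilbert_function(general_pos))  # [1, 3, 3, 3]
```

The degree-one value 2 for the first triple betrays the linear relation satisfied by the collinear points, exactly the kind of configuration data the Hilbert function detects.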
Similarly, there is an overlap between Algebra and Graph Theory. A graph is a collection of nodes with edges connecting the nodes. We assign to each node a variable and to each edge a product of variables called a monomial, which leads to the study of monomial ideals. In this direction one uses properties of the graph, such as the chromatic number, to bound or describe algebraic information about the ideal.
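The passage from a graph to its edge ideal is easy to experiment with; the following short sketch (illustrative only, using sympy) lists the squarefree monomial generators of the edge ideal of a 5-cycle.

```python
# Illustrative sketch: the edge ideal of a graph has one squarefree monomial
# generator x_i * x_j for every edge {i, j}.
import sympy as sp

edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]      # a 5-cycle
x = {i: sp.Symbol(f"x{i}") for i in range(1, 6)}       # one variable per vertex
edge_ideal_gens = [x[i] * x[j] for (i, j) in edges]
print(edge_ideal_gens)                                 # [x1*x2, x2*x3, x3*x4, x4*x5, x1*x5]
```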
Computer Algebra ≠ Computer + Algebra
Computer algebra (also called symbolic computation) is a relatively recent research area in computer science and mathematics in which computation is symbolic rather than numeric. Although numerical computation provides efficient solutions to many practical questions, computer algebra addresses a different and arguably more difficult problem: providing exact solutions, or solutions parameterized by variables in the input. Another important component of computer algebra is the discovery and proof of mathematical theorems and formulas.
In the past few decades the field has developed very quickly and has been applied successfully to many areas, for example cryptography, control theory, physics, and signal processing.
Theorems like these can often be proved by computer algebra techniques! Sudoku puzzles can also be solved using computer algebra.
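As a small, hedged example of the symbolic viewpoint, a computer algebra system such as sympy returns exact answers for a polynomial system, for instance via a Gröbner basis computation; the specific system below is just a toy choice.

```python
# Minimal sketch of exact (symbolic) computation with a computer algebra system.
import sympy as sp

x, y = sp.symbols("x y")
system = [x**2 + y**2 - 1, x - y**2]            # a circle intersected with a parabola
G = sp.groebner(system, y, x, order="lex")      # eliminating y yields a polynomial in x alone
print(G)                                        # contains x**2 + x - 1
print(sp.solve(system, [x, y]))                 # exact roots involving sqrt(5), not decimals
```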
Because of its flexibility, utility and well-developed theory, linear algebra is a tool that is used ubiquitously throughout science and engineering. The theory of matrices all of whose entries are nonnegative is particularly rich, and can be approached from analytic, combinatorial and computational perspectives. Further, nonnegative matrices and their variants appear in a number of contexts including demographic models, electrical networks and stochastic processes. Thus, nonnegative matrix theory not only enjoys a wide variety of motivating problems, but also offers several avenues of entry for mathematical exploration. The Department has interest and expertise in developing nonnegative matrix theory in the analytical/combinatorial/computational directions, and in applying the results to problems in population dynamics, Markov chains, and spectral graph theory.
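The sketch below is a minimal, illustrative computation in this spirit (standard power iteration, not the Department's research code): it finds the stationary distribution of a small Markov chain, i.e. the normalized Perron eigenvector of a nonnegative row-stochastic matrix.

```python
# Toy illustration of nonnegative matrix theory: power iteration converges to the
# Perron (dominant) eigenvector of an irreducible nonnegative matrix.
import numpy as np

P = np.array([[0.9, 0.1, 0.0],     # row-stochastic transition matrix of a 3-state Markov chain
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

pi = np.ones(3) / 3                # start from the uniform distribution
for _ in range(200):
    pi = pi @ P                    # one step of the chain (left multiplication)
    pi /= pi.sum()                 # renormalize (harmless here, since P is stochastic)

print(pi)                          # stationary distribution
print(np.allclose(pi @ P, pi))     # True
```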
Group theory began with the study of symmetries, that is, rigid motions of geometric objects, and has today become a flourishing field of abstract mathematics that brings together tools from algebra, combinatorics, topology, and analysis (to name a few). In particular, one of the most exciting developments of the last few decades has been an approach to abstract algebraic questions that leans heavily on geometric intuition and topology, in what has now become known as geometric group theory. This relatively new field of pure mathematics studies groups via their actions on topological spaces and via the geometry of their Cayley graphs, and has proved essential in resolving several longstanding open questions in both algebra and topology (e.g. the Hanna Neumann conjecture, or the virtual first Betti number conjecture).
FACULTY WITH RESEARCH IN THIS FIELD: T. Kucera
The main research focus is in mathematical logic (model theory) and applications of model theory to module theory and ring theory. Of particular interest is the structure of injective modules and related objects, such as pure-injectives, and their connections to parts of ring theory, especially non-commutative localization.
The key concept is that of a pure-injective module, which has natural, well-motivated definitions both as an object of interest in mathematical logic and as an object of interest in algebra. Injective modules over (even one-sided) noetherian rings are natural examples of a special kind of pure-injective module, called “totally transcendental”. It is hoped that model-theoretic techniques for studying structural questions will lead to interesting and useful purely algebraic results.
Current research focuses on definable chain conditions (elementary socles and radicals) related to natural algebraic concepts, and on aspects of infinitary logic for modules.
FACULTY WITH RESEARCH IN THIS FIELD: L. Butler, R. Clouatre, C. Cowan, R. Martin, E. Schippers, Yong Zhang, N. Zorboska
Vector spaces of analytic functions, and the linear maps between them, are an important tool in several branches of Pure and Applied Mathematics. The Hardy Spaces of analytic functions in the complex unit disk, in particular, provide natural models for very general classes of linear operators, and many major themes in Operator Theory, Operator Algebra Theory, and Analytic Function Theory have been inspired by Hardy Space results.
Motivated by the utility of classical Hardy Space theory, we seek several variable extensions of Hardy Space results and apply them to the modern fields of Multi-variable Operator Theory (the study of linear maps from several copies of a Hilbert Space into itself), Several-variable Complex Function Theory, and Non-commutative Function Theory – a recent and exciting generalization of Complex Analysis and Function Theory to the setting of analytic functions in several non-commuting matrix variables.
When algebra meets topology, exciting things happen.
Studying a vector space equipped with a norm topology gives rise to the theory of Banach spaces. The topological tool here is tremendously profitable: with it one is no longer tied to finiteness restrictions. Many Banach spaces that arise naturally also carry a ring structure. Studying a ring with an appropriate norm topology leads to the theory of Banach algebras. In a Banach algebra the algebraic and the topological structures interplay miraculously, defining profound properties of the space. Early research on Banach algebras goes back to the 1930s, when J. von Neumann and F. J. Murray launched pioneering investigations of these objects. Since then the theory of Banach algebras has become a major field in modern functional analysis. The theory developed so far is filled with ingenious ideas and elegant proofs. Standing between analysis and algebra, it has had a deep influence on modern mathematics.
Harmonic analysis is an area dealing with functions on, and actions of, a locally compact topological group. There are many Banach algebras associated with a locally compact group; they reflect intrinsic structural properties of the group. Amenability for a locally compact group is about the existence of a mean (a sort of average operation on bounded functions) that is invariant under translation by the group elements. The theory was established by M. M. Day in 1949 in connection with the Banach-Tarski paradox, which states that there is a way to decompose a ball (or to cut an apple) into finitely many pieces that can be reassembled to obtain two balls (apples) identical to the original. B. E. Johnson discovered in 1972 that amenability of a locally compact group can be characterized in terms of a class of bounded linear mappings, called derivations, on the group convolution algebra. This initiated the amenability theory for Banach algebras, which has recently been extended further to various generalized notions of amenability for Banach algebras. Amenability of a group/semigroup can also determine the common fixed point property for the group/semigroup acting on a subset of a locally convex space. Fixed point theorems have important applications in many areas of Mathematics.
Complex analysis and geometric function theory studies classes of complex analytic functions. One aspect of geometric function theory is extremal problems, which are connected to some of the deepest themes in analysis: compactness or completeness of families of functions, the Dirichlet principle, variational methods and semigroups. These analytic problems are intimately connected to geometry. For example, one can investigate the geometric properties of extremal maps or the relation between analytic and geometric characterization of classes of analytic maps. Geometric function theory also relates to conformal metrics, hyperbolic geometry, Riemann surfaces and Teichmüller theory.
Chaos theory traces its origins to Poincaré’s groundbreaking discovery that the equations of motion of three massive bodies (e.g. the earth, moon and sun) are non-integrable. Most differential equations that a student may encounter are exactly solvable, or integrable, but these are the exception, not the rule. Mathematicians study integrable systems because, like precious gems, their rarity bestows value.
The theory of integrability in dynamical systems is loosely analogous to the theory of solvability of polynomial equations by radicals. The purely algebraic aspect is studied using the tools of differential algebra. However, there are also analytical and even topological aspects to the theory. Today, many fundamental questions remain unresolved. For example, it is not known if being algebraically non-integrable implies analytic non-integrability or the existence of chaos. In many cases, the answer is known, but no general theory exists. It is also not known if chaos can co-exist with analytic integrability, although it is known that chaos and smooth integrability do co-exist.
In addition to their intrinsic worth, integrable systems are useful starting places for understanding ‘near-integrable’ systems. With the tools of KAM theory, for example, it is possible to prove that certain models used in molecular dynamics exhibit paradoxical behaviour: when the temperature is increased, the model predicts that the material freezes!
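For a concrete feel of near-integrability, the following sketch iterates the Chirikov standard map, a textbook toy model that is not tied to the specific research described above; the perturbation strength K interpolates between integrable (K = 0) and strongly chaotic behaviour.

```python
# The Chirikov standard map, a classic toy model of a near-integrable system:
#   p_{n+1} = p_n + K*sin(theta_n),   theta_{n+1} = theta_n + p_{n+1}   (mod 2*pi)
import numpy as np

def orbit(theta0, p0, K, n=2000):
    theta, p = theta0, p0
    pts = []
    for _ in range(n):
        p = (p + K * np.sin(theta)) % (2 * np.pi)
        theta = (theta + p) % (2 * np.pi)
        pts.append((theta, p))
    return np.array(pts)

# K = 0 is integrable (p is conserved); K = 0.5 is near-integrable; K = 2.0 is strongly chaotic.
for K in (0.0, 0.5, 2.0):
    pts = orbit(0.1, 1.0, K)
    print(K, pts[:, 1].std())     # the spread of p over the orbit grows with K
```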
This research sits at the interface between complex analysis, functional analysis and operator algebras. The main interest is in the study of bounded linear operators acting on Hilbert space. These are ubiquitous in natural science, appearing as the basic objects of quantum mechanical systems. In the last twenty years or so, there has been a flurry of activity in multivariate operator theory, that is, the study of several operators at a time. At this level of generality however, not much can be said. Accordingly, concrete operators such as those acting on Hilbert spaces of analytic functions receive considerable attention. Our research centres around the building of bridges between these concrete operators and general abstract ones via the very powerful idea of dilation. This has prompted us to approach operator theory from two distinct angles: either function theoretic through the study of function spaces and their multiplier algebras, or operator algebraic through the consideration of various natural algebras generated by a given operator.
Operator theory and complex analysis are both parts of analysis, and their main goal is the exploration of properties of functions defined either on the complex plane or on infinite-dimensional spaces of functions. A linear map defined on a vector space is called an operator. The goal is mostly to establish connections between these two important kinds of mathematical objects: functions and operators. While functions have been part of mathematics and other sciences throughout history, the theory of operators is part of modern mathematics, historically motivated by its applications to modern physics. Operator theory is also closely related to other more recent, rapidly expanding areas of applied research such as control theory, quantum information and dynamical systems.
Our interests are mostly in two specific classes of operators: Toeplitz and Composition Operators. These operators are related to basic operations on functions, such as multiplying or composing with a fixed function. The properties of such an operator depend on the inducing function and on the space on which it acts, and are related to the specific requirements for the solutions of a variety of concrete problems in operator theory, complex analysis and mathematical physics.
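A minimal numerical sketch (illustrative only; the symbol and truncation sizes are arbitrary toy choices): finite truncations of the Toeplitz operator with a bounded symbol φ have operator norms increasing to sup|φ|, reflecting the fact that the norm of the full Toeplitz operator equals the sup-norm of its symbol.

```python
# Toy illustration: truncations of the Toeplitz operator with symbol
# phi(e^{it}) = 1 + cos(t) acting on the Hardy space of the disk.
import numpy as np
from scipy.linalg import toeplitz

# Fourier coefficients of phi = 1 + cos(t): c_0 = 1, c_1 = c_{-1} = 1/2.
def toeplitz_truncation(n):
    col = np.zeros(n); col[0] = 1.0; col[1] = 0.5      # c_0, c_1, c_2, ...
    row = np.zeros(n); row[0] = 1.0; row[1] = 0.5      # c_0, c_{-1}, c_{-2}, ...
    return toeplitz(col, row)

for n in (5, 20, 100):
    T = toeplitz_truncation(n)
    print(n, np.linalg.norm(T, 2))   # operator norms increase toward sup|phi| = 2
```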
Our research is in the area of elliptic partial differential equations (PDEs), particularly in semilinear PDEs and in questions related to regularity, existence and multiplicity of their solutions. A lot of the focus has been on the regularity of extremal solutions (stable solutions) and on the related topic of Liouville Theorems for nonlinear PDEs.
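As a hedged, one-dimensional toy version of these questions, the Gelfand problem -u'' = λe^u with zero boundary values has a stable minimal solution for small λ, which a simple finite-difference Newton iteration can compute; this is an illustration only, not research code or the regime of interest in the research above.

```python
# Toy sketch: the minimal solution of the 1D Gelfand problem  -u'' = lam * exp(u),
# u(0) = u(1) = 0, by finite differences and Newton's method (converges for small lam).
import numpy as np

def solve_gelfand(lam, N=200, iters=30):
    h = 1.0 / N
    x = np.linspace(0, 1, N + 1)
    # second-difference matrix for -u'' on the interior nodes
    A = (np.diag(2 * np.ones(N - 1)) - np.diag(np.ones(N - 2), 1)
         - np.diag(np.ones(N - 2), -1)) / h**2
    u = np.zeros(N - 1)                       # start from the trivial guess
    for _ in range(iters):
        F = A @ u - lam * np.exp(u)           # residual
        J = A - lam * np.diag(np.exp(u))      # Jacobian
        u -= np.linalg.solve(J, F)            # Newton step
    return x, np.concatenate(([0.0], u, [0.0]))

x, u = solve_gelfand(lam=1.0)
print(u.max())                                # maximum of the stable (minimal) solution
```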
FACULTY WITH RESEARCH IN THIS FIELD: K. Kopotun, A. Prymak, R. M. Slevinsky
Approximation Theory studies how one can approximate general, possibly complicated, functions/curves/surfaces, etc. by simpler and more easily calculated objects. For instance, the Weierstrass approximation theorem shows that any continuous function can be uniformly approximated by polynomials (which are infinitely smooth), while Weierstrass himself constructed an example of a nowhere differentiable continuous function. In modern approximation theory, a variety of tools, algorithms and methods are available, which are used in different areas of analysis (e.g., in harmonic analysis and Fourier analysis) and mathematics (e.g., foundations for numerical methods), and also have very practical applications such as image compression, signal processing, curve and surface fitting.
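A concrete, self-contained illustration of the Weierstrass theorem (a sketch with toy parameters): the Bernstein polynomials of a continuous function converge to it uniformly on [0, 1], even for a function with a corner.

```python
# Illustration of the Weierstrass approximation theorem via Bernstein polynomials:
#   B_n f(x) = sum_k f(k/n) * C(n, k) * x^k * (1 - x)^(n - k)  converges uniformly to f.
import numpy as np
from math import comb

def bernstein(f, n, x):
    k = np.arange(n + 1)
    binom = np.array([comb(n, int(j)) for j in k], dtype=float)
    basis = binom * x[:, None]**k * (1 - x[:, None])**(n - k)
    return basis @ f(k / n)

f = lambda t: np.abs(t - 0.5)                 # continuous but not differentiable at 1/2
x = np.linspace(0, 1, 1001)
for n in (10, 100, 1000):
    err = np.max(np.abs(bernstein(f, n, x) - f(x)))
    print(n, err)                             # the uniform error decreases as n grows
```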
Shape preserving approximation is a type of constrained approximation: we demand that the approximating tool preserve certain geometric properties of the function, such as sign, monotonicity, convexity, etc. Normally the element of best unconstrained approximation oscillates around the target function, but such oscillation may be undesirable in applications, so specialized methods need to be developed. Shape preserving requirements appear naturally in some problems of computer aided geometric design. Research focuses on how much of the accuracy of approximation must be sacrificed in order to fulfill shape constraints.
One of the main tasks of approximation theory is to investigate relations between the smoothness of the approximated function and the error of approximation. If a function is “smoother” (e.g. has more derivatives), one can usually achieve a better approximation rate. Conversely, the magnitude of the approximation error carries information about the smoothness of the function. To obtain such direct and converse results, one needs to properly measure the smoothness of the function, and derivatives are usually not sensitive enough for the task. One application of measures of smoothness is the analysis of the rate of convergence of numerical methods for integral and partial differential equations.
Students in approximation theory can find jobs in academia and industry. Many of our M.Sc. graduates continue their studies at the Ph.D. level in various prestigious universities both in Canada and abroad (University of Alberta, University of British Columbia, University of Montreal, Max Planck Institute for Mathematics in Bonn), and our Ph.D. graduates have been successful in finding jobs in academia or industry.
FACULTY WITH RESEARCH IN THIS FIELD: R. Craigen, M. Doob, D. Gunderson, K. Gunderson, S. Kirkland, M. Ferrari
Combinatorics is the study of finite or countably infinite discrete structures. Graph theory is a sub-discipline of combinatorics that concerns itself with the structure and properties of graphs – a graph is a (finite or countable) collection of objects, called vertices, together with 2-element subsets of those objects, called edges. The Department has expertise in combinatorial matrix theory, spectral graph theory, and Ramsey theory, and below is a quick sketch of the research done in those areas at the University of Manitoba.
Combinatorial matrix theory studies the special properties of matrices subject to combinatorial restrictions, such as requiring that the entries come from a certain set, or that pairs of rows be similar or dissimilar in specified ways. These objects model important things such as statistical experiments and codes for cryptological purposes or error-correction in the presence of noise, and are used to design schemes for optical masking, filtering, telephone conferencing, radar, GPS and quantum cryptography. Key questions concern conditions for their existence, methods for their construction, their constituent structures and properties, and their connections to other fields of mathematics. One of the most celebrated unsolved problems of modern combinatorics is the Hadamard Matrix Conjecture.
Expertise in the Department also includes work on Hadamard matrices, weighing matrices, orthogonal designs, finite projective planes, as well as entry-wise nonnegative matrices.
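For instance, the classical Sylvester construction, sketched below as a toy example, doubles the order of a Hadamard matrix; the defining property H Hᵀ = nI is easy to verify numerically.

```python
# Sylvester's construction: if H is a Hadamard matrix of order n, then
# [[H, H], [H, -H]] is a Hadamard matrix of order 2n.
import numpy as np

H = np.array([[1]])
for _ in range(4):                      # build orders 2, 4, 8, 16
    H = np.block([[H, H], [H, -H]])

n = H.shape[0]
print(n, np.array_equal(H @ H.T, n * np.eye(n, dtype=int)))   # 16 True
```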
Many questions in extremal combinatorics are of the form “how dense must a structure be before some property is guaranteed?” and “what do the largest structures without that property look like?”. In graph theory, one may ask for the densest graphs that avoid a particular kind of subgraph. In arithmetic combinatorics, one might ask how many numbers from 1 to n are required before some arithmetic property is satisfied.
In Ramsey theory, one is concerned with preservation of structure under partitioning or “colouring” of some small substructure (like an edge). The simplest cases in Ramsey theory follow from the pigeonhole principle. Extremal combinatorics and Ramsey theory are closely related; often a Ramsey-type question has an analogous formulation in extremal graph theory.
Tools used in both areas include graphs (and their generalizations), finite geometries, partial orders, topology, number theory, and the probabilistic method.
Random graphs are probability distributions on spaces of graphs and are studied to try to understand what sort of properties are typical or expected. The Erdős-Rényi random graphs are defined by taking a graph on some finite number of vertices with each possible edge included independently at random with some fixed probability. There are models of infinite random graphs like the family trees of Galton-Watson branching processes in which a random tree grows from a root with each vertex having a random number of child vertices independently according to a fixed distribution. Random graphs can also be defined by taking random subsets of a fixed (often large) graph.
Many properties of random graphs exhibit a threshold behaviour: very small changes in the value of some parameter lead to drastic changes in the likelihood of a graph having that property. Also of interest is how processes on random graphs evolve over time. How does randomly scattered information spread in a graph? If some random subset of vertices is active and vertices become newly activated exactly when the number of their activated neighbours is above some threshold, are all vertices eventually activated? How does a graph change when new edges are added to reinforce local connections? These questions are related to percolation and bootstrap percolation processes.
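A hedged simulation sketch of one such threshold (toy parameters, assuming networkx is available): the Erdős–Rényi graph G(n, p) switches from typically disconnected to typically connected as p passes roughly log(n)/n.

```python
# Sketch of a threshold phenomenon: G(n, p) is very likely connected once p
# exceeds roughly log(n)/n, and very likely disconnected below it.
import numpy as np
import networkx as nx

n, trials = 500, 200
for c in (0.5, 1.0, 2.0):
    p = c * np.log(n) / n
    hits = sum(nx.is_connected(nx.gnp_random_graph(n, p)) for _ in range(trials))
    print(c, hits / trials)   # the fraction of connected samples jumps sharply near c = 1
```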
Spectral graph theory is the study of graph properties in terms of the eigenvalues and eigenvectors of certain matrices that are associated with a graph. Such matrices include the adjacency matrix, the Laplacian matrix, the signless Laplacian matrix, and the normalized Laplacian matrix. Each encodes the graph information in a different way, and the eigen-properties of these matrices reflect structural properties of the graph such as connectivity and bipartiteness. Expertise in the Department in spectral graph theory includes work on all four of the types of matrices listed above, and includes applications to food webs, protein interaction networks, and quantum walks on graphs.
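The sketch below (illustrative only) checks two of these spectral fingerprints on a small graph: the multiplicity of the Laplacian eigenvalue 0 counts connected components, and a bipartite graph has an adjacency spectrum that is symmetric about 0.

```python
# Toy spectral graph theory: zero Laplacian eigenvalues count connected components,
# and a bipartite graph has an adjacency spectrum symmetric about 0.
import numpy as np
import networkx as nx

G = nx.path_graph(6)                                   # connected and bipartite
L = nx.laplacian_matrix(G).toarray().astype(float)
A = nx.adjacency_matrix(G).toarray().astype(float)

lap_eigs = np.sort(np.linalg.eigvalsh(L))
adj_eigs = np.sort(np.linalg.eigvalsh(A))
print(np.isclose(lap_eigs[0], 0), lap_eigs[1] > 0)     # exactly one zero eigenvalue: connected
print(np.allclose(adj_eigs, -adj_eigs[::-1]))          # symmetric spectrum: bipartite
```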
Signal Processing is a branch of Mathematics and Communication Engineering whose central task is to discretize, and later reconstruct a good approximation to, a given signal, e.g. a video or audio signal to be recorded for later playback. In many practical applications one can assume that the Fourier Transform of the signal does not contain arbitrarily large frequencies. For example, the human ear cannot sense pressure frequencies greater than about 22,000 Hz, and so one may assume that any audio signal has no frequencies greater than this value (called the bandlimit). Such signals or functions have remarkable reconstruction properties: the Shannon Sampling Formula perfectly reconstructs any such function from its values recorded at a sampling rate proportional to the bandlimit, that is, at sample points whose spacing is inversely proportional to the bandlimit – it “connects the dots” without error! One can apply Functional Analysis and (linear) Operator Theory to generalize Shannon Sampling Theory and construct spaces of functions which can be perfectly recovered from their values on non-equidistantly spaced points in time. Our research goal is then to develop more efficient Signal Processing methods and Shannon-type reconstruction formulas using these generalized bandlimited functions.
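A minimal sketch of the Shannon formula with toy parameters (the infinite sum is truncated, so the reconstruction here is only approximately exact):

```python
# Sketch of the Shannon sampling formula: a signal with bandlimit B (in Hz) is
# recovered from samples taken at spacing T = 1/(2B):
#   f(t) = sum_n f(n*T) * sinc((t - n*T) / T),   where numpy's sinc is sin(pi x)/(pi x).
import numpy as np

B = 3.0                                    # bandlimit in Hz
T = 1.0 / (2 * B)                          # Nyquist sampling interval
f = lambda t: np.cos(2 * np.pi * 2.0 * t) + 0.5 * np.sin(2 * np.pi * 1.0 * t)

n = np.arange(-2000, 2001)                 # truncated range of sample indices
t = np.linspace(-1, 1, 401)
recon = np.sum(f(n * T)[None, :] * np.sinc((t[:, None] - n * T) / T), axis=1)
print(np.max(np.abs(recon - f(t))))        # small, and shrinks as more samples are kept
```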
Today, it is possible to observe natural and man-made complex systems (e.g., protein and metabolic networks, smart grids) involving spatio-temporal interactions of many elements on multiple scales. A prominent example is provided by the brain and the possibility to simultaneously record the activity of neurons from multiple electrodes. A first step toward understanding such systems from data requires the use of complex data representations, such as graphs, for encoding their spatio-temporal behaviour. Accordingly, data-driven procedures, like prediction and change detection methods, need to be designed with the ability to process sequences of graphs. The first challenge encountered in processing graph sequences is the definition of a metric for comparing graphs. Depending on the particular type of graphs under consideration, defining a metric on the graph space can be challenging, and therefore one usually relies on graph matching algorithms.
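As a hedged illustration of one possible graph metric, the (exact) graph edit distance between two tiny graphs can be computed with networkx; exact computation is only feasible for very small examples, which is one reason approximate graph matching algorithms are used in practice.

```python
# Sketch of one possible graph metric: exact graph edit distance between two small graphs.
import networkx as nx

G1 = nx.cycle_graph(4)                   # a 4-cycle
G2 = nx.path_graph(4)                    # a path on 4 vertices
print(nx.graph_edit_distance(G1, G2))    # 1: deleting one edge of the cycle yields the path
```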
Artificial recurrent neural networks (RNNs) are non-autonomous dynamical systems driven by (time-varying) inputs that perform computations exploiting short-term memory mechanisms. RNNs have shown remarkable results in several applications, including natural language generation and various signal processing tasks, e.g., audio and video processing. However, training RNNs is hard as a consequence of the so-called “vanishing/exploding gradient problem”. Moreover, their high-dimensional, non-linear structure complicates interpretability of internal dynamics, which are characterized by complex, input-dependent spatio-temporal patterns. This poses constraints on the applicability of RNNs, which are usually treated as black boxes, thus preventing the extraction of scientific knowledge (novel scientific insights) from experimental data. Similar issues also affect other architectures, stressing the need to develop general methodologies for explaining the behaviour of machine learning methods when used for decision-making (e.g., credit and insurance risk assessment) and in scientifically relevant applications (e.g., bio-marker discovery for genetic diseases).
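A toy numpy sketch of the vanishing/exploding gradient phenomenon (a caricature of backpropagation through a linear recurrence, not an actual RNN training run):

```python
# Toy illustration: backpropagated gradients through the recurrence h_t = W h_{t-1}
# shrink or grow roughly like rho(W)^t over t steps, where rho is the spectral radius of W.
import numpy as np

rng = np.random.default_rng(0)
T, d = 50, 32
for scale in (0.8, 1.0, 1.2):
    W = scale * rng.standard_normal((d, d)) / np.sqrt(d)   # spectral radius roughly "scale"
    g = np.ones(d)                                          # gradient arriving at the last step
    for _ in range(T):
        g = W.T @ g                                         # one step of backpropagation through time
    print(scale, np.linalg.norm(g))   # tiny for scale < 1 (vanishing), huge for scale > 1 (exploding)
```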
FACULTY WITH RESEARCH IN THIS FIELD: L. Butler, J. Chipalkatti, A. Clay, D. Krepski, A. Prymak, E. Schippers
A circle in a plane is a simple example of an algebraic curve, because it has a polynomial equation in the variables x and y. In general, if we are given several such equations in a multidimensional space, then the solution set is called an algebraic variety. This is the basic object of study in Algebraic Geometry.
This is a very rich subject with connections to almost every other area of mathematics. On the pure side, it involves deep questions about the topology, intersection theory and the arithmetic of varieties. On the applied side, it is useful in cryptography, control theory, computer science and even statistics. It is a fascinating and inexhaustible field of study in which much remains to be learnt and discovered.
Invariant theory is a sub-area of Algebraic Geometry, where the algebraic variety admits a group of symmetries. (For example, the circle has an obvious rotational symmetry, but in general, more complicated groups can come into play.) This ties in with several intriguing questions about the corresponding rings of invariants which can be solved using computer algebra.
Low-dimensional topology is the study of phenomena that are specific to manifolds of dimension four or fewer. One approach to problems in this field is to translate, whenever possible, the available information into the language of groups. The most common method of doing this is via the “fundamental group” of a space, which encodes a remarkable amount of topological information about a space and its structure. There are also other ways of connecting group theory to the world of low-dimensional topology, for example via braid groups, covering spaces and certain topological groups.
In the broadest sense, the goal is then to become adept at translating algebraic information about the groups at hand into topological information about their corresponding spaces, and vice versa. In particular, one can study the actions of a group on spaces such as the circle, real line or a tree, and depending on the group these actions (or even the existence of such an action) can carry an incredible amount of information about the space. The exploration of these connections has been the driving force in recent years behind a reemergence of the study of orderable groups, and a new group-theoretic perspective on subtle questions relating to codimension one foliations.
A Riemann surface is a two-dimensional surface equipped with a complex structure (equivalently, a one-dimensional complex manifold); a moduli space of Riemann surfaces is a parametric family of Riemann surfaces.
Riemann surfaces appear naturally in geometry and topology, and play an important role for example in complex analysis, algebraic geometry, and mathematical physics.
There are interesting interactions between an (infinite-dimensional) moduli space called Teichmüller space, and a branch of mathematical physics called conformal field theory. Conformal field theory studies quantum or statistical systems which are invariant under local rescaling and rotations. In the last few decades it has been the focus of a great deal of interest by mathematicians because of its rich structure and deep connections to diverse fields. Mathematical and physical models of conformal field theory involve moduli spaces of Riemann surfaces.
Symplectic geometry is a branch of differential geometry that has its origins in the mathematical framework of classical mechanics (such as Newton’s laws of motion). The main objects of study are symplectic manifolds (generalizations of Euclidean space that allow for non-trivial curvature and topology), which play the role of classical phase space (the parameter space of position and momentum coordinates).
The notion of symmetry plays an important role in symplectic geometry, and there is great interest in understanding properties of symplectic manifolds that can detect and distinguish various kinds of symmetry. In the classical viewpoint, symmetry is manifested by an underlying group, whereby every symmetry is viewed as an invertible transformation and one can compose/multiply any two symmetries. A modern perspective on symmetry is through a generalization called a groupoid, where the composition/multiplication law is only defined for certain pairs of ‘compatible’ symmetries. Groupoid symmetries provide a robust framework in differential geometry that can handle many interesting ‘singular’ (i.e. non-manifold) spaces of interest and have permeated current research at the interface of geometry, topology and mathematical physics.
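As a hedged nod to the classical-mechanics origins mentioned above, the toy sketch below integrates the harmonic oscillator in phase space: a symplectic integrator (semi-implicit Euler) nearly conserves the energy, while the ordinary explicit Euler method does not.

```python
# Classical phase space illustration: the harmonic oscillator H(q, p) = (p**2 + q**2) / 2.
# Symplectic Euler nearly conserves H; explicit Euler spirals outward.
import numpy as np

def explicit_euler(q, p, dt, steps):
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q
    return q, p

def symplectic_euler(q, p, dt, steps):
    for _ in range(steps):
        p = p - dt * q          # update the momentum first...
        q = q + dt * p          # ...then the position, using the new momentum
    return q, p

H = lambda q, p: 0.5 * (q**2 + p**2)
q0, p0, dt, steps = 1.0, 0.0, 0.01, 100_000
print(H(*explicit_euler(q0, p0, dt, steps)))    # energy drifts far above 0.5
print(H(*symplectic_euler(q0, p0, dt, steps)))  # stays close to 0.5
```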
FACULTY WITH RESEARCH IN THIS FIELD: J. Arino, K. L. Liao, S. Portet, M. Ferrari
Mathematical Biology is a multi-disciplinary research program lying at the interface between mathematics and biology. It entails the development and rigorous analysis of mathematical models for gaining qualitative and quantitative insight into biological phenomena and processes in areas such as cellular biology, ecology, epidemiology, evolution, immunology, and neurophysiology.
Part of the mathematical biology research program at the University of Manitoba focuses on the design and analysis of models for the spread and control of emerging and re-emerging diseases of public health importance. Some work concerns childhood diseases, focusing on the mathematics of imperfect vaccines and understanding the distinct epidemiological signatures of different modes of vaccine failure. Infectious disease modelling involves a combination of theoretical analysis, numerical experimentation and modern statistical inference techniques. Another area of research concerns the worldwide spatio-temporal spread of infectious pathogens in relation to travel, with emphasis on providing advanced warning to public health officials regarding infectious disease risks. The objective of the epidemiology research program is to contribute to the development of effective public health policy for combating the spread of diseases.
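For readers unfamiliar with compartmental models, here is a minimal, hedged sketch of the classical SIR model (toy parameter values, not one of the group's research models):

```python
# Hedged sketch of a basic compartmental epidemic model (SIR):
#   S' = -beta*S*I,   I' = beta*S*I - gamma*I,   R' = gamma*I.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1                     # toy transmission and recovery rates (R0 = 3)

def sir(t, y):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

sol = solve_ivp(sir, (0, 200), [0.999, 0.001, 0.0])
S, I, R = sol.y
print(I.max())          # peak prevalence
print(R[-1])            # final epidemic size (fraction ever infected)
```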
Another active research program is in the area of cellular biology, and more specifically, cytoskeletal networks, which are protein networks within cells. The organization of a cytoskeletal network is the main determinant of its cellular function. Specific interests include models of the organization of networks and of the assembly of filaments composing those networks, to characterize the determinants of their structures and mechanical properties. The research involves a tight collaboration with experimentalists: mathematical conclusions and model responses are calibrated and validated by comparison to experimental data. The objective of the cytoskeletal research program is to contribute to the understanding of the structure and function of the cytoskeleton in cells.
In addition to the traditional outcomes available to mathematics students, mathematical biology students can find jobs in regional, national and international public health agencies, environmental agencies, pharmaceutical companies, etc. In particular, graduates of our mathematical biology program have been accepted into PhD programs at top universities and have won prestigious doctoral and postdoctoral fellowships; some hold tenure-track positions in universities around the world and others work in local public health agencies.
I have been using interdisciplinary approaches to solve questions in cellular signaling. I combine mathematical analysis and computation of differential equations (DDEs, ODEs, and PDEs) to investigate the dynamics in morphology, cancer immunotherapy, and G protein signaling in plant cells. I also work closely with biologists and wet-lab experimentalists to create mathematical models that can capture and predict real phenomena to help experimental design. In morphology, I investigate the mechanisms of Notch signaling in segmentation via interdisciplinary approaches, aiming to overcome technological limitations and discover the fundamental mechanisms of signal transmission and pattern formation. In cancer modeling, I focus on immunotherapy and construct PDE models to fit tumor growth data, provide hypotheses, and develop efficient therapeutic protocols. For G protein signaling, I perform experiments on plant cells and use fluorescence microscopy to capture the behaviour of proteins. Based on the data that I collect and analyze, I construct mathematical models using machine learning methods to discover the mechanism of G protein signaling, and I then design and perform experiments to verify my model predictions.
FACULTY WITH RESEARCH IN THIS FIELD: K. L. Liao, S. Lui, R. M. Slevinsky
The focus of research is on designing accurate and efficient numerical methods for linear and nonlinear partial differential equations (PDEs). Two main research themes are p and hp finite element methods and domain decomposition methods. We shall briefly discuss the latter, which are a class of parallel methods to solve PDEs. The idea is to divide the domain into overlapping or non-overlapping subdomains and solve the PDEs in each subdomain in parallel. A global solution is formed by gluing together the subdomain solutions. These are iterative methods which can be shown to converge optimally in some sense. When the subdomains overlap, the methods are known as Schwarz methods, while on non-overlapping subdomains, they are called substructuring methods. While the theory for linear PDEs is well known, the main purpose of this research is to show convergence of these methods for some classes of nonlinear PDEs. The nonlinear PDEs considered may involve fractional differential operators, or may be fully nonlinear.
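A hedged, one-dimensional toy version of the alternating Schwarz idea (illustrative only; real applications involve far more general PDEs and parallel implementations): two overlapping subdomain solves are repeated, each reusing the newest interface values from the other.

```python
# Toy sketch of the alternating Schwarz method for -u'' = f on (0, 1), u(0) = u(1) = 0,
# with two overlapping subdomains (0, 0.6) and (0.4, 1) on a shared finite-difference grid.
import numpy as np

N = 100
h = 1.0 / N
x = np.linspace(0, 1, N + 1)
f = np.pi**2 * np.sin(np.pi * x)            # exact solution is u(x) = sin(pi x)
u = np.zeros(N + 1)                         # global iterate, including boundary values

def subdomain_solve(u, a, b):
    """Solve -u'' = f on grid nodes a..b, using u[a] and u[b] as Dirichlet data."""
    m = b - a - 1                           # number of interior unknowns
    A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    rhs = f[a + 1:b].copy()
    rhs[0] += u[a] / h**2
    rhs[-1] += u[b] / h**2
    u[a + 1:b] = np.linalg.solve(A, rhs)

left, right = (0, 60), (40, 100)            # index ranges of the two overlapping subdomains
for k in range(10):
    subdomain_solve(u, *left)               # solve on the left, then on the right,
    subdomain_solve(u, *right)              # each time reusing the newest interface values
    print(k, np.max(np.abs(u - np.sin(np.pi * x))))   # decreases, then stalls at the discretization error
```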
Another direction involves the numerical solution of PDEs by spectral methods. When the solution is analytic, the error decays exponentially quickly. For time-dependent problems, most of the focus has been on low-order finite difference schemes for the time derivative and spectral schemes for the spatial derivatives. Our research examines space-time spectral methods which converge exponentially in both space and time.
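A small sketch of what spectral accuracy means in practice (a toy example using only numpy): the Chebyshev interpolant of an analytic function converges geometrically in the degree.

```python
# Sketch of spectral accuracy: Chebyshev interpolation of an analytic function
# converges exponentially fast in the polynomial degree.
import numpy as np

f = lambda x: np.exp(np.sin(2 * x))
xx = np.linspace(-1, 1, 2001)
for deg in (4, 8, 16, 32):
    k = np.arange(deg + 1)
    nodes = np.cos(np.pi * k / deg)                        # Chebyshev points on [-1, 1]
    coeffs = np.polynomial.chebyshev.chebfit(nodes, f(nodes), deg)
    err = np.max(np.abs(np.polynomial.chebyshev.chebval(xx, coeffs) - f(xx)))
    print(deg, err)                                        # the error drops exponentially with deg
```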
Singular integral equations arise in the reformulation of elliptic partial differential equations where data is defined on prescribed boundaries of reduced dimensionality. The reduction of dimensionality and transformation from an elliptic partial differential equation into a singular integral equation arises naturally from Green’s representation theorem.
Singular integral equations have a rich history in scattering problems for electromagnetics and seismic imaging, fracture mechanics, fluid dynamics, and beam physics. For applications including random matrix theory, asymptotics of orthogonal polynomials, and integrable systems, singular integral equations arise via reformulation as Riemann–Hilbert problems.
In this group, we develop a new class of fast, stable, and well-conditioned spectral methods for singular integral equations. Combining direct solvers with hierarchical solvers allows numerical simulations with domains consisting of thousands of disjoint boundaries with millions of degrees of freedom. Use of a hierarchical solver as a preconditioner in a parallel iterative solver will extend this to new problems involving millions of disjoint boundaries. To extend the spectral method to more complicated geometries, we develop fast and stable algorithms for transforming expansion coefficients in Chebyshev bases to more exotic polynomial bases. As the spectral method is extended to three-dimensional elliptic partial differential equations, new fast and stable algorithms for transforming expansion coefficients will be required, and new applications will also be explored, including cloaking, scattering from fractal antennae, and scattering in parabolically stratified media such as optical fibres.
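As a hedged miniature of the coefficient-transform problem (valid only for small degrees, and not representative of the fast, stable algorithms described above), one can convert a Chebyshev expansion to a Legendre expansion by passing through the monomial basis with numpy:

```python
# Sketch: converting a Chebyshev expansion to a Legendre expansion via the monomial basis.
# This naive route is unstable for high degrees, which is why fast, stable transforms matter.
import numpy as np
from numpy.polynomial import chebyshev as C, legendre as L

cheb_coeffs = np.array([0.0, 1.0, 0.5, 0.25])          # c0*T0 + c1*T1 + c2*T2 + c3*T3
leg_coeffs = L.poly2leg(C.cheb2poly(cheb_coeffs))      # the same polynomial in the Legendre basis

x = np.linspace(-1, 1, 5)
print(np.allclose(C.chebval(x, cheb_coeffs), L.legval(x, leg_coeffs)))   # True
```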