## Past Talks

Ryan Lukeman, "A Numerical Investigation of a Two-Layer Frontal Geostrophic Model of the Antarctic Circumpolar Current", Wednesday, August 24

Abstract: The numerical simulation of oceanic flow is a primary research tool for understanding the physical properties of the world ocean. These models range from complex, high-resolution models to simplified models in idealized domains. In the spirit of the latter, a two-layer frontal geostrophic model is discussed for a wind-driven circumpolar flow via an asymptotic reduction of the shallow-water equations. The model is implemented using the finite element method via the software package FEMLAB. The model is used to study the meridional balance, lower-layer outcropping, and parameter variation in the Antarctic Circumpolar Current, the dominant oceanic flow in the Southern Ocean. The effects of varying resolution and timestepping parameters are discussed. Experiments are performed in a number of domain and bottom topography regimes to examine the effects of the Drake Passage and a topographic ridge on the meridional balance and transport that prevails in the current. The results support a mechanism of balance by which momentum imparted by winds at the surface is transferred to the lower layer via eddies and dissipated by the ocean bottom.

Gabor Lukacs, "Introduction To Topological Groups", Monday, August 22

Abstract: We present some basic, classical results concerning topological groups, with focus on locally compact abelian and compact groups, and their duality theories. Our aim is to make the talk as self-contained as possible.

Michael Dowd, "Fitting Dynamic Models to Data", Tuesday, August 16

Abstract: In this talk I will discuss the estimation of the state and parameters of a dynamic system from time series observations. The dynamic system considered here is governed by a system of coupled nonlinear differential equations, numerically implemented as (stochastic) difference equations. The specific example considered is a stochastic ecological model for population dynamics which exhibits interesting dynamical behaviour, such as a Hopf bifurcation. I will illustrate how noisy and incomplete observations of the system state can be used for online estimation of this system within a statistical time series framework.
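As a minimal sketch of the kind of setup described above (not the talk's actual model: the stochastic logistic form, all parameter values, and the use of a bootstrap particle filter are illustrative assumptions), one can simulate a stochastic difference equation, observe it with noise, and estimate the state online:

```python
import math
import random

def simulate(T=50, r=0.5, K=10.0, q=0.05, obs_sd=0.5, x0=1.0, seed=1):
    """Stochastic logistic growth observed with additive Gaussian noise."""
    rng = random.Random(seed)
    xs, ys = [], []
    x = x0
    for _ in range(T):
        x = x + r * x * (1 - x / K) + rng.gauss(0, q)   # process noise
        xs.append(x)
        ys.append(x + rng.gauss(0, obs_sd))             # measurement noise
    return xs, ys

def bootstrap_filter(ys, n=500, r=0.5, K=10.0, q=0.05, obs_sd=0.5,
                     x0=1.0, seed=2):
    """Online state estimation with a basic bootstrap particle filter."""
    rng = random.Random(seed)
    particles = [x0] * n
    estimates = []
    for y in ys:
        # propagate each particle through the stochastic difference equation
        particles = [p + r * p * (1 - p / K) + rng.gauss(0, q)
                     for p in particles]
        # weight particles by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((y - p) / obs_sd) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # multinomial resampling
        particles = rng.choices(particles, weights=weights, k=n)
    return estimates
```

Running `simulate()` and then `bootstrap_filter` on the resulting observations recovers the underlying trajectory, which here settles near the carrying capacity K.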

C. C. A. Sastri, "Unobserved Outcomes and Unobserved Probability", Friday, July 15

Abstract: Suppose that an experiment with an unknown number of possible outcomes is performed and that these outcomes occur according to some random mechanism. Suppose that n independent trials are carried out and that N distinct outcomes have been observed. We attempt to answer the following questions: What is the probability that, on the next trial, an outcome not observed before occurs?  (This is called the problem of unobserved probability.) What is the total number of outcomes not observed? (Equivalently, what is the total number of outcomes of the experiment?) This second problem has a long history going back to Turing and is, apart from its mathematical interest, important  in many areas such as biology (species sampling), intelligence gathering, numismatics, and literary scholarship.

We'll give a brief survey of past work and also discuss recent joint work with Alberto Gandolfi in which a Bayes-like estimator for the number of unobserved outcomes is derived. Such an estimator has the advantage over the existing estimators -- due to Chao and Lee and others -- in that, modulo the fact that Turing's ansatz is used (it is used by everyone else as well), it is derived from first principles, without any ad hoc assumptions, and includes previous estimators as special cases.
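To make the unobserved-probability problem concrete (a standard sketch of Turing's ansatz, not the Gandolfi-Sastri estimator itself): the classical Turing estimate of the probability that the next trial produces a new outcome is N1/n, where N1 is the number of outcomes observed exactly once among the n trials.

```python
from collections import Counter

def unseen_probability(sample):
    """Turing's estimate of the unobserved probability: N1/n, where N1 is
    the number of outcomes seen exactly once in a sample of size n."""
    counts = Counter(sample)
    n = len(sample)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / n

# e.g. 6 trials with outcomes a, a, b, c, c, d: the singletons are b and d,
# so the estimated chance of a brand-new outcome on trial 7 is 2/6
```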

Geoff Cruttwell, "A Category Theory View of Products and Sums", Monday, July 11

Abstract: The idea of taking sums or products of sets has been well-known for quite some time.  Looking at these concepts from a category theory point of view, however, demonstrates an interesting relationship between them. Hopefully this talk will demonstrate an instance of why taking the categorical viewpoint can provide new insight into existing ideas.  No knowledge of category theory required.
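For readers meeting this for the first time, the standard categorical definition (not specific to the talk) characterizes the product by a universal property:

```latex
% Universal property of the product A x B with projections pi_1, pi_2:
% for every object X and maps f, g there is a unique mediating map.
\forall\, f : X \to A,\; g : X \to B,\quad
\exists!\, \langle f, g \rangle : X \to A \times B
\quad\text{such that}\quad
\pi_1 \circ \langle f, g \rangle = f,\qquad
\pi_2 \circ \langle f, g \rangle = g.
```

The sum (coproduct) is obtained by reversing every arrow in this definition, which is the relationship between the two concepts that the categorical viewpoint makes visible.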

Le Bao, "Model Based Clustering Among Codon Sites", Tuesday, June 28

Abstract: I will introduce a clustering method in phylogeny which can identify the class labels for site models. Bielawski, Hong, and I call it MBC (model-based clustering) because it is based on the likelihood. We also extend the existing codon models by different combinations of parameters and use likelihood ratio tests (LRTs) to select the best model for real data analysis. These methods are then applied to simulated and real data for comparison.

Paul Sheridan, "Constructing Confidence Regions for Evolutionary Tree Topologies", Friday, June 24

Abstract: I will talk about how to quantify uncertainty when estimating evolutionary tree topologies using generalized least squares. My intent is to explain the background for my master's thesis, so don't expect anything overly technical.

Caroline Adlam, "The Kepler Problem and Superintegrability", Friday, June 17

Abstract: The notion of a completely integrable Hamiltonian system introduced in the 19th century by Joseph Liouville (1809-1882) has seen many interesting developments since then. I will discuss a generalization of this concept and explain when a completely integrable Hamiltonian system is said to be superintegrable. The classical Kepler problem will be used as an illustrative example to explain these and other properties of Hamiltonian systems.

Huaichun Wang, "Quantifying Codon Usage Bias of Genes", Thursday, June 9

Abstract: The genetic code is redundant because there are 61 sense codons coding for 20 amino acids. Codons that encode the same amino acid are called synonymous codons. Eighteen of the 20 amino acids have synonymous codons. However, the choice of synonymous codons is not random, and it differs among genes within a genome and among genomes. I will present some statistical measures that quantify codon usage bias, ranging from purely mathematical ones, such as Shannon's entropy and the effective number of codons, to purely biological models, such as the codon adaptation index.
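A minimal sketch of the entropy-based side of this (the function and example data are illustrative, not taken from the talk): Shannon's entropy of an observed codon-usage distribution is maximal when synonymous codons are used uniformly and zero when a single codon dominates.

```python
import math
from collections import Counter

def codon_entropy(codons):
    """Shannon entropy (in bits) of the empirical codon-usage distribution."""
    counts = Counter(codons)
    n = len(codons)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Alanine has 4 synonymous codons (GCU, GCC, GCA, GCG): uniform usage gives
# the maximal entropy log2(4) = 2 bits; exclusive use of one codon gives 0.
```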

Steven Noble, "Newton Polygons and Irreducible Polynomials", Wednesday, June 1

Abstract: In this talk I will introduce Newton Polygons, discuss some of their properties, and show how they can be used to create a general Eisenstein criterion. The usual Eisenstein criterion says that an n-th degree polynomial f(x) with integer coefficients cannot be factored into lower-degree polynomials with integer coefficients if there exists a prime p such that p does not divide the coefficient of x^n, p divides all the other coefficients, and p^2 does not divide the constant term. This condition can be phrased in terms of Newton Polygons and even generalized to allow p to divide the constant term more than once.
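The classical criterion stated above is mechanical enough to check directly; here is a small illustrative sketch (the function name and examples are mine, not from the talk):

```python
def eisenstein(coeffs, p):
    """Check the classical Eisenstein criterion at the prime p for a
    polynomial given by its coefficients [a_0, a_1, ..., a_n]
    (constant term first, leading coefficient last)."""
    a0, an = coeffs[0], coeffs[-1]
    return (an % p != 0                                 # p does not divide a_n
            and all(a % p == 0 for a in coeffs[:-1])    # p divides a_0..a_{n-1}
            and a0 % (p * p) != 0)                      # p^2 does not divide a_0

# x^4 + 10x + 5 satisfies the criterion at p = 5, hence is irreducible over Q;
# x^2 + 2x + 4 fails at p = 2 because 4 is divisible by 2^2.
```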

Jihua Wu, "Some Problems about t Distribution, Dirichlet Distribution and F Distribution", Monday, May 30

Abstract: When the variance of a normal population is unknown, we usually use the t test to test the population mean. In fact, this method does not require that the population be normally distributed; even independence among the samples is unnecessary. The purpose of this paper is to find the range of applicability of the t test, i.e., the necessary and sufficient conditions under which a two-dimensional random variable yields a statistic possessing a t distribution. With a similar purpose, we also give the necessary and sufficient conditions for two further statistics, one having a Dirichlet distribution and the other an F distribution, respectively.
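For orientation, the classical construction that the talk's conditions generalize (a standard textbook statement, not the speaker's result) is:

```latex
% If Z ~ N(0,1) and V ~ chi^2_k are independent, then
T \;=\; \frac{Z}{\sqrt{V/k}} \;\sim\; t_k .
% For a normal sample, Z = \sqrt{n}(\bar X - \mu)/\sigma and
% V = (n-1)S^2/\sigma^2 are independent, which yields the usual
% one-sample t statistic.
```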

Pat Keast, "Integration over the Hypercube Using Lattice Methods", Thursday, May 26

Abstract: In low dimensions there are many options for performing numerical integration, mostly based on methods designed to be exact for polynomials. Reliable software packages exist which automatically choose sampling points adapted to the integrand. But when the dimension goes above 8 or 10, these methods become extremely computationally expensive. Traditionally, for higher dimensions, the method of choice has been some variant of Monte Carlo. In the past 30 years, however, there has been growing interest in what are called {\em quasi-Monte Carlo methods}. The talk will give an introduction to these methods, and describe more recent work on a particular class of these methods called Lattice Rules.
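A minimal sketch of a rank-1 lattice rule in two dimensions (the Fibonacci choice of sample size and generator is a standard textbook example, not necessarily the rules discussed in the talk):

```python
def lattice_integral(f, n, g):
    """Rank-1 lattice rule in 2D: average f over the n points
    ({i/n}, {i*g/n}), where {.} denotes the fractional part."""
    return sum(f((i / n) % 1.0, (i * g / n) % 1.0) for i in range(n)) / n

# Fibonacci lattice: n = F_k points with generator g = F_{k-1};
# here n = 610, g = 377.  The exact integral of x*y over the unit
# square is 1/4, and the rule gets close with only 610 samples.
approx = lattice_integral(lambda x, y: x * y, 610, 377)
```

The same formula extends to d dimensions with a generating vector (1, g_1, ..., g_{d-1}), which is what makes lattice rules attractive when the dimension is too high for polynomial-based methods.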

Robert Milson, "Algebraic Solutions of the Schrödinger Equation", Wednesday, May 18

Abstract: Mathematically, the key problem in classical quantum mechanics is the diagonalization of a Hermitian operator.  The difficulty is that the operators are, usually, second-order differential operators, with an infinite dimensional underlying state space.  Nonetheless, many important models admit a polynomial basis which reduces the operator to an upper triangular matrix, and thereby allow for an exact calculation of the spectrum.  We call such operators exactly solvable.

A recent generalization is the notion of a quasi-exactly solvable operator.  Here again, we can represent our operator as a matrix relative to a polynomial basis. However, now  the matrix is not upper triangular, but does possess a finite-dimensional invariant block.
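To spell out the mechanism behind exact solvability sketched above (a standard observation, phrased in my own notation): if the operator maps each polynomial to one of no higher degree, its matrix in the polynomial basis is upper triangular, and the spectrum sits on the diagonal.

```latex
% If T preserves the flag P_0 \subset P_1 \subset P_2 \subset \cdots
% of polynomial subspaces, i.e.
%   T p_n = \lambda_n p_n + (\text{terms of lower degree}),
% then the matrix of T in the basis \{p_n\} is upper triangular and
\mathrm{spec}(T) \;=\; \{\lambda_n\}_{n \ge 0},
% read off from the diagonal -- an exact calculation of the spectrum.
```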

Richard Wood, "Adjoint Functors", Wednesday, May 4

Abstract: Categories, functors, natural transformations --- almost every mathematician has heard of these terms and has some understanding of them, in so far as they pertain to the category of objects with which he or she works. Most mathematicians know that you need the first term to define the second, the second to define the third, and that `natural transformations' were sighted long before the others. For example, it was known that, for every vector space V, there is a `natural linear map' from V to its double dual V**. It's natural because you can describe it without mentioning, or even knowing, a basis for V. In fact it really doesn't have anything to do with vectors. (It has more to do with the natural function X--->PPX, where P denotes the power-set construction. But I digress.)

You further need natural transformations to define `adjoint functors' which are much more interesting than anything that appears earlier in the sequence. Unfortunately, most textbook appendices that purport to tell you everything you {\em really} need to know about CT run out of steam well before adjoints or make them look very messy, technical, unappealing and useless. In fact, adjoints provide the means to compute in categories. Almost all interesting constructions in mathematics are adjoint to some other functor, very often a trivial functor --- even when the construction in question is itself highly non-trivial. (If you have an interesting construction that doesn't appear to be adjoint to anything else it's sometimes an indication that the categories in question are not as artfully defined as they should be.)
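For concreteness, the standard definition and the classic example (textbook material, not the talk's 2-categorical formulation):

```latex
% F : C -> D is left adjoint to G : D -> C when there is a bijection
\mathrm{Hom}_{\mathcal D}(F X, Y) \;\cong\; \mathrm{Hom}_{\mathcal C}(X, G Y),
% natural in both X and Y.  Example: the free vector space functor F is
% left adjoint to the forgetful functor U, so a linear map F S -> V is
% the same thing as an arbitrary function S -> U V on the basis S --
% the "interesting" construction (free vector space) is adjoint to a
% trivial one (forgetting the linear structure).
```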

Finally, just as the notion of {\em isomorphism} becomes truly elegant in a category, so the notion of adjunction becomes truly elegant, and its relation to isomorphism is exposed, when the concept is explored in a 2-category. To make this as simple as possible will be the goal of the talk.

Jin Yue, "The Gauss-Bonnet Theorem", Monday, April 11

Abstract: In elementary geometry, for example in the Euclidean 2-space, the sum of the interior angles of a (geodesic) triangle is \pi. We can consider the analogous problem for other spaces, e.g., (geodesic) triangles in a 2-sphere or in a hyperbolic space of dimension 2: what happens there? For a surface in $R^3$, or more generally a Riemannian manifold, which is a natural generalization of a surface, a central issue is to understand its topological structure. We need invariants of the surface to extract information about its topology. What we know about a surface is mostly its first and second fundamental forms, from which one can form various curvatures. The Gauss curvature, which in fact depends only on the first fundamental form, is intrinsic -- a property preserved under isometries. The Gauss-Bonnet theorem then establishes a bridge between the Gauss curvature and the topology of the surface, and is one of the deepest and most beautiful results in differential geometry.

We will state the 2-dimensional Gauss-Bonnet theorem explicitly. I will first give the definition of the Gaussian curvature in a simple way, then present some interesting applications, e.g., the sum of the interior angles of a geodesic triangle in a 2-dimensional space form; the special case of the theorem for closed surfaces; and the Hadamard theorem, stating that any two closed geodesics on an orientable closed surface with positive Gauss curvature must intersect (a generalization of the fact that any two great circles on a sphere must intersect); etc. For these applications, I will give proofs if time permits.
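For reference, the theorem in question (in its standard form for a compact oriented surface) reads:

```latex
% Gauss-Bonnet for a compact oriented surface M with boundary:
\int_M K \, dA \;+\; \int_{\partial M} k_g \, ds \;=\; 2\pi \, \chi(M),
% where K is the Gauss curvature, k_g the geodesic curvature of the
% boundary, and chi(M) the Euler characteristic.  For a closed surface
% the boundary term vanishes, leaving \int_M K\,dA = 2\pi\chi(M):
% a purely topological quantity computed from curvature.
```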

I will then move on to one of its generalizations -- Chern's theorem. Chern's theorem is one of S. S. Chern's most important works, and is part of the work for which he received the Wolf Prize -- ``For his outstanding contributions to global differential geometry which influence all mathematics''. The Atiyah-Singer index theorem, which contains Chern's theorem as a special case, is another great theorem in mathematics. Beyond its important applications, the idea contained in the Gauss-Bonnet theorem has had a great influence on global Riemannian geometry: the interplay of curvature and topology is now a central issue in Riemannian geometry, which is closely related to topology, analysis, mathematical physics, algebraic geometry, etc.

We have the Gauss-Bonnet theorem to thank for this. Without it, classical differential geometry might not have turned to global problems so early, global Riemannian geometry could not have developed so fast, and mathematics would not look as nice as it does now. Yet the idea behind the Gauss-Bonnet theorem is surprisingly simple.

Jeffery Praught, "Hormonal Effects on Glucose Regulation", Friday, April 1

Abstract: In this talk we will discuss a mathematical model of diabetes. A dynamical-systems model of plasma glucose concentration, and its regulation by insulin and glucagon, is described as it pertains to type 1 and type 2 diabetes. The hyperglycemic case is seen to depend only on insulin concentration, while the hypoglycemic case requires consideration of both insulin and glucagon concentrations. The role of healthy alpha-cells in maintaining proper levels of glucose and the hormones is also highlighted.

Sigbjorn Hervik, "Symmetries and Lie Groups", Monday, March 21

Abstract: "I am certain, absolutely certain that...these theories will be recognized as fundamental at some point in the future."  Sophus Lie said these words more than one hundred years ago. Today the notions of  "Lie groups" and "Lie algebras" are in the vocabulary of most mathematicians and theoretical physicists; Lie's theories have become indispensable tools for understanding the physical laws of Nature.

In this seminar we will provide some examples of theories where Lie groups  play an important role and we will give an introduction to the concepts of Lie groups and Lie algebras.

Steven Noble, "p-adic Tools Involved in the ABC-Conjecture", Friday, March 11

Abstract: The intention of this talk is to introduce the uninitiated to p-adic numbers, the ABC-conjecture, and how the two can work together.

The p-adic norm is an alternative way to measure the distance between rational numbers, by measuring divisibility. Just as the reals are the completion of the rationals under the standard norm, the p-adic numbers are their completion under the p-adic norm. As an alternative completion, the p-adic numbers are interesting in their own right. They also allow certain sums to converge that do not converge in the usual sense, which can be useful in proving results in number theory.
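A small illustrative sketch of the norm itself (standard definition; the function name is mine): |q|_p = p^(-v), where v is the net power of p dividing q.

```python
from fractions import Fraction

def padic_norm(q, p):
    """p-adic norm |q|_p = p^(-v), where v is the net power of the
    prime p in the rational q (with |0|_p = 0 by convention)."""
    q = Fraction(q)
    if q == 0:
        return Fraction(0)
    v = 0
    num, den = q.numerator, q.denominator
    while num % p == 0:     # powers of p in the numerator raise v
        num //= p
        v += 1
    while den % p == 0:     # powers of p in the denominator lower v
        den //= p
        v -= 1
    return Fraction(p) ** (-v)

# Divisibility inverts size: |12|_2 = 1/4 (12 = 4*3), while |1/9|_3 = 9 --
# highly divisible numbers are p-adically small, which is why series like
# sum of p^n converge p-adically even though they diverge in the reals.
```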

The ABC conjecture states that for any epsilon > 0 there exists a constant C such that for any pair of relatively prime numbers a, b with c = a + b, we have c <= C rad(abc)^(1+epsilon), where rad(n) is the product of the primes that divide n.
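The radical is easy to compute, which makes small examples of the conjecture's content concrete (this sketch and the example triple are my illustration, not from the talk):

```python
def rad(n):
    """Radical of n: the product of the distinct primes dividing n."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p              # record the prime once...
            while n % p == 0:   # ...then strip all its powers
                n //= p
        p += 1
    if n > 1:                   # leftover factor is itself prime
        r *= n
    return r

# For the coprime triple (a, b, c) = (1, 8, 9) with 1 + 8 = 9:
# rad(1 * 8 * 9) = rad(72) = 2 * 3 = 6 < 9 = c.  Triples where
# rad(abc) < c are exactly the ones that make the conjecture delicate.
```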

This is interesting because a range of modern number theory problems follow easily from this conjecture, including statements about p-adic numbers which in turn yield statements about the natural numbers. In this talk I will discuss one such set of statements.

David Iron, "Stability and Dynamics of Multi-spike Solutions", Friday, March 4

Abstract: The study of pattern formation in reaction-diffusion equations dates back to the work of Turing in the early 50's.  He proposed that the formation of chemical patterns could be one of the mechanisms responsible for the generation of localized structures during embryonic  development.  In the 1970's Gierer and Meinhardt performed numerical simulations on a reaction-diffusion system designed to explain some of the experimental results from the study of hydra development.

I will start with a brief history of the topic, then construct pattern solutions of the Gierer-Meinhardt equations, and finally examine the stability and dynamic interaction of these patterns.

S. Swaminathan, "Hilbert's Problems (continued)", Monday, February 28

Abstract: Although the title says 'continued', the talk will be independent of the first one. A recapitulation of the Introduction to the 1900 Paris Congress and a quick review of the first ten problems will be given in the first part of the talk. Then the talk will focus on Problems 11 to 23.

Josh MacArthur, "Solving Linear Systems of First Order PDEs", Friday, February 18

Abstract: As is well known, the method of characteristics, which traces back to Lagrange, is a standard and powerful technique for solving systems of linear homogeneous PDEs. As a result, it is a preferred choice for dealing with a wide range of PDEs arising in various areas of mathematics and physics.

However, as is the case with many mathematical techniques, the method of characteristics has its limitations. Recently, I have encountered these limitations first hand in my research and learned that one can employ a technique which can effectively alleviate the difficulties arising from them. I will discuss this technique and present illustrative examples.

Joey Latta, "Phantom Cosmology", Wednesday, February 16

Abstract: Recent astronomical data suggest that the expansion of the universe is actually accelerating. There have been many attempts to explain such a phenomenon. One of the most curious explanations is that of phantom energy, i.e., a type of energy with negative pressure. The purpose of this talk is to examine the standard cosmological model with such an energy, and to determine whether or not phantom energy “makes sense”, in that it preserves the successes of the standard model and explains the observed Perlmutter-Riess acceleration.

S. Swaminathan, "Hilbert's Problems (to be continued)", Friday, February 11

Abstract: At the 1900 International Congress of Mathematicians in Paris, David Hilbert delivered a talk on 'Mathematical Problems', suggesting 23 unsolved problems which influenced the development of mathematics in the 20th century. It is proposed to present the story of the problems and the solutions achieved.

Richard Hoshino, "Roots of Independence Polynomials", Wednesday, February 2

Abstract: In a graph G, a set of vertices S is independent if no two vertices in S are adjacent.  For any graph G, we define the independence polynomial as I(G, x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ..., where a_k is the number of independent sets in G of cardinality k. For example, if G is the 6-cycle C_6 (with vertices 1, 2, 3, 4, 5, 6 in that order), then I(C_6, x) = 1 + 6x + 9x^2 + 2x^3, since there is 1 independent set of size 0 (the empty set), 6 independent sets of size 1 (each of the six vertices), 9 independent sets of size 2 (the sets {13}, {14}, {15}, {24}, {25}, {26}, {35}, {36}, {46}), and 2 independent sets of size 3 (the sets {135} and {246}).
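The coefficients above can be checked by brute-force enumeration (an illustrative sketch, feasible only for small n; the closed-form Chebyshev approach in the talk is what handles general n):

```python
from itertools import combinations

def independence_poly_cycle(n):
    """Coefficients [a_0, a_1, ...] of I(C_n, x) for the n-cycle on
    vertices 0..n-1, by enumerating all independent sets."""
    def independent(s):
        s = set(s)
        # independent iff no vertex is followed by its cyclic successor
        return all((v + 1) % n not in s for v in s)
    coeffs, k = [], 0
    while True:
        a_k = sum(1 for s in combinations(range(n), k) if independent(s))
        if a_k == 0 and k > 0:
            break
        coeffs.append(a_k)
        k += 1
    return coeffs
```

Running `independence_poly_cycle(6)` reproduces the coefficients 1, 6, 9, 2 of the I(C_6, x) example above.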

In this talk, we will investigate the roots of I(C_n, x) with the hope of determining some interesting properties.  Using Maple, it appears that the root of largest magnitude is approximately -n^2/10 if n is even, and -n^2/40 if n is odd.  At first glance, there is no apparent reason why this should be true.  The analysis is especially difficult because there is no obvious way to determine a formula for the roots of I(C_n, x).

But if we examine the Chebyshev polynomial T_n(x), we can make a beautiful connection between T_n(x) and I(C_n, x), and we will develop a method to compute the roots of I(C_n, x) explicitly.  As a corollary, we will show that the root of largest magnitude is approximately -n^2/10 if n is even, and -n^2/40 if n is odd.  (Hint: Pi^2 is very close to 10.)

We will conclude the talk by mentioning some other results on independence polynomials, which form a chapter of my Ph.D thesis.

No knowledge of graph theory or polynomial theory will be assumed - the only prerequisite is a knowledge of L'Hopital's Rule.

Larissa Lorenz (University of Waterloo and University of Jena), "Short Distance Modifications in Inflation and The Quest for The Right Vacuum", Wednesday, December 1

Abstract: Inflation provides us with a mechanism for generating large scale structure: it traces the origin of galaxies and other structures back to small quantum fluctuations in the inflaton field. Imprints of these fluctuations are today observable as small anisotropies in the CMB radiation. Recent studies have shown that inflation might even be able to predict imprints of as yet unknown small scale physics in the CMB power spectrum. In this context, an interesting model for short distance physics is that of a finite minimum length uncertainty, which expresses an ultraviolet cutoff at some natural scale such as the Planck or string scale. In this talk, I will show how the mode equation for inflaton field modes is modified in the presence of this cutoff, and I will present its exact solutions. The choice of solution corresponds to the choice of the vacuum, and I will examine various criteria. These results should enable us to better address the issue of vacuum energy production in the expanding Universe.

Giovanni Rastelli (University of Turin, Italy), "Integration by separation of variables of the Hamilton-Jacobi equation: The first 100 years of the Levi-Civita criterion", Wednesday, November 17

Abstract: In 1904 Tullio Levi-Civita derived necessary and sufficient conditions for integrability of Hamiltonian systems by the method of separation of variables. I will discuss the developments of this result during the last one hundred years.

Jonathan M. Borwein, "Maximum Entropy Methods for Inverse Problems", Wednesday, September 29

Abstract: I shall discuss in "tutorial mode" the formalization of inverse problems  such as signal recovery and option pricing as (convex and non-convex) optimization problems over the infinite dimensional space of signals.

Maintained by: Andrew Hoefel and Rob Noble
Chase Building | Dalhousie University | Halifax, Nova Scotia, Canada B3H 3J5 |