2017 Fields Undergraduate Summer Research Program
July 4 to August 31, 2017
The Fields Undergraduate Summer Research Program (FUSRP) welcomes carefully selected undergraduate students from around the world for a rich mathematical research experience in July and August.
Each year, up to 25 students are selected from hundreds of applicants from mathematics-related disciplines to participate in the Program.
Students accepted for the Program will have most of their travel and on-site expenses covered by the Institute. Most Program funding supports student expenses, and most student placements are based at the Fields Institute.
This year, the Fields Directorate will also consider requests for placements at the supervisor's institution within Ontario (see the application form for the applicable criteria).
The Program's objective is to provide a high-quality and enriching mathematics research experience for undergraduates.
The project experience, quality mentorship, and team/independent work are intended to foster enthusiasm for continued research. Students work closely with each other and with their supervisor in a collaborative research team.
Call for Supervisor/Project Submissions. See here for details.
|December 31||Supervisor/Project Submission Deadline.|
|January 15||Call for Student Applications. See here for details. Selected projects/supervisors are posted on the Fields website.|
|February 28||Student Application Deadline.|
|March 15||Successful students are contacted and offered a placement in FUSRP.|
|March 15-30||Names of successful/accepting students are posted on the Fields website.|
|April-June||Students make appropriate travel/visa arrangements.|
|July 4||Program launch.|
|August 31||Program adjourns.|
|Week 1||July 4||Launch day: 9:30 am - 10:30 am, 10:30 am - 12:00 pm, and 12:00 pm - 1:00 pm.|
|Week 1||July 5-7||Students meet informally with their supervisor(s) and with other students in their group to work on their assigned research project.|
|Weeks 2 and 3||July 10-14 and 17-21||Students meet informally with their supervisor(s) and with other students in their group to work on their assigned research project. Site visit to the supervisor's host institution. Possible site visit to the University of Waterloo's Faculty of Mathematics.|
|Weeks 4-8||July 24-August 25||Students meet informally with their supervisor(s) and with other students in their group to work on their assigned research project. Group excursion (all students welcome) organized and sponsored by the Fields Institute.|
|Week 9||August 28||Mini-conference: The results of all summer student projects must be summarized and presented to other supervisor/student teams. Supervisors (or a qualified substitute) are required to make themselves available for the Mini-conference.|
|Week 9||August 28-31||Students prepare project and narrative reports about their experience in the Program.|
|Week 9||August 31||Student project and narrative reports due.|
|Week 9||September 1||Last day to check out of Woodsworth College. We hope you have a safe trip home!|
2017 Research Problems
Project 1: A Problem from Verifying Integer Programming Results
Supervisor: Kevin Cheung, Carleton University
Many optimization problems that involve indivisible quantities are formulated as mixed-integer linear programming (MILP) problems and solved using optimization software. Despite the care and discipline put into the design and development of solvers, incorrect results can be returned for a number of reasons, one of which is the use of finite-precision floating-point arithmetic. Even when exact rational arithmetic is employed, there can be programming bugs and oversights in the implementation of solution methods. Hence, a means of independently verifying MILP computational results is highly desirable. For linear programming, a dual optimal solution is sufficient for verifying optimality and therefore serves as a certificate of optimality. For MILP, many forms of certificate have been proposed, as there is currently no known polynomial-size certificate for MILP in general. A recent technical report proposed a certificate format designed with simplicity in mind and demonstrated its practicality. Verification with such a certificate can be done by sequentially processing a list of derived constraints rather than handling a branch-and-bound tree. It turns out that the ordering of the derived constraints can have a significant impact on the memory requirement of a verifier.
The proposed summer research project is to study the constraint ordering problem and develop efficient approximation methods and heuristics for obtaining good orderings. Test cases will be generated, and implementations of algorithmic ideas will be benchmarked against optimal orderings obtained through an MILP solver. The first two weeks will be for studying the required mathematical background and the various computing tools. Weeks three to seven will be for the development of ideas and implementations. Weeks eight and nine will be for writing up test results and documenting all the work done, including the required final reports.
Reference: Cheung, K.K.H., Gleixner, A., Steffy, D.E., Verifying integer programming results (2016). https://opus4.kobv.de/opus4-zib/frontdoor/index/index/docId/6104
Prerequisite knowledge: Graph theory, NP-hardness, integer linear programming, computer programming in C++ and/or Python in Linux or Unix environments.
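The effect of ordering on verifier memory can be illustrated with a toy model (the liveness rule and data layout below are our own illustrative assumptions, not the certificate format of the report): if each derived constraint must stay in memory from its derivation until its last reference, two valid orderings of the same derivation DAG can have different peak memory.

```python
def peak_live(order, uses):
    """Peak number of simultaneously live constraints when the derived
    constraints are verified in the given order.  uses[c] is the set of
    earlier constraints referenced when deriving c; a constraint stays in
    memory from its derivation until its last reference."""
    pos = {c: i for i, c in enumerate(order)}
    last = {c: i for i, c in enumerate(order)}
    for c in order:
        for r in uses[c]:
            last[r] = max(last[r], pos[c])
    peak, live = 0, set()
    for i, c in enumerate(order):
        live.add(c)
        peak = max(peak, len(live))
        live = {x for x in live if last[x] > i}
    return peak

# Two valid orderings of the same derivation structure, different peak memory.
uses = {'a': set(), 'b': set(), 'c': {'a'}, 'd': {'b'}}
print(peak_live(['a', 'b', 'c', 'd'], uses))  # 3
print(peak_live(['a', 'c', 'b', 'd'], uses))  # 2
```

Finding an ordering that minimizes this peak is exactly the kind of combinatorial problem the project targets.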
Project 2: Acoustic Properties of Laminated Structures
Supervisor: Peter Gibson, York University
Architectural acoustics is concerned with the way buildings interact with ambient sound, and is typically based on trial and error rather than mathematical physics. Depending on the circumstances, one may seek to dampen sound from outside sources, or to control echoes from inside sources, such as in a concert hall. The purpose of this project is to use established mathematical models of sound propagation through layered media to investigate ways that sound may be mitigated or enhanced by constructing walls, or cladding buildings, with a diversity of materials having differing acoustic properties.
The project will proceed in three stages, as follows. 1. Background mathematics. Students will learn the necessary background material concerning the one-dimensional wave equation (which governs plane wave propagation through layered media), the Fourier transform, and time-frequency analysis of acoustic signals. In addition, basic notions from mathematical physics such as acoustic impedance and sound pressure levels will be introduced. 2. Coding and research. Students will research the acoustic properties of various building materials and code up the relevant formulas from the first stage so as to be able to visualize the acoustic properties of laminated structures. There is some flexibility in terms of coding, but this will likely be carried out in Matlab, Mathematica or Python. Using their code, students will investigate the properties of various configurations, and experiment to see to what extent the desired acoustic effects can be achieved by judicious material choice. There will also be the option at this stage of analyzing the design problem mathematically, depending on student ability and background. 3. Compilation and presentation. Students will compile their results in report form and explain their ideas and results to one another by giving short formal talks. Following the talks, students will reconvene to critically assess their collective work and to synthesise their reassessment into a single short report.
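One of the "relevant formulas" from stage 1 is the standard transfer-matrix relation for plane waves at normal incidence. The sketch below (in Python, one of the suggested languages; the material parameters are rough illustrative values) computes the pressure transmission coefficient of a layered wall.

```python
import numpy as np

def transmission(freq, layers, Z_in, Z_out):
    """Complex pressure transmission coefficient of a layered wall at normal
    incidence, via the standard acoustic transfer-matrix method.
    layers: list of (impedance, sound_speed, thickness) per layer."""
    w = 2 * np.pi * freq
    M = np.eye(2, dtype=complex)
    for Z, c, d in layers:
        k = w / c                                   # wavenumber in the layer
        M = M @ np.array([[np.cos(k*d),      1j*Z*np.sin(k*d)],
                          [1j*np.sin(k*d)/Z, np.cos(k*d)]])
    denom = M[0, 0] + M[0, 1]/Z_out + Z_in*(M[1, 0] + M[1, 1]/Z_out)
    return 2.0 / denom

air = 415.0                                          # rough impedance of air
t_matched = transmission(500.0, [(air, 343.0, 0.1)], air, air)
t_wall = transmission(500.0, [(2.0e6, 3500.0, 0.1)], air, air)
print(round(abs(t_matched), 6))   # 1.0: an impedance-matched layer transmits everything
print(abs(t_wall) < 1e-2)         # True: a heavy, stiff layer transmits almost nothing
```

Sweeping `freq` over the audible range and plotting `abs(transmission(...))` is the kind of visualization stage 2 calls for.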
The key tasks and objectives are: background study and mastery of requisite mathematics; coding and online research; compilation and presentation; critical re-examination of previous work. Whatever the particular results, students should come away with: a basic understanding of the one-dimensional wave equation and the propagation of sound; familiarity with the Fourier transform; a feeling for the interaction between mathematical analysis and practical real-world problems; enhanced written and oral communication skills.
Students will be responsible for: handing in problem sets; preparing outlines, writing short pieces of code, and completing reports according to a given timeline; working in a group in a constructive and supportive manner.
Project 3: Application of Marked Branching Diffusion techniques to XVA Calculation
Supervisor: Andrew Green, Scotiabank
In the period following the financial crisis of 2007-9, valuation adjustments or XVAs have become a critical component of derivative valuation (see for example Green 2015, Gregory 2015). Monte Carlo techniques are widely used for XVA calculation because of the high dimensionality of the problem, and these are amongst the most computationally demanding calculations required of banks. Furthermore, these Monte Carlo simulations involve the calculation of conditional future valuations of derivative contracts. Frequently these derivatives may have optionality and may themselves require solution by Monte Carlo techniques, leading to a potential need for Monte Carlo-in-Monte Carlo. The computational cost of such an approach would be prohibitive, and hence XVA models have most commonly adopted American Monte Carlo techniques (Longstaff and Schwartz, 2001) in such cases (see for example Cesari et al. 2009, Green 2015). Recently, Labordere (2012) introduced marked branching diffusions as an alternative.
The objective of this research project is to apply branching techniques to a simple example of an XVA calculation. This will require the student to implement branching diffusion for the test case and then assess performance. The student will be responsible for implementation and numerical testing and hence should be familiar with a suitable prototyping environment such as Python (SciPy) or MATLAB. Appropriate training in XVA methodology will be provided.
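For context, the American Monte Carlo baseline mentioned above (Longstaff and Schwartz, 2001) can be sketched in a few lines for an American put; the parameters and the polynomial regression basis are illustrative choices, not a production setup.

```python
import numpy as np

def american_put_ls(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                    steps=50, paths=20000, seed=0):
    """Longstaff-Schwartz (American Monte Carlo) price of an American put."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    z = rng.standard_normal((paths, steps))
    # GBM paths under the risk-neutral measure
    S = S0 * np.exp(np.cumsum((r - 0.5*sigma**2)*dt + sigma*np.sqrt(dt)*z, axis=1))
    S = np.hstack([np.full((paths, 1), S0), S])
    cash = np.maximum(K - S[:, -1], 0.0)               # payoff at maturity
    for t in range(steps - 1, 0, -1):
        cash *= np.exp(-r*dt)                          # discount one step back
        itm = K - S[:, t] > 0                          # regress on ITM paths only
        if itm.any():
            A = np.vander(S[itm, t], 3)                # basis {x^2, x, 1}
            coef, *_ = np.linalg.lstsq(A, cash[itm], rcond=None)
            cont = A @ coef                            # estimated continuation value
            exercise = (K - S[itm, t]) > cont
            cash[itm] = np.where(exercise, K - S[itm, t], cash[itm])
    return float(np.exp(-r*dt) * cash.mean())

price = american_put_ls()
print(round(price, 2))
```

For these parameters the estimate lands near 6, above the European put value of roughly 5.6, reflecting the early-exercise premium. The regression step is the piece the branching-diffusion approach would replace.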
Project 4: Counting Bi-tangent Lines to Curves
Supervisor: Yoav Len, University of Waterloo
In this project, we will wield the exciting and emerging field of tropical geometry to pursue a classical problem in geometry. In 1834, the German mathematician Plücker showed that every smooth plane quartic curve has exactly 28 bi-tangent lines. Over the years, this problem has been shown to hide rich algebraic and combinatorial structures, and is related to del Pezzo surfaces, Weil representations, and theta characteristics. Through tropical geometry, the problem may be reformulated in graph-theoretic language, and approached via techniques from combinatorics and convex geometry.
The case of smooth tropical quartics has been studied recently by various authors, and is mostly solved. However, there is yet much to be discovered. The aforementioned authors relied heavily on the fact that the curves are smooth and in the plane. Moreover, many questions remain open when the degree of the curve is higher than four. Possible research directions, therefore, are counting bi-tangents to smooth tropical quintic curves, counting bi-tangents to non-smooth curves, lifting tropical to algebraic bi-tangents, and studying curves in space rather than in the plane.
The project will involve ideas from geometry (curves and tangent lines), algebra (the equation defining the geometric objects) and combinatorics (tropical geometry).
Due to its combinatorial nature, the project is suitable for students with varying backgrounds and levels of experience. Students with little experience will find the problem easily accessible (yet challenging!), and will learn basic concepts in algebraic geometry through concrete examples. Advanced students may be able to gain a broader perspective, and explore the interaction between combinatorial and algebro-geometric aspects of the problem.
Project 5: Local Dimensions of Self-similar Measures With Overlap
Supervisors: Kevin Hare and Kathryn Hare, University of Waterloo
One way to quantify the level of concentration of a singular measure is to determine its set of local dimensions. For self-similar measures that satisfy a separation condition, such as Cantor measures or Bernoulli convolutions, the set of local dimensions (or spectrum) is an interval. But when there is 'overlap', there can be an isolated point in the spectrum. In particular, this is true for the 3-fold convolution of the standard Cantor measure and for the Bernoulli convolution with ratio the golden mean. A general theory for self-similar measures whose overlap is sufficiently nicely structured, as is the case for the examples mentioned above, has recently been formulated. However, many questions remain open, such as investigating the nature of the set of local dimensions of these measures. The project would be co-supervised by Kathryn Hare and Kevin Hare (no relation).
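A concrete entry point: for the standard Cantor measure, which satisfies the separation condition, the local dimension equals log 2 / log 3 at every point of the support, and this can be checked numerically through its distribution function (the devil's staircase). The helper below and the choice of point and radius are illustrative.

```python
import math

def cantor_F(t, depth=80):
    """Distribution function F(t) = mu([0, t]) of the standard Cantor measure,
    computed from its self-similar structure; depth caps the recursion."""
    if depth == 0 or t <= 0.0:
        return 0.0
    if t >= 1.0:
        return 1.0
    if t < 1/3:
        return 0.5 * cantor_F(3*t, depth - 1)
    if t < 2/3:
        return 0.5                      # F is constant on the middle-third gap
    return 0.5 + 0.5 * cantor_F(3*t - 2, depth - 1)

x, r = 0.25, 3.0**-20                   # x = 1/4 lies in the Cantor set
mass = cantor_F(x + r) - cantor_F(x - r)
local_dim = math.log(mass) / math.log(2*r)
print(local_dim)                        # about 0.65 at this r; tends to log2/log3 = 0.6309...
```

Measures with overlap have no such clean digit structure, which is precisely where isolated points in the spectrum can appear.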
Project 6: Lower Bounds for Semidefinite Programming Relaxations to Combinatorial Optimization Problems
Supervisor: Konstantinos Georgiou, Ryerson University
Min Vertex Cover (VC) is a famous combinatorial optimization problem. The input is a graph, and the output should be the smallest possible collection of vertices touching all edges. Assuming the famous P ≠ NP conjecture, the problem cannot be solved optimally in polynomial time. More interestingly, the same complexity assumption implies that it is impossible to approximate the value of the optimal solution within a factor better than 1.36. At the same time, there are various algorithmic techniques that can approximate the optimal solution within a factor of 2. A fascinating open problem in the area is to close the gap between 1.36 and 2. In this project, we want to understand the power of Linear Programs, and a generalization of them known as Semidefinite Programs (SDPs), as tools to approximate the optimal solution to VC. A lot of research has been devoted to proving that many such Programs fail to find solutions to VC that are less than 2 times off from the optimal. Our goal will be to extend such results by showing that a large family of stronger Linear (and Semidefinite) Programs fail to perform any better. More specifically, the objective of this project will be the study of SDPs derived by the so-called Sum-of-Squares (or Lasserre) hierarchy, in particular when applied to known relaxations to VC and related problems.
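The gap between fractional and integral solutions is already visible on the smallest example. The sketch below checks by brute force that on the triangle the basic LP relaxation of VC attains value 1.5, while the best integral cover has size 2.

```python
from itertools import product

edges = [(0, 1), (1, 2), (0, 2)]                 # the triangle K3
covers = lambda x: all(x[u] + x[v] >= 1 for u, v in edges)

# Every vertex at 1/2 is LP-feasible with value 1.5; summing the three edge
# constraints shows no fractional solution can do better.
frac = (0.5, 0.5, 0.5)
assert covers(frac)
print(sum(frac))                                 # 1.5

# The best 0/1 cover, by brute force, has size 2: integrality gap 2/1.5 = 4/3.
best = min(sum(x) for x in product((0, 1), repeat=3) if covers(x))
print(best)                                      # 2
```

Lower-bound results of the kind the project pursues generalize this phenomenon: they exhibit families of instances on which even much stronger relaxations retain a gap close to 2.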
A number of techniques have evolved recently for determining the computational boundaries of the subject SDPs. The primary idea behind them relies on a probabilistic interpretation of solutions to SDPs related to their integrality. Since Sum-of-Squares SDPs treat integrality and feasibility conditions similarly, the key task of the project will be to investigate whether there is an alternative probabilistic interpretation based on solutions' feasibility. It is expected that literature review will span the first 2-3 weeks, while the rest will be devoted to the resolution of concrete research questions.
The student is expected to have a background in Linear Algebra, Combinatorics, Theory of Linear Programming, Algorithmic Design and Complexity Theory. First, the student will become familiar with the literature on Combinatorial Optimization Problems viewed through the lens of Convex Programming (Linear and Semidefinite Programming). Subsequently, the student will learn the basic theory of Semidefinite Programming, which is a generalization of Linear Programming. Then, the student will analyze the performance of contemporary SDPs associated with the Vertex Cover problem. In particular, the goal will be to solve (to near optimality) such programs for specific inputs. Actual instances are too big to be handled by computers, and as such, the analysis of the programs will be abstract.
The most anticipated outcome for the project will be to investigate whether techniques already known to work for linear programming can be adjusted to work for semidefinite programming. Progress in this project will give rise to new techniques for showing lower bounds for the subject algorithmic paradigm of SDPs.
Project 7: Pseudo-spectral and Collocation Methods for Solving the Schrodinger Equation
Supervisor: Tucker Carrington, Queen’s University
The student who works on this project will compare various pseudo-spectral type methods for solving the Schrödinger equation. The goal is the development of new mathematical/computational methods for understanding the motion of atoms in molecules and during reactions. The motion of atoms is easy to calculate and understand if they remain confined very close to an equilibrium geometry. Real chemistry, however, involves large amplitude motion and the making and breaking of bonds. To understand, at a detailed level, the motion of atoms, one must apply the laws of quantum mechanics and solve the Schrödinger equation by representing wavefunctions (functions from which one can calculate all observable properties) in terms of basis functions and using methods of linear algebra to compute observables.
The plan is to incorporate what is learned during the summer into the solution of multi-dimensional problems by using sparse-grid methods. However, in the summer we will start with 1d problems. To solve the Schrödinger equation one solves a matrix eigenvalue problem. One way to set up the matrix eigenvalue problem is to use collocation. In general, this gives a generalized eigenvalue problem which is hard to solve if the matrices are large. The number of collocation points is almost always equal to the number of basis functions. Another way to set up the matrix eigenvalue problem is to use a Galerkin approach, but to compute elements of the kinetic, potential, and overlap (Gram) matrices with quadrature. This allows one to use more points than basis functions. In many cases, it is possible to use exact kinetic and overlap matrices. Is using exact matrices a good idea? Does using the same quadrature for all three matrices have important advantages? Related questions were studied for linear equations by Cohen, Davenport, and Leviatan. Yet another way to set up the matrix eigenvalue problem is to use a pseudo-spectral approach; this is not the same as collocation if there are more points than basis functions. How important is the advantage of being able to take more points than basis functions? When the overlap matrix is not exact, the pseudo-spectral approach is not equivalent to the Galerkin + quadrature approach. These various methods will be compared and analysed. The methods will be implemented one by one, in the order they are presented in this paragraph. I estimate that implementation of each method will take about one week. Comparing and analysing should take about three weeks.
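The "Galerkin + quadrature with exact kinetic and overlap matrices" variant can be sketched for a 1d harmonic oscillator (hbar = m = 1; the sine basis, box size, and grid are illustrative choices): the kinetic matrix is exact and diagonal, while the potential matrix is built with a midpoint quadrature using many more points than basis functions.

```python
import numpy as np

L, nbas, npts = 20.0, 80, 800                      # box, basis size, quadrature points
x = (np.arange(npts) + 0.5) * (L / npts)           # midpoint quadrature grid
w = L / npts                                       # quadrature weight
n = np.arange(1, nbas + 1)
# particle-in-a-box sine basis evaluated at the grid points
B = np.sqrt(2.0 / L) * np.sin(np.outer(x, n) * np.pi / L)

T = np.diag(0.5 * (n * np.pi / L) ** 2)            # exact kinetic matrix (diagonal)
V = (B * (w * 0.5 * (x - L/2) ** 2)[:, None]).T @ B  # potential matrix by quadrature

E = np.linalg.eigvalsh(T + V)                      # overlap matrix is exactly identity
print(E[:3])   # approximately [0.5, 1.5, 2.5], the harmonic-oscillator levels
```

Swapping the quadrature-built potential matrix for a collocation or pseudo-spectral construction, with the same basis, is exactly the kind of comparison the project will carry out in 1d.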
Working on this project, the student will both use existing computer programs and write new ones. S/he will spend most of her/his time reading scientific papers, talking to me and others in the group in order to understand them, and writing and debugging computer programs. I shall spend time with the student teaching her or him the basic theory and numerical analysis required to complete the project. At the end of the summer, the student will submit a report summarizing the theoretical/computation methods he/she has used and the results obtained.
Project 8: Sensitivity of the Orr-Sommerfeld Equation to Changes in Base Flow
Supervisor: Pierre Sullivan, University of Toronto
Low Reynolds number airfoils (Re_c < 10^6) are of interest in many contemporary applications such as micro-air vehicles and wind turbines. These airfoils, however, are prone to boundary layer separation and can exhibit poor performance under these conditions. Following separation, the shear layer can either remain separated or transition to turbulence. Transition occurs due to the amplification of instabilities. The understanding of these instabilities and their growth mechanism is therefore very important for airfoil performance prediction and the application of flow control strategies. Hydrodynamic stability is typically investigated using linear stability analysis. In most local linear stability analyses, a parallel flow assumption is made. The solution of the Orr-Sommerfeld equation dictates the stability of a base flow, U. This has been used to analyze a number of flows and has provided fairly accurate predictions. However, most flows of engineering interest, such as the separated flow over an airfoil, are essentially non-parallel. The effect of three-dimensionality in the base flow and perturbations often must be taken into account in order to better characterize the stability behaviour of the flow field. To this end, more sophisticated analyses have been developed, such as global stability analysis, often termed TriGlobal analysis. This analysis requires the accurate computation of the entire flow field by either direct numerical simulation (DNS) or high-resolution large-eddy simulation (LES). The variation of the flow field in one direction, such as the spanwise direction on an airfoil, is often less pronounced than in the other two directions, leading to a somewhat simplified analysis termed BiGlobal stability analysis. This analysis, therefore, leads to eigenvectors in two dimensions and a normal mode perturbation in the spanwise direction. Both TriGlobal and BiGlobal analyses lead to very large eigenvalue problems and are very computationally expensive.
Furthermore, these methods cannot be used with experimentally-obtained base flow data, which does not describe the entire flow field, is prone to data scatter, and does not have adequate resolution. Non-normal matrices and operators are those that do not commute with their adjoint; that is, AA* ≠ A*A, where A* is the conjugate transpose. These operators have nonorthogonal eigenfunctions. The analysis of the sensitivity of these types of operators has been performed using matrix perturbation techniques and the concept of the epsilon-pseudospectrum, introduced by Trefethen. Linear stability analysis using the Orr-Sommerfeld equation remains an important tool in aerodynamics. The present proposal was motivated by a recent study comparing LES computations of a low-Reynolds number airfoil with hot-wire measurements conducted in a low-turbulence recirculating wind tunnel at the University of Toronto. Despite very good agreement between the computations and experiment, minor deviations in the base flow resulted in very large discrepancies in the disturbance growth rates predicted using the Orr-Sommerfeld equation and linear stability analysis. This proposal is an attempt to quantify the effects of base flow variation on the growth rates predicted by linear stability analysis. In particular, the quantification of reverse flow variation, inflection point location variation, and overall level of velocity data scatter is sought.
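The sensitivity at issue is easy to reproduce on a toy non-normal matrix (the entries are illustrative and unrelated to any Orr-Sommerfeld discretization): a perturbation of size 1e-6 moves the eigenvalues by roughly four orders of magnitude more.

```python
import numpy as np

# A strongly non-normal matrix: AA* != A*A because of the large off-diagonal term.
A = np.array([[1.0, 1000.0],
              [0.0,    1.1]])
E = np.array([[0.0, 0.0],
              [1e-6, 0.0]])           # tiny perturbation in the (2,1) entry

lam0 = np.sort(np.linalg.eigvals(A).real)
lam1 = np.sort(np.linalg.eigvals(A + E).real)
shift = np.abs(lam1 - lam0).max()
print(shift)   # about 1e-2: vastly larger than the 1e-6 perturbation
```

This is the same mechanism by which small base-flow deviations can produce large changes in predicted growth rates, and it is what the epsilon-pseudospectrum is designed to quantify.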
Project 9: Solving PDEs on Deconstructed Domains
Supervisor: Alec Jacobson, University of Toronto
Partial differential equations (PDEs) arise in many different disciplines, from physics, through signal processing, to computer graphics and computer vision. Often analytic solutions to PDEs are only known for simple problems over very simple geometric domains. In all other cases, numerical approximations must be computed using a discretization of the problem into a finite set of variables. In particular, the finite element method triangulates the geometric domain of the problem into small "elements" and then approximates the PDE over a finite set of simple functions defined over these elements. A great advantage of the finite element method is the ability to accommodate complex geometric domains with intricate boundaries. In theory, any non-degenerate domain can be triangulated, but in practice, triangulation is far from robust. Especially in three dimensions--where triangulation means filling a solid volume with non-overlapping tetrahedra--implementations of state-of-the-art tetrahedralization algorithms have notoriously high failure rates. Worse still, if the boundary changes, even slightly, the entire triangulation may need to be updated or replaced.
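As a minimal illustration of the finite element machinery described above, reduced to 1d where "triangulation" is just a partition into intervals (the problem and mesh are illustrative): piecewise-linear elements for -u'' = 1 on [0,1] with u(0) = u(1) = 0.

```python
import numpy as np

n = 10                       # number of elements
h = 1.0 / n
# assemble the stiffness matrix and load vector from per-element contributions
K = np.zeros((n + 1, n + 1))
f = np.zeros(n + 1)
for e in range(n):           # each element adds a 2x2 block for its two nodes
    K[e:e+2, e:e+2] += np.array([[1, -1], [-1, 1]]) / h
    f[e:e+2] += h / 2        # load for the constant right-hand side 1
# impose u(0) = u(1) = 0 by restricting to interior nodes
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], f[1:-1])

x = np.linspace(0, 1, n + 1)
exact = x * (1 - x) / 2
print(np.abs(u - exact).max())   # effectively zero: linear FEM is nodally exact here
```

The project's challenge is precisely what this sketch avoids: in 2d and 3d the assembly is the same in spirit, but producing the mesh is fragile, which motivates solving on overlapping, separately meshed subdomains instead.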
In this project, we will alleviate this problem by allowing the geometric domain to be defined as the union of overlapping subdomains. These subdomains necessarily afford simpler boundaries and easier triangulations. If the domain changes, only the triangulations of the affected subdomains need updating. The challenge, now, is to couple these domains together so that it is possible to solve the original PDE as if only considering the union of these shapes without sacrificing accuracy or introducing locking (where a solution is artificially constrained). Initial derivations in the continuous setting show that only a sparse set of coupling constraints are needed to exactly recover the correct solution.
After an introduction to the related literature in the first week of the summer, the student will begin by implementing coupling constraints in two dimensions and validating this through experiments for a variety of common linear PDEs. The middle weeks will be spent generalizing this idea to three dimensions and then experimenting with non-linear PDEs common to physically based computer animation. The final weeks will be split between experiments with other applications and preparing a report on the summer's work. A robust, accurate, and efficient solution to this problem will have immediate impact in computer graphics for character animation, where the character's body parts compose the dynamically changing domain and the PDEs in question are the governing equations of elastic motion. All disciplines relying on triangulation for solving PDEs should benefit; we will experiment with computational fluid dynamics and electrostatics.
The student will not only work on novel research with the intent to publish an academic paper but will also have the opportunity to become familiar with the computer graphics and geometry processing scientific literature. This project will gather topics in numerical methods, sparse linear algebra, partial differential equations and computational geometry. In addition, the student will be invited to join the Dynamic Graphics Project (dgp) at the University of Toronto, Department of Computer Science, where we host weekly seminars and group research discussions with graduate students.
Project 10: Unsupervised Learning on Enterprise Real Time Data
Supervisor: Gabriel Silberman, CerebriAI
Customer data collected by enterprises is growing rapidly and is being used to understand customer behavior for many purposes. Unsupervised learning studies how systems can learn to represent input patterns in a way that reflects the statistical structure of the overall collection of input patterns. This project uses unsupervised learning, where no target outputs are supplied with the inputs. The project will assess the value of social media data for predicting behavior in the context of unsupervised learning.
The project will use categories of social media content, such as Twitter, Facebook, etc. with enterprise-supplied data (transactions, CRM, correspondence, etc.) and explore the wrapper framework for unsupervised learning. We will identify the issues involved in developing a feature selection algorithm for unsupervised learning and make recommendations on how to tackle these issues. We will train a variety of machine learning models on different combinations of enterprise and external data, and compare the supervised and unsupervised solutions.
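A minimal instance of learning "with no target outputs" is clustering. The sketch below is a plain k-means (our own illustration, not Cerebri's toolkit), with synthetic blobs standing in for customer feature vectors; the model recovers the two groups purely from the statistical structure of the inputs.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm with deterministic farthest-point seeding."""
    centers = [X[0]]
    for _ in range(k - 1):                      # farthest-point initialization
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):                      # assign points, update centroids
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            centers[j] = X[labels == j].mean(0)
    return labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),     # synthetic "segment" A
               rng.normal(5, 0.3, (50, 2))])    # synthetic "segment" B
labels = kmeans(X, 2)
print(len(set(labels[:50])), len(set(labels[50:])))  # 1 1: each segment is one cluster
```

Feature selection for unsupervised learning, the project's focus, asks which input columns to feed such a model when no labels exist to score the choice.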
During the first week the students will become familiar with the experimentation toolkit Cerebri has created to build and evaluate machine learning models, including data cleansing tools, run time scripts, and the execution and monitoring environment, and become familiar with unsupervised frameworks. In the subsequent two weeks, they will work on diverse experiments, using existing models and various combinations of enterprise and social media data in the context of our regular agile research sprints. In weeks 4-5, the students will be asked to develop their own models and test them on the same data as in weeks 2-3. In week 6, they will prepare, under the supervision of Dr. Gabby Silberman and Cerebri research personnel, a presentation of their results and insights. These results will be shared with the R&D organization for their feedback. Weeks 6-7 will be spent designing the final set of experiments, including testing ideas on what other types of social media data may be useful to gather. During weeks 8-9, the students will run experiments using the feedback and new ideas from the previous sprint, and present results to the executive team. Data: Cerebri has access to enterprise data, as well as tools for creating synthetic datasets for testing models and algorithms. We also have access to social media data we have used for early experimentation. If the students uncover other useful sources of social media data, we will assess the feasibility and proper avenues for gathering the information.
Students will learn and experiment with state-of-the-art machine learning tools with both supervised and unsupervised learning techniques, models and algorithms. They will get a sense of the potential for insights extracted from social media data to complement enterprise information for predicting a customer's behavior. Also, they will assess the relative effectiveness of machine learning models and algorithms, as well as the comparative cost and predictive value of various types of social media data. If warranted, a report/paper will be written to report on the project results to the broader community.
Project 11: Using the Lattice Boltzmann Method to Simulate a Rapidly Spinning Baseball
Supervisors: Mary Pugh, University of Toronto and Tyler Wilson, The Fields Institute
The Lattice Boltzmann Method (LBM) is an increasingly popular method for simulating complex fluid flows with complicated geometries, such as flow through porous media. Its popularity in these types of flows is due to its simple approach to implementing boundary conditions. Unlike traditional approaches to Computational Fluid Dynamics, the LBM lends itself to straightforward parallelization, making it very suitable for high performance computing methods. This combination makes the LBM a perfect tool for studying an easily understood but challenging problem: the trajectory of a fast-moving, rapidly spinning baseball. While the problem of spheres moving through fluids is well studied, studies of pitched baseballs are more difficult to come by. When such studies appear, they are often experimental in nature [1,2] or involve idealizing the surface or the trajectory [3]. One major reason for this is that the large difference in scale between the deformities on the baseball surface (~5mm) and the pitch displacement (~20m) makes the boundary conditions difficult. The goal of this project is to use the Lattice Boltzmann Method to simulate the trajectory of a rapidly spinning baseball. Even though computational fluid dynamics is a notoriously challenging subject, this project in particular is an accessible introduction to the field due to the simplicity of the Lattice Boltzmann Method. The LBM is simple enough for almost any math or science undergraduate to understand, but robust enough to allow for deep exploration and simulation of complicated systems.
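The simplicity claim is concrete: one LBM time step is a local BGK collision followed by streaming of the distributions. A minimal D2Q9 sketch on a periodic box (lattice size, relaxation time, and initial condition are illustrative, with no baseball boundary yet) shows the structure and checks that mass is conserved.

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and BGK relaxation time
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8
nx = ny = 32

def equilibrium(rho, u):
    eu = np.einsum('qd,xyd->xyq', e, u)          # e_q . u at every lattice site
    usq = (u**2).sum(-1)[..., None]
    return w * rho[..., None] * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)

# initial condition: a small density bump at rest
ix, iy = np.arange(nx)[:, None], np.arange(ny)[None, :]
rho = 1.0 + 0.1 * np.exp(-((ix - 16)**2 + (iy - 16)**2) / 20.0)
f = equilibrium(rho, np.zeros((nx, ny, 2)))
mass0 = f.sum()

for _ in range(100):
    rho = f.sum(-1)
    u = np.einsum('xyq,qd->xyd', f, e) / rho[..., None]
    f += (equilibrium(rho, u) - f) / tau         # collision: purely local
    for q in range(9):                           # streaming: shift along e_q
        f[..., q] = np.roll(f[..., q], e[q], axis=(0, 1))

print(abs(f.sum() - mass0) < 1e-8)   # True: collision and streaming conserve mass
```

The project's difficulties enter exactly where this sketch stops: replacing the periodic box with a moving, seamed baseball surface, and spanning the millimetre-to-metre range of scales.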
The first week of the project will involve studying the physics of fluids and aerodynamics and acquiring benchmarks in the literature for comparison. The next two weeks will be spent learning the mathematical basis of the Lattice Boltzmann Method and performing some basic simulations using open source LBM code. Weeks four and five will be focused on simulating 3D flow around a sphere and a rotating sphere, respectively. Incorporating the movement of the baseball will be studied in week 6, while weeks 7 and 8 will be used to incorporate some aspects of High Performance Computing (possibly using SciNet/SharcNet). Finally, week 9 will allow us to present and discuss results and approaches to future challenges.
Students working on this project should have a basic background in computer programming, specifically in Python, Java, Matlab or C++; prior experience with C++ is a plus, and a student not familiar with C++ will learn the required C++ during the project. This project will introduce students to the field of Computational Fluid Dynamics, partial differential equations and high performance computing while still being accessible enough for skilled undergraduates. Students will leave the project with improved scientific computing skills, experience with data science tools and high performance computing. The scientific community will benefit from the improvements and extensions to the LBM, which will be valuable in areas well beyond baseball, such as aerospace and automotive design.
[1] Borg, John P. AJOP 82.10 (2014): 921-927.
[2] Nosaki, T., Journal of Visualization 19.2 (2016): 283-290.
[3] Jalilian, American Journal of Sports Science 2.5 (2014): 115-121.