News

Thursday, 21 March 2019

Would you like a taster of postgraduate study at Oxford? Six weeks this summer perhaps?

If you are interested in postgraduate study and curious about what it would be like to do research at Oxford, we are delighted to announce that Oxford Mathematics is taking part in the UNIQ+ pilot programme, a six-week summer school encouraging access to postgraduate study for students from under-represented groups at UK universities.

UNIQ+ is free to take part in and includes a £2,500 stipend, plus free accommodation in an Oxford college. It will give you the opportunity to experience postgraduate research at Oxford by carrying out a research project, and a chance to meet our staff and student community.

UNIQ+ will run from 1 July to 9 August 2019.

To see if you qualify for UNIQ+, and to apply, click here.

Wednesday, 20 March 2019

Are You Training A Horse?

Oxford Mathematician Vinayak Abrol talks about his and his colleagues' work on using mathematical tools to provide insights into deep learning.

Why it Matters!

With 4.5 billion photos uploaded to Facebook and Instagram per day, 400 hours of video uploaded to YouTube per minute, and 320GB recorded per second by the Large Hadron Collider, this is clearly the age of big data. Indeed, last year the amount of data existing worldwide is estimated to have reached 2.5 quintillion bytes, and while around 30% of this data is described as useful, only about 1% of it is actually analysed. Nevertheless, the boom in big data has brought success in many fields such as weather forecasting, image recognition and language translation.

So how do we deal with this big data challenge? To fundamentally understand real-world data, we need to look at the intersection of mathematics, signal processing and machine learning, and combine tools from these areas. One such emerging field of study is 'deep learning', under the broader category of 'artificial intelligence'. There have been remarkable advances in deep learning over the past decade, and it has found uses in many day-to-day applications: on smartphones, automatic face detection, recognition and tagging in photos, and speech recognition for voice search and setting reminders; and in homes, predictive analysis and resource planning using the Internet of Things and smart home sensors. This paradigm shift is due to the fact that in many tasks machines are efficient, can work continuously and perform better than humans. Recently, however, deep neural networks have been found to be vulnerable. For instance, they can be fooled by well-designed inputs, called adversarial examples, which is one of the major risks of applying deep neural networks in, say, safety-critical scenarios. In addition, most such systems are black boxes without any intuitive explanation, and with little or no innate knowledge of human psychology, meaning that the huge buzz about automation driven by artificial intelligence is yet to be justified. Hence, understanding how these networks are vulnerable to attacks is attracting great attention, for example in biometric access to banking systems.
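
As a concrete illustration of such an attack, here is a minimal sketch of the standard fast gradient sign method (a generic textbook attack, not a method from the work described here; the model and input below are toy placeholders).

```python
# A minimal sketch of the fast gradient sign method (FGSM) - a standard
# textbook adversarial attack, not the authors' method. The classifier
# and input image are toy placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
x = torch.rand(1, 1, 28, 28, requires_grad=True)             # stand-in image
y = torch.tensor([3])                                        # "true" label

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()                                  # gradient of loss w.r.t. the image

epsilon = 0.05                                   # perturbation budget
x_adv = x + epsilon * x.grad.sign()              # small step that increases the loss
x_adv = x_adv.clamp(0.0, 1.0).detach()           # keep it a valid image

# For a well-trained network, x and x_adv are often classified differently,
# even though the two images look identical to a human.
print(model(x_adv).argmax(dim=1))
```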

We try to present a different way of approaching this problem. We know deep networks work well in many applications, but they cannot abstractly reason and generalise about the world, or automate ordinary human activities. For instance, a robot that can pick up a bottle can pick up a cup only if it is retrained (not the case with humans). Hence, we need to understand the limits of such learning systems. Any other questions are meaningless if our resulting system appears to be solving the problem but actually is not; this is dubbed the 'Clever Hans effect'. So, instead, we are interested in: 1) Does the system actually address the problem? 2) What is the system actually learning? 3) How can we make it address the actual problem?

General Methodology

In Oxford Mathematics, this work is funded under the Oxford-Emirates Data Science Initiative, headed by Prof. Peter Grindrod. I am supervised by Prof. Jared Tanner, and within the Data Science Research Group we are working to advance our understanding of deep learning approaches. Although many recent studies have tried to explain and interpret deep networks, there is currently no unified coherent framework for understanding them. Our work aims to study mathematically why deep architectures work well, what they are learning, and whether they really address the problem they are built for. We aim to develop a theoretical understanding of deep architectures, which in general process data with a cascade of linear filters, non-linearities and contraction mappings, using existing, well-understood mathematical tools. In particular, we focus on the underlying invariants learned by the network, such as multiscale contractions and hierarchical structures.

Our earliest research explored the key concept of sparsity, i.e. the low complexity of high-dimensional data when represented in a suitable domain. In particular we are interested in the learning of representation systems (dictionaries) that provide compact (sparse) descriptions of high-dimensional data, from both a theoretical and an algorithmic point of view, and in applying these systems to data processing and analysis tasks. Another direction is to employ tools from random matrix theory (RMT) that allow us to compute an approximation of such learning systems under a set of simplifying assumptions. Our previous studies have revealed that the representations obtained at different intermediate stages of a deep network carry complementary information, and that sparsity plays a very important role in the overall learning objective. In fact, the so-called non-linearities and pooling operations, which seem complicated, can be explained via constrained matrix factorisation problems together with hierarchical, sparsity-aware signal processing. As for the underlying building blocks of such systems, convolutional networks (popular for image applications) can be analysed via convolutional matrix factorisation and wavelet analysis, and fully connected networks via discrete Wiener and Volterra series analysis. Overall, our main tools come from sparse approximation, RMT, geometric functional analysis, harmonic analysis and optimisation.
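
To give a flavour of what dictionary learning looks like in practice, here is a minimal sketch using scikit-learn on toy random data (an illustrative stand-in, not the group's own code or data): a dictionary is fitted so that each data point is then described by just a handful of its atoms.

```python
# A minimal sketch of sparse coding: learn a dictionary D so that each
# data point x is approximated by D @ z with z sparse. Toy data only.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))     # stand-in for e.g. image patches

dico = MiniBatchDictionaryLearning(
    n_components=100,                  # overcomplete: more atoms than dimensions
    alpha=1.0,                         # sparsity penalty during fitting
    transform_algorithm="omp",         # orthogonal matching pursuit for the codes
    transform_n_nonzero_coefs=5,       # at most 5 atoms per data point
)
codes = dico.fit(X).transform(X)       # sparse codes, shape (500, 100)

# Most coefficients are exactly zero: a compact description of the data.
print(np.mean(codes != 0))
```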

Wednesday, 20 March 2019

Automorphism groups of right-angled Artin groups

Oxford Mathematician Ric Wade explains how right-angled Artin groups, once neglected, are now central figures in low-dimensional topology.

"Right-angled Artin groups (colloquially known as RAAGs or 'rags') form a family of finitely presented discrete groups that generalise free-abelian groups (where all the generators commute) and free groups (where no generators commute). Every finite graph $\Gamma$ defines a finitely presented group $A_\Gamma$ generated by the vertex set, with the restriction that $vw=wv$ if two vertices $v$ and $w$ are connected by an edge (in other words, adjacent vertices commute). If the graph is complete (so that any two vertices are connected by an edge) then $A_\Gamma$ is a free-abelian group, and if the graph $\Gamma$ has no edges, then $A_\Gamma$ is a free group. Loops and multiple edges between vertices do not affect the group that we get using this construction, so in graph-theoretic language we can assume that $\Gamma$ is simple.

 

Figure 1: A graph $\Gamma$ and a topological space $X$ such that $\pi_1(X)=A_\Gamma$.

One example is given by the graph $\Gamma$ which consists of three edges joined in a line. In this case, the group $A_\Gamma$ that we get is obtained by taking the fundamental group of three tori, where the middle torus has its meridian and equator glued to the equators of the other two tori (see Figure 1).
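
Unpacking the definition in this case: $\Gamma$ is a path with four vertices $a$, $b$, $c$, $d$, so the group has the presentation $$A_\Gamma = \langle a, b, c, d \mid ab=ba,\ bc=cb,\ cd=dc \rangle,$$ in which adjacent generators commute but, for instance, $a$ and $c$ do not. Filling in all the missing edges would instead give $\mathbb{Z}^4$, while deleting every edge would give the free group $F_4$.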

Possibly because the definition of these groups sounds quite contrived, right-angled Artin groups saw relatively little interest until the 21st century. However, they have become central figures in recent progress in low-dimensional topology. I can highlight two reasons for this:

- RAAGs have a nice simple combinatorial description, which is helpful for proving group-theoretic results.

- RAAGs have a surprisingly diverse subgroup structure, which means that many groups that are harder to understand can be embedded in these 'simpler' groups.

The flagship result with respect to this viewpoint is a theorem of Agol (building upon work of Wise, Kahn-Markovic, Sageev, and others), which states that the fundamental group of every closed hyperbolic 3-manifold has a finite index subgroup which embeds into a right-angled Artin group in a particularly nice way. This was a key step in the resolution of Thurston's virtual fibering conjecture, which was one of the major open problems in the area not to fall to Perelman's solution of the Poincaré conjecture.

I'm interested in automorphism groups of right-angled Artin groups: the groups of their self-symmetries. Going back to thinking about RAAGs interpolating between free and free-abelian groups, these automorphism groups should have behaviour similar to $GL(n,\mathbb{Z})=Aut(\mathbb{Z}^n)$ from the free-abelian side, and automorphism groups of free groups on the free side. In practice, such interpolation results are much harder to obtain. In recent work with Matthew Day at the University of Arkansas, we have been studying the finiteness properties of these automorphism groups. Roughly speaking, this is the search for nice topological spaces (specifically, classifying spaces) that describe $Aut(A_\Gamma)$ for an arbitrary RAAG. For $GL(n,\mathbb{Z})$ the construction of such spaces goes back to work of Borel and Serre, who used deformation spaces of tori to build classifying spaces of certain finite index subgroups of $GL(n,\mathbb{Z})$ known as congruence subgroups. A classifying space for a finite-index subgroup of $Aut(F_n)$ was built by Culler and Vogtmann, using a deformation space of metric graphs (these appear as tropical curves in the world of tropical geometry). In a paper due to appear in the Journal of Topology, we prove the following theorem:

Theorem 1 (Day-W). For any graph $\Gamma$, the group $Aut(A_\Gamma)$ has a finite-index subgroup with a classifying space given by a finite cell complex.

Our classifying space is built by carefully splicing together deformation spaces of tori, deformation spaces of graphs, and classifying spaces for right-angled Artin groups called Salvetti complexes. This theorem gives restrictions on the algebraic invariants of $Aut(A_\Gamma)$; in particular, it says that the cohomology ring $H^*(Aut(A_\Gamma);\mathbb{Q})$ is finitely generated as a $\mathbb{Q}$-vector space. Our construction is inductive rather than completely concrete; however, it does allow for some calculations (via spectral sequence arguments, for instance). The next step for us in this programme is to give an algorithm to find the virtual cohomological dimension of any automorphism group $Aut(A_\Gamma)$, and we hope that other problems involving these automorphism groups can be understood through these classifying spaces as well."

Thursday, 14 March 2019

Kristian Kiradjiev wins Gold Award at this year’s STEM for Britain

Oxford Mathematician Kristian Kiradjiev has won the Gold Award in the Mathematical Sciences category at this year’s STEM for Britain at the House of Commons on 13th March. This prestigious competition provides an opportunity for researchers to communicate their research to parliamentarians.  

Kristian’s poster covered his research into the mathematical modelling of flue-gas purification and the removal of toxic chemicals from the gas.

As reported last week, Kristian was one of three Oxford Mathematicians presenting in the Commons.

Friday, 8 March 2019

Apala Majumdar wins 2019 FDM Everywoman in Tech award

Apala Majumdar, Oxford Mathematics Visiting Fellow and Reader in Applied Mathematics at the University of Bath, has been awarded the 2019 FDM Everywoman in Tech Academic Award. This is awarded to a woman in academia who has made an outstanding contribution to technology and science and whose work has made or has the potential to make a significant long-term impact in STEM.

Apala is an applied mathematician researching fundamental mathematical theories in material science. She specialises in liquid crystals and has published over 40 papers to date. Moreover, Apala works to inspire female researchers globally through mentorship and is deeply committed to teaching and training young people.

Apala was nominated by Oxford Mathematician and Director of the Oxford Centre for Industrial and Applied Mathematics (OCIAM), Alain Goriely, who said: “I cannot think of a more deserving candidate for an academic award for young women who are inspiring other female researchers around the world. Apala has single-handedly built an international network spanning four continents, making her one of the world leaders in her field and one of the most internationally recognised of her generation."

The FDM Tech Awards take place in the week of International Women’s Day and celebrate 50 of the most talented individuals shaking up the tech industry.

 

Wednesday, 6 March 2019

Three Oxford Mathematicians to present their research in the House of Commons

Three Oxford Mathematicians, Kristian Kiradjiev, Liam Brown and Tom Crawford, are to present their research in Parliament at this year’s STEM for Britain competition at the House of Commons on 13th March. This prestigious competition provides an opportunity for researchers to communicate their research to parliamentarians.

Kristian’s poster covers his research into the mathematical modelling of flue-gas purification, Liam’s covers computational models of cancer immunotherapy, while Tom’s looks at the spread of pollution in the ocean.

The posters are judged by leading academics; the gold medallist receives £2,000, while silver and bronze receive £1,250 and £750 respectively.

Thursday, 28 February 2019

Heather Harrington awarded the Adams Prize

Oxford Mathematics' Heather Harrington is the joint winner of the 2019 Adams Prize. The prize is one of the University of Cambridge's oldest and most prestigious prizes. Named after the mathematician John Couch Adams and endowed by members of St John's College, it commemorates Adams's role in the discovery of the planet Neptune. Previous prize-winners include James Clerk Maxwell, Roger Penrose and Stephen Hawking.

This year's Prize has been awarded for achievements in the field of The Mathematics of Networks. Heather's work uses mathematical and statistical techniques including numerical algebraic geometry, Bayesian statistics, network science and optimisation, in order to solve interdisciplinary problems. She is the Co-Director of the recently established Centre for Topological Data Analysis.

Tuesday, 26 February 2019

Bendotaxis - when droplets are self-propelled in response to bending

We’re all familiar with liquid droplets moving under gravity (especially if you live somewhere as rainy as Oxford). However, emerging applications such as lab-on-a-chip technologies require precise control of extremely small droplets; on these scales, the forces associated with surface tension dominate gravity, and it is therefore not practical to rely on the weight of the drops for motion. Many active processes (requiring external energy input), such as those involving temperature gradients, electric fields and mechanical actuation, have been used successfully to move small droplets. Recently, however, there has been increasing interest in passive processes, which do not require external driving. One example is durotaxis, in which droplets spontaneously move in response to rigidity gradients (similar to the active motion of biological cells, which generally move to stiffer regions of a deformable substrate); here, the suffix ‘taxis’ refers to the self-propulsive nature of the motion. In a recent study, Oxford Mathematicians Alex Bradley, Finn Box, Ian Hewitt and Dominic Vella introduced another such mechanism: bendotaxis, self-propelled droplet motion in response to bending. What is particularly interesting is that the motion occurs in the same direction regardless of whether the drop has an affinity for the channel walls (referred to as ‘wetting’) or not (‘non-wetting’), which is atypical for droplet physics.

A small drop confined to a channel exerts a force on the walls as a result of surface tension; this force pulls the walls together when the drop wets them, and pushes them apart otherwise. By manipulating the geometry of the channel (leaving one end free, and clamping the other), the deformation that results from this surface tension force is asymmetric: it creates a tapering in the channel. The drop subsequently moves in response to this tapering, towards the free end in both the wetting and non-wetting cases.
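
A rough order-of-magnitude version of this balance (our own sketch, with simplified notation, not the precise scaling analysis of the paper): a drop of surface tension $\gamma$ in a gap of width $h$ exerts a Laplace pressure of order $\gamma/h$ on the walls. Acting over the wetted length $\ell$, this pressure loads each wall like a cantilever of length $L$ and bending stiffness $B$ (per unit width), giving a tip deflection of order $$\delta \sim \frac{\gamma \ell L^3}{h B},$$ so appreciable tapering, and hence motion, can be expected once $\delta$ becomes comparable to the gap $h$ itself.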

Using a combination of scaling arguments and numerical solutions of a mathematical model of the problem, the team were able to verify that it is indeed the capillary-induced elastic deformation of the channel that drives the experimentally observed motion. This model allowed them to understand the dynamic nature of bendotaxis and to predict the motion of drops in these deformable channels. In particular, they identified several interesting features of the motion; counter-intuitively, it is predicted (and observed) that the time taken for a drop to move along the channel decreases as the channel increases in length. However, relatively long channels are susceptible to ‘trapping’, whereby the force exerted by the drop is sufficient to bring the channel walls into contact. It is hoped that understanding the motion will pave the way for its application on a variety of scales - for example, drug delivery on a laboratory scale, and self-cleaning surfaces on a micro-scale.

Thursday, 21 February 2019

Oxford Mathematics Student Tutorial now online

The Oxford Mathematics educational experience is a journey, like any other educational experience. It builds on what you learn at school. It is not unfamiliar, and we don't want it to be invisible. But it has aspects that are different. One of these is the tutorial system. Students have lectures. But they also have tutorials based on those lectures where they sit, usually in pairs, with a tutor, go through their work and, critically, get to ask questions. It is their tutorial.

Having streamed the First Year Students' Dynamics lecture last week and interviewed the students as they left the lecture, we now present the tutorial as it happened. Even if you are not a mathematician, we hope the lectures and tutorial give you an insight into how things work in Oxford. And maybe even encourage you, or someone you know, to think about giving Oxford a go. Or just giving maths a go.

Wednesday, 20 February 2019

When does one of the central ideas in economic theory work?

The concept of equilibrium is one of the most central ideas in economics. It is one of the core assumptions in the vast majority of economic models, including models used by policymakers on issues ranging from monetary policy to climate change, trade policy and the minimum wage. But is it a good assumption? In a paper just published in Science Advances, Oxford Mathematicians Marco Pangallo, Torsten Heinrich and Doyne Farmer investigate this question in the simple framework of games, and show that when the game gets complicated this assumption is problematic. If these results carry over from games to economics, this raises deep questions about economic models, and about when they are useful for understanding the real world.

To understand what equilibrium is, it helps to think about a simple example. Kids love to play tic-tac-toe (also known as noughts and crosses), but at around eight years old they learn that there is a strategy for the second player that always results in a draw. This strategy is what is called an equilibrium in economics. If all the players in the game are rational, they will play an equilibrium strategy. In economics, the word rational means that the player can evaluate every possible move, explore its consequences to their endpoint, and choose the best move. Once kids are old enough to discover the equilibrium of tic-tac-toe, they quit playing because the same thing always happens and the game is really boring. One way to view this is that, for the purposes of understanding how children play tic-tac-toe, rationality is a good behavioural model for eight year olds but not for six year olds.
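
The claim about tic-tac-toe is easy to check by brute force; the short sketch below (our illustration, not from the paper) solves the game by exhaustive minimax and confirms that the equilibrium outcome under perfect play is a draw.

```python
# A self-contained sketch: exhaustive minimax confirms that tic-tac-toe
# is a draw under optimal play, i.e. the "equilibrium" outcome.
from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] != "." and board[i] == board[j] == board[k]:
            return board[i]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Best achievable outcome for X: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0                        # board full: a draw
    moves = [board[:m] + player + board[m+1:] for m in range(9) if board[m] == "."]
    nxt = "O" if player == "X" else "X"
    vals = [value(b, nxt) for b in moves]
    return max(vals) if player == "X" else min(vals)

print(value("." * 9, "X"))  # 0: with perfect play the game is a draw
```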

In a more complicated game like chess, rationality is never a good behavioural model. The problem is that chess is a much harder game, hard enough that no one can analyse all the possibilities, and the usefulness of the concept of equilibrium breaks down. In chess no one is smart enough to discover the equilibrium, and so the game never gets boring. This illustrates that whether or not rationality is a sensible model of the behaviour of real people depends on the problem they have to solve. If the problem is simple, it is a good behavioural model, but if the problem is hard, it may break down.

Theories in economics nearly universally assume equilibrium from the outset. But is this always a reasonable thing to do? To get insight into this question, Pangallo and collaborators study when equilibrium is a good assumption in games. They don’t just study games like noughts and crosses or chess, but rather they study all possible games of a certain type (called normal form games). They literally make up games at random and have two simulated players play them to see what happens. The simulated players use strategies that do a good job of describing what real people do in psychology experiments. These strategies are simple rules of thumb, like doing what has worked well in the past or picking the move that is most likely to beat the opponent’s recent moves.
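
To make this concrete, here is a minimal sketch of an experiment in that spirit (our illustration: the learning rule is plain fictitious play, not the authors' exact dynamics): two simulated players repeatedly play a randomly generated normal form game, each best-responding to the empirical frequencies of the opponent's past moves, and we inspect whether play settles down.

```python
# A minimal sketch (not the authors' code): two players learn a random
# normal form game by fictitious play, i.e. best-responding to the
# empirical frequency of the opponent's past moves.
import numpy as np

rng = np.random.default_rng(1)
n = 5                                   # number of actions per player
A = rng.standard_normal((n, n))         # payoffs to player 1 for (row, col)
B = rng.standard_normal((n, n))         # payoffs to player 2 for (row, col)

counts1 = np.ones(n)                    # how often each action has been played
counts2 = np.ones(n)
history = []
for t in range(5000):
    a = int(np.argmax(A @ (counts2 / counts2.sum())))  # best reply to opponent's mix
    b = int(np.argmax((counts1 / counts1.sum()) @ B))
    counts1[a] += 1
    counts2[b] += 1
    history.append((a, b))

# If recent play keeps visiting several action pairs rather than repeating
# one pair, the dynamics have not converged to a pure equilibrium.
print(set(history[-100:]))
```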

The authors find that the prevalence of cycles in the structure of games is a very good indicator of the likelihood that strategies do not converge to equilibrium. This point is illustrated in the figure below. Cycles are indicated with red arrows in the payoff matrices – namely, the tables filled with numbers in the first row. When cycles are present, many learning dynamics are likely to fluctuate perpetually instead of converging to equilibrium. Fluctuating dynamics are coloured red or orange in the panels of the second and third rows.

The theory then suggests that equilibrium is likely to be the wrong behavioural model when the game has a cyclical structure. When are cycles prevalent, and when are they rare?
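
The canonical example of such a cycle (a standard illustration, not one of the paper’s figures) is Matching Pennies, with payoffs to (player 1, player 2) given by $$\begin{pmatrix} (+1,-1) & (-1,+1) \\ (-1,+1) & (+1,-1) \end{pmatrix}.$$ Whichever cell the players find themselves in, one of them regrets their choice and switches, so best responses chase each other around all four cells and play never settles on a single outcome.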

When the game is simple enough, in the sense that the number of actions available to each player is small, cycles are rare. When the game is more complicated, whether or not cycles are common depends on whether or not the game is competitive. If the incentives of the players are aligned, cycles are rare, even if the game is complicated. But when the incentives of the players are not aligned and the game gets complicated, cycles are common.

These results match the intuition about noughts and crosses vs. chess: complicated games are harder to learn and it is harder for players to coordinate on an equilibrium when one player’s gain is the other player’s loss. The main novelty of the paper is that the authors develop a formalism to make all this quantitative. This is confirmed in the figure below, which shows the share of cycles (dashed lines) and the non-convergence frequency of six learning algorithms (markers) as the complicatedness and competitiveness of a game vary. (The payoff correlation Γ is negative when the game is competitive.)

Many of the problems encountered by economic actors are too complicated to model easily using a normal form game. Nonetheless, this work suggests a potentially serious problem. Many situations in economics are complicated and competitive. This raises the possibility that many important theories in economics may be wrong: If the key behavioural assumption of equilibrium is wrong, then the predictions of the model are likely wrong too. In this case new approaches are required that explicitly simulate the behaviour of the players and take into account the fact that real people are not good at solving complicated problems.
