News

Monday, 2 October 2017

QBIOX - a new network bringing together expertise in quantitative biology

QBIOX – Quantitative Biology in Oxford – is a new network that brings together biomedical and physical scientists from across the University who share a commitment to making biology and medicine quantitative. Researchers in a wide range of bioscience fields are interested in the behaviour of populations of cells: how they work individually and collectively, how they interact with their environment, how they repair themselves and what happens when these mechanisms go wrong. At the cell and tissue levels, similar processes are at work in areas as diverse as developmental biology, regenerative medicine and cancer, which means that common tools can be brought to bear on them.

QBIOX’s focus is on mechanistic modelling: using maths to model biological processes and refining those models in order to answer a particular biological question. Researchers now have access to more data than ever before, and using the data effectively requires a joined-up approach. It is this challenge that has encouraged Professors Ruth Baker, Helen Byrne and Sarah Waters from the Mathematical Institute to set up QBIOX. The aim is to connect researchers with the necessary depth and range of specialist knowledge, helping them open up new collaborations and share expertise, in order to bring about a step-change in understanding in these areas. In regenerative medicine, for example, QBIOX has brought together a team of people from across the sciences and medical sciences in Oxford who are working on problems ranging from basic stem cell science right through to translational medicine that will have real impacts on patients.

A look at the list of QBIOX collaborators demonstrates that Oxford researchers from a wide range of backgrounds are already involved: from maths, statistics, physics, computer science and engineering, through to pathology, oncology, cardiology and infectious disease. QBIOX is encouraging any University researcher with an interest in quantitative biology to join the network. It runs a programme of activities to catalyse interactions between members. For example, QBIOX’s termly colloquia offer opportunities for academics to showcase research that is of interest to network members, and there are regular smaller meetings that look in detail at specific topics. QBIOX also has funding for researchers who would like to run small meetings to scope out the potential for using theoretical and experimental techniques to tackle new problems in the biosciences.

The QBIOX website has details of all the activities run by the network, as well as relevant events taking place across the University. If you have events you would like to feature here, just complete the contact form. You can also sign up to be a collaborator and to receive QBIOX’s termly newsletter.

Sunday, 1 October 2017

Russo-Seymour-Welsh estimates for the Kostlan ensemble of random polynomials

Oxford Mathematician Dmitry Belyaev is interested in the interface between analysis and probability. Here he discusses his latest work.

"There are two areas of mathematics that clearly have nothing to do with each other: projective geometry and conformally invariant critical models of statistical physics. It turns out that the situation is not as simple as it looks and these two areas might be connected.

We start with projective geometry. Let $g:\mathbb{R}^{m+1} \to \mathbb{R}$ be a homogeneous polynomial of degree $n$ in $m + 1$ variables. Although the values of the polynomial are not well defined in homogeneous coordinates $[x_0 : x_1 : \dotsm : x_m]$, the zero locus, the set where $g([x_0 : x_1 : \dotsm : x_m]) = 0$, is well defined. The set $S = \{x \in \mathbb{RP}^m : g(x) = 0\}$ is a projective variety.

We can ask what a typical projective variety looks like. The answer to this question very much depends on the meaning of the word ‘typical’. One possibility is to define some ‘natural’ probability measure on the space of all homogeneous polynomials $g$ and treat ‘typical’ behaviour as almost sure behaviour with respect to this measure. Since the space of polynomials is too large, there is no canonical way to define the most natural uniform measure. The second best choice is a Gaussian measure. This still does not completely determine the measure, but there is one Gaussian measure which stands out: it is the only Gaussian measure which is the real trace of a complex Gaussian measure on the space of homogeneous polynomials on $\mathbb{CP}^m$ that is invariant with respect to the unitary group. A random polynomial of degree $n$ with respect to this measure can be written as

$$f_n(x) = f_{n;m}(x) = \sum_{|J|=n}\sqrt{\binom{n}{J}} a_J x^J,$$

where $J = (j_0, \dots, j_m)$ is a multi-index, $|J| = j_0 + \dotsb + j_m$, $\binom{n}{J} = \frac{n!}{j_0! \dotsb j_m!}$, and $\{a_J\}$ are i.i.d. standard Gaussian random variables. This random function is called the Kostlan ensemble or complex Fubini-Study ensemble. We can think of a ‘typical’ variety of degree $n$ as the nodal set of the Kostlan ensemble of degree $n$. We are mostly interested in the two-dimensional case $m = 2$.
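As an illustrative aside, the ensemble is easy to sample numerically straight from this formula. The following sketch (our illustration, assuming the numpy and matplotlib libraries, not code from the paper) draws the nodal set of one sample for $m = 2$ on the affine chart $x_0 = 1$:

```python
# Sample a Kostlan polynomial for m = 2 and plot its nodal set on one chart.
# Coefficients a_J are i.i.d. standard Gaussians, weighted by the square root
# of the multinomial coefficient, exactly as in the formula above.
import numpy as np
from math import factorial
import matplotlib.pyplot as plt

n = 20
rng = np.random.default_rng(0)

# all multi-indices J = (j0, j1, j2) with |J| = n, with weighted coefficients
terms = []
for j0 in range(n + 1):
    for j1 in range(n + 1 - j0):
        j2 = n - j0 - j1
        multinom = factorial(n) // (factorial(j0) * factorial(j1) * factorial(j2))
        terms.append((j1, j2, np.sqrt(multinom) * rng.standard_normal()))

# evaluate f_n on the chart x0 = 1 (so x0**j0 = 1) and draw the zero level set
x = np.linspace(-2, 2, 400)
X, Y = np.meshgrid(x, x)
F = sum(c * X**j1 * Y**j2 for j1, j2, c in terms)
plt.contour(X, Y, F, levels=[0])
plt.title("Nodal set of a degree-20 Kostlan polynomial (one chart)")
plt.show()
```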

It has been shown by V. Beffara and D. Gayet that there is a Russo-Seymour-Welsh type estimate for the Bargmann-Fock random function, which is the scaling limit of the Kostlan ensemble. This means that if one fixes a nice domain with two marked boundary arcs, then the probability that there is a nodal line connecting the two arcs inside the domain is bounded from below by a constant which depends on the shape of the domain, but not on its scale. Estimates of this type first appeared in the study of critical percolation models and are a strong indication that the corresponding curves have conformally invariant scaling limits.
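To see what an RSW-type statement looks like in its original habitat, here is a small Monte Carlo sketch (our illustration, assuming numpy and scipy, unrelated to the paper's proofs): in critical site percolation on a square grid, the probability of an open left-to-right crossing of an $n \times n$ box stays essentially constant as $n$ grows.

```python
# Estimate left-right crossing probabilities in critical site percolation.
# RSW theory says these stay bounded away from 0 and 1 as the box grows.
import numpy as np
from scipy.ndimage import label

P_C = 0.592746  # approximate critical density for site percolation on Z^2

def crosses(n, rng):
    open_sites = rng.random((n, n)) < P_C
    clusters, _ = label(open_sites)  # 4-connected clusters of open sites
    # a cluster label appearing in both the left and right columns = crossing
    return bool((set(clusters[:, 0]) & set(clusters[:, -1])) - {0})

for n in [16, 32, 64, 128]:
    rng = np.random.default_rng(n)
    trials = 400
    hits = sum(crosses(n, rng) for _ in range(trials))
    print(n, hits / trials)  # hovers near a constant as n doubles
```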

In recent work with S. Muirhead and I. Wigman we have extended this result to the Kostlan ensemble on the sphere. Namely, we have obtained a lower bound on the probability of crossing a domain which is uniform in the degree of the polynomial and in the scale of the domain. This suggests that large components of a ‘typical’ projective curve have a scaling limit which is conformally invariant and should be described by the Schramm-Loewner Evolution."

For a fuller explanation of Dmitry and colleagues' work please click here.

Monday, 25 September 2017

Oxford Mathematics Research looks at Ricci Flow

As part of our series of research articles focusing on the rigour and intricacies of mathematics and its problems, Oxford Mathematician Andrew Dancer discusses his work on Ricci Flow.

"A sphere and an ellipsoid (rugby ball) are the same topologically, in that each can be continuously deformed into the other without tearing, but obviously they are not the same geometrically. We can see that the sphere is in some sense uniformly curved, while the curvature of the ellipsoid varies as we move around the surface. 

The mathematical gadget that encodes information about curvature, lengths, angles, volumes etc. is called a metric. This concept in fact makes sense not just for surfaces but in higher dimensions as well. The curvature is now not a single function but an object called the Riemann curvature tensor.

A fundamental question in geometry is whether a given manifold has a best or nicest metric. One popular candidate is the notion of an Einstein metric. The equations expressing the Einstein condition are a complicated nonlinear system of partial differential equations, and questions about existence and uniqueness of Einstein metrics in dimensions above three are still not well understood in general.

One strategy to study the existence of Einstein metrics is via the Ricci flow. This is a nonlinear version of heat flow, whose fixed points correspond to Einstein metrics, rather as fixed points of heat flow correspond to harmonic functions (solutions of Laplace's equation). In good situations the Ricci flow may converge to an Einstein metric, but it is also possible for singularities to develop, arising from the nonlinear nature of the flow. I am particularly interested in so-called soliton solutions of the Ricci flow, corresponding to metrics that evolve just by rescaling and coordinate changes under the flow. These give a natural generalisation of the Einstein condition, and are also very important in understanding singularities of the flow via a rescaling of variables.
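For readers who want to see the equations, the standard formulation (textbook notation, not quoted from Andrew's papers) is as follows. The Ricci flow evolves a metric $g(t)$ by

$$\frac{\partial g}{\partial t} = -2\,\operatorname{Ric}(g),$$

an Einstein metric satisfies $\operatorname{Ric}(g) = \lambda g$ for a constant $\lambda$, and a gradient Ricci soliton satisfies

$$\operatorname{Ric}(g) + \nabla^2 f = \lambda g$$

for some smooth function $f$. A soliton evolves under the flow only by rescaling and by the coordinate changes generated by $\nabla f$, and it reduces to an Einstein metric when $f$ is constant.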

In collaboration with McKenzie Wang of McMaster University in Canada, I have produced new examples of solitons by assuming the existence of a large enough symmetry group to reduce the PDEs to ordinary differential equations. With my student Alejandro Betancourt de la Parra, I have even found some cases where the soliton equations may be solved explicitly, due to unexpected integrability structures in certain dimensions."

For fuller explanations of Andrew's work please click on the links below:

On Ricci solitons of cohomogeneity one

Some New Examples of Non-Kähler Ricci Solitons

A Hamiltonian approach to the cohomogeneity one Ricci soliton equations

Image above courtesy of Syafiq Johar

Wednesday, 20 September 2017

Oxford Mathematics Public Lectures for the Autumn (and a bit of Winter)

We have an exciting series of Oxford Mathematics Public Lectures this Autumn. Summary below and full details here. All will be podcast and on Facebook Live. We also have a London Lecture by Andrew Wiles on 28 November (details will follow separately). Please email external-relations@maths.ox.ac.uk to register for the lectures below.

Closing the Gap: the quest to understand prime numbers - Vicky Neale

18 October, 5.00-6.00pm, Lecture Theatre 1, Mathematical Institute, Oxford

--

Maths v Disease - Julia Gog

1 November, 5.00-6.00pm, Lecture Theatre 1, Mathematical Institute, Oxford

--

The Seduction of Curves: The Lines of Beauty That Connect Mathematics, Art and The Nude - Allan McRobie

13 November, 5.00-6.00pm, Lecture Theatre 1, Mathematical Institute, Oxford

--

Oxford Mathematics Christmas Public Lecture - Alex Bellos, title tbc

6 December, 5.00-6.00pm, Lecture Theatre 1, Mathematical Institute, Oxford

--

Please email external-relations@maths.ox.ac.uk to register

Thursday, 14 September 2017

A continuum of expanders

As part of our series of research articles focusing on the rigour and intricacies of mathematics and its problems, Oxford Mathematician David Hume discusses his work on networks and expanders.

"A network is a collection of vertices (points) and edges (lines connecting two vertices). They are used to encode everything from transport infrastructure to social media interactions, and from the behaviour of subatomic particles to the structure of a group of symmetries. A common theme throughout these applications, and therefore of interest to civil engineers, advertisers, physicists, and mathematicians (amongst others), is that it is important to know how well connected a given network is. For example, is it possible that two major road closures make it impossible to drive from London to Oxford? An efficient road network should ensure that there are multiple ways to get between any two important places, but we cannot simply tarmac everything! As another example, if as an advertiser, you post adverts on a social media platform, how do you ensure that you reach as many people as possible, without paying to post to every single account?

Given a network, we say its cut size is the smallest number of vertices you need to remove, so that the remaining pieces have at most half the original number of vertices in them (in our examples: how many roads need to close before half the population are unable to drive to visit the other half, or how many people need to ignore your advert so that less than half of the users of social media will see it).
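For the concretely minded, this definition can be computed by brute force on small examples. The sketch below (our illustration, assuming the networkx library; hopeless for large networks, where the problem is computationally hard) tries removing vertex subsets of increasing size:

```python
# Brute-force cut size: the fewest vertices whose removal leaves every
# remaining connected piece with at most half the original number of vertices.
from itertools import combinations
import networkx as nx

def cut_size(G):
    n = G.number_of_nodes()
    for k in range(n + 1):
        for removed in combinations(G.nodes, k):
            H = G.copy()
            H.remove_nodes_from(removed)
            if all(len(c) <= n / 2 for c in nx.connected_components(H)):
                return k

# A cycle of length 8 splits into two halves after removing 2 opposite vertices.
print(cut_size(nx.cycle_graph(8)))  # -> 2
```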

Let us say that a family of networks, with increasing numbers of vertices, is called an expander if the cut size of each network is proportional to the number of vertices (1) , and each vertex in a network is the end of at most a fixed number of edges. In theory this would be an optimal solution for a transport network as we can connect as many cities as we need to without needing to work out how to manage the traffic lights at a junction where 5,000 roads all converge. In practice, expanders are as incompatible with the geometry of our world as it is possible for any collection of networks to be.

(2)

Expanders, however, are still very interesting and naturally occur in diverse areas: in error-correcting codes in computer science; in number theory; and in group theory, where my personal interest lies.

It is, in general, very difficult to construct a family of expanders, even though randomly choosing larger and larger networks in which every vertex meets exactly three edges will almost surely produce an expander. The first construction of a family was given by Grigory Margulis - his examples came from networks encoding the structure of finite groups of symmetries. Other constructions have since been found, most notably a construction of Ramanujan graphs (expanders which, in a particular sense, have the largest possible ratio between their cut size and their number of vertices), and the fantastically named Zig-Zag product (3), which builds expanders inductively, starting from two very simple networks.
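As a quick numerical aside (our sketch, assuming networkx and numpy; the spectral gap is the standard computational proxy for expansion, not the cut-size definition used above), one can watch random 3-regular networks behave like expanders:

```python
# Random 3-regular graphs are almost surely expanders; a spectral gap that
# stays bounded away from zero as the graph grows is the standard certificate.
import networkx as nx
import numpy as np

def spectral_gap(G):
    eigs = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))[::-1]
    return eigs[0] - eigs[1]  # gap below the top adjacency eigenvalue (= 3 here)

for n in [50, 100, 200, 400]:
    G = nx.random_regular_graph(3, n, seed=1)
    print(n, round(spectral_gap(G), 3))
# A degree-d Ramanujan graph has second eigenvalue at most 2*sqrt(d - 1),
# which is optimal as the number of vertices grows.
```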

One question, which seems to have received little attention, is the following: how many different expanders are there? To answer this, we first have to deal with the rather sensitive question of what exactly we mean by 'different'. Does adding one edge change the expander? If so, then the above question is not really very interesting. A more interesting example is provided by Manor Mendel and Assaf Naor: they prove that there are two expanders such that however you try to associate the vertices of one with the vertices of the other, you must either move close together vertices that were very far apart before, or else move far apart vertices that were previously very close. In mathematical terms, the two expanders are not coarsely equivalent - we cannot even approximately preserve how close vertices are.

In my work, I show that there is a collection of expanders (we can even insist that they are Ramanujan graphs) which is impossible to enumerate (it is uncountable), such that no pair of them are coarsely equivalent. The technique is to show that for any coarsely equivalent networks, the largest cut size of any network contained in the first with at most n vertices is proportional to the largest cut size of any network contained in the second with at most n vertices. By constructing expanders where these two values are not proportional, we rule out the possibility of such coarse equivalences between them.

The behaviour of cut sizes which is used above to rule out coarse equivalences is of much interest for networks which are not expanders. In my current work I am exploring how cut sizes behave for networks which are 'negatively curved at large scale': this is an area of particular interest in group theory, and plays a key role in the recent proofs of important conjectures in low-dimensional topology: the virtually Haken and virtually fibred conjectures. For such 'negatively curved' groups, cut sizes seem to be related to the dimension of an associated fractal 'at infinity'. With John Mackay and Romain Tessera, we have established this link for an interesting collection of such networks, and are working on developing the technology needed to generalise our results."

(1) This is not the traditional definition, but one of my results proves that a network is an expander in the definition given here if and only if it contains an expander in the traditional sense.

(2) Two networks with highlighted collections of vertices demonstrating the value of the cut size

(3) The header image of this article is the Zig-Zag product of a cycle of length 6 with a cycle of length 4 

Monday, 11 September 2017

Searching the genome haystack - Where is the disease? Where is the drug risk?

Medicines are key to disease treatment but are not without risk. Some patients get untoward side effects, some get insufficient relief. The human genome project promises to revolutionise modern health-care. However, there are 3 billion places where a human’s DNA can be different. Just where are the genes of interest in sufferers of complex chronic conditions? Which genes are implicated the most in which disease in which patients? Which genes are involved in a beneficial response to a medicine? Which genes might be predictive of drug-induced adverse events? Collaborative industrial research by Oxford Mathematics' Clive Bowman seeks to tackle these areas to enable drug discovery companies to develop appropriate treatments.

The Royal Society Industrial Fellowship research at the Oxford Centre for Industrial and Applied Mathematics (OCIAM) extends stochastic insights from communication theory to produce easy-to-interpret visualisations for biotech use. Interacting determinants of the illnesses or adverse syndromes can be displayed as heatmaps or coded networks that highlight potential targets against which chemists can rationally design drugs. All types of measured data can be used simultaneously, and dummy synthetic indicators such as pathways or other ontologies can be added for clarity. Heterogeneity is displayed automatically, allowing understanding of why some people get a severe disease (or drug response) and others a mild syndrome, as well as other variations, for example due to someone’s ethnicity.

Helped by this mathematics the hope is that the right drug can be designed for the right patient and suffering alleviated efficiently with the minimum risk for the individual. For fuller detail on Clive's work please click here.

The image above shows a drug adverse event example (please click on the image). Clockwise from top left: Drug molecule (by Fvasconcellos); heat map showing patients with severe (red) or mild (blue) syndrome in multidimensional information space (courtesy of Dr O Delrieu); two aetiological subnetworks to syndrome; 3D animation display of results with dummy indicator variables.

Friday, 1 September 2017

Heterogeneity in cell populations - a cautionary tale

Researchers from Oxford Mathematics and Imperial College London have provided a 'mathematical thought experiment' to inspire caution in biologists measuring heterogeneity in cell populations.

As technologies for gene sequencing and microscopy improve, biologists and biomedical researchers are increasingly able to distinguish heterogeneity in cell populations. Some of these differences in cellular behaviour can have important implications for biological function: think of stem cells in embryonic development, or invasive malignant cells in the onset of cancer. But where will this trend of looking for heterogeneity lead? With a good enough microscope, every cell may look different. But is this meaningful?

To illustrate their point, Linus Schumacher and Oxford Mathematicians Ruth Baker and Philip Maini focused on an example of heterogeneity in migrating cell populations. They used statistics relating to delays in the correlation between individual cells' movements to examine whether it is possible to infer heterogeneities in cell behaviours. This idea originally stems from analysing the movements of birds, but has since been applied to cells too. By measuring when the movement of two cells (or birds) is most aligned, we learn if cells (or birds) move and turn simultaneously (no delay in correlations), or follow each other (delays in correlations). This is of importance to biologists interested in understanding if a subset of cells is leading metastatic invasion, for example, or the migration of cells in the developing embryo.
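To make the idea concrete, a minimal version of such a delayed directional correlation might look like the sketch below (our toy illustration, assuming numpy; the statistics used in the paper are more sophisticated). A peak at zero delay suggests the two cells turn simultaneously; a peak at a positive delay suggests one follows the other.

```python
# Delayed correlation between the movement directions of two tracked cells.
import numpy as np

def directions(track):
    # track: (T, 2) array of positions -> (T-1, 2) array of unit step directions
    steps = np.diff(track, axis=0)
    return steps / np.linalg.norm(steps, axis=1, keepdims=True)

def delayed_correlation(track_a, track_b, max_delay=10):
    da, db = directions(track_a), directions(track_b)
    corr = []
    for d in range(max_delay + 1):
        # alignment of cell B's direction with cell A's direction d steps earlier
        dots = np.sum(da[: len(da) - d] * db[d:], axis=1)
        corr.append(dots.mean())
    return np.array(corr)  # index of the maximum = estimated delay
```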

Using a minimal mathematical model for cell migration, Schumacher, Baker and Maini show that correlations in movement patterns are not necessarily a good indicator of heterogeneity: even a population of identical cells can appear heterogeneous, due to chance correlations and limited sample sizes. What’s more, when the authors explicitly included heterogeneity in their model to describe experimentally measured data, the model of a homogeneous cell population could describe the data just as well (albeit for different parameter values), heavily limiting what can be concluded from such measurements.

Thus, we have learnt that heterogeneity can naively be inferred from cell tracking data, but it may not be so meaningful. And the implications reach further than a particular type of data and specific statistical analysis. In an associated commentary, Paul Macklin of Indiana University illustrates a corollary of the main work: cell populations that divide with a fixed rate, or a distribution of division rates, can have the same distribution of cell cycle times (which could be measured experimentally). In this case, heterogeneity (whether it is real or not) is unimportant in understanding the observed biological phenomenon.

Lead author Linus Schumacher got the idea for this study while finishing his DPhil at the Wolfson Centre for Mathematical Biology in Oxford, and continued working on it with the support of an EPSRC Doctoral Prize award. The research appears on the cover of the August issue of Cell Systems.

Tuesday, 29 August 2017

How our immune systems could help us understand crime

Taxation and death may be inevitable but what about crime? It is ubiquitous and seems to have been around for as long as human beings themselves. A disease we cannot shake. However, therein lies an idea, one that Oxford Mathematician Soumya Banerjee and colleagues have used as the basis for understanding and quantifying crime.

Their starting point is that crime is analogous to a pathogenic infection and the police response to it is similar to an immune response. Moreover, the biological immune system is also engaged in an arms race with pathogens. These analogies enable an immune system inspired theory of crime and violence in human societies, especially in large agglomerations like cities.

An immune system inspired theory of crime can provide a new perspective on the dynamics of violence in societies. The competitive dynamics between police and criminals resemble the arms race between the immune system and invading pathogens. Cities have properties similar to biological organisms - the police and military forces would be the immune system that protects against internal and external threats.

Police are activated by crime just as immune cells are activated by specialized cells called dendritic cells. Non-criminals are turned into criminals in the presence of crime; in this sense crime spreads like a virus, and the model specifically simulates a spread of disorder. Police also remove criminals, much as T-cells kill and remove infected cells.
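A toy compartmental version of this analogy (our sketch, not the authors' model) treats non-criminals, criminals and police like susceptible cells, infected cells and immune cells:

```python
# Non-criminals S become criminals C through contact with crime; police P are
# recruited in proportion to crime and remove criminals, as T-cells clear
# infected cells. Forward-Euler integration of the resulting toy ODEs.
def step(S, C, P, dt=0.01, beta=0.3, gamma=0.2, delta=0.1, mu=0.05):
    dS = -beta * S * C                 # exposure turns non-criminals to crime
    dC = beta * S * C - gamma * C * P  # policing removes criminals
    dP = delta * C - mu * P            # police activated by crime, then stand down
    return S + dt * dS, C + dt * dC, P + dt * dP

S, C, P = 0.99, 0.01, 0.0
for _ in range(100_000):
    S, C, P = step(S, C, P)
print(f"S={S:.3f}, C={C:.3f}, P={P:.3f}")  # long-run state of the toy model
```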

The work has implications for public policy, ranging from how much financial resource to invest in crime fighting, to optimal policing strategies, pre-placement of police, and the number of police to be allocated to different cities. The research can also be applied to other forms of violence in human societies (like terrorism) and violence in other primate societies and social insects such as ants. Although still an extremely ambitious goal, in the era of big data we may be able to predict the behaviours of large ensembles of people without being able to predict the actions of individuals.

The researchers hope that this will be the first step towards a quantitative theory of violence and conflict in human societies, one that contributes further to the pressing debate about how to design smarter and more efficient cities that can scale and be sustainable despite population increase - a debate that mathematicians, especially in Oxford, are fully engaged in.

For a fuller explanation of the theory and a more detailed demonstration of the mathematics click here and here for PDF.

Wednesday, 16 August 2017

Oxford Mathematician Ulrike Tillmann elected to Royal Society Council

Oxford Mathematician Ulrike Tillmann FRS has been elected a member of the Council of the Royal Society. The Council consists of between 20 and 24 Fellows and is chaired by the President.

Founded in the 1660s, the Royal Society’s fundamental purpose is to recognise, promote, and support excellence in science and to encourage the development and use of science for the benefit of humanity. The Royal Society's motto 'Nullius in verba' is taken to mean 'take nobody's word for it'. 

Ulrike specialises in algebraic topology and has made important contributions to the study of the moduli space of algebraic curves.

Tuesday, 15 August 2017

Hair today, gone tomorrow. But have scientists found a new way to stimulate hair growth?

How does the skin develop follicles and eventually sprout hair? Research from a team including Oxford Mathematicians Ruth Baker and Linus Schumacher addresses this question using insights gleaned from organoids, 3D assemblies of cells possessing rudimentary skin structure and function, including the ability to grow hair.

In the study, the team started with dissociated skin cells from a newborn mouse. They then took hundreds of timelapse movies to analyse the collective cell behaviour. They observed that these cells formed organoids by moving through six distinct phases: 1) dissociated cells; 2) aggregated cells; 3) cysts; 4) coalesced cysts; 5) layered skin; and 6) skin with follicles, which robustly produce hair after being transplanted onto the back of a host mouse. By contrast, dissociated skin cells from an adult mouse only reached phase 2 - aggregation - before stalling in their development and failing to produce hair.

To understand the forces at play, the scientists analysed the molecular events and physical processes that drove successful organoid formation with newborn mouse cells. "We used a combination of bioinformatics and molecular screenings," said co-author Mingxing Lei from the University of Southern California. At various time points, they observed increased activity in genes related to: the protein collagen; the blood sugar-regulating hormone insulin; the formation of cellular sheets; the adhesion, death or differentiation of cells; and many other processes. In addition to determining which genes were active and when, the scientists also determined where in the organoid this activity took place. Next, they blocked the activity of specific genes to confirm their roles in organoid development.

By carefully studying these developmental processes, the scientists obtained a molecular "how to" guide for driving individual skin cells to self-organise into organoids that can produce hair. They then applied this "how to" guide to the stalled organoids derived from adult mouse skin cells. By providing the right molecular and genetic cues in the proper sequence, they were able to stimulate these adult organoids to continue their development and eventually produce hair. In fact, the adult organoids produced 40 percent as much hair as the newborn organoids - a significant improvement.

"Normally, many ageing individuals do not grow hair well, because adult cells gradually lose their regenerative ability," said Cheng-Ming Chuong from the team. "With our new findings, we are able to make adult mouse cells produce hair again. In the future, this work can inspire a strategy for stimulating hair growth in patients with conditions ranging from alopecia to baldness."
