Mathematicians Awarded the Fields Medal in 2014!

“A gold medal! Who wouldn’t want one hanging around their neck? And if that gold medal happens to be the Fields Medal, it is a singular achievement for a mathematician. The Fields Medal is often regarded as the international prize for outstanding discoveries in mathematics. It is awarded to two, three, or four mathematicians at a time, and their age must be under 40. For this reason the medal is considered the most prestigious award a young mathematician can receive. The first Fields Medal was awarded in 1936. It was then suspended for a long period because of the World War. Since 1950 it has been awarded regularly every four years.”

This year, in 2014, four mathematicians have once again received this extraordinary honor. To learn more about the Fields Medal, and to see the names of the medalists up to 2010, you can read Arunava Rahman’s article on the Fields Medal.

The following texts were provided by the International Mathematical Union, which awards the Fields Medals. They do an excellent job of explaining, in as accessible language as possible, what the four 2014 medallists did to earn their awards. The texts describing their work are copyright free and can be used in publications.

Artur Avila & his works


Artur Avila has made outstanding contributions to dynamical systems, analysis, and other areas, in many cases proving decisive results that solved long-standing open problems. A native of Brazil who spends part of his time there and part in France, he combines the strong mathematical cultures and traditions of both countries. Nearly all his work has been done through collaborations with some 30 mathematicians around the world. To these collaborations Avila brings formidable technical power, the ingenuity and tenacity of a master problem-solver, and an unerring sense for deep and significant questions.

Avila’s achievements are many and span a broad range of topics; here we focus on only a few highlights. One of his early significant results closes a chapter on a long story that started in the 1970s. At that time, physicists, most notably Mitchell Feigenbaum, began trying to understand how chaos can arise out of very simple systems. Some of the systems they looked at were based on iterating a mathematical rule such as 3x(1–x).


Starting with a given point, one can watch the trajectory of the point under repeated applications of the rule; one can think of the rule as moving the starting point around over time. For some maps, the trajectories eventually settle into stable orbits, while for other maps the trajectories become chaotic. Out of the drive to understand such phenomena grew the subject of discrete dynamical systems, to which scores of mathematicians contributed in the ensuing decades. Among the central aims was to develop ways to predict long-time behavior. For a trajectory that settles into a stable orbit, predicting where a point will travel is straight-forward. But not for a chaotic trajectory: Trying to predict exactly where an initial point goes after a long time is akin to trying to predict, after a million tosses of a coin, whether the million-and-first toss will be a head or a tail. But one can model coin-tossing probabilistically, using stochastic tools, and one can try to do the same for trajectories. Mathematicians noticed that many of the maps that they studied fell into one of two categories: “regular”, meaning that the trajectories eventually become stable, or “stochastic”, meaning that the trajectories exhibit chaotic behavior that can be modeled stochastically. This dichotomy of regular vs. stochastic was proved in many special cases, and the hope was that eventually a more-complete understanding would emerge.
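
To see the dichotomy concretely, here is a minimal Python sketch (written for this article, not part of the IMU text) that iterates rules from the same family as the 3x(1–x) map mentioned above. The parameter values 3.2 and 3.9 are illustrative choices: the first produces a trajectory that settles into a stable orbit, the second produces chaotic-looking behavior in which nearby starting points quickly drift apart.

```python
def iterate(r, x0, n):
    """Apply the rule x -> r*x*(1 - x) to x0, n times, and return the final value."""
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
    return x

# "Regular" parameter: the trajectory settles into a stable period-2 orbit.
print([round(iterate(3.2, 0.3, n), 4) for n in (1000, 1001, 1002, 1003)])

# "Chaotic" parameter: two nearly identical starting points end up far apart.
print(iterate(3.9, 0.300000, 60), iterate(3.9, 0.300001, 60))
```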

This hope was realized in a 2003 paper by Avila, Welington de Melo, and Mikhail Lyubich, which brought to a close this long line of research. Avila and his co-authors considered a wide class of dynamical systems – namely, those arising from maps with a parabolic shape, known as unimodal maps – and proved that, if one chooses such a map at random, the map will be either regular or stochastic. Their work provides a unified, comprehensive picture of the behavior of these systems.

Another outstanding result of Avila is his work, with Giovanni Forni, on weak mixing. If one attempts to shuffle a deck of cards by only cutting the deck – that is, taking a small stack off the top of the deck and putting the stack on the bottom – then the deck will not be truly mixed. The cards are simply moved around in a cyclic pattern. But if one shuffles the cards in the usual way, by interleaving them – so that, for example, the first card now comes after the third card, the second card after the fifth, and so on – then the deck will be truly mixed. This is the essential idea of the abstract notion of mixing that Avila and Forni considered. The system they worked with was not a deck of cards, but rather a closed interval that is cut into several subintervals. For example, the interval could be cut into four pieces, ABCD, and then one defines a map on the interval by exchanging the positions of the subintervals so that, say, ABCD goes to DCBA. By iterating the map, one obtains a dynamical system called an “interval exchange transformation”.
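
The ABCD → DCBA exchange described above can be written down directly: cut [0, 1) into four pieces and translate each piece to its new position. The sketch below is an illustration written for this article; the piece lengths are arbitrary values chosen only so that the orbit visibly wanders.

```python
def make_interval_exchange(lengths, new_order):
    """Interval exchange transformation on [0, 1): cut the interval into pieces
    of the given lengths and re-glue the pieces in the order new_order."""
    old_start = [sum(lengths[:i]) for i in range(len(lengths))]
    new_start, pos = {}, 0.0
    for piece in new_order:            # left endpoint of each piece after the exchange
        new_start[piece] = pos
        pos += lengths[piece]
    def T(x):
        for i, (s, l) in enumerate(zip(old_start, lengths)):
            if s <= x < s + l:
                return new_start[i] + (x - s)
        raise ValueError("x must lie in [0, 1)")
    return T

# ABCD -> DCBA, with four pieces of (arbitrarily chosen) lengths summing to 1
T = make_interval_exchange([0.30, 0.24, 0.27, 0.19], [3, 2, 1, 0])
x, orbit = 0.1, []
for _ in range(10):
    x = T(x)
    orbit.append(round(x, 4))
print(orbit)   # the orbit of 0.1 wanders around the interval
```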

Considering the parallel with cutting or shuffling a deck of cards, one can ask whether an interval exchange transformation can truly mix the subintervals. It has long been known that this is impossible. However, there are ways of quantifying the degree of mixing that lead to the notion of “weak mixing”, which describes a system that just barely fails to be truly mixing.

What Avila and Forni showed is that almost every interval exchange transformation is weakly mixing; in other words, if one chooses an interval exchange transformation at random, the overwhelming likelihood is that, when iterated, it will produce a dynamical system that is weakly mixing.

This work is connected to more-recent work by Avila and Vincent Delecroix, which investigates mixing in regular polygonal billiard systems. Billiard systems are used in statistical physics as models of particle motion. Avila and Delecroix found that almost all dynamical systems arising in this context are weakly mixing.

In the two lines of work mentioned above, Avila brought his deep knowledge of the area of analysis to bear on questions in dynamical systems. He has also sometimes done the reverse, applying dynamical systems approaches to questions in analysis. One example is his work on quasi-periodic Schrodinger operators. These are mathematical equations for modeling quantum mechanical systems. One of the emblematic pictures from this area is the Hofstadter butterfly, a fractal pattern named after Douglas Hofstadter, who first came upon it in 1976. The Hofstadter butterfly represents the energy spectrum of an electron moving under an extreme magnetic field.

Physicists were stunned when they noticed that, for certain parameter values in the Schrodinger equation, this energy spectrum appeared to be the Cantor set, which is a remarkable mathematical object that embodies seemingly incompatible properties of density and sparsity. In the 1980s, mathematician Barry Simon popularized the “Ten Martini Problem” (so named by Mark Kac, who offered to buy 10 martinis for anyone who could solve it). This problem asked whether the spectrum of one specific Schrodinger operator, known as the almost-Mathieu operator, is in fact the Cantor set.

Together with Svetlana Jitomirskaya, Avila solved this problem. As spectacular as that solution was, it represents only the tip of the iceberg of Avila’s work on Schrodinger operators. Starting in 2004, he spent many years developing a general theory that culminated in two preprints in 2009. This work establishes that, unlike the special case of the almost-Mathieu operator, general Schrodinger operators do not exhibit critical behavior in the transition between different potential regimes. Avila used approaches from dynamical systems theory in this work, including renormalization techniques.

A final example of Avila’s work is a very recent result that grew out of his proof of a regularization theorem for volume-preserving maps. This proof resolved a conjecture that had been open for thirty years; mathematicians hoped that the conjecture was true but could not prove it. Avila’s proof has unblocked a whole direction of research in smooth dynamical systems and has already borne fruit. In particular, the regularization theorem is a key element in an important recent advance by Avila, Sylvain Crovisier, and Amie Wilkinson. Their work, which is still in preparation, shows that a generic volume-preserving diffeomorphism with positive metric entropy is an ergodic dynamical system.

With his signature combination of tremendous analytical power and deep intuition about dynamical systems, Artur Avila will surely remain a mathematical leader for many years to come.

Born in Brazil in 1979, Artur Avila is also a naturalized French citizen. He received his PhD in 2001 from the Instituto Nacional de Matemática Pura e Aplicada (IMPA) in Rio de Janeiro, where his advisor was Welington de Melo. Since 2003 Avila has been a researcher at the Centre National de la Recherche Scientifique, becoming a Directeur de recherche in 2008; he is attached to the Institut de Mathématiques de Jussieu-Paris Rive Gauche. Since 2009 he has also been a researcher at IMPA. Among his previous honors are the Salem Prize (2006), the European Mathematical Society Prize (2008), the Grand Prix Jacques Herbrand of the French Academy of Sciences (2009), the Michael Brin Prize (2011), the Prêmio of the Sociedade Brasileira de Matemática (2013), and the TWAS Prize in Mathematics (2013) of the World Academy of Sciences.

 

Manjul Bhargava & his works


Manjul Bhargava’s work in number theory has had a profound influence on the field. A mathematician of extraordinary creativity, he has a taste for simple problems of timeless beauty, which he has solved by developing elegant and powerful new methods that offer deep insights.

When he was a graduate student, Bhargava read the monumental Disquisitiones Arithmeticae, a book about number theory by Carl Friedrich Gauss (1777-1855). All mathematicians know of the Disquisitiones, but few have actually read it, as its notation and computational nature make it difficult for modern readers to follow. Bhargava nevertheless found the book to be a wellspring of inspiration. Gauss was interested in binary quadratic forms, which are polynomials ax² + bxy + cy², where a, b, and c are integers. In the Disquisitiones, Gauss developed his ingenious composition law, which gives a method for composing two binary quadratic forms to obtain a third one.

This law became, and remains, a central tool in algebraic number theory. After wading through the 20 pages of Gauss’s calculations culminating in the composition law, Bhargava knew there had to be a better way. Then one day, while playing with a Rubik’s cube, he found it. Bhargava thought about labeling each corner of a cube with a number and then slicing the cube to obtain 2 sets of 4 numbers. Each 4-number set naturally forms a matrix. A simple calculation with these matrices resulted in a binary quadratic form. From the three ways of slicing the cube, three binary quadratic forms emerged. Bhargava then calculated the discriminants of these three forms. (The discriminant, familiar to some as the expression “under the square root sign” in the quadratic formula, is a fundamental quantity associated to a polynomial.) When he found the discriminants were all the same, as they are in Gauss’s composition law, Bhargava realized he had found a simple, visual way to obtain the law.
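
The cube construction is easy to experiment with. The sketch below was written for this article and uses one common convention, taking each form to be det(xM - yN) for a pair of opposite faces M and N; it is an illustration of the idea rather than Bhargava’s own notation. Labeling the eight corners with random integers, the three slicings give three binary quadratic forms whose discriminants always agree.

```python
import random

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def slice_form(M, N):
    """Binary quadratic form A*x^2 + B*x*y + C*y^2 given by det(x*M - y*N)."""
    A = det2(M)
    C = det2(N)
    B = -(M[0][0] * N[1][1] + M[1][1] * N[0][0]
          - M[0][1] * N[1][0] - M[1][0] * N[0][1])
    return (A, B, C)

def cube_slicings(c):
    """The three ways of cutting a 2x2x2 cube c[i][j][k] into two opposite 2x2 faces."""
    return [
        ([[c[0][0][0], c[0][0][1]], [c[0][1][0], c[0][1][1]]],
         [[c[1][0][0], c[1][0][1]], [c[1][1][0], c[1][1][1]]]),
        ([[c[0][0][0], c[0][0][1]], [c[1][0][0], c[1][0][1]]],
         [[c[0][1][0], c[0][1][1]], [c[1][1][0], c[1][1][1]]]),
        ([[c[0][0][0], c[0][1][0]], [c[1][0][0], c[1][1][0]]],
         [[c[0][0][1], c[0][1][1]], [c[1][0][1], c[1][1][1]]]),
    ]

def discriminant(form):
    A, B, C = form
    return B * B - 4 * A * C

# Label the eight corners with random integers; the three discriminants always agree.
cube = [[[random.randint(-9, 9) for _ in range(2)] for _ in range(2)] for _ in range(2)]
forms = [slice_form(M, N) for M, N in cube_slicings(cube)]
print(forms)
print([discriminant(f) for f in forms])
```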

He also realized that he could expand his cube-labeling technique to other polynomials of higher degree (the degree is the highest power appearing in the polynomial; for example, x³ – x + 1 has degree 3). He then discovered 13 new composition laws for higher-degree polynomials. Up until this time, mathematicians had looked upon Gauss’s composition law as a curiosity that happened only with binary quadratic forms. Until Bhargava’s work, no one realized that other composition laws existed for polynomials of higher degree.

One of the reasons Gauss’s composition law is so important is that it provides information about quadratic number fields. A number field is built by extending the rational numbers to include non-rational roots of a polynomial; if the polynomial is quadratic, then one obtains a quadratic number field. The degree of the polynomial and its discriminant are two basic quantities associated with the number field. Although number fields are fundamental objects in algebraic number theory, some basic facts are unknown, such as how many number fields there are for a fixed degree and fixed discriminant. With his new composition laws in hand, Bhargava set about using them to investigate number fields.

Implicit in Gauss’s work is a technique called the “geometry of numbers”; the technique was more fully developed in a landmark 1896 work of Hermann Minkowski (1864-1909). In the geometry of numbers, one imagines the plane, or 3-dimensional space, as populated by a lattice that highlights points with integer coordinates. If one has a quadratic polynomial, counting the number of integer lattice points in a certain region of 3-dimensional space provides information about the associated quadratic number field. In particular, one can use the geometry of numbers to show that, for discriminant with absolute value less than X, there are approximately X quadratic number fields. In the 1960s, a more refined geometry of numbers approach by Harold Davenport (1907-1969) and Hans Heilbronn (1908-1975) resolved the case of degree 3 number fields. And then progress stopped. So a great deal of excitement greeted Bhargava’s work in which he counted the number of degree 4 and degree 5 number fields having bounded discriminant. These results use his new composition laws, together with his systematic development of the geometry of numbers, which greatly extended the reach and power of this technique. The cases of degree bigger than 5 remain open, and Bhargava’s composition laws will not resolve those. However, it is possible that those cases could be attacked using analogues of his composition laws.
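
The claim that there are roughly X quadratic number fields of discriminant up to X in absolute value can be checked numerically. Quadratic fields correspond exactly to fundamental discriminants, so a brute-force count (an illustrative sketch written for this article; the bounds are arbitrary) shows the count growing linearly in X:

```python
def squarefree(n):
    n = abs(n)
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

def is_fundamental_discriminant(D):
    """D corresponds to a quadratic field exactly when D != 0, 1 and either
    D = 1 (mod 4) with D squarefree, or D = 4m with m = 2 or 3 (mod 4) and m squarefree."""
    if D in (0, 1):
        return False
    if D % 4 == 1:
        return squarefree(D)
    if D % 4 == 0:
        m = D // 4
        return m % 4 in (2, 3) and squarefree(m)
    return False

for X in (1000, 5000, 10000):
    count = sum(1 for D in range(-X, X + 1) if is_fundamental_discriminant(D))
    print(X, count, round(count / X, 3))   # the ratio count/X settles near a constant
```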

Recently, Bhargava and his collaborators have used his expansion of the geometry of numbers to produce striking results about hyperelliptic curves. At the heart of this area of research is the ancient question of when an arithmetic calculation yields a square number. One answer Bhargava found is strikingly simple to state: A typical polynomial of degree at least 5 with rational coefficients never takes a square value. A hyperelliptic curve is the graph of an equation of the form y² = a polynomial with rational coefficients.

In the case where the polynomial has degree 3, the graph is called an elliptic curve. Elliptic curves have especially appealing properties and have been the subject of a great deal of research; they also played a prominent role in Andrew Wiles’s celebrated proof of Fermat’s Last Theorem. A key question about a hyperelliptic curve is how one can count the number of points that have rational coordinates and that lie on the curve.

It turns out that the number of rational points is closely related to the degree of the curve. For curves of degree 1 and 2, there is an effective way of finding all the rational points. For degree 5 and higher, a theorem of Gerd Faltings (a 1986 Fields Medalist) says that there are only finitely many rational points. The most mysterious cases are those of degree 3 – namely, the case of elliptic curves – and of degree 4. There is not even any algorithm known for deciding whether a given curve of degree 3 or 4 has finitely many or infinitely many rational points.

Such algorithms seem out of reach. Bhargava took a different tack and asked, what can be said about the rational points on a typical curve? In joint work with Arul Shankar and also with Christopher Skinner, Bhargava came to the surprising conclusion that a positive proportion of elliptic curves have only one rational point and a positive proportion have infinitely many. Analogously, in the case of hyperelliptic curves of degree 4, Bhargava showed that a positive proportion of such curves have no rational points and a positive proportion have infinitely many rational points. These works necessitated counting lattice points in unbounded regions of high-dimensional space, in which the regions spiral outward in complicated “tentacles”. This counting could not have been done without Bhargava’s expansion of the geometry of numbers technique.

Bhargava also used his expansion of the geometry of numbers to look at the more general case of higher degree hyperelliptic curves. As noted above, Faltings’ theorem tells us that for curves of degree 5 or higher, the number of rational points is finite, but the theorem does not give any way of finding the rational points or saying exactly how many there are. Once again, Bhargava examined the question of what happens for a “typical” curve.

When the degree is even, he found that the typical hyperelliptic curve has no rational points at all. Joint work with Benedict Gross, together with follow-up work of Bjorn Poonen and Michael Stoll, established the same result for the case of odd degree. These works also offer quite precise estimates of how quickly the number of curves having rational points decreases as the degree increases. For example, Bhargava’s work shows that, for a typical degree 10 polynomial, there is a greater than 99% chance that the curve has no rational points.

A final example of Bhargava’s achievements is his work with Jonathan Hanke on the so-called “290-Theorem”. This theorem concerns a question that goes back to the time of Pierre de Fermat (1601-1665), namely, which quadratic forms represent all integers? For example, not all integers are the sum of two squares, so x² + y² does not represent all integers. Neither does the sum of three squares, x² + y² + z². But, as Joseph-Louis Lagrange (1736-1813) famously established, the sum of four squares, x² + y² + z² + w², does represent all integers. In 1916, Srinivasa Ramanujan (1887-1920) gave 54 more examples of such forms in 4 variables that represent all integers.
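
A few lines of brute force (an illustrative sketch, not part of the IMU text) reproduce these classical facts: the first integers missed by sums of two squares and by sums of three squares, and Lagrange’s four-square theorem verified up to a modest bound.

```python
def is_sum_of_squares(n, k):
    """Can n be written as a sum of k squares of non-negative integers? (brute force)"""
    if n < 0:
        return False
    if k == 1:
        r = int(round(n ** 0.5))
        return r * r == n
    return any(is_sum_of_squares(n - i * i, k - 1)
               for i in range(int(n ** 0.5) + 1))

print([n for n in range(1, 40) if not is_sum_of_squares(n, 2)])  # 3, 6, 7, 11, 12, ...
print([n for n in range(1, 40) if not is_sum_of_squares(n, 3)])  # 7, 15, 23, 28, 31, 39
print(all(is_sum_of_squares(n, 4) for n in range(1, 2000)))      # True: four squares suffice
```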

What other such “universal” forms could be out there? In the early 1990s, John H. Conway and his students, particularly William Schneeberger and Christopher Simons, looked at this question a different way, asking whether there is a number c such that, if a quadratic form represents every integer less than c, then it represents all integers. Through extensive computations, they conjectured that c could perhaps be taken as small as 290. They made remarkable progress, but it was not until Bhargava and Hanke took up the question that it was fully resolved. They found a set of 29 integers, up to and including 290, such that, if a quadratic form (in any number of variables) represents these 29 integers, then it represents all integers. The proof is a feat of ingenuity combined with extensive computer programming.

In addition to being one of the world’s leading mathematicians, Bhargava is an accomplished musician; he plays the Indian instrument known as the tabla at a professional level. An outstanding communicator, he has won several teaching awards, and his lucid and elegant writing has garnered a prize for exposition.

Bhargava has a keen intuition that leads him unerringly to deep and beautiful mathematical questions. With his immense insight and great technical mastery, he seems to bring a “Midas touch” to everything he works on. He surely will bring more delights and surprises to mathematics in the years to come.

Born in 1974 in Canada, Manjul Bhargava grew up primarily in the USA and also spent much time in India. He received his PhD in 2001 from Princeton University, under the direction of Andrew Wiles. Bhargava became a professor at Princeton in 2003. His honors include the Merten M. Hasse Prize of the Mathematical Association of America (2003), the Blumenthal Award for the Advancement of Research in Pure Mathematics (2005), the SASTRA Ramanujan Prize (2005), the Cole Prize in Number Theory of the American Mathematical Society (2008), the Fermat Prize (2011), and the Infosys Prize (2012). He was elected to the U.S. National Academy of Sciences in 2013.

 

Martin Hairer and his work


Martin Hairer has made a major breakthrough in the study of stochastic partial differential equations by creating a new theory that provides tools for attacking problems that up to now had seemed impenetrable.

The subject of differential equations has its roots in the development of calculus by Isaac Newton and Gottfried Leibniz in the 17th century. A major motivation at that time was to understand the motion of the planets in the solar system. Newton’s laws of motion can be used to formulate a differential equation that describes, for example, the motion of the Earth around the Sun. A solution to such an equation is a function that gives the position of the Earth at any time t. In the centuries since, differential equations have become ubiquitous across all areas of science and engineering to describe systems that change over time.

A differential equation describing planetary motion is deterministic, meaning that it determines exactly where a planet will be at a given time in the future. Other differential equations are stochastic, meaning that they describe systems containing an inherent element of randomness. An example is an equation that describes how a stock price will change over time. Such an equation incorporates a term that represents fluctuations in the stock market price. If one could predict exactly what the fluctuations would be, one could predict the future stock price exactly (and get very rich!). However, the fluctuations, while having some dependence on the initial stock price, are essentially random and unpredictable. The stock-price equation is an example of a stochastic differential equation.
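
As a concrete, if simplified, illustration: a standard textbook stock-price model of this kind is geometric Brownian motion, dS = μS dt + σS dW, where the dW term supplies the random fluctuations. The simulation below was written for this article (the drift, volatility, and time-step values are arbitrary) and simply steps such an equation forward in time.

```python
import math
import random

def simulate_price(s0=100.0, mu=0.05, sigma=0.2, dt=1 / 252, steps=252, seed=1):
    """Euler-Maruyama simulation of dS = mu*S*dt + sigma*S*dW (geometric Brownian motion)."""
    random.seed(seed)
    s = s0
    path = [s]
    for _ in range(steps):
        dW = random.gauss(0.0, math.sqrt(dt))   # the unpredictable fluctuation
        s += mu * s * dt + sigma * s * dW       # deterministic drift plus random shock
        path.append(s)
    return path

print(round(simulate_price()[-1], 2))   # one possible price after a simulated year
```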

In the planetary-motion equation, the system changes with respect to only one variable, namely, time. Such an equation is called an ordinary differential equation (ODE). By contrast, partial differential equations (PDEs) describe systems that change with respect to more than one variable, for example, time and position. Many PDEs are nonlinear, meaning that their terms are not simple proportions – for example, a term might be squared or raised to a higher power. Some of the most important natural phenomena are governed by nonlinear PDEs, so understanding these equations is a major goal for mathematics and the sciences. However, nonlinear PDEs are among the most difficult mathematical objects to understand. Hairer’s work has caused a great deal of excitement because it develops a general theory that can be applied to a large class of nonlinear stochastic PDEs.

An example of a nonlinear stochastic PDE – and one that played an important role in Hairer’s work – is the KPZ equation, which is named for Mehran Kardar, Giorgio Parisi, and Yi-Cheng Zhang, the physicists who came up with the equation in 1986. The KPZ equation describes the evolution over time of the interface between two substances. To get a feel for the nature of this equation, consider the process of liquid crystal display manufacturing. In a simplified model of this process, one imagines drops of the liquid-crystal material being deposited between two closely aligned vertical sheets of glass. The drops interact with other drops, adhering to each other, merging, and spreading out as they settle to the bottom. At a scale much smaller than the scale at which one views this process, the molecules in the drops move in a random way. One can think of this random motion as introducing “white noise” into the system. It creates a rough, irregular interface between the air above and the material accumulating below.

The KPZ equation describes the evolution of this interface over time. Because it includes a white-noise term to describe the random motion of the molecules, the KPZ equation is a stochastic PDE. A solution to the KPZ equation would provide, for any time t and any point along the bottom edge of the glass, the height of the interface above that point.
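
For readers who want to see it written out, the standard one-dimensional form of the KPZ equation is

```latex
\partial_t h(x,t) \;=\; \nu\,\partial_x^2 h(x,t) \;+\; \frac{\lambda}{2}\bigl(\partial_x h(x,t)\bigr)^2 \;+\; \xi(x,t),
```

where h(x, t) is the height of the interface above position x at time t, ν and λ are constants, and ξ is the white-noise term. The derivative terms and the squared derivative are exactly the features discussed next: the solution is too rough to differentiate in the classical sense, and a distribution cannot be squared.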

The challenge the KPZ equation posed is that, although it made sense from the point of view of physics, it did not make sense mathematically. A solution to the KPZ equation should be a mathematical object that represents the rough, irregular nature of the interface. Such an object has no smoothness; in mathematical terms, it is not differentiable. And yet two of the terms in the KPZ equation call for the object to be differentiable. There is a way to sidestep this difficulty by using an object called a distribution.

But then a new problem arises, because the KPZ equation is nonlinear: It contains a square term, and distributions cannot be squared. For these reasons, the KPZ equation was not well defined. Although researchers came up with some technical tricks to ameliorate these difficulties for the special case of the KPZ equation, the fundamental problem of its not being well defined long remained an unresolved issue.

In a spectacular achievement, Hairer overcame these difficulties by describing a new approach to the KPZ equation that allows one to give a mathematically precise meaning to the equation and its solutions. What is more, in subsequent work he used the ideas he developed for the KPZ equation to build a general theory, the theory of regularity structures, that can be applied to a broad class of stochastic PDEs. In particular, Hairer’s theory can be used in higher dimensions (the KPZ equation has one spatial dimension because it models an idealization of the interface as a one-dimensional curve).

The basic idea of Hairer’s approach to the KPZ equation is the following. Instead of making the usual assumption that the small random effects occur on an infinitesimally small scale, he adopted the assumption that the random effects occur on a scale that is small in comparison to the scale at which the system is viewed. Removing the infinitesimal assumption, which Hairer calls “regularizing the noise”, renders an equation that can be solved. The resulting solution is not a solution to KPZ; rather, it can be used as the starting point to construct a sequence of objects that, in the limit, converges to a solution of KPZ. And Hairer proved a crucial fact: the limiting solution is always the same regardless of the kind of noise regularization that is used.

Hairer’s general theory addresses other, higher-dimensional stochastic PDEs that are not well defined. For these equations, as with KPZ, the main challenge is that, at very small scales, the behavior of the solutions is very rough and irregular. If the solution were a smooth function, one could carry out a Taylor expansion, which is a way of approximating the function by polynomials of increasingly higher degree. But the roughness of the solutions means they are not well approximated by polynomials. What Hairer did instead is to define objects, custom-built for the equation at hand, that approximate the behavior of the solution at small scales. These objects then play a role similar to polynomials in a Taylor expansion. At each point, the solution will look like an infinite superposition of these objects.
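
For comparison, the classical Taylor expansion that these custom-built objects replace approximates a smooth function f near a point a by polynomials:

```latex
f(x) \;\approx\; f(a) + f'(a)\,(x-a) + \frac{f''(a)}{2!}\,(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}\,(x-a)^n .
```

In Hairer’s theory, the monomials (x − a)^k are replaced by equation-specific objects that capture the rough small-scale behavior of the solution.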

The ultimate solution is then obtained by gluing together the pointwise superpositions. Hairer established the crucial fact that the ultimate solution does not depend on the approximating objects used to obtain it. Prior to Hairer’s work, researchers had made a good deal of progress in understanding linear stochastic PDEs, but there was a fundamental block to addressing nonlinear cases. Hairer’s new theory goes a long way towards removing that block. What is more, the class of equations to which the theory applies contains several that are of central interest in mathematics and science. In addition, his work could open the way to understanding the phenomenon of universality. Other equations, when rescaled, converge to the KPZ equation, so there seems to be some universal phenomenon lurking in the background. Hairer’s work has the potential to provide rigorous analytical tools to study this universality.

Before developing the theory of regularity structures, Hairer made other outstanding contributions. For example, his joint work with Jonathan Mattingly constitutes a significant advance in understanding a stochastic version of the Navier-Stokes equation, a nonlinear PDE that describes fluid flow. In addition to being one of the world’s top mathematicians, Hairer is a very good computer programmer. While still a school student, he created audio editing software that he later developed and successfully marketed as “the Swiss army knife of sound editing”. His mathematical work does not depend on computers, but he does find that programming small simulations helps develop intuition.

With his commanding technical mastery and deep intuition about physical systems, Hairer is a leader in the field who will doubtless make many further significant contributions.

Born in 1975, Martin Hairer is an Austrian citizen. In 2001, he received his PhD in physics from the University of Geneva, under the direction of Jean-Pierre Eckmann. He is currently Regius Professor of Mathematics at the University of Warwick. His honors include the Whitehead Prize of the London Mathematical Society (2008), the Philip Leverhulme Prize (2008), the Wolfson Research Merit Award of the Royal Society (2009), the Fermat Prize (2013), and the Frohlich Prize of the London Mathematical Society (2014). He was elected a Fellow of the Royal Society in 2014.

 

Maryam Mirzakhani & her work

 


Maryam Mirzakhani has made striking and highly original contributions to geometry and dynamical systems. Her work on Riemann surfaces and their moduli spaces bridges several mathematical disciplines – hyperbolic geometry, complex analysis, topology, and dynamics – and influences them all in return. She gained widespread recognition for her early results in hyperbolic geometry, and her most recent work constitutes a major advance in dynamical systems.

Riemann surfaces are named after the 19th century mathematician Bernhard Riemann, who was the first to understand the importance of abstract surfaces, as opposed to surfaces arising concretely in some ambient space. Mathematicians building on Riemann’s insights understood more than 100 years ago that such surfaces can be classified topologically, i.e. up to deformation, by a single number, namely, the number of handles. This number is called the genus of the surface. The sphere has genus zero, the surface of a coffee cup has genus one, and the surface of a proper pretzel has genus three. Provided that one disregards the precise geometric shape, there is exactly one surface of genus g for every positive integer g.

A surface becomes a Riemann surface when it is endowed with an additional geometric structure. One can think of this geometric structure as a so-called complex structure, which allows one to do complex analysis on the abstract surface. Since the complex numbers involve two real parameters, a surface, which is two-dimensional over the real numbers, has only one complex dimension and is sometimes called a complex curve. The following fact links the theory of Riemann surfaces to algebraic geometry: Every complex curve is an algebraic curve, meaning that the complex curve, although defined abstractly, can be realized as a curve in a standard ambient space, in which it is the zero set of suitably chosen polynomials. Thus, although a Riemann surface is a priori an analytic object defined in terms of complex analysis on abstract surfaces, it turns out to have an algebraic description in terms of polynomial equations.

An alternative but equivalent way of defining a Riemann surface is through the introduction of a geometry that allows one to measure angles, lengths, and areas. The most important such geometry is hyperbolic geometry, the original example of a non-Euclidean geometry discovered by Bolyai, Gauss, and Lobachevsky. The equivalence between complex algebraic and hyperbolic structures on surfaces is at the root of the rich theory of Riemann surfaces.

Mirzakhani’s early work concerns closed geodesics on a hyperbolic surface. These are closed curves whose length cannot be shortened by deforming them. A now-classic theorem proved more than 50 years ago gives a precise way of estimating the number of closed geodesics whose length is less than some bound L. The number of closed geodesics grows exponentially with L; specifically, it is asymptotic to e^L/L for large L. This theorem is called the “prime number theorem for geodesics”, because it is exactly analogous to the usual “prime number theorem” for whole numbers, which estimates the number of primes less than a given size. (In that case the number of primes less than e^L is asymptotic to e^L/L for large L.)

Mirzakhani looked at what happens to the “prime number theorem for geodesics” when one considers only the closed geodesics that are simple, meaning that they do not intersect themselves. The behavior is very different in this case: the growth of the number of geodesics of length at most L is no longer exponential in L but is of the order of L^(6g-6), where g is the genus. Mirzakhani showed that in fact the number is asymptotic to c·L^(6g-6) for large L (going to infinity), where the constant c depends on the hyperbolic structure.
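
Written side by side, the two counting results contrast as follows:

```latex
\#\{\text{closed geodesics of length} \le L\} \;\sim\; \frac{e^{L}}{L},
\qquad
\#\{\text{simple closed geodesics of length} \le L\} \;\sim\; c\,L^{6g-6},
\qquad L \to \infty .
```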

While this is a statement about a single, though arbitrary, hyperbolic structure on a surface, Mirzakhani proved it by considering all such structures simultaneously. The complex structures on a surface of genus g form a continuous, or non-discrete, space, since they have continuous deformations.

While the underlying topological surface remains the same, its geometric shape changes during a deformation. Riemann knew that these deformations depend on 6g – 6 parameters or “moduli”, meaning that the “moduli space” of Riemann surfaces of genus g has dimension 6g – 6. However, this says nothing about the global structure of moduli space, which is extremely complicated and still very mysterious. Moduli space has a very intricate geometry of its own, and different ways of looking at Riemann surfaces lead to different insights into its geometry and structure. For example, thinking of Riemann surfaces as algebraic curves leads to the conclusion that moduli space itself is an algebraic object called an algebraic variety.

In Mirzakhani’s proof of her counting result for simple closed geodesics, another structure on moduli space enters, a so-called symplectic structure, which, in particular, allows one to measure volumes (though not lengths). Generalizing earlier work of G. McShane, Mirzakhani establishes a link between the volume calculations on moduli space and the counting problem for simple closed geodesics on a single surface. She calculates certain volumes in moduli space and then deduces the counting result for simple closed geodesics from this calculation.

This point of view led Mirzakhani to new insights into other questions about moduli space. One consequence was a new and unexpected proof of a conjecture of Edward Witten (a 1990 Fields Medalist), one of the leading figures in string theory. Moduli space has many special loci inside it that correspond to Riemann surfaces with particular properties, and these loci can intersect. For suitably chosen loci, these intersections have physical interpretations. Based on physical intuition and calculations that were not entirely rigorous, Witten made a conjecture about these intersections that grabbed the attention of mathematicians. Maxim Kontsevich (a 1998 Fields Medalist) proved Witten’s conjecture through a direct verification in 1992.

Fifteen years later, Mirzakhani’s work linked Witten’s deep conjecture about moduli space to elementary counting problems of geodesics on individual surfaces. In recent years, Mirzakhani has explored other aspects of the geometry of moduli space. As mentioned before, the moduli space of Riemann surfaces of genus g is itself a geometric object of 6g – 6 dimensions that has a complex, and, in fact, algebraic structure. In addition, moduli space has a metric whose geodesics are natural to study. Inspired by the work of Margulis, Mirzakhani and her co-workers have proved yet another analogue of the prime number theorem, in which they count closed geodesics in moduli space, rather than on a single surface. She has also studied certain dynamical systems (meaning systems that evolve with time) on moduli space, proving in particular that the system known as the “earthquake flow”, which was introduced by William Thurston (a 1982 Fields Medalist), is chaotic.

Most recently, Mirzakhani, together with Alex Eskin and, in part, Amir Mohammadi, made a major breakthrough in understanding another dynamical system on moduli space that is related to the behavior of geodesics in moduli space. Non-closed geodesics in moduli space are very erratic and even pathological, and it is hard to obtain any understanding of their structure and how they change when perturbed slightly. However, Mirzakhani et al. have proved that complex geodesics and their closures in moduli space are in fact surprisingly regular, rather than irregular or fractal. It turns out that, while complex geodesics are transcendental objects defined in terms of analysis and differential geometry, their closures are algebraic objects defined in terms of polynomials and therefore have certain rigidity properties.

This work has garnered accolades among researchers in the area, who are working to extend and build on the new result. One reason the work sparked so much excitement is that the theorem Mirzakhani and Eskin proved is analogous to a celebrated result of Marina Ratner from the 1990s. Ratner established rigidity for dynamical systems on homogeneous spaces – these are spaces in which the neighborhood of any point looks just the same as that of any other point. By contrast, moduli space is totally inhomogeneous: Every part of it looks totally different from every other part. It is astounding to find that the rigidity in homogeneous spaces has an echo in the inhomogeneous world of moduli space.

Because of its complexities and inhomogeneity, moduli space has often seemed impossible to work on directly. But not to Mirzakhani. She has a strong geometric intuition that allows her to grapple directly with the geometry of moduli space. Fluent in a remarkably diverse range of mathematical techniques and disparate mathematical cultures, she embodies a rare combination of superb technical ability, bold ambition, far-reaching vision, and deep curiosity. Moduli space is a world in which many new territories await discovery. Mirzakhani is sure to remain a leader as the explorations continue.

Born in 1977 in Tehran, Iran, Maryam Mirzakhani received her PhD in 2004 from Harvard University, where her advisor was Curtis McMullen. From 2004 to 2008 she was a Clay Mathematics Institute Research Fellow and an assistant professor at Princeton University. She is currently a professor at Stanford University. Her honors include the 2009 Blumenthal Award for the Advancement of Research in Pure Mathematics and the 2013 Satter Prize of the American Mathematical Society.

 




1 comment

  1. Many of the people around us have all kinds of talent. Through their own fault, or society’s, they vanish into the unread pages of history. Let us all learn to recognize ourselves. Who knows, perhaps tomorrow one of our brothers or sisters will bring glory to our country and to the whole world.
