Turbulence renormalization and the Euler equation: 2


In the early 1970s, my former PhD supervisor Sam Edwards asked me to be the external examiner for one of his current students. It was only a few years since I had been on the receiving end of this process, so naturally I approached the task in a merciful way! Anyway, if memory serves, the thesis was about a statistical theory of surface roughness, and it cited various papers that applied the methods of theoretical physics to practical engineering problems such as the properties of polymer solutions, the stochastic behaviour of structures and (of course) turbulence. This crystallized a problem that was then troubling me: if you regarded yourself as belonging to this approach (and I did), what would you call it? The absence of a recognisable generic title when filling in research grant applications, or making other statements about one’s research, seemed to be a handicap.

Ultimately I decided on the term renormalization methods, but the term renormalization did not really come into general use, even in physics, until the success of the renormalization group (or RG) in the early 1980s. In fact, the common element in these problems is that one is dealing with systems whose degrees of freedom interact with each other, so another possible title would be many-body theory. We can also expect to observe collective behaviour, which suggests yet another possible label. We will begin by looking briefly at two pioneering theories in condensed matter physics, as comparing and contrasting these will be helpful when we go on to the theory of turbulence.

We begin with the Weiss theory of ferromagnetism, which dates from 1907 (see Section 3.2 of [1]) and in which a piece of magnetic material was pictured as being made up of tiny magnets at the molecular level. This predates quantum theory, and nowadays we would think in terms of lattice spins. There are two steps in the theory. First, Weiss considered the effect of an applied magnetic field $B$ producing a magnetization $M$ in the specimen, and argued that the tendency of the spins to line up spontaneously would lead to a molecular field $B_m$, so that one could expect an effective field $B_E$ given by: \[B_E = B + B_m.\] This is the mean-field approximation.

Second, Weiss made the assumption \[B_m\propto M.\] This is the self-consistent approximation. Combining the two, and writing the magnetization as a fraction of its saturation value $M_\infty$, an updated treatment gives: \[\frac{M}{M_\infty}= \tanh\left[\frac{JZ}{kT}\frac{M}{M_\infty}\right],\] where $J$ is the strength of the interaction between spins, $Z$ is the number of nearest neighbours of any one spin, $k$ is the Boltzmann constant and $T$ is the absolute temperature. This expression can be solved graphically for the value of the critical temperature $T_C$: see [1].
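Although the graphical construction is the traditional route, the self-consistent equation is also easy to solve numerically. Here is a minimal Python sketch (my own illustration, not taken from [1]): writing $m = M/M_\infty$ and introducing a reduced temperature $t = kT/JZ$, the equation becomes $m = \tanh(m/t)$, and the critical point lies at $t = 1$, i.e. $kT_C = JZ$.

```python
import math

def magnetization(t, tol=1e-12, max_iter=200_000):
    """Fixed-point solution of m = tanh(m / t), where t = kT / (J Z).

    Starting from saturation (m = 1), the iteration converges to the
    nonzero root for t < 1 and to m = 0 for t > 1 (slowly near t = 1).
    """
    m = 1.0
    for _ in range(max_iter):
        m_new = math.tanh(m / t)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# The spontaneous magnetization vanishes as t -> 1 from below,
# identifying the critical temperature as kT_C = JZ.
for t in (0.5, 0.8, 0.95, 0.99, 1.1):
    print(f"kT/JZ = {t:4.2f}   M/M_inf = {magnetization(t):.4f}")
```

Below the critical point the iteration settles on the nonzero root; above it, only the trivial solution $m = 0$ survives.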

Our second theory dates from 1922 and considers electrons (in an electrolyte, say), evaluating the effect of all the other electrons on the potential due to any one electron. For any one electron in isolation, we have the Coulomb potential, thus: \[V(r)\sim \frac{e}{r},\] where $e$ is the electronic charge and $r$ is the distance from the electron. This theory too has mean-field and self-consistent steps (see [1] for details) and leads to the so-called screened potential, \[V_s(r) \sim \frac{e \exp[-r/l_D]}{r},\] where $l_D$ is the Debye length, which depends on the electronic charge and the number density of electrons. This potential falls off much faster than the Coulomb form and is interpreted in terms of the screening effect of the cloud of electrons around the one that we are considering.

However, we can also interpret it as a form of charge renormalization, in which the free-field charge $e$ is replaced by a charge which has been renormalized by the interactions with the other electrons, or: \[e \rightarrow e \exp[-r/l_D].\] Note that the renormalized charge depends on $r$, and this type of scale dependence is absolutely characteristic of renormalized quantities. In the next blog post we will discuss statistical theories of turbulence in terms of what we have learned here. For the sake of completeness, we should also mention that the idea of an `effective’ or `apparent’ or `turbulence’ viscosity was introduced in 1877 by Boussinesq. For details, see the book by Hinze [2]. This may possibly be the first recognition of a renormalization process.
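To make the scale dependence concrete, the following Python sketch (illustrative only; the unit choices are mine) tabulates the ratio of the screened potential to the bare Coulomb potential, i.e. the factor $\exp[-r/l_D]$ by which the charge is effectively renormalized at separation $r$.

```python
import math

e = 1.0    # electronic charge, arbitrary units (illustrative choice)
l_D = 1.0  # Debye length, same arbitrary units

for r in (0.1, 0.5, 1.0, 2.0, 5.0):
    bare = e / r                            # Coulomb potential
    screened = e * math.exp(-r / l_D) / r   # Debye screened potential
    # The ratio is exp(-r / l_D): the scale-dependent renormalized charge.
    print(f"r/l_D = {r:3.1f}   e_eff/e = {screened / bare:.4f}")
```

At separations small compared with the Debye length the bare charge is recovered, while for $r \gg l_D$ the other electrons screen it almost completely: the renormalized charge is a property of the scale on which the system is probed.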

[1] W. D. McComb. Renormalization Methods: A Guide for Beginners. Oxford University Press, 2004.
[2] J. O. Hinze. Turbulence. McGraw-Hill, New York, 1st edition, 1959. (2nd edition, 1975).

Turbulence renormalization and the Euler equation: 1


The term renormalization comes from particle physics, but the concept originated in condensed matter physics and could indeed be said to have its roots in the study of turbulence in the late 19th and early 20th centuries. It has become a dominant theme in statistical theories of turbulence since the mid-20th century, and a very simple summary of this can be found in my post of 16 April 2020, which includes the sentence: `In the case of turbulence, it is probably quite widely recognized nowadays that an effective viscosity may be interpreted as a renormalization of the fluid kinematic viscosity.’ Some further discussion (along with references) may be found in my posts of 30 April and 7 May 2020, but the point that concerns me here is this: how can renormalization apply to the Euler equation, when its relationship to the Navier-Stokes equation (NSE) corresponds to zero viscosity?

It is well known that a randomly excited and spectrally truncated Euler system corresponds to an equilibrium ensemble in statistical mechanics. This means that it must exhibit energy equipartition at long times (depending on initial conditions), with constant spectral energy density $C(k) = A$ and hence an energy spectrum of the form $E(k) \sim A k^2$. Indeed this was demonstrated as long ago as 1964 by Kraichnan in the course of testing his DIA statistical closure [1]. However, in 1993, She and Jackson studied a constrained Euler system in the context of reducing the number of degrees of freedom needed to describe Navier-Stokes turbulence [2]. This involves an Euler equation restricted to wavenumber modes $k_{min}\leq k \leq k_{max}$, embedded in a forced NSE system, with nonlinear transfer of energy in from the forced modes with $k<k_{min}$ and nonlinear dissipation of energy to the modes with $k>k_{max}$, where viscous dissipation is present. This is a very interesting paper, which I mention here for completeness and hope to return to at some later time. For the moment I want to concentrate on two rather simpler, but still important, studies of the incompressible Euler equation [3], [4].
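The $k^2$ form is simply mode counting: if every Fourier mode of the truncated system carries the same mean energy, then the energy in a spherical shell of radius $k$ is proportional to the number of modes in that shell, which grows as $4\pi k^2$. A minimal Python sketch (my own illustration, with an arbitrary truncation wavenumber) makes the point.

```python
import numpy as np

# Integer wavevectors of a spectrally truncated 3D Fourier lattice.
N = 32
ix = np.arange(-N, N + 1)
kx, ky, kz = np.meshgrid(ix, ix, ix, indexing="ij")
kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()

# Equipartition: every mode carries equal energy, so the shell spectrum
# E(k) is proportional to the number of modes with |k| in [k - 1/2, k + 1/2).
counts, _ = np.histogram(kmag, bins=np.arange(0.5, N + 1.0))
for k in (4, 8, 16, 24):
    print(f"k = {k:2d}   modes / k^2 = {counts[k - 1] / k**2:.2f}   (4*pi = 12.57)")
```

The ratio tends to $4\pi$, confirming that equipartition $C(k) = A$ implies $E(k) = 4\pi k^2 C(k) \sim A k^2$.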

In physical terms, it is well known that the presence of the viscous term in the NSE, with its $k^2$ weighting, breaks the symmetry of the nonlinear term common to both equations, and ensures a mean flux of energy in the direction of increasing wavenumber. This symmetry can also be broken by adopting as initial condition an energy spectrum which does not correspond to the equipartition solution. The resulting evolution of a spectrum peaked initially near, but not at, the origin is shown in [1], along with a good discussion of the behaviour of the Euler equation as related to the NSE. Evidently the Euler equation may behave like the NSE as a transient, while ultimately tending to equipartition. This behaviour has been studied by Cichowlas et al. [3], using direct numerical simulation, and by Bos and Bertoglio [4], using the EDQNM spectral closure. Both find long-lived transients in which there is a Kolmogorov-type $k^{-5/3}$ spectrum at smaller wavenumbers and an equipartition $k^2$ spectrum at higher wavenumbers. In both cases, the equipartition range acts as a sink, and hence gives rise to an effect like that of molecular viscosity.
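As a caricature of the transient spectrum reported in [3] and [4], one can patch a Kolmogorov range onto an equipartition range at a crossover wavenumber $k_c$. The Python sketch below is purely illustrative: the crossover wavenumber and the amplitudes are chosen for convenience, not taken from either paper.

```python
k_c = 100.0              # assumed crossover wavenumber (illustrative)
A = k_c**(-11.0 / 3.0)   # chosen so that A * k^2 meets k^(-5/3) at k = k_c

def E(k):
    """Toy transient spectrum: Kolmogorov-like k^(-5/3) below the
    crossover, equipartition k^2 above it (continuous at k = k_c)."""
    return k**(-5.0 / 3.0) if k < k_c else A * k**2

for k in (1.0, 10.0, 100.0, 300.0):
    print(f"k = {k:6.1f}   E(k) = {E(k):.3e}")
```

At long times, of course, the equipartition range takes over the whole truncated spectrum, consistent with the eventual approach to absolute equilibrium.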

In [4], as in [2], there is some consideration of the relevance to large-eddy simulation, but it should be noted that in both investigations the explicit scales are not subject to molecular viscosity or its analogue. For the sake of contrast, we note that the operational treatment of NSE turbulence by Young and McComb [5] provides a subgrid sink for explicit modes which are themselves governed by the NSE. This may not be a huge difference in practice, but it is important to be precise about these matters.

However, in the present context the really interesting aspect of [3] and [4] is that, in the absence of viscosity, they obtain the sort of turbulent spectrum which may be interpreted in terms of an effective turbulent viscosity, and hence in terms of self-renormalization. In the next post, we will examine this further, beginning with a more detailed look at what is meant by the term renormalization.

[1] R. H. Kraichnan. Decay of isotropic turbulence in the Direct-Interaction Approximation. Phys. Fluids, 7(7):1030-1048, 1964.
[2] Z.-S. She and E. Jackson. Constrained Euler System for Navier-Stokes Turbulence. Phys. Rev. Lett., 70:1255, 1993.
[3] C. Cichowlas, P. Bonaïti, F. Debbasch, and M. Brachet. Effective Dissipation and Turbulence in Spectrally Truncated Euler Flows. Phys. Rev. Lett., 95:264502, 2005.
[4] W. J. T. Bos and J.-P. Bertoglio. Dynamics of spectrally truncated inviscid turbulence. Phys. Fluids, 18:071701, 2006.
[5] A. J. Young and W. D. McComb. Effective viscosity due to local turbulence interactions near the cutoff wavenumber in a constrained numerical simulation. J. Phys. A, 33:133-139, 2000.