
Reynolds averaging re-formulated.
At the beginning of the 1980s I was still involved in experimental work on drag reduction, while on the theoretical side I had begun the numerical evaluation of the LET theory. One day I went into the lab to help a student who was having problems with his laser anemometer. In those days we used a digital voltmeter to obtain the mean velocity and an rms voltmeter to obtain (you’ve guessed it!) the rms velocity. Actually, by that stage we had begun recording the anemometer output voltage and taking it away for A/D conversion and subsequent processing on a computer. But we still used the voltmeters for setting things up, and essentially the rule of thumb was to turn up the time constant until the reading became steady.

It was while my student was playing with these things that I started thinking that it was Reynolds’s introduction of averaging which had created the closure problem, and (this is very profound!) that if we didn’t average then we wouldn’t have that problem. So how would it be if we averaged over a very short time? Would we have a small version of the closure problem: one, perhaps, that would be more easily solved? Then one could average the resulting smoothed system over a slightly longer time; and so on. I began to picture replacing Reynolds averaging with a series of smoothing operations, over progressively longer times, with some approximate calculation at each stage. So one might envisage replacing the Reynolds equation with a form in which the Reynolds stresses did not occur as such, but were represented by a constitutive relationship derived during the preceding iterations, plus the unaveraged portion of the nonlinear term.

I began working on this idea and ultimately it was published as reference [1]. What I want to do here is make a couple of general points about this analysis, but first I will explain the basic idea. Suppose we have a quasi-steady mean flow, in which external conditions (e.g. applied pressure gradients, boundary conditions) vary with time on scales which are long compared with those of the turbulent energy transfer processes. Then we may define the mean velocity as:

(1)   \begin{equation*}\overline{U(t)} = \frac{1}{2T}\int_{-T}^{T}\, U(t+s)\,ds,\end{equation*}

where 2T is long enough to smooth out the turbulent fluctuations, but shorter than the timescales of the external variations. Of course, if the mean flow is actually steady, then we can take the limit where T\rightarrow \infty in the usual way. In either case, we obtain the fluctuating velocity u as:

    \[u=U-\overline{U},\]

where trivially it follows that \overline{u} =0. Note that this analysis is in real space, but to keep things simple I’m omitting space variables and the vector nature of velocity.
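
As a concrete illustration of this decomposition, here is a minimal numerical sketch (all of the signal parameters are invented for the demonstration): a synthetic velocity record is smoothed with a centred moving average of width 2T, as in equation (1), and the fluctuation u = U - \overline{U} is then recovered.

    import numpy as np

    # Synthetic "velocity" record: a slowly varying mean plus fast fluctuations.
    dt = 1e-3
    t = np.arange(0.0, 10.0, dt)
    rng = np.random.default_rng(0)
    U = 1.0 + 0.1 * np.sin(0.2 * np.pi * t)     # slow external variation
    U += 0.3 * rng.standard_normal(t.size)      # fast "turbulent" fluctuations

    # Centred moving average over [-T, T]: equation (1) with window width 2T.
    T = 0.25                                    # smoothing half-width (seconds)
    window = int(2 * T / dt)
    kernel = np.ones(window) / window           # uniform weight of unit area
    U_bar = np.convolve(U, kernel, mode="same")

    # Fluctuating part; its average over the record is close to zero.
    u = U - U_bar
    print("mean of u:", u.mean())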

Next let us generalise the above smoothing operation to

(2)   \begin{equation*} \langle U(t)\rangle_0 =\int_{-\infty}^{\infty}\, U(t+s)\, a_0(s)\, ds, \end{equation*}

where

    \[\int_{-\infty}^{\infty} \, a_0(s)\, ds =1.\]

The analogue of the fluctuating velocity of Reynolds averaging is then defined in the same way, thus:

    \[u_{0}(t) = U(t) - \langle U(t) \rangle_0.\]

Evidently the actual limits of integration are determined in practice by the choice of the weight function a_0(t). I began with the natural choice of a top hat: the Heaviside unit function multiplied by 1/(2\tau_0) and defined on -\tau_0 \leq t \leq \tau_0, where \tau_0 is very small compared to any relevant turbulence timescale, but otherwise arbitrary. With this choice, our smoothing operation is just the first operation above, with T=\tau_0. Then repeating the process with \tau_1 > \tau_0, and so on, for ever-increasing smoothing times, would ultimately take us back to Reynolds averaging. But this is not the choice of a_0 that I made in [1], and I will come back to that.
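
To make the iterated-smoothing idea concrete, here is a hedged numerical sketch (my own construction for this post, not the calculation of [1]): the top-hat version of equation (2) is applied repeatedly with \tau_0 < \tau_1 < \tau_2 < \dots, and each pass removes a further band of fast fluctuations, so that the smoothed field tends toward the Reynolds mean.

    import numpy as np

    def smooth(U, tau, dt):
        """Convolve with a top-hat weight of half-width tau, normalised to
        unit area: the smoothing operation of equation (2)."""
        n = max(1, int(2 * tau / dt))
        return np.convolve(U, np.ones(n) / n, mode="same")

    dt = 1e-3
    t = np.arange(0.0, 10.0, dt)
    rng = np.random.default_rng(1)
    U = 1.0 + 0.3 * rng.standard_normal(t.size)  # illustrative signal

    # Progressively longer smoothing times (arbitrary illustrative values).
    taus = [0.01, 0.05, 0.25, 1.0]
    field = U
    for n, tau in enumerate(taus):
        field = smooth(field, tau, dt)
        resid = U - field                        # the as-yet-unsmoothed part
        print(f"pass {n}: tau = {tau:5.2f}, rms residual = {resid.std():.3f}")

The point of the iteration is that each stage only has to deal with the fluctuations on timescales between one smoothing time and the next, rather than with the full range at once.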

At that time, the success of the renormalization group (RG) in the theory of critical phenomena was becoming well known, and it occurred to me that my underlying iterative method could be turned into an RG calculation. To do this, I dropped the shear-flow aspects and specialised the theory to isotropic turbulence. Then I used a Fourier transform with respect to time to introduce the angular frequency \omega, and invoked the Taylor hypothesis to introduce the wavenumber k. Hence I had turned my iterative averaging over time into an iterative form of mode elimination, which led to a fixed point for the effective viscosity arising from the eliminated modes.
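
Schematically, the link between the two steps can be put as follows (my paraphrase, with U_c a convection velocity as assumed in Taylor’s hypothesis, rather than notation taken from [1]): frequency and wavenumber are related by

    \[\omega = U_c\, k,\]

so smoothing away all frequencies above some \omega_0 is equivalent to eliminating all wavenumbers above k_0 = \omega_0/U_c, and the sequence of increasing smoothing times \tau_0 < \tau_1 < \dots becomes a sequence of progressively eliminated wavenumber shells.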

This was the form of the paper submitted for publication. The referee was Bob Kraichnan and, although broadly happy with the paper, he expressed a concern that the wavenumber bands were not clearly defined. I agreed with this and fixed the problem by choosing a new weight function to be

    \[a_0(t) = (1/\tau_0)\, \mathrm{sinc}\,(\pi t/\tau_0),\]

where \mathrm{sinc} is the sine of the argument divided by the argument, \mathrm{sinc}(x) = \sin(x)/x, and this is how the paper was published.
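
To see why this choice meets the referee’s concern, note the standard Fourier result that, with a_0 as above, the transform of the weight function is a sharp cutoff in frequency:

    \[\int_{-\infty}^{\infty} a_0(t)\, e^{-i\omega t}\, dt = \begin{cases} 1, & |\omega| < \pi/\tau_0; \\ 0, & |\omega| > \pi/\tau_0; \end{cases}\]

so the smoothing \langle \cdot \rangle_0 acts as an ideal low-pass filter, and successive smoothing times \tau_n define sharply bounded frequency bands, which the Taylor hypothesis converts into sharply bounded wavenumber bands.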

There were two broad consequences of this. First, I am left with the feeling that I didn’t actually do what I set out to do, namely reformulate Reynolds averaging. Unfortunately, due to the pandemic, my older notebooks are not available to me, so a fresh look at that aspect will have to wait. Secondly, this was the beginning of a number of years working on RG applied to turbulence. There was a lot more to it than I imagined at that early stage, and an overview and exposition of the current situation can be found in reference [2]. I intend to follow this post with some remarks and observations on the application of RG to turbulence in future posts.

[1] W. D. McComb. Reformulation of the statistical equations for turbulent shear flow. Phys. Rev. A, 26(2):1078-1094, 1982.
[2] W. D. McComb. Asymptotic freedom, non-Gaussian perturbation theory, and the application of renormalization group theory to isotropic turbulence. Phys. Rev. E, 73:026303, 2006.
