Berezin integration of Grassmann variables

1. Introduction

When I first came across the presentation of linear algebra in terms of Grassmann’s exterior products, I was struck by its elegance. An introduction to linear algebra in terms of Grassmann’s ideas can be found here. The Grassmann approach is so much more intuitive that, once learned, there is no going back to the old way of thinking. For example, although in college I found determinants and permanents relatively easy to understand using the conventional approach, I only really came to understand the meaning of Pfaffians after learning exterior algebra (a.k.a Grassmann algebra).

Curiously, Grassmann did not have a university education in mathematics. Rather, he was trained as a linguist. Yet his contributions to mathematics are widely recognized today: terms such as Grassmann numbers, Grassmann algebra and Grassmannian manifold are all named in his honor.

Also fascinating is how Grassmann numbers make their appearance very naturally in quantum field theory. Specifically, they are used in the path integral formulation for fermionic fields. Readers interested in finding out more about the connection with fermionic path integrals should refer to standard textbooks, for instance the books by Weinberg or the one by Ryder.

Some years ago I derived, for my own convenience, a few of the basic identities satisfied by Grassmann variables, in the context of differentiation and integration. Integration of Grassmann variables is known as Berezin integration. This article is based on my old study notes.

2. Grassmann numbers

Consider the algebra generated by {N} Grassmann numbers {x_i}, {i=1,2,\ldots,N}, that anticommute according to

\displaystyle \{x_i, x_j\} := x_i x_j + x_j x_i =0 ~. \ \ \ \ \ (1)

Each generator is nilpotent of degree 2: setting {j=i} in (1) gives {2x_i^2=0}, hence {x_i^2=0}.

What is the dimension of this algebra (as a vector space)? Consider the set of all monomials in the generators. Since each of the {N} generators {x_1,\ldots, x_N} is nilpotent of degree 2, a nonzero monomial can contain each generator at most once, and reordering the factors only changes the sign. A monomial is thus determined, up to sign, by the subset of generators it contains, so there are exactly {2^N} linearly independent monomials. It is then easy to check that the most general function of the generators can be expressed as a linear combination of these monomials. Indeed, all power series (e.g., Taylor series) terminate, i.e. the most general function is a polynomial in the generators. The dimension of the Grassmann algebra is therefore {2^N}.
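As a sanity check on this count, the {2^N} basis monomials can be enumerated directly, since a monomial is determined (up to sign) by the subset of generators it contains. Below is a minimal sketch, with a monomial encoded as a sorted tuple of generator indices (an encoding chosen purely for illustration):

```python
from itertools import combinations

def basis_monomials(n):
    """All basis monomials of the Grassmann algebra on n generators,
    each encoded as a sorted tuple of generator indices; () is the unit."""
    return [m for k in range(n + 1)
              for m in combinations(range(1, n + 1), k)]

basis = basis_monomials(3)
print(basis)       # [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
print(len(basis))  # 8 == 2**3
```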

It is worth calling attention to an antisymmetry property of the coefficients of monomials. Consider the monomial term {x_1 x_2}. By definition, it equals {-x_2 x_1}. So the representation of a function as a linear combination of monomials is not unique. However, if we define the coefficients to be antisymmetric, then we recover uniqueness. For instance, {a_{12} x_1 x_2 = a_{21} x_2 x_1} if the coefficients satisfy {a_{12}=-a_{21}}.
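The relations (1) are easy to mechanize. Below is a minimal sketch of the product in this algebra, with an element encoded as a dict mapping sorted tuples of generator indices to coefficients; this encoding is my own choice, not part of the text. The sign is the parity of the swaps needed to sort the concatenated index lists:

```python
def gmul(f, g):
    """Multiply two Grassmann-algebra elements, each a dict mapping
    sorted tuples of generator indices to (commuting) coefficients."""
    out = {}
    for ma, ca in f.items():
        for mb, cb in g.items():
            if set(ma) & set(mb):
                continue  # a repeated generator makes the term vanish
            # parity of the permutation that sorts the concatenation ma + mb
            inversions = sum(1 for i in ma for j in mb if i > j)
            key = tuple(sorted(ma + mb))
            out[key] = out.get(key, 0) + (-1) ** inversions * ca * cb
    return out

x1, x2 = {(1,): 1}, {(2,): 1}
print(gmul(x1, x2))  # {(1, 2): 1}   i.e.  x_1 x_2
print(gmul(x2, x1))  # {(1, 2): -1}  i.e. -x_1 x_2, so {x_1, x_2} = 0
print(gmul(x1, x1))  # {}            i.e.  x_1^2 = 0
```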

3. Differentiation

Because Grassmann variables do not commute, derivatives acting from the left and from the right can be defined, and in general they differ. Here, I consider only left derivatives, which act on the expression to their right. We define the derivative as follows for a single generator:

\displaystyle {\partial x_i \over \partial x_i} =1 ~. \ \ \ \ \ (2)

To extend the derivative to a monomial, we must first bring the matching generator {x_i} all the way to the left, multiplying by {(-1)^k} where {k} is the number of generators to the left of {x_i} in the original monomial. Then, the derivative is obtained by dropping {x_i}. For example,

\displaystyle {\partial \over \partial x_2}\, x_1 x_2 x_3 = {\partial \over \partial x_2} \left( - x_2 x_1 x_3 \right) = - x_1 x_3 ~. \ \ \ \ \ (3)

The derivative then extends to all functions via linearity, i.e. differentiation is a linear operator.
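The rule above can be sketched as a short function acting on a single monomial, again encoded as a sorted tuple of generator indices (an illustrative encoding, not from the text):

```python
def d_monomial(mono, i):
    """Left derivative d/dx_i of a monomial (a sorted tuple of indices).
    Returns (sign, remaining_monomial); sign is 0 if x_i is absent."""
    if i not in mono:
        return 0, ()
    k = mono.index(i)            # generators standing to the left of x_i
    return (-1) ** k, mono[:k] + mono[k + 1:]

# Eq. (3): d/dx_2 (x_1 x_2 x_3) = -x_1 x_3
print(d_monomial((1, 2, 3), 2))  # (-1, (1, 3))
```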

The chain rule holds in the usual manner.

The product rule holds as usual if the first factor {f_1} is of even degree in the generators:

\displaystyle {\partial \over \partial x_p} f_1 f_2 = \left ( {\partial \over \partial x_p} f_1 \right) f_2 + f_1 {\partial \over \partial x_p} f_2 ~. \ \ \ \ \ (4)

However, if the first factor has odd degree, then the second term acquires a minus sign. Since a general function need not be homogeneous and may have terms of both odd and even degree, I consider it safer to assume that the product rule does not hold in general, and instead to calculate term by term explicitly, unless you know what you are doing.
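A small sketch makes the sign failure concrete. For {f_1 = x_1}, which is odd, the naive product rule predicts {+x_1} for {\partial (x_1 x_2)/\partial x_2}, whereas the correct answer is {-x_1}; the standard graded Leibniz rule, noted in the comments, restores the sign. The dict encoding of elements is my own illustration:

```python
def gmul(f, g):
    """Grassmann product; elements are dicts {sorted index tuple: coeff}."""
    out = {}
    for ma, ca in f.items():
        for mb, cb in g.items():
            if set(ma) & set(mb):
                continue
            sign = (-1) ** sum(1 for i in ma for j in mb if i > j)
            key = tuple(sorted(ma + mb))
            out[key] = out.get(key, 0) + sign * ca * cb
    return out

def gd(f, i):
    """Left derivative d/dx_i, applied term by term."""
    out = {}
    for mono, c in f.items():
        if i not in mono:
            continue
        k = mono.index(i)
        rest = mono[:k] + mono[k + 1:]
        out[rest] = out.get(rest, 0) + (-1) ** k * c
    return out

f1, f2 = {(1,): 1}, {(2,): 1}      # f1 = x_1 (odd degree), f2 = x_2
print(gd(gmul(f1, f2), 2))         # {(1,): -1}: d/dx_2 (x_1 x_2) = -x_1
# Naive rule: (d f1) f2 + f1 (d f2) = 0 + x_1 = +x_1, the wrong sign;
# the graded rule (d f1) f2 + (-1)^(deg f1) f1 (d f2) gives -x_1.
```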

4. Berezin integration

Now we come to the most interesting part of this article! Is it possible to define an integral for Grassmann numbers? The usual antiderivative of a variable {x}

\displaystyle \int x dx = \frac 1 2 x^2

would be zero if {x} were a Grassmann variable, so it does not make sense to define integration in this manner. However, one can define the equivalent of a definite Riemann integral. The definite integral

\displaystyle \int_{-\infty} ^\infty f(x) dx \ \ \ \ \ (5)

has the following properties:

(i) Translation invariance:

\displaystyle \int_{-\infty} ^\infty f(x) dx = \int_{-\infty} ^\infty f(x+y) dx \ \ \ \ \ (6)

(ii) Linearity:

\displaystyle \int (a+ b f(x)) dx = a \int dx + b \int f(x) dx ~. \ \ \ \ \ (7)

Hence, we will require that the integral of a Grassmann number also have these two properties. Let {x} and {y} now denote Grassmann numbers.

Then first we require translation invariance.

\displaystyle \int f(x) dx = \int f(x+y) dx . \ \ \ \ \ (8)

Given that {f(x)} is at most a linear function of {x}, let us write

\displaystyle f(x) = a + b x \ \ \ \ \ (9)

where {a} and {b} are complex numbers. Substituting, and invoking linearity, we get

\displaystyle \begin{array}{rcl}  & & \int f(x+y) dx \\ & & =\int (a + b(x+y)) dx \\ & & =(by)\int dx + \int (a+bx) dx\\ & & = (by)\int dx + \int f(x) dx~. \end{array}

Hence

\displaystyle \int f(x+y) dx - \int f(x) dx = b y \int dx = 0 ~. \ \ \ \ \ (10)

so that we are forced to assume

\displaystyle \int dx = 0 ~. \ \ \ \ \ (11)

Hence, we get

\displaystyle \int (a+ bx) dx= b \int x dx~. \ \ \ \ \ (12)

Berezin chose the convention that

\displaystyle \int x dx = 1 \ \ \ \ \ (13)

although other conventions are possible for the constant. Below I use Berezin’s convention.

From these observations, we define for Grassmann numbers,

\displaystyle \int dx_i =0 ~. \ \ \ \ \ (14)

(If this is too difficult or “weird” to accept, try to imagine that the would-be antiderivative {x_i} vanishes at the boundary.)

Next we define

\displaystyle \int x_i dx_i = 1 ~. \ \ \ \ \ (15)

Note that the integral does not produce a term like {x_i^2/2}, the antiderivative one would expect from a Riemann integral of a normal (i.e., non-Grassmann) variable; such a term would in any case vanish by nilpotency. Instead, Berezin integration behaves like standard differentiation: compare (15) and (14) with the facts that the usual derivative of {x_i} is 1 and the usual derivative of {1} is 0.

Moreover, the differential {dx_i} anticommutes with {x_i}. In fact, the anticommutation property holds generally:

\displaystyle \{dx_i, dx_j\} = \{dx_i, x_j\} =0 ~. \ \ \ \ \ (16)

Multiple integrals are defined, in analogy with Fubini’s theorem, as iterated integrals:

\displaystyle  \iint x_j x_i dx_i dx_j \equiv \int x_j \left( \int x_i dx_i \right) dx_j ~. \ \ \ \ \ (17)

Note that there are other sign conventions for Berezin integrals. Physicists usually use the convention

\displaystyle  \iint x_i x_j dx_i dx_j =1 ~. \ \ \ \ \ (18)

In what follows I use the former convention, i.e. the iterated-integral definition (17).

Next, we come to one of the most interesting and unexpected properties of the Berezin integral. Let {f(x)} represent a function of all the generators. Recall that the most general function of the Grassmann algebra generators is a polynomial. Hence, the most general function can be written

\displaystyle f (x) = f_0 + \sum_k f_1(k) x_{k} + \sum_{k_1<k_2} f_2({k_1},{k_2}) x_{k_1} x_{k_2} + \ldots + f_N (1,2,\ldots, N) x_{1} x_2 \ldots x_{N} ~. \ \ \ \ \ (19)

Indeed, every element of the Grassmann algebra can be written as such.

Now consider the multiple Berezin integral

\displaystyle \int f(x) ~ dx_N \ldots dx_2 dx_1 ~. \ \ \ \ \ (20)

(I write the differentials in decreasing order so that no sign factor appears below; with the ordering {dx_1 dx_2 \ldots dx_N}, the result would instead pick up a factor {(-1)^{N(N-1)/2}}.)

Note that all monomial terms of degree {k} with {k<N} will vanish, because each of the {N-k} iterated integrals over the variables not appearing in the monomial vanishes separately, by (14). Only the monomial of maximal degree {N} survives:

\displaystyle \int f(x) ~ dx_N \ldots dx_2 dx_1 = f_N(1,2,\ldots,N) ~. \ \ \ \ \ (21)
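This projection onto the top coefficient is easy to verify mechanically. In the sketch below (dict encoding chosen for illustration), each single integral implements the rules of the text: terms lacking {x_i} are killed by (14), and {x_i} is moved to the rightmost slot before applying (15). I integrate innermost-first in the order {x_N, \ldots, x_1}, i.e. with the measure written {dx_N \ldots dx_1}, for which no overall sign factor arises:

```python
def berezin(f, i):
    """Berezin integral over dx_i of an element f, encoded as a dict
    mapping sorted generator-index tuples to coefficients."""
    out = {}
    for mono, c in f.items():
        if i not in mono:
            continue                        # int dx_i = 0 kills this term
        k = mono.index(i)
        sign = (-1) ** (len(mono) - 1 - k)  # hops to the rightmost slot
        rest = mono[:k] + mono[k + 1:]
        out[rest] = out.get(rest, 0) + sign * c
    return out

# f = 5 + 2 x_1 + 7 x_1 x_2 x_3, with N = 3
f = {(): 5, (1,): 2, (1, 2, 3): 7}
for i in (3, 2, 1):                         # innermost integral first
    f = berezin(f, i)
print(f)  # {(): 7} -- only the top coefficient survives
```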

5. Change of variables

Riemann integrals satisfy

\displaystyle \int f(ax) dx = \frac 1 a \int f(x) dx ~. \ \ \ \ \ (22)

We will show that Berezin integrals satisfy instead

\displaystyle \int f(ax) dx = a \int f(x) dx ~. \ \ \ \ \ (23)

The reason for this opposite behavior is that Berezin integration is actually similar to (standard) differentiation.

Let {y=ax} for Grassmann variables {x} and {y} and consider that by definition

\displaystyle \int y \,dy = \int x \,dx =1 ~. \ \ \ \ \ (24)

So

\displaystyle \int y \,dy = \int a x \,dy = \int x \,dx ~. \ \ \ \ \ (25)

Hence

\displaystyle a \,dy = dx ~, \ \ \ \ \ (26)

which means that

\displaystyle dy = \frac {dx} a ~. \ \ \ \ \ (27)

In standard calculus, we would instead have {dy= a\,dx}, so Berezin differentials scale opposite to what one would expect from standard calculus.

Now let us generalize to {N} generators. Let {y_i= \sum_{j} a_{ij} x_j}. Those with some familiarity with exterior products will recognize that the product

\displaystyle y_1 y_2 \ldots y_N \ \ \ \ \ (28)

corresponds to the exterior product of maximal grade, so that we naturally expect the determinant to make an appearance:

\displaystyle y_1 y_2 \ldots y_N = \det(a) ~x_1 x_2 \ldots x_N ~ . \ \ \ \ \ (29)

Moreover, because the differentials scale inversely to the generators, we have

\displaystyle \det(a)~dy_1 dy_2 \ldots dy_N = dx_1 dx_2 \ldots dx_N ~ . \ \ \ \ \ (30)
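Identity (29) can be checked by brute force: expand the product {y_1 y_2 y_3} in the Grassmann algebra and compare the coefficient of {x_1 x_2 x_3} against the Leibniz-formula determinant. The encoding and the example matrix are my own illustration:

```python
from itertools import permutations

def gmul(f, g):
    """Grassmann product; elements are dicts {sorted index tuple: coeff}."""
    out = {}
    for ma, ca in f.items():
        for mb, cb in g.items():
            if set(ma) & set(mb):
                continue
            sign = (-1) ** sum(1 for i in ma for j in mb if i > j)
            key = tuple(sorted(ma + mb))
            out[key] = out.get(key, 0) + sign * ca * cb
    return out

def det(a):
    """Leibniz formula (sum over permutations), for comparison."""
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        parity = sum(1 for i in range(n)
                       for j in range(i + 1, n) if p[i] > p[j])
        term = (-1) ** parity
        for i in range(n):
            term *= a[i][p[i]]
        total += term
    return total

a = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]   # an arbitrary 3x3 example
prod = {(): 1}
for i in range(3):                      # y_i = sum_j a[i][j] x_j
    prod = gmul(prod, {(j + 1,): a[i][j] for j in range(3)})
print(prod[(1, 2, 3)], det(a))          # 8 8
```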

6. Gaussian integrals

Consider the following Riemann integrals:

\displaystyle \begin{array}{rcl} \int_{\Bbb R} e^{-ax^2} ~dx & = & \sqrt {\frac \pi a} \\ \iint_{\Bbb R^2} e^{-a(x^2 + y^2)} ~dxdy & =& \frac \pi a ~. \end{array}

We will next derive an analog of the above for the Berezin integral. The analog of the first integral is zero, due to the nilpotent property. We thus look at the Berezin analog of the second:

\displaystyle \begin{array}{rcl} \iint e^{-axy} ~dx dy &=& \iint (1 -axy ) ~dx dy \\ &=& -\iint axy ~dx dy \\ &=& \iint ay x ~dx dy \\ & =& a ~. \end{array}

Note that {a} appears in the numerator rather than the denominator. Moreover, there is no factor of {\pi}. (Other normalization conventions exist, as mentioned previously, which I do not discuss here.)

Let {x_1,\ldots,x_N} and {y_1,\ldots,y_N} be generators, so that there are {2N} generators in total. Moreover, let

\displaystyle dx dy = dx_1 dy_1 dx_2 dy_2\ldots dx_N dy_N~.

Now consider the multiple Gaussian Berezin integral:

\displaystyle \int e^{-\sum_{ij}x_i A_{ij} y_j} dx dy ~ . \ \ \ \ \ (31)

Let us change basis in order to diagonalize the matrix {A_{ij}}, via a unitary transformation. In the new variables {x'} and {y' }, the transformed matrix {A'} is diagonal, so that the exponential factors into a product of terms such as {\exp[- x'_i A'_{ii} y'_i]}. Hence, the full integral can be written as a product of simple Gaussian integrals and the value of the full Gaussian integral will simply be

\displaystyle \int e^{-\sum_{ij}x_i A_{ij} y_j} dx dy = \prod _i A'_{ii} = \det A' = \det A~. \ \ \ \ \ (32)

Here we have used the fact that unitary transformations leave the determinant invariant.
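Result (32) can also be verified directly for small {N} by expanding the (terminating) exponential series in the Grassmann algebra and performing the iterated integrals, with no diagonalization needed. The sketch below labels {x_i} by the index {i} and {y_j} by {N+j}, and integrates innermost-first in the order {x_1, y_1, \ldots, x_N, y_N}, matching the measure {dx_1 dy_1 \ldots dx_N dy_N}; the encoding and example matrix are my own:

```python
from fractions import Fraction

def gmul(f, g):
    """Grassmann product; elements are dicts {sorted index tuple: coeff}."""
    out = {}
    for ma, ca in f.items():
        for mb, cb in g.items():
            if set(ma) & set(mb):
                continue
            sign = (-1) ** sum(1 for i in ma for j in mb if i > j)
            key = tuple(sorted(ma + mb))
            out[key] = out.get(key, 0) + sign * ca * cb
    return out

def berezin(f, i):
    """Berezin integral over dx_i: move x_i rightmost, then drop it."""
    out = {}
    for mono, c in f.items():
        if i in mono:
            k = mono.index(i)
            rest = mono[:k] + mono[k + 1:]
            out[rest] = out.get(rest, 0) + (-1) ** (len(mono) - 1 - k) * c
    return out

def gexp(q, kmax):
    """exp(q) for a nilpotent even element: the series terminates."""
    result, power, fact = {(): Fraction(1)}, {(): Fraction(1)}, 1
    for k in range(1, kmax + 1):
        power, fact = gmul(power, q), fact * k
        for m, c in power.items():
            result[m] = result.get(m, 0) + c / fact
    return result

N = 2
A = [[1, 2], [3, 4]]                        # det A = -2
# exponent -sum_ij x_i A_ij y_j, with x_i -> i and y_j -> N + j
q = {(i + 1, N + j + 1): Fraction(-A[i][j])
     for i in range(N) for j in range(N)}
f = gexp(q, N)                              # q^(N+1) = 0, so N terms suffice
for i in (1, N + 1, 2, N + 2):              # dx_1 dy_1 dx_2 dy_2, innermost first
    f = berezin(f, i)
print(f[()])                                # -2, equal to det A
```

Exact rational arithmetic via `Fraction` keeps the series coefficients free of floating-point noise.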

In contrast, for a Riemann integral the correct expression is

\displaystyle \int_{\Bbb R^N} e^{-x^T Ax} ~d^Nx = \sqrt {\frac {\pi^N} {\det(A)}} ~. \ \ \ \ \ (33)

So, besides the factors of {\pi}, the determinant of {A} appears in the numerator instead of in the denominator for the Berezin integral.

7. A Gaussian integral in terms of a Pfaffian

Let us use a change of variable and define

\displaystyle \begin{array}{rcl} x_j &=& {\frac 1 {\sqrt 2}}~ (z_j^{(1)} + i z_j^{(2)}) \\ y_j &=& {\frac 1 {\sqrt 2}} ~(z_j^{(1)} - i z_j^{(2)}) ~. \end{array}

\displaystyle \begin{array}{rcl} dx_j dy_j &=& \det \left[ \begin{array}{cc} \frac 1{\sqrt 2} & \frac i {\sqrt 2} \\ \frac 1{\sqrt 2} & -\frac i {\sqrt 2} \end{array} \right]^{-1} dz_j^{(1)} dz_j^{(2)} \\ &=& i dz_j^{(1)} dz_j^{(2)} ~. \end{array}

Note that the Jacobian matrix appears inverted because Grassmann differentials scale inversely under a change of variables, as explained above.

Let {A} be an antisymmetric matrix of dimension {N\times N} with {N} even. Consider

\displaystyle \sum_{ij} x_i A_{ij} y_j = \sum_{ij} \frac {A_{ij}} 2 [ z_i^{(1)} z_j^{(1)} - i z_i^{(1)} z_j^{(2)} + i z_i^{(2)} z_j^{(1)} + z_i^{(2)} z_j^{(2)} ] ~. \ \ \ \ \ (34)

Since {A} is antisymmetric, the cross terms will cancel, so that

\displaystyle \sum_{ij} x_i A_{ij} y_j = \frac 1 2 \sum_{ij} ( z_i^{(1)} A_{ij }z_j^{(1)} + z_i^{(2)} A_{ij }z_j^{(2)} )~ . \ \ \ \ \ (35)

Integrating the exponential of the above, substituting into (32), and remembering that {N} is even, so that {i^N (-1)^{\frac 1 2 N(N-1)} = (-1)^{N/2}(-1)^{N/2} = 1}, we get

\displaystyle \begin{array}{rcl} \int e^{-\sum_{ij}x_i A_{ij} y_j} dx dy &=& i^{N} \int e^{-\frac 1 2 \sum_{ij} ( z_i^{(1)} A_{ij }z_j^{(1)} + z_i^{(2)} A_{ij }z_j^{(2)} ) } dz_1^{(1)} dz_1^{(2)} \ldots dz_N^{(1)} dz_N^{(2)} \\ &=& i^N (-1)^{\frac 1 2 N (N-1)} \int e^{-\frac 1 2 \sum_{ij} ( z_i^{(1)} A_{ij }z_j^{(1)} + z_i^{(2)} A_{ij }z_j^{(2)} ) } dz_1^{(1)} \ldots dz_N^{(1)} dz_1^{(2)} \ldots dz_N^{(2)} \\ &=& \left[ \int e^{-\frac 1 2 \sum_{ij} z_i A_{ij }z_j } dz \right]^2 ~. \end{array}

Recall the following identity for the Pfaffian of an antisymmetric even-dimensional matrix:

\displaystyle {\rm Pf}(A)^2 = \det(A) ~, \ \ \ \ \ (36)

often written {\rm Pf}(A) = \sqrt{\det(A)} with an appropriate choice of sign.

Since the left-hand side above equals {\det A} by (32), and since {-A} is also antisymmetric with {\det(-A)=\det(A)} for even {N}, taking the square root yields, with the conventional choice of sign,

\displaystyle \int e^{\frac 1 2 \sum_{ij} z_i A_{ij }z_j } dz = {\rm Pf}(A) ~ \ \ \ \ \ (37)

for any even-dimensional antisymmetric matrix {A}.
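Identity (37) can likewise be checked mechanically for a {4\times 4} antisymmetric matrix, by comparing the Berezin integral of {e^{\frac 1 2 \sum_{ij} z_i A_{ij} z_j}} (integrated with the measure {dz_4 dz_3 dz_2 dz_1}, innermost first) against the Pfaffian computed from its standard recursive expansion along the first row. The encoding and example matrix are my own:

```python
from fractions import Fraction

def gmul(f, g):
    """Grassmann product; elements are dicts {sorted index tuple: coeff}."""
    out = {}
    for ma, ca in f.items():
        for mb, cb in g.items():
            if set(ma) & set(mb):
                continue
            sign = (-1) ** sum(1 for i in ma for j in mb if i > j)
            key = tuple(sorted(ma + mb))
            out[key] = out.get(key, 0) + sign * ca * cb
    return out

def berezin(f, i):
    """Berezin integral over dz_i: move z_i rightmost, then drop it."""
    out = {}
    for mono, c in f.items():
        if i in mono:
            k = mono.index(i)
            rest = mono[:k] + mono[k + 1:]
            out[rest] = out.get(rest, 0) + (-1) ** (len(mono) - 1 - k) * c
    return out

def gexp(q, kmax):
    """exp(q) for a nilpotent even element: the series terminates."""
    result, power, fact = {(): Fraction(1)}, {(): Fraction(1)}, 1
    for k in range(1, kmax + 1):
        power, fact = gmul(power, q), fact * k
        for m, c in power.items():
            result[m] = result.get(m, 0) + c / fact
    return result

def pfaffian(A):
    """Recursive expansion along the first row (A antisymmetric, even dim)."""
    n = len(A)
    if n == 0:
        return 1
    total = 0
    for j in range(1, n):
        keep = [r for r in range(n) if r not in (0, j)]
        minor = [[A[r][c] for c in keep] for r in keep]
        total += (-1) ** (j + 1) * A[0][j] * pfaffian(minor)
    return total

A = [[0, 1, 2, 3], [-1, 0, 4, 5], [-2, -4, 0, 6], [-3, -5, -6, 0]]
# (1/2) sum_ij z_i A_ij z_j = sum_{i<j} A_ij z_i z_j
q = {(i + 1, j + 1): Fraction(A[i][j])
     for i in range(4) for j in range(i + 1, 4)}
f = gexp(q, 2)                  # degree-2 element in 4 generators: q^3 = 0
for i in (4, 3, 2, 1):          # measure dz_4 dz_3 dz_2 dz_1, innermost first
    f = berezin(f, i)
print(f[()], pfaffian(A))       # 8 8
```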

8. Berezin integration as a contraction

My favorite way of thinking about Berezin integration is in terms of interior products in Grassmann algebras. (Note: interior products are not the same as inner products.) In fact, interior products are how I explicitly calculate more difficult Berezin integrals in practice. If time permits, I may write something up in the future on this topic. This idea is of course not new. It is known that Berezin integrals are a type of contraction, see here.
