Brazilian scientists turn down medals in repudiation of President Bolsonaro

Source:

https://en.mercopress.com/2021/11/08/brazilian-scientists-turn-down-medals-in-repudiation-of-president-bolsonaro


A group of Brazilian scientists who were awarded the National Order of Scientific Merit have decided to turn down the honours after President Jair Bolsonaro chose to remove two of them from the list.

The decision came in solidarity after Bolsonaro refused to give the award to a researcher who had conducted a study on the ineffectiveness of chloroquine against the coronavirus. Chloroquine is a drug against malaria which Bolsonaro has defended throughout the COVID-19 pandemic.

In a public letter released this weekend, 21 researchers from different universities and scientific centres have renounced the National Order of Scientific Merit, after the head of state excluded Marcus Vinícius Guimaraes Lacerda of the Oswaldo Cruz Foundation medical research centre (Fiocruz) and Adele Schwartz Benzaken, director of Fiocruz for the Amazon.

A study by Lacerda shows that chloroquine, a drug used against malaria and lupus, is not only ineffective in patients with coronavirus but, when administered in higher doses, can also cause arrhythmias in people with heart conditions.

Benzaken, for her part, had served as director of the Ministry of Health department in charge of analyzing and investigating AIDS and viral hepatitis before being dismissed when Bolsonaro came to power.

Both had been included by the Ministry of Science and Technology in the list of those who would receive the award this year.

“As scientists, we do not tolerate how denialism in general, peer harassment and recent cuts in federal budgets for science and technology have been used as tools to roll back the important advances made by the Brazilian scientific community in the last decades,” the researchers said in a letter published by Folha de Sao Paulo.

Bolsonaro, who since the beginning of COVID-19 has minimized its effects and even recommended chloroquine as a medication, removed the two scientists from the list included in the decree published Saturday in an extraordinary issue of the Official Gazette.

The scientists resigning their medals addressed their letter to the Ministry of Science and Technology: “We consider our presence on the list gratifying and we are very honoured by the possibility of receiving one of the highest recognitions a scientist can receive in the country, but a tribute offered by a government that not only ignores science but also actively boycotts the recommendations of specialists is not compatible with our careers.”

According to an investigation by Congress, the Government’s omissions in the face of the pandemic contributed to making Brazil the second country with the most deaths in the world, with more than 600,000 victims, and the third with the most cases, at 21.8 million infections.

The parliamentary committee that investigated the situation accused the president of nine crimes, including crimes against humanity, violation of sanitary measures and irregular use of public money.

Derivation of the Fermi-Dirac and Bose-Einstein distributions

Below we derive in standard textbook fashion the Fermi-Dirac and Bose-Einstein distributions from the perspective of the grand canonical ensemble, which in a way is the natural ensemble to consider.

Consider a system of {N} identical quantum particles. Let {n_j} be the number of particles occupying the single-particle state {|j\rangle}, and let {\epsilon_j} be the energy of that state. The occupation number {n_j} is limited to {n_j=0,~1} for fermions, due to the Pauli exclusion principle. For bosons the occupation numbers can be arbitrarily large.

The total energy is thus given by

\displaystyle E= \sum_{j} \epsilon _j n_j ~, \ \ \ \ \ (1)

and the total particle number by

\displaystyle N= \sum_j n_j ~. \ \ \ \ \ (2)

Our goal is to calculate the mean occupation number {\overline{ n_j}} for fermions and bosons when they are in thermodynamic equilibrium, at temperature {T} and with chemical potential {\mu}.

1. The grand canonical partition function

For fixed chemical potential {\mu}, temperature parameter {\beta} and volume {V}, the grand canonical partition function is given by

\displaystyle \Xi(\beta, \mu, V) = \sum _{\rm all~states} \exp [-\beta (E-\mu N)] ~. \ \ \ \ \ (3)

In the situation previously described, we can write it as

\displaystyle \begin{array}{rcl} \Xi &=& \sum _{n_1} \sum _{n_2} \sum _{n_3} \ldots \exp \bigg[ \sum_j -\beta (n_j \epsilon_j - \mu n_j)\bigg] \\ &=& \sum _{n_1} \sum _{n_2} \sum _{n_3} \ldots \prod_j \exp [ -\beta (n_j \epsilon_j - \mu n_j) ] \\ &=& \bigg[\sum _{n_1} \exp [ -\beta (n_1 \epsilon_1 - \mu n_1) ] \bigg] \bigg[\sum _{n_2} \exp [ -\beta (n_2 \epsilon_2 - \mu n_2) ] \bigg] \ldots \end{array}

Hence,

\displaystyle \Xi = \prod_j ~\sum _{n_j} \exp [-\beta n_j (\epsilon_j - \mu)] ~. \ \ \ \ \ (4)

In other words, {\Xi} factorizes into a product over single-particle states, where each factor is a sum over the occupation numbers of the {j}-th state. The latter sum is easy to calculate for bosons and even easier for fermions.
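As a quick sanity check, the factorization (4) can be verified numerically for a small fermionic system by comparing a brute-force sum over all occupation-number configurations with the product over single-particle states. Below is a minimal Python sketch, assuming numpy is available; the three energy levels and all parameter values are arbitrary illustrative choices.

```python
import itertools
import numpy as np

# Arbitrary illustrative parameters
beta, mu = 1.0, 0.5
eps = [0.1, 0.7, 1.3]   # three single-particle energies

# Brute-force sum over all fermionic configurations (each n_j = 0 or 1)
Xi_brute = sum(
    np.exp(-beta * sum(n * (e - mu) for n, e in zip(ns, eps)))
    for ns in itertools.product([0, 1], repeat=len(eps))
)

# Factorized form (4): a product over levels of single-level sums
Xi_factored = np.prod([1.0 + np.exp(-beta * (e - mu)) for e in eps])

print(Xi_brute, Xi_factored)   # the two values should agree
```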

The mean occupation numbers can then be calculated as a weighted average for the grand canonical ensemble:

\displaystyle \begin{array}{rcl} \overline{n_j} &=& \frac{ \sum _{n_1,n_2\ldots} n_j \exp \bigg[ \sum_j -\beta (n_j \epsilon_j - \mu n_j)\bigg] } { \sum _{n_1,n_2\ldots} \exp \bigg[ \sum_j -\beta (n_j \epsilon_j - \mu n_j)\bigg] } \\ &=& \frac {1}{\Xi} \frac{\partial \Xi}{\partial(- \beta \epsilon_j)} \end{array}

so that

\displaystyle \overline{n_j} = -\frac{1 }{ \beta } \frac{\partial \log \Xi}{\partial\epsilon_j} ~. \ \ \ \ \ (5)

Substituting for {\Xi} from (4) we get

\displaystyle \begin{array}{rcl} \overline{n_j} &=& -\frac{1 }{ \beta } \frac{\partial}{\partial\epsilon_j} \log \left[ \prod_k ~\sum _{n_k} \exp [-\beta n_k (\epsilon_k - \mu)] \right] \\ &=& -\frac{1 }{ \beta } \frac{\partial}{\partial\epsilon_j} \sum_k \log \left[ \sum _{n_k} \exp [-\beta n_k (\epsilon_k - \mu)] \right] \\ &=& \sum_k -\frac{1 }{ \beta } \frac{\partial}{\partial\epsilon_j} \log \left[ \sum _{n_k} \exp [-\beta n_k (\epsilon_k - \mu)] \right]~. \end{array}

Unless {k=j} the derivative vanishes, so that

\displaystyle \overline{n_j} = -\frac{1 }{ \beta } \frac{\partial }{\partial\epsilon_j} \log \left[ \sum _{n_j} \exp [-\beta n_j (\epsilon_j - \mu)] \right] ~. \ \ \ \ \ (6)
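Equation (6) can also be checked numerically with a central finite difference. The minimal sketch below does this for a single fermionic level (so {n_j=0,~1}), comparing the derivative of the log of the single-level sum with the direct weighted average; the parameter values are arbitrary.

```python
import numpy as np

beta, mu, eps_j = 1.0, 0.5, 0.8   # arbitrary illustrative values

def log_sum(e):
    """Log of the single-level fermionic sum over n_j = 0, 1."""
    return np.log(sum(np.exp(-beta * n * (e - mu)) for n in (0, 1)))

# Direct weighted average of n_j over the two allowed occupations
weights = [np.exp(-beta * n * (eps_j - mu)) for n in (0, 1)]
n_mean_direct = sum(n * w for n, w in zip((0, 1), weights)) / sum(weights)

# Equation (6) evaluated with a central finite difference in eps_j
h = 1e-6
n_mean_eq6 = -(log_sum(eps_j + h) - log_sum(eps_j - h)) / (2 * h * beta)

print(n_mean_direct, n_mean_eq6)   # should agree
```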

2. Fermi-Dirac statistics

For fermions {n_j} cannot be greater than 1. So

\displaystyle \sum _{n_j} \exp [-\beta n_j (\epsilon_j - \mu)] = 1+ \exp [-\beta (\epsilon_j - \mu)]~. \ \ \ \ \ (7)

Hence,

\displaystyle \begin{array}{rcl} \overline{n_j} &=& -\frac{1 }{ \beta } \frac{\partial }{\partial\epsilon_j} \log (1+ \exp [-\beta (\epsilon_j - \mu)])\\ &=& -\frac{1 }{ \beta } \frac{ -\beta \exp [-\beta (\epsilon_j - \mu)]} {(1+ \exp [-\beta (\epsilon_j - \mu)])}\\ &=& \frac{ \exp [-\beta (\epsilon_j - \mu)]} {(1+ \exp [-\beta (\epsilon_j - \mu)])} ~. \end{array}

Finally we get

\displaystyle \overline{n_j} = \frac{1 } {\exp [\beta (\epsilon_j - \mu)]+1} ~. \ \ \ \ \ (8)
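A direct numerical check of (8): for a single level, the weighted average of {n_j} over the two allowed occupations reproduces the Fermi-Dirac formula. The values below are arbitrary illustrative choices.

```python
import numpy as np

beta, mu, eps_j = 1.0, 0.5, 0.8   # arbitrary illustrative values

# Direct weighted average over the two allowed occupations n_j = 0, 1
weights = [np.exp(-beta * n * (eps_j - mu)) for n in (0, 1)]
n_mean_direct = sum(n * w for n, w in zip((0, 1), weights)) / sum(weights)

# Closed form (8)
n_mean_fd = 1.0 / (np.exp(beta * (eps_j - mu)) + 1.0)

print(n_mean_direct, n_mean_fd)   # should agree
```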

3. Bose-Einstein Statistics

For bosons we have

\displaystyle \sum _{n_j} \exp [-\beta n_j (\epsilon_j - \mu)] = \sum _{n_j} (\exp [-\beta (\epsilon_j - \mu)])^{n_j} \ \ \ \ \ (9)

which is a geometric series.

Recall that for {|r|<1} we can write

\displaystyle (1-r) \sum_{n=0}^\infty r^n = 1~, \ \ \ \ \ (10)

so that

\displaystyle \sum_{n=0}^\infty r^n = {1 \over 1-r}~, \ \ \ \ \ (11)

from which, setting {r = \exp [-\beta (\epsilon_j - \mu)]} (so convergence requires {\epsilon_j > \mu}), we get

\displaystyle \sum _{n_j} \exp [-\beta n_j (\epsilon_j - \mu)] = {1 \over 1 - \exp [-\beta (\epsilon_j - \mu)]}~. \ \ \ \ \ (12)

Substituting, we get

\displaystyle \begin{array}{rcl} \overline{n_j} &=& -\frac{1 }{ \beta } \frac{\partial }{\partial\epsilon_j} \log \left[ {1 \over 1 - \exp [-\beta (\epsilon_j - \mu)]} \right] \\ &=& -\frac{1 }{ \beta } \frac{\partial }{\partial\epsilon_j} \log \big[ 1 - \exp [-\beta (\epsilon_j - \mu)] \big]^{-1} \\ &=& \frac{1 }{ \beta } \frac{\partial }{\partial\epsilon_j} \log \big[ 1 - \exp [-\beta (\epsilon_j - \mu)] \big] \\ &=& \frac{1 }{ \beta } \bigg[ { \beta \exp [-\beta (\epsilon_j - \mu)] \over 1 - \exp [-\beta (\epsilon_j - \mu)]} \bigg] \\ &=& { \exp [-\beta (\epsilon_j - \mu)] \over 1 - \exp [-\beta (\epsilon_j - \mu)]} . \end{array}

Finally we get

\displaystyle \overline{n_j} = \frac{1 } {\exp [\beta (\epsilon_j - \mu)]-1} ~. \ \ \ \ \ (13)
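A similar check of (13): truncating the bosonic occupation sum at a large cutoff reproduces the Bose-Einstein formula, provided {\epsilon_j > \mu} so that the geometric series converges. Again, the values below are arbitrary.

```python
import numpy as np

beta, mu, eps_j = 1.0, -0.2, 0.5   # arbitrary values with eps_j > mu
n_cut = 200                        # truncation of the occupation sum

ns = np.arange(n_cut + 1)
weights = np.exp(-beta * ns * (eps_j - mu))

n_mean_direct = np.sum(ns * weights) / np.sum(weights)   # truncated sum over n_j
n_mean_be = 1.0 / (np.exp(beta * (eps_j - mu)) - 1.0)    # closed form (13)

print(n_mean_direct, n_mean_be)   # should agree to high accuracy
```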

4. Summary

The Fermi-Dirac and Bose-Einstein distributions can be written as

\displaystyle \overline{n(\epsilon)}= \frac{1 } {\exp [\beta (\epsilon - \mu)] \pm 1} ~, \ \ \ \ \ (14)

with the plus sign for Fermi-Dirac statistics and the minus sign for Bose-Einstein statistics.

A neat identity for the determinant of a power of (I+A)

Here we prove a slight variation on the well-known identity for a complex matrix A:

\displaystyle {\displaystyle \det (I+A) =\sum _{k=0}^{\infty } {\frac {1}{k!}}\left(-\sum _{j=1}^{\infty }{\frac {(-1)^{j}}{j}}{\rm tr} \left(A^{j}\right)\right)^{k} ~.} \ \ \ \ \ (1)

Here I is the identity matrix. We prove below the following for \alpha \in \Bbb R \setminus \{ 0\}:

\displaystyle {\displaystyle \det [\hspace{1mm}(I+A)^\alpha\hspace{1mm}] =\sum _{k=0}^{\infty } {\frac {1}{k!}}\left(-\alpha\sum _{j=1}^{\infty }{\frac {(-1)^{j}}{j}}{\rm tr} \left(A^{j}\right)\right)^{k} ~.} \ \ \ \ \ (2)

The above formulas are valid in an analytic sense whenever the sum on the right converges. But independently of convergence they are always true if interpreted in terms of formal power series.

First, recall that

\displaystyle \det (\exp(A)) = \exp({\rm tr} (A))~. \ \ \ \ \ (3)

It is actually quite easy to prove (3). We can proceed as follows in 3 steps: (i) The equation clearly holds for diagonal matrices; (ii) since the determinant and trace are invariant with respect to similarity transformations, the formula is also clearly true for all diagonalizable matrices; (iii) finally, the diagonalizable matrices are dense in the set of all square matrices and the determinant and trace are continuous functions, which means that the formula must hold true at all limit points, i.e. for all matrices.
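The identity (3) is also easy to check numerically for a random, generally non-diagonal matrix. Here is a minimal sketch, assuming numpy and scipy are available (scipy's expm computes the matrix exponential).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))   # a random complex matrix

lhs = np.linalg.det(expm(A))   # det(exp(A))
rhs = np.exp(np.trace(A))      # exp(tr(A))

print(np.allclose(lhs, rhs))   # expected: True
```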

If we replace {A} with {\alpha \log (I+A)}, and note that {\exp[\alpha \log (I+A)] = (I+A)^\alpha}, we get

\displaystyle \det [\hspace{1mm}(I+A)^\alpha\hspace{1mm}] = \exp\big[{\rm tr} [\alpha \log (I+A)]\big]~. \ \ \ \ \ (4)

Recall the Mercator series

\displaystyle \log (1+ x) = \sum_{ j =1}^\infty (-1)^{ j -1} \frac {x^ j } j ~.\ \ \ \ \ (5)

Substituting we get

\displaystyle \det [\hspace{1mm}(I+A)^\alpha\hspace{1mm}] = \exp\bigg[\alpha~{\rm tr} \bigg( \sum_{ j =1}^\infty (-1)^{ j -1} \frac {A^ j } j \bigg) \bigg] = \exp\bigg( {-\alpha} \sum_{ j =1}^\infty \frac{(-1)^{ j }}{ j } {\rm tr}({A^ j })\bigg) ~.\ \ \ \ \ (6)

The claim (2) follows from expanding the exponential in its well-known Taylor series.

{\square}
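For a matrix with small enough entries that the Mercator series converges, (2) can be spot-checked numerically by truncating both the sum over {j} and the sum over {k}. The sketch below assumes numpy and scipy are available (scipy's fractional_matrix_power is used for the left-hand side); the matrix, the value of {\alpha}, and the truncation orders are arbitrary choices.

```python
import numpy as np
from math import factorial
from scipy.linalg import fractional_matrix_power

rng = np.random.default_rng(1)
A = 0.1 * rng.standard_normal((3, 3))   # small entries so the series converges
alpha = 2.5                              # an arbitrary real exponent

# Inner sum of (2): -alpha * sum_j (-1)^j tr(A^j) / j, truncated
inner = -alpha * sum((-1) ** j / j * np.trace(np.linalg.matrix_power(A, j))
                     for j in range(1, 60))

# Outer sum of (2): sum_k inner^k / k!, truncated
rhs = sum(inner ** k / factorial(k) for k in range(60))

# Left-hand side: det[(I + A)^alpha]
lhs = np.linalg.det(fractional_matrix_power(np.eye(3) + A, alpha))

print(np.allclose(lhs, rhs))   # expected: True
```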

A complex analytic proof of the Cayley-Hamilton theorem

The Cayley-Hamilton theorem states that any real or complex square matrix satisfies its own characteristic equation. Hamilton originally proved a version involving quaternions, which can be represented by {4\times 4} real matrices. A few years later, Cayley established it for {3\times 3} matrices. It was Frobenius who established the general case more than 20 years later. (The theorem is also valid for a matrix over commutative rings in general.)

There are many nice proofs of Cayley-Hamilton. Doron Zeilberger’s exposition of the combinatorial proof by Howard Straubing comes to mind. To my taste, one of the nicest proofs, due to Charles A. McCarthy, uses a matrix version of Cauchy’s integral formula. Here I expand on McCarthy’s original proof, and I have also borrowed from Leandro Cioletti’s exposition of the subject.

We first state the theorem. Let {A} be an {n\times n} matrix over the real or complex fields {\Bbb R} or {\Bbb C}. Let {I_n} be the identity matrix. The characteristic polynomial {p(t)} for the variable {t} and matrix {A} is defined as

\displaystyle p(t)= \det (t I_n -A )~. \ \ \ \ \ (1)

The characteristic equation for {A} is defined as

\displaystyle p(t)=0~. \ \ \ \ \ (2)

In the past this equation was sometimes known as the secular equation. The degree of {p} is clearly {n}. The Cayley-Hamilton theorem states that

\displaystyle p(A) =0~. \ \ \ \ \ (3)

In other words {A} satisfies its own characteristic equation.

Note that if {A} is diagonal, {p(A)=0} is clearly satisfied, because the diagonal entries {A_{ii}} are just the eigenvalues, which necessarily satisfy the characteristic equation. If {A} is not diagonal but is diagonalizable, the theorem is also clearly true, because the characteristic polynomial is invariant under similarity transformations and {p(SDS^{-1}) = S\, p(D)\, S^{-1}} for any diagonal {D}. But what about the general case, which includes all the non-diagonalizable matrices? This general case is what makes the theorem non-trivial to prove.
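As a concrete illustration, here is a minimal numerical check for a non-diagonalizable matrix (a single Jordan block), using numpy; the matrix and its characteristic polynomial are chosen by hand.

```python
import numpy as np
from numpy.linalg import matrix_power

# A 3x3 Jordan block: not diagonalizable, single eigenvalue 2
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])

# Its characteristic polynomial is p(t) = (t - 2)^3 = t^3 - 6 t^2 + 12 t - 8
coeffs = [1.0, -6.0, 12.0, -8.0]   # highest degree first

# Evaluate p(A) = A^3 - 6 A^2 + 12 A - 8 I
n = len(coeffs) - 1
pA = sum(c * matrix_power(A, n - k) for k, c in enumerate(coeffs))

print(pA)   # expected: the 3x3 zero matrix
```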

We can prove the theorem using a continuity argument. Recall that every matrix {A} with non-degenerate eigenvalues is diagonalizable, so we can approximate every non-diagonalizable matrix to arbitrary precision by a diagonalizable matrix. This qualitative statement can be made precise: the diagonalizable matrices are dense in the set of all square matrices. Since {p(A)} is a continuous function of {A}, it cannot jump discontinuously away from zero as {A} is varied continuously, thus establishing Cayley-Hamilton in the general case. There are many variations of this theme.

The reason that I very much like McCarthy’s complex analytic proof is that it uses the Cauchy integral formula to implement this continuity argument without ever explicitly invoking continuity. This is possible because Cauchy’s integral formula allows a function to be calculated at a point without ever having to evaluate the function explicitly at that point. So the value of {p(A)} can be calculated for non-diagonalizable matrices without actually having to compute {p(A)} directly. It is a beautiful proof.

In what follows, I give a step-by-step reproduction of McCarthy’s proof. I assume that readers have familiarity with Cauchy’s integral formula for complex functions of a single complex variable. For our purposes, let {f: \Bbb C \rightarrow \Bbb C} be entire, and let the closed contour of integration wind once counterclockwise around the point {a}. Then Cauchy’s integral formula states that

\displaystyle f(a) = \frac 1 {2 \pi i } \oint \frac{f(z)}{z-a} ~dz ~. \ \ \ \ \ (4)

Proofs are found in standard texts. Here we will need the following version for matrices. Let {A} be an {n\times n} matrix with entries in {\Bbb C}. Then

\displaystyle f(A) = \frac 1 {2 \pi i } \oint \frac{f(z)}{z I_n -A} ~dz ~, \ \ \ \ \ (5)

where {I_n} is the identity matrix and the contour encloses all the eigenvalues of {A}. See the appendix below for a simple proof.

Recall that the inverse {B^{-1}} of an invertible matrix {B} is given by

\displaystyle B^{-1} = \frac {{\rm adj} (B)} {\det (B)} \ \ \ \ \ (6)

where {{\rm adj}(B)} is the adjugate matrix, which is the transpose of the cofactor matrix. Hence, the inverse of {(z I_n - A)} is given by

\displaystyle (z I_n - A)^{-1} = \frac M {\det (z I_n-A)} \ \ \ \ \ (7)

where

\displaystyle M= {{\rm adj} (z I_n-A)} ~. \ \ \ \ \ (8)

This adjugate matrix {M} contains entries that are polynomials in {z}. Hence, the entries of {M} are of finite degree in {z}. (In fact, due to the definition of the cofactor matrix they are of degree no larger than {n-1}.)

Next observe that the entries of {M} do not have negative powers of {z}. Hence, by the residue theorem, the entries {M_{ij}} of {M} satisfy

\displaystyle \frac 1 {2 \pi i } \oint M_{ij} ~dz =0 ~, \ \ \ \ \ (9)

because the entries {M_{ij}} are polynomials in {z}, hence analytic everywhere, so all residues inside the contour vanish.
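Both observations are easy to confirm symbolically for a small example. The sketch below (assuming sympy is available) computes {M={\rm adj}(zI_n - A)} for an arbitrary {3\times 3} matrix and lists the degree in {z} of each entry, none of which should exceed {n-1=2}.

```python
import sympy as sp

z = sp.symbols('z')
A = sp.Matrix([[1, 2, 0],
               [0, 3, 1],
               [4, 0, 1]])          # an arbitrary 3x3 integer matrix

# M = adj(z I - A): every entry is a polynomial in z (no negative powers)
M = (z * sp.eye(3) - A).adjugate()

# Degrees in z of the entries; none should exceed n - 1 = 2
print([sp.degree(entry, z) for entry in M])
```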

The Cayley-Hamilton theorem now follows from (5), (7) and (9):

\displaystyle \begin{array}{rcl} p(A) &=& \displaystyle \frac 1 {2 \pi i } \oint {p(z)} \frac M {\det (z I_n-A)} ~dz \\ &=& \displaystyle\frac 1 {2 \pi i } \oint M \frac {\det (z I_n-A) } {\det (z I_n-A)} ~dz \\ &=& \displaystyle \frac 1 {2 \pi i } \oint M ~dz =0 ~. \end{array}

\square

Appendix

There are several ways to prove (5). A common approach is to show convergence. Here I instead use formal power series, without worrying about convergence, since the latter can be checked a posteriori.

Taylor expanding {(1-x)^{-1}} we obtain the series

\displaystyle { 1 \over (1-x)} = \sum_{j=0}^\infty x^j ~.\ \ \ \ \ (10)

It should not therefore be a surprise that

\displaystyle { I_n \over (I_n-A)} = \sum_{j=0}^\infty A^j ~. \ \ \ \ \ (11)

The proof of the above is simple:

\displaystyle (I_n -A) \sum_{j=0}^\infty A^j = I_n \sum_{j=0}^\infty A^j - \sum_{j=1}^\infty A^j = I_n A^0 = I_n \ \ \ \ \ (12)

If we now put {A/z} in place of {A} and then divide both sides by {z}, we get

\displaystyle { I_n \over (zI_n-A)} = \sum_{j=0}^\infty \frac{A^j}{z^{j+1}} ~. \ \ \ \ \ (13)

Now consider that

\displaystyle \frac 1 {2 \pi i} \oint {z^k \over (zI_n-A)} dz = \frac 1 {2 \pi i} \oint \sum_{j=0}^\infty \frac{A^{j}}{z^{j+1-k}} dz ~. \ \ \ \ \ (14)

By the residue theorem, this last expression contains nonzero contributions only when {j=k}. So we obtain

\displaystyle \frac 1 {2 \pi i} \oint {z^k \over (zI_n-A)} dz = A^k ~. \ \ \ \ \ (15)

Observe that we now have an expression that we can substitute for {A^k} in any series expansion involving powers of {A}. We are ready to prove the claim (5).

Since {f} is entire, its Laurent series has vanishing principal part. We can thus write

\displaystyle f(z) = \sum_{j=0}^\infty a_j z^j ~~, \ \ \ \ \ (16)

so that

\displaystyle f(A) = \sum_{j=0}^\infty a_j A^j ~. \ \ \ \ \ (17)

Invoking (15) we arrive at the claim,

\displaystyle f(A) = \sum_{j=0}^\infty a_j \frac 1 {2 \pi i} \oint {z^j \over (zI_n-A)} dz = \frac 1 {2 \pi i} \oint {f(z) \over (zI_n-A)} dz ~. \ \ \ \ \ (18)

\square
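The matrix Cauchy formula (5) can also be checked numerically by discretizing the contour integral over a circle that encloses all the eigenvalues of {A}. The sketch below does this for {f(z)=e^z} with numpy and scipy; the radius of the circle and the number of quadrature points are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm, inv

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
I3 = np.eye(3)

# A circular contour of radius R centered at the origin,
# chosen large enough to enclose every eigenvalue of A
R = 2.0 * np.max(np.abs(np.linalg.eigvals(A))) + 1.0
thetas = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
dtheta = 2.0 * np.pi / len(thetas)

# Discretize (1/2 pi i) \oint exp(z) (z I - A)^{-1} dz with z = R e^{i theta}
integral = np.zeros((3, 3), dtype=complex)
for th in thetas:
    z = R * np.exp(1j * th)
    integral += np.exp(z) * inv(z * I3 - A) * (1j * z) * dtheta
integral /= 2j * np.pi

print(np.allclose(integral, expm(A)))   # expected: True
```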

Derivation of the Gibbs entropy formula from the Boltzmann entropy

There are many ways to arrive at the Gibbs entropy formula

S= -k_B \displaystyle \sum_{i} p_i \log p_i~,

which is identical, apart from units, to the Shannon entropy formula. Here I document a relatively straightforward derivation, which may be the easiest route to intuition for undergraduate students who already know the formula for the Boltzmann entropy,

S(E)= k_B \log \Omega(E)~,

where \Omega(E) is the effective number of configurations of an isolated system with total energy E. This derivation is not new, of course, but I decided to write it up anyway because it is a particularly elegant argument and is not always emphasized in the usual textbooks.

Let us start by considering a general statistical ensemble in which a state i has energy E_i and probability p_i=p(E_i), the same for all states with the same energy E_i. For fixed energy E_i, the principle of equal a priori probabilities (valid in the microcanonical ensemble) allows us to take

p_i = \displaystyle \frac 1 {\Omega(E_i)}~.

Hence we obtain for the Boltzmann entropy

S(E_i)=k_B \log \Omega(E_i) = -k_B \log p_i~.

For our general ensemble, the mean entropy is given by the weighted average, over all possible states, of the Boltzmann entropy, according to

S= \langle S(E_i) \rangle _i = \sum_i p_i S(E_i) = \sum_i p_i (-k_B \log p_i)

from which we obtain the famous Gibbs formula for the Boltzmann-Gibbs entropy:

S = -k_B \displaystyle \sum_i p_i \log p_i~.
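As a quick consistency check, for a uniform distribution over {\Omega} equally likely states the Gibbs formula reduces to the Boltzmann formula. Here is a minimal numerical sketch (working in units where k_B = 1), assuming numpy is available.

```python
import numpy as np

k_B = 1.0       # work in units where Boltzmann's constant equals 1
Omega = 1000    # number of equally likely microstates

# Uniform distribution: p_i = 1/Omega for every state
p = np.full(Omega, 1.0 / Omega)

S_gibbs = -k_B * np.sum(p * np.log(p))   # Gibbs formula
S_boltzmann = k_B * np.log(Omega)        # Boltzmann formula

print(np.isclose(S_gibbs, S_boltzmann))  # expected: True
```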