Energy Dissipation

Authored by: Luca Palmeri, Alberto Barausse, Sven Erik Jørgensen

Ecological Processes Handbook

Print publication date:  August  2013
Online publication date:  August  2013

Print ISBN: 9781466558472
eBook ISBN: 9781466558489

10.1201/b15380-5

 


3.1  Introduction

Dissipation in physics is closely related to the concept of dynamics. Friction or turbulence degrades energy over time; this energy is typically converted into heat, which raises the temperature of the system. Such systems are called dissipative systems. Formally, dissipative forces are those that cannot be described by a Hamiltonian. They include all forces that convert a coherent or directed flow of energy into an undirected, more isotropic distribution of energy (e.g., the conversion of light into heat).

As discussed in Chapters 1 and 2, this is exactly what happens in ecosystems. At first glance, however, this seems to contradict the observation that living objects embody order and information. In the 1940s, in his famous book What is Life? (Schrödinger, 1944), Schrödinger threw new light on the relationship between order, disorder, entropy, and living organisms. The notion of "order from order" introduced by Schrödinger was a startling premonition of the results of the research that led Watson and Crick, in 1953, to the discovery of the structure of DNA. The other fundamental idea expressed in Schrödinger's book, "order from disorder," was a first attempt to apply the fundamental theorems of thermodynamics to biology. Schrödinger recognized that living systems, being embedded in a network of flows of matter and energy, are far-from-equilibrium systems. He realized that studying such systems from the perspective of nonequilibrium thermodynamics would make it possible to reconcile biological self-organization with thermodynamics.

Every biological process, event, or phenomenon is inherently irreversible and leads to an increase of the entropy of that part of the universe in which it occurs. Every living organism thus produces positive entropy, thereby coming dangerously close to the state of maximum entropy, the heat death. It manages to stay away from this state only through the continuous absorption of negative entropy, or Helmholtz free energy, from the surrounding environment in the form of matter and energy. This follows immediately from the natural tendency of organisms to maintain a steady state (homeostasis). In this case, the condition of stationarity implies that the net entropy change is null:

$$0 = \frac{dS}{dt} = \frac{dS_{\mathrm{in}}}{dt} + \frac{dS_{\mathrm{out}}}{dt}$$

and, hence,

$$\frac{dS_{\mathrm{in}}}{dt} = -\frac{dS_{\mathrm{out}}}{dt}$$

Along with Schrödinger, we can say that "… essential for metabolism to work is that the body is able to get rid of all the entropy that it produces in its lifetime." Thus, the organization of living systems is maintained by absorbing order from the surrounding environment and emitting disorder in the form of thermal radiation in the infrared. In this context, Schrödinger, without naming them explicitly, refers to the biological necessity of dissipative processes. He states, "… the higher body temperature of warm-blooded animals includes the advantage of making them able to get rid of their entropy at a faster rate, so that they can afford a more intense process of life."
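To make this balance concrete, here is a minimal numerical sketch of the steady-state entropy export of an organism; the power and temperature values are assumptions, chosen purely for illustration:

```python
# Minimal sketch of the steady-state balance dS_in/dt = -dS_out/dt: an
# organism that dissipates its metabolic power P as heat at body
# temperature T exports entropy to the environment at the rate P/T.

P = 100.0   # metabolic power dissipated as heat [W] (assumed value)
T = 310.0   # body temperature [K], roughly 37 degrees C

entropy_export_rate = P / T   # [W/K]
print(f"Entropy export rate: {entropy_export_rate:.3f} W/K")
# At steady state, an equal amount of "negative entropy" (low-entropy
# matter and energy) must be absorbed from the surroundings.
```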

3.2  The Second Principle

From everyday experience, we know that there are processes that satisfy the first principle but, nevertheless, never occur spontaneously. The transformations that occur spontaneously in nature proceed along well-defined directions. To account for these experimental facts, a general criterion is needed that determines which transformations can occur spontaneously. This is the subject of the second law of thermodynamics.

The second principle, or law of entropy, states that for every physical system a new thermodynamic state function can be defined, the entropy* S, which satisfies the following set of rules:

  1. The entropy S is a state function.
  2. S is an extensive quantity: like U or V, if we divide the system into two subsystems α and β that do not interact with each other, the total entropy is given by $S_{\mathrm{tot}} = S_\alpha + S_\beta$.
  3. If the system changes its state, the entropy S varies. This variation can be decomposed into two contributions, $dS = d_i S + d_e S$, where $d_e S$ represents the entropy change due to interactions with the external environment, whereas $d_i S$ represents that due to processes taking place exclusively within the system.
  4. A quantitative definition is given for the external contribution: $d_e S = \partial q / T$, where ∂q indicates the infinitesimal* amount of heat absorbed by the system in the transformation, and T is the temperature of the system at which this exchange takes place (a worked example is sketched after this list).
  5. For $d_i S$, no operative definition is given; one can only say that for spontaneous processes $d_i S \geq 0$, where the equality holds if the transformation is reversible or quasistatic.
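As a concrete instance of rule 4 (the same example reappears in Illustration 3.2.1 below), the following sketch computes the entropy absorbed by melting ice; the scenario and the handbook value of the latent heat are assumptions used purely for illustration:

```python
# Numerical sketch of rule 4, d_eS = dq/T, for an isothermal process:
# melting 1 kg of ice at 273.15 K. Since T is constant, the integral of
# dq/T reduces to q/T.

m = 1.0          # mass of ice [kg] (assumed)
L_f = 334e3      # latent heat of fusion of water [J/kg] (handbook value)
T = 273.15       # melting temperature [K]

q = m * L_f              # heat absorbed by the ice
delta_S = q / T          # entropy change at constant T
print(f"Entropy increase: {delta_S:.1f} J/K")   # about 1222.8 J/K
```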

If we consider a transformation between two states a and b, we can write, for the entropy change,

$$\Delta S_{a,b} = \int_a^b dS = \int_a^b d_i S + \int_a^b d_e S \tag{3.1}$$

In general, the integrals in the last member depend on the trajectory followed by the transformation between states a and b (i.e., $d_i S$ and $d_e S$ are not state functions), except when the transformation along which we measure ΔS is reversible or quasistatic. In this case, we have

$$\Delta S_{a,b} = \int_a^b dS = \int_a^b d_e S = \int_a^b \frac{\partial q}{T} \tag{3.2}$$

Note that the last integral can be written only when it makes sense: if the transformation is not reversible, the intermediate states are not equilibrium states, and then, in general, the state variables are not well defined. From point 5, it follows immediately that for irreversible transformations between a and b, it is always

$$\int_a^b \left(\frac{\partial q}{T}\right)_{\mathrm{irrev}} < \int_a^b \left(\frac{\partial q}{T}\right)_{\mathrm{rev}} = \Delta S_{a,b} \tag{3.3}$$

The integral of ∂q/T along an irreversible transformation from a to b is always less than the same integral along a reversible transformation between the two states; only in the reversible case does the integral coincide, by definition, with the change in entropy. From point 4, it is evident that for adiabatic transformations, that is, transformations in which ∂q = 0, we have $d_e S = 0$; hence, for a generic adiabatic transformation between states a and b,

$$\Delta S_{a,b} = \int_a^b d_i S \geq 0$$

We can, therefore, formulate the second principle with the following sentence: for an adiabatic system, entropy can only increase or, at most, in the case of reversible transformations, remain constant.
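A standard textbook case, not discussed in this chapter and used here only as a hedged illustration of Inequality 3.3, is the free (Joule) expansion of an ideal gas: the actual process is adiabatic and irreversible, so the integral of ∂q/T along it vanishes, while ΔS, evaluated along a reversible isothermal path between the same states, is strictly positive:

```python
# Sketch of Inequality 3.3 for the free expansion of an ideal gas.
# Along the actual (irreversible, adiabatic) path no heat is exchanged,
# so the integral of dq/T is zero; the state-function entropy change,
# computed along a reversible isothermal path, is n*R*ln(V2/V1) > 0.

import math

n = 1.0            # moles of ideal gas (assumed)
R = 8.314          # gas constant [J/(mol K)]
V1, V2 = 1.0, 2.0  # initial and final volumes [m^3] (assumed)

integral_dq_over_T_irrev = 0.0          # adiabatic: no heat exchanged
delta_S = n * R * math.log(V2 / V1)     # entropy change between the states

print(f"Irreversible path integral: {integral_dq_over_T_irrev} J/K")
print(f"Delta S = {delta_S:.3f} J/K > 0, as Inequality 3.3 requires")
```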

 

Illustration 3.2.1: Statistical Significance of Entropy

From the definition given in the previous paragraph, it follows that entropy is a measurable physical quantity, like, for instance, length or temperature. For example, a cube of ice that melts increases its entropy by a quantity given by the heat of fusion divided by the temperature of the melting point. Boltzmann (1896) was the first to show that entropy is also a measure of disorder. This fact can be demonstrated by applying the microscopic perspective to macroscopic thermodynamic systems. Consider a system consisting of n particles in equilibrium at temperature T, which can be found at various energy levels. The basic assumption is that the system is in a condition of complete molecular chaos, which is equivalent to assuming that all microscopic states are equally probable. Statistical mechanics tells us that for such a system in equilibrium at temperature T, an equilibrium distribution, the canonical ensemble, exists. The probability P for the system to be in a state whose energy is E is given by

$$P(E) \propto e^{-E/k_B T}$$

where $k_B = 1.3807 \times 10^{-23}$ J/K is Boltzmann's constant. By normalizing this function, the energy distribution function f(E) is obtained:

$$f(E) = e^{-(E-F)/k_B T}$$

where F is the Helmholtz free energy and

$$\sum_E e^{-E/k_B T} = e^{-F/k_B T}$$

From the conservation of the Hamiltonian flow, it follows that all the trajectories of the system states in phase space are constrained to lie on surfaces of constant energy. These are indeed states that are not distinguishable from the energetic, that is, from the macroscopic, point of view. A measure of this indistinguishability, for a system in equilibrium at temperature T, is provided by the area of the surface of constant energy, or by the number of possible microscopic states, or complexions. If we denote this number by Π, from the energy distribution function we get

$$\Pi = \int_{E=E_0} e^{-(E-F)/k_B T}\, dE$$

Boltzmann’s intuition was that the entropy S of the system increases with the number of possible microscopic states Π. In other words, the entropy is a measure of the macroscopic indistinguishability. This insight led to the demonstration of the renowned relation, valid for isolated systems, named Boltzmann’s order principle:

$$S = k_B \log \Pi$$
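A toy model, not from the text, makes the order principle tangible: for N independent two-state units, with all microstates equally probable, Π = 2^N, and the entropy grows linearly with N:

```python
# Toy illustration of Boltzmann's order principle S = k_B * log(Pi) for
# an assumed system of N independent two-state units: Pi = 2**N equally
# probable microstates.

import math

k_B = 1.3807e-23   # Boltzmann's constant [J/K]

for N in (10, 100, 1000):
    Pi = 2 ** N                # number of equally probable microstates
    S = k_B * math.log(Pi)     # Boltzmann entropy
    print(f"N = {N:5d}  ->  S = {S:.3e} J/K")
```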

By using the energy distribution function f(E) and the expression for the kinetic energy, $E_c = \frac{1}{2}mv^2$, the Maxwell–Boltzmann distribution is obtained. This function describes the distribution of the particle velocity moduli for n moles of gas at temperature T:

$$f(v) = 4\pi n v^2 \left(\frac{m}{2\pi k_B T}\right)^{3/2} e^{-mv^2/2k_B T}$$

which is represented in Figure 3.1 for three different values of the temperature. It is readily understood how the existence of a most probable velocity, corresponding to the maximum of the Maxwell–Boltzmann curve, is associated with the natural tendency of the system to reach the most probable state. The more likely macrostates are those that can be realized by a greater number of microstates Π; that is, they are more indistinguishable from the macroscopic point of view and, therefore, more disordered.
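The following sketch reproduces the qualitative content of Figure 3.1 numerically; the particle mass (roughly that of an N2 molecule) and the three temperatures are assumptions for illustration:

```python
# Maxwell-Boltzmann speed distribution evaluated at three temperatures,
# mirroring Figure 3.1: the peak (most probable speed) shifts to higher
# v and the curve flattens as T grows.

import math

k_B = 1.3807e-23   # Boltzmann's constant [J/K]
m = 4.65e-26       # particle mass [kg], roughly one N2 molecule (assumed)

def mb_speed_pdf(v, T):
    """Probability density of the speed modulus v at temperature T."""
    a = m / (2.0 * math.pi * k_B * T)
    return 4.0 * math.pi * v**2 * a**1.5 * math.exp(-m * v**2 / (2.0 * k_B * T))

for T in (100.0, 300.0, 1000.0):        # three temperatures, as in Figure 3.1
    v_p = math.sqrt(2.0 * k_B * T / m)  # most probable speed (peak of curve)
    print(f"T = {T:6.0f} K: peak at v_p = {v_p:6.1f} m/s, "
          f"f(v_p) = {mb_speed_pdf(v_p, T):.3e} s/m")
```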

An isolated system increases its entropy and more or less rapidly reaches the inert state of maximum entropy, which is equilibrium. Thermodynamic systems tend to evolve toward states of greater disorder. The tendency to depart from an ordered state toward a less ordered one, to move from a less probable state to a more probable one, and toward an increase of entropy are all manifestations of the same natural law expressed by the second principle. For an isolated system, the state of equilibrium is the most likely and is characterized by a maximum of the entropy.

Boltzmann's order principle can be immediately extended to nonisolated systems. In fact, if the system is closed, that is, it can exchange energy but not matter, and is kept at a constant temperature T, its behavior is described by the Helmholtz free energy:

F = U T S


Figure 3.1   Maxwell–Boltzmann distribution of particle velocity moduli for n moles of a gas at three different temperatures.

The second principle says that the state of thermodynamic equilibrium of such a system, the counterpart of the state of maximum entropy, is the one in which F is a minimum. The structure of this definition reflects a competition between the internal energy U and the entropy S. At low temperatures, the second term is negligible, and the minimum of F corresponds to a minimum of U. Under these conditions, the formation of low-entropy structures such as crystals and solids is experimentally observed. As the temperature increases, the system evolves progressively toward high-entropy states, such as gaseous ones (a toy illustration of this competition is sketched below). Boltzmann's order principle does not apply to dissipative structures.
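A deliberately crude two-phase toy model (all values assumed, not from the text) shows the competition between U and S in F = U − TS: minimizing F selects the ordered, low-entropy phase at low temperature and the disordered, high-entropy phase at high temperature:

```python
# Toy model of the energy-entropy competition in F = U - T*S. The
# "crystal" phase has low U and low S; the "gas" phase has high U and
# high S. The stable phase at each T is the one minimizing F.

phases = {
    "crystal": {"U": 0.0,    "S": 1.0},   # arbitrary units (assumed)
    "gas":     {"U": 1000.0, "S": 5.0},
}

for T in (10.0, 200.0, 1000.0):
    F = {name: ph["U"] - T * ph["S"] for name, ph in phases.items()}
    stable = min(F, key=F.get)
    print(f"T = {T:6.1f}: F_crystal = {F['crystal']:8.1f}, "
          f"F_gas = {F['gas']:8.1f}  ->  stable phase: {stable}")
```

At low T the crystal wins on energy; above the crossover temperature the −TS term dominates and the gas phase becomes the minimum of F.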

3.3  Dissipative Structures

The first and perhaps the most authoritative in-depth description of self-organizing systems was the theory of dissipative structures, developed during the 1960s by Ilya Prigogine and his collaborators of the Brussels School (Prigogine, 1967; Glansdorff and Prigogine, 1971; Nicolis and Prigogine, 1977). Prigogine's crucial insight was that far-from-equilibrium systems must be described by nonlinear equations. Prigogine did not begin his study with living systems; instead, he focused on much simpler phenomena, such as the Bénard instability (Bénard, 1901) or the Belousov–Zhabotinsky autocatalytic reactions* (Nicolis and Prigogine, 1977). These are spectacular examples of spontaneous self-organization. In the first case, the state of nonequilibrium is maintained by a continuous flow of heat through the boundaries of the system; in the second, it is maintained by the presence of catalytic compounds capable of sustaining nonequilibrium oscillating chemical reactions.

Prigogine's work resulted in the development of a nonlinear thermodynamics apt to describe the phenomenon of self-organization in far-from-equilibrium open systems. Classical thermodynamics provides the concept of an "equilibrium structure," such as a crystal. Bénard cells are also structures, but of a very different nature. For this reason, Prigogine introduced the notion of dissipative structures, to emphasize by the name itself the close association (at first glance truly paradoxical) that may exist between structure and order on one side, and losses and wastage (dissipation) on the other (Prigogine and Stengers, 1984).

Indeed, the most sensational prediction of Prigogine's theory is that dissipative structures are able not only to maintain themselves in far-from-equilibrium steady states, but also to evolve, through new phases of instability, into new structures of greater complexity (see, e.g., Chapter 19).

Classical thermodynamics has solved the problem of the competition between chance and organization for equilibrium situations. If the temperature drops, the contribution of the internal energy U to the Helmholtz free energy F = U − TS becomes dominant. Increasingly complex structures can appear, corresponding to increasingly lower values of entropy. A phase transition such as liquid → solid is, in fact, characterized by a definite loss of entropy (or increase in organization). Steady states, too, are characterized by lower entropy. Therefore, dissipative processes might lead to an increase in organization (Prigogine, 1967). This increase in organization is usually continuous. Prigogine wondered whether "discontinuous changes in structure are possible due to dissipative processes. Such situations would be, for nonequilibrium systems, the analogue of phase transitions."

The detailed analysis carried out by Prigogine shows that dissipative structures receive their energy from the outside, whereas instabilities and jumps to new forms of organization are the result of microfluctuations that are amplified by positive feedback (Glansdorff and Prigogine, 1971).

3.4  Irreversible Processes

Classical thermodynamics displays severe limitations because it is based on concepts such as equilibrium states and reversibility. Biological systems, instead, thrive in thermodynamic states that are far from static equilibrium and are characterized by the presence of irreversible processes. It is, therefore, desirable to extend the results of classical thermodynamics to this class of processes as well. Irreversible thermodynamic processes can be divided into two subclasses: linear, not-very-far-from-equilibrium processes and nonlinear ones.

In its more general formulation, the second principle is equally applicable to situations of equilibrium and of nonequilibrium (Nicolis and Prigogine, 1977). However, the most significant results of classical thermodynamics, developed since the nineteenth century, belong to the domain of equilibrium phenomena. Think, for example, of the law of mass action, of the Gibbs phase rule, or of the equation of state of the classical ideal gas. In classical thermodynamics, the state of nonequilibrium is seen as a disturbance that temporarily prevents the system from reaching equilibrium. The implicit assumption is that the natural condition, or rather the only describable one, is that of equilibrium. The concept of dissipation enters the theory only as a disturbing element, linked to the impossibility of transforming all the absorbed heat into useful work.*

The situation changed dramatically with the discovery of Onsager’s reciprocity relations (Onsager, 1931). This fact led to an extension of the domain of classical thermodynamics with the introduction of the thermodynamics of linear nonequilibrium states. These methods are applicable in situations where the flows or the reaction rates of irreversible processes are linear functions of the generalized thermodynamic forces, such as temperature or concentration gradients.

It is experimentally observed that linear nonequilibrium systems can evolve into states with lower entropy.* However, we cannot speak in this context of the emergence of new structures. These are simply structures, so to speak, inherited from the equilibrium regime, modified and maintained in nonequilibrium conditions by external constraints. In the 1960s, the Brussels School extended the description in terms of macroscopic thermodynamic variables to far-from-equilibrium conditions as well. If we consider a system in equilibrium, we know that, depending on the external constraints, the state will be a stationary point of an appropriate thermodynamic potential. For example, the state will be that of maximum entropy S for an adiabatic system, or that of minimum free energy F for a closed system at constant temperature. Such a solution is unique; Prigogine calls it the thermodynamic branch (Nicolis and Prigogine, 1977). If the external constraints are changed so as to force the system gradually further from equilibrium, the velocity of the processes, or the intensity of the flows, begins to depend nonlinearly on the generalized forces. As already mentioned for the Bénard experiment, on reaching a critical value of the forcing, the system evolves by creating new structures (convective cells), exhibiting scale invariance and correlations over macroscopic distances. Prigogine and collaborators established a criterion for the stability of the thermodynamic branch. If the criterion is not satisfied, the thermodynamic branch may become unstable and bifurcations may appear.

When dealing with systems that are not in thermodynamic equilibrium, some remarks are needed with regard to the definition of thermodynamic potentials. Indeed, as noted above, the thermodynamic quantities that characterize a system under nonequilibrium conditions are not well defined. In order to allow the definition of potentials, the thermodynamics of irreversible processes assumes the validity of the hypothesis of local thermodynamic equilibrium (LTE):

Every thermodynamic system can be decomposed into a certain number of macroscopic subsystems within which the state variables are substantially constant ($\Delta x / x \ll 1$) and are locally linked by the same equations of state, f(V, p, T) = 0, that hold for equilibrium systems.

Statistical mechanics explains why this assumption is reasonable. In fact, it can be shown that the relaxation times characteristic of the achievement of equilibrium in microscopic systems are proportional to $N$, where N is the number of elementary constituents. In other words, if a large system is considered, it moves quickly to equilibrium on small scales while remaining in nonequilibrium on larger scales. Under conditions of LTE, Gibbs' fundamental equation of thermodynamics holds:

$$T\,dS = dU + p\,dV - \sum_{j=1}^{k}\mu_j\,dn_j \tag{3.4}$$

where S is the entropy, T the absolute temperature, V the volume, U the total energy, p the pressure, μ_j the chemical potential of the jth element, and n_j the number of moles of the jth element.

This equation is initially obtained for equilibrium states and is subsequently taken to be valid locally for nonequilibrium systems as well. Note that this assumption has an important consequence: the function S(U, V, n_j) that appears in Equation 3.4 is, point by point, the same S that was defined for equilibrium systems in the previous paragraphs. Finally, we note that S is an extensive state function; thus, the total entropy of the system is the sum of the entropies of the macroscopic subsystems.
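Under the LTE hypothesis, Equation 3.4 can be applied subsystem by subsystem. Below is a minimal sketch of how a local entropy change would be evaluated from local changes dU, dV, and dn_j; all numerical values are assumed, purely for illustration:

```python
# Local application of Gibbs' Equation 3.4 in one macroscopic subsystem:
# dS = (dU + p*dV - sum_j mu_j*dn_j) / T.

T = 298.15        # local temperature [K] (assumed)
p = 101325.0      # local pressure [Pa] (assumed)
mu = [-237.1e3]   # chemical potential of a single species [J/mol] (assumed)

dU = 50.0         # change in internal energy [J] (assumed)
dV = 1e-4         # change in volume [m^3] (assumed)
dn = [1e-3]       # change in moles of each species (assumed)

dS = (dU + p * dV - sum(mu_j * dn_j for mu_j, dn_j in zip(mu, dn))) / T
print(f"Local entropy change dS = {dS:.4f} J/K")
```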

3.5  Entropy Production

From Equations 2.3 and 3.4, and recalling that, if we consider only mechanical work, ∂w = p dV, we obtain for the entropy change

$$dS = \frac{\partial q}{T} - \frac{1}{T}\sum_{j=1}^{k}\mu_j\,dn_j$$

In general, any extensive quantity can vary for two broad categories of causes: variations due to internal processes and changes due to external ones. For example, we have seen in Equation 3.1 that the entropy change can be written as $dS = d_i S + d_e S$. Similarly, the k variables n_j (number of moles of the jth element) appearing in Equation 3.4 are extensive quantities whose variations can be separated into two contributions:

$$dn_j = d_i n_j + d_e n_j; \qquad 1 \leq j \leq k$$

where $d_e n_j$ represents the change due to flows of matter through the surface of the system, whereas $d_i n_j$ represents the variation of matter due to internal processes such as chemical reactions or phase changes.

As we did for $dn_j$, the variation of heat ∂q can also be decomposed into two contributions:

$$dq = d_i q + d_e q$$

The variation in time of the entropy is

$$\frac{dS}{dt} = \frac{1}{T}\frac{dq}{dt} - \frac{1}{T}\sum_{j=1}^{k}\mu_j\frac{dn_j}{dt}$$

and separating internal and external contributions, we may write

$$\frac{dS}{dt} = \frac{d_i S}{dt} + \frac{d_e S}{dt} = \frac{1}{T}\frac{d_i q}{dt} - \frac{1}{T}\sum_{j=1}^{k}\mu_j\frac{d_i n_j}{dt} + \frac{1}{T}\frac{d_e q}{dt} - \frac{1}{T}\sum_{j=1}^{k}\mu_j\frac{d_e n_j}{dt}$$

The entropy production p is defined as

$$p = \frac{d_i S}{dt} = \frac{1}{T}\frac{d_i q}{dt} - \frac{1}{T}\sum_{j=1}^{k}\mu_j\frac{d_i n_j}{dt} \tag{3.5}$$

According to what has been stated in point 5 of Section 3.2, and under the LTE hypothesis, the total entropy production is given by the sum of the entropy productions of the single irreversible processes. Definition 3.5 can be generalized. In fact, each of the terms appearing in Equation 3.5 is formally the product of a state function and a flow. In a very general way, we can, therefore, redefine the entropy production of a thermodynamic system in the form

$$p = \frac{d_i S}{dt} = \sum_k X_k J_k \tag{3.6}$$

where X_k and J_k denote, respectively, the generalized forces and the generalized flows related to the irreversible processes considered. It can be shown that the value of p does not depend on the particular choice of the X_k and J_k, which ultimately depends on the choice of the potentials. From the second principle of thermodynamics, it immediately follows that for irreversible processes,

$$p = \frac{d_i S}{dt} = \sum_k X_k J_k > 0 \tag{3.7}$$

Relation 3.7 says that the entropy production p of an open thermodynamic system in which irreversible transformations take place is always positive. For a system at equilibrium, instead, $d_i S = 0$ and, consequently, p = 0. This result is a direct consequence of the second law and is of general validity.

Equation 3.7 has an important consequence: There should be some causal relation, a coupling, between generalized forces and flows of the type

$$J_k = J_k(X_1, X_2, \ldots, X_n) \tag{3.8}$$

This is clear when we consider a thermodynamic system in which a single irreversible process occurs. Equation 3.7 for such a system becomes

$$p = X \cdot J > 0 \tag{3.9}$$

This relationship obviously imposes a constraint on the signs of X and J: the second law of thermodynamics says that forces and fluxes must have the same sign. For example, if we think of a system consisting of two compartments that can interact with each other only by exchanging heat, we can write

$$X = \left(\frac{1}{T_1} - \frac{1}{T_2}\right); \qquad J = \frac{d_i q_1}{dt}$$

From Equation 3.9, we will have only two possibilities:

$$T_1 < T_2,\; J > 0 \quad\text{or}\quad T_1 > T_2,\; J < 0$$

Heat always flows from the body at higher temperature to the colder one. As long as a difference, or gradient, in temperature exists, there will be a flux J ≠ 0. The temperature difference is progressively reduced, and the system moves toward a state of equilibrium with X = 0 and J = 0. At equilibrium, entropy production is null, and we conclude that thermodynamic equilibrium is a state of absolute minimum for p. The existence of a gradient is necessary for the flow to persist; it is, therefore, crucial to act on the system from the outside by imposing external constraints or forcings.
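The two-compartment example can be simulated directly. The sketch below assumes a linear phenomenological law J = LX, with assumed values for L and the heat capacities; it shows heat flowing from hot to cold, p = JX remaining positive, and force, flow, and entropy production all decaying toward zero at equilibrium:

```python
# Two compartments exchanging heat only with each other. With the sign
# conventions of the text, X = 1/T1 - 1/T2 and J = d_i q1/dt, so J = L*X
# (L > 0, assumed) gives J < 0 when T1 > T2: heat leaves the hotter body.

C = 10.0       # heat capacity of each compartment [J/K] (assumed)
L = 2.0e4      # phenomenological coefficient in J = L*X (assumed)
T1, T2 = 400.0, 300.0   # initial temperatures [K] (assumed)
dt = 1.0       # time step [s]

for step in range(201):
    X = 1.0 / T1 - 1.0 / T2      # generalized force
    J = L * X                    # generalized flow (heat into compartment 1)
    p = J * X                    # entropy production, Equation 3.9
    if step % 50 == 0:
        print(f"t = {step:3d} s  T1 = {T1:7.2f} K  T2 = {T2:7.2f} K  "
              f"p = {p:.3e} W/K")
    q = J * dt                   # heat entering compartment 1 during dt
    T1 += q / C
    T2 -= q / C
```

The printout shows p > 0 whenever a temperature gradient persists, and p → 0 as X → 0, the absolute minimum discussed above.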

We may define a steady state as a condition in which some state variables are time independent. An equilibrium state is trivially a steady state. The existence of external constraints is a necessary condition for the existence of nonequilibrium stationary states. In the condition of a nonequilibrium steady state, the following equations hold:

$$p = \frac{d_i S}{dt} > 0, \qquad \frac{d_e S}{dt} + \frac{d_i S}{dt} = 0$$

From these, it necessarily follows that

$$\frac{d_e S}{dt} < 0$$

This condition may be interpreted by saying that, for nonequilibrium stationary states, the entropy of the matter/energy entering the system must be less than that released by the system to the outside. Along with Prigogine, we can say that, from the thermodynamic point of view, open systems degrade the material that comes in, and it is this degradation that maintains the steady state (Prigogine, 1967).

For a system subject to external constraints, from Equation 3.7 and the assumption of linearity of the relation expressed by Equation 3.8, it can be shown that

$$\frac{dp}{dt} \leq 0 \tag{3.10}$$

This inequality follows directly from the second law of thermodynamics. It says that, under the linearity condition, the entropy production of a system decreases over time. The linearity condition was demonstrated by Onsager (1931). From Equation 3.7, for steady-state systems subject to external constraints, p > 0 as long as the constraints persist. This fact, together with Equation 3.10, leads to the conclusion that for constrained systems the stationary states are local minima of the entropy production p. In summary, we will have $dp/dt < 0$ far from the steady state and $dp/dt = 0$ at the steady state. The following theorem can be proved: a thermodynamic system in the linear regime and subject to external constraints tends to evolve toward states of minimum entropy production. This statement is a first example of an evolutional criterion.
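A sketch of the minimum-entropy-production criterion itself, for an assumed two-process linear system with a symmetric Onsager matrix: with X1 fixed by an external constraint and X2 free, p is smallest exactly where the conjugate flow J2 vanishes, that is, at the stationary state:

```python
# Minimum entropy production in the linear regime. Two coupled processes
# with symmetric Onsager coefficients (values assumed): scanning the free
# force X2 shows that p = sum_k X_k*J_k (Equation 3.6) is minimized where
# J2 = L12*X1 + L22*X2 = 0, i.e., at the stationary state.

L11, L12, L22 = 2.0, 0.5, 1.0   # Onsager coefficients, L21 = L12 (assumed)
X1 = 1.0                         # externally constrained force (assumed)

def entropy_production(X2):
    J1 = L11 * X1 + L12 * X2
    J2 = L12 * X1 + L22 * X2
    return J1 * X1 + J2 * X2

# Scan X2 on a grid and locate the minimum of p numerically
candidates = [i / 1000.0 for i in range(-2000, 2001)]
X2_min = min(candidates, key=entropy_production)

X2_star = -L12 / L22 * X1        # analytic value where J2 = 0
print(f"numerical minimum of p at X2 = {X2_min:.3f}")
print(f"state with J2 = 0       at X2 = {X2_star:.3f}")  # they coincide
```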

The entropy production p, in the regime of linear irreversible processes, plays a role similar to that of the thermodynamic potentials in the theory of equilibrium states. However, this decrease in entropy production does not reflect the emergence of macroscopic order of the kind observed when the constraints that drive the system away from equilibrium are increased further. The stability of not-too-far-from-equilibrium states implies that, in a system obeying linear laws, the spontaneous emergence of higher ordered structures is impossible.

All the results presented so far rest on the assumption of the validity of Gibbs' fundamental Equation 3.4, which was first demonstrated for equilibrium states. The physical interpretation of this assumption is that, under nonequilibrium conditions as well, entropy depends on the same independent variables as in equilibrium states. This can hold even in far-from-equilibrium situations. The assumption of linearity is rather more restrictive: even within the range of validity of the Gibbs equation, the relationship between generalized flows and forces may be nonlinear. Since the theorem of minimum entropy production is valid only in the linear regime, it is of little interest for biological systems. In 1971, Prigogine and Glansdorff proposed a method to extend the results of the thermodynamics of stationary states to nonlinear situations (Glansdorff and Prigogine, 1971). Indeed, it can be shown that there is always a part of the variation in time of the entropy production that keeps a definite sign:

$$\frac{d_X p}{dt} < 0 \quad\text{or}\quad \frac{d_X p}{dt} = 0 \tag{3.11}$$

This part identifies a local extremum for the portion of p, defined as in Equation 3.6, that is fixed by the external constraints.

An in-depth analysis of this theory would lead us too far afield. The remarkable point, however, is that in these developments the theoretical basis for the definition of evolutional criteria might be found.

From the Greek ἐν + τροπή (entropē), "transformation."

It is denoted ∂q, rather than dq, to emphasize the fact that it is not an exact differential.

The so-called chemical clocks.

The concept of dissipation is thus bound to the second principle through the impossibility of building thermal machines with 100% efficiency.

For example, by applying a temperature gradient to a box containing a mixture of two gases, we observe an increase in the concentration of one of the two gases in the vicinity of the warmer wall and a decrease in the vicinity of the colder one (Nicolis and Prigogine, 1977).
