
Dissipation in physics is strictly related to the concept of dynamics. Indeed, friction or turbulence causes a degradation of energy over time. This energy is typically converted into heat, which raises the temperature of the system. Such systems are called *dissipative systems*. Formally, dissipating forces are those that cannot be described by a Hamiltonian. This includes all those forces that result in the conversion of coherent or directed energy flow into an undirected or more isotropic distribution of energy (e.g., the conversion of light into heat).

As discussed in Chapters 1 and 2, this is exactly what happens in ecosystems. At first glance, however, this seems to contradict the observation that living objects embody order and information. In the 1940s, in his famous book *What is Life?* (Schrödinger, 1944), Schrödinger threw new light on the relationship between order, disorder, entropy, and living organisms. The notion of “order from order” introduced by Schrödinger was a startling premonition of the results of the research that led Watson and Crick in 1953 to the discovery of the structure of DNA. The other fundamental idea expressed in Schrödinger’s book, the notion of “order from disorder,” was a first attempt to apply the fundamental theorems of thermodynamics to biology. Schrödinger recognized that living systems, being embedded in a network of flows of matter and energy, are far-from-equilibrium systems. He realized that studying such systems from the perspective of nonequilibrium thermodynamics would make it possible to reconcile biological self-organization with thermodynamics.

Every biological process, event, or phenomenon is inherently irreversible, leading to an increase in the entropy of that part of the universe in which it occurs. Every living organism thus produces positive entropy, and so tends dangerously toward the state of maximum entropy, the heat death. It manages to stay away from this state only through the continuous absorption of negative entropy, or Helmholtz free energy, from the surrounding environment in the form of matter and energy. This follows immediately from the natural tendency of organisms to maintain a steady state (homeostasis). In this case, the condition of stationarity implies that the net flow of entropy is null:

$$0=\frac{\text{d}S}{\text{d}t}=\frac{\text{d}{S}_{\text{in}}}{\text{d}t}+\frac{\text{d}{S}_{\text{out}}}{\text{d}t}$$

and, hence,

$$\frac{\text{d}{S}_{\text{in}}}{\text{d}t}=-\frac{\text{d}{S}_{\text{out}}}{\text{d}t}$$

Along with Schrödinger, we can say that “… essential for metabolism to work is that the body is able to get rid of all the entropy that it produces in its lifetime.” Thus, the organization in living systems is maintained by absorbing order from the surrounding environment and emitting disorder in the form of thermal radiation in the infrared. In this context, Schrödinger, without naming them, makes implicit reference to the biological necessity of dissipative processes. He states, “… the higher body temperature of warm-blooded animals includes the advantage of making them able to get rid of their entropy at a faster rate, so that they can afford a more intense process of life.”
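Schrödinger’s entropy balance can be put into rough numbers. The sketch below is our own illustration, not the text’s: the metabolic power and body temperature are assumed, typical textbook figures. At steady state, all the entropy produced internally must be exported as heat, so the export rate is simply the dissipated power divided by the temperature at which it is shed.

```python
# Illustrative estimate (assumed figures): an organism at steady state must
# export all the entropy it produces. If metabolism dissipates P watts of
# heat at body-surface temperature T, the export rate is dS_out/dt = P / T,
# and stationarity demands dS_in/dt = -dS_out/dt.

def entropy_export_rate(power_w, temperature_k):
    """Entropy shed to the environment per unit time, in J/(K s)."""
    return power_w / temperature_k

P = 100.0   # assumed metabolic heat output, watts
T = 310.0   # assumed body temperature, kelvin

ds_out = entropy_export_rate(P, T)
ds_in = -ds_out  # stationarity: dS/dt = dS_in/dt + dS_out/dt = 0

print(f"dS_out/dt = {ds_out:.3f} J/(K s), dS_in/dt = {ds_in:.3f} J/(K s)")
```

The numbers are only order-of-magnitude placeholders; the point is the bookkeeping: the negative-entropy intake must exactly balance the entropy exported.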

From everyday experience, we know that there are processes that satisfy the first principle but which, nevertheless, never occur spontaneously. The transformations that occur spontaneously in nature follow well-defined directions. To account for these experimental facts, a general criterion is needed to determine which transformations can occur spontaneously. This is the subject of the second law of thermodynamics.

The second principle or the law of entropy states that for every physical system, a new thermodynamic state function can be defined, the entropy^{*} *S*, which should satisfy the following set of rules:

- The entropy *S* is a state function.
- *S* is an extensive quantity: as *U* or *V*, if we divide the system into two subsystems α and β that do not interact with each other, the total entropy is given by *S*_{tot} = *S*_{α} + *S*_{β}.
- If the system changes its state, the entropy *S* varies. This variation can be decomposed into two contributions, d*S* = d_{e}*S* + d_{i}*S*, where d_{e}*S* represents the entropy change due to interactions with the external environment, whereas d_{i}*S* represents that due to processes that take place exclusively within the system.
- A quantitative definition is given for the external contribution: ${\text{d}}_{e}S=\frac{\partial q}{T}$, where ∂*q* indicates the infinitesimal^{*} amount of heat absorbed by the system in the transformation, and *T* is the temperature of the system at which this exchange takes place.
- For d_{i}*S*, an operative definition is not given; one can only say that for spontaneous processes, d_{i}*S* ≥ 0, where the equality holds if the transformation is reversible or quasistatic.

If we consider a transformation between two states *a* and *b*, for the entropy change, we can write

3.1
$$\Delta {S}_{a,b}=\underset{a}{\overset{b}{{\displaystyle \int}}}\text{\hspace{0.05em}}\text{d}S=\underset{a}{\overset{b}{{\displaystyle \int}}}\text{\hspace{0.05em}}{\text{d}}_{i}S+\underset{a}{\overset{b}{{\displaystyle \int}}}\text{\hspace{0.05em}}{\text{d}}_{e}S$$

In general, the integrals in the last member depend on the trajectory followed by the transformation between states *a* and *b* (i.e., d_{i}*S* and d_{e}*S* are not exact differentials). If, however, the transformation is reversible, then d_{i}*S* = 0 and

3.2
$$\Delta {S}_{a,b}=\underset{a}{\overset{b}{{\displaystyle \int}}}\text{\hspace{0.05em}}\text{d}S=\underset{a}{\overset{b}{{\displaystyle \int}}}\text{\hspace{0.05em}}{\text{d}}_{e}S=\underset{a}{\overset{b}{{\displaystyle \int}}}\frac{\partial q}{T}$$

Note that the last integral can be written only when it makes sense: We know that if the transformation is not reversible, intermediate states are not equilibrium ones, and then, in general, the state variables are not well defined. From point 5, it follows immediately that for irreversible transformations between *a* and *b*, it is always

3.3
$$\underset{a}{\overset{b}{{\displaystyle \int}}}{\left(\frac{\partial q}{T}\right)}_{\text{irrev}\text{.}}<\underset{a}{\overset{b}{{\displaystyle \int}}}{\left(\frac{\partial q}{T}\right)}_{\text{rev}\text{.}}=\Delta {S}_{a,b}$$

The integral of ∂*q*/*T* along an irreversible transformation from *a* to *b* is always less than that along a reversible transformation between the same two states. In the reversible case, the integral coincides by definition with the change in entropy; in the irreversible case, it does not, because the intermediate states are not equilibrium states. From point 4, it is evident that for *adiabatic* transformations, that is, transformations in which ∂*q* = 0, d_{e}*S* = 0; hence, for a generic adiabatic transformation between states *a* and *b*, $\Delta {S}_{a,b}=\underset{a}{\overset{b}{\int}}{\text{d}}_{i}S\ge 0$.
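A standard concrete case makes this inequality tangible. The sketch below (our example, with arbitrary volumes) considers the adiabatic free expansion of an ideal gas: no heat is absorbed, so the integral of ∂*q*/*T* along the actual irreversible path is zero, while the entropy change, computed along a reversible isothermal path between the same end states, is *nR* ln(*V*₂/*V*₁) > 0.

```python
import math

# Free expansion of an ideal gas (a textbook example, not from this text):
# the irreversible path absorbs no heat, so its integral of dq/T vanishes,
# while the entropy change -- a state function, evaluated along a reversible
# isothermal path between the same states -- is strictly positive.

R = 8.314          # gas constant, J/(mol K)
n = 1.0            # moles (assumed)
V1, V2 = 1.0, 2.0  # volumes before and after expansion (assumed)

integral_irrev = 0.0                 # dq = 0 throughout the free expansion
delta_S = n * R * math.log(V2 / V1)  # reversible isothermal path

assert integral_irrev < delta_S      # the irreversible integral underestimates dS
print(f"∫(dq/T)_irrev = {integral_irrev}, ΔS = {delta_S:.3f} J/K")
```

The example also shows why entropy is useful as a state function: ΔS is fixed by the end states, however irreversibly the system actually travels between them.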

From the definition given in the previous paragraph, it follows that entropy is a physically measurable quantity, like length or temperature. For example, a cube of ice that melts increases its entropy by a quantity given by the heat of fusion divided by the melting temperature. Boltzmann (1896) was the first to show that entropy is also a measure of disorder. This fact can be demonstrated by applying the microscopic perspective to macroscopic thermodynamic systems. Consider a system consisting of *n* particles in equilibrium at temperature *T*, which can be found at various energy levels. The basic assumption is that the system is in a condition of complete molecular chaos, which is equivalent to assuming that all microscopic states are equally probable. Statistical mechanics tells us that for a system in equilibrium at temperature *T*, an equilibrium distribution, the *canonical ensemble*, exists. The probability *P* for the system to be in a state whose energy is *E* is given by

$$P(\text{\hspace{0.05em}}E)\propto {\text{e}}^{-\frac{E}{{k}_{\text{B}}\text{\hspace{0.05em}}T}}$$

where *k*_{B} = 1.3807 × 10^{−23} J/K is the Boltzmann constant. By normalizing this function, the energy distribution function *f*(*E*) is obtained:

$$f\left(E\right)={\text{e}}^{-\frac{\left(E-F\right)}{{k}_{\text{B}}\text{\hspace{0.05em}}T}}$$

with *F* Helmholtz free energy and

$$\underset{E}{{\displaystyle \sum}}{\text{e}}^{-\frac{E}{{k}_{\text{B}}\text{\hspace{0.05em}}T}}={\text{e}}^{-\frac{F}{{k}_{\text{B}}\text{\hspace{0.05em}}T}}$$
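The normalization above can be checked numerically. The sketch below is our illustration; the three-level spectrum is an arbitrary choice. With *F* = −*k*_{B}*T* ln Σ_E e^{−E/k_BT}, the distribution *f*(*E*) = e^{−(E−F)/k_BT} sums to exactly 1.

```python
import math

# Numerical check of the normalization: with F = -kB*T*ln(Z), where Z is the
# sum of Boltzmann factors, f(E) = exp(-(E - F)/(kB*T)) sums to unity.
# The discrete three-level spectrum below is an assumed toy example.

kB = 1.380649e-23             # Boltzmann constant, J/K
T = 300.0                     # temperature, K
levels = [0.0, 1e-21, 2e-21]  # energy levels in joules (assumed)

Z = sum(math.exp(-E / (kB * T)) for E in levels)
F = -kB * T * math.log(Z)     # Helmholtz free energy of the ensemble

f = [math.exp(-(E - F) / (kB * T)) for E in levels]
print(sum(f))                 # equals 1 up to floating-point rounding
```

Since e^{F/k_BT} = 1/Z, each term is just the Boltzmann factor divided by the partition sum, which is why the total is unity by construction.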

From the conservation of the Hamiltonian flow, it follows that all the trajectories of the system states in phase space are constrained to lie on constant-energy surfaces. These are indeed states that are not distinguishable from the energetic, or macroscopic, point of view. A measure of this indistinguishability, for a system in equilibrium at temperature *T*, is provided by the area of the constant-energy surface, or the number of possible microscopic states, or complexions. If we denote this number by Π, from the energy distribution function, we get

$$\Pi =\underset{E={E}_{0}}{\overset{\infty}{{\displaystyle \int}}}{\text{e}}^{-\frac{\left(E-F\right)}{{k}_{\text{B}}\text{\hspace{0.05em}}T}}\text{d}E$$

Boltzmann’s intuition was that the entropy *S* of the system increases with the number of possible microscopic states Π. In other words, the entropy is a measure of the macroscopic indistinguishability. This insight led to the demonstration of the renowned relation, valid for isolated systems, named *Boltzmann’s order principle*:

$$S={k}_{\text{B}}\mathrm{log}\Pi $$
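Boltzmann’s order principle is easy to exercise on a toy model (ours, not the text’s): for *N* two-state particles with exactly *n* in the “up” state, the number of microstates Π is the binomial coefficient, and *S* = *k*_{B} ln Π. The fully ordered macrostate has Π = 1 and hence zero entropy, while the most mixed macrostate maximizes Π and hence *S*.

```python
import math

# Toy illustration of S = kB * ln(Pi) for N two-state particles: Pi counts
# the microstates (complexions) compatible with a macrostate having n_up
# particles "up". More mixed macrostates have more complexions, hence
# higher entropy.

kB = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(N, n_up):
    Pi = math.comb(N, n_up)    # number of microstates of the macrostate
    return kB * math.log(Pi)

N = 100
S_ordered = boltzmann_entropy(N, 0)      # all particles down: Pi = 1, S = 0
S_mixed = boltzmann_entropy(N, N // 2)   # half up: maximal Pi, maximal S

assert S_ordered == 0.0
assert S_mixed > boltzmann_entropy(N, 10)  # more mixed -> higher entropy
```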

By using the energy distribution function *f*(*E*) with the expression for the kinetic energy, ${E}_{c}=\frac{1}{2}m{v}^{2}$, one obtains the Maxwell–Boltzmann distribution of particle velocities:

$$f\left(v\right)=\frac{n}{{\left(2\pi m{k}_{\text{B}}\text{\hspace{0.05em}}T\right)}^{3/2}}{\text{e}}^{-\left(1/{k}_{\text{B}}\text{\hspace{0.05em}}T\right)\left(m{v}^{2}/2\right)}$$

which is represented in Figure 3.1 for three different values of the temperature. It is readily understood how the existence of a most probable velocity, corresponding to the maximum of the Maxwell–Boltzmann curve, is associated with the natural tendency of the system to reach the most probable state. The more likely macrostates are those that can be realized by a greater number of microstates Π; that is, they are more indistinguishable from a macroscopic point of view and, therefore, more disordered.
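The position of that maximum can be verified numerically. In the sketch below (our example; the molecular mass is an assumed nitrogen-like value), the speed distribution, including the 4π*v*² phase-space factor, peaks at the analytic most probable speed *v*_p = √(2*k*_B*T*/*m*).

```python
import math

# Numerical check that the Maxwell-Boltzmann speed distribution peaks at
# v_p = sqrt(2*kB*T/m). The mass below is an assumed N2-like value.

kB = 1.380649e-23   # Boltzmann constant, J/K
m = 4.65e-26        # molecular mass, kg (approximate N2)
T = 300.0           # temperature, K

def f_speed(v):
    """Maxwell-Boltzmann speed distribution (per molecule, normalized)."""
    norm = (m / (2 * math.pi * kB * T)) ** 1.5
    return 4 * math.pi * norm * v * v * math.exp(-m * v * v / (2 * kB * T))

v_p = math.sqrt(2 * kB * T / m)          # analytic most probable speed
vs = [i * 1.0 for i in range(1, 2001)]   # scan 1 ... 2000 m/s
v_max = max(vs, key=f_speed)             # grid location of the maximum
assert abs(v_max - v_p) < 2.0            # agreement within grid resolution
```

At 300 K this gives a most probable speed of a few hundred meters per second, consistent with the shape of the curves in Figure 3.1.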

An isolated system increases its entropy and more or less rapidly reaches the inert state of maximum entropy, which is equilibrium. Thermodynamic systems tend to evolve toward states of greater disorder. The tendency to depart from an ordered state toward a less ordered one, to move from a less probable state to a more probable one, and toward an increase in entropy are all different manifestations of the same natural law expressed by the second principle. For an isolated system, the state of equilibrium is the most likely, and it is characterized by a maximum of the entropy.

Boltzmann’s order principle can be immediately extended to nonisolated systems. In fact, if the system is closed, that is, can exchange energy but not matter, and it is at a constant temperature *T*, its behavior is described by Helmholtz free energy:

$$F=U-TS$$

Figure 3.1 Maxwell–Boltzmann distribution of particle velocity moduli for n moles of a gas at three different temperatures.

The second principle says that the state of thermodynamic equilibrium of such a system is that in which *F* is a minimum. The structure of this definition reflects a competition between the internal energy *U* and the entropy *S*. At low temperatures, the entropic term is negligible, and the minimum of *F* corresponds to a minimum of *U*. In these conditions, the realization of low-entropy structures such as crystals and solids is experimentally observed. The higher the temperature, the more the system evolves toward high-entropy states such as gaseous states. Boltzmann’s order principle does not apply to dissipative structures.
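The *U*-versus-*TS* competition can be sketched with a minimal two-phase model (our toy example; the numbers are assumed, chosen to echo the melting of ice). An “ordered” phase with low energy and low entropy competes with a “disordered” phase with higher energy but higher entropy; the crossover temperature where the two free energies are equal is *T*\* = Δ*U*/Δ*S*.

```python
# Toy model of the competition in F = U - T*S between an ordered phase
# (taken as the zero of energy and entropy) and a disordered phase with
# extra energy dU and extra entropy dS. Assumed values roughly echo the
# molar fusion of ice (dU ~ 6000 J/mol, dS ~ 22 J/(mol K)).

dU = 6000.0   # extra internal energy of the disordered phase, J/mol (assumed)
dS = 22.0     # extra entropy of the disordered phase, J/(mol K) (assumed)

def stable_phase(T):
    """Phase minimizing Helmholtz free energy at temperature T."""
    F_ordered = 0.0
    F_disordered = dU - T * dS
    return "ordered" if F_ordered <= F_disordered else "disordered"

T_star = dU / dS   # crossover temperature, ~273 K with the numbers above
assert stable_phase(T_star - 50) == "ordered"     # low T: energy wins
assert stable_phase(T_star + 50) == "disordered"  # high T: entropy wins
```

Below *T*\* the energy term dominates and the ordered (crystal-like) phase minimizes *F*; above it, the −*TS* term takes over, exactly the trend described in the text.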

The first and perhaps the most authoritative in-depth description of self-organizing systems was the theory of dissipative structures, developed during the 1960s by Ilya Prigogine and his collaborators of the School of Brussels (Prigogine, 1967; Glansdorff and Prigogine, 1971; Nicolis and Prigogine, 1977). Prigogine’s crucial insight was that far-from-equilibrium systems must be described by nonlinear equations. Prigogine did not begin his study with living systems; instead, he focused on much simpler phenomena, such as the *Bénard instability* (Bénard, 1901) and the Belousov–Zhabotinsky autocatalytic reactions^{*} (Nicolis and Prigogine, 1977). These are spectacular examples of spontaneous self-organization phenomena. In the first case, the state of nonequilibrium is maintained by a continuous flow of heat through the boundaries of the system; in the second, it is maintained by the presence of catalytic compounds that are capable of sustaining nonequilibrium oscillating chemical reactions.

Prigogine’s work resulted in the development of a nonlinear thermodynamics apt to describe the phenomenon of self-organization in far-from-equilibrium open systems. Classical thermodynamics provides the concept of “equilibrium structure,” such as crystals. Bénard cells are also structures, but of a very different nature. For this reason, Prigogine introduced the notion of *dissipative structures* to emphasize, by the name itself, the close association (at first glance truly paradoxical) that may exist between structure and order on one side, and losses and wastage (dissipation) on the other (Prigogine and Stengers, 1984).

Indeed, the most sensational prediction of Prigogine’s theory is that dissipative structures are able not only to maintain far-from-equilibrium steady states, but can also evolve through new phases of instability, becoming new structures of greater complexity (see, e.g., Chapter 19).

Classical thermodynamics has solved the problem of the competition between chance and organization for equilibrium situations. If the temperature drops, the contribution of the energy *E* to the Helmholtz free energy *F* = *E* − *TS* becomes dominant. Increasingly complex structures can appear, corresponding to increasingly lower values of entropy. A phase transition such as liquid → solid is, in fact, characterized by a definite loss of entropy (or increase in organization). Steady states, too, are characterized by lower entropy. Therefore, dissipative processes might lead to an increase in organization (Prigogine, 1967). This increase in organization is usually continuous. Prigogine wonders whether “discontinuous changes in structure are possible due to dissipative processes? Such situations would be, for nonequilibrium systems, the analog of phase transitions.”

The detailed analysis carried out by Prigogine shows that dissipative structures receive their energy from the outside, whereas instabilities and jumps to new forms of organization are the result of microfluctuations that are amplified by positive feedback (Glansdorff and Prigogine, 1971).

Classical thermodynamics displays severe limitations because it is based on concepts such as the state of equilibrium and reversibility. Biological systems instead prosper in thermodynamic states that are far from static equilibrium and are characterized by the presence of irreversible processes. It is, therefore, desirable to extend the results of classical thermodynamics to this class of processes as well. Irreversible thermodynamic processes can be divided into two subclasses: linear, not-very-far-from-equilibrium processes and nonlinear ones.

In its more general formulation, the second principle is equally applicable to situations of equilibrium as well as to those of nonequilibrium (Nicolis and Prigogine, 1977). However, the most significant results of classical thermodynamics, developed since the nineteenth century, belong to the domain of equilibrium phenomena. Think, for example, of the law of mass action, the Gibbs phase rule, or the equation of state of a classical ideal gas. In classical thermodynamics, the state of nonequilibrium is seen as a disturbance that temporarily prevents the system from reaching equilibrium. The implicit assumption is that the natural condition, or rather the only describable one, is that of equilibrium. The concept of dissipation enters the theory only as a disturbing element, significantly linked to the impossibility of transforming all the absorbed heat into useful work.^{*}

The situation changed dramatically with the discovery of Onsager’s reciprocity relations (Onsager, 1931). This fact led to an extension of the domain of classical thermodynamics with the introduction of the thermodynamics of linear nonequilibrium states. These methods are applicable in situations where the flows or the reaction rates of irreversible processes are linear functions of the generalized thermodynamic forces, such as temperature or concentration gradients.

It is experimentally observed that linear nonequilibrium systems can evolve into states with lower entropy.^{*} However, we cannot speak in this context of the emergence of new structures. These are simply structures, so to speak, that are inherited from the equilibrium regime, modified and maintained in nonequilibrium conditions by external constraints. In the 1960s, the School of Brussels extended the description in terms of macroscopic thermodynamic variables to far-from-equilibrium conditions as well. If we consider a system near equilibrium, we know that, depending on the external constraints, the stationary state extremizes an appropriate thermodynamic potential. For example, the state will be that of maximum entropy *S* for an adiabatic system, or that of minimum free energy *F* for a closed system at constant temperature. This solution is unique, and Prigogine calls it the *thermodynamic branch* (Nicolis and Prigogine, 1977). If the external constraints are changed so as to force the system gradually further from equilibrium, the velocity of the processes, or the intensity of the flows, begins to depend nonlinearly on the generalized forces. As already mentioned for the Bénard experiment, upon reaching a critical value of the forcing, the system evolves by creating new structures (convective cells), exhibiting scale invariance and correlations at macroscopic dimensions. Prigogine and collaborators established a criterion for the stability of the thermodynamic branch. If the criterion is not satisfied, the thermodynamic branch may become unstable and bifurcations may appear.
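The loss of stability of the thermodynamic branch can be caricatured by the simplest bifurcation normal form (a schematic of ours, not Prigogine’s actual equations): in d*x*/d*t* = *ax* − *x*³, the branch *x* = 0 is stable for *a* < 0 but unstable for *a* > 0, where two new stable branches *x* = ±√*a* bifurcate, much as convective cells appear past the critical forcing.

```python
import math

# Pitchfork normal form dx/dt = a*x - x^3 as a cartoon of a thermodynamic
# branch (x = 0) losing stability at a critical value of the control
# parameter a. A small perturbation decays below threshold and is amplified
# above it, settling on a bifurcated branch x = sqrt(a).

def evolve(a, x0, dt=0.01, steps=20000):
    """Integrate dx/dt = a*x - x**3 by forward Euler from x0."""
    x = x0
    for _ in range(steps):
        x += dt * (a * x - x ** 3)
    return x

below = evolve(a=-1.0, x0=0.1)   # decays back to the thermodynamic branch
above = evolve(a=+1.0, x0=0.1)   # amplified fluctuation: new branch appears

assert abs(below) < 1e-3                     # x = 0 stable for a < 0
assert abs(above - math.sqrt(1.0)) < 1e-3    # x -> sqrt(a) for a > 0
```

This mirrors the mechanism described next: microfluctuations are damped on the stable branch, but beyond the critical constraint they are amplified into a new macroscopic organization.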

When dealing with systems that are not in thermodynamic equilibrium, some remarks are needed with regard to the definition of thermodynamic potentials. Indeed, as was said above, under nonequilibrium conditions the thermodynamic quantities that characterize a system are not well defined. In order to allow the definition of potentials, the thermodynamics of irreversible processes assumes the validity of the hypothesis of local thermodynamic equilibrium (LTE):

Every thermodynamic system can be decomposed into a certain number of macroscopic subsystems within which the state variables are substantially constant ($\Delta x/x\ll 1$) and are locally linked by the same equations of state.

Statistical mechanics explains why this assumption is reasonable. In fact, it is shown that all the relaxation times, which are characteristic for the achievement of equilibrium in microscopic systems, are proportional to $\sqrt{N}$, where *N* is the number of particles: small subsystems therefore reach local equilibrium very rapidly. Under the LTE hypothesis, the fundamental Gibbs equation holds locally:
3.4
$$T\text{d}S=\text{d}U+p\text{d}V-\underset{j=1}{\overset{k}{{\displaystyle \sum \text{\hspace{0.05em}}\text{\hspace{0.05em}}}}}\text{\hspace{0.05em}}{\mu}_{j}\text{d}{n}_{j}$$

with *S* entropy, *T* absolute temperature, *V* volume, *U* total energy, *p* pressure, μ_{j} chemical potential of the *j*th component, *n*_{j} number of moles of the *j*th component, and *k* the number of components.

This equation is initially obtained for equilibrium states, and it is subsequently considered valid locally also for nonequilibrium systems. Note that this assumption has an important consequence: The function *S*(*U*,*V*,*n_{i}*) that appears in Equation 3.4 is, point by point, the same function of the state variables as at equilibrium.

From Equations 2.3 and 3.4, and recalling that if we consider only mechanical work then ∂*w* = *p*d*V*, we obtain for the entropy change

$$\text{d}S=\frac{\text{d}q}{T}-\frac{1}{T}\underset{j=1}{\overset{k}{{\displaystyle \sum \text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}}}}{\mu}_{j}\text{d}{n}_{j}$$

In general, any extensive quantity can vary for two broad categories of causes: variations due to internal processes and changes due to external ones. For example, we have seen in Equation 3.1 that the entropy change can be written as d*S* = d_{i}*S* + d_{e}*S*. Similarly, for the number of moles of each component, we can write

$$\text{d}{n}_{j}={\text{d}}_{i}{n}_{j}+{\text{d}}_{e}{n}_{j};1\le j\le k$$

${\text{d}}_{e}{n}_{j}$ represents the change due to flows of matter through the surface of the system, whereas ${\text{d}}_{i}{n}_{j}$ represents the variation of matter due to internal processes such as chemical reactions or phase changes. As we did for $\text{d}{n}_{j}$, the variation of heat d*q* can be decomposed as
$$\text{d}q={\text{d}}_{i}q+{\text{d}}_{e}q$$

The variation in time of the entropy is

$$\frac{\text{d}S}{\text{d}t}=\frac{1}{T}\frac{\text{d}q}{\text{d}t}-\frac{1}{T}\underset{j=1}{\overset{k}{{\displaystyle \sum \text{\hspace{0.05em}}\text{\hspace{0.05em}}}}}{\mu}_{j}\frac{\text{d}{n}_{j}}{\text{d}t}$$

and separating internal and external contributions, we may write

$$\frac{\text{d}S}{\text{d}t}=\frac{{\text{d}}_{i}S}{\text{d}t}+\frac{{\text{d}}_{e}S}{\text{d}t}=\frac{1}{T}\frac{{\text{d}}_{i}q}{\text{d}t}-\frac{1}{T}\underset{j=1}{\overset{k}{{\displaystyle \sum \text{\hspace{0.05em}}\text{\hspace{0.05em}}}}}{\mu}_{j}\frac{{\text{d}}_{i}{n}_{j}}{\text{d}t}+\frac{1}{T}\frac{{\text{d}}_{e}q}{\text{d}t}-\frac{1}{T}\underset{j=1}{\overset{k}{{\displaystyle \sum \text{\hspace{0.05em}}}}}\text{\hspace{0.05em}}{\mu}_{j}\frac{{\text{d}}_{e}{n}_{j}}{\text{d}t}$$

The entropy production *p* is defined as

3.5
$$p=\frac{{\text{d}}_{i}S}{\text{d}t}=\frac{1}{T}\frac{{\text{d}}_{i}q}{\text{d}t}-\frac{1}{T}\underset{j=1}{\overset{k}{{\displaystyle \sum \text{\hspace{0.05em}}\text{\hspace{0.05em}}}}}{\mu}_{j}\frac{{\text{d}}_{i}{n}_{j}}{\text{d}t}$$

According to what has been stated in point 5 of Section 3.2 and under the LTE hypothesis, the total entropy production is given by the sum of the entropy productions of the single irreversible processes. Definition 3.5 can be generalized. In fact, we observe that each of the terms appearing in Equation 3.5 is formally the product of a state function and a flow. In a very general way, we can, therefore, redefine the entropy production of a thermodynamic system in the form

3.6
$$p=\frac{{\text{d}}_{i}S}{\text{d}t}={\displaystyle \sum}_{k}\text{\hspace{0.05em}}{X}_{k}{J}_{k}$$

where *X_{k}* are the generalized thermodynamic forces and *J_{k}* the conjugate flows. For a system in which irreversible processes take place, the second principle then requires

3.7
$$p=\frac{{\text{d}}_{i}S}{\text{d}t}={\displaystyle \sum}_{k}\text{\hspace{0.05em}}{X}_{k}{J}_{k}>0$$

Relation 3.7 says that the entropy production *p* of an open thermodynamic system in which irreversible transformations take place is always positive. For a system at equilibrium, instead, ${\text{d}}_{i}S=0$ and hence *p* = 0.
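The bilinear form 3.7 can be made concrete in the linear (Onsager) regime, where each flow is a linear combination of the forces, *J_{k}* = Σ_{l} *L_{kl}X_{l}*, with a symmetric coefficient matrix. The sketch below uses assumed example coefficients; positive definiteness of the matrix guarantees *p* > 0 for any nonzero set of forces.

```python
# Sketch of p = sum_k X_k * J_k with linear phenomenological laws
# J_k = sum_l L_kl * X_l. The coefficient matrix below is an assumed
# example: symmetric (Onsager reciprocity) and positive definite, which
# makes the entropy production positive for every nonzero force vector.

L = [[2.0, 0.5],
     [0.5, 1.0]]   # assumed Onsager coefficients

def entropy_production(X):
    """Bilinear entropy production p = X . (L X)."""
    J = [sum(L[k][l] * X[l] for l in range(len(X))) for k in range(len(X))]
    return sum(Xk * Jk for Xk, Jk in zip(X, J))

for X in ([1.0, 0.0], [0.0, -1.0], [0.3, -0.7]):
    assert entropy_production(X) > 0      # off equilibrium: p > 0
assert entropy_production([0.0, 0.0]) == 0.0  # equilibrium: p = 0
```

The off-diagonal coefficients couple one force to another flow (as in thermal diffusion); reciprocity says they are equal, which is exactly Onsager’s result cited earlier.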

Equation 3.7 has an important consequence: There should be some causal relation, a coupling, between generalized forces and flows of the type

3.8
$${J}_{k}={J}_{k}\left({X}_{1},{X}_{2},\mathrm{....},{X}_{n}\right)$$

This is clear when we consider a thermodynamic system in which a single irreversible process occurs. Equation 3.7 for such a system becomes

3.9
$$p=X\cdot J>0$$

This relationship obviously imposes a constraint on the signs of *X* and *J*: The second law of thermodynamics requires that forces and fluxes have the same sign. For example, if we think of a system consisting of two compartments that can interact with each other only by exchanging heat, we can write

$$X=\left(\frac{1}{{T}_{1}}-\frac{1}{{T}_{2}}\right);J=\frac{{\text{d}}_{i}{q}_{1}}{\text{d}t}$$

From Equation 3.9, we will have only two possibilities:

$${T}_{1}<{T}_{2},J>0\text{}\text{or}\text{}{T}_{1}>{T}_{2},J<0.$$

Heat always flows from the body at the higher temperature to the colder one. As long as a difference, or gradient, in temperature exists, there will be a flux *J* ≠ 0. The temperature difference is progressively reduced, and the system moves to a state of equilibrium with *X* = 0 and *J* = 0. At equilibrium, entropy production is null, and we conclude that thermodynamic equilibrium is a state of absolute minimum for *p*. The existence of a gradient is necessary for the flow to persist. It is therefore crucial to act on the system from the outside by imposing external constraints or forcings.
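The two-compartment relaxation just described can be simulated directly (our discretization; heat capacity, conductance, and initial temperatures are assumed values). The flux runs from hot to cold, the entropy production stays positive throughout, and both decay to zero as the gradient is consumed.

```python
# Two compartments exchanging heat, J = k*(T2 - T1) into compartment 1.
# The entropy production p = (1/T1 - 1/T2) * J is positive whenever the
# gradient is nonzero and falls to zero as equilibrium is approached.
# All parameter values are assumed for illustration.

C = 10.0    # heat capacity of each compartment, J/K (assumed)
k = 1.0     # heat conductance, W/K (assumed)
dt = 0.01   # time step, s

T1, T2 = 280.0, 320.0   # initial temperatures (assumed)
history = []
for _ in range(10000):
    J = k * (T2 - T1)                # heat flux into compartment 1
    p = (1.0 / T1 - 1.0 / T2) * J    # entropy production, X * J
    history.append(p)
    T1 += dt * J / C                 # compartment 1 warms up
    T2 -= dt * J / C                 # compartment 2 cools down

assert all(p >= 0 for p in history)  # second law: p never negative
assert history[-1] < history[0]      # p decays as the gradient dies out
assert abs(T1 - T2) < 1e-3           # temperatures have equalized
```

Without external constraints the system simply slides to *X* = 0, *J* = 0, *p* = 0, which is why sustained forcing from outside is needed for any nonequilibrium steady state.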

We may define a *steady state* as a condition in which some state variables are time independent. An equilibrium state is trivially a steady state. The existence of external constraints is a necessary condition for the existence of nonequilibrium stationary states. In the condition of the nonequilibrium steady state, the following equations hold:

$$p=\frac{{\text{d}}_{i}S}{\text{d}t}>0,\frac{{\text{d}}_{e}S}{\text{d}t}+\frac{{\text{d}}_{i}S}{\text{d}t}=0$$

From these, it necessarily follows that

$$\frac{{\text{d}}_{e}S}{\text{d}t}<0$$

This condition may be interpreted by saying that for nonequilibrium stationary states, the entropy of the matter/energy entering the system must be less than that released by the system to the outside. Along with Prigogine, we can say that, from the thermodynamic point of view, open systems degrade the material that comes in, and it is this degradation that maintains the steady state (Prigogine, 1967).

For a system that is subject to external constraints, from Equation 3.7 and the assumption of linearity of the relation expressed by Equation 3.8, it can be shown that

3.10
$$\frac{\text{d}p}{\text{d}t}\le 0$$

This inequality follows directly from the second law of thermodynamics under the linearity condition, whose validity in the near-equilibrium regime was demonstrated by Onsager (1931). It says that the entropy production of such a system decreases over time. From Equation 3.7, for steady-state systems that are subject to external constraints, as long as the constraints persist, *p* > 0. This fact, together with Equation 3.10, leads to the conclusion that for constrained systems, the stationary states are local minima of the entropy production *p*. In summary, we will have $\text{d}p/\text{d}t<0$ during the evolution toward the steady state, and $\text{d}p/\text{d}t=0$ once the steady state is reached.
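The theorem of minimum entropy production can be sketched numerically (our discretized example, with assumed coefficients): a middle body exchanges heat with two reservoirs held at fixed temperatures, which constitute the external constraint. Flows are taken linear in the thermodynamic forces 1/*T*_m − 1/*T*_res, as the linear regime requires; the middle temperature then relaxes to its steady value while *p* decreases monotonically to a positive minimum.

```python
# Minimum entropy production in the linear regime: a body at temperature Tm
# exchanges heat with two reservoirs at fixed Ta, Tb (the constraints).
# Flows are linear in the forces X = 1/Tm - 1/T_res, so p = L*(Xa^2 + Xb^2)
# decreases monotonically to a positive minimum at the steady state.
# All coefficients are assumed illustration values.

Ta, Tb = 300.0, 320.0    # fixed reservoir temperatures (external constraints)
L = 1000.0               # phenomenological coefficient, W*K (assumed)
C = 10.0                 # heat capacity of the middle body, J/K (assumed)
dt = 0.01                # time step, s

def fluxes(Tm):
    Ja = L * (1 / Tm - 1 / Ta)   # heat flow from reservoir a into the body
    Jb = L * (1 / Tm - 1 / Tb)   # heat flow from reservoir b into the body
    return Ja, Jb

def production(Tm):
    Ja, Jb = fluxes(Tm)
    return (1 / Tm - 1 / Ta) * Ja + (1 / Tm - 1 / Tb) * Jb  # p = sum X_k J_k

Tm, ps = 302.0, []       # start the middle body off its steady state
for _ in range(50000):
    ps.append(production(Tm))
    Ja, Jb = fluxes(Tm)
    Tm += dt * (Ja + Jb) / C

assert all(q <= p + 1e-18 for p, q in zip(ps, ps[1:]))  # dp/dt <= 0
assert ps[-1] > 0    # constraints hold the steady state off equilibrium
```

Unlike the unconstrained two-compartment case, *p* here cannot reach zero: the fixed reservoir temperatures keep a permanent flux through the system, and the steady state sits at the minimum of *p* compatible with those constraints.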

The entropy production *p*, in the regime of irreversible linear processes, plays a role similar to that of the thermodynamic potentials in the theory of equilibrium states. However, this decrease in entropy production does not reflect the emergence of macroscopic order, as is observed when the constraints that drive the system away from equilibrium are increased further. The stability of the not-too-far-from-equilibrium states implies that, in a system which obeys linear laws, the spontaneous emergence of new, more highly ordered structures is impossible.

All the results presented so far are based on the assumption of the validity of the fundamental Gibbs Equation 3.4, first demonstrated for equilibrium states. The physical interpretation of this assumption is that under nonequilibrium conditions, too, entropy depends on the same independent variables as in equilibrium states; this can hold in far-from-equilibrium situations as well. The assumption of linearity is rather more restrictive: Even within the range of validity of the Gibbs equation, the relationship between generalized flows and forces may be nonlinear. Since the theorem of minimum entropy production is valid only in the linear regime, it is of limited use for biological systems. In 1971, Prigogine and Glansdorff proposed a method to extend the results of the thermodynamics of stationary states to nonlinear situations (Glansdorff and Prigogine, 1971). Indeed, it can be shown that there is always a part of the variation of the entropy production in time that keeps a definite sign:

3.11
$$\frac{{\text{d}}_{X}p}{\text{d}t}\le 0$$

Here, d_{X}*p* is the part of the variation of *p* in Equation 3.6 due to the change of the generalized forces alone; its definite sign identifies a local extremum of *p* at the stationary states imposed by the external constraints.

An in-depth analysis of this theory would lead us very far. The remarkable point, however, is that in these developments, the theoretical basis for the definition of *evolution criteria* might be found.

From the Greek τροπή, transformation.

It is denoted by ∂*q* rather than d*q* to emphasize the fact that it is not an exact differential.

The so-called chemical clocks.

The concept of dissipation is thus bound to the second principle through the impossibility of building thermal machines with 100% efficiency.

For example, by applying a temperature gradient to a box containing a mixture of two gases, we observe an increase in the concentration of one of the two gases in the vicinity of the warmest wall and a decrease in the vicinity of the coldest one (Nicolis and Prigogine, 1977).