Power Analysis and Optimization from Circuit to Register-Transfer Levels

Authored by: José Monteiro, Rakesh Patel, Vivek Tiwari

Electronic Design Automation for IC Implementation, Circuit Design, and Process Technology

Print publication date: April 2016
Online publication date: February 2017

Print ISBN: 9781482254600
eBook ISBN: 9781315215112

10.1201/b19714-5


Abstract


The complexity and speed of today’s VLSI designs entail a level of power consumption that, if left unaddressed, creates an unmanageable heat dissipation problem. The operation of these circuits is only possible due to aggressive techniques for power reduction at different levels of design abstraction. The trends toward mobile devices and the Internet of Things, on the other hand, drive the need for energy-efficient circuits and the requirement to maximize battery life. To meet these challenges, sophisticated design methodologies and algorithms for electronic design automation (EDA) have been developed.

One of the key features that led to the success of CMOS technology was its intrinsically low power consumption. It allowed circuit designers and EDA tools to concentrate on maximizing circuit performance and minimizing circuit area. Another important feature of CMOS technology is its favorable scaling properties, which have permitted a steady decrease in feature size, allowing for numerous and exceptionally complex systems on a single chip, working at high clock frequencies.

Power consumption concerns came into play with the appearance of the first portable electronic systems in the late 1980s. In this market, battery lifetime is a decisive factor for the commercial success of the product. It also became apparent that the increasing integration of active elements per die area would lead to prohibitively large energy consumption of an integrated circuit. High power consumption is undesirable for economic and environmental reasons and also leads to high heat dissipation. In order to keep such a circuit working at acceptable temperature levels, expensive heat removal systems may be required.

In addition to the full-chip power consumption, and perhaps even more importantly, excessive heat is often dissipated at localized areas in the circuit, the so-called hot spots. This problem can be mitigated by selectively turning off unused sections of the circuit when such conditions are detected. The term dark silicon has been used to describe this situation where many available computational elements in an integrated circuit cannot be used at the same time [1]. These factors have contributed to the rise of power consumption as a major design parameter on par with performance and die size and a limitation of the continuing scaling of CMOS technology.

To respond to this challenge, intensive research has been invested in the past two decades in developing EDA tools for power optimization. Initial efforts focused on circuit- and logic-level tools, because at these levels EDA tools were more mature and malleable. Today, a large fraction of EDA research targets system- or architectural-level power optimization (Chapters 7 and 13 of Electronic Design Automation for IC System Design, Verification, and Testing, respectively), which promise a higher overall impact given the breadth of their application. Together with optimization tools, efficient techniques for power estimation are required, both as an absolute indicator that the circuits’ consumption meets some target value and as a relative indicator of the power merits of different alternatives during design space exploration.

This chapter provides an overview of key CAD techniques proposed for low power design and synthesis. We start in Section 3.1 by describing the issues and methods for power estimation at different levels of abstraction, thus defining the targets for the tools presented in the following sections. In Sections 3.2 and 3.3, we review power optimization techniques at the circuit and logic levels of abstraction, respectively.

3.1  Power Analysis

Given the importance of power consumption in circuit design, EDA tools are required to provide power estimates for a circuit. These estimates serve two purposes. First, when evaluating different designs, they help identify the most power-efficient alternative; since estimates may be required for many alternatives, accuracy is sometimes sacrificed for tool response speed, provided the relative fidelity of the estimates is preserved. Second, an accurate power consumption estimate is required before fabrication to guarantee that the circuit meets its allocated power budget.

Obtaining a power estimate is significantly more complex than circuit area and delay estimates, because power depends not only on the circuit topology but also on the activity of the signals.

Typically, design exploration is performed at each level of abstraction, motivating power estimation tools at different levels. The higher the abstraction level, the less information there is about the actual circuit implementation, implying less assurance about the power estimate accuracy.

In this section, we first discuss the components of power consumption in CMOS circuits. We then discuss how each of these components is estimated at the different design abstraction levels.

3.1.1  Power Components in CMOS Circuits

The power consumption of digital CMOS circuits is generally divided into three components [2]:

  1. Dynamic power (Pdyn)
  2. Short-circuit power (Pshort)
  3. Static power (Pstatic)

The total power consumption is given by the sum of these components:

3.1 $P = P_{dyn} + P_{short} + P_{static}$

The dynamic power component, Pdyn, is related to the charging and discharging of the load capacitance at the gate output, Cout. This is a parasitic capacitance that can be lumped at the output of the gate. Today, this component is still the dominant source of power consumption in a CMOS gate.

As an illustrative example, consider the inverter circuit depicted in Figure 3.1 (to form a generic CMOS gate, the bottom transistor, nMOS, can be replaced by a network of nMOS transistors, and the top transistor, pMOS, by a complementary network of pMOS transistors). When the input goes low, the nMOS transistor is cut off and the pMOS transistor conducts. This creates a direct path between the voltage supply and Cout. Current IP flows from the supply to charge Cout up to the voltage level Vdd. The amount of charge drawn from the supply is CoutVdd and the energy drawn from the supply equals CoutVdd². The energy actually stored in the capacitor, Ec, is only half of this, Ec = ½CoutVdd². The other half is dissipated in the resistance represented by the pMOS transistor. During the subsequent low-to-high input transition, the pMOS transistor is cut off and the nMOS transistor conducts. This connects the capacitor Cout to the ground, leading to the flow of current In. Cout discharges and its stored energy, Ec, is dissipated in the resistance represented by the nMOS transistor. Therefore, an amount of energy equal to Ec is dissipated every time the output makes a transition. Given N gate transitions within time T, its dynamic power consumption during that time period is given by

3.2 $P_{dyn} = E_c \times N/T = \frac{1}{2} C_{out} V_{dd}^2 \, \frac{N}{T}$

Figure 3.1   Illustration of the dynamic and short-circuit power components.

In the case of synchronous circuits, an estimate, α, of the average number of transitions the gate makes per clock cycle, Tclk = 1/fclk, can be used to compute average dynamic power

3.3 $P_{dyn} = E_c \times \alpha \times f_{clk} = \frac{1}{2} C_{out} V_{dd}^2 \, \alpha f_{clk}$

Cout is the sum of the three components Cint, Cwire, and Cload. Of these, Cint represents the internal capacitance of the gate. This includes the diffusion capacitance of the drain regions connected to the output. Cload represents the sum of gate capacitances of the transistors this logic gate is driving. Cwire is the parasitic capacitance of the wiring used to interconnect the gates, including the capacitance between the wire and the substrate, the capacitance between neighboring wires, and the capacitance due to the fringe effect of electric fields. The term αCout is generally called the switched capacitance, which measures the amount of capacitance that is charged or discharged in one clock cycle.
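As a concrete illustration of Equations 3.2 and 3.3, the short sketch below computes the average dynamic power of a single gate. Every numeric value in it (load, supply voltage, switching activity, clock rate) is a hypothetical placeholder for illustration, not a figure from the text.

```python
# Dynamic power of one gate, per Equation 3.3:
#   P_dyn = 1/2 * C_out * Vdd^2 * alpha * f_clk
# All numeric values below are illustrative assumptions.

def dynamic_power(c_out_farads: float, vdd_volts: float,
                  alpha: float, f_clk_hz: float) -> float:
    """Average dynamic power (watts) of a single gate."""
    return 0.5 * c_out_farads * vdd_volts**2 * alpha * f_clk_hz

# Example: 10 fF load, 1.0 V supply, 0.2 transitions/cycle, 1 GHz clock.
p = dynamic_power(10e-15, 1.0, 0.2, 1e9)
print(f"P_dyn = {p * 1e6:.2f} uW")  # -> P_dyn = 1.00 uW
```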

The short-circuit power component, Pshort, is also related to the switching activity of the gate. During the transition of the input signal from one voltage level to the other, there is a period of time when both the pMOS and the nMOS transistors are on, thus creating a path from Vdd to ground. Thus, each time a gate switches, some amount of energy is consumed by the current that flows through both transistors during this period, indicated as Ishort in Figure 3.1. The short-circuit power is determined by the time the input voltage Vin remains between VTn and Vdd − VTp, where VTn and VTp are the threshold voltages of the nMOS and the pMOS transistors, respectively. Careful design to avoid slow input ramps, namely, through the appropriate sizing of the transistors, can limit this component to a small fraction of total power; hence, it is generally considered only a second-order effect. Given an estimate of the average amount of charge, Qshort, that is carried by the short-circuit current per output transition, the short-circuit power is obtained by

3.4 $P_{short} = Q_{short} V_{dd} \, \alpha f_{clk}$

The static power component, Pstatic, is due to leakage currents in the MOS transistors. As the name indicates, this component is not related to the circuit activity and exists as long as the circuit is powered. The source and drain regions of a MOS transistor (MOSFET) can form reverse-biased parasitic diodes with the substrate. There is leakage current associated with these diodes. This current is very small and is usually negligible compared to dynamic power consumption. Another type of leakage current occurs due to the diffusion of carriers between the source and drain even when the MOSFET is in the cutoff region, that is, when the magnitude of the gate-source voltage, VGS, is below the threshold voltage, VT. In this region, the MOSFET behaves like a bipolar transistor and the subthreshold current is exponentially dependent on VGS − VT. With the reduction of transistor size, leakage current tends to increase for each new technology node, driving up the relative weight of static power consumption. This problem has been mitigated through the introduction of high-κ dielectric materials and new gate geometry architectures [3].

Another situation that can lead to static power dissipation in CMOS is when a degraded voltage level (e.g., the high output level of an nMOS pass transistor) is applied to the inputs of a CMOS gate. A degraded voltage level may leave both the nMOS and pMOS transistors in a conducting state, leading to continuous flow of short-circuit current. This again is undesirable and should be avoided in practice.

The preceding discussion assumes a pure CMOS design style. In certain specialized circuits, typically for performance reasons, alternative design styles may be used. Some design styles produce a current even when the output is constant at one voltage level, thus contributing to the increase in static power consumption. One example is the domino design style, where a precharged node needs to be recharged on every clock cycle if the output of the gate happens to be the opposite of the precharged value. Another example is the pseudo-nMOS logic family, where the pMOS network of a CMOS gate is replaced by a single pMOS transistor that always conducts. This logic style exhibits a constant current flowing whenever the output is at logic 0, that is, when there is a direct path to ground through the nMOS network.

3.1.2  Analysis at the Circuit Level

Power estimates at the circuit level are generally obtained using a circuit-level simulator, such as SPICE [4]. Given a user-specified representative sequence of input values, the simulator solves the circuit equations to compute voltage and current waveforms at all nodes in the electrical circuit. By averaging the current values drawn from the source, Iavg, the simulator can output the average power consumed by the circuit, P = IavgVdd (if multiple power sources are used, the total average power will be the sum of the power drawn from all power sources).
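The averaging step itself is straightforward. Below is a minimal sketch that computes P = IavgVdd from a sampled supply-current waveform by trapezoidal integration; the time/current samples are invented placeholders, not actual simulator output.

```python
# Average power from a sampled supply-current waveform: P = I_avg * Vdd.
# The waveform samples below are made-up placeholders, not real SPICE output.

def average_power(times_s, currents_a, vdd_volts):
    """Time-weighted average of i(t) over the interval, times Vdd."""
    total_charge = 0.0
    for k in range(len(times_s) - 1):
        dt = times_s[k + 1] - times_s[k]
        total_charge += 0.5 * (currents_a[k] + currents_a[k + 1]) * dt  # trapezoid
    i_avg = total_charge / (times_s[-1] - times_s[0])
    return i_avg * vdd_volts

t = [0.0, 1e-9, 2e-9, 3e-9]     # seconds
i = [0.0, 2e-3, 1e-3, 0.0]      # amperes drawn from the supply
print(average_power(t, i, 1.0)) # -> 0.001 W, i.e., 1 mW
```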

At this level, complex models for the circuit devices can be used. These models permit the accurate computation of the three components of power—dynamic, short-circuit, and static power. Since the circuit is described at the transistor level, correct estimates can be computed not only for CMOS but also for any logic design style and even analog modules. After placement and routing of the circuit, simulation can handle back-annotated circuit descriptions, that is, with realistic interconnect capacitive and resistive values. The power estimates thus obtained can be very close to the power consumption of the actual fabricated circuit.

The problem is that such detailed simulation requires the solution of complex systems of equations and is only practical for small circuits. Another limitation is that the input sequences must necessarily be very short since simulation is time consuming; hence, the resulting power estimates may poorly reflect the real statistics of the inputs. For these reasons, full-fledged circuit-level power estimation is typically only performed for the accurate characterization of small-circuit modules. To apply circuit-level simulation to larger designs, one can resort to very simple models for the active devices. Naturally, this simplification implies accuracy loss. On the other hand, massively parallel computers extend the applicability of these methods to even larger designs [5].

Switch-level simulation is a limiting case, where transistor models are simply reduced to switches, which can be either open or closed, with some associated parasitic resistive and capacitive values. This simplified model allows for the estimation of significantly larger circuit modules under much longer input sequences. Switch-level simulation can still model with fair accuracy the dynamic and short-circuit components of power, but this is no longer true for leakage power. In earlier technology nodes, designers were willing to ignore this power component, since it accounted for a negligible fraction of total power, but its relative importance has been increasing ever since. Leakage power estimation must then be performed independently using specifically tailored tools. Many different approaches have been proposed, some of which are presented in the next section.

Among intermediate-complexity solutions, PrimeTime PX, an add-on to Synopsys’ static timing analysis tool [6], offers power estimates with accuracy close to SPICE. This tool employs table lookup of current models for given transistor sizes and uses circuit partitioning to solve the circuit equations independently on each partition. Although some error is introduced by not accounting for interactions between different partitions, this technique greatly simplifies the problem to be solved, allowing for fast circuit-level estimates of large designs.

3.1.3  Static Power Estimation

Static power analysis is typically performed using the subthreshold model to estimate leakage per micron of transistor width, which is then extrapolated to estimate leakage over the entire chip. Typically, the stacking factor (the leakage reduction from stacking of devices) is a first-order component of this extrapolation and serves to modify the total effective width of devices under analysis [7]. The analysis can be viewed as the modification of this total width by the stacking factor.

Most analytical works on leakage have used the BSIM2 subthreshold current model [8]:

3.5 $I_{sub} = A \, e^{(V_{GS} - V_T - \gamma' V_{SB} + \eta V_{DS})/(n V_{TH})} \left(1 - e^{-V_{DS}/V_{TH}}\right)$

where

  • VGS, VDS, and VSB are the gate-source, drain-source, and source-bulk voltages, respectively
  • VT is the zero-bias threshold voltage
  • VTH is the thermal voltage (kT/q)
  • γ′ is the linearized body-effect coefficient
  • η is the drain-induced barrier lowering (DIBL) coefficient
  • $A = \mu_0 C_{ox} (W/L_{eff}) V_{TH}^2 e^{1.8}$

The BSIM2 leakage model incorporates all the leakage behavior that we are presently concerned with. In summary, it accounts for the exponential increase in leakage with reduction in threshold voltage and gate-source voltage. It also accounts for the temperature dependence of leakage.

Calculating leakage current by applying Equation 3.5 to every single transistor in the chip can be very time consuming. To overcome this barrier, empirical models for dealing with leakage at a higher level of abstraction have been studied [9,10]. For example, a simple empirical model is as follows [10]:

3.6 $I_{leak} = I_{off} \, \frac{W_{tot}}{X_s} \, X_t$

where

  • Ioff is the leakage current per micron of a single transistor measured from actual silicon at a given temperature
  • Wtot is the total transistor width (sum of all N and P devices)
  • Xs is an empirical stacking factor based on the observation that transistor stacks leak less than single devices
  • Xt is the temperature factor and is used to scale Ioff to the appropriate junction temperature of interest

The Ioff value is typically specified at room temperature (therefore the need for a temperature factor to translate to the temperature of interest).
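A minimal sketch of this empirical model follows, with the stacking factor dividing the total effective width as described above. All parameter values in it are invented for illustration, not data from the text.

```python
# Full-chip subthreshold leakage per the empirical model of Equation 3.6:
#   I_leak = I_off * (W_tot / X_s) * X_t
# All numeric parameter values below are invented assumptions.

def chip_leakage(i_off_a_per_um: float, w_tot_um: float,
                 x_s: float, x_t: float) -> float:
    """Estimated chip leakage current in amperes."""
    return i_off_a_per_um * (w_tot_um / x_s) * x_t

# Hypothetical values: 10 nA/um off-current at room temperature,
# 1e8 um of total transistor width, an average stacking factor of ~10
# (cf. Table 3.1), and a 3x factor to scale to the junction temperature.
print(chip_leakage(10e-9, 1e8, 10.0, 3.0))  # -> 0.3 A
```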

The other major component of static power is gate leakage. Gate leakage effectively becomes a first-order effect only when the gate oxide is thin enough such that direct quantum tunneling through the oxide becomes a significant quantity. The amount of gate leakage current is directly proportional to transistor width, and thus, the main effort for gate leakage estimation is to estimate the total transistor width, Wtot, similar to what is required for subthreshold current. The exact value of gate leakage depends on the gate-to-source and drain-to-source voltages, VGS and VDS, and these depend on gate input values. State-based weighting of the transistor width can therefore be used for more accurate gate leakage estimation. However, this entails the additional effort of estimating the state probabilities for each gate [11].

At present, VT is high enough that, within the total active current, the dynamic component dominates the subthreshold current. On the other hand, subthreshold current dominates the total standby current when compared to the gate and well leakage components. As oxide thickness continued to scale down, it was feared that gate leakage would become the dominant source of leakage. However, the introduction of new technologies such as metal gates and 3D FinFETs has decelerated the trend toward thinner oxides, and therefore, subthreshold leakage will continue to dominate gate leakage for at least a couple more technology nodes.

3.1.4  Logic-Level Power Estimation

A key observation from Section 3.1.1 that facilitates power estimation at the logic level is that, if the input of the gate rises fast enough, the energy consumed by each output transition does not depend on the resistive characteristics of the transistors and is simply a function of the capacitive load the gate is driving, Ec = ½CoutVdd². Given parasitic gate and wire capacitance models that allow the computation of Cout_i for each gate i in a gate-level description of the circuit, power estimation at the logic level reduces to computing the number of transitions that each gate makes in a given period of time, that is, the switching activity of the gate. This corresponds to either parameter N or α, and we need only apply Equation 3.2 or 3.3, respectively, to obtain power.

Naturally, this estimate refers only to the dynamic power component. For total power consumption, we must take leakage power into account, meaning that the methods described in the previous section must complement the logic-level estimate. In many cases, power estimates at the logic level serve as indicators for guiding logic-level power optimization techniques, which typically target the dynamic power reduction, and hence, only an estimate for this component is required. There are two classes of techniques for the switching activity computation, namely, simulation-based and probabilistic analyses (also known as dynamic and static techniques, respectively).

3.1.4.1  Simulation-Based Techniques

In simulation-based switching activity estimation, highly optimized logic simulators are used, allowing for fast simulation of a large number of input vectors. This approach raises two main issues: the number of input vectors to simulate and the delay model to use for the logic gates.

The simplest approach to model the gate delay is to assume zero delay for all the gates and wires, meaning that all transitions in the circuit occur at the same time instant. Hence, each gate makes at most one transition per input vector. In reality, logic gates have nonzero transport delay, which may lead to different arrival times of transitions at the inputs of a logic gate due to different signal propagation paths. As a consequence, the output of the gate may switch multiple times in response to a single input vector. An illustrative example is shown in Figure 3.2.

Consider that initially signal x is set to 1 and signal y is set to 0, implying that both signals w and z are set to 1. If y makes a transition to 1, then z will first respond to this transition by switching to 0. However, at about the same time, w switches to 0, thus causing z to switch back to 1.


Figure 3.2   Example of a logic circuit with glitching and spatial correlation.

This spurious activity can make for a significant fraction of the overall switching activity, which in the case of circuits with a high degree of reconvergent signals, such as multipliers, may be more than 50% [12]. The modeling of gate delays in logic-level power estimation is, thus, of crucial significance. For an accurate switching activity estimate, the simulation must use a general delay model where gate delays are retrieved from a precharacterized library of gates. Process variation introduces another level of complexity, motivating a statistical analysis for delay, and consequently of the spurious activity [13].

The second issue is determining the number of input vectors to simulate. If the objective is to obtain a power estimate of a logic circuit under a user-specified, potentially long, sequence of input vectors, then the switching activity can be easily obtained through logic simulation. When only input statistics are given, a sequence of input vectors needs to be generated. One option is to generate a sequence of input vectors that approximates the given input statistics and simulate until the average power converges, that is, until this value stays within a margin ε during the last n input vectors, where ε and n are user-defined parameters.

An alternative is to compute beforehand the number of input vectors required for a given allowed percentage error ε and confidence level θ. Under a basic assumption that the power consumed by a circuit over a period of time T has a normal distribution, the approach described in [14] uses the central limit theorem to determine the number of input vectors that must be simulated:

$N \geq \left( \frac{z_{\theta/2} \, s}{\bar{p} \, \varepsilon} \right)^2$

where

  • N is the number of input vectors
  • $\bar{p}$ and $s$ are the measured average and standard deviation of the power
  • zθ/2 is obtained from the normal distribution

In practice, for typical combinational circuits and reasonable error and confidence levels, the number of input vectors needed to obtain the overall average switching activity is typically very small (thousands) even for complex logic circuits. However, in many situations, accurate average switching activity for each node in the circuit is required. A high level of accuracy for low-switching nodes may require a prohibitively large number of input vectors. The designer may need to relax the accuracy for these nodes, based on the argument that these are the nodes that have less impact on the dynamic power consumption of the circuit.
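A small sketch of this stopping criterion is shown below; it computes the required N from measured sample statistics. The numbers used are invented for illustration.

```python
# Number of vectors for Monte Carlo power estimation, per the bound
#   N >= (z_{theta/2} * s / (p_bar * eps))^2,
# assuming normally distributed power. Sample statistics below are invented.
from math import ceil
from statistics import NormalDist

def required_vectors(p_bar: float, s: float, eps: float, theta: float) -> int:
    """Vectors needed for error within eps of the mean with confidence 1 - theta."""
    z = NormalDist().inv_cdf(1.0 - theta / 2.0)  # z_{theta/2}
    return ceil((z * s / (p_bar * eps)) ** 2)

# E.g., measured mean 1.0 mW, std dev 0.4 mW, 5% error, 99% confidence:
print(required_vectors(1.0e-3, 0.4e-3, 0.05, 0.01))  # -> 425 vectors
```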

Still, today’s highly parallel architectures facilitate fast simulation of a large number of input vectors, thus improving the accuracy of this type of Monte Carlo–based estimation methods [15].

3.1.4.2  Probabilistic Techniques

The idea behind probabilistic techniques is to propagate directly the input statistics to obtain the switching probability of each node in the circuit. This approach is potentially very efficient, as only a single pass through the circuit is needed. However, it requires a new simulation engine with a set of rules for propagating the signal statistics. For example, the probability that the output of an AND gate evaluates to 1 is associated with the intersection of the conditions that set each of its inputs to 1. If the inputs are independent, then this is just the multiplication of the probabilities that each input evaluates to 1. Similar rules can be derived for any logic gate and for different statistics, namely, transition probabilities. Although all of these rules are simple, there is a new set of complex issues to be solved. One of them is the delay model, as mentioned earlier. Under a general delay model, each gate may switch at different time instants in response to a single input change. Thus, we need to compute switching probabilities for each of these time instants. Assuming the transport delays to be Δ1 and Δ2 for the gates in the circuit of Figure 3.2 means that signal z will have some probability of making a transition at instant Δ2 and some other probability of making a transition at instant Δ1 + Δ2. Naturally, the total switching activity of signal z will be the sum of these two probabilities.

Another issue is spatial correlation. When two logic signals are analyzed together, they can only be assumed to be independent if they do not share any input signal in their support. If there are one or more common inputs, we say that these signals are spatially correlated. To illustrate this point, consider again the logic circuit of Figure 3.2 and assume that both input signals, x and y, are independent and have a px = py = 0.5 probability of being at 1. Then pw, the probability that w is 1, is simply pw = 1 − pxpy = 0.75. However, it is not true that pz = 1 − pwpy = 0.625, because signals w and y are not independent: the joint probability is pwy = (1 − pxpy)·py = py − pxpy (note that py·py = py), giving pz = 1 − py + pxpy = 0.75. This indicates that not accounting for spatial correlation can lead to significant errors in the calculations.
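These propagation rules are straightforward to mechanize. The sketch below implements the independence-based rules for a few gate types; applied naively to the circuit of Figure 3.2, it reproduces the erroneous pz = 0.625 discussed above instead of the correct 0.75.

```python
# Signal-probability propagation rules for basic gates, assuming
# (as the text warns, often incorrectly) that gate inputs are independent.

def p_and(*p):   # P(output = 1) for an AND gate
    out = 1.0
    for x in p:
        out *= x
    return out

def p_not(px):
    return 1.0 - px

def p_nand(*p):
    return 1.0 - p_and(*p)

def p_or(*p):    # 1 - P(all inputs are 0)
    out = 1.0
    for x in p:
        out *= (1.0 - x)
    return 1.0 - out

# Circuit of Figure 3.2 under the (wrong) independence assumption:
px = py = 0.5
pw = p_nand(px, py)   # 0.75, correct
pz = p_nand(pw, py)   # 0.625 -- wrong: w and y share input y
print(pw, pz)         # the exact value derived in the text is pz = 0.75
```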


Figure 3.3   Example signal to illustrate the concept of temporal correlation.

Input signals may also be spatially correlated. Yet, in many practical cases, this correlation is ignored, either because it is simply not known or because of the difficulty of modeling it. A method that is able to account for all types of correlation among signals is described in [16], but its high complexity prevents its application to very large designs.

A third important issue is temporal correlation. In probabilistic methods, the average switching activity is computed from the probability of a signal making a 0-to-1 or 1-to-0 transition. Temporal correlation measures the probability that a signal is 0 or 1 in the next instant given that its present value is 0 or 1. This means that computing the static probability of a signal being 1 is not sufficient; we need to calculate the transition probabilities directly so that temporal correlation is taken into account. Consider signals x and y in Figure 3.3, where the vertical lines indicate clock periods. The number of periods where these two signals are 0 or 1 is the same; hence, the probability of each signal being at 1 is $p_x^1 = p_y^1 = 0.5$ (and the probability of being at 0 is $p_x^0 = p_y^0 = 0.5$). If we only consider this parameter, thus ignoring temporal correlation, the transition probability for both signals is the same and can be computed as $\alpha = p^{01} + p^{10} = p^0 p^1 + p^1 p^0 = 0.5$. However, we can see that, during the depicted time interval, signal x remains low for three clock cycles, remains high for another three cycles, and has a single clock cycle with a rising transition and another with a falling transition. Averaging over the number of clock periods, we have $p_x^{00} = 3/8 = 0.375$, $p_x^{01} = 1/8 = 0.125$, $p_x^{10} = 1/8 = 0.125$, and $p_x^{11} = 3/8 = 0.375$. Therefore, the actual average switching activity of x is $\alpha_x = p_x^{01} + p_x^{10} = 0.25$. Signal y, in contrast, never remains low or high, making a transition on every clock cycle. Hence, $p_y^{00} = p_y^{11} = 0$ and $p_y^{01} = p_y^{10} = 4/8 = 0.5$, and the actual average switching activity of y is $\alpha_y = p_y^{01} + p_y^{10} = 1.0$. This example illustrates the importance of modeling temporal correlation and indicates that probabilistic techniques need to work with transition probabilities for accurate switching activity estimates.
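When a representative waveform is available, these transition probabilities can be measured directly. The sketch below counts consecutive-cycle value pairs; the two 8-cycle sequences are assumptions constructed to match the description of Figure 3.3, which is not reproduced here.

```python
# Empirical transition probabilities from a sampled logic waveform, counting
# consecutive-cycle pairs cyclically over the observed interval.
# The 8-cycle sequences are assumptions matching the text's numbers.

def transition_probs(bits):
    n = len(bits)
    counts = {"00": 0, "01": 0, "10": 0, "11": 0}
    for k in range(n):
        pair = f"{bits[k]}{bits[(k + 1) % n]}"  # wrap around the interval
        counts[pair] += 1
    return {pair: c / n for pair, c in counts.items()}

x = [0, 0, 0, 0, 1, 1, 1, 1]   # low 3 cycles, high 3 cycles, one rise, one fall
y = [0, 1, 0, 1, 0, 1, 0, 1]   # toggles every cycle

for name, sig in (("x", x), ("y", y)):
    p = transition_probs(sig)
    alpha = p["01"] + p["10"]
    print(name, p, "alpha =", alpha)   # alpha_x = 0.25, alpha_y = 1.0
```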

It has been shown that exact modeling of these issues makes the computation of the average switching activity an NP-hard problem, meaning that exact methods are only applicable to small circuits and thus are of little practical interest. Many different approximation schemes have been proposed [17].

3.1.4.3  Sequential Circuits

Computing the switching activity for sequential circuits is significantly more difficult, because the state space must be visited in a representative manner to ensure the accuracy of the state signal probabilities. For simulation-based methods, this requirement may imply too large an input sequence and, in practice, convergence is hard to guarantee.

Probabilistic methods can be effectively applied to sequential circuits, as the statistics for the state signals can be derived from the circuit. The exact solution would require the computation of the transition probabilities between all pairs of states in the sequential circuit. In many cases, enumerating the states of the circuit is not possible, since these are exponential in the number of sequential elements in the circuit. A common approximation is to compute the transition probabilities for each state signal [18]. To partially recover the spatial correlation between state signals, a typical approach is to duplicate the subset of logic that generates the next state signals and append it to the present state signals, as is illustrated in Figure 3.4. Then the method for combinational circuits is applied to this modified network, ignoring the switching activity in the duplicated next state logic block.


Figure 3.4   Creating temporal and spatial correlation among state signals.

3.1.5  Analysis at the Register-Transfer Level

At the register-transfer level (RTL), the circuit is described in terms of interconnected modules of varied complexity from simple logic gates to full-blown multipliers. Power estimation at this level determines the signal statistics at the input of each of these modules and then feeds these values to the module’s power model to evaluate its power dissipation. These models are normally available with the library of modules. One way to obtain these power models is to characterize the module using logic- or circuit-level estimators, a process known as macromodeling. We refer to Chapter 13 of Electronic Design Automation for IC System Design, Verification, and Testing, where this topic is discussed in more detail.

3.2  Circuit-Level Power Optimization

From the equations that model power consumption, one sees that a reduction of the supply voltage has the largest impact on power reduction, given its quadratic effect. This has been the largest source of power reductions and is widely applied across the semiconductor industry. However, unless accompanied by the appropriate process scaling, reducing Vdd comes at the cost of increased propagation delays, necessitating the use of techniques to recover the lost performance.

Lowering the frequency of operation, fclk, also reduces power consumption. This may be an attractive option in situations with low-performance requirements. Yet, the power efficiency of the circuit is not improved, as the amount of energy per operation remains constant.

A more interesting option is to address the switched capacitance term, αCout, by redesigning the circuit so that the overall switching activity is reduced, the overall circuit capacitance is reduced, or both in combination, for example, by shifting switching activity away from high-capacitance nodes even at the price of higher switching in nodes with lower capacitance.

Static power due to leakage current, however, presents a different set of challenges. As Equation 3.5 shows, reducing Vdd reduces leakage current as well. However, reducing the number of transitions to reduce switched capacitance has little benefit, since leakage power is consumed whether or not there is a transition at the output of a gate. The most effective way to reduce leakage is to shut off the power to a circuit altogether, a technique called power gating. Other techniques are motivated by the relationship of leakage current to the threshold voltage VT: increasing the threshold voltage reduces the leakage current. Equation 3.6 motivates further techniques that exploit the relationship of leakage current to circuit topology.

In the following, we briefly discuss the key circuit-level techniques that have been developed to address each of these points. There is a vast amount of published work that covers these and other techniques in great detail, and the interested reader is recommended to start with books [19] and overview papers [20] that cover these topics in greater depth.

3.2.1  Transistor Sizing

The propagation delay (usually just referred to as delay) of a gate depends on the gate output resistance and the total capacitance (interconnect and load) [2]. Transistor sizing (or gate sizing) helps reduce delay by increasing gate strength, at the cost of increased area and power consumption. Conversely, by reducing gate strength, the switched capacitance, and therefore the power, can be reduced at the cost of increased delay. This trade-off can be performed manually for custom designs or through the use of automated tools.

Until recently, large parts of high-performance CPUs were typically custom designed. Even now, the most performance-critical parts of high-performance designs have a mix of synthesized and custom-designed parts. Such designs may involve manual tweaking of transistors to upsize drivers along critical paths. If too many transistors are upsized unnecessarily, a design can end up operating on the steep part of its power–delay curve. In addition, the choice of logic family (e.g., static vs. dynamic logic) can also greatly influence the circuit’s power consumption. The traditional emphasis on performance often leads to overdesign that is wasteful of power. An emphasis on lower power, however, motivates the identification of such sources of power waste. An example of such waste is circuit paths that are designed to be faster than they need to be. For synthesized blocks, the synthesis tool can automatically reduce power by downsizing devices in such paths. For manually designed blocks, on the other hand, downsizing may not always be done. Automated downsizing tools can thus have a big impact. The benefit of such tools is power savings as well as productivity gains over manual design methodologies.

The use of multiple-threshold voltages (“multi-VT”) to reduce leakage power in conjunction with traditional transistor sizing is now a widely used design technique. The main idea is to use low-VT transistors in critical paths rather than large high-VT transistors. Since low VT increases subthreshold leakage, it is very important to use low-VT transistors selectively and to optimize their usage to achieve a good balance between capacitive current and leakage current, minimizing the total current. This consideration is now part of postsynthesis and postlayout automated tools and flows that support both low-VT and high-VT substitution. For example, after postlayout timing analysis, a layout tool can operate in incremental mode to do two things: insert low-VT cells into critical paths to improve speed and insert higher-VT cells into noncritical paths to bring leakage back down again.

Custom designers may have the flexibility to manually choose the transistor parameters to generate custom cells. Most synthesized designs, however, only have the choice of picking from different gates or cells in a cell library. These libraries typically have a selection of cells ranging from high performance (high power) to low power (low performance). In this case, the transistor-sizing problem reduces to the problem of optimal cell selection either during the initial synthesis flow or of tweaking the initial selection in a postsynthesis flow. This has been an area of active academic research [21] as well as a key optimization option in commercial tools [19].

3.2.2  Voltage Scaling, Voltage Islands, and Variable VDD

As mentioned earlier, the reduction of Vdd is the most effective way of reducing power. The industry has thus steadily moved to lower Vdd. Indeed, reducing the supply voltage is the best option for low-power operation, even after taking into account the modifications to the system architecture that are required to maintain the computational throughput. Another issue with voltage scaling is that, to maintain performance, the threshold voltage also needs to be scaled down, since circuit speed is roughly proportional to (Vdd − VT). Typically, Vdd should be larger than 4VT if speed is not to suffer excessively. As the threshold voltage decreases, subthreshold leakage current increases exponentially: with every 0.1 V reduction in VT, subthreshold current increases by about 10 times. In nanometer technologies, with further VT reduction, subthreshold current has become a significant portion of the overall chip current. At 0.18 μm feature size and below, leakage power starts eating into the benefits of lower Vdd. In addition, the design of dynamic circuits, caches, sense amps, PLAs, etc., becomes difficult at higher subthreshold leakage currents. Lower Vdd also exacerbates noise and reliability concerns. To combat the subthreshold current increase, various techniques have been developed, as discussed in Section 3.2.5.

Voltage islands and variable Vdd are variations of voltage scaling that can be used at the circuit level. Voltage scaling is mainly technology dependent and typically applied to the whole chip. Voltage islands are more suitable for system-on-chip design, which integrates different functional modules with various performance requirements onto a single chip. We refer to the chapter on RTL power analysis and optimization techniques for more details on voltage islands. The variable-voltage and voltage-island techniques are complementary and can be implemented on the same block to be used simultaneously. In the variable-voltage technique, the supply voltage is varied based on throughput requirements: for higher-throughput applications, the supply voltage and the operating frequency are increased, and vice versa for lower-throughput applications. Sometimes, this technique is also used to control power consumption and surface temperature: on-chip sensors measure temperature or current requirements, and the supply voltage is lowered to reduce power consumption. Leakage power mitigation can be achieved at the device level by applying multiple-threshold-voltage devices, multiple-channel-length devices, and stacking and parking-state techniques. The following sections give details on these techniques.

3.2.3  Multiple-Threshold Voltages

Multiple-threshold voltages (most often a high-VT and a low-VT option) have been available on many, if not most, CMOS processes for a number of years. For any given circuit block, the designer may choose to use one or the other VT or a mixture of the two, for example, using high-VT transistors as the default and then selectively inserting low-VT transistors. Since the standby power is so sensitive to the number of low-VT transistors, their usage, on the order of 5%–10% of the total number of transistors, is generally limited to fixing critical timing paths, or else leakage power could increase dramatically. For instance, if the low-VT value is 110 mV less than the high-VT value, 20% usage of the former will increase the chip standby power by nearly 500%. Low-VT insertion does not impact the active power component or design size, and it is often the easiest option in the postlayout stage, leading to the least layout perturbation. Obvious candidates for defaulting to high-VT transistors and only selectively using low-VT transistors are SRAMs, whose power is dominated by leakage; a higher VT generally also improves SRAM stability (as does a longer channel). The main drawbacks of low-VT transistors are that delay variations due to doping are uncorrelated between the high- and low-threshold transistors, thus requiring larger timing margins, and that extra mask steps are needed, which incur additional process cost.
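The rough 500% figure can be reproduced from the exponential subthreshold dependence in Equation 3.5. The sketch below assumes a subthreshold swing of about 78 mV/decade, which is an assumption introduced here for illustration, not a value given in the text.

```python
# Rough check of the multi-VT standby-power claim: a transistor whose VT is
# delta_vt lower leaks about 10**(delta_vt / S) times more, where S is the
# subthreshold swing. S = 78 mV/decade is an assumed value, not from the text.

def standby_power_ratio(low_vt_fraction: float, delta_vt_mv: float,
                        swing_mv_per_decade: float = 78.0) -> float:
    leak_ratio = 10.0 ** (delta_vt_mv / swing_mv_per_decade)
    return (1.0 - low_vt_fraction) + low_vt_fraction * leak_ratio

r = standby_power_ratio(0.20, 110.0)
print(f"{r:.1f}x baseline standby power")  # -> ~5.9x, i.e., ~500% increase
```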

3.2.4  Long-Channel Transistors

The use of transistors that have longer than nominal channel length is another method of reducing leakage power [22]. For example, by drawing a transistor 10 nm longer (long-L) than a minimum-sized one, the DIBL is attenuated and the leakage can be reduced by 7× to 10× on a 90 nm process. With this one change, nearly 20% of the total SRAM leakage component can be eliminated while maintaining performance. The loss in drive current due to increased channel resistance, on the order of 10%–20%, can be compensated by an increase in width or, since the impact is on a single gate stage, simply ignored for most designs [22]. The use of long-L is especially useful for SRAMs, since their overall performance is relatively insensitive to transistor delay. It can also be applied to other circuits, if used judiciously. Compared with multiple-threshold voltages, long-channel insertion has similar or lower process cost: it manifests as a size increase rather than a mask cost. It entails lower process complexity, and the different channel lengths track each other over process variation. It can be applied opportunistically to an existing design to limit leakage. A potential penalty is the increase in gate capacitance. Overall active power does not increase significantly if the activity factor of the affected gates is low, so this should also be considered when choosing target gates.

The target gate selection is driven by two main criteria. First, transistors must lie on paths with sufficient timing margin. Second, the highest leakage transistors should be chosen first from the selected paths. The first criterion ensures that the performance goals are met. The second criterion helps in maximizing leakage power reduction. In order to use all of the available positive timing slack and avoid errors, long-L insertion is most advisable at the late design stages.

Long-L insertion can be performed by using standard cells designed with long-L transistors or by selecting individual transistors from the transistor-level design; only the latter is applicable to full custom design. There are advantages and disadvantages to both methods. In the cell-level method, low-performance cells are designed with long-L transistors; for leakage reduction, high-performance cells on noncritical paths are replaced with these lower-performance long-L cells. If the footprint and port locations are identical, this method simplifies physical convergence. Unfortunately, it requires a much larger cell library, as well as a fine-tuned synthesis methodology to ensure that long-L cells, rather than lower-performance nominal-channel-length cells, are selected. The transistor-level flow has its own benefits: a unified flow can be used for custom blocks and automatically placed-and-routed blocks, and only a single nominal cell library is needed, albeit with layout space reserved for long-L devices as mentioned earlier.

3.2.5  Topological Techniques: Stacking and Parking States

Another class of techniques exploits the dependence of leakage power on the topology of logic gates. Two examples of such techniques are stacking and parking states. These techniques are based on the fact that a stack of “OFF” transistors leaks less than when only a single device in a stack is OFF. This is primarily due to the self-reverse biasing of the gate-to-source voltage VGS in the OFF transistors in the stack. Figure 3.5 illustrates the voltage allocation of four transistors in series [10]. As one can see, VGS is more negative when a transistor is closer to the top of the stack. The transistor with the most negative VGS is the limiter for the leakage of the stack. In addition, the threshold voltages for the top three transistors are increased because of the reverse-biased body-to-source voltage (body effect).

Both the self-reverse biasing and the body effects reduce leakage exponentially as shown in Equation 3.5. Finally, the overall leakage is also modulated by the DIBL effect for submicron MOSFETs. As VDS increases, the channel energy barrier between the source and the drain is lowered. Therefore, leakage current increases exponentially with VDS.

The combination of these three effects results in a progressively reduced VDS distribution from the top to the bottom of the stack, since all of the transistors in series must carry the same leakage current. As a result of this significantly reduced VDS, the effective leakage of stacked transistors is much lower than that of a single transistor.

Table 3.1 quantifies the basic characteristics of the subthreshold leakage current for a fully static four-input NAND gate. The minimum leakage condition occurs for the “0000” input vector (i.e., all inputs a, b, c, and d are at logic zero). In this case, all the PMOS devices are “ON” and the leakage path exists between the output node and the ground through a stack of four NMOS devices. The maximum leakage current occurs for the “1111” input case, when all the NMOS devices are ON and the leakage path, consisting of four parallel PMOS devices, exists between the supply and the output node. The stacking factor variation between the minimum and maximum leakage conditions reflects the magnitude of leakage dependence on the input vector. In the four-input NAND case, we can conclude that the leakage variation between the minimum and maximum cases is a factor of about 40 (see Table 3.1). The values were measured using an accurate SPICE-like circuit simulator on a 0.18 μm technology library. The average leakage current was computed based on the assumption that all the 16 input vectors were equally probable.


Figure 3.5   Voltage distribution of stacked transistors in OFF state.

Table 3.1   Stacking Factors of Four-Input NAND

                           Minimum      Maximum      Average
Stacking factor Xs         1.75         70.02        9.95
Input vector (a b c d)     (1 1 1 1)    (0 0 0 0)    —

Stacking techniques take advantage of the effects described earlier to increase the stack depth [23]. One example is the sleep transistor technique. This technique inserts an extra series-connected device in the stack and turns it OFF during the cycles when the stack will be OFF as a whole. This comes at the cost of the extra logic needed to detect the OFF state, as well as the extra delay, area, and dynamic power cost of the extra device. Therefore, this technique is typically applied at a much higher level of granularity, using a sleep transistor that is shared across a larger block of logic. Most practical applications in fact apply this technique at a very high level of granularity, where the sleep state (i.e., inactive state) of large circuit blocks such as memories and ALUs can be easily determined. At that level, this technique can be viewed as analogous to power gating, since it isolates the circuit block from the power rails when the circuit output is not needed, that is, when it is inactive. Power gating is a very effective and increasingly popular technique for leakage reduction, and it is supported by commercial EDA tools [24], but it is mostly applied at the microarchitectural or architectural level and is therefore not discussed further here.

The main idea behind the parking state technique is to force the gates in the circuit to the low-leakage logic state when not in use [25]. As described earlier, leakage current is highly dependent on the topological relationship between ON and OFF transistors in a stack, and thus, leakage depends on the input values. This technique avoids the overhead of extra stacking devices, but additional logic is needed to generate the desirable state, which has an area and switching power cost. This technique is not advisable for random logic, but with careful implementation for structured datapath and memory arrays, it can save significant leakage power in the OFF state.

One needs to be careful about using these techniques, given the area and switching overheads of the introduced devices. Stacking is beneficial in cases where a small number of transistors can add extra stack length to a wide cone of logic or gate the power supply to it. The delays introduced by the sleep transistors or by power gating also imply that these techniques are beneficial only when the targeted circuit blocks remain in the OFF state for long enough to make up for the overhead of driving the transitions in and out of the OFF states. These limitations can be overcome with careful manual intervention or by appropriate design intent hints to automated tools.

Leakage power reduction will remain an active area of research, since leakage power is essentially what limits the reduction of dynamic power through voltage scaling. As transistor technology scales down to smaller feature sizes, making it possible to integrate greater numbers of devices on the same chip, additional advances in materials and transistor designs can be expected to allow for finer-grained control on the power (dynamic and leakage) and performance trade-offs. This will need to be coupled with advances in power analysis to understand nanometer-scale effects that have so far not been significant enough to warrant detailed power models. In conjunction with these models, new circuit techniques to address these effects will need to be developed. As these circuit techniques gain wider acceptability and applicability, algorithmic research to incorporate these techniques in automated synthesis flows will continue.

3.2.6  Logic Styles

Dynamic circuits are generally regarded as dissipating more power than their static counterparts. While the power consumption of a static CMOS gate with constant inputs is limited to leakage power, dynamic gates may be continually precharging and discharging their output capacitance under certain input conditions.

For instance, if the inputs to the NAND gate in Figure 3.6a are stable, the output is stable. On the other hand, the dynamic NAND gate of Figure 3.6b, under constant inputs A = B = 1, will keep raising and lowering the output node, thus leading to high energy consumption.

For several reasons, dynamic logic families are preferred in many high-speed, high-density designs (such as microprocessors). First, dynamic gates require fewer transistors, which means not only that they take up less space but also that they exhibit a lower capacitive load, hence allowing for increased operation speed and for reduced dynamic power dissipation. Second, the evaluation of the output node can be performed solely through N-type MOSFET transistors, which further contributes to the improvement in performance. Third, there is never a direct path from Vdd to ground, thus effectively eliminating the short-circuit power component. Finally, dynamic circuits intrinsically do not create any spurious activity, which can make for a significant reduction in power consumption. However, the design of dynamic circuits presents several issues that have been addressed through different design families [26].

Pass-transistor logic is another design style whose merits for low power have been pointed out, mainly due to the lower capacitance load of the input signal path. The problem is that this design style may imply a significantly larger circuit.

Sequential circuit elements are of particular interest with respect to their chosen logic style, given their contribution to the power consumption of a logic chip. These storage elements (flip-flops or latches) are the end points of the clock network and constitute the biggest portion of the switched capacitance of a chip because of both the rate at which their inputs switch (every clock edge) and their total number (especially in high-speed circuits with shallow pipeline depths). For this reason, these storage elements have received a lot of attention [27]. For example, dual-edge-triggered flip-flops have been proposed as a lower-power alternative to the traditional single-edge-triggered flip-flops, since they provide an opportunity to reduce the effective clock frequency by half. The trade-offs between ease of design, design portability, scalability, robustness, and noise sensitivity, not to mention the basic trade-offs of area and performance, require these choices to be made only after careful consideration of the particular design application. These trade-offs also vary with technology node, as leakage power consumption must be factored into the choice.

In general, one can expect research and innovation in circuit styles to continue as long as the fundamental circuit design techniques evolve to overcome the limitations or exploit the opportunities provided by technology scaling.


Figure 3.6   NAND gate: (a) static CMOS and (b) dynamic domino.

3.3  Logic Synthesis for Low Power

A significant amount of CAD research has been carried out in the area of low power logic synthesis. By adding power consumption as a parameter for the synthesis tools, it is possible to save power with no, or minimal, delay penalty.

3.3.1  Logic Factorization

A primary means of technology-independent optimization is the factoring of logical expressions. For example, the expression xy + xz + wy + wz can be factored into (x + w)(y + z), reducing the transistor count considerably. Common subexpressions can be found across multiple functions and reused. For area optimization, several candidate divisors (e.g., kernels) of the given expressions are generated and those that maximally reduce the literal count are selected. Even though minimizing transistor count may, in general, reduce power consumption, in some cases the total effective switched capacitance actually increases. When targeting power dissipation, the cost function must take into account switching activity. The algorithms proposed for low power kernel extraction compute the switching activity associated with the selection of each kernel. Kernel selection is based on the reduction of both area and switching activity [28].

3.3.2  Don’t-Care Optimization

Multilevel circuits are optimized taking into account appropriate don’t-care sets. The structure of the logic circuit may imply that some input combinations of a given logic gate never occur; these combinations form the controllability (or satisfiability) don’t-care set of the gate. Similarly, there may be some input combinations for which the output value of the gate is not used in the computation of any of the outputs of the circuit; the set of these combinations is called the observability don’t-care set. Although don’t-care sets were initially used for area minimization, techniques have been proposed that use them to reduce the switching activity at the output of a logic gate [29]. Ignoring temporal correlation, the transition probability of a static CMOS gate x is given by $\alpha_x = 2 p_x^0 p_x^1 = 2 p_x^1 (1 - p_x^1)$. The maximum of this function occurs at $p_x^1 = 0.5$. Therefore, in order to minimize the switching activity, the strategy is to include minterms from the don’t-care set in the onset of the function if $p_x^1 > 0.5$ or in the off-set if $p_x^1 < 0.5$.
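A minimal sketch of this heuristic follows; the signal probability and the don’t-care minterm probabilities are invented for illustration.

```python
# Don't-care assignment heuristic from the text: adding a don't-care minterm
# to the onset raises p1 by that minterm's occurrence probability; pushing p1
# away from 0.5 lowers alpha = 2 * p1 * (1 - p1). Probabilities are invented.

def switching_activity(p1: float) -> float:
    return 2.0 * p1 * (1.0 - p1)

def assign_dont_cares(p1: float, dc_minterm_probs):
    """Greedily put each don't-care minterm in the onset iff p1 > 0.5."""
    for prob in dc_minterm_probs:
        if p1 > 0.5:
            p1 += prob   # onset grows, p1 moves further from 0.5
        # else: leave the minterm in the off-set, p1 unchanged
    return p1

p1 = 0.6
p1_new = assign_dont_cares(p1, [0.05, 0.10])
print(switching_activity(0.6), switching_activity(p1_new))  # 0.48 -> 0.375
```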

3.3.3  Path Balancing

Spurious transitions account for a significant fraction of the switching activity power in typical combinational logic circuits [30]. In order to reduce spurious switching activity, the delay of paths that converge at each gate in the circuit should be roughly equal, a problem known as path balancing. In the previous section, we discussed that transistor sizing can be tailored to minimize power primarily at the cost of delaying signals not on the critical path. This approach has the additional feature of contributing to path balancing. Alternatively, path balancing can be achieved through the restructuring of the logic circuit, as illustrated in Figure 3.7.


Figure 3.7   Path balancing through logic restructuring to reduce spurious transitions.

Path balancing is extremely sensitive to propagation delays, becoming a more difficult problem when process variations are considered. The work in [30] addresses path balancing through a statistical approach for delay and spurious activity estimation.

3.3.4  Technology Mapping

Technology mapping is the process by which a logic circuit is realized in terms of the logic elements available in a particular technology library. Associated with each logic element is information about its area, delay, and internal and external capacitances. The optimization problem is to find the implementation that meets the delay constraint while minimizing a cost function of area and power consumption [31,32]. To minimize power dissipation, nodes with high switching activity are mapped to internal nodes of complex logic elements, as capacitances internal to gates are generally much smaller.
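The following sketch illustrates the power term of such a cost function under simple assumptions: nodes absorbed inside a complex cell are charged a small internal capacitance, while nodes exposed at cell boundaries are charged the full external load. All activities and capacitance values are illustrative.

```python
# Power term when scoring a mapping candidate: activity-weighted
# capacitance, with internal nodes of complex cells charged much less
# than nodes exposed at cell boundaries.
def mapping_power(activity, is_internal, c_ext=10.0, c_int=1.5):
    """Sum activity * capacitance over the candidate's nodes."""
    return sum(act * (c_int if is_internal[n] else c_ext)
               for n, act in activity.items())

activity = {"n1": 0.40, "n2": 0.05, "n3": 0.20}

# Candidate A exposes the high-activity node n1 at a cell boundary;
# candidate B absorbs n1 inside a complex cell (e.g., an AOI gate).
cand_a = mapping_power(activity, {"n1": False, "n2": True,  "n3": False})
cand_b = mapping_power(activity, {"n1": True,  "n2": False, "n3": False})
print(f"candidate A: {cand_a:.2f}, candidate B: {cand_b:.2f}")
# B wins: hiding the most active node behind a small internal
# capacitance outweighs exposing the low-activity node n2.
```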

In many cases, the inputs of a logic gate are commutative in the Boolean sense. However, in a particular gate implementation, equivalent pins may present different input capacitance loads. In these cases, gate input assignment should be performed such that signals with high switching activity map to the inputs that have lower input capacitance.
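A simple greedy assignment pairs the most active signals with the least capacitive pins, as in the hypothetical example below; by the rearrangement inequality, this pairing minimizes the activity-weighted input capacitance over all assignments of logically equivalent pins.

```python
# Activity-driven pin assignment for a gate whose pins are logically
# equivalent but have different input capacitances (values assumed).
signals = {"a": 0.45, "b": 0.10, "c": 0.30}      # name -> switching activity
pin_caps = {"in1": 2.0, "in2": 3.5, "in3": 5.0}  # pin -> input capacitance

# Most active signal gets the least capacitive pin, and so on down.
by_activity = sorted(signals, key=signals.get, reverse=True)
by_cap = sorted(pin_caps, key=pin_caps.get)
assignment = dict(zip(by_activity, by_cap))

cost = sum(signals[s] * pin_caps[assignment[s]] for s in signals)
print(assignment, f"switched-capacitance cost: {cost:.2f}")
# {'a': 'in1', 'c': 'in2', 'b': 'in3'}  cost: 2.45
```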

Additionally, most technology libraries include the same logic elements with different sizes (i.e., driving capability). Thus, in technology mapping for low power, the size of each logic element is chosen so that the delay constraints are met with minimum power consumption. This problem is the discrete counterpart of the transistor-sizing problem described in the previous section.

3.3.5  State Encoding

The synthesis of sequential circuits offers new avenues for power optimization. State encoding is the process by which a unique binary code is assigned to each state in a finite-state machine (FSM). Although this assignment does not influence the functionality of the FSM, it determines the complexity of the combinational logic block in the FSM implementation. State encoding for low power uses heuristics that assign minimum-Hamming-distance codes to states connected by edges with a high probability of being traversed [33]. The probability that a given edge in the state transition graph (STG) is traversed is given by the steady-state probability of the STG being in the start state of the edge, multiplied by the static probability of the input combination associated with that edge. Whenever such an edge is exercised, only a small number of state signals (ideally one) will change, leading to reduced overall switching activity in the combinational logic block.
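The quantity being minimized is the expected number of state bits that flip per clock cycle. The sketch below evaluates this cost for two candidate encodings of a small illustrative FSM; it demonstrates the objective used in [33] rather than the encoding heuristic itself.

```python
# Expected state-bit flips per cycle: each STG edge contributes its
# traversal probability times the Hamming distance between the codes
# of its endpoint states.
def expected_bit_flips(edges, codes):
    """edges: list of (src, dst, probability); codes: state -> bit string."""
    return sum(p * bin(int(codes[s], 2) ^ int(codes[d], 2)).count("1")
               for s, d, p in edges)

edges = [("S0", "S1", 0.5), ("S1", "S0", 0.3), ("S1", "S2", 0.2)]

enc_a = {"S0": "00", "S1": "01", "S2": "11"}  # frequent edges: distance 1
enc_b = {"S0": "00", "S1": "11", "S2": "01"}  # frequent edges: distance 2
print(expected_bit_flips(edges, enc_a))  # 1.0 -> lower switching
print(expected_bit_flips(edges, enc_b))  # 1.8
```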

3.3.6  FSM Decomposition

FSM decomposition has been proposed for the low-power implementation of an FSM. The basic idea is to decompose the STG of the original FSM into two coupled STGs that together have the same functionality as the original FSM. Except for transitions that involve going from a state in one sub-FSM to a state in the other, only one of the sub-FSMs needs to be clocked. The strategy is to search for a small cluster of states such that the summed probability of transitions between states in the cluster is high, while the probability of transitions to and from states outside the cluster is very low. The aim is to have a small sub-FSM that is active most of the time, disabling the larger sub-FSM. Transitions between the two sub-FSMs correspond to the worst case, since both sub-FSMs are then active, hence the need to keep their probability low. Each sub-FSM has an extra output that disables the state registers of the other sub-FSM, as shown in Figure 3.8; this extra output is also used to stop transitions at the inputs of the larger sub-FSM. An approach that performs this decomposition solely using circuit techniques, without any derivation of the STG, was proposed in [34].
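A minimal sketch of the cluster-selection criterion follows: a candidate cluster is scored by its internal transition probability mass versus the probability of crossing into or out of the cluster. The STG edges and probabilities are illustrative.

```python
# Score a candidate state cluster: high internal transition mass and
# low crossing mass make it a good small, mostly-active sub-FSM.
def cluster_score(edges, cluster):
    """edges: list of (src, dst, probability). Returns (internal, crossing)."""
    internal = sum(p for s, d, p in edges if s in cluster and d in cluster)
    crossing = sum(p for s, d, p in edges if (s in cluster) != (d in cluster))
    return internal, crossing

edges = [("A", "B", 0.45), ("B", "A", 0.40),
         ("B", "C", 0.05), ("C", "D", 0.07), ("D", "A", 0.03)]

internal, crossing = cluster_score(edges, {"A", "B"})
print(f"internal: {internal:.2f}, crossing: {crossing:.2f}")
# internal 0.85, crossing 0.08: {A, B} can run as the small sub-FSM
# while the larger sub-FSM stays disabled most of the time.
```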


Figure 3.8   Implementation diagram of a decomposed FSM for low power.


Figure 3.9   Two retimed versions, (a) and (b), of a network to illustrate the impact of this operation on the switched capacitance of a circuit.

Other techniques based on blocking input signal propagation and clock gating, such as precomputation, are covered in some detail in Chapter 13 of Electronic Design Automation for IC System Design, Verification, and Testing.

3.3.7  Retiming

Retiming was first proposed as a technique to improve throughput by moving the registers in a circuit while maintaining input–output functionality. The use of retiming to minimize switching activity is based on the observation that the output of a register has significantly fewer transitions than its input; in particular, no glitching is present. Moving registers across nodes through retiming may change the switching activity at several nodes in the circuit. In the circuit shown in Figure 3.9a, the switched capacitance is given by $N_0 C_B + N_1 C_{FF} + N_2 C_C$, whereas in its retimed version, shown in Figure 3.9b, it is $N_0 C_{FF} + N_4 C_B + N_5 C_C$. One of these two circuits may have significantly less switched capacitance. Heuristics have been proposed to place registers such that nodes driving large capacitances have reduced switching activity, subject to a given throughput constraint [35].
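The comparison can be made concrete with assumed numbers, as in the sketch below; $N_1$ and $N_4$ are low because register outputs do not glitch, and all activity and capacitance values are purely illustrative.

```python
# Numeric check of the switched-capacitance comparison in the text,
# before (Figure 3.9a) and after (Figure 3.9b) retiming.
N0, N1, N2, N4, N5 = 4.0, 1.0, 3.0, 1.0, 1.0   # transitions/cycle (assumed;
                                               # N1, N4, N5 are glitch-free
                                               # register-side activities)
C_B, C_FF, C_C = 2.0, 1.5, 5.0                 # node capacitances (assumed)

before = N0 * C_B + N1 * C_FF + N2 * C_C       # 24.5
after = N0 * C_FF + N4 * C_B + N5 * C_C        # 13.0
print(f"before retiming: {before:.1f}, after: {after:.1f}")
# Here retiming wins: the glitchy high-activity signal (N0) now drives
# only the register, not the large capacitance C_C.
```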

3.4  Summary

This chapter has covered methodologies for the reduction of power dissipation of digital circuits at the lower levels of design abstraction. The reduction of supply voltage has a large impact on power; however, it also reduces performance. Some of the techniques described apply voltage reduction locally, or control it dynamically, to minimize the performance penalty.

For most designs, the principal component of power consumption is related to the switching activity of the circuit during normal operation (dynamic power). The main strategy here is to reduce the overall average switched capacitance, that is, the average amount of capacitance that is charged or discharged during circuit operation. The techniques we presented address this issue by selectively reducing the switching activity of high-capacitance nodes, possibly at the expense of increasing the activity of other less capacitive nodes. Design automation tools using these approaches can save 10%–50% in power consumption with little area and delay overhead.

The static power component has been rising in importance with the reduction of feature size due to increased leakage and subthreshold currents. Key methods, mostly at the circuit level, to minimize this power component have been presented.

Also covered in this chapter are power analysis tools. The power estimates provided can be used not only to indicate the absolute level of power consumption of the circuit but also to direct the optimization process by indicating the most power-efficient design alternatives.

References

M. Taylor, A landscape of the new dark silicon design regime, IEEE Micro, 33(5):8–19, 2013.
N. Weste and K. Eshraghian, Principles of CMOS VLSI Design: A Systems Perspective, Addison-Wesley Publishing Company, Reading, MA, 1985.
E. Shauly, CMOS leakage and power reduction in transistors and circuits: Process and layout considerations, Journal of Low Power Electronics and Applications, 2(1):1–29, 2012.
T. Quarles, The SPICE3 Implementation Guide, ERL M89/44, University of California, Berkeley, CA, 1989.
L. Han, X. Zhao, and Z. Feng, TinySPICE: A parallel SPICE simulator on GPU for massively repeated small circuit simulations, Design Automation Conference (DAC), Austin, TX, 2013, pp. 89:1–89:8.
PrimeTime, Synopsys, http://www.synopsys.com/Tools/Implementation/SignOff/Pages/PrimeTime.aspx (Accessed November 19, 2015).
M. Johnson, D. Somasekhar, and K. Roy, Models and algorithms for bounds on leakage in CMOS circuits, IEEE Transactions on Computer-Aided Design of Integrated Circuits, 18(6):714–725, June 1999.
B. Sheu et al., BSIM: Berkeley short-channel IGFET model for MOS transistors, IEEE Journal of Solid-State Circuits, 22:558–566, August 1987.
J. Butts and G. Sohi, A static power model for architects, Proceedings of MICRO-33, Monterey, CA, December 2000, pp. 191–201.
W. Jiang, V. Tiwari, E. la Iglesia, and A. Sinha, Topological analysis for leakage prediction of digital circuits, Proceedings of the International Conference on VLSI Design, Bangalore, India, January 2002, pp. 39–44.
R. Rao, J. Burns, A. Devgan, and R. Brown, Efficient techniques for gate leakage estimation, Proceedings of the International Symposium on Low Power Electronics and Design, Seoul, South Korea, August 2003, pp. 100–103.
L. Zhong and N. Jha, Interconnect-aware low-power high-level synthesis, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 24(3):336–351, March 2005.
D. Quang, C. Deming, and M. Wong, Dynamic power estimation for deep submicron circuits with process variation, Asia and South Pacific Design Automation Conference (ASP-DAC), Taipei, Taiwan, 2010, pp. 587–592.
R. Burch, F. Najm, P. Yang, and T. Trick, A Monte Carlo approach to power estimation, IEEE Transactions on VLSI Systems, 1(1):63–71, March 1993.
D. Chatterjee, A. DeOrio, and V. Bertacco, GCS: High-performance gate-level simulation with GPGPUs, Design, Automation & Test in Europe Conference & Exhibition, DATE ‘09, Nice, France, 2009, pp. 1332–1337.
A. Freitas and A. L. Oliveira, Implicit resolution of the Chapman-Kolmogorov equations for sequential circuits: An application in power estimation, Design, Automation and Test in Europe, Munich, Germany, March 2003, pp. 764–769.
M. Pedram, Power minimization in IC design: Principles and applications, ACM Transactions on Design Automation of Electronic Systems, 1(1):3–56, 1996.
C.-Y. Tsui, J. Monteiro, M. Pedram, S. Devadas, A. Despain, and B. Lin, Power estimation methods for sequential logic circuits, IEEE Transactions on VLSI Systems, 3(3):404–416, September 1995.
C. Piguet, Low-Power CMOS Circuits: Technology, Logic Design and CAD Tools, CRC Press, Boca Raton, FL, 2005.
M. Alioto, Ultra-low power VLSI circuit design demystified and explained: A tutorial, IEEE Transactions on Circuits and Systems, 59(1):3–29, January 2012.
P. Kang, Y. Lu, and H. Zhou, An efficient algorithm for library-based cell-type selection in high-performance low-power designs, Proceedings of the International Conference on Computer Aided Design, San Jose, CA, November 2012, pp. 226–232.
L. Clark, R. Patel, and T. Beatty, Managing standby and active mode leakage power in deep sub-micron design, International Symposium on Low Power Electronics and Design, August 2005, pp. 274–279.
A. Agarwal, S. Mukhopadhyay, A. Raychowdhury, K. Roy, and C. Kim, Leakage power analysis and reduction for nanoscale circuits, IEEE Micro, 26:68–80, March–April 2006.
M. Keating, D. Flynn, R. Aitken, A. Gibbons, and K. Shi, Low Power Methodology Manual: For System-on-Chip Design, Springer Science Publishing, New York, 2007.
D. Lee and D. Blaauw, Static leakage reduction through simultaneous threshold voltage and state assignment, Proceedings of the Design Automation Conference, Anaheim, CA, June 2003, pp. 191–194.
J. Yuan and C. Svensson, New single-clock CMOS latches and flipflops with improved speed and power savings, IEEE Journal of Solid-State Circuits, 32(1):62–69, January 1997.
M. Alioto, E. Consoli, and G. Palumbo, Analysis and comparison in the energy-delay-area domain of nanometer CMOS flip-flops: Part I and II, IEEE Transactions on VLSI Systems, 19(5):725–750, May 2011.
C.-Y. Tsui, M. Pedram, and A. Despain, Power-efficient technology decomposition and mapping under an extended power consumption model, IEEE Transactions on CAD, 13(9):1110–1122, September 1994.
A. Shen, S. Devadas, A. Ghosh, and K. Keutzer, On average power dissipation and random pattern testability of combinational logic circuits, Proceedings of the International Conference on Computer-Aided Design, Santa Clara, CA, November 1992, pp. 402–407.
S. Hosun, Z. Naeun, and J. Kim, Stochastic glitch estimation and path balancing for statistical optimization, IEEE International SOC Conference, Austin, TX, September 2006, pp. 85–88.
V. Tiwari, P. Ashar, and S. Malik, Technology mapping for low power in logic synthesis, Integration, The VLSI Journal, 20:243–268, July 1996.
C.-Y. Tsui, M. Pedram, and A. Despain, Technology decomposition and mapping targeting low power dissipation, Proceedings of the Design Automation Conference, Dallas, TX, June 1993, pp. 68–73.
L. Benini and G. Micheli, State assignment for low power dissipation, IEEE Journal of Solid-State Circuits, 30(3):258–268, March 1995.
J. Monteiro and A. Oliveira, Implicit FSM decomposition applied to low power design, IEEE Transactions on Very Large Scale Integration Systems, 10(5):560–565, October 2002.
J. Monteiro, S. Devadas, and A. Ghosh, Retiming sequential circuits for low power, Proceedings of the International Conference on Computer-Aided Design, Santa Clara, CA, November 1993, pp. 398–402.