Universidad Politécnica de Madrid
Escuela Técnica Superior de Ingenieros
Industriales
Design and Optimization of Power
Delivery and Distribution Systems Using
Evolutionary Computation Techniques
Doctoral Thesis
Leonardo Laguna Ruiz
Master in Industrial Electronics, Universidad Politécnica de Madrid
2012
Departamento de Automática, Ingeniería
Electrónica e Informática Industrial
Escuela Técnica Superior de Ingenieros
Industriales
Design and Optimization of Power
Delivery and Distribution Systems Using
Evolutionary Computation Techniques
Author
Leonardo Laguna Ruiz
Master in Industrial Electronics, Universidad Politécnica de Madrid
Director
Roberto Prieto López
Doctor of Industrial Engineering from the Universidad Politécnica de Madrid
2012
Nowadays, computing platforms consist of a very large number of components that need to be supplied with different voltage levels and power requirements. Even a very small platform, like a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems are required to provide, in a very short time, the right power architecture that optimizes performance and meets the electrical specifications as well as the cost and size targets.
The appropriate selection of the architecture and converters directly defines the performance of a given solution. Therefore, the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency and cost targets.
The difficulty of selecting the right solution arises from the wide range of power conversion products offered by different manufacturers. These products range from discrete components (used to build converters) to complete power conversion modules built with different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built. The designer has to select a limited number of converters in order to simplify the analysis.
In this thesis, in order to overcome the aforementioned difficulties, a new design methodology for power supply systems is proposed. This methodology integrates evolutionary computation techniques, which makes it possible to analyze a large number of possibilities. This exhaustive analysis helps the designer to quickly define a set of feasible solutions and to select the best performance trade-off for each application.
The proposed approach consists of two key steps: the automatic generation of architectures and the optimized selection of components. This thesis details the implementation of these two steps. The usefulness of the methodology is corroborated by contrasting results on real problems and on experiments designed to test the limits of the algorithms.
Nowadays, computing platforms consist of a large number of components that require different voltage and power levels. Even a small platform, such as a mobile phone, may contain more than twenty different loads and voltage regulators. The designers of the power distribution systems for these platforms need to be able to provide, in a very short time, the appropriate power architecture that optimizes performance and meets the electrical specifications as well as the cost and size requirements.
The appropriate selection of the architecture and converters directly defines the performance of a given solution. Therefore, the designer needs to be able to evaluate a large number of options in order to know with better certainty whether the selected solution meets the size, efficiency and cost targets.
The difficulty of selecting the right solution increases due to the fact that there is a large number of power conversion products from different manufacturers. These products range from discrete components (used to build converters) to complete power conversion modules that use different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of architectures and converters) that can be built. The designer is forced to select a limited number of components in order to simplify the analysis.
In this thesis, in order to overcome the aforementioned difficulties, a new methodology for the design of power systems is proposed. This methodology integrates evolutionary computation techniques, which makes it possible to analyze a large number of possibilities. This exhaustive analysis helps the designer to quickly define the set of best solutions and to select among them those offering the best trade-off in performance according to the application.
The proposed methodology consists of two stages: the automatic generation of architectures and the optimized selection of components. This thesis details the implementation of these two steps. The usefulness of the methodology is corroborated by contrasting the results obtained on real problems and on experiments designed to test the algorithms.
The future cannot be predicted, but futures can be invented.
Dennis Gabor
Computers are becoming more powerful every day.
However, reality has turned out differently from what science fiction told us in past decades. In the past, it was very clear that the key was to produce a powerful computer able to perform all the calculations that we require; a single super-computer with unimaginable computational power, capable of reasoning beyond humans. The reality is that there are technological challenges that we need to face before dreaming of that. One of those challenges is supplying energy to these computers.
Science fiction books have told us what computers and energy sources will be like in the future. However, they do not give us a clue about what energy conversion technology will look like in those days. Maybe in that future, researchers will have already found a way of transforming energy without consuming it. Meanwhile, it is time for us to fix the mistakes that we have made in the past.
Some years ago, energy efficiency was not a priority. Engineers were busy trying to make things work, and they assumed that the energy would be there when the time came. In most block diagrams (describing the behavior of a system), the power system was not even part of the diagram. Some people realized that there could be environmental benefits in making power systems more efficient. But at that time, the environment was not a priority. Therefore, they had to prove that there could also be economic benefits in energy saving.
New problems have emerged. The single-unit super-computer has split into millions of smaller interconnected entities. We have PCs (Personal Computers) capable of doing very complex tasks in the comfort of our homes. Each of these computers consumes a small amount of energy, but because there are millions out there, they represent a significant impact.
These PCs have also split into a wide variety of smaller groups: netbooks, cellphones, smartphones, MP3 players, etc. These devices are not static in our homes; we can carry them with us every day. Every time the computing units change, the paradigms also change, and we have new problems to face.
We want computers to be cheaper, smaller, faster, lighter and cuter; simply better. To achieve that, we need to learn how to make better use of the things that we have. It could be good to slow down the development of new fancy products and focus all our effort on doing things better: on designing optimized systems with improving our quality of life as the first priority. Our next concern should be improving the lifetime of computing devices; this will bring both ecological and psychological benefits.
Paradoxically, commercial applications require that products are developed in a very short time. In many cases it is necessary to sacrifice an excellent feature that could improve the device lifetime or energy efficiency, because it costs more money and sales would be affected. Companies have to fight hard in order to be competitive and survive.
Despite the technological wars, the massive production of consumer products, like MP3 players, smartphones and video games, has brought good things to our lives. For example, the video game industry has driven the manufacturers of visualization technology to develop low-cost solutions. These low-cost visualization technologies have been used in many medical applications that are saving human lives.
There is no evil in technology, only ignorance. And even though many science fiction authors have imagined how society will be shaped by technology in a far future, we do not know how each small step that we achieve will affect us, nor how many stages we have to pass through before becoming a society full of prosperity.
Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it.
And to make matters worse: complexity sells better.
Edsger W. Dijkstra
Nowadays, computer-aided design is a necessity in most areas of engineering. Every day we tend to make more complex designs, and we require simpler tools that help us achieve our tasks. In some areas of electronics, like the design of digital systems, it is practically impossible to design a system of ordinary complexity without using a computer.
In order to handle this complexity, researchers have created new perspectives that help us see problems in a friendlier way. In the case of electronics, the SPICE simulator was created around the seventies. This simulator has played an extremely important role in the design of electronic systems. The idea behind SPICE is to simulate the final implementation of a circuit. For that purpose, it provides us with “virtual” components that we can interconnect and let interact. The result is a very accurate simulation of the circuit behavior.
SPICE is a tool for simulating the final implementation and is not meant to replace the calculations needed when designing a circuit. SPICE can be considered a verification tool rather than a design tool. In the design of power converters SPICE is widely used because it can reproduce the behavior of the switching semiconductors. However, these simulations may take a long time, which makes it very difficult to design a power converter using only SPICE.
In the area of design tools for electronics, we can find a large variety of tools that solve specific problems. One good example of how to solve a complex problem by following a methodology is the design of analog filters. Tools for designing analog filters include FILTER-DESIGNER from TI [Texas Instruments, 2011], FilterLab from Microchip [Microchip, 2011] and FilterCAD from Linear Technology [Linear Technology, 2011], to mention a few. Using these tools, a simple way to design an analog filter requires (at least) three steps:
Following these steps will maximize the chances of succeeding.
These three steps can be considered a top-down methodology. At the beginning of the design we only have the specifications. Using these values as input, the filter design tool will suggest an appropriate filter structure and the components that we should use. If we are satisfied with the solution, we can make the problem more concrete and start implementing it in an electrical simulator. The electrical simulator uses more detailed models, which helps us determine whether we may have other problems like saturation and distortion. Once we are satisfied, we can refine the model further and implement the real circuit.
Using top-down methodologies has many advantages when designing large systems. In the area of computer science, software complexity has been a problem since the seventies. Software was becoming so complex that it was hard to develop and maintain. Experts realized that they needed better tools (programming languages, editors, etc.) and better methodologies (based on paradigms like structured and object-oriented programming) [Schach, 2007].
In the area of power electronics, the complexity of designs is also increasing. Power supply systems are evolving from a few connected converters to dozens of them. This increase in complexity is an effect of the evolution of the systems demanding energy. Power systems are becoming more distributed, consisting of many converters supplying energy to a large number of loads. This makes it more difficult for the designer to achieve the right power system design.
The manufacturing and construction industries originated what is called the “waterfall model”. This model specifies a series of steps that we need to follow in order to deploy a product. The typical steps are the following: requirements, design, implementation, verification and maintenance (see Figure 2.1). The name waterfall comes from the fact that, after finishing one step, we cannot go back; waterfalls flow in only one direction. In many industries, if we need to return to a previous step, for example from implementation to design, the cost may be prohibitive.
In other industries, like software, changes are less expensive. For that reason, modified approaches have emerged. At each step we acquire knowledge, and we can return to the previous step and feed that knowledge back in order to improve our design. This approach is shown in Figure 2.2.
In the electronic product industry, performing changes once the board is implemented is very costly. For that reason we need to be sure that the design we have selected satisfies all the requirements and is optimal. For that purpose we use electric circuit simulators like SPICE.
It is very important to use the appropriate tools. When we are designing a power converter the main concerns are:
In order to guarantee these two points, the designer should perform a detailed analysis of the converter. The designer needs to know in detail the behavior of every current and voltage in the components. The most effective way of obtaining that information is through simulations. However, contrary to common belief, simulations are not free. Building a simulation model and running it consumes time and money. We can introduce the concept of the Efficiency of Simulation as follows: the more usable information we obtain relative to the time we spend, the more efficient a simulation is.
$\text{Efficiency of Simulation} = \dfrac{\text{Usable Information}}{\text{Time Spent}}$   (2.1)
As an example, 3D finite element (FEA) simulations of magnetic components may take many hours to complete. This makes the analysis of many options inefficient, since we have to spend a long time simulating each possible solution. In order to make efficient use of FEA simulations, we need to design our component using other methods and use the FEA simulator only as a validation tool.
The execution time of a model defines the feasibility of using a simulator as a design tool or as a validation tool. Power supply systems are comprised of more than one converter and load, which makes it unfeasible to use SPICE-like tools to design a complete power supply system.
In the following section the main problem treated in this thesis is presented.
In engineering, designing optimal systems considering more than one objective is a complex task to achieve without the use of computers. Consider as an example the optimization of a power converter in which we want to select the appropriate inductor, switching frequency and semiconductors in order to maximize its efficiency and reduce its size. The first thing that we need to do is to define equations for efficiency and size that depend on the parameters to optimize. Once we have the equations, we have to evaluate them for different options of semiconductors, switching frequencies and inductor values. When we have the results, we need to determine which of those solutions present minimum losses and minimum size.
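As a rough illustration of this kind of brute-force evaluation, the Python sketch below enumerates a small, hypothetical catalog of inductors, MOSFETs and switching frequencies, evaluates assumed loss and size expressions for each combination, and keeps only the non-dominated (Pareto) trade-offs. All component values and formulas here are illustrative assumptions, not the models developed in this thesis.

```python
# Hypothetical sketch: brute-force evaluation of buck-converter design options
# and extraction of the non-dominated (Pareto) set for losses vs. size.
from itertools import product

inductors = [(4.7e-6, 90), (10e-6, 150), (22e-6, 260)]   # (inductance [H], volume [mm^3])
mosfets   = [(8e-3, 4e-9), (15e-3, 2e-9)]                # (Rds_on [ohm], switching-loss constant)
frequencies = [200e3, 500e3, 1e6]                        # switching frequency [Hz]

VIN, VOUT, IOUT = 12.0, 3.3, 5.0                         # assumed operating point

def evaluate(L, vol, rds, ksw, fsw):
    """Very rough loss/size estimate for one design option (illustrative only)."""
    duty = VOUT / VIN
    ripple = VOUT * (1 - duty) / (L * fsw)               # inductor current ripple [A]
    conduction = IOUT**2 * rds                           # conduction losses [W]
    switching = ksw * VIN * IOUT * fsw                   # switching losses [W]
    losses = conduction + switching + 0.05 * ripple      # small penalty for ripple
    size = vol + 50 / (fsw / 1e5)                        # capacitors shrink with frequency
    return losses, size

options = [evaluate(L, vol, rds, ksw, fsw)
           for (L, vol), (rds, ksw), fsw in product(inductors, mosfets, frequencies)]

# Keep only non-dominated solutions: no other option is better in both objectives.
pareto = [p for p in options
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in options)]

for losses, size in sorted(pareto):
    print(f"losses = {losses:.2f} W, size = {size:.0f} mm^3")
```

Even with this toy catalog the enumeration already yields a small front of trade-offs rather than a single “best” design, which is exactly the situation the designer faces at a much larger scale.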
Deriving the equations for efficiency and size is a complex task that can be the topic of a complete PhD thesis. In addition, in order to have a degree of certainty that our solution will really be optimal, we need to test a wide range of components.
This same complexity translates to the design of power systems. If we want to design a system comprised of more than one converter, the difficulty of the problem may grow to the point at which it is completely impractical to use the same models, methods and simulators used in the design of converters. Consider the design of a power system for a server computer. This type of platform requires supplying energy to one or more processors with multiple cores, memory banks, hard drives and many other peripherals. We need to define new methods to handle this complexity.
One way of simplifying is to consider the power converters as building blocks. This way we can use the converters without knowing the precise behavior of each component inside them. This simplification allows the designer to focus on optimizing the most important characteristics of a power system:
Using building blocks like commercial and pre-designed converters makes the optimization process simpler. However, creating an optimal design is still challenging. The main difficulties that the designer will find are the following:
Some manufacturers, like National Semiconductor, have created tools like WEBENCH Power Designer [National Semiconductor, 2011] to solve part of these problems. This tool is able to design power systems using their own products; it can find solutions for regular-sized architectures and uses a weighted approach to classify solutions.
The main disadvantage is that, for commercial reasons, only National Semiconductor products can be used. This can be very constraining, since manufacturers may not want to rely on a single type of product in their designs. In addition, we could find better solutions by combining the best products from different converter manufacturers.
It is possible to define a new methodology to help the power system designer make the decisions that have the greatest impact on the cost, size and energy losses of the power system. The designer will be able to determine:
These results can be used by the designer in different ways. We have mainly the following cases:
In order to be useful, this methodology needs to fulfill the following characteristics:
In order to create a tool better than the existing ones, we need to provide:
Designing a power system requires a different number of steps depending on its type. Figure 2.3 shows a visual representation of a top-down methodology used for the design of a power system. Following this approach, we first have to define the specifications (level 1). Based on those specifications, we have to create a design (level 2), and if we have the appropriate simulation technologies, we can perform a system-level validation (level 3). At this step, if the design is not satisfactory, we can always return to level 2 and create a new one.
The tasks that the designer has to perform in level 2 are not well defined because they depend on the designer’s preferences and experience and on the type of system to design. However, this step is fundamental. The designer has to be sure that the solution he is selecting is the most appropriate before continuing with the following step.
The proposed solution splits the design level into two intuitive tasks (see Figure 2.4): definition of the architecture and selection of converters. For each task we have to provide an adequate design and validation tool that can assist the designer through the whole process.
One of the biggest advantages is that, by using these design and validation tools, the designer will be able to create a reliable design in a more efficient way. We can observe an analogy of this process in Figure 2.5. Without using the adequate tools during the design task, we are taking a slow and long path (see Figure 2.5.a). The proposed approach provides two shorter and faster routes that the designer can take repeatedly until he finds the adequate solution (see Figure 2.5.b).
In order to create the methods and tools necessary to fulfill the proposed solution, we need to solve the following problems:
The architecture search and component selection algorithms should have the following characteristics:
The work presented in this document has been constrained to the design of power supply systems comprised of DC/DC converters. Nevertheless, the methods can be applied to AC systems since the methodology does not rely on the modeling approach used. The only requirements for the models are that we should be able to calculate the energy losses, size and cost of the power architecture.
The accuracy of the results obtained depends directly on the accuracy of the models used. If the designer uses incorrect information to create his models, he will obtain incorrect results. Capturing models of converters is a critical step, hence we have to provide adequate tools to simplify it and reduce the chances of making mistakes.
In the case of size and cost calculations, it is not important to obtain an accurate absolute value. What matters is to obtain a relative value that allows us to determine whether one option is more expensive or bigger than another.
In the following section we will present a set of ideas that will define the way this work is performed.
This section presents the ideas considered to build the scientific paradigm used in this thesis. The main purpose of describing them is to help the reader to understand the motivation, the methods and the interpretation of the results presented in this document. This section consists of three parts. In each part one idea is presented by using a series of examples trying to clarify its importance.
Learning has a cost, but ignorance costs even more. This idea has been explored in economics for a long time. Researchers have expressed this relationship in a very simple way (Figure 2.6): the more knowledge we acquire in the early stages of the design, the more flexibility we have and the less money is spent [Fabrycky and Blanchard, 1991].
Buildings cannot be designed on the run. Consider the following example. When designing a bridge, it is necessary to perform many simulations in order to know how it will behave under different conditions. That was the case of the famous Tacoma Narrows bridge [Prelinger Archives, 2011]. The original Tacoma Narrows bridge (opened in July 1940) collapsed because of the vibrations produced by the wind, which made the bridge resonate at its natural frequency and caused its destruction. At that time, that phenomenon was unknown. Since then, this type of analysis has been performed on big structures, and no one builds them before being completely sure about how they will behave.
In electronics there are many known effects that we have to test before being sure that our design will not fail. By using the appropriate tools we can simplify this task. With the proposed methodology we try to help the designer acquire more knowledge about his system. This will provide him more flexibility, as [Fabrycky and Blanchard, 1991] states, and a reduction in the design time and cost.
There are people capable of performing very complex calculations very quickly using only their brain power. An alternative for those of us who do not have that ability is to use a computer equipped with the appropriate tools. Up to this year (2011), computers cannot be considered intelligent. However, computers can behave in a way that may appear intelligent. Playing chess is an activity associated with intelligent people, but since 1997 computers have proven able to beat the best human players. The most famous case is the match between the World Chess Champion, Garry Kasparov, and the computer Deep Blue [Hsu, 2002]. Since then, computers have shown on repeated occasions that they are better than humans at playing chess.
A computer with the adequate programs can perform many tasks faster than humans. For that reason it is important to distinguish which activities require human intelligence and which can be performed autonomously by the computer.
In this thesis we want to create the necessary methods, tools and programs that can turn a computer into a fast calculation machine dedicated to designing power systems. We are not trying to compete against human designers, but to help them make better designs.
As Voltaire said: “Doubt is not a pleasant condition, but certainty is absurd”. Very often (as engineers, researchers or scientists) we tend to forget these wise words. Lack of information can make us think that we have a good solution, since we do not have reference points; our level of certainty can be high simply because we ignore other possibilities. On the other hand, an excess of information can overwhelm us, making it hard to define a good solution. As designers we have to avoid falling into either of these conditions. We can achieve that by using appropriate methodologies and measuring mechanisms; in short, by using a good strategy.
Take as an example the problem of the “Fog Creek Programmers” (posted on April 16, 2010 [Tech Interview, 2010]). In this problem, an assassin lines up 100 programmers and puts on each of them a hat that can be red or blue (50 red and 50 blue). They cannot see their own hats, but they can see the hats of the programmers in front of them. The assassin starts with the programmer at the back and asks him “What color is your hat?”; if the programmer gives an incorrect answer, the assassin kills him and continues with the next in line. The problem consists in determining how we can save as many programmers as possible. Since we do not know how the assassin is going to distribute the hats, we cannot be certain that we will save them all. The only thing that we can do is to define a good strategy. The simplest strategy is to tell every programmer to always answer the same color; that way we can save 50% of them. There are other strategies that can save more programmers, but we are not going to cover them in this document.
In this work we try to increase the level of certainty by using a good strategy. This strategy is the central idea of the proposed methodology.
This thesis is organized in three parts:
Background: The main objective of the chapters within this part is to present a summary of the methods and techniques necessary to understand the foundations of this thesis. In these chapters we present the state of the art and works related to metaheuristic optimization algorithms and behavioral modeling.
New Techniques for the Automatic Design of Power Supply Systems: in this part we present the proposed methods for the automatic search of power architectures and converter selection. This part is split into two chapters, one focused on the architecture generation algorithms and the other focused on converter selection. Each chapter contains a validation section where the experimental results are displayed. This part contains the central work of this thesis.
Application Example of the Presented Techniques: this part contains a more practical validation of the proposed methods. We have included examples based on real power supply systems. These examples were created with the purpose of covering the main design cases.
All models are false but some models are useful.
George E. P. Box
This chapter presents a review of the modeling methods that can be used to perform fast simulations of power systems. The chapter starts by describing the key factors that affect the execution time of complex models and introduces a few techniques that we can use to reduce this time.
The second section is focused on the modeling techniques that allow us to characterize the energy loss of a power converter. Once we are able to calculate the energy loss of a converter, we can calculate the efficiency of a complete power architecture. The third section presents the modeling methodology used to calculate the cost and size of the architectures.
This section presents a summary of the key points that need to be considered in order to perform fast and effective simulations. The time that a simulation of a dynamic system takes to complete (the execution time) depends mainly on two factors (Figure 3.1):
These two factors are described in more detail in the following subsections.
Simple models can be solved by simple methods. Complex models, on the contrary, may require sophisticated (and slower) methods to achieve a solution. Model complexity can be divided into two categories: the number of elements that the model contains, and the complexity of its equations. The number of elements (in most simulators) directly defines the number of equations that the simulator has to solve. Solving a system with a large number of equations implies performing a large number of calculations, which results in a slower simulation. One option to make simulations faster is to reduce the number of equations. To that end, some simulators try to reduce the number of equations by symbolically simplifying the described model before creating its final representation. Some examples are simulators for the Modelica language [Modelica Association, 2011]. In this type of simulator, the model is preprocessed in order to obtain a representation that is easier to solve. Consider the example in Figure 3.2. The circuit in Figure 3.2.a contains seven resistors, while the circuit in Figure 3.2.b contains only one. The model with seven resistors will take more time to simulate. If we are only interested in the current that the voltage source provides, we can simplify the circuit with seven resistors to the circuit with one. That way we obtain a circuit that simulates faster while still providing the information that we need.
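As a minimal sketch of this kind of pre-simulation reduction, the code below collapses an assumed seven-resistor network (not necessarily the exact topology of Figure 3.2) into a single equivalent resistance; if only the source current is of interest, the reduced circuit gives the same answer with fewer equations.

```python
# Illustrative sketch: replacing a resistor network by its equivalent resistance
# before simulation, in the spirit of the symbolic simplification mentioned above.

def series(*rs):
    """Equivalent resistance of resistors in series."""
    return sum(rs)

def parallel(*rs):
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in rs)

# Assumed topology: two series branches in parallel, in series with three more resistors.
r_eq = series(
    parallel(series(100.0, 220.0), series(330.0, 150.0)),
    47.0, 68.0, 10.0,
)

v_source = 12.0
print(f"Equivalent resistance: {r_eq:.1f} ohm")
print(f"Source current: {v_source / r_eq * 1000:.2f} mA")
```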
When the model contains only linear equations, solving it requires simple methods, like Gaussian elimination. However, if the model is nonlinear, it requires methods that are computationally more expensive, like Newton-Raphson. In order to simplify the simulation of models with complex equations, we can use piecewise linear models. The model of a diode is given by the following equation:
$I = I_S \left( e^{V_D / (n V_T)} - 1 \right)$   (3.1)
In this equation we can see a nonlinear relation between the voltage ($V_D$) and the current ($I$) of the diode. However, in many cases, like the simulation of power rectifiers, the ideal model of a diode is good enough to provide useful results. The equations of the ideal diode are the following:
$I = 0 \quad \text{if } V_D < 0 \text{ (diode blocking)}$   (3.2)
$V_D = 0 \quad \text{if } I > 0 \text{ (diode conducting)}$   (3.3)
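The following sketch contrasts the two models for an assumed circuit (a voltage source in series with a 100 Ω resistor and the diode): the exponential model of equation (3.1) requires an iterative Newton-Raphson solution, while the ideal model of equations (3.2)-(3.3) is solved directly. The parameter values and the simple step limiting are illustrative assumptions.

```python
import math

I_S, N, V_T = 1e-12, 1.0, 0.02585      # saturation current [A], ideality factor, thermal voltage [V]

def diode_current(vd):
    """Exponential diode model, equation (3.1)."""
    return I_S * (math.exp(vd / (N * V_T)) - 1.0)

def solve_exponential(v_s, r, iterations=100):
    """Newton-Raphson on f(vd) = (v_s - vd)/r - I(vd), with a simple step limit."""
    vd = 0.6                                        # initial guess near the knee voltage
    for _ in range(iterations):
        f = (v_s - vd) / r - diode_current(vd)
        df = -1.0 / r - (diode_current(vd) + I_S) / (N * V_T)
        step = max(min(f / df, 0.1), -0.1)          # limit the update to keep the exponential tame
        vd -= step
    return (v_s - vd) / r

def solve_ideal(v_s, r):
    """Ideal diode, equations (3.2)-(3.3): no drop when conducting, no current when blocked."""
    return max(v_s / r, 0.0)

for v_s in (5.0, 0.3, -5.0):
    i_exp = solve_exponential(v_s, 100.0)
    i_ideal = solve_ideal(v_s, 100.0)
    print(f"V_S = {v_s:+.1f} V   exponential: {i_exp * 1000:8.4f} mA   ideal: {i_ideal * 1000:8.4f} mA")
```

The ideal model needs no iteration at all, which is why piecewise linear simplifications are so effective when, as in power rectifiers, the detail around the diode knee is not needed.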
The second factor that defines the execution time is the number of calculation points. The number of points depends on two parameters: the simulated time and the dynamics of the model. It is important to notice that we are using two similar names that define different concepts. The simulated time is the virtual time that is being simulated using the model, whereas the execution time is the time the computer takes to calculate the results. For example, the typical simulated time of a power converter is around milliseconds, but the actual execution time can be several minutes.
The dynamics of the model define how fast the model variables change. If the variables never change, the system is static and it is necessary to calculate only one point. A model with very fast dynamics (high frequencies) will require more calculation points than a model with slow dynamics.
If a model has slow dynamics, it is possible to make the simulation very fast. On the other hand, if the model has fast dynamics (high frequencies) and a long simulated time, the simulation will take a long time.
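A back-of-envelope estimate, with assumed numbers, shows how strongly the model dynamics drive the number of calculation points for the same simulated time:

```python
# Illustrative estimate of calculation points for 50 ms of simulated time.
simulated_time = 50e-3          # virtual time to simulate [s]
switching_freq = 300e3          # a switched model must resolve every period [Hz]
points_per_period = 100         # assumed resolution per switching period

fast_dynamics_points = simulated_time * switching_freq * points_per_period
slow_dynamics_points = simulated_time / 10e-6    # e.g. only filter dynamics, ~10 us steps
static_points = 1                                # steady-state model: one operating point

print(f"fast-dynamics model : {fast_dynamics_points:,.0f} points")
print(f"slow-dynamics model : {slow_dynamics_points:,.0f} points")
print(f"static model        : {static_points} point")
```

With these assumptions the switched model needs millions of points while a static model needs a single one, which is the gap the simplification techniques below try to exploit.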
As a rule of thumb, it can be stated that a model is faster when fewer details are simulated (Figure 3.3). If we want to improve the simulation speed, it is necessary to drop some details of the model by making some variables negligible. This leads us to lose some accuracy, but that is not bad in all cases. In order to obtain a fast model, the model should simulate only the information required by the test.
In power electronics, the two typical techniques to make simulations faster are the use of simplified (ideal) semiconductor models and the use of averaged models.
These two techniques can drastically improve the speed of a simulation. However, when simplified semiconductor models are used, the information lost is what happens when the semiconductors switch. Averaging techniques completely drop the information produced when the converter changes state, yet these models are particularly useful when we want to design control loops.
Another technique to obtain fast simulation models is behavioral modeling. The main idea of behavioral models is to define a simpler model that mimics the input-output behavior of the real system during a certain number of tests. This simpler model may reduce both the model complexity and the number of calculation points.
The application of this technique to the simulation of power systems is presented in the next section.
The application presented in this document requires modeling and simulation approaches that enhance the following characteristics:
Usually, capturing the model of a converter is not an easy task. This is mainly because the typical modeling approaches require a detailed knowledge of the converter, e.g. the topology and the values of the components. These modeling approaches are not suitable for this application because capturing a large number of converters can be time consuming and error prone. In addition, averaged models are still too complex and slow for this application.
Behavioral models are the best choice for this application. The main reasons are the following:
These models also have limitations. The first one is that the models are developed to behave in a specific way under given conditions. For example, if we do not include the protections of the converter, the model will not turn off when an over-current occurs. Therefore, the user should be careful in order to avoid any misinterpretation of the simulation results.
Typical power supply architectures used in mobile devices may include the following components:
Each of these elements has a different level of interaction. Depending on the level of detail modeled, the physical effects involved may acquire more or less importance. For example, when the losses are calculated, the effect of the protections and EMI filters is minimal.
Table 3.1 shows a summary of the different components of a power architecture and the modeling level at which their effects cannot be neglected. Depending on the information that we want to know about the system, a different modeling level may be more suitable.
Modeling Level | Converter | Protections | Filters | Sources | Loads |
Static | ⋅ | ⋅ | ⋅ | ||
Dynamic | ⋅ | ⋅ | ⋅ | ⋅ | |
Event-Driven | ⋅ | ⋅ | ⋅ | ⋅ | ⋅ |
High Frequency | ⋅ | ⋅ | ⋅ | ⋅ | ⋅ |
In the current application, it is necessary to calculate the cost, size and losses of the power supply system. Therefore, it is possible to calculate these three parameters without entering into the full detail of the converter behavior. As mentioned in the previous section, by reducing the level of detail it is possible to obtain simple and fast models that recreate the physical effects of importance. In this case, we take into account the following considerations:
These two considerations reduce the requirements of the models. We can obtain everything that we need to calculate by using static-level models. Another consequence is that we can model the protections of the converter at a functional level. This means that, if the converter operates in steady state, no protection should be active. On the other hand, if a protection is active even in a steady-state simulation, the converter is not appropriate and should not be used.
Considering that we want to simulate the power architecture at a static level (steady state), it is possible to simplify the problem by grouping the effects of the filters and protections. This leads us to the conclusion that we only need to create models for the converters, the loads and the sources.
The following subsections present the details of how to model these components.
The behavioral modeling of DC/DC converters has been presented by [Oliver, 2007]. This model is based on the Wiener-Hammerstein structure (Figure 3.4). The structure consists of three blocks that can simulate complex dynamic and nonlinear behaviors. The input and output linear networks are used to simulate the dynamic behavior; in the case of a power converter, these blocks simulate the inrush current and the output voltage transient response. The static block simulates the nonlinear power transfer of the converter. More details about the complete features of these models can be found in [Prieto et al., 2007, Oliver et al., 2008b, Oliver et al., 2008a].
In this application only the static model is used, since the transient behavior and EMI are negligible at this level of abstraction. Therefore, the behavioral model of a DC/DC converter is simplified to the following equations:
$v_{in} \, i_{in} = v_{out} \, i_{out} + losses(v_{in}, v_{out}, i_{out}, T)$   (3.4)
$v_{out} = v_{nom} - vdrop(v_{in}, v_{out}, i_{out}, T)$   (3.5)
Using equation-based modeling, we can create complex models by connecting small models. Each small model has its own equations, and new equations are generated each time we connect one of its pins. This approach allows us to represent the models as typical electric components. Figure 3.5 shows the electric model equivalent to equations (3.4) and (3.5).
It can be seen in equations (3.4) and (3.5) that the functions losses and vdrop may depend on variables such as the input voltage, output voltage, output current and temperature. Obtaining these two functions can be quite challenging because they represent a five-dimensional surface. These two functions can be simplified depending on the information available or the type of converter. In order to simplify them, the following assumptions are considered:
Taking into account the previous points, we can approximate the losses and vdrop functions of a typical DC/DC converter with a very small model, without a significant loss of accuracy:
$losses(i_{out}) = k_0 + k_1 \, i_{out} + k_2 \, i_{out}^2$   (3.6)
$vdrop(i_{out}) = r_o \, i_{out}$   (3.7)
In equation (3.6) the losses are approximated by a polynomial function. Figure 3.6 shows a comparison between the losses of a converter modeled with a polynomial and the actual measurements. The term $r_o$ in equation (3.7) represents the output voltage drop due to conduction. In some cases this value may be negligible.
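A minimal sketch of how equations (3.4)-(3.7) can be evaluated for one operating point is shown below; the class name, coefficient values and nominal voltages are illustrative assumptions, not data from a real converter.

```python
# Minimal sketch of the static behavioral converter model of equations (3.4)-(3.7).

class StaticConverterModel:
    def __init__(self, v_nom, k0, k1, k2, ro):
        self.v_nom = v_nom        # nominal output voltage [V]
        self.k = (k0, k1, k2)     # loss polynomial coefficients, equation (3.6)
        self.ro = ro              # output resistance for the voltage drop, equation (3.7) [ohm]

    def losses(self, i_out):
        k0, k1, k2 = self.k
        return k0 + k1 * i_out + k2 * i_out ** 2          # equation (3.6)

    def v_out(self, i_out):
        return self.v_nom - self.ro * i_out               # equation (3.7) applied to (3.5)

    def operating_point(self, v_in, i_out):
        v_out = self.v_out(i_out)
        p_out = v_out * i_out
        p_in = p_out + self.losses(i_out)                  # power balance, equation (3.4)
        return {"i_in": p_in / v_in, "efficiency": p_out / p_in, "v_out": v_out}

# Example: a hypothetical 12 V -> 3.3 V module delivering 4 A.
buck = StaticConverterModel(v_nom=3.3, k0=0.15, k1=0.05, k2=0.02, ro=0.01)
print(buck.operating_point(v_in=12.0, i_out=4.0))
```

Because each operating point is a handful of arithmetic operations, evaluating thousands of converter and architecture combinations remains cheap, which is what the optimization algorithms of the following chapters rely on.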
The behavior of a linear regulator can be approximated using equations (3.8) and (3.9).
$losses = (v_{in} - v_{out}) \, i_{out}$   (3.8)
$vdrop = 0$   (3.9)
First, it is assumed that the linear regulator has perfect regulation, which is the reason why the voltage drop in equation (3.9) is zero. In addition, the losses are practically defined by the voltage difference and the output current (equation (3.8)).
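As a quick numerical illustration of these equations: a linear regulator stepping 5 V down to 3.3 V while supplying 1 A dissipates (5 − 3.3) × 1 = 1.7 W, so its efficiency cannot exceed 3.3/5 ≈ 66%.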
In most commercial modules it is possible to use one specific converter with more than one input and output voltage configuration. In these cases, we recommend creating more than one model to represent each behavior of the converter. Usually the manufacturers provide efficiency curves for different input/output voltages. From these efficiency curves it is possible to obtain models with a very good level of accuracy for each of these configurations.
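As an illustration of this step, the sketch below derives the loss polynomial of equation (3.6) from a handful of efficiency points; the numbers are invented, and in practice they would be read from the manufacturer's efficiency curve for one input/output configuration.

```python
# Hedged sketch: fitting the loss polynomial of equation (3.6) to an efficiency curve.
import numpy as np

v_out = 3.3
i_out = np.array([0.5, 1.0, 2.0, 3.0, 4.0])              # load current points [A]
efficiency = np.array([0.86, 0.90, 0.915, 0.91, 0.90])   # efficiency read from the curve

p_out = v_out * i_out
p_loss = p_out / efficiency - p_out                      # losses implied by each point

k2, k1, k0 = np.polyfit(i_out, p_loss, 2)                # fit losses = k0 + k1*i + k2*i^2
print(f"losses(i_out) ~= {k0:.3f} + {k1:.3f}*i_out + {k2:.3f}*i_out^2")
```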
If we require models with a higher level of complexity, we can obtain the losses and vdrop functions by using other analytic methods. [Elbanhawy, 2006] and [Das and Kazimierczuk, 2005] present two approaches based on calculating the losses by individual contributions: switching, conduction and circulating energy. These approaches are especially useful for ad-hoc designs, before a prototype is constructed and measured.
Until this point, we have presented only the electrical model of the losses. However, it is possible to add other features to this model, for example, equations to determine whether the converter is operating outside its maximum specifications, or other characteristics like the output impedance. It is possible to detect whether a converter is providing more output power than its defined maximum by adding the following equation to the model.
$security = above(i_{out} \, v_{out}, \; P_{max})$   (3.10)
The function above used in equation (3.10) is defined as follows:
$above(x, y) = \begin{cases} 1 & \text{if } x > y \\ 0 & \text{otherwise} \end{cases}$   (3.11)
Therefore, the security value of a converter is 1 if the output power ($i_{out} \, v_{out}$) is greater than the maximum output power of the converter.
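A minimal sketch of this check, with an assumed maximum power rating, is the following:

```python
# Sketch of the over-power check of equations (3.10)-(3.11).
def above(x, y):
    """Equation (3.11): 1 when x exceeds y, 0 otherwise."""
    return 1 if x > y else 0

def security(i_out, v_out, p_max):
    """Equation (3.10): flags a converter delivering more than its rated power."""
    return above(i_out * v_out, p_max)

print(security(i_out=4.0, v_out=3.3, p_max=15.0))   # 0 -> within ratings
print(security(i_out=6.0, v_out=3.3, p_max=15.0))   # 1 -> converter should not be used here
```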
The area and cost of a converter are constant values. Thus, it is very simple to model this behavior with trivial equations (equations (3.12) and (3.13)).
$area = \text{constant}$   (3.12)
$cost = \text{constant}$   (3.13)
To summarize, Tables 3.2, 3.3 and 3.4 present the equations necessary to model typical converters and linear regulators.