Streamlining the System Design Process


  1. Introduction
  2. A review of the typical power solution design process
  3. Streamlining the system design process
  4. Implementing pragmatic leverage/reuse tradeoffs in power solutions
  5. Summary and conclusions

1. Introduction

There are plenty of motivations for wanting to optimize the design process, especially when it comes to the power supply subsystem, which is often seen more as an inconvenient necessity than as a direct contributor to high-value system feature sets. Power supply solutions also tend to be some of the most expensive components in the system bill-of-materials (BOM). These reasons, along with the confidence that accompanies direct reuse of a qualified design block or commercially-available power module, are the main motivators for driving a heavy leverage/reuse strategy from one project to the next.

2. A review of the typical power solution design process

In order to gain a true appreciation for the motivations driving leverage/reuse of power supply solutions, it is worth taking a brief moment to explore the typical design process and identify the opportunities/gaps that stimulate the push for such strategies. Whether one is a direct power stakeholder or on the receiving end of power supply design services, if the following generalized process resonates with personal experience, then that experience is hardly a unique one.

Figure 1 provides the typical, high-level steps a team may take to get from concept to a system power budget and a set of physical/environmental constraints. Ok, so this is not quite official, and it is a little facetious, but there is a lot of ground truth represented here too. The “magic” part represents the unrealistic demands that result from a highly overinflated system power budget, which may mathematically necessitate efficiencies/densities/transient responses that are either highly impractical for the class of product at hand or simply not in alignment with what is available even from the state-of-the-art (SOTA).

Rx The “Official” Power Supply Design Process

  • Step 1: All system stakeholders (typically minus the Power stakeholder) get together and architect a system.
  • Step 2: Determine system power budget by summing maxima of all major loads in the system.
  • Step 3: Confirm feasibility with the Mechanical/Thermal stakeholder.
  • Step 4: Provide power budget, volumetric constraints, and project timeline to Power Stakeholder.
  • Step 5: Magic?!? (i.e. — forget physics and reality)

 

Fig. 1: The “Official” Power Supply Design Process, courtesy of PowerRox

A key takeaway is that even though the power stakeholder will be beholden to the outputs, they are rarely an integral part of the process that produced the inputs. Given a specialty area of focus that requires a multidisciplinary background (often only gained through many years of field experience), it is puzzling how rarely power stakeholder perspectives are sought out early in the process for a subsystem that tends to be a primary gating agent for optimizing system size, weight, power, and cost (a.k.a. – the infamous SWaP-C factors). Since no electronics run without power, add performance and reliability to that list too. As icing on the cake, a project timeline structured around a perfect, error-free development process (minus 10% for an even faster time-to-market, or TTM, than the previous product) will also accompany all these idealistic demands.

Now comes the negotiating process. Engineers are trained to be problem solvers, so when faced with a list of challenging problems, the kneejerk response is to start digging into solutions (i.e. – is there an existing part that can meet this power density and footprint? Should airflow go front-to-back or back-to-front to meet the system thermal envelope? And so on…). Yet even this starting point is the first opportunity to take pause and dig deeply into the system budget and how it came to be. For instance, how often are all loads (especially the bigger ones) drawing their maximum currents simultaneously? Many subsystems are designed to operate in antiphase with another subsystem (e.g. – the classic examples of compute vs. memory power demands, or sleep/wake/transmit operating cycles), so it is rare that the sum of maxima (typically derived from datasheets that may already start from an unrealistic maximum with safety margin added) makes sense as an aggregated power budget. Consider each touch point of that power budget from inception until it is finalized: each stakeholder will add their own margin to cover their own guidance, which really adds up when aggregated. Those extra layers of fat cost a whole lot of money and resources, because the design ends up targeting operating scenarios that are unrealistic even in the most extreme corner-case usage modeling.
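
To make that arithmetic concrete, the short Python sketch below contrasts a budget built by summing datasheet maxima against the true concurrent peak of loads that operate in antiphase. The load names, profiles, and numbers are purely illustrative assumptions for this sketch, not figures from any real product.

```python
# Illustrative comparison of a "sum of datasheet maxima" power budget
# versus the true concurrent peak of loads that operate in antiphase.
# All load profiles and numbers are hypothetical.
import numpy as np

t = np.linspace(0, 1, 1000)  # one notional activity cycle

# Compute and memory are in antiphase; the radio transmits in short bursts.
compute = 40 + 60 * (np.sin(2 * np.pi * t) > 0)    # 40 W idle, 100 W burst
memory = 15 + 25 * (np.sin(2 * np.pi * t) <= 0)    # 15 W idle, 40 W burst
radio = 5 + 45 * ((t % 0.25) < 0.02)               # 5 W idle, 50 W TX burst
housekeeping = np.full_like(t, 10.0)               # constant 10 W

loads = {"compute": compute, "memory": memory,
         "radio": radio, "housekeeping": housekeeping}

sum_of_maxima = sum(profile.max() for profile in loads.values())
concurrent_peak = sum(loads.values()).max()  # element-wise sum, then peak

print(f"Sum of datasheet maxima: {sum_of_maxima:.0f} W")
print(f"True concurrent peak:    {concurrent_peak:.0f} W")
print(f"Budget overstatement:    {100 * (sum_of_maxima / concurrent_peak - 1):.0f} %")
```

Even in this toy case the naive sum overstates the real requirement by a double-digit percentage, and each stakeholder's added margin then compounds on top of that.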

Another key point in the fight against overinflated system power budgets is recognizing the biggest opportunities for budget optimization. Start with the largest, most demanding loads in the system, talk to the critical stakeholder(s) who best understand what each load really needs in terms of power requirements, and take real characterization data whenever possible. Doing so will likely open the door to implementing intelligent power management (IPM) techniques, such as aggregating lower-voltage power rails, load sharing/shedding, and short-term power allocation. IPM is a “combination of hardware and software that optimizes the distribution and use of electrical power in computer systems and data centers” [1]. Though the term was coined for data center applications, its applicability is fairly universal, as this is more a frame of mind in design approach than anything else. For instance, changing the approach to the power subsystem architecture from an “always on” to an “always available” mentality can bring paradigm shifts in the results of the end solution. This will involve extensive discussions with team members as well as external vendors.
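
As a minimal sketch of the load-shedding/short-term-allocation side of IPM, the example below grants power to loads in priority order and sheds whatever does not fit under the instantaneous cap. The loads, priorities, and cap value are invented for illustration and not drawn from the text.

```python
# Minimal sketch of priority-based load shedding under a fixed power cap.
# All loads, priorities, and the cap are hypothetical illustration values.
from dataclasses import dataclass


@dataclass
class Load:
    name: str
    demand_w: float  # power requested right now
    priority: int    # lower number = more critical


def allocate(loads: list[Load], cap_w: float) -> dict[str, float]:
    """Grant power in priority order; shed any load that exceeds the remaining budget."""
    grants: dict[str, float] = {}
    remaining = cap_w
    for load in sorted(loads, key=lambda l: l.priority):
        grant = load.demand_w if load.demand_w <= remaining else 0.0  # shed if it does not fit
        grants[load.name] = grant
        remaining -= grant
    return grants


if __name__ == "__main__":
    demands = [
        Load("main_compute", 100.0, priority=0),
        Load("memory", 40.0, priority=1),
        Load("radio_tx", 50.0, priority=2),
        Load("led_panel", 20.0, priority=3),
    ]
    print(allocate(demands, cap_w=160.0))
    # radio_tx is shed this cycle and retried once the short-term budget frees up
```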

In other words, it tends to be far simpler, faster, and cheaper to put the maniacal work into reducing the system budget to a REALISTIC summary of the true, worst-case maximum power loading (from each individual power supply’s perspective) than to pour all that sweat equity into trying to bend physics and available components to the whim of unreality. Given that time and cost-down pressures are a constant, following this strategy enables a far more amenable negotiation amongst team stakeholders and a pragmatic balance between time, cost, and quality. These inevitable tradeoffs are inexorably tied to each other, however much we might wish otherwise, as illustrated by the figure below: a product can be optimized for any one of time, cost, or quality, but not without compromising the other two.

Articulating the difference between a leverage and a reuse is important when communicating needs to program managers or external vendors because each can imply something very different, yet the terms are often used interchangeably in a way that can yield negative program and/or solution impacts when miscommunicated. Leverage is taking an existing solution and tweaking minor aspects (i.e. – passive component values, signal/logic/comparator thresholds, cosmetics, form factors, etc.) of the original to optimize for a similar, yet not identical, use case.

In this context, “semi-custom” is another common term for a leverage. The distinction is particularly salient when talking to a component vendor about a “fully custom” design (e.g. – new from the ground up) versus a “semi-custom” design that will likely be a modification of some commercial off-the-shelf (COTS) solution, as there will likely be huge differences between the two in what is quoted for price (component and non-recurring engineering, or NRE) and time.

Fig. 2: The Time/Cost/Quality Triangle

Direct reuse refers to taking an existing design and copying it exactly. Effectively, this is the same thing as buying COTS components, though it can sometimes be a bit of a grey area since some fixed designs are actually created with flexibility in mind; for instance, power bricks with a wide input-voltage range or a programmable output can be reused across different applications. It is also common to leverage a part family, particularly power modules designed for common footprints, optimizing specific module features (i.e. – input/output voltage range, power density, current handling, pinout, filtering, etc.) for the application.


In general, a common test for determining whether one is looking at a leverage or a reuse checks three key characteristics: form, fit, and function (i.e. – aesthetics, mechanical/thermal compatibility, and electrical/communicative performance). This is another area in which careful negotiation and detailed discussion with team partners and solution providers pay big dividends, because organizations can differ widely in how strictly they define adherence to form/fit/function. For instance, taking the exact same power supply and changing its ENABLE or POWER ON signal logic from positive to negative (high-level turn-on vs. low-level turn-on) may seem too trivial to move it from direct reuse to heavy leverage, but the change may require a whole new round of qualification testing just like a new product (e.g. – new part numbers to manage and all that comes with them) and therefore falls under the leverage category. Even more trivial, seemingly, is changing a word, statement, or value on the printed label of a power brick, but if that is a safety label, or the change requires special formatting of the part number or unique identification info in the electrically erasable programmable read-only memory (EEPROM), then new regulatory compliance testing may be required and/or manufacturing processes must be adjusted, so this breaks the form/fit/function test.

Having survived the process of negotiating the system power budget, one can now confidently focus on proposing solutions to turn that budget into a reality. Given the time and cost pressures, an initial effort will focus on known-good solutions or subcircuits (a.k.a. – macros), which is where leverage and even direct reuse come into play. It is important to focus on leveraging/reusing good solutions and not just blindly recycling because of operational pressures (with an exception noted below). This is where the need to make time and resources available for the things “we do not have time/resources to address” comes into play: blindly recycling means all the bugs and shortcomings are reused as well. That said, an organization that is very explicit about its form/fit/function test may require a second-source component to intentionally mimic a known bug or defect to maintain backward compatibility when multisourcing solutions (NOTE: multisourcing is an entire topic of its own and the pros/cons of its implications should be deeply investigated before implementation, though that is out of scope for this white paper). Neglecting generation-over-generation, iterative product improvement can really hurt overall operational efficiency. Conversely, reusing a tried and trusted design with known performance greatly speeds up the development process (i.e. – the platform design approach). There are plenty of well-established, reliable power vendors to partner with to gain these advantages, particularly through COTS power modules.


If a design team must work on multiple system developments concurrently and/or in rapid succession, then they are likely to have a go-to toolbox of various power solutions/sub blocks/product families to fit a handful of standard application scenarios. This frequently consists of pre-built, pre-qualified, pre-tested power modules, whether they be developed in house or procured from a power supply vendor. Naturally, this strategy is implemented to optimize all SWaP-C factors as discussed above, but what matters most is the mitigation of risk factors, especially for critical/high-reliability and/or high-volume deployments.

For example, an isolated power supply for a SiC gate driver can be built from a transformer driver + transformer + rectifier + LDO, but a ready-made DC/DC module (such as RECOM’s RxxP1503D with asymmetric output voltages designed for optimal gate driver performance) not only speeds up the R&D stage, but also replaces many BOM components with one and reduces the chance that a design error damages the expensive SiC transistor.
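
As a rough sizing sanity check for such an isolated gate-drive supply, the average power the DC/DC module must deliver can be estimated as P ≈ Qg × (Vpos − Vneg) × fsw plus the driver's quiescent draw. The sketch below uses illustrative assumptions (gate charge, rail voltages, switching frequency), not RECOM or SiC-device datasheet figures.

```python
# Back-of-envelope sizing of an isolated SiC gate-drive supply.
# All numbers below are illustrative assumptions, not datasheet values.

def gate_drive_power(q_gate_c: float, v_pos: float, v_neg: float,
                     f_sw_hz: float, p_quiescent_w: float = 0.05) -> float:
    """Average power drawn from the isolated supply.

    Each switching cycle moves the full gate charge across the rail span
    (v_pos - v_neg), so P = Qg * dV * f_sw, plus driver overhead.
    """
    return q_gate_c * (v_pos - v_neg) * f_sw_hz + p_quiescent_w

# Example: 120 nC gate charge, +15 V / -3 V rails, 100 kHz switching
p_avg = gate_drive_power(q_gate_c=120e-9, v_pos=15.0, v_neg=-3.0, f_sw_hz=100e3)
print(f"Estimated gate-drive supply power: {p_avg * 1000:.0f} mW")
# ~270 mW here, well within the output rating of typical 1-2 W gate-drive DC/DC modules
```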

3. Streamlining the system design process
Know your team stakeholders!

This transcends well beyond the first-order engineering team members directly involved in the system development. It should include program managers (PMs), supply chain owners, manufacturing personnel, and even the SW/FW designers. Though seemingly counterintuitive, some of the most important stakeholders to talk with early on are the marketing/sales folks, along with anyone else who has the most direct contact with customers and/or end users. It is best to negotiate compromises and make informed decisions before they are dictated in a trickle-down fashion, exclusive of power solution stakeholder inputs and oversight, as outlined at the beginning of this white paper. Avoid “if we build it, they will come” thinking: if the market requirements and potential are not known before starting a new project, then the chances the product will flop are much higher.

Know your technology!

Do not wait until design kickoff to start thinking about performing an industry survey, whether to get a finger on the pulse of the latest offerings or to refresh dated info used to drive previous projects. Inviting vendors to give technology/roadmap updates can be a great way to get a quick overview, utilize vendor resources to consolidate proposed solutions, and perhaps even get a jump start on competitive analysis. It can save a lot of time and effort (and mitigate the risk of missing out on the SOTA) to leverage the resources of motivated, external support partners to survey the massive landscape of industry offerings and boil it down to a more manageable list to start working with. Most vendors will jump at this opportunity (and perhaps even throw in free lunch) for early engagement on potential developments.

NOTE:

Always consider the source of any info and take it with a grain of salt, but this also highlights the importance and value of establishing a comprehensive working relationship with key vendors and service providers. In high-stakes developments, a “the customer is always right!” approach does not foster the most conducive engineering environment, so a more collaborative relationship that also shares some risk can enable much greater chances of success for all involved.

Plan ahead of, during, and after project completion!

Take pause to review a “design playbook” or collection of learnings (a.k.a. – best practices, golden nuggets, etc.) before getting too deep into project/product definition. Typically, the most recent issues from the last project are the ones that get overlooked because the team was too pressured to get a product out the door. Do not be shy about arranging team meetings multiple times throughout the project (ideally once per major project phase/milestone), especially for reviews covering Design for Anything (DFx), safety/compliance (this includes powerline and electromagnetic interference or EMI test compatibility), and user experience.

For more information, please visit www.recom-power.com
