Wednesday, September 29, 2010

Trouble ahead for the Cleantech markets Europe-wide




I was reading an analysis from Citigroup on the prospects for the European Utilities to meet the extraordinary capex demands facing the Utility Sector in the decade to 2020, as it tries to cover the enormous costs of government-imposed environmental regulations and embark upon a large-scale asset replacement cycle. This directly impacts the Cleantech sector. The summary below makes an interesting read: it points to still further reductions being required in the cost of energy, and in capex, if companies are going to survive through the next decade.




The €1trn Euro Decade: European Utilities
The European Utilities sector is facing a decade of unprecedented investment requirements. Across the five major markets (UK, Germany, France, Spain, Italy) we estimated that the investment requirements totaled €800bn in the decade 2010 to 2020, and across the EU as a whole the figure could easily top €1trn.
One year on, we revisit this issue and ask what has changed. Unfortunately for the governments of Europe, who are driving the capex surge and are relying upon the Utility sector to deliver their very expensive environmental policies, the main developments in the past 12 months are all negative for capex:
1. Costs up: Estimates of the total capex spend required by 2020 to meet environmental targets and replace/renew aging assets have risen from €800bn to €938bn for the five major EU markets. Rising equipment costs and additional requirements on Utility companies in areas like energy efficiency have outweighed the delay in some replacement expenditure;
2. Risks up: Events in Spain and Germany over the summer have substantially increased investor perception of political risk in the Utility sector, in our view. This is particularly damaging where Utilities are making very large investments in order to meet government environmental targets. Much of this investment is fundamentally uncommercial and relies upon government-directed subsidy. If companies do not have complete confidence that the subsidy regime will be maintained beyond their investment pay-back period, then investment will not flow, in our view.
3. Woeful sector performance: Since March 2009 the European Utility sector has underperformed the wider market by 26%. Of particular worry to policy makers should be that the underperformance has been concentrated in the large cap, generation-based Utilities – the very same companies who are expected to do most of the heavy lifting on the capex front. The European Utility sector has been significantly de-rated, both relatively and in absolute terms. This suggests that equity investors are much less confident over the sector's future cash flows and reflects a substantial increase in the cost of equity.



Can the Sector Finance €938bn?

In the Citigroup Global Markets report, Citi has built a sector-wide model to try to ascertain how much balance sheet headroom exists to fund the required capex, assuming that companies seek to maintain at least an A- credit rating and that power prices average €55/MWh. With these assumptions, the report calculates that the sector would have a funding shortfall of €277bn, which would need to be met largely through issuing equity if it were to meet the total capex spend requirement of €938bn. Given the cost of equity faced by the sector, such a level of equity issuance is, in Citi's view, highly unlikely.
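
As a rough back-of-envelope reading of those headline figures (a sketch only; Citi's actual sector model, with its credit-rating and power-price assumptions, is far more detailed and is not reproduced here):

```python
# Back-of-envelope reading of the headline figures quoted above; the underlying
# Citi balance-sheet model (A- rating floor, €55/MWh power prices) is not reproduced here.
capex_required_bn = 938      # total 2010-2020 capex requirement, five major EU markets
funding_shortfall_bn = 277   # shortfall that would need to be met largely via new equity

implied_headroom_bn = capex_required_bn - funding_shortfall_bn
print(f"Implied debt/cash-flow funding capacity: ~€{implied_headroom_bn}bn")
print(f"Share of capex needing fresh equity: {funding_shortfall_bn / capex_required_bn:.0%}")
```

On those numbers roughly 30% of the required spend would have to come from new equity, which is the crux of the report's concern.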

Wednesday, September 01, 2010

How lessons learned in other Industries can help the emerging Industries


How lessons learned in other Industries can help the emerging Industries:

I was at a CPV conference a couple of years ago in San Diego, and one of the companies presenting went through how they had improved productivity and quality. I was amazed at how rudimentary the techniques they had used to achieve it were, yet they thought this was "rocket science". This article might help some of you who are struggling a bit to increase profit, or are struggling to get your mind around product costs, and remember these techniques can be used even when outsourcing product.

A better way to measure shop floor costs

The CEO was coming to visit, and the senior plant manager at a large biotech production facility was uneasy. The latest numbers from the Finance Department hadn’t been good: the plant’s labor costs were rising, while margins were slumping. When the CEO asked what was going wrong, the manager could only describe his difficulties getting his hands around the problems.

As he explained, standard accounting measures based on the cost of goods sold meant that he couldn’t tell for sure whether margins were declining because fluctuating production volumes were reducing operating efficiency or because variations in the mix of high- and low-margin products were bringing down the plant’s average margins—or both. He thought the numbers should be better, given his knowledge of what was happening on the plant floor, but he had no way to dig into the operating details to explain quarter-on-quarter changes in productivity. That would require a much finer-grained understanding of the many components of product costs. The CEO gave the plant executive three months—until the next operating review—to come up with a better answer.

The plant manager knew he faced a devil of a time parsing the many activities of the biotech facility. For starters, the plant had seven distinct production areas and thousands of stock-keeping units (SKUs). In one laboratory-like section, PhDs mixed customized chemical products by hand. Elsewhere, fermentation and cracking lines processed biologic inputs. In another wing, staffers surveyed a continuous stream of capsules and vials as they passed through a fully automated production line. An assembly line for medical instruments occupied one wing; other areas housed testing and packaging lines. Some product families had hundreds of SKUs because of slight differences in key ingredients or concentrations. Swings in the monthly volumes and mix of production compounded the difficulty of pinpointing cost problems.

Imprecise cost accounting and its distortions

This plant was complex, but its problems are common. The issues facing its managers resemble those bedevilling myriad processes used in the fabrication of semiconductors, the production of specialty chemicals, and other applications with thousands of SKUs and complex production environments. Similarly, in our experience many managers who oversee shop floors consider traditional cost-of-goods-sold accounting—the widely used measure of operational performance—a blunt instrument. Fixed costs for capital equipment and inventory charges, for example, are averaged across SKU groups, masking changes in variable costs. When products are scrapped, that could often be due to poor forecasts by the marketing and sales functions, an issue that should be recognized in productivity measures. In most factories, multiple products often pass through the same production lines and share the same workers, making true cost assignments difficult, so the averages applied distort the true cost picture. Volume and mix swings accentuate the problem. Finally, when output volumes rise or fall, costs often don’t follow in lockstep, since there’s a time lag in consuming inventory.

The effects of getting measurements wrong can be substantial. Without good cost data, it’s hard to decide how to price products or even how much to produce. A hazy understanding of which production areas in a plant perform poorly leads to bad investment decisions. Multiplied across a large corporation’s manufacturing footprint, even minor plant-level miscalculations can have a significant impact. That’s a serious handicap in the current economic climate, since slower growth and more intense competition put a premium on operating efficiency. In plants we have examined, true costs vary from those assigned by traditional cost-accounting methods by 30 to 100 percent.

A new basis for measuring costs

The plant manager, knowing that he had no time to waste, quickly put together a team of experts, from a variety of functions, with the best knowledge of the plant’s processes and costs. The members of the team divided up the tasks facing it. Some undertook full-day fact-finding missions across the plant to get a more detailed understanding of the way processes flowed and the production staff was configured. Others pored over data on the cost of materials, labor, scrap, and overhead. After two months, the group had a plan for tackling the issues.

Clearly, the key was developing a radically detailed understanding of what happened to costs as the product mix and volumes shifted. The team mapped out three steps to accomplish this goal. First, it would define new product pathways and subpathways—granular “factories within factories” that made it possible to assign costs more accurately. Next, using a regression analysis of historical data, the team would detail cost drivers for each subpathway, an analysis based on past relationships between input costs and output produced. Finally, to account for dissimilar products, as well as for changes in the product mix and volumes, the team would define standardized “manufacturing units” (see below) that would allow productivity to be measured across time periods.

Using pathways to fine-tune product segments

The team grouped the plant’s product lines into pathways according to their common characteristics, such as the types of workers handling them and the processes used to manufacture them. In some cases, different pathways share labor or machinery. These high-level pathways, for example, separated biologics from chemical solutions and from instrument assembly. To delineate costs clearly, each pathway had its own measure of output: grams of gel for biologics, milliliters for chemicals, and pieces for instruments. The result was a set of distinct product families, each comprising several narrowly focused lines that shared common traits.
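
As a minimal illustration of that grouping (the pathway names and output units come from the article; the data structure itself is just a hypothetical sketch):

```python
# Hypothetical sketch of the pathway/sub-pathway grouping described above.
# Each high-level pathway has its own output measure, so costs are assigned
# and normalised within a pathway rather than averaged across the whole plant.
pathways = {
    "biologics":           {"output_unit": "grams of gel"},
    "chemical_solutions":  {"output_unit": "millilitres"},
    "instrument_assembly": {"output_unit": "pieces"},
}

# Sub-pathways ("factories within factories") group lines that share
# workers, equipment, or process steps.
subpathways = {
    "biologics":           ["fermentation_line", "cracking_line"],
    "chemical_solutions":  ["hand_mixed_custom", "automated_vial_line"],
    "instrument_assembly": ["assembly_line"],
}
```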

Building profiles of cost drivers

The next step was to identify cost drivers for each subpathway to help estimate input costs by the amount of output produced. Team members mined data on materials, labor, capital costs, scrapping charges, and other costs for each subpathway’s finely tuned production units. The team used statistical estimates to build these profiles, because materials and labor costs don’t rise and fall in linear fashion as output changes. (A 15 percent increase in the output of chemical solutions, for example, raises total hourly wages by only 10 percent, thanks to scale economies.) To estimate these cost and volume relationships, the team performed hundreds of regression analyses on historical cost data.
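
A minimal sketch of one such regression, fitting labour cost against output for a single sub-pathway. The data points here are invented for illustration, and a log-log fit is just one way to capture the non-linear, scale-economy relationship the article describes:

```python
# Minimal sketch: fit labour cost as a function of output for one sub-pathway.
# A log-log fit captures the non-linear, scale-economy relationship described
# in the text (e.g. +15% output -> only about +10% labour cost).
import numpy as np

# Hypothetical historical data: monthly output (millilitres) and labour cost.
output_ml   = np.array([400, 450, 500, 550, 600, 650])
labour_cost = np.array([820, 880, 940, 990, 1040, 1090])

# Fit log(cost) = a + b * log(output); b < 1 indicates scale economies.
b, a = np.polyfit(np.log(output_ml), np.log(labour_cost), 1)

def expected_cost(output):
    """Regression-expected labour cost for a given output level."""
    return np.exp(a) * output ** b

print(f"Cost elasticity with respect to output: {b:.2f}")
print(f"Expected labour cost at 575 ml: {expected_cost(575):.0f}")
```

The regression-expected cost for each sub-pathway then becomes the benchmark against which actual incurred costs are compared.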

With the pathways and information on cost drivers in place, the factory team could accurately assign the amounts of chemical and biological compounds, labor inputs, and in-process scrap that went into, say, the creation of a vial. Take the example of a shop floor area that processed both vials of chemicals and biologic capsules. Traditional accounting averaged labor costs for this area across all the biologic and chemical products that passed through the line; only minor adjustments were made for variations in the mix or in volumes. The new data on cost drivers, by contrast, made it possible to measure labor costs down to a fraction of a penny for each of the more precisely defined product pathways.

Standardize output with manufacturing units

These new metrics gave a highly accurate picture of how costs varied within each pathway when volumes or the product mix changed. But the team still had no way to get a broad picture of productivity fluctuations across the entire facility and across time periods as mixes and volumes changed. This was an apples-and-oranges problem: as the mix of vials and capsules fluctuated, there was no meaningful way to add vials measured in milliliters to capsules measured in grams across time periods to get a baseline output figure.

With pathway and cost driver analysis, the team could assess productivity change across periods by modeling the predicted production costs of each pathway and comparing them with actual incurred costs. To solve the apples-and-oranges problem, the team denominated these input costs in standardized manufacturing units, which allowed costs at the most granular levels to be rolled up to pathways and, critically, to the site level. This approach provided the big picture on costs and changes in productivity (for a before-and-after example, see the interactive exhibit, “Product pathways reveal true costs”).

Product pathways reveal true costs

Pathways and standardized manufacturing units reveal how costs vary when volumes or product mix change.

Here’s an illustration. In a base quarter, the biologics pathway might produce 1,000 grams of gel at an expected cost of $500 (in direct and indirect labor), the chemicals pathway 500 milliliters at an expected cost of $1,000 (also in direct and indirect labor). The computation assigns a value of 1 manufacturing unit for every $50 in production costs, so the first pathway earned 10 manufacturing units ($500 divided by $50), the second pathway 20 ($1,000 divided by $50), for a total of 30 manufacturing units. If, in a subsequent quarter, the actual cost of producing 1,000 grams of gel fell to $450, the cost per manufacturing unit would fall to $45, from $50—for a productivity gain of 10 percent. Similarly, changes in total costs in other pathways can be compared with regression-expected costs and the totals rolled up across pathways for a view of overall productivity change at a site.
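
Here is that computation as a minimal sketch (the numbers are taken directly from the worked example above):

```python
# Manufacturing-unit (MU) computation using the numbers from the worked example above.
MU_VALUE = 50.0  # $ of expected production cost per manufacturing unit (base quarter)

# Base quarter: regression-expected production costs per pathway, in $.
expected_cost = {
    "biologics": 500.0,   # 1,000 grams of gel
    "chemicals": 1000.0,  # 500 millilitres
}

manufacturing_units = {p: c / MU_VALUE for p, c in expected_cost.items()}
total_units = sum(manufacturing_units.values())  # 10 + 20 = 30 MU

# Subsequent quarter: actual cost of the same 1,000 g of gel falls to $450.
actual_biologics_cost = 450.0
cost_per_mu = actual_biologics_cost / manufacturing_units["biologics"]  # $45 per MU
productivity_gain = 1 - cost_per_mu / MU_VALUE                          # 10%

print(f"Total manufacturing units, base quarter: {total_units:.0f}")
print(f"Biologics cost per MU: ${cost_per_mu:.0f} (productivity gain {productivity_gain:.0%})")
```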

Applying the lessons

At the next quarterly meeting with the CEO, the new metrics were in force. Repeating the pattern of past meetings, the Finance Department reported numbers that seemed to show persistent problems. Labor costs and the number of labor hours worked had fallen, indicating a falloff in business. Meanwhile, raw-material inputs had skyrocketed. Using the newly developed pathway cost numbers, however, the plant executive showed that production volumes rose substantially in the instruments line but had dropped significantly for chemicals. The production of instruments involves high costs for materials but not much for labor—the exact opposite of the pattern for chemical products. That explained how the cost of goods sold could climb in the face of declining hours.

What about productivity? An analysis based on manufacturing units showed that it had risen by 5 percent. While the product mix had shifted substantially, total output, as measured by manufacturing units, had risen by 3 percent; the inputs used to produce those manufacturing units had fallen by 2 percent.
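
A quick consistency check of those figures (a sketch; in practice the numbers come from rolling manufacturing units up across every pathway):

```python
# Quick consistency check of the site-level productivity figure quoted above.
output_change = 0.03    # total output in manufacturing units: up 3%
input_change = -0.02    # inputs used to produce those units: down 2%

productivity_change = (1 + output_change) / (1 + input_change) - 1
print(f"Implied productivity change: {productivity_change:.1%}")  # roughly +5%
```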

The CEO incorporated the new metrics into company-wide reporting practices, and the gaps between operations and financial-performance measures diminished across the organization (see sidebar, “Managers’ checklist: Locking new cost measures into company practice”). A clearer picture of product margins allowed management to drop a range of poorly performing SKUs and to shift resources to higher-margin products. A more detailed understanding of costs led the company to realize further economies by shifting some production to sites where higher volumes would help absorb high fixed costs. The new measures also entered the company’s performance dashboards, and factory managers began tracking leading indicators of productivity, such as in-process materials scrap and labor utilization rates, on a daily basis.

In the wake of the recession, the demand for increased operating efficiency remains high. But disparities between financial and plant measures of costs and productivity exist at many manufacturing facilities. A better alignment, based on the enhanced gathering and analysis of data, can improve efficiency and provide a stronger foundation for pricing and product strategies.


About the Authors

Jon Duane is a director in McKinsey’s Silicon Valley office, where Nazgol Moussavi is a consultant and Nick Santhanam is a principal. The authors would like to acknowledge the contributions of Susan Ringus, an alumnus of the Pittsburgh office, to the development of this article.