Rated output, also known as nameplate rating, is determined by the wind turbine manufacturer based on a wind speed of their choosing. The rated output can be high or low depending on the wind regime chosen for the performance calculations. There is currently no unified approach to wind turbine ratings, which makes the process capricious.
Actual net output is not affected by the output rating, but capacity factor is, since capacity factor is the average output expressed as a percentage of the rated output.
If a machine is rated at 50 kW and it delivers 20 kW on average, its capacity factor is 40%. If we rate that same machine at 60 kW, the average output stays the same but the capacity factor drops to about 33%.
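The arithmetic above can be sketched in a few lines of Python (the function name is just for illustration):

```python
def capacity_factor(avg_output_kw: float, rated_output_kw: float) -> float:
    """Capacity factor: average delivered output as a fraction of rated output."""
    return avg_output_kw / rated_output_kw

# Same machine, same 20 kW average output; only the nameplate rating changes:
print(f"{capacity_factor(20, 50):.0%}")  # 40% at a 50 kW rating
print(f"{capacity_factor(20, 60):.0%}")  # 33% at a 60 kW rating
```

Nothing about the machine changed between the two calls; only the denominator did.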
You can start to see how nameplate rating (output rating) and capacity factor are arbitrary.
Most good-performing machines average 25% of their rated output, and a very good machine will deliver 35%; again, though, these percentages depend on the power rating the wind turbine maker chose.
Imagine a manufacturer that wants to give the illusion of a high-output machine: it could base its performance figures on unusually high wind speeds, fit a generator big enough to support those unrealistic wind conditions, and presto, it has virtually whatever size machine it wants. Not only is this deceptive to the consumer, but a generator that is too big for the application drives up cost and hurts efficiency.
So, since there's no industry standard for wind turbine power ratings, what's the best way to compare machines?
Cost per kilowatt hour.
Output per square foot of footprint or swept area is another way to compare apples to apples, but the metric we use most frequently at Uprise Energy is $/kWh.
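As a rough sketch of how a $/kWh comparison works, divide total installed cost by lifetime energy delivered. All figures below are hypothetical, and the simple formula ignores maintenance, financing, and downtime:

```python
def cost_per_kwh(total_cost_usd: float, avg_output_kw: float,
                 lifetime_years: float) -> float:
    """Installed cost divided by lifetime energy delivered (kWh)."""
    hours = lifetime_years * 365 * 24
    return total_cost_usd / (avg_output_kw * hours)

# Two hypothetical machines: different nameplate ratings don't matter here,
# only measured average output and price.
a = cost_per_kwh(total_cost_usd=100_000, avg_output_kw=12, lifetime_years=20)
b = cost_per_kwh(total_cost_usd=150_000, avg_output_kw=20, lifetime_years=20)
print(f"Machine A: ${a:.3f}/kWh, Machine B: ${b:.3f}/kWh")
```

Note that the nameplate rating never appears in the formula, which is exactly why the metric resists the rating games described above.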
If you have another metric that you use or would like to continue the discussion, please use the comments section below. Thanks for reading!