Validation Is Critical to Making Hosting Capacity Analysis a Clean Energy Game-Changer
A hosting capacity analysis (HCA) is a key grid transparency tool that provides a snapshot in time of the conditions on a utility’s distribution grid, reflecting its ability to host additional distributed energy resources (DERs) at specific locations without the need for costly upgrades or lengthy interconnection studies.
When a utility publishes an HCA, the expectation is that customers will be able to readily access data about what size of solar array, energy storage system, or electric vehicle charging station they can easily and affordably install at their home or business. However, the usefulness of an HCA is highly contingent on having confidence that its results accurately reflect grid conditions at the site.
As more states and utilities adopt and implement HCAs, we are learning more about this tool and gaining a clearer understanding of where HCAs are falling short of these expectations. Resoundingly, there is a clear need for an enhanced focus on validation measures to ensure these grid information tools are useful, accurate, and reflective of the state of the distribution grid.
When it comes to HCAs, there can be multiple ways to assess what it means for the model and results to be “accurate.” IREC’s Optimizing the Grid discussed the different HCA methodologies and their impact on accuracy. In this article, the latest in our Insight Series which provides deep dives on regulatory issues, I discuss the importance of validation and identify steps that regulators and utilities can take to put adequate validation measures in place and ensure confidence in their HCA maps and data.
In addition, I examine two different aspects of accuracy:
- Are the results produced by a utility’s HCA model free of data and calculation errors such that the results are reliable?
- Is the model producing results that adequately answer the question being posed?
Early Focus on Validation Saves Time and Money
Having a validation plan in place from the start is a key, and often overlooked, step in the process. As we have learned from HCA pioneers, focusing on validation at the outset can save considerable time and money down the road. California offers a good example for other states to learn from.
In January of 2019, each of the California utilities published their first, much anticipated, system-wide HCA results. IREC and others excitedly started to peruse the HCA maps to see how much capacity was available on the system overall and at specific potential interconnection locations. However, we quickly discovered that Pacific Gas and Electric’s (PG&E’s) map was showing little to no available hosting capacity.
In fact, aggregate data showed that approximately 80% of PG&E’s feeders, also known as circuits or lines, had little or no hosting capacity available for new solar! While it is broadly known that PG&E has relatively high solar penetration, it was highly unlikely that the vast majority of the system had NO remaining capacity for new solar projects of any size.
These results did not reflect the reality experienced by customers currently interconnecting projects and were met with immediate frustration and suspicion that the results were inaccurate and had not been validated.
In addition, while PG&E’s solar results were the most glaringly unexpected, IREC also started to notice that it seemed like a surprising number of circuits were also showing no capacity for any new load, such as electric vehicles, across all three of the utility territories in California. (Capacity for new load is indicated by the HCA “load results,” as compared to the “solar results,” which are the hosting capacity for new solar generation.)
IREC pointed this out to PG&E, San Diego Gas & Electric (SDG&E), and Southern California Edison (SCE), and asked the utilities to provide aggregated HCA results for new load. In response, the utilities revealed that 60-70% of their results indicated zero hosting capacity for new load. This too would have been a very surprising outcome of the HCA process, if it were true, but the result was so inconsistent with expectations that we knew a second look was needed.
IREC worked with the utilities, other stakeholders, and the staff at the Public Utilities Commission to investigate whether the results were indeed reflective of the actual system capacity. Ultimately these discussions led to PG&E conducting some validation efforts and conceding that the solar results were erroneous. Similarly, after some initial pushback, the California utilities all agreed that the questionable load results warranted investigation and refinement of the HCA load methodology.
This flawed HCA rollout in California could have been prevented had the utilities adequately verified the results before publishing their HCA maps. There were requests for validation at the outset of the proceeding; however, the Commission had not yet provided guidance on that topic, and thus the maps were published without an explicit requirement for validation.
Similar concerns and questions regarding the accuracy of HCA maps have cropped up in Minnesota and New York. This recurring theme suggests that validation is an essential step in the HCA process. But what does it mean to validate results, and how does one approach this process?
IREC has been working with state regulatory commissions and utilities in California, Nevada and Minnesota to explore approaches to data validation that ensure all stakeholders can rely on and trust the results. Below I describe what IREC has learned to date about HCA validation and point to areas where further investigation is required.
How Accurate Are the Inputs to the HCA Model?
One key consideration in determining the level of accuracy of HCAs is whether the results produced by a utility’s HCA model are free of data and calculation errors. HCA models are only as good as the data input to them: “garbage in, garbage out” as system modelers like to say.
Therefore, before performing its first HCA, a utility should undertake quality control efforts. Such efforts include verifying the accuracy of distribution system asset management databases (software that tracks the age, condition, and configuration of equipment) and geographic information system (GIS) databases (software that tracks the location of equipment).
From there, utilities must also ensure that their feeder models (models of the physical configuration of a feeder) support a power flow simulation (simulation of the electrical behavior of a feeder under different conditions).
Preparing feeder models so that they are accurate enough for use in the HCA takes time and effort. Historically, utilities’ distribution system asset management and GIS databases have not been reliable enough to support a load flow analysis like the HCA. Modernizing these utility databases to be accurate and precise enough to support the load flow analysis necessary to perform HCA is often the most time-intensive task associated with performing an HCA.
The first step a utility can take is to check for missing or incorrect equipment and equipment settings in its databases. Utilities report verifying a variety of items (such as settings for capacitors, reclosers, relays, and regulators; proper connectivity of laterals; loading and voltage violations; and the types and sizes of conductors and other equipment). Other items may also need to be checked.
After identifying a problem, an engineer fixes the problem with the feeder model in question and determines if the problem is likely to exist on other feeder models. Many utilities report identifying the same problem on numerous feeder models. In that case, using an automated script can expedite the process of identifying the problem and fixing it system-wide.
After running the clean-up scripts, a utility can prepare (also known as building or forging) its feeder model and develop its load data. These feeder models must be accurate enough to support load flow simulations, so at this point utilities often input their feeder models into their modeling software to check if each circuit results in an error or successfully completes the load flow simulation.
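The scan-fix-verify workflow described above can be sketched in code. This is a minimal illustration only: the data layout, function names, and the stand-in load flow check are assumptions for the example, not a real utility modeling API.

```python
# Hypothetical sketch of a system-wide feeder-model cleanup pass:
# find a recurring data error, apply the same scripted fix everywhere,
# then verify that each feeder model completes a load flow simulation.

def find_setting_errors(feeder):
    """Flag equipment records with a missing setting."""
    return [d["id"] for d in feeder["devices"] if d.get("setting") is None]

def run_load_flow(feeder):
    """Stand-in for a power flow solve; True means it completed."""
    return all(d.get("setting") is not None for d in feeder["devices"])

def cleanup_pass(feeders, default_setting=1.0):
    """Apply the scripted fix system-wide, then verify each model solves."""
    report = {}
    for name, feeder in feeders.items():
        for device in feeder["devices"]:
            if device["id"] in find_setting_errors(feeder):
                device["setting"] = default_setting  # scripted fix
        report[name] = "ok" if run_load_flow(feeder) else "load-flow error"
    return report
```

In practice the "fix" and the load flow solve would come from the utility's distribution modeling software; the point is the structure: identify a problem once, script the correction system-wide, and confirm every circuit still solves.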
Utilities across the country report spending a significant amount of time and resources to identify and correct errors in their distribution system data when first performing an HCA. For example, PG&E’s HCA rollout was challenged because it did not complete these steps before publishing its first HCA. When PG&E first published its ICA map, it did not provide results for approximately one third of the circuits on its system because the feeder models for those circuits produced errors rather than completing the load flow simulation. After these rollout issues, PG&E hired a consultant to develop and employ automated processes to clean up its model and check for complete load flow simulations. By contrast, other utilities like NV Energy and SCE completed these tasks before publishing their first results.
Establish Protocols for Validation of HCA Results
Once a utility receives its HCA results, it should determine whether any results are anomalous. An HCA validation plan should include a specific list of circumstances that will trigger manual review by an engineer, such as results showing zero hosting capacity, as well as review of a certain number of randomly selected feeders. Many utilities use automated flags to identify anomalous results that may be inaccurate and require review.
Criteria that utilities report using as a basis for flagging data include:
- large discrepancies between previous HCA results and current results;
- a lower hosting capacity during a non-peak daytime hour;
- a discrepancy in the number of results provided for a specific feeder (i.e., line sections) between the previous HCA cycle and the current cycle;
- an equipment setting that varies from those commonly used;
- a loading violation;
- a voltage violation.
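Several of the flag criteria above lend themselves to automation. The sketch below shows how such flags might be implemented; the field names and the 50% change threshold are assumptions for illustration, not values any utility reports using.

```python
# Illustrative automated anomaly flags for a feeder's HCA result,
# loosely following the criteria listed in the article.

def flag_result(prev, curr):
    """Return the reasons a feeder's result should get manual review."""
    flags = []
    # Large discrepancy between the previous cycle and the current one.
    if prev["hosting_kw"] > 0:
        change = abs(curr["hosting_kw"] - prev["hosting_kw"]) / prev["hosting_kw"]
        if change > 0.5:  # assumed threshold
            flags.append("large change vs. previous cycle")
    # Zero hosting capacity is a candidate false negative.
    if curr["hosting_kw"] == 0:
        flags.append("zero hosting capacity (possible false negative)")
    # Lower hosting capacity during a non-peak daytime hour.
    if curr["offpeak_hosting_kw"] < curr["peak_hosting_kw"]:
        flags.append("lower capacity in non-peak hour")
    # Line-section count changed between cycles.
    if curr["num_line_sections"] != prev["num_line_sections"]:
        flags.append("line-section count changed between cycles")
    # Loading or voltage violations in the model.
    if curr.get("loading_violation") or curr.get("voltage_violation"):
        flags.append("loading or voltage violation")
    return flags
```

Any feeder returning a non-empty flag list would be queued for the manual engineering review described above.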
Most utilities check all HCA results for false negatives, reviewing a feeder model if the results show that no hosting capacity remains. Utilities may also need to check for false negatives where the result is not zero but is nonetheless too low (e.g., a model result of 100 kW where the actual capacity is 500 kW).
We would further expect a rigorous data validation process to check for false positives, perhaps by randomly selecting feeders to review to ensure that results do not show more capacity than is actually available on that feeder. Providing an automatic check for anomalous results is likely an important part of a data validation process.
After IREC raised concerns about the accuracy of PG&E’s initial rollout, PG&E’s consultant implemented a process to automatically flag anomalous results and manually review feeder models that produce anomalous results. If a problem is identified, PG&E then develops new quality control checks to fix the problem, and implements that check system-wide. Fourteen months after PG&E published its initial map, it appears that it has completed the first step in validating its results for solar generation.
More Work Needed to Define Validation Steps
No utility claims to have performed a comprehensive HCA data validation effort, and a comprehensive effort would likely include other checks not described here. Further research by data modeling experts is necessary to determine what those additional checks should look for. Moreover, publication of a set of best practices for HCA validation is necessary so that stakeholders and regulators know whether utilities are performing a comprehensive data validation effort.
Is the HCA Model Based on Valid Assumptions?
Data validation efforts may reveal not only errors with data input to a model but more fundamental problems with a model’s assumptions or functionality.
As described above, when the California utilities rolled out their first results, we identified irregularities with PG&E’s solar HCA results as well as with all three utilities’ load HCA results. However, in the case of the load, the results were fairly consistent across the California utilities, each showing roughly 60-70% of the distribution grid having no ability to absorb new load.
The similar results across all three utilities suggest that the problem is not likely caused by faulty data inputs. Rather, it appears that the results are a “correct” output of the load HCA model, and that problems likely exist with the basic set of assumptions or techniques in the load HCA model itself. The California utilities agreed that the load HCA results warrant investigation into and refinement of the HCA load methodology.
NV Energy, which uses the same iterative methodology as the California utilities, identified at least one problem with the load model: its treatment of low voltage violations. When the voltage on a line segment is below 116 volts, “any increase in load anywhere on the feeder in question or on any feeder served from the same substation transformer” will produce a result of zero hosting capacity for all feeders served by the same substation transformer. Moreover, IREC learned that PG&E’s load results have changed considerably in recent months, leading to further unanswered questions. Other problems with the load model may exist.
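The low-voltage rule NV Energy described can be expressed compactly, which helps show why it zeroes out so many results at once: a single sub-116 V segment on one feeder wipes out the load result for every feeder sharing that substation transformer. The data layout below is an assumption for illustration.

```python
# Sketch of the low-voltage treatment described above: any line segment
# below 116 V zeroes the load hosting capacity for ALL feeders served
# by the same substation transformer.

LOW_VOLTAGE_LIMIT = 116.0  # volts, per the rule described in the text

def apply_low_voltage_rule(feeders):
    """Zero load results across transformer groups with a violation."""
    # Transformers serving any feeder with a sub-116 V segment.
    violated = {
        f["transformer"]
        for f in feeders
        if min(f["segment_voltages"]) < LOW_VOLTAGE_LIMIT
    }
    # The violation propagates to every feeder on that transformer.
    for f in feeders:
        if f["transformer"] in violated:
            f["load_hosting_kw"] = 0.0
    return feeders
```

Because one feeder's violation propagates to its neighbors, even a modest number of low-voltage segments can produce the widespread zero-capacity load results observed in California and Nevada.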
With IREC and multiple utilities questioning the results of the load HCA model, it is clear that work is needed to explore why the model is producing these results and improve it. This unfortunately means that load HCA results currently provide little or no value to regulators or stakeholders. Validation efforts can reveal some underlying problems with model assumptions as well as simple data errors or discrepancies.
Lessons Learned for Making HCAs a DER Game-Changer
The time is right to build upon existing quality control efforts and develop a rigorous program of HCA validation. Although best practices have not yet been established, some key themes emerge from the existing efforts:
- Early emphasis on validation protocols can catch a variety of errors and unforeseen challenges.
- Implementing a data validation plan for a utility’s first HCA can support a more cost-effective and efficient process, avoiding retroactive data cleanup processes down the road.
- Models used to perform load calculations as part of the HCA may need refinement.
- Engaging data modeling experts on HCA validation protocols can support the development of a set of best practices for HCA validation, which will provide utilities, regulators, and stakeholders greater confidence in HCA results.
Using HCA results in the interconnection process has the potential to be a game-changer that will both expedite the interconnection process and minimize the number of applications for speculative projects, enabling DERs to connect to the grid more quickly, efficiently and cost-effectively. This could drastically reduce the cost and time spent by customers siting and designing new DERs and by utility engineers performing interconnection studies.
If utilities are to use HCA results when deciding which projects can interconnect without further study, then it is essential for regulators to require that utilities take adequate steps to validate their results. With these considerations in mind, HCAs will continue to evolve and improve and serve as a key tool for grid transformation.