Opinion Dynamics Acquires Research Into Action

Waltham, Mass.: Opinion Dynamics, the largest independently owned company specializing in research on energy efficiency, demand response, and renewable energy efforts, is pleased to announce the acquisition of Research Into Action, a leading company focused on energy evaluation and market insights.

“We have admired the work of Research Into Action for a number of years, and we are pleased to add their vast experience and knowledge of energy research to our team. Combining our companies allows us to continue to provide the quality of work that our clients have come to expect, while also helping us to further expand our core services,” said Opinion Dynamics CEO Brad Kates.

“I am looking forward to combining forces with Opinion Dynamics,” said the President of Research Into Action, Dr. Jane S. Peters. “A large consideration in selling Research Into Action was ensuring that our current staff would have opportunities to grow and fit into a culture that is focused on clients in the clean energy industry. I’m pleased that we found such a fit in collaborating with Opinion Dynamics.” Dr. Peters will serve as a Senior Vice President in the combined company.

The energy industry is shifting rapidly, and the best companies need to be flexible and customized in their approach to serving clients. With this acquisition, Opinion Dynamics adds seasoned energy research professionals to our team, allowing us to continue to provide cutting-edge research approaches to our clients, as well as allowing our senior staff to continue their focus on answering the emerging questions our clients are asking.

The combined company will continue to be headquartered in Massachusetts, with offices throughout the West Coast including the Bay Area and San Diego, California, along with Portland, Oregon.


Media Contact:
Keri Bailey, Communications Specialist
email: kbailey@opiniondynamics.com
phone: 617-301-4645

The Green to Fund Green

By Megan Campbell

The last ten years have seen an explosion in the number of financing options available for energy efficiency and renewable energy upgrades. With more financing choices than ever, both homeowners and energy efficiency program implementers are looking for the best options for financing energy efficiency upgrades that lead to deeper energy savings.

New Financing Options
Traditionally, property owners have had conventional financing options for energy efficiency projects, such as personal loans, home equity lines of credit (HELOCs), credit cards, and energy-specific credit cards available through major banking institutions. However, these options do not address access to financing as a barrier to energy efficiency, and, more recently, new options have become available. In 2008, the green bank concept was conceived as a way to facilitate clean energy development. Defined as a public or quasi-public financial institution, a green bank leverages public funds and private-sector investment to create low-cost, long-term financing for clean energy measures. State green bank programs such as those in Connecticut and New York (the first states to adopt the model) have had great success in terms of lending volume. Since its establishment, the Connecticut Green Bank has surpassed $1 billion in clean energy investment and, to date, the NY Green Bank has made close to $458 million in overall investments. Alternatively, Property Assessed Clean Energy (PACE) programs allow a property owner to finance the up-front cost of energy improvements made to their property and then pay the costs back over time through their property tax bill (10- to 20-year terms). With all these options, which financial products are residential clients turning to? And are they getting the most bang (or energy efficiency) for their buck?

Baby Steps for Energy Efficiency Financing
At the time when energy efficiency financing products were emerging for the residential sector, Opinion Dynamics conducted a market characterization study for energy efficiency financing. Through a general population survey of California homeowners, we found that over a two-year period (2014-2015) 36% of homeowners made some upgrade to their homes that reduced energy use. Of these, a quarter used financing to pay for the upgrade, most commonly (over 80%) a conventional, non-energy-efficiency financing product, such as loans offered through retailers, contractors, and credit cards. Extrapolating to the population, this means that only 1 to 2% of California homeowners in that two-year period used an energy efficiency financing product.
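The extrapolation above is simple multiplication of the survey shares; a quick sketch (restating the reported figures as inputs) makes the arithmetic explicit:

```python
# Survey shares reported above (2014-2015 California homeowner survey):
upgraded = 0.36        # homeowners who made an energy-reducing upgrade
financed = 0.25        # of upgraders, the share who used any financing
conventional = 0.80    # of financers, the share using a conventional (non-EE) product

# Share of all homeowners who used an energy efficiency financing product:
ee_share = upgraded * financed * (1 - conventional)
print(f"{ee_share:.1%}")  # 1.8%, i.e. the "1 to 2%" cited above
```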

How Far Can EE Financing Grow?
This finding raises several important market questions. How many customers want or need new financing options and how many need financing at all? How can we encourage homeowners to invest in deeper retrofits that lead to deeper energy savings? Is it possible that more attractive financing options could entice more homeowners to invest in energy-efficient upgrades and potentially invest in larger projects at one time? With the plethora of existing energy efficiency and conventional financing options, what type of financing product would cut through the market noise and resonate with a homeowner?

Addressing Customer Needs
To help answer these questions, Opinion Dynamics recently conducted a latent-class, discrete-choice (LCDC) research study with over 400 homeowners to identify the financing product features that resonate most with those looking to upgrade their homes. In this study, customers completed online “shopping exercises” asking for their likelihood to pay for a given home improvement project with varying project costs, payment methods, underwriting criteria, financing services, energy savings thresholds, and rebate levels.

The study revealed that, when faced with a marketplace full of options and features, most homeowners (78%) preferred some sort of financing over cash or credit card, suggesting that customers who used no financing in the baseline study (75%) may have chosen financing if they had found an attractive option. The LCDC showed that a variety of financing models would be attractive in the market, though term loans were the most popular. However, rather than financing type, the monthly payment amount, or more precisely an affordable monthly payment amount (from the homeowner’s perspective), was the most critical factor in encouraging homeowners to take out a loan for home improvements.
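Because the affordable monthly payment proved decisive, it helps to see how term-loan payments scale with loan length. Below is a minimal sketch using the standard amortization formula; the principal, rate, and terms are hypothetical illustrations, not figures from the study:

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortized loan payment: principal * r / (1 - (1 + r)**-n)."""
    r = annual_rate / 12   # periodic (monthly) interest rate
    n = years * 12         # total number of monthly payments
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical $15,000 whole-home retrofit financed at 6% APR:
print(round(monthly_payment(15_000, 0.06, 10), 2))  # 10-year term loan
print(round(monthly_payment(15_000, 0.06, 20), 2))  # 20-year (PACE-like) term
```

Stretching the same principal from 10 to 20 years drops the payment from roughly $167 to roughly $107 per month, which is the lever that longer-term products like PACE pull on affordability.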

Many utilities in the industry are struggling to make whole-home upgrade programs cost-effective under the traditional rebate model. Some utilities are attempting finance-only pathways to whole-home upgrades as a potentially more cost-effective solution to encouraging deeper savings. Based on the research to date, many homeowners may attempt a deeper retrofit if they find an affordable monthly payment solution.

Discrete Choice Modeling: Choices Speak Louder Than Words

By: Evan Tincknell

An important yet difficult challenge for any energy-efficiency program is the ability to quantify savings directly attributable to program offerings. To do so, one needs to understand the degree to which customers would have taken comparable energy-saving measures in the program’s absence. Other valuable insights such as which aspects of a product or offering people find most enticing or how much customers would be willing to pay are key to implementing an effective program. Every purchase involves choice, and those choices are rarely as simple as they appear. While it’s nearly impossible to define an individual’s exact purchase behavior, discrete choice modeling can reveal complex patterns in the choices that groups of people make.

Developed by economists and psychologists in the late 1970s, discrete choice modeling pairs a specialized survey design with regression-based analytics to better understand and predict customer preferences under various market scenarios. A discrete choice survey presents participants with a series of ‘choice sets,’ asking them to make hypothetical purchase decisions by choosing among several products with varying characteristics. By aggregating the outcome of many individual purchase decisions, discrete choice analysis can predict the types of products customers find most attractive, how purchase tendencies would shift in the absence of discounts, and which products are most effective to discount. Common applications of this method include pricing analysis, product concept testing, product branding or positioning, and market share forecasting. In market research applications, discrete choice experiments are most often used with big-ticket items, such as airline tickets or cars.
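The workhorse model behind many of these analyses is the multinomial logit, in which each product gets a utility score and choice probabilities follow from exponentiated utilities. The sketch below illustrates only the mechanics; the utility coefficients and product attributes are hypothetical, not estimated from any real survey:

```python
import math

# Hypothetical linear utility: higher price lowers utility,
# an "efficient" attribute raises it. Coefficients are illustrative only.
def utility(price, efficient, beta_price=-0.8, beta_eff=1.2):
    return beta_price * price + beta_eff * efficient

def choice_shares(products):
    """Logit choice probabilities: share_i = exp(U_i) / sum_j exp(U_j)."""
    exp_u = [math.exp(utility(p["price"], p["efficient"])) for p in products]
    total = sum(exp_u)
    return [e / total for e in exp_u]

# A shelf with a program-discounted efficient bulb vs. a standard bulb...
discounted = [{"price": 2.0, "efficient": 1}, {"price": 1.5, "efficient": 0}]
# ...and the same shelf with the discount removed.
full_price = [{"price": 4.0, "efficient": 1}, {"price": 1.5, "efficient": 0}]

print([round(s, 2) for s in choice_shares(discounted)])   # [0.69, 0.31]
print([round(s, 2) for s in choice_shares(full_price)])   # [0.31, 0.69]
```

Comparing the two simulated shelves is exactly the "market shift in the absence of the program" exercise described below: the model predicts the efficient bulb's share falling once the discount disappears.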

One particularly powerful application of discrete choice modeling within the energy sector is estimating the net impacts of energy-efficiency programs. The method can also be used to optimize rate structures and pricing plans, design effective demand-response and energy-efficiency program offerings, and assess market conditions and remaining market potential for a product, service, or program.

In recent energy-efficiency evaluations, we have used discrete choice surveys to assess price sensitivity of program-discounted energy-efficient light bulbs. Although light bulbs are a relatively low-cost item and a fairly routine purchase decision for most shoppers, different customers focus on different product characteristics. Some might only consider price or gravitate toward the cheapest available product of a given wattage, while others may care more about light color, expected bulb life, or energy savings. The discrete choice survey presents respondents with a few product options with varying characteristics and uses hundreds of decisions from different customers to interpret preferences across products. The results allow us to simulate markets featuring either discounted or non-discounted energy-efficient bulbs, in effect projecting how market shares would shift in the absence of the program.

In addition to modeling price sensitivity and predicting market shift with different price options available, the discrete choice model also measures the relative importance of various product characteristics. In the case of light bulbs, we can include light color or expected bulb life in the survey and then use the results to predict which light colors are most popular or how strong preferences are for light color relative to bulb life. We can also estimate how similar products fare or the relative importance of price for different types of products, which can help identify what products are most impactful to discount and what level of discounting is most effective.

While the discrete choice method relies on customer self-report, its methodology avoids many of the biases associated with more direct survey questions about decision-making processes or willingness to pay. By asking customers to make trade-offs between price and other non-price attributes, this method reveals the true effect of price or other considerations on customer choice and avoids much of the social desirability bias commonly associated with more direct questions on product pricing.

Discrete choice modeling offers a wide array of analytic possibilities and insight into customer behavior. It is a time-proven method that holds tremendous promise in the relatively new area of energy-efficiency program evaluation, and its experiments are readily customizable, making them a promising tool for informing both program design and program evaluation. Energy-efficiency programs seeking to influence larger, more carefully considered purchase decisions may therefore look toward discrete choice experiments. In the context of constantly shifting markets, we look forward to helping our clients take advantage of this innovative and flexible approach to maximize program efficacy and provide their customers with the products they value most.

Let’s Get the ‘Auto’ in Auto M&V Pointed Down the Right Road


By Jeremy Eddy

When it comes to the advancement and integration of distributed energy resources (DERs) in a growing number of utility jurisdictions, things are moving at a fast and furious pace. The traditional ‘pipes and wires’ model of utility generation and distribution of power in a one-way flow is giving way to a networked two-way flow of information and energy resources. The proliferation of smart meters and advanced metering infrastructure (AMI) has opened up many exciting possibilities for energy efficiency projects to contribute to the integration of DERs and the vitality of utilities. But those possibilities are not without some interesting challenges, which evaluators and regulators will need to tackle as we travel along the Automated Measurement and Verification (Auto M&V) road.

Thanks to AMI, it is increasingly possible to describe energy consumption or energy savings in every hour of a year for every building. After normalizing for weather and, in some cases, for other factors, changes in consumption at the meter before and after an energy efficiency project can potentially yield quick turnaround on savings. Knowing both the location and the daily and yearly load-shape patterns for buildings makes it possible for utilities to deliver geotargeted energy efficiency by strategically grouping projects. This can deliver higher value to system operators and enable energy efficiency to participate, at a targeted level, alongside other DERs such as demand response, energy storage, and distributed generation in meeting the needs of the grid and of end users, and in helping meet greenhouse gas emission reduction goals. Another benefit is improved cost-effectiveness. With the increased data made available through AMI, reduced costs for the evaluation, measurement, and verification of savings could make a wider variety of measures cost-effective across a broader array of geographic areas and building types than is the case today.
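At its core, the weather normalization described above is a regression of consumption against weather in the pre-retrofit period, with savings taken as the gap between that model's prediction and metered usage after the project. The sketch below uses synthetic data and a single heating-degree-day predictor purely for illustration; production Auto M&V tools add occupancy terms, non-routine-event screens, and uncertainty estimates:

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Synthetic pre-retrofit days: ~20 kWh/day baseload plus 2 kWh per HDD.
pre_hdd = [0, 5, 10, 15, 20]
pre_kwh = [20, 30, 40, 50, 60]
a, b = fit_line(pre_hdd, pre_kwh)          # a ~ 20, b ~ 2

# Post-retrofit days: same weather, but the building uses less per HDD.
post_hdd = [0, 5, 10, 15, 20]
post_kwh = [18, 25, 32, 39, 46]

# Savings = pre-period model's prediction for post-period weather,
# minus what the meter actually recorded.
savings = sum((a + b * h) - k for h, k in zip(post_hdd, post_kwh))
print(f"estimated savings: {savings:.0f} kWh over the post period")
```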

One of the attractive opportunities that this movement brings to light is the possibility of having energy savings ‘units’ bought and sold in energy markets. By commoditizing energy efficiency savings, attracting investments from larger capital markets becomes a viable next step. For commoditization to come to fruition, energy savings must be measured in standardized, automated units with near-real-time turnaround, in a way that is both reliable and robust. However, there are some key considerations and hurdles associated with making automated, credible energy savings happen and become market-ready.

Occupancy changes, equipment or schedule changes, non-routine weather events or even customer participation in multiple, overlapping energy efficiency projects are some examples of situations that can create ‘noise.’ This noise in the analysis of the AMI data can potentially confound the signal or energy savings results. While looking at projects in aggregate versus individually may help to dilute the noise, there’s still a risk of losing the signal. Geotargeted programs and project location are also key to determining how the projects are aggregated. Additionally, some building circumstances don’t lend themselves to an Auto M&V approach; new construction projects are the perfect example.

Currently, AMI data is not a panacea. The notion of standardizing and automating energy savings calculations also presumes that the requisite data is (1) consistently available and (2) consistently accurate; in fact, it can be notoriously difficult to obtain and requires significant effort and expertise to clean and prepare for savings analysis. What additional challenges to the standardization and automation of energy savings calculations need to be addressed? For starters, standardizing the measurement of energy savings can lead to tradeoffs with accuracy, and depending on the situation this tradeoff may or may not be worth the cost. Second, when it comes to weather-sensitive measures, time is needed to monitor performance over different seasons, which reduces the ability to turn around savings feedback in near real-time. Finally, commoditized energy savings require safeguards. The challenges noted above could be fodder for market ‘sharks’ who could profit from claiming savings that are not valid. For example, consider a commercial property owner whose retail tenants are going out of business: the owner could easily claim the associated decline in energy use as energy savings to gain a profit.

As the dust has settled a bit on early claims regarding the transformative potential of Auto M&V, a few themes have come to light that show Auto M&V tools and methods represent a useful and evolving tool-set in the hands of various stakeholders, but not a silver bullet. Establishing standards and protocols is important to build shared ownership and buy-in regarding energy savings data management, calculations, and system boundaries. This is critical if the tools and savings are to meet users’ needs and be considered credible by all stakeholders.

Attribution of savings.

Different audiences have different needs for data and how it is interpreted. Proponents of commoditized energy savings focus on general change in energy consumption. In contrast, program administrators need to be able to pinpoint the exact change in usage in order to evaluate the cost-effectiveness of programs and determine if ratepayer funds are being used responsibly. These different needs translate into different baseline assumptions and different levels of tolerance for standard error bands around the savings estimates.

Defensibility of the measurement approach or software.

It is important to keep in mind the adage, “measure what you care about, or you will end up caring about what you measure.” Open-source code is currently being offered by various software developers and provides useful tools to our industry. These products are built around specific statistical regression models and data analysis protocols based on each software developer’s proposed solution. While each of these software products has attractive features, regulatory bodies should not mistake these products for an effective replacement for dialog about how the needs of different stakeholders can best be met. Those needs serve as the perfect platform for open and systematic dialog around the issues and opportunities that Auto M&V presents and the best ways to address and incorporate them.

Build on successes.

Pay-for-performance programs allow implementers to be compensated based on achieved energy savings measured at the meter. The tools and protocols for savings measurement are made available in advance, so the implementer can anticipate and gauge their performance using the same tools that will be used by the program administrator or independent third-party evaluator. Where AMI data is available, sharing between administrators and evaluators minimizes the risk associated with differences in measurement approaches between claimed and verified savings. Program environments where Auto M&V tools and methods are woven into the program design and execution can create a context for ‘kicking the tires’ of the tools and improving them over time. As an ongoing best practice, multiple tools and approaches should be triangulated to gauge the degree to which they yield similar savings estimates. Discrepancies can highlight opportunities for improvement, and comparing results can yield helpful insight. Closer scrutiny of early use cases, where Auto M&V modeling reveals results that are markedly different from expectations, can trigger a closer look by all stakeholders to identify and potentially rectify the discrepancy. This application requires lower accuracy criteria than verifying savings, leads to better program results, and provides valuable opportunities to test and refine tools and methods.

Experience to date shows that Auto M&V tools are already proving their worth as a source of rapid empirical feedback on changes in usage at the meter, whether at the whole-building level or in settings where individual end uses or pieces of equipment are sub-metered, but we still have a long road ahead. A host of hurdles around standardization and stakeholder needs must be addressed before broadly financed markets for commoditized and standardized units of energy savings become established. The ongoing evolution of Auto M&V helps move energy efficiency further forward in participating with other DERs as an optimized resource in the modern and future grid.

Great Influencers in Energy: Nikola Tesla: Master of Energy


By: Keri Bailey

We chose Nikola Tesla as one of our Great Influencers in Energy for not only his incredible intellect and acumen with developing the alternating current, but also for his way-before-his-time advancement of conservation initiatives.

Nikola Tesla was born amid a summer lightning storm* at midnight on July 10, 1856, in the mountainous region of the Balkan Peninsula known as Lika. From a young age, he was passionate about mathematics and science and had his heart set on becoming an engineer. His eidetic memory allowed him to memorize novels and learn to speak eight languages fluently. He claimed that many of his best ideas came to him in flashes, images he would retain and later build to specification. At age 24, one such flash occurred during a sunset stroll. Tesla remarked, “…The idea came like a flash of lightning, and in an instant, the truth was revealed. I drew with a stick on the sand the diagram shown six years later in my address before the American Institute of Electrical Engineers.” That diagram, drawn impermanently in the sand, was for the alternating current (AC) induction motor, a technological advance that would change the world, from a man whose vision and genius would do the same.

Tesla came to New York with little more than four cents to his name. After a few failed starts, he was finally able to set up a laboratory in Manhattan, where he developed his induction motor. Tesla filed for seven U.S. patents in the field of polyphase AC motors and power transmission. These covered a complete system of generators, transformers, transmission lines, motors, and lighting. When George Westinghouse, inventor and owner of the Westinghouse Electric Company, heard of Tesla’s invention, he made Tesla an offer to purchase the patents and license the technology, providing an upfront payment and royalties on each horsepower of electricity generated and sold.

The World’s Fair of 1893 in Chicago was set to be the first all-electric fair in history. Tesla helped illuminate the fair with more light bulbs than could be found in all of Chicago, and wowed audiences with a variety of demonstrations, including lighting a wireless bulb from across a stage. These displays featuring AC impressed fair patron Lord Kelvin and would help garner Westinghouse and Tesla a contract to generate electrical power at Niagara Falls, the first large-scale AC power plant in the world. According to Marc Seifer, author of Wizard: The Life and Times of Nikola Tesla, “People aren’t aware that Tesla was close friends with conservationist John Muir. One of the founders of the Sierra Club, Muir loved that Tesla’s hydroelectric power system was a clean energy system.” Or as Tesla put it, “running on the wheelwork of nature.”

After Niagara, Tesla returned to working on experiments in his lab in New York City and became engrossed in the exploration of high-frequency electricity. Tesla knew that higher frequencies could hold technological advantages: lamps could glow brighter and energy could be transmitted more efficiently. His experiments led him to create the Tesla coil, which could generate both high frequencies and extremely high voltages. During his time working with high frequencies, Tesla developed some of the first neon and fluorescent lighting, took the first x-ray photos, and discovered that his coils could transmit and receive radio signals when they were tuned to resonate at the same frequency. These experiments marked the beginning of Tesla’s lifelong obsession—the wireless transmission of energy.

Tesla hypothesized that he could wirelessly transmit unlimited amounts of energy to any location on earth at high altitudes, theorizing that the thinner air would be more conductive. To test his theory, he would require an ideal high-altitude location and a power supply, as he would need to create electrical effects on a grand scale. His friend and patent attorney, Leonard Curtis, offered to find land and power for the research; he tapped the El Paso Power Company of Colorado Springs.

For nine months Tesla conducted experiments in Colorado. On the evening of one of his experiments, huge arcs of blue human-made lightning more than 130 feet in length shot out from the lab. Residents reported lightbulbs illuminating when off, sparks coming off metal fixtures, and horses bolting from stalls when their metal shoes sparked. Tesla’s experiment ultimately burned out the dynamo at the El Paso Electric Company and the entire city lost power. The power station manager was livid. Though Tesla kept a detailed journal, to this day it is still unclear whether any of his experiments were truly successful, but he returned to New York convinced wireless transmission was possible.

When Tesla returned to New York City, he wrote an article detailing his futuristic vision for energy. He chronicled a means of tapping the sun’s energy with an antenna, he suggested that it would be possible to control the weather with electricity, and he proposed a global system of wireless communications. To most, his ideas were almost incomprehensible, but Tesla’s vision of the future was clear. In interviews, he spoke of the necessities of clean energy, the conservation of natural resources, and the protection and clean-up of the environment. Tesla’s close friend, Robert Underwood Johnson, shared his views and in his autobiography, Remembered Yesterdays, talks of his friends and acquaintances such as notable conservationists Theodore Roosevelt, John Burroughs, and John Muir. Johnson was editor of Century Magazine and one of the driving forces, alongside Mr. Muir, behind the creation of Yosemite National Park in 1890.

Today the name Tesla is ever-present. The magnetic field strength of MRI scanners is measured in teslas. His name graces schools and institutions, he has an airport dedicated to him, he is celebrated in museums, numerous plaques and statues pay him tribute, and even the world’s best-known electric car bears his name. Though his vision for wireless power has yet to be realized, his hope for energy efficiency is actively being implemented across transportation, manufacturing, and in residential and commercial buildings on a global scale. Opinion Dynamics conducts energy efficiency program evaluations for clients across North America. Just like Tesla’s holistic view of energy, we value a multidisciplinary approach that blends our engineering, statistical analysis, and technical capabilities with techniques from fields such as sociology, economics, psychology, and anthropology to provide an in-depth, human-focused perspective on energy evaluation and efficiency.

*Favorite Tesla fact of Spencer Kates (age 11) along with Tesla’s complete dislike of pearl jewelry!

To Be Continued…

Great Influencers in Energy: Lord Kelvin: When Electricity and Water Do Mix

By: Keri Bailey

Sir William Thomson, 1st Baron Kelvin of Largs, was born on June 26, 1824, in Belfast, Ireland. He was a famed physicist, mathematician, engineer, and inventor. Throughout his life, Thomson’s main focus was the practical implementation of science. He achieved fame through his work on submarine telegraphy and was employed as a scientific adviser in the laying of the transatlantic telegraph cables in 1857-58 and 1865-66, for which he was knighted by Queen Victoria. In 1892, he became the first scientist to be honored with a peerage and took the title Baron Kelvin from Kelvin Grove in Glasgow, where his grandmother had lived. While you may not be familiar with the name Sir William Thomson, the name Lord Kelvin will be forever linked to energy, not just for his lifetime achievements but for his incredible scientific legacy.

Lord Kelvin did more than any other scientist of his time to develop accurate methods for measuring electricity. Motivated by the challenges of his era, he was an adept inventor. He established the Kelvin balance, or ampere balance, for the precise specification of the ampere, the standard unit of electric current. Lord Kelvin also recognized the need to define extremely low temperatures precisely and proposed an “absolute thermometric scale,” a powerful concept with far-reaching potential. Beyond a simple relative temperature, where objects are referred to as hotter or colder than something else, the absolute, thermodynamic temperature of an object provides information on how much kinetic energy its atoms and molecules have. Kelvin’s definition of the absolute temperature scale became especially important in the field of superconductivity. And in a more commonplace application, the Kelvin scale includes the measurement of color temperature for items such as light bulbs and LED screens. Enjoy a ‘warm’ color light? Its yellowish spectrum closely resembles what a hot object at a temperature of 3,000 kelvin (K) would naturally radiate. A cooler blue light color temperature of 5,000-5,600 K is typically labeled ‘daylight’ or ‘full spectrum’ in reference to the temperature of the sun’s surface, which is about 5,800 K.

Water, Water Everywhere – So Let’s Make Electricity

In 1893, the international Niagara Falls Commission decided on the design of the Niagara Falls power station. At the head of the commission was Lord Kelvin, who asked the Westinghouse Electric Manufacturing Company to harness the power of the falls. The Westinghouse hydroelectric system was based on the work of Nikola Tesla, who since childhood had dreamed of harnessing the power of the great natural wonder. It was the first large-scale, alternating current electric generating plant in the world. When the system went live, the first power reached Buffalo at midnight on November 16, 1896. The Niagara Falls Gazette reported that day, “The turning of a switch in the big powerhouse at Niagara completed a circuit which caused the Niagara River to flow uphill.” At first, power was supplied only to Buffalo, but within a few years the power lines were electrifying New York City, illuminating Broadway and powering the trolley railways and subway system.

Today, multiple hydropower stations are still in operation on both the American and Canadian sides of the Falls, including the Robert Moses Niagara Power Plant, the third largest in the United States. Every evening, when the demand for electricity is much lower, and the tourists have gone, the Moses plant uses power pumps to divert water from the Niagara River and push it into the upper reservoir behind the Lewiston Dam. During the following day, when electrical demand is high, part of the potential energy of the water in the Lewiston reservoir is converted into electricity at the Lewiston Dam, and then flows into the forebay, where it falls through the turbines of the Moses plant.
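The conversion the paragraph describes, falling water turned into electricity, follows the basic hydropower relation P = ρ·g·Q·h·η (water density times gravity times flow times head times efficiency). A back-of-envelope sketch with purely illustrative numbers, not the Moses plant's actual specifications:

```python
# Back-of-envelope hydropower output: P = rho * g * Q * h * eta.
# All values below are illustrative assumptions, not plant data.
rho = 1000      # water density, kg/m^3
g   = 9.81      # gravitational acceleration, m/s^2
Q   = 500       # flow through the turbines, m^3/s (assumed)
h   = 90        # effective head (drop from forebay to turbines), m (assumed)
eta = 0.9       # combined turbine/generator efficiency (assumed)

power_w  = rho * g * Q * h * eta   # power in watts
power_mw = power_w / 1e6
print(f"{power_mw:.0f} MW")        # ~397 MW for this assumed flow and head
```

The same relation explains the nightly pumping strategy: moving water uphill when electricity is cheap stores potential energy (m·g·h) that the plant releases through the turbines when daytime demand, and price, is high.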

The Power of the Ocean

Lord Kelvin’s interest in water didn’t stop there; maritime issues also inspired him. Besides his work on the transatlantic cable, he worked on improving the nautical compass to account for interference from the metal used in modern ship construction, and he constructed a harmonic analyzer, an analog computer used to predict tidal rhythms. Tidal measurements are still of particular interest today, especially when you consider the potential of tidal energy. Lord Kelvin might be pleased to know that Scotland is currently the proving ground for some impressive tidal energy projects.

Orkney, Scotland is better known for idyllic coastal landscapes, Neolithic structures, and gently rolling farmland than for being a hub of marine renewables. But earlier this year, Phase 1 of MeyGen’s tidal stream array was completed off the coast of Orkney and entered what is expected to be a 25-year operational period. To the north of the site is the uninhabited island of Stroma, which forms a natural channel with the mainland that accelerates the millions of tons of water flowing between the North Sea and the Atlantic Ocean every day. The array of underwater turbines makes use of these high flows (some of the fastest-flowing waters in the U.K.) and the channel’s ideal medium water depths. Since connecting to the grid, the units have already generated about 6 GWh of power, according to the Edinburgh-based developer, setting a new world record for monthly tidal stream production in March with 1,400 MWh. The completed array will eventually have 269 turbines, enabling it to provide enough electricity to power 175,000 homes.

Microsoft also looked to Orkney for its latest experiment, Project Natick. A marriage of cloud computing and sustainability, Project Natick leveraged submarine technology and Orkney’s sustainable tidal and wind-powered grid to develop a self-sufficient, 40-foot-long, underwater data center. Data centers are notorious energy consumers due to the cooling they require. Microsoft’s development team adapted a heat-exchange process commonly used for cooling submarines to the underwater data center, which houses 12 racks and 864 servers. The system pipes seawater directly through radiators on the back of each of the 12 server racks and back out into the ocean. “Colocation with marine renewable energy is a step toward realizing Microsoft’s vision of data centers with their own sustainable power supply,” explained Christian Belady, general manager of cloud infrastructure strategy and architecture in Microsoft’s cloud and enterprise division.

And Scotland is not alone; marine energy projects are being pursued worldwide. A pilot program for marine energy development is currently being tested in Washington’s Puget Sound. Massachusetts has established a tidal technology testing site in the Cape Cod Canal, while the Fundy Ocean Research Centre for Energy (FORCE) in Nova Scotia is Canada’s leading test center for tidal current technology. Bermuda has signed an agreement to purchase two 20 MW wave energy parks in the Caribbean, and a wave park is planned for installation off the Canary Islands. Northern Ireland, India, China, and Japan are all in the process of piloting tidal stream arrays as well. According to the World Energy Council, marine energy is still a growing market with enormous potential; these initiatives could demonstrate tidal energy as a competitive renewable energy source.

In acknowledgment of his contribution to electrical standardization, the International Electrotechnical Commission elected Lord Kelvin as its first President. Lord Kelvin was appointed a Privy Counselor and one of the first members of the new Order of Merit. He received the order from the King in August 1902 and was sworn a member of the council at Buckingham Palace. These achievements mark a career that was dedicated to putting science and technology to pragmatic use for a greater good. Lord Kelvin’s appreciation for the sea and electricity helped cement his name in history, so here’s to the scientists, the engineers, and the innovators who look ahead and dream of affordable, renewable energy and see potential in the tides.


Photo credit: The Smithsonian Archives

Olivia Patterson Named Vice President

News from Opinion Dynamics

WALTHAM, MA. – Opinion Dynamics announced today that Olivia Patterson has been promoted to Vice President, effective immediately. In this new role, Patterson will continue her industry-leading work in research surrounding demand response and distributed energy resources, along with providing senior direction to the company’s Data Science team.

“Olivia has been a star at Opinion Dynamics since she joined us a decade ago,” said Opinion Dynamics CEO Brad Kates. “Her talents are numerous, but chief among them is her ability to understand abstract and detailed technical information and to communicate it in a way that anyone can understand. Couple this knowledge with her engaging interpersonal style, and Olivia excels in everything that she does—we’re excited to have her as a Vice President.”

“I am excited to work at a time when the confluence of new technologies, advanced data access and analysis, and changing grid needs require new policy, interventions, and methods, and I look forward to working with our incredible team in this new capacity,” said Patterson.

Ms. Patterson earned her Bachelor’s in History and Cultural Anthropology from the University of California Los Angeles and her Master’s in International Economics from the University of California San Diego. She is an active member of several industry associations, including the Peak Load Management Alliance, and is a go-to speaker and trainer on issues involving energy efficiency and demand response.

About Opinion Dynamics – Opinion Dynamics works to advance knowledge to address emerging energy and social issues through sound and insightful research. It is the largest independently-owned company that focuses on energy efficiency, demand response and renewables research. It is headquartered in Massachusetts with offices in Northern and Southern California, as well as satellite offices throughout the country.


Media Contact:
Keri Bailey, Communications Specialist
PH:  617-492-1400 x4645

Zero Net Energy for All.

By Dr. Ellen Steiner

Buildings consume almost half of the energy produced in the United States and produce 45% of all greenhouse gas (GHG) emissions. However, energy savings and GHG reduction opportunities in commercial and residential buildings remain untapped. There is a solution that has gained momentum but lacks traction: Zero Net Energy (ZNE), also referred to as Zero Energy (ZE) and Net Zero Energy (NZE), and therein lies a glimpse of the issue. ZNE overlaps several industries and is impacted by numerous governing bodies, all of which have their own terms, definitions, and jurisdictions. By and large, this hinders policy development, implementation, and even classification. How do we better define something that is broad in audience and ambition yet greatly in need of clarification to make it universal?

For us to understand how to implement ZNE and make it more inclusive, it is imperative to understand ZNE at its core, including the key benefits and importance of building sustainably. The definition of a ZNE building differs depending on whom you ask, but at a basic level it works like a balance sheet: a ZNE building produces as much energy as it consumes over the course of a year. ZNE buildings remain connected to the grid and, depending on their configuration, can produce more energy than they consume, either pushing excess power back to the grid or storing it on-site. While the basic concept of what constitutes a ZNE building is generally agreed upon, there continues to be much debate regarding the nuances and qualifications of ZNE.

Building professionals, industry stakeholders, and the government all use multiple definitions to determine what amount of renewable energy is necessary to offset a building’s energy use. These various definitions account for variables that affect the energy balance sheet, such as energy generation, transmission, delivery, and distribution losses. The current definitions largely focus on determining a building’s renewable offset, but they limit the types of buildings and the scope of projects that could qualify for ZNE classification, especially in urban areas.

A key issue is defining the building system ‘boundary,’ or where the renewables are located. The founding principles of ZNE insisted that renewables be located within the building footprint or on-site. That works well for new construction, but as ZNE transitioned from theory to practice, experts quickly realized that ZNE is often not possible for many existing buildings, and the restrictive definition was inhibiting the expansion of ZNE to broader applications. With an increase in city and state ZNE-related initiatives, the call for more flexible definitions was answered by the U.S. Department of Energy, which established the following four ZNE definitions:

  • Zero Energy Building: The building footprint (i.e., rooftop) or the building site (i.e., parking lot, adjacent land) must be utilized for on-site renewable generation.
  • Zero Energy Campus: A multi-building campus can be utilized for on-site renewable generation to offset the energy use of all or a portion of the campus buildings. This approach allows ZNE to be achieved as a whole when some of the buildings within the campus footprint are limited in capacity for on-site renewables.
  • Zero Energy Portfolio: A single owner of multiple building sites could aggregate its buildings so that the combined on-site renewable energy generated would offset the combined building energy use.
  • Zero Energy Community: An energy-efficient community where the annual delivered energy is less than or equal to the on-site renewable exported energy.
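All four definitions ultimately apply the same annual balance test: delivered energy must not exceed the renewable energy generated (or exported) within the chosen boundary, whether that boundary is a building, a campus, a portfolio, or a community. A minimal sketch of that bookkeeping, using hypothetical monthly figures for a single building:

```python
def is_zero_net_energy(delivered_kwh, exported_kwh):
    """Annual ZNE test: total delivered energy must be less than or
    equal to total on-site renewable energy exported."""
    return sum(delivered_kwh) <= sum(exported_kwh)

# Hypothetical monthly totals (kWh) for one building, Jan-Dec:
delivered = [900, 850, 700, 500, 300, 200, 250, 300, 400, 600, 800, 950]
exported  = [200, 300, 500, 700, 950, 1100, 1150, 1000, 800, 550, 300, 200]

print(is_zero_net_energy(delivered, exported))  # True
```

Note that the building draws heavily from the grid in winter and exports heavily in summer; only the annual total matters, which is exactly why the boundary question (whose meters get summed) is so consequential.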

Buildings in urban areas face the greatest challenges to becoming self-sustainable and are often left by the ZNE wayside.  Buildings that occupy large footprints often have difficulty implementing simple energy-efficiency measures necessary for achieving ZNE, such as leveraging natural ventilation and daylighting. For buildings that exceed three stories, the ratio of energy demand to the potential for energy production is high. For these buildings, ZNE efforts could be hindered by the cost and complexity of adding structural support to the roof to accommodate the weight of solar panels. Other buildings that lack surrounding parking lots or green spaces are challenged to find suitable areas to situate renewable energy systems such as wind or solar. These building scenarios, due to their boundary limitations, have little chance of cost-effectively being converted to ZNE status.

Lowering energy loads for all buildings is desirable, with the greatest need being in urban environments and communities, yet these areas require more flexibility and an increase in creativity to help develop policies and conditions that make achieving ZNE possible. By broadening the definition of ZNE boundaries and where renewables can be sited, we expand the potential for how ZNE can be achieved in an urban or community setting. Understanding how to calculate the renewable energy offset needed to achieve ZNE, and where to physically locate the sources of renewable energy supply, are important details. They are key when determining how to apply ZNE principles in state and local policies, and they factor into where resources and funding are directed. By broadening how we define ZNE, not just for new construction but for existing buildings as well, we will ultimately help all stakeholders keep the focus on the spirit of ZNE’s end goals: energy efficiency and reducing the carbon footprint, because that’s a balance sheet worthy of the effort.




U.S. Department of Energy. (September 2015). A Common Definition of Zero Energy Buildings. Retrieved from www.energy.gov, file bto_common_definition_zero_energy_buildings_093015.pdf (https://bit.ly/2qCboGE).











Energy can power more than just your Wi-Fi; it can power your future.

“In order to focus our attention on energy education for the young — at all grade levels [in an effort] to help our children understand our domestic and international energy situation now and in the future.” -Proclamation 4738

With those words, then-President Jimmy Carter established National Energy Education Day in 1980. To be honest, one day a year hardly seems enough to focus on all the promise that energy education holds; after all, there is a vastness to it. Energy gave rise to advances in industry, technology, and science, and those industries, in turn, have spawned new and exciting ways to take energy beyond powering a simple lightbulb. Energy is ever-evolving. Just think for a second about how it impacts your own life and where you would be without it. Energy demand is ever-evolving as well, and we have made great strides in harnessing elements like solar, wind, and water to power us, but the energy industry itself needs powering. It needs brilliant young minds to look for new alternative sources, advance technology, take up the challenge of sustainability, and help forge new policies that provide positive impacts on energy demands and climate change.

There are few industries as broad in scope as energy. It interconnects with, and is dependent on, so many fields of study that no matter your passion, there is a plethora of diverse opportunities to find your niche. For example, bachelor programs that focus on the Humanities, Communication, Economics, Political Science, and the Environment can be used to gain a deeper understanding of energy conservation, predict future trends in sustainability, and provide the foundation to guide organizational and global policy in an environmentally friendly direction. Geography, Social Sustainability, Ecology, Geology, and Marine Biology studies are all relevant to sustainability, conservation, and energy demand impacts.

Cities and municipalities are rolling out energy-driven initiatives with vast scope. They are taking advantage of government prizes and grants to attract the best talent in technology, science, and engineering to tackle projects like smart grid initiatives, energy storage solutions, and the mobilization of electric vehicle fleets. Governments and corporations employ data scientists and statisticians to analyze demographic and economic trends, better understand global energy demands, and build informed strategies to address future demand growth.

Wind turbine technicians install, maintain, and repair wind turbines. According to the Bureau of Labor Statistics, employment of wind turbine service technicians is projected to grow much faster than average, as wind electricity generation is expected to grow rapidly over the coming decade. Similarly, employment of solar photovoltaic installers is projected to grow due to the continued expansion and adoption of solar panel installations. These jobs, in turn, have spawned even more creative roles, such as wind turbine drone inspectors, with even more interesting job titles expected as technology and demand evolve. There is also a whole host of behind-the-scenes jobs that keep our energy infrastructure operational and moving forward.

Energy is at the forefront of everything we do; it is a universal commodity on a global scale. That vastness of scope means that no matter what field of study drives you, you can more than likely find an opportunity in the energy sector to make your own. Energy powers more than just our lives and our tech, and the energy innovations of tomorrow need the up-and-coming minds of today to continue forging vast, new paths and discovering vast, new methods. So, when you’re asked that most dreaded of high school and college questions, “What do you want to do when you graduate?”, just think of all the vast possibilities!

solar photovoltaic panel installer, chief technology officer, energy efficiency researcher, smart grid solutions architect, wind technician, smart grid director, electrical engineer, smart grid engineer, systems engineer, geologist, climatologist, project manager, cybersecurity manager, statistician, engagement manager, systems analyst, test engineer, renewable systems engineer, socio-economist, distribution systems engineer, anthropologist, ecologist, systems manager, critical infrastructure consultant, scientific researcher, smart grid systems interoperability validation manager, hydroelectric technician, grid security director, software engineer, manager infrastructure development, software infrastructure subsystem leader, lead software engineer, software quality assurance leader, programmer, global policy consultant, software systems engineer, advanced metering engineer, marketing, sales, director business development, strategic commercial manager, sustainability consultant, wire technician, economist, materials scientist, production engineer…

We, at Opinion Dynamics, are always on the lookout for energy-minded people! Please feel free to check out our latest job openings here on our site!





Do Distributed Energy Resources Call for Integrated Evaluation? The Value of Understanding Each Tool in the Energy Resources Tool Box

By: Chelsea Petrenko, Ph.D.

The effort to integrate distributed energy resources (DERs) into a seamless instrument began nearly a decade ago. While disparate demand-side management tools existed in the early 2000s, more recently utilities have leveraged them as a package to address energy and carbon emission reductions. Distributed energy resources include energy efficiency (EE), demand response (DR), distributed generation (DG), energy storage, and electric vehicle charging. Any one of these technologies constitutes an entire field of innovation and adaptation. As such, evaluating the efficacy of DER technologies, both individually and as part of an integrated package, presents challenges and opportunities.

Well-established energy resources like EE have protocols for measuring the impact and cost-effectiveness of programs. For instance, years of program evaluation and methods refinement have resulted in specific algorithms and approaches to measuring program attribution (i.e., the fraction of energy savings directly attributable to an EE program). These approaches have led to critical findings in the past. For example, Opinion Dynamics has pioneered the use of multilevel modeling to identify positive, neutral, and negative savers in programs with thousands of participants, resulting in optimization and refinement of program delivery. Over time, discoveries such as these have strengthened EE technologies, resource allocation, and EE program performance.
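The multilevel models themselves are beyond a short example, but the underlying idea of sorting participants into positive, neutral, and negative savers can be sketched with a simple pre/post comparison. Everything below is illustrative: the threshold, the participant data, and the use of a raw consumption change rather than a fitted model are all simplifying assumptions, not Opinion Dynamics’ actual method.

```python
def classify_saver(pre_kwh, post_kwh, threshold=0.02):
    """Label a participant by relative change in annual consumption.
    A drop beyond the threshold is a positive saver; a rise beyond it
    is a negative saver; anything in between is neutral.
    The 2% threshold and the data below are illustrative only."""
    change = (pre_kwh - post_kwh) / pre_kwh
    if change > threshold:
        return "positive"
    if change < -threshold:
        return "negative"
    return "neutral"

# Hypothetical participants: (pre-program kWh, post-program kWh)
participants = {"A": (12000, 10800), "B": (9000, 9050), "C": (7000, 7600)}
labels = {pid: classify_saver(pre, post) for pid, (pre, post) in participants.items()}
print(labels)  # {'A': 'positive', 'B': 'neutral', 'C': 'negative'}
```

In practice a multilevel model would estimate each participant’s savings while borrowing strength across thousands of households, but the payoff is the same: knowing who is saving, who isn’t, and where program delivery can be refined.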

Nascent technologies such as storage or electric vehicles do not have a rich history of measurement and verification, which leaves a broad range of energy professionals asking how to measure the impacts of these programs and, further, whether it is worthwhile to do so. If the driving force behind DER is to reduce energy use and carbon emissions overall, perhaps time and money need not be spent tracing the origin of each kWh saved – a task that becomes incrementally harder with each DER component added to the mix. On the other hand, in a time of rapid growth and innovation, we see benefits to determining which technologies and programs are driving energy savings, and by doing so, developing a stronger and more informed foundation for the deployment of DERs. To forgo evaluating individual DER components would be akin to treating a medical condition with five medications without ever knowing which one works—the result would be costly medical bills and compounding side effects.

To this end, we are working to develop evaluation methods in stride with our clients’ program implementation. For example, Opinion Dynamics recently contributed to the research design for PSEG Long Island’s Super Saver program, which will deploy an impressive array of DER technologies to reduce load in a capacity-constrained area of Long Island. The program is one way that PSEG Long Island is advancing the New York Reforming the Energy Vision (REV) strategy. The Super Saver program will offer a mix of EE, DR, and storage strategies, including advanced metering, smart thermostats, energy audits, and educational materials, to about 10,000 customers; together, these aim to allow the utility to meet rising demand without investing in costly grid upgrades. Opinion Dynamics and PSEG Long Island worked together to create a plan to quantify the demand impacts of each piece of the multi-pronged program. By participating in the research planning process before the program rolled out, we tailored evaluation strategies to meet specific program needs and, importantly, helped PSEG Long Island determine what data collection efforts would be critical to measuring program success in the future. The resulting body of work will illustrate the contribution of each DER component and bolster the implementation of a first-of-its-kind program for a major U.S. utility.

In New York, the speedy adoption of DER has generated questions regarding the effective and equitable distribution of new energy resources. We are working on the necessary but difficult task of developing approaches and quantitative methods to measure the incremental effects of DER technologies. In the long run, this effort will result in integrated programs that deploy the most effective technologies in the communities that need them most.

Normalized Metered Energy Consumption: It’s All About Context


Many in our industry have become enamored with Normalized Metered Energy Consumption (NMEC) as a fresh new metric that we can use to measure baselines, determine energy savings, and help value energy efficiency. If you work outside of California, you may not have heard of NMEC; it is emerging as a valuable tool in the growing body of analysis methods tied to Advanced Metering Infrastructure (AMI) data, and it is sometimes thought of as part of the Enhanced Measurement and Verification (EM&V/M&V2.0) toolbox. As with anything new, there is always a race from idea to implementation and execution. However, when dealing with an ‘out with the old and in with the new’ paradigm, it is important to carefully consider the approach and implications prior to full execution. NMEC is no different—we have smart metering technology data that can inform an array of powerful analytical methods, but to date, the capabilities have been notably narrow and largely untested. As NMEC methodology is still in its infancy, we thought we would provide a bit of context, because taking stock now will go a long way toward proactively shaping NMEC’s future.

The march of smart meter data.

Essentially, NMEC focuses on estimating energy consumption from metered data as a way to tie an energy efficiency program to the grid. AMI data reinvigorates existing consumption analysis practices, building greater precision and confidence into these models thanks to the far greater number of observations available through the vast streams of AMI data now open to implementers and evaluators alike. With this wave of new data and the promise of accompanying program designs, we can innovate new methods to standardize and quantify the measurement of energy consumption and, more specifically, of changes in consumption relative to a reference point: an intervention. This dimension of analysis holds tremendous potential for supporting the measurement of energy efficiency gains, with quick turnaround on initial savings estimates supported by more rigorous measurement of achieved savings at the meter as time accumulates post-intervention. It may also facilitate the participation of energy efficiency savings alongside other distributed energy resources in markets that match supply, demand, and timing on the grid. While this potential is exciting and encouraging for future applications, it is unfolding in a rapidly changing environment of policy and software development. The realization of NMEC’s promising future depends on recognizing both the strengths and limitations of existing NMEC methods and the important contextual considerations in using NMEC for different programs.

To automate or not to automate.

The commercial products currently available represent much of the push to be first to market with automated NMEC software tools. Each of these products boils down to an underlying set of modeling algorithms chosen based on the judgment and background of the product developers. Disaggregating the energy savings signal from the background energy consumption noise can be highly complex and, in a variety of settings, not amenable to automated methods. Changes in building occupancy, weather-sensitive measures, or energy price fluctuations may confound the effort to attribute a specific change in the overall energy savings signal to a given energy efficiency measure. Changes in equipment or energy usage patterns that are unrelated to the energy efficiency measure (e.g., non-routine adjustments) also threaten to cloud the energy savings data. Proper NMEC analysis factors in all of the pre- and post-intervention consumption data. It is data with context: analyzing all the variables to see the whole picture of what was in place before, what conditions existed, and to which population the results apply helps establish a solid baseline against which to measure future initiatives.

Data, data everywhere? Not so much.

There is value in data. In this digital age, companies pay for it, guard it fiercely, and leverage it. There is little in our day-to-day that isn’t tracked or monitored so that someone can have access to data to make informed business choices. It’s the same with energy data. But with the proliferation of AMI data, current NMEC development runs into a conundrum: accessibility.

In some cases, the utility companies that implemented and invested in the smart meter system are dependent on the Software as a Service (SaaS) companies they commissioned to collect and disseminate their AMI data. This arrangement can work, provided guidelines are in place to protect customer data, maintain transparency, and confirm that the SaaS provider is truly returning the results requested.

Consumers (considered the legal owners of their data) and competitive service providers can be subjected to overly complex authorization processes to gain access. Many states that were early adopters of smart meters had initiated state-wide data repositories. Most were difficult, if not impossible, to interface with or access. In 2012, the Obama administration started the Green Button Initiative to help grant easy access to energy consumption data in an easy-to-understand format. Non-profit alliances and national coalitions have taken up championing the ‘data for all’ initiative and often serve as industry watchdogs. However, invisible barriers around access and permissions still exist. A recent settlement agreement in Texas between utilities, energy customers, and third-party users addresses the bottlenecks between smart meter data and accessibility, and, if approved by the Public Utility Commission of Texas, may pave the way for similar agreements.

Another hurdle is that many utility data systems were built around the core purpose of customer billing and regulated as such. Even in cases where the regulated permissions issue can be resolved, AMI data has historically been stored in systems that make it difficult to transfer. AMI data can be prolific, and such large quantities of data require advanced storage capabilities and careful approaches to handling Personally Identifiable Information (PII).

Data is about much more than its capture; its provision, analysis, and meaning are imperative. Universal access is just the start. Businesses thrive on data analysis, but what is interpreted from the data is vital to leveraging it. Embedding data collection and processing protocols into the design of future energy efficiency programs would provide much-needed transparency, helping achieve reproducible and replicable results and allowing stakeholders to make informed decisions.

Question everything.

“What specific question are we trying to answer?” and “Who is asking the question?” These questions are at the root of statistics and econometrics in virtually all cases. So, for whom are we trying to answer the fundamental questions? At present, those most vested are utility program administrators, followed by third-party program implementers and external auditors. Current automated commercial products can answer only some of the questions these stakeholders are asking. But they are not the only audience; regional transmission operators, investors, and consumers have a stake as well.

From a Program Administrator’s Perspective:

One specific and promising case for using NMEC methods is pay-for-performance (P4P). P4P would shift programs away from a flat-rate rebate system to a more market-based approach. For this type of program to be effective, it would rely heavily on NMEC data to show that the change in consumption relative to pre-existing conditions is adequate to suit a P4P initiative and that a certain period of post-intervention measurement is adequate to demonstrate savings from the intervention. In this setting, the program implementer may receive timely information from NMEC outputs that can inform real-time adjustments to how the program is being implemented. Final P4P payments could be based on a refinement of the overall savings estimate as time accumulates post-intervention. This arrangement works well when the background energy usage is relatively stable, the energy savings signal is relatively large, and little change happens at the site level that is unrelated to the energy efficiency intervention (such as significant shifts in usage patterns). Program effectiveness can also benefit from early Measurement and Verification (M&V) partnered with NMEC tools and methods. The key criterion for reviewing the effectiveness of early M&V is to gauge whether it converges with ex-post M&V over the course of 9-16 months, where the ex-post EM&V is conducted using a full complement of pre- and post-period data and, in some cases, triangulation across multiple modeling and measurement methods.
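The convergence criterion described here can be expressed as a simple tolerance check between the early M&V estimate and the ex-post estimate. The 10% tolerance below is an assumption for illustration; actual convergence criteria are program-specific.

```python
def converged(early_mwh, ex_post_mwh, tolerance=0.10):
    """True if the early M&V savings estimate lands within `tolerance`
    (as a fraction) of the ex-post estimate. The 10% default and the
    example figures are illustrative only."""
    return abs(early_mwh - ex_post_mwh) / ex_post_mwh <= tolerance

print(converged(950, 1000))   # within 5% of ex-post -> True
print(converged(700, 1000))   # off by 30% -> False
```

When early estimates repeatedly pass a check like this, administrators gain confidence that interim NMEC outputs can safely drive mid-course program adjustments and provisional P4P payments.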

From a Consumer’s Perspective:

For the electricity consumer, the interest is two-fold: involvement in the initial intervention and the gains afterward. However, there are unintended consequences to these new program models, which require a single intervention to measure effectiveness. For the sake of accurate pre- and post-measurement of savings, a program administrator may forgo allowing customers to participate in more than one energy efficiency program at a time, so as not to cloud the signal of changes in energy usage before and after the intervention. When the consumer’s end goal is the maximum possible energy savings, this can feel a bit unfair.

From a Regional Transmission Operator’s Perspective:

One of the powerful advantages of AMI data over monthly billing data is that it reflects the daily load shapes of individual sites, and can therefore estimate changes in those load shapes as a function of energy efficiency measures. Aggregated energy efficiency measures can potentially deliver reduced demand on specific distribution grids at particular times over daily and yearly cycles. These traits give energy efficiency (when seen as a grid resource) some of the same load-shifting or peak-shaving characteristics often associated with demand response resources. If grid operators can observe that a set of aggregated energy efficiency measures “moved the needle” on demand at a specific location and time of interest on the grid, then the value of energy efficiency as a commodity in markets is reinforced. These observed savings, if attributed to specific energy efficiency interventions, constitute claimable savings for which the implementer can be paid based on stacked benefits associated with avoided congestion.
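Because AMI data is interval data, site-level savings can be summed hour by hour into an aggregate load-shape impact, which is the view a grid operator cares about for a constrained location and time window. A toy aggregation over three hypothetical sites and four late-afternoon hours (all numbers invented):

```python
# Hourly demand reductions (kW) per site, e.g., for hours 16:00-19:00.
# Sites and values are hypothetical.
site_savings = {
    "site_a": [1.0, 2.5, 3.0, 2.0],
    "site_b": [1.0, 1.5, 2.0, 2.0],
    "site_c": [0.5, 1.0, 2.0, 1.0],
}

# Aggregate the reduction for each hour across all sites.
aggregate = [sum(hour_values) for hour_values in zip(*site_savings.values())]
peak_hour = aggregate.index(max(aggregate))

print(aggregate)   # [2.5, 5.0, 7.0, 5.0]
print(f"largest aggregate reduction in hour index {peak_hour}")
```

It is exactly this hour-by-hour, location-specific picture, impossible with monthly bills, that lets efficiency be valued like demand response when it demonstrably shaves a local peak.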

From an Investor’s Perspective:

Currently, investing in infrastructure technology revolves around renewables, with some investors turning to transmission investments, which are considered ‘low-risk’: the rate of return is low, but stability is proven, since utilities provide an essential service with steady demand. This stability has the potential to grow as advances in AMI and smart grid technologies progress. Future pay-for-performance models may drive higher returns and therefore make energy more appealing as a tradable commodity.

Current rates of return are set by state utility commissions, and this regulation makes energy investing attractive. As reporting models and methods advance, implementers and aggregators within the energy market may be pressed to be more transparent with their energy savings reporting or risk policy change that may impact future investment opportunities.

Define your needs.

An important path forward with NMEC tools and methods is to anticipate potential stakeholder uses for the data and articulate what questions the data can answer. Essentially, delineate if the priority is to measure energy savings at the site level, at the efficiency program level, or both. Well-designed NMEC methodology will provide context and insight on whether the methods deployed are, in fact, answering the questions at hand. Thoughtful and informed data-driven approaches that integrate stakeholder perspectives while minimizing risk and maintaining appropriate levels of accuracy are needed to refine the process and the algorithms.

Which methods stakeholders come to embrace will depend on how NMEC methods are allowed to grow and expand through iterative cycles of refinement. With a stable foundation to build from, technology and methodology can advance the process of refining NMEC’s ongoing applications. In the grand scheme, we are early in the growth and implementation of NMEC, where context and perspective are vital. NMEC methods will ideally grow to complement the existing tools used for energy savings evaluation, measurement, and verification. We feel it is important to keep context at the forefront as we move forward. Keeping NMEC methodology on track comes back to understanding what we are trying to measure and for whom. At Opinion Dynamics, we understand the kinds of testing, experimenting, and verification required to advance NMEC development. This type of work is at the core of what we do for our clients as professional analysts and statisticians.

Great Influencers in Energy: Thomas Edison may not have invented the light bulb, but where would it (or we) be without him?





Thomas Edison did not invent the light bulb. He did, however, patent the first commercially viable incandescent light bulb in 1880. Twenty-three different iterations of the light bulb existed prior to Edison’s, and even after filing his initial patent, Edison and his team kept working on improvements. His preoccupation was with building a better filament, and after he and his team experimented on upwards of 6,000 plant varieties (obtained from botanists’ collections around the world), he settled on a carbonized bamboo filament that gave his bulbs up to 1,200 hours of use.

Edison didn’t stop there. He proceeded to develop a host of inventions that would make the use of light bulbs practical. He opened Pearl Street Station, the first commercial power station in the U.S., in the heart of Manhattan’s financial district. On September 4, 1882, he threw the switch and brought light to 25 buildings, including the offices of one of his investors, J.P. Morgan. Pearl Street Station proved that his electrical system worked and effectively launched the beginnings of New York’s electrical grid. Ever the businessman, Edison needed a way to track his customers’ energy usage, and so he invented an electrochemical meter that could measure electricity flow over time. All this from one light bulb.

That first ‘flip of the switch’ started a passion for lighting, and we became masters of our environment. One needs only to look around to see its impact. The way we design buildings, the way we light outdoor spaces, and our ability to be productive longer can all be credited to the invention of the commercial light bulb. But so, too, can the growth of our demand for electricity as we illuminate the dark.

Time and invention wait for no one, but sometimes they wait for technology and economy.

Fluorescent lighting got its start in Europe during the late 1920s, and its development quickly sparked interest in the U.S. At the 1939 New York World’s Fair, American lighting companies showed off their longer-lasting, energy-efficient fluorescent light bulbs. By the mid-1970s, researchers were trying to improve lighting again. Sylvania engineers started working on ways to miniaturize the fluorescent lamp ballast. While they successfully patented their bulb, they couldn’t feasibly produce it. General Electric figured out how to bend fluorescent tubing to create the first compact fluorescent light (CFL), but they too shelved their design because the investment needed to build equipment capable of mass-producing these lights was too great. CFLs made it to market in the 1980s with an average price tag of $25. It wasn’t until the 1990s that CFLs gained a foothold in the marketplace, thanks to improvements in performance, efficiency, lifetime, and price. Today CFLs cost as little as $1.75 per bulb.

Never an industry to sit idle, lighting has recently embraced light-emitting diodes (LEDs). They are one of the fastest-growing advancements in lighting technology and efficiency. They have also gone retro. Modern Edison bulbs are the LED’s answer to vintage reproduction needs. Their unique light-emitting filaments are designed to replicate the warmth and glow of antique bulbs, and their long, pear-shaped glass design harkens back to a time long past. This throwback look and feel has allowed many theaters to become energy efficient while maintaining a degree of authenticity in not only their architecture but also their stage and prop designs. LEDs have also been widely embraced by interior designers and retrophiles alike, proving everything old can be new again. From where Edison started to where we are now, lighting has come a long way. So thank you, Mr. Edison. And happy belated birthday.



2017 IEPEC Conference Presentations and Papers

Opinion Dynamics delivered thought-leading presentations on a wide range of topics at the recent International Energy Program Evaluation Conference (IEPEC) in Baltimore, MD. Download the cutting-edge papers, presentations, and posters below.

Financing or Incentives: Disentangling Attribution


Antje Flanders

Download the paper

Download the presentation

Plugging the Holes in Leakage: Methods for Calculating the Leakage into Upstream Residential Lighting Programs 

Tami Buhr

Download the paper

Download the presentation

The Impact of Survey Incentives and Survey Mode on Response Rates

Download the poster

Planners Do It, Implementers Do it, Even Evaluators Should Do It: Utilizing Segmentation to Assess Program Performance

Brad Kates

Download the paper

Download the presentation

The Keymaker, Opening the Door to Energy Data Possibilities

Kai Zhou

Download the poster


Driving ZNE to Scale

Ellen Steiner

Download the paper

Download the presentation

Changing Demographics: A Research Odyssey

Download the Poster

Duck Hunting with a New Generation of Demand Response in California

By: Hilary Polis

The afternoon of March 11, 2017 was a historic one for energy in the state of California: on that date, 40% of the electricity supplied to the California grid was generated from solar power, a new record high. This comes as the result of total solar capacity in California skyrocketing from less than 1 GW in 2007 to 14 GW in 2016.

While this progress is exciting for the expansion of clean energy, it also presents a growing concern for grid stability. Grid operators have been warning about the future threat of extreme changes in load curves in California for the past few years, and that future is now upon us. In a classic mismatch of supply and demand, the supply of solar power dips when the sun starts to set which coincides with the time most Californians head home from work and engage in electricity-intensive activities. This results in duck-shaped energy load curves, representing a formidable challenge for grid operators to meet the spike in demand after the drop in supply.

Tackling the duck curve requires thought leadership and collaboration from all aspects of the energy industry. The energy efficiency industry has much to offer by developing innovative approaches to balance out new solar on the grid. The recent Spring Symposium of the California Efficiency + Demand Management Council (CEDMAC) focused on the transition to an “advanced” generation of demand response. Traditionally, demand response (DR) has meant shedding load at times of peak demand, but the next generation of demand response will need to be multidimensional to meet a suite of broadly diversifying needs. However, to turn DR into a grid solution, it is vital to understand how participants react to DR signals and interact with new DR-enabled technologies. Utilities are changing their models to identify and deploy programs in support of these needs.

At Opinion Dynamics, we help our clients not only quantify the energy and demand impacts of these emerging programs, but also utilize segmentation and propensity scoring techniques to identify the customers whose energy usage can be most easily shifted to help support the grid. The next generation of demand response initiatives includes:

Flexible DR: In critical times, we will need flexible DR resources that grid operators can deploy in less than five minutes. We support this by quantifying savings associated with mass-media DR events and by understanding customer awareness of and attitudes toward these events. This information gives program implementers insight into the ability to count on DR as a flexible and fast-acting resource during supply shortages.

Geographically targeted DR: The uneven distribution of solar panels in California places more stress on the grid in some areas than in others. The next generation of DR will provide more value if it is localized, with the ability to target stressed areas. Our research supports efforts in geographically capacity-constrained environments: we develop research designs and assess incremental reductions in demand to support a more balanced grid.

Multi-purpose DR: We will need to be so confident in our DR resources that they can pull double duty, as a dispatchable resource on the supply side and a source of load reduction on the demand side. In the space of smart thermostats, our team has examined customer engagement patterns, particularly opt-out behavior, and their implications for overall event performance, which helps to provide confidence around using DR as a multipurpose grid resource on the supply side.

Transformational DR: Most importantly, the DR of the future will need to provide the correct price signals to shift our entire electric demand curve to complement the rest of our electric system. We conduct studies assessing the interplay of emerging technologies with dynamic pricing rates, helping utilities identify who benefits and who loses when dynamic pricing goes into effect. These studies offer strategies for educating customers about behavioral practices that can transform our demand for electricity and support our new energy future.
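As a rough sketch of the propensity-scoring approach described above, the toy model below ranks customers by a logistic score. The features, coefficients, and customers are invented for illustration and do not represent an actual Opinion Dynamics model.

```python
import math

def propensity_score(features, weights, bias):
    """Logistic score in (0, 1): likelihood a customer shifts load if targeted."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical fitted coefficients for three illustrative features:
# [has smart thermostat, share of usage in peak hours, count of past DR opt-outs]
weights = [1.2, 2.5, -0.8]
bias = -1.0

customers = {
    "A": [1, 0.45, 0],  # enabling tech, heavy peak use, never opts out
    "B": [0, 0.20, 3],  # no enabling tech, light peak use, frequent opt-outs
}
scores = {cid: propensity_score(f, weights, bias) for cid, f in customers.items()}
targets = sorted(scores, key=scores.get, reverse=True)  # rank for DR targeting
```

In practice the coefficients would be fit to program data (e.g., with a regularized logistic regression) and combined with segmentation, but the ranking step looks much like this.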

Just as utility customers helped to drive the California solar revolution, they also have the capability to turn DR into a grid solution. We’re excited to be providing our expertise in this crucial area of ensuring the world’s sustainable energy future.

For more information, please contact: Hilary Polis at hpolis@opiniondynamics.com or Olivia Patterson at opatterson@opiniondynamics.com

Customer Journey Mapping: Take a Walk in Your Customers’ Shoes

By Jennifer Watters & Emma DeCotis

Our industry has started to embrace customer journey mapping as another tool in the process evaluation toolbox. Energy efficiency programs can be complex, often with multiple components, goals, and stakeholders, each with a unique set of needs and desires. Because of this, program implementers sometimes struggle to develop programs that are customer-centric.

Customer journey mapping is a strategy, long used in other industries, that requires those selling a product or service to examine their services from the perspective of the customer. Doing so allows stakeholders to imagine their customers’ delights as well as frustrations with each company or product interaction. By creating a journey map, companies learn more about the customer experience, which helps to ensure better customer satisfaction and engagement.

The benefits of enhanced customer experience vary depending on the program design. For programs that involve multiple steps, like a home energy assessment program, customers are more likely to continue further down the path to higher energy savings if they are engaged and face fewer barriers. For programs that rely on persistent engagement to achieve ongoing energy savings, such as home energy reports, customer satisfaction is paramount to achieving long-term savings. As an added benefit, engaged customers may share their positive experience with others, generating word-of-mouth promotion and new participants for a program.

To illustrate the powerful content contained in a journey map, we provide an example that traces the customer’s experience with a home energy assessment program, a common energy efficiency program that involves multiple steps and customer touchpoints. In the example, we follow the customer’s journey from first learning about the program through completion of an assessment-recommended project. The journey map contains the following components:

Stages: We organize the journey map by each participation stage the customer goes through from initial through final contact with the program. For each stage listed at the top of each column in the journey map, we provide key details about how the program engages with the customer, as well as what the customer is doing, thinking and feeling.

Program Interactions: For each stage, we list the touchpoints between the program and the customer in the gray cells. Notice that at different steps in their journey, the customer interacts with different stakeholders and engagement channels, including a program representative, outreach materials such as the program website, an energy specialist, and a contractor.

What the Customer is Doing, Thinking, and Feeling: We provide details about the customer’s actions and cognitive experience at each stage in the light blue cells. The doing section identifies the actions the customer takes, while the thinking and feeling sections list possible customer concerns, perceptions, attitudes, and beliefs associated with each program interaction.
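The stages and cells described above map naturally onto a simple tabular data structure. The following minimal sketch, with hypothetical stage names and entries, shows one way an evaluator might encode a journey map for analysis.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """One column of the journey map."""
    name: str
    interactions: list       # touchpoints between program and customer (gray cells)
    doing: str               # customer actions (light blue cells)
    thinking_feeling: str    # concerns, perceptions, attitudes (light blue cells)

# Hypothetical two-stage excerpt of a home energy assessment journey.
journey = [
    Stage("Awareness", ["utility bill insert", "program website"],
          doing="reads about the assessment offer",
          thinking_feeling="Is this worth my time?"),
    Stage("Scheduling", ["call center"],
          doing="books an assessment appointment",
          thinking_feeling="Why is there only one way to schedule?"),
]

# A quick scan for friction: flag stages whose notes read as open questions.
pain_points = [s.name for s in journey if "?" in s.thinking_feeling]
```

Encoding the map this way makes it easy to cross-reference pain points against satisfaction survey data stage by stage.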

The customer journey map helps identify the places in the delivery and customer engagement process where the program design team should add interventions to lessen the customer burden and engage the customer in a more positive manner. For example, in developing this journey map, we uncovered that when scheduling a home energy assessment, customers are frustrated that assessments aren’t offered at convenient times, and that the program only offers one channel to schedule an assessment. Knowing that this is a stress point for the customer, program designers can weigh whether the additional cost of adding staff to accommodate night and weekend visits is worth it to allow for more customer convenience and potentially more program uptake.

Market research is an important tool for developing a customer journey map, and quality data provides the foundation. A common approach to constructing a customer journey map uses anecdotal inputs from stakeholders, based on their interactions with customers, to develop a hypothetical mapping of the customer experience. While this approach is a valuable starting point, it misses the full customer perspective and provides only part of the picture. A better approach is to combine stakeholder perspectives with customer research. Qualitative research methods such as ethnographic studies, in-depth interviews, and focus groups are particularly valuable for exploring what the customer is thinking, feeling, and doing. Quantitative methods, such as structured telephone or online interviews, can also be useful for confirming qualitative findings among the larger program target population, as well as for putting together customer profiles (a journey map typically focuses on a single customer profile).

Journey maps have enabled our clients to gain insightful knowledge about the intricacies of their customers’ experiences. We find that many program evaluations already use methods, such as program theory and logic models, process mapping, or customer satisfaction studies, that serve as good starting points for journey maps. Taking the research one step further by collecting a few additional insights and turning them into a journey map will benefit both the program and the customer.

Interested in learning more about journey mapping and its applications? Please contact Tami Buhr at tbuhr@opiniondynamics.com.