Methods that weight efficacy, toxicity and cost improve understanding but provide no easy answers
“Green chemistry” laws, adopted by a handful of states and the European Union, aim to reduce the use of toxic chemicals in consumer products. These laws require manufacturers to determine whether there are less harmful alternatives and evaluate possible substitutes to make sure that they aren’t any more toxic than the substances they’re replacing.
This process, called an alternatives assessment, isn’t simple, though. It has to balance the health and environmental benefits of the new material against the cost and technical feasibility of making the change. It relies on often-incomplete data and on comparing quantifiable measures like costs with harder-to-measure factors like health effects. It also requires value judgments about possible trade-offs: Is technical performance more important than environmental impact, for instance?
Typically this is done holistically, using rules of thumb and general principles to compare alternatives. Given the challenges, the problem is ripe for a more formal approach called multi-criteria decision analysis, or MCDA. A product of operations research, MCDA is a structured, systematic way to weigh the complicated factors that go into an alternatives assessment. What’s unknown is whether these more formal methods, which are sometimes supported by specialized decision-making software, are more useful in alternatives assessment than less-structured approaches.
Christian Beaudrie, with British Columbia’s Technical Safety BC, UCLA Anderson’s Charles J. Corbett, Gradient Corp.’s Thomas A. Lewandowski, UCLA Law’s Timothy Malloy and Xiaoying Zhou, of the California Department of Toxic Substances Control, compare the methods in a recent paper in Integrated Environmental Assessment and Management. The results were mixed: The more formal methods helped participants understand the problem better but did not consistently lead to higher satisfaction with the method or the outcome.
The researchers brought together experts from government, industry and nongovernmental organizations to compare informal and structured approaches in a mock alternatives-assessment exercise. In surveys after the exercise, participants said the MCDA-based methods had several advantages: They made the process more transparent, gave a better understanding of the trade-offs involved and made it easier to communicate exactly how the decision was reached.
However, participants said their organizations weren’t very likely to use MCDA tools because of the cost and time involved.
So while MCDA did what it was supposed to do, the authors conclude that “much should be done to increase the level of comfort and accessibility of decision making tools.” What’s needed are MCDA support tools designed specifically for alternatives assessment, more training in their use and case studies describing their application.
In a two-day workshop conducted in 2017 at the UCLA School of Law, the researchers tested two of the many methods that fall under the MCDA umbrella: one known as “multi-attribute value theory,” or MAVT, and another called structured decision making.
A dozen participants played the role of a paint manufacturer required to look for alternatives to the copper-based marine paints used to prevent barnacles and zebra mussels from attaching to the bottoms of boats. Copper, a biocide, can leach into lakes and rivers, harming the environment.
Before the workshop, the group was asked to evaluate a series of alternatives without the aid of a formal decision-support tool, using any approach they chose. Although they took a variety of approaches, eight of the 12 experts settled on the same option, and a ninth ranked it second. That option scored well on technical performance and cost but raised concerns about toxicity.
On the first day of the workshop, the group was divided into two cohorts and given a second set of alternatives to evaluate using software based on MAVT. The experts rated each alternative on various criteria, such as health and environmental effects, technical performance and cost, and decided how much weight to give each factor. The software then ranked the results. In this analysis, a low score on one factor could be offset by a high one on another: an alternative that scored poorly on ecological impacts, say, could still rank high if it did especially well on technical feasibility.
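The arithmetic behind that ranking is essentially a weighted sum. The sketch below illustrates the idea; the alternatives, scores and weights are invented for illustration, not the workshop’s actual inputs or software.

```python
# Minimal sketch of MAVT-style weighted-sum ranking. The alternatives,
# criterion scores (0-100, higher is better) and weights are invented
# for illustration; they are not the workshop's actual data.

alternatives = {
    "Copper paint":   {"health": 40, "ecology": 20, "performance": 90, "cost": 85},
    "Silicone paint": {"health": 80, "ecology": 75, "performance": 60, "cost": 50},
    "Zinc paint":     {"health": 55, "ecology": 45, "performance": 70, "cost": 70},
}

# Weights express how much each criterion matters; here they sum to 1.
weights = {"health": 0.30, "ecology": 0.30, "performance": 0.25, "cost": 0.15}

def mavt_score(scores):
    # Weighted sum: a low score on one criterion can be offset by a high
    # score on another, which is what makes the method "compensatory."
    return sum(weights[c] * s for c, s in scores.items())

for name in sorted(alternatives, key=lambda a: mavt_score(alternatives[a]), reverse=True):
    print(f"{name}: {mavt_score(alternatives[name]):.1f}")
```

With these made-up numbers, the silicone paint ranks first despite middling technical performance, because its strong health and ecology scores carry more combined weight.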
One group worked individually, each member separately adjusting the various weights and scores. The other group worked through the analysis collectively and together selected a favored option. While the group that worked individually agreed on the top-scoring option, the group that worked together rejected the software’s choice and picked another. Participants were uneasy with the way the system aggregated each participant’s weightings and allowed strong performance on one factor to compensate for a poor showing on another.
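One plausible source of that unease is how a tool folds many sets of weights into one. A common approach, and a purely hypothetical stand-in for whatever the workshop software actually did, is to average each participant’s weights before scoring:

```python
# Hypothetical sketch of group aggregation: average each participant's
# criterion weights, then rank with the combined set. Real MCDA tools may
# aggregate differently; this only shows why a group result can match
# no individual's priorities.

participant_weights = [
    {"health": 0.50, "ecology": 0.30, "performance": 0.10, "cost": 0.10},
    {"health": 0.10, "ecology": 0.10, "performance": 0.50, "cost": 0.30},
    {"health": 0.25, "ecology": 0.25, "performance": 0.25, "cost": 0.25},
]

group_weights = {
    c: sum(w[c] for w in participant_weights) / len(participant_weights)
    for c in participant_weights[0]
}
print(group_weights)
# The participant who put 0.50 on health ends up with a group weight of
# about 0.28, so the aggregate ranking may favor an option no one preferred.
```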
On the last day, the whole group used structured decision making on a third case study. This method compares alternatives in pairs, ranking each pair by which member performs better on the various measures. Weak alternatives are discarded, while criteria that don’t vary much across the remaining options are ignored. At the end, the surviving choices are scored and ranked using MAVT, and the group voted on the preferred alternative.
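In code, that screening amounts to two filters: drop any alternative that another one beats or ties on every measure, then ignore measures that barely differ across what’s left. A hypothetical sketch with invented data and an arbitrary threshold:

```python
# Hypothetical sketch of the screening steps in structured decision making:
# (1) discard dominated alternatives, (2) ignore criteria that barely vary
# across the survivors. Data and the 5-point threshold are invented.

alternatives = {
    "A": {"health": 40, "ecology": 20, "performance": 90, "cost": 85},
    "B": {"health": 80, "ecology": 75, "performance": 60, "cost": 82},
    "C": {"health": 35, "ecology": 20, "performance": 85, "cost": 80},  # beaten by A everywhere
}

def dominates(x, y):
    # x dominates y if it is at least as good on every criterion
    # and strictly better on at least one.
    return all(x[c] >= y[c] for c in x) and any(x[c] > y[c] for c in x)

survivors = {
    name: scores for name, scores in alternatives.items()
    if not any(dominates(other, scores)
               for o, other in alternatives.items() if o != name)
}

def spread(criterion):
    values = [s[criterion] for s in survivors.values()]
    return max(values) - min(values)

# Criteria whose scores barely differ can't change the ranking, so drop them.
active_criteria = [c for c in next(iter(survivors.values())) if spread(c) > 5]

print(sorted(survivors))  # ['A', 'B'] -- these go on to MAVT scoring and a vote
print(active_criteria)    # cost (spread of 3) is ignored
```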
Overall, participants were nearly equally satisfied with each of the methods, with the exception of those who used the MAVT software as a group. They also said the formal approaches substantially increased their understanding of the trade-offs among the alternatives. Still, they said their organizations were more likely to stick with the unaided approach than to try any of the formal methods.
Because alternatives assessment isn’t designed to come up with a single best answer — each choice involves trade-offs — the authors conclude that they cannot identify one approach as ‘better’ than the other. “AA practitioners might need to explore different weighting methods and decision approaches in any given situation,” they wrote.
For instance, for someone performing an alternatives assessment inside a company, the MAVT method offers simplicity and the ability to see how trade-offs respond to the weight each factor is given. A trade association, on the other hand, might find that structured decision making delivers the transparency and communication necessary to inform its members of the value of its decision.
In a more complex scenario, perhaps a government agency considering whether to order the phaseout of toxic chemicals, a multifaceted approach might be needed. An individual analyst might conduct an MAVT exercise, based on inputs from all the various stakeholders. Then, since its decision — whatever it might be — is likely to be hotly contested, the agency would bring in internal and outside experts for a group-based structured decision making session.
Featured Faculty

- Charles J. Corbett, Professor of Operations Management and Sustainability; IBM Chair in Management
- Timothy Malloy, Professor of Law; Faculty Director, UCLA Sustainable Technology and Policy Program; Frank G. Wells Endowed Chair in Environmental Law
About the Research
Beaudrie, C., Corbett, C., Lewandowski, T., Malloy, T., & Zhou, X. Evaluating the Application of Decision Analysis Methods in Simulated Alternatives Assessment Case Studies: Potential Benefits and Challenges of Using MCDA. Integrated Environmental Assessment and Management.