
Suite of Approaches That Can Be Applied to Uncover Possible Harm

May 1, 2024

When developers or companies release technology, questions of harm and safety often arise. Organizations tend to reply, “Well, we can't anticipate everything.”  

Which is really saying, “Well, we can't anticipate everything, so we are not going to try at all.”  

The argument is often that so many factors go into determining the impact of a specific technology, and there are so many possibilities, that exploring them all is impossible, so no one should be on the hook for missing something. 

The problem is that this argument justifies not exploring possible harms at all and pushing applications out without any real analysis of impact. Why bother looking if you cannot find everything? And why blame someone for not anticipating the problems that arise?  

In response, I’d like to propose a suite of approaches that can be applied to uncover possible harm. These techniques draw on history, assumptions, and practitioner ideation to tease out possible problems and harms. They will not uncover every possibility, but they will, at least, make it harder for organizations to just say, “Who could have guessed it?” when bad things happen. 

The first technique is simply to look to antecedents for guidance. Resources like the AI Incident Database (AIID), which archives harms related to AI, can be used to uncover problems with past applications and provide guidance in shaping new ones. If your business model is based upon optimizing for engagement, seeing that similar models have resulted in digital addiction can help guide your work. The same can be said for recommendation systems, image processing, data-driven diagnostic systems, and so on. Which is to say, pay attention to the idea that “Those who cannot remember the past are condemned to repeat it.” 

There is a push underway to use the AI Incident Database to build out design patterns for developers, case studies for businesses, and even policy statements based upon existing problems that have been identified.  
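For teams that want to make this antecedent search concrete, here is a minimal sketch of what it might look like in code, assuming you are working from an exported snapshot of AIID incident records. The file name (incidents.csv) and the column names (title, description) are illustrative assumptions, not the actual export format; adjust them to whatever structure the data you download actually has.

```python
import csv

# Keywords describing the application you are building (illustrative only).
KEYWORDS = ["engagement", "recommendation", "addiction"]

def find_antecedents(path, keywords):
    """Return incident records whose text mentions any of the keywords.

    Assumes a CSV export with 'title' and 'description' columns; rename
    these to match the fields in the snapshot you are actually using.
    """
    matches = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            text = f"{row.get('title', '')} {row.get('description', '')}".lower()
            if any(keyword in text for keyword in keywords):
                matches.append(row)
    return matches

if __name__ == "__main__":
    # Print the titles of past incidents that sound like your application.
    for incident in find_antecedents("incidents.csv", KEYWORDS):
        print(incident.get("title", "(untitled)"))
```

Even a crude keyword scan like this surfaces a reading list of past failures that resemble the system you are about to build, which is the whole point of looking for antecedents.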

Of course, you may have something so new that there are no useful antecedents. 

The second approach is to consider modifying your own assumptions about your product and the business plan behind it. For example, a company such as Facebook might consider what would happen if, instead of relying on an engagement model, its business were subscription based. Or what would the impact be if X (née Twitter) were a government service? These are all systems for which you can play with the underlying assumptions, change them, and see what the impact is. 

This isn’t necessarily about changing the business plan; it’s about seeing what impact your assumptions actually have. If Facebook turned into a subscription service, then its goal would not be to keep you on the site all the time. Instead, its goal would be to give you the product that you want and have you move on. Because the goal would no longer be engagement, the model would no longer be one in which users are prone to addiction. You can do the same thing for any piece of technology (recommendation engines, decision support systems, and so on).  

The third approach is scenario building, an approach most recently explored by Nick Diakopoulos. Instead of having a company, developer, or product manager try to hallucinate harms, potential users can be enlisted to build out scenarios of both positive and negative outcomes. For example, if you're building tools for journalists, ask journalists to construct scenarios of positive and negative impact. Familiarity with the domain, its goals, and its values puts practitioners in a powerful position to predict possible harm. This technique expands our ability to envision impacts and provides grist for the mill of our own thinking about harm.   

This is not to say that everything you anticipate will actually go wrong. It's not to say that, in looking at existing problems, you will always find analogs to what you're working on. It's not to say that examining how your assumptions shape things will always provide insight. And it's not to say that scenario building will give you everything you need.  

But these are tools that let the people producing technologies see what they themselves might not otherwise be able to anticipate.  

It takes away an excuse that I no longer want to hear: “We just didn’t see it coming.”  

Instead, what we could hear is, “We looked to the past, our own assumptions, and the experience of the people we are building for to try to anticipate as much as we could.”  

Not as catchy, but a step in the right direction. 

Kristian Hammond
Bill and Cathy Osborn Professor of Computer Science
Director of the Center for Advancing Safety of Machine Intelligence (CASMI)
Director of the Master of Science in Artificial Intelligence (MSAI) Program
