> A basic algorithm with limited data has been shown to be 80 to 90 percent accurate when predicting whether someone will attempt suicide within the next two years, and 92 percent accurate in predicting whether someone will attempt suicide within the next week.
>
> Ben Hunt, Epsilon Theory
Wait, what? If we are that good at predicting something as complex as suicide, then making lending decisions should be a snap, right? Even the "wisdom of the crowd" has shown that a thousand non-experts can make better decisions than the most sophisticated experts in a field. Multiple experiments have been done around this concept. For example, using the online game Foldit, more than 57,000 players helped scientists at the University of Washington solve a long-standing molecular biology problem within three weeks.
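The statistical intuition behind the wisdom of the crowd is that independent errors tend to cancel when estimates are averaged. A minimal sketch, using entirely made-up numbers (a hypothetical true value and noise level, not data from any real experiment):

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 1198  # assumed true quantity, e.g. the weight of an ox in pounds
NOISE = 150        # assumed spread of individual guesses

# A thousand "non-experts", each guessing with substantial individual error.
guesses = [TRUE_VALUE + random.gauss(0, NOISE) for _ in range(1000)]

crowd_estimate = statistics.mean(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)
typical_individual_error = statistics.mean(abs(g - TRUE_VALUE) for g in guesses)

print(f"crowd error: {crowd_error:.1f}")
print(f"typical individual error: {typical_individual_error:.1f}")
```

The crowd's average lands far closer to the true value than a typical individual does, because the individual noise terms largely offset one another. The caveat, of course, is independence: if the crowd shares a common bias, averaging does nothing to remove it.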
This rapid rise of the machines is all based on humans and our biases in decision-making. Some examples include:
- Overconfidence: We are too confident in our own abilities.
- Confirmation bias: We tend to listen to only the information that proves our point.
- Clustering illusion: We see patterns in random events, like when the number 7 turns up five times in a row at the craps table and we conclude the table is "hot."
- Recency effect: We weigh the latest information more heavily than older data.
- Ostrich effect: We bury or ignore negative information.
- Information bias: More information is not necessarily better information.
But does this make machines automatically better? Aren’t humans designing these algorithms? Yes, there are artificial intelligence bias issues in the algorithms we design to try to make us more efficient and effective. There are four areas that we should be cognizant of:
1. Algorithm bias: For example, your model may be motivated by profit margin and could sway loans toward certain individuals or businesses. In medicine, some patients are more profitable than others, often depending on insurance, so a profit-driven model may conflict with patient health. Bias can also be hidden or designed in: design bias may be intentional, when the designers' objectives conflict with societal values or norms.
2. Data bias: Algorithms are only as good as the data they learn from, and bias can be embedded in that data. For example, some organizations are trying to predict dilution in order to finance invoices. This is by no means a small feat, especially when the available data sets come almost entirely from a benign credit environment.
3. Interpretation of what the algorithms mean: Algorithms can be a black box. The designers may understand the model's limitations and how to interpret its output, but lenders and relationship managers often do not. This is where you run into what I consider the most important errors a model can make: Type I and Type II errors, also called false positives and false negatives, respectively.
4. Accountability: Who is responsible for the decisions the model makes? While AI may produce better outcomes, it can also reduce autonomy.
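The Type I and Type II errors in point 3 can be made concrete with a toy lending example. All of the outcomes and predictions below are hypothetical, chosen only to illustrate the counting:

```python
# Hypothetical lending outcomes: 1 = borrower defaulted, 0 = repaid.
actual =    [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]
# The model's predicted defaults for the same ten applicants.
predicted = [0, 1, 1, 0, 0, 0, 1, 1, 0, 0]

# Type I error (false positive): the model flags a default that never
# happens, i.e. a good borrower is declined and revenue is lost.
false_positives = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)

# Type II error (false negative): the model misses a real default,
# i.e. a bad loan is approved and principal is lost.
false_negatives = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)

print(false_positives, false_negatives)  # 2 good borrowers declined, 1 default missed
```

The point for lenders and relationship managers is that the two error types carry very different costs: a false positive forgoes margin, while a false negative loses principal. A model tuned without understanding that trade-off can look accurate on paper and still be expensive in practice.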
So in building AI applications, it's important to bear the above points in mind. No doubt, there are big advantages with AI, as it can reduce inherent individual biases. With lending applications, the ability to get smarter as you look at more data sets and thereby reduce expected losses is quite attractive, but we must bear in mind the caveat that most models have not seen a full business credit cycle. For many models built on recent data, the real test will come when this long benign credit cycle ends.
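The benign-cycle caveat can be sketched with simple arithmetic. All of the rates and dollar figures below are illustrative assumptions, not real portfolio data:

```python
# A model calibrated on a long benign credit cycle learns a low default rate.
benign_default_rate = 0.01    # assumed rate observed in the training data
stressed_default_rate = 0.06  # assumed rate when the cycle turns

portfolio_size = 10_000       # number of loans
loss_per_default = 50_000     # assumed loss given default, in dollars

# Expected losses under the model's learned rate versus the downturn reality.
expected_loss_model = portfolio_size * benign_default_rate * loss_per_default
actual_loss = portfolio_size * stressed_default_rate * loss_per_default

print(f"model expects ${expected_loss_model:,.0f} in losses")
print(f"a downturn produces ${actual_loss:,.0f}, "
      f"{actual_loss / expected_loss_model:.0f}x the estimate")
```

Nothing in the model is "wrong" in a narrow sense; it faithfully reflects the only world its data ever showed it. That is exactly the data-bias problem: the training window, not the algorithm, determines what the model can know.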
So be careful when new solutions tout the use of AI. Ask yourself: where are the biases in the model and the data sets?