Watching AI Software for Bias

An ongoing problem with machine learning is the likelihood of embedded bias.

Artificial intelligence has become virtually synonymous with generative AI in public discussion, but AI is a much bigger field. The technique likely most used in CRE, as in other industries, is machine learning.

As IBM describes it: “Through the use of statistical methods, algorithms are trained to make classifications or predictions, and to uncover key insights in data mining projects. These insights subsequently drive decision making within applications and businesses, ideally impacting key growth metrics.”

The techniques are widespread, appearing in engineering, manufacturing, image recognition, autonomous vehicles, automatic translation, data analysis, and many other areas.

When you hear that proptech software uses “AI,” machine learning is probably the technique at work, most often for recognizing patterns and making classifications. If you use software to pull important data out of spreadsheets or documents, or to identify properties that have the characteristics you seek in a potential investment, you are almost certainly relying on machine learning.
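To make that concrete, here is a minimal sketch, purely illustrative, of the kind of classification such tools perform: score a new listing based on the deals a firm pursued in the past. The column names and figures are assumptions invented for the example, not any vendor’s actual model.

```python
# Hypothetical sketch: train on past deal decisions, then score a new listing.
# All column names and numbers are invented for illustration.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Past deals the firm pursued (1) or passed on (0).
past_deals = pd.DataFrame({
    "cap_rate":  [5.2, 6.8, 4.9, 7.1, 5.5, 6.3],
    "occupancy": [0.93, 0.71, 0.96, 0.65, 0.88, 0.80],
    "sq_ft":     [42_000, 15_000, 60_000, 12_000, 38_000, 20_000],
    "pursued":   [1, 0, 1, 0, 1, 0],
})

features = ["cap_rate", "occupancy", "sq_ft"]
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(past_deals[features], past_deals["pursued"])

# Score a new listing exactly the way the past ones were scored.
new_listing = pd.DataFrame(
    {"cap_rate": [5.8], "occupancy": [0.90], "sq_ft": [35_000]}
)
print(model.predict_proba(new_listing[features])[0, 1])  # probability of "pursue"
```

The point is not the model itself but that its notion of a “good” property is inherited entirely from the past decisions it was trained on.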

All well and good, except for one aspect: unlike in investing, here the past really is a guide to the future. The initial training data has enormous implications for how the system will work.

Here’s an example passed on by a marketing expert. A cable company decides to focus its messages and attention to improve its outreach to customers, so it looks for its most frequent and engaged users. Unfortunately, the heaviest consumers of the cable company’s programming are customers who are out of work and can’t afford additional services.

That may seem frivolous, but the issue it illustrates is called bias. Just as people can have cognitive biases, so can machine learning systems, because they are the products of people and are trained on decisions that people have made in the past.

A famous example comes from Amazon. As Reuters reported a few years ago, the company set up a system to review resumes for potential hires, in the hope that the software would process applications and suggest the best possible candidates.

Unfortunately, the software didn’t make decisions in a gender-neutral way. The problem? The training materials were the resumes the company had received over the years, along with the hiring decisions made on them. A gender bias problem already existed in hiring, and now it was built into the software. The project was ultimately scrapped.

This isn’t the only example. In late 2019, the National Institute of Standards and Technology released the results of a face recognition software test in which “false positive rates often vary by factors of 10 to beyond 100 times” between faces of different racial groups. Whatever the cause, whether something in the algorithms, the training samples, the hardware, or some other factor, the result was repeated racial bias.
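The metric behind that finding is straightforward to compute when ground truth is available. Below is a minimal sketch, using entirely made-up data, of breaking a matcher’s false positive rate out by group and comparing the rates, the kind of ratio NIST reported.

```python
# Hypothetical sketch: false positive rate of a face matcher, broken out by group.
# The data is invented purely to show the calculation.
import pandas as pd

results = pd.DataFrame({
    "group":       ["A"] * 5 + ["B"] * 5,
    "same_person": [0, 0, 0, 0, 1,  0, 0, 0, 0, 1],  # ground truth
    "matched":     [0, 0, 0, 1, 1,  1, 1, 0, 1, 1],  # what the software said
})

# False positive rate: pairs declared a match when they were NOT the same person.
non_matches = results[results["same_person"] == 0]
fpr_by_group = non_matches.groupby("group")["matched"].mean()

print(fpr_by_group)                              # per-group false positive rates
print(fpr_by_group.max() / fpr_by_group.min())   # the ratio between groups
```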

This is all to show that, even with the best of intentions, things can go badly wrong. It would be wishful thinking to assume that CRE software employing machine learning couldn’t also exhibit some form of bias.

A complicating factor is that CRE firms typically use software written by third parties, which means you won’t have access to details that might prove important in diagnosing a problem.

The thing to do is experiment. Track results, build your own database of examples, and then see whether they show patterns of bias, as in the sketch below. Biased results won’t just be embarrassing; they could expose companies to legal jeopardy.
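As a starting point, a first-pass check on your own log of decisions can be as simple as comparing outcome rates across the groups you care about. The sketch below uses invented data, and the 80% “four-fifths” threshold is a common screening heuristic assumed here for illustration, not a legal test.

```python
# Hypothetical sketch: audit a log of the software's recommendations for
# disparate outcome rates across groups. Data and threshold are illustrative.
import pandas as pd

# In practice, append one row per recommendation the software makes, along
# with the group you want to check (e.g., neighborhood or applicant category).
log = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

selection_rate = log.groupby("group")["approved"].mean()
ratio = selection_rate / selection_rate.max()

print(selection_rate)
flagged = ratio[ratio < 0.8]   # the "four-fifths" screening heuristic
if not flagged.empty:
    print("Possible disparate impact; investigate:", list(flagged.index))
```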