Products and services that rely on machine learning – computer programs that constantly absorb new data and adapt their decisions in response – don’t always make ethical or accurate choices.
Sometimes they cause investment losses, biased hiring decisions, or car accidents, for instance. And as such offerings proliferate across markets, the companies creating them face major new risks. Executives need to understand and mitigate the technology’s potential downsides.
Are there conditions under which machine learning should not be allowed to make decisions, and if so, what are they?
Machine learning can go wrong in a number of ways. Because the systems make decisions based on probabilities, some errors are always possible.
Their environments may evolve in unanticipated ways, creating disconnects between the data they were trained with and the data they’re currently fed. And their complexity can make it hard to determine whether or why they made a mistake.
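That disconnect between training data and live data is often called data drift. As a minimal illustration only (the check, thresholds, and numbers below are hypothetical, not any company's actual safeguard), a system could flag drift by comparing recent inputs against the training distribution:

```python
import statistics

def drift_alert(train_values, live_values, z_threshold=3.0):
    """Flag when live feature values drift far from the training data.

    Compares the mean of recent live data against the training mean,
    measured in training standard deviations (a crude z-score check).
    """
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    live_mu = statistics.mean(live_values)
    z = abs(live_mu - mu) / sigma
    return z > z_threshold

# Training data centered near 100; live data has shifted upward.
train = [98, 101, 100, 99, 102, 100, 97, 103, 100, 100]
live = [130, 128, 131, 129, 132]
print(drift_alert(train, live))  # the shifted live data trips the alert
```

Real products use richer statistical tests, but the principle is the same: detect when the world the model sees no longer resembles the world it learned from.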
A key question executives must answer is whether it’s better to allow smart offerings to continuously evolve or to “lock” their algorithms and periodically update them.
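The lock-versus-evolve trade-off can be shown with a toy one-parameter model (the learning rule and numbers here are purely illustrative, not a real product's algorithm). A locked model behaves identically until its next release; a continuously learning one shifts with every example it sees:

```python
class LockedModel:
    """A 'locked' model: its parameter is frozen between releases."""
    def __init__(self, weight):
        self.weight = weight

    def predict(self, x):
        return self.weight * x

    def update(self, x, y):
        pass  # locked: new data is logged but does not change behavior


class OnlineModel(LockedModel):
    """A continuously learning model: each example nudges the parameter."""
    def update(self, x, y, lr=0.01):
        error = self.predict(x) - y
        self.weight -= lr * 2 * error * x  # one gradient step on squared error


locked, online = LockedModel(2.0), OnlineModel(2.0)
for x, y in [(1.0, 3.0), (2.0, 6.0), (1.0, 3.0)]:  # world now follows y = 3x
    locked.update(x, y)
    online.update(x, y)
print(locked.weight)  # unchanged: stable but stale
print(online.weight)  # drifting toward 3.0: adaptive but less predictable
```

The evolving model tracks a changing environment, but its behavior at any moment is harder to certify; the locked model is auditable but can quietly fall out of date. That is precisely the tension executives must weigh.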
In addition, every offering will need to be appropriately tested before and after rollout and regularly monitored to make sure it’s performing as intended.
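Such post-rollout monitoring can be sketched as a simple check of live performance against a baseline established in pre-rollout testing (the metric and threshold below are hypothetical):

```python
def monitor(predictions, actuals, baseline_accuracy=0.90):
    """Return (accuracy, ok) for a batch of live predictions.

    If accuracy falls below the baseline set during pre-rollout
    testing, the offering should be flagged for human review.
    """
    correct = sum(p == a for p, a in zip(predictions, actuals))
    accuracy = correct / len(actuals)
    return accuracy, accuracy >= baseline_accuracy

acc, ok = monitor([1, 0, 1, 1, 0, 1, 1, 1, 0, 1],
                  [1, 0, 1, 0, 0, 1, 1, 1, 0, 1])
print(f"accuracy={acc:.2f}, within baseline: {ok}")
```

In practice the hard part is obtaining trustworthy ground-truth labels quickly enough for the comparison to matter.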