You Can't Say AI Benefits Outweigh Risk Without Some Algorithmic Transparency

I am increasingly hearing the phrase "the benefits outweigh the risks" applied when talking about AI, machine learning, and the growing number of algorithmic decisions being made across our digital world. This seems to be the new default argument of AI and machine learning advocates looking to tip the scales in favor of their technology and away from the human side of the discussion.

This argument shows up in discussions about everything from AI in self-driving cars to policing algorithms making decisions on the street or in a court of law. I'm not opposed to the argument if it is truly the case, but it seems to be something you can claim without ever providing the data behind the decision, relying instead on a general lack of faith in humans being able to make decisions consistently.

This is why I wrote about the importance of data sharing in industries where algorithms are making an impact, and why I advocate for providing API access so that journalists, analysts, and regulators can actually follow up on the claims being made. Allowing 3rd parties to weigh the pros and cons enables a collective, fairer, and more balanced determination of whether or not the benefits truly do outweigh the risks.

I'm not saying that folks who make these claims are being dishonest, but in my experience in the API space, most folks blindly believe in tech and their algorithms, seem to have almost no faith in humans, and are more than happy to make false claims in the service of the algorithm. This is why I have to say that you can't ever tell me the benefits outweigh the risks without some algorithmic transparency involved--it just won't mean anything to me.