Every now and then there is quite an outcry over something a company has done. Cambridge Analytica is one such scandal; it has been well documented, and there's even a movie about it: The Great Hack.
No, this is not a political beat-up on who used the output from the models; rather, it is to point out that long before any models had been built, before data was even collected, someone decided it would be a good idea to collect that data without people's explicit consent. More recently, Clearview AI has come under fire - Clearview AI sells AI-powered identity matching to law enforcement and other paying customers via a facial recognition platform that it trained covertly on photos harvested from Internet sources (like social media platforms) (TechCrunch).
University degrees in Data Science, Analytics, etc. often have an "Ethics" component, trying to ensure that people understand how to use data ethically and understand data privacy. Governments and regions have introduced regulations to make clear what is legal and what is not, along with penalties for breaching those regulations. Yet there are people who will stretch the law to the absolute limit and then some. Why? Well, usually because there is a LOT of money to be made.
Fortunately, it isn't all doom and gloom - Amazon realised that its algorithm to identify the best candidates, which was trained on 10 years of hiring history, was biased against women, and it disbanded the team.
I don't think AI is going away; intuitively, AI should be able to help people do things faster and better, but we have quite a way to go before we're there. The data used to train models comes from somewhere, and the decisions that were previously made were made using various criteria. Modelers, Analysts, Data Scientists (and the organisations leveraging their skillsets) are, or really ought to be, deeply invested in where the data comes from.
Perhaps someone at Clearview AI ought to have considered whether people posting pics on their social media accounts expected them to be used to develop facial recognition software, let alone expected that software to be sold to law enforcement. To my way of thinking it boils down to values: the values of the organisation, as well as the analyst's values. Our values are at the heart of what we believe is right or wrong; our personal and corporate values act to "prime" the way we think, and this directly affects our behaviour (HBR). In the same way that companies can design corporate values to drive positive outcomes, unspoken values left unchecked can drive negative outcomes. That article highlights five key factors that may indicate a scandal is on the horizon.
Our values, what we believe to be right, drive our thoughts; our thoughts drive our words; our words drive our actions; and over time, our actions become habits.
As analysts we need to consider many things when building models, and typically the vast majority of the focus is on technical skills: can we get the data, is it clean enough, and if not, what can we do to clean it? How about features - do we have enough features? What techniques should we apply when feature engineering? How will features be evaluated and selected? What modelling techniques should we apply? How do we ensure that the model is robust and not over-fitted? Have we ensured that we have adequate model monitoring in place, so that we'll know when it is no longer performing and ought to be redeveloped? These are all very important for an analyst to consider and mean the difference between a good model and a lousy model that isn't worth the effort it takes to implement.
All of these considerations come after we have the data. As analysts, as companies employing analysts, and as managers and leaders of analysts, we should be asking ourselves the harder ethical questions: Where did the data come from? Is this how it is supposed to be used? Do I/we actually own this data, and if it is personal data, would the person it came from feel the same way about how it is being used?
As leaders of analysts, do we foster an environment where our analysts can raise their concerns without fear of reprisal? Conversely, do we have the checks and balances in place to ensure that our analysts are working within a framework that aligns with our company values? In a world where we can do anything, we should think about what we're creating.