Recently, Google fired Dr. Gebru, one of the leaders in the AI ethics space, for poorly justified reasons. It was big news in the machine learning world because of how it happened, who it happened to, and how Google handled the aftermath. It was all a big mess.
This feature in VentureBeat offers a list of predictions and suggestions about what could happen next in the AI ethics space. I find it a clear-headed list that lays out several paths forward.
Unfortunately, I don’t think any of these paths is a long-term solution. Algorithmic bias is baked into the way we train models, to the point where it takes a ton of extra work and effort even to recognize it. On top of that, the most well-funded organizations, which have the reach to affect the largest populations, are incentivized to hide the negative externalities of the products they build. Layering these problems together makes it really difficult to address them all at once. Legislation, whistle-blowing, and collective action may cut the Gordian knot and rein in these organizations, but the fundamental issues of machine learning will still remain.