DjangoCon Europe 2017 - The ghost in the algorithm: how to define and to apply digital ethics for the common good

Speaker: Fabio Chiusi

Technology has evolved rapidly and pervades our lives. We don't really have a clue how to deal with this. There are concepts like "people first" versus "technology first". There is the word "solutionism": presenting everything as better and newer simply by applying technology. The self-quantification movement ties in here, too: people who want to track every fact about themselves.

But the technology originates with humans, with their own political and ethical stances, which are somewhat passed on to the machines and programs. The digital revolution needs a discussion about how to better incorporate ethics into machines.

We should not ask what is safe (to use, to share), but why it is important to share, why we are ignorant of the mechanisms, and why we accept the impact they have on us all. Frank Pasquale's The Black Box Society discusses how to make our society more transparent with respect to modern digital technology.

Facebook could, e.g., not only give you the news you want to read, but the opposite, too. It could make its selection and filtering criteria open. It should protect its users from outside agencies; it should advance mankind. Being on Facebook is easy, reflecting on it is not. This leads to a large transfer of power over fundamental ethical issues to software engineers. Those people are now becoming the masters of ethical decisions, without any prior experience or knowledge.

Do we want our cars to be self-driving, even if that implies that somebody always knows where we are going? Do we want to sacrifice our freedom of speech for the ease of web search or ordering products? We have no answers, only Silicon Valley ethics: preferring cheaper, faster, more profitable shiny products without reflecting on them.

Everybody should cooperate in the effort - the rest will follow.

  • Can algorithms be made human rights compliant? (How can humans be made human rights compliant?)
  • What happens when algorithms develop their own ethics? (Can they?)
  • Should we be talking about robot rights once machines become more autonomous?
  • Should we add jobs like Data Ethics Officer?
  • Should we mandate the unpacking of algorithms with regulation and laws? (Especially regarding company secrets …)
  • Or should we assemble ethics boards that check algorithms for ethical problems and discrimination?
  • How do we deal with the conflation of problems (filtering one thing leads to filtering another …)?