Journey of one paper (I)

Starting with the first proposal

  • This student described a potentially interesting topic.
  • But he did not follow the guidelines for what to put in his proposal.
  • And he did not relate his proposal to the seminar theme of "The Self".

This is a somewhat negative example of what *not* to do in your first proposal. Usually I prefer to put *positive* examples out there, but in this case the student's final paper was excellent, so perhaps it is useful to see the development of one paper, twists and turns included.

First round proposal

Machine Bias

Artificial Intelligence is one of the most powerful tools humans have ever created, and sometimes it is so powerful that it can easily be turned in the wrong direction. We take for granted that a computer's output is completely objective, but what if it is not? What if it is biased? Then fields like criminal justice, predictive policing, and even healthcare, which use AI on a daily basis, would be trusting prejudiced recommendations that lead to biased actions.

But how did it all start? How does a computer go from being objective to being biased? What are specific examples of computers outputting prejudiced recommendations? In criminal justice, a judge should not trust a biased recommendation, but whom should we trust: computers, or human judgment, which is to some degree biased as well? What if biased predictions are in fact beneficial in some fields, such as predictive policing? What if they help us predict a terrorist attack and save lives? And in the non-beneficial cases, what can we do to reduce an algorithm's bias and create a more objective world?

Angwin, Julia, and Jeff Larson. “How We Analyzed the COMPAS Recidivism Algorithm.” ProPublica, 23 May 2016, www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.

Larrazabal, Agostina J., et al. “Gender imbalance in medical imaging datasets produces biased classifiers for computer aided diagnosis.” PNAS, 2020. https://www.pnas.org/content/pnas/117/23/12592.full.pdf

Johndrow, James E., and Kristian Lum. “An algorithm for removing sensitive information: application to race-independent recidivism prediction.” Stanford University, 2017. https://arxiv.org/pdf/1703.04957.pdf

Third round proposal

Human-computer trust/interaction

Over the years, research has shown a human tendency to trust computers. But why? What is it about human nature that drives this tendency? Although algorithms have been shown to be biased in many cases, humans are, on the whole, still more inclined to follow the advice of a computer than that of another human. In some cases we are even willing to exchange our own decision for a computer's. It seems we are letting the algorithm think for us, giving up what differentiates us from all other animals and what really makes us human: reasoning and the cognitive ability to think. On the other hand, why wouldn't some people accept the advice of a computer that has shown high competence in its field, such as weather forecasting or even medicine? The term “algorithm aversion” seems rather radical, and somewhat dangerous. Is it our innate ego that prevents us from accepting a computer's advice even when it is correct?

Are we heading in the direction of being less human? Are we giving AI more freedom to decide for us? Is the overwhelming power of AI making us less self-confident, thereby increasing our trust in computers? Is it a lack of trust between humans that increases our trust in computers? Are we becoming more independent of each other and, at the same time, more dependent on computers? Why does seeing a reality built by a biased algorithm not affect us? Is it simply that we are unaware of it? And how does the term “algorithm aversion” relate to the self and our personality?

References:

Dietvorst, B., Simmons, J. P., & Massey, C. (2015). Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err. Journal of Experimental Psychology: General, 144(1), 114–126.

Kulms, Philipp, and Stefan Kopp. “A Social Cognition Perspective on Human–Computer Trust: The Effect of Perceived Warmth and Competence on Trust in Decision-Making With Computers.” Frontiers in Digital Humanities, 2018. https://www.frontiersin.org/article/10.3389/fdigh.2018.00014

Logg, Jennifer M., Julia A. Minson, and Don A. Moore. “Algorithm Appreciation: People Prefer Algorithmic to Human Judgment.” Harvard Business School, 2018. https://www.hbs.edu/faculty/Publication%20Files/17-086_610956b6-7d91-4337-90cc-5bb5245316a8.pdf

Sundar, S. Shyam, and Jinyoung Kim. “Machine Heuristic: When We Trust Computers More than Humans with Our Personal Information.” Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19), Association for Computing Machinery, New York, NY, USA, Paper 538, 1–9. https://doi.org/10.1145/3290605.3300768