Journey of one paper (I)
Starting up with the first proposal
- This student described a potentially interesting topic.
- But he did not follow the guidelines for what to include in his proposal.
- And he did not relate his proposal to the seminar theme of "The Self".
This is a somewhat negative example: what *not* to do in your first proposal. Usually I prefer to put *positive* examples out there. But in this case the student's final paper was excellent, so perhaps it's useful to see how one paper developed, twists and turns included.
First round proposal
Below, in black text, is the student's initial proposal, with Paul's first round of feedback (in red) interspersed. I tried to re-direct him to some related topics. He didn't embrace any of my content ideas, but [spoiler alert] he eventually took the paper in his own direction, related it to the course theme more strongly, and produced a good final paper!
Machine Bias
Artificial Intelligence is one of the most powerful tools humans have ever created, and sometimes it is so powerful that it can easily be turned in the wrong direction. We take for granted that the result a computer outputs is completely objective, but what if it is not? [*] What if it is biased? Then fields like criminal justice, predictive policing, and even healthcare, which use AI on a daily basis, would be trusting prejudiced recommendations, leading to biased actions.
[*] I would not agree that we take this for granted. Actually, I suspect that the very idea of “objective” is contested and not settled outside the domain of AI.
Much of current “Machine Learning” involves giving computers “training sets” of data (your second reference looks like it refers to that) and having them look for patterns. At this level, perhaps the bias is inevitable, reflecting any bias in the data set. We have a current example in COVID vaccine development. This time vaccine manufacturers worked pretty hard to include minorities and people from all over the world in their trials, but this has not always been the case. Often vaccines have been trialed mainly on white folks of European ancestry.
Another high-profile AI experiment that ended badly: Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day
But how did it all start? How does a computer go from being objective to being biased? What are specific examples where computers output prejudiced recommendations? In criminal justice, a judge should not trust a biased recommendation, but whom should we trust: computers, or human judgement, which is to some degree biased as well? What if biased predictions are in fact beneficial in some fields, such as predictive policing? What if that helps us predict a terrorist attack and save lives? And in the non-beneficial case, what can we do to reduce the bias of an algorithm and create a more objective world?
I have a friend from grad school who is part of the organizing team for an “AI for good” conference. Several of the talks are available somewhere on their website:
aiforgood.itu.int
The focus seems to be rather more on using AI as part of the solution to problems of poverty and the Sustainable Development Goals (SDGs) of the UN.
But one researcher on there is Stuart Russell, and he is thinking about a larger problem that might relate to your proposal and our class a bit more: the principles of how AI development happens in order to serve human goals. He makes some connections with the development of nuclear power, and also with the rise of corporations: how can we ensure that those don't pursue selfish objectives, but serve humans? So maybe there's a larger question in there of how we can steer human-created “systems” in human-compatible directions.
Stuart Russell article
Angwin, Julia, and Jeff Larson. “How We Analyzed the COMPAS Recidivism Algorithm.” ProPublica, 23 May 2016. www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
Larrazabal, Agostina J., et al. “Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis.” PNAS, 2020. https://www.pnas.org/content/pnas/117/23/12592.full.pdf
Johndrow, James E., and Kristian Lum. “An algorithm for removing sensitive information: application to race-independent recidivism prediction.” Stanford University, 2017. https://arxiv.org/pdf/1703.04957.pdf
3rd round proposal
Yes, there was also a 2nd round proposal, but I'm skipping straight to the one that we all finally approved.
Human-computer trust/interaction
Over the years, research has shown a human tendency to trust computers. But why? What is it about human nature that drives this tendency? In spite of the fact that there have been many cases where algorithms have been shown to be biased, humans overall are still inclined to follow the advice of a computer rather than that of another human. In some cases we are even willing to change our own decision for the decision of a computer. It seems like we are letting the algorithm think for us. We are giving up what differentiates us from all other animals and what really makes us human: reasoning and the cognitive ability to think. On the other hand, why wouldn't some people accept the advice of a computer that has shown high competence in its field, for example in weather forecasting or even medicine? The phenomenon called “algorithm aversion” seems rather radical, and somewhat dangerous. Is it our innate ego that prevents us from accepting computers' advice even when they are correct?
Are we going in the direction of becoming less human? Are we increasing the freedom of AI to decide for us? Is the overwhelming power of AI making us less self-confident, so that our trust in computers increases? Is it a lack of trust between humans that increases our trust in computers? Are we becoming more independent of each other and, at the same time, more dependent on computers? Why doesn't seeing a reality built from a biased algorithm affect us? Is it just that we are not aware of it? And what about the term “algorithm aversion”: how does that relate to the self and our personality?
References:
Dietvorst, Berkeley J., Joseph P. Simmons, and Cade Massey. “Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err.” Journal of Experimental Psychology: General, 144 (1), 2015, 114–126.
Kulms, Philipp, and Stefan Kopp. “A Social Cognition Perspective on Human–Computer Trust: The Effect of Perceived Warmth and Competence on Trust in Decision-Making With Computers.” Frontiers in Digital Humanities, 2018. https://www.frontiersin.org/article/10.3389/fdigh.2018.00014
Logg, Jennifer M., Julia A. Minson, and Don A. Moore. “Algorithm Appreciation: People Prefer Algorithmic to Human Judgment.” Harvard Business School, 2018. https://www.hbs.edu/faculty/Publication%20Files/17-086_610956b6-7d91-4337-90cc-5bb5245316a8.pdf
Sundar, S. Shyam, and Jinyoung Kim. “Machine Heuristic: When We Trust Computers More than Humans with Our Personal Information.” In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). Association for Computing Machinery, New York, NY, USA, Paper 538, 1–9. https://doi.org/10.1145/3290605.3300768
By the third round, this proposal still didn't exactly hew to the proposal guidelines. But the idea of exploring trust through how we interact with computers did seem to be something that everyone found interesting.
The titles of the articles he found were pretty fascinating. In this case, none of the articles he started with survived to this round. The new articles and the proposal have a more obvious connection with the self: how humans decide whether they will trust computers.