by Fumeng (fumeng.yang@pnnl.gov, fy@brown.edu)
Fumeng: There are at least twice as many papers as I list here, but I think these are enough unless we are missing important ones.
MUST AT LEAST CONSIDER CITING
Sheridan 1988; Sheridan 1980, 1983, 1984; Lee and See 2004; Lee and Moray 1994; Rempel et al. 1985; Parasuraman 1993; Muir and Moray 1996; Singh 1993; S. Zuboff (1988); Miller 2002, 2004; Hwang and Buergers 1997; Couch and Jones (1997); Rotter 1967; Dijkstra et al. 1998; Dijkstra 1999; Muir 1989
(Survey) Trust in automation - Lee & See 2004
Measuring Human-Computer Trust - Madsen & Gregor 2000
(Very Important) Trust Calibration within a Human-Robot Team: Comparing Automatically Generated Explanations [cb 13, Wang & Pynadath, 2013]
(Very Important) Trust, control strategies and allocation of function in human-machine systems (trust per trial) [Lee & Moray, 1992, cb 1040]
Muir 1989 (do we need this?)
(Very Important) Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err [Dietvorst et al., cb 92, 2015]
Fumeng: The performance of the models they used is very poor. The grade-estimation model is equal to, slightly better than, or even worse than the participants. It is not a surprise to me that participants trust themselves more, because they think they can improve while the models can't.
In their experiments, when they use a different model that clearly outperforms humans, or when the prediction is too difficult to make, people tend to trust the model slightly more.
Evidence-based algorithms predict more accurately than human forecasters; yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster (algorithm aversion).
Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them [Dietvorst et al., 2016] (& transparency)
The admissions office had created a statistical model designed to forecast student performance. Participants received detailed descriptions of the eight variables that they would receive about each applicant.
Study 1
Study 2
Study 3
Study 4
Trust between humans and machines, and the design of decision aids [cb 591, Muir 1987]
Fumeng: this is not the one... Muir's doctoral thesis (1989) is important, but I can't find it...
The persuasive power of data visualization [cb 45, Pandey ... Bertini 2014]
Affective Processes in Human–Automation Interactions [cb 52, Merritt 2011]
Participants completed 10 practice trials and received feedback. They next were presented with information on how to use the AWD and watched the AWD perform 10 trials so that they could observe its reliability.
I Trust It, but I Don't Know Why: Effects of Implicit Attitudes Toward Automation on Trust in an Automated System [cb 63, Merritt 2013]
Can computer personalities be human personalities? [Nass et al, 1995, cb 665]
Brains or Beauty: How to Engender Trust in User-Agent Interactions [Yuksel et al, cb 4, 2017]
Agents were independent of agents from any previous trials, in order to avoid attribution of perceived trustworthiness from previous agents.
Trust, self-confidence and authority in human-machine systems [Inagaki et al, 1998, cb 44]
(might be important) Foundations for an Empirically Determined Scale of Trust in Automated Systems [Jian et al 2000, cb 493]
Trust between man and machine in a teleoperation system [Dassonville et al, 1996, cb 24]
The impact of cognitive feedback on judgment performance and trust with decision aids [Seong & Bisantz 2008, cb 61]
A model for predicting human trust in automated systems [Khasawneh et al, 2003, cb 16]
Measurement of Trust Over Time in Hybrid Inspection Systems [Master et al, 2005, cb 17]
The effects of errors on system trust, self-confidence, and the allocation of control in route planning [de Vries, 2003, cb 164]
Trust in and Adoption of Online Recommendation Agents [Benbasat & Wang, cb 557]
Trust in Adaptive Automation: The Role of Etiquette in Tuning Trust via Analogic and Affective Methods [Miller, 2004, cb 43]
The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle [Waytz et al, 2014, cb 155]
The perceived utility of human and automated aids in a visual detection task [Dzindolet et al, 2002, cb 165]
good source of references; uses ANOVA
Experiment 1
Experiment 2
Experiment 3
In every study, participants relied on automated and human aids differently.
People expect machines to make fewer errors than human partners.
(Important for data analysis & whether we should give feedback) Are Well-Calibrated Users Effective Users? Associations Between Calibration of Trust and Performance on an Automation-Aided Task [cb 8, Merritt et al, 2015]
Not All Trust Is Created Equal: Dispositional and History-Based Trust in Human-Automation Interactions [cb 189, Merritt & Ilgen 2008]
Tuning trust using cognitive cues for better human-machine collaboration [Cai & Lin, 2010, cb 13]
Trust in New Decision Aid Systems [Atoyan et al, 2006, cb 56]
Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations [McAllister, 1995, cb 7544]
(famous survey paper) An Integrative Model of Organizational Trust [Mayer, Davis & Schoorman, 1995]
Similarities and differences between human–human and human–automation trust: an integrative review [Madhavan & Wiegmann, 2007]
Trust in decision aids: A model and its training implications
On Deep Learning for Trust-Aware Recommendations in Social Networks
A survey of trust in computer science and the Semantic Web
Trustworthiness of command and control systems [Sheridan 1989, cb 104]
Humans and Automation: Use, Misuse, Disuse, Abuse (Parasuraman & Riley, 1997, cb 2407)
Trust in Close Relationships [Rempel et al, cb 2973, 1985]
Measurement of Trust in Hybrid Inspection Systems: Review and Evaluation of Current Methodologies and Future Approach
Trust Models for Community Aware Identity Management
Trust in Electronic Commerce: Definition and Theoretical Considerations
On-line trust: concepts, evolving themes, a model
Trust metrics in information fusion
Towards a cognitive approach to human–machine cooperation in dynamic situations
Measuring Levels of Trust [Couch & Jones, 1997, cb 253]
Modeling Trust Negotiation for Web Services
The Dyadic Trust Scale: Toward Understanding Interpersonal Trust in Close Relationships
A Machine Learning Based Trust Evaluation Framework for Online Social Networks
How do We Learn to Trust? A Confirmatory Tetrad Analysis of the Sources of Generalized Trust [Glanville & Paxton, 2007, cb 218]
Madsen & Gregor 2000 article and actual measure (word doc) – This is a state measure that captures the antecedents of trust (personal attachment, faith, reliability, technical competence, understandability). It is specifically designed for human-computer trust. I adapted it for one of my human-human trust studies and found it to be pretty good. I think all of the constructs predicted self-report trust in my study.
Mayer adapted for human-machine trust – this is the self-report measure of trust that I have had the most success with. However, it was designed to measure human-human trust and I have never used this adapted version for human-machine trust (I adapted it for some upcoming human-machine trust research). You should be able to just substitute UAV for your tool. I think it’s pretty good, but if you don’t want to use this measure I believe Merritt has specifically designed a measure for human-machine trust I can get my hands on.
Merritt et al 2015 - Table three in this article shows the items for the perfect automation schema measure. I don’t know a lot about this measure. I would consider it a trait or individual difference measure and the construct sounds really interesting to me. I have never used it, but I think I know some who have and can reach out to them if you want me to.
Singh et al 1993 – I can’t for the life of me find this measure, but the reference is attached. I haven’t used this one either. As you can tell from the reference it has been around for a while and I have heard the questions are a bit dated. This is another trait/individual difference measure.
For Madsen & Gregor and Mayer I can give you a lot more info if you want it. For the others I know less. Overall, I thought these measures would probably be best based on our conversation. However, if none of these fit the bill or you are looking for more, I have a few more that I can pull together, including some implicit measures of trust.
(Muir) Automation that is predictable, dependable, and inspires faith that it will behave as expected in unknown situations will be seen as more trustworthy.
(Lee and See) When trust in the automation was higher than self-confidence in manual operation, the participant tended to allocate the task to the automated system.
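A minimal Python sketch of this comparative allocation rule. The function name, the 0-10 rating scale, and the optional inertia bias are illustrative assumptions, not Lee and See's model; the sketch only encodes the trust-versus-self-confidence comparison summarized above.

    # Sketch of the trust vs. self-confidence allocation rule: the task
    # goes to automation only when trust in the automation exceeds
    # self-confidence in manual operation. Names, the 0-10 scale, and
    # the `inertia` bias are illustrative, not from Lee and See.
    def allocate_task(trust_in_automation: float,
                      self_confidence: float,
                      inertia: float = 0.0) -> str:
        """Return 'automation' or 'manual' under the comparative rule.

        trust_in_automation and self_confidence are subjective ratings
        on the same scale (e.g., 0-10). `inertia` is an optional bias
        toward manual control, reflecting that operators often keep
        manual control when the two ratings are close.
        """
        if trust_in_automation > self_confidence + inertia:
            return "automation"
        return "manual"

    # Trust (7.5) exceeds self-confidence (6.0): allocate to automation.
    print(allocate_task(7.5, 6.0))               # -> automation
    # An inertia bias of 2.0 keeps the same ratings under manual control.
    print(allocate_task(7.5, 6.0, inertia=2.0))  # -> manual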
Toward Establishing Trust in Adaptive Agents
A Design Methodology for Trust Cue Calibration in Cognitive Agents
The Calibration of Trust in an Automated System: A Sensemaking Process