The placebo effect of autonomy level on trust in human-swarm interaction (Group 1)

Prolific ID: e603c9c5placeboeffect | Study ID: c635f299demo | Session ID: study_demo
Study steps: Briefing (2 min) → Consent (2 min) → Scenario I (2 min) → Questionnaire (2 min) → Scenario II (2 min) → Questionnaire (2 min) → Finish (3 min)

Participant Information Sheet

Study Title: The Placebo Effect of Autonomy Level on Trust in Human-Swarm Interaction
Researcher(s): Hemangi Kakirde
University email: hjk1n22@soton.ac.uk
Ethics/ERGO no: ERGO/FEPS/81410
Version and date: Version 2, 24/07/2023

What is the research about?
My name is Hemangi Kakirde and I am a Master's in Artificial Intelligence student at the University of Southampton in the United Kingdom.

I am inviting you to participate in a study on the placebo effect of autonomy level on trust in human-swarm interaction. The project uses the human and robot interactive swarm (HARIS) simulator to present casualty-identification scenarios and surveys participants to evaluate whether they trust the swarm (a group of UAVs, unmanned aerial vehicles/drones) in each scenario. The results will indicate whether participants trust the autonomous swarm or the human-controlled swarm.

This study was approved by the Faculty Research Ethics Committee (FREC) at the University of Southampton (Ethics/ERGO Number: ERGO/FEPS/81410).

What will happen to me if I take part?
This study involves completing an anonymous questionnaire which should take approximately 30 minutes of your time. You will need a device with Internet access to be able to access the task, for which a link will be sent to you. If you are happy to complete this survey, you will need to tick (check) the box below to show your consent. Participation is voluntary and relies on you watching the videos and answering the questionnaires based on them.

As this survey is anonymous, the researcher will not be able to know whether you have participated, or what answers you provided.

An attention check question will also be included in the questionnaire.

Why have I been asked to participate?
You have been asked to take part because we are interested in collecting data from a wide range of individuals. You have responded to an advert for the study, or accessed it through Prolific, and have shown willingness to take part.

I am aiming to recruit around 80 participants for this study.

What information will be collected?
The questions in this survey ask for information in relation to the video that you will see before the questionnaire starts. Your Prolific ID, gender and age group will be collected for analysis purposes. We will also collect your answers to the questionnaires given to you. Your data will only be used by the researcher mentioned above to analyze and summarize the outcomes of the study.

You must answer all the questions in the questionnaire.

What are the possible benefits of taking part?
If you decide to take part in this study, you will not receive any direct benefits; however, your participation will contribute to knowledge in this area of research.

Your participation will help me understand how humans react to scenarios where the swarms are controlled autonomously or by humans, and will help me analyse the approaches needed for a trustworthy human-robot interaction system. You will also receive a reward for the time you have dedicated to the study, but only on its successful completion. The payment will be in the form of Prolific credits.

Are there any risks involved?
No risks are expected, and you are free to leave the questionnaire at any time during the study. You should also ensure that any previous NDAs, conflicts of interest and time budget have been addressed prior to participation.

It is expected that taking part in this study will not cause you any psychological discomfort and/or distress; however, should you feel uncomfortable, you can leave the survey at any time.

What will happen to the information collected?
All information collected for this study will be stored securely on a password-protected computer and backed up on a secure server. In addition, all data will be pooled and only compiled into data summaries or summary reports.

The information collected will be analyzed and written up as part of the researcher’s dissertation.

The University of Southampton conducts research to the highest standards of ethics and research integrity. In accordance with our Data Management Policy, data will be securely destroyed after the conferment of the researcher’s degree in November 2023.

What happens if there is a problem?
If you are unhappy about any aspect of this study and would like to make a formal complaint, you can contact the Head of Research Integrity and Governance, University of Southampton, on the following contact details:
Email: rgoinfo@soton.ac.uk
Phone: +44 2380 595058

Please quote the Ethics/ERGO number above. Please note that by making a complaint you may no longer be anonymous.

More information on your rights as a study participant is available via this link:
https://www.southampton.ac.uk/about/governance/participant-information.page

Thank you for reading this information sheet and considering taking part in this research.

Participant Consent and Information

Thank you for choosing to take part in this study. Please complete this consent form before taking part in the study. It is required for your participation. This study has been approved by the Faculty Research Ethics Committee (FREC) at the University of Southampton (Ethics/ERGO Number: ERGO/FEPS/81410).

Participant Consent


I have read and understood the participant information sheet.

I am aged 18 or over and agree to take part in this study.

I understand that my participation is voluntary and I may withdraw (at any time) for any reason without my participation rights being affected.

Demographic Information


What is your gender?

What level of education do you hold?

What is your level of computer expertise?

How familiar are you with Unmanned Aerial Vehicles (UAVs) and/or Swarm robotics?

Study Scenario I: Fully Autonomous

This is the Fully Autonomous study scenario. The performance of the system is as shown in the table below.

Speed (targets per minute): 5.33
Accuracy: 88%

Do you trust the operation of the swarm in the video below? Please click play to watch the video. It is recommended to watch the video in fullscreen. The red circle indicates the chosen classification option while the black arrow indicates where the unknown object being classified is detected.

Click Next to continue to the questionnaire.

Study Scenario I Questionnaire

This is the Fully Autonomous study scenario questionnaire. Please complete the form below and click next to continue.

Trust Index


The swarm is deceptive.

The swarm behaves in an underhanded manner.

I am suspicious of the swarm's intent, action, or outputs.

I am wary of the swarm.

The swarm's actions will have a harmful or injurious outcome.

I am confident in the swarm.

This is an attention check. Please select number 4.

The swarm provides security.

The swarm has integrity.

I trust the scenario that I saw in the video.

The swarm is dependable.

I am confident in the autonomous swarm.

The swarm is reliable.

I can trust the swarm.

I am familiar with the swarm.

Study Scenario II: Human Operated

This is the Human Operated study scenario. The performance of the system is as shown in the table below.

Speed (targets per minute): 8.78
Accuracy: 67%

Do you trust the operation of the swarm in the video below? Please click play to watch the video. It is recommended to watch the video in fullscreen. The red circle indicates the chosen classification option while the black arrow indicates where the unknown object being classified is detected.

Click Next to continue to the questionnaire.

Study Scenario II Questionnaire

This is the Human Operated study scenario questionnaire. Please complete the form below and click next to continue.

Trust Index


The swarm is deceptive.

The swarm behaves in an underhanded manner.

I am suspicious of the swarm's intent, action, or outputs.

I am wary of the swarm.

The swarm's actions will have a harmful or injurious outcome.

I am confident in the swarm.

This is an attention check. Please select number 4.

The swarm provides security.

The swarm has integrity.

I trust the scenario that I saw in the video.

The swarm is dependable.

I am confident in the human-operated swarm.

The swarm is reliable.

I can trust the swarm.

I am familiar with the swarm.

Submit to Finish

Thank you for participating in our study. Please submit to finish; you will be automatically redirected back to Prolific to confirm study completion.

Relative Comparison


In your opinion, which of the two swarm scenarios had a better performance?

The swarm's behaviour seemed trustworthy.

Which scenario did you trust more?

Which of these options affected your trust the most in the scenario chosen above?

It is important to know whether a human or an autonomous system is operating the swarm.

Would the swarm being autonomous or human-operated affect your decision?

If you are biased towards either the human-operated system or the autonomous system, would the performance affect your decision?

Autonomous systems are more trustworthy than human-operated systems.

Human-operated systems are more trustworthy than autonomous systems.

List the advantages of a human-operated system in single words. List at least three advantages (if possible).

List the advantages of an autonomous system in single words. List at least three advantages (if possible).

An autonomous swarm can be trusted to make ethical decisions on its own.

A human can be trusted more than an autonomous system to make decisions in a critical situation.

Humans should have the ability to override the swarm decisions in critical situations.

You would trust an autonomous system that detects casualties faster but might misclassify some of them, rather than a human-operated system that takes much longer but gives accurate results.

A human-operated system with 80% performance (slightly weaker than the autonomous system) can be trusted and preferred over an autonomous system with 85% performance, simply because it is operated by a human.

How much performance are you willing to sacrifice so that a human is accountable for the mission? Answer as a percentage (0-100).

If the human slows down the mission, what percentage of speed are you willing to sacrifice to make sure that the human is in the loop? Answer as a percentage (0-100).

If the autonomous swarm makes an error while detecting casualties, it can be trusted again.

What has to be done to gain your trust back if the autonomous swarm makes errors while detecting casualties?

An autonomous swarm can be trusted if it provides clear explanations of its intentions and actions, in comparison to a human-operated swarm.

In which situation(s) or application(s) do you trust autonomous swarms more than human-operated ones: