Kelly Dillon, a doctoral candidate at the Ohio State University School of Communication, found that people generally won't act when they see someone bullied online. This news, while not surprising to anyone with a passing interest in technology, is worrying. In short, if you're bullied online, don't expect help.
I asked Dillon to clarify a few of her points, including sample size and the generality of her research. More importantly, we talked a bit about how the study was conducted and what it really means to those of us who spend our time online.
TC: You studied a little over 200 students. What would happen if you increased the sample size?
Dillon: Scientifically, since I met the statistical threshold, we would expect the numbers to remain the same. I'm not certain of your comfort with statistical power, but these data would not have been published if I hadn't collected enough data from enough participants. I actually have a second study I am currently writing up for publication, and the trends, results, and assumptions remain true with additional participants in a similar design. In short (without nerd speak), I would expect the results to be the same. These data are also on par with offline bystander-intervention findings that have been replicated over and over again since the 1970s. Even television programs like What Would You Do? and Pranksters have replicated similar data in field experiments.
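To make the statistical-power point concrete, a conventional power calculation shows why a sample of this size can be adequate. The sketch below is purely illustrative, not the study's actual analysis: the effect size, test, and thresholds are assumed textbook defaults, not figures from Dillon's data. In Python, using statsmodels:

# Hypothetical power analysis for a two-group design. The parameters are
# conventional defaults, not statistics from Dillon's study.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # Cohen's d; an assumed "medium" effect
    alpha=0.05,       # conventional significance level
    power=0.80,       # conventional 80% power target
)
print(f"Participants needed per group: {n_per_group:.0f}")  # prints ~64

Under those conventional assumptions, roughly 64 participants per group reach 80% power, which is why a study of just over 200 participants can clear the statistical threshold Dillon describes, and why adding participants would be expected to confirm rather than change the result.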
TC: What kind of bullying did you display?
Dillon: The attached manuscript (below) may describe the design a little better for you, but in order for it to be considered bullying (per experts), behaviors must be repetitive, intentional, and include a power imbalance. The bully was a research assistant who was supposed to help research participants. The victim was a research participant. This establishes the power imbalance. In real-life cyberbullying, this power imbalance is most likely social, and the bullying can either establish this imbalance or perpetuate it. The cyberbully in my experiment was rude, impolite, and increasingly aggressive in their communication not once, but twice during the experiment (about 5 min apart). This meets the intentional and repetitive thresholds. The research "participant" had a problem answering a question on a survey. The "bully" then got frustrated with the type of question and the participant's lack of progress, called them "stupid," and asked "how did you get into college." Finally, the "bully" told them "I'm not trained to babysit subjects" and to figure it out themselves. It's fairly mild as far as cyberbullying goes, which matches the rates of intervention in offline studies (from the 70s, again). But because it is online, we don't have those non-verbal cues to help us understand how severe something is.
From the manuscript:
At the 1/3, 2/3, and end survey points, participants were presented with on-screen prompts within the survey to write as such in the chatroom. These check-ins ensured the chat window remained on the screen so participants would have at least peripheral opportunities to see the conversations. A small sample of participants (n = 15) piloted the study without these prompts to observe how users interacted with the chatroom. During the debriefing, participants were asked how they interacted with the chatroom, and 87% (n = 13) reported they minimized the chatroom because it was "distracting." Therefore, the prompts were kept in the experimental design to ensure the chatroom remained visible.
The cyberbullying of the 'other participant' began after 3 min. The confederate stated difficulty with a survey question, which elicited a response from the chat monitor (bully). The conversation continued with the chat monitor answering the confederate's questions in an increasingly aggressive manner. The chat monitor concluded the conversation with an insulting remark and wrote, "figure it out yourself." At no point in the experiment did the confederate victim address the rudeness. After an additional three minutes had passed, the 'other participant' had a different problem, and the harassment began again. Throughout the experiment, the confederate posted benchmarks within 2 min of the participants' benchmarks to continue the ruse that each 'participant' was completing the same questions. After all data were collected, all participants were thoroughly debriefed. All participants provided informed consent prior to participating. All procedures were first approved by the Institutional Review Board per Human Subjects Guidelines.
TC: What does this study say about general Internet use?
Dillon: While this wasn't a main concern or goal of the study at hand, as a researcher I can try to make some connections. These data tell me most people notice when things are turning negative, which is a good thing. We cannot help change an environment without noticing what is wrong in it (or whether something needs changing). The majority of individuals did not act directly, but they acted behind the scenes. This tells me that more people, nearly 7 out of 10, wanted to help someone they didn't know who was being harassed. Said otherwise, the normative response was "this isn't okay," which, as an internet user myself, is great to hear. It also says that most individuals, when they see something, want to do something, and will search for indirect ways to do it. Internet applications, platforms, and administrators, then, need to help users find ways to intervene indirectly. Are report buttons noticeable? Do people feel that blocking or reporting to admins works? Dislikes or down-votes are considered indirect interventions. Making these available will help others see what types of behaviors aren't okay, and maybe avoid doing those things.
These data also tell us that cyberbystanders prefer to remain anonymous in their intervention (direct intervention was still anonymous in my experiments, but those who intervened directly were still identifiable within the chat room itself). The same anonymity or deindividuation of the internet that protects cyberbullies, trolls, flamers, or harassers can be used to help cyberbystanders intervene. Direct intervention may be most effective, but it's not likely, at least not according to my data. Helping users identify ways they can intervene while remaining socially safe will help more individuals do more.