2nd Workshop on Abusive Language Online (Humanists, SocSci, Arts – join us!)

HASTAC friends and colleagues – I am a member of the organizing committee for this workshop, and I am particularly keen to have humanists, social scientists, and artists engaged as well. If you have questions about the format or anything else related to the CfP, please feel free to contact me directly. I hope to see some of you in Brussels for this timely (feels sadly evergreen) event!

ALW2: 2nd Workshop on Abusive Language Online

EMNLP 2018 (Brussels, Belgium), October 31st or November 1st, 2018

Submission deadline: July 20th, 2018

Website: https://sites.google.com/view/alw2018

Submission link: https://www.softconf.com/emnlp2018/ALW2/

Overview

Interaction amongst users on social networking platforms can enable constructive and insightful conversations and civic participation; however, on many sites that encourage user interaction, verbal abuse has become commonplace, leading to negative outcomes such as cyberbullying, hate speech, and scapegoating. In online contexts, aggressive behavior may be more frequent than in face-to-face interaction, which can poison the social climates within online communities. The last few years have seen a surge in such abusive online behavior, leaving governments, social media platforms, and individuals struggling to deal with the consequences.

For instance, in 2015, Twitter’s CEO publicly admitted that online abuse was driving users away from the platform, and in some cases even forcing them to leave their homes. More recently, Facebook, Twitter, YouTube, and Microsoft pledged, in accordance with the EU Commission’s code of conduct, to remove hate speech from their platforms within 24 hours, and in Germany they face fines of up to €50M if they systematically fail to remove abusive content within that window. While governance demands the ability to respond quickly and at scale, we do not yet have effective human or technical processes that can meet this need. Abusive language is often extremely subtle and highly context dependent. We are therefore challenged to develop scalable computational methods that can reliably and efficiently detect and mitigate the use of abusive language online within variable and evolving contexts.

As a field that works directly with the computational analysis of language, natural language processing (NLP) is in a unique position to address this problem, and the computational linguistics community has recently produced a growing number of papers dealing with abusive language. Abusive language is not a stable or simple target: misclassifying ordinary conversation as abusive can severely impact users’ freedom of expression and reputation, while misclassifying abusive conversations as unproblematic maintains the status quo of online communities as unsafe environments. Clearly, there is still a great deal of work to be done in this area. More practically, as research into detecting abusive language is still in its infancy, the research community has yet to agree on a suitable typology of abusive content or on standards and metrics for proper evaluation; here, research in media studies, rhetorical analysis, and cultural analysis can offer many insights.

In this second edition of the workshop, we continue to emphasize the computational detection of abusive language as informed by interdisciplinary scholarship and community experience. We invite paper submissions describing unpublished work from relevant fields including, but not limited to: natural language processing, law, psychology, network analysis, gender and women’s studies, and critical race theory.

Paper Topics

We invite long and short papers on any of the following general topics:

Related to developing computational models and systems:

NLP models and methods for detecting abusive language online, including, but not limited to, hate speech and cyberbullying

Application of NLP tools to analyze social media content and other large data sets

NLP models for cross-lingual abusive language detection

Computational models for multi-modal abuse detection

Development of corpora and annotation guidelines

Critical algorithm studies with a focus on abusive language moderation technology

Human-Computer Interaction for abusive language detection systems

Best practices for using NLP techniques in watchdog settings

Or related to the legal, social, and policy considerations of abusive language online:

The social and personal consequences of being the target of abusive language and targeting others with abusive language

Assessment of current non-NLP methods of addressing abusive language

Legal ramifications of measures taken against abusive language use

Social implications of monitoring and moderating unacceptable content

Considerations of implemented and proposed policies for dealing with abusive language online, and of the technological means for doing so

In addition, this one-day workshop will feature a multidisciplinary panel and a plenary discussion on the issues that researchers and practitioners face in working on abusive language detection. We are also exploring the possibility of publishing a special journal issue tied to this iteration of the workshop.

Through invited speakers and panels, we also seek a greater focus on the policy aspects of online abuse.

Submission Information

We will be using the EMNLP 2018 Submission Guidelines. Authors are invited to submit a full paper of up to 8 pages of content, with up to 2 additional pages for references. We also invite short papers of up to 4 pages of content, with up to 2 additional pages for references.

Accepted papers will be given an additional page of content to address reviewer comments. We also invite papers that describe systems. If you would like to present a demo in addition to presenting the paper, please make sure to select either “full paper + demo” or “short paper + demo” under “Submission Category” on the START submission page.

Previously published papers cannot be accepted. The submissions will be reviewed by the program committee. As reviewing will be blind, please ensure that papers are anonymous. Self-references that reveal the author’s identity, e.g., “We previously showed (Smith, 1991) …”, should be avoided. Instead, use citations such as “Smith previously showed (Smith, 1991) …”.

We have also included a conflict-of-interest section in the submission form. You should mark all potential reviewers who have been authors on the paper, who are from the same research group or institution, or who have seen versions of this paper or discussed it with you.

We will be using the START conference system to manage submissions.

Important Dates

Submission due: July 20, 2018

Author Notification: August 18, 2018

Camera Ready: August 31, 2018

Workshop Date: October 31st or November 1st, 2018

Submission link: https://www.softconf.com/emnlp2018/ALW2/

Unshared task

To encourage focused contributions, we invite researchers to consider using one or more of the following datasets in their experiments:

StackOverflow Offensive Comments [To be released]

Yahoo News Dataset of User Comments [Nobata et al., WWW 2016]

Twitter Data Set [Waseem and Hovy, NAACL 2016]

German Twitter Data Set [Ross et al., NLP4CMC 2016]

Greek News Data Set [Pavlopoulos et al., EMNLP 2017]

Wikimedia Toxicity Data Set [Wulczyn et al., WWW 2017]

SFU Opinion and Comment Corpus [Kolhatkar et al., In Review]

Organizing Committee

Darja Fišer, University of Ljubljana & the Jožef Stefan Institute

Ruihong Huang, Texas A&M University

Vinodkumar Prabhakaran, Stanford University

Rob Voigt, Stanford University

Zeerak Waseem, University of Sheffield

Jacqueline Wernimont, Arizona State University

Program Committee/Reviewers

The following researchers have agreed to serve on the program committee as reviewers.

Mark Alfano, Delft University of Technology, Netherlands

Natalie Alkiviadou, UCLan Cyprus, Cyprus

Ion Androutsopoulos, Department of Informatics, Athens University of Economics and Business, Greece

Veronika Bajt, Peace Institute, Slovenia

Alistair Baron, Lancaster University, United Kingdom

Susan Benesch, Berkman Klein Center, United States of America

Darina Benikova, University of Duisburg-Essen, Germany

Joachim Bingel, University of Copenhagen, Denmark

Kalina Bontcheva, University of Sheffield, United Kingdom

Pete Burnap, Cardiff University, United Kingdom

Guillermo Carbonell, University Duisburg-Essen, Germany

Wendy Chun, Brown University, United States of America

Isobelle Clarke, Birmingham University, United Kingdom

Kelly Dennis, University of Connecticut, United States of America

Guy De Pauw, Textgain, Belgium

Mona Diab, George Washington University, United States of America

Lucas Dixon, Jigsaw (Google), United States of America

Nemanja Djuric, Uber, United States of America

Marisa Duarte, Arizona State University, United States of America

Hugo Jair Escalante, National Institute of Astrophysics, Optics and Electronics (INAOE), Mexico

Björn Gambäck, Norwegian University of Science and Technology, Norway

Lee Gillam, University of Surrey, United Kingdom

Tassie Gnady, University of Illinois, United States of America

Jen Golbeck, University of Maryland, United States of America

Vojko Gorjanc, University of Ljubljana, Slovenia

Erica Greene, Jigsaw, United States of America

Joris Van Hoboken, Vrije Universiteit Brussel, Belgium

Veronique Hoste, University of Ghent, Belgium

Dirk Hovy, Bocconi University, Italy

Dan Jurafsky, Stanford, United States of America

George Kennedy, Intel, United States of America

Neža Kogovšek Šalomon, Peace Institute, Slovenia

Varada Kolhatkar, University of Toronto, Canada

Els Lefever, University of Ghent, Belgium

Chuan-Jie Lin, National Taiwan Ocean University, Taiwan

Elizabeth Losh, William and Mary, United States of America

Prodromos Malakasiotis, StrainTek, Greece

Shervin Malmasi, Harvard University, United States of America

Diana Maynard, University of Sheffield, United Kingdom

Kathleen McKeown, Columbia University, United States of America

Rada Mihalcea, University of Michigan, United States of America

Mainack Mondal, Max Planck Institute for Software Systems, Germany

Hamdy Mubarak, Qatar Computing Research Institute, Qatar

Smruthi Mukund, A9.com Inc, United States of America

Kevin Munger, New York University, United States of America

Andreas Musolff, University of East Anglia, United Kingdom

Preslav Nakov, Qatar Computing Research Institute, Qatar

Anne Brigitta Nilsen, Oslo and Akershus University College of Applied Sciences, Norway

Chikashi Nobata, Apple, United States of America

John Pavlopoulos, StrainTek, Greece

Daniel Preoțiuc-Pietro, Bloomberg, United States of America

Michal Ptaszynski, University of Duisburg-Essen, Germany

Srividya Ramasubramanian, Texas A&M University, United States of America

Georg Rehm, Deutsches Forschungszentrum für Künstliche Intelligenz, Germany

Björn Ross, University of Duisburg-Essen, Germany

Masoud Rouhizadeh, Stony Brook University & University of Pennsylvania, United States of America

Niloofar Safi Samghabadi, University of Houston, United States of America

Christina Sauper, Facebook, United States of America

Xanda Schofield, Cornell, United States of America

Caroline Sinders, Wikimedia Foundation, United States of America

Dimitris Spathis, StrainTek, Greece

Mark Stevenson, University of Sheffield, United Kingdom

Maite Taboada, Simon Fraser University, Canada

Dennis Yi Tenen, Columbia University, United States of America

Ingmar Weber, Qatar Computing Research Institute, Qatar

Amanda Williams, University of Bristol, United Kingdom

Michael Wojatzki, University of Duisburg-Essen, Germany

Lilja Øvrelid, University of Oslo, Norway

Related Events

Workshop: The turn to artificial intelligence in governing communication online

First Workshop on Trolling, Aggression and Cyberbullying

The 1st Workshop on Abusive Language Online: the first edition of the workshop.

CHI Workshop on Online Harassment: a workshop focused on developing datasets for researching online harassment

Text Analytics for Cyber Security and Online Safety, LREC 2016

Discourses of Aggression and Violence in Greek Digital Communication, ICGL13

Conceptualizing, Creating, & Controlling Constructive and Controversial Comments: A CSCW Research-athon

Image credit: https://www.sott.net/image/s19/380111/full/438753_how_to_be_a_jerk_in_in…