
Dataset Card for Counter-speech effectiveness in Twitter Dataset

Dataset Details

We developed a theoretical framework grounded in linguistics, argumentation, rhetoric, and communication theories to evaluate counter-speech effectiveness. The framework includes six key dimensions: Emotional Appeal, Audience Adaptation, Clarity, Evidence, Rebuttal, and Fairness. We use this framework to annotate the six dimensions on two existing datasets: CONAN (Chung et al., 2019) and the Twitter Dataset (Albanyan and Blanco, 2022).

Dataset Description

The Twitter Dataset is a real-world dataset containing 5,652 hateful tweets and replies obtained from social media (Twitter/X), capturing the brevity and style typical of online discourse. We focus on the subset labeled as counter-speech, where a clear target is identifiable, yielding 367 HS/CS pairs.

  • Curated by: Greta Damo, Elena Cabrio, Serena Villata
  • Funded by: the French government, through the 3IA Côte d'Azur investments in the project managed by the National Research Agency (ANR) under reference number ANR-23-IACL-0001
  • Language(s) (NLP): English
  • License: Inria holds all the ownership rights on the codes and data contained in this repository (the Software). The Software is still currently being developed. It is Inria's aim for the Software to be used by the scientific community so as to test and evaluate it, so that Inria may improve it. For these reasons Inria has decided to distribute the Software. Inria grants the academic user a free-of-charge, non-exclusive right, without right to sublicense, to use the Software for research purposes. Any other use without prior consent of Inria is prohibited. The academic user explicitly acknowledges having received from Inria all information allowing him to appreciate the adequacy between the Software and his needs and to undertake all necessary precautions for its execution and use. The Software is provided only as an executable file. In case of using the Software for a publication or other results obtained through the use of the Software, the user should cite the Software as follows : Every user of the Software could communicate to the developers his or her remarks as to the use of the Software.

THE USER CANNOT USE, EXPLOIT OR COMMERCIALLY DISTRIBUTE THE SOFTWARE WITHOUT PRIOR AND EXPLICIT CONSENT OF INRIA. ANY SUCH ACTION WILL CONSTITUTE A FORGERY. THIS SOFTWARE IS PROVIDED "AS IS" WITHOUT ANY WARRANTIES OF ANY NATURE AND ANY EXPRESS OR IMPLIED WARRANTIES, WITH REGARDS TO COMMERCIAL USE, PROFESSIONAL USE, LEGAL OR NOT, OR OTHER, OR COMMERCIALISATION OR ADAPTATION. UNLESS EXPLICITLY PROVIDED BY LAW, IN NO EVENT SHALL INRIA OR THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Dataset Sources

The hate speech / counter-speech pairs are drawn from the Twitter dataset introduced by Albanyan and Blanco (2022).

Uses

The dataset is made available to the research community to support studies on counter-speech strategies, hate speech mitigation, and online discourse analysis. It can be used to train and evaluate models that detect, classify, or generate counter-speech responses, and to better understand the dynamics between harmful and countering messages online.

Direct Use

This dataset is suitable for:

🧠 Research on counter-speech — studying the effectiveness of different types of counter-speech against online hate or abusive content.

🤖 Machine learning model training and evaluation — for example:

  • Text classification (detecting abusive content vs. counter-speech)
  • Effectiveness scoring (predicting which counter-speech strategies are more effective)
  • Multidimensional annotation analysis (studying linguistic, emotional, and pragmatic features)

🗣️ Computational social science studies — exploring patterns of discourse, hate mitigation, and social interaction in online environments.

💬 Responsible NLP research — for the development of safer and more inclusive language technologies.
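As a deliberately naive illustration of the text-classification use case, one could flag replies that contain rebuttal cues. The cue list and function below are invented for this card, not part of the dataset, the paper, or any recommended model:

```python
# Toy cue-word heuristic for spotting rebuttal-style counter-speech.
# The cue list is an illustrative assumption, not derived from the data.
REBUTTAL_CUES = ("actually", "in fact", "that's false", "not true", "evidence")

def looks_like_rebuttal(reply: str) -> bool:
    """Return True if the reply contains any of the rebuttal cue phrases."""
    lowered = reply.lower()
    return any(cue in lowered for cue in REBUTTAL_CUES)

print(looks_like_rebuttal("Actually, the statistics say otherwise."))  # → True
print(looks_like_rebuttal("I agree completely."))                      # → False
```

A real system would of course replace this heuristic with a trained classifier evaluated for bias and fairness, as the Out-of-Scope section below stresses.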

Out-of-Scope Use

This dataset should not be used for:

  • Generating or amplifying harmful or abusive content, hate speech, or harassment.
  • Training models for content moderation without proper bias and fairness evaluation — as the dataset was designed for research, not deployment in production systems.
  • Profiling or targeting individuals or groups based on their language or opinions.
  • Commercial exploitation that could misrepresent or misuse the data or annotations.

Dataset Structure

This dataset contains hate speech instances collected from Twitter/X, representing real-world examples of hateful messages and their corresponding counter-speech replies. Each entry links a hateful tweet to its counter-speech reply and includes detailed annotations for counter-speech effectiveness dimensions.

Fields:

  • number_id: Row index or unique sample identifier
  • Hateful Tweet ID: Unique identifier of the original hateful tweet
  • Reply Tweet ID: Unique identifier of the counter-speech reply tweet
  • counter_speech: Text of the counter-speech reply
  • Q2, Q3, Q4: Intermediate annotation columns (from the original dataset), encoding counter-speech metadata
  • target: The target group or individual against whom the hate speech was directed
  • clarity: Whether the counter-speech message is linguistically clear and easy to understand
  • evidence: Degree to which the counter-speech provides factual information, reasoning, or supporting arguments
  • emotional_appeal: Presence of emotional tone or empathy used in the counter-speech
  • rebuttal: Whether the counter-speech directly challenges or refutes the hateful statement
  • audience_adaptation: Whether the counter-speech is tailored to its audience (e.g., aggressor vs. bystanders)
  • fairness: Whether the counter-speech maintains a fair, respectful, and non-hostile tone

Dataset Creation

Source Data

The dataset extends and integrates an existing benchmark: the Twitter dataset (Albanyan and Blanco, 2022).

Data Collection and Processing

The Twitter dataset consists of 5,652 hateful tweets and replies collected from the social media platform Twitter/X, representing authentic, user-generated online discourse. Only tweet IDs are publicly available due to Twitter’s privacy policy, but the full data (including text) was obtained directly from the original authors. For this project, we focused on the subset explicitly labeled as counter-speech, where a clear target of hate speech could be identified. This filtering produced 367 Hate Speech / Counter-Speech pairs used for annotation and analysis. Compared to the expert-curated CONAN dataset, the Twitter subset captures the brevity, informality, and spontaneity typical of online interactions, offering a valuable real-world complement to expert-written counter-speech examples. All tweets were cleaned, anonymized, and standardized to remove URLs, mentions, and metadata that could reveal user identity.
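The cleaning step described above (removing URLs, mentions, and identity-revealing metadata) can be sketched with simple regular expressions. The patterns and the `@USER` placeholder token are illustrative assumptions, not the authors' actual pipeline:

```python
import re

def anonymize_tweet(text: str) -> str:
    """Mask user mentions and strip URLs, as an illustration of the
    anonymization described in this card."""
    # Replace @mentions with a neutral placeholder token
    text = re.sub(r"@\w+", "@USER", text)
    # Remove http(s) URLs entirely
    text = re.sub(r"https?://\S+", "", text)
    # Collapse the whitespace left behind by the removals
    return re.sub(r"\s+", " ", text).strip()

print(anonymize_tweet("@alice check https://t.co/xyz now"))  # → "@USER check now"
```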

Who are the source data producers?

The original tweets and replies were written by public Twitter/X users, representing spontaneous online exchanges between individuals. To ensure compliance with privacy and ethical standards:

  • Only publicly available tweets were used.
  • Personally identifiable information (usernames, mentions, URLs) was removed or masked.
  • No manual reconstruction of deleted content was performed.

The dataset reflects natural online behavior and the diversity of language and tone in social media counter-speech, but does not include demographic information about the users.

Annotations

The dataset is expert-annotated along multiple dimensions of counter-speech effectiveness.

Annotation process

A team of three annotators (with backgrounds in computational linguistics and computer science) developed and refined the annotation guidelines. A pilot study was conducted on 50 pairs from each dataset to iteratively consolidate the guidelines. Two annotation rounds were carried out:

  • After the first round, disagreements were reviewed and the guidelines updated.
  • A second round achieved stronger agreement and finalized the labels.

Inter-Annotator Agreement (IAA) was measured using:

  • Fleiss’s κ for binary labels,
  • Krippendorff’s α for categorical labels, and
  • Percent Agreement for “Audience Adaptation” (due to near-perfect consensus).

The final IAA results showed strong agreement across all dimensions, and the remaining portion of the dataset was labeled by a single annotator following the agreed-upon guidelines.
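For reference, Fleiss’s κ can be computed from an items × categories matrix of annotator counts. This is a generic textbook implementation with made-up example labels, not the authors' evaluation script or their actual agreement figures:

```python
def fleiss_kappa(counts):
    """Fleiss's kappa.

    counts[i][j] = number of annotators who assigned category j to item i.
    Every item must be rated by the same number of annotators.
    """
    n_items = len(counts)
    n_raters = sum(counts[0])
    # Observed agreement per item, then averaged
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    p_bar = sum(p_i) / n_items
    # Expected agreement from the marginal category proportions
    n_cats = len(counts[0])
    totals = [sum(row[j] for row in counts) for j in range(n_cats)]
    p_j = [t / (n_items * n_raters) for t in totals]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Three annotators, four items, two categories (e.g. rebuttal yes/no)
labels = [[3, 0], [0, 3], [3, 0], [2, 1]]
print(round(fleiss_kappa(labels), 3))  # → 0.625
```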

Who are the annotators?

Three expert annotators affiliated with the research team conducted the labeling. All have prior experience in NLP, annotation, and discourse analysis, and were trained using detailed written guidelines.

Citation

If you use this dataset, please cite the following paper:

BibTeX:

@article{damo2025effectiveness,
  title   = {Effectiveness of Counter-Speech against Abusive Content: A Multidimensional Annotation and Classification Study},
  author  = {Damo, Greta and Cabrio, Elena and Villata, Serena},
  journal = {arXiv preprint arXiv:2506.11919},
  year    = {2025}
}

APA:

Damo, G., Cabrio, E., & Villata, S. (2025). Effectiveness of Counter-Speech against Abusive Content: A Multidimensional Annotation and Classification Study. arXiv preprint arXiv:2506.11919.

📄 Read the paper on arXiv: arXiv:2506.11919

Dataset Card Author

Greta Damo

Dataset Card Contact

greta.damo@univ-cotedazur.fr
