
Dataset Card for Counter-speech effectiveness in CONAN

Dataset Details

We developed a theoretical framework grounded in linguistics, argumentation, rhetoric, and communication theories to evaluate counter-speech effectiveness. The framework covers six key dimensions: Emotional Appeal, Audience Adaptation, Clarity, Evidence, Rebuttal, and Fairness. We use this framework to annotate the six dimensions in two existing datasets: CONAN (Chung et al., 2019) and the Twitter Dataset (Albanyan and Blanco, 2022).

Dataset Description

We select CONAN because it is the first expert-curated dataset of HS/CS pairs and is widely used as a benchmark. It is a multilingual (English, French, Italian) dataset centered on Islamophobia, containing 4,078 expert-annotated HS/CS pairs, expanded to 14,988 pairs through translation and paraphrasing. For our experiments, we retain only the English, non-augmented instances, resulting in 3,847 pairs.

  • Curated by: Greta Damo, Elena Cabrio, Serena Villata
  • Funded by: the French government, through the 3IA Côte d'Azur investments, in the project managed by the National Research Agency (ANR) with the reference number ANR-23-IACL-0001
  • Language(s) (NLP): English
  • License: Inria holds all ownership rights to the code and data contained in this repository (the Software). The Software is still under development. It is Inria's aim for the Software to be used by the scientific community to test and evaluate it, so that Inria may improve it. For these reasons, Inria has decided to distribute the Software. Inria grants the academic user a free-of-charge, non-exclusive right, without the right to sublicense, to use the Software for research purposes. Any other use without the prior consent of Inria is prohibited. The academic user explicitly acknowledges having received from Inria all information allowing them to assess the adequacy of the Software to their needs and to take all necessary precautions for its execution and use. The Software is provided only as an executable file. When using the Software for a publication or for other results obtained through its use, the user should cite the Software. Every user of the Software may communicate their remarks on its use to the developers.

THE USER CANNOT USE, EXPLOIT OR COMMERCIALLY DISTRIBUTE THE SOFTWARE WITHOUT PRIOR AND EXPLICIT CONSENT OF INRIA. ANY SUCH ACTION WILL CONSTITUTE A FORGERY. THIS SOFTWARE IS PROVIDED "AS IS" WITHOUT ANY WARRANTIES OF ANY NATURE AND ANY EXPRESS OR IMPLIED WARRANTIES, WITH REGARD TO COMMERCIAL USE, PROFESSIONAL USE, LEGAL OR NOT, OR OTHER, OR COMMERCIALISATION OR ADAPTATION. UNLESS EXPLICITLY PROVIDED BY LAW, IN NO EVENT SHALL INRIA OR THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Dataset Sources

  • Paper: Effectiveness of Counter-Speech against Abusive Content: A Multidimensional Annotation and Classification Study (arXiv:2506.11919)

Uses

The dataset is made available to the research community to support studies on counter-speech strategies, hate speech mitigation, and online discourse analysis. It can be used to train and evaluate models that detect, classify, or generate counter-speech responses, and to better understand the dynamics between harmful and countering messages online.

Direct Use

This dataset is suitable for:

🧠 Research on counter-speech — studying the effectiveness of different types of counter-speech against online hate or abusive content.

🤖 Machine learning model training and evaluation — for example (a minimal baseline sketch follows this list):

  • Text classification (detecting abusive content vs. counter-speech)
  • Effectiveness scoring (predicting which counter-speech strategies are more effective)
  • Multidimensional annotation analysis (studying linguistic, emotional, and pragmatic features)

🗣️ Computational social science studies — exploring patterns of discourse, hate mitigation, and social interaction in online environments.

💬 Responsible NLP research — for the development of safer and more inclusive language technologies.
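
As a concrete illustration of the machine learning use case above, here is a minimal, hypothetical baseline sketch (not the setup used in the paper): it predicts one annotated dimension, clarity, from the counter-speech text with a TF-IDF and logistic-regression pipeline. The CSV path is a placeholder for a local export of the dataset; the column names follow the field list under Dataset Structure.

# Hypothetical effectiveness-scoring baseline; the path and export format are assumptions.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("conan_cs_effectiveness.csv")  # placeholder local export

# Predict the expert-annotated `clarity` label from the counter-speech text.
X_train, X_test, y_train, y_test = train_test_split(
    df["counterSpeech"], df["clarity"], test_size=0.2, random_state=42
)

model = make_pipeline(TfidfVectorizer(min_df=2), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))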

Out-of-Scope Use

This dataset should not be used for:

  • Generating or amplifying harmful or abusive content, hate speech, or harassment.
  • Training models for content moderation without proper bias and fairness evaluation — as the dataset was designed for research, not deployment in production systems.
  • Profiling or targeting individuals or groups based on their language or opinions.
  • Commercial exploitation that could misrepresent or misuse the data or annotations.

Dataset Structure

The dataset consists of 3,847 expert-curated Hate Speech / Counter-Speech (HS/CS) pairs. Each instance includes both the hateful content and its counter-speech response, annotated along multiple effectiveness dimensions.

Fields:

  • cn_id: Unique counter-speech identifier
  • hateSpeech: The hateful or abusive message
  • counterSpeech: The counter-speech reply
  • hsType: Type of hate speech (e.g., religious, gender-based)
  • hsSubType: More specific subcategory of the hate speech
  • cnType: Type or strategy of counter-speech
  • age, gender, educationLevel: Demographic metadata for the counter-speaker (if available)
  • clarity, evidence, emotional_appeal, rebuttal, audience_adaptation, fairness: Counter-speech effectiveness dimensions annotated by experts
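
A minimal sketch of loading and inspecting these fields with the Hugging Face datasets library is shown below. The repository ID and split name are placeholders (the dataset is gated, so access must be granted first); the field names are the ones listed above.

# Sketch only: the repository ID below is a placeholder, not the actual dataset ID.
from datasets import load_dataset

ds = load_dataset("your-namespace/conan-counter-speech-effectiveness", split="train")  # split name assumed

print(ds.column_names)  # expected to include the fields listed above

example = ds[0]
print(example["hateSpeech"])
print(example["counterSpeech"])
print({dim: example[dim] for dim in (
    "clarity", "evidence", "emotional_appeal",
    "rebuttal", "audience_adaptation", "fairness",
)})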

Dataset Creation

Source Data

The dataset extends and integrates an existing benchmark: the CONAN dataset (Chung et al., 2019).

Data Collection and Processing

CONAN dataset: The first expert-curated multilingual dataset of hate speech and counter-speech (HS/CS) pairs, focused on Islamophobia.

Languages: English, French, Italian.

Size: 4,078 expert-annotated pairs, expanded via translation and paraphrasing to 14,988 pairs.

For our experiments, we retained only the English, non-augmented subset, resulting in 3,847 pairs.
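
A rough sketch of that selection step is given below, assuming the raw CONAN pairs have been loaded into a pandas DataFrame; the language and augmented columns are hypothetical stand-ins for however the original release marks languages, translations, and paraphrases.

import pandas as pd

# Raw CONAN pairs from a local export; the path and flat structure are assumptions.
conan = pd.read_json("CONAN_pairs.json")

# Hypothetical markers: keep English pairs and drop translated/paraphrased ones.
english = conan[conan["language"] == "EN"]
subset = english[~english["augmented"]]

print(len(subset))  # should correspond to the 3,847 pairs described above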

Who are the source data producers?

CONAN data was produced by expert annotators under controlled conditions to model structured counter-speech strategies. The dataset does not include personally identifiable information (PII) beyond the textual content itself; user handles were anonymized when applicable.

Annotations

The dataset is expert-annotated along multiple dimensions of counter-speech effectiveness.

Annotation process

A team of three annotators (with backgrounds in computational linguistics and computer science) developed and refined the annotation guidelines. A pilot study was conducted on 50 pairs from each dataset to iteratively consolidate the guidelines. Two annotation rounds were carried out:

  • After the first round, disagreements were reviewed and the guidelines were updated.
  • A second round achieved stronger agreement and finalized the labels.

Inter-Annotator Agreement (IAA) was measured using:

  • Fleiss's κ for binary labels,
  • Krippendorff's α for categorical labels, and
  • Percent Agreement for "Audience Adaptation" (due to near-perfect consensus).

The final IAA results showed strong agreement across all dimensions, and the remaining portion of the dataset was labeled by one annotator following the agreed-upon guidelines.
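
For reference, here is a toy sketch of the three agreement measures named above, using the statsmodels and krippendorff Python packages; the label matrix is invented purely for illustration and is not the study's data.

import numpy as np
import krippendorff
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Toy binary labels for one dimension: rows are items, columns are the three annotators.
labels = np.array([
    [1, 1, 1],
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 0],
])

# Fleiss' kappa works on per-item category counts.
counts, _ = aggregate_raters(labels)
print("Fleiss' kappa:", fleiss_kappa(counts, method="fleiss"))

# Krippendorff's alpha takes a (raters x items) matrix; nominal level for categorical labels.
print("Krippendorff's alpha:",
      krippendorff.alpha(reliability_data=labels.T, level_of_measurement="nominal"))

# Percent agreement: share of items on which all three annotators gave the same label.
print("Percent agreement:", float(np.mean([len(set(row)) == 1 for row in labels])))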

Who are the annotators?

Three expert annotators affiliated with the research team conducted the labeling. All have prior experience in NLP, annotation, and discourse analysis, and were trained using detailed written guidelines.

Citation

If you use this dataset, please cite the following paper:

BibTeX:

@article{damo2025effectiveness,
  title   = {Effectiveness of Counter-Speech against Abusive Content: A Multidimensional Annotation and Classification Study},
  author  = {Damo, Greta and Cabrio, Elena and Villata, Serena},
  journal = {arXiv preprint arXiv:2506.11919},
  year    = {2025}
}

APA:

Damo, G., Cabrio, E., & Villata, S. (2025). Effectiveness of Counter-Speech against Abusive Content: A Multidimensional Annotation and Classification Study. arXiv preprint arXiv:2506.11919.

📄 Read the paper on arXiv: https://arxiv.org/abs/2506.11919

Dataset Card Author

Greta Damo

Dataset Card Contact

greta.damo@univ-cotedazur.fr
