Last updated: 27 Jan 2026

SocialNLP Research Lab

Highlights

Recent achievements and recognitions from our group.

  • Jan 2026 — 🏆 Usman Naseem won Best Paper at AAAI 2026.
  • Jan 2026 — 🏆 6 papers accepted at WebConf 2026.
  • Jan 2026 — 🏆 6 papers accepted at EACL 2026.
  • Nov 2025 — 🎉 Juan (Ada) Ren’s paper accepted at IEEE Big Data 2025.
  • Nov 2025 — 🌟 Usman Naseem received the Outstanding SAC Award at EMNLP 2025.
  • Oct 2025 — 🎉 Papers accepted: Yiran Zhang (AAAI Demo) and Usman Naseem (LLM2CLIP, Main Track).
  • Sep 2025 — 🎉 Afrozah Nadeem won Best Paper & Best Presentation at CASE @ RANLP 2025.
  • Aug 2025 — 🏆 10 papers accepted at EMNLP 2025 (3 Main, 4 Findings, 2 Industry, 1 Workshop).
  • Aug 2025 — 🎓 Invited tutorials on AI Alignment at ALTA 2025 & AJCAI 2025.

Research Vision

At SocialNLP, we explore how language models can understand, reason, and act in ways that align with human values and social contexts — advancing research at the intersection of AI Alignment, Social Good, and Online Trust & Safety.

Research Areas

AI Alignment

Advancing methods to align LLMs with human values and preferences.

  • Value Alignment — Ensuring models act in ways consistent with human ethics, integrity, and the HHH principles (Helpful, Harmless, Honest), while reflecting diverse cultural and moral perspectives rather than a single worldview.
  • Safety Alignment — Developing defenses against jailbreaks, adversarial attacks, and unsafe content generation.
  • Reasoning Alignment — Promoting logical consistency, factual grounding, and transparent reasoning chains to minimize hallucinations.
  • Cultural Alignment — Embedding diverse moral and social perspectives to ensure global inclusivity and fairness in LLM behavior.

NLP for Social Good

Applying NLP technologies to enhance societal welfare, inclusivity, and equitable access to information across diverse communities.

  • Healthcare NLP — Combating misinformation and supporting public health communication through evidence-grounded LLMs.
  • Low-Resource Languages — Developing inclusive datasets and language tools for under-represented regions.
  • Climate & Crisis Communication — Enabling real-time, multilingual information flow during emergencies and disasters.

Online Trust and Safety

Building transparent, interpretable, and ethically aware AI systems that preserve trust and mitigate online harms at scale.

  • Misinformation Detection — Identifying and explaining false or AI-generated content across text and media.
  • Toxicity & Abuse Prevention — Detecting hate speech, harassment, and coordinated manipulation campaigns.
  • Content Moderation — Aligning automated filters with community guidelines and global cultural norms.
  • Trustworthy Agents — Designing accountable conversational models with auditability and user-level transparency.

People

Faculty Lead

Dr. Usman Naseem

Macquarie University, Australia

HDR @ SocialNLP

PhD Student

Gautam Siddharth Kashyap

Macquarie University, Australia

PhD Student

Kaixuan Ren (Victor)

Macquarie University, Australia

PhD Student

Utsav Maskey

Macquarie University, Australia

PhD Student

Afrozah Nadeem

Macquarie University, Australia

PhD Student

Juan (Ada) Ren

Macquarie University, Australia

PhD Student

Yiran Zhang (Grant)

Macquarie University, Australia

MRes Student

Sean Chan

Macquarie University, Australia

MRes Student

Md Azizul Hoque

Macquarie University, Australia

MRes Student

Kritesh Rauniyar

Macquarie University, Australia

RA @ SocialNLP

Affiliated @ SocialNLP

Graduates @ SocialNLP

Recent Publications

EMNLP 2025
CASE @ RANLP 2025
ACL 2025

Contact

Faculty Lead

Dr. Usman Naseem

Macquarie University, Australia

📧 usman.naseem@mq.edu.au

School of Computing, Macquarie University, Sydney, NSW 2113, Australia