Last updated: 19 Nov 2025

SocialNLP Research Lab

Highlights

Recent achievements and recognitions from our group.

  • Nov 2025 — 🎉 Congrats to Juan (Ada) Ren for her paper acceptance at IEEE Big Data 2025!
  • Nov 2025 — 🌟 Congrats to Usman Naseem for receiving the Outstanding Senior Area Chair Award at EMNLP 2025!
  • Oct 2025 — 🎉 Congrats to Yiran Zhang for his AAAI 2026 Demo Track paper, and to Usman Naseem for his Main Track paper LLM2CLIP.
  • Sep 2025 — 🎉 Congrats to Afrozah Nadeem for winning Best Paper and Best Presentation at CASE @ RANLP 2025.
  • Aug 2025 — 🏆 10 papers (3 Main, 4 Findings, 2 Industry Track, 1 Workshop) accepted at EMNLP 2025. Congrats to the team!
  • Aug 2025 — 🎓 Two invited tutorials on AI Alignment with Human Values and Preferences to be delivered at ALTA 2025 and AJCAI 2025.

Research Vision

At SocialNLP, we explore how language models can understand, reason, and act in ways that align with human values and social contexts — advancing research at the intersection of AI Alignment, Social Good, and Online Trust & Safety.

Research Areas

AI Alignment

Advancing methods to align LLMs with human values and preferences.

  • Value Alignment — Ensuring models act in ways consistent with human ethics, integrity, and the HHH principles (Helpful, Harmless, Honest).
  • Safety Alignment — Developing defenses against jailbreaks, adversarial attacks, and unsafe content generation.
  • Reasoning Alignment — Promoting logical consistency, factual grounding, and transparent reasoning chains to minimize hallucinations.
  • Cultural Alignment — Embedding diverse moral and social perspectives to ensure global inclusivity and fairness in LLM behavior.

NLP for Social Good

Applying NLP technologies to enhance societal welfare, inclusivity, and equitable access to information across diverse communities.

  • Healthcare NLP — Combating misinformation and supporting public health communication through evidence-grounded LLMs.
  • Low-Resource Languages — Developing inclusive datasets and language tools for under-represented regions.
  • Climate & Crisis Communication — Enabling real-time, multilingual information flow during emergencies and disasters.

Online Trust and Safety

Building transparent, interpretable, and ethically aware AI systems that preserve trust and mitigate online harms at scale.

  • Misinformation Detection — Identifying and explaining false or AI-generated content across text and media.
  • Toxicity & Abuse Prevention — Detecting hate speech, harassment, and coordinated manipulation campaigns.
  • Content Moderation — Aligning automated filters with community guidelines and global cultural norms.
  • Trustworthy Agents — Designing accountable conversational models with auditability and user-level transparency.

People

Faculty Lead


Dr. Usman Naseem

Macquarie University, Australia

HDR @ SocialNLP

PhD Student

Gautam Siddharth Kashyap

Macquarie University, Australia

PhD Student

Kaixuan Ren (Victor)

Macquarie University, Australia

MRes Student

Utsav Maskey

Macquarie University, Australia

MRes Student

Afrozah Nadeem

Macquarie University, Australia

MRes Student

Juan (Ada) Ren

Macquarie University, Australia

MRes Student

Yiran Zhang (Grant)

Macquarie University, Australia

RA @ SocialNLP

Affiliates @ SocialNLP

Recent Publications

  • EMNLP 2025
  • CASE @ RANLP 2025
  • ACL 2025

Contact

Faculty Lead

Dr. Usman Naseem

Macquarie University, Australia

📧 usman.naseem@mq.edu.au

School of Computing, Macquarie University, Sydney, NSW 2113, Australia