
Umibot chatbot

Image-Based Abuse Project Umibot

RMIT University

Safe, trauma-informed guidance when it matters most.

The brief

RMIT’s research team, led by Professor Nicola Henry, wanted to explore how an AI-enabled chatbot could help people experiencing image-based abuse access clear information and appropriate support pathways.

The brief was inherently multi-audience. Given differences in laws and reporting pathways, the chatbot needed to support:

  • people who had experienced image-based abuse

  • their support networks and bystanders

  • perpetrators seeking guidance

  • both adults and minors

The challenge

This was a high-sensitivity domain where safety, accuracy and predictability mattered as much as usability.

Key challenges included:

  • Response relevance depends on who is asking.
    The same question can require a different response depending on age and role (self, supporting someone else, or concerned about something they have done).

  • The experience needed to stay “on script”.
    Because of the nature of queries, responses needed to be controlled and auditable. This was also the early period of large language model adoption, and we could not risk generative replies drifting into unsafe or inaccurate territory.

  • User wellbeing needed to be actively protected.
    We had to anticipate distress, urgency, and situations where someone may need to leave the site quickly.

  • Privacy constraints shaped everything.
    The platform needed to avoid capturing personal information or recording conversations, while still allowing the research team to learn how the service was being used at an anonymised, aggregate level.
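To make that last point concrete, here is a minimal sketch of the aggregate-only measurement idea: counting which support pathways are used, without storing utterances, identifiers or transcripts. The class and event names are illustrative, not the production schema.

```python
# Sketch: record only which intent fired on which day - nothing else.
# All names here are illustrative assumptions, not the real system.
from collections import Counter
from datetime import date

class AnonymousUsageMetrics:
    """Aggregate-only usage counts; no text, no identity, no transcripts."""

    def __init__(self):
        self._counts = Counter()

    def record(self, intent_name: str) -> None:
        # Deliberately ignores the user's words and any session identity;
        # only the (day, intent) pair is incremented.
        self._counts[(date.today().isoformat(), intent_name)] += 1

    def daily_report(self) -> dict:
        """Aggregate counts the research team can review safely."""
        return dict(self._counts)

metrics = AnonymousUsageMetrics()
metrics.record("FindSupportServices")
metrics.record("FindSupportServices")
metrics.record("ReportingOptions")
print(metrics.daily_report())
```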

Choosing the right platform

Given the requirements of the project, the UX and Development team spent significant time researching available AI-enabled platforms to find the right fit. Ultimately we landed on Amazon Lex for both UX and development reasons. Lex enabled:

  • hosting in Australia, which was important given privacy requirements and the sensitive nature of the subject matter

  • a chatbot experience not tied to any social platform login

  • future flexibility via integration options with channels like Facebook or Slack

Most importantly, Lex gave us tight control over intent matching and response behaviour, allowing us to define which keywords and phrases map to which responses. That control was a deliberate safety decision: we needed consistent, predictable answers and could not risk the bot going off-script.
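As a rough illustration of that control, the sketch below shows how an intent with fixed sample utterances and a fixed, pre-approved reply can be defined in Amazon Lex V2 via boto3. The bot ID, locale, intent name and wording are placeholders, not the project’s actual configuration.

```python
# Sketch: pin an intent to fixed utterances and a fixed closing response
# in Amazon Lex V2. Placeholder IDs and wording throughout.
import boto3

lex = boto3.client("lexv2-models", region_name="ap-southeast-2")  # Sydney hosting

lex.create_intent(
    botId="EXAMPLEBOTID",   # placeholder
    botVersion="DRAFT",
    localeId="en_AU",
    intentName="ReportingOptions",
    description="Fixed guidance on reporting pathways",
    # Only these phrasings (and close variants Lex derives) match the intent.
    sampleUtterances=[
        {"utterance": "How do I report an image"},
        {"utterance": "Can I get a photo taken down"},
    ],
    # A closed, pre-approved response: no generative text involved.
    intentClosingSetting={
        "closingResponse": {
            "messageGroups": [
                {"message": {"plainTextMessage": {
                    "value": "Here are the reporting options available to you..."
                }}}
            ]
        }
    },
)
```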

Giving the right info to the right people

The first step was acknowledging that a single “one size fits all” conversation flow would not work. We needed a way to confidently segment people into the right support pathway before the bot began answering in detail.

Working in conjunction with our lead copywriter and the RMIT team, we mapped conversation flows that segmented users by age, role, and topic of interest.

To support this segmentation, we needed to identify which audience group a person belonged to without any ambiguity. I designed an approach for establishing audience membership reliably within a conversational interface (a sketch of the routing logic follows this list):

  • A front-loaded qualifying conversation tree: the earliest interactions offered only fixed options for the questions that determine audience group.

  • This ensured users could be routed to the most relevant pathway early, instead of relying on ambiguous free-text interpretation.

  • Once routed, the bot would then support natural language questions, but within a contextually appropriate “lane”.
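A minimal sketch of that routing logic, assuming illustrative option labels and lane names (the real flow was considerably richer):

```python
# Sketch: front-load fixed-choice questions before any free text is
# accepted. Option labels and lane names are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Question:
    prompt: str
    options: tuple  # closed set of answers - no free text at this stage

AGE = Question("Are you under 18?", ("Yes", "No"))
ROLE = Question(
    "Which best describes you?",
    ("This happened to me",
     "I'm supporting someone else",
     "I'm worried about something I've done"),
)

def route(age_answer: str, role_answer: str) -> str:
    """Map the two qualifying answers to a support lane.

    Because both inputs come from closed option sets, every combination
    is known in advance and maps to exactly one lane.
    """
    audience = "minor" if age_answer == "Yes" else "adult"
    lane = {
        "This happened to me": "victim-survivor",
        "I'm supporting someone else": "bystander",
        "I'm worried about something I've done": "person-of-concern",
    }[role_answer]
    return f"{audience}/{lane}"

# Once routed, free-text questions are interpreted only against the
# intents registered for that lane.
print(route("No", "I'm supporting someone else"))  # -> adult/bystander
```

The point of the design is that every combination of qualifying answers maps deterministically to exactly one lane; free text is only interpreted once that lane is known.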

Design approach

A “trusted advisor” voice

RMIT had already invested in Umibot’s identity and branding. Through workshops with the project team, we translated those values into a clear conversational voice: empathic, factual and reassuring.

To make that voice practical (and repeatable across a large content set), we:

  • developed tone and language principles

  • created sample utterances to demonstrate how to handle sensitive prompts and scenarios

  • applied final copyediting to ensure flows worked within platform constraints, while still sounding human and supportive

Designing for safety and inclusivity

Given the likelihood of distress or risk, we designed guardrails into the experience:

  • trigger words that divert users to immediate support and safety resources (including self-harm and emergency service references)

  • intent verification checks to avoid unnecessary diversions and keep the conversation relevant

  • inclusive language and accessibility considerations, plus dedicated pathways for specific community needs and support services
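A sketch of how the first two guardrails might combine, with an illustrative trigger list and wording (the actual trigger list and diversion flow are not reproduced here):

```python
# Sketch: trigger-word scan followed by a verification check before
# diverting, so incidental matches don't derail the conversation.
# Trigger phrases and messages below are illustrative assumptions;
# 000 and Lifeline (13 11 14) are real Australian services.
TRIGGER_PHRASES = {"hurt myself", "end it", "unsafe right now"}

SAFETY_MESSAGE = (
    "It sounds like you might need urgent support. "
    "If you're in immediate danger, call 000. "
    "You can also reach Lifeline on 13 11 14."
)

def contains_trigger(utterance: str) -> bool:
    text = utterance.lower()
    return any(phrase in text for phrase in TRIGGER_PHRASES)

def handle(utterance: str, confirm_divert) -> str:
    """Divert to safety resources only after verifying intent.

    `confirm_divert` asks the user a closed yes/no question such as
    "Do you need urgent support right now?" so that a quoted or
    incidental phrase doesn't trigger an unnecessary diversion.
    """
    if contains_trigger(utterance) and confirm_divert():
        return SAFETY_MESSAGE
    return "continue normal flow"

print(handle("I just want it to end it all", confirm_divert=lambda: True))
```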

The result

Umibot launched publicly on 1 December 2022.

While we cannot speak about impact on individuals for privacy reasons, usage and attention demonstrated clear demand:

  • RMIT’s impact case study reported the tool supporting over one hundred conversations a month in its first six months.

  • The project received media coverage, including Nine’s parenting site, which highlighted the tool’s role in directing users to relevant services, with tailored support for users under and over 18.

  • The team has signalled intent to develop a Version 2 in the future.

Overall, Umibot is a strong example of how conversational UX can be designed responsibly in a high-risk domain: balancing empathy with accuracy, enabling access to help without forcing disclosure, and building measurement systems that respect privacy while still supporting continuous improvement.


Want to connect?

Let’s chat about how I can help you push your digital experiences to the next level.


Copyright 2025 by Nathan Cocks
