Can AI improve how we care for children and young people?
Apr 2025
Written by Kelly Royds
Anyone working in child protection, youth justice, disability services, or out-of-home care knows the weight of documentation. Whether you’re a case manager, youth worker, behaviour support practitioner, or clinical lead, the reports, assessments, and plans you write don’t just fill a file—they shape decisions about a young person’s life. And let’s be real: documentation can feel relentless. Writing case notes, behaviour support plans, risk assessments, and meeting minutes takes up a huge amount of time—time that many professionals wish they could spend more directly supporting young people.
With AI-powered tools like ChatGPT becoming more widely available, there's growing curiosity about whether they could help with the workload: not for handling confidential information about children, young people and families, but for supporting other aspects of the job, like structuring reports, drafting general templates, or highlighting key points in policy updates. However, the risks of using AI in documentation about children, young people and families are serious, and it's critical that professionals know where the boundaries are. Recent cases, like the fabricated information in a Victorian child protection court report, have highlighted just how easily AI can produce misleading or made-up content. Just as concerning, the biggest red flag for privacy regulators is the possibility of workers unknowingly entering sensitive data about children, young people and families into AI tools, which would be a major confidentiality breach.

So, where does that leave us? Could AI be useful in non-client-facing tasks, helping professionals think, plan, and organise more effectively, without the risks of handling personal data?
What is AI and how is it being used?
AI (Artificial Intelligence) refers to computer programs that can generate text, summarise information, or automate tasks based on patterns in data. While AI has existed for years in predictive analytics and automation, the most recent wave of AI tools—called generative AI—can produce human-like text, answer questions, and assist with writing and research.
Some of the most well-known AI tools include:
- ChatGPT (by OpenAI) – A chatbot that generates text-based responses, summarises information, and helps structure documents.
- Microsoft Copilot – An AI-powered writing assistant integrated into Microsoft 365 (Word, Outlook, Teams) to help summarise emails, draft reports, and improve writing clarity.
- Google Gemini (formerly Bard) – A conversational AI tool similar to ChatGPT that integrates with Google tools.
- AI-driven analytics tools – Some organisations are exploring AI to analyse patterns in case data, flag risk factors, or identify trends across services.
These tools are not designed for handling confidential or personal client information—and most sector policies explicitly prohibit inputting private data into them. However, they can be used in ways that support teams and professionals without breaching confidentiality.

AI in documentation: where it gets risky
AI’s use in client-facing documentation—like case notes, behaviour support plans, and court reports—raises major ethical concerns.
Recently in Victoria, a child protection worker used ChatGPT to draft court submissions, only to find that the AI had fabricated information. The inaccuracies were so serious that the Victorian government banned AI use in child protection work, citing concerns about privacy, errors, and bias.
The Victorian Privacy Commissioner has made it clear: the biggest risk is workers inputting personal or sensitive client information into AI tools. Many of these tools store the data entered into them and may use it to train future models, which creates serious privacy risks, especially in child protection, youth justice, and disability settings.
Even without direct privacy breaches, AI isn’t neutral. It can:
- Reinforce bias – AI is trained on existing data, meaning it can unintentionally repeat harmful stereotypes about young people's behaviours.
- Misinterpret trauma – AI can summarise an event, but it can’t understand the emotions, context, or trauma responses behind a young person’s behaviour.
- Create false objectivity – AI-generated text often sounds polished and neutral, but if the underlying data is flawed, it can reinforce harmful narratives in a more subtle way.

So, what’s the right approach?
AI isn’t going away—but neither should human judgement, professional experience, and ethical oversight. The challenge now is finding a balanced approach that makes AI useful without putting young people at risk.
Instead of banning AI outright, the focus should be on:
- AI-assisted, not AI-reliant work – Using AI to generate discussion prompts, summarise policies, or structure internal documents is very different from letting it draft case notes or reports on young people.
- Clear ethical guidelines – Organisations need strong policies on what AI can and can’t be used for, especially when it comes to sensitive or legal documentation.
- Training on AI literacy – Many workers don’t know how AI stores or processes data. Training is essential to ensure privacy isn’t compromised.
- Transparency in AI use – If AI is used in internal documents, training materials, or research summaries, organisations should be upfront about it—but never in ways that involve private client information.
- Applying our existing risk lens – In child protection, out-of-home care, and youth justice, professionals already make constant risk assessments—both formal and informal—across every aspect of their work. That same lens can be applied to AI. Before using a tool, assess the potential risks, implement appropriate safeguards, and monitor use over time. The sector is well practised in evolving alongside new risks; AI is no different.

Final thoughts
AI has the potential to support professionals in non-client-facing work, making teams more efficient, structured, and informed. But its use in client documentation or decision-making carries serious risks that can’t be ignored.
Whether it's structuring a team meeting or summarising research, AI can be a useful tool for the sector—but it should never replace the professional judgement, critical thinking, and ethical responsibility that come with working with young people.
For professionals in child protection, out-of-home care, and youth justice, the key question is:
Are we using AI to strengthen our practice—or are we letting it shape the way we see young people?
What do you think? Is AI helping or harming the sector?
Keen to learn more about how AI is impacting our sector? Check out the following links:
ICMEC Australia: A discussion paper on AI and child protection in Australia
Voluntary AI Safety Standard – Helping businesses be safe and responsible when using AI.
Australian Government: Guidance on privacy and the use of commercially available AI products