View this email in your browser
Issue #119
Newsletter
The risks and opportunities of AI

Generative AI & Work-slop

As I write this, news has emerged that Deloitte Australia have been caught red-handed using a large language model to produce a $439,000 government report that cited academic studies that don't exist and referenced a made-up court judgement. After months of denials, they're now refunding part of the fee.

The ultimate irony? This was a report intended to examine the failures of robodebt's successor system, part of a saga that remains one of the world's worst cases of automated injustice.

This is textbook "work-slop": what Stanford researchers call AI-generated content that looks professional and sounds confident, but ultimately has no substance and wastes everyone's time. According to their study, 40% of workers received work-slop in the last month, at a cost of nearly two hours per incident to fix.

There are any number of ethical concerns around the use of LLMs, from the power needed to keep them running through to their various applications across sectors and society. But knowledge workers using them to shortcut 'work products' is absolutely one of the drivers of their incredible uptake.

Everyone has a choice to make in the use of these tools. We can be pilots—directing them purposefully towards valuable outcomes—or passengers, hitting enter on prompts and passing along the mess.

But before we even get to using these tools, we need to create space for communities to shape how they should and shouldn't be used. This month's newsletter presents two different approaches to ethical AI engagement.

The first shows how we're working with communities to explore facial recognition technology through speculative scenarios and playful simulations—not to implement the tech, but to gather public input that informs protections and sets the guardrails. It's about creating conversations, ensuring those affected have a voice in how these systems might impact their lives.

The second explores how AI image generation tools can be "piloted" by designers to jumpstart creative exploration and support collaboration. Not as a replacement for human work, but to enable faster learning and shared discovery.

Both examples share a commitment to transparency and human oversight. Whether we're facilitating public dialogue about AI ethics or using AI tools in our practice, the difference between work-slop and meaningful work isn't the technology—it's the intention, oversight, and mindset we bring to it.

Chris Marmo
Chief Executive Officer

Enabling conversations about the ethics of facial recognition technology

We partnered with the UTS Centre for Social Justice & Inclusion to create an interactive tool that invites the public into the debate around facial recognition technology.

Through speculative scenarios and playful simulations, users experience how facial recognition might be used or misused in everyday contexts. The application collects community feedback to inform a model law that aims to protect rights, promote accountability, and spark broader public conversation.

Designed for impact:

Visually rich, user-friendly interfaces

Thought-provoking, shareable experiences

Privacy-conscious data collection for future research

The tool premiered at Vivid Sydney, helping bring complex legal and ethical ideas into the public spotlight.

Learn how thoughtful design can shape public debate
Read the full case study

The Disruptive Potential of AI Image Generators for Design Professionals

From visualising abstract ideas to exploring stylistic possibilities, AI image tools like Midjourney and DALL-E are starting to change the way designers think, sketch and share ideas.

In our latest article, we explore how these tools can:

Jumpstart creative exploration with fast, generative prompts

Speculate playfully on alternative design histories and aesthetics

Support collaboration by visualising multiple options quickly

Spark learning across the studio through shared experiments

While the hype is real, so are the challenges: copyright, environmental impact, and the limits of the tech itself. But with care, these tools might help us expand what creative work can look like.

Read the full article

Stay Connected

We share ideas, reflections and project stories that explore how design can shape a more just, equitable and sustainable world.

Follow us for regular updates:

LinkedIn | Instagram | Facebook | Twitter

Or head to papergiant.net to explore our latest articles and case studies.

We’d love to hear from you.

Got a question, idea, or reflection? Just hit reply — we read every message.

We are a strategic design consultancy that helps organisations deliver better products, services and policy.
LinkedIn
Instagram
Paper Giant acknowledges the Wurundjeri and Boonwurrung people of the Kulin nation, and the Ngunnawal people as the traditional owners of the lands on which our offices are located, and the traditional owners of country on which we meet and work throughout Australia. We recognise that sovereignty over the land has never been ceded, and pay our respects to Elders past, present and emerging.
Copyright © *|CURRENT_YEAR|* *|LIST:COMPANY|*, All rights reserved.
*|IFNOT:ARCHIVE_PAGE|* *|LIST:DESCRIPTION|*

Our mailing address is:
*|HTML:LIST_ADDRESS_HTML|* *|END:IF|*

Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.

*|IF:REWARDS|* *|HTML:REWARDS|* *|END:IF|*