Tech for good: Three evidence-based insights to build better humanitarian tech

By Adelide Mutinda, Innovation Program Manager at Australian Red Cross

The way we respond to humanitarian crises is evolving quickly. Many in the sector are considering what role technology can play in transforming how we respond to disasters, emergencies, conflict, and climate change.

At Australian Red Cross, we have been exploring what’s possible through our innovation program, the Humanitech Lab. With the support of our founding partner Telstra Foundation, we have collaborated with 13 pioneering organisations to validate a range of innovative projects; completed pilots with Climasens, FloodMapp and MamaTalk; and are currently conducting four new pilot projects while scaling the work of Climasens.

A year ago, we wrote about what we were learning at Red Cross through the Lab about developing tech for good, and we promised to keep sharing the insights we gather along the way about what it takes to pursue ethical, safe and inclusive technology innovation. Here is a follow-up with our latest findings.

The Australian Red Cross Get Prepared app empowers Australians to make important decisions ahead of time by completing their RediPlan. 

Design for cultural safety

While conversations around safety in tech often focus on the ways technologists can reduce digital forms of harm and protect personal information, we believe designing for cultural safety is also critical to reduce harm.

Cultural safety is reflected in shared respect, shared meaning, shared knowledge and a shared experience of learning, living and working together with dignity. Culturally safe spaces allow for diverse histories, identities, and experiences to not just be visible, but also heard, supported and respected. Designing for cultural safety is key to increasing access and inclusion of marginalised and digitally excluded groups.

What does this look like in practice? Cultural safety must be considered from the outset of a project, from the design process across the project life cycle, through to the proposed outcomes and monitoring and evaluation frameworks. It involves exploring how projects resonate with diverse communities and perspectives, building our own capability to moderate conversations with impacted communities, seeing the design through the eyes of all possible users, and ensuring that sufficient engagement has occurred to reflect a diversity of voices, needs and ways of engaging with new technologies.

Designing for cultural safety requires that we ask, understand, and adhere to the ethics and protocols of the communities that we are engaging with, the cultures within which we are working, and the places in which we are operating.

We must also be prepared to confront existing power structures and be willing to challenge our own cultural systems and biases rather than prioritise becoming ‘culturally competent’ in the cultures of others. To achieve cultural safety, we must constantly interrogate both methods (“how”) and motivations (“why”) for engagement.

Mitigate the unintended consequences of the technologies you develop or deploy

How do we mitigate the consequences of something that is unintended or even unknown? This question was recently posed during a panel discussion at the Humanitech Summit 2023. The answer? Expect them.

By anticipating that tech solutions are prone to potential misuses and can cause harm, we can get better at considering the potential consequences and attempt to prevent them from the outset. The goal is not complete (and unrealistic) clairvoyance, but rather reasonable foresight.

To sustain the integrity of our technology solutions, we must rigorously consider unintended consequences from the outset and monitor for them over time so that we can mitigate them. We have previously spoken of designing for the misuse case(s) as one example of how we can be intentional about considering the possible harms of technology and building safeguards against potential abuses and risks.

In addition, we should articulate all the ways that the technology could be abused and cause harm and be willing to ask (and honestly answer) what happens when the solutions we are developing are misused and fall into the wrong hands. From there, we can begin taking action to establish guardrails that mitigate and protect against these harms.

By articulating the different known types of negative impacts from technology on people and communities - from propagation of misinformation, to breaches of privacy and personal information, to amplification of racism and sexism and homophobia, to a host of mental health impacts and damage to the environment - we can begin to develop shared language and build understanding of the possible unintended consequences of our innovations.

We then urgently need to move from asking ‘What happens if?’ questions to ‘What happens when?’ questions. For example:

  • What happens when my technology is misused or falls into the wrong hands?
  • What happens when there are data breaches in my system?
  • What happens when my business model interests don’t align with the best interests of the community?

It is also important to add that identifying and reducing unintended consequences calls for greater humility and acceptance of the unknown; it requires us to take the time to explore what we don't know and actively seek alternative possibilities. To avoid the risk of making wrong assumptions, or of thinking that we, as designers or technologists, know best, we must be willing to put ourselves in the humble position of complete novices before a given user group. For technologists and social innovators, this requires a healthy balance of believing in yourself and your ideas while doubting your current knowledge - a concept termed 'epistemic humility' - and being open to saying, 'I don't know'.

"How do we mitigate the consequences of something that is unintended or even unknown? Expect them."

Recognise your tech’s impact on the planet

We can no longer ignore the planet. Whether we’re a humanitarian, technologist, designer, policy maker, or decision-maker, we all have a part to play in recognising our own impacts on environmental degradation and climate change.

While we advocate for tech that is equitable, responsible and beneficial to humanity, our approach cannot be exclusively human-oriented at the expense of nature and our planet. We must take account of the planet's needs and of non-human life in distress in the wider context of ethics and sustainability in tech design and use. We have to take steps to understand both the social and environmental impacts of technology and proactively address these risks in order to create a sustainable present and future for us all.

While technology has caused many environmental and social problems, it also presents incredible opportunities to address environmental degradation, climate change, food scarcity, waste management, and other pressing global challenges. However, we cannot ignore a fundamental issue: technological advancement has too often assumed healthy natural systems rather than valued and nurtured them. We need to harness the power of technology to drive environmental sustainability forward and to engage and empower governments, companies, communities and individuals to adopt environmentally sustainable practices, policies and business models that renew and restore the health of nature and our planet.

At Australian Red Cross, we are witnessing firsthand the human impacts of a changing climate and extreme weather. In particular, we see the detrimental health and social impacts brought on by more frequent and intense disasters and crises. We are uniquely placed, and have an urgent responsibility, to adapt and innovate so that we can better support communities to prepare for, recover from and adapt to climate-related disasters, ensuring the health and wellbeing of Australians, especially the most vulnerable in our society. Through our work at the Lab with Climasens, a climate intelligence platform that helps identify climate risks and impacts, and AirSeed, which uses drone-planting solutions to support communities impacted by extreme weather events, we are exploring how technology and innovation can be ethically and responsibly integrated with humanitarian responses to enhance efficiencies in service delivery and accelerate disaster recovery and regeneration efforts.

Bruce Grady and the FloodMapp team run through a demonstration of their product.

In conclusion

At Red Cross, we are continuing to evolve our understanding of what it takes to pursue ethical, safe and inclusive technology through Humanitech. We believe that this isn’t the work of one organisation but the work of all of us together, so we continue to urge others to come along on this journey.

If this work resonates with you, these are some of the insights that have shaped our Humanity First principles and the practicalities of pursuing tech for good, which you can read more about here.

Together, we can reimagine a future with technology that is good for both people and the planet.