As AI becomes an integral part of humanitarian operations, ensuring trust in these tools is paramount. This session will delve into the ethical considerations, transparency measures, and inclusive design principles needed to build and maintain trust in AI-powered systems. By prioritizing human-centered approaches, organizations can mitigate risks and maximize the potential of these technologies. In the face of rapidly evolving crises, humanitarian organizations require robust tools to make sense of vast and fragmented data streams. We will focus on GANNET, a suite of cutting-edge AI tools designed to empower humanitarian workers with real-time, tailored insights. Built to alleviate the strain of information overload, GANNET enables teams to rapidly synthesize data and make informed decisions under tight deadlines. Designed for accessibility and scalability, it empowers decision-makers at every level to act swiftly and effectively. This session will demonstrate how GANNET adapts to diverse operational contexts, bridging the gap between analysis and action.
Drawing on lessons learned from GANNET’s development, we will share strategies for fostering accountability and collaboration across stakeholders. Attendees will explore the platform’s unique features through an in-depth look at its architecture, showcasing how it integrates AI with humans in the loop. We will share real-world applications, highlighting how GANNET SituationHub has supported more targeted interventions in protracted and sudden-onset emergencies such as those in Lebanon and Sudan. Attendees will leave with actionable recommendations for embedding trust and equity into the design and deployment of AI systems, ensuring they serve as enablers of positive change. Join us to see how AI can augment—not replace—human expertise in the field.