Global iSAFE Hackathon - Inclusive & Secure AI For Everyone

A global, hybrid hackathon to build trustworthy generative AI that defends truth, protects citizens, and promotes online peace.

An Official Pre-Summit Event of the AI Impact Summit 2026

3rd Edition · 120+ Countries · 1500+ Participants · $20K Prize Pool

Why This Matters

Generative AI can create content at scale: images, video, audio, and text. That capability is a force for good, but also a source of new harms: deepfakes, targeted scams, automated disinformation, and model misuse. The iSAFE Hackathon invites the global community to build practical, deployable tools that protect truth, safeguard citizens, and make AI accountable. This is a hands-on competition: prototypes must be demonstrable and ready for pilot testing.

Event Timeline

Your journey to building digital trust

November 20, 2025

Launch

Official launch of the Global iSAFE Hackathon

November 20, 2025 – March 31, 2026

Registration Period

Participants can register individually or as teams; open to global participants

April 1 – April 30, 2026

Shortlisting Phase

Evaluation of submitted ideas/concepts; top 50 entries are shortlisted

May 5, 2026

Finalists Announcement

Official announcement of top 20 teams selected for the Development Round

May 10, 2026

Mentorship Session

Mentorship and guidance session for finalists

May 10 – May 22, 2026

Virtual Presentations

Top 20 finalists present their working solutions or demo videos virtually

July 6 – July 10, 2026

Winner Announcement @ WSIS Forum 2026

Top 10 finalists pitch to jury at WSIS Forum 2026 in Geneva, Switzerland. Live demo, Q&A session, and selection of 3 winners

Competition Tracks

Choose your challenge and innovate

Detect the Deceptive

Tracing Synthetic Realities

Problem Statement

Synthetic content — AI-generated images, deepfakes, cloned voices, and fabricated texts — has blurred the line between truth and deception. The challenge is to design GenAI-powered tools that can detect, authenticate, and trace the origins of such content across platforms.

Core Challenge

Build systems capable of verifying digital media authenticity, providing content provenance, and flagging manipulative or synthetic content in real time.

Expected Outcome

A robust AI-driven verification system capable of identifying and classifying deceptive content with high accuracy.

Expected Deliverables

  • Working prototype with real-time detection accuracy metrics
  • Dataset or model fine-tuned for deepfake/synthetic content classification
  • Demo-ready browser extension or verification dashboard

Ideas to Explore

  • AI Watermark & Provenance Detectors: Verify content authenticity and trace its digital lineage
  • Cross-Platform Deepfake Verifiers: Unified detection API for image, video, and text manipulations
  • AI Truth Lens Browser Plug-ins: Real-time deception detection integrated with browsers and messengers
  • Voice Clone Authenticity Checkers: Distinguish genuine human voices from AI-generated imitations
Tags: Media Forensics · Truth Verification · AI Transparency · Content Provenance
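To make the provenance idea concrete, here is a minimal sketch (function names, fields, and the signing key are illustrative assumptions, not a specified design): a content-provenance record pairs a SHA-256 hash of the media bytes with an HMAC signature, so any downstream edit to either the media or the record breaks verification.

```python
import hashlib
import hmac
import json

def make_provenance_record(media_bytes: bytes, creator: str, secret_key: bytes) -> dict:
    """Build a signed provenance record for a piece of media."""
    content_hash = hashlib.sha256(media_bytes).hexdigest()
    payload = json.dumps({"creator": creator, "sha256": content_hash}, sort_keys=True)
    signature = hmac.new(secret_key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_provenance(media_bytes: bytes, record: dict, secret_key: bytes) -> bool:
    """Return True only if both the media and the record are untampered."""
    expected_sig = hmac.new(secret_key, record["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, record["signature"]):
        return False  # the record itself was altered
    claimed = json.loads(record["payload"])
    return claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest()
```

A real system would replace the shared secret with public-key signatures and an open standard such as C2PA; the sketch only shows the hash-and-sign shape of the idea.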

Defend the Digital Citizen

AI for Protection and Empowerment

Problem Statement

With GenAI accelerating scams, frauds, and misinformation, citizens need AI allies to stay informed and safe. The challenge invites solutions that use Generative AI to proactively educate, guide, and protect users from emerging digital threats.

Core Challenge

Design AI-powered educational or assistive systems that simulate real-world scam scenarios, train users to identify manipulative patterns, and offer personalized protection advice.

Expected Outcome

An AI-powered digital safety ecosystem that helps users recognize and avoid scams and enhances cyber awareness.

Expected Deliverables

  • Working AI assistant or simulation module
  • Learning outcomes or training effectiveness report
  • UX flow demonstrating human–AI safety learning

Ideas to Explore

  • Conversational AI Assistants for cyber hygiene and scam prevention
  • Generative Role-Play Simulators to train users in recognizing phishing or AI manipulation
  • AI Mentors for Vulnerable Groups — children, women, seniors — guiding them toward safe digital practices
Tags: Digital Safety · Scam Detection · Cyber Hygiene · Awareness AI
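As a starting point for the role-play simulator idea, a minimal rule-based sketch (the cue patterns and two-cue threshold are illustrative assumptions) that flags common manipulation signals in a message — the kind of signal a GenAI tutor could then explain back to the learner:

```python
import re

# Illustrative manipulation cues; a real system would learn these, not hard-code them.
SCAM_PATTERNS = {
    "urgency": re.compile(r"\b(urgent|immediately|within 24 hours|act now)\b", re.I),
    "credential_bait": re.compile(r"\b(verify your account|confirm your password|login now)\b", re.I),
    "prize_lure": re.compile(r"\b(you (have )?won|claim your (prize|reward))\b", re.I),
    "suspicious_link": re.compile(r"https?://\S*\b(bit\.ly|tinyurl|[0-9]{1,3}(\.[0-9]{1,3}){3})\b", re.I),
}

def analyze_message(text: str) -> dict:
    """Return which cues fired and a simple risk verdict (two or more cues)."""
    cues = [name for name, pattern in SCAM_PATTERNS.items() if pattern.search(text)]
    return {"cues": cues, "risky": len(cues) >= 2}
```

In a training simulator, the generative side would produce the scam messages and the explanations; this deterministic scorer only illustrates the feedback signal.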

Building the Guardrails of Trustworthy AI

Practical Red-Teaming & Governance

Problem Statement

As Large Language Models (LLMs) and Generative AI tools become more integrated into daily life, they also become potential vectors for misinformation, bias, and misuse. The challenge is to create real, testable safety guardrails that make AI systems safer, transparent, and aligned with human and societal values.

Core Challenge

Develop tools or frameworks that can test, monitor, and govern the behavior of AI models in real-world settings — making them more accountable and predictable.

Expected Outcome

A functional prototype or dashboard demonstrating how AI systems can be tested or audited for safety and alignment.

Expected Deliverables

  • A functional prototype or dashboard demonstrating how AI systems can be tested or audited for safety and alignment
  • Documentation describing the framework, ethical considerations, and use cases
  • Dataset or evaluation results showing how the system improves AI behavior over baseline models

Ideas to Explore

  • AI Red Team Simulator: A sandbox where developers can test LLMs against prompt injection, jailbreaks, or harmful content scenarios
  • Trustworthiness Scorecard: A dashboard that measures model safety, bias, and compliance based on inputs and outputs
  • Ethics Layer Plugin: A middleware tool that sits between user prompts and model responses, filtering unsafe or misleading outputs
  • LLM Audit API: API that logs and explains decisions made by the model to support transparency and regulatory auditing
Tags: AI Safety · Red Teaming · Model Governance · AI Ethics
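The "Ethics Layer Plugin" idea can be sketched as a thin wrapper around any model callable. The blocklist and refusal message below are placeholder assumptions, not a real policy; production guardrails would use trained classifiers rather than keyword matching:

```python
import re
from typing import Callable

# Placeholder policy for demonstration only.
BLOCKED = re.compile(r"\b(build a bomb|steal credentials|bypass 2fa)\b", re.I)
REFUSAL = "Request declined by the ethics layer."

def ethics_layer(model: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap a model so unsafe prompts and outputs are filtered."""
    def guarded(prompt: str) -> str:
        if BLOCKED.search(prompt):
            return REFUSAL  # block the unsafe prompt before the model sees it
        response = model(prompt)
        if BLOCKED.search(response):
            return REFUSAL  # block unsafe model output before the user sees it
        return response
    return guarded

# Usage with a stand-in "model" (an echo function, for demonstration only):
safe_model = ethics_layer(lambda p: f"Echo: {p}")
```

Because the wrapper checks both the prompt and the response, it sits exactly where the track describes: between user input and model output.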

Create for Peace

Generative AI for Social Good and Digital Civility

Problem Statement

Amid growing digital toxicity, misinformation, and polarization, this challenge reimagines Generative AI as a force for empathy, education, and peacebuilding. The goal is to create AI systems that inspire positive behavior and social resilience.

Core Challenge

Harness the creative power of GenAI to educate, unite, and heal — through storytelling, art, and immersive experiences promoting peace and digital ethics.

Expected Outcome

A GenAI-powered peace tech solution that inspires constructive dialogue and strengthens social harmony.

Expected Deliverables

  • Creative GenAI application prototype (text, visual, or immersive)
  • Storyboards or campaign content samples
  • Impact measurement framework for community engagement

Ideas to Explore

  • Generative Storytelling Engines that teach empathy and digital harmony
  • AI Campaigns for Digital Civility — narrative-driven outreach countering hate and misinformation
  • Generative AR/VR Experiences for peace education and AI ethics learning
Tags: Conflict Resolution · Peace Tech · Social Harmony · AI Storytelling

Enhance the CyberPeace Chatbot

Generative & Agentic AI Capabilities

Problem Statement

CyberPeace currently hosts a chatbot designed to promote online safety and responsible digital behavior. The next phase of innovation focuses on transforming this chatbot into an intelligent, context-aware, and proactive digital peace agent through the integration of Generative AI, CyberPeace GPT, and Agentic AI frameworks.

Core Challenge

Design and develop advanced modules that empower the CyberPeace Chatbot to provide real-time, adaptive, and personalized guidance to users on cyber safety, misinformation, mental health, and ethical AI use.

Expected Outcome

A working prototype of the upgraded CyberPeace Chatbot with integrated GenAI + Agentic AI capabilities.

Expected Deliverables

  • A working prototype of the upgraded CyberPeace Chatbot with integrated GenAI + Agentic AI
  • Technical documentation explaining data pipelines, agent design, and ethical safeguards
  • Dashboard for monitoring chatbot activity, safety analytics, and feedback loops
Tags: Chatbot Enhancement · Agentic AI · Digital Peace · AI Agents
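One way to read "agentic" here is intent routing: the chatbot classifies a message and dispatches it to a specialized handler. A minimal sketch, in which the intents, keywords, and handler responses are all illustrative assumptions (a production agent would classify with an LLM, not keyword lists):

```python
# Illustrative intents and handlers, for demonstration only.
INTENT_KEYWORDS = {
    "scam_report": ["scam", "fraud", "phishing"],
    "misinformation": ["fake news", "deepfake", "misinformation"],
    "wellbeing": ["anxious", "bullied", "harassed"],
}

HANDLERS = {
    "scam_report": lambda msg: "Routing to scam-reporting workflow.",
    "misinformation": lambda msg: "Routing to fact-check workflow.",
    "wellbeing": lambda msg: "Routing to support resources.",
}

def route(message: str) -> str:
    """Dispatch a user message to the first matching intent handler."""
    lowered = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return HANDLERS[intent](message)
    return "General cyber-safety guidance."
```

The dashboard deliverable would then log each routing decision, giving the safety analytics and feedback loops the track asks for.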

Registration & Submission

What you need to prepare

Team Formation

  • Open to anyone globally — students, researchers, startups, NGOs, and early-stage companies
  • Maximum team size: 3 members
  • Team-based participation encouraged

Submission Requirements

  • Source code & Repository: Link to project's code repository (GitHub, GitLab) with open-source license
  • Working Demo: Deployed app, Docker container, or hosted instance; alternatively, short video or interactive environment
  • Supporting Material: 3–5 minute video walkthrough explaining features, architecture, and results; screenshots, diagrams, API specs, user guides
  • Team & Contributor Info: Short bio or description for each contributor with pictures; company/organization overview if applicable

Additional Requirements

Clearly define the problem your solution tackles, the proposed AI-driven approach, and the impact it will have on cybersecurity. Highlight how your solution was designed, any unique algorithms or techniques used, and how AI improves upon existing methods.

Create an open-source (or openly distributed) solution that addresses pressing cybersecurity challenges by integrating AI. Include information on the open-source license you're using to ensure others can freely adapt and build upon your work.

Evaluation Framework

How submissions are evaluated

  • Innovation & Technical Depth: 30%
  • Trust & Safety Integration: 25%
  • Societal Impact & Usability: 20%
  • Scalability & Localization (Indic languages included): 15%
  • Presentation, Documentation & Ethics: 10%
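The weights above combine into a single total as a weighted sum. A small sketch (criterion keys and the example scores are illustrative; only the weights come from the rubric):

```python
# Rubric weights from the evaluation framework (must sum to 1.0).
WEIGHTS = {
    "innovation": 0.30,
    "trust_safety": 0.25,
    "impact_usability": 0.20,
    "scalability_localization": 0.15,
    "presentation_ethics": 0.10,
}

def total_score(scores: dict) -> float:
    """Combine per-criterion scores (0-10 each) into a weighted total."""
    assert set(scores) == set(WEIGHTS), "score every criterion exactly once"
    return round(sum(WEIGHTS[c] * scores[c] for c in WEIGHTS), 2)

example = {
    "innovation": 8,
    "trust_safety": 7,
    "impact_usability": 9,
    "scalability_localization": 6,
    "presentation_ethics": 8,
}
```

With the example scores, the weighted total is 0.30·8 + 0.25·7 + 0.20·9 + 0.15·6 + 0.10·8 = 7.65.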

Process

The panel includes AI researchers, CyberPeace leads, industry partners, policy experts, and civil society representatives.

Ideation Round

Submission of concept + flow design. Shortlisting of top 50 ideas

Development Round

Mentorship + prototype building. Submission of working solution or demo video. Shortlisting of top 20 finalists

Grand Finale

Top 10 finalists pitch to jury. Live demo and Q&A session. Selection of 3 winners

Finale Details

  • Top 10 finalists present at WSIS Forum 2026 (July 6-10, 2026) in Geneva, Switzerland
  • Live prototype demonstration required
  • Q&A session with jury panel
  • Selection of 3 winners announced during the Forum

Format: Hybrid. Global online build, a finale in New Delhi with an opportunity to showcase at Bharat Mandapam, and the winner announcement during WSIS Forum 2026 (July 6–10, 2026) in Geneva, Switzerland.

Who Can Participate?

Open to innovators across nations

  • University Students: AI researchers and students
  • CyberPeace Corps: volunteers and innovators
  • Startups: trust & safety, fintech, media tech
  • Defence & Police: academics for red-team/blue-team drills

Proposed Themes

Five powerful tracks to choose from

Detect the Deceptive

Trace, verify, and neutralize synthetic media (images, video, voice, text).

Defend the Digital Citizen

Build proactive GenAI tools that prevent scams, educate vulnerable users, and rebuild trust.

Building the Guardrails of Trustworthy AI

Practical red-teaming, audit, and real-time governance tools for LLMs.

Create for Peace

Generative experiences that teach empathy, counter polarization, and rebuild civic resilience.

Enhance the CyberPeace Chatbot

Integrate Generative AI, CyberPeace GPT, and Agentic AI to make CyberPeace's chatbot an active digital peace agent.

Prizes & Support

Over $20,000 worth of prizes, including cloud credits and technology licenses to accelerate project development for top teams

Cash Prizes

Winners will receive cash prizes and recognition

Cloud Credits

Selected teams can apply for partner cloud credits and technology licenses

Incubation Support

Support for startup incubation to help transform ideas into viable businesses

Pilot Projects

Fast-track pilots with CyberPeace for winning teams

ITU Internships

Exclusive internship opportunities at ITU, offering hands-on experience in AI and cybersecurity

Global Exposure

Global exposure and visibility at the prestigious WSIS Forum 2026, connecting innovators with industry leaders