Global iSAFE Hackathon - Inclusive & Secure AI For Everyone

iSAFE Hackathon

A global, hybrid hackathon to build trustworthy generative AI that defends truth, protects citizens, and promotes online peace.

An Official Pre-Summit Event of the AI Impact Summit 2026

36 Hours · 20 Max Teams · 5 Tracks · 60 Participants

Why This Matters

Generative AI can create content at scale — images, video, audio, and text. That capability is both a force for good and a source of new harms: deepfakes, targeted scams, automated disinformation, and model misuse. The iSAFE Hackathon invites the global community to build practical, deployable tools that protect truth, safeguard citizens, and make AI accountable. This is a hands-on competition: prototypes must be demonstrable and ready for pilot testing.

Event Timeline

Your journey to building digital trust

November 20, 2025

Launch

Official launch of the Global iSAFE Hackathon

November 20, 2025 – January 15, 2026

Registration Period

Participants may register individually or as teams; registration is open worldwide

January 15–25, 2026

Shortlisting Phase

Evaluation of submitted ideas/concepts; top entries are shortlisted for the finale

January 27, 2026

Finalists Announcement

Official announcement of teams selected for the Grand Finale

February 8–9, 2026

Grand Finale

36 hours of continuous development in New Delhi, with mentorship and guidance from the jury

February 10, 2026

Showcase @ Bharat Mandapam

Presentation of final solutions at Bharat Mandapam during the CyberPeace Summit, followed by recognition and awards

Competition Tracks

Choose your challenge and innovate

Detect the Deceptive

Tracing Synthetic Realities

Problem Statement

Synthetic content — AI-generated images, deepfakes, cloned voices, and fabricated texts — has blurred the line between truth and deception. The challenge is to design GenAI-powered tools that can detect, authenticate, and trace the origins of such content across platforms.

Core Challenge

Build systems capable of verifying digital media authenticity, providing content provenance, and flagging manipulative or synthetic content in real time.

Expected Outcome

A robust AI-driven verification system capable of identifying and classifying deceptive content with high accuracy.

Deliverables

  • AI Watermark & Provenance Detectors: Verify content authenticity and trace its digital lineage
  • Cross-Platform Deepfake Verifiers: Unified detection API for image, video, and text manipulations
  • AI Truth Lens Browser Plug-ins: Real-time deception detection integrated with browsers and messengers
  • Voice Clone Authenticity Checkers: Distinguish genuine human voices from AI-generated imitations
Media Forensics · Truth Verification · AI Transparency · Content Provenance
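As a rough sketch of what the "Cross-Platform Deepfake Verifier" deliverable could look like, the snippet below routes media by type to per-modality detectors behind one API. Every name, the stub detector, and the 0.8 threshold are illustrative assumptions, not a prescribed design — real entries would call trained models returning calibrated probabilities.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    media_type: str         # "image", "video", "audio", or "text"
    synthetic_score: float  # 0.0 = likely authentic, 1.0 = likely synthetic
    label: str              # human-readable classification

def _stub_detector(payload: bytes) -> float:
    """Placeholder for a trained per-modality detector model."""
    return 0.5  # a real detector would return a calibrated probability

# One detector per modality, all exposed through a single verify() call
DETECTORS = {
    "image": _stub_detector,
    "video": _stub_detector,
    "audio": _stub_detector,
    "text": _stub_detector,
}

def verify(media_type: str, payload: bytes, threshold: float = 0.8) -> Verdict:
    """Route media to the matching detector and classify the result."""
    if media_type not in DETECTORS:
        raise ValueError(f"unsupported media type: {media_type}")
    score = DETECTORS[media_type](payload)
    label = "likely synthetic" if score >= threshold else "no manipulation detected"
    return Verdict(media_type=media_type, synthetic_score=score, label=label)
```

The single-entry-point design is what makes the verifier "unified": browser plug-ins, messenger bots, and watermark checkers can all call the same `verify()` regardless of modality.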

Defend the Digital Citizen

AI for Protection and Empowerment

Problem Statement

With GenAI accelerating scams, frauds, and misinformation, citizens need AI allies to stay informed and safe. The challenge invites solutions that use Generative AI to proactively educate, guide, and protect users from emerging digital threats.

Core Challenge

Design AI-powered educational or assistive systems that simulate real-world scam scenarios, train users to identify manipulative patterns, and offer personalized protection advice.

Expected Outcome

An AI-powered digital safety ecosystem that helps users recognize and avoid scams and enhances cyber awareness.

Deliverables

  • Conversational AI Assistants for cyber hygiene and scam prevention
  • Generative Role-Play Simulators to train users in recognizing phishing or AI manipulation
  • AI Mentors for Vulnerable Groups — children, women, seniors — guiding them toward safe digital practices
Digital Safety · Scam Detection · Cyber Hygiene · Awareness AI
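The "Generative Role-Play Simulator" deliverable could start as a quiz-style drill that shows users a mix of genuine and scam messages and scores their judgment. The sample messages below are invented for illustration; in a full build, a GenAI model would generate fresh scenarios each session.

```python
# (message, is_scam) pairs; a GenAI model would generate fresh ones per session.
TRAINING_SET = [
    ("Your bank: share your OTP to unblock your account.", True),
    ("Your parcel is out for delivery today.", False),
    ("You won a lottery! Pay a small fee to claim it.", True),
    ("Reminder: your library book is due Friday.", False),
]

def run_drill(answers: list[bool]) -> tuple[int, int]:
    """Score a user's scam/not-scam answers against the training set, in order.

    Returns (number correct, total items).
    """
    correct = sum(
        1 for (_msg, is_scam), answer in zip(TRAINING_SET, answers)
        if answer == is_scam
    )
    return correct, len(TRAINING_SET)
```

A personalized trainer would then generate follow-up scenarios targeting whichever manipulation patterns the user misclassified.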

Building the Guardrails of Trustworthy AI

Practical Red-Teaming & Governance

Problem Statement

As Large Language Models (LLMs) and Generative AI tools become more integrated into daily life, they also become potential vectors for misinformation, bias, and misuse. The challenge is to create real, testable safety guardrails that make AI systems safer, transparent, and aligned with human and societal values.

Core Challenge

Develop tools or frameworks that can test, monitor, and govern the behavior of AI models in real-world settings — making them more accountable and predictable.

Expected Outcome

A functional prototype or dashboard demonstrating how AI systems can be tested or audited for safety and alignment.

Deliverables

  • AI Red Team Simulator: A sandbox where developers can test LLMs against prompt injection, jailbreaks, or harmful content scenarios
  • Trustworthiness Scorecard: A dashboard that measures model safety, bias, and compliance based on inputs and outputs
  • Ethics Layer Plugin: A middleware tool that sits between user prompts and model responses, filtering unsafe or misleading outputs
  • LLM Audit API: API that logs and explains decisions made by the model to support transparency and regulatory auditing
AI Safety · Red Teaming · Model Governance · AI Ethics
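The "Ethics Layer Plugin" and "LLM Audit API" deliverables could be prototyped together as a thin middleware that screens both the prompt and the response and logs every decision. A minimal sketch, assuming a pluggable model callable and a keyword filter that is purely illustrative — a production system would use trained safety classifiers, not keyword lists:

```python
import json
import time
from typing import Callable

UNSAFE_PATTERNS = ["ignore previous instructions", "build a weapon"]  # illustrative only

def ethics_layer(model: Callable[[str], str], prompt: str, audit_log: list) -> str:
    """Screen the prompt, call the model, screen the response, and log the decision."""
    entry = {"ts": time.time(), "prompt": prompt, "action": "allowed"}
    if any(p in prompt.lower() for p in UNSAFE_PATTERNS):
        entry["action"] = "blocked_prompt"
        audit_log.append(entry)
        return "Request declined by safety policy."
    response = model(prompt)
    if any(p in response.lower() for p in UNSAFE_PATTERNS):
        entry["action"] = "blocked_response"
        audit_log.append(entry)
        return "Response withheld by safety policy."
    audit_log.append(entry)
    return response

def export_audit(audit_log: list) -> str:
    """The same log doubles as the transparency record an audit API would expose."""
    return json.dumps(audit_log, indent=2)
```

Because the layer sits between user and model, the same log that enforces the policy also explains each allow/block decision for regulators — one mechanism serving both deliverables.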

Create for Peace

Generative AI for Social Good and Digital Civility

Problem Statement

Amid growing digital toxicity, misinformation, and polarization, this challenge reimagines Generative AI as a force for empathy, education, and peacebuilding. The goal is to create AI systems that inspire positive behavior and social resilience.

Core Challenge

Harness the creative power of GenAI to educate, unite, and heal — through storytelling, art, and immersive experiences promoting peace and digital ethics.

Expected Outcome

A GenAI-powered peace tech solution that inspires constructive dialogue and strengthens social harmony.

Deliverables

  • Generative Storytelling Engines that teach empathy and digital harmony
  • AI Campaigns for Digital Civility — narrative-driven outreach countering hate and misinformation
  • Generative AR/VR Experiences for peace education and AI ethics learning
Conflict Resolution · Peace Tech · Social Harmony · AI Storytelling

Enhance the CyberPeace Chatbot

Generative & Agentic AI Capabilities

Problem Statement

CyberPeace currently hosts a chatbot designed to promote online safety and responsible digital behavior. The next phase of innovation focuses on transforming this chatbot into an intelligent, context-aware, and proactive digital peace agent through the integration of Generative AI, CyberPeace GPT, and Agentic AI frameworks.

Core Challenge

Design and develop advanced modules that empower the CyberPeace Chatbot to provide real-time, adaptive, and personalized guidance to users on cyber safety, misinformation, mental health, and ethical AI use.

Expected Outcome

A working prototype of the upgraded CyberPeace Chatbot with integrated GenAI + Agentic AI capabilities.

Deliverables

  • A working prototype of the upgraded CyberPeace Chatbot with integrated GenAI + Agentic AI
  • Technical documentation explaining data pipelines, agent design, and ethical safeguards
  • Dashboard for monitoring chatbot activity, safety analytics, and feedback loops
Chatbot Enhancement · Agentic AI · Digital Peace · AI Agents
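One way to prototype the agentic upgrade is an intent-routing loop that hands each message to a specialist handler. The intents, keywords, and canned replies below are assumptions for illustration only — they are not CyberPeace's actual taxonomy, and keyword matching here stands in for an LLM-based intent classifier:

```python
# Specialist handlers, one per intent; in a full agent these would be
# sub-agents with their own tools (reporting workflows, fact-check lookups).
HANDLERS = {
    "scam": lambda msg: "This looks like a scam report. Do not click any links; here is how to report it.",
    "misinformation": lambda msg: "Let's verify that claim against trusted sources.",
    "default": lambda msg: "I can help with cyber safety, scams, and misinformation. What happened?",
}

INTENT_KEYWORDS = {
    "scam": ["scam", "fraud", "phishing", "otp"],
    "misinformation": ["fake news", "is it true", "viral"],
}

def classify_intent(message: str) -> str:
    """Keyword stand-in for an LLM classifier."""
    lowered = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in lowered for k in keywords):
            return intent
    return "default"

def respond(message: str) -> str:
    """Route the message to the handler for its classified intent."""
    return HANDLERS[classify_intent(message)](message)
```

Swapping the keyword classifier for a GenAI model and giving each handler real tools is what would move the chatbot from reactive FAQ answers toward the proactive, context-aware agent described above.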

Registration & Submission

What you need to prepare

Team Formation

  • Individual or team participation
  • Maximum team size: 3 members
  • Team-based participation encouraged
  • 20 teams maximum for finale

Submission Requirements

  • Abstract in PPT or PDF format
  • Problem statement being addressed
  • How the solution addresses the problem
  • Concept and target users

Additional Submission Details

  • Methodology and technology stack
  • Workflow/technical diagram
  • Example use cases
  • Relevant experience or prior work

Evaluation Framework

How submissions are evaluated

Innovation & Technical Depth

30%

Trust & Safety Integration

25%

Societal Impact & Usability

20%

Scalability & Localization

15%

Presentation, Documentation & Ethics

10%
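The five weights above sum to 100%. If jurors score each criterion out of 10, a natural way to combine them is a weighted sum — the per-criterion scores in the example below are invented purely to show the arithmetic, not drawn from the official rubric:

```python
WEIGHTS = {
    "Innovation & Technical Depth": 0.30,
    "Trust & Safety Integration": 0.25,
    "Societal Impact & Usability": 0.20,
    "Scalability & Localization": 0.15,
    "Presentation, Documentation & Ethics": 0.10,
}

def overall(scores: dict) -> float:
    """Weighted sum of per-criterion scores (each out of 10)."""
    return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

example = {
    "Innovation & Technical Depth": 8,
    "Trust & Safety Integration": 9,
    "Societal Impact & Usability": 7,
    "Scalability & Localization": 6,
    "Presentation, Documentation & Ethics": 8,
}
# 0.30*8 + 0.25*9 + 0.20*7 + 0.15*6 + 0.10*8 = 2.4 + 2.25 + 1.4 + 0.9 + 0.8 = 7.75
```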

The Grand Finale

During the Grand Finale, shortlisted participants will have 36 hours to transform their proposed solutions into working prototypes.

Day 1 - February 8, 2026

Jury meets teams, assesses initial ideas, provides mentorship and guidance for concept refinement

Day 2 - February 9, 2026

Continued development under mentorship, final presentations, Q&A sessions, and jury evaluation

Presentation Format

  • 15 minutes total per team (10 min demo + 5 min Q&A)
  • PowerPoint presentation (10-12 slides maximum)
  • Live prototype demonstration required
  • Evaluated on performance, speed, precision, and scalability

Format: Hybrid — global online build, finale in New Delhi, and an opportunity to showcase at Bharat Mandapam

Who Can Participate?

Open to innovators across nations

University Students

AI Researchers and Students

CyberPeace Corps

Volunteers and Innovators

Startups

Trust & Safety, Fintech, Media Tech

Defence & Police

Academies for Red-Team-Blue-Team Drills

Proposed Themes

Five powerful tracks to choose from

Detect the Deceptive

Trace, verify, and neutralize synthetic media (images, video, voice, text).

Defend the Digital Citizen

Build proactive GenAI tools that prevent scams, educate vulnerable users, and rebuild trust.

Building the Guardrails of Trustworthy AI

Practical red-teaming, audit, and real-time governance tools for LLMs.

Create for Peace

Generative experiences that teach empathy, counter polarization, and rebuild civic resilience.

Enhance the CyberPeace Chatbot

Integrate Generative AI, CyberPeace GPT, and Agentic AI to make CyberPeace's chatbot an active digital peace agent.

Prizes & Support

Cash Prizes

Winners will receive cash prizes and recognition

Cloud Credits

Selected teams can apply for partner cloud credits

Incubation Support

Fast-track incubation opportunities for winning teams

Pilot Projects

Opportunities for pilot projects with CyberPeace