    Deepfake Detection, Desi LLMs, Safe AI, homegrown Foundational Models in focus at AI Impact Summit 2026

    India is set to showcase its strides in artificial intelligence innovation when it hosts the big-ticket ‘AI Impact Summit’ in February 2026. The summit promises to bring together policymakers, entrepreneurs, researchers, and civil-society voices under one roof to chart a future where AI serves all sectors – from farms to hospitals.

    Backed by the IndiaAI Mission, the week-long event will spotlight everything from India’s own LLMs (Large Language Models) trained on the country’s myriad vernacular dialects to cutting-edge deepfake detection tools and privacy-first legal frameworks.

    The AI Impact Summit 2026 will offer a blueprint for how a nation of 1.4 billion can harness, govern, and scale AI in ways that address local challenges and inspire global collaboration. From homegrown foundational models to robust techno-legal safeguards, the summit embodies India’s ambition not only to be a consumer of AI but to lead in shaping a future for it that is inclusive, secure, and profoundly transformative.

    The delegates will explore how subsidized GPU clusters and the AIKosh datasets portal are empowering startups and academic teams to build solutions tailored to Indian realities. With more than thirty AI applications already in the pipeline – spanning climate forecasting, crop diagnostics, healthcare screening, and citizen services – the summit is set to showcase prototypes and success stories that highlight India’s growing prowess in both developing and responsibly deploying artificial intelligence.

    The summit is a testament to India’s commitment to democratize artificial intelligence, address real-world challenges across pivotal sectors, and foster global collaboration.

    AI Satyapikaanan: an API-based face verification system developed by the Centre of Excellence – Artificial Intelligence (COE-AI). The model can perform face detection, recognition, verification, re-identification, anti-spoofing, quality assessment, gender and age prediction, crowd analysis, and image de-duplication.
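
    The COE-AI interface itself is not documented in this article; purely as a hedged illustration of how such an API-based verification service is typically consumed, the Python sketch below posts two images to a placeholder endpoint. The URL, field names, authentication scheme, and response schema are assumptions for illustration, not the actual AI Satyapikaanan API.

```python
import requests

# Hypothetical endpoint and payload layout -- stand-ins only; the real
# AI Satyapikaanan API specification is not published in this article.
API_URL = "https://example.gov.in/satyapikaanan/v1/verify"  # placeholder URL
API_KEY = "YOUR_API_KEY"                                    # assumed auth scheme


def verify_faces(reference_path: str, probe_path: str) -> dict:
    """POST two face images and return the service's match decision (illustrative)."""
    with open(reference_path, "rb") as ref, open(probe_path, "rb") as probe:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"reference_image": ref, "probe_image": probe},
            data={"checks": "verification,anti_spoofing,quality_assessment"},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()  # e.g. {"match": true, "score": 0.97, "spoof": false}


if __name__ == "__main__":
    result = verify_faces("id_card_photo.jpg", "live_capture.jpg")
    print("match:", result.get("match"), "score:", result.get("score"))
```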

    Development of India’s Foundational Models
    Under the IndiaAI mission, the government is funding the creation of homegrown large and small language models trained on Indian datasets to capture local contexts, dialects, and cultural nuances. Target sectors include healthcare, education, agriculture, climate, and governance. Four startups – Sarvam AI, Soket AI, Gnani AI, and Gan AI – have been selected to build these foundational models, which will be released as open source to spur further innovation by domestic entrepreneurs and researchers.

    IndiaAI Compute Capacity: Establishing High-End AI Infrastructure
    To support safe and scalable AI research, the IndiaAI Compute Portal has provisioned 34,381 GPUs, accessible to academia, MSMEs, startups, the research community, and government bodies. The government subsidizes 40% of the cost, bringing the average price to around ₹67 per GPU-hour – roughly one-third of the global average. Available hardware spans Nvidia H100, H200, B200; Intel Gaudi 2 & 3; and AMD MI300X, among others.
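    As a back-of-envelope cross-check of these figures (an illustration only, taking the 40% subsidy, the ₹67 rate, and the "one-third of the global average" claim as given):

```latex
% Back-of-envelope check using only the figures quoted above (INR values approximate)
\text{pre-subsidy rate} \approx \frac{67}{1 - 0.40} \approx 112 \ \text{INR per GPU-hour}
\qquad
\text{implied global average} \approx 3 \times 67 \approx 200 \ \text{INR per GPU-hour}
```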

    IndiaAI Datasets Platform AIKosh
    AIKosh offers over 1,000 India-specific datasets and 208 AI models across health, agriculture, education, and more, with strict privacy safeguards. Notable examples include farmer query logs from Kisan Call Centres, geological surveys, and clinical imaging data to support AI-based diagnosis of brain lesions. The platform also hosts small models such as text-to-speech engines in Bengali, Gujarati, Kannada, and Malayalam.
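
    AIKosh’s download mechanics are not described here; as a minimal sketch of the usual workflow, assuming a hypothetical CSV export of the Kisan Call Centre query logs, the snippet below fetches a dataset file and inspects it with pandas. The URL and file schema are placeholders, not the real portal interface.

```python
import pandas as pd
import requests

# Placeholder URL -- stands in for a dataset file listed on the AIKosh portal;
# the portal's actual catalogue paths and formats are not given in the article.
DATASET_URL = "https://example.org/aikosh/kisan_call_centre_queries.csv"


def fetch_dataset(url: str, local_path: str = "dataset.csv") -> pd.DataFrame:
    """Download a CSV dataset and load it into a DataFrame (illustrative only)."""
    response = requests.get(url, timeout=60)
    response.raise_for_status()
    with open(local_path, "wb") as f:
        f.write(response.content)
    return pd.read_csv(local_path)


if __name__ == "__main__":
    df = fetch_dataset(DATASET_URL)
    print(df.shape)   # number of records x columns
    print(df.head())  # first few farmer queries (hypothetical schema)
```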

    Development of AI-Based Applications
    Thirty AI applications addressing public-interest problems in governance, health, and climate are currently funded under the mission. These projects adhere to ethical design frameworks and range from prototypes to advanced stages of development.

    Support to AI-Based Startups in India
    The IndiaAI Startups Global program, launched in partnership with Station F and HEC Paris, is mentoring ten Indian AI startups to expand into European markets. Participants include PrivaSapien Technologies, focusing on privacy-enhancing AI, and Secure Blink, specializing in AI-driven cybersecurity.

    Safe and Trusted AI
    The IndiaAI Safety Institute will spearhead research on AI safety and security, engaging academia, industry, startups, and government bodies through a hub-and-spoke model. Ten themes – ranging from machine unlearning and bias mitigation to watermarking and ethical AI – guide the development of tools and frameworks. Eight Responsible AI projects are underway, with additional initiatives in risk assessment, stress testing, and deepfake detection under evaluation.

    Legal Framework for Mitigating AI-Related Risks
    Recognizing risks such as hallucination, bias, misinformation, and deepfakes, the government is leveraging existing laws and rules:

    • Information Technology Act, 2000: Sections 66C–66E address identity theft and misuse of images; Sections 67A–67B criminalize the distribution of obscene or deepfake content.
    • Bharatiya Nyaya Sanhita, 2023: Sections 111, 318, 319, 353, and 356 cover economic offences, cyber-crimes, cheating, and public mischief.
    • Digital Personal Data Protection Act, 2023: Establishes obligations for data fiduciaries and empowers citizens with consent-based control.
    • IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021: Mandate that social media platforms remove misinformation and deepfakes within prescribed timelines and provide grievance redressal mechanisms, including escalation to appellate committees. A dedicated national cybercrime portal (cybercrime.gov.in) and the toll-free number 1930 support reporting.

    Techno-Legal Approach to Regulate AI
    India’s balanced strategy combines legal safeguards with technological interventions. The government is funding R&D at premier institutions like IITs to develop AI tools for deepfake detection, privacy enhancement, and cybersecurity, ensuring that innovation thrives alongside robust governance.
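
    The article does not describe how the IIT-developed deepfake-detection tools work internally; as a hedged sketch of the general frame-classification approach, the snippet below attaches a two-way real-vs-fake head to a pretrained image backbone. The model choice, file name, and label convention are assumptions, and the new head would need fine-tuning on labelled face frames before its scores mean anything.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Illustrative frame-level deepfake classifier skeleton -- not any specific
# IIT-developed tool. A pretrained ResNet-18 backbone is reused and its final
# layer is replaced with a 2-way head (real vs. fake); the head is untrained
# here, so outputs are meaningless until fine-tuned on labelled face frames.


def build_detector() -> nn.Module:
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # real / fake logits
    return backbone


preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])


def score_frame(model: nn.Module, image_path: str) -> float:
    """Return the (untrained) probability that a single frame is fake."""
    model.eval()
    frame = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(frame), dim=1)
    return probs[0, 1].item()  # index 1 = "fake" by our own convention


if __name__ == "__main__":
    detector = build_detector()
    print("fake probability:", score_frame(detector, "suspect_frame.jpg"))
```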

    Pradeep Rana
    Journalist: Geopolitics, Law, Health, Technology, STM, Governance, Foreign Policy