‘One-size-fits-all’ approach not fit for deepfakes: BSA to MeitY
Public policy solutions to address the issue of deep fakes remain unclear and continue to elude policymakers, said BSA in a letter to the ministry of electronics and IT earlier this month. The government plans to amend the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to include regulations for deep fakes.
MeitY had also sent an advisory to social media intermediaries in December last year mandating the identification and removal of misinformation and deep fakes within 36 hours.
Venkatesh Krishnamoorthy, country manager for India at BSA | The Software Alliance, said in the letter that MeitY should consider the differences in the role and function of intermediaries when prescribing obligations related to the spread of deep fakes. “All intermediaries do not have the same ability to address this issue and services provided by intermediaries may not pose the same kind of risk,” he said. Business-to-business and enterprise software services pose limited risk to user safety and public order, given the size of their user base and the fact that they do not provide services directly to consumers, Krishnamoorthy said. Santosh Jinugu, partner at consulting firm Deloitte India, told ET that combating deepfakes needs a multifaceted approach with many mitigation strategies.
These include deploying digital watermarks, leveraging photoplethysmography (PPG) analysis to scrutinise blood flow in video pixels, harnessing convolutional neural networks (CNNs) for automated detection, and scrutinising facial characteristics for signs of fabrication.
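Of the mitigation strategies listed above, digital watermarking is the simplest to illustrate. The sketch below shows a least-significant-bit (LSB) watermark, one common textbook approach; the pixel values and bit pattern are illustrative assumptions, not any production scheme used by the companies quoted here.

```python
# Minimal sketch of an LSB (least-significant-bit) digital watermark.
# A watermark bit is hidden in the lowest bit of each 8-bit pixel value,
# changing the pixel by at most 1 and so staying visually imperceptible.

def embed_watermark(pixels, bits):
    """Hide watermark bits in the least-significant bit of each pixel."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return out

def extract_watermark(pixels, length):
    """Recover the first `length` watermark bits from the pixel LSBs."""
    return [p & 1 for p in pixels[:length]]

# Example: embed the bit pattern 1,0,1,1 in the first four pixel values.
original = [200, 131, 54, 77, 90]
marked = embed_watermark(original, [1, 0, 1, 1])
assert extract_watermark(marked, 4) == [1, 0, 1, 1]
```

Real deployments use far more robust schemes (spread-spectrum or frequency-domain embedding) so the mark survives compression and re-encoding, but the embed-and-verify flow is the same.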
Ashok Hariharan, cofounder, IDfy, a Mumbai-based identity verification, biometric and risk assessment company, said liveness solutions do a great job at detecting deepfakes with the help of parameters like light reflections on the face, or asking questions in real-time in an agent-led journey. “Unfortunately, these solutions are not an industry norm. Only a handful of companies have certifications like iBeta, which is the gold standard for liveness checks,” he said.
These checks and certifications should be encouraged and mandated by the regulators to fight the issue of deep fakes, he said.
Krishnamoorthy suggested the use of watermarks for AI-generated content to help users differentiate between real content and AI-generated content and prevent misinformation. An open standard developed by the Coalition for Content Provenance and Authenticity (C2PA) generates tamper-evident content credentials. This standard will help consumers decide if content is trustworthy and promote transparency around the use of AI, he said.
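The "tamper-evident" property of content credentials can be sketched in a few lines: bind claims (such as "AI-generated") to a hash of the content, then sign the record so that any edit to either the content or the claims invalidates it. This is a simplified illustration using an HMAC with a hypothetical shared key, not the actual C2PA format, which uses certificate-based signatures and embedded manifests.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical key; C2PA itself uses X.509 certificates

def issue_credential(content: bytes, claims: dict) -> dict:
    """Bind claims (e.g. 'AI-generated') to a content hash, then sign the record."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "claims": claims,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_credential(content: bytes, record: dict) -> bool:
    """Any edit to the content or the claims invalidates the signature."""
    body = {k: v for k, v in record.items() if k != "signature"}
    if body["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

image = b"...image bytes..."
cred = issue_credential(image, {"generator": "ai-model", "ai_generated": True})
assert verify_credential(image, cred)          # untouched content verifies
assert not verify_credential(b"edited", cred)  # tampered content fails
```

A consumer-facing verifier would surface the claims ("AI-generated", tool used, edit history) only when the signature checks out, which is how such credentials would let users judge trustworthiness.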