Digital practitioners advise cautious observation amid the AI regulation debate


Should AI be regulated? Could early regulation stifle innovation? The answers point to a complex and evolving space whose risks need to be addressed through regulation as they are identified.

Artificial intelligence (AI) is now used in a variety of everyday applications, including facial recognition software, online shopping algorithms, search engines, digital assistants like Siri and Alexa, translation services, automated safety functions in cars (and the promised self-driving cars of the future), cybersecurity, airport body scanning, banking and financial services, fighting disinformation on social media, and medical analysis. Reports are now rife that Google is developing AI tools to automate ads and offer customer support.

According to Fortune Business Insights, the global artificial intelligence market was valued at $428 bn in 2022 and is projected to grow from $515.31 bn in 2023 to $2,025.12 bn by 2030. The International Data Corporation (IDC) Worldwide Artificial Intelligence Spending Guide shows that global spending on AI, including software, hardware and services, will reach $154 bn in 2023, an increase of 26.9 per cent over 2022.

Many argue that AI improves the quality of everyday life by doing routine and even complicated tasks better than humans, making life simpler, safer and more efficient. Others argue that AI poses serious privacy risks, exacerbates racism by reducing people to standardised profiles, and costs workers their jobs, leading to unemployment.

Hence, whether AI should be regulated is a complex and hotly debated question.

Kowshik Komandur

“AI technologies can have significant societal impacts, raising ethical concerns around privacy, fairness, accountability, and transparency. AI algorithms can inadvertently perpetuate biases and discrimination present in training data, leading to unfair outcomes. Some AI applications, such as autonomous vehicles or healthcare systems, have direct implications for human safety,” observed Kowshik Komandur, Associate Vice-President with OnMobile Global.

He added, “There have been instances where facial recognition systems exhibited biases, leading to inaccuracies and discriminatory outcomes. Deepfakes, which are media manipulated or synthesised using AI techniques to create deceptive but realistic videos or audio, have the potential for misuse, such as spreading misinformation, fake news and impersonation, which can have serious consequences for individuals and society at large.”

According to him, regulation can help set standards and requirements to ensure the safe and secure operation of AI technologies, reducing risks to individuals and society. 

Abhil M Nair

Abhil M Nair, Co-Founder and CEO, SmartMatrix Global Technologies, believes that AI should not be regulated.

He noted, “If we regulate AI, then the machine wouldn’t be able to learn real-life scenarios and can’t serve human needs. Rather than regulating it, think of it from another perspective: keeping the content discreet, silent and safeguarded whenever that needs to be done. If the machine can learn all the human ways, it can drive excellence in many industries such as medicine, technology and finance.”

Himanshu Arora

Himanshu Arora, Co-Founder, Social Panga, said, “AI is not an Indian phenomenon; it’s happening at a global level. Regulating it will need common principles based on a shared value system, and it will not be easy to find those and agree on common ground.”

“This kind of technology/tool needs multiple stakeholders like policymakers, tech evangelists, government authorities and many more (to work together for regulation). It cannot be run by tech startups or organisations alone. At the same time, there is something altogether different happening in China, where attempts are being made to regulate AI in line with the Chinese value system. It comes down to how liberal or radical we are in our approach,” he added.

Ajit Narayan

Ajit Narayan, CMO, Socxo, observed, “While AI and its possibilities have been around for a while now – five years or more – the real buzz started with generative AI, or a real-world application where there is input and output. This opens up huge possibilities for human-machine interaction and solutions that might otherwise have been difficult for the human brain to dig into and build.

“With that comes the problem of AI being used in ways beyond the norm.”

He elaborated, “For example, developers claim the current AI models have restrictions on which prompts they will or will not respond to, and those restrictions act more like a doorkeeper to prevent misuse. However, just as an iPhone can be jailbroken, this was done too, and a couple of users tricked the Discord chatbot Clyde into sharing the formulas. While it sounds cute at the moment, this is the crossroads where we decide what the fundamental laws for AI must be. And this cannot be one person or a few deciding what it could be. There need to be debates at scale, across borders, to set the ground rules for the development of AI models, something existential that will not cause us to harm ourselves and other life forms.”

Could government regulation of AI stifle innovation?

Komandur believes that excessive or overly restrictive regulation could stifle innovation and hinder the development of AI technologies. Striking the right balance between regulation and allowing room for technological progress can be challenging, he adds.

Concurring, Nair said, “Generative AI is still in the learning phase, drawing on data provided by humans. If it is regulated, we can’t call it AI; rather, it becomes a predefined set of software that limits what it can explore. I believe that a lot of people’s fears right now stem from overthinking. Yes, AI should understand the concept of confidentiality when it is needed. As a matter of national security, AI should be modified at the neural level to help the machine understand the dos and don’ts of the real world, which can help in solving a lot of issues and concerns.”

Vivek Kumar Anand

According to Vivek Kumar Anand, Chief Business Officer, DViO Digital, instead of focusing on restricting AI itself, addressing the specific risks and potential harm associated with its use is crucial.

“To effectively regulate AI, defining AI and comprehending its anticipated risks and benefits is necessary. However, due to the ongoing evolution of AI technologies, it is challenging to formulate a stable legal definition, making comprehensive regulation complex. Furthermore, many machine learning and deep learning algorithms operate as black boxes, with their inner workings often considered proprietary and inaccessible to the public. Consequently, regulating it becomes problematic if we do not fully understand how a deep learning model reaches a decision. However, it is more feasible to establish guidelines for AI use cases,” he explained.

“The societal impacts of AI systems primarily depend on who utilises them, their intended purposes, and the parties involved, all of which can be subject to regulation. The upcoming year will determine whether the regulatory approach leans towards excessive regulation or adopts practical methods while fostering innovative technology uses,” added Anand.

Arora believes that it is too early for the government to regulate, as we are seeing possibilities, but those possibilities are yet to turn into reality.

“Regulations can’t be implemented on possibilities. Once we start seeing some implementation use cases, regulatory bodies will have better visibility on the direction. It is similar to the drone industry: until the government saw a security lapse, it was an unregulated industry, and now it is a fully regulated industry,” he added.

“Ultimately, the question of AI regulation requires careful consideration of the potential benefits and risks, while striving for a balanced approach that promotes innovation, protects societal interests and ensures the ethical and responsible development and use of AI technologies,” Komandur summed up.
