FOMCA fully appreciates the perspective of YB Chang, Minister of Science, Technology and Innovation (MOSTI), who, in his article “Navigating the future: Malaysia’s ethical vision”, recognises the growing impact of AI on society and the challenging balance between the pursuit of innovation and the protection of human values and well-being.
While the Minister has highlighted several issues that could affect consumers and society as a whole, I would like to build on them and further elaborate on the potential harms of AI for consumers and society at large. These include:
- Opaque systems and lack of accountability – Currently, these systems are highly opaque: consumers have no information on how a system works or the basis on which its decisions are made. This general lack of transparency could significantly harm consumers.
- Mistakes and inaccurate data – Generative models do not “understand” context, so the content they produce may look convincing and correct yet be factually wrong. Such errors can be hard to spot for anyone using or promoting the content who is not already familiar with the facts of the relevant subject.
- Deepfakes and Disinformation – As generative AI models become increasingly powerful, it becomes easier to use them to create realistic synthetic images, text or voice recordings that can be mistaken for real content. In early 2024, an employee of a Hong Kong company transferred US$25 million to a scam account on instructions given in a deepfake video call impersonating the company’s chief financial officer. A 2022 Europol report estimates that by 2026, as much as 90% of online content may be synthetically generated. As the volume of synthetic content grows, it becomes difficult to trust one’s own eyes and ears, and the long-term effect on trust in institutions and in each other could be devastating.
- AI in Advertising – AI also harms consumers by making it easier and more efficient to manipulate people through personalised advertising.
- Bias and Discrimination – AI models can perpetuate existing biases or create new ones. Because they are trained on vast amounts of information from the internet, they inherit the biases of their training data and will generate content that reproduces those biased, negative or unwanted tendencies.
- Privacy and Data Protection – Personal data has long been coveted as highly valuable to businesses for targeting advertising at individuals and groups. When generative AI models are trained on material scraped from the internet, the training data usually contains large amounts of personal data. As these models are developed and deployed, the resulting data protection issues can lead to substantial privacy harms.
- Security Vulnerabilities and Fraud – Malicious actors can abuse generative AI models to augment or supercharge criminal activity, making fraud, scams and other illegal activities more efficient.
- Environmental Impact – Tech companies already emit a substantial amount of carbon, and AI companies consume substantial amounts of energy and water. It has been reported that training a single model can consume more electricity than 100 US households use in an entire year. This escalating use of energy and water has a severe negative impact on the environment; AI technology clearly comes with a high carbon footprint.
Further, the Minister mentioned that there are three regulatory framework models: the rights-based model, the state-driven model and the market-based model. However, at this point in time Malaysia will be following a fourth form of regulatory framework – an ethics-based framework. Generally, that would mean a set of “ethical guidelines” to which tech companies voluntarily adhere. This approach assumes that tech companies treat the well-being of consumers and society as a core component of their growth and development. Is that true – do tech companies really have society’s well-being at heart?
Past experience in other jurisdictions suggests otherwise. While the European Union was the first region to develop laws to regulate AI, there have been reports of intense lobbying by tech companies, with their immense resources, to water down any form of mandatory requirements. Their rationale is that regulation would weaken “innovation”, and their preferred form of regulation is self-regulation. Even in the US, there has been intense lobbying to weaken any regulation that would impose mandatory requirements on the industry.
Technology is not an untamable beast; it must be adapted and shaped by rules and values that ensure consumer protection and consumer well-being. To ensure that generative AI is developed and used in accordance with consumer and human rights, it is clearly insufficient to rely on companies to regulate themselves. It is the responsibility of policy makers and enforcement agencies to set boundaries for how technology is trained, developed, deployed and used. Therefore, policy makers must pass the laws and regulations necessary to provide safe and consumer-centric technology in the years to come.
To ensure that generative AI is safe, trustworthy, fair, equitable and accountable, there is a need for overarching principles that address consumer rights. The principles set out below provide a foundation for how policy makers and enforcement agencies should approach the opportunities and pitfalls of generative AI.
These rights can be summarised as follows:
- Consumer rights must be respected
- Consumers must have the right to object and the right to an explanation
- Consumers must have the “right to be forgotten”, i.e. to have their personal data deleted
- Consumers must have the right to interact with a human instead of generative AI
- Consumers must have a right to redress and compensation for any damages
- Consumers must have the right to collective redress
- Consumers must have the right to complain to supervisory authorities or launch legal actions
- Developers and deployers of generative AI models must establish systems to ensure that these rights are available
Regulation of AI is still at an early stage of development. However, some proposals to protect consumers and society from other jurisdictions include:
- Mandatory Disclosure
- Companies deploying generative AI should be required to disclose prominently when AI is used in generating content or recommendations. This transparency allows consumers to make informed decisions and understand the source of the information they are consuming.
- Quality and Accuracy Standards
- Establish quality and accuracy standards for content generated by AI systems. Companies should be held responsible for ensuring that the content generated by their AI systems meets these standards. This could include measures to minimize misinformation, bias, or harmful content.
- Liability for Harmful Content
- Define liability for companies whose generative AI systems produce harmful or misleading content. This could include provisions holding companies accountable for damages caused by content generated by their AI systems, particularly in cases of defamation, infringement of intellectual property rights, or dissemination of false information.
- Data Protection and Privacy
- Strengthen data protection and privacy provisions to safeguard consumer data used to train generative AI systems. Companies should be required to obtain explicit consent from users before using their data for training AI models, and ensure that data privacy rights are respected throughout the process.
- Monitoring and Enforcement
- Allocate resources for monitoring the use of generative AI systems by companies and enforcing compliance with the regulations. This could involve establishing specialized units within regulatory agencies tasked with overseeing AI-related issues and conducting regular audits of companies’ AI systems.
- Consumer Education and Awareness
- Launch public awareness campaigns to educate consumers about the capabilities and risks of generative AI technology. Empowering consumers with knowledge about how AI works and its potential impacts can help them make more informed choices and protect themselves from potential harm.
We call on policy makers and lawmakers to take a strong stance in favour of consumer protection and the preservation of human rights. It is necessary to have robust legal measures, including strict obligations on developers and deployers of generative AI systems to operate in a transparent and accountable manner, and to restrict the development, deployment and use of systems that are fundamentally incompatible with these rights.
Clearly, in the battle between “innovation” and “protection of human rights”, tech companies with their deep pockets would intensively lobby for “innovation” and weak or no regulation.
The Minister has rightly indicated that the journey to shape an AI-enhanced world that respects human dignity and promotes societal well-being cannot be undertaken by the Ministry alone but requires a collective effort by all stakeholders.
Recognising the extreme power of big tech, its focus on “innovation” and its insistence that “weak regulation” is needed to facilitate that innovation, we call on the Minister to invest in creating a Consumer/Society Well-Being Task Force and to build its capacity to negotiate realistically with big tech to protect human values and society’s well-being. Given the rapid and intense evolution of AI, the Task Force would continuously keep the Ministry updated by:
- Monitoring the latest AI developments, specifically their potential to harm society
- Studying how other jurisdictions are regulating AI and providing policy advice to Malaysian regulators
- Coordinating with other institutions, for example in ASEAN or the Asia-Pacific region, on possible collective actions
It cannot be denied that without building strong local capacity, the interests of powerful AI tech companies would overwhelm the interests of society at large. A strong consumer/society voice would, to some extent, though perhaps not fully, help give greater weight to society’s well-being in that balance.