Tiffany C.H. Leung
President of the General Assembly
Macau Institute for Corporate Social Responsibility in Greater China (MICSRGC)
Artificial Intelligence (AI) refers to the ability of machines to perform tasks that usually require human intelligence, and it spans both narrow AI and general AI. The former is designed for specific tasks, such as natural language processing, while the latter aims at human-like problem-solving across a broad range of tasks. Generative AI is usually deployed as a conversational user interface that generates text, images, and videos; however, it remains far less effective at prediction, forecasting, and decision intelligence. Although a wide range of AI tools, generative AI services, and Large Language Models (LLMs) are available online, ethics and governance should be considered whenever these tools and systems are used.
Misleading Information and Deepfakes
LLMs can disseminate fake news that spreads incorrect information and causes public panic. Similarly, AI tools can manipulate images and videos to generate false narratives that erode public trust in media. Both cause harm on social media platforms and have a profound negative impact on local communities and the public. AI models may also answer with unwarranted confidence, producing nonsensical responses to questions beyond their knowledge and incomplete responses that confuse users. LLMs cannot reliably distinguish fact from fiction, and training on insufficient or outdated data leads to hallucinated outputs and misleading information. Thus, companies could adopt several strategies to address misinformation, including deploying fact-checking tools to verify the accuracy of information, promoting media literacy, and investing in trustworthy AI.
AI applications carry potential risks, particularly in the healthcare, legal, and financial industries. Errors in medical diagnoses, fabricated legal cases, and faulty accounting transactions can harm both individuals’ careers and organizations’ reputations. Incorrect or incomplete responses from AI can mislead decision-makers in these professions. Therefore, professionals and experts must apply critical thinking and intelligent search practices when using AI tools, critically evaluating data sources and validating information against recognized authorities.

Fairness and Bias in AI Systems
Bias in AI has three primary sources, namely training data, algorithms, and human decision-making, and it leads to unfair outcomes for particular stakeholder groups. AI systems are increasingly adopted in hiring processes, where they can introduce bias and unfairness against specific groups of people, such as women, the elderly, and people of color. Similarly, AI systems used in law enforcement can disproportionately target specific communities, for example, people of color, underprivileged groups, and ethnic minorities. To mitigate bias in AI models, AI companies need to draw training data from a variety of sources, build explainable algorithms that let individual users understand how a model works, and increase the diversity of professional backgrounds within AI development teams.
Data Privacy and Surveillance Issues
Data privacy is inseparable from data security. Through AI systems, governments and companies can access vast amounts of personal data that can be repurposed for other targeted services. At the same time, governments and companies must ensure that data collection and storage are ethical and secure. Exposure of sensitive information carries potential risks, such as discrimination based on individuals’ characteristics, identity theft for illegal activities, and the loss of personal data without informed consent. To minimize these risks, companies could adopt strategies to protect sensitive data in AI systems, such as reducing data collection, implementing privacy-enhancing tools, and rigorously applying data protection and AI laws and regulations.
Accountability Frameworks for Transparency
There is ongoing debate about insufficient transparency in AI decision-making. AI systems can make decisions that are difficult to comprehend because of a lack of accountability and transparency, and when individual users cannot understand how an AI system reaches its decisions, that doubt can result in distrust. Thus, AI accountability frameworks for both developers and users should be further developed by establishing ethical standards, regulatory frameworks, and auditing mechanisms that verify compliance with normative principles, thereby enhancing accountability, responsibility, and transparency in AI.
Ethical Frameworks for AI Governance
In 2019, the European Union’s (EU) ethical guidelines for AI highlighted the importance of accountability, transparency, and human rights and set out a framework for creating trustworthy, robust, and ethical AI systems. These EU guidelines comprise seven requirements for AI systems: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. Similarly, in 2021 UNESCO adopted the first global standard on AI ethics, grounded in the fundamental principles of accountability, fairness, and transparency, and promoted the growth of transparent AI technologies in data governance and ecosystems. To improve the ethical development and governance of AI, organizations should apply moral frameworks and regulatory ecosystems and maintain ongoing assessments to identify room for improvement.
Conclusion
AI systems and technologies have the potential to transform the world. However, companies should fully consider the ethical challenges posed by these fast-growing technologies. Continued collaboration among regulators, industry, and academia can advance ongoing research on reducing AI hallucinations to ensure the reliability and safety of information. By sharing experiences with AI professionals and academics, companies can work together to create better, more accountable algorithms and models. In particular, governments and organizations can consider applying and integrating AI ethics and governance frameworks as part of business strategy, organizational culture, and social responsibility to ensure AI is developed safely and deployed ethically, promoting digital transformation in business and society.