Regulating AI

Artificial intelligence (AI) has rapidly evolved from hype to a technological powerhouse, sparking debates on its regulation and ethical use.

In Pakistan, AI is gradually influencing sectors like e-commerce and finance, mirroring global trends. However, fears linger about AI potentially turning against its creators, as seen in movies like Terminator. While these risks seem distant, the “AI safety” camp warns of scenarios where AI could surpass human intelligence and act independently with misaligned goals.

In November 2023, global leaders gathered at an AI safety summit to address potential threats. Critics, however, argue that the focus should be on current issues like bias in AI, disinformation, and the violation of intellectual property and human rights. These problems already impact industries and individuals, particularly in countries like Pakistan, where data and technology regulations are still developing. The challenge lies in balancing innovation with safety.

AI systems have often failed in real-world scenarios. Google’s image-labelling AI once misidentified black individuals as gorillas, and facial recognition tech has often misidentified people of colour due to biased training data. In recruitment, AI has favoured male candidates, while deepfakes are being used for malicious purposes like fake political speeches. In Pakistan, these risks grow with the rise of social media, while lawsuits from artists highlight AI’s misuse of intellectual property.

In a recent statement, experts emphasised the necessity for AI systems to respect human rights, embrace diversity, and promote fairness. This guiding principle compels a thorough examination of how AI technologies are designed and implemented, ensuring they foster equality rather than perpetuating or worsening existing biases.

Equitable access to artificial intelligence learning must be prioritised, along with addressing its impact on the job market

“Despite the remarkable advancements made by the current generation of large language models (LLMs) in mimicking human-like intelligence, these systems are not without significant flaws. Key issues such as hallucinations, lack of grounding in real-world contexts, unreliable reasoning, and opacity stem from the fundamental architectures and training methodologies of these models. These challenges are not simply technical glitches; they represent inherent limitations that raise serious concerns about the safety, robustness, and true intelligence of AI systems,” says Jawad Raza, one of Corinium’s Global Top 100 Innovators in Data & Analytics.


He added that the call for ethical AI deployment is echoed by various organisations, including Unesco, which stresses the importance of transparency and explainability in AI systems to safeguard human rights and fundamental freedoms. The organisation advocates for robust oversight and impact assessments to prevent conflicts with human rights norms. Moreover, the UN High Commissioner for Human Rights has highlighted the need for regulations that prioritise human rights in developing AI technologies.

