1. Introduction

In recent times, the use of AI systems has escalated, albeit in a largely unregulated or poorly regulated global landscape. The proliferation of artificial intelligence (AI) has led to increased demand to learn more about and understand the operations of AI systems. In turn, this has led to a scramble by governments and stakeholders in developed and developing nations to develop rules, regulations and guidelines for the use of artificial intelligence in the modern world. The Nigerian business and social strata have witnessed the application of AI systems to various degrees, and the Government has also taken steps to harness the advantages of the use and deployment of AI.

According to Google's records, searches emanating from Nigeria for the phrase “Artificial Intelligence (AI)” rose by a staggering 100% in 2022.[1] The demand to know more has also fueled calls for the regulation of AI use across the world, and Nigeria is not excluded. Globally, some of the sectors where AI systems have been deployed include business administration, the automobile industry, financial institutions, space exploration, real estate, the weapons and defense industries, and agriculture.

2. AI Systems Utilization in Nigeria

Public and private entities have been immersed in the possibilities that abound from the use of AI for institutional and business solutions, and the country has seen AI systems gain prominence, especially in the business sphere, as corporate entities seek ways to ease business processes and increase efficiency and productivity. Examples are found in the use of AI-powered customer care systems by institutions cutting across various sectors: Ziva, an AI-powered chatbot utilized by Zenith Bank;[2] Timi by Lawpavilion - an AI tool for lawyers;[3] Zigi by MTN - a customer care digital assistant;[4] and Lara.ng - an AI-powered chatbot that offers individuals conversation-style directions and transport fare estimates while using public transport in Lagos.

AI has also been deployed in Nigeria for security enhancement, and Chiniki Guard is popular in this regard. Chiniki Guard deploys artificial intelligence security solutions for retail stores and supermarkets to monitor, detect and alert shop owners to shoplifting and suspicious behavior in real time.[5] In the health sector, 247Medic, a health ICT application, connects doctors with patients across Nigeria within a maximum waiting time of 10 minutes. The application also facilitates the provision of auxiliary services, including lab testing and doorstep delivery of drugs, to users.[6]

On November 13, 2020, the then Minister of Communications and Digital Economy, Prof. Isa Ali Ibrahim Pantami, commissioned the National Centre for Artificial Intelligence and Robotics (NCAIR) along with its modern digital fabrication laboratory (FabLab). The Centre is one of NITDA's special purpose vehicles created to promote research and development on emerging technologies and their practical application in areas of Nigerian national interest. The facility is focused on Artificial Intelligence (AI), Robotics and Drones, Internet of Things (IoT), and other emerging technologies, aimed at transforming the Nigerian digital economy in line with the National Digital Economy Policy and Strategy (NDEPS).[7] This was a statement of the Nigerian government's interest in harnessing artificial intelligence to aid economic growth. Sequel to the launch of the Nigeria Artificial Intelligence Research Scheme, the Federal Government announced in October 2023 that it would award N5 million each to 45 startups and researchers focusing on AI.[8] This is geared towards supporting the mainstreaming of the application of artificial intelligence for economic prosperity.

The release of the artificial intelligence (AI) chatbot “ChatGPT” on November 30, 2022 by OpenAI[9] was a blockbuster, and it heightened the awareness of generative AI in Nigeria. The ChatGPT storm triggered a conversation whirlwind in the Nigerian environment and brought massive attention to AI and its uses, even though various AI systems identified above had already registered their presence within the Nigerian business and professional ecosystem. Generative AI (GenAI) is a type of artificial intelligence that can create a wide variety of data, such as images, videos, audio, text, and 3D models.[10] On July 21, 2023, OpenAI revealed its plan to launch an Android version of the popular ChatGPT chatbot in the following week. The launch followed the chatbot's release to iOS users in May of the same year.[11]

A 2023 mobile phone market analysis places Google's Android as the dominant mobile operating system in the Nigerian market, with a market share of 78.7 percent as of April 2023.[12] Apple's iOS is Android's only major competitor, with a share of 14.8 percent. The release of ChatGPT for Android users will likely see its user base skyrocket in view of the number of potential Android users available in the Nigerian ecosystem.

Google released Bard AI[13] in response to ChatGPT, and it is now available in Nigeria[14] as well as in over 225 other countries and in 40 languages. Software giant Adobe Inc. also released “Firefly”, a generative AI-powered content creation platform built specifically for creative needs, such as generating images and transforming text.[15] Snap Inc., which owns Snapchat, also released an AI tool for its multimedia instant messaging service that can engage in conversations with users.

These powerful AI tools have become available to Nigerians in an unregulated Nigerian AI environment. Some of these platforms, for instance OpenAI's ChatGPT, were trained using data scraped from the internet, which included the personal data of individuals.[16] If an individual enters a prompt on ChatGPT requesting information about themselves, there is a chance it may return information about that individual, depending on their societal status. OpenAI deployed a large language model (LLM) to crunch the data and information used in training ChatGPT.[17] A large language model is defined as a deep learning algorithm that can recognize, summarize, translate, predict and generate text and other forms of content based on knowledge gained from massive datasets.[18] Based on these learning algorithms, AI applications can swiftly summarize documents and articles, generate stories and fictional content, suggest educational content, and engage in lengthy and logical conversations with a human.

Zoom, the virtual conferencing application, is very popular in Nigeria and uses virtual backgrounds and noise detection features to improve user experience on the application. As noted by Intel, these features rely on AI and deep learning inference to facilitate an enhanced user experience during virtual calls.[19] Zoom announced it was evolving the use of its AI-powered assistant (Zoom IQ) to summarize chat threads, organize ideas, draft content for chats, emails, and whiteboard sessions, create meeting agendas, etc.[20] The use of AI is a key component of Zoom's identity.[21]

Smart homes have found their place in the Nigerian real estate setting[22] because of the unique benefits and convenience they provide. These structures employ a variety of interconnected technologies, including Artificial Intelligence (AI), broadband wireless and Internet of Things (IoT) networks, to improve operational efficiency and enable a safer environment.[23] In addition, the Nigerian Federal Government has set a target of producing approximately 50,000 job opportunities through Artificial Intelligence (AI) by 2030.[24] The influence of technology and artificial intelligence has enveloped Nigeria, and its large population of young persons and fast-paced business environment will only act as catalysts for its growth and increased use, as individuals aim to enhance efficiency and productivity and redefine how social activities are carried out.

2.1. Inherent Risks in the Use of AI Systems in Nigeria

The creation and deployment of AI systems and technologies can involve the processing of significant amounts of personal data about individuals, including the creation of biometric identifiers that can observe their behaviors, preferences, thoughts and emotions,[25] thereby creating data privacy and protection concerns. The use of AI for large-scale surveillance and monitoring of individuals also increases data privacy risks in the event of a breach of an AI system that contains personal data. The proliferation of model inversion attacks is another risk that arises from the presence and availability of AI systems. Model inversion attacks are designed to expose sensitive data present in the dataset used to train an AI model, which in this case will be the personal data of individuals.[26]

There are also intellectual property concerns surrounding the use of materials protected by intellectual property rights to train AI systems. An example is where a copyright-protected work is used to train an AI model without the author's authorization and against their interests. Perpetuation of bias is another inherent risk present in AI systems. When a training dataset containing societally constructed and systemically biased information or viewpoints is used to train an AI system, decisions made by that system will always be reflective of those biases, which may lead to discrimination against certain individuals in its decision making. The implication is that the AI system's decisions will always toe the line of the bias inherent in its training data unless it is retrained with a new set of data.

The deployment of AI systems equally raises accountability concerns, premised on the fact that multiple hardware and software development entities, individuals and datasets may be involved in the development of an AI system, making the issue of accountability complex. The question becomes: which of the entities involved in the development of an AI system will be held responsible or liable in the event of a harm or defect that tramples on any of the fundamental rights of individuals?

It is now easier to propagate disinformation and hate speech on a massive scale, and also to maliciously distort content into deepfakes, to the detriment of individuals and businesses. This raises concerns regarding individual and public safety and the impact of the use of these systems on the rights and freedoms of individuals in society. These instances are non-exhaustive, and new risks will continue to emerge as the use of AI systems increases.

3. The Current Nigerian AI Regulatory Landscape: Is there One?

Nigerian jurisprudence does not currently have an AI policy framework, although the National Information Technology Development Agency (NITDA) on 11 August 2022[27] called for contributions from stakeholders to enable the development of the National Artificial Intelligence Policy (NAIP).[28] This was in line with the OECD's principle of fostering a digital ecosystem for AI and providing an enabling policy environment for AI.[29] The development of the NAIP is envisaged to maximize the benefits, mitigate possible risks, and address some of the complexities attributed to using AI in our daily activities. It is envisaged that the NAIP would provide directions for Nigeria to take advantage of AI, including the development, use, and adoption of AI in ways that will proactively facilitate the development of a sustainable digital economy.[30]

The Nigerian government, through NITDA, announced in March 2023 that it was set to roll out the National Policy on Artificial Intelligence, having completed the first draft. In June 2023, NITDA further revealed it had begun drafting Nigeria's code of practice for Artificial Intelligence (AI) tools such as ChatGPT.[32] The Code is said to seek to address issues reported on generative AI tools, such as fake news, transparency issues, lack of data privacy, bias, accountability concerns, and several others. The Federal Government recently issued an invitation to Nigerian and non-Nigerian top researchers across the globe to help the country design its National Artificial Intelligence (AI) Strategy.[33]

In summary, Nigeria does not have an artificial intelligence regulatory framework at the time of this work. Considering that one is in the works, and bearing in mind the uncertainty of the wait for the draft to be published, it is imperative to examine existing AI frameworks around the globe that could serve as a guide in creating a regulatory framework for the use and implementation of AI in Nigeria. Noteworthy is the fact that in November 2019 Nigeria published the National Digital Economy Policy and Strategy (2020-2030), in which AI was identified as an emerging technology with the potential to aid the nation in the development of its digital economy.[34]

4. Insight From Existing Global AI Frameworks

There are several global government and industry regulations and frameworks that create excellent examples and models that could form the pillars of a comprehensive and inclusive AI regulatory ecosystem in Nigeria. It is noteworthy that each of the frameworks mentioned below is broad enough to constitute the subject of an entire article or debate of its own; as such, they will only be explored in summary, to give a general insight into their content and context and the lessons Nigeria could draw from them, individually or in aggregate.

4.1. EU AI Act 2023

The European Parliament, the EU's main legislative arm, has approved the EU AI Act, a landmark step towards the first formal regulation of AI in history.[35] The approval concerns a draft version of the Act, which will now be negotiated with the Council of the European Union and EU member states before becoming law.[36] An analysis of the Act notes that transparency requirements are one mandatory obligation for AI systems under the Act: systems such as ChatGPT would have to disclose that their content was AI-generated, distinguish deep-fake images from real ones and provide safeguards against the generation of illegal content.[37]

Some of the notable provisions of the Act are the categorization of AI systems by risk into unacceptable risk, high risk and limited risk.

The first category comprises unacceptable-risk AI systems, which are considered a threat to people and will be banned. The listed examples are social scoring; real-time and remote biometric identification systems, such as facial recognition; and cognitive behavioral manipulation of people or specific vulnerable groups, for example voice-activated toys that encourage dangerous behavior in children.[38] The second category comprises high-risk AI systems, defined as systems that negatively affect individuals' safety or fundamental rights. All high-risk AI systems are to be assessed before release into the market and throughout their lifecycle.[39] This second category is split into two forms as follows:

a) AI systems that are used in products falling under the EU's product safety legislation, which covers toys, aviation, cars, medical devices and lifts; and

b) AI systems falling into specific areas that will have to be registered in an EU database. These areas are i) Biometric identification and categorization of natural persons, ii) Education and vocational training, iii) Management and operation of critical infrastructure, iv) Law enforcement, v) Migration, asylum and border control management, vi) Employment, worker management and access to self-employment, vii) Access to and enjoyment of essential private services and public services and benefits, and viii) Assistance in legal interpretation and application of the law.[40]

The third category of AI systems is the limited risk AI systems. All systems in this category are only subject to minimal transparency requirements that would allow users to make informed decisions while deciding if they want to continue using such AI systems. To enable such informed decisions, users should be made aware of the fact that the systems they are interacting with are AI systems.[41] This includes AI systems that generate or manipulate image, audio or video content, for example deepfakes.

To combat the high risk of copyright infringement, the legislation will mandate developers of AI chatbots such as ChatGPT to publish the works of creatives, including scientists, photographers, journalists and musicians, which were used to train such AI systems, and to disclose that content was AI-generated. Developers under the regulation will also have to show evidence that the processes employed to train AI systems complied with the law, to prevent the generation of illegal content.[42] The law places a fine of up to €40 million ($43 million) or an amount equal to up to 7% of a company's worldwide annual turnover, whichever is higher, for engaging in prohibited AI practices.[43]

4.2. ACHPR – Resolution 473

The African Commission on Human and Peoples' Rights (ACHPR)[44] adopted the “Resolution on the need to undertake a Study on human and peoples' rights and artificial intelligence (AI), robotics and other new and emerging technologies in Africa” - ACHPR/Res. 473 (EXT.OS/ XXXI) 2021 - on 25 February 2021, a remarkable event within the AI regulatory landscape in Africa. The Resolution noted that the uses and potential uses of AI technologies and other new and emerging technologies within the African setting have numerous implications for human rights under the African Charter and for the quality of life generally.[45] The Resolution called on members of the African Union to urgently place on their agendas the issue of AI technologies, robotics and other new and emerging technologies, with a view to developing a regional regulatory framework that guides the use of these technologies in responsible ways, with a premium placed on maintaining meaningful human control over AI systems so as to mitigate and avert the threats that they may pose.

4.3. OECD Recommendation on Artificial Intelligence (AI) 2019

The OECD Recommendation on Artificial Intelligence (AI) is the first intergovernmental standard on AI; it was adopted by the OECD Council at Ministerial level on 22 May 2019 on the proposal of the Committee on Digital Economy Policy (CDEP).[46] The Organization for Economic Co-operation and Development (OECD) is an organization of 38 member countries spanning the globe, from North and South America to Europe and Asia-Pacific, which works to build better policies for better lives.[47] The OECD also works with a range of non-member countries through country programs and country-specific approaches, to help them move closer to OECD standards and policy recommendations and support their policy reforms.[48]

The OECD AI instrument recommends five value-based AI principles and also includes concrete recommendations for the development of public policy and strategy. The general scope of the principles is structured to ensure they can be applied to AI developments globally by any nation or organization that desires to do so. The principles are: a) inclusive growth, sustainable development and well-being; b) human-centered values and fairness; c) transparency and explainability; d) robustness, security and safety; and e) accountability. The OECD principles promote trustworthy and innovative AI that engenders respect for human rights and democratic values.

Members and non-members who adhere to the Recommendation are encouraged by the instrument to invest in AI research and development, shape an enabling policy environment for AI, foster a digital ecosystem for AI, engender international cooperation for trustworthy AI, build human capacity and prepare for labor market transformation. The Recommendation focuses on AI-specific issues and sets an implementable standard that is sufficiently flexible to maintain longevity in a rapidly evolving field. It complements existing OECD standards in areas such as digital security risk management, privacy, and responsible business conduct.

4.4. UNESCO Recommendation on AI Ethics 2021

In November 2021, UNESCO issued the first-ever global standard on AI ethics, the “Recommendation on the Ethics of Artificial Intelligence”. The framework was adopted by all 193 Member States. While the OECD Recommendation was adopted by members of the OECD, the UNESCO Recommendation is a global standard spanning all UNESCO member states.[49] The Recommendation interprets AI as systems that have the ability to process data in a way similar to intelligent behavior.[50]

The Recommendation adopts a human rights centric approach and outlines ten core AI principles thus:

1. Proportionality and Do No Harm

This principle states that the use of AI systems should not go beyond what is necessary to achieve a legitimate purpose. It recommends risk assessment as a mechanism to prevent harm which may result from the use of AI.

2. Safety and Security

This principle demands that safety risks and vulnerabilities to attack should always be factored in, avoided and addressed by AI actors.

3. Right to Privacy and Data Protection

This advocates that privacy must be protected and promoted throughout the lifecycle of any AI system through adequate data protection frameworks.

4. Multi-stakeholder and Adaptive Governance & Collaboration

It is beyond peradventure that an important factor in AI regulation is the participation of diverse stakeholders to ensure an inclusive approach to AI governance. Under this principle, international law & national sovereignty must also be respected in the use of data for AI purposes.

5. Responsibility and Accountability

Key among the AI principles under the Recommendation is that AI systems should be auditable and traceable through audit, oversight, impact assessment, and due diligence mechanisms. The presence of these mechanisms is essential to avoid conflicts with human rights norms and to mitigate any threat to environmental well-being.

6. Transparency and Explainability

This principle refers to the need for artificial intelligence systems to be created and deployed in a way that makes it possible for people to comprehend and interpret the decisions that the systems make. The principle is essential to guaranteeing accountability, openness, and the ethical application of AI technology. It is imperative, however, that the level of transparency be appropriate to the context, as there may be tensions between transparency and explainability and other principles such as privacy, safety and security.

7. Human Oversight and Determination

This principle states that UNESCO Member States should ensure that AI systems do not displace ultimate human responsibility and accountability during their use.

8. Sustainability

The sustainability principle advocates that AI technologies should be assessed against their impacts on ‘sustainability’, understood as a set of constantly evolving goals, including those set out in the UN's Sustainable Development Goals.

9. Awareness & Literacy

This principle dictates that there should be a public understanding of AI and data which should be promoted through open & accessible education, media & information literacy, civic engagement, and digital skills & AI ethics training.

10. Fairness and Non-Discrimination

This is one of the key principles of AI and it dictates that AI actors should promote social justice, fairness, and non-discrimination in the deployment of AI systems. It also enjoins AI actors to adopt an inclusive approach to ensure AI’s benefits are accessible to all.

4.5. Universal Guidelines for AI

In 2018, scientific societies, computer scientists, human rights experts, and NGO advocates issued the Universal Guidelines for AI (UGAI), a framework for the regulation of artificial intelligence,[51] at the Privacy Assembly Conference held at the European Parliament in October 2018. The UGAI sets out 12 principles believed to help maximize the benefits and minimize the risks of AI,[52] as follows: Right to Transparency; Right to a Human Determination; Identification Obligation; Fairness Obligation; Assessment and Accountability Obligation; Accuracy, Reliability, and Validity Obligations; Data Quality Principle; Public Safety Obligation; Cybersecurity Obligation; Prohibition on Secret Profiling; Prohibition on Unitary Scoring; and Termination Obligation. These principles are deemed necessary foundations on which to build AI systems, creating a balance where the benefits of AI are maximized while it is used responsibly.

4.6. US AI Bill Of Rights

In October 2022, the U.S. White House Office of Science and Technology Policy (OSTP)[53] released a set of guidelines, the Blueprint for an AI Bill of Rights.[54] The OSTP hoped the Blueprint would spur companies to develop and deploy artificial intelligence systems responsibly, and limit AI-based surveillance throughout an AI system's lifecycle. The Bill of Rights outlines expectations for AI use for citizens and residents and states five principles which should be adhered to for responsible AI use.[55]

These principles mandate a) safe and effective systems: users should be protected from unsafe or ineffective systems; b) algorithmic discrimination protections: users should not face discrimination by algorithms, and systems should be used and designed in an equitable way; c) data privacy: users should be protected from abusive data practices via built-in protections, and should have input over how data is used; d) notice and explanation: users should know that an automated system is being used and understand how and why it contributes to outcomes that impact such user; and e) alternative options: users should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems they encounter.

The Guidelines are not binding, and one of the criticisms of non-binding rules is their eventual ineffectiveness in regulating the use of AI and combatting its malicious or discriminatory use. They could nevertheless serve as a resource tool for AI actors to ensure the responsible use of AI.

4.7. Singapore’s Model AI Governance Framework (“Model Framework”)

As part of its National AI Strategy, Singapore launched the Model AI Governance Framework (“Model Framework”) at the World Economic Forum in Davos, Switzerland, in January 2019,[56] with further editions launched thereafter. The voluntary Model Framework was launched as a general, ready-to-use tool to enable organizations that deploy AI solutions at scale to do so in a responsible manner. This will translate into increased stakeholder confidence in AI, evidenced through the responsible use of AI to manage the different risks that arise in AI deployment.[57] In another highly commendable collaboration, Singapore's Info-communications Media Development Authority (“IMDA”) and the Personal Data Protection Commission (“PDPC”) partnered with the World Economic Forum Centre for the Fourth Industrial Revolution to develop an “Implementation and Self-Assessment Guide for Organizations” (“ISAGO”). The ISAGO complements the Model Framework by allowing organizations to assess how their AI governance practices align with it.[58] The Model Framework is built on two core principles: 1) the use of AI in decision making should be explainable, transparent and fair; and 2) AI solutions should be designed to be human-centric, i.e., while AI is used to amplify human capabilities, the protection of the interests of human beings, including their well-being and safety, should be the primary consideration in the design, development and deployment of AI.[59]

5. Lessons For the Nigerian Ecosystem

The above Regulations/Frameworks do not constitute an exhaustive list of attempts to regulate AI. They are, however, some of the most comprehensive, and represent some of the most sophisticated attempts yet made to create regulations and frameworks within which AI can be deployed and used safely. Some of the recurring principles in these Frameworks guiding the development and use of AI are transparency, safety and security, accountability, promotion of inclusive growth, human-centered values, data quality, and data privacy.

Generative AI systems and AI use in general have found their way into the Nigerian technology space, and accountability, responsibility, and transparency must be entrenched in the regulatory framework that will govern generative AI and other kinds of AI systems. Nigeria can leverage the lessons learned by AI stakeholders, contributors and experts in other jurisdictions while drafting regulations for AI deployment and use, to ensure that any draft AI regulation has the quality and adaptability to keep pace with daily AI-related innovations.

In the course of training, AI systems might pick up social biases from their training data, resulting in biased outputs which could ultimately lead to discrimination. Systems trained with datasets built on bias, if allowed to operate unregulated, will perpetuate that bias in the course of their decision making, to the detriment of Nigerian citizens. For instance, if an AI system used for hiring purposes is trained with a dataset that promotes the denigration and relegation of individuals with a certain ethnic or religious characteristic, the outcome of that system's decision making will be discriminatory towards any individual that possesses such characteristics. The Constitution states as follows:

“Accordingly, national integration shall be actively encouraged, whilst discrimination on the grounds of place of origin, sex, religion, status, ethnic or linguistic association shall be prohibited”

In view of the above, any AI policy must ensure the presence of checks, balances, and auditability of AI systems to aid in non-discriminatory decision making by AI systems regarding Nigerians.

The transparency principle, which is prevalent in various global frameworks, is meant to ensure that, prior to deployment and at any other time, users are alerted where AI systems will be used to make automated decisions about them, and are allowed to opt out of such assessment in the absence of other legitimate grounds. It also means that individuals must know when they are interacting with AI systems, and any practices that encourage non-disclosure of these facts must be prohibited. The transparency principle equally implies that any entity deploying AI systems in Nigeria for any purpose must be identifiable, and any facts regarding system training data that contains identified or identifiable personal data of individuals must be disclosed and auditable. Where AI systems are imported from abroad, the applicability of this principle will ensure that there is algorithmic transparency into how these systems function and that any biases are auditable.

The accountability principle ensures that throughout the lifecycle of AI systems, which includes their design, development, operation and deployment, organizations or individuals will ensure the proper functioning of these systems.[60] While accountability and liability are not synonymous, this responsibility will enhance the ability of aggrieved data subjects or individuals to seek regulatory and judicial redress where necessary against the unauthorized use of their data for the training of AI systems, for profiling, or for other prohibited purposes.

The security and safety principle dictates that AI systems must not pose unreasonable safety risks to individuals during their lifecycle.[61] One of the primary functions of government is to protect lives and property in society, and in this context the importance of this principle of AI regulation cannot be overstated. It requires that an impact assessment of the risks posed by AI systems to individuals be conducted prior to deployment and a report generated. The impact assessment report should also be made publicly available, in extension of the transparency principle, and should be subject to review by a supervisory or regulatory authority.

It is also paramount that an AI policy framework incorporate data privacy and protection principles, with data minimization as a key feature, as well as enhanced protection of the personal data of vulnerable citizens such as children and elderly persons. The ACHPR emphasized the importance of adjusting AI systems to align with the needs of Africans. There should also be local content transfer principles to increase AI systems knowledge transfer to Nigerians. This will be in line with the provision of the Constitution that “Government shall promote science and technology”.[62]

It is beyond peradventure that AI systems, especially those utilized in critical infrastructure such as healthcare or transport facilities, can pose immense risks to the freedoms and rights of users if not deployed in a properly regulated system with transparency, safety, and accountability as key watchwords. The OECD Recommendation, the EU AI Act, the UNESCO Recommendation, etc., are notable examples that ought to illuminate what an appropriate, functional and robust AI regulatory ecosystem should look like. A sophisticated AI regulatory framework strikes a balance between promoting innovation and safeguarding the well-being of individuals and society.

6. Conclusion

The deployment and use of AI systems in Nigeria’s public or private sector raises risks and legal concerns that border on the rights and freedoms of Nigerian citizens. This highlights the necessity for a robust and live AI regulatory framework. An AI policy framework that does not focus on safety and security, accountability, promotion of inclusive growth, human-centered values, data privacy, transparency, etc., will create a risk-laden environment for individuals whether or not they interact with such systems. Such an environment will be inimical to the development of Nigeria’s digital economy.

The developing regulatory frameworks examined above have set examples and foundations as regards possible lessons and approaches to addressing the nature of the risks and legal concerns inherent in the use of AI systems within the Nigerian public and private sectors.

However, Nigeria has its peculiarities, determined by its societal and democratic values. As promoted by the ACHPR, local and cultural considerations will have an important role in the formulation of a fitting AI regulatory framework for the responsible use of AI, streamlined for the Nigerian ecosystem. An AI policy framework must be tailored to Nigerian constitutional and democratic norms. To achieve this, there must be intense and widespread engagement between all concerned regulatory bodies, such as NITDA, the Nigeria Data Protection Commission (NDPC), and the Federal Competition and Consumer Protection Commission, and stakeholders such as lawyers, civil rights groups, AI researchers and AI system developers, at every step of the process.

Deliberations should also be channeled towards the structure of any supervisory authority(ies) that may be responsible for enforcing compliance. Critical to this point is the need to ensure that there is no duplication of competencies among agencies, which may hamper a coordinated approach to AI policy development and enforcement. This process will be challenging, and it is important that it remain live and continuous, so that law and regulation adjust to existing technological realities and also anticipate developments as much as possible.