Knowledge Exchange

The ethical question of AI: Key considerations

Sylvia Colacios

As AI adoption continues to boom, there are a number of concerns about its ethical implications.

Data privacy and protection, bias and discrimination, intellectual property rights, and climate change are among the concerns that must be addressed if we are to benefit from AI's advantages while minimizing its potential harm.

Concern 1 - Bias and Discrimination


One of the most pressing issues regarding AI ethics is the potential for bias and discrimination. AI systems are trained on data, and if that data is skewed by societal biases, algorithms can perpetuate or even exacerbate these inequalities.

Bias reduces AI’s accuracy and can lead to discriminatory outcomes, particularly in law enforcement, healthcare and hiring practices. To combat bias in AI proactively, developers must ensure that the data collected and used is diverse and representative.

Many organizations are incorporating AI governance policies and practices to identify and address bias, including regular audits of AI algorithms. Further, the EU AI Act (in force as of August 1, 2024) and the Algorithmic Accountability Act of 2023, introduced in the U.S. Congress, set clear rules and standards that companies must abide by in AI development and use in order to limit bias and discrimination, as well as protect privacy rights. Additionally, resources to help mitigate bias in AI are now available, such as IBM Policy Lab’s suite of awareness and debiasing tools.
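To make the idea of an algorithmic audit concrete, the sketch below computes a demographic-parity gap, one of the simplest bias metrics: the difference in favourable-outcome rates between groups. The decisions, group names, and threshold are invented for illustration, not drawn from any particular auditing tool.

```python
# Minimal bias-audit sketch: demographic-parity gap between groups.
# Hypothetical model decisions (1 = favourable outcome) per group.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [0, 1, 0, 0, 1, 0, 0, 1],
}

def favourable_rate(outcomes):
    return sum(outcomes) / len(outcomes)

rates = {group: favourable_rate(d) for group, d in decisions.items()}
gap = max(rates.values()) - min(rates.values())

print(rates)
print(f"demographic-parity gap: {gap:.2f}")  # an audit would flag gaps above a policy threshold, e.g. 0.1
```

A real audit would run metrics like this across many protected attributes and over time, but the principle, measuring outcome rates per group and flagging large gaps, is the same.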

Concern 2 – Transparency and Accountability   


To build trust in AI, transparency and accountability are ethical implications that need to be addressed. Disclosing how and why AI is being used, as well as the ability to explain the decision-making process to AI users in an understandable way gives credibility to AI decisions, helps prevent AI bias, and enhances the acceptance and adoption of AI technologies across society. Transparency is particularly crucial in areas such as healthcare, finance, and criminal justice, where AI decisions significantly impact individuals’ lives and the public as a whole.

Many companies are creating their own internal AI regulatory practices by introducing techniques, tools and strategies to demystify AI decisions and improve transparency, making them more trustworthy. For instance, AI model interpretation can be applied using various tools to visualize the internal workings of an AI system to determine how it arrived at a specific decision, as well as detect and rectify biases or errors.
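One widely used interpretation technique of the kind described above is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops; features whose shuffling hurts accuracy most matter most to the decision. The toy “model”, data, and feature names below are invented purely for illustration.

```python
import random

# Toy "model" standing in for a trained system: approves (1) when
# income (feature 0) exceeds a cutoff, and ignores age (feature 1).
def model(row):
    return 1 if row[0] > 50 else 0

# Invented rows of (income, age) with their true labels.
rows = [(60, 25), (40, 61), (70, 33), (30, 52), (80, 47), (20, 39)]
labels = [1, 0, 1, 0, 1, 0]

def accuracy(data):
    return sum(model(r) == y for r, y in zip(data, labels)) / len(labels)

baseline = accuracy(rows)

random.seed(0)
for i, name in enumerate(["income", "age"]):
    col = [r[i] for r in rows]
    random.shuffle(col)  # break the link between this feature and the labels
    permuted = [tuple(col[k] if j == i else v for j, v in enumerate(r))
                for k, r in enumerate(rows)]
    drop = baseline - accuracy(permuted)
    print(f"{name}: importance ~ {drop:.2f}")
```

Here shuffling income degrades accuracy while shuffling age changes nothing, correctly revealing that the model’s decisions hinge on income alone, which is exactly the kind of insight an auditor needs to detect and rectify biased decision logic.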

Establishing clear lines of accountability for AI actions and outcomes is essential for fostering transparency. Holding organizations, AI developers, and AI users accountable motivates them to provide understandable explanations about how their AI systems work, what data was used and why, and the decision-making processes. This promotes responsible and ethical AI development and deployment, which in turn improves the general public’s acceptance and adoption of AI technologies.

Some regulatory laws and requirements have already been developed to ensure AI ethical responsibility. In addition to the EU AI Act noted earlier, there is:

  • The European Commission’s Ethics Guidelines for Trustworthy AI
  • The U.S. Executive Order on Safe, Secure, and Trustworthy AI
  • The National Institute of Standards and Technology (NIST) AI Risk Management Framework

As global collaboration on monitoring and enforcement measures for AI systems increases, more regulatory frameworks are expected in the future.

Concern 3 – Privacy and Data Protection

Due to AI’s collection of vast amounts of personal data, data privacy is another critical ethical consideration in the development and deployment of AI. While our lack of control over personal information has been growing since the beginning of the internet decades ago, AI’s exponential increase in the quantities of personal data gathered has heightened data privacy concerns.

Currently, there are very limited controls that restrict how personal data is collected and used, as well as our ability to correct or remove personal information from the extensive data gathering that fuels AI model training and algorithms.

The EU’s General Data Protection Regulation (GDPR) and the California Privacy Rights Act (CPRA) both restrict AI’s use of personal data without explicit consent and provide legal repercussions for entities that violate data privacy. Generally, though, the ethical handling of personal data collection, consent, and data security still depends significantly on each organization’s internal regulation, leaving society to trust that companies will do the right thing, often with only the risk of reputational damage as motivation.

When companies don’t establish internal ethical data practices in AI training, personal data risks can include:

  • Misuse of Personal Data:
    A growing concern: generative AI models are increasingly trained on scraped internet data, which can include personal information about individuals and their relationships. This data can enable spear-phishing and identity theft.
  • Repurposing Data:
    Data shared for one purpose (e.g., a resume or photograph) can be repurposed for training AI systems without consent and sometimes with direct civil rights implications.

To combat data privacy violations, some recommendations to future data privacy regulations include:

  • Data minimization and purpose limitation regulations that restrict companies to only gather the data they need for a limited purpose.
  • A shift from opt-out to opt-in data sharing by implementing new software technology and mechanisms.
  • Data intermediaries, such as digital agent software, data platforms, and trusted third parties, that work on behalf of individuals to gain consent, protect personal data, and build trust in data-driven companies, including those using AI-generated data.
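The first recommendation, data minimization and purpose limitation, can be enforced directly in code: before a record is stored or used for training, keep only the fields on an explicit allow-list for the stated purpose. A minimal sketch with hypothetical purposes and field names:

```python
# Purpose-based allow-lists: each stated purpose may use only these fields.
ALLOWED_FIELDS = {
    "model_training": {"age_bracket", "region", "interaction_count"},
    "billing": {"name", "address", "plan"},
}

def minimize(record, purpose):
    """Keep only the fields permitted for this purpose; drop everything else."""
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "age_bracket": "30-39",
    "region": "EU",
    "interaction_count": 12,
}

print(minimize(raw, "model_training"))
```

Under this pattern, direct identifiers such as the name and email never reach the training pipeline at all, which is the intent behind data-minimization rules like GDPR’s purpose-limitation principle.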

Concern 4 – Creativity and Ownership

Generative AI can quickly produce massive amounts of new output that appears to have been created by humans, and its benefits are transforming creative industries. However, since AI-generated creative works are sourced from vast quantities of existing creative content, and given the current proliferation of creative AI tools, ethical and regulatory concerns about creative ownership have emerged.

  • Intellectual Property Rights

    Using copyrighted information to train and develop AI systems to create new content poses significant intellectual property issues. However, the question of whether training generative AI models constitutes copyright infringement is still undecided.

    For instance, existing legal frameworks of “fair use” in the U.S. and other countries permit some uses of copyrighted material for training algorithms, but these frameworks predate the current reality of generative AI’s extensive, largely unregulated use of copyrighted material without authorization. Many areas of law are now grappling with this, and the legal debate surrounding AI and intellectual property rights has recently led to numerous copyright infringement lawsuits against the creators of generative AI systems such as ChatGPT.
  • Eligibility for Copyright Protection

    Many countries don’t consider AI-generated creative works eligible for copyright protection because they don’t result from human creativity. However, AI-generated creative output, whether written or visual, that entails substantial human contribution may pose a challenge to the traditional notion that only works created by humans are protected by copyright laws.

Concern 5 – Impact on the Job Market

The impact of AI on the job market is twofold: there are benefits such as increased productivity, economic growth, and the creation of new employment opportunities, but there are also significant concerns about the potential negative impact of job displacement due to AI technology.

  •   Automation

    Many routine and repetitive tasks are increasingly being performed by AI machines, leading to significant job displacement, particularly for low-skilled workers. Upskilling and reskilling will be critical in limiting unemployment, financial hardships, and income inequality.  

    A prime example of how to remedy job displacement by adapting employee skills can be seen at entry level in creative industries. From advertising to publishing to sales, junior creatives have responded to AI taking over their tasks by enhancing their creative skill sets, including developing generative AI expertise. This creative upskilling has been paramount in adapting entry-level creative roles to new opportunities and job advancement.

    Therefore, investing in education and training programs that equip workers with the skills needed will be key to ensuring the successful transition of displaced workers. Organizations that prioritize the reskilling of their workforce can benefit from greater adaptability and an increase in competitiveness.

    On the government level, policymakers will need to take an active role in creating social safety nets to support affected workers during this transition, and providing incentives to companies that offer worker training.
  • Job creation

    The World Economic Forum’s Future of Jobs Report projects that 97 million new roles will emerge by 2025. Many of these new jobs will directly result from the evolution of AI, from new creative roles that manage fresh AI-generated content to new careers focused on deploying AI technologies, such as AI specialists, data scientists, and machine-learning engineers across various industries. Job opportunities are also growing in the field of AI ethics and regulation. Therefore, the future of work, energized by AI, may not be one of job displacement, but one of job growth and expansion.

Concern 6 – Environmental Sustainability and Energy Consumption

AI technology is dually positioned to have a significant impact on the environment, as well as offer solutions to help alleviate climate change. Consequently, managing these contrasting effects will require AI companies and governments to ethically commit to driving sustainability in AI development and deployment.

Environmental Impact

As the use of AI technology continues to surge, so does its energy consumption and carbon footprint. In fact, due to AI’s energy-intensive computations and data centres, the power needed to sustain AI’s rise is currently doubling approximately every 100 days, translating to substantial carbon emissions that directly contribute to climate change.
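A back-of-the-envelope calculation shows what “doubling approximately every 100 days” implies if the trend were to hold:

```python
# Growth implied by power demand that doubles every 100 days.
DOUBLING_PERIOD_DAYS = 100

def growth_factor(days):
    """How many times larger demand is after the given number of days."""
    return 2 ** (days / DOUBLING_PERIOD_DAYS)

print(f"after 1 year:  x{growth_factor(365):.1f}")  # roughly 12.6x
print(f"after 2 years: x{growth_factor(730):.0f}")  # roughly 158x
```

Roughly a twelvefold increase per year: compounding at this rate is clearly unsustainable, which is why the mitigation steps below matter.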

Government and AI companies must take action to reduce the effects of AI on the environment. Recommended steps towards sustainability include:

  • Establishing power usage limits
    Setting a cap on power levels would only minimally increase AI processing time but would measurably reduce energy consumption.
  • Optimizing scheduling
    Managing AI workloads to align with times of lower energy demand would deliver substantial energy savings.
  • Sharing AI resources
    Moving towards the use of shared data centres and cloud computing resources for AI technology and processes would reduce energy consumption. In addition, strategically locating this centralization in areas with lower energy costs could also increase energy savings.
  • Investing in energy efficiency

    Reducing energy consumption must become a priority, from building data infrastructures to incorporating renewable energy resources to improving hardware efficiency to reduce energy needs.
  • Regulatory policies for sustainability

    Governments will need to take a greater role in regulating the impact of AI on the environment, from establishing energy consumption limits to offering sustainability incentives. For example, organizations developing AI should be required to invest in sustainable initiatives to offset any negative environmental impact.  

Climate Change Mitigation and Adaptation

In contrast, the power of AI technology has the potential to offer climate change solutions.  AI can assist in analysing climate data, optimizing energy usage, and enhancing renewable energy systems. AI contributions such as these to climate change mitigation and adaptation are necessary to counteract the impact of AI technology, and additional AI-supported solutions need to be explored further.

Final Thoughts

In conclusion, as AI continues to transform various facets of society, addressing its ethical implications becomes paramount. The challenges of bias and discrimination, transparency and accountability, privacy and data protection, intellectual property rights, job displacement, and environmental sustainability require comprehensive strategies to mitigate potential harm. Regulatory frameworks like the EU AI Act and the Algorithmic Accountability Act of 2023 are crucial steps towards establishing ethical boundaries. However, responsibility also lies with AI developers and organizations to self-regulate AI. By fostering an environment of ethical AI development, we can harness AI's extraordinary potential while safeguarding societal values and human rights. The path forward demands collaboration across sectors to create a future where AI serves as a tool for positive change, rather than a source of ethical dilemmas.


*The images in this post were created using AI.
COPYRIGHT © 2023 ANTERIAD