AI Risks and Ethics: What is NOT Openly Discussed

The transformative power of artificial intelligence (AI) is undeniable, captivating the attention of executives across industries. With IDC projecting AI-centric spending to exceed $300 billion by 2026, organizations are poised to take advantage of the immense benefits this technology offers. However, amid the increasing adoption rate, a growing chorus of questions echoes:


  • What are the risks of AI?
  • What are the ethical concerns with this technology?
  • How can those risks be mitigated?

Artificial intelligence (AI) is already proving to be a double-edged sword. It has created many opportunities globally, from facilitating healthcare diagnoses to enabling human connections through social media and creating labor efficiencies through automated data workflows. These advancements translate into tangible business value, with over 84% of executives surveyed expressing confidence that AI will drive their companies’ profitability.



Yet, the rapid adoption of AI also triggers ethical concerns. AI systems, trained on vast amounts of data, can inadvertently embed biases, leading to unintended (or even malicious) consequences for individuals, organizations, and society.

Prefer to listen to the content?

Hear ELASTECH’s CEO, Armen Tatevosian, and EVP of Global Operations, Jim Zordani, in a conversation about what is NOT openly discussed when people talk about AI risks and ethics.


The Spectrum of AI’s Unintended Consequences

AI’s Impact on Individuals

AI’s impact on individuals is multifaceted and can extend beyond the immediate context of AI-powered decision-making. At its core, AI can influence the very fabric of our lives, shaping our opportunities, outcomes, and even perceptions.


Examples of unintended consequences for individuals:

Autonomous Vehicles: Autonomous vehicles (AVs) hold the promise of revolutionizing transportation, but their development is not without its perils. AV malfunctions could lead to accidents, injuries, or even fatalities. Despite the advancements made in AV technology, there have been several instances of AVs failing to detect pedestrians or cyclists, causing collisions. These incidents highlight the need for robust safety measures and rigorous testing before AVs become widely adopted.


Misdiagnoses in Healthcare: AI is increasingly being used in healthcare to analyze medical images, predict patient outcomes, and even prescribe medications. However, the potential for biased or inaccurate AI models to lead to misdiagnoses is a significant concern. For instance, an AI model trained on a dataset that underrepresents certain demographics could misdiagnose illnesses more frequently in those groups. This could lead to delayed treatment and potentially negative health outcomes.


Discriminatory Hiring and Loan Decisions: AI-powered hiring and loan decision-making algorithms can perpetuate and even amplify existing biases in society. If these algorithms are trained on historical data that reflects societal prejudices, they may be more likely to discriminate against marginalized groups. For example, AI algorithms used for loan applications could deny loans to applicants from certain neighborhoods or with certain ethnic backgrounds, perpetuating financial inequalities.

AI’s Impact on Organizations

AI’s influence extends beyond individuals, impacting organizations across industries. While AI offers numerous benefits, its adoption also comes with potential risks that can affect an organization’s stability, reputation, and bottom line.


Examples of unintended consequences for organizations:

Compliance Issues: Organizations using AI in regulated industries like healthcare and finance face increased compliance risks. If AI models are not properly trained or deployed, there is a higher chance of data breaches, unauthorized access, or the disclosure of sensitive patient or user information. These incidents can lead to significant fines, legal liabilities, and damage to an organization’s reputation.


Financial Missteps: AI-powered decision-making can affect an organization’s financial performance. For instance, AI algorithms used for pricing or resource allocation may make suboptimal decisions that lead to financial losses. Misjudging market elasticity or failing to optimize supply chains can result in reduced revenue, increased costs, and decreased profitability.

AI’s Impact on Society

Unintended consequences associated with the adoption and use of AI can also impact society as a whole. While AI holds the potential to address societal challenges and improve quality of life, its adoption brings risks that can affect our social structures, values, and overall well-being.


Examples of unintended consequences for society:

Infrastructure Misuse: Biased or poorly trained AI models can lead to inefficiencies and disruptions in infrastructure systems. For instance, AI-powered traffic routing algorithms that prioritize efficiency over local knowledge can divert traffic into residential areas, causing congestion and safety hazards. This highlights the need for careful consideration of AI’s impact on existing infrastructure and social dynamics.


Financial Market Disruptions: Automated trading algorithms, while efficient in executing trades, can exacerbate volatility in financial markets. These algorithms, operating in real-time and often without human oversight, can amplify market fluctuations and lead to unintended consequences, such as market crashes or manipulation.


Erosion of Trust and Misinformation: AI-driven manipulation of news and information can erode public trust and distort perceptions of reality. The ability to generate deepfakes or spread misinformation through social media can undermine democratic processes and hinder informed decision-making.


Increased Surveillance and Social Control: AI-powered surveillance systems can raise concerns about privacy and the potential for unwarranted government or corporate intrusion. The use of AI for facial recognition, social media monitoring, or location tracking raises questions about individual liberties and the potential for misuse.

While these examples can be intimidating and potentially create hesitation towards AI adoption, leaders need to become familiar with the potential risks for their company and identify how to mitigate them, along with understanding interdependencies and underlying causes. There are multiple factors driving the risks with AI, which the next section highlights. 

Factors Driving the Risks with AI

Data Difficulties

Extracting, organizing, connecting, and leveraging data has become more challenging as the volume of unstructured data from various sources has increased. Data sources range from the web and social media to mobile devices, sensors, and even the Internet of Things (IoT). This opens up pitfalls, such as accidentally using or disclosing sensitive information hidden within anonymized data.


Companies should always address two questions when developing their AI model:

  • Do I have a large enough sample size to effectively train an unbiased AI model?
  • And if not, how can I use public data securely while ensuring compliance and maintaining accuracy and objectivity?
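The first of these questions can be made concrete with a simple pre-training audit. The sketch below, in Python, checks whether each demographic group in a dataset is represented by a large enough share of the samples; the record structure, the `group` key, and the share threshold are illustrative assumptions, not a prescribed standard.

```python
# Hypothetical pre-training audit: flag demographic groups whose share
# of the training data falls below a chosen threshold. The "group" key
# and the threshold value are illustrative assumptions.
from collections import Counter

def audit_group_balance(records, group_key="group", min_share=0.05):
    """Return {group: share} for groups below min_share of the dataset."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < min_share}

# Toy dataset: group "B" makes up only 25% of the samples.
training_data = [
    {"group": "A", "label": 1},
    {"group": "A", "label": 0},
    {"group": "A", "label": 1},
    {"group": "B", "label": 0},
]
underrepresented = audit_group_balance(training_data, min_share=0.30)
```

A result like this would prompt the second question: whether additional (for example, public) data can be sourced securely to bring underrepresented groups up to a usable sample size.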

Technology Challenges

A second challenge that many organizations face is technology. Operating environments can present technical and procedural problems that negatively impact AI performance.


It is best practice to map out organizational and technical infrastructure to confirm where AI can effectively be integrated. Working with an experienced data architect and technical expert like ELASTECH will provide the necessary expertise to achieve successful results.

Security Vulnerabilities

AI models are vulnerable to breaches and exploitation, especially when security is inadequate. Fraudsters can leverage seemingly innocuous marketing, health, and financial data to create fake identities or deepfakes. While targeted companies may be unaware of their role in these schemes, there are many examples where they still faced consumer backlash and regulatory scrutiny.

The adoption of AI should always include the implementation of sufficient security measures. Some leading companies have adopted blockchain as a way to safeguard their data and reduce the risk of unauthorized access.

AI Model Misbehavior

AI models are subject to misbehavior, sometimes called hallucinations, as well as unintended consequences. AI models can develop harmful biases, leading to unfair or potentially dangerous outcomes. This is especially concerning when a model makes important decisions, such as how an autonomous vehicle navigates or whether an inmate is granted parole. A lack of model transparency exacerbates the problem: it can be difficult to understand how AI models work, why they make the decisions they do, and where biases creep in.

Even well-intentioned AI models can unintentionally discriminate based on factors like race, gender, or socioeconomic status. This can happen as a byproduct of the volume of recorded historical data that is used to train the model.

There are several approaches organizations are taking to mitigate the risk of biased AI models. One important step is to train models on fair and representative data rather than relying solely on historical data. Another is to make AI models transparent and accountable, allowing people to challenge or appeal the decisions they make; that feedback can then be used to adjust the model itself or the decisions derived from its outputs. Finally, it is important to continuously monitor and retrain AI models to detect and correct bias.
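The monitoring step can be as simple as a recurring fairness check on model decisions. The Python sketch below compares approval rates across groups using a disparate impact ratio (the widely cited "80% rule"); the data shape, group names, and the 0.8 threshold are illustrative assumptions rather than a definitive fairness standard.

```python
# Illustrative post-hoc fairness check: compute the ratio of the lowest
# to the highest group approval rate. A ratio below ~0.8 (the "80% rule")
# would flag the model's outputs for human review.
def disparate_impact(outcomes):
    """outcomes: iterable of (group, approved) pairs.
    Returns (ratio, per-group approval rates)."""
    totals, approved = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    rates = {g: approved[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Toy decision log: group A is approved twice as often as group B.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
ratio, rates = disparate_impact(decisions)
```

Running such a check on every batch of decisions, rather than once at deployment, is what turns a one-time audit into the continuous monitoring described above.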

The Interface between AI and Humans

The interface between people and machines is another key risk area of AI. This is especially evident in automated infrastructure, manufacturing, and transportation systems. Accidents and injuries can occur when operators fail to recognize, or are distracted at, the moments when systems need to be overruled. This is a particular concern when operating heavy machinery or self-driving cars. Conversely, automation can reintroduce human error, with operators over-controlling or overriding systems in ways that are counterproductive or unsafe.

Generally speaking, these risks can be mitigated. AI systems should be designed with safeguards against both human error and malicious interference. A rapidly growing area within AI is focused on its ethical and responsible use, and organizations can implement policies and procedures to ensure their AI systems are used accordingly. Leaders play a vital role in demonstrating and driving transparency by asking tough questions on safety, security, fairness, and accountability. They must challenge their teams on where AI could create undesirable consequences or harm. Methods like cross-functional workshops and scenario planning help teams pressure-test ethics, surface concerns, and prevent blind spots.


AI holds immense potential to improve our lives, drive significant business value, and even address global challenges. However, to harness this potential responsibly, companies need to acknowledge and address the ethical concerns and potential unintended consequences associated with AI. Successfully adopting and leveraging AI starts with a commitment to explore, understand, and discuss potential pitfalls. By adopting responsible AI practices, fostering open discussions on ethics, and implementing robust governance and security measures, we can ensure that AI benefits society as a whole while upholding ethical principles. The ability to assess those risks and to engage workers at all levels in defining and implementing controls is a new source of competitive advantage to invest in.

ELASTECH can help you assess and evaluate the potential risks associated with AI, specifically for your organization, to prescribe the approach and roadmap for your solutions. AI has proven to drive additional business value, and we encourage every leader to take advantage of the benefits the technology delivers. The potential gains and profitability available through the use of AI outweigh the risks when they are addressed with the right expertise.



We invite you to a free consultation to address your individual opportunities and create your AI roadmap. Click below to schedule directly and take advantage of ELASTECH’s AI services.

Schedule your free consultation with an AI Expert


Request Whitepaper