Bias – The Ghost in the Machine

Bias (German: Vorurteil; French: Biais) is a disproportionate weight in favour of or against one thing, person, or group compared with another, usually in a way considered to be unfair (Oxford University Press, 2026). In the world of AI, we need to consider two forms in particular: Cognitive Bias, the inherent human prejudices we all carry, and Algorithmic Bias, where AI systems reflect the prejudices of their creators or the inequities present in the "real world" training data, leading to unfair outcomes for specific groups (IBM, 2025).

It’s no longer a secret that bias is baked into both our daily lives and the AI systems we build. As these models gain the power to amplify prejudices and memorize vast amounts of data, controlling these biases has become an uphill battle. To build an equitable digital future, we must address three critical pillars of AI development.


Who decides what is “fair”?

Fixing the machine means fixing what we feed it. We must take far greater care in selecting "fairer" input data. However, this raises complex questions: how do we define "fair" data in a world of conflicting cultural perspectives? Who is responsible for guaranteeing data quality: private corporations, government regulators, or a new set of international standards (Frontiers, 2025)? Perhaps it is time to ask whether we need a "Digital Constitution" to enforce these standards and protect users from systemic data failures.
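To make "unfair outcomes" concrete, here is a minimal sketch of one common fairness metric, the demographic parity difference: the gap in positive-outcome rates between two groups. The function names and toy data are invented for illustration, and this is only one of many competing definitions of fairness, which is exactly the problem the questions above raise.

```python
# Sketch: demographic parity difference as one possible measure of
# "unfair" outcomes. All data below is an invented toy example.

def positive_rate(decisions):
    """Fraction of decisions that are positive (1 = approved)."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in positive-outcome rates between two groups.
    0.0 means parity; larger values suggest disparate impact."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Toy loan-approval decisions (1 = approved, 0 = denied) per group
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # 37.5% approved

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375
```

Note that choosing this metric over alternatives (equalised odds, calibration, and so on) is itself a value judgment: a system can satisfy one definition of fairness while violating another.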

AI literacy and value alignment

We cannot blame the algorithm if we accept its output without question. Defining the "expected output" is vital if we want to avoid becoming the primary enablers of machine bias. To stay in the driver's seat, we need to focus on AI Literacy: understanding how these systems "think" so we can spot their errors (European Commission & OECD, 2025).

Furthermore, it seems vital to leverage frameworks like the EU AI Act to manage high-risk activities through mandatory institutional and human oversight (European Parliament, 2024). However, such oversight is only effective if we achieve "value alignment": ensuring that the objective functions inside the machines actually reflect the nuanced ethical values of the humans they serve.

Breaking the “Black Box”

For too long, major developers have kept their AI systems as "black boxes." While this may be understandable from a business point of view, it obscures the internal "weighting" of the neural networks, leaving us to wonder how much the data, or the individual layers of a model, contribute to biased results.

Increasing public pressure will likely force a move toward a more ethical stance, demanding explainable and transparent AI models that can be trusted and audited. The current shift toward Explainable AI (XAI) as a new engineering standard points in that direction (Valueans, 2026).
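As an illustration of what "explainable" can mean in practice, here is a minimal sketch of one widely used XAI technique, permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. A large drop means the model leans heavily on that feature. The "model" and data below are invented toys, not any real system.

```python
import random

def model(row):
    """Toy classifier: predicts 1 when feature 0 exceeds 0.5.
    (It deliberately ignores feature 1.)"""
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    """Fraction of rows the model classifies correctly."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, feature, seed=0):
    """Accuracy drop when `feature` is shuffled across rows.
    Near zero means the model barely uses that feature."""
    rng = random.Random(seed)
    shuffled_col = [r[feature] for r in rows]
    rng.shuffle(shuffled_col)
    perturbed = [list(r) for r in rows]
    for r, v in zip(perturbed, shuffled_col):
        r[feature] = v
    return accuracy(rows, labels) - accuracy(perturbed, labels)

rows = [(0.9, 0.1), (0.8, 0.7), (0.2, 0.9), (0.1, 0.3)]
labels = [1, 1, 0, 0]

for f in range(2):
    print(f"feature {f}: importance {permutation_importance(rows, labels, f):+.2f}")
```

Because the toy model never reads feature 1, its importance comes out as exactly zero: one small example of how such probes can reveal which inputs actually drive a model's decisions, even when its internals stay opaque.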


Reference list:

European Commission & OECD (2025), Empowering Learners for the Age of AI: An AI Literacy Framework. [Online] Available at: https://ailiteracyframework.org/ [Accessed 25 February 2026].

European Parliament (2024), EU AI Act: first regulation on artificial intelligence. [Online] Available at: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence [Accessed 25 February 2026].

Frontiers (2025), Reimagining FAIR for an AI World: Advancing FAIR for the AI Era. [Online] Available at: https://www.frontiersin.org/news/2025/03/03/reimagining-fair-for-an-ai-world-frontiers-introduces-fair-data-management [Accessed 25 February 2026].

IBM (2025), What Is AI Bias? [Online] Available at: https://www.ibm.com/think/topics/ai-bias [Accessed 25 February 2026].

Oxford University Press (2026), Oxford English Dictionary. [Online] Available at: https://www.oed.com/ [Accessed 25 February 2026].

Valueans (2026), Architecting for Accountability: Why Explainable AI (XAI) is the New Engineering Standard. [Online] Available at: https://valueans.com/blog/tech-transparency-explainable-ai-systems-trust [Accessed 25 February 2026].
