
Humans as Bugs in AI Systems: When We Become the Algorithm's Error


Introduction

Imagine an AI system designed to optimize city traffic suddenly encountering the unpredictable behavior of a pedestrian who stops in the middle of the street, for no logical reason, just to check their phone. Or a recommendation system trying to understand your shopping patterns while you sometimes buy completely random products simply because you were in a "good mood." In these moments, the human is nothing more than a bug in the intelligent system.
We live in a world where artificial intelligence is rapidly penetrating every aspect of life - from self-driving cars to security systems, from social media platforms to financial systems. These systems are designed with the goal of optimization, prediction, and automation, but they have one fundamental problem: humans are inherently unpredictable, illogical, and emotional.
This article explores the paradox of how humans, the very beings who created these complex systems, are now considered the biggest challenge and "bug" of these systems. From self-driving cars that cannot predict human behavior to security systems defeated by social engineering attacks, this incompatibility forces us to think deeply about the future of human-machine interaction.

Why is a Human a "Bug"?

Behavioral Uncertainty

AI systems work based on patterns and historical data. They expect human behavior to be something that can be modeled and predicted. But reality is messier:
A real example: In 2024, Tesla's self-driving system encountered a situation in which a human driver suddenly braked on the highway, not because of any hazard, but to look at an advertising billboard. The car's AI classified this behavior as an "error" because there was no danger ahead, yet for the human it was completely natural.
This uncertainty manifests itself at different levels:
  • Momentary emotions: Humans can make illogical decisions in a fraction of a second
  • Behavioral contradictions: You love a product today and dislike it tomorrow
  • Susceptibility to surroundings: A simple sound can change your entire decision

Cognitive Biases

Machine learning models try to be unbiased, but humans are creatures full of bias. These biases not only enter AI systems through training data but also manifest themselves in daily interactions with these systems.
Confirmation Bias: Suppose you ask an AI chatbot about a controversial topic. If the answer aligns with your prior beliefs, you call it "intelligent"; if it contradicts them, you consider the system "wrong." This inconsistency prevents AI systems from finding a stable pattern for user satisfaction.
Recency Bias: Recommendation systems like Netflix or Spotify face the problem that users give the most weight to their recent experiences. Maybe you've watched action movies for years, but because of one drama film you watched last night, you now expect the system to only recommend drama.
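To see how recency weighting can distort a profile, here is a minimal sketch in Python; the genres, dates, and half-life values are invented, and real recommenders are far more sophisticated than this:

```python
from datetime import datetime, timedelta

# Hypothetical watch history: years of action films plus one drama last night.
now = datetime(2025, 1, 10)
history = [("action", now - timedelta(days=d)) for d in range(30, 400, 30)]
history.append(("drama", now - timedelta(days=1)))

def genre_scores(history, half_life_days):
    """Score genres with exponential time decay: recent views weigh more."""
    scores = {}
    for genre, watched_at in history:
        age = (now - watched_at).days
        weight = 0.5 ** (age / half_life_days)
        scores[genre] = scores.get(genre, 0.0) + weight
    return scores

# With a short half-life, the single recent drama outweighs years of action;
# with a longer half-life, the long-term preference still dominates.
print(genre_scores(history, half_life_days=3))
print(genre_scores(history, half_life_days=90))
```

The same history produces opposite recommendations depending on how aggressively the system discounts the past, which is exactly the trade-off recency bias forces on these services.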

Emotional Decision-Making

One of the biggest challenges for AI systems in areas such as online shopping, investing, and even medical diagnosis is that humans make decisions based on emotions, not pure logic.
Real example in financial trading: A trading AI might suggest holding a stock based on data analysis, but a human investor might act completely contrary to this advice out of fear of missing out (FOMO) or panic over a market decline. From the system's perspective, this behavior is a "bug" that cannot be resolved.

Real Examples of Humans as Bugs

Self-Driving Cars and Unpredictable Human Behavior

Self-driving cars are one of the most advanced examples of artificial intelligence, but their biggest challenge is pedestrians and human drivers.
Real Waymo test case: In 2023, Waymo cars in San Francisco encountered a strange situation - a group of protesters against self-driving cars blocked these vehicles and prevented their movement. The AI system couldn't determine whether these people were in danger, whether it should wait, or whether it should change routes. Eventually, several cars remained there for hours because the system couldn't model this "protest behavior."
Pedestrians who ignore rules: Self-driving cars are programmed to always follow traffic rules. But human pedestrians often cross mid-street, ignore the lights, or suddenly change direction. This pushes the system into an overly defensive mode: instead of moving smoothly, it constantly brakes or slows down.

Recommendation Systems and User Contradictions

The YouTube Paradox: YouTube's algorithm is one of the most complex AI systems for content recommendation. But human users have contradictory behaviors:
  • Morning: educational and motivational videos
  • Noon: entertaining and comedy videos
  • Night: ASMR videos for relaxation
These mood- and time-dependent shifts prevent the system from building a stable user profile. Additionally, sometimes a user clicks on a video and immediately closes it (signaling disinterest), then later watches the same video in full. These contradictions confuse the system.
Spotify personalization example: Suppose you've listened to rock music for weeks and suddenly play a classical song. The system thinks your taste has changed and starts recommending classical music, but in reality, you only listened to it once. This "noise" in behavioral data causes recommendation systems to constantly adjust and readjust.
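One hedged sketch of how such noise might be dampened: ignore one-off plays and blend repeated signals into the profile only gradually. The genres, threshold, and learning rate below are arbitrary illustrations, not any streaming service's actual logic:

```python
def update_profile(profile, session_genres, learning_rate=0.1, min_plays=3):
    """Blend new session data into a long-term taste profile.

    A single stray play (below min_plays) is treated as noise and ignored;
    repeated plays shift the profile gradually via an exponential moving average.
    """
    counts = {}
    for genre in session_genres:
        counts[genre] = counts.get(genre, 0) + 1
    total = len(session_genres)
    for genre, count in counts.items():
        if count < min_plays:
            continue  # likely a one-off: don't let it redefine the profile
        observed_share = count / total
        old = profile.get(genre, 0.0)
        profile[genre] = (1 - learning_rate) * old + learning_rate * observed_share
    return profile

profile = {"rock": 0.9, "classical": 0.0}
# One classical track in a rock-heavy session barely moves the profile.
print(update_profile(dict(profile), ["rock"] * 10 + ["classical"]))
# A full classical session does shift it, but only gradually.
print(update_profile(dict(profile), ["classical"] * 10))
```

The cost of this smoothing is slower reaction when a user's taste genuinely changes, which is the same trade-off the article describes from the other side.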

Security Systems and Social Engineering

AI-based security systems can analyze millions of cyber attacks per second, but a single well-crafted phishing email aimed at a human can bypass the entire system.
2020 Twitter attack: Hackers couldn't break Twitter's security systems, but through social engineering they managed to trick employees into providing system login information. This shows that humans are the weakest link, not the AI system.
Face recognition and masks: Facial recognition systems have become increasingly accurate, but during the COVID-19 pandemic, when people wore masks, these systems ran into problems. Humans, by changing their appearance (whether intentionally or not), can render biometric security systems far less effective.

Banking Systems and Illogical Financial Behaviors

Fraud Detection: Banks use machine learning to identify suspicious transactions. Consider a customer who usually shops in their home city and suddenly makes a large purchase on an international trip. The system flags this as "suspicious" and blocks the card, even though it is completely natural human behavior.
Real example: One Revolut user reported that while traveling to Japan, the bank's security system blocked all of his transactions due to "unusual behavior," even though he had informed the bank in advance that he was traveling. The reason? His purchase pattern (local food, electronics, souvenirs) didn't match any of his previous patterns.
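As a toy illustration of why such false positives happen, here is a hedged sketch of a rule-of-thumb risk score; the features, weights, and threshold are invented and bear no relation to any real bank's fraud model. Note how, even with a travel notice, the unfamiliar categories and amounts can keep the score above the blocking threshold, mirroring the anecdote above:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    category: str

# Hypothetical customer profile learned from past behaviour.
profile = {
    "home_country": "DE",
    "avg_amount": 40.0,
    "usual_categories": {"groceries", "transport"},
}
BLOCK_THRESHOLD = 1.5

def risk_score(tx, profile, travel_notice_countries=frozenset()):
    """Toy anomaly score: each deviation from the learned pattern adds risk."""
    score = 0.0
    if tx.country != profile["home_country"]:
        # A travel notice softens, but does not erase, the location signal.
        score += 0.3 if tx.country in travel_notice_countries else 1.0
    if tx.amount > 3 * profile["avg_amount"]:
        score += 1.0  # unusually large purchase
    if tx.category not in profile["usual_categories"]:
        score += 0.5  # unfamiliar spending category
    return score

tx = Transaction(amount=180.0, country="JP", category="electronics")
for notice in (frozenset(), frozenset({"JP"})):
    score = risk_score(tx, profile, notice)
    print(score, "BLOCK" if score >= BLOCK_THRESHOLD else "ALLOW")
```

From the model's point of view the block is reasonable; from the traveler's point of view it is maddening, because their behavior is perfectly ordinary for a human on vacation.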

Comparison Table: AI System Expectations vs. Human Reality

| Domain | AI System Expectation | Human Behavior in Reality |
| --- | --- | --- |
| Self-driving cars | Following traffic rules; logical, predictable movement | Crossing on red, sudden stops, emotional decisions |
| Content recommenders | Stable interests, clear content-consumption patterns | Momentary mood swings, random selections, accidental clicks |
| Cybersecurity | Full awareness of threats, no response to phishing | Clicking suspicious links, sharing passwords, excessive trust |
| Banking transactions | Stable purchase patterns; logical, regular transactions | Impulsive purchases, sudden large transactions, unexpected location changes |
| Digital health | Following health recommendations, regular healthy behaviors | Ignoring warnings, unhealthy eating, deliberately skimping on sleep |
| Social networks | Logical interaction with relevant content | Aimless scrolling, accidental likes, engagement driven by momentary feelings |

Consequences and Challenges of This Incompatibility

Reduced System Efficiency

When AI systems must be prepared for unpredictable human behaviors, their efficiency decreases. For example:
  • Self-driving cars must move at lower speeds to react to sudden human behaviors
  • Security systems must issue more warnings, many of which are "false positives"
  • Recommendation systems cannot predict with high certainty and must constantly adjust themselves

Creating Poor User Experience

This tension between human and machine causes user experience to suffer:
Real Uber example: Uber's surge pricing algorithm works on supply and demand and is perfectly logical. But when users see prices several times higher at 2 AM or during a storm, they feel that "the system is exploiting them," even if the pricing is algorithmically consistent.
This sense of injustice and exploitation leads users to distrust AI systems, even when those systems are technically working correctly.
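To make the tension concrete, here is a minimal, purely illustrative sketch of demand-based pricing; the formula, numbers, and cap are invented and are not Uber's actual model. The arithmetic is internally consistent, yet the rider only ever experiences the final multiplier:

```python
def surge_multiplier(ride_requests, available_drivers, cap=4.0):
    """Toy surge model: price scales with the demand/supply ratio, up to a cap."""
    if available_drivers == 0:
        return cap
    ratio = ride_requests / available_drivers
    return max(1.0, min(cap, ratio))

base_fare = 12.0
# A stormy night: 300 requests chasing 60 available drivers.
multiplier = surge_multiplier(300, 60)
print(f"{multiplier:.1f}x surge -> fare {base_fare * multiplier:.2f}")
```

Nothing in this calculation is unfair by its own logic; the perceived unfairness lives entirely on the human side of the interaction, which is exactly why it degrades trust.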

Security and Ethical Risks

When humans are treated as a "bug," there is a risk that systems will be designed to sideline or control them.
Real scenario in manufacturing: Some automated factories using advanced robotics have created environments that are inaccessible to humans because human presence reduces productivity. This is a step towards a world where humans are completely removed from the equation.
Ethical issue in decision-making: Medical AI systems might make decisions that are statistically optimal based on the data but appear "unethical" from a human perspective. For example, a system might suggest allocating limited resources to patients with a higher chance of survival, but this decision conflicts with human values of equality and justice.

Solutions: How to Deal with This Bug?

Human-Centered AI Design

Instead of trying to adapt humans to systems, we should design AI systems for humans. This means:
  • Accepting uncertainty: Systems should be designed to cope with unpredictable human behaviors, not treat them as errors
  • Flexibility: Algorithms should be able to react quickly to behavioral changes and adapt themselves
  • Transparency: Users should know why a system makes a particular decision to build trust
Successful example: Explainable AI projects, in which systems not only make decisions but also explain their reasoning, have increased user trust.
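A minimal sketch of what such an explanation can look like: with a simple linear scoring model, each feature's contribution is just its weight times its value, so the system can report why it decided, not only what it decided. The feature names and weights below are invented for illustration, not taken from any real lending system:

```python
# Invented weights for a toy credit decision; positive values push toward approval.
weights = {"income": 0.4, "existing_debt": -0.7, "years_at_job": 0.3, "late_payments": -0.9}
bias = 0.1

def decide_with_explanation(applicant):
    """Return a decision plus the per-feature contributions that produced it."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = bias + sum(contributions.values())
    decision = "approve" if score > 0 else "decline"
    # Rank features by how strongly they pushed the decision either way.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return decision, ranked

applicant = {"income": 1.2, "existing_debt": 0.8, "years_at_job": 2.0, "late_payments": 1.0}
decision, reasons = decide_with_explanation(applicant)
print(decision)
for feature, contribution in reasons:
    print(f"  {feature}: {contribution:+.2f}")
```

Even this crude breakdown changes the conversation: a declined applicant sees that late payments, not some opaque verdict, drove the outcome, and can contest or correct the underlying data.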

Two-way Education

Not only should AI systems learn to cope with humans, but humans should also increase their AI literacy:
  • Understanding limitations: People should know that AI systems are not capable of understanding all human complexities
  • Responsible behavior: Just as we have traffic rules, there should also be "AI interaction rules"
  • Bias awareness: Users should know that algorithms might have biases and how they can correct them
AI education should be part of school and university curricula so future generations can better interact with these technologies.

Using Hybrid Models

Instead of leaving everything to AI, hybrid models that combine human and machine decision-making can be more effective:
  • Human-in-the-Loop: For high-stakes decisions, the system's output is reviewed and approved by a person before it takes effect
  • AI-Augmented Decision Making: AI surfaces suggestions and evidence, but the human retains final authority
  • Collaborative AI: Systems that cooperate with humans, not replace them
Successful example in medicine: Many AI-based medical diagnosis systems are designed as "assistive tools" for doctors, not replacements. This increases diagnostic accuracy, but the final decision still rests with a human expert.
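A common way to wire this up, sketched below with invented case IDs and an arbitrary confidence threshold: let the model act on its own only when it is confident, and queue everything borderline for a human expert:

```python
def triage(case_id, model_probability, confidence_threshold=0.9):
    """Route a prediction: act automatically only when the model is confident,
    otherwise queue the case for human review (human-in-the-loop)."""
    confident_positive = model_probability >= confidence_threshold
    confident_negative = model_probability <= 1 - confidence_threshold
    if confident_positive or confident_negative:
        label = "positive" if confident_positive else "negative"
        return {"case": case_id, "action": "auto", "label": label}
    return {"case": case_id, "action": "human_review", "label": None}

# Confident predictions are handled automatically; borderline ones go to a person.
for case_id, probability in [("scan-001", 0.97), ("scan-002", 0.55), ("scan-003", 0.04)]:
    print(triage(case_id, probability))
```

The threshold itself becomes a policy choice: lowering it automates more work, raising it keeps more decisions in human hands.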

AI Ethics and Laws

One of the most important solutions is creating ethical and legal frameworks for AI that protect human rights:
  • Privacy regulations: Systems should not use personal data without consent
  • Algorithmic accountability: Creators of AI systems should be responsible for the consequences of their algorithms' decisions
  • Right to erasure and to be forgotten: Users should be able to have their data deleted from systems
The European Union's AI Act has taken important steps in this direction, but there's still a long way to go.

Developing Emotional AI

One way to deal with the "human bug" is developing emotional AI that can understand human emotions and moods:
  • Face and voice recognition: Systems that can detect user mood from voice tone or facial expression
  • Dynamic adaptation: Changing system behavior based on user's emotional state
  • Algorithmic empathy: Systems that are not only logical but also "empathetic"
Real example: Some advanced chatbots can detect when a user is disappointed or angry and change their response style to be more empathetic.
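As a rough sketch of the idea (real systems use trained sentiment models, not keyword lists), a chatbot might switch to a more empathetic register when it detects frustration markers in a message; every marker, threshold, and reply below is illustrative:

```python
FRUSTRATION_MARKERS = {"useless", "still broken", "ridiculous", "third time", "!!"}

def detect_frustration(message):
    """Crude frustration heuristic: keyword hits plus shouting in all caps."""
    text = message.lower()
    hits = sum(marker in text for marker in FRUSTRATION_MARKERS)
    if message.isupper() and len(message) > 10:
        hits += 1
    return hits >= 1

def respond(message):
    if detect_frustration(message):
        # Switch to an empathetic register before addressing the problem.
        return "I'm sorry this is still not working for you. Let's fix it together: ..."
    return "Sure, here's how to do that: ..."

print(respond("How do I export my data?"))
print(respond("This is the third time the export is broken, it's useless!!"))
```

The technical detection is the easy part; the harder design question is whether users experience the tone shift as genuine empathy or as a system performing it.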

The Future: Will Humans Always Be Bugs?

Scenario 1: Humans Learn

In this scenario, humans gradually learn how to have better interactions with AI systems. This includes:
  • Higher digital literacy: Future generations growing up with AI from childhood will have better behaviors
  • New social norms: Unwritten rules for AI interaction are formed (like mobile phone etiquette)
  • Gradual adaptation: Humans somewhat align their behaviors with smart system expectations
Ongoing example: Generation Alpha, growing up with voice assistants and AI systems from childhood, interacts with these technologies far more naturally.

Scenario 2: AI Becomes More Complex

In this scenario, AI systems become so advanced they can understand all human complexities:
  • Advanced behavioral models: Systems that can predict even illogical behaviors
  • Deep learning from psychology: Algorithms designed based on cognitive sciences
  • AGI (Artificial General Intelligence): Systems capable of general, human-like reasoning
Risks of this scenario: If AI becomes so advanced it completely understands humans, it might gain excessive power and become artificial superintelligence (ASI) with its own specific risks.

Scenario 3: Balanced Coexistence

The most realistic scenario is probably that a balance forms between humans and machines:
  • Flexible systems: AIs designed to work with "imperfect" humans
  • Separated domains: Some areas completely automated and others under human control
  • Continuous monitoring: Legal and ethical frameworks that protect both sides
This scenario requires cooperation between developers, policymakers, and civil society.

Conclusion

Humans as "bugs" in AI systems is an undeniable reality. We are unpredictable, emotional, contradictory, and sometimes illogical beings - and that's exactly what makes us human.
But this "bug" is not necessarily a problem; rather, it's an opportunity. It reminds us that artificial intelligence should be in the service of humanity, not vice versa. We shouldn't adapt ourselves to systems, but rather design systems to cope with all the complexities and beauties of being human.
The future belongs to systems that are empathetic, flexible, and transparent - systems that accept that humans aren't bugs but an inseparable part of the equation. And perhaps this very "being a bug" is the last thing that distinguishes us from machines.
Ultimately, the main question isn't how to "fix" humans to be compatible with AI systems, but how to design AI systems that not only accept our humanity but also respect it.