Lecture 10 — Bias, Ethics & Human-in-the-Loop (HITL) in Multimodal AI

~3–4 hours (human-centered AI lecture)


🌍 Why This Lecture Matters More Than Any Other

Power without ethics is dangerous.

Modern AI systems:

  • see humans
  • read private documents
  • make recommendations
  • influence decisions
  • act autonomously

Without ethics:

  • bias scales
  • harm multiplies
  • trust collapses

Ethics is not optional. It is engineering.


⚖️ What Is Bias in AI?

Bias is systematic unfairness that disadvantages individuals or groups.

Bias can appear in:

  • data
  • models
  • evaluation
  • deployment
  • human usage

AI does not create bias — it amplifies it.


🧠 Types of Bias (Must-Know)

1️⃣ Data Bias

  • Skewed demographics
  • Missing populations
  • Historical inequality

Example:

Face recognition trained mostly on light-skinned faces.
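Skew like this can be caught before training by auditing a dataset's demographic distribution. A minimal sketch, with made-up labels and a hypothetical 10% threshold for flagging under-representation:

```python
from collections import Counter

# Hypothetical skin-tone annotations for a face dataset (illustrative only)
labels = ["light"] * 820 + ["medium"] * 130 + ["dark"] * 50

counts = Counter(labels)
total = sum(counts.values())

for group, n in sorted(counts.items()):
    share = n / total
    print(f"{group}: {share:.0%}")
    if share < 0.10:  # assumed audit threshold, not a standard
        print(f"  warning: '{group}' is under-represented")
```

A check this simple is not a fairness guarantee, but it surfaces obvious data bias before it reaches the model.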


2️⃣ Annotation Bias

  • Subjective labels
  • Cultural assumptions
  • Inconsistent annotators

3️⃣ Model Bias

  • Shortcut learning
  • Spurious correlations
  • Overgeneralization

4️⃣ Deployment Bias

  • Model used outside training context
  • Different population
  • High-stakes environment

🖼 Bias in Multimodal Systems

Multimodal AI adds new bias risks:

  • Vision stereotypes
  • Language prejudice
  • Accent discrimination
  • Cultural misinterpretation

Example:

Describing professions differently based on gender in images.


🎥 Temporal & Contextual Bias (Video)

Video models may:

  • Misinterpret behavior
  • Infer intent incorrectly
  • Over-police certain actions

Seeing is not understanding.


⚠️ Ethical Risks of Multimodal AI

  • Surveillance: facial recognition misuse
  • Privacy: reading personal documents
  • Manipulation: deepfakes
  • Automation bias: blind trust in AI
  • Exclusion: accessibility gaps

🧠 Ethics ≠ Rules

Ethics involves:

  • Values
  • Context
  • Trade-offs
  • Human judgment

Ethical AI is not “always right” — it is accountable.


👥 What Is Human-in-the-Loop (HITL)?

HITL = Humans actively guide, verify, and override AI systems.

HITL is used when:

  • Stakes are high
  • Errors are costly
  • Context matters
  • Accountability is required

🔁 HITL Interaction Modes

1️⃣ Human-in-the-Loop

Human approves or corrects outputs.

2️⃣ Human-on-the-Loop

Human monitors and intervenes if needed.

3️⃣ Human-out-of-the-Loop

Fully autonomous (⚠️ risky).
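The three modes above can be sketched as a single routing function. The mode names, return values, and 0.8 threshold are illustrative, not from any real library:

```python
def route(confidence, mode, threshold=0.8):
    """Decide who acts on a model output under each HITL mode (illustrative)."""
    if mode == "in_the_loop":
        return "human_review"  # every output is approved or corrected by a human
    if mode == "on_the_loop":
        # human monitors; only uncertain outputs are escalated
        return "auto" if confidence >= threshold else "human_review"
    return "auto"  # out of the loop: fully autonomous (risky)
```

Note how the amount of required human attention drops from mode to mode, which is exactly what makes the out-of-the-loop setting risky.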


🧩 Where HITL Fits in AI Pipelines


Data → Model → Prediction → Human Review → Decision

Examples:

  • Medical diagnosis
  • Legal document review
  • Loan approval
  • Content moderation

🐍 Python: HITL Pattern (Conceptual)

# Assume model(sample) returns a (label, confidence) pair
prediction, confidence = model(sample)

if confidence < threshold:
    send_to_human(prediction)   # uncertain: route to a human reviewer
else:
    accept(prediction)          # confident: accept automatically

Uncertainty is a signal, not a failure.


🧠 Designing HITL Systems Well

Good HITL systems:

  • Are transparent
  • Minimize human fatigue
  • Respect human expertise
  • Log decisions
  • Learn from corrections

Bad HITL systems:

  • Treat humans as rubber stamps
  • Overload reviewers
  • Hide model uncertainty
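The "log decisions" and "learn from corrections" points above can be sketched as a small review log. The schema and function names are illustrative, not a real library:

```python
import time

review_log = []

def record_review(model_output, human_decision, reviewer_id):
    """Append an auditable record of one human review (schema is illustrative)."""
    entry = {
        "timestamp": time.time(),
        "model_output": model_output,
        "human_decision": human_decision,
        "overridden": model_output != human_decision,
        "reviewer": reviewer_id,
    }
    review_log.append(entry)
    return entry

entry = record_review("approve", "reject", reviewer_id="r42")

# An override rate near 0% may mean reviewers are rubber-stamping;
# a high rate suggests the model should be retrained on the corrections.
override_rate = sum(e["overridden"] for e in review_log) / len(review_log)
```

Logging every decision gives you both an audit trail and a training signal, addressing two of the good-HITL criteria at once.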

⚖️ Fairness Metrics (High-Level)

Common notions:

  • Demographic parity
  • Equal opportunity
  • Equalized odds

⚠️ Important:

You cannot satisfy all fairness definitions simultaneously.

Ethics requires choices.
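These notions can be computed directly from predictions and group labels. A sketch on made-up data (eight toy examples, two groups):

```python
# Toy data (illustrative): true outcomes, model decisions, group membership
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

def positive_rate(g):
    """Fraction of group g that receives a positive prediction."""
    preds = [p for p, gr in zip(y_pred, group) if gr == g]
    return sum(preds) / len(preds)

def true_positive_rate(g):
    """Among group g's actual positives, fraction predicted positive."""
    pairs = [(t, p) for t, p, gr in zip(y_true, y_pred, group)
             if gr == g and t == 1]
    return sum(p for _, p in pairs) / len(pairs)

# Demographic parity: positive rates should match across groups
dp_gap = abs(positive_rate("A") - positive_rate("B"))

# Equal opportunity: true-positive rates should match across groups
eo_gap = abs(true_positive_rate("A") - true_positive_rate("B"))
```

On this toy data the demographic-parity gap is zero while the equal-opportunity gap is not, which is exactly the kind of tension the impossibility results describe: satisfying one definition does not satisfy the others.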


🧠 Accountability & Responsibility

Key questions:

  • Who is responsible for errors?
  • Who audits the system?
  • Who can appeal decisions?
  • Who benefits?

AI shifts power — ethics decides where it goes.


📜 Regulation & Governance (Brief)

Trends:

  • AI Act (EU)
  • Model cards
  • Data sheets
  • Audit trails

Purpose:

  • Transparency
  • Safety
  • Human rights protection
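A model card can be as simple as a structured record shipped with the model. The fields below are an assumption, loosely following the "Model Cards for Model Reporting" proposal, with invented example values:

```python
# A minimal, illustrative model-card record (all fields and values invented)
model_card = {
    "model_name": "face-attr-classifier-v2",
    "intended_use": "Internal photo tagging; not for identification.",
    "out_of_scope": ["law enforcement", "surveillance"],
    "training_data": "Internal dataset; demographic audit on record.",
    "evaluation": {"accuracy": 0.91, "equal_opportunity_gap": 0.04},
    "limitations": "Lower accuracy on low-light images.",
    "contact": "ml-ethics@example.org",
}
```

Even this small record makes scope, known gaps, and a point of accountability explicit, which is the governance purpose listed above.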

🧠 Research Insight

The most dangerous AI is not malicious — it is confident, biased, and unchecked.

Future AI research must integrate:

  • Ethics-by-design
  • Value alignment
  • Continuous monitoring
  • Human agency

🧪 Student Knowledge Check (Hidden)

Q1 — Objective

Does AI create bias?

Answer

No. It amplifies existing bias in data and systems.


Q2 — MCQ

Which is NOT a type of bias?

A. Data bias
B. Annotation bias
C. Hardware bias
D. Deployment bias

Answer

C. Hardware bias


Q3 — MCQ

When is HITL most important?

A. Low-risk chatbots
B. Image filters
C. High-stakes decisions
D. Games

Answer

C. High-stakes decisions


Q4 — Objective

What is automation bias?

Answer

Humans over-trusting AI outputs without critical thinking.


Q5 — Objective

Why is uncertainty important?

Answer

It signals when human review is needed.


🌱 Final Reflection

If AI becomes very powerful, what must always remain human?

Values, responsibility, empathy, and moral judgment.


✅ Key Takeaways

  • Bias is systemic, not accidental
  • Multimodal AI increases ethical risk
  • HITL is a design principle, not a patch
  • Fairness requires trade-offs
  • Humans must remain accountable
