Week 5: Ethics in Social Network Analytics
Feb-2025
What ethical responsibility do we have when gathering and using information, behavior patterns, and recommendation systems?
The General Data Protection Regulation (GDPR) is an EU law governing data protection and privacy. It is a key component of EU privacy law and human rights law (Article 8, Charter of Fundamental Rights of the European Union).
Note: The UK now follows UK GDPR.
🔍 Anonymized data is never fully anonymous — "anonymized" records can often be re-identified by linking them with other datasets.
✅ Differential Privacy (DP)
- Adds mathematical noise to protect individual data points.
- Example: Used by Apple & Google in analytics.
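The core DP mechanism can be sketched with a noisy count query. This is a minimal illustration, not Apple's or Google's actual implementation; the function name `dp_count` and the choice of ε are ours:

```python
import math
import random

def dp_count(values, epsilon):
    """Release a count with epsilon-differential privacy.

    A count query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices.
    """
    true_count = len(values)
    scale = 1.0 / epsilon
    # Inverse-CDF sampling from the Laplace(0, scale) distribution
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

random.seed(0)
noisy = dp_count(["user"] * 100, epsilon=1.0)  # close to 100, never exact
```

Smaller ε means stronger privacy but noisier answers — the convenience/privacy trade-off made explicit.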
✅ Federated Learning (FL)
- Decentralized AI training → No raw data is shared.
- Example: Google’s Gboard keyboard learns from users locally.
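The aggregation step at the heart of FL can be sketched as one round of Federated Averaging (FedAvg). The weights and client sizes below are made-up; only model parameters, never raw data, reach the server:

```python
def federated_average(client_updates, client_sizes):
    """One FedAvg round: combine locally trained model parameters,
    weighted by each client's dataset size. Raw data stays on-device."""
    total = sum(client_sizes)
    n_params = len(client_updates[0])
    return [
        sum(w[i] * s for w, s in zip(client_updates, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients share only their locally trained weights.
global_model = federated_average([[1.0, 2.0], [3.0, 4.0]], [10, 30])
# → [2.5, 3.5]: the larger client pulls the average toward its weights
```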
✅ Homomorphic Encryption (HE)
- AI can process encrypted data without decrypting it.
- Example: Used in healthcare AI for private medical analysis.
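The "compute on ciphertexts" property can be demonstrated with a toy Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The tiny primes below are for illustration only and offer no real security:

```python
import math
import random

# Toy Paillier setup (insecure key size, illustration only)
p, q = 61, 53
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

a, b = encrypt(30), encrypt(12)
# The sum is computed entirely on ciphertexts:
total = decrypt((a * b) % n2)  # → 42
```

A hospital could, in principle, have a third party aggregate encrypted patient measurements this way without ever seeing the underlying values.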
✅ Synthetic Data
- AI-generated artificial data mimicking real datasets.
- Example: Amazon & NVIDIA use it to train AI while reducing exposure of real customer data.
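A deliberately simple sketch of the idea: fit a distribution to real data, then sample fresh rows from it. Production generators (GANs, copulas) also capture correlations between columns; this toy version, with illustrative names, preserves only each column's mean and spread:

```python
import math
import random

def synthesize(real_rows, n):
    """Toy synthetic-data generator: fit an independent Gaussian to
    each numeric column of `real_rows` and sample n fresh rows.
    No original row ever appears in the output."""
    cols = list(zip(*real_rows))
    stats = []
    for col in cols:
        mu = sum(col) / len(col)
        var = sum((x - mu) ** 2 for x in col) / len(col)
        stats.append((mu, math.sqrt(var)))
    return [[random.gauss(mu, sd) for mu, sd in stats] for _ in range(n)]
```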
✅ Secure Multi-Party Computation (SMPC)
- Multiple parties collaborate to compute results without exposing private data.
- Example: Used in financial risk analysis.
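The simplest SMPC building block is additive secret sharing, sketched below with illustrative values. Each party's input is split into random shares; the parties can jointly compute the total, but no subset of shares reveals any individual input:

```python
import random

P = 2**61 - 1  # large prime modulus (illustrative choice)

def share(secret, n_parties):
    """Split `secret` into n additive shares mod P; any n-1 shares
    are uniformly random and reveal nothing about the secret."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def secure_sum(all_shares):
    """Sum each share position across parties, then combine the
    partial sums: only the total is reconstructed, never the inputs."""
    partial = [sum(col) % P for col in zip(*all_shares)]
    return sum(partial) % P

# e.g. three banks' private risk exposures
inputs = [120, 75, 300]
total = secure_sum([share(x, 3) for x in inputs])  # → 495
```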
What trade-offs are you willing to accept between convenience and privacy?
🔍 Case Study:
Amazon shut down its AI hiring tool after discovering gender bias: trained on historical résumés from a male-dominated applicant pool, the system penalized female candidates.
📌 ProPublica Investigation:
COMPAS, a recidivism risk-scoring tool, was found to be biased against African American defendants — it produced higher false-positive rates for them — contributing to unjust sentencing disparities.
Bias can enter at either stage of the pipeline: \[ \text{Data} + \text{Model/Algorithm} = \text{Prediction (Decision)} \]
Common bias sources:
📌 Data Bias occurs when:
Fact: Most datasets are biased unless they were generated from carefully controlled randomized experiments.
❌ NO!
⚖️ Bias Mitigation Techniques fall into three categories:
✅ Pre-processing (before training) → Modify data to reduce bias
✅ In-processing (during training) → Adjust models to reduce discrimination
✅ Post-processing (after training) → Adjust predictions to ensure fairness
🔹 Re-sampling: Adjust class distributions (SMOTE for imbalanced datasets)
🔹 Re-weighting: Assign weights to samples to balance representation
🔹 Fair Representation Learning: LFR (Learning Fair Representations)
🛠 Frameworks:
✅ AI Fairness 360
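The re-weighting technique above can be sketched following Kamiran & Calders' reweighing scheme (the same idea behind AIF360's `Reweighing`; the function below is our own minimal version, not the library's API):

```python
from collections import Counter

def reweigh(groups, labels):
    """Weight each sample by P(group) * P(label) / P(group, label),
    so that protected group and outcome become statistically
    independent in the weighted training data."""
    n = len(labels)
    pg = Counter(groups)             # group frequencies
    pl = Counter(labels)             # label frequencies
    pgl = Counter(zip(groups, labels))  # joint frequencies
    return [
        (pg[g] / n) * (pl[y] / n) / (pgl[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Group 'a' rarely gets label 1, so its (a, 1) sample is up-weighted.
weights = reweigh(["a", "a", "b", "b"], [1, 0, 1, 1])
# → [1.5, 0.5, 0.75, 0.75]
```

These weights are then passed to any learner that supports `sample_weight`.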
🔹 Adversarial Debiasing: Train a second model to remove bias signals
🔹 Fairness Constraints: Use equalized odds, demographic parity
🔹 Differentially Private Training: Protects individual data points
🛠 Frameworks:
✅ Fairness Indicators (TensorFlow Extended)
✅ AIF360 Adversarial Debiasing Models
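The demographic parity criterion named in the fairness constraints above compares positive-prediction rates across groups; a gap near 0 means the model selects each group at the same rate. A minimal metric sketch (names are illustrative):

```python
def demographic_parity_gap(preds, groups):
    """Largest difference in positive-prediction rate between any
    two groups. 0 = perfect demographic parity."""
    by_group = {}
    for p, g in zip(preds, groups):
        by_group.setdefault(g, []).append(p)
    rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(rates.values()) - min(rates.values())

# Group 'a' is selected 2/3 of the time, group 'b' only 1/3.
gap = demographic_parity_gap([1, 1, 0, 1, 0, 0],
                             ["a", "a", "a", "b", "b", "b"])
```

In-processing methods add this gap (or an equalized-odds analogue) as a penalty or hard constraint during training.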
🔹 Equalized Odds Post-processing: Adjusts predictions so that error rates (true- and false-positive rates) are equal across groups
🔹 Calibrated Equalized Odds: Relaxes equalized odds to preserve calibration, balancing fairness against accuracy
🔹 Reject Option-Based Fairness: Reassigns predictions in the region where the classifier is uncertain, in favor of the disadvantaged group
🛠 Frameworks:
✅ Fairlearn – Post-processing Tools
✅ AIF360 – Post-processing Fairness Algorithms
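One way post-processing can equalize error rates is by choosing a separate decision threshold per group. The sketch below targets equal true-positive rates (one component of equalized odds); it is a simplified illustration with made-up names, not Fairlearn's actual `ThresholdOptimizer`:

```python
def equalize_tpr(scores, labels, groups, target_tpr=0.8):
    """Pick a per-group score threshold so every group accepts
    (approximately) the same fraction of its true positives."""
    thresholds = {}
    for g in set(groups):
        # Scores of this group's actual positives, highest first
        pos = sorted(
            (s for s, y, gg in zip(scores, labels, groups)
             if gg == g and y == 1),
            reverse=True,
        )
        k = max(1, round(target_tpr * len(pos)))
        # Accept the top target_tpr fraction of this group's positives
        thresholds[g] = pos[k - 1]
    return thresholds

# Group 'b' systematically scores lower, so it gets a lower threshold.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
labels = [1] * 10
groups = ["a"] * 5 + ["b"] * 5
thresholds = equalize_tpr(scores, labels, groups)  # {'a': 0.6, 'b': 0.1}
```

The model itself is untouched — only the decision rule changes, which is what makes post-processing attractive when retraining is impossible.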
The EU AI Act is the world’s first comprehensive legal framework regulating AI, built around risk-based classification and accountability.
✅ GDPR → Protects personal data & privacy
✅ EU AI Act → Regulates AI models & their risks