Curriculum

Core Modules

Responsible Artificial Intelligence I & II

These modules introduce the ethical theories, accountability frameworks, and privacy principles essential to responsible AI. The curriculum covers moral frameworks, the design of AI systems that act as ethical agents, and the balance between privacy and functionality.

Key Topics:

  • Ethical AI design
  • Algorithmic accountability
  • Privacy principles
  • Fairness in AI
  • Security protocols

Practical Applications:

  • Case studies on biased algorithms
  • Ethical decision-making scenarios
  • Methods to design transparent AI systems

Human-Centered Design and User Experience

This module series focuses on user-centered design principles and usability testing, preparing students to create intuitive and accessible AI solutions. Key areas include requirements engineering and interface design.

Key Topics:

  • User requirements
  • Usability testing
  • Interface and interaction design
  • User experience (UX)

Practical Applications:

  • Designing mock-ups
  • Prototyping user-centered interfaces
  • Testing user interaction with AI systems (a usability-scoring sketch follows this list)
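
As a taste of the usability-testing work in this series, here is a minimal sketch that scores a System Usability Scale (SUS) questionnaire. The SUS itself is not named in the module description; it is used only as one common example of a usability metric.

    # Minimal sketch: scoring a System Usability Scale (SUS) questionnaire.
    # Each respondent answers 10 items on a 1-5 scale; odd items are positively
    # worded, even items negatively worded. The final score runs from 0 to 100.

    def sus_score(responses):
        """responses: list of 10 integers, each between 1 and 5."""
        if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
            raise ValueError("SUS expects 10 answers on a 1-5 scale")
        total = 0
        for i, r in enumerate(responses, start=1):
            # Odd items contribute (score - 1); even items contribute (5 - score).
            total += (r - 1) if i % 2 == 1 else (5 - r)
        return total * 2.5

    # One participant's (hypothetical) answers from a test session.
    print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0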

Cybersecurity and Data Privacy

This module covers both fundamental and advanced aspects of securing AI systems and managing data privacy. The curriculum emphasizes data-protection regulations and best practices for maintaining user trust.

Key Topics:

  • Data encryption
  • Privacy by design
  • Regulatory compliance (e.g., GDPR)
  • Risk assessment

Practical Applications:

  • Projects on secure data management (see the encryption sketch after this list)
  • Building privacy-compliant AI systems
  • Implementing cybersecurity measures for AI
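
To give a flavour of the hands-on projects, the sketch below encrypts a record with symmetric (Fernet) encryption from the Python cryptography package. It is a minimal illustration on assumed data, not a complete secure-storage design.

    # Minimal sketch: encrypting a user record at rest with the
    # `cryptography` package (pip install cryptography).
    from cryptography.fernet import Fernet

    # In practice the key would live in a key-management system, never in code.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    record = b'{"user_id": 17, "email": "ada@example.org"}'   # hypothetical data
    token = cipher.encrypt(record)       # ciphertext that is safe to store
    restored = cipher.decrypt(token)     # recoverable only with the same key

    assert restored == record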

Ethics and Professionalism in Data Science

This module addresses ethical considerations in data science, including discrimination, bias, and privacy concerns, and helps students develop a critical approach to fairness in AI.

Key Topics:

  • Ethical frameworks
  • Discrimination prevention
  • Professional responsibility
  • Fairness metrics

Practical Applications:

  • Assessing fairness in data science (illustrated in the sketch below)
  • Applying ethical frameworks to real-world AI
  • Developing bias-mitigating solutions
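
As an example of the kind of fairness assessment practised here, this minimal sketch computes per-group selection rates and their demographic-parity difference. The predictions and group labels are hypothetical.

    # Minimal sketch: demographic-parity difference on hypothetical predictions.
    import numpy as np

    # Model decisions (1 = positive outcome) and a protected attribute per person.
    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
    group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

    rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
    print("selection rates:", rates)          # {'a': 0.6, 'b': 0.4}

    # Gap between the highest and lowest rate: values near 0 suggest similar
    # treatment across groups; larger gaps flag potential bias for review.
    print("parity difference:", max(rates.values()) - min(rates.values()))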

Natural Language Processing (NLP) and Human-Centered AI

This module covers human-centered NLP, focusing on the design of language models that prioritize interpretability and user comprehension. Advanced NLP topics, including deep learning techniques, are also explored.

Key Topics:

  • Syntactic parsing
  • Machine translation
  • Attention models
  • Deep neural networks

Practical Applications:

  • Building and testing NLP models
  • Creating interpretable AI language systems
  • Applying NLP in sentiment analysis (a small classifier sketch follows this list)
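
As a small illustration of the applied work, the sketch below trains a toy sentiment classifier with scikit-learn. The data, and the choice of TF-IDF features with logistic regression, are assumptions made only for the example.

    # Minimal sketch: sentiment classification with scikit-learn
    # (pip install scikit-learn). Toy data for illustration only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["great product, works well", "terrible, broke after a day",
             "really happy with it", "waste of money",
             "love it", "awful support"]
    labels = [1, 0, 1, 0, 1, 0]   # 1 = positive, 0 = negative

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    print(model.predict(["works really well", "broke immediately"]))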

Advanced Modules

Explainable Artificial Intelligence (XAI)

This module focuses on making complex AI models interpretable for end-users. Students learn methods to present AI-driven insights transparently.

Key Topics:

  • Model interpretability
  • Data visualization
  • Explainability tools

Practical Applications:

  • Creating reports that explain model behavior (see the permutation-importance sketch below)
  • Developing visualization tools for AI outputs
  • Designing interpretable AI applications for healthcare and finance
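
As an illustration of one widely used explainability technique, the sketch below computes permutation importance by hand: permute one feature at a time and measure how much the model's test accuracy drops. The dataset and model are placeholders.

    # Minimal sketch: permutation importance, a model-agnostic explainability
    # technique. Dataset and model are placeholders for illustration.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    baseline = model.score(X_te, y_te)

    rng = np.random.default_rng(0)
    for j in range(3):                       # first three features, for brevity
        X_perm = X_te.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature-target link
        print(f"feature {j}: importance ~ {baseline - model.score(X_perm, y_te):.3f}")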

Equity and Discrimination in Computing Systems

This module addresses bias and discrimination in computing systems, focusing on ethical machine learning practices.

Key Topics:

  • Fairness metrics
  • Bias identification
  • Anti-discrimination strategies

Practical Applications:

  • Analyzing case studies on biased AI
  • Using fairness metrics to assess model performance (a worked example follows this list)
  • Designing unbiased ML algorithms
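
As a worked example of applying a fairness metric, this sketch compares true-positive rates across two groups (an equal-opportunity check). The labels, predictions, and group assignments are hypothetical.

    # Minimal sketch: true-positive-rate (equal opportunity) gap between groups.
    import numpy as np

    y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
    y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0, 0, 0])
    group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

    def tpr(mask):
        # Share of truly positive cases in this group that the model recovers.
        positives = (y_true == 1) & mask
        return (y_pred[positives] == 1).mean()

    tpr_a, tpr_b = tpr(group == "a"), tpr(group == "b")
    # A large gap means true positives are found less often for one group.
    print(f"TPR a: {tpr_a:.2f}, TPR b: {tpr_b:.2f}, gap: {abs(tpr_a - tpr_b):.2f}")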

Research Data Management and Sharing

This module explores the management, archiving, and sharing of research data, ensuring data handling aligns with best practices.

Key Topics:

  • Data archiving
  • Open-access policies
  • Reproducibility
  • Data sharing ethics

Practical Applications:

  • Developing data management plans
  • Archiving research data (a checksum sketch follows this list)
  • Creating frameworks for responsible data sharing
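
As a small example of archival practice, the sketch below records a SHA-256 checksum next to a data file so later users can verify that the archived copy has not changed. The file names are hypothetical.

    # Minimal sketch: fixity checking for archived research data.
    # File names are placeholders; a real plan would cover every deposited file.
    import hashlib, json, pathlib

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    data_file = pathlib.Path("survey_responses.csv")     # hypothetical dataset
    manifest = {"file": data_file.name, "sha256": sha256_of(data_file)}

    # Stored beside the data so the check can be re-run after download.
    pathlib.Path("manifest.json").write_text(json.dumps(manifest, indent=2))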