Fair, Transparent and Interpretable Machine Learning

Vrije Universiteit Amsterdam

Course Description

  • Course Name

    Fair, Transparent and Interpretable Machine Learning

  • Host University

    Vrije Universiteit Amsterdam

  • Location

    Amsterdam, The Netherlands

  • Area of Study

    Economics

  • Language Level

    Taught In English

  • Course Level Recommendations

    Upper

    ISA offers course level recommendations in an effort to facilitate the determination of course levels by credential evaluators. We advise each institution to have its own credentials evaluator make the final decision regarding course levels.

Hours & Credits

  • ECTS Credits

    6
  • Recommended U.S. Semester Credits

    3

  • Recommended U.S. Quarter Units

    4

  • Overview

    Course Objective

    Students know about, and are able to apply basic statistical concepts of fairness in algorithmic decision making. They also understand the challenges and limitations of these concepts in real-world settings. Furthermore, students know and are able to apply different approaches to interpret the model outcomes of supervised machine learning methods, including both intrinsically interpretable models and model-agnostic methods.


    Course Content

    Machine learning algorithms are increasingly used to make or improve predictions, which then serve as a basis for decision making. Examples include bank lending, college admissions, and bail decisions in criminal proceedings.

    Though the use of algorithmic decision making is often justified as being "more objective" than human decision making, there are many instances demonstrating that it can produce biased or discriminatory predictions or decisions that unfairly disadvantage certain individuals or groups. Awareness of this issue and knowledge about approaches to address it are of high importance for data scientists and policy makers.
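One of the basic statistical concepts of fairness referred to above is demographic (statistical) parity: whether two groups receive the favorable decision at the same rate. Below is a minimal illustrative sketch; the function name and the toy data are stand-ins invented for this example, not course materials.

```python
# Illustrative sketch of demographic parity difference between two groups.
# All names and data here are made up for illustration.

def demographic_parity_difference(predictions, groups):
    """Difference in positive-decision rates between groups "A" and "B".

    predictions: list of 0/1 model decisions (1 = favorable outcome)
    groups: list of group labels ("A" or "B"), aligned with predictions
    """
    rate = {}
    for g in ("A", "B"):
        decisions = [p for p, gr in zip(predictions, groups) if gr == g]
        rate[g] = sum(decisions) / len(decisions)
    return rate["A"] - rate["B"]

# Toy example: group A receives the favorable outcome more often.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A value of 0 would indicate equal favorable-decision rates; the course discusses why such criteria can conflict and what they can and cannot guarantee in practice.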


    Another highly relevant aspect of decision making based on data is the interpretability of the estimation outcomes and decisions obtained using a machine learning method. One possibility is to restrict the class of applied algorithms to interpretable models (e.g. decision trees, linear regression, logistic regression). However, "black-box" methods (e.g. random forests, deep neural networks) have proven highly effective in many more complex settings, yet they do not provide a means to understand the sources of a particular prediction or decision. Model-agnostic methods such as partial dependence plots, local surrogate models (LIME), and Shapley values are important concepts that enhance interpretability and allow comparisons across any set of machine learning outcomes.
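The idea behind one of these model-agnostic methods, the partial dependence plot, can be sketched in a few lines: fix one feature at a grid of values and average the black-box model's predictions over the data. The "black box" and data below are illustrative stand-ins, not course materials.

```python
# Illustrative sketch of a partial dependence computation for one feature.
# The model and data are made-up stand-ins for any black-box predictor.

def partial_dependence(model, X, feature_idx, grid):
    """For each value in `grid`, set feature `feature_idx` to that value
    in every row of X and average the model's predictions."""
    pd_values = []
    for v in grid:
        preds = []
        for row in X:
            modified = list(row)
            modified[feature_idx] = v  # fix the feature of interest
            preds.append(model(modified))
        pd_values.append(sum(preds) / len(preds))
    return pd_values

# Stand-in "black box": prediction depends on both features.
black_box = lambda x: 2.0 * x[0] + x[0] * x[1]

X = [[1.0, 0.0], [2.0, 1.0], [3.0, -1.0]]  # toy data
grid = [0.0, 1.0, 2.0]
print(partial_dependence(black_box, X, 0, grid))  # [0.0, 2.0, 4.0]
```

Plotting the grid against these averages gives the partial dependence plot for that feature; local methods such as LIME and Shapley values instead explain individual predictions.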


    This course is divided into two parts. The first part of the course (weeks 1-3) addresses the topic of interpretability in supervised machine learning settings. We will study local and global methods to interpret the outcomes of black-box models and apply them to a range of real-world examples. The second part (weeks 4-6) is concerned with fairness in machine learning. We will introduce formal definitions of fairness, analyze real-world data sets, and discuss what algorithmic decision making can and cannot achieve.


    Additional Information

    Teaching Methods

    4 hours per week of lectures, 2 hours per week of tutorials.


    Method of Assessment

    Written exam, group assignment.

Course Disclaimer

Courses and course hours of instruction are subject to change.

Some courses may require additional fees.
