
socially and politically unacceptable to allow biases in the case of race, ethnicity and gender.

A key question is to what degree industry should bear the responsibility and cost of identifying bias, for example by using data to identify discrimination. When automated decision-making causes unlawful discrimination and harm under existing laws, firms relying on such processing might employ tools (and, under some laws, they may be responsible) to ensure that using data will not amplify historical bias, and to use data processing methods that avoid using proxies for protected classes. In addition, human reviews of algorithm outputs may be necessary. It may also be possible to use data to identify discrimination, and to require companies by regulation to do so.

Even if the result may not violate existing laws prohibiting discrimination on the basis of race, religion or another protected class, the unfair harm to individuals may merit requiring industry to employ ethical frameworks and “best practices” to adjust algorithms and to ensure that outcomes are monitored and evaluated. Other mitigating measures may include providing individuals the opportunity (or right) to receive an explanation for automated decisions (see section 7.2), and employing data protection impact assessments (DPIAs) (see section 8).

Other approaches that have been suggested include consumer agencies randomly reviewing the scoring systems of financial service providers (and of health providers, educational institutions and other bodies that routinely make decisions about people) from time to time. They might run hypothetical scenarios to assess whether the models were effectively using statistical proxies for protected groups, such as race, gender, religion and disability. Such auditing might encourage firms to design against such risks.140

Differential pricing and other terms

Availability of data allows a financial service provider to better assess the risk that a consumer represents, and so to offer services that might not otherwise be available. However, the availability of a potentially vast array of data about a consumer also creates an information asymmetry whereby the provider knows more about the consumer than the consumer knows about the provider. The provider may take advantage of such a situation and engage in what economists refer to as “differential pricing,” in which the provider charges different prices to different consumers for the same product.

Differential pricing is common and often has consumer benefits, for example, train tickets are often




                Monetary Authority of Singapore’s FEAT Principles
                1. Individuals or groups of individuals are not systematically disadvantaged through AIDA-driven deci-
                sions unless these decisions can be justified.
                3. Data and models used for AIDA-driven decisions are regularly reviewed and validated […] to mini-
                mize unintentional bias.
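
The hypothetical-scenario auditing described above, and the regular review and validation contemplated by the FEAT principles, could in practice take a form like the following sketch: a reviewer re-scores pairs of otherwise identical hypothetical applicants to see whether a variable suspected of proxying for a protected group drives the decision. The scoring rule, feature names and cut-off below are illustrative assumptions, not taken from this report, the MAS principles or any actual provider.

"""
Illustrative sketch only: a scenario-based audit of an automated scoring model.
The scoring rule, feature names and thresholds are hypothetical examples.
"""
from dataclasses import dataclass

@dataclass
class Applicant:
    income: float          # declared annual income
    postcode_score: float  # area-based variable suspected of proxying for a protected group

def credit_score(a: Applicant) -> float:
    # Stand-in for the provider's (possibly opaque) scoring model.
    return 0.00002 * a.income + 0.5 * a.postcode_score

def approved(a: Applicant, cutoff: float = 1.2) -> bool:
    return credit_score(a) >= cutoff

# Hypothetical scenario: two applicants identical except for the suspected
# proxy variable. If the approval decision flips, the proxy is effectively
# deciding the outcome.
baseline = Applicant(income=45_000, postcode_score=1.0)
counterfactual = Applicant(income=45_000, postcode_score=0.2)

for label, applicant in [("baseline", baseline), ("counterfactual", counterfactual)]:
    print(label, credit_score(applicant), approved(applicant))

if approved(baseline) != approved(counterfactual):
    print("Decision flips when only the suspected proxy changes: "
          "the variable may be acting as a proxy for a protected group.")

A reviewer or consumer agency running batches of such paired scenarios across many variables could flag models whose decisions are driven by ostensibly neutral attributes that track protected categories.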

                Smart Campaign’s draft Digital Credit Standards
                 Indicator 5.2.1.0
                Protected Categories include ethnicity, gender, age, disability, political affiliation, sexual orientation,
                caste, and religion.
                 Indicator 5.2.3.0
                Algorithms are designed to reduce the risk of client discrimination based on Protected Categories.
                 Indicator 5.2.3.1
                 After an initial learning phase, the provider conducts analysis on connections between non-discriminatory
                 variables and discriminatory variables in order to check for unintentional bias in automated credit
                 decisions (see the illustrative sketch after this box).
                 Indicator 5.2.3.2
                If the provider outsources the algorithm development, the provider must require the same standards
                of the indicator above be met by the third party. The provider has access to the following information
                from the third party: algorithm features and documentation, material of training provided to the team,
                and  documents  tracking testing  history including  date,  description, outcome,  discrimination  items
                identified, corrective action taken.
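
Indicator 5.2.3.1 above contemplates checking whether ostensibly neutral input variables are correlated with Protected Categories and whether automated decisions end up differing across protected groups. The following sketch shows one minimal form such an analysis might take; the synthetic data, column names and correlation-based test are illustrative assumptions, not part of the Smart Campaign standards.

"""
Illustrative sketch only: checking input variables for correlation with a
protected category, and comparing decision rates across groups. All data
and column names are synthetic examples.
"""
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical loan-application data: one protected attribute and two
# candidate model inputs, one of which is deliberately correlated with
# the protected attribute so the check has something to find.
protected = rng.integers(0, 2, n)                       # e.g. a protected-category flag
postcode_score = 0.6 * protected + rng.normal(0, 1, n)  # potential proxy variable
income = rng.normal(50_000, 12_000, n)                  # non-proxy variable
applications = pd.DataFrame({
    "protected": protected,
    "postcode_score": postcode_score,
    "income": income,
})

# Hypothetical automated decision: approve when a simple score clears a cut-off.
score = 0.00002 * applications["income"] + 0.5 * applications["postcode_score"]
applications["approved"] = (score > score.median()).astype(int)

# Check 1: do any non-protected inputs act as statistical proxies for the
# protected category? (Simple correlation here; real audits would use richer tests.)
for col in ["postcode_score", "income"]:
    corr = applications[col].corr(applications["protected"])
    print(f"correlation({col}, protected) = {corr:+.2f}")

# Check 2: do approval rates differ across protected groups even though the
# protected attribute is never used directly by the decision rule?
rates = applications.groupby("protected")["approved"].mean()
print("approval rate by group:")
print(rates)
print("approval-rate gap:", abs(rates.diff().iloc[-1]))

A fuller review along the lines of the indicator would go further, for example by regressing the protected attribute on all model inputs or by re-running decisions with suspected proxy variables removed, but even a check this simple makes an unintended disparity visible for corrective action.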





