Improving explanations

An alternative or supplement to providing an explanation has also been suggested – that consumers should be provided "counterfactual" feedback on automated (or even only predominantly automated) decisions, positive or negative. Counterfactual explanations can inform the concerned individual not so much how a decision was reached but rather what variations in the input data might have led to a different decision.191 For instance, a digital financial service provider could inform the consumer, "Your loan application stated that your annual income is $30,000. If your income were $45,000, you would have been offered a loan."192
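By way of illustration only, the short sketch below shows how a provider might generate such single-variable counterfactual feedback. The decision rule, cut-off and figures are invented to mirror the $30,000/$45,000 example above and do not describe any actual lender's model; a production system would also have to deal with many interacting variables, as discussed next.

```python
# Hypothetical sketch: single-variable counterfactual feedback for a loan decision.
# The scoring rule and all numbers are invented to mirror the example in the text.

def approve(income: float, existing_debt: float) -> bool:
    """Toy decision rule standing in for a lender's (far more complex) model."""
    return income - 2 * existing_debt >= 35_000

def income_counterfactual(income: float, existing_debt: float,
                          step: float = 500.0, max_income: float = 200_000.0):
    """Naive upward search for the smallest income (at the given step size)
    that would flip a rejection into an approval; returns None if the
    application is already approved or no income up to max_income helps."""
    if approve(income, existing_debt):
        return None
    candidate = income
    while candidate <= max_income:
        if approve(candidate, existing_debt):
            return candidate
        candidate += step
    return None

applicant_income, applicant_debt = 30_000.0, 5_000.0
flip_point = income_counterfactual(applicant_income, applicant_debt)
if flip_point is not None:
    print(f"Your application stated an annual income of ${applicant_income:,.0f}. "
          f"If your income were ${flip_point:,.0f}, you would have been offered a loan.")
```

The same scan across a range of values could also drive the kind of sliding-scale "what if" interface discussed below.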
Of course, there are many input variables to decision making, and many combinations of such variables that could produce a near infinite number of potential counterfactuals. Thus, it is unlikely that one can reduce an explanation for a decision to one or even a few variables. In addition, such an approach would need to be wary of the commitment it may make to offer the service on the alternative terms (if the individual then presents with an income of $45,000, they might have a legitimate expectation that the loan will be approved).
However, if such counterfactuals were coded into the service, the counterfactual results could be provided rapidly to the consumer, who could potentially experiment with different levels of variables. Indeed, consumer interfaces could even provide a sliding scale for inputs, allowing some experimentation by the consumer. It may thus be possible to provide some counterfactuals that would improve the consumer's understanding, offer an opportunity to contest the decision, or even enable the consumer to modify their situation so as to obtain a more favourable decision. For instance, by understanding that stopping smoking would entitle the individual to health insurance, or that paying off a certain debt or increasing his or her income would result in a positive credit decision, the individual can exercise more affirmative agency over his or her life than being the passive recipient of the decision.
This may narrow the gap in negotiating positions and result in a commercially profitable offer for a desired service being made and accepted, benefitting both provider and consumer. There may, then, be reasons to expect market participants to introduce such features as a differentiating element of their services in a competitive market, although a regulatory "nudge" could be useful in some cases to get such practices started and make them mainstream.
It has been suggested that the counterfactual approach might also mitigate concerns that requiring explanations may lead to exposure of trade secrets and violations of non-disclosure obligations. Providing counterfactuals may avoid having to disclose the internal logic of the algorithms of the decision-making system. This could be a practical, results-oriented approach to transparency, and may have advantages over requirements to provide an explanation that may be so complex that it neither increases understanding nor enables improvements in a consumer's situation.

While referring to counterfactuals is a relatively light means of improving the position of consumers, not least in opening up alternative means to obtain the services they seek, there are deeper ways to improve accountability of machine learning systems. It might be possible, for instance, to review and certify properties of computer systems, and to ensure that automated decisions are reached in accordance with rules that have been agreed upon, for example to protect against discrimination. Such an approach is referred to by some as "procedural regularity."193
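As a rough sketch of the underlying idea only, a provider could publish a commitment to the decision rules in force and log each decision against that commitment, so that an auditor can later check that the decision was in fact produced by the agreed rules. All names, the record format and the toy rule below are invented for the illustration.

```python
# Hypothetical sketch of a commit-and-audit pattern for "procedural regularity".
import hashlib
import json

def commit_to_rules(rule_artifact: bytes) -> str:
    """Publish a hash commitment to the decision rules (code, parameters, etc.)
    before use, so they cannot be silently swapped afterwards."""
    return hashlib.sha256(rule_artifact).hexdigest()

def record_decision(inputs: dict, outcome: bool, rules_commitment: str) -> str:
    """Create an audit record binding the inputs and outcome to the committed rules."""
    return json.dumps({"inputs": inputs, "outcome": outcome,
                       "rules_commitment": rules_commitment}, sort_keys=True)

def verify_decision(record_json: str, rule_artifact: bytes, rerun) -> bool:
    """Auditor's check: the disclosed rules match the logged commitment, and
    re-running them on the logged inputs reproduces the logged outcome."""
    record = json.loads(record_json)
    return (record["rules_commitment"] == commit_to_rules(rule_artifact)
            and rerun(record["inputs"]) == record["outcome"])

# Example: the same toy rule as in the earlier sketch, committed to and audited.
RULES = b"approve if income - 2*debt >= 35000"  # stand-in for real model artefacts

def decide(x: dict) -> bool:
    return x["income"] - 2 * x["debt"] >= 35_000

commitment = commit_to_rules(RULES)
log_entry = record_decision({"income": 30_000, "debt": 5_000},
                            decide({"income": 30_000, "debt": 5_000}), commitment)
assert verify_decision(log_entry, RULES, decide)
```

Fuller proposals for procedural regularity go further, for example using cryptographic techniques that allow compliance with the committed rules to be verified without the rules themselves being disclosed; the sketch conveys only the basic commit-and-audit pattern.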


For a machine learning model to function in an accountable manner, accountability must be designed into the system. System designers, and those who oversee design, need to begin with accountability and oversight in mind. The IEEE's Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems recommends:194

   Although it is acknowledged this cannot be done currently, A/IS should be designed so that they always are able, when asked, to show the registered process which led to their actions to their human user, identify to the extent possible sources of uncertainty, and state any assumptions relied upon.

The IEEE also proposes designing and programming AI systems "with transparency and accountability as primary objectives,"195 and to "proactively inform users" of their uncertainty.196

5.3 Empowering consumers to contest decisions

As discussed in section 7.2, data protection laws typically do not give individuals a right to contest the accuracy of decisions made with their data. However, consumers are increasingly provided the opportunity to contest decisions made on the basis solely of automated processing. Novel risks arise from automated decision-making in life-affecting areas of financial services such as credit, insurance and risky or costly financial products.197 The IEEE Global Initiative recommends that "Individuals should be provided a forum to make a case for extenuating circumstances that the AI system may not appreciate—in other words, a recourse to a human appeal."198 An