Ethics, Data and Insurance: 4 Developments Worth Watching

Author: Duncan Minty
E-mail: duncan@duncanminty.co.uk
About:

Duncan is an independent ethics consultant specialising in the insurance sector. He is also a Chartered Insurance Practitioner, having worked in the UK market for 18 years. His long-running blog has highlighted a wide range of ethical issues for insurers.


Issue:
4, 2017
Language: English

The digital transformation of insurance is underway. And while we’re seeing less hype and more reality now, the momentum is continuing unabated. These are exciting times, as traditional notions and established practices are questioned and often reinvented.

Yet there’s one underlying factor that will ultimately mark out each of these changes as a success or failure. It’s the trust of the insurance buying public. Many ‘insurtech’ start-ups talk about getting closer to the customer, but they often confuse proximity with intimacy. What they need to achieve is customers wanting to get closer to them. That’s what trust does.

So as we head into 2018, what are the developments in this digital transformation of insurance that could have the most effect on the long-term trust that the insurance buying public has in the market? I’m going to outline four developments that have ethical implications.

Price and Claims Optimisation

As data pours into insurers, it tells them not only about the factors influencing the risk to be covered, but also about the policyholder’s purchase habits. Information about how they shop, when they shop, what they shop for and how they pay can tell the insurer something about what the policyholder might be prepared to pay for their insurance. After all, if a policyholder is prepared to pay more, perhaps because they have a risk-averse character, then the argument is: why not charge them more? And why offer someone an introductory discount if they would still be happy to insure with you without it? This is price optimisation, and it is being widely adopted across insurance markets. It is controversial, however.
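
To make the mechanics concrete, here is a minimal sketch in Python of how a willingness-to-pay uplift might be layered on top of a risk-based premium. Everything here is an assumption for illustration: the signal names (shopped_around, years_with_insurer, paid_annually) and the uplift figures are invented, and no insurer’s actual model is this simple.

```python
# Minimal sketch of price optimisation: a willingness-to-pay uplift
# layered on top of a risk-based premium. All field names and uplift
# figures are illustrative, not any insurer's actual model.

def optimised_premium(risk_premium: float,
                      shopped_around: bool,
                      years_with_insurer: int,
                      paid_annually: bool) -> float:
    """Adjust the actuarially fair premium using behavioural signals."""
    uplift = 1.0
    if not shopped_around:          # no comparison quotes: less price-sensitive
        uplift += 0.05
    if years_with_insurer >= 3:     # loyal customers are less likely to switch
        uplift += 0.04
    if paid_annually:               # lump-sum payers may be less price-aware
        uplift += 0.02
    return round(risk_premium * uplift, 2)

# The same £400 of risk costs different customers different amounts:
print(optimised_premium(400.0, shopped_around=True,  years_with_insurer=0, paid_annually=False))  # 400.0
print(optimised_premium(400.0, shopped_around=False, years_with_insurer=5, paid_annually=True))   # 444.0
```

The ethical question is visible even in this toy version: the premium moves for reasons that have nothing to do with the risk being covered.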

Twenty US state insurance regulators have banned price optimisation, having deemed it to be unfair. And doubts have been raised in the UK insurance market, on the basis that it reinforces the focus on price rather than cover, unduly impacts vulnerable people and is seen as an alienating practice by consumers in general. Price optimisation has its supporters too, who see it as innovative and little different from an underwriter’s traditional subjectivity.

Those supporters of optimisation are now applying it to claims. The idea is that if an insurer has data (from a credit report for example) that indicates a claimant would be happy to receive a quick but reduced claims settlement, why not offer it to them?

Price optimisation is a controversial practice, and its claims variant even more so. Many see it as unethical for a risk-based business such as insurance. Insurers should therefore proceed with caution, and these three steps are worth considering:

  • establish the key triggers for moving prices and weigh them against regulatory and reputational impacts. For example, are complaints one of the optimisation triggers?

  • undertake fairness data mining to establish the outcomes being experienced by particular categories of customers (a minimal sketch follows this list). Watch for outcomes that can be influenced by ‘manufactured information’;

  • map out the lines of accountability for adopting price and claims optimisation, and ensure that those accountable have a full and rounded awareness of its implications. Prepare them to defend the practice in public.
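
As a minimal sketch of the fairness data mining step above, the following compares the outcomes experienced by different categories of customers. The column names, the example figures and the 10% tolerance are all assumptions made for illustration.

```python
# Minimal sketch of fairness data mining: compare outcomes across
# customer categories to spot groups experiencing worse results.
# Column names, data and the 10% tolerance are illustrative assumptions.
import pandas as pd

outcomes = pd.DataFrame({
    "segment":        ["vulnerable", "vulnerable", "standard", "standard", "standard"],
    "premium_uplift": [0.12, 0.15, 0.03, 0.02, 0.04],
    "complained":     [True, False, False, False, True],
})

by_segment = outcomes.groupby("segment").agg(
    avg_uplift=("premium_uplift", "mean"),
    complaint_rate=("complained", "mean"),
)
print(by_segment)

# Flag any segment whose average uplift exceeds the book-wide average
# by more than the chosen tolerance (here 10%, an assumption):
overall = outcomes["premium_uplift"].mean()
flagged = by_segment[by_segment["avg_uplift"] > overall * 1.10]
print("Segments needing review:\n", flagged)
```

The point is not the particular threshold, but that the comparison is run routinely and that flagged segments trigger a human review.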

Profiling and Bias

Another consequence of all that data flowing into insurers is the increasingly automated profiling of consumers for insurance purposes. Price optimisation is an ‘ability to pay’ category of profiling, but it is only one of many. One insurer is said to have over 1,000 rating factors for its motor portfolio alone, including whether you drink tap or bottled water. Risk-related ‘correlations’, it seems, are popping up in all sorts of surprising places. It is hardly surprising, therefore, to hear underwriters talk of no longer knowing how their premiums are calculated.

Yet this comes with ethical implications. As underwriting becomes increasingly automated through artificial intelligence tools such as machine learning, there is an increased risk of consumers experiencing discriminatory outcomes. Insurers will quite naturally protest that that is a road down which none of them would ever consider going. And some would simply dismiss the prospect, on the basis that their systems are not configured that way. Such reassurances would be premature, however, once two factors are taken into account.

Firstly, insurers want their machine learning algorithms to uncover new risk-related insights for them. They will do this through tools such as correlation clustering, which looks at the relationships between data objects rather than at the representation of the objects themselves. Out of such cluster analysis emerges a piece of ‘manufactured information’ that the algorithm then learns to associate with certain identities. No data field is created for that manufactured information: it is simply learnt by the algorithm.
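
A simplified sketch of how that can happen is below. K-means clustering stands in here for the correlation clustering described above (a different technique, used only to keep the example short), and the data is randomly generated.

```python
# Simplified sketch of 'manufactured information': a clustering step
# derives a label that exists in no source data field, and a pricing
# model then learns from that label. K-means stands in for correlation
# clustering, and all data is randomly generated for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
behaviour = rng.normal(size=(200, 4))      # e.g. shopping/payment signals
claim_cost = rng.gamma(2.0, 100.0, 200)    # e.g. historical claim costs

# Step 1: the algorithm manufactures a segment label from behaviour.
segment = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(behaviour)

# Step 2: the pricing model learns from that manufactured label.
# (One-hot encoding would be more realistic; a single numeric label
# keeps the sketch short.)
X = np.column_stack([behaviour, segment])
model = LinearRegression().fit(X, claim_cost)

# No one ever defined what 'segment 2' means, yet its coefficient
# now moves prices:
print(model.coef_[-1])
```

No analyst ever decides what ‘segment 2’ means: the label exists only inside the model, which is precisely why no data field for it ever appears for governance to review.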

This will happen automatically, deep within the underwriting model, without any humans being involved. Yet there is growing evidence that it can generate discriminatory outcomes for some consumers. As such outcomes have increased, so have calls for firms to be proactive rather than just reactive on the issue.

The second issue to watch out for relates to the historical data that machine learning relies on. Recent research has established widespread gender bias in datasets used to train algorithms. This means that artificial intelligence (AI) is going to pick up that bias and then propagate it into the automated decisions that insurers will be making in underwriting, claims and marketing.

What is important to remember here is that while an insurer can ensure that the inputs to its AI enabled systems are all in order, the very power of AI that insurers are interested in exploiting could produce outcomes that for some people are far from in order. Digital is bringing about a change in focus, not only in business, but in society as well. People are less concerned about whether a firm has done the right things, and more concerned about whether it has achieved the right outcomes. That’s why compliance is no longer enough.

So what should insurers do? These three steps would be a good start:

  • take a structured approach to the ethical issues raised by the digital tools insurers are adopting. Organise the issues into three categories: the data itself, the algorithms being used, and the practices for managing and overseeing them;

  • introduce algorithmic impact assessments into every data project, with race and gender bias as standing components;

  • adopt tools for testing historic data for evidence of discriminatory features, and use techniques such as fairness data mining to check the outcomes those algorithms generate (a minimal sketch of such a test follows this list).
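
As a minimal sketch of such a test, the following computes a disparate impact ratio on historic approval outcomes by gender. The column names and data are invented, and the 0.8 threshold (borrowed from the US ‘four-fifths rule’) is used only as a familiar yardstick, not as a regulatory standard for insurance.

```python
# Minimal sketch of testing historic data for discriminatory features:
# a disparate impact ratio on approval outcomes by gender. Column
# names, data and the 0.8 yardstick are illustrative assumptions.
import pandas as pd

history = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "M"],
    "approved": [0,    1,   0,   1,   1,   1,   0,   1],
})

rates = history.groupby("gender")["approved"].mean()
disparate_impact = rates.min() / rates.max()
print(rates)
print(f"Disparate impact ratio: {disparate_impact:.2f}")

if disparate_impact < 0.8:
    print("Warning: outcomes differ enough to warrant investigation.")
```

On the example data the ratio falls well below the yardstick, which is exactly the kind of signal an algorithmic impact assessment should surface before a model trained on that history goes live.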

Algorithmic Accountability

Insurers have always been accountable for how they run their businesses. This could be to internal audiences such as the independent board directors representing the interests of investors. And this could be to external audiences as well, such as the regulator. So how will the insurers engaged in this digital transformation maintain that accountability? This is a challenge on two levels.

The first level is a relatively practical one, involving accountability and responsibility. How will non-executive directors judge the veracity of big data projects? Will they have access to the right information, the capacity to both understand and question it, and the ability to properly weigh up the answers? This is feasible with hand-written algorithms, but how effective can it be when self-learning algorithms are being deployed?

And what happens during the transition from largely human decision making, to largely algorithmic decision making? How is responsibility assigned?

I’ve seen cases of hugely unfair policy changes being blamed on ‘the data’, as the human element of a decision shifts responsibility for a particular unfair outcome onto the algorithmic element. Yet can an artificial agent bear moral responsibility? That is highly contentious. And what of the alternative: will the human element accept responsibility for the unethical behaviour of their increasingly autonomous procedures?

Think of this situation. In an environment where underwriting decisions are derived from a mix of algorithmic and human involvement, should the firm’s code of ethics apply only to the human element and not to the algorithmic one? Do your firm’s ethical values cover both parts of ‘how things get done round here’? And if both, how is this being put into practice?

As algorithms with some degree of learning capacity are introduced, no one person has enough control over the machine’s learning to be able to assume full responsibility for it. The complex and fluid nature of countless decision-making rules inhibits oversight. And their modular design means that no one person or group will be able to fully grasp how one algorithmic element responds to another. Thus a significant gap emerges between the algorithm’s behaviour and the sense of accountability for the outcomes generated.

What this adds up to is the considerable challenge of maintaining accountability, not only during a period of transformation, but of a transformation that itself is built around systems that make it more difficult to do so. And it is around that last point that the second level of challenge for accountability exists.

The second level of challenge for accountability will come from the changes that data and analytics have on the culture of firms. This will happen in several ways, and I’ll outline some of them here:

  • An opening up of accountability gaps. The complexity of AI will create a gap between the decisions of individual people and the effects that those decisions produce. Even where the detriment is evident, individuals in the insurance firm could hesitate to see it as their responsibility.

  • A dilution of accountability. That complexity of AI can also result in individuals feeling that their input, their decision, is so marginal as to obviate any responsibility for the consequences that collectively result. The view would be that ‘I couldn’t have done anything wrong because my input has been so marginal’.

  • A silo’ing of accountability. That complexity also results in greater compartmentalisation of actions and decisions on AI projects. It’s all too easy for individuals, unable to see how an outcome could have resulted from their particular silo, to ignore it altogether.

  • A blinkering of accountability. Some people think of ethical issues only in relation to the behaviours of real, physical people. As a result, they fail to see, or even understand, the implications of the decisions they make in relation to ‘artificial’ intelligence.

What is needed, alongside the digital transformation of insurance, is an accompanying transformation of accountability mechanisms within insurance firms. The implications for market stability could be significant.

Regulating the Digital Transformation

One danger for insurers is that they often focus very much on their own transformation and fail to take account of other transformations happening around them. Academics have been tracking changes in how society thinks of data and have been working on a new framework for data ethics to reflect this. At the same time, regulators of financial services have been weighing up options for their own transformation as well.

An early illustration of this happened a few years ago, when concerns in the UK Parliament about the detriment caused by the short-term credit market resulted in some new thinking on the part of the regulator. The Financial Conduct Authority acquired pricing and product data from ‘payday loan’ firms that together represented about 80% of that market. These billion or so data points were then analysed and modelled to produce a set of pricing and product regulations that pushed the market, in quite dramatic fashion, onto a more ethical path.

What this exercise highlighted was that just as insurers can model the behaviours of consumers in order to price their products and settle their claims, so the regulator can pull in the insurance market’s data and model it according to its own needs and goals. This is called panoptic regulation, and it could represent the future path of insurance oversight.

Another example came from the United States, where in November 2015 the association of state insurance regulators recommended to all of its members that price optimisation be banned in a range of circumstances. The association also recommended that state regulators consider introducing rules that would give them digital access into insurers’ rating models. Rather than have US insurers submit rating reports on increasingly massive amounts of paper, the regulators would instead obtain the required rating information directly from within the insurers’ systems. It is a development worth watching.

Another development to keep an eye on relates to regulators’ use of artificial intelligence. Financial markets are now seeing investment managers use AI to analyse the voice and words of chief executives talking about their firm’s results and plans. If the voice analysis signals red, the investment manager will reconsider their holdings. This type of analysis is not new to insurers, who have for some time been analysing the voices of claimants to identify potential fraudsters.
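
For a sense of the plumbing involved, here is a highly simplified sketch of voice-based screening: acoustic features are extracted from a recording and scored by a classifier. The synthetic audio and the two-example ‘training set’ are placeholders so the code runs end to end; real systems are trained on large labelled datasets and use far richer acoustic and linguistic features.

```python
# Highly simplified sketch of voice-based screening: extract MFCC
# features from audio and score them with a classifier. The synthetic
# audio and untrained classifier are stand-ins for illustration only.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

sr = 16_000
# Two seconds of random noise as placeholder 'speech':
audio = np.random.default_rng(1).normal(size=sr * 2).astype(np.float32)

mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)   # shape: (13, frames)
features = mfcc.mean(axis=1).reshape(1, -1)              # one vector per call

# In practice the classifier is trained on labelled historic calls;
# here we fit on two fake examples purely so .predict_proba runs.
clf = LogisticRegression().fit(np.vstack([features, features + 1.0]), [0, 1])
print("Flag probability:", clf.predict_proba(features)[0, 1])
```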

Could a similar approach transform the ethical standing of insurers? If, as in the UK, insurers are required to demonstrate integrity at both the individual and firm levels, might their capacity to do so be assessed by a similar use of AI-driven voice analysis? Could all senior executives be required to talk about integrity and ethics for long enough for the software to assess whether they really mean it, really understand it, and are really doing something about it?

Digital technology is going to transform regulation and insurers need to recognise this. Just as they reinvent their products and services, they may well have to reinvigorate the strength of their ethical culture as well.

These are exciting times for insurance. Underwriting, claims and marketing are being transformed, but to truly succeed, insurance needs to transform the trust that the public has in it too. There’s a real opportunity here, but it needs to be addressed, not left to chance.