New Report Explores The Ethics Of Algorithmic Decision Making

Earlier this year the House of Lords Select Committee on Artificial Intelligence released a report exploring the ethical development of artificial intelligence. The Committee set out five principles around which it urges the development of AI to revolve:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

This work built upon a report published at the end of last year by the UK government into how the country could support the development of AI technologies. That report touched upon a number of areas, including closing the skills gap that exists in AI, the efficient transfer of AI research from lab to market, and the various steps required to encourage the uptake of AI.

The authors of that report, the Science and Technology Committee, have returned with a new paper exploring the ethics of algorithmic decision making.

Algorithmic ethics

The report, whose launch coincided with the roll-out of GDPR, chimes with the earlier House of Lords report. Both highlight the tremendous potential of AI, but also the need to ensure its development is pursued in an ethical way.

The authors call for a ‘Centre for Data Ethics & Innovation’ to be established by the government to ensure algorithms are transparent and free from bias, whilst allowing individuals to challenge decisions that affect them and seek redress where appropriate.

The report also proposes greater government oversight of instances whereby companies make use of public datasets in their algorithms, and indeed how those datasets could be monetized.

“Algorithms present the Government with a huge opportunity to improve public services and outcomes, particularly in the NHS. They also provide commercial opportunities to the private sector in industries such as insurance, banking and advertising. But they can also make flawed decisions which may disproportionately affect some people and groups,” the authors say.  “The Centre for Data Ethics & Innovation should review the operation of the GDPR, but more immediately learn lessons from the Cambridge Analytica case about the way algorithms are governed when used commercially.”

The Committee also recommends that the government should:

  • Continue to make public sector datasets available for both ‘big data’ developers and algorithm developers through new ‘data trusts’, and make better use of its databases to improve public service delivery.
  • Produce, maintain and publish a list of where algorithms are being used within Central Government, or are planned to be used, to aid transparency, and identify a ministerial champion with oversight of public sector algorithm use.
  • Commission a review from the Crown Commercial Service which sets out a model for private/public sector involvement in developing algorithms.
