DeepMind Ethics and Society: hallmark of a change in attitude

The unit, called DeepMind Ethics and Society, is not the AI Ethics Board that DeepMind was promised when it agreed to be acquired by Google in 2014. That board, which was convened by January 2016, was supposed to oversee all of the company’s AI research, but nothing has been heard of it in the three-and-a-half years since the acquisition. It remains a mystery who is on it, what they discuss, or even whether it has officially met.

DeepMind Ethics and Society is also not the same as DeepMind Health’s Independent Review Panel, a third body set up by the company to provide ethical oversight – in this case, of its specific operations in healthcare.

Nor is the new research unit the Partnership on Artificial Intelligence to Benefit People and Society, an external group founded in part by DeepMind and chaired by the company’s co-founder Mustafa Suleyman. That partnership, which was also co-founded by Facebook, Amazon, IBM and Microsoft, exists to “conduct research, recommend best practices, and publish research under an open licence in areas such as ethics, fairness and inclusivity”.

Nonetheless, its creation is the hallmark of a change in attitude from DeepMind over the past year, which has seen the company reassess its previously closed and secretive outlook. It is still battling a wave of bad publicity that started when it partnered with the Royal Free in secret, bringing the Streams app into active use in the London hospital without being open with the public about what data was being shared and how.

The research unit also reflects an urgency on the part of many AI practitioners to get ahead of growing public concern about how the new technology will shape the world around us.

Source: The Guardian




Why we launched DeepMind Ethics & Society

We believe AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards.

Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work.

As history attests, technological innovation in itself is no guarantee of broader social progress. The development of AI raises important and complex questions. Its impact on society—and on all our lives—is not something that should be left to chance. Beneficial outcomes and protections against harms must be actively fought for and built in from the beginning. But in a field as complex as AI, this is easier said than done.

As scientists developing AI technologies, we have a responsibility to conduct and support open research and investigation into the wider implications of our work. At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes. 

So today we’re launching a new research unit, DeepMind Ethics & Society, to complement our work in AI science and application. This new unit will help us explore and understand the real-world impacts of AI. It has a dual aim: to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all. 

If AI technologies are to serve society, they must be shaped by society’s priorities and concerns.

Source: DeepMind



Can machines learn to be moral?  #AI

AI works, in part, because complex algorithms adeptly identify, remember, and relate data … Moreover, some machines can do what had been the exclusive domain of humans and other intelligent life: Learn on their own.

As a researcher schooled in scientific method and an ethicist immersed in moral decision-making, I know it’s challenging for humans to navigate the two disparate arenas concurrently.

It’s even harder to envision how computer algorithms can enable machines to act morally.

Moral choice, however, doesn’t ask whether an action will produce an effective outcome; it asks whether it is a good decision. In other words, regardless of efficacy, is it the right thing to do?

Such analysis does not reflect an objective, data-driven decision but a subjective, judgment-based one.

Individuals often make moral decisions on the basis of principles like decency, fairness, honesty, and respect. To some extent, people learn those principles through formal study and reflection; however, the primary teacher is life experience, which includes personal practice and observation of others.

Placing manipulative ads before a marginally qualified and emotionally vulnerable target market may be very effective for the mortgage company, but many people would question the ethics of the promotion.

Humans can make that moral judgment, but how does a data-driven computer draw the same conclusion? Therein lies what should be a chief concern about AI.

Can computers be manufactured with a sense of decency? Can coding incorporate fairness? Can algorithms learn respect?
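
One narrow but concrete answer is that fairness can at least be written into code as an explicit, measurable constraint that a system is checked against. The sketch below is a hypothetical illustration, not any company’s actual method: the data, function name, and threshold are invented for this example. It computes a demographic parity gap, the difference in positive-decision rates between groups, and flags a model that skews too far. This is not moral judgment; it only makes one chosen principle measurable.

```python
# Minimal, hypothetical sketch: expressing one fairness principle (demographic parity)
# as code. All names, data, and the 0.2 threshold below are illustrative assumptions.

def demographic_parity_gap(decisions, groups):
    """Return the largest difference in positive-decision rates between groups.

    decisions: list of 0/1 model outputs (e.g. 1 = ad shown / loan approved)
    groups:    list of group labels, one per decision
    """
    counts = {}
    for decision, group in zip(decisions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + decision)
    rates = {g: pos / total for g, (total, pos) in counts.items()}
    return max(rates.values()) - min(rates.values())


# Illustrative use: flag a model whose positive decisions skew toward one group.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
if gap > 0.2:  # the threshold itself is a value judgment, not a technical fact
    print(f"Warning: demographic parity gap of {gap:.2f} exceeds policy threshold")
```

Even in this tiny sketch, the hard questions are the human ones: which groups to compare, which fairness definition to adopt, and what gap counts as acceptable.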

It seems incredible for machines to emulate subjective, moral judgment, but if that potential exists, at least four critical issues must be resolved:

  1. Whose moral standards should be used?
  2. Can machines converse about moral issues?
  3. Can algorithms take context into account?
  4. Who should be accountable?

Source: Business Insider (David Hagenbuch)


