AI works, in part, because complex algorithms adeptly identify, remember, and relate data … Moreover, some machines can do what had been the exclusive domain of humans and other intelligent life: learn on their own.
As a researcher schooled in the scientific method and an ethicist immersed in moral decision-making, I know how challenging it is for humans to navigate these two disparate arenas at once.
It’s even harder to envision how computer algorithms can enable machines to act morally.
Moral choice, however, doesn’t ask whether an action will produce an effective outcome; it asks whether the action is a good one. In other words, regardless of efficacy, is it the right thing to do?
Such analysis does not reflect an objective, data-driven decision but a subjective, judgment-based one.
Individuals often make moral decisions on the basis of principles like decency, fairness, honesty, and respect. To some extent, people learn those principles through formal study and reflection; however, the primary teacher is life experience, which includes personal practice and observation of others.
Placing manipulative ads before a marginally qualified and emotionally vulnerable target market may be very effective for the mortgage company, but many people would question the promotion’s ethics.
Humans can make that moral judgment, but how does a data-driven computer draw the same conclusion? Therein lies what should be a chief concern about AI.
Can computers be manufactured with a sense of decency?
Can coding incorporate fairness? Can algorithms learn respect?
It seems far-fetched that machines could emulate subjective moral judgment, but if that potential exists, at least four critical issues must be resolved:
- Whose moral standards should be used?
- Can machines converse about moral issues?
- Can algorithms take context into account?
- Who should be accountable?
Source: David Hagenbuch, Business Insider