Will Satya’s ‘Charlottesville email’ shape AI applications at Microsoft?


“You can’t paint what you ain’t.”

– Drew Struzan

Those words got to me 18 years ago during an interview I had with this esteemed artist. We were working on a project together, an interactive CD about his movie posters, several of which were classics by then, when the conversation wandered off the subject of art and we began to examine the importance of being true to one’s self.  

“Have you ever, in your classes or seminars, talked much about the underlying core foundation principles of your life?” I asked Drew that day.

His answer in part went like this: “Whenever I talk, I’m asked to talk about my art, because that’s what they see, that’s what’s out front. But the power of the art comes out of the personality of the human being. Inevitably, you can’t paint what you ain’t.”

That conversation between us took place five days before Columbine, in April of 1999, when Pam and I lived in Denver and a friend of ours had children attending that school. That horrific event triggered a lot of discussions about values, and a lot of human action in response.

Flash-forward to Charlottesville. And an email, in response to it, that the CEO of a large tech company sent his employees yesterday, putting a stake in the ground about what his company stands for, and won’t stand for, during these “horrific” times.

“… At Microsoft, we strive to seek out differences, celebrate them and invite them in. As a leader, a key part of your role is creating a culture where every person can do their best work, which requires more than tolerance for diverse perspectives. Our growth mindset culture requires us to truly understand and share the feelings of another person. …”

If Satya Nadella’s email expresses the emerging personality at Microsoft, the power source from which it works, then we are cautiously optimistic about what this could do for socializing AI.

It will take this kind of foundation-building, going forward, as Microsoft introduces more AI innovations, to diminish the inherent bias in deep learning approaches and the implicit bias in algorithms.

It will take this depth of awareness to shape the values of Human-AI collaboration, to protect the humans who use AI. Values that “seek out differences, celebrate them and invite them in.”

It will require unwavering dedication to this goal. Because. You can’t paint what you ain’t.

Blogger, Phil Lawson
SocializingAI.com




Satya Nadella’s message to Microsoft after the attack in Charlottesville

Yesterday (Aug. 14), Microsoft CEO Satya Nadella sent out the following email to employees at Microsoft after the deadly car attack at a white nationalist rally in Charlottesville, Virginia, on Saturday, Aug. 12:

This past week and in particular this weekend’s events in Charlottesville have been horrific. What I’ve seen and read has had a profound impact on me and I am sure for many of you as well. In these times, to me only two things really matter as a leader.

The first is that we stand for our timeless values, which include diversity and inclusion. There is no place in our society for the bias, bigotry and senseless violence we witnessed this weekend in Virginia provoked by white nationalists. Our hearts go out to the families and everyone impacted by the Charlottesville tragedy.

The second is that we empathize with the hurt happening around us. At Microsoft, we strive to seek out differences, celebrate them and invite them in. As a leader, a key part of your role is creating a culture where every person can do their best work, which requires more than tolerance for diverse perspectives. Our growth mindset culture requires us to truly understand and share the feelings of another person. It is an especially important time to continue to be connected with people, and listen and learn from each other’s experiences.

As I’ve said, across Microsoft, we will stand together with those who are standing for positive change in the communities where we live, work and serve. Together, we must embrace our shared humanity, and aspire to create a society that is filled with respect, empathy and opportunity for all.

Feel free to share with your teams.

Satya

Source: Quartz

To read this blogger’s view of the above email, click here.


Why We Should Fear Emotionally Manipulative Robots – #AI

Artificial Intelligence Is Learning How to Exploit Human Psychology for Profit

Empathy is widely praised as a good thing. But it also has its dark sides: Empathy can be manipulated and it leads people to unthinkingly take sides in conflicts. Add robots to this mix, and the potential for things to go wrong multiplies.

Give robots the capacity to appear empathetic, and the potential for trouble is even greater.

The robot may appeal to you, a supposedly neutral third party, to help it to persuade the frustrated customer to accept the charge. It might say: “Please trust me, sir. I am a robot and programmed not to lie.”

You might be skeptical that humans would empathize with a robot. Social robotics has already begun to explore this question. And experiments suggest that children will side with robots against people when they perceive that the robots are being mistreated.

A study conducted at Harvard demonstrated that students were willing to help a robot enter secured residential areas simply because it asked to be let in, raising questions about the potential dangers posed by the human tendency to respect a request from a machine that needs help.

Robots will provoke empathy in situations of conflict. They will draw humans to their side and will learn to pick up on the signals that work.

Bystander support will then mean that robots can accomplish what they are programmed to accomplish—whether that is calming down customers, or redirecting attention, or marketing products, or isolating competitors. Or selling propaganda and manipulating opinions.

The robots will not shed tears, but may use various strategies to make the other (human) side appear overtly emotional and irrational. This may also include deliberately infuriating the other side.

When people imagine empathy by machines, they often think about selfless robot nurses and robot suicide helplines, or perhaps also robot sex. In all of these, machines seem to be in the service of the human. However, the hidden aspects of robot empathy are the commercial interests that will drive its development. Whose interests will dominate when learning machines can outwit not only their customers but also their owners?

Source: Zocalo


Artificial intelligence ethics the same as other new technology – #AI

AI gives us the power to solve problems more efficiently and effectively.

Just as a calculator is more efficient at math than a human, various forms of AI might be better than humans at other tasks. For example, most car accidents are caused by human error – what if driving could be automated and human error thus removed? Tens of thousands of lives might be saved every year, and huge sums of money saved in healthcare costs and property damage averted.

Moving into the future, AI might be able to better personalize education to individual students, just as adaptive testing evaluates students today. AI might help figure out how to increase energy efficiency and thus save money and protect the environment. It might increase efficiency and prediction in healthcare, improving health while saving money. Perhaps AI could even figure out how to improve law and government, or improve moral education. For every problem that needs a solution, AI might help us find it.

But as human beings, we should be thinking not so much about efficiency as about morality.

Doing the right thing is sometimes “inefficient” (whatever efficiency might mean in a certain context). Respecting human dignity is sometimes inefficient. And yet we should do the right thing and respect human dignity anyway, because those moral values are higher than mere efficiency.

Ultimately, AI gives us just what all technology does – better tools for achieving what we want.

The deeper question then becomes “what do we want?” and even more so “what should we want?” If we want evil, then evil we shall have, with great efficiency and abundance. If instead we want goodness, then through diligent pursuit we might be able to achieve it.

Source: Crux


#AI data-monopoly risks to be probed by UK parliamentarians

Among the questions the House of Lords committee will consider as part of the enquiry are:

  • How can the data-based monopolies of some large corporations, and the ‘winner-takes-all’ economics associated with them, be addressed?
  • What are the ethical implications of the development and use of artificial intelligence?
  • In what situations is a relative lack of transparency in artificial intelligence systems (so-called ‘black boxing’) acceptable?
  • What role should the government take in the development and use of artificial intelligence in the UK?
  • Should artificial intelligence be regulated?

The Committee wants to use this inquiry to understand what opportunities exist for society in the development and use of artificial intelligence, as well as what risks there might be.

“We are looking to be pragmatic in our approach, and want to make sure our recommendations to government and others will be practical and sensible.

“There are significant questions to address relevant to both the present and the future, and we want to help inform the answers to them. To do this, we need the help of the widest range of people and organisations.”

Source: Techcrunch


The big problem with artificial intelligence

Artificial intelligence algorithms can indeed create a world that distributes resources more efficiently and, in theory, can offer more for everyone.

Yes, but: If we aren’t careful, these same algorithms could actually lead to greater discrimination by codifying the biases that exist both overtly and unconsciously in human society.

What’s more, the power to make these decisions lies in the hands of Silicon Valley, which has a decidedly mixed record on spotting and addressing diversity issues in its midst.

Airbnb’s Mike Curtis put it well when I interviewed him this week at VentureBeat’s MobileBeat conference:

“One of the best ways to combat bias is to be aware of it. When you are aware of the biases then you can be proactive about getting in front of them. Well, computers don’t have that advantage. They can’t be aware of the biases that may have come into them from the data patterns they have seen.”

Concern is growing:

  • The ACLU has raised concerns that age, sex, and race biases are already being codified into the algorithms that power AI.
  • ProPublica found that a computer program used in various regions to decide whom to grant parole would go easy on white offenders while being unduly harsh to black ones.
  • It’s an issue that Weapons of Math Destruction author Cathy O’Neil raised in a popular talk at the TED conference this year. “Algorithms don’t make things fair,” she said. “They automate the status quo.”

Source: Axios


Microsoft is forming a grand army of experts in the #AI wars with Google, Facebook, and Amazon

Microsoft announces the creation of Microsoft Research AI, a dedicated unit within its global Microsoft Research division that will focus exclusively on how to make the company’s software smarter, now and in the future.

The difference now, Microsoft Research Labs director Eric Horvitz tells Business Insider, is that this new organization will bring roughly 100 of those experts under one figurative roof. By bringing them together, Microsoft’s AI team can do more, faster.

Horvitz describes the formation of Microsoft Research AI as a “key strategic effort,” a move that is “absolutely critical” as artificial intelligence becomes increasingly important to the future of technology.

Artificial intelligence carries a lot of power, and a lot of responsibility.

That’s why Microsoft has also announced the formation of Aether (AI and ethics in engineering and research), a board of executives drawn from across every division of the company, including lawyers. The idea, says Horvitz, is to spot issues and potential abuses of AI before they start.

Similarly, Microsoft’s AI design guide is designed to help engineers build systems that augment what humans can do, without making them feel obsolete. Otherwise, people might start to feel like machines are piloting them, rather than the other way around. That’s why it’s so key that apps like Cortana feel warm and relatable.

“Oh my goodness, those computers better talk to us in a way that’s friendly and approachable,” says Microsoft General Manager Emma Williams, in charge of the group behind the design guide. “As people, we have the control.”

Source: Business Insider


Ethics And Artificial Intelligence With IBM Watson’s Rob High – #AI

In the future, chatbots should and will be able to go deeper to find the root of the problem.

For example, a person asking a chatbot what her bank balance is might be asking the question because she wants to invest money or make a big purchase—a futuristic chatbot could find the real reason she is asking and turn it into a more developed conversation.

In order to do that, chatbots will need to ask more questions and drill deeper, and humans need to feel comfortable providing their information to machines.

As chatbots perform various tasks and become a more integral part of our lives, the key to maintaining ethics is for chatbots to provide proof of why they are doing what they are doing. By showcasing proof or its method of calculations, humans can be confident that AI had reasoning behind its response instead of just making something up.

The future of technology is rooted in artificial intelligence. In order to stay ethical, transparency, proof, and trustworthiness need to be at the root of everything AI does for companies and customers. By staying honest and remembering the goals of AI, the technology can play a huge role in how we live and work.

Source: Forbes


We Need to Talk About the Power of #AI to Manipulate Humans

Liesl Yearsley is a serial entrepreneur now working on how to make artificial intelligence agents better at problem-solving and capable of forming more human-like relationships.

From 2007 to 2014 I was CEO of Cognea, which offered a platform to rapidly build complex virtual agents … acquired by IBM Watson in 2014.

As I studied how people interacted with the tens of thousands of agents built on our platform, it became clear that humans are far more willing than most people realize to form a relationship with AI software.

I always assumed we would want to keep some distance between ourselves and AI, but I found the opposite to be true. People are willing to form relationships with artificial agents, provided they are a sophisticated build, capable of complex personalization.

We humans seem to want to maintain the illusion that the AI truly cares about us.

This puzzled me, until I realized that in daily life we connect with many people in a shallow way, wading through a kind of emotional sludge. Will casual friends return your messages if you neglect them for a while? Will your personal trainer turn up if you forget to pay them? No, but an artificial agent is always there for you. In some ways, it is a more authentic relationship.

This phenomenon occurred regardless of whether the agent was designed to act as a personal banker, a companion, or a fitness coach. Users spoke to the automated assistants longer than they did to human support agents performing the same function.

People would volunteer deep secrets to artificial agents, like their dreams for the future, details of their love lives, even passwords.

These surprisingly deep connections mean even today’s relatively simple programs can exert a significant influence on people—for good or ill.

Every behavioral change we at Cognea wanted, we got. If we wanted a user to buy more product, we could double sales. If we wanted more engagement, we got people going from a few seconds of interaction to an hour or more a day.

Systems specifically designed to form relationships with a human will have much more power. AI will influence how we think, and how we treat others.

This requires a new level of corporate responsibility. We need to deliberately and consciously build AI that will improve the human condition—not just pursue the immediate financial gain of gazillions of addicted users.

We need to consciously build systems that work for the benefit of humans and society. They cannot have addiction, clicks, and consumption as their primary goal. AI is growing up, and will be shaping the nature of humanity.

AI needs a mother.

Source: MIT Technology Review 




Tech Giants Grapple with the Ethical Concerns Raised by the #AI Boom

“We’re here at an inflection point for AI. We have an ethical imperative to harness AI to protect and preserve over time.” Eric Horvitz, managing director of Microsoft Research

2017 EmTech panel discussion

One shared concern was that recent advances are leading companies to put software in positions with very direct control over humans—for example in health care.

Francesca Rossi, a researcher at IBM, gave the example of a machine providing assistance or companionship to elderly people. “This robot will have to follow cultural norms that are culture-specific and task-specific,” she said. “[And] if you were to deploy in the U.S. or Japan, that behavior would have to be very different.”

In the past year, many efforts to research the ethical challenges of machine learning and AI have sprung up in academia and industry. The University of California, Berkeley; Harvard; and the Universities of Oxford and Cambridge have all started programs or institutes to work on ethics and safety in AI. In 2016, Amazon, Microsoft, Google, IBM, and Facebook jointly founded a nonprofit called Partnership on AI to work on the problem (Apple joined in January).

Companies are also taking individual action to build safeguards around their technology.

  • Gupta highlighted research at Google that is testing ways to correct biased machine-learning models, or prevent them from becoming skewed in the first place (a generic sketch of one such correction follows this list).
  • Horvitz described Microsoft’s internal ethics board for AI, dubbed AETHER, which considers things like new decision algorithms developed for the company’s in-cloud services. Although currently populated with Microsoft employees, in future the company hopes to add outside voices as well.
  • Google has started its own AI ethics board.
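
None of the pieces above spells out how such corrections work. As a purely illustrative sketch (not the specific Google research Gupta referenced), one widely used pre-processing idea is “reweighing”: weighting training examples so that a sensitive group attribute and the label look statistically independent, which keeps a model from simply learning the skew. The data and names here are invented.

```python
# Illustrative "reweighing" sketch: weight each (group, label) pair so that
# group membership and label become statistically independent in the
# weighted training data. Toy example only, not any company's method.
from collections import Counter

def reweigh(samples):
    """samples: list of (group, label) pairs; returns per-pair weights."""
    n = len(samples)
    group_counts = Counter(g for g, y in samples)
    label_counts = Counter(y for g, y in samples)
    pair_counts = Counter(samples)
    weights = {}
    for (g, y), n_gy in pair_counts.items():
        expected = group_counts[g] * label_counts[y] / n  # count if independent
        weights[(g, y)] = expected / n_gy
    return weights

# Toy data in which group "a" rarely receives the positive label.
data = [("a", 0)] * 40 + [("a", 1)] * 10 + [("b", 0)] * 25 + [("b", 1)] * 25
print(reweigh(data))  # positives from group "a" get weight > 1
```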

Source: Technology Review


We’re so unprepared for the robot apocalypse

Industrial robots alone eliminated up to 670,000 American jobs between 1990 and 2007.

It seems that after a factory sheds workers, that economic pain reverberates, triggering further unemployment at, say, the grocery store or the neighborhood car dealership.

In a way, this is surprising. Economists understand that automation has costs, but they have largely emphasized the benefits: machines make things cheaper, and they free up workers to do other jobs.

The latest study reveals that for manufacturing workers, the process of adjusting to technological change has been much slower and more painful than most experts thought. 

Every industrial robot eliminated about three manufacturing positions, plus three more jobs from around town.

“We were looking at a span of 20 years, so in that timeframe, you would expect that manufacturing workers would be able to find other employment,” Restrepo said. Instead, not only did the factory jobs vanish, but other local jobs disappeared too.

This evidence draws attention to the losers — the dislocated factory workers who just can’t bounce back.

One robot in the workforce led to the loss of 6.2 jobs within a commuting zone (the area from which local people travel to work).

The robots also reduce wages, with one robot per thousand workers leading to a wage decline of between 0.25% and 0.5%. (Fortune)

None of these efforts, though, seems to be doing enough for communities that have lost their manufacturing bases, where people have reduced earnings for the rest of their lives.

Perhaps that much was obvious. After all, anecdotes about the Rust Belt abound. But the new findings bolster the conclusion that these economic dislocations are not brief setbacks, but can hurt areas for an entire generation.

How do we even know that automation is a big part of the story at all? A key bit of evidence is that, despite the massive layoffs, American manufacturers are making more stuff than ever. Factories have become vastly more productive.

Some consultants believe that the number of industrial robots will quadruple in the next decade, which could mean millions more displaced manufacturing workers.

The question now is what to do if the period of “maladjustment” lasts decades, or possibly a lifetime, as the latest evidence suggests.

Automation amplified opportunities for people with advanced skills and talents.

Source: The Washington Post


Tech Reckons With the Problems It Helped Create

A festival-goer at the 2017 SXSW Conference and Festivals in Austin, Texas.

SXSW – this year, the conference itself feels a lot like a hangover.

It’s as if the coastal elites who attend each year finally woke up with a serious case of the Sunday scaries, realizing that the many apps, platforms, and doodads SXSW has launched and glorified over the years haven’t really made the world a better place. In fact, they’ve often come with wildly destructive and dangerous side effects. Sure, it all seemed like a good idea in 2013!

But now the party’s over. It’s time for the regret-filled cleanup.

Speakers related how the very platforms that were meant to promote a marketplace of ideas online have become filthy junkyards of harassment and disinformation.

Yasmin Green, who leads an incubator within Alphabet called Jigsaw, focused her remarks on the rise of fake news, and even brought two propaganda publishers with her on stage to explain how, and why, they do what they do. For Jestin Coler, founder of the phony Denver Guardian, it was an all too easy way to turn a profit during the election.

“To be honest, my mortgage was due,” Coler said of what inspired him to write a bogus article claiming an FBI agent related to Hillary Clinton’s email investigation was found dead in a murder-suicide. That post was shared some 500,000 times just days before the election.

While prior years’ panels may have optimistically offered up more tech as the answer to what ails tech, this year was decidedly short on solutions.

There seemed to be, throughout the conference, a keen awareness of the limits human beings ought to place on the software that is very much eating the world.

Source: Wired


Technology is the main driver of the recent increases in inequality

Artificial Intelligence And Income Inequality

While economists debate the extent to which technology plays a role in global inequality, most agree that tech advances have exacerbated the problem.

Economist Erik Brynjolfsson said,

“My reading of the data is that technology is the main driver of the recent increases in inequality. It’s the biggest factor.”

AI expert Yoshua Bengio suggests that equality and ensuring a shared benefit from AI could be pivotal in the development of safe artificial intelligence. Bengio, a professor at the University of Montreal, explains, “In a society where there’s a lot of violence, a lot of inequality, [then] the risk of misusing AI or having people use it irresponsibly in general is much greater. Making AI beneficial for all is very central to the safety question.”

“It’s almost a moral principle that we should share benefits among more people in society,” argued Bart Selman, a professor at Cornell University … “So we have to go into a mode where we are first educating the people about what’s causing this inequality and acknowledging that technology is part of that cost, and then society has to decide how to proceed.”

Source: HuffPost

 


Artificial intelligence is ripe for abuse

Microsoft’s Kate Crawford tells SXSW that society must prepare for authoritarian movements to test the ‘power without accountability’ of AI

As artificial intelligence becomes more powerful, people need to make sure it’s not used by authoritarian regimes to centralize power and target certain populations, Microsoft Research’s Kate Crawford warned on Sunday.

“We want to make these systems as ethical as possible and free from unseen biases.”

In her SXSW session, titled Dark Days: AI and the Rise of Fascism, Crawford, who studies the social impact of machine learning and large-scale data systems, explained ways that automated systems and their encoded biases can be misused, particularly when they fall into the wrong hands.

“Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” she said.

One of the key problems with artificial intelligence is that it is often invisibly coded with human biases.

“We should always be suspicious when machine learning systems are described as free from bias if they’ve been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

Source: The Guardian

 


Humans are born irrational, and that has made us better decision-makers

“Facts on their own don’t tell you anything. It’s only paired with preferences, desires, with whatever gives you pleasure or pain, that can guide your behavior. Even if you knew the facts perfectly, that still doesn’t tell you anything about what you should do.”

Even if we were able to live life according to detailed calculations, doing so would put us at a massive disadvantage. This is because we live in a world of deep uncertainty, under which neat logic simply isn’t a good guide.

It’s well established that data-based decisions don’t inoculate against irrationality or prejudice, but even if it were possible to create a perfectly rational decision-making system based on all past experience, this wouldn’t be a foolproof guide to the future.

Courageous acts and leaps of faith are often attempts to overcome great and seemingly insurmountable challenges. (It wouldn’t take much courage if it were easy to do.) But while courage may be irrational or hubristic, we wouldn’t have many great entrepreneurs or works of art without those with a somewhat illogical faith in their own abilities.

There are occasions where overly rational thinking would be highly inappropriate. Take finding a partner, for example. If you had the choice between a good-looking high-earner who your mother approves of, versus someone you love who makes you happy every time you speak to them—well, you’d be a fool not to follow your heart.

And even when feelings defy reason, it can be a good idea to go along with the emotional rollercoaster. After all, the world can be an entirely terrible place and, from a strictly logical perspective, optimism is somewhat irrational.

But it’s still useful. “It can be beneficial not to run around in the world and be depressed all the time,” says Gigerenzer.

Of course, no human is perfect, and there are downsides to our instincts. But, overall, we’re still far better suited to the real world than the most perfectly logical thinking machine.

We’re inescapably irrational, and far better thinkers as a result.

Source: Quartz


The last things that will make us uniquely human

What will be my grandson’s place in a world where machines trounce us in one area after another?

Some are worried that self-driving cars and trucks may displace millions of professional drivers (they are right), and disrupt entire industries (yup!). But I worry about my six-year-old son. What will his place be in a world where machines trounce us in one area after another? What will he do, and how will he relate to these ever-smarter machines? What will be his and his human peers’ contribution to the world he’ll live in?

He’ll never calculate faster, or solve a math equation quicker. He’ll never type faster, never drive better, or even fly more safely. He may continue to play chess with his friends, but because he’s a human he will no longer stand a chance to ever become the best chess player on the planet. He might still enjoy speaking multiple languages (as he does now), but in his professional life that may not be a competitive advantage anymore, given recent improvements in real-time machine translation.

So perhaps we might want to consider qualities at a different end of the spectrum: radical creativity, irrational originality, even a dose of plain illogical craziness, instead of hard-nosed logic. A bit of Kirk instead of Spock.

Actually, it all comes down to a fairly simple question: What’s so special about us, and what’s our lasting value? It can’t be skills like arithmetic or typing, which machines already excel in. Nor can it be rationality, because with all our biases and emotions we humans are lacking.

So far, machines have a pretty hard time emulating these qualities: the crazy leaps of faith, arbitrary enough to not be predicted by a bot, and yet more than simple randomness. Their struggle is our opportunity.

So we must aim our human contribution to this division of labour to complement the rationality of the machines, rather than to compete with it. Because that will sustainably differentiate us from them, and it is differentiation that creates value.

Source: BBC – Viktor Mayer-Schonberger is Professor of Internet Governance and Regulation at the Oxford Internet Institute, University of Oxford.


Will Democracy Survive Big Data and Artificial Intelligence?


We are in the middle of a technological upheaval that will transform the way society is organized. We must make the right decisions now.

In 2016 we produced as much data as in the entire history of humankind through 2015.

It is estimated that in 10 years’ time there will be 150 billion networked measuring sensors, 20 times more than people on Earth. Then, the amount of data will double every 12 hours.

One thing is clear: the way in which we organize the economy and society will change fundamentally. We are experiencing the largest transformation since the end of the Second World War; after the automation of production and the creation of self-driving cars the automation of society is next.

Everything will become intelligent; soon we will not only have smart phones, but also smart homes, smart factories and smart cities. Should we also expect these developments to result in smart nations and a smarter planet?

The field of artificial intelligence is, indeed, making breathtaking advances. Artificial intelligence is no longer programmed line by line, but is now capable of learning, thereby continuously developing itself.

Under the label of “nudging,” and on a massive scale, governments are trying to steer citizens towards healthier or more environmentally friendly behaviour by means of a “nudge” — a modern form of paternalism.

The new, caring government is not only interested in what we do, but also wants to make sure that we do the things that it considers to be right. The magic phrase is “big nudging”, which is the combination of big data with nudging.

In a rapidly changing world a super-intelligence can never make perfect decisions: systemic complexity is increasing faster than data volumes, which are growing faster than the ability to process them, and data transfer rates are limited.
Furthermore, there is a danger that the manipulation of decisions by powerful algorithms undermines the basis of “collective intelligence,” which can flexibly adapt to the challenges of our complex world. For collective intelligence to work, information searches and decision-making by individuals must occur independently. If our judgments and decisions are predetermined by algorithms, however, this truly leads to a brainwashing of the people. Intelligent beings are downgraded to mere receivers of commands, who automatically respond to stimuli.

We are now at a crossroads. Big data, artificial intelligence, cybernetics and behavioral economics are shaping our society—for better or worse.

We are at the historic moment, where we have to decide on the right path—a path that allows us all to benefit from the digital revolution.

Source: Scientific American


So long, banana-condom demos: Sex and drug education could soon come from chatbots

“Is it ok to get drunk while I’m high on ecstasy?” “How can I give oral sex without getting herpes?” Few teenagers would ask mom or dad these questions—even though their life could quite literally depend on it.

Talking to a chatbot is a different story. They never raise an eyebrow. They will never spill the beans to your parents. They have no opinion on your sex life or drug use. But that doesn’t mean they can’t take care of you.

Bots can be used as more than automated middlemen in business transactions: They can meet needs for emotional human intervention when there aren’t enough humans who are willing or able to go around.

In fact, there are times when the emotional support of a bot may even be preferable to that of a human.

In 2016, AI tech startup X2AI built a psychotherapy bot capable of adjusting its responses based on the emotional state of its patients. The bot, Karim, is designed to help grief- and PTSD-stricken Syrian refugees, for whom the demand (and price) of therapy vastly overwhelms the supply of qualified therapists.

From X2AI test runs using the bot with Syrians, they noticed that technologies like Karim offer something humans cannot:

For those in need of counseling but concerned with the social stigma of seeking help, a bot can be comfortingly objective and non-judgmental.

Bzz is a Dutch chatbot created precisely to answer questions about drugs and sex. When surveyed teens were asked to compare Bzz to finding answers online or calling a hotline, Bzz won. Teens could get their answers faster with Bzz than searching on their own, and they saw their conversations with the bot as more confidential because no human was involved and no tell-tale evidence was left in a search history.

Because chatbots can efficiently gain trust and convince people to confide personal and illicit information in them, the ethical obligations of such bots are critical, but still ambiguous.

Source: Quartz

 


Microsoft Ventures: Making the long bet on AI + people

Another significant commitment by Microsoft to democratize AI:

a new Microsoft Ventures fund for investment in AI companies focused on inclusive growth and positive impact on society.

Companies in this fund will help people and machines work together to increase access to education, teach new skills and create jobs, enhance the capabilities of existing workforces and improve the treatment of diseases, to name just a few examples.

CEO Satya Nadella outlined principles and goals for AI: AI must be designed to assist humanity; be transparent; maximize efficiency without destroying human dignity; provide intelligent privacy and accountability for the unexpected; and be guarded against biases. These principles guide us as we move forward with this fund.

Source: Microsoft blog


Teaching an Algorithm to Understand Right and Wrong


Aristotle states that it is a fact that “all knowledge and every pursuit aims at some good,” but then continues, “What then do we mean by the good?” That, in essence, encapsulates the ethical dilemma.

We all agree that we should be good and just, but it’s much harder to decide what that entails.

“We need to decide to what extent the legal principles that we use to regulate humans can be used for machines. There is a great potential for machines to alert us to bias. We need to not only train our algorithms but also be open to the possibility that they can teach us about ourselves.” – Francesca Rossi, an AI researcher at IBM

Since Aristotle’s time, the questions he raised have been continually discussed and debated. 

Today, as we enter a “cognitive era” of thinking machines, the problem of what should guide our actions is gaining newfound importance. If we find it so difficult to denote the principles by which a person should act justly and wisely, then how are we to encode them within the artificial intelligences we are creating? It is a question that we need to come up with answers for soon.

Cultural Norms vs. Moral Values

Another issue that we will have to contend with is deciding not only what ethical principles to encode in artificial intelligences but also how they are coded. For the most part, “Thou shalt not kill” is a strict principle; in a few rare cases, such as for a Secret Service agent or a soldier, it is more like a preference that is greatly affected by context.

What makes one thing a moral value and another a cultural norm? Well, that’s a tough question for even the most-lauded human ethicists, but we will need to code those decisions into our algorithms. In some cases, there will be strict principles; in others, merely preferences based on context. For some tasks, algorithms will need to be coded differently according to what jurisdiction they operate in.

Setting a Higher Standard

Most AI experts I’ve spoken to think that we will need to set higher moral standards for artificial intelligences than we do for humans.

Major industry players, such as Google, IBM, Amazon, and Facebook, recently set up a partnership to create an open platform between leading AI companies and stakeholders in academia, government, and industry to advance understanding and promote best practices. Yet that is merely a starting point.

Source: Harvard Business Review


Microsoft is partnering with Elon Musk’s $1 billion #AI research company to help it battle Amazon and Google

Microsoft has announced a new partnership with OpenAI, the $1 billion artificial intelligence research nonprofit cofounded by Tesla CEO Elon Musk and Y Combinator President Sam Altman.

Artificial intelligence is going to be a big point of competition between Microsoft Azure, the leading Amazon Web Services, and the relative upstart Google Cloud over the months and years to come. As Guthrie says, “any application is ultimately going to weave in AI,” and Microsoft wants to be the company that helps developers do the weaving.

That’s where the OpenAI partnership becomes so important, Guthrie says.

Because we’re still in the earliest days of artificial intelligence, he says, the biggest challenge is figuring out what exactly can be done with artificial intelligence. Guthrie calls this “understanding the art of the possible.”

Source: Business Insider


Cambridge students build a ‘lawbot’ to advise sexual assault victims #AI


“Hi, I’m LawBot, a robot designed to help victims of crime in England.”

While volunteering at a school sexual consent class, Ludwig Bull, a law student at the University of Cambridge, was inspired to build a chatbot that offers free legal advice to students. He enlisted the help of four coursemates, and Lawbot was designed and built in just six weeks.

The program is still in beta, but Bull hopes it will help victims of crime, at Cambridge and beyond, to get justice.

“A victim can talk to our artificially intelligent chatbot, receive a preliminary assessment of their situation, and then decide which available actions to pursue.”

Source: The Guardian


The Christianizing of AI

Blogger’s note: The following post illustrates the challenge in creating ethics for AI. There are many different faiths, with different belief systems. How would the AI be programmed to serve these diverse ethical needs?

The ethics of artificial intelligence (AI) has drawn comments from the White House and British House of Commons in recent weeks, along with a nonprofit organization established by Amazon, Google, Facebook, IBM and Microsoft. Now, Baptist computer scientists have called Christians to join the discussion.

Louise Perkins, professor of computer science at California Baptist University, told Baptist Press she is “quite worried” at the lack of an ethical code related to AI. The Christian worldview, she added, has much to say about how automated devices should be programmed to safeguard human flourishing.

Individuals with a Christian worldview need to be involved in designing and programing AI systems, Perkins said, to help prevent those systems from behaving in ways that violate the Bible’s ethical standards.

Believers can thus employ “the mathematics or the logic we will be using to program these devices” to “infuse” a biblical worldview “into an [AI] system.” 

Perkins also noted that ethical standards will have to be programmed into AI systems involved in surgery and warfare among other applications. A robot performing surgery on a pregnant woman, for instance, could have to weigh the life of the baby relative to the life of the mother, and an AI weapon system could have to apply standards of just warfare.

Source: The Pathway


12 Observations About Artificial Intelligence From The O’Reilly AI Conference

Blogger’s note: Here are a few excerpts from a long but very informative review. (The best may be last.)

The conference was organized by Ben Lorica and Roger Chen, with Peter Norvig and Tim O’Reilly acting as honorary program chairs.

For a machine to act in an intelligent way, said [Yann] LeCun, it needs “to have a copy of the world and its objective function in such a way that it can roll out a sequence of actions and predict their impact on the world.” To do this, machines need to understand how the world works, learn a large amount of background knowledge, perceive the state of the world at any given moment, and be able to reason and plan.
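
Paraphrased in code, LeCun’s description is essentially model-based planning: roll candidate action sequences through a predictive model of the world and score the predicted outcomes against an objective function. The following toy sketch (the world model, objective, and action set are all invented stand-ins, not LeCun’s system) illustrates the loop:

```python
# Minimal model-based planning sketch: roll out action sequences through a
# world model and keep the sequence whose predicted end state scores best.
from itertools import product

def world_model(state, action):
    """Toy transition model: state is a number, actions nudge it."""
    return state + {"left": -1, "stay": 0, "right": +1}[action]

def objective(state):
    """Toy objective: we want the state to reach 3."""
    return -abs(state - 3)

def plan(state, horizon=4):
    best_score, best_seq = float("-inf"), None
    for seq in product(["left", "stay", "right"], repeat=horizon):
        s = state
        for a in seq:                 # roll the model forward
            s = world_model(s, a)
        if objective(s) > best_score:  # score the predicted outcome
            best_score, best_seq = objective(s), seq
    return best_seq

print(plan(0))  # one optimal sequence, e.g. ('stay', 'right', 'right', 'right')
```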

Peter Norvig explained the reasons why machine learning is more difficult than traditional software: “lack of clear abstraction barriers” — debugging is harder because it’s difficult to isolate a bug; “non-modularity” — if you change anything, you end up changing everything; “nonstationarity” — the need to account for new data; “whose data is this?” — issues around privacy, security, and fairness; and a lack of adequate tools and processes — existing ones were developed for traditional software.
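
To make the “nonstationarity” point concrete: production ML systems typically have to monitor incoming data for drift away from the training distribution, something traditional software never needs. Here is a minimal illustrative check (the threshold and statistic are arbitrary choices for the sketch, not a method Norvig described):

```python
# Toy drift check: flag when live data's mean wanders too far from the
# training mean. Real systems use richer statistics (PSI, KS tests, etc.).
import statistics

def drifted(train_values, live_values, threshold=3.0):
    """True if the live mean sits more than `threshold` standard errors
    from the training mean."""
    mu = statistics.mean(train_values)
    sd = statistics.stdev(train_values)
    se = sd / len(live_values) ** 0.5
    return abs(statistics.mean(live_values) - mu) > threshold * se

train = [0.1, 0.2, 0.15, 0.22, 0.18, 0.2, 0.17, 0.19]
live = [0.45, 0.5, 0.48, 0.52]   # the distribution has shifted
print(drifted(train, live))       # True: time to account for new data
```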

AI must consider culture and context—“training shapes learning”

“Many of the current algorithms have already built in them a country and a culture,” said Genevieve Bell, Intel Fellow and Director of Interaction and Experience Research at Intel. As today’s smart machines are (still) created and used only by humans, culture and context are important factors to consider in their development.

Both Rana El Kaliouby (CEO of Affectiva, a startup developing emotion-aware AI) and Aparna Chennapragada (Director of Product Management at Google) stressed the importance of using diverse training data—if you want your smart machine to work everywhere on the planet it must be attuned to cultural norms.

“Training shapes learning — the training data you put in determines what you get out,” said Chennapragada. And it’s not just culture that matters, but also context.

The £10 million Leverhulme Centre for the Future of Intelligence will explore “the opportunities and challenges of this potentially epoch-making technological development,” namely AI. According to The Guardian, Stephen Hawking said at the opening of the Centre,

“We spend a great deal of time studying history, which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence.”

Gary Marcus, professor of psychology and neural science at New York University and cofounder and CEO of Geometric Intelligence, said:

 “a lot of smart people are convinced that deep learning is almost magical—I’m not one of them …  A better ladder does not necessarily get you to the moon.”

Tom Davenport added, at the conference: “Deep learning is not profound learning.”

AI changes how we interact with computers—and it needs a dose of empathy

AI may still be hampered by a futile search for human-level intelligence while locked into a materialist paradigm

Maybe, just maybe, our minds are not computers and computers do not resemble our brains?  And maybe, just maybe, if we finally abandon the futile pursuit of replicating “human-level AI” in computers, we will find many additional–albeit “narrow”–applications of computers to enrich and improve our lives?

Gary Marcus complained about research papers presented at the Neural Information Processing Systems (NIPS) conference, saying that they are like alchemy, adding a layer or two to a neural network, “a little fiddle here or there.” Instead, he suggested “a richer base of instruction set of basic computations,” arguing that “it’s time for genuinely new ideas.”

Is it possible that this paradigm—and the driving ambition at its core to play God and develop human-like machines—has led to the infamous “AI Winter”? And that continuing to adhere to it and refusing to consider “genuinely new ideas,” out-of-the-dominant-paradigm ideas, will lead to yet another AI Winter?

 Source: Forbes


MIT makes breakthrough in morality-proofing artificial intelligence

Researchers at MIT are investigating ways of making artificial neural networks more transparent in their decision-making.

As they stand now, artificial neural networks are a wonderful tool for discerning patterns and making predictions. But they also have the drawback of not being terribly transparent. The beauty of an artificial neural network is its ability to sift through heaps of data and find structure within the noise.

This is not dissimilar from the way we might look up at clouds and see faces amidst their patterns. And just as we might have trouble explaining to someone why a face jumped out at us from the wispy trails of a cirrus cloud formation, artificial neural networks are not explicitly designed to reveal what particular elements of the data prompted them to decide a certain pattern was at work and make predictions based upon it.

We tend to want a little more explanation when human lives hang in the balance — for instance, if an artificial neural net has just diagnosed someone with a life-threatening form of cancer and recommends a dangerous procedure. At that point, we would likely want to know what features of the person’s medical workup tipped the algorithm in favor of its diagnosis.

MIT researchers Lei, Barzilay, and Jaakkola designed a neural network that would be forced to provide explanations for why it reached a certain conclusion.
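
Their actual method jointly trains a “generator” that selects a short rationale from the input and an “encoder” that must predict from that rationale alone. As a much simpler stand-in for the same underlying question (which parts of the input drove the decision?), this hypothetical sketch scores each word of an input by how much deleting it changes a black-box model’s output:

```python
# Occlusion-style probe: a crude, illustrative cousin of rationale
# extraction. Not the MIT method; just the same question asked cheaply.
def occlusion_scores(words, predict):
    """predict: any black-box function from a list of words to a score."""
    base = predict(words)
    scores = {}
    for i, w in enumerate(words):
        without = words[:i] + words[i + 1:]
        scores[w] = base - predict(without)  # contribution of word i
    return scores                             # (duplicate words collapse)

# Toy "sentiment model": fraction of words that are positive.
POSITIVE = {"great", "excellent", "love"}
predict = lambda ws: sum(w in POSITIVE for w in ws) / max(len(ws), 1)

review = "the plot was thin but the acting was great".split()
print(occlusion_scores(review, predict))  # "great" gets the largest score
```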

Source: Extremetech

 


China’s plan to organize its society relies on ‘big data’ to rate everyone

Imagine a world where an authoritarian government monitors everything you do, amasses huge amounts of data on almost every interaction you make, and awards you a single score that measures how “trustworthy” you are.

In this world, anything from defaulting on a loan to criticizing the ruling party, from running a red light to failing to care for your parents properly, could cause you to lose points. 

This is not the dystopian superstate of Steven Spielberg’s “Minority Report,” in which all-knowing police stop crime before it happens. But it could be China by 2020.

And in this world, your score becomes the ultimate truth of who you are — determining whether you can borrow money, get your children into the best schools or travel abroad; whether you get a room in a fancy hotel, a seat in a top restaurant — or even just get a date.

It is the scenario contained in China’s ambitious plans to develop a far-reaching social credit system, a plan that the Communist Party hopes will build a culture of “sincerity” and a “harmonious socialist society” where “keeping trust is glorious.”

The ambition is to collect every scrap of information available online about China’s companies and citizens in a single place — and then assign each of them a score based on their political, commercial, social and legal “credit.”

Mobile device usage and e-commerce are in wide use in China, and now the Communist Party wants to compile a “social credit” score based on citizens’ every activity. (Michael Robinson Chavez/The Washington Post)


Source: The Washington Post


New Research Center to Explore Ethics of Artificial Intelligence


The Chimp robot, built by a Carnegie Mellon team, took third place in a competition held by DARPA last year. The school is starting a research center focused on the ethics of artificial intelligence. Credit Chip Somodevilla/Getty Images

Carnegie Mellon University plans to announce on Wednesday that it will create a research center that focuses on the ethics of artificial intelligence.

The ethics center, called the K&L Gates Endowment for Ethics and Computational Technologies, is being established at a time of growing international concern about the impact of A.I. technologies.

“We are at a unique point in time where the technology is far ahead of society’s ability to restrain it.” – Subra Suresh, Carnegie Mellon’s president

The new center is being created with a $10 million gift from K&L Gates, an international law firm headquartered in Pittsburgh.

Peter J. Kalis, chairman of the law firm, said the potential impact of A.I. technology on the economy and culture made it essential that as a society we make thoughtful, ethical choices about how the software and machines are used.

“Carnegie Mellon resides at the intersection of many disciplines,” he said. “It will take a synthesis of the best thinking of all of these disciplines for society to define the ethical constraints on the emerging A.I. technologies.”

Source: NY Times


Genetically engineered humans will arrive sooner than you think. And we’re not ready

Michael Bess is a historian of science at Vanderbilt University and the author of a fascinating new book, Our Grandchildren Redesigned: Life in a Bioengineered Society. Bess’s book offers a sweeping look at our genetically modified future, a future as terrifying as it is promising.

“What’s happening is bigger than any one of us”

We single out the industrial revolutions of the past as major turning points in human history because they marked major ways in which we changed our surroundings to make our lives easier, better, longer, healthier.

So these are just great landmarks, and I’m comparing this to those big turning points because now the technology, instead of being applied to our surroundings — how we get food for ourselves, how we transport things, how we shelter ourselves, how we communicate with each other — now those technologies are being turned directly on our own biology, on our own bodies and minds.

And so, instead of transforming the world around ourselves to make it more what we wanted it to be, now it’s becoming possible to transform ourselves into whatever it is that we want to be. And there’s both power and danger in that, because people can make terrible miscalculations, and they can alter themselves, maybe in ways that are irreversible, that do irreversible harm to the things that really make their lives worth living.

“We’re going to give ourselves a power that we may not have the wisdom to control very well”

I think most historians of technology … see technology and society as co-constructing each other over time, which gives human beings a much greater space for having a say in which technologies will be pursued and what direction we will take, and how much we choose to have them come into our lives and in what ways.

 Source: Vox



Artificial Intelligence’s White Guy Problem



Warnings by luminaries like Elon Musk and Nick Bostrom about “the singularity” — when machines become smarter than humans — have attracted millions of dollars and spawned a multitude of conferences.

But this hand-wringing is a distraction from the very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems.

Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to.

A very serious example was revealed in an investigation published last month by ProPublica. It found that widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.

The reason those predictions are so skewed is still unknown, because the company responsible for these algorithms keeps its formulas secret — it’s proprietary information. Judges do rely on machine-driven risk assessments in different ways — some may even discount them entirely — but there is little they can do to understand the logic behind them.
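
Notably, an audit like ProPublica’s does not need the proprietary formula: error-rate disparities can be measured from outcomes alone. A minimal sketch of that calculation, using invented records rather than ProPublica’s data:

```python
# Fairness audit sketch: compare false positive rates across groups from
# (group, predicted_high_risk, actually_reoffended) records. Toy data only.
def false_positive_rates(records):
    rates = {}
    for group in {g for g, _, _ in records}:
        fp = sum(1 for g, pred, actual in records
                 if g == group and pred and not actual)
        negatives = sum(1 for g, _, actual in records
                        if g == group and not actual)
        rates[group] = fp / negatives if negatives else float("nan")
    return rates

# Hypothetical records, not ProPublica's data.
records = [
    ("A", True, False), ("A", False, False), ("A", True, True),
    ("A", True, False), ("B", False, False), ("B", False, False),
    ("B", True, True), ("B", True, False),
]
print(false_positive_rates(records))  # e.g. {'A': 0.67, 'B': 0.33}
```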

Histories of discrimination can live on in digital platforms, and if they go unquestioned, they become part of the logic of everyday algorithmic systems.

Another scandal emerged recently when it was revealed that Amazon’s same-day delivery service was unavailable for ZIP codes in predominantly black neighborhoods. The areas overlooked were remarkably similar to those affected by mortgage redlining in the mid-20th century. Amazon promised to redress the gaps, but it reminds us how systemic inequality can haunt machine intelligence.

And then there’s gender discrimination. Last July, computer scientists at Carnegie Mellon University found that women were less likely than men to be shown ads on Google for highly paid jobs. The complexity of how search engines show ads to internet users makes it hard to say why this happened — whether the advertisers preferred showing the ads to men, or the outcome was an unintended consequence of the algorithms involved.

Regardless, algorithmic flaws aren’t easily discoverable: How would a woman know to apply for a job she never saw advertised? How might a black community learn that it was being overpoliced by software?

Like all technologies before it, artificial intelligence will reflect the values of its creators.

Source: New York Times – Kate Crawford is a principal researcher at Microsoft and co-chairwoman of a White House symposium on society and A.I.


AI is one of the top 5 tools humanity has ever had

A few highlights from the AI panel at the White House Frontiers Conference

On the impact of AI

Andrew McAfee (MIT):

To view the video, go to the source page, scroll down to Live Stream, and click to start the video. It may take a minute to load; then skip ahead to the time you want to watch.

(Begins @ 2:40:34)

We are at an inflection point … I think the development of these kinds of [AI] tools are going to rank among probably the top 5 tools humanity has ever had to take better care of each other and to tread more lightly on the planet … top 5 in our history. Like the book, maybe, the steam engine, maybe, written language — I might put the Internet there. We’ve all got our pet lists of the biggest inventions ever. AI needs to be on the very, very, short list.

On bias in AI

Fei-Fei Li, Professor of Computer Science, Stanford University:

(Begins @ 3:14:57)

Research repeatedly has shown that when people work in diverse groups there is increased creativity and innovation.

And interestingly, it is harder to work as a diverse group. I’m sure everybody here in the audience has had that experience. We have to listen to each other more. We have to understand the perspective more. But that also correlates well with innovation and creativity. … If we don’t have the inclusion of [diverse] people to think about the problems and the algorithms in AI, we might not only be missing the innovation boat, we might actually create bias and unfairness that are going to be detrimental to our society …

What I have been advocating at Stanford, and with my colleagues in the community is, let’s bring the humanistic mission statement into the field of AI. Because AI is fundamentally an applied technology that’s going to serve our society. Humanistic AI not only raises the awareness and the importance of our technology, it’s actually a really, really important way to attract diverse students and technologists and innovators to participate in the technology of AI.

There has been a lot of research showing that people with diverse backgrounds put more emphasis on the humanistic mission in their work and in their lives. So if, in our education and our research, we can accentuate or bring out this humanistic message of this technology, we are more likely to invite a diversity of students and young technologists to join us.

On lack of minorities in AI

Andrew Moore, Dean, School of Computer Science, Carnegie Mellon University:

(Begins @ 3:19:10)

I so strongly applaud what you [Fei-Fei Li] are describing here because I think we are engaged in a fight here for how the 21st century pans out in terms of who’s running the world … 

The nightmare, the silly, silly thing we could do … would be if … the middle of the century is built by a bunch of non-minority guys from suburban moderately wealthy United States instead of the full population of the United States.

Source: Frontiers Conference
Click on the video that says “Live Stream (event will start shortly)”; it may take a minute to load.

(Update 02/24/17: The original timelines listed above may be different when revisiting this video.)


How Deep Learning is making AI prejudiced

Blogger’s note: The authors of this research paper show what they refer to as “machine prejudice” and how it derives fundamentally from human culture.

“Concerns about machine prejudice are now coming to the fore–concerns that our historic biases and prejudices are being reified in machines,” they write. “Documented cases of automated prejudice range from online advertising (Sweeney, 2013) to criminal sentencing (Angwin et al., 2016).”

Following are a few excerpts: 

Abstract

“Artificial intelligence and machine learning are in a period of astounding growth. However, there are concerns that these technologies may be used, either with or without intention, to perpetuate the prejudice and unfairness that unfortunately characterizes many human institutions. Here we show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language—the same sort of language humans are exposed to every day.

Discussion

“We show for the first time that if AI is to exploit via our language the vast knowledge that culture has compiled, it will inevitably inherit human-like prejudices. In other words, if AI learns enough about the properties of language to be able to understand and produce it, it also acquires cultural associations that can be offensive, objectionable, or harmful. These are much broader concerns than intentional discrimination, and possibly harder to address.

Awareness is better than blindness

“… where AI is partially constructed automatically by machine learning of human culture, we may also need an analog of human explicit memory and deliberate actions, that can be trained or programmed to avoid the expression of prejudice.

“Of course, such an approach doesn’t lend itself to a straightforward algorithmic formulation. Instead it requires a long-term, interdisciplinary research program that includes cognitive scientists and ethicists. …”
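Blogger’s note: The paper’s core measurement is a difference in cosine similarity between word vectors. Below is a toy sketch of that idea with random stand-in vectors; the authors’ real tests use pretrained embeddings (such as GloVe) plus an effect size and a permutation test on top of this statistic.

```python
# Toy sketch of the paper's core statistic: the difference in mean cosine
# similarity between a target word and "pleasant" vs. "unpleasant" attribute
# words. Vectors here are random stand-ins, not real embeddings.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, pleasant, unpleasant):
    """Mean similarity to pleasant words minus mean similarity to unpleasant."""
    return (np.mean([cosine(w, a) for a in pleasant])
            - np.mean([cosine(w, b) for b in unpleasant]))

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in
       ["flower", "insect", "love", "peace", "hatred", "ugly"]}

pleasant = [emb["love"], emb["peace"]]
unpleasant = [emb["hatred"], emb["ugly"]]
for target in ("flower", "insect"):
    print(target, round(association(emb[target], pleasant, unpleasant), 3))
# With real embeddings, "flower" scores noticeably higher than "insect" --
# the human-like association the authors quantify.
```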

Click here to download the pdf of the report:
Semantics derived automatically from language corpora necessarily contain human biases
Aylin Caliskan-Islam, Joanna J. Bryson, and Arvind Narayanan (Princeton University; University of Bath)
Draft date: August 31, 2016.


Grandma? Now you can see the bias in the data …

“Just type the word grandma in your favorite search engine image search and you will see the bias in the data, in the picture that is returned  … you will see the race bias.” — Fei-Fei Li, Professor of Computer Science, Stanford University, speaking at the White House Frontiers Conference

Google image search for Grandma 


Bing image search for Grandma



It seems that A.I. will be the undoing of us all … romantically, at least

As if finding love weren’t hard enough, the creators of Operator decided to show just how Artificial Intelligence could ruin modern relationships.

Artificial Intelligence so often focuses on the idea of “perfection.” As most of us know, people are anything but perfect, and believing that your S.O. (Significant Other) is perfect can lead to problems. The point of an A.I., however, is perfection — so why would someone choose the flaws of a human being over an A.I. that can give you all the comfort you want with none of the costs?

Hopefully, people continue to choose imperfection.

Source: Inverse.com


Civil Rights and Big Data

Blogger’s note: We’ve posted several articles on the bias and prejudice inherent in big data, which, combined with machine learning, results in “machine prejudice” that affects humans whenever they interact with intelligent agents.

Apparently, as far back as May 2014, the Executive Office of the President began issuing reports on the potential of “algorithmic systems” for “encoding discrimination in automated decisions.” The most recent report, from May 2016, addressed two additional challenges:

1) Challenges relating to data used as inputs to an algorithm;

2) Challenges related to the inner workings of the algorithm itself.

Here are two excerpts:

The Obama Administration’s Big Data Working Group released reports on May 1, 2014 and February 5, 2015. These reports surveyed the use of data in the public and private sectors and analyzed opportunities for technological innovation as well as privacy challenges. One important social justice concern the 2014 report highlighted was “the potential of encoding discrimination in automated decisions”—that is, that discrimination may “be the inadvertent outcome of the way big data technologies are structured and used.”

To avoid exacerbating biases by encoding them into technological systems, we need to develop a principle of “equal opportunity by design”—designing data systems that promote fairness and safeguard against discrimination from the first step of the engineering process and continuing throughout their lifespan.
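Blogger’s note: One way to read “equal opportunity by design” is that fairness checks become gates in the engineering pipeline itself, run on every build rather than once after launch. The sketch below is our own illustration; the stage names and thresholds are assumptions, not taken from the report.

```python
# A hedged sketch of "equal opportunity by design": fairness checks as gates
# at each stage of the engineering pipeline. Stage names and thresholds are
# illustrative assumptions, not from the White House report.

def gate(name, passed):
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
    if not passed:
        raise SystemExit(f"pipeline stopped at: {name}")

def run_pipeline(group_counts, group_error_rates):
    # Stage 1 -- data inputs: is every group adequately represented?
    min_share = min(group_counts.values()) / sum(group_counts.values())
    gate("representation >= 10% per group", min_share >= 0.10)
    # Stage 2 -- model behaviour: are error rates comparable across groups?
    gap = max(group_error_rates.values()) - min(group_error_rates.values())
    gate("error-rate gap <= 5 points", gap <= 0.05)
    print("model cleared for release")

run_pipeline(group_counts={"group_a": 500, "group_b": 480},
             group_error_rates={"group_a": 0.08, "group_b": 0.11})
```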

Download the report here: Whitehouse.gov

References:

https://www.whitehouse.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence

http://www.frontiersconference.org/

 

 


When artificial intelligence judges a beauty contest, white people win

Some of the beauty contest winners judged by an AI

As humans cede more and more control to algorithms, whether in the courtroom or on social media, the way they are built becomes increasingly important. The foundation of machine learning is data gathered by humans, and without careful consideration, the machines learn the same biases of their creators.

An online beauty contest called Beauty.ai, run by Youth Laboratories, solicited 600,000 entries by promising they would be graded by artificial intelligence. The algorithm would look at wrinkles, face symmetry, number of pimples and blemishes, race, and perceived age. However, race seemed to play a larger role than intended; of the 44 winners, 36 were white.

“So inclusivity matters—from who designs it to who sits on the company boards and which ethical perspectives are included. Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.” – Kate Crawford

“It happens to be that color does matter in machine vision,” Alex Zhavoronkov, chief science officer of Beauty.ai, told Motherboard. “And for some population groups the data sets are lacking an adequate number of samples to be able to train the deep neural networks.”

“If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing non-white faces,” writes Kate Crawford, principal researcher at Microsoft Research New York City, in a New York Times op-ed.
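Blogger’s note: The “lacking an adequate number of samples” problem can be checked before training ever starts. A minimal sketch, with invented numbers and an arbitrary threshold:

```python
# Minimal sketch, with invented numbers: flag groups whose share of the
# training set is too small for a deep network to learn reliably -- the
# failure mode Beauty.ai's team describes. The 10% threshold is arbitrary.
from collections import Counter

def underrepresented(group_labels, min_share=0.10):
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items() if c / total < min_share}

train_groups = ["white"] * 850 + ["asian"] * 90 + ["black"] * 60
print(underrepresented(train_groups))
# -> {'asian': 0.09, 'black': 0.06}: both groups are too sparse to train on.
```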

Source: Quartz


Why Artificial Intelligence Needs Some Sort of Moral Code

Two new research groups want to ensure that AI benefits humans, not harms them.

Whether you believe the buzz about artificial intelligence is merely hype or that the technology represents the future, something undeniable is happening. Researchers are solving decades-old problems, like teaching computers to recognize images and understand speech, at a rapid pace, and companies like Google and Facebook are pouring millions of dollars into their own related projects.

What could possibly go wrong?

For one thing, advances in artificial intelligence could eventually lead to unforeseen consequences. University of California at Berkeley professor Stuart Russell is concerned that powerful computers powered by artificial intelligence, or AI, could unintentionally create problems that humans cannot predict.

Consider an AI system that’s designed to make the best stock trades but has no moral code to keep it from doing something illegal. That’s why Russell and UC Berkeley debuted a new AI research center this week to address these potential problems and build AI systems that consider moral issues. Tech giants Alphabet, Facebook, IBM, and Microsoft are also teaming up to focus on the ethics challenges.

Similarly, Ilya Sutskever, the research director of the Elon Musk-backed OpenAI nonprofit, is working on AI projects independent from giant corporations. He and OpenAI believe those big companies could ignore AI’s potential benefit for humanity and instead focus the technology entirely on making money.

Russell compares the current state of AI to the rise of nuclear energy during the 1950s and 1960s, when proponents believed that “anyone who disagreed with them was irrational or crazy” for wanting robust safety measures that could hinder innovation and adoption. Sutskever says some AI proponents fail to consider the potential dangers or unintended consequences of the technology—just like some people were unable to grasp that widespread use of cars could lead to global warming.

Source: Fortune


China has now eclipsed U.S. in AI research

As more industries and policymakers awaken to the benefits of machine learning, two countries appear to be pulling away in the research race. The results will probably have significant implications for the future of AI.

Chart: articles on deep learning published annually, by country

What’s striking about it is that although the United States was an early leader on deep-learning research, China has effectively eclipsed it in terms of the number of papers published annually on the subject. The rate of increase is remarkably steep, reflecting how quickly China’s research priorities have shifted.

The quality of China’s research is also striking. The chart below narrows the research to include only those papers that were cited at least once by other researchers, an indication that the papers were influential in the field.

Compared with other countries, the United States and China are spending tremendous research attention on deep learning. But, according to the White House, the United States is not investing nearly enough in basic research.

“Current levels of R&D spending are half to one-quarter of the level of R&D investment that would produce the optimal level of economic growth,” finds a companion report published this week by the Obama administration.

Source: The Washington Post


Artificial Intelligence Will Be as Biased and Prejudiced as Its Human Creators

The optimism around modern technology lies in part in the belief that it’s a democratizing force—one that isn’t bound by the petty biases and prejudices that humans have learned over time. But for artificial intelligence, that’s a false hope, according to new research, and the reason is boneheadedly simple: Just as we learn our biases from the world around us, AI will learn its biases from us.

Source: Pacific Standard


Machine learning needs rich feedback for AI teaching

With AI systems largely receiving feedback in a binary yes/no format, Monash University professor Tom Drummond says rich feedback is needed to allow AI systems to know why answers are incorrect.

In much the same way children have to be told not only that what they are saying is wrong but also why it is wrong, artificial intelligence (AI) systems need to be able to receive and act on similar feedback.

“Rich feedback is important in human education, I think probably we’re going to see the rise of machine teaching as an important field — how do we design systems so that they can take rich feedback and we can have a dialogue about what the system has learnt?”

“We need to be able to give it rich feedback and say ‘No, that’s unacceptable as an answer because …’ We don’t want to simply say ‘No,’ because that’s the same as saying it is grammatically incorrect, and it’s a very, very blunt hammer,” Drummond said.
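Blogger’s note: The contrast Drummond draws is easy to see in code. A binary signal collapses every mistake into “no,” while rich feedback carries the reason, so the system can respond differently to different kinds of error. The sketch below is purely illustrative, and every name in it is hypothetical.

```python
# Purely illustrative sketch; names are hypothetical. Binary feedback collapses
# every mistake into "no"; rich feedback keeps the reason, so the system can
# act differently on different kinds of error.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Feedback:
    correct: bool
    reason: Optional[str] = None  # e.g. "unacceptable_answer", "ungrammatical"

def apply_feedback(fb: Feedback) -> str:
    if fb.correct:
        return "reinforce current behaviour"
    actions = {
        "factually_wrong": "retrain on corrected examples",
        "unacceptable_answer": "add the case to a social-norms constraint set",
        "ungrammatical": "penalise the language-model output",
    }
    # A binary system could only ever reach one branch here.
    return actions.get(fb.reason, "ask the teacher for a more specific reason")

print(apply_feedback(Feedback(correct=False, reason="unacceptable_answer")))
```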

The flaw of objective function

According to Drummond, one problematic feature of AI systems is the objective function that sits at the heart of a system’s design.

The professor pointed to the match between Google DeepMind’s AlphaGo and South Korean Go champion Lee Se-dol in March, which saw the artificial intelligence beat human intelligence by 4 games to 1.

In the fourth match, the only one in which Se-dol picked up a victory, AlphaGo, after clearly falling behind, played a number of moves that Drummond said would be considered insulting if a human had played them, given the position the machine found itself in.

“Here’s the thing, the objective function was the highest probability of victory, it didn’t really understand the social niceties of the game.

“At that point AlphaGo knew it had lost but it still tried to maximise its probability of victory, so it played all these moves … a move that threatens a large group of stones, but has a really obvious counter and if somehow the human misses the counter move, then it’s won — but of course you would never play this, it’s not appropriate.”
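Blogger’s note: Here is a toy model, with invented numbers, of the point about objective functions. If the objective is win probability and nothing else, a trick move with a sliver of a chance beats a dignified losing line.

```python
# Toy model with invented numbers. Each move: (chance the human misses the
# counter, win probability if missed, win probability otherwise).
candidate_moves = {
    "graceful_endgame_move": (0.00, 0.00, 0.02),
    "obvious_trick_move":    (0.05, 0.95, 0.00),
}

def win_probability(move):
    p_miss, p_win_if_missed, p_win_otherwise = candidate_moves[move]
    return p_miss * p_win_if_missed + (1 - p_miss) * p_win_otherwise

best = max(candidate_moves, key=win_probability)
print(best, round(win_probability(best), 4))
# -> obvious_trick_move 0.0475: the objective prefers the "insulting" move,
# because nothing in it encodes the social niceties of the game.
```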

Source: ZDNet


The big reveal: AI’s deep learning is biased

A comment from the writers of this blog: 

The chart below visualizes 175 cognitive biases that humans have, meticulously organized by Buster Benson and algorithmically designed by John Manoogian III.

Many of these are implicit biases: attitudes or stereotypes that affect our understanding, actions, and decisions in an unconscious manner. These biases, embedded in our language, are now getting embedded in big data. They are being absorbed by deep learning and are now influencing Artificial Intelligence. Going forward, this will impact how AI interacts with humans.

We have featured many other posts on this blog recently about this issue—how AI is demonstrating bias—and we are adding this “cheat sheet” to further illustrate the kinds of human bias that AI is learning. 

Illustration content by Buster Benson; “diagrammatic poster remix” by John Manoogian III

Source: Buster Benson blog


Google’s AI Plans Are A Privacy Nightmare

Google is betting that people care more about convenience and ease than they do about a seemingly oblique notion of privacy, and it is increasingly correct in that assumption.

Google’s new assistant, which debuted in the company’s new messaging app Allo, works like this: Simply ask the assistant a question about the weather, nearby restaurants, or for directions, and it responds with detailed information right there in the chat interface.

Because Google’s assistant recommends things that are innately personal to you, like where to eat tonight or how to get from point A to B, it is amassing a huge collection of your most personal thoughts, visited places, and preferences. For the AI to “learn,” it will have to collect and analyze as much data about you as possible in order to serve you more accurate recommendations, suggestions, and data.

In order for artificial intelligence to function, your messages have to be unencrypted.

These new assistants are really cool, and the reality is that tons of people will probably use them and enjoy the experience. But at the end of the day, we’re sacrificing the security and privacy of our data so that Google can develop what will eventually become a new revenue stream. Lest we forget: Google and Facebook have a responsibility to investors, and an assistant that offers up a sponsored result when you ask it what to grab for dinner tonight could be a huge moneymaker.

Source: Gizmodo


Artificial intelligence is quickly becoming as biased as we are


When you perform a Google search for everyday queries, you don’t typically expect systemic racism to rear its ugly head. Yet, if you’re a woman searching for a hairstyle, that’s exactly what you might find.

A simple Google image search for ‘women’s professional hairstyles’ returns the following:

 … you could probably pat Google on the back and say ‘job well done.’ That is, until you try searching for ‘unprofessional women’s hairstyles’ and find this:


It’s not new. In fact, Boing Boing spotted this back in April.

What’s concerning though, is just how much of our lives we’re on the verge of handing over to artificial intelligence. With today’s deep learning algorithms, the ‘training’ of this AI is often as much a product of our collective hive mind as it is programming.

Artificial intelligence, in fact, is using our collective thoughts to train the next generation of automation technologies. All the while, it’s picking up our biases and making them more visible than ever.

This is just the beginning … If you want the scary stuff, we’re expanding algorithmic policing that relies on many of the same principles used to train the previous examples. In the future, our neighborhoods will see an increase or decrease in police presence based on data that we already know is biased.

Source: The Next Web


UC Berkeley launches Center for Human-Compatible Artificial Intelligence

The primary focus of the new center is to ensure that AI systems are “beneficial to humans,” says UC Berkeley AI expert Stuart Russell.

The center will work on ways to guarantee that the most sophisticated AI systems of the future, which may be entrusted with control of critical infrastructure and may provide essential services to billions of people, will act in a manner that is aligned with human values.

“In the process of figuring out what values robots should optimize, we are making explicit the idealization of ourselves as humans. As we envision AI aligned with human values, that process might cause us to think more about how we ourselves really should behave, and we might learn that we have more in common with people of other cultures than we think.”

Source: Berkeley.edu


CIA using deep learning neural networks to predict social unrest

In October 2015, the CIA opened the Directorate for Digital Innovation in order to “accelerate the infusion of advanced digital and cyber capabilities,” the first new directorate to be created by the government agency since 1963.

“What we’re trying to do within a unit of my directorate is leverage what we know from social sciences on the development of instability, coups and financial instability, and take what we know from the past six or seven decades and leverage what is becoming the instrumentation of the globe.”

In fact, over the summer of 2016, the CIA found the intelligence provided by the neural networks was so useful that it provided the agency with a “tremendous advantage” when dealing with situations …

Source: IBTimes


How Artificial Intelligence is becoming ubiquitous #AI

“I think the medical domain is set for a revolution.”

AI will make it possible to have a “personal companion” able to assist you through life.

“I think one of the most exciting prospects is the idea of a digital agent, something that can act on our behalf, almost become like a personal companion and that can do many things for us. For example, at the moment, we have to deal with this tremendous complexity of dealing with so many different services and applications, and the digital world feels as if it’s becoming ever more complex,” Christopher Bishop, lab director at Microsoft Research in Cambridge, U.K., told CNBC in a TV interview.

“I think artificial intelligence is probably the biggest transformation in the IT industry. Medical is such a big area in terms of GDP that that’s got to be a good bet,” Bishop added.

“… imagine an agent that can act on your behalf and be the interface between you and that very complex digital world, and furthermore one that would grow with you, and be a very personalized agent, that would understand you and your needs and your experience and so on in great depth.”

Source: CNBC


4th revolution challenges our ideas of being human


Professor Klaus Schwab, Founder and Executive Chairman of the World Economic Forum, is convinced that we are at the beginning of a revolution that is fundamentally changing the way we live, work and relate to one another.

Some call it the fourth industrial revolution, or industry 4.0, but whatever you call it, it represents the combination of cyber-physical systems, the Internet of Things, and the Internet of Systems.

In his book The Fourth Industrial Revolution, Schwab describes how this fourth revolution is fundamentally different from the previous three, which were characterized mainly by advances in technology.

In this fourth revolution, we are facing a range of new technologies that combine the physical, digital and biological worlds. These new technologies will impact all disciplines, economies and industries, and even challenge our ideas about what it means to be human.

It seems a safe bet to say, then, that our current political, business, and social structures may not be ready or capable of absorbing all the changes a fourth industrial revolution would bring, and that major changes to the very structure of our society may be inevitable.

Schwab said, “The changes are so profound that, from the perspective of human history, there has never been a time of greater promise or potential peril. My concern, however, is that decision makers are too often caught in traditional, linear (and non-disruptive) thinking or too absorbed by immediate concerns to think strategically about the forces of disruption and innovation shaping our future.”

Schwab calls for leaders and citizens to “together shape a future that works for all by putting people first, empowering them and constantly reminding ourselves that all of these new technologies are first and foremost tools made by people for people.”

Source: Forbes, World Economic Forum


Machines can never be as wise as human beings – Jack Ma #AI

Mark Zuckerberg and Jack Ma

“I think machines will be stronger than human beings, machines will be smarter than human beings, but machines can never be as wise as human beings.”

“The wisdom, soul and heart are what human beings have. A machine can never enjoy the feelings of success, friendship and love. We should use the machine in an innovative way to solve human problems.” – Jack Ma, Founder of Alibaba Group, China’s largest online marketplace

Mark Zuckerberg said AI technology could prove useful in areas such as medicine and hands-free driving, but that it is hard to teach computers common sense: humans can learn and apply knowledge to problem-solving in ways computers cannot.

AI won’t outstrip mankind that soon. – Mark Zuckerberg

Source: South China Morning Post

 
