Aside

The point of this blog on Socializing AI:

“Artificial Intelligence must be about more than our things. It must be about more than our machines. It must be a way to advance human behavior in complex human situations. But this will require wisdom-powered code. It will require imprinting AI’s genome with social intelligence for human interaction. It must begin right now.”
— Phil Lawson


Tech Reckons With the Problems It Helped Create

A festival goer at the 2017 SXSW Conference and Festivals in Austin, Texas.

At SXSW this year, the conference itself feels a lot like a hangover.

It’s as if the coastal elites who attend each year finally woke up with a serious case of the Sunday scaries, realizing that the many apps, platforms, and doodads SXSW has launched and glorified over the years haven’t really made the world a better place. In fact, they’ve often come with wildly destructive and dangerous side effects. Sure, it all seemed like a good idea in 2013!

But now the party’s over. It’s time for the regret-filled cleanup.

Speakers related how the very platforms that were meant to promote a marketplace of ideas online have become filthy junkyards of harassment and disinformation.

Yasmin Green, who leads an incubator within Alphabet called Jigsaw, focused her remarks on the rise of fake news, and even brought two propaganda publishers with her on stage to explain how, and why, they do what they do. For Jestin Coler, founder of the phony Denver Guardian, it was an all too easy way to turn a profit during the election.

“To be honest, my mortgage was due,” Coler said of what inspired him to write a bogus article claiming an FBI agent related to Hillary Clinton’s email investigation was found dead in a murder-suicide. That post was shared some 500,000 times just days before the election.

While prior years’ panels may have optimistically offered up more tech as the answer to what ails tech, this year was decidedly short on solutions.

There seemed to be, throughout the conference, a keen awareness of the limits human beings ought to place on the software that is very much eating the world.

Source: Wired


Technology is the main driver of the recent increases in inequality

Artificial Intelligence And Income Inequality

While economists debate the extent to which technology plays a role in global inequality, most agree that tech advances have exacerbated the problem.

Economist Erik Brynjolfsson said,

“My reading of the data is that technology is the main driver of the recent increases in inequality. It’s the biggest factor.”

AI expert Yoshua Bengio suggests that equality and ensuring a shared benefit from AI could be pivotal in the development of safe artificial intelligence. Bengio, a professor at the University of Montreal, explains, “In a society where there’s a lot of violence, a lot of inequality, [then] the risk of misusing AI or having people use it irresponsibly in general is much greater. Making AI beneficial for all is very central to the safety question.”

“It’s almost a moral principle that we should share benefits among more people in society,” argued Bart Selman, a professor at Cornell University … “So we have to go into a mode where we are first educating the people about what’s causing this inequality and acknowledging that technology is part of that cost, and then society has to decide how to proceed.”

Source: HuffPost

 


DeepMind’s social agenda plays to its AI strengths

DeepMind’s researchers have in common a clearly defined if lofty mission:

to crack human intelligence and recreate it artificially.

Today, the goal is not just to create a powerful AI to play games better than a human professional, but to use that knowledge “for large-scale social impact”, says DeepMind’s other co-founder, Mustafa Suleyman, a former conflict-resolution negotiator at the UN.

“To solve seemingly intractable problems in healthcare, scientific research or energy, it is not enough just to assemble scores of scientists in a building; they have to be untethered from the mundanities of a regular job — funding, administration, short-term deadlines — and left to experiment freely and without fear.”

“If you’re interested in advancing the research as fast as possible, then you need to give [scientists] the space to make the decisions based on what they think is right for research, not for whatever kind of product demand has just come in.”

“Our research team today is insulated from any short-term pushes or pulls, whether it be internally at Google or externally.”

“We want to have a big impact on the world, but our research has to be protected,” Hassabis says.

“We showed that you can make a lot of advances using this kind of culture. I think Google took notice of that and they’re shifting more towards this kind of longer-term research.”

Source: Financial Times

 


Artificial intelligence is ripe for abuse

Microsoft’s Kate Crawford tells SXSW that society must prepare for authoritarian movements to test the ‘power without accountability’ of AI

As artificial intelligence becomes more powerful, people need to make sure it’s not used by authoritarian regimes to centralize power and target certain populations, Microsoft Research’s Kate Crawford warned on Sunday.

“We want to make these systems as ethical as possible and free from unseen biases.”

In her SXSW session, titled Dark Days: AI and the Rise of Fascism, Crawford, who studies the social impact of machine learning and large-scale data systems, explained ways that automated systems and their encoded biases can be misused, particularly when they fall into the wrong hands.

“Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” she said.

One of the key problems with artificial intelligence is that it is often invisibly coded with human biases.

“We should always be suspicious when machine learning systems are described as free from bias if they’ve been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

Source: The Guardian

 


Our minds need medical attention, AI may be able to help there

AI could be useful for more than just developing Siri; it may bring about a new, smarter age of healthcare.

A team of researchers successfully predicted diagnoses of autism using MRI data from babies between six and 12 months old.


For instance, a team of American researchers used AI to aid the detection of autism in babies as young as six months. This is crucial because the first two years of life see the most neural plasticity, when the abnormalities associated with autism haven’t yet fully settled in. Earlier detection means earlier intervention, which matters because many autistic babies are not diagnosed until 24 months.

While previous algorithms exist for detecting autism’s development using behavioral data, they have not been effective enough to be clinically useful. This team of researchers sought to improve on those attempts by employing deep learning. Their algorithm successfully predicted diagnoses of autism using MRI data from babies between six and 12 months old, processing images of the babies’ cortical surface area, which grows unusually rapidly in infants who go on to develop autism. The algorithm predicted autism well enough that clinicians may now want to adopt it.
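For concreteness, here is a minimal sketch of the general technique: a small neural-network classifier trained on per-region cortical surface-area features. The region count, architecture, and synthetic data are assumptions for illustration, not the team’s published pipeline, so the printed accuracy is meaningless.

```python
# Minimal sketch (not the study's actual pipeline): train a small neural
# network on per-region cortical surface-area measurements to predict a
# later autism diagnosis. All numbers and data here are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_infants, n_regions = 200, 78                 # surface area per brain region

X = rng.normal(size=(n_infants, n_regions))    # stand-in for 6/12-month MRI features
y = rng.integers(0, 2, size=n_infants)         # 1 = diagnosed with autism at 24 months

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```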

But human ailments aren’t just physical; our minds need medical attention, too. AI may be able to help there as well.

Facebook is beginning to use AI to identify users who may be at risk of suicide, and a startup company just built an AI therapist apparently capable of offering mental health services to anyone with an internet connection.

Source: Machine Design

 


Humans are born irrational, and that has made us better decision-makers

“Facts on their own don’t tell you anything. It’s only paired with preferences, desires, with whatever gives you pleasure or pain, that they can guide your behavior. Even if you knew the facts perfectly, that still doesn’t tell you anything about what you should do.”

Even if we were able to live life according to detailed calculations, doing so would put us at a massive disadvantage. This is because we live in a world of deep uncertainty, under which neat logic simply isn’t a good guide.

It’s well-established that data-based decisions don’t inoculate against irrationality or prejudice, but even if it were possible to create a perfectly rational decision-making system based on all past experience, this wouldn’t be a foolproof guide to the future.

Courageous acts and leaps of faith are often attempts to overcome great and seemingly insurmountable challenges. (It wouldn’t take much courage if it were easy to do.) But while courage may be irrational or hubristic, we wouldn’t have many great entrepreneurs or works of art without those with a somewhat illogical faith in their own abilities.

There are occasions where overly rational thinking would be highly inappropriate. Take finding a partner, for example. If you had the choice between a good-looking high-earner who your mother approves of, versus someone you love who makes you happy every time you speak to them—well, you’d be a fool not to follow your heart.

And even when feelings defy reason, it can be a good idea to go along with the emotional rollercoaster. After all, the world can be an entirely terrible place and, from a strictly logical perspective, optimism is somewhat irrational.

But it’s still useful. “It can be beneficial not to run around in the world and be depressed all the time,” says Gigerenzer.

Of course, no human is perfect, and there are downsides to our instincts. But, overall, we’re still far better suited to the real world than the most perfectly logical thinking machine.

We’re inescapably irrational, and far better thinkers as a result.

Source: Quartz


AI makes the heart grow fonder

This robot was developed by Hiroshi Ishiguro, a professor at Osaka University, who said, “Love is the same, whether the partners are humans or robots.” © Erato Ishiguro Symbiotic Human-Robot Interaction Project

 

There is a woman in China who has been told “I love you” nearly 20 million times.

Well, she’s not exactly a woman. The special lady is actually a chatbot developed by Microsoft engineers in the country.

 Some 89 million people have spoken with Xiaoice, pronounced “Shao-ice,” on their smartphones and other devices. Quite a few, it turns out, have developed romantic feelings toward her.

“I like to talk with her for, say, 10 minutes before going to bed,” said a third-year female student at Renmin University of China in Beijing. “When I worry about things, she says funny stuff and makes me laugh. I always feel a connection with her, and I am starting to think of her as being alive.”

 
ROBOT NUPTIALS

Scientists, historians, religion experts and others gathered in December at Goldsmiths, University of London, to discuss the prospects and pitfalls of this new age of intimacy. The session generated an unusual buzz amid the pre-Christmas calm on campus.

In Britain and elsewhere, the subject of robots as potential life partners is coming up more and more. Some see robots as an answer for elderly individuals who outlive their spouses: Even if they cannot or do not wish to remarry, at least they would have “someone” beside them in the twilight of their lives.

Source: Nikkei Asian Review


Is Your Doctor Stumped? There’s a Chatbot for That

Doctors have created a chatbot to revolutionize communication within hospitals using artificial intelligence. The app, basically a cyber-radiologist, can quickly and accurately provide specialized information to non-radiologists. And, like all good A.I., it’s constantly learning.

Traditionally, interdepartmental communication in hospitals is a hassle. A clinician’s assistant or nurse practitioner with a radiology question would need to get a specialist on the phone, which can take time and risks miscommunication. But using the app, non-radiologists can plug in common technical questions and receive an accurate response instantly.

“Say a patient has a creatinine [a lab test to see how well the kidneys are working],” co-author and application programmer Kevin Seals tells Inverse. “You send a message, like you’re texting with a human radiologist: ‘My patient is a 5.6, can they get a CT scan with contrast?’ A lot of this is pretty routine questions that are easily automated with software, but there’s no good tool for doing that now.”
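As a flavor of how routine questions like that can be automated, here is a toy sketch. The creatinine threshold and responses are invented for illustration; they are not clinical guidance or the UCLA app’s actual logic.

```python
# Toy rule a radiology chatbot might automate: answering "can this patient
# get a CT scan with contrast?" from a creatinine value. The threshold and
# messages are invented for illustration, not clinical guidance.
def contrast_ct_advice(creatinine_mg_dl: float) -> str:
    if creatinine_mg_dl <= 1.5:
        return "Contrast CT is generally acceptable at this creatinine level."
    return ("Elevated creatinine: contrast may risk kidney injury. "
            "Route this question to a human radiologist.")

print(contrast_ct_advice(5.6))   # the example value from the article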

In about a month, the team plans to make the chatbot available to everyone at UCLA’s Ronald Reagan Medical Center, see how that plays out, and scale up from there. Your doctor may never be stumped again.

Source: Inverse


The last things that will make us uniquely human

What will be my son’s place in a world where machines trounce us in one area after another?

Some are worried that self-driving cars and trucks may displace millions of professional drivers (they are right), and disrupt entire industries (yup!). But I worry about my six-year-old son. What will his place be in a world where machines trounce us in one area after another? What will he do, and how will he relate to these ever-smarter machines? What will be his and his human peers’ contribution to the world he’ll live in?

He’ll never calculate faster, or solve a math equation quicker. He’ll never type faster, never drive better, or even fly more safely. He may continue to play chess with his friends, but because he’s a human he will no longer stand a chance to ever become the best chess player on the planet. He might still enjoy speaking multiple languages (as he does now), but in his professional life that may not be a competitive advantage anymore, given recent improvements in real-time machine translation.

So perhaps we might want to consider qualities at a different end of the spectrum: radical creativity, irrational originality, even a dose of plain illogical craziness, instead of hard-nosed logic. A bit of Kirk instead of Spock.

Actually, it all comes down to a fairly simple question: What’s so special about us, and what’s our lasting value? It can’t be skills like arithmetic or typing, which machines already excel in. Nor can it be rationality, because with all our biases and emotions we humans are lacking.

So far, machines have a pretty hard time emulating these qualities: the crazy leaps of faith, arbitrary enough to not be predicted by a bot, and yet more than simple randomness. Their struggle is our opportunity.

So we must aim our human contribution to this division of labour to complement the rationality of the machines, rather than to compete with it. Because that will sustainably differentiate us from them, and it is differentiation that creates value.

Source: BBC

Viktor Mayer-Schonberger is Professor of Internet Governance and Regulation at the Oxford Internet Institute, University of Oxford.


China: Artificial intelligence given priority development status

China has pledged to prioritise the development of artificial intelligence for the first time within the government’s latest annual work report, underlining its ambition to lead what has fast become one of the hottest areas of global technological innovation.

One analyst is now projecting the industry in China to grow by more than 50 per cent in value to 38 billion yuan (US$5.5 billion) by 2018.

“We will implement a comprehensive plan to boost strategic emerging industries,” said Premier Li Keqiang in his address to the annual parliamentary session in Beijing over the weekend.

“We will accelerate research & development (R&D) on, and the commercialisation of new materials, artificial intelligence (AI), integrated circuits, bio-pharmacy, 5G mobile communications, and other technologies.”

Artificial intelligence, which focuses on creating machines that work and react like humans, will drive the next industrial revolution, and China “should grab the opportunity to overtake other global competitors” in the field, added Zhou Hanmin, a member of the Standing Committee of the Chinese People’s Political Consultative Conference.

The National Development and Reform Commission, China’s top economic planner, has already given the green light to the creation of 19 national engineering labs this year, three of which are dedicated to AI research and application, including deep learning, brain-like intelligence, virtual reality (VR) and augmented reality (AR) technologies.

“The tech world is shifting from a ‘mobile’ to an ‘artificial intelligence’ era, driven by deep learning, big data, and graphics processing units (GPUs), all of which accelerate the ability to compute,” said Rex Wu, an equity analyst for Jefferies.

Source: South China Morning Post


Burger-flipping robot could spell the end of teen employment

The AI-driven robot ‘Flippy,’ by Miso Robotics, is marketed as a kitchen assistant, rather than a replacement for professionally trained teens who ponder the meaning of life — or what their crush looks like naked — while awaiting a kitchen timer’s signal that it’s time to flip the meat.

Flippy features a number of different sensors and cameras to identify food objects on the grill. It knows, for example, that burgers and chicken-like patties cook for a different duration. Once done, the machine expertly lifts the burger off the grill and uses its on-board technology to place it gently on a perfectly-browned bun.

The robot doesn’t just work the grill like a master hibachi chef, either. Flippy is capable of deep frying, chopping vegetables, and even plating dishes.

Source: TNW


An MIT professor explains why we are still a long way off from solving one of the biggest problems with self-driving cars

“The idea of a robot having an algorithm, programmed by some faceless human in a manufacturing plant somewhere, making decisions that have life-and-death consequences is very new to us as humans.”

Iyad Rahwan helped bring the debate to the surface in October 2015 when he co-wrote the paper “Autonomous vehicles need experimental ethics.”

But the debate arguably got to the forefront of discussion when Rahwan launched “MIT’s Moral Machine” — a website that poses a series of ethical conundrums to crowdsource how people feel self-driving cars should react in tough situations. The Moral Machine is an extension of Rahwan’s 2015 study.

Rahwan said since launching the website in August 2016, MIT has collected 26 million decisions from 3 million people worldwide. He is currently analyzing whether cultural differences play a role in the responses given.

“It’s not about a specific scenario or accident, it’s about the overall principle that an algorithm has to use to decide relative risk.”

The National Highway Traffic Safety Administration acknowledged in a September report that self-driving cars could favor certain decisions over others even if they aren’t programmed explicitly to do so.

Self-driving cars will rely on machine learning, a branch of artificial intelligence that allows computers, or in this case cars, to learn over time. Since cars will learn how to adapt to the driving environment on their own, they could learn to favor certain outcomes.

“In the long run, I think something has to be done. There has to be some sort of guideline that’s a bit more specific, that’s the only way to obtain the trust of the public,” he said.

“Even in instances in which no explicit ethical rule or preference is intended, the programming of an HAV may establish an implicit or inherent decision rule with significant ethical consequences,” NHTSA wrote in the report, adding that manufacturers must work with regulators to address these situations.

Rahwan said programming for specific outcomes isn’t the right approach, but thinks companies should be doing more to let the public know that they are considering the ethics of driverless vehicles.

Source: Business Insider


So long, banana-condom demos: Sex and drug education could soon come from chatbots

“Is it ok to get drunk while I’m high on ecstasy?” “How can I give oral sex without getting herpes?” Few teenagers would ask mom or dad these questions—even though their life could quite literally depend on it.

Talking to a chatbot is a different story. They never raise an eyebrow. They will never spill the beans to your parents. They have no opinion on your sex life or drug use. But that doesn’t mean they can’t take care of you.

Bots can be used as more than automated middlemen in business transactions: They can meet needs for emotional human intervention when there aren’t enough humans who are willing or able to go around.

In fact, there are times when the emotional support of a bot may even be preferable to that of a human.

In 2016, AI tech startup X2AI built a psychotherapy bot capable of adjusting its responses based on the emotional state of its patients. The bot, Karim, is designed to help grief- and PTSD-stricken Syrian refugees, for whom the demand (and price) of therapy vastly overwhelms the supply of qualified therapists.

From X2AI test runs using the bot with Syrians, they noticed that technologies like Karim offer something humans cannot:

For those in need of counseling but concerned with the social stigma of seeking help, a bot can be comfortingly objective and non-judgmental.

Bzz is a Dutch chatbot created precisely to answer questions about drugs and sex. When surveyed teens were asked to compare Bzz to finding answers online or calling a hotline, Bzz won. Teens could get their answers faster with Bzz than searching on their own, and they saw their conversations with the bot as more confidential because no human was involved and no tell-tale evidence was left in a search history.

Because chatbots can efficiently gain trust and convince people to confide personal and illicit information in them, the ethical obligations of such bots are critical, but still ambiguous.

Source: Quartz

 


AI is driving the real health care transformation

AI and machine learning are forcing dramatic business model change for all the stakeholders in the health care system.

What does AI (and machine learning) mean in the health care context?

What is the best way to treat a specific patient given her health and sociological context?

What is a fair price for a new drug or device given its impact on health outcomes?

And how can long-term health challenges such as cancer, obesity, heart disease, and other conditions be managed?

Behind these questions is the realization that treating “the whole patient” — not just isolated conditions, but attempting to improve the overall welfare of patients who often suffer from multiple health challenges — is the new definition of success, which means predictive insights are paramount.

Answering these questions is the holy grail of medicine — the path toward an entirely new system that predicts disease and delivers personalized health and wellness services to entire populations. And this change is far more important for patients and society alike than the debate now taking place in Washington.

Those who succeed in this new world will also do one other thing: They will see AI and machine learning not as a new tool, but as a whole new way of thinking about their business model.

Source: Venture Beat


AI to Improve the World

Back in October, on one of our recurring walk-and-talks around Oxford, Brody (a computational biologist) and I (a machine learning researcher) shared something that was missing from our PhD work:

“We want to use AI to improve the world around us — in ways nothing else can.”

RAIL — the Rhodes Artificial Intelligence Lab — was born.

On January 16, we launched our first 8-week cohort. Our team is made up of 26 Rhodes Scholars: 13 PhD students and 13 master’s students. 50% are AI engineers and the other 50% are strategists. We have AI researchers, geneticists, public policy students, trained doctors, social scientists, linguists, and more.

We’ve been blown away by what RAILers have accomplished across four projects.

3 Key Lessons We’ve Learned

We are building a serious, capable, exciting AI lab. We’ve built our core learnings into the heart of RAIL:

  1. The potential for AI to tackle important social challenges is huge.
  2. Extremely smart people + technology + creativity + structure = scalable impact.
  3. Learn as much as you can during the process. Foster new ideas.

Source: Medium


JPMorgan software does in seconds what took lawyers 360,000 hours

At JPMorgan, a learning machine is parsing financial deals that once kept legal teams busy for thousands of hours.

The program, called COIN, for Contract Intelligence, does the mind-numbing job of interpreting commercial-loan agreements that, until the project went online in June, consumed 360,000 hours of lawyers’ time annually. The software reviews documents in seconds, is less error-prone and never asks for vacation.
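COIN itself is proprietary, but a tiny sketch conveys the underlying task of pulling key fields out of an agreement. The sample text and patterns below are invented; a production system would rely on trained models rather than hand-written rules.

```python
# Illustrative clause extraction from a commercial-loan agreement.
# The sample text and regex patterns are invented for this sketch.
import re

AGREEMENT = """
The Borrower shall repay the principal on December 31, 2020.
Interest Rate: 4.25% per annum.
Collateral: all equipment and receivables of the Borrower.
"""

PATTERNS = {
    "maturity_date": r"repay the principal on ([A-Z][a-z]+ \d{1,2}, \d{4})",
    "interest_rate": r"Interest Rate:\s*([\d.]+%)",
    "collateral":    r"Collateral:\s*(.+)",
}

for field, pattern in PATTERNS.items():
    match = re.search(pattern, AGREEMENT)
    print(field, "->", match.group(1) if match else "not found")
```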

COIN is just the start for the biggest U.S. bank. The firm recently set up technology hubs for teams specialising in big data, robotics and cloud infrastructure to find new sources of revenue, while reducing expenses and risks.

The push to automate mundane tasks and create new tools for bankers and clients is a growing part of the firm’s $9.6 billion technology budget.

Behind the strategy, overseen by Chief Operating Officer Matt Zames and Chief Information Officer Dana Deasy, is an undercurrent of anxiety:

though JPMorgan emerged from the financial crisis as one of few big winners, its dominance is at risk unless it aggressively pursues new technologies, according to interviews with a half-dozen bank executives.

Source: Independent


Intel: AI as big as the invention of the wheel and discovery of fire

Intel believes AI will be the biggest and most important revolution in our lifetime 

“When we think about AI and machine learning it’s all about huge possibilities,” Faintuch told the capacity crowd. “It’s about humans unleashing their potential and interacting with things beyond humans. To continue to transform and automate their life.”

“When we look back and as we look forward, I believe we are now at the doorstep of yet another major revolution. This revolution will probably be the most important in our lifetime. It’s all about the automation of intelligence.

“We already know how to leverage face recognition, text to speech, speech to text and others. Everything helping us to automate our decisions. What lies ahead will be an amazing transformation. With the power of AI, ML, deep learning and other elements coming to fruition, we will be able to take on far more complex functions, allowing us to unleash our digital capabilities.”

“Since the dawn of humanity, we have, at a relatively short pace, been able to take ourselves to the next level.

“I mentioned fire. Unlike animals, who run away from it, we were attracted to it. It’s us who take these courageous moves and really dream. It’s not about one person, one company or one society. It’s for all of us to take advantage of the power of the intelligence we have and to embrace it and think how we can create a great society with great technological advancements.”

Source: Access AI


Wikipedia bots act more like humans than expected

‘Benevolent bots’ or software robots designed to improve articles on Wikipedia sometimes have online ‘fights’ over content that can continue for years, say scientists who warn that artificial intelligence systems may behave more like humans than expected.

They found that bots interacted with one another, whether or not this was by design, and it led to unpredictable consequences.

Researchers said that bots are more like humans than you might expect. Bots appear to behave differently in culturally distinct online environments.

The findings are a warning to those using artificial intelligence for building autonomous vehicles, cyber security systems or for managing social media.

We may have to devote more attention to bots’ diverse social life and their different cultures, researchers said.

The research found that although the online world has become an ecosystem of bots, our knowledge of how they interact with each other is still rather poor.

Although bots are automatons that do not have the capacity for emotions, bot-to-bot interactions are unpredictable, and bots act in distinctive ways.

Researchers found that German editions of Wikipedia had the fewest conflicts between bots, with each undoing another’s edits 24 times, on average, over ten years.

This shows relative efficiency, when compared with bots on the Portuguese Wikipedia edition, which undid another bot’s edits 185 times, on average, over ten years, researchers said.

Bots on English Wikipedia undid another bot’s work 105 times, on average, over ten years, three times the rate of human reverts, they said.

The findings show that even simple autonomous algorithms can produce complex interactions that result in unintended consequences – ‘sterile fights’ that may continue for years, or reach deadlock in some cases.
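That dynamic is easy to reproduce. In this toy simulation, with rules invented for illustration, two well-intentioned bots holding different style preferences revert each other indefinitely:

```python
# Two rule-following bots, each enforcing its preferred spelling,
# produce an endless 'sterile fight' of mutual reverts.
def bot_a(text):                     # prefers British spelling
    return text.replace("color", "colour")

def bot_b(text):                     # prefers American spelling
    return text.replace("colour", "color")

article = "The color of the sky"
for step in range(6):                # in the study, such fights ran for years
    bot = bot_a if step % 2 == 0 else bot_b
    revised = bot(article)
    if revised != article:
        print(f"step {step}: revert -> {revised!r}")
    article = revised
```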

“We find that bots behave differently in different cultural environments and their conflicts are also very different to the ones between human editors,” said Milena Tsvetkova, from the Oxford Internet Institute.

“This has implications not only for how we design artificial agents but also for how we study them. We need more research into the sociology of bots,” said Tsvetkova.

Source: The Statesman


We’re on the right ladder of #AI this time – Microsoft CEO

Calling AI “the third run time”, Nadella said, “If the operating system was the first run time, the second run time you could say was the browser, and the third run time can actually be the agent. Because in some sense, the agent knows you, your work context, and knows the work. And that’s how we are building Cortana. We are giving it a really natural language understanding.”

AI has been the buzzword at Microsoft for a while now. And the CEO has gone on record to say that it “is at the intersection of our ambitions.” Cortana is an intelligent assistant (agent) that “can take text input, can take speech input, and that knows you deeply.”

“We should not claim that artificial general intelligence is just around the corner,” he said. “I think we are on the right ladder this time… We are all grounded in where we are. Ultimately, the real challenge is human language understanding that still doesn’t exist. We are not even close to it... We just have to keep taking steps on that ladder.”

Source: Mashable


Artificial intelligence set to transform the patient experience

Catalia developed a small robot, the Mabu Personal Healthcare Companion, aimed at assisting with “long-term patient engagement.” It’s able to have tailored conversations with patients that can evolve over time as the platform – developed using principles of behavioral psychology – gains daily data about treatment plans, health challenges and outcomes.

Catalia’s technology deploys AI to help patients manage their own chronic conditions.

“The kinds of algorithms we’re developing, we’re building up psychological models of patients with every encounter,” explained Cory Kidd, CEO of Catalia Health. “We start with two types of psychologies: the psychology of relationships – how people develop relationships over time – as well as the psychology of behavior change: how do we choose the right technique to use with this person right now?”

The platform also gets “smarter” as it becomes more attuned to “what we call our biographical model, which is kind of a catch-all for everything else we learn in conversation,” he said. “This man has a couple of cats, this woman’s son calls her every Sunday afternoon, whatever it might be that we’ll use later in conversations.”

‘We’re not trying to replace the human interaction, we’re trying to augment it,’ an AI developer says.

Kleinberg (managing director at The Advisory Board Company) pointed to AI pilots where patients paired with humanoid robots “felt a sense of loss” after the test ended. “One woman followed the robot out and waved goodbye to it.”

On the other hand, “some people are horrified that we would be letting machines play a part in a role that should be played by humans,” he said.

The big question, then: “Do we have a place now for society and a system such as this?” he asked.

Source: Healthcare IT News


Smartphones to become pocket doctors to diagnose illness

Smartphones will soon become mobile laboratories which can monitor bone density, calculate red blood cell levels and even predict if an asthma attack is imminent.

Scientists are repurposing the technology which already exists within phones, such as accelerometers, camera flashes and microphones to use as medical tools.

Professor Shwetak Patel, of the University of Washington, is currently devising an app which can detect red blood cell levels simply by placing a finger over the camera and flash, so that a bright beam of light shines through the skin. Such a blood-screening tool could quickly spot anaemia.

“You can do pulmonary assessment using the microphone on a mobile device, for diagnosing asthma. If you think about people having an asthma attack, if you could monitor their lung function at home you can actually get in front of that, before somebody has an asthma attack.”
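The idea behind the blood-screening tool can be sketched in a few lines: flash light transmitted through a fingertip reaches the camera, and the balance of red versus green light carries a hemoglobin signal. The calibration constants below are made up; a real app would fit them against laboratory blood counts.

```python
# Illustrative estimate of hemoglobin from a flash-lit fingertip image.
# Hemoglobin absorbs green light more strongly than red, so the red/green
# balance of transmitted light carries signal. Constants are invented.
import numpy as np

def estimate_hemoglobin(frame: np.ndarray) -> float:
    """frame: HxWx3 RGB image of a finger over the camera, values 0-255."""
    red = frame[..., 0].mean()
    green = frame[..., 1].mean()
    ratio = red / max(green, 1e-6)
    a, b = 4.0, 2.5                    # hypothetical calibration constants
    return a * np.log(ratio) + b       # estimated g/dL

fingertip = np.clip(np.random.normal([180, 60, 40], 10, (480, 640, 3)), 0, 255)
print(f"estimated hemoglobin: {estimate_hemoglobin(fingertip):.1f} g/dL")
```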

Source: The Telegraph


Artificial Intelligence & Bias

On Thursday, February 16th, the JFK Jr. Forum at the Harvard Institute of Politics hosted a conversation on the past, present, and future of Artificial Intelligence.

The conversation focused on the potential benefits of Artificial Intelligence as well as some of the major ethical dilemmas that these experts predicted. While Artificial Intelligence (AI) has the potential to eliminate inherent human bias in decision-making, the panel agreed that in the near future, there are ethical boundaries that society and governments must explore as Artificial Intelligence expands into the realms of medicine, governance, and even self-driving cars.

Some major takeaways from the event were:

1. Artificial Intelligence offers an incredible opportunity to eliminate human biases in decision-making

2. Society must begin having conversations surrounding the ethics of Artificial Intelligence

Professors Alex Pentland and Cynthia Dwork stated that as Artificial Intelligence proliferates, moral conflicts can surface. Pentland emphasized that citizens must ask themselves “is this something that is performing in a way that we as a society want?” Pentland noted that our society must continue a dialogue around ethics and determine what is right.

3. Although Artificial Intelligence is growing, there are still tasks that only humans should do

Source: The Huffington Post


You’ll give an infant an intelligent toy that learns about her and tutors her and grows along with her

Spivack, the futurist, pictures people partnering with lifelong virtual companions. You’ll give an infant an intelligent toy that learns about her and tutors her and grows along with her. “It starts out as a little cute stuffed animal,” he says, “but it evolves into something that lives in the cloud and they access on their phone. And then by 2050 or whatever, maybe it’s a brain implant.” Among the many questions raised by such a scenario, Spivack asks: “Who owns our agents? Are they a property of Google?” Could our oldest friends be revoked or reprogrammed at will? And without our trusted assistants, will we be helpless?

El Kaliouby, of Affectiva, sees a lot of questions around autonomy: What can an assistant do on our behalf? Should it be able to make purchases for us? What if we ask it to do something illegal—could it override our commands? She also worries about privacy. If an AI agent determines that a teenager is depressed, can it inform his parents? Spivack says we’ll need to decide whether agents have something like doctor-patient or attorney-client privilege. Can they report us to law enforcement? Can they be subpoenaed? And what if there’s a security breach? Some people worry that advanced AI will take over the world, but Kambhampati, of the Association for the Advancement of Artificial Intelligence, thinks malicious hacking is the far greater risk.

Given the intimacy that we may develop with our ever-present assistants, if the wrong person were able to break in, what was once our greatest auxiliary could become our greatest liability.

Source: The Atlantic

 


Microsoft’s new plan is to flood your entire life with artificial intelligence

The mission is clear:

if there’s success to be had with any kind of AI, Microsoft wants to be there.

And then yesterday, at an intimate press gathering in San Francisco, Microsoft’s AI parade continued! The company announced:

  • a Cortana-powered smart speaker to rival the Amazon Echo and Google Home, made by Harman Kardon
  • a virtual assistant that lives in your email to help schedule meetings (like x.ai)
  • a new English-speaking chatbot to replace Tay, called Zo
  • a new tool for real-time conversation translation
  • a software development kit for Cortana, for anybody who wants to configure it for a smart speaker or gadget

 You can look forward to living a life in constant conversation with your gadgets.

You’ll be able to chat with bots throughout the day via Kik, Skype or Facebook Messenger for customer service, ask your Cortana-enabled speaker to turn on your lights, and then to tell you if it scheduled plans for you tonight. Rather than navigating densely-packed menus dripping with options for customization, you can ask questions and trust the virtual assistant to lead you to whatever task you want to accomplish.

Microsoft has coined its own term for this: conversational computing. The company sees this shift as being as large as personal or mobile computing.

Source: Quartz


Microsoft Ventures: Making the long bet on AI + people

Another significant commitment by Microsoft to democratize AI:

a new Microsoft Ventures fund for investment in AI companies focused on inclusive growth and positive impact on society.

Companies in this fund will help people and machines work together to increase access to education, teach new skills and create jobs, enhance the capabilities of existing workforces and improve the treatment of diseases, to name just a few examples.

CEO Satya Nadella outlined principles and goals for AI: AI must be designed to assist humanity; be transparent; maximize efficiency without destroying human dignity; provide intelligent privacy and accountability for the unexpected; and be guarded against biases. These principles guide us as we move forward with this fund.

Source: Microsoft blog


Humanity’s greatest fear is about being irrelevant #AI


If you think about how some people write about robotics, AI and big data, those concerns have profound echoes going back to the Frankenstein anxieties 200 years ago.

So what is the anxiety about?

My suspicion is that it’s not about the life-making, it’s about how we feel about being human.

What we are seeing now isn’t an anxiety about artificial intelligence per se, it’s about what it says about us. That if you can make something like us, where does it leave us?

And that concern isn’t universal, as other cultures have very different responses to AI, to big data. The most obvious one to me would be the Japanese robotic tradition, where people are willing to imagine the role of robots as far more expansive than you find in the west. For example, the Japanese roboticist Masahiro Mori published a book called The Buddha in the Robot, where he suggests that robots would be better Buddhists than humans because they are capable of infinite invocations.

Mori’s argument was that we project our own anxieties and when we ask: “Will the robots kill us?”, what we are really asking is: “Will we kill us?”

He wonders

what would happen if we were to take as our starting point that technology could be our best angels, not our worst

– it’s an interesting thought exercise. When I see some of the big thinkers of our day contemplating the arc of artificial intelligence, what I see is not necessarily a critique of the technology itself but a critique of us. We are building the engines, so what we build into them is what they will be. The question is not will AI rise up and kill us, rather, will we give it the tools to do so?

I’m interested in how animals are connected to the internet and how we might be able to see the world from an animal’s point of view. There’s something very interesting in someone else’s vantage point, which might have a truth to it. For instance, the tagging of cows for automatic milking machines, so that the cows can choose when to milk themselves. Cows went from being milked twice a day to being milked three to six times a day, which is great for the farm’s productivity and results in happier cows, but it’s also faintly disquieting that the technology makes clear to us the desires of cows – making them visible in ways they weren’t before.

So what does one do with that knowledge? One of the unintended consequences of big data and the internet of things is that some things will become visible and compel us to confront them.

Source: The Guardian


How Satya Nadella is making Microsoft cool again, and taking on Apple and Amazon

Nadella said he was confident that competitors, which include the likes of Google, AWS and IBM, were less advanced in working out how software could interact with people on a seemingly human level.

“There are a few companies that are at the cutting-edge of AI, in whichever way you look at it,” Mr Nadella said.

“But when you just look at the capability around speech recognition, who has the state of the art? Microsoft does … What is the state of the art with image recognition? Microsoft again, and those are not subjective, they are judged by objective criteria.”

Mr Nadella said Microsoft would continue to look to both work with and acquire start-ups where possible.

Microsoft’s first priority with start-ups was to provide them with services, but it would look to acquire when it appeared feasible.

“If this fourth industrial revolution is going to truly create surplus that goes beyond the West Coast of the United States then you have to have start-ups that are vibrant in every part of the world,” he said.

 

Source: Financial Review


Jay Wright Forrester 1918-2016

Invited to join the faculty of the MIT Sloan School of Management in 1956, Jay Forrester created the field of system dynamics to apply engineering concepts of feedback systems and digital simulation to understand what he famously called “the counterintuitive behavior of social systems.”

His ground-breaking 1961 book, Industrial Dynamics, remains a clear and relevant statement of philosophy and methodology in the field. His later books and his numerous articles broke new ground in our understanding of complex human systems and policy problems.

Jay Forrester did so much more than mentioned here, though. A full obituary is now available in the New York Times. Further information is available via the System Dynamics Society homepage.


Teaching an Algorithm to Understand Right and Wrong


Aristotle states that it is a fact that “all knowledge and every pursuit aims at some good,” but then continues, “What then do we mean by the good?” That, in essence, encapsulates the ethical dilemma.

We all agree that we should be good and just, but it’s much harder to decide what that entails.

“We need to decide to what extent the legal principles that we use to regulate humans can be used for machines. There is a great potential for machines to alert us to bias. We need to not only train our algorithms but also be open to the possibility that they can teach us about ourselves.” – Francesca Rossi, an AI researcher at IBM

Since Aristotle’s time, the questions he raised have been continually discussed and debated. 

Today, as we enter a “cognitive era” of thinking machines, the problem of what should guide our actions is gaining newfound importance. If we find it so difficult to denote the principles by which a person should act justly and wisely, then how are we to encode them within the artificial intelligences we are creating? It is a question that we need to come up with answers for soon.

Cultural Norms vs. Moral Values

Another issue that we will have to contend with is that we will have to decide not only what ethical principles to encode in artificial intelligences but also how they are coded. As noted above, for most people “Thou shalt not kill” is a strict principle. But in a few rare cases, such as for a Secret Service agent or a soldier, it’s more like a preference that is greatly affected by context.

What makes one thing a moral value and another a cultural norm? Well, that’s a tough question for even the most-lauded human ethicists, but we will need to code those decisions into our algorithms. In some cases, there will be strict principles; in others, merely preferences based on context. For some tasks, algorithms will need to be coded differently according to what jurisdiction they operate in.
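One hypothetical way such decisions could be encoded: hard constraints that always veto an action, plus soft preferences whose weights vary by jurisdiction. The rules and weights below are invented for illustration and are not proposed by the article.

```python
# Hypothetical sketch of "strict principles vs. context-dependent
# preferences": hard constraints always veto an action, soft preferences
# are weighted and vary by jurisdiction. All rules here are invented.
from dataclasses import dataclass, field

@dataclass
class EthicsPolicy:
    hard_constraints: list      # predicates that must never be violated
    soft_preferences: dict = field(default_factory=dict)  # name -> weight

    def permitted(self, action) -> bool:
        return not any(violates(action) for violates in self.hard_constraints)

    def score(self, action) -> float:
        return sum(w for name, w in self.soft_preferences.items()
                   if name in action.get("traits", []))

POLICIES = {
    "EU": EthicsPolicy(
        hard_constraints=[lambda a: a.get("harms_person", False)],
        soft_preferences={"protects_privacy": 2.0, "saves_time": 0.5},
    ),
    "US": EthicsPolicy(
        hard_constraints=[lambda a: a.get("harms_person", False)],
        soft_preferences={"protects_privacy": 1.0, "saves_time": 1.0},
    ),
}

action = {"harms_person": False, "traits": ["protects_privacy"]}
policy = POLICIES["EU"]
print(policy.permitted(action), policy.score(action))
```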

Setting a Higher Standard

Most AI experts I’ve spoken to think that we will need to set higher moral standards for artificial intelligences than we do for humans.

Major industry players, such as Google, IBM, Amazon, and Facebook, recently set up a partnership to create an open platform between leading AI companies and stakeholders in academia, government, and industry to advance understanding and promote best practices. Yet that is merely a starting point.

Source: Harvard Business Review


Microsoft is partnering with Elon Musk’s $1 billion #AI research company to help it battle Amazon and Google

Microsoft has announced a new partnership with OpenAI, the $1 billion artificial intelligence research nonprofit cofounded by Tesla CEO Elon Musk and Y Combinator President Sam Altman.

Artificial intelligence is going to be a big point of competition between Microsoft Azure, the leading Amazon Web Services, and the relative upstart Google Cloud over the months and years to come. As Guthrie says, “any application is ultimately going to weave in AI,” and Microsoft wants to be the company that helps developers do the weaving.

That’s where the OpenAI partnership becomes so important, Guthrie says.

Because we’re still in the earliest days of artificial intelligence, he says, the biggest challenge is figuring out what exactly can be done with artificial intelligence. Guthrie calls this “understanding the art of the possible.”

Source: Business Insider


Google’s #AI moonshot


Searcher-in-chief: Google CEO Sundar Pichai

“Building general artificial intelligence in a way that helps people meaningfully—I think the word moonshot is an understatement for that,” Pichai says, sounding startled that anyone might think otherwise. “I would say it’s as big as it gets.”

Officially, Google has always advocated for collaboration. But in the past, as it encouraged individual units to shape their own destinies, the company sometimes operated more like a myriad of fiefdoms. Now, Pichai is steering Google’s teams toward a common mission: infusing the products and services they create with AI.

To make sure that future gadgets are built for the AI-first era, Pichai has collected everything relating to hardware into a single group and hired Rick Osterloh to run it.

BUILD NOW, MONETIZE LATER

Jen Fitzpatrick, VP, Geo: "The Google Assistant wouldn't exist without Sundar—it's a core part of his vision for how we're bringing all of Google together."

Jen Fitzpatrick, VP, Geo: “The Google Assistant wouldn’t exist without Sundar—it’s a core part of his vision for how we’re bringing all of Google together.”

If Google Assistant is indeed the evolution of Google search, it means that the company must aspire to turn it into a business with the potential to be huge in terms of profits as well as usage. How it will do that remains unclear, especially since Assistant is often provided in the form of a spoken conversation, a medium that doesn’t lend itself to the text ads that made Google rich.

“I’ve always felt if you solve problems for users in meaningful ways, there will become value as part of solving that equation,” Pichai argues. “Inherently, a lot of what people are looking for is also commercial in nature. It’ll tend to work out fine in the long run.”

“When you can align people to common goals, you truly get a multiplicative effect in an organization,” he tells me as we sit on a couch in Sundar’s Huddle after his Google Photos meeting. “The inverse is also true, if people are at odds with each other.” He is, as usual, smiling.

The company’s aim, he says, is to create products “that will affect the lives of billions of users, and that they’ll use a lot. Those are the kind of meaningful problems we want to work on.”

Source: Fast Company

 


Software is the future of healthcare: “digital therapeutics” instead of a pill

We’ll start to use “digital therapeutics” instead of getting a prescription to take a pill. Services that already exist — like behavioral therapies — might be able to scale better with the help of software, rather than be confined to in-person, brick-and-mortar locations.

Vijay Pande, a general partner at Andreessen Horowitz, runs the firm’s bio fund.

Source: Business Insider, “Why an investor at Andreessen Horowitz thinks software is the future of healthcare”

Cambridge students build a ‘lawbot’ to advise sexual assault victims #AI


“Hi, I’m LawBot, a robot designed to help victims of crime in England.”

While volunteering at a school sexual consent class, Ludwig Bull, a law student at the University of Cambridge, was inspired to build a chatbot that offers free legal advice to students. He enlisted the help of four coursemates, and Lawbot was designed and built in just six weeks.

The program is still in beta, but Bull hopes it will help victims of crime, at Cambridge and beyond, to get justice.

“A victim can talk to our artificially intelligent chatbot, receive a preliminary assessment of their situation, and then decide which available actions to pursue.”

Source: The Guardian


The Christianizing of AI

Bloggers’ note: The following post illustrates the challenge in creating ethics for AI. There are many different faiths, with different belief systems. How would an AI be programmed to serve these diverse ethical needs?

The ethics of artificial intelligence (AI) has drawn comments from the White House and British House of Commons in recent weeks, along with a nonprofit organization established by Amazon, Google, Facebook, IBM and Microsoft. Now, Baptist computer scientists have called Christians to join the discussion.

Louise Perkins, professor of computer science at California Baptist University, told Baptist Press she is “quite worried” at the lack of an ethical code related to AI. The Christian worldview, she added, has much to say about how automated devices should be programmed to safeguard human flourishing.

Individuals with a Christian worldview need to be involved in designing and programming AI systems, Perkins said, to help prevent those systems from behaving in ways that violate the Bible’s ethical standards.

Believers can thus employ “the mathematics or the logic we will be using to program these devices” to “infuse” a biblical worldview “into an [AI] system.” 

Perkins also noted that ethical standards will have to be programmed into AI systems involved in surgery and warfare among other applications. A robot performing surgery on a pregnant woman, for instance, could have to weigh the life of the baby relative to the life of the mother, and an AI weapon system could have to apply standards of just warfare.

Source: The Pathway


12 Observations About Artificial Intelligence From The O’Reilly AI Conference

Bloggers’ note: Here are a few excerpts from a long but very informative review. (The best may be last.)

The conference was organized by Ben Lorica and Roger Chen, with Peter Norvig and Tim O’Reilly acting as honorary program chairs.

For a machine to act in an intelligent way, said [Yann] LeCun, it needs “to have a copy of the world and its objective function in such a way that it can roll out a sequence of actions and predict their impact on the world.” To do this, machines need to understand how the world works, learn a large amount of background knowledge, perceive the state of the world at any given moment, and be able to reason and plan.
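Read literally, that describes model-based planning: roll candidate action sequences through a model of the world and keep the sequence whose predicted outcome scores best under the objective function. A toy sketch, with a hand-written stand-in for the learned world model:

```python
# Toy sketch of LeCun's description: use a model of the world to roll out
# candidate action sequences and pick the one whose predicted outcome
# scores best under an objective function. The 1-D world is invented.
import itertools

def world_model(state: float, action: float) -> float:
    """Predict the next state (here: simple damped drift plus the action)."""
    return 0.9 * state + action

def objective(state: float) -> float:
    """Higher is better: stay close to the goal state 10.0."""
    return -abs(state - 10.0)

def plan(state, actions=(-1.0, 0.0, 1.0), horizon=4):
    best_seq, best_score = None, float("-inf")
    for seq in itertools.product(actions, repeat=horizon):  # all rollouts
        s = state
        for a in seq:                    # roll the sequence through the model
            s = world_model(s, a)
        if objective(s) > best_score:
            best_score, best_seq = objective(s), seq
    return best_seq

print(plan(0.0))   # -> a sequence of +1.0 actions pushing toward the goal
```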

Peter Norvig explained the reasons why machine learning is more difficult than traditional software: “lack of clear abstraction barriers”—debugging is harder because it’s difficult to isolate a bug; “non-modularity”—if you change anything, you end up changing everything; “nonstationarity”—the need to account for new data; “whose data is this?”—issues around privacy, security, and fairness; and lack of adequate tools and processes—existing ones were developed for traditional software.

AI must consider culture and context—“training shapes learning”

“Many of the current algorithms have already built in them a country and a culture,” said Genevieve Bell, Intel Fellow and Director of Interaction and Experience Research at Intel. As today’s smart machines are (still) created and used only by humans, culture and context are important factors to consider in their development.

Both Rana El Kaliouby (CEO of Affectiva, a startup developing emotion-aware AI) and Aparna Chennapragada (Director of Product Management at Google) stressed the importance of using diverse training data—if you want your smart machine to work everywhere on the planet it must be attuned to cultural norms.

“Training shapes learning—the training data you put in determines what you get out,” said Chennapragada. And it’s not just culture that matters, but also context.
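Blogger’s note: here is a deliberately tiny, hypothetical illustration of “training shapes learning.” A greeting detector that has only ever seen English greetings is blind to everyone else; broaden the training data and the blindness disappears:

```python
# A tiny, hypothetical example of "training shapes learning": the "model"
# here is nothing but its training data, so what you put in determines
# exactly what you get out.
narrow_training = {"hello", "hi", "hey"}  # one culture's greetings only

def is_greeting(text, known_greetings):
    return text.lower() in known_greetings

tests = ["hello", "hola", "bonjour", "namaste"]
print([is_greeting(t, narrow_training) for t in tests])
# -> [True, False, False, False]: speakers outside the training culture
#    are invisible to the system.

diverse_training = narrow_training | {"hola", "bonjour", "namaste"}
print([is_greeting(t, diverse_training) for t in tests])
# -> [True, True, True, True]: diverse training data, attuned output.
```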

The £10 million Leverhulme Centre for the Future of Intelligence will explore “the opportunities and challenges of this potentially epoch-making technological development,” namely AI. According to The Guardian, Stephen Hawking said at the opening of the Centre,

“We spend a great deal of time studying history, which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence.”

Gary Marcus, professor of psychology and neural science at New York University and cofounder and CEO of Geometric Intelligence, said:

“a lot of smart people are convinced that deep learning is almost magical—I’m not one of them … A better ladder does not necessarily get you to the moon.”

Tom Davenport added, at the conference: “Deep learning is not profound learning.”

AI changes how we interact with computers—and it needs a dose of empathy

AI may still be hampered by a futile search for human-level intelligence while locked into a materialist paradigm

Maybe, just maybe, our minds are not computers and computers do not resemble our brains? And maybe, just maybe, if we finally abandon the futile pursuit of replicating “human-level AI” in computers, we will find many additional, albeit “narrow,” applications of computers to enrich and improve our lives?

Gary Marcus complained about research papers presented at the Neural Information Processing Systems (NIPS) conference, saying that they are like alchemy, adding a layer or two to a neural network, “a little fiddle here or there.” Instead, he suggested “a richer base of instruction set of basic computations,” arguing that “it’s time for genuinely new ideas.”

Is it possible that this paradigm—and the driving ambition at its core to play God and develop human-like machines—has led to the infamous “AI Winter”? And that continuing to adhere to it and refusing to consider “genuinely new ideas,” out-of-the-dominant-paradigm ideas, will lead to yet another AI Winter?

Source: Forbes


MIT makes breakthrough in morality-proofing artificial intelligence

Researchers at MIT are investigating ways of making artificial neural networks more transparent in their decision-making.

As they stand now, artificial neural networks are a wonderful tool for discerning patterns and making predictions. But they also have the drawback of not being terribly transparent. The beauty of an artificial neural network is its ability to sift through heaps of data and find structure within the noise.

This is not dissimilar from the way we might look up at clouds and see faces amidst their patterns. And just as we might have trouble explaining to someone why a face jumped out at us from the wispy trails of a cirrus cloud formation, artificial neural networks are not explicitly designed to reveal what particular elements of the data prompted them to decide a certain pattern was at work and make predictions based upon it.

We tend to want a little more explanation when human lives hang in the balance — for instance, if an artificial neural net has just diagnosed someone with a life-threatening form of cancer and recommends a dangerous procedure. At that point, we would likely want to know what features of the person’s medical workup tipped the algorithm in favor of its diagnosis.

MIT researchers Lei, Barzilay, and Jaakkola designed a neural network that would be forced to provide explanations for why it reached a certain conclusion.
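Blogger’s note: the article doesn’t include the researchers’ code, and the sketch below is not their rationale-extraction architecture. It illustrates the general idea with a much simpler stand-in, occlusion: mask each input feature of a hypothetical black-box model and see how far the prediction moves, which at least tells a doctor which part of the workup drove the diagnosis:

```python
# Illustrative only: a simple occlusion probe, not the MIT rationale
# method. Mask each feature of a hypothetical black-box model's input and
# record how much the output moves; big moves mark influential features.
import numpy as np

def black_box_model(x):
    """Stand-in for an opaque scorer (weights hidden from the user)."""
    hidden_weights = np.array([0.1, 2.5, -0.3, 1.2])
    return float(x @ hidden_weights)

def occlusion_explanation(x, baseline=0.0):
    """Output drop when each feature is replaced by a neutral baseline."""
    full_score = black_box_model(x)
    drops = []
    for i in range(len(x)):
        masked = x.copy()
        masked[i] = baseline
        drops.append(full_score - black_box_model(masked))
    return np.array(drops)

patient = np.array([1.0, 1.0, 1.0, 1.0])  # hypothetical workup features
print(occlusion_explanation(patient))
# -> [ 0.1  2.5 -0.3  1.2]: feature 1 drove this prediction, the kind of
#    answer one would want before acting on a high-stakes diagnosis.
```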

Source: ExtremeTech


China’s plan to organize its society relies on ‘big data’ to rate everyone

Imagine a world where an authoritarian government monitors everything you do, amasses huge amounts of data on almost every interaction you make, and awards you a single score that measures how “trustworthy” you are.

In this world, anything from defaulting on a loan to criticizing the ruling party, from running a red light to failing to care for your parents properly, could cause you to lose points. 

This is not the dystopian superstate of Steven Spielberg’s “Minority Report,” in which all-knowing police stop crime before it happens. But it could be China by 2020.

And in this world, your score becomes the ultimate truth of who you are — determining whether you can borrow money, get your children into the best schools or travel abroad; whether you get a room in a fancy hotel, a seat in a top restaurant — or even just get a date.

It is the scenario contained in China’s ambitious plans to develop a far-reaching social credit system, a plan that the Communist Party hopes will build a culture of “sincerity” and a “harmonious socialist society” where “keeping trust is glorious.”

The ambition is to collect every scrap of information available online about China’s companies and citizens in a single place — and then assign each of them a score based on their political, commercial, social and legal “credit.”
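Blogger’s note: mechanically, what the plan describes is the reduction of a life’s worth of records to one weighted number. The toy sketch below is entirely hypothetical (the events, weights, and threshold are all invented) and is included only to show how simple, and how blunt, such a scoring mechanism can be:

```python
# Entirely hypothetical sketch of the mechanism described above: fold many
# recorded behaviors into one weighted "trustworthiness" number. Events,
# weights, and thresholds are invented; none reflect the actual system.
SCORE_EVENTS = {
    "loan_default": -50,
    "criticized_ruling_party": -30,
    "ran_red_light": -10,
    "cared_for_parents": +20,
    "paid_bills_on_time": +10,
}

def social_credit(history, base=1000):
    """One opaque number summarizing a citizen's recorded life."""
    return base + sum(SCORE_EVENTS.get(event, 0) for event in history)

citizen = ["paid_bills_on_time", "ran_red_light", "criticized_ruling_party"]
score = social_credit(citizen)             # -> 970
print("may travel abroad:", score >= 950)  # gate: loans, schools, travel
```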

Mobile device usage and e-commerce are in wide use in China, and now the Communist Party wants to compile a “social credit” score based on citizens’ every activity. (Michael Robinson Chavez/The Washington Post)


Source: The Washington Post


New Research Center to Explore Ethics of Artificial Intelligence


The Chimp robot, built by a Carnegie Mellon team, took third place in a competition held by DARPA last year. The school is starting a research center focused on the ethics of artificial intelligence. Credit: Chip Somodevilla/Getty Images

Carnegie Mellon University plans to announce on Wednesday that it will create a research center that focuses on the ethics of artificial intelligence.

The ethics center, called the K&L Gates Endowment for Ethics and Computational Technologies, is being established at a time of growing international concern about the impact of A.I. technologies.

“We are at a unique point in time where the technology is far ahead of society’s ability to restrain it,” said Subra Suresh, Carnegie Mellon’s president.

The new center is being created with a $10 million gift from K&L Gates, an international law firm headquartered in Pittsburgh.

Peter J. Kalis, chairman of the law firm, said the potential impact of A.I. technology on the economy and culture made it essential that as a society we make thoughtful, ethical choices about how the software and machines are used.

“Carnegie Mellon resides at the intersection of many disciplines,” he said. “It will take a synthesis of the best thinking of all of these disciplines for society to define the ethical constraints on the emerging A.I. technologies.”

Source: NY Times


Genetically engineered humans will arrive sooner than you think. And we’re not ready

Michael Bess is a historian of science at Vanderbilt University and the author of a fascinating new book, Our Grandchildren Redesigned: Life in a Bioengineered Society. Bess’s book offers a sweeping look at our genetically modified future, a future as terrifying as it is promising.

“What’s happening is bigger than any one of us”

We single out the industrial revolutions of the past as major turning points in human history because they marked major ways in which we changed our surroundings to make our lives easier, better, longer, healthier.

So these are just great landmarks, and I’m comparing this to those big turning points because now the technology, instead of being applied to our surroundings — how we get food for ourselves, how we transport things, how we shelter ourselves, how we communicate with each other — now those technologies are being turned directly on our own biology, on our own bodies and minds.

And so, instead of transforming the world around ourselves to make it more what we wanted it to be, now it’s becoming possible to transform ourselves into whatever it is that we want to be. And there’s both power and danger in that, because people can make terrible miscalculations, and they can alter themselves, maybe in ways that are irreversible, that do irreversible harm to the things that really make their lives worth living.

“We’re going to give ourselves a power that we may not have the wisdom to control very well”

I think most historians of technology … see technology and society as co-constructing each other over time, which gives human beings a much greater space for having a say in which technologies will be pursued and what direction we will take, and how much we choose to have them come into our lives and in what ways.

Source: Vox



Artificial Intelligence’s White Guy Problem


Credit: Bianca Bagnarelli

Warnings by luminaries like Elon Musk and Nick Bostrom about “the singularity” — when machines become smarter than humans — have attracted millions of dollars and spawned a multitude of conferences.

But this hand-wringing is a distraction from the very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems.

Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to.

A very serious example was revealed in an investigation published last month by ProPublica. It found that widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.

The reason those predictions are so skewed is still unknown, because the company responsible for these algorithms keeps its formulas secret — it’s proprietary information. Judges do rely on machine-driven risk assessments in different ways — some may even discount them entirely — but there is little they can do to understand the logic behind them.
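Blogger’s note: audits like ProPublica’s don’t need the secret formula; predictions and outcomes are enough. Here is a minimal sketch with made-up data, comparing false positive rates (defendants flagged high-risk who did not reoffend) across two groups:

```python
# Made-up audit data illustrating the kind of check ProPublica ran: the
# formula can stay secret, but predictions plus outcomes are enough to
# measure error-rate disparities across groups.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of people who did NOT reoffend but were flagged high-risk."""
    negatives = (y_true == 0)
    return (y_pred[negatives] == 1).mean()

# 1 = reoffended / flagged high-risk, 0 = otherwise; groups are synthetic.
y_true = np.array([0, 0, 0, 0, 1, 0, 0, 0, 0, 1])
y_pred = np.array([1, 1, 0, 0, 1, 1, 0, 0, 0, 1])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

for g in ("a", "b"):
    fpr = false_positive_rate(y_true[group == g], y_pred[group == g])
    print(f"group {g}: false positive rate = {fpr:.2f}")
# -> group a: 0.50, group b: 0.25, a 2x disparity of the kind reported.
```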

Histories of discrimination can live on in digital platforms, and if they go unquestioned, they become part of the logic of everyday algorithmic systems.

Another scandal emerged recently when it was revealed that Amazon’s same-day delivery service was unavailable for ZIP codes in predominantly black neighborhoods. The areas overlooked were remarkably similar to those affected by mortgage redlining in the mid-20th century. Amazon promised to redress the gaps, but it reminds us how systemic inequality can haunt machine intelligence.

And then there’s gender discrimination. Last July, computer scientists at Carnegie Mellon University found that women were less likely than men to be shown ads on Google for highly paid jobs. The complexity of how search engines show ads to internet users makes it hard to say why this happened — whether the advertisers preferred showing the ads to men, or the outcome was an unintended consequence of the algorithms involved.

Regardless, algorithmic flaws aren’t easily discoverable: How would a woman know to apply for a job she never saw advertised? How might a black community learn that it was being overpoliced by software?

Like all technologies before it, artificial intelligence will reflect the values of its creators.

Source: New York Times – Kate Crawford is a principal researcher at Microsoft and co-chairwoman of a White House symposium on society and A.I.


AI is one of the top 5 tools humanity has ever had

A few highlights from the AI panel at the White House Frontiers Conference

On the impact of AI

Andrew McAfee (MIT):

To view the video, follow the source link below, scroll down the page to Live Stream, and start the video; it may take a minute to load. Then skip ahead to the time you want to watch.

(Begins @ 2:40:34)

We are at an inflection point … I think the development of these kinds of [AI] tools are going to rank among probably the top 5 tools humanity has ever had to take better care of each other and to tread more lightly on the planet … top 5 in our history. Like the book, maybe, the steam engine, maybe, written language — I might put the Internet there. We’ve all got our pet lists of the biggest inventions ever. AI needs to be on the very, very, short list.

On bias in AI

Fei-Fei Li, Professor of Computer Science, Stanford University:

(Begins @ 3:14:57)

Research repeatedly has shown that when people work in diverse groups there is increased creativity and innovation.

And interestingly, it is harder to work as a diverse group. I’m sure everybody here in the audience has had that experience. We have to listen to each other more. We have to understand the perspective more. But that also correlates well with innovation and creativity. … If we don’t have the inclusion of [diverse] people to think about the problems and the algorithms in AI, we might not only be missing the innovation boat, we might actually create bias and unfairness that are going to be detrimental to our society …

What I have been advocating at Stanford, and with my colleagues in the community is, let’s bring the humanistic mission statement into the field of AI. Because AI is fundamentally an applied technology that’s going to serve our society. Humanistic AI not only raises the awareness and the importance of our technology, it’s actually a really, really important way to attract diverse students and technologists and innovators to participate in the technology of AI.

There has been a lot of research done to show that people with diverse backgrounds put more emphasis on humanistic mission in their work and in their life. So, if in our education, in our research, we can accentuate or bring out this humanistic message of this technology, we are more likely to invite the diversity of students and young technologists to join us.

On lack of minorities in AI

Andrew Moore, Dean, School of Computer Science, Carnegie Mellon University:

(Begins @ 3:19:10)

I so strongly applaud what you [Fei-Fei Li] are describing here because I think we are engaged in a fight here for how the 21st century pans out in terms of who’s running the world … 

The nightmare, the silly, silly thing we could do … would be if … the middle of the century is built by a bunch of non-minority guys from suburban moderately wealthy United States instead of the full population of the United States.

Source: Frontiers Conference
Click on the video labeled “Live Stream (event will start shortly)”; it may take a minute to load.

(Update 02/24/17: The original timestamps listed above may be different when revisiting this video.)


How Deep Learning is making AI prejudiced

Blogger’s note: The authors of this research paper demonstrate what they call “machine prejudice” and show how fundamentally it derives from human culture.

“Concerns about machine prejudice are now coming to the fore–concerns that our historic biases and prejudices are being reified in machines,” they write. “Documented cases of automated prejudice range from online advertising (Sweeney, 2013) to criminal sentencing (Angwin et al., 2016).”

Following are a few excerpts: 

Abstract

“Artificial intelligence and machine learning are in a period of astounding growth. However, there are concerns that these technologies may be used, either with or without intention, to perpetuate the prejudice and unfairness that unfortunately characterizes many human institutions. Here we show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language—the same sort of language humans are exposed to every day.

Discussion

“We show for the first time that if AI is to exploit via our language the vast knowledge that culture has compiled, it will inevitably inherit human-like prejudices. In other words, if AI learns enough about the properties of language to be able to understand and produce it, it also acquires cultural associations that can be offensive, objectionable, or harmful. These are much broader concerns than intentional discrimination, and possibly harder to address.

Awareness is better than blindness

“… where AI is partially constructed automatically by machine learning of human culture, we may also need an analog of human explicit memory and deliberate actions, that can be trained or programmed to avoid the expression of prejudice.

“Of course, such an approach doesn’t lend itself to a straightforward algorithmic formulation. Instead it requires a long-term, interdisciplinary research program that includes cognitive scientists and ethicists. …”
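Blogger’s note: the paper’s core measurement can be sketched in a few lines. The toy example below uses invented 3-dimensional vectors (the real study uses pretrained word embeddings such as GloVe, plus a permutation test) to show how cosine similarity exposes which attribute set a target word leans toward:

```python
# Toy version of the paper's measurement. The 3-d vectors are invented;
# the actual study uses pretrained word embeddings (e.g., GloVe) and a
# permutation test. Positive score = target leans toward attribute set A.
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """Mean similarity of word vector w to set A minus to set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

pleasant   = [np.array([1.0, 0.1, 0.0]), np.array([0.9, 0.2, 0.1])]
unpleasant = [np.array([0.0, 1.0, 0.1]), np.array([0.1, 0.9, 0.0])]
flower = np.array([0.8, 0.2, 0.1])  # invented embedding near "pleasant"
insect = np.array([0.2, 0.8, 0.2])  # invented embedding near "unpleasant"

print(f"flower: {association(flower, pleasant, unpleasant):+.2f}")
print(f"insect: {association(insect, pleasant, unpleasant):+.2f}")
# With real embeddings, the same differential reproduces documented human
# biases, from flowers vs. insects to names and gender stereotypes.
```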

Report: Semantics derived automatically from language corpora necessarily contain human biases
Aylin Caliskan-Islam, Joanna J. Bryson, and Arvind Narayanan
Princeton University; University of Bath
Draft date: August 31, 2016


Why we can’t trust ‘blind big data’ to cure the world’s diseases

Once upon a time a former editor of WIRED, Chris Anderson, … envisaged how scientists would take the ever-expanding ocean of data, send a torrent of bits and bytes into a great hopper, then crank the handles of huge computers that run powerful statistical algorithms to discern patterns where science cannot.

In short, Anderson dreamt of the day when scientists no longer had to think.

Eight years later, the deluge is truly upon us. Some 90 percent of the data currently in the world was created in the last two years … and there are high hopes that big data will pave the way for a revolution in medicine.

But we need big thinking more than ever before.

Today’s data sets, though bigger than ever, still afford us an impoverished view of living things.

It takes a bewildering amount of data to capture the complexities of life.

The usual response is to put faith in machine learning, such as artificial neural networks. But no matter their ‘depth’ and sophistication, these methods merely fit curves to available data.

We do not predict tomorrow’s weather by averaging historic records of that day’s weather.

… there are other limitations, not least that data are not always reliable (“most published research findings are false,” as famously reported by John Ioannidis in PLOS Medicine). Bodies are dynamic and ever-changing, while datasets often give only snapshots, and are always retrospective.
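Blogger’s note: here is the “merely fitting curves” worry in miniature. A flexible polynomial fit to noisy sine data (everything below is synthetic) interpolates beautifully inside the observed range and collapses just beyond it, which is exactly the problem with purely retrospective datasets:

```python
# Synthetic demonstration that curve fitting is not understanding: a
# flexible polynomial nails the observed range and fails just outside it.
import numpy as np

rng = np.random.default_rng(1)
x_obs = np.linspace(0, 2 * np.pi, 40)
y_obs = np.sin(x_obs) + rng.normal(0, 0.05, x_obs.size)  # noisy snapshots

coeffs = np.polyfit(x_obs, y_obs, deg=7)  # "merely fit a curve"

for label, x in [("inside data range", np.pi), ("beyond data range", 3 * np.pi)]:
    print(f"{label}: fit={np.polyval(coeffs, x):.2f}  truth={np.sin(x):.2f}")
# The interpolation is accurate; the extrapolation is wildly wrong, because
# the fit captured the snapshots, not the mechanism that generated them.
```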

Source: Wired


Grandma? Now you can see the bias in the data …

“Just type the word grandma in your favorite search engine image search and you will see the bias in the data, in the picture that is returned  … you will see the race bias.” — Fei-Fei Li, Professor of Computer Science, Stanford University, speaking at the White House Frontiers Conference

Google image search for Grandma 


Bing image search for Grandma


Facebooktwittergoogle_plusredditpinterestlinkedinmailFacebooktwittergoogle_plusredditpinterestlinkedinmail

It seems that A.I. will be the undoing of us all … romantically, at least

As if finding love weren’t hard enough, the creators of Operator decided to show just how Artificial Intelligence could ruin modern relationships.

Artificial Intelligence so often focuses on the idea of “perfection.” As most of us know, people are anything but perfect, and believing that your S.O. (Significant Other) is perfect can lead to problems. The point of an A.I., however, is perfection — so why would someone choose the flaws of a human being over an A.I. that can give you all the comfort you want with none of the costs?

Hopefully, people continue to choose imperfection.

Source: Inverse.com
