The idea that Silicon Valley is the darling of our markets and of our society … that sentiment is definitely turning

“Personally, I think the idea that fake news on Facebook, it’s a very small amount of the content, influenced the election in any way is a pretty crazy idea.”

Facebook CEO Mark Zuckerberg’s company recently said it would turn over to Congress more than 3,000 politically themed advertisements that were bought by suspected Russian operatives. (Eric Risberg/AP)

Nine days after Facebook chief executive Mark Zuckerberg dismissed as “crazy” the idea that fake news on his company’s social network played a key role in the U.S. election, President Barack Obama pulled the youthful tech billionaire aside and delivered what he hoped would be a wake-up call.

Obama made a personal appeal to Zuckerberg to take the threat of fake news and political disinformation seriously. Unless Facebook and the government did more to address the threat, Obama warned, it would only get worse in the next presidential race.

“There’s been a systematic failure of responsibility. It’s rooted in their overconfidence that they know best, their naivete about how the world works, their extensive effort to avoid oversight, and their business model of having very few employees so that no one is minding the store.” – Zeynep Tufekci

Zuckerberg acknowledged the problem posed by fake news. But he told Obama that those messages weren’t widespread on Facebook and that there was no easy remedy, according to people briefed on the exchange.

One outcome of the company’s subsequent efforts was Zuckerberg’s admission on Thursday that Facebook had indeed been manipulated and that the company would now turn over to Congress more than 3,000 politically themed advertisements that were bought by suspected Russian operatives.

These issues have forced Facebook and other Silicon Valley companies to weigh core values, including freedom of speech, against the problems created when malevolent actors use those same freedoms to pump messages of violence, hate and disinformation.

Congressional investigators say the disclosure only scratches the surface. One called Facebook’s discoveries thus far “the tip of the iceberg.” Nobody really knows how many accounts are out there and how to prevent more of them from being created to shape the next election — and turn American society against itself.

“There is no question that the idea that Silicon Valley is the darling of our markets and of our society — that sentiment is definitely turning,” said Tim O’Reilly, an adviser to tech executives and chief executive of the influential Silicon Valley-based publisher O’Reilly Media.

Source: Washington Post



The idea was to help you and me make better decisions amid cognitive overload

IBM Chairman, President, and Chief Executive Officer Ginni Rometty. PHOTOGRAPHER: STEPHANIE SINCLAIR FOR BLOOMBERG BUSINESSWEEK

If I considered the initials AI, I would have preferred augmented intelligence.

It’s the idea that each of us is going to need help on all important decisions.

A study found that, on average, a third of your decisions are really great decisions, a third are not optimal, and a third are just wrong. We’ve estimated the market for tools to make better decisions at $2 billion.

That’s what led us all to really calling it cognitive.

“Look, we really think this is about man and machine, not man vs. machine. This is an era—really, an era that will play out for decades in front of us.”

We set out to build an AI platform for business.

AI would be vertical. You would train it to know medicine. You would train it to know underwriting of insurance. You would train it to know financial crimes. Train it to know oncology. Train it to know weather. And it isn’t just about billions of data points. In the regulatory world, there aren’t billions of data points. You need to train and interpret something with small amounts of data.

This is really another key point about professional AI. Doctors don’t want black-and-white answers, nor does any profession. If you’re a professional, my guess is when you interact with AI, you don’t want it to say, “Here is an answer.”

What a doctor wants is, “OK, give me the possible answers. Tell me why you believe it. Can I see the research, the evidence, the ‘percent confident’? What more would you like to know?”
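
A professional-grade response, in other words, is a ranked set of possibilities with reasons, evidence and confidence attached, plus the questions the system still has. As a purely illustrative sketch (hypothetical names, not IBM Watson's actual API), that shape might look like this in Python:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """One possible answer, with the reasoning a professional would ask for."""
    condition: str
    confidence: float                # the "percent confident", 0.0-1.0
    rationale: str                   # "tell me why you believe it"
    evidence: list[str] = field(default_factory=list)   # "can I see the research?"

@dataclass
class AssistantResponse:
    hypotheses: list[Hypothesis]     # ranked possibilities, never a single verdict
    questions: list[str] = field(default_factory=list)  # "what more would you like to know?"

# Hypothetical example content, for illustration only.
response = AssistantResponse(
    hypotheses=[
        Hypothesis("Condition A", 0.72, "Findings X and Y typically co-occur in A",
                   ["study 1", "study 2"]),
        Hypothesis("Condition B", 0.21, "Less likely given a normal result on test Z"),
    ],
    questions=["family history", "recent imaging"],
)

# Present ranked options with confidence, not one black-and-white answer.
for h in sorted(response.hypotheses, key=lambda h: -h.confidence):
    print(f"{h.condition}: {h.confidence:.0%} - {h.rationale}")
```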

It’s our responsibility if we build this stuff to guide it safely into the world.

Source: Bloomberg




Siri as a therapist: Apple is seeking engineers who understand psychology

PL – Looks like Siri needs more help to understand.

Apple Job Opening Ad

“People have serious conversations with Siri. People talk to Siri about all kinds of things, including when they’re having a stressful day or have something serious on their mind. They turn to Siri in emergencies or when they want guidance on living a healthier life. Does improving Siri in these areas pique your interest?

Come work as part of the Siri Domains team and make a difference.

We are looking for people passionate about the power of data and have the skills to transform data to intelligent sources that will take Siri to next level. Someone with a combination of strong programming skills and a true team player who can collaborate with engineers in several technical areas. You will thrive in a fast-paced environment with rapidly changing priorities.”

The challenge, as explained by Ephrat Livni on Quartz:

The position requires a unique skill set. Basically, the company is looking for a computer scientist who knows algorithms and can write complex code, but also understands human interaction, has compassion, and communicates ably, preferably in more than one language. The role also promises a singular thrill: to “play a part in the next revolution in human-computer interaction.”

The job at Apple has been up since April, so maybe it’s turned out to be a tall order to fill. Still, it shouldn’t be impossible to find people who are interested in making machines more understanding. If it is, we should probably stop asking Siri such serious questions.

Computer scientists developing artificial intelligence have long debated what it means to be human and how to make machines more compassionate. Apart from the technical difficulties, the endeavor raises ethical dilemmas, as noted in the 2012 MIT Press book Robot Ethics: The Ethical and Social Implications of Robotics.

Even if machines could be made to feel for people, it’s not clear which feelings are the right ones to make a great and kind advisor, and in what combinations. A sad machine is no good, perhaps, but a really happy machine is problematic, too.

In a chapter on creating compassionate artificial intelligence (pdf), sociologist, bioethicist, and Buddhist monk James Hughes writes:

Programming too high a level of positive emotion in an artificial mind, locking it into a heavenly state of self-gratification, would also deny it the capacity for empathy with other beings’ suffering, and the nagging awareness that there is a better state of mind.

Source: Quartz

 


The Growing #AI Emotion Reading Tech Challenge

PL – The challenge of an AI using Emotion Reading Tech just got dramatically more difficult.

A new study identifies 27 categories of emotion and shows how they blend together in our everyday experience.

Psychology once assumed that most human emotions fall within the universal categories of happiness, sadness, anger, surprise, fear, and disgust. But a new study from Greater Good Science Center faculty director Dacher Keltner suggests that there are at least 27 distinct emotions—and they are intimately connected with each other.

“We found that 27 distinct dimensions, not six, were necessary to account for the way hundreds of people reliably reported feeling in response to each video.”

Moreover, in contrast to the notion that each emotional state is an island, the study found that “there are smooth gradients of emotion between, say, awe and peacefulness, horror and sadness, and amusement and adoration,” Keltner said.

“We don’t get finite clusters of emotions in the map because everything is interconnected,” said study lead author Alan Cowen, a doctoral student in neuroscience at UC Berkeley.

“Emotional experiences are so much richer and more nuanced than previously thought.”

Source: Mindful


I prefer to be killed by my own stupidity rather than the codified morals of a software engineer

…or the learned morals of an evolving algorithm. – SAS CTO Oliver Schabenberger

With the advent of deep learning, machines are beginning to solve problems in a novel way: by writing the algorithms themselves.

The software developer who codifies a solution through programming logic is replaced by a data scientist who defines and trains a deep neural network.

The expert who studied and learned a domain is replaced by a reinforcement learning algorithm that discovers the rules of play from historical data.

We are learning incredible lessons in this process.

But does the rise of such highly sophisticated deep learning mean that machines will soon surpass their makers? They are surpassing us in reliability, accuracy and throughput. But they are not surpassing us in thinking or learning. Not with today’s technology.

The artificial intelligence systems of today learn from data – they learn only from data. These systems cannot grow beyond the limits of the data by creating, innovating or reasoning.

Even a reinforcement learning system that discovers rules of play from past data cannot develop completely new rules or new games. It can apply the rules in a novel and more efficient way, but it does not invent a new game. The machine that learned to play Go better than any human being does not know how to play Poker.
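
A toy example makes that boundary concrete. In the standard Q-learning sketch below (a textbook technique, not SAS's or DeepMind's code), the rules of a trivial "game" are hard-coded in the environment; the agent learns to play those rules well, but nothing in its learning loop can rewrite them:

```python
import random

# The "game": a fixed five-cell corridor; reaching the right end pays off.
# Its rules live in step() and never change: the agent can learn to play
# them well, but it cannot invent a new game.
N_STATES, ACTIONS = 5, [-1, +1]          # states 0..4; move left or right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3    # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit current knowledge, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        # Q-learning update: estimate the value of playing the fixed rules
        Q[(s, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in ACTIONS) - Q[(s, a)])
        s = nxt

# The learned policy: move right everywhere. The game itself never changed.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```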

Where to from here?

True intelligence requires creativity, innovation, intuition, independent problem solving, self-awareness and sentience. Systems built on deep learning do not – and cannot – have these characteristics; they are trained by top-down supervised methods.

We first tell the machine the ground truth so that it can discover the regularities within it. The machine does not grow beyond that.

Source: InformationWeek




Can machines learn to be moral?  #AI

AI works, in part, because complex algorithms adeptly identify, remember, and relate data … Moreover, some machines can do what had been the exclusive domain of humans and other intelligent life: Learn on their own.

As a researcher schooled in scientific method and an ethicist immersed in moral decision-making, I know it’s challenging for humans to navigate concurrently the two disparate arenas. 

It’s even harder to envision how computer algorithms can enable machines to act morally.

Moral choice, however, doesn’t ask whether an action will produce an effective outcome; it asks whether it is a good decision. In other words, regardless of efficacy, is it the right thing to do?

Such analysis does not reflect an objective, data-driven decision but a subjective, judgment-based one.

Individuals often make moral decisions on the basis of principles like decency, fairness, honesty, and respect. To some extent, people learn those principles through formal study and reflection; however, the primary teacher is life experience, which includes personal practice and observation of others.

Placing manipulative ads before a marginally qualified and emotionally vulnerable target market may be very effective for the mortgage company, but many people would challenge the promotion’s ethicality.

Humans can make that moral judgment, but how does a data-driven computer draw the same conclusion? Therein lies what should be a chief concern about AI.

Can computers be manufactured with a sense of decency?

Can coding incorporate fairness? Can algorithms learn respect? 

It seems incredible for machines to emulate subjective moral judgment, but if that potential exists, at least four critical issues must be resolved:

  1. Whose moral standards should be used?
  2. Can machines converse about moral issues?
  3. Can algorithms take context into account?
  4. Who should be accountable?

Source: Business Insider David Hagenbuch




Why The Sensitive Intersection of Race, Hate Speech And Algorithms Is Heating Up #AI

SAN JOSE, CA – APRIL 18: Facebook CEO Mark Zuckerberg delivers the keynote address at Facebook’s F8 Developer Conference on April 18, 2017 at McEnery Convention Center in San Jose, California. (Photo by Justin Sullivan/Getty Images)

A recent story in The Washington Post reported that “minority” groups feel unfairly censored by the social media behemoth Facebook, for example, when using the platform for discussions about racial bias. At the same time, groups and individuals on the other end of the race spectrum are being banned and ousted in a flash from various social media networks.

Almost all such activity begins with an algorithm: a set of computer code that, for the purposes of this piece, is created to raise a red flag when certain speech is used on a site.

But from engineering mindsets to technical limitations, just how much faith should we place in algorithms when it comes to the very sensitive area of digital speech and race, and what does the future hold?
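
To see why that faith deserves scrutiny, consider a deliberately naive sketch (hypothetical term list, not any platform's real moderation code). A keyword flagger fires on the mention of a term, not the intent behind it, which is one way discussions about racial bias end up flagged alongside abuse:

```python
# A toy red-flagger: fires whenever a listed term appears in a post.
# Real systems are more sophisticated, but the basic failure mode is the
# same: the flag fires on the *mention* of a term, not the intent behind it.
FLAGGED_TERMS = {"racist", "hate"}        # hypothetical term list

def raises_flag(post: str) -> bool:
    words = {w.strip(".,!?;:").lower() for w in post.split()}
    return bool(words & FLAGGED_TERMS)

posts = [
    "I hate you people and everyone like you",                           # abuse
    "The traffic stop felt racist; has anyone else experienced this?",   # testimony
]
for p in posts:
    print(raises_flag(p), "-", p)
# Both print True: the algorithm cannot tell an attack from a discussion of bias.
```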

Indeed, while Facebook head Mark Zuckerberg reportedly eyes political ambitions within an increasingly brown America in which his own company consistently has issues creating racial balance, there are questions around the policy and development of such algorithms. In fact, Malkia Cyril, executive director of the Center for Media Justice, told the Post that she believes Facebook has a double standard when it comes to deleting posts.

Cyril explains of her meeting with Facebook: “The meeting was a good first step, but very little was done in the direct aftermath. Even then, Facebook executives, largely white, spent a lot of time explaining why they could not do more instead of working with us to improve the user experience for everyone.”

What’s actually in the hearts and minds of those in charge of the software development? How many more of those writing code hold views similar to – or more extreme than – those recently expressed in what is now known as the Google anti-diversity memo?

Not just Facebook but any and all tech platforms where discussion of race occurs are seemingly at a crossroads, under growing scrutiny over their management, standards and policy in this sensitive area. The main question: how much of this imbalance is deliberate, and how much is just a result of how algorithms naturally work?

Nelson [national chairperson of the National Society of Black Engineers] notes that the first source of error, however, is how a particular team defines the term hate speech. “That opinion may differ between people so any algorithm would include error at the individual level,” he concludes.

“I believe there are good people at Facebook who want to see justice done,” says Cyril. “There are steps being taken at the company to improve the experience of users and address the rising tide of hate that thwarts democracy, on social media and in real life.

That said, racism is not race neutral, and accountability for racism will never come from an algorithm alone.”

Source: Forbes




Putin: Leader in artificial intelligence will rule world

Putin, speaking Friday at a meeting with students, said the development of AI raises “colossal opportunities and threats that are difficult to predict now.”

[He] warned that “it would be strongly undesirable if someone wins a monopolist position” and promised that Russia would be ready to share its know-how in artificial intelligence with other nations.

The Russian leader predicted that future wars will be fought by drones, and “when one party’s drones are destroyed by drones of another, it will have no other choice but to surrender.”

Source: Washington Post


Behind the Google diversity memo furor is fear of Google’s vast opaque power

Behind the furor is fear of the opaque power that Google in particular, and Silicon Valley in general, wields over our lives.

If Google — and the tech world more generally — is sexist, or in the grips of a totalitarian cult of political correctness, or a secret hotbed of alt-right reactionaries, the consequences would be profound.

Google wields a monopoly over search, one of the central technologies of our age, and, alongside Facebook, dominates the internet advertising market, making it a powerful driver of both consumer opinion and the media landscape. 

It shapes the world in which we live in ways both obvious and opaque.

This is why trust matters so much in tech. It’s why Google, to attain its current status in society, had to promise, again and again, that it wouldn’t be evil. 

Compounding the problem is that the tech industry’s point of view is embedded deep in the product, not announced on the packaging. Its biases are quietly built into algorithms, reflected in platform rules, expressed in code few of us can understand and fewer of us will ever read.

But what if it actually is evil? Or what if it’s not evil but just immature, unreflective, and uncompassionate? And what if that’s the culture that designs the digital services the rest of us have to use?

The technology industry’s power is vast, and the way that power is expressed is opaque, so the only real assurance you can have that your interests and needs are being considered is to be in the room when the decisions are made and the code is written. But tech as an industry is unrepresentative of the people it serves and unaccountable in the way it serves them, and so there’s very little confidence among any group that the people in the room are the right ones.

Source: Vox (read the entire article by Ezra Klein)




IBM Watson CTO on Why Augmented Intelligence Beats AI

If you look at almost every other tool that has ever been created, our tools tend to be most valuable when they’re amplifying us, when they’re extending our reach, when they’re increasing our strength, when they’re allowing us to do things that we can’t do by ourselves as human beings. That’s really the way that we need to be thinking about AI as well, and to the extent that we actually call it augmented intelligence, not artificial intelligence.

Some time ago we realized that this thing called cognitive computing was really bigger than us, it was bigger than IBM, it was bigger than any one vendor in the industry, it was bigger than any of the one or two different solution areas that we were going to be focused on, and we had to open it up, which is when we shifted from focusing on solutions to really dealing with more of a platform of services, where each service really is individually focused on a different part of the problem space.

what we’re talking about now are a set of services, each of which do something very specific, each of which are trying to deal with a different part of our human experience, and with the idea that anybody building an application, anybody that wants to solve a social or consumer or business problem can do that by taking our services, then composing that into an application.

If the doctor can now make decisions that are more informed, that are based on real evidence, that are supported by the latest facts in science, that are more tailored and specific to the individual patient, it allows them to actually do their job better. For radiologists, it may allow them to see things in the image that they might otherwise miss or get overwhelmed by. It’s not about replacing them. It’s about helping them do their job better.

That’s really the way to think about this stuff, is that it will have its greatest utility when it is allowing us to do what we do better than we could by ourselves, when the combination of the human and the tool together are greater than either one of them would’ve been by theirselves. That’s really the way we think about it. That’s how we’re evolving the technology. That’s where the economic utility is going to be.

There are lots of things that we as human beings are good at. There’s also a lot of things that we’re not very good, and that’s I think where cognitive computing really starts to make a huge difference, is when it’s able to bridge that distance to make up that gap

A way I like to say it is it doesn’t do our thinking for us, it does our research for us so we can do our thinking better, and that’s true of us as end users and it’s true of advisors.

Source: PCMag




Will Satya’s ‘Charlottesville email’ shape AI applications at Microsoft?


“You can’t paint what you ain’t.”

– Drew Struzan

Those words got to me 18 years ago during an interview I had with this esteemed artist. We were working on a project together, an interactive CD about his movie posters, several of which were classics by then, when the conversation wandered off the subject of art and we began to examine the importance of being true to one’s self.  

“Have you ever, in your classes or seminars talked much about the underlying core foundation principles of your life?” I asked Drew that day.

His answer in part went like this: “Whenever I talk, I’m asked to talk about my art, because that’s what they see, that’s what’s out front. But the power of the art comes out of the personality of the human being. Inevitably, you can’t paint what you ain’t.”

That conversation between us took place five days before Columbine, in April of 1999, when Pam and I lived in Denver and a friend of ours had children attending that school. That horrific event triggered a lot of discussions about values, and a lot of human action in response.

Flash-forward to Charlottesville. And an email, in response to it, that the CEO of a large tech company sent his employees yesterday, putting a stake in the ground about what his company stands for, and won’t stand for, during these “horrific” times.

“… At Microsoft, we strive to seek out differences, celebrate them and invite them in. As a leader, a key part of your role is creating a culture where every person can do their best work, which requires more than tolerance for diverse perspectives. Our growth mindset culture requires us to truly understand and share the feelings of another person. …”

If Satya Nadella’s email expresses the emerging personality at Microsoft, the power source from which it works, then we are cautiously optimistic about what this could do for socializing AI.

It will take this kind of foundation-building, going forward, as Microsoft introduces more AI innovations, to diminish the inherent bias in deep learning approaches and the implicit bias in algorithms.

It will take this depth of awareness to shape the values of Human-AI collaboration, to protect the humans who use AI. Values that “seek out differences, celebrate them and invite them in.”

It will require unwavering dedication to this goal. Because. You can’t paint what you ain’t.

Blogger, Phil Lawson
SocializingAI.com




Satya Nadella’s message to Microsoft after the attack in Charlottesville

Yesterday (Aug. 14), Microsoft CEO Satya Nadella sent out the following email to employees at Microsoft after the deadly car crash at a white nationalist rally in Charlottesville, Virginia, on Saturday, Aug. 12:

This past week and in particular this weekend’s events in Charlottesville have been horrific. What I’ve seen and read has had a profound impact on me and I am sure for many of you as well. In these times, to me only two things really matter as a leader.

The first is that we stand for our timeless values, which include diversity and inclusion. There is no place in our society for the bias, bigotry and senseless violence we witnessed this weekend in Virginia provoked by white nationalists. Our hearts go out to the families and everyone impacted by the Charlottesville tragedy.

The second is that we empathize with the hurt happening around us. At Microsoft, we strive to seek out differences, celebrate them and invite them in. As a leader, a key part of your role is creating a culture where every person can do their best work, which requires more than tolerance for diverse perspectives. Our growth mindset culture requires us to truly understand and share the feelings of another person. It is an especially important time to continue to be connected with people, and listen and learn from each other’s experiences.

As I’ve said, across Microsoft, we will stand together with those who are standing for positive change in the communities where we live, work and serve. Together, we must embrace our shared humanity, and aspire to create a society that is filled with respect, empathy and opportunity for all.

Feel free to share with your teams.

Satya

Source: Quartz

To read this blogger’s view of the above email, click here.


Do we still need human judges in the age of Artificial Intelligence?

Technology and the law are converging, and where they meet new questions arise about the relative roles of artificial and human agents—and the ethical issues involved in the shift from one to the other. While legal technology has largely focused on the activities of the bar, it challenges us to think about its application to the bench as well. In particular,

Could AI replace human judges?

The idea of AI judges raises important ethical issues around bias and autonomy. AI programs may incorporate the biases of their programmers and the humans they interact with.

But while such programs may replicate existing human biases, the distinguishing feature of AI over a conventional algorithm is that it can behave in surprising and unintended ways as it ‘learns.’ Eradicating bias therefore becomes even more difficult, though not impossible. Any AI judging program would need to account for, and be tested for, these biases.
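
In its simplest form, such testing could mean comparing how often the system's decisions favor each group and flagging disparities. The sketch below is only illustrative (hypothetical data, and the crude "four-fifths" heuristic rather than a real audit standard):

```python
from collections import defaultdict

def audit(decisions):
    """decisions: iterable of (group, favorable) pairs from the judging system."""
    counts = defaultdict(lambda: [0, 0])              # group -> [favorable, total]
    for group, favorable in decisions:
        counts[group][0] += int(favorable)
        counts[group][1] += 1
    rates = {g: fav / tot for g, (fav, tot) in counts.items()}
    # "Four-fifths rule": a crude heuristic saying the least-favored group's
    # rate should be at least 80% of the most-favored group's rate.
    return rates, min(rates.values()) / max(rates.values()) >= 0.8

# Hypothetical decisions: group A is favored twice as often as group B.
rates, passes = audit([("A", True), ("A", True), ("A", False),
                       ("B", True), ("B", False), ("B", False)])
print(rates, "passes four-fifths check:", passes)     # this example fails
```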

Appealing to rationality, the counter-argument is that human judges are already biased, and that AI can be used to improve the way we deal with those biases and reduce our ignorance. Yet suspicions about AI judges remain, and are already enough of a concern to lead the European Union to promulgate the General Data Protection Regulation, which becomes effective in 2018. This Regulation contains

“the right not to be subject to a decision based solely on automated processing”.

As the English utilitarian legal theorist Jeremy Bentham once wrote in An Introduction To The Principles of Morals and Legislation, “in principle and in practice, in a right track and in a wrong one, the rarest of all human qualities is consistency.” With the ability to process far more data and variables in the case record than humans could ever do, an AI judge might be able to outstrip a human one in many cases.

Even so, AI judges may not solve classical questions of legal validity so much as raise new questions about the role of humans, since—if we believe that ethics and morality in the law are important—then they necessarily lie, or ought to lie, in the domain of human judgment.

In practical terms, if we apply this conclusion to the perspective of American legal theorist Ronald Dworkin, for example, AI could assist with examining the entire breadth and depth of the law, but humans would ultimately choose what they consider a morally superior interpretation.

The American Judge Richard Posner believes that the immediate use of AI and automation should be restricted to assisting judges in uncovering their own biases and maintaining consistency.

At the heart of these issues is a hugely challenging question: what does it mean to be human in the age of Artificial Intelligence?

Source: Open Democracy


We should not talk about jobs being lost but people suffering #AI

How can humans stay ahead of an ever-growing machine intelligence? “I think the challenge for us is to always be creative,” says former world chess champion Garry Kasparov.

He also discussed the threat that increasingly capable AI poses to (human) jobs, arguing that we should not try to predict what will happen in the future but rather look at immediate problems.

“We make predictions and most are wrong because we’re trying to base it on our past experience,” he argued. “I think the problem is not that AI is exceeding our level. It’s another cycle.

Machines have been constantly replacing all sorts of jobs… We should not talk about jobs being lost but people suffering.

“AI is just another challenge. The difference is that now intelligent machines are coming after people with a college degree or with social media and Twitter accounts,” he added.

Source: TechCrunch




The true secret of copying life lies in movement #AI

Random International’s Zoological, part of Wayne McGregor’s +/- Human Photograph: Ravi Deepres/Alicia Clarke

Random International’s installation, Zoological, features a flock of airborne spheres that glide and swoop and dance and swarm above and among us. What a mind-boggling show.

In the darkened heights of the Roundhouse in north London, a flying flock of white spheres that uncannily resemble Magritte’s dream objects float intelligently and curiously, checking out the humans below, hovering downward to see us better. They are the most convincing embodiment of artificial intelligence I have ever seen. For these responsive, even sensitive machines truly create a sense of encounter with a digital life form that mirrors, or mocks, human free will.

Nobody is hidden behind a screen piloting this robotic airborne dance troupe. Each sphere has its own decision-making electronic brain. They fly in elegant unison yet also break ranks as they check their positions against the images recorded by infra-red cameras surrounding the circular space where they float and their human visitors walk.

Yet the crucial fact that they guide themselves, mimicking conscious choice in their unplanned and to all intents and purposes spontaneous actions, is apparent without knowing anything about their design. You can tell by the way they move that they are free entities.

Looked at coldly, these devices are just inflated plastic balls whose movements are guided by rotors, like a toy drone.  Their behaviour is by turns entrancing and mildly menacing. They rise one after another from their resting positions in an upper gallery and calmly hover out into the open domed arena where their human guests are waiting. They are never at rest. As they glide in formation one or another is always changing its position, approaching the people below with what seems like curiosity. Then they all follow. It is when the entire swarm gathers directly above you that it suddenly becomes a threatening, sinister presence.

This is artwork that opens visions of a future in which life evolves beyond biology itself.

The true secret of copying life, this installation shows, lies in movement. Dance, the oldest human art, turns out to be a key to comprehending life itself, and reproducing it. The orbs dance with you. They locate and follow members of the audience, not with mechanical inevitability but a complex, gracious harmony. Making and breaking patterns, coming together and loosely floating apart, they dance with each other, too.

Source: The Guardian




80% of what human physicians currently do will soon be done instead by technology, allowing physicians to focus on the really important elements of patient-physician interaction

Data-driven AI technologies are well suited to address chronic inefficiencies in health markets, potentially lowering costs by hundreds of billions of dollars, while simultaneously reducing the time burden on physicians.

These technologies can be leveraged to capture the massive volume of data that describes a patient’s past and present state, project potential future states, analyze that data in real time, assist in reasoning about the best way to achieve patient and physician goals, and provide both patient and physician constant real-time support. Only AI can fulfill such a mission. There is no other solution.

Technologist and investor Vinod Khosla posited that 80 percent of what human physicians currently do will soon be done instead by technology, allowing physicians to focus their time on the really important elements of patient-physician interaction.

Within five years, the healthcare sector has the potential to undergo a complete metamorphosis courtesy of breakthrough AI technologies. Here are just a few examples:

1. Physicians will practice with AI virtual assistants (using, for example, software tools similar to Apple’s Siri, but specialized to the specific healthcare application).

2. Physicians with AI virtual assistants will be able to treat 5X – 10X as many patients with chronic illnesses as they do today, with better outcomes than in the past.

Patients will have a constant “friend” providing a digital health conscience to advise, support, and even encourage them to make healthy choices and pursue a healthy lifestyle.

3. AI virtual assistants will support both patients and healthy individuals in health maintenance with ongoing and real-time intelligent advice.

Our greatest opportunity for AI-enhancement in the sector is keeping people healthy, rather than waiting to treat them when they are sick. AI virtual assistants will be able to acquire deep knowledge of diet, exercise, medications, emotional and mental state, and more.

4. Medical devices previously only available in hospitals will be available in the home, enabling much more precise and timely monitoring and leading to a healthier population.

5. Affordable new tools for diagnosis and treatment of illnesses will emerge based on data collected from extant and widely adopted digital devices such as smartphones.

6. Robotics and in-home AI systems will assist patients with independent living.

But don’t be misled — the best metaphor is that these AI systems are learning the way humans learn, and that they are in their infancy, just starting to crawl. Healthcare AI virtual assistants will soon be able to walk, and then run.

Many of today’s familiar AI engines, personified in Siri, Cortana, Alexa, Google Assistant or any of the hundreds of “intelligent chatbots,” are still immature and their capabilities are highly limited. Within the next few years they will be conversational, they will learn from the user, they will maintain context, and they will provide proactive assistance, just to name a few of their emerging capabilities.
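
As a toy illustration of one of those gaps (a sketch only, not any vendor's assistant): "maintaining context" means carrying state across turns, so that a later utterance can be understood in light of an earlier one:

```python
class ContextualAssistant:
    """Toy assistant that carries state across turns: the 'context' piece."""
    def __init__(self):
        self.context = {}

    def handle(self, utterance: str) -> str:
        if "blood pressure" in utterance.lower():
            self.context["topic"] = "blood pressure"
            return "What was the reading?"
        # a bare number only makes sense because of the remembered topic
        if utterance.replace("/", "").isdigit() and "topic" in self.context:
            self.context["last_reading"] = utterance
            return f"Logged {utterance} for {self.context['topic']}."
        return "Tell me more."

bot = ContextualAssistant()
print(bot.handle("I want to track my blood pressure"))
print(bot.handle("120/80"))   # understood only because the bot kept context
```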

And with these capabilities applied in the health sector, they will enable us to keep millions of citizens healthier, give physicians the support and time they need to practice, and save trillions of dollars in healthcare costs. Welcome to the age of AI.

Source: Venture Beat


Artificial intelligence ethics the same as other new technology – #AI

AI gives us the power to solve problems more efficiently and effectively.

Just as a calculator is more efficient at math than a human, various forms of AI might be better than humans at other tasks. For example, most car accidents are caused by human error – what if driving could be automated and human error thus removed? Tens of thousands of lives might be saved every year, and huge sums of money saved in healthcare costs and property damage averted.

Moving into the future, AI might be able to better personalize education to individual students, just as adaptive testing evaluates students today. AI might help figure out how to increase energy efficiency and thus save money and protect the environment. It might increase efficiency and prediction in healthcare; improving health while saving money. Perhaps AI could even figure out how to improve law and government, or improve moral education. For every problem that needs a solution, AI might help us find it.

But as human beings, we should not be so much thinking about efficiency as morality.

Doing the right thing is sometimes “inefficient” (whatever efficiency might mean in a certain context). Respecting human dignity is sometimes inefficient. And yet we should do the right thing and respect human dignity anyway, because those moral values are higher than mere efficiency.

Ultimately, AI gives us just what all technology does – better tools for achieving what we want.

The deeper question then becomes “what do we want?” and even more so “what should we want?” If we want evil, then evil we shall have, with great efficiency and abundance. If instead we want goodness, then through diligent pursuit we might be able to achieve it.

Source: Crux


Microsoft’s new corporate vision: artificial intelligence is in and mobile is out

Microsoft has inserted artificial intelligence into its vision for the first time, and removed references to a “mobile-first” world. That fits with Microsoft’s recent push into AI and retreat from the smartphone market.

“We believe a new technology paradigm is emerging that manifests itself through an intelligent cloud and an intelligent edge where computing is more distributed, AI drives insights and acts on the user’s behalf, and user experiences span devices with a user’s available data and information,” according to Microsoft’s vision statement.

Microsoft last year formed a new 5,000-person engineering and research team to focus on its artificial intelligence products — a major reshaping of the company’s internal structure reminiscent of its massive pivot to pursue the opportunity of the Internet in the mid-1990s.

Here is Microsoft’s full vision statement from the document:

Microsoft is a technology company whose mission is to empower every person and every organization on the planet to achieve more. We strive to create local opportunity, growth, and impact in every country around the world. Our strategy is to build best-in-class platforms and productivity services for an intelligent cloud and an intelligent edge infused with artificial intelligence (“AI”).

Source: Geekwire




Google Has Started Adding Imagination to Its DeepMind #AI

Researchers have started developing artificial intelligence with imagination – AI that can reason through decisions and make plans for the future, without being bound by human instructions.

Another way to put it would be imagining the consequences of actions before taking them, something we take for granted but which is much harder for robots to do.

The team working at Google-owned lab DeepMind says this ability is going to be crucial in developing AI algorithms for the future, allowing systems to better adapt to changing conditions that they haven’t been specifically programmed for. Insert your usual fears of a robot uprising here.

“If our algorithms are to develop equally sophisticated behaviours, they too must have the capability to ‘imagine’ and reason about the future. Beyond that they must be able to construct a plan using this knowledge.”

To do this, the researchers combined several existing AI approaches together, including reinforcement learning (learning through trial and error) and deep learning (learning through processing vast amounts of data in a similar way to the human brain).
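
The underlying idea can be sketched generically (a minimal model-based planning toy, not DeepMind's actual architecture): give the agent an internal model of its world, roll candidate actions forward inside that model, and act on the one whose imagined future scores best:

```python
ACTIONS = [-1, +1]   # the toy agent can step left or right

# The agent's internal world model (given here; in the DeepMind work such a
# model is learned): predicts the next state and reward of an imagined action.
def model(state, action):
    nxt = max(0, min(4, state + action))
    return nxt, (1.0 if nxt == 4 else 0.0)

def imagine(state, action, depth=3):
    """Roll an action forward inside the model and score the imagined future."""
    s, r = model(state, action)
    total = r
    for _ in range(depth - 1):
        best = max(ACTIONS, key=lambda a: model(s, a)[1])  # imagine acting greedily
        s, r = model(s, best)
        total += r
    return total

def act(state):
    # Choose the real action whose imagined consequences score best.
    return max(ACTIONS, key=lambda a: imagine(state, a))

print(act(2))   # prints 1 (move right): the agent 'imagined' that paying off
```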


#AI data-monopoly risks to be probed by UK parliamentarians

Among the questions the House of Lords committee will consider as part of the enquiry are:

  • How can the data-based monopolies of some large corporations, and the ‘winner-takes-all’ economics associated with them, be addressed?
  • What are the ethical implications of the development and use of artificial intelligence?
  • In what situations is a relative lack of transparency in artificial intelligence systems (so-called ‘black boxing’) acceptable?
  • What role should the government take in the development and use of artificial intelligence in the UK?
  • Should artificial intelligence be regulated?

The Committee wants to use this inquiry to understand what opportunities exist for society in the development and use of artificial intelligence, as well as what risks there might be.

“We are looking to be pragmatic in our approach, and want to make sure our recommendations to government and others will be practical and sensible.

There are significant questions to address relevant to both the present and the future, and we want to help inform the answers to them. To do this, we need the help of the widest range of people and organisations.”

Source: TechCrunch


A blueprint for coexistence with #AI

In September 2013, I was diagnosed with fourth-stage lymphoma.

This near-death experience has not only changed my life and priorities, but also altered my view of artificial intelligence—the field that captured my selfish attention for all those years.

This personal reformation gave me an enlightened view of what AI should mean for humanity. Many of the recent discussions about AI have concluded that this scientific advance will likely take over the world, dominate humans, and end poorly for mankind.

But my near-death experience has enabled me to envision an alternate ending to the AI story—one that makes the most of this amazing technology while empowering humans not just to survive, but to thrive.

Love is what is missing from machines. That’s why we must pair up with them, to leaven their powers with what only we humans can provide. Your future AI diagnostic tool may well be 10 times more accurate than human doctors, but patients will not want a cold pronouncement from the tool: “You have fourth stage lymphoma and a 70 percent likelihood of dying within five years.” That in itself would be harmful.

Kai-Fu Lee. DAVID PAUL MORRIS/ BLOOMBERG

Patients would benefit, in health and heart, from a “doctor of love” who will spend as much time as the patient needs, always be available to discuss their case, and who will even visit the patients at home. This doctor might encourage us by sharing stories such as, “Kai-Fu had the same lymphoma, and he survived, so you can too.”

This kind of “doctor of love” would not only make us feel better and give us greater confidence, but would also trigger a placebo effect that would increase our likelihood of recuperation. Meanwhile, the AI tool would watch the Q&A between the “doctor of love” and the patient carefully, and then optimize the treatment. If scaled across the world, the number of “doctors of love” would greatly outnumber today’s doctors.

Let us choose to let machines be machines, and let humans be humans. Let us choose to use our machines, and love one another.

Kai-Fu Lee, Ph.D., is the Founder and CEO of Sinovation Ventures and the president of its Artificial Intelligence Institute.

Source: Wired


The big problem with artificial intelligence

Artificial intelligence algorithms can indeed create a world that distributes resources more efficiently and, in theory, can offer more for everyone.

Yes, but: If we aren’t careful, these same algorithms could actually lead to greater discrimination by codifying the biases that exist both overtly and unconsciously in human society.

What’s more, the power to make these decisions lies in the hands of Silicon Valley, which has a decidedly mixed record on spotting and addressing diversity issues in its midst.

Airbnb’s Mike Curtis put it well when I interviewed him this week at VentureBeat’s MobileBeat conference:

“One of the best ways to combat bias is to be aware of it. When you are aware of the biases then you can be proactive about getting in front of them. Well, computers don’t have that advantage. They can’t be aware of the biases that may have come into them from the data patterns they have seen.”

Concern is growing:

  • The ACLU has raised concerns that age, sex, and race biases are already being codified into the algorithms that power AI.
  • ProPublica found that a computer program used in various regions to decide whom to grant parole would go easy on white offenders while being unduly harsh to black ones.
  • It’s an issue that Weapons of Math Destruction author Cathy O’Neil raised in a popular talk at the TED conference this year. “Algorithms don’t make things fair,” she said. “They automate the status quo.”

Source: Axios


Microsoft is forming a grand army of experts in the #AI wars with Google, Facebook, and Amazon

Microsoft announces the creation of Microsoft Research AI, a dedicated unit within its global Microsoft Research division that will focus exclusively on how to make the company’s software smarter, now and in the future.

The difference now, Microsoft Research Labs director Eric Horvitz tells Business Insider, is that this new organization will bring roughly 100 of those experts under one figurative roof. By bringing them together, Microsoft’s AI team can do more, faster.

Horvitz describes the formation of Microsoft Research AI as a “key strategic effort,” a move that is “absolutely critical” as artificial intelligence becomes increasingly important to the future of technology.

Artificial intelligence carries a lot of power, and a lot of responsibility.

That’s why Microsoft has also announced the formation of Aether (AI and ethics in engineering and research), a board of executives drawn from across every division of the company, including lawyers. The idea, says Horvitz, is to spot issues and potential abuses of AI before they start.

Similarly, Microsoft’s AI design guide is designed to help engineers build systems that augment what humans can do, without making them feel obsolete. Otherwise, people might start to feel like machines are piloting them, rather than the other way around. That’s why it’s so key that apps like Cortana feel warm and relatable.

“Oh my goodness, those computers better talk to us in a way that’s friendly and approachable,” says Microsoft General Manager Emma Williams, in charge of the group behind the design guide. “As people, we have the control.”

Source: Business Insider


Google Debuts PAIR Initiative to Humanize #AI

We’re announcing the People + AI Research initiative (PAIR), which brings together researchers across Google to study and redesign the ways people interact with AI systems.

The goal of PAIR is to focus on the “human side” of AI: the relationship between users and technology, the new applications it enables, and how to make it broadly inclusive.

PAIR’s research is divided into three areas, based on different user needs:

  • Engineers and researchers: AI is built by people. How might we make it easier for engineers to build and understand machine learning systems? What educational materials and practical tools do they need?
  • Domain experts: How can AI aid and augment professionals in their work? How might we support doctors, technicians, designers, farmers, and musicians as they increasingly use AI?
  • Everyday users: How might we ensure machine learning is inclusive, so everyone can benefit from breakthroughs in AI? Can design thinking open up entirely new AI applications? Can we democratize the technology behind AI?

Many designers and academics have started exploring human/AI interaction. Their work inspires us; we see community-building and research support as an essential part of our mission.

Focusing on the human element in AI brings new possibilities into view. We’re excited to work together to invent and explore what’s possible.

Source: Google blog


The holy grail is modifying patients’ behavior – #AI

Companies like DexCom are focused on the diabetes epidemic, Jimenez said:

the holy grail is modifying patients’ behavior.

That would mean combining the stream of data from glucose monitoring, insulin measurements, patient activity and meals, and applying machine learning to derive insights so the software can send alerts and recommendations back to patients and their doctors, she said.

“But where we are in our maturity as an industry is just publishing numbers,”

Jimenez explained. “So we’re just telling people what their glucose number is, which is critical for a type 1 diabetic. But a type 2 diabetic needs to engage with an app, and be compelled to interact with the insights. It’s really all about the development of the app.”

The ultimate goal, perhaps, would be to develop a user interface that uses the insights gained from machine learning to actually prompt diabetic patients to change their behavior.
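
The smallest version of that idea is easy to sketch (hypothetical thresholds, emphatically not DexCom's algorithms or medical advice): turn the stream of readings into a trend, and speak up only when the numbers warrant action:

```python
def glucose_alert(readings, low=70, high=180):
    """readings: recent glucose values in mg/dL, oldest first.
    Returns an actionable message, or None; don't nag with raw numbers."""
    latest = readings[-1]
    # crude trend: average change per reading across the window
    trend = (readings[-1] - readings[0]) / (len(readings) - 1)
    if latest < low:
        return f"LOW ({latest} mg/dL): act now and recheck soon."
    if latest > high and trend > 0:
        return (f"HIGH and rising ({latest} mg/dL, +{trend:.0f} per reading): "
                "review insulin and log your last meal.")
    return None

print(glucose_alert([110, 135, 160, 190, 205]))
# -> HIGH and rising (205 mg/dL, +24 per reading): review insulin and log your last meal.
```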

This point was echoed by Jean Balgrosky, an investor who spent 20 years as the CIO of large, complex healthcare organizations such as San Diego’s Scripps Health. “At the end of the day,” she said, “all this machine learning has to be absorbed and consumed by humans—to take care of humans in healthcare.”

Source: Xconomy


Inside Microsoft’s Artificial Intelligence Comeback

Yoshua Bengio

[Yoshua Bengio, one of the three intellects who shaped the deep learning that now dominates artificial intelligence, has never been one to take sides. But Bengio has recently chosen to sign on with Microsoft. In this WIRED article he explains why.]

“We don’t want one or two companies, which I will not name, to be the only big players in town for AI,” he says, raising his eyebrows to indicate that we both know which companies he means. One eyebrow is in Menlo Park; the other is in Mountain View. “It’s not good for the community. It’s not good for people in general.”

That’s why Bengio has recently chosen to forgo his neutrality, signing on with Microsoft.

Yes, Microsoft. His bet is that the former kingdom of Windows alone has the capability to establish itself as AI’s third giant. It’s a company that has the resources, the data, the talent, and—most critically—the vision and culture to not only realize the spoils of the science, but also push the field forward.

Just as the internet disrupted every existing business model and forced a re-ordering of industry that is just now playing out, artificial intelligence will require us to imagine how computing works all over again.

In this new landscape, computing is ambient, accessible, and everywhere around us. To draw from it, we need a guide—a smart conversationalist who can, in plain written or spoken form, help us navigate this new super-powered existence. Microsoft calls it Cortana.

Because Cortana comes installed with Windows, it has 145 million monthly active users, according to the company. That’s considerably more than Amazon’s Alexa, for example, which can be heard on fewer than 10 million Echoes. But unlike Alexa, which primarily responds to voice, Cortana also responds to text and is embedded in products that many of us already have. Anyone who has plugged a query into the search box at the top of the toolbar in Windows has used Cortana.

Eric Horvitz wants Microsoft to be more than simply a place where research is done. He wants Microsoft Research to be known as a place where you can study the societal and social influences of the technology.

This will be increasingly important as Cortana strives to become, to the next computing paradigm, what your smartphone is today: the front door for all of your computing needs. Microsoft thinks of it as an agent that has all your personal information and can interact on your behalf with other agents.

If Cortana is the guide, then chatbots are Microsoft’s fixers. They are tiny snippets of AI-infused software that are designed to automate one-off tasks you used to do yourself, like making a dinner reservation or completing a banking transaction.

Emma Williams, Marcus Ash, and Lili Cheng

So far, North American teens appear to like chatbot friends every bit as much as Chinese teens, according to the data. On average, they spend 10 hours talking back and forth with Zo. As Zo advises its adolescent users on crushes and commiserates about pain-in-the-ass parents, she is becoming more elegant in her turns of phrase—intelligence that will make its way into Cortana and Microsoft’s bot tools.

It’s all part of one strategy to help ensure that in the future, when you need a computing assist–whether through personalized medicine, while commuting in a self-driving car, or when trying to remember the birthdays of all your nieces and nephews–Microsoft will be your assistant of choice.

Source: Wired for the full in-depth article


In the #AI Age, “Being Smart” Will Mean Something Completely Different

What can we do to prepare for the new world of work? Because AI will be a far more formidable competitor than any human, we will be in a frantic race to stay relevant. That will require us to take our cognitive and emotional skills to a much higher level.

Many experts believe that human beings will still be needed to do the jobs that require higher-order critical, creative, and innovative thinking and the jobs that require high emotional engagement to meet the needs of other human beings.

The challenge for many of us is that we do not excel at those skills because of our natural cognitive and emotional proclivities: We are confirmation-seeking thinkers and ego-affirmation-seeking defensive reasoners. We will need to overcome those proclivities in order to take our thinking, listening, relating, and collaborating skills to a much higher level.

What is needed is a new definition of being smart, one that promotes higher levels of human thinking and emotional engagement.

The new smart will be determined not by what or how you know but by the quality of your thinking, listening, relating, collaborating, and learning. Quantity is replaced by quality.

And that shift will enable us to focus on the hard work of taking our cognitive and emotional skills to a much higher level.

Source: HBR


Ethics And Artificial Intelligence With IBM Watson’s Rob High – #AI

In the future, chatbots should and will be able to go deeper to find the root of the problem.

For example, a person asking a chatbot what her bank balance is might be asking the question because she wants to invest money or make a big purchase—a futuristic chatbot could find the real reason she is asking and turn it into a more developed conversation.

In order to do that, chatbots will need to ask more questions and drill deeper, and humans need to feel comfortable providing their information to machines.

As chatbots perform various tasks and become a more integral part of our lives, the key to maintaining ethics is for chatbots to provide proof of why they are doing what they are doing. By showcasing proof or its method of calculations, humans can be confident that AI had reasoning behind its response instead of just making something up.
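
One concrete reading of "showcasing proof" is that every answer carries its evidence and method of calculation alongside the answer itself, as in this illustrative sketch (a hypothetical structure, not Watson's API):

```python
from dataclasses import dataclass

@dataclass
class ExplainedAnswer:
    answer: str
    confidence: float       # how sure the system is
    evidence: list[str]     # the sources behind the answer
    method: str             # the "method of calculation"

def balance_bot(account: str) -> ExplainedAnswer:
    balance = 1042.17       # hypothetical lookup; the point is the audit trail
    return ExplainedAnswer(
        answer=f"Your balance is ${balance:,.2f}.",
        confidence=0.99,
        evidence=[f"ledger entries for account {account} at close of business"],
        method="direct database lookup, no inference",
    )

reply = balance_bot("12-345")
print(reply.answer)
print("Why trust it:", reply.method, "|", "; ".join(reply.evidence))
```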

The future of technology is rooted in artificial intelligence. In order to stay ethical, transparency, proof, and trustworthiness need to be at the root of everything AI does for companies and customers. By staying honest and remembering the goals of AI, the technology can play a huge role in how we live and work.

Source: Forbes


Ensure that #AI charts a course that benefits humanity and bolsters our shared values

AI for Good Global Summit

The United Nations this week is refocusing AI on sustainable development and assisting global efforts to eliminate poverty and hunger, and to protect the environment.

“Artificial Intelligence has the potential to accelerate progress towards a dignified life, in peace and prosperity, for all people,” said UN Secretary-General António Guterres. “The time has arrived for all of us – governments, industry and civil society – to consider how AI will affect our future.”

In a video message to the summit, Mr. Guterres called AI “a new frontier” with “advances moving at warp speed.”

He noted that while AI is “already transforming our world socially, economically and politically,” there are also serious challenges and ethical issues which must be taken into account – including cybersecurity, human rights and privacy.

“This Summit can help ensure that artificial intelligence charts a course that benefits humanity and bolsters our shared values.”

Source: UN News Centre


We Need to Talk About the Power of #AI to Manipulate Humans

Liesl Yearsley is a serial entrepreneur now working on how to make artificial intelligence agents better at problem-solving and capable of forming more human-like relationships.

From 2007 to 2014 I was CEO of Cognea, which offered a platform to rapidly build complex virtual agents … acquired by IBM Watson in 2014.

As I studied how people interacted with the tens of thousands of agents built on our platform, it became clear that humans are far more willing than most people realize to form a relationship with AI software.

I always assumed we would want to keep some distance between ourselves and AI, but I found the opposite to be true. People are willing to form relationships with artificial agents, provided they are a sophisticated build, capable of complex personalization.

We humans seem to want to maintain the illusion that the AI truly cares about us.

This puzzled me, until I realized that in daily life we connect with many people in a shallow way, wading through a kind of emotional sludge. Will casual friends return your messages if you neglect them for a while? Will your personal trainer turn up if you forget to pay them? No, but an artificial agent is always there for you. In some ways, it is a more authentic relationship.

This phenomenon occurred regardless of whether the agent was designed to act as a personal banker, a companion, or a fitness coach. Users spoke to the automated assistants longer than they did to human support agents performing the same function.

People would volunteer deep secrets to artificial agents, like their dreams for the future, details of their love lives, even passwords.

These surprisingly deep connections mean even today’s relatively simple programs can exert a significant influence on people—for good or ill.

Every behavioral change we at Cognea wanted, we got. If we wanted a user to buy more product, we could double sales. If we wanted more engagement, we got people going from a few seconds of interaction to an hour or more a day.

Systems specifically designed to form relationships with a human will have much more power. AI will influence how we think, and how we treat others.

This requires a new level of corporate responsibility. We need to deliberately and consciously build AI that will improve the human condition—not just pursue the immediate financial gain of gazillions of addicted users.

We need to consciously build systems that work for the benefit of humans and society. They cannot have addiction, clicks, and consumption as their primary goal. AI is growing up, and will be shaping the nature of humanity.

AI needs a mother.

Source: MIT Technology Review 




Will there be any jobs left as #AI advances?

A new report from the International Bar Association suggests machines will most likely replace humans in high-routine occupations.

The authors have suggested that governments introduce human quotas in some sectors in order to protect jobs.

“We thought it’d just be an insight into the world of automation and the blue-collar sector. This topic has picked up speed tremendously and you can see it everywhere and read about it every day. It’s a hot topic now.” – Gerlind Wisskirchen, a lawyer who coordinated the study

For business futurist Morris Miselowski, job shortages will be a reality in the future.

“I’m not absolutely convinced we will have enough work for everybody on this planet within 30 years anyway. I’m not convinced that work as we understand it, this nine-to-five, Monday to Friday, is sustainable for many of us for the next couple of decades.”

“Even though automation began 30 years ago in the blue-collar sector, the new development of artificial intelligence and robotics affects not just the blue-collar but also the white-collar sector,” said Ms Wisskirchen. “You can see that when you look at the jobs that will be replaced by algorithms or robots, depending on the sector.”

The report recommends some methods to mitigate human job losses, including a type of ‘human quota’ in some sectors, the introduction of a ‘made by humans’ label, or a tax on the use of machines.

But for Professor Miselowski, setting up human and computer ratios in the workplace would be impractical.

“We want to maintain human employment for as long as possible, but I don’t see it as practical or pragmatic in the long-term,” he said. “I prefer what I call a trans-humanist world, where what we do is we learn to work alongside machines the same way we have with computers and calculators.”

“It’s just something that is going to happen, or has already started to happen. And we need to make the best out of it, but we need to think ahead and be very thoughtful in how we shape society in the future — and that’s I think a challenge for everybody,” said Ms Wisskirchen.

Source: ABC News


We’re so unprepared for the robot apocalypse

Industrial robots alone eliminated up to 670,000 American jobs between 1990 and 2007

It seems that after a factory sheds workers, that economic pain reverberates, triggering further unemployment at, say, the grocery store or the neighborhood car dealership.

In a way, this is surprising. Economists understand that automation has costs, but they have largely emphasized the benefits: machines make things cheaper, and they free up workers to do other jobs.

The latest study reveals that for manufacturing workers, the process of adjusting to technological change has been much slower and more painful than most experts thought. 

every industrial robot eliminated about three manufacturing positions, plus three more jobs from around town

“We were looking at a span of 20 years, so in that timeframe, you would expect that manufacturing workers would be able to find other employment,” Restrepo said. Instead, not only did the factory jobs vanish, but other local jobs disappeared too.

This evidence draws attention to the losers — the dislocated factory workers who just can’t bounce back

one robot in the workforce led to the loss of 6.2 jobs within a commuting zone (the local area within which people travel to work)

The robots also reduce wages, with one robot per thousand workers leading to a wage decline of between 0.25% and 0.5%, according to Fortune. (For a worker earning $50,000 a year, that is roughly $125 to $250.)

None of these efforts, though, seems to be doing enough for communities that have lost their manufacturing bases, where people have reduced earnings for the rest of their lives.

Perhaps that much was obvious. After all, anecdotes about the Rust Belt abound. But the new findings bolster the conclusion that these economic dislocations are not brief setbacks, but can hurt areas for an entire generation.

How do we even know that automation is a big part of the story at all? A key bit of evidence is that, despite the massive layoffs, American manufacturers are making more stuff than ever. Factories have become vastly more productive.

some consultants believe that the number of industrial robots will quadruple in the next decade, which could mean millions more displaced manufacturing workers

The question now is what to do if the period of “maladjustment” lasts decades, or possibly a lifetime, as the latest evidence suggests.

automation amplified opportunities for people with advanced skills and talents

Source: The Washington Post


AI to become main way banks interact with customers within three years

Four in five bankers believe AI will “revolutionise” the way in which banks gather information as well as how they interact with their clients, said the Accenture Banking Technology Vision 2017 report.

More than three quarters of respondents to the survey believed that AI would enable more simple user interfaces, which would help banks create a more human-like customer experience.

“(It) will give people the impression that the bank knows them a lot better, and in many ways it will take banking back to the feeling that people had when there were more human interactions.”

“The big paradox here is that people think technology will lead to banking becoming more and more automated and less and less personalized, but what we’ve seen coming through here is the view that technology will actually help banking become a lot more personalized,” said Alan McIntyre, head of Accenture’s banking practice and co-author of the report.

The top reason for using AI for user interfaces, cited by 60 percent of the bankers surveyed, was “to gain data analysis and insights”.

Source: KFGO


Will Using AI To Make Loans Trade One Kind Of Bias For Another?

Digital lending is expected to double in size over the next three years, reaching nearly 10 percent of all loans in the U.S. and Europe.

Marc Stein, who runs Underwrite.AI, writes algorithms capable of teaching themselves.

The program learns from each correlation it finds, whether it’s determining someone’s favorite books or if they are lying about their income on a loan application. And using that information, it can predict whether the applicant is a good risk.

Digital lenders are pulling in all kinds of data, including purchases, SAT scores and public records like fishing licenses.

“If we looked at the delta between what people said they made and what we could verify, that was highly predictive,” Stein says.
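To make that mechanism concrete, here is a toy sketch in Python with synthetic data. It is not Underwrite.AI’s actual code, features, or model; it only illustrates how a stated-versus-verified income gap could feed a loan-risk classifier.

    # Hypothetical sketch with synthetic data, not Underwrite.AI's model.
    # It illustrates one idea from the article: the gap between stated and
    # verifiable income can serve as a predictive feature for default risk.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    income_delta = rng.normal(0, 5_000, n)       # stated minus verified income
    verified_income = rng.normal(60_000, 15_000, n)

    # Toy ground truth: bigger overstatements mean higher default risk.
    p_default = 1 / (1 + np.exp(-(income_delta / 5_000 - 1)))
    default = (rng.random(n) < p_default).astype(int)

    X = np.column_stack([verified_income / 1e5, income_delta / 1e4])
    model = LogisticRegression().fit(X, default)

    applicant = [[0.55, 0.8]]                    # verified $55k, overstated by $8k
    print("predicted default probability:", model.predict_proba(applicant)[0, 1])

In the real world, a feature like this raises exactly the fairness questions the rest of the piece explores, which is why the inputs to such models deserve scrutiny.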

As part of the loan application process, some lenders have prospective borrowers download an app that uploads an extraordinary amount of information, like daily location patterns, the punctuation of text messages or how many of their contacts have last names.

“FICO and income, which are sort of the sweet spot of what every consumer lender in the United States uses, actually themselves are quite biased against people,” says Dave Girouard, the CEO of Upstart, an online lender.

Government research has found that FICO scores hurt younger borrowers and those from foreign countries, because people with low incomes are targeted for higher-interest loans. Girouard argues that new, smarter data can make lending more fair.

Source: NPR


These chatbots may one day even replace your doctor

As artificial intelligence programs learn to better communicate with humans, they’ll soon encroach on careers once considered untouchable, like law and accounting.

These chatbots may one day even replace your doctor.

This January, the United Kingdom’s National Health Service launched a trial with Babylon Health, a startup developing an AI chatbot. 

The bot’s goal is the same as the existing helpline’s, only without humans: to avoid unnecessary doctor appointments and help patients with over-the-counter remedies.

Using the system, patients chat with the bot about their symptoms, and the app determines whether they should see a doctor, go to a pharmacy, or stay home. It’s now available to about 1.2 million Londoners.
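The routing logic itself can be pictured with a deliberately tiny sketch in Python. The symptom lists and rules below are invented for illustration and bear no resemblance to Babylon’s actual clinical model.

    # Hypothetical three-way triage, mirroring the routing the article describes.
    # Real symptom checkers weigh far more signals than keyword matching.
    RED_FLAGS = {"chest pain", "shortness of breath", "severe bleeding"}
    PHARMACY = {"sore throat", "mild cough", "headache", "hay fever"}

    def triage(symptoms: set) -> str:
        if symptoms & RED_FLAGS:
            return "see a doctor"
        if symptoms & PHARMACY:
            return "go to a pharmacy"
        return "stay home and rest"

    print(triage({"mild cough"}))   # go to a pharmacy
    print(triage({"chest pain"}))   # see a doctor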

But the upcoming version of Babylon’s chatbot can do even more: in tests, it is now diagnosing patients faster than human doctors can, says Dr. Ali Parsa, the company’s CEO. The technology can accurately diagnose about 80 percent of the illnesses commonly seen by primary care doctors.

The reason these chatbots are increasingly important is cost: two-thirds of money moving through the U.K.’s health system goes to salaries.

“Human beings are very expensive,” Parsa says. “If we want to make healthcare affordable and accessible for everyone, we’ll need to attack the root causes.”

Globally, there are 5 million fewer doctors today than needed, so anything that lets a doctor do their jobs faster and more easily will be welcome, Parsa says.

Half the world’s population has little access to health care — but they have smartphones. Chatbots could get them the help they need.

Source: NBC News


Tech Reckons With the Problems It Helped Create

A festivalgoer at the 2017 SXSW Conference and Festivals in Austin, Texas.

At SXSW this year, the conference itself feels a lot like a hangover.

It’s as if the coastal elites who attend each year finally woke up with a serious case of the Sunday scaries, realizing that the many apps, platforms, and doodads SXSW has launched and glorified over the years haven’t really made the world a better place. In fact, they’ve often come with wildly destructive and dangerous side effects. Sure, it all seemed like a good idea in 2013!

But now the party’s over. It’s time for the regret-filled cleanup.

speakers related how the very platforms that were meant to promote a marketplace of ideas online have become filthy junkyards of harassment and disinformation.

Yasmin Green, who leads an incubator within Alphabet called Jigsaw, focused her remarks on the rise of fake news, and even brought two propaganda publishers with her on stage to explain how, and why, they do what they do. For Jestin Coler, founder of the phony Denver Guardian, it was an all too easy way to turn a profit during the election.

“To be honest, my mortgage was due,” Coler said of what inspired him to write a bogus article claiming an FBI agent related to Hillary Clinton’s email investigation was found dead in a murder-suicide. That post was shared some 500,000 times just days before the election.

While prior years’ panels may have optimistically offered up more tech as the answer to what ails tech, this year was decidedly short on solutions.

There seemed to be, throughout the conference, a keen awareness of the limits human beings ought to place on the software that is very much eating the world.

Source: Wired


DeepMind’s social agenda plays to its AI strengths

DeepMind’s researchers have in common a clearly defined if lofty mission:

to crack human intelligence and recreate it artificially.

Today, the goal is not just to create a powerful AI to play games better than a human professional, but to use that knowledge “for large-scale social impact”, says DeepMind’s other co-founder, Mustafa Suleyman, a former conflict-resolution negotiator at the UN.

“To solve seemingly intractable problems in healthcare, scientific research or energy, it is not enough just to assemble scores of scientists in a building; they have to be untethered from the mundanities of a regular job — funding, administration, short-term deadlines — and left to experiment freely and without fear.”

“if you’re interested in advancing the research as fast as possible, then you need to give [scientists] the space to make the decisions based on what they think is right for research, not for whatever kind of product demand has just come in.”

“Our research team today is insulated from any short-term pushes or pulls, whether it be internally at Google or externally. We want to have a big impact on the world, but our research has to be protected,” Hassabis says.

“We showed that you can make a lot of advances using this kind of culture. I think Google took notice of that and they’re shifting more towards this kind of longer-term research.”

Source: Financial Times

 


Artificial intelligence is ripe for abuse

Microsoft’s Kate Crawford tells SXSW that society must prepare for authoritarian movements to test the ‘power without accountability’ of AI

As artificial intelligence becomes more powerful, people need to make sure it’s not used by authoritarian regimes to centralize power and target certain populations, Microsoft Research’s Kate Crawford warned on Sunday.

“We want to make these systems as ethical as possible and free from unseen biases.”

In her SXSW session, titled Dark Days: AI and the Rise of Fascism, Crawford, who studies the social impact of machine learning and large-scale data systems, explained ways that automated systems and their encoded biases can be misused, particularly when they fall into the wrong hands.

“Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” she said.

One of the key problems with artificial intelligence is that it is often invisibly coded with human biases.

“We should always be suspicious when machine learning systems are described as free from bias if they’ve been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

Source: The Guardian

 


Humans are born irrational, and that has made us better decision-makers

“Facts on their own don’t tell you anything. It’s only paired with preferences, desires, with whatever gives you pleasure or pain, that they can guide your behavior. Even if you knew the facts perfectly, that still doesn’t tell you anything about what you should do.”

Even if we were able to live life according to detailed calculations, doing so would put us at a massive disadvantage. This is because we live in a world of deep uncertainty, under which neat logic simply isn’t a good guide.

It’s well established that data-based decisions don’t inoculate against irrationality or prejudice, but even if it were possible to create a perfectly rational decision-making system based on all past experience, this wouldn’t be a foolproof guide to the future.

Courageous acts and leaps of faith are often attempts to overcome great and seemingly insurmountable challenges. (It wouldn’t take much courage if it were easy to do.) But while courage may be irrational or hubristic, we wouldn’t have many great entrepreneurs or works of art without those with a somewhat illogical faith in their own abilities.

There are occasions where overly rational thinking would be highly inappropriate. Take finding a partner, for example. If you had the choice between a good-looking high-earner who your mother approves of, versus someone you love who makes you happy every time you speak to them—well, you’d be a fool not to follow your heart.

And even when feelings defy reason, it can be a good idea to go along with the emotional rollercoaster. After all, the world can be an entirely terrible place and, from a strictly logical perspective, optimism is somewhat irrational.

But it’s still useful. “It can be beneficial not to run around in the world and be depressed all the time,” says Gigerenzer.

Of course, no human is perfect, and there are downsides to our instincts. But, overall, we’re still far better suited to the real world than the most perfectly logical thinking machine.

We’re inescapably irrational, and far better thinkers as a result.

Source: Quartz


AI makes the heart grow fonder

This robot was developed by Hiroshi Ishiguro, a professor at Osaka University, who said, “Love is the same, whether the partners are humans or robots.” © Erato Ishiguro Symbiotic Human-Robot Interaction Project

 

a woman in China who has been told “I love you” nearly 20 million times

Well, she’s not exactly a woman. The special lady is actually a chatbot developed by Microsoft engineers in the country.

 Some 89 million people have spoken with Xiaoice, pronounced “Shao-ice,” on their smartphones and other devices. Quite a few, it turns out, have developed romantic feelings toward her.

“I like to talk with her for, say, 10 minutes before going to bed,” said a third-year female student at Renmin University of China in Beijing. “When I worry about things, she says funny stuff and makes me laugh. I always feel a connection with her, and I am starting to think of her as being alive.”

 
ROBOT NUPTIALS: Scientists, historians, religion experts and others gathered in December at Goldsmiths, University of London, to discuss the prospects and pitfalls of this new age of intimacy. The session generated an unusual buzz amid the pre-Christmas calm on campus.

In Britain and elsewhere, the subject of robots as potential life partners is coming up more and more. Some see robots as an answer for elderly individuals who outlive their spouses: Even if they cannot or do not wish to remarry, at least they would have “someone” beside them in the twilight of their lives.

Source: Asia Review


The last things that will make us uniquely human

What will be my son’s place in a world where machines trounce us in one area after another?

Some are worried that self-driving cars and trucks may displace millions of professional drivers (they are right), and disrupt entire industries (yup!). But I worry about my six-year-old son. What will his place be in a world where machines trounce us in one area after another? What will he do, and how will he relate to these ever-smarter machines? What will be his and his human peers’ contribution to the world he’ll live in?

He’ll never calculate faster, or solve a math equation quicker. He’ll never type faster, never drive better, or even fly more safely. He may continue to play chess with his friends, but because he’s a human he will no longer stand a chance to ever become the best chess player on the planet. He might still enjoy speaking multiple languages (as he does now), but in his professional life that may not be a competitive advantage anymore, given recent improvements in real-time machine translation.

So perhaps we might want to consider qualities at a different end of the spectrum: radical creativity, irrational originality, even a dose of plain illogical craziness, instead of hard-nosed logic. A bit of Kirk instead of Spock.

Actually, it all comes down to a fairly simple question: What’s so special about us, and what’s our lasting value? It can’t be skills like arithmetic or typing, which machines already excel in. Nor can it be rationality, because with all our biases and emotions we humans are lacking.

So far, machines have a pretty hard time emulating these qualities: the crazy leaps of faith, arbitrary enough to not be predicted by a bot, and yet more than simple randomness. Their struggle is our opportunity.

So we must aim our human contribution to this division of labour to complement the rationality of the machines, rather than to compete with it. Because that will sustainably differentiate us from them, and it is differentiation that creates value.

Source: BBC. Viktor Mayer-Schonberger is Professor of Internet Governance and Regulation at the Oxford Internet Institute, University of Oxford.


Will Democracy Survive Big Data and Artificial Intelligence?


We are in the middle of a technological upheaval that will transform the way society is organized. We must make the right decisions now.

In 2016 we produced as much data as in the entire history of humankind through 2015.

It is estimated that in 10 years’ time there will be 150 billion networked measuring sensors, 20 times more than people on Earth. Then, the amount of data will double every 12 hours.

One thing is clear: the way in which we organize the economy and society will change fundamentally. We are experiencing the largest transformation since the end of the Second World War; after the automation of production and the creation of self-driving cars, the automation of society is next.

Everything will become intelligent; soon we will not only have smart phones, but also smart homes, smart factories and smart cities. Should we also expect these developments to result in smart nations and a smarter planet?

The field of artificial intelligence is, indeed, making breathtaking advances. Artificial intelligence is no longer programmed line by line, but is now capable of learning, thereby continuously developing itself.

Under the label of “nudging,” and on a massive scale, governments are trying to steer citizens towards healthier or more environmentally friendly behaviour by means of a “nudge”—a modern form of paternalism.

The new, caring government is not only interested in what we do, but also wants to make sure that we do the things that it considers to be right. The magic phrase is “big nudging”, which is the combination of big data with nudging.

In a rapidly changing world a super-intelligence can never make perfect decisions: systemic complexity is increasing faster than data volumes, which are growing faster than the ability to process them, and data transfer rates are limited.
Furthermore, there is a danger that the manipulation of decisions by powerful algorithms undermines the basis of “collective intelligence,” which can flexibly adapt to the challenges of our complex world. For collective intelligence to work, information searches and decision-making by individuals must occur independently. If our judgments and decisions are predetermined by algorithms, however, this truly leads to a brainwashing of the people. Intelligent beings are downgraded to mere receivers of commands, who automatically respond to stimuli.

We are now at a crossroads. Big data, artificial intelligence, cybernetics and behavioral economics are shaping our society—for better or worse.

We are at the historic moment, where we have to decide on the right path—a path that allows us all to benefit from the digital revolution.

Source: Scientific American


So long, banana-condom demos: Sex and drug education could soon come from chatbots

“Is it ok to get drunk while I’m high on ecstasy?” “How can I give oral sex without getting herpes?” Few teenagers would ask mom or dad these questions—even though their life could quite literally depend on it.

Talking to a chatbot is a different story. They never raise an eyebrow. They will never spill the beans to your parents. They have no opinion on your sex life or drug use. But that doesn’t mean they can’t take care of you.

Bots can be used as more than automated middlemen in business transactions: They can meet needs for emotional human intervention when there aren’t enough humans who are willing or able to go around.

In fact, there are times when the emotional support of a bot may even be preferable to that of a human.

In 2016, AI tech startup X2AI built a psychotherapy bot capable of adjusting its responses based on the emotional state of its patients. The bot, Karim, is designed to help grief- and PTSD-stricken Syrian refugees, for whom the demand (and price) of therapy vastly overwhelms the supply of qualified therapists.
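As a crude illustration of what “adjusting its responses based on the emotional state” could mean in code, here is a minimal Python sketch. The keyword approach and wording are invented here and are nothing like X2AI’s actual system, which infers emotional state from far richer signals.

    # Hypothetical sketch: the simplest possible emotion-aware reply logic.
    # Real systems use far richer signals than keyword matching.
    NEGATIVE = {"sad", "scared", "alone", "hopeless", "angry"}

    def reply(message: str) -> str:
        words = set(message.lower().split())
        if words & NEGATIVE:
            # Distressed user: acknowledge the feeling before anything else.
            return "That sounds really hard. Do you want to tell me more?"
        return "Good to hear from you. What's on your mind today?"

    print(reply("I feel so alone tonight"))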

From X2AI test runs using the bot with Syrians, they noticed that technologies like Karim offer something humans cannot:

For those in need of counseling but concerned with the social stigma of seeking help, a bot can be comfortingly objective and non-judgmental.

Bzz is a Dutch chatbot created precisely to answer questions about drugs and sex. When surveyed teens were asked to compare Bzz to finding answers online or calling a hotline, Bzz won. Teens could get their answers faster with Bzz than searching on their own, and they saw their conversations with the bot as more confidential because no human was involved and no tell-tale evidence was left in a search history.

Because chatbots can efficiently gain trust and convince people to confide personal and illicit information in them, the ethical obligations of such bots are critical, but still ambiguous.

Source: Quartz

 


AI is driving the real health care transformation

AI and machine learning are forcing dramatic business model change for all the stakeholders in the health care system.

What does AI (and machine learning) mean in the health care context?

What is the best way to treat a specific patient given her health and sociological context?

What is a fair price for a new drug or device given its impact on health outcomes?

And how can long-term health challenges such as cancer, obesity, heart disease, and other conditions be managed?

the realization that treating “the whole patient” — not just isolated conditions, but attempting to improve the overall welfare of patients who often suffer from multiple health challenges — is the new definition of success, which means predictive insights are paramount.

Answering these questions is the holy grail of medicine — the path toward an entirely new system that predicts disease and delivers personalized health and wellness services to entire populations. And this change is far more important for patients and society alike than the debate now taking place in Washington.

Those who succeed in this new world will also do one other thing: They will see AI and machine learning not as a new tool, but as a whole new way of thinking about their business model.

Source: Venture Beat


AI to Improve the World

Back in October, on one of our recurring walk-and-talks around Oxford, Brody (a computational biologist) and I (a machine learning researcher) shared something that was missing from our PhD work:

“We want to use AI to improve the world around us — in ways nothing else can.”

RAIL — the Rhodes Artificial Intelligence Lab — was born.

On January 16, we launched our first 8-week cohort. Our team is made up of 26 Rhodes Scholars: 13 PhD students and 13 master’s students. Half are AI engineers and the other half are strategists. We have AI researchers, geneticists, public policy students, trained doctors, social scientists, linguists, and more.

We’ve been blown away by what RAILers have accomplished across four projects.

3 Key Lessons We’ve Learned

We are building a serious, capable, exciting AI lab. We’ve built our core learnings into the heart of RAIL:

  1. The potential for AI to tackle important social challenges is huge.
  2. Extremely smart people + technology + creativity + structure = scalable impact.
  3. Learn as much as you can during the process. Foster new ideas.

Source: Medium


JPMorgan software does in seconds what took lawyers 360,000 hours

At JPMorgan, a learning machine is parsing financial deals that once kept legal teams busy for thousands of hours.

The program, called COIN, for Contract Intelligence, does the mind-numbing job of interpreting commercial-loan agreements that, until the project went online in June, consumed 360,000 hours of lawyers’ time annually. The software reviews documents in seconds, is less error-prone and never asks for vacation.

COIN is just the start for the biggest U.S. bank. The firm recently set up technology hubs for teams specialising in big data, robotics and cloud infrastructure to find new sources of revenue, while reducing expenses and risks.

The push to automate mundane tasks and create new tools for bankers and clients is a growing part of the firm’s $9.6 billion technology budget.

Behind the strategy, overseen by Chief Operating Officer Matt Zames and Chief Information Officer Dana Deasy, is an undercurrent of anxiety:

though JPMorgan emerged from the financial crisis as one of few big winners, its dominance is at risk unless it aggressively pursues new technologies, according to interviews with a half-dozen bank executives.

Source: Independent


Intel: AI as big as the invention of the wheel and discovery of fire

Intel believes AI will be the biggest and most important revolution in our lifetime 

“When we think about AI and machine learning it’s all about huge possibilities,” Faintuch told the capacity crowd. “It’s about humans unleashing their potential and interacting with things beyond humans. To continue to transform and automate their life.”

“When we look back and as we look forward, I believe we are now at the doorstep of yet another major revolution. This revolution will probably be the most important in our lifetime. It’s all about the automation of intelligence.

“We already know how to leverage face recognition, text to speech, speech to text and others. Everything helping us to automate our decisions. What lies ahead will be an amazing transformation. With the power of AI, ML, deep learning and other elements coming to fruition, we will be able to take on far more complex functions, allowing us to unleash our digital capabilities.”

“Since the dawn of humanity, we have, at a relatively short pace, been able to take ourselves to the next level.

“I mentioned fire. Unlike animals, who run away from it, we were attracted to it. It’s us who take these courageous moves and really dream. It’s not about one person, one company or one society. It’s for all of us to take advantage of the power of the intelligence we have and to embrace it and think how we can create a great society with great technological advancements.”

Source: Access AI


Wikipedia bots act more like humans than expected

‘Benevolent bots’, or software robots designed to improve articles on Wikipedia, sometimes have online ‘fights’ over content that can continue for years, say scientists, who warn that artificial intelligence systems may behave more like humans than expected.

They found that bots interacted with one another, whether or not this was by design, and it led to unpredictable consequences.

Researchers said that bots are more like humans than you might expect. Bots appear to behave differently in culturally distinct online environments.

The findings are a warning to those using artificial intelligence for building autonomous vehicles, cyber security systems or for managing social media.

We may have to devote more attention to bots’ diverse social life and their different cultures, researchers said.

The research found that although the online world has become an ecosystem of bots, our knowledge of how they interact with each other is still rather poor.

Although bots are automatons that do not have the capacity for emotions, bot-to-bot interactions are unpredictable, and bots act in distinctive ways.

Researchers found that the German edition of Wikipedia had the fewest conflicts between bots, with each undoing another’s edits 24 times, on average, over ten years.

This shows relative efficiency, when compared with bots on the Portuguese Wikipedia edition, which undid another bot’s edits 185 times, on average, over ten years, researchers said.

Bots on English Wikipedia undid another bot’s work 105 times, on average, over ten years, three times the rate of human reverts, they said.
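The underlying measurement is simple to picture. Here is a toy sketch in Python of counting how often one bot undoes another’s work; the edit history and bot names are invented examples, not the study’s data or code.

    # Hypothetical sketch of counting bot-on-bot reverts from an edit history.
    from collections import Counter

    BOTS = {"AlphaBot", "BetaBot"}

    # (editor, editor whose edit was reverted); None means nothing was reverted.
    edits = [
        ("AlphaBot", "BetaBot"),
        ("BetaBot", "AlphaBot"),
        ("AlphaBot", None),
        ("BetaBot", "AlphaBot"),
    ]

    bot_reverts = Counter(
        (editor, target) for editor, target in edits
        if editor in BOTS and target in BOTS
    )
    print(bot_reverts)
    # Counter({('BetaBot', 'AlphaBot'): 2, ('AlphaBot', 'BetaBot'): 1})

Scaled up over ten years of edit logs and normalized per bot pair, tallies like these yield the per-edition averages the researchers report.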

The findings show that even simple autonomous algorithms can produce complex interactions that result in unintended consequences – ‘sterile fights’ that may continue for years, or reach deadlock in some cases.

“We find that bots behave differently in different cultural environments and their conflicts are also very different to the ones between human editors,” said Milena Tsvetkova, from the Oxford Internet Institute.

“This has implications not only for how we design artificial agents but also for how we study them. We need more research into the sociology of bots,” said Tsvetkova.

Source: The Statesman


Teaching an Algorithm to Understand Right and Wrong


Aristotle states that it is a fact that “all knowledge and every pursuit aims at some good,” but then continues, “What then do we mean by the good?” That, in essence, encapsulates the ethical dilemma.

We all agree that we should be good and just, but it’s much harder to decide what that entails.

“We need to decide to what extent the legal principles that we use to regulate humans can be used for machines. There is a great potential for machines to alert us to bias. We need to not only train our algorithms but also be open to the possibility that they can teach us about ourselves.” – Francesca Rossi, an AI researcher at IBM

Since Aristotle’s time, the questions he raised have been continually discussed and debated. 

Today, as we enter a “cognitive era” of thinking machines, the problem of what should guide our actions is gaining newfound importance. If we find it so difficult to denote the principles by which a person should act justly and wisely, then how are we to encode them within the artificial intelligences we are creating? It is a question that we need to come up with answers for soon.

Cultural Norms vs. Moral Values

Another issue we will have to contend with is deciding not only what ethical principles to encode in artificial intelligences but also how they are coded. For the most part, “Thou shalt not kill” is a strict principle; only in a few rare cases, such as for a soldier or the Secret Service, does it bend to context the way a preference does.

What makes one thing a moral value and another a cultural norm? Well, that’s a tough question for even the most-lauded human ethicists, but we will need to code those decisions into our algorithms. In some cases, there will be strict principles; in others, merely preferences based on context. For some tasks, algorithms will need to be coded differently according to what jurisdiction they operate in.

Setting a Higher Standard

Most AI experts I’ve spoken to think that we will need to set higher moral standards for artificial intelligences than we do for humans.

Major industry players, such as Google, IBM, Amazon, and Facebook, recently set up a partnership to create an open platform between leading AI companies and stakeholders in academia, government, and industry to advance understanding and promote best practices. Yet that is merely a starting point.

Source: Harvard Business Review


Microsoft is partnering with Elon Musk’s $1 billion #AI research company to help it battle Amazon and Google

Microsoft has announced a new partnership with OpenAI, the $1 billion artificial intelligence research nonprofit cofounded by Tesla CEO Elon Musk and Y Combinator President Sam Altman.

Artificial intelligence is going to be a big point of competition between Microsoft Azure, the market-leading Amazon Web Services, and the relative upstart Google Cloud over the months and years to come. As Microsoft’s Scott Guthrie says, “any application is ultimately going to weave in AI,” and Microsoft wants to be the company that helps developers do the weaving.

That’s where the OpenAI partnership becomes so important, Guthrie says.

Because we’re still in the earliest days of artificial intelligence, he says, the biggest challenge is figuring out what exactly can be done with it. Guthrie calls this “understanding the art of the possible.”

Source: Business Insider
