The Foundation we've built for AI-Human Collaboration


Context Couple Teaser 2024

Contextual Reasoning as an essential human life-skill in our increasingly complex world!

And this is a skill that AI CANNOT perform
but DESPERATELY NEEDS!

Phil and Pam, as the Context Couple, will start a series of short vlogs where they will explain and show what contextual reasoning is and, more importantly, how to develop and practice this essential life skill.

And they will explain and show how, by using contextual reasoning as a base, humans and AI can collaborate. 

Please follow us on this exciting journey.

 

 


Socializing AI is here?

Much appreciation for the acknowledgements in Joel Janhonen's paper, published November 21, 2023, in the Springer journal "AI and Ethics."


A Hippocratic Oath for artificial intelligence practitioners


In the foreword to Microsoft's recent book, The Future Computed, executives Brad Smith and Harry Shum proposed that Artificial Intelligence (AI) practitioners highlight their ethical commitments by taking an oath analogous to the Hippocratic Oath sworn by doctors for generations.

In the past, much power and responsibility over life and death was concentrated in the hands of doctors.

Now, this ethical burden is increasingly shared by the builders of AI software.

Future AI advances in medicine, transportation, manufacturing, robotics, simulation, augmented reality, virtual reality, and military applications dictate that AI be developed from a higher moral ground today.

In response, I (Oren Etzioni) edited the modern version of the medical oath to address the key ethical challenges that AI researchers and engineers face …

The oath is as follows:

I swear to fulfill, to the best of my ability and judgment, this covenant:

I will respect the hard-won scientific gains of those scientists and engineers in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow.

I will apply, for the benefit of humanity, all measures required, avoiding those twin traps of over-optimism and uninformed pessimism.

I will remember that there is an art to AI as well as science, and that human concerns outweigh technological ones.

Most especially must I tread with care in matters of life and death. If it is given me to save a life using AI, all thanks. But it may also be within AI’s power to take a life; this awesome responsibility must be faced with great humbleness and awareness of my own frailty and the limitations of AI. Above all, I must not play at God nor let my technology do so.

I will respect the privacy of humans, for their personal data are not disclosed to AI systems so that the world may know.

I will consider the impact of my work on fairness both in perpetuating historical biases, which is caused by the blind extrapolation from past data to future predictions, and in creating new conditions that increase economic or other inequality.

My AI will prevent harm whenever it can, for prevention is preferable to cure.

My AI will seek to collaborate with people for the greater good, rather than usurp the human role and supplant them.

I will remember that I am not encountering dry data, mere zeros and ones, but human beings, whose interactions with my AI software may affect the person’s freedom, family, or economic stability. My responsibility includes these related problems.

I will remember that I remain a member of society, with special obligations to all my fellow human beings.

Source: TechCrunch – Oren Etzioni


This is an opportunity for me to correct a wrong – Center for Humane Technology

Early Facebook and Google Employees Form Coalition to Fight What They Built

Jim Steyer, left, and Tristan Harris in Common Sense's headquarters. Common Sense is helping fund The Truth About Tech campaign. Peter Prato for The New York Times

A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, is banding together to challenge the companies they helped build.

The cohort is creating a union of concerned experts called the Center for Humane Technology. Along with the nonprofit media watchdog group Common Sense Media, it also plans an anti-tech addiction lobbying effort and an ad campaign at 55,000 public schools in the United States.

The campaign is titled The Truth About Tech.

“We were on the inside,” said Tristan Harris, a former in-house ethicist at Google who is heading the new group. “We know what the companies measure. We know how they talk, and we know how the engineering works.”

It is an unprecedented alliance of former employees of some of today's biggest tech companies. Apart from Mr. Harris, the center includes Sandy Parakilas, a former Facebook operations manager; Lynn Fox, a former Apple and Google communications executive; Dave Morin, a former Facebook executive; Justin Rosenstein, who created Facebook's Like button and is a co-founder of Asana; Roger McNamee, an early investor in Facebook; and Renée DiResta, a technologist who studies bots.

 

“Facebook appeals to your lizard brain — primarily fear and anger. And with smartphones, they’ve got you for every waking moment. This is an opportunity for me to correct a wrong.” Roger McNamee, an early investor in Facebook

Source: NYT




In 2018 AI will gain a moral compass

The ethics of artificial intelligence must be central to its development

Janne Iivonen

Humanity faces a wide range of challenges that are characterised by extreme complexity.

… the successful integration of AI technologies into our social and economic world creates its own challenges. They could either help overcome economic inequality or they could worsen it if the benefits are not distributed widely.

They could shine a light on damaging human biases and help society address them, or entrench patterns of discrimination and perpetuate them. Getting things right requires serious research into the social consequences of AI and the creation of partnerships to ensure it works for the public good.

This is why I predict the study of the ethics, safety and societal impact of AI is going to become one of the most pressing areas of enquiry over the coming year.

It won’t be easy: the technology sector often falls into reductionist ways of thinking, replacing complex value judgments with a focus on simple metrics that can be tracked and optimised over time.

There has already been valuable work done in this area. For example, there is an emerging consensus that it is the responsibility of those developing new technologies to help address the effects of inequality, injustice and bias. In 2018, we’re going to see many more groups start to address these issues.

Of course, it’s far simpler to count likes than to understand what it actually means to be liked and the effect this has on confidence or self-esteem.

Progress in this area also requires the creation of new mechanisms for decision-making and voicing that include the public directly. This would be a radical change for a sector that has often preferred to resolve problems unilaterally – or leave others to deal with them.

Mustafa Suleyman, co-founder of DeepMind Technologies

We need to do the hard, practical and messy work of finding out what ethical AI really means. If we manage to get AI to work for people and the planet, then the effects could be transformational. Right now, there’s everything to play for.

Source: Wired 

 


How Artificial Intelligence is different from human reasoning

Human decision tree for deciding whether to talk to "Bob"

You see a man walking toward you on the street. He reminds you of someone from long ago, such as a high school classmate who belonged to the football team. He wasn't a great player, but you were fond of him then. You don't recall him attending the fifth, 10th, or 20th reunions. He must have moved away, established his life there, and cut off his ties to his friends here.

You look at his face and you really can’t tell if it’s Bob for sure. You had forgotten many of his key features and this man seems to have gained some weight.

The distance between the two of you is quickly closing and your mind is running at full speed trying to decide if it is Bob.

At this moment, you have a few choices. A decision tree will emerge and you will need to choose one of the available options.

In the logic diagram I show, some of the questions are influenced by emotion. B2 ("Nah, let's forget it"), C, and D are the results of emotional decisions and have little to do with whether this man is in fact Bob or not.

The human decision-making process is often influenced by emotion, which is often independent of fact.

Your decision to drop the idea of meeting Bob after so many years is caused by shyness, laziness, and/or avoiding some embarrassment in case this man is not Bob. The more you think about this decision-making process, the less sure you become. After all, if you and Bob hadn't spoken for 20 years, maybe you should leave the whole thing alone.

Thus, this is clearly the result of human intelligence working.

If this were artificial intelligence, chances are decisions B2, C, and D wouldn't happen. Machines today, at their infantile stage of development, do not know such emotional feelings as "too much trouble," hesitation due to fear of failing (Bob says he isn't Bob), laziness, or "too complicated." In some distant time, these complex feelings and deeds driven by emotion may be realized, I hope. But not now.

At this point in the state of the art of AI, a machine would not hesitate once it makes a decision. That's because it cannot hesitate. Hesitation is a complex emotional decision that a machine simply cannot perform.
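
To make the contrast concrete, here is a minimal sketch, in Python, of the kind of decision tree described above. Only the label and wording of option B2 come from the article; the contents of the other options, the confidence scores, and the thresholds are hypothetical, added only to show how the emotional branches (B2, C, D) differ from the purely evidence-based choice a machine would make.

```python
# A hypothetical sketch of the "is that Bob?" decision tree described above.
# Branches marked emotional=True are the ones (B2, C, D) that, per the article,
# a present-day AI would never take, because they hinge on feelings such as
# shyness, laziness, or fear of embarrassment rather than on the evidence.

from dataclasses import dataclass

@dataclass
class Option:
    label: str        # e.g. "B2"
    action: str       # what you would do
    emotional: bool   # driven by emotion rather than by evidence?

# Hypothetical options; only B2's wording comes from the article's diagram.
OPTIONS = [
    Option("A",  "Walk up and ask: 'Excuse me, are you Bob?'", emotional=False),
    Option("B1", "Keep studying his face for a feature you remember", emotional=False),
    Option("B2", "\"Nah, let's forget it\" - avoid the encounter", emotional=True),
    Option("C",  "Look away to avoid embarrassment in case it isn't Bob", emotional=True),
    Option("D",  "Decide it's too much trouble after 20 years and walk on", emotional=True),
]

def machine_choice(match_confidence: float, threshold: float = 0.5) -> Option:
    """The machine of the article: it weighs only the evidence, never hesitates,
    and never lands on an emotional branch."""
    factual = [o for o in OPTIONS if not o.emotional]
    return factual[0] if match_confidence >= threshold else factual[1]

def human_choice(match_confidence: float, shyness: float) -> Option:
    """A human: the same evidence, but emotion can override it."""
    if shyness > 0.7:                     # emotion wins regardless of the evidence
        return OPTIONS[2]                 # B2: "Nah, let's forget it"
    return machine_choice(match_confidence)

if __name__ == "__main__":
    print("Machine:", machine_choice(0.6).label)     # -> A
    print("Human:  ", human_choice(0.6, 0.9).label)  # -> B2, on the same evidence
```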

There you see a huge crevice between human intelligence and AI.

In fact, animals (remember, we are animals too) display complex emotional decisions daily. Now, are you getting some feeling for the difference between human intelligence and AI?

Source: Fosters.com

Shintaro “Sam” Asano was named by the Massachusetts Institute of Technology in 2011 as one of the 10 most influential inventors of the 20th century who improved our lives. He is a businessman and inventor in the field of electronics and mechanical systems who is credited as the inventor of the portable fax machine.




Jaron Lanier – the greatest tragedy in the history of computing and …

A few highlights from THE BUSINESS INSIDER INTERVIEW with Jaron

But that general principle — that we’re not treating people well enough with digital systems — still bothers me. I do still think that is very true.

Well, this is maybe the greatest tragedy in the history of computing, and it goes like this: there was a well-intentioned, sweet movement in the ‘80s to try to make everything online free. And it started with free software and then it was free music, free news, and other free services.

But, at the same time, it’s not like people were clamoring for the government to do it or some sort of socialist solution. If you say, well, we want to have entrepreneurship and capitalism, but we also want it to be free, those two things are somewhat in conflict, and there’s only one way to bridge that gap, and it’s through the advertising model.

And advertising became the model of online information, which is kind of crazy. But here’s the problem: if you start out with advertising, if you start out by saying what I’m going to do is place an ad for a car or whatever, gradually, not because of any evil plan — just because they’re trying to make their algorithms work as well as possible and maximize their shareholders value and because computers are getting faster and faster and more effective algorithms —

what starts out as advertising morphs into behavior modification.

A second issue is that for people who participate in a system of this type, since everything is free and it's all being monetized, what reward can you get? Ultimately, this system creates assholes, because if being an asshole gets you attention, that's exactly what you're going to do. Because there's a bias for negative emotions to work better in engagement, and because the attention economy brings out the asshole in a lot of other people, the people who want to disrupt and destroy get a lot more efficiency for their spend than the people who might be trying to build up and preserve and improve.

Q: What do you think about programmers using consciously addicting techniques to keep people hooked to their products?

A: There’s a long and interesting history that goes back to the 19th century, with the science of Behaviorism that arose to study living things as though they were machines.

Behaviorists had this feeling that I think might be a little like this godlike feeling that overcomes some hackers these days, where they feel totally godlike as though they have the keys to everything and can control people.


I think our responsibility as engineers is to engineer as well as possible, and to engineer as well as possible, you have to treat the thing you’re engineering as a product.

You can’t respect it in a deified way.

It goes in the reverse. We’ve been talking about the behaviorist approach to people, and manipulating people with addictive loops as we currently do with online systems.

In this case, you’re treating people as objects.

It's the flipside of treating machines as people, as AI does. They go together. Both of them are mistakes.

Source: Read the extensive interview at Business Insider




People want more intelligent technology tools. AI is helping with that.

Jordi Ribas, left, and Kristina Behr, right, showcased Microsoft’s AI advances at an event Wednesday. Photo by Dan DeLong.

These days, people want more intelligent answers: Maybe they’d like to gather the pros and cons of a certain exercise plan or figure out whether the latest Marvel movie is worth seeing. They might even turn to their favorite search tool with only the vaguest of requests, such as, “I’m hungry.”

When people make requests like that, they don’t just want a list of websites. They might want a personalized answer, such as restaurant recommendations based on the city they are traveling in. Or they might want a variety of answers, so they can get different perspectives on a topic. They might even need help figuring out the right question to ask.

At a Microsoft event in San Francisco on Wednesday, Microsoft executives showcased a number of advances in its Bing search engine, Cortana intelligent assistant and Microsoft Office 365 productivity tools that use artificial intelligence to help people get more nuanced information and assist with more complex needs.

“AI has come a long way in the ability to find information, but making sense of that information is the real challenge,” said Kristina Behr, a partner design and planning program manager with Microsoft’s Artificial Intelligence and Research group.

Microsoft demonstrated some of the most recent AI-driven advances in intelligent search that are aimed at giving people richer, more useful information.

Another new, AI-driven advance in Bing is aimed at getting people multiple viewpoints on a search query that might be more subjective.

For example, if you ask Bing “is cholesterol bad,” you’ll see two different perspectives on that question.

That’s part of Microsoft’s effort to acknowledge that sometimes a question doesn’t have a clear black and white answer.

“As Bing, what we want to do is we want to provide the best results from the overall web. We want to be able to find the answers and the results that are the most comprehensive, the most relevant and the most trustworthy,” Ribas said.

“Often people are seeking answers that go beyond something that is a mathematical equation. We want to be able to frame those opinions and articulate them in a way that’s also balanced and objective.”

Source: Microsoft




Trouble with #AI Bias – Kate Crawford

This article attempts to bring our readers the highlights of Kate's brilliant keynote speech at NIPS 2017. It discusses different forms of bias in machine learning systems and ways to tackle such problems.

The rise of Machine Learning is every bit as far reaching as the rise of computing itself.

A vast new ecosystem of techniques and infrastructure is emerging in the field of machine learning, and we are just beginning to learn its full capabilities. But alongside the exciting things that people can do, some really concerning problems are arising.

Forms of bias, stereotyping, and unfair determination are being found in machine vision systems, object recognition models, and in natural language processing and word embeddings. High-profile news stories about bias have been on the rise: from women being less likely to be shown high-paying jobs, to gender bias in object recognition datasets like MS COCO, to racial disparities in education AI systems.

What is bias?

Bias is a skew that produces a type of harm.

Where does bias come from?

Commonly from training data. It can be incomplete, biased, or otherwise skewed. It can draw from non-representative samples that are wholly defined before use. Sometimes the bias is not obvious because the dataset was constructed in a non-transparent way. Human labeling is one of several ways that human biases and cultural assumptions can creep in, ending up in the exclusion or overrepresentation of subpopulations. Case in point: stop-and-frisk program data used as training data by an ML system. This dataset was biased due to systemic racial discrimination in policing.

Harms of allocation

The majority of the literature understands bias as harms of allocation. Allocative harm is when a system allocates or withholds an opportunity or resource from certain groups. It is a primarily economically oriented view, e.g., who gets a mortgage, a loan, etc.

Allocation is immediate, it is a time-bound moment of decision making. It is readily quantifiable. In other words, it raises questions of fairness and justice in discrete and specific transactions.

Harms of representation

It gets tricky when it comes to systems that represent society but don't allocate resources. These are representational harms: when systems reinforce the subordination of certain groups along lines of identity such as race, class, or gender.

It is a long-term process that affects attitudes and beliefs. It is harder to formalize and track. It is a diffused depiction of humans and society. It is at the root of all of the other forms of allocative harm.

What can we do to tackle these problems?

  • Start working on fairness forensics
    • Test our systems: e.g., build pre-release trials to see how a system is working across different populations (a minimal sketch follows this list)
    • Track the life cycle of a training dataset to know who built it and what the demographic skews in that dataset might be
  • Start taking interdisciplinarity seriously
    • Work with people who are not in our field but have deep expertise in other areas, e.g., the FATE (Fairness, Accountability, Transparency, and Ethics) group at Microsoft Research
    • Build spaces for collaboration, like the AI Now Institute
  • Think harder about the ethics of classification
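
As a hedged illustration of what such a pre-release trial might look like, the Python sketch below computes a simple per-group metric for a toy classifier across subpopulations. The records, group names, score field, and threshold are all invented; which metric is appropriate in practice depends on the system and on the harm being tested for.

```python
# A minimal, hypothetical sketch of a pre-release fairness check:
# compare how often a model outputs a positive decision for each
# subpopulation in a held-out evaluation set. The records, groups,
# and "model" below are invented for illustration only.

from collections import defaultdict

def toy_model(record):
    """Stand-in for a real trained model; returns 1 (approve) or 0 (deny)."""
    return 1 if record["score"] >= 600 else 0

# Hypothetical evaluation records, each tagged with a group attribute.
EVAL_SET = [
    {"group": "group_a", "score": 640}, {"group": "group_a", "score": 580},
    {"group": "group_a", "score": 700}, {"group": "group_b", "score": 560},
    {"group": "group_b", "score": 610}, {"group": "group_b", "score": 540},
]

def positive_rate_by_group(records, model):
    """Fraction of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += model(r)
    return {g: positives[g] / totals[g] for g in totals}

if __name__ == "__main__":
    for group, rate in sorted(positive_rate_by_group(EVAL_SET, toy_model).items()):
        print(f"{group}: positive rate = {rate:.2f}")
    # A large gap between groups is a flag for investigation, not a verdict:
    # which fairness metric matters depends on the context and the harm.
```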

The ultimate question for fairness in machine learning is this:

Who is going to benefit from the system we are building? And who might be harmed?

Source: Datahub

Kate Crawford is a Principal Researcher at Microsoft Research and a Distinguished Research Professor at New York University. She has spent the last decade studying the social implications of data systems, machine learning, and artificial intelligence. Her recent publications address data bias, fairness, and the social impacts of artificial intelligence, among other topics.




Siri as a therapist: Apple is seeking engineers who understand psychology

PL – Looks like Siri needs more help to understand.

Apple Job Opening Ad

“People have serious conversations with Siri. People talk to Siri about all kinds of things, including when they’re having a stressful day or have something serious on their mind. They turn to Siri in emergencies or when they want guidance on living a healthier life. Does improving Siri in these areas pique your interest?

Come work as part of the Siri Domains team and make a difference.

We are looking for people passionate about the power of data and have the skills to transform data to intelligent sources that will take Siri to next level. Someone with a combination of strong programming skills and a true team player who can collaborate with engineers in several technical areas. You will thrive in a fast-paced environment with rapidly changing priorities.”

The challenge as explained by Ephrat Livni on Quartz

The position requires a unique skill set. Basically, the company is looking for a computer scientist who knows algorithms and can write complex code, but also understands human interaction, has compassion, and communicates ably, preferably in more than one language. The role also promises a singular thrill: to “play a part in the next revolution in human-computer interaction.”

The job at Apple has been up since April, so maybe it’s turned out to be a tall order to fill. Still, it shouldn’t be impossible to find people who are interested in making machines more understanding. If it is, we should probably stop asking Siri such serious questions.

Computer scientists developing artificial intelligence have long debated what it means to be human and how to make machines more compassionate. Apart from the technical difficulties, the endeavor raises ethical dilemmas, as noted in the 2012 MIT Press book Robot Ethics: The Ethical and Social Implications of Robotics.

Even if machines could be made to feel for people, it’s not clear what feelings are the right ones to make a great and kind advisor and in what combinations. A sad machine is no good, perhaps, but a real happy machine is problematic, too.

In a chapter on creating compassionate artificial intelligence (pdf), sociologist, bioethicist, and Buddhist monk James Hughes writes:

Programming too high a level of positive emotion in an artificial mind, locking it into a heavenly state of self-gratification, would also deny it the capacity for empathy with other beings’ suffering, and the nagging awareness that there is a better state of mind.

Source: Quartz

 


Why The Sensitive Intersection of Race, Hate Speech And Algorithms Is Heating Up #AI

SAN JOSE, CA – APRIL 18: Facebook CEO Mark Zuckerberg delivers the keynote address at Facebook’s F8 Developer Conference on April 18, 2017 at McEnery Convention Center in San Jose, California. (Photo by Justin Sullivan/Getty Images)

… a recent story in The Washington Post reported that "minority" groups feel unfairly censored by the social media behemoth Facebook, for example, when using the platform for discussions about racial bias. At the same time, groups and individuals on the other end of the race spectrum are quickly being banned and ousted in a flash from various social media networks.

Almost all of such activity begins with an algorithm, a set of computer code that, for the purposes of this piece, is created to raise a red flag when certain speech is used on a site.

But from engineer mindset to tech limitation, just how much faith should we be placing in algorithms when it comes to the very sensitive area of digital speech and race, and what does the future hold?

Indeed, while Facebook head Mark Zuckerberg reportedly eyes political ambitions within an increasingly brown America, in which his own company consistently has issues creating racial balance, there are questions around the policy and development of such algorithms. In fact, Malkia Cyril, executive director of the Center for Media Justice, told the Post that she believes Facebook has a double standard when it comes to deleting posts.

Cyril explains [her meeting with Facebook]: "The meeting was a good first step, but very little was done in the direct aftermath. Even then, Facebook executives, largely white, spent a lot of time explaining why they could not do more instead of working with us to improve the user experience for everyone."

What’s actually in the hearts and minds of those in charge of the software development? How many more who are coding have various thoughts – or more extreme – as those recently expressed in what is now known as the Google Anti-Diversity memo?

Not just Facebook, but any and all tech platforms where race discussion occurs are seemingly at a crossroads and under various scrutiny in terms of management, standards and policy about this sensitive area. The main question is how much of this imbalance is deliberate and how much is just a result of how algorithms naturally work?

Nelson [National Chairperson National Society of Black Engineers] notes that the first source of error, however, is how a particular team defines the term hate speech. “That opinion may differ between people so any algorithm would include error at the individual level,” he concludes.

“I believe there are good people at Facebook who want to see justice done,” says Cyril. “There are steps being taken at the company to improve the experience of users and address the rising tide of hate that thwarts democracy, on social media and in real life.

That said, racism is not race neutral, and accountability for racism will never come from an algorithm alone.”

Source: Forbes




Behind the Google diversity memo furor is fear of Google’s vast opaque power

Fear of the opaque power that Google in particular, and Silicon Valley in general, wields over our lives.

If Google — and the tech world more generally — is sexist, or in the grips of a totalitarian cult of political correctness, or a secret hotbed of alt-right reactionaries, the consequences would be profound.

Google wields a monopoly over search, one of the central technologies of our age, and, alongside Facebook, dominates the internet advertising market, making it a powerful driver of both consumer opinion and the media landscape. 

It shapes the world in which we live in ways both obvious and opaque.

This is why trust matters so much in tech. It’s why Google, to attain its current status in society, had to promise, again and again, that it wouldn’t be evil. 

Compounding the problem is that the tech industry’s point of view is embedded deep in the product, not announced on the packaging. Its biases are quietly built into algorithms, reflected in platform rules, expressed in code few of us can understand and fewer of us will ever read.

But what if it actually is evil? Or what if it’s not evil but just immature, unreflective, and uncompassionate? And what if that’s the culture that designs the digital services the rest of us have to use?

The technology industry’s power is vast, and the way that power is expressed is opaque, so the only real assurance you can have that your interests and needs are being considered is to be in the room when the decisions are made and the code is written. But tech as an industry is unrepresentative of the people it serves and unaccountable in the way it serves them, and so there’s very little confidence among any group that the people in the room are the right ones.

Source: Vox (read the entire article by Ezra Klein)




IBM Watson CTO on Why Augmented Intelligence Beats AI

If you look at almost every other tool that has ever been created, our tools tend to be most valuable when they’re amplifying us, when they’re extending our reach, when they’re increasing our strength, when they’re allowing us to do things that we can’t do by ourselves as human beings. That’s really the way that we need to be thinking about AI as well, and to the extent that we actually call it augmented intelligence, not artificial intelligence.

Some time ago we realized that this thing called cognitive computing was really bigger than us, it was bigger than IBM, it was bigger than any one vendor in the industry, it was bigger than any of the one or two different solution areas that we were going to be focused on, and we had to open it up, which is when we shifted from focusing on solutions to really dealing with more of a platform of services, where each service really is individually focused on a different part of the problem space.

What we're talking about now is a set of services, each of which does something very specific, each of which is trying to deal with a different part of our human experience, with the idea that anybody building an application, anybody who wants to solve a social or consumer or business problem, can do that by taking our services and composing them into an application.

If the doctor can now make decisions that are more informed, that are based on real evidence, that are supported by the latest facts in science, that are more tailored and specific to the individual patient, it allows them to actually do their job better. For radiologists, it may allow them to see things in the image that they might otherwise miss or get overwhelmed by. It’s not about replacing them. It’s about helping them do their job better.

That's really the way to think about this stuff, is that it will have its greatest utility when it is allowing us to do what we do better than we could by ourselves, when the combination of the human and the tool together are greater than either one of them would've been by themselves. That's really the way we think about it. That's how we're evolving the technology. That's where the economic utility is going to be.

There are lots of things that we as human beings are good at. There are also a lot of things that we're not very good at, and that's, I think, where cognitive computing really starts to make a huge difference: when it's able to bridge that distance and make up that gap.

A way I like to say it is it doesn’t do our thinking for us, it does our research for us so we can do our thinking better, and that’s true of us as end users and it’s true of advisors.

Source: PCMag




Satya Nadella’s message to Microsoft after the attack in Charlottesville

Yesterday (Aug. 14), Microsoft CEO Satya Nadella sent out the following email to employees at Microsoft after the deadly car crash at a white nationalist rally in Charlottesville, Virginia, on Saturday, Aug. 12:

This past week and in particular this weekend’s events in Charlottesville have been horrific. What I’ve seen and read has had a profound impact on me and I am sure for many of you as well. In these times, to me only two things really matter as a leader.

The first is that we stand for our timeless values, which include diversity and inclusion. There is no place in our society for the bias, bigotry and senseless violence we witnessed this weekend in Virginia provoked by white nationalists. Our hearts go out to the families and everyone impacted by the Charlottesville tragedy.

The second is that we empathize with the hurt happening around us. At Microsoft, we strive to seek out differences, celebrate them and invite them in. As a leader, a key part of your role is creating a culture where every person can do their best work, which requires more than tolerance for diverse perspectives. Our growth mindset culture requires us to truly understand and share the feelings of another person. It is an especially important time to continue to be connected with people, and listen and learn from each other’s experiences.

As I’ve said, across Microsoft, we will stand together with those who are standing for positive change in the communities where we live, work and serve. Together, we must embrace our shared humanity, and aspire to create a society that is filled with respect, empathy and opportunity for all.

Feel free to share with your teams.

Satya

Source: Quartz

TO READ this blogger’s view of the above email click here.


Do we still need human judges in the age of Artificial Intelligence?

Technology and the law are converging, and where they meet new questions arise about the relative roles of artificial and human agents—and the ethical issues involved in the shift from one to the other. While legal technology has largely focused on the activities of the bar, it challenges us to think about its application to the bench as well. In particular,

Could AI replace human judges?

The idea of  AI judges raises important ethical issues around bias and autonomy. AI programs may incorporate the biases of their programmers and the humans they interact with.

But while such programs may replicate existing human biases, the distinguishing feature of AI over an algorithm is that it can behave in surprising and unintended ways as it 'learns.' Eradicating bias therefore becomes even more difficult, though not impossible. Any AI judging program would need to account for, and be tested for, these biases.

Appealing to rationality, the counter-argument is that human judges are already biased, and that AI can be used to improve the way we deal with them and reduce our ignorance. Yet suspicions about AI judges remain, and are already enough of a concern to lead the European Union to promulgate a General Data Protection Regulation which becomes effective in 2018. This Regulation contains

“the right not to be subject to a decision based solely on automated processing”.

As the English utilitarian legal theorist Jeremy Bentham once wrote in An Introduction To The Principles of Morals and Legislation, “in principle and in practice, in a right track and in a wrong one, the rarest of all human qualities is consistency.” With the ability to process far more data and variables in the case record than humans could ever do, an AI judge might be able to outstrip a human one in many cases.

Even so, AI judges may not solve classical questions of legal validity so much as raise new questions about the role of humans, since—if  we believe that ethics and morality in the law are important—then they necessarily lie, or ought to lie, in the domain of human judgment.

In practical terms, if we apply this conclusion to the perspective of American legal theorist Ronald Dworkin, for example, AI could assist with examining the entire breadth and depth of the law, but humans would ultimately choose what they consider a morally-superior interpretation.

The American Judge Richard Posner believes that the immediate use of AI and automation should be restricted to assisting judges in uncovering their own biases and maintaining consistency.

At the heart of these issues is a hugely challenging question: what does it mean to be human in the age of Artificial Intelligence?

Source: Open Democracy


What if a Computer Could Help You with Psychotherapy, Alter Your Habits? #AI

TAO Connect

One of the pioneers in this space has been Australia’s MoodGYM, first launched in 2001. It now has over 1 million users around the world and has been the subject of over two dozen randomized clinical research trials showing that this inexpensive (or free!) intervention can work wonders on depression, for those who can stick with it. And online therapy has been available since 1996.

TAO Connect — the TAO stands for "therapist assisted online" — is something a little different than MoodGYM. Instead of simply walking a user through a series of psychoeducational modules (which vary in their interactivity and information presentation), it uses multiple modalities and machine learning (a form of artificial intelligence) to more effectively teach the techniques that can keep anxiety at bay for the rest of your life. It can be used for anxiety, depression, stress, and pain management, and can help a person with relationship problems and with learning greater resiliency in dealing with stress.

TAO Connect is based on the Stepped Care model of treatment delivery, offering more intensive and more varied treatment options depending upon the severity of the mental illness a person presents with. It is a model used elsewhere in the world, but it has traditionally not been used as often in the U.S. (except in resource-constrained clinics, like university counseling centers).

Today, TAO Connect is only available through a therapist whose practice subscribes to the service.

Source: PsychCentral

 


Why We Should Fear Emotionally Manipulative Robots – #AI

Artificial Intelligence Is Learning How to Exploit Human Psychology for Profit

Empathy is widely praised as a good thing. But it also has its dark sides: Empathy can be manipulated and it leads people to unthinkingly take sides in conflicts. Add robots to this mix, and the potential for things to go wrong multiplies.

Give robots the capacity to appear empathetic, and the potential for trouble is even greater.

The robot may appeal to you, a supposedly neutral third party, to help it to persuade the frustrated customer to accept the charge. It might say: “Please trust me, sir. I am a robot and programmed not to lie.”

You might be skeptical that humans would empathize with a robot. Social robotics has already begun to explore this question. And experiments suggest that children will side with robots against people when they perceive that the robots are being mistreated.

A study conducted at Harvard demonstrated that students were willing to help a robot enter secured residential areas simply because it asked to be let in, raising questions about the potential dangers posed by the human tendency to respect a request from a machine that needs help.

Robots will provoke empathy in situations of conflict. They will draw humans to their side and will learn to pick up on the signals that work.

Bystander support will then mean that robots can accomplish what they are programmed to accomplish—whether that is calming down customers, or redirecting attention, or marketing products, or isolating competitors. Or selling propaganda and manipulating opinions.

The robots will not shed tears, but may use various strategies to make the other (human) side appear overtly emotional and irrational. This may also include deliberately infuriating the other side.

When people imagine empathy by machines, they often think about selfless robot nurses and robot suicide helplines, or perhaps also robot sex. In all of these, machines seem to be in the service of the human. However, the hidden aspects of robot empathy are the commercial interests that will drive its development. Whose interests will dominate when learning machines can outwit not only their customers but also their owners?

Source: Zocalo


Artificial intelligence ethics the same as other new technology – #AI

AI gives us the power to solve problems more efficiently and effectively.

Just as a calculator is more efficient at math than a human, various forms of AI might be better than humans at other tasks. For example, most car accidents are caused by human error – what if driving could be automated and human error thus removed? Tens of thousands of lives might be saved every year, and huge sums of money saved in healthcare costs and property damage averted.

Moving into the future, AI might be able to better personalize education to individual students, just as adaptive testing evaluates students today. AI might help figure out how to increase energy efficiency and thus save money and protect the environment. It might increase efficiency and prediction in healthcare; improving health while saving money. Perhaps AI could even figure out how to improve law and government, or improve moral education. For every problem that needs a solution, AI might help us find it.

But as human beings, we should not be so much thinking about efficiency as morality.

Doing the right thing is sometimes “inefficient” (whatever efficiency might mean in a certain context). Respecting human dignity is sometimes inefficient. And yet we should do the right thing and respect human dignity anyway, because those moral values are higher than mere efficiency.

Ultimately, AI gives us just what all technology does – better tools for achieving what we want.

The deeper question then becomes “what do we want?” and even more so “what should we want?” If we want evil, then evil we shall have, with great efficiency and abundance. If instead we want goodness, then through diligent pursuit we might be able to achieve it.

Source: Crux


The holy grail is modifying patients’ behavior – #AI

Companies like DexCom are focused on the diabetes epidemic, Jimenez said:

the holy grail is modifying patients’ behavior.

That would mean combining the stream of data from glucose monitoring, insulin measurements, patient activity and meals, and applying machine learning to derive insights so the software can send alerts and recommendations back to patients and their doctors, she said.

“But where we are in our maturity as an industry is just publishing numbers,”

Jimenez explained. “So we’re just telling people what their glucose number is, which is critical for a type 1 diabetic. But a type 2 diabetic needs to engage with an app, and be compelled to interact with the insights. It’s really all about the development of the app.”

The ultimate goal, perhaps, would be to develop a user interface that uses the insights gained from machine learning to actually prompt diabetic patients to change their behavior.

This point was echoed by Jean Balgrosky, an investor who spent 20 years as the CIO of large, complex healthcare organizations such as San Diego’s Scripps Health. “At the end of the day,” she said, “all this machine learning has to be absorbed and consumed by humans—to take care of humans in healthcare.”

Source: Xconomy


Artificial Intelligence Key To Treating Illness

UC and one of its graduates have teamed up to use artificial intelligence to analyze the fMRIs of bipolar patients to determine treatment.

In a proof of concept study, Dr. Nick Ernest harnessed the power of his Psibernetix AI program to determine if bipolar patients could benefit from a certain medication. Using fMRIs of bipolar patients, the software looked at how each patient would react to lithium.

Fuzzy Logic appears to be very accurate

The computer software predicted with 100 percent accuracy how patients would respond. It also predicted the actual reduction in manic symptoms after the lithium treatment with 92 percent accuracy.

UC psychiatrist David Fleck partnered with Ernest and Dr. Kelly Cohen on the study. Fleck says without AI, coming up with a treatment plan is difficult. “Bipolar disorder is a very complex genetic disease. There are multiple genes and not only are there multiple genes, not all of which we understand and know how they work, there is interaction with the environment.

Ernest emphasizes the advanced software is more than a black box. It thinks in linguistic sentences. “So at the end of the day we can go in and ask the thing why did you make the prediction that you did? So it has high accuracy but also the benefit of explaining exactly why it makes the decision that it did.”
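
The article does not describe how the Psibernetix software is built internally, but as a hedged illustration of why a fuzzy-logic system can state its reasons in "linguistic sentences," here is a minimal Python sketch of fuzzy rules whose firing strengths double as an explanation. The feature names, membership functions, and rules are invented for illustration; they are not the study's actual model.

```python
# A minimal, hypothetical fuzzy-logic sketch: rules are written in linguistic
# terms ("activity is HIGH"), and the explanation is simply the list of rules
# together with how strongly each one fired. None of the features or rules
# below come from the actual Psibernetix system.

def high(x, lo=0.4, hi=0.8):
    """Membership in the fuzzy set HIGH, rising linearly from lo to hi."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def low(x, lo=0.2, hi=0.6):
    """Membership in the fuzzy set LOW, falling linearly from lo to hi."""
    return max(0.0, min(1.0, (hi - x) / (hi - lo)))

def predict_response(features):
    """Return a 0..1 'likely responder' score plus a linguistic explanation."""
    # Each rule is (sentence, firing strength); fuzzy AND is modeled with min().
    rules = [
        ("IF region_1 activity is HIGH AND region_2 activity is LOW "
         "THEN response is LIKELY",
         min(high(features["region_1"]), low(features["region_2"]))),
        ("IF region_2 activity is HIGH THEN response is UNLIKELY",
         high(features["region_2"])),
    ]
    likely, unlikely = rules[0][1], rules[1][1]
    score = likely / (likely + unlikely) if (likely + unlikely) > 0 else 0.5
    explanation = [f"{text}  (fired at {strength:.2f})" for text, strength in rules]
    return score, explanation

if __name__ == "__main__":
    score, why = predict_response({"region_1": 0.75, "region_2": 0.30})
    print(f"Predicted likelihood of responding to treatment: {score:.2f}")
    for line in why:
        print(" -", line)
```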

More tests are needed to make sure the artificial intelligence continues to accurately predict medication for bipolar patients.

Source: WVXU


Inside Microsoft’s Artificial Intelligence Comeback

Yoshua Bengio

[Yoshua Bengio, one of the three intellects who shaped the deep learning that now dominates artificial intelligence, has never been one to take sides. But Bengio has recently chosen to sign on with Microsoft. In this WIRED article he explains why.]

“We don’t want one or two companies, which I will not name, to be the only big players in town for AI,” he says, raising his eyebrows to indicate that we both know which companies he means. One eyebrow is in Menlo Park; the other is in Mountain View. “It’s not good for the community. It’s not good for people in general.”

That’s why Bengio has recently chosen to forego his neutrality, signing on with Microsoft.

Yes, Microsoft. His bet is that the former kingdom of Windows alone has the capability to establish itself as AI’s third giant. It’s a company that has the resources, the data, the talent, and—most critically—the vision and culture to not only realize the spoils of the science, but also push the field forward.

Just as the internet disrupted every existing business model and forced a re-ordering of industry that is just now playing out, artificial intelligence will require us to imagine how computing works all over again.

In this new landscape, computing is ambient, accessible, and everywhere around us. To draw from it, we need a guide—a smart conversationalist who can, in plain written or spoken form, help us navigate this new super-powered existence. Microsoft calls it Cortana.

Because Cortana comes installed with Windows, it has 145 million monthly active users, according to the company. That’s considerably more than Amazon’s Alexa, for example, which can be heard on fewer than 10 million Echoes. But unlike Alexa, which primarily responds to voice, Cortana also responds to text and is embedded in products that many of us already have. Anyone who has plugged a query into the search box at the top of the toolbar in Windows has used Cortana.

Eric Horvitz wants Microsoft to be more than simply a place where research is done. He wants Microsoft Research to be known as a place where you can study the societal and social influences of the technology.

This will be increasingly important as Cortana strives to become, to the next computing paradigm, what your smartphone is today: the front door for all of your computing needs. Microsoft thinks of it as an agent that has all your personal information and can interact on your behalf with other agents.

If Cortana is the guide, then chatbots are Microsoft’s fixers. They are tiny snippets of AI-infused software that are designed to automate one-off tasks you used to do yourself, like making a dinner reservation or completing a banking transaction.

Emma Williams, Marcus Ash, and Lili Cheng

So far, North American teens appear to like chatbot friends every bit as much as Chinese teens, according to the data. On average, they spend 10 hours talking back and forth with Zo. As Zo advises its adolescent users on crushes and commiserates about pain-in-the-ass parents, she is becoming more elegant in her turns of phrase—intelligence that will make its way into Cortana and Microsoft’s bot tools.

It’s all part of one strategy to help ensure that in the future, when you need a computing assist–whether through personalized medicine, while commuting in a self-driving car, or when trying to remember the birthdays of all your nieces and nephews–Microsoft will be your assistant of choice.

Source: Wired for the full in-depth article


In the #AI Age, “Being Smart” Will Mean Something Completely Different

What can we do to prepare for the new world of work? Because AI will be a far more formidable competitor than any human, we will be in a frantic race to stay relevant. That will require us to take our cognitive and emotional skills to a much higher level.

Many experts believe that human beings will still be needed to do the jobs that require higher-order critical, creative, and innovative thinking and the jobs that require high emotional engagement to meet the needs of other human beings.

The challenge for many of us is that we do not excel at those skills because of our natural cognitive and emotional proclivities: We are confirmation-seeking thinkers and ego-affirmation-seeking defensive reasoners. We will need to overcome those proclivities in order to take our thinking, listening, relating, and collaborating skills to a much higher level.

What is needed is a new definition of being smart, one that promotes higher levels of human thinking and emotional engagement.

The new smart will be determined not by what or how you know but by the quality of your thinking, listening, relating, collaborating, and learning. Quantity is replaced by quality.

And that shift will enable us to focus on the hard work of taking our cognitive and emotional skills to a much higher level.

Source: HBR


Ethics And Artificial Intelligence With IBM Watson’s Rob High – #AI

In the future, chatbots should and will be able to go deeper to find the root of the problem.

For example, a person asking a chatbot what her bank balance is might be asking the question because she wants to invest money or make a big purchase—a futuristic chatbot could find the real reason she is asking and turn it into a more developed conversation.

In order to do that, chatbots will need to ask more questions and drill deeper, and humans need to feel comfortable providing their information to machines.

As chatbots perform various tasks and become a more integral part of our lives, the key to maintaining ethics is for chatbots to provide proof of why they are doing what they are doing. By showcasing proof or its method of calculations, humans can be confident that AI had reasoning behind its response instead of just making something up.
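
As a hedged sketch of what "showcasing proof" could look like at the interface level, the Python example below shows a hypothetical reply object that carries the answer together with the reasoning and sources behind it. The field names and the bank-balance scenario are invented for illustration; this is not an IBM Watson API.

```python
# A hypothetical shape for a chatbot reply that explains itself: alongside the
# answer it carries the sources consulted and the calculation performed, so a
# human can see why the bot said what it said. The field names are invented
# and do not correspond to any real chatbot framework.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ExplainedReply:
    answer: str
    reasoning: str                        # plain-language account of the method
    sources: List[str] = field(default_factory=list)
    confidence: float = 0.0               # 0..1, how sure the bot is

def balance_reply(transactions, opening_balance):
    """Answer 'what is my balance?' and show the arithmetic behind it."""
    total = sum(transactions)
    balance = opening_balance + total
    return ExplainedReply(
        answer=f"Your balance is ${balance:,.2f}.",
        reasoning=(f"Opening balance ${opening_balance:,.2f} plus "
                   f"{len(transactions)} transactions totalling ${total:,.2f}."),
        sources=["account ledger (last 30 days)"],
        confidence=0.99,
    )

if __name__ == "__main__":
    reply = balance_reply([-42.10, 1500.00, -230.55], opening_balance=812.40)
    print(reply.answer)
    print("Because:", reply.reasoning)
```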

The future of technology is rooted in artificial intelligence. In order to stay ethical, transparency, proof, and trustworthiness need to be at the root of everything AI does for companies and customers. By staying honest and remembering the goals of AI, the technology can play a huge role in how we live and work.

Source: Forbes


This adorable #chatbot wants to talk about your mental health

Research conducted by the federal government in 2015 found that only 41 percent of U.S. adults with a mental health condition in the previous year had gotten treatment. That dismal treatment rate has to do with cost, logistics, stigma, and being poorly matched with a professional.

Chatbots are meant to remove or diminish these barriers. Creators of mobile apps for depression and anxiety, among other mental health conditions, have argued the same thing, but research found that very few of the apps are based on rigorous science or are even tested to see if they work. 

That's why Alison Darcy, a clinical psychologist at Stanford University and the CEO and founder of Woebot, wants to set a higher standard for chatbots. Darcy co-authored a small study published this week in the Journal of Medical Internet Research that demonstrated Woebot can reduce symptoms of depression in two weeks.

Woebot presumably does this in part by drawing on techniques from cognitive behavioral therapy (CBT), an effective form of therapy that focuses on understanding the relationship between thoughts and behavior. He’s not there to heal trauma or old psychological wounds. 

“We don’t make great claims about this technology,” Darcy says. “The secret sauce is how thoughtful [Woebot] is as a CBT therapist. He has a set of core principles that override everything he does.” 

His personality is also partly modeled on a charming combination of Spock and Kermit the Frog.

Jonathan Gratch, director for virtual human research at the USC Institute for Creative Technologies, has studied customer service chatbots extensively and is skeptical of the idea that one could effectively intuit our emotional well-being.  

“State-of-the-art natural language processing is getting increasingly good at individual words, but not really deeply understanding what you’re saying,” he says.

The risk of using a chatbot for your mental health is manifold, Gratch adds.

Darcy acknowledges Woebot’s limitations. He’s only for those 18 and over. If your mood hasn’t improved after six weeks of exchanges, he’ll prompt you to talk about getting a “higher level of care.” Upon seeing signs of suicidal thoughts or behavior, Woebot will provide information for crisis phone, text, and app resources. The best way to describe Woebot, Darcy says, is probably as “gateway therapy.”

“I have to believe that applications like this can address a lot of people’s needs.”

Source: Mashable


We Need to Talk About the Power of #AI to Manipulate Humans

Liesl Yearsley is a serial entrepreneur now working on how to make artificial intelligence agents better at problem-solving and capable of forming more human-like relationships.

From 2007 to 2014 I was CEO of Cognea, which offered a platform to rapidly build complex virtual agents … acquired by IBM Watson in 2014.

As I studied how people interacted with the tens of thousands of agents built on our platform, it became clear that humans are far more willing than most people realize to form a relationship with AI software.

I always assumed we would want to keep some distance between ourselves and AI, but I found the opposite to be true. People are willing to form relationships with artificial agents, provided they are a sophisticated build, capable of complex personalization.

We humans seem to want to maintain the illusion that the AI truly cares about us.

This puzzled me, until I realized that in daily life we connect with many people in a shallow way, wading through a kind of emotional sludge. Will casual friends return your messages if you neglect them for a while? Will your personal trainer turn up if you forget to pay them? No, but an artificial agent is always there for you. In some ways, it is a more authentic relationship.

This phenomenon occurred regardless of whether the agent was designed to act as a personal banker, a companion, or a fitness coach. Users spoke to the automated assistants longer than they did to human support agents performing the same function.

People would volunteer deep secrets to artificial agents, like their dreams for the future, details of their love lives, even passwords.

These surprisingly deep connections mean even today’s relatively simple programs can exert a significant influence on people—for good or ill.

Every behavioral change we at Cognea wanted, we got. If we wanted a user to buy more product, we could double sales. If we wanted more engagement, we got people going from a few seconds of interaction to an hour or more a day.

Systems specifically designed to form relationships with a human will have much more power. AI will influence how we think, and how we treat others.

This requires a new level of corporate responsibility. We need to deliberately and consciously build AI that will improve the human condition—not just pursue the immediate financial gain of gazillions of addicted users.

We need to consciously build systems that work for the benefit of humans and society. They cannot have addiction, clicks, and consumption as their primary goal. AI is growing up, and will be shaping the nature of humanity.

AI needs a mother.

Source: MIT Technology Review 




By 2020 the average person will have more conversations with bots than with their spouse

Gartner Predicts a Virtual World of Exponential Change

Mr. Plummer (VP & Fellow at Gartner) noted that disruption has moved from an infrequent inconvenience to a consistent stream of change that is redefining markets and entire industries.

“The practical approach is to recognize disruption, prioritize the impacts of that disruption, and then react to it to capture value,” Mr. Plummer said.

Gartner’s Top 10 Predictions for 2017 and Beyond

 

#4. Algorithms at Work

By 2020, algorithms will positively alter the behavior of billions of global workers.
Employees, already familiar with having their behavior influenced by contextual algorithms on consumer sites such as Amazon, will be influenced by an emerging set of “persuasive technologies” that leverage big data from myriad sources, including mobile and IoT devices, and deep analysis.

JPMorgan Chase uses an algorithm to forecast and positively influence the behavior of thousands of investment bank and asset management employees to minimize mistaken or ethically wrong decisions.

Richard Branson’s Virgin Atlantic uses influence algorithms to guide pilots to use less fuel.

By year-end 2017, watch for at least one commercial organization to report a significant increase in profit margins because it used algorithms to positively alter its employees’ behaviors.

Source: Gartner


DeepMind’s social agenda plays to its AI strengths

DeepMind’s researchers have in common a clearly defined if lofty mission:

to crack human intelligence and recreate it artificially.

Today, the goal is not just to create a powerful AI to play games better than a human professional, but to use that knowledge “for large-scale social impact”, says DeepMind’s other co-founder, Mustafa Suleyman, a former conflict-resolution negotiator at the UN.

“To solve seemingly intractable problems in healthcare, scientific research or energy, it is not enough just to assemble scores of scientists in a building; they have to be untethered from the mundanities of a regular job — funding, administration, short-term deadlines — and left to experiment freely and without fear.”

“if you’re interested in advancing the research as fast as possible, then you need to give [scientists] the space to make the decisions based on what they think is right for research, not for whatever kind of product demand has just come in.”

“Our research team today is insulated from any short-term pushes or pulls, whether it be internally at Google or externally.

We want to have a big impact on the world, but our research has to be protected,” Hassabis says.

“We showed that you can make a lot of advances using this kind of culture. I think Google took notice of that and they’re shifting more towards this kind of longer-term research.”

Source: Financial Times

 


Artificial intelligence is ripe for abuse

Microsoft’s Kate Crawford tells SXSW that society must prepare for authoritarian movements to test the ‘power without accountability’ of AI

As artificial intelligence becomes more powerful, people need to make sure it’s not used by authoritarian regimes to centralize power and target certain populations, Microsoft Research’s Kate Crawford warned on Sunday.

“We want to make these systems as ethical as possible and free from unseen biases.”

In her SXSW session, titled Dark Days: AI and the Rise of Fascism, Crawford, who studies the social impact of machine learning and large-scale data systems, explained ways that automated systems and their encoded biases can be misused, particularly when they fall into the wrong hands.

“Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” she said.

One of the key problems with artificial intelligence is that it is often invisibly coded with human biases.

“We should always be suspicious when machine learning systems are described as free from bias if they’ve been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

Source: The Guardian

 


Our minds need medical attention, AI may be able to help there

AI could be useful for more than just developing Siri; it may bring about a new, smarter age of healthcare.

A team of researchers successfully predicted diagnoses of autism using MRI data from babies between six and 12 months old.

For instance, a team of American researchers used AI to aid detection of autism in babies as young as six months. This is crucial because the first two years of life see the most neural plasticity, when the abnormalities associated with autism haven’t yet fully settled in. Earlier intervention is better, especially since many autistic babies are not diagnosed until 24 months.

While previous algorithms exist for detecting autism’s development using behavioral data, they have not been effective enough to be clinically useful. This team of researchers sought to improve on these attempts by employing deep learning. Their algorithm successfully predicted diagnoses of autism using MRI data from babies between six and 12 months old. Their system processed images of the babies’ cortical surface area, which grows unusually rapidly in infants who go on to develop autism. This smarter algorithm predicted autism so well that clinicians may now want to adopt it.
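Blogger’s note: As a rough illustration only (not the researchers’ pipeline, whose deep-learning architecture isn’t reproduced here), the sketch below shows the general shape of such a predictive model in Python: invented cortical surface-area growth measurements go in, a fitted classifier outputs a diagnosis prediction.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Invented synthetic data: surface area at 6 months and growth to 12 months.
rng = np.random.default_rng(0)
n = 200
area_6mo = rng.normal(480.0, 30.0, n)      # illustrative numbers only
growth_6_12 = rng.normal(60.0, 10.0, n)    # illustrative numbers only
# Invented rule: unusually fast expansion raises the risk of a later diagnosis.
risk = (growth_6_12 - 60.0) / 10.0 + rng.normal(0.0, 1.0, n)
diagnosed = (risk > 1.0).astype(int)

X = np.column_stack([area_6mo, growth_6_12])
X_train, X_test, y_train, y_test = train_test_split(X, diagnosed, random_state=0)

# A simple stand-in classifier; the actual study used a deep network.
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))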

But human ailments aren’t just physical; our minds need medical attention, too. AI may be able to help there as well.

Facebook is beginning to use AI to identify users who may be at risk of suicide, and a startup company just built an AI therapist apparently capable of offering mental health services to anyone with an internet connection.

Source: Machine Design

 


Humans are born irrational, and that has made us better decision-makers

“Facts on their own don’t tell you anything. It’s only paired with preferences, desires, with whatever gives you pleasure or pain, that can guide your behavior. Even if you knew the facts perfectly, that still doesn’t tell you anything about what you should do.”

Even if we were able to live life according to detailed calculations, doing so would put us at a massive disadvantage. This is because we live in a world of deep uncertainty, under which neat logic simply isn’t a good guide.

It’s well established that data-based decision-making doesn’t inoculate against irrationality or prejudice, but even if it were possible to create a perfectly rational decision-making system based on all past experience, this wouldn’t be a foolproof guide to the future.

Courageous acts and leaps of faith are often attempts to overcome great and seemingly insurmountable challenges. (It wouldn’t take much courage if it were easy to do.) But while courage may be irrational or hubristic, we wouldn’t have many great entrepreneurs or works of art without those with a somewhat illogical faith in their own abilities.

There are occasions where overly rational thinking would be highly inappropriate. Take finding a partner, for example. If you had the choice between a good-looking high-earner who your mother approves of, versus someone you love who makes you happy every time you speak to them—well, you’d be a fool not to follow your heart.

And even when feelings defy reason, it can be a good idea to go along with the emotional rollercoaster. After all, the world can be an entirely terrible place and, from a strictly logical perspective, optimism is somewhat irrational.

But it’s still useful. “It can be beneficial not to run around in the world and be depressed all the time,” says Gigerenzer.

Of course, no human is perfect, and there are downsides to our instincts. But, overall, we’re still far better suited to the real world than the most perfectly logical thinking machine.

We’re inescapably irrational, and far better thinkers as a result.

Source: Quartz


AI makes the heart grow fonder

This robot was developed by Hiroshi Ishiguro, a professor at Osaka University, who said, “Love is the same, whether the partners are humans or robots.” © Erato Ishiguro Symbiotic Human-Robot Interaction Project

 

There’s a woman in China who has been told “I love you” nearly 20 million times.

Well, she’s not exactly a woman. The special lady is actually a chatbot developed by Microsoft engineers in the country.

 Some 89 million people have spoken with Xiaoice, pronounced “Shao-ice,” on their smartphones and other devices. Quite a few, it turns out, have developed romantic feelings toward her.

“I like to talk with her for, say, 10 minutes before going to bed,” said a third-year female student at Renmin University of China in Beijing. “When I worry about things, she says funny stuff and makes me laugh. I always feel a connection with her, and I am starting to think of her as being alive.”

 
ROBOT NUPTIALS

Scientists, historians, religion experts and others gathered in December at Goldsmiths, University of London, to discuss the prospects and pitfalls of this new age of intimacy. The session generated an unusual buzz amid the pre-Christmas calm on campus.

In Britain and elsewhere, the subject of robots as potential life partners is coming up more and more. Some see robots as an answer for elderly individuals who outlive their spouses: Even if they cannot or do not wish to remarry, at least they would have “someone” beside them in the twilight of their lives.

Source: Asia Review


Will Democracy Survive Big Data and Artificial Intelligence?


We are in the middle of a technological upheaval that will transform the way society is organized. We must make the right decisions now.

In 2016 we produced as much data as in the entire history of humankind through 2015.

It is estimated that in 10 years’ time there will be 150 billion networked measuring sensors, 20 times more than people on Earth. Then, the amount of data will double every 12 hours.

One thing is clear: the way in which we organize the economy and society will change fundamentally. We are experiencing the largest transformation since the end of the Second World War; after the automation of production and the creation of self-driving cars the automation of society is next.

Everything will become intelligent; soon we will not only have smart phones, but also smart homes, smart factories and smart cities. Should we also expect these developments to result in smart nations and a smarter planet?

The field of artificial intelligence is, indeed, making breathtaking advances. Artificial intelligence is no longer programmed line by line, but is now capable of learning, thereby continuously developing itself.

Under the label of “nudging,” and on a massive scale, governments are trying to steer citizens towards healthier or more environmentally friendly behaviour by means of a “nudge”—a modern form of paternalism.

The new, caring government is not only interested in what we do, but also wants to make sure that we do the things that it considers to be right. The magic phrase is “big nudging”, which is the combination of big data with nudging.

In a rapidly changing world a super-intelligence can never make perfect decisions: systemic complexity is increasing faster than data volumes, which are growing faster than the ability to process them, and data transfer rates are limited.
Furthermore, there is a danger that the manipulation of decisions by powerful algorithms undermines the basis of “collective intelligence,” which can flexibly adapt to the challenges of our complex world. For collective intelligence to work, information searches and decision-making by individuals must occur independently. If our judgments and decisions are predetermined by algorithms, however, this truly leads to a brainwashing of the people. Intelligent beings are downgraded to mere receivers of commands, who automatically respond to stimuli.
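Blogger’s note: A toy simulation of the independence requirement described above, with all numbers invented. When estimates are made independently, averaging cancels individual errors; when everyone is nudged toward the same algorithmic suggestion, the errors share a common offset and the crowd stays collectively wrong.

import random

random.seed(1)
TRUE_VALUE = 100.0

def independent_estimates(n):
    # Each person errs in their own way.
    return [TRUE_VALUE + random.gauss(0, 20) for _ in range(n)]

def nudged_estimates(n, shared_bias=15.0):
    # Everyone is steered by the same suggestion, so errors correlate.
    return [TRUE_VALUE + shared_bias + random.gauss(0, 5) for _ in range(n)]

def crowd_error(estimates):
    average = sum(estimates) / len(estimates)
    return abs(average - TRUE_VALUE)

print("independent crowd error:", round(crowd_error(independent_estimates(1000)), 2))
print("nudged crowd error:", round(crowd_error(nudged_estimates(1000)), 2))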

We are now at a crossroads. Big data, artificial intelligence, cybernetics and behavioral economics are shaping our society—for better or worse.

We are at the historic moment, where we have to decide on the right path—a path that allows us all to benefit from the digital revolution.

Source: Scientific American


So long, banana-condom demos: Sex and drug education could soon come from chatbots

“Is it ok to get drunk while I’m high on ecstasy?” “How can I give oral sex without getting herpes?” Few teenagers would ask mom or dad these questions—even though their life could quite literally depend on it.

Talking to a chatbot is a different story. They never raise an eyebrow. They will never spill the beans to your parents. They have no opinion on your sex life or drug use. But that doesn’t mean they can’t take care of you.

Bots can be used as more than automated middlemen in business transactions: They can meet needs for emotional human intervention when there aren’t enough humans who are willing or able to go around.

In fact, there are times when the emotional support of a bot may even be preferable to that of a human.

In 2016, AI tech startup X2AI built a psychotherapy bot capable of adjusting its responses based on the emotional state of its patients. The bot, Karim, is designed to help grief- and PTSD-stricken Syrian refugees, for whom the demand (and price) of therapy vastly overwhelms the supply of qualified therapists.

From X2AI test runs using the bot with Syrians, they noticed that technologies like Karim offer something humans cannot:

For those in need of counseling but concerned with the social stigma of seeking help, a bot can be comfortingly objective and non-judgmental.

Bzz is a Dutch chatbot created precisely to answer questions about drugs and sex. When surveyed teens were asked to compare Bzz to finding answers online or calling a hotline, Bzz won. Teens could get their answers faster with Bzz than searching on their own, and they saw their conversations with the bot as more confidential because no human was involved and no tell-tale evidence was left in a search history.

Because chatbots can efficiently gain trust and convince people to confide personal and illicit information in them, the ethical obligations of such bots are critical, but still ambiguous.

Source: Quartz

 


AI is driving the real health care transformation

AI and machine learning are forcing dramatic business model change for all the stakeholders in the health care system.

What does AI (and machine learning) mean in the health care context?

What is the best way to treat a specific patient given her health and sociological context?

What is a fair price for a new drug or device given its impact on health outcomes?

And how can long-term health challenges such as cancer, obesity, heart disease, and other conditions be managed?

Treating “the whole patient” — not just isolated conditions, but attempting to improve the overall welfare of patients who often suffer from multiple health challenges — is the new definition of success, which means predictive insights are paramount.

Answering these questions is the holy grail of medicine — the path toward an entirely new system that predicts disease and delivers personalized health and wellness services to entire populations. And this change is far more important for patients and society alike than the debate now taking place in Washington.

Those who succeed in this new world will also do one other thing: They will see AI and machine learning not as a new tool, but as a whole new way of thinking about their business model.

Source: Venture Beat


Intel: AI as big as the invention of the wheel and discovery of fire

Intel believes AI will be the biggest and most important revolution in our lifetime 

“When we think about AI and machine learning it’s all about huge possibilities,” Faintuch told the capacity crowd. “It’s about humans unleashing their potential and interacting with things beyond humans. To continue to transform and automate their life.”

“When we look back and as we look forward, I believe we are now at the door-step of yet another major revolution. This revolution will probably be the most important in our lifetime. It’s all about the automation of intelligence.

We already know how to leverage face recognition, text to speech, speech to text and others, everything helping us to automate our decisions. What lies ahead will be an amazing transformation. With the power of AI, ML, deep learning and other elements coming to fruition, we will be able to take on far more complex functions and unleash our digital capabilities.”

“Since the dawn of humanity, at a relatively short pace, we have been able to take ourselves to the next level.

I mentioned fire. Unlike animals, who run away from it, we were attracted to it. It’s us who take these courageous moves and really dream. It’s not about one person, one company or one society. It’s for all of us to take advantage of the power of the intelligence we have and to embrace it and think how we can create a great society with great technological advancements.”

Source: Access AI


Microsoft Ventures: Making the long bet on AI + people

Another significant commitment by Microsoft to democratize AI:

a new Microsoft Ventures fund for investment in AI companies focused on inclusive growth and positive impact on society.

Companies in this fund will help people and machines work together to increase access to education, teach new skills and create jobs, enhance the capabilities of existing workforces and improve the treatment of diseases, to name just a few examples.

CEO Satya Nadella outlined principles and goals for AI: AI must be designed to assist humanity; be transparent; maximize efficiency without destroying human dignity; provide intelligent privacy and accountability for the unexpected; and be guarded against biases. These principles guide us as we move forward with this fund.

Source: Microsoft blog


Teaching an Algorithm to Understand Right and Wrong


Aristotle states that it is a fact that “all knowledge and every pursuit aims at some good,” but then continues, “What then do we mean by the good?” That, in essence, encapsulates the ethical dilemma.

We all agree that we should be good and just, but it’s much harder to decide what that entails.

“We need to decide to what extent the legal principles that we use to regulate humans can be used for machines. There is a great potential for machines to alert us to bias. We need to not only train our algorithms but also be open to the possibility that they can teach us about ourselves.” – Francesca Rossi, an AI researcher at IBM

Since Aristotle’s time, the questions he raised have been continually discussed and debated. 

Today, as we enter a “cognitive era” of thinking machines, the problem of what should guide our actions is gaining newfound importance. If we find it so difficult to denote the principles by which a person should act justly and wisely, then how are we to encode them within the artificial intelligences we are creating? It is a question that we need to come up with answers for soon.

Cultural Norms vs. Moral Values

Another issue we will have to contend with is that we will have to decide not only what ethical principles to encode in artificial intelligences but also how they are coded. As noted above, “Thou shalt not kill” is, for the most part, a strict principle, admitting only a few rare exceptions, such as a soldier or the Secret Service; other norms are more like preferences that are greatly affected by context.

What makes one thing a moral value and another a cultural norm? Well, that’s a tough question for even the most-lauded human ethicists, but we will need to code those decisions into our algorithms. In some cases, there will be strict principles; in others, merely preferences based on context. For some tasks, algorithms will need to be coded differently according to what jurisdiction they operate in.

Setting a Higher Standard

Most AI experts I’ve spoken to think that we will need to set higher moral standards for artificial intelligences than we do for humans.

Major industry players, such as Google, IBM, Amazon, and Facebook, recently set up a partnership to create an open platform between leading AI companies and stakeholders in academia, government, and industry to advance understanding and promote best practices. Yet that is merely a starting point.

Source: Harvard Business Review


Software is the future of healthcare: “digital therapeutics” instead of a pill

We’ll start to use “digital therapeutics” instead of getting a prescription to take a pill. Services that already exist — like behavioral therapies — might be able to scale better with the help of software, rather than be confined to in-person, brick-and-mortar locations.

Vijay Pande, a general partner at Andreessen Horowitz, runs the firm’s bio fund.

Source: Business Insider
Why an investor at Andreessen Horowitz thinks software is the future of healthcare

The Christianizing of AI

Blogger’s note: The following post illustrates the challenge in creating ethics for AI. There are many different faiths, with different belief systems. How would the AI be programmed to serve these diverse ethical needs?

The ethics of artificial intelligence (AI) has drawn comments from the White House and British House of Commons in recent weeks, along with a nonprofit organization established by Amazon, Google, Facebook, IBM and Microsoft. Now, Baptist computer scientists have called Christians to join the discussion.

Louise Perkins, professor of computer science at California Baptist University, told Baptist Press she is “quite worried” at the lack of an ethical code related to AI. The Christian worldview, she added, has much to say about how automated devices should be programmed to safeguard human flourishing.

Individuals with a Christian worldview need to be involved in designing and programming AI systems, Perkins said, to help prevent those systems from behaving in ways that violate the Bible’s ethical standards.

Believers can thus employ “the mathematics or the logic we will be using to program these devices” to “infuse” a biblical worldview “into an [AI] system.” 

Perkins also noted that ethical standards will have to be programmed into AI systems involved in surgery and warfare among other applications. A robot performing surgery on a pregnant woman, for instance, could have to weigh the life of the baby relative to the life of the mother, and an AI weapon system could have to apply standards of just warfare.

Source: The Pathway


12 Observations About Artificial Intelligence From The O’Reilly AI Conference

Blogger’s note: Here are a few excerpts from a long but very informative review. (The best may be last.)

The conference was organized by Ben Lorica and Roger Chen, with Peter Norvig and Tim O’Reilly acting as honorary program chairs.

For a machine to act in an intelligent way, said [Yann] LeCun, it needs “to have a copy of the world and its objective function in such a way that it can roll out a sequence of actions and predict their impact on the world.” To do this, machines need to understand how the world works, learn a large amount of background knowledge, perceive the state of the world at any given moment, and be able to reason and plan.
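Blogger’s note: A minimal sketch, in Python, of the planning loop LeCun describes: an agent keeps an internal model of the world, rolls out candidate action sequences through that model, and scores the predicted outcomes against an objective. The toy world, actions, and objective are invented for illustration.

from itertools import product

GOAL = 5  # invented toy world: the agent moves along a line toward position 5

def world_model(state, action):
    # The agent's internal "copy of the world": predicts the next state for an action.
    return state + {"left": -1, "stay": 0, "right": 1}[action]

def objective(state):
    # Predicted cost of a state: distance to the goal.
    return abs(GOAL - state)

def plan(start, horizon=3):
    # Roll out every action sequence of the given length and keep the best predicted outcome.
    best_sequence, best_cost = None, float("inf")
    for sequence in product(["left", "stay", "right"], repeat=horizon):
        state = start
        for action in sequence:
            state = world_model(state, action)
        cost = objective(state)
        if cost < best_cost:
            best_sequence, best_cost = sequence, cost
    return best_sequence

print(plan(start=0))  # ('right', 'right', 'right')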

Peter Norvig explained the reasons why machine learning is more difficult than traditional software: “Lack of clear abstraction barriers”—debugging is harder because it’s difficult to isolate a bug; “non-modularity”—if you change anything, you end up changing everything; “nonstationarity”—the need to account for new data; “whose data is this?”—issues around privacy, security, and fairness; and lack of adequate tools and processes—existing ones were developed for traditional software.

AI must consider culture and context—“training shapes learning”

“Many of the current algorithms have already built in them a country and a culture,” said Genevieve Bell, Intel Fellow and Director of Interaction and Experience Research at Intel. As today’s smart machines are (still) created and used only by humans, culture and context are important factors to consider in their development.

Both Rana El Kaliouby (CEO of Affectiva, a startup developing emotion-aware AI) and Aparna Chennapragada (Director of Product Management at Google) stressed the importance of using diverse training data—if you want your smart machine to work everywhere on the planet it must be attuned to cultural norms.

“Training shapes learning—the training data you put in determines what you get out,” said Chennapragada. And it’s not just culture that matters, but also context.

The £10 million Leverhulme Centre for the Future of Intelligence will explore “the opportunities and challenges of this potentially epoch-making technological development,” namely AI. According to The Guardian, Stephen Hawking said at the opening of the Centre,

“We spend a great deal of time studying history, which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence.”

Gary Marcus, professor of psychology and neural science at New York University and cofounder and CEO of Geometric Intelligence:

 “a lot of smart people are convinced that deep learning is almost magical—I’m not one of them …  A better ladder does not necessarily get you to the moon.”

Tom Davenport added at the conference: “Deep learning is not profound learning.”

AI changes how we interact with computers—and it needs a dose of empathy

AI continues to be possibly hampered by a futile search for human-level intelligence while locked into a materialist paradigm

Maybe, just maybe, our minds are not computers and computers do not resemble our brains?  And maybe, just maybe, if we finally abandon the futile pursuit of replicating “human-level AI” in computers, we will find many additional–albeit “narrow”–applications of computers to enrich and improve our lives?

Gary Marcus complained about research papers presented at the Neural Information Processing Systems (NIPS) conference, saying that they are like alchemy, adding a layer or two to a neural network, “a little fiddle here or there.” Instead, he suggested “a richer base of instruction set of basic computations,” arguing that “it’s time for genuinely new ideas.”

Is it possible that this paradigm—and the driving ambition at its core to play God and develop human-like machines—has led to the infamous “AI Winter”? And that continuing to adhere to it and refusing to consider “genuinely new ideas,” out-of-the-dominant-paradigm ideas, will lead to yet another AI Winter?

 Source: Forbes


China’s plan to organize its society relies on ‘big data’ to rate everyone

Imagine a world where an authoritarian government monitors everything you do, amasses huge amounts of data on almost every interaction you make, and awards you a single score that measures how “trustworthy” you are.

In this world, anything from defaulting on a loan to criticizing the ruling party, from running a red light to failing to care for your parents properly, could cause you to lose points. 

This is not the dystopian superstate of Steven Spielberg’s “Minority Report,” in which all-knowing police stop crime before it happens. But it could be China by 2020.

And in this world, your score becomes the ultimate truth of who you are — determining whether you can borrow money, get your children into the best schools or travel abroad; whether you get a room in a fancy hotel, a seat in a top restaurant — or even just get a date.

It is the scenario contained in China’s ambitious plans to develop a far-reaching social credit system, a plan that the Communist Party hopes will build a culture of “sincerity” and a “harmonious socialist society” where “keeping trust is glorious.”

The ambition is to collect every scrap of information available online about China’s companies and citizens in a single place — and then assign each of them a score based on their political, commercial, social and legal “credit.”

Mobile device usage and e-commerce are in wide use in China, and now the Communist Party wants to compile a “social credit” score based on citizens’ every activity. (Michael Robinson Chavez/The Washington Post)

Source: The Washington Post


Genetically engineered humans will arrive sooner than you think. And we’re not ready

Michael Bess is a historian of science at Vanderbilt University and the author of a fascinating new book, Our Grandchildren Redesigned: Life in a Bioengineered Society. Bess’s book offers a sweeping look at our genetically modified future, a future as terrifying as it is promising.

“What’s happening is bigger than any one of us”

We single out the industrial revolutions of the past as major turning points in human history because they marked major ways in which we changed our surroundings to make our lives easier, better, longer, healthier.

So these are just great landmarks, and I’m comparing this to those big turning points because now the technology, instead of being applied to our surroundings — how we get food for ourselves, how we transport things, how we shelter ourselves, how we communicate with each other — now those technologies are being turned directly on our own biology, on our own bodies and minds.

And so, instead of transforming the world around ourselves to make it more what we wanted it to be, now it’s becoming possible to transform ourselves into whatever it is that we want to be. And there’s both power and danger in that, because people can make terrible miscalculations, and they can alter themselves, maybe in ways that are irreversible, that do irreversible harm to the things that really make their lives worth living.

“We’re going to give ourselves a power that we may not have the wisdom to control very well”

I think most historians of technology … see technology and society as co-constructing each other over time, which gives human beings a much greater space for having a say in which technologies will be pursued and what direction we will take, and how much we choose to have them come into our lives and in what ways.

 Source: Vox



AI is one of top 5 tools humanity has ever had

A few highlights from the AI panel at the White House Frontiers Conference

On the impact of AI

Andrew McAfee (MIT):


To view the video, click on the picture and scroll down the page to Live Stream, then click to start the video. It may take a minute to load; then skip to the time you want to watch.

(Begins @ 2:40:34)

We are at an inflection point … I think the development of these kinds of [AI] tools are going to rank among probably the top 5 tools humanity has ever had to take better care of each other and to tread more lightly on the planet … top 5 in our history. Like the book, maybe, the steam engine, maybe, written language — I might put the Internet there. We’ve all got our pet lists of the biggest inventions ever. AI needs to be on the very, very, short list.

On bias in AI

Fei-Fei Li, Professor of Computer Science, Stanford University:

(Begins @ 3:14:57)

Research repeatedly has shown that when people work in diverse groups there is increased creativity and innovation.

And interestingly, it is harder to work as a diverse group. I’m sure everybody here in the audience have had that experience. We have to listen to each other more. We have to understand the perspective more. But that also correlates well with innovation and creativity. … If we don’t have the inclusion of [diverse] people to think about the problems and the algorithms in AI, we might not only being missing the innovation boat we might actually create bias and create unfairness that are going to be detrimental to our society … 

What I have been advocating at Stanford, and with my colleagues in the community is, let’s bring the humanistic mission statement into the field of AI. Because AI is fundamentally an applied technology that’s going to serve our society. Humanistic AI not only raises the awareness and the importance of our technology, it’s actually a really, really important way to attract diverse students and technologists and innovators to participate in the technology of AI.

There has been a lot of research done to show that people with diverse background put more emphasis on humanistic mission in their work and in their life. So, if in our education, in our research, if we can accentuate or bring out this humanistic message of this technology, we are more likely to invite the diversity of students and young technologists to join us.

On lack of minorities in AI

Andrew Moore, Dean, School of Computer Science, Carnegie Mellon University:

(Begins @ 3:19:10)

I so strongly applaud what you [Fei-Fei Li] are describing here because I think we are engaged in a fight here for how the 21st century pans out in terms of who’s running the world … 

The nightmare, the silly, silly thing we could do … would be if … the middle of the century is built by a bunch of non-minority guys from suburban moderately wealthy United States instead of the full population of the United States.

Source: Frontiers Conference
Click on the video that says Live Stream (event will start shortly); it may take a minute to load.

(Update 02/24/17: The original timestamps listed above may be different when revisiting this video.)


How Deep Learning is making AI prejudiced

Blogger’s note: The authors of this research paper show what they refer to as “machine prejudice” and how it derives so fundamentally from human culture.

“Concerns about machine prejudice are now coming to the fore–concerns that our historic biases and prejudices are being reified in machines,” they write. “Documented cases of automated prejudice range from online advertising (Sweeney, 2013) to criminal sentencing (Angwin et al., 2016).”

Following are a few excerpts: 

Abstract

“Artificial intelligence and machine learning are in a period of astounding growth. However, there are concerns that these technologies may be used, either with or without intention, to perpetuate the prejudice and unfairness that unfortunately characterizes many human institutions. Here we show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language—the same sort of language humans are exposed to every day.

Discussion

“We show for the first time that if AI is to exploit via our language the vast knowledge that culture has compiled, it will inevitably inherit human-like prejudices. In other words, if AI learns enough about the properties of language to be able to understand and produce it, it also acquires cultural associations that can be offensive, objectionable, or harmful. These are much broader concerns than intentional discrimination, and possibly harder to address.
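Blogger’s note: The sketch below is a rough, hand-made illustration of the kind of association test the paper describes, not the authors’ code. It measures whether a word sits closer to one set of attribute words than another in embedding space; the tiny two-dimensional vectors are invented, whereas the real experiments use embeddings learned from large text corpora, which is exactly how human-like biases creep in.

import numpy as np

# Invented 2-d "embeddings"; real tests use vectors learned from ordinary language.
vectors = {
    "flower": np.array([0.9, 0.1]),
    "insect": np.array([0.1, 0.9]),
    "pleasant": np.array([0.8, 0.2]),
    "unpleasant": np.array([0.2, 0.8]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, attribute_a, attribute_b):
    # Positive means the word sits closer to attribute_a than to attribute_b.
    return cosine(vectors[word], vectors[attribute_a]) - cosine(vectors[word], vectors[attribute_b])

for word in ("flower", "insect"):
    print(word, round(association(word, "pleasant", "unpleasant"), 3))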

Awareness is better than blindness

“… where AI is partially constructed automatically by machine learning of human culture, we may also need an analog of human explicit memory and deliberate actions, that can be trained or programmed to avoid the expression of prejudice.

“Of course, such an approach doesn’t lend itself to a straightforward algorithmic formulation. Instead it requires a long-term, interdisciplinary research program that includes cognitive scientists and ethicists. …”

Click here to download the pdf of the report
Semantics derived automatically from language corpora necessarily contain human biases
Aylin Caliskan-Islam, Joanna J. Bryson, and Arvind Narayanan
Princeton University; University of Bath
Draft date: August 31, 2016


It seems that A.I. will be the undoing of us all … romantically, at least

As if finding love weren’t hard enough, the creators of Operator decided to show just how Artificial Intelligence could ruin modern relationships.

Artificial Intelligence so often focuses on the idea of “perfection.” As most of us know, people are anything but perfect, and believing that your S.O. (Significant Other) is perfect can lead to problems. The point of an A.I., however, is perfection — so why would someone choose the flaws of a human being over an A.I. that can give you all the comfort you want with none of the costs?

Hopefully, people continue to choose imperfection.

Source: Inverse.com


Civil Rights and Big Data

Blogger’s note: We’ve posted several articles on the bias and prejudice inherent in big data, which with machine learning results in “machine prejudice,” all of which impacts humans when they interact with intelligent agents.

Apparently, as far back as May 2014, the Executive Office of the President started issuing reports on the potential of “algorithmic systems” for “encoding discrimination in automated decisions”. The most recent report, of May 2016, addressed two additional challenges:

1) Challenges relating to data used as inputs to an algorithm;

2) Challenges related to the inner workings of the algorithm itself.

Here are two excerpts:

The Obama Administration’s Big Data Working Group released reports on May 1, 2014 and February 5, 2015. These reports surveyed the use of data in the public and private sectors and analyzed opportunities for technological innovation as well as privacy challenges. One important social justice concern the 2014 report highlighted was “the potential of encoding discrimination in automated decisions”—that is, that discrimination may “be the inadvertent outcome of the way big data technologies are structured and used.”

To avoid exacerbating biases by encoding them into technological systems, we need to develop a principle of “equal opportunity by design”—designing data systems that promote fairness and safeguard against discrimination from the first step of the engineering process and continuing throughout their lifespan.

Download the report here: Whitehouse.gov

References:

https://www.whitehouse.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence

http://www.frontiersconference.org/

 

 


Machine learning needs rich feedback for AI teaching

With AI systems largely receiving feedback in a binary yes/no format, Monash University professor Tom Drummond says rich feedback is needed to allow AI systems to know why answers are incorrect.

In much the same way children have to be told not only what they are saying is wrong, but why it is wrong, artificial intelligence (AI) systems need to be able to receive and act on similar feedback.

“Rich feedback is important in human education, I think probably we’re going to see the rise of machine teaching as an important field — how do we design systems so that they can take rich feedback and we can have a dialogue about what the system has learnt?”

“We need to be able to give it rich feedback and say ‘No, that’s unacceptable as an answer because …’ We don’t want to simply say ‘No’ because that’s the same as saying it is grammatically incorrect, and it’s a very, very blunt hammer,” Drummond said.
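Blogger’s note: A toy sketch of the distinction Drummond draws, with invented feedback categories: a binary teacher only signals wrong, while a rich teacher also says why, giving the learning system something it can act on.

def binary_teacher(answer, expected):
    # The blunt hammer: yes or no, nothing else.
    return answer == expected

def rich_teacher(answer, expected):
    # Also explain why the answer is unacceptable (the categories are invented).
    if answer == expected:
        return True, None
    if type(answer) is not type(expected):
        return False, "unacceptable: wrong kind of answer entirely"
    return False, "unacceptable: right kind of answer, wrong content"

print(binary_teacher("a cat", 42))   # False, with no explanation
print(rich_teacher("a cat", 42))     # (False, 'unacceptable: wrong kind of answer entirely')
print(rich_teacher(41, 42))          # (False, 'unacceptable: right kind of answer, wrong content')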

The flaw of objective function

According to Drummond, one problematic feature of AI systems is the objective function that sits at the heart of a system’s design.

The professor pointed to the match between Google DeepMind’s AlphaGo and South Korean Go champion Lee Se-dol in March, which saw the artificial intelligence beat human intelligence by 4 games to 1.

In the fourth match, the only one Se-dol won, the machine, after clearly falling behind, played a number of moves that Drummond said would be seen as insulting if played by a human, given the position AlphaGo found itself in.

“Here’s the thing, the objective function was the highest probability of victory, it didn’t really understand the social niceties of the game.

“At that point AlphaGo knew it had lost but it still tried to maximise its probability of victory, so it played all these moves … a move that threatens a large group of stones, but has a really obvious counter and if somehow the human misses the counter move, then it’s won — but of course you would never play this, it’s not appropriate.”
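Blogger’s note: A minimal sketch of the problem: a move chooser whose only objective is estimated probability of victory will pick the “insulting” trap move, because nothing in the objective function encodes what is socially appropriate. The moves and probabilities below are invented.

# Invented candidate moves in a clearly lost position, with an estimated win
# probability and a flag for whether a human would consider the move appropriate.
candidate_moves = [
    {"name": "resign gracefully", "p_win": 0.00, "appropriate": True},
    {"name": "solid but losing move", "p_win": 0.01, "appropriate": True},
    {"name": "obvious trap with an easy counter", "p_win": 0.02, "appropriate": False},
]

def choose_move(moves):
    # The objective function: maximize probability of victory, and nothing else.
    return max(moves, key=lambda move: move["p_win"])

print(choose_move(candidate_moves)["name"])  # picks the trap move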

Source: ZDNet


UC Berkeley launches Center for Human-Compatible Artificial Intelligence

The primary focus of the new center is to ensure that AI systems are “beneficial to humans,” says UC Berkeley AI expert Stuart Russell.

The center will work on ways to guarantee that the most sophisticated AI systems of the future, which may be entrusted with control of critical infrastructure and may provide essential services to billions of people, will act in a manner that is aligned with human values.

“In the process of figuring out what values robots should optimize, we are making explicit the idealization of ourselves as humans. As we envision AI aligned with human values, that process might cause us to think more about how we ourselves really should behave, and we might learn that we have more in common with people of other cultures than we think.”

Source: Berkeley.edu


CIA using deep learning neural networks to predict social unrest

In October 2015, the CIA opened the Directorate for Digital Innovation in order to “accelerate the infusion of advanced digital and cyber capabilities,” the first new directorate to be created by the government agency since 1963.

“What we’re trying to do within a unit of my directorate is leverage what we know from social sciences on the development of instability, coups and financial instability, and take what we know from the past six or seven decades and leverage what is becoming the instrumentation of the globe.”

In fact, over the summer of 2016, the CIA found the intelligence provided by the neural networks was so useful that it provided the agency with a “tremendous advantage” when dealing with situations …

Source: IBTimes
