Google CEO Sundar Pichai believes artificial intelligence could have “more profound” implications for humanity than electricity or fire, according to recent comments.
Pichai also warned that, like fire, artificial intelligence could pose serious risks if its potential is not harnessed correctly.
“AI is one of the most important things humanity is working on,” Pichai said in an interview with MSNBC and Recode.
“My point is AI is really important, but we have to be concerned about it,” Pichai said. “It’s fair to be worried about it—I wouldn’t say we’re just being optimistic about it—we want to be thoughtful about it. AI holds the potential for some of the biggest advances we’re going to see.”
Google-owned DeepMind has announced the formation of a major new AI research unit comprising full-time staff and external advisors.
As we hand over more of our lives to artificial intelligence systems, keeping a firm grip on their ethical and societal impact is crucial.
DeepMind Ethics & Society (DMES), a unit made up of both full-time DeepMind employees and external fellows, is the company’s latest attempt to scrutinise the societal impacts of the technologies it creates.
DMES will work alongside technologists within DeepMind and fund external research based on six areas: privacy, transparency and fairness; economic impacts; governance and accountability; managing AI risk; AI morality and values; and how AI can address the world’s challenges.
Its aim, according to DeepMind, is twofold: to help technologists understand the ethical implications of their work and help society decide how AI can be beneficial.
“We want these systems in production to be our highest collective selves. We want them to be most respectful of human rights, we want them to be most respectful of all the equality and civil rights laws that have been so valiantly fought for over the last sixty years.” [Mustafa Suleyman]
This article attempts to bring our readers Kate Crawford’s brilliant keynote speech at NIPS 2017. It covers different forms of bias in machine learning systems and ways to tackle such problems.
The rise of machine learning is every bit as far-reaching as the rise of computing itself.
A vast new ecosystem of techniques and infrastructure is emerging in the field of machine learning, and we are just beginning to learn its full capabilities. But alongside the exciting things people can do, some really concerning problems are arising.
Forms of bias, stereotyping and unfair determination are being found in machine vision systems, object recognition models, and in natural language processing and word embeddings. High-profile news stories about bias have been on the rise: from women being less likely to be shown high-paying jobs, to gender bias in object recognition datasets like MS COCO, to racial disparities in educational AI systems.
What is bias?
Bias is a skew that produces a type of harm.
Where does bias come from?
Most commonly, from training data. It can be incomplete, biased or otherwise skewed. It can draw from non-representative samples that are wholly defined before use. Sometimes the skew is not obvious because the dataset was constructed in a non-transparent way. Human biases and cultural assumptions can also creep in through human labeling and other channels, ending up in the exclusion or overrepresentation of subpopulations. Case in point: stop-and-frisk program data used as training data by an ML system; that dataset was biased by systemic racial discrimination in policing.
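To make the mechanism concrete, here is a minimal synthetic sketch (all numbers invented for illustration) of how a sampling skew like the stop-and-frisk example propagates into a model: both groups behave identically, but one is policed four times as heavily, so the recorded labels, and therefore the trained model, inherit the skew.

```python
# Synthetic illustration (all numbers invented): a model trained on
# labels produced by biased policing learns the policing skew, not the
# underlying behaviour.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)       # 0 = group A, 1 = group B
offense = rng.random(n) < 0.10      # identical true base rate in both groups

# Biased recording: group B is stopped four times as often, so its
# offenses are far more likely to enter the training data as positives.
stop_rate = np.where(group == 1, 0.40, 0.10)
recorded = offense & (rng.random(n) < stop_rate)

model = LogisticRegression().fit(group.reshape(-1, 1), recorded)
for g in (0, 1):
    risk = model.predict_proba([[g]])[0, 1]
    print(f"group {g}: predicted risk = {risk:.3f}")
# Despite identical behaviour, group B is scored roughly 4x riskier.
```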
Harms of allocation
The majority of the literature understands bias in terms of harms of allocation. Allocative harm occurs when a system allocates an opportunity or resource to certain groups, or withholds it from them. It is a primarily economically oriented view: who gets a mortgage or a loan, for example.
Allocation is immediate: a time-bound moment of decision making that is readily quantifiable. In other words, it raises questions of fairness and justice in discrete and specific transactions.
Harms of representation
It gets tricky with systems that represent society but don’t allocate resources. These are representational harms: they occur when systems reinforce the subordination of certain groups along lines of identity such as race, class and gender.
Representational harm is a long-term process that affects attitudes and beliefs. It is harder to formalize and track, being a diffuse depiction of humans and society, and it is at the root of the other forms of harm, including allocative harm.
What can we do to tackle these problems?
Start working on fairness forensics
Test our systems: e.g., build pre-release trials to see how a system performs across different populations (a minimal sketch appears after this list)
How do we track the life cycle of a training dataset, so we know who built it and what demographic skews it might contain?
Start taking interdisciplinarity seriously
Work with people who are not in our field but have deep expertise in other areas, e.g., the FATE (Fairness, Accountability, Transparency and Ethics) group at Microsoft Research
Build spaces for collaboration, like the AI Now Institute.
Think harder on the ethics of classification
The ultimate question for fairness in machine learning is this:
Who is going to benefit from the system we are building? And who might be harmed?
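As a minimal sketch of the pre-release trials mentioned above, the snippet below (random stand-in data, hypothetical group labels) disaggregates a model’s accuracy by subpopulation instead of reporting one aggregate number:

```python
# Minimal pre-release trial: report accuracy per subpopulation rather
# than as a single aggregate. y_true, y_pred and groups are stand-ins.
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1_000)
y_pred = rng.integers(0, 2, 1_000)
groups = rng.choice(["group_a", "group_b", "group_c"], 1_000)

print(f"overall accuracy: {(y_true == y_pred).mean():.3f}")
for g in np.unique(groups):
    mask = groups == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"  {g}: n={mask.sum()}, accuracy={acc:.3f}")
# A large gap between groups is the red flag to catch before release.
```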
Kate Crawford is a Principal Researcher at Microsoft Research and a Distinguished Research Professor at New York University. She has spent the last decade studying the social implications of data systems, machine learning, and artificial intelligence. Her recent publications address data bias and fairness, and social impacts of artificial intelligence among others.
The prestigious Neural Information Processing Systems conference has a new topic on its agenda. Alongside the usual … concern about AI’s power.
Kate Crawford … urged attendees to start considering, and finding ways to mitigate, accidental or intentional harms caused by their creations.
“Amongst the very real excitement about what we can do, there are also some really concerning problems arising.”
“In domains like medicine we can’t have these models just be a black box where something goes in and you get something out but don’t know why,” says Maithra Raghu, a machine-learning researcher at Google. On Monday, she presented open-source software developed with colleagues that can reveal what a machine-learning program is paying attention to in data. It may ultimately allow a doctor to see what part of a scan or patient history led an AI assistant to make a particular diagnosis.
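The article doesn’t describe how that software works, but as a hedged illustration of the general idea, gradient-based saliency is one common way to see which inputs a model is most sensitive to. The sketch below (PyTorch, toy model, all names ours) shows that generic technique, not the specific tool Raghu presented:

```python
# Generic gradient-saliency sketch (not the tool from the talk):
# which input features most influence a model's output?
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
x = torch.randn(1, 10, requires_grad=True)   # stand-in for a scan/record

score = model(x).sum()
score.backward()                             # d(score)/d(input)
saliency = x.grad.abs().squeeze()

# Larger values = features the model is most sensitive to.
print(saliency)
print("most influential feature:", saliency.argmax().item())
```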
“If you have a diversity of perspectives and background, you might be more likely to check for bias against different groups,” says Hanna Wallach, a researcher at Microsoft.
Others in Long Beach hope to make the people building AI better reflect humanity. Like computer science as a whole, machine learning skews towards the white, male, and western. A parallel technical conference called Women in Machine Learning has run alongside NIPS for a decade. This Friday sees the first Black in AI workshop, intended to create a dedicated space for people of color in the field to present their work.
Towards the end of her talk Tuesday, Crawford suggested civil disobedience could shape the uses of AI. She talked of French engineer Rene Carmille, who sabotaged tabulating machines used by the Nazis to track French Jews. And she told today’s AI engineers to consider the lines they don’t want their technology to cross. “Are there some things we just shouldn’t build?” she asked.
[Timnit] Gebru, 34, joined a Microsoft Corp. team called FATE—for Fairness, Accountability, Transparency and Ethics in AI. The program was set up three years ago to ferret out biases that creep into AI data and can skew results.
“I started to realize that I have to start thinking about things like bias. Even my own PhD work suffers from whatever issues you’d have with dataset bias.”
Companies, government agencies and hospitals are increasingly turning to machine learning, image recognition and other AI tools to help predict everything from the creditworthiness of a loan applicant to the preferred treatment for a person suffering from cancer. The tools have big blind spots that particularly affect women and minorities.
“The worry is if we don’t get this right, we could be making wrong decisions that have critical consequences to someone’s life, health or financial stability,” says Jeannette Wing, director of Columbia University’s Data Sciences Institute.
AI also has a disconcertingly human habit of amplifying stereotypes. PhD students at the University of Virginia and the University of Washington examined a public dataset of photos and found that the images of people cooking were 33 percent more likely to picture women than men. When they ran the images through an AI model, the algorithms said women were 68 percent more likely to appear in the cooking photos.
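Read back-of-the-envelope, those two numbers describe an amplification of the dataset’s skew. The sketch below (invented counts chosen to reproduce the 33% and 68% figures; a simplified stand-in for the study’s actual metric) shows the kind of comparison involved:

```python
# Simplified bias-amplification check (illustrative only, not the
# paper's exact metric): compare a gender skew in the training labels
# with the skew in the model's predictions for the same activity.
def skew(women, men):
    """How much more likely women are than men to appear, as a ratio."""
    return women / men

# Invented counts for the activity "cooking":
dataset_skew = skew(women=400, men=300)      # ~1.33: 33% more likely
predicted_skew = skew(women=420, men=250)    # ~1.68: 68% more likely

print(f"dataset skew:   {dataset_skew:.2f}")
print(f"predicted skew: {predicted_skew:.2f}")
if predicted_skew > dataset_skew:
    print("the model amplifies the dataset's gender skew")
```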
Researchers say it will probably take years to solve the bias problem.
The good news is that some of the smartest people in the world have turned their brainpower on the problem. “The field really has woken up and you are seeing some of the best computer scientists, often in concert with social scientists, writing great papers on it,” says University of Washington computer science professor Dan Weld. “There’s been a real call to arms.”
Donald Trump meeting PayPal co-founder Peter Thiel and Apple CEO Tim Cook in December last year. Photograph: Evan Vucci/AP
One of the biggest puzzles about our current predicament with fake news and the weaponisation of social media is why the folks who built this technology are so taken aback by what has happened.
We have a burgeoning genre of “OMG, what have we done?” angst coming from former Facebook and Google employees who have begun to realise that the cool stuff they worked on might have had, well, antisocial consequences.
Put simply, what Google and Facebook have built is a pair of amazingly sophisticated, computer-driven engines for extracting users’ personal information and data trails, refining them for sale to advertisers in high-speed data-trading auctions that are entirely unregulated and opaque to everyone except the companies themselves.
The purpose of this infrastructure was to enable companies to target people with carefully customised commercial messages and, as far as we know, they are pretty good at that.
It never seems to have occurred to them that their advertising engines could also be used to deliver precisely targeted ideological and political messages to voters. Hence the obvious question: how could such smart people be so stupid?
My hunch is it has something to do with their educational backgrounds. Take the Google co-founders. Sergey Brin studied mathematics and computer science. His partner, Larry Page, studied engineering and computer science. Zuckerberg dropped out of Harvard, where he was studying psychology and computer science, but seems to have been more interested in the latter.
Now mathematics, engineering and computer science are wonderful disciplines – intellectually demanding and fulfilling. And they are economically vital for any advanced society. But mastering them teaches students very little about society or history – or indeed about human nature.
As a consequence, the new masters of our universe are people who are essentially only half-educated. They have had no exposure to the humanities or the social sciences, the academic disciplines that aim to provide some understanding of how society works, of history and of the roles that beliefs, philosophies, laws, norms, religion and customs play in the evolution of human culture.
We are now beginning to see the consequences of the dominance of this half-educated elite.
Source: The Guardian – John Naughton is professor of the public understanding of technology at the Open University.
As the director of Stanford’s AI Lab and now as a chief scientist of Google Cloud, Fei-Fei Li is helping to spur the AI revolution. But it’s a revolution that needs to include more people. She spoke with MIT Technology Review senior editor Will Knight about why everyone benefits if we emphasize the human side of the technology.
Why did you join Google?
Researching cutting-edge AI is very satisfying and rewarding, but we’re seeing this great awakening, a great moment in history. For me it’s very important to think about AI’s impact in the world, and one of the most important missions is to democratize this technology. The cloud is this gigantic computing vehicle that delivers computing services to every single industry.
What have you learned so far?
We need to be much more human-centered.
If you look at where we are in AI, I would say it’s the great triumph of pattern recognition. It is very task-focused, it lacks contextual awareness, and it lacks the kind of flexible learning that humans have.
We also want to make technology that makes humans’ lives better, our world safer, our lives more productive and better. All this requires a layer of human-level communication and collaboration.
When you are making a technology this pervasive and this important for humanity, you want it to carry the values of the entire humanity, and serve the needs of the entire humanity.
If the developers of this technology do not represent all walks of life, it is very likely that this will be a biased technology. I say this as a technologist, a researcher, and a mother. And we need to be speaking about this clearly and loudly.
SAN JOSE, CA – APRIL 18: Facebook CEO Mark Zuckerberg delivers the keynote address at Facebook’s F8 Developer Conference on April 18, 2017 at McEnery Convention Center in San Jose, California. (Photo by Justin Sullivan/Getty Images)
… recent story in The Washington Post reported that “minority” groups feel unfairly censored by social media behemoth Facebook, for example, when using the platform for discussions about racial bias. At the same time, groups and individuals on the other end of the race spectrum are quickly being banned and ousted in a flash from various social media networks.
Almost all such activity begins with an algorithm: a set of computer code that, for the purposes of this piece, is created to raise a red flag when certain speech is used on a site.
But from engineer mindset to tech limitation, just how much faith should we be placing in algorithms when it comes to the very sensitive area of digital speech and race, and what does the future hold?
Indeed, while Facebook head Mark Zuckerberg reportedly eyes political ambitions within an increasingly brown America, in which his own company consistently has issues creating racial balance, there are questions around the policy and development of such algorithms. In fact, Malkia Cyril, executive director of the Center for Media Justice, told the Post that she believes Facebook has a double standard when it comes to deleting posts.
Cyril explains [her meeting with Facebook] “The meeting was a good first step, but very little was done in the direct aftermath. Even then, Facebook executives, largely white, spent a lot of time explaining why they could not do more instead of working with us to improve the user experience for everyone.”
What’s actually in the hearts and minds of those in charge of the software development? How many more of those writing the code hold similar thoughts, or more extreme ones, like those recently expressed in what is now known as the Google Anti-Diversity memo?
Not just Facebook, but any and all tech platforms where race discussion occurs are seemingly at a crossroads, and under scrutiny over management, standards and policy in this sensitive area. The main question is how much of this imbalance is deliberate, and how much is just a result of how algorithms naturally work.
Nelson [National Chairperson National Society of Black Engineers] notes that the first source of error, however, is how a particular team defines the term hate speech. “That opinion may differ between people so any algorithm would include error at the individual level,” he concludes.
“I believe there are good people at Facebook who want to see justice done,” says Cyril. “There are steps being taken at the company to improve the experience of users and address the rising tide of hate that thwarts democracy, on social media and in real life.
That said, racism is not race neutral, and accountability for racism will never come from an algorithm alone.”
There is a fear of the opaque power that Google in particular, and Silicon Valley in general, wield over our lives.
If Google — and the tech world more generally — is sexist, or in the grips of a totalitarian cult of political correctness, or a secret hotbed of alt-right reactionaries, the consequences would be profound.
Google wields a monopoly over search, one of the central technologies of our age, and, alongside Facebook, dominates the internet advertising market, making it a powerful driver of both consumer opinion and the media landscape.
It shapes the world in which we live in ways both obvious and opaque.
This is why trust matters so much in tech. It’s why Google, to attain its current status in society, had to promise, again and again, that it wouldn’t be evil.
Compounding the problem is that the tech industry’s point of view is embedded deep in the product, not announced on the packaging. Its biases are quietly built into algorithms, reflected in platform rules, expressed in code few of us can understand and fewer of us will ever read.
But what if it actually is evil? Or what if it’s not evil but just immature, unreflective, and uncompassionate? And what if that’s the culture that designs the digital services the rest of us have to use?
The technology industry’s power is vast, and the way that power is expressed is opaque, so the only real assurance you can have that your interests and needs are being considered is to be in the room when the decisions are made and the code is written. But tech as an industry is unrepresentative of the people it serves and unaccountable in the way it serves them, and so there’s very little confidence among any group that the people in the room are the right ones.
Among the questions the House of Lords committee will consider as part of the inquiry are:
How can the data-based monopolies of some large corporations, and the ‘winner-takes-all’ economics associated with them, be addressed?
What are the ethical implications of the development and use of artificial intelligence?
In what situations is a relative lack of transparency in artificial intelligence systems (so-called ‘black boxing’) acceptable?
What role should the government take in the development and use of artificial intelligence in the UK?
Should artificial intelligence be regulated?
The Committee wants to use this inquiry to understand what opportunities exist for society in the development and use of artificial intelligence, as well as what risks there might be.
“We are looking to be pragmatic in our approach, and want to make sure our recommendations to government and others will be practical and sensible.
There are significant questions to address relevant to both the present and the future, and we want to help inform the answers to them. To do this, we need the help of the widest range of people and organisations.”
Artificial intelligence algorithms can indeed create a world that distributes resources more efficiently and, in theory, can offer more for everyone.
Yes, but: If we aren’t careful, these same algorithms could actually lead to greater discrimination by codifying the biases that exist both overtly and unconsciously in human society.
What’s more, the power to make these decisions lies in the hands of Silicon Valley, which has a decidedly mixed record on spotting and addressing diversity issues in its midst.
Airbnb’s Mike Curtis put it well when I interviewed him this week at VentureBeat’s MobileBeat conference:
“One of the best ways to combat bias is to be aware of it. When you are aware of the biases then you can be proactive about getting in front of them. Well, computers don’t have that advantage. They can’t be aware of the biases that may have come into them from the data patterns they have seen.”
Concern is growing:
The ACLU has raised concerns that age, sex, and race biases are already being codified into the algorithms that power AI.
ProPublica found that a computer program used in various regions to decide whom to grant parole would go easy on white offenders while being unduly harsh to black ones.
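As a hedged sketch of the kind of check behind that finding, the snippet below computes the false positive rate (flagged as high risk but did not reoffend) separately by group on placeholder data; ProPublica’s actual analysis of the COMPAS data was considerably more involved:

```python
# Minimal disparity check in the spirit of ProPublica's analysis
# (placeholder data; the real study was far more involved).
import numpy as np

def false_positive_rate(flagged_high_risk, reoffended):
    """Fraction of non-reoffenders who were flagged as high risk."""
    no_reoffense = ~reoffended
    return flagged_high_risk[no_reoffense].mean()

rng = np.random.default_rng(2)
race = rng.choice(["black", "white"], 5_000)
reoffended = rng.random(5_000) < 0.30
flagged = rng.random(5_000) < 0.40           # stand-in risk-tool output

for r in ("black", "white"):
    m = race == r
    fpr = false_positive_rate(flagged[m], reoffended[m])
    print(f"{r}: false positive rate = {fpr:.3f}")
# A materially higher FPR for one group is the disparity ProPublica found.
```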
It’s an issue that Weapons of Math Destruction author Cathy O’Neil raised in a popular talk at the TED conference this year. “Algorithms don’t make things fair,” she said. “They automate the status quo.”
“We’re here at an inflection point for AI. We have an ethical imperative to harness AI to protect and preserve over time,” said Eric Horvitz, managing director of Microsoft Research.
2017 EmTech panel discussion
One shared concern was that recent advances are leading companies to put software in positions with very direct control over humans—for example in health care.
Francesca Rossi, a researcher at IBM, gave the example of a machine providing assistance or companionship to elderly people. “This robot will have to follow cultural norms that are culture-specific and task-specific,” she said. “[And] if you were to deploy in the U.S. or Japan, that behavior would have to be very different.”
In the past year, many efforts to research the ethical challenges of machine learning and AI have sprung up in academia and industry. The University of California, Berkeley; Harvard; and the Universities of Oxford and Cambridge have all started programs or institutes to work on ethics and safety in AI. In 2016, Amazon, Microsoft, Google, IBM, and Facebook jointly founded a nonprofit called Partnership on AI to work on the problem (Apple joined in January).
Companies are also taking individual action to build safeguards around their technology.
Gupta highlighted research at Google that is testing ways to correct biased machine-learning models, or prevent them from becoming skewed in the first place.
Horvitz described Microsoft’s internal ethics board for AI, dubbed AETHER, which considers things like new decision algorithms developed for the company’s in-cloud services. Although the board is currently made up of Microsoft employees, the company hopes to add outside voices in the future.
Industrial robots alone eliminated up to 670,000 American jobs between 1990 and 2007.
It seems that after a factory sheds workers, that economic pain reverberates, triggering further unemployment at, say, the grocery store or the neighborhood car dealership.
In a way, this is surprising. Economists understand that automation has costs, but they have largely emphasized the benefits: machines make things cheaper, and they free up workers to do other jobs.
The latest study reveals that for manufacturing workers, the process of adjusting to technological change has been much slower and more painful than most experts thought.
Every industrial robot eliminated about three manufacturing positions, plus three more jobs from around town.
“We were looking at a span of 20 years, so in that timeframe, you would expect that manufacturing workers would be able to find other employment,” Restrepo said. Instead, not only did the factory jobs vanish, but other local jobs disappeared too.
This evidence draws attention to the losers — the dislocated factory workers who just can’t bounce back.
One robot in the workforce led to the loss of 6.2 jobs within a commuting zone, the area within which local people travel to work.
The robots also reduce wages, with one robot per thousand workers leading to a wage decline of between 0.25% and 0.5% (Fortune).
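To make those headline figures concrete, here is a small worked example on a hypothetical commuting zone, using the per-robot numbers quoted above:

```python
# Worked example using the study's headline figures, applied to a
# hypothetical commuting zone of 500,000 workers adding 1,000 robots.
workers = 500_000
robots_added = 1_000

jobs_lost = robots_added * 6.2                   # 6.2 jobs per robot
robots_per_thousand = robots_added / (workers / 1_000)
wage_decline_low = robots_per_thousand * 0.25    # percent
wage_decline_high = robots_per_thousand * 0.50   # percent

print(f"jobs lost: {jobs_lost:,.0f}")
print(f"robots per thousand workers: {robots_per_thousand:.1f}")
print(f"estimated wage decline: {wage_decline_low:.1f}% to {wage_decline_high:.1f}%")
```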
None of these efforts, though, seem to be doing enough for communities that have lost their manufacturing bases, where people have reduced earnings for the rest of their lives.
Perhaps that much was obvious. After all, anecdotes about the Rust Belt abound. But the new findings bolster the conclusion that these economic dislocations are not brief setbacks, but can hurt areas for an entire generation.
How do we even know that automation is a big part of the story at all? A key bit of evidence is that, despite the massive layoffs, American manufacturers are making more stuff than ever. Factories have become vastly more productive.
Some consultants believe that the number of industrial robots will quadruple in the next decade, which could mean millions more displaced manufacturing workers.
The question now is what to do if the period of “maladjustment” lasts decades, or possibly a lifetime, as the latest evidence suggests.
Automation amplified opportunities for people with advanced skills and talents.
Digital lending is expected to double in size over the next three years, reaching nearly 10 percent of all loans in the U.S. and Europe.
Marc Stein, who runs Underwrite.AI, writes algorithms capable of teaching themselves.
The program learns from each correlation it finds, whether it’s determining someone’s favorite books or if they are lying about their income on a loan application. And using that information, it can predict whether the applicant is a good risk. Digital lenders are pulling in all kinds of data, including purchases, SAT scores and public records like fishing licenses.
“If we looked at the delta between what people said they made and what we could verify, that was highly predictive,” Stein says.
As part of the loan application process, some lenders have prospective borrowers download an app that uploads an extraordinary amount of information, like daily location patterns, the punctuation of text messages, or how many of their contacts have last names.
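As a purely hypothetical sketch (invented feature names and synthetic data, not Underwrite.AI’s or any real lender’s model), the stated-versus-verified income delta Stein describes would enter such a system as just another input column:

```python
# Hypothetical sketch of alternative-data underwriting (invented
# features and synthetic data; not any real lender's model).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
n = 5_000
income_delta = rng.normal(0, 5_000, n)       # stated minus verified income
contacts_with_surnames = rng.random(n)       # fraction, per the article
location_changes = rng.poisson(2, n)         # invented mobility signal

X = np.column_stack([income_delta, contacts_with_surnames, location_changes])
# Synthetic labels where big income overstatements raise default risk:
y = (rng.random(n) < 0.1 + 0.5 * (income_delta > 10_000)).astype(int)

model = GradientBoostingClassifier().fit(X, y)
applicant = [[12_000, 0.4, 3]]               # overstated income by $12k
print("predicted default risk:", model.predict_proba(applicant)[0, 1])
```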
“FICO and income, which are sort of the sweet spot of what every consumer lender in the United States uses, actually themselves are quite biased against people,” says Dave Girouard, the CEO of Upstart, an online lender.
Government research has found that FICO scores hurt younger borrowers and those from foreign countries, because people with low incomes are targeted for higher-interest loans. Girouard argues that new, smarter data can make lending more fair.
A festival goer at the 2017 SXSW Conference and Festivals in Austin, Texas.
At SXSW this year, the conference itself feels a lot like a hangover.
It’s as if the coastal elites who attend each year finally woke up with a serious case of the Sunday scaries, realizing that the many apps, platforms, and doodads SXSW has launched and glorified over the years haven’t really made the world a better place. In fact, they’ve often come with wildly destructive and dangerous side effects. Sure, it all seemed like a good idea in 2013!
But now the party’s over. It’s time for the regret-filled cleanup.
Speakers related how the very platforms that were meant to promote a marketplace of ideas online have become filthy junkyards of harassment and disinformation.
Yasmin Green, who leads an incubator within Alphabet called Jigsaw, focused her remarks on the rise of fake news, and even brought two propaganda publishers with her on stage to explain how, and why, they do what they do. For Jestin Coler, founder of the phony Denver Guardian, it was an all too easy way to turn a profit during the election.
“To be honest, my mortgage was due,” Coler said of what inspired him to write a bogus article claiming an FBI agent related to Hillary Clinton’s email investigation was found dead in a murder-suicide. That post was shared some 500,000 times just days before the election.
While prior years’ panels may have optimistically offered up more tech as the answer to what ails tech, this year was decidedly short on solutions.
There seemed to be, throughout the conference, a keen awareness of the limits human beings ought to place on the software that is very much eating the world.
While economists debate the extent to which technology plays a role in global inequality, most agree that tech advances have exacerbated the problem.
Economist Erik Brynjolfsson said,
“My reading of the data is that technology is the main driver of the recent increases in inequality. It’s the biggest factor.”
AI expert Yoshua Bengio suggests that equality and ensuring a shared benefit from AI could be pivotal in the development of safe artificial intelligence. Bengio, a professor at the University of Montreal, explains, “In a society where there’s a lot of violence, a lot of inequality, [then] the risk of misusing AI or having people use it irresponsibly in general is much greater. Making AI beneficial for all is very central to the safety question.”
“It’s almost a moral principle that we should share benefits among more people in society,” argued Bart Selman, a professor at Cornell University … “So we have to go into a mode where we are first educating the people about what’s causing this inequality and acknowledging that technology is part of that cost, and then society has to decide how to proceed.”