“But it is a test that I am confident we can meet”
Theresa May, Prime Minister of the UK
The prime minister is to say she wants the UK to lead the world in deciding how artificial intelligence can be deployed in a safe and ethical manner.
Theresa May will say at the World Economic Forum in Davos that a new advisory body, previously announced in the Autumn Budget, will co-ordinate efforts with other countries.
In addition, she will confirm that the UK will join the Davos forum’s own council on artificial intelligence.
But others may have stronger claims.
Earlier this week, Google picked France as the base for a new research centre dedicated to exploring how AI can be applied to health and the environment.
Facebook also announced it was doubling the size of its existing AI lab in Paris, while software firm SAP committed itself to a 2bn euro ($2.5bn; £1.7bn) investment into the country that will include work on machine learning.
Meanwhile, a report released last month by the Eurasia Group consultancy suggested that the US and China are engaged in a “two-way race for AI dominance”.
It predicted Beijing would take the lead thanks to the “insurmountable” advantage of offering its companies more flexibility in how they use data about its citizens.
She is expected to say that the UK is recognised as first in the world for its preparedness to “bring artificial intelligence into government”.
Zhu Long, co-founder and CEO of Yitu Technology, has his identity checked at the company’s headquarters in the Hongqiao business district in Shanghai.
“Our machines can very easily recognise you among at least 2 billion people in a matter of seconds,” says chief executive and Yitu co-founder Zhu Long, “which would have been unbelievable just three years ago.”
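Yitu has not disclosed Dragonfly Eye’s architecture, but the standard technique behind this kind of one-to-many identification is to convert each face into a numerical embedding and search a database for the nearest match. A minimal sketch in Python, with hypothetical names and a toy 1,000-entry gallery standing in for billions of records:

```python
# Hypothetical sketch of one-to-many face identification. Real systems
# compute embeddings with a deep network and use approximate
# nearest-neighbour indexes; here random vectors stand in for embeddings.
import numpy as np

EMBEDDING_DIM = 512  # a typical size for face-recognition embeddings

def cosine_scores(query: np.ndarray, gallery: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and every gallery row."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return g @ q

def identify(query: np.ndarray, gallery: np.ndarray, threshold: float = 0.6):
    """Return (best index, score) if any match clears the threshold."""
    scores = cosine_scores(query, gallery)
    best = int(np.argmax(scores))
    return (best, float(scores[best])) if scores[best] >= threshold else None

# Toy demo: entry 42 photographed again with some noise.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, EMBEDDING_DIM))
query = gallery[42] + rng.normal(scale=0.1, size=EMBEDDING_DIM)
print(identify(query, gallery))  # -> (42, ~0.99)
```

At real scale the exhaustive argmax would be replaced by an approximate nearest-neighbour index, which is what makes searching billions of faces in seconds feasible.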
Its platform is in service with more than 20 provincial public security departments and is used in more than 150 municipal public security systems across the country. Dragonfly Eye has already proved its worth.
On its very first day of operation on the Shanghai Metro, in January, the system identified a wanted man when he entered a station. After matching his face against the database, Dragonfly Eye sent his photo to a policeman, who made an arrest.
In the following three months, 567 suspected lawbreakers were caught on the city’s underground network.
Cities where the algorithms are at work say they have seen a decrease in crime. According to Yitu, which says it gets its figures directly from the local authorities, since the system was implemented, pickpocketing on Xiamen’s city buses has fallen by 30 per cent; 500 criminal cases have been resolved by AI in Suzhou since June 2015; and police arrested nine suspects identified by algorithms during the 2016 G20 summit in Hangzhou.
Research and advocacy group Human Rights Watch (HRW) says security systems such as those being developed by Yitu “violate privacy and target dissent”.
“Chinese authorities are collecting and centralising ever more information about hundreds of millions of ordinary people, identifying persons who deviate from what they determine to be ‘normal thought’ and then surveilling them,” says Sophie Richardson, China director at HRW.
The NGO calls it a “police cloud” system and believes “it is designed to track and predict the activities of activists, dissidents and ethnic minorities, including those authorities say have extreme thoughts, among others”.
Zhu says, “We all discuss AI as an opportunity for humanity to advance or as a threat to it. What I believe is that we will have to redefine what it is to be human. We will have to ask ourselves what the foundations of our species are.
“At the same time, AI will allow us to explore the boundaries of human intelligence, evaluate its performance and help us understand ourselves better.”
In a statement broadcast live on Facebook on September 21 and subsequently posted to his profile page, Mark Zuckerberg pledged to increase the resources of Facebook’s security and election-integrity teams and to work “proactively to strengthen the democratic process.”
It was an admirable commitment. But reading through it, I kept getting stuck on one line: “We have been working to ensure the integrity of the German elections this weekend,” Zuckerberg writes. It’s a comforting sentence, a statement that shows Zuckerberg and Facebook are eager to restore trust in their system.
But … it’s not the kind of language we expect from media organizations, even the largest ones. It’s the language of governments, or political parties, or NGOs. A private company, working unilaterally to ensure election integrity in a country it’s not even based in?
Facebook has grown so big, and become so totalizing, that we can’t really grasp it all at once.
Like a four-dimensional object, we catch slices of it when it passes through the three-dimensional world we recognize. In one context, it looks and acts like a television broadcaster, but in this other context, an NGO. In a recent essay for the London Review of Books, John Lanchester argued that for all its rhetoric about connecting the world, the company is ultimately built to extract data from users to sell to advertisers. This may be true, but Facebook’s business model tells us only so much about how the network shapes the world.
Between March 23, 2015, when Ted Cruz announced his candidacy, and November 2016, 128 million people in America created nearly 10 billion Facebook posts, shares, likes, and comments about the election. (For scale, 137 million people voted last year.)
In February 2016, the media theorist Clay Shirky wrote about Facebook’s effect: “Reaching and persuading even a fraction of the electorate used to be so daunting that only two national orgs” — the two major national political parties — “could do it. Now dozens can.”
It used to be if you wanted to reach hundreds of millions of voters on the right, you needed to go through the GOP Establishment. But in 2016, the number of registered Republicans was a fraction of the number of daily American Facebook users, and the cost of reaching them directly was negligible.
“Facebook has the same kind of attentional power [as TV networks in the 1950s], but there is not a sense of responsibility,” said Tim Wu, the Columbia Law School professor. “No constraints. No regulation. No oversight. Nothing. A bunch of algorithms, basically, designed to give people what they want to hear.”
It tends to get forgotten, but Facebook briefly ran itself in part as a democracy: Between 2009 and 2012, users were given the opportunity to vote on changes to the site’s policy. But voter participation was minuscule, and Facebook felt the scheme “incentivized the quantity of comments over their quality.” In December 2012, that mechanism was abandoned “in favor of a system that leads to more meaningful feedback and engagement.”
Facebook had grown too big, and its users too complacent, for democracy.
“Personally, I think the idea that fake news on Facebook, it’s a very small amount of the content, influenced the election in any way is a pretty crazy idea.” – Mark Zuckerberg
Nine days after Facebook chief executive Mark Zuckerberg dismissed as “crazy” the idea that fake news on his company’s social network played a key role in the U.S. election, President Barack Obama pulled the youthful tech billionaire aside and delivered what he hoped would be a wake-up call.
Obama made a personal appeal to Zuckerberg to take the threat of fake news and political disinformation seriously. Unless Facebook and the government did more to address the threat, Obama warned, it would only get worse in the next presidential race.
“There’s been a systematic failure of responsibility. It’s rooted in their overconfidence that they know best, their naivete about how the world works, their extensive effort to avoid oversight, and their business model of having very few employees so that no one is minding the store.” – Zeynep Tufekci
Zuckerberg acknowledged the problem posed by fake news. But he told Obama that those messages weren’t widespread on Facebook and that there was no easy remedy, according to people briefed on the exchange.
One outcome of those efforts was Zuckerberg’s admission on Thursday that Facebook had indeed been manipulated and that the company would now turn over to Congress more than 3,000 politically themed advertisements that were bought by suspected Russian operatives.
These issues have forced Facebook and other Silicon Valley companies to weigh core values, including freedom of speech, against the problems created when malevolent actors use those same freedoms to pump messages of violence, hate and disinformation.
Congressional investigators say the disclosure only scratches the surface. One called Facebook’s discoveries thus far “the tip of the iceberg.” Nobody really knows how many accounts are out there, or how to prevent more of them from being created to shape the next election — and turn American society against itself.
“There is no question that the idea that Silicon Valley is the darling of our markets and of our society — that sentiment is definitely turning,” said Tim O’Reilly, an adviser to tech executives and chief executive of the influential Silicon Valley-based publisher O’Reilly Media.
Underlying all of this is fear of the opaque power that Google in particular, and Silicon Valley in general, wields over our lives.
If Google — and the tech world more generally — is sexist, or in the grips of a totalitarian cult of political correctness, or a secret hotbed of alt-right reactionaries, the consequences would be profound.
Google wields a monopoly over search, one of the central technologies of our age, and, alongside Facebook, dominates the internet advertising market, making it a powerful driver of both consumer opinion and the media landscape.
It shapes the world in which we live in ways both obvious and opaque.
This is why trust matters so much in tech. It’s why Google, to attain its current status in society, had to promise, again and again, that it wouldn’t be evil.
Compounding the problem is that the tech industry’s point of view is embedded deep in the product, not announced on the packaging. Its biases are quietly built into algorithms, reflected in platform rules, expressed in code few of us can understand and fewer of us will ever read.
But what if it actually is evil? Or what if it’s not evil but just immature, unreflective, and uncompassionate? And what if that’s the culture that designs the digital services the rest of us have to use?
The technology industry’s power is vast, and the way that power is expressed is opaque, so the only real assurance you can have that your interests and needs are being considered is to be in the room when the decisions are made and the code is written. But tech as an industry is unrepresentative of the people it serves and unaccountable in the way it serves them, and so there’s very little confidence among any group that the people in the room are the right ones.
Technology and the law are converging, and where they meet new questions arise about the relative roles of artificial and human agents—and the ethical issues involved in the shift from one to the other. While legal technology has largely focused on the activities of the bar, it challenges us to think about its application to the bench as well. In particular: could AI replace human judges?
The idea of AI judges raises important ethical issues around bias and autonomy. AI programs may incorporate the biases of their programmers and the humans they interact with.
But while such programs may replicate existing human biases, the distinguishing feature of AI over a conventional algorithm is that it can behave in surprising and unintended ways as it ‘learns’. Eradicating bias therefore becomes even more difficult, though not impossible. Any AI judging program would need to account for, and be tested for, these biases.
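What might such testing look like in practice? One simple check, sketched below with made-up data, compares favourable-outcome rates across demographic groups (so-called demographic parity); a real audit would apply many further metrics, such as equalised odds and calibration.

```python
# A minimal, illustrative bias test: compare favourable-outcome rates
# across groups in a set of (group, favourable?) decisions. The data
# here is invented; real audits use many metrics, not just this one.
from collections import defaultdict

def outcome_rates(decisions):
    """decisions: iterable of (group, favourable: bool) pairs."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, fav in decisions:
        totals[group] += 1
        favourable[group] += fav
    return {g: favourable[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in favourable-outcome rate between groups."""
    rates = outcome_rates(decisions)
    return max(rates.values()) - min(rates.values())

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(outcome_rates(decisions))  # A: ~0.67, B: ~0.33
print(parity_gap(decisions))     # ~0.33 -- a gap that would warrant scrutiny
```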
Appealing to rationality, the counter-argument is that human judges are already biased, and that AI can be used to improve the way we deal with those biases and reduce our ignorance of them. Yet suspicions about AI judges remain, and are already enough of a concern to lead the European Union to promulgate the General Data Protection Regulation, which becomes effective in 2018. This Regulation contains:
“the right not to be subject to a decision based solely on automated processing”.
As the English utilitarian legal theorist Jeremy Bentham once wrote in An Introduction to the Principles of Morals and Legislation, “in principle and in practice, in a right track and in a wrong one, the rarest of all human qualities is consistency.” With the ability to process far more data and variables in the case record than humans could ever do, an AI judge might be able to outstrip a human one in many cases.
Even so, AI judges may not solve classical questions of legal validity so much as raise new questions about the role of humans, since—if we believe that ethics and morality in the law are important—then they necessarily lie, or ought to lie, in the domain of human judgment.
In practical terms, if we apply this conclusion to the perspective of American legal theorist Ronald Dworkin, for example, AI could assist with examining the entire breadth and depth of the law, but humans would ultimately choose what they consider a morally superior interpretation.
The American judge Richard Posner believes that the immediate use of AI and automation should be restricted to assisting judges in uncovering their own biases and maintaining consistency.
At the heart of these issues is a hugely challenging question: what does it mean to be human in the age of Artificial Intelligence?
Among the questions the House of Lords committee will consider as part of the inquiry are:
How can the data-based monopolies of some large corporations, and the ‘winner-takes-all’ economics associated with them, be addressed?
What are the ethical implications of the development and use of artificial intelligence?
In what situations is a relative lack of transparency in artificial intelligence systems (so-called ‘black boxing’) acceptable?
What role should the government take in the development and use of artificial intelligence in the UK?
Should artificial intelligence be regulated?
The Committee wants to use this inquiry to understand what opportunities exist for society in the development and use of artificial intelligence, as well as what risks there might be.
“We are looking to be pragmatic in our approach, and want to make sure our recommendations to government and others will be practical and sensible.
“There are significant questions to address relevant to both the present and the future, and we want to help inform the answers to them. To do this, we need the help of the widest range of people and organisations.”
A new report from the International Bar Association suggests machines will most likely replace humans in high-routine occupations.
The authors have suggested that governments introduce human quotas in some sectors in order to protect jobs.
“We thought it’d just be an insight into the world of automation and the blue-collar sector. This topic has picked up speed tremendously and you can see it everywhere and read about it every day. It’s a hot topic now.” – Gerlind Wisskirchen, a lawyer who coordinated the study
For business futurist Morris Miselowski, job shortages will be a reality in the future.
“I’m not absolutely convinced we will have enough work for everybody on this planet within 30 years anyway. I’m not convinced that work as we understand it, this nine-to-five, Monday to Friday, is sustainable for many of us for the next couple of decades.”
“Even though automation began 30 years ago in the blue-collar sector, the new development of artificial intelligence and robotics affects not just the blue-collar but the white-collar sector,” said Ms Wisskirchen. “You can see that when you look at the jobs that will be replaced by algorithms or robots, depending on the sector.”
The report has recommended some methods to mitigate human job losses, including a type of ‘human quota’ in some sectors, the introduction of a ‘made by humans’ label, or a tax on the use of machines.
But for Professor Miselowski, setting up human and computer ratios in the workplace would be impractical.
“We want to maintain human employment for as long as possible, but I don’t see it as practical or pragmatic in the long term,” he said. “I prefer what I call a trans-humanist world, where what we do is we learn to work alongside machines the same way we have with computers and calculators.”
“It’s just something that is going to happen, or has already started to happen. And we need to make the best out of it, but we need to think ahead and be very thoughtful in how we shape society in the future — and that’s I think a challenge for everybody,” said Ms Wisskirchen.
A festival-goer at the 2017 SXSW Conference and Festivals in Austin, Texas.
This year, the SXSW conference itself feels a lot like a hangover.
It’s as if the coastal elites who attend each year finally woke up with a serious case of the Sunday scaries, realizing that the many apps, platforms, and doodads SXSW has launched and glorified over the years haven’t really made the world a better place. In fact, they’ve often come with wildly destructive and dangerous side effects. Sure, it all seemed like a good idea in 2013!
But now the party’s over. It’s time for the regret-filled cleanup.
Speakers related how the very platforms that were meant to promote a marketplace of ideas online have become filthy junkyards of harassment and disinformation.
Yasmin Green, who leads an incubator within Alphabet called Jigsaw, focused her remarks on the rise of fake news, and even brought two propaganda publishers with her on stage to explain how, and why, they do what they do. For Jestin Coler, founder of the phony Denver Guardian, it was an all too easy way to turn a profit during the election.
“To be honest, my mortgage was due,” Coler said of what inspired him to write a bogus article claiming an FBI agent related to Hillary Clinton’s email investigation was found dead in a murder-suicide. That post was shared some 500,000 times just days before the election.
While prior years’ panels may have optimistically offered up more tech as the answer to what ails tech, this year was decidedly short on solutions.
There seemed to be, throughout the conference, a keen awareness of the limits human beings ought to place on the software that is very much eating the world.
We are in the middle of a technological upheaval that will transform the way society is organized. We must make the right decisions now.
In 2016 we produced as much data as in the entire history of humankind through 2015.
It is estimated that in 10 years’ time there will be 150 billion networked measuring sensors, 20 times more than people on Earth. Then, the amount of data will double every 12 hours.
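Taken at face value, those projections are easy to sanity-check. A back-of-the-envelope sketch in Python (assuming roughly 7.5 billion people, and taking the 12-hour doubling claim literally):

```python
# Back-of-the-envelope check of the figures quoted in the text.
people = 7.5e9     # rough world population (assumption)
sensors = 150e9    # networked sensors forecast for 10 years' time
print(sensors / people)  # -> 20.0 sensors per person

# "Data will double every 12 hours" means two doublings per day:
doublings_per_day = 2
print(2 ** (doublings_per_day * 7))  # -> 16384-fold growth in one week
```

Growth that fast cannot continue unchecked, which is consistent with the later point that the ability to process data lags ever further behind its growth.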
One thing is clear: the way in which we organize the economy and society will change fundamentally. We are experiencing the largest transformation since the end of the Second World War; after the automation of production and the creation of self-driving cars, the automation of society is next.
Everything will become intelligent; soon we will not only have smart phones, but also smart homes, smart factories and smart cities. Should we also expect these developments to result in smart nations and a smarter planet?
The field of artificial intelligence is, indeed, making breathtaking advances. Artificial intelligence is no longer programmed line by line, but is now capable of learning, thereby continuously developing itself.
Under the label of “nudging,” and on a massive scale, governments are trying to steer citizens towards healthier or more environmentally friendly behaviour by means of a “nudge”—a modern form of paternalism.
The new, caring government is not only interested in what we do, but also wants to make sure that we do the things that it considers to be right. The magic phrase is “big nudging”, which is the combination of big data with nudging.
In a rapidly changing world a super-intelligence can never make perfect decisions: systemic complexity is increasing faster than data volumes, which are growing faster than the ability to process them, and data transfer rates are limited. Furthermore, there is a danger that the manipulation of decisions by powerful algorithms undermines the basis of “collective intelligence,” which can flexibly adapt to the challenges of our complex world. For collective intelligence to work, information searches and decision-making by individuals must occur independently. If our judgments and decisions are predetermined by algorithms, however, this truly leads to a brainwashing of the people. Intelligent beings are downgraded to mere receivers of commands, who automatically respond to stimuli.
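The independence requirement can be illustrated with a toy simulation (purely illustrative numbers, not from the article): many independent noisy estimates average out close to the truth, while estimates nudged toward one shared signal inherit its error, and no amount of averaging removes it.

```python
# Toy illustration of why collective intelligence needs independence.
import numpy as np

rng = np.random.default_rng(1)
truth, n = 100.0, 10_000

# Independent judgments: individual errors cancel in the average.
independent = truth + rng.normal(0, 20, size=n)

# "Nudged" judgments: everyone is pulled 80% toward one shared signal,
# so the common error survives averaging.
shared_error = rng.normal(0, 20)
nudged = 0.2 * (truth + rng.normal(0, 20, size=n)) + 0.8 * (truth + shared_error)

print(abs(independent.mean() - truth))  # small: noise averages out
print(abs(nudged.mean() - truth))       # stuck near 0.8 * shared_error
```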
We are now at a crossroads. Big data, artificial intelligence, cybernetics and behavioral economics are shaping our society—for better or worse.
We are at the historic moment, where we have to decide on the right path—a path that allows us all to benefit from the digital revolution.
At JPMorgan, a learning machine is parsing financial deals that once kept legal teams busy for thousands of hours.
The program, called COIN, for Contract Intelligence, does the mind-numbing job of interpreting commercial-loan agreements that, until the project went online in June, consumed 360,000 hours of lawyers’ time annually. The software reviews documents in seconds, is less error-prone and never asks for vacation.
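JPMorgan has not published how COIN works. As a deliberately simplified illustration of the underlying task (pulling structured terms out of loan text), here is a toy, pattern-based sketch in Python; the real system reportedly relies on machine learning rather than hand-written rules like these.

```python
# Toy stand-in for contract-term extraction. All patterns and field
# names are hypothetical; COIN's actual approach is not public.
import re

PATTERNS = {
    "principal":     re.compile(r"principal amount of \$([\d,]+)"),
    "interest_rate": re.compile(r"interest rate of ([\d.]+)%"),
    "maturity_date": re.compile(r"matures? on (\w+ \d{1,2}, \d{4})"),
}

def extract_terms(agreement_text: str) -> dict:
    """Return whichever loan terms the patterns can find in the text."""
    terms = {}
    for name, pattern in PATTERNS.items():
        match = pattern.search(agreement_text)
        if match:
            terms[name] = match.group(1)
    return terms

sample = ("The Borrower shall repay the principal amount of $1,000,000, "
          "bearing an interest rate of 4.25%, and the loan matures on "
          "June 30, 2020.")
print(extract_terms(sample))
# {'principal': '1,000,000', 'interest_rate': '4.25', 'maturity_date': 'June 30, 2020'}
```

Even this crude approach hints at why such software never tires: the same logic runs identically over the ten-thousandth agreement as over the first.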
COIN is just the start for the biggest U.S. bank. The firm recently set up technology hubs for teams specialising in big data, robotics and cloud infrastructure to find new sources of revenue, while reducing expenses and risks.
The push to automate mundane tasks and create new tools for bankers and clients is a growing part of the firm’s $9.6 billion technology budget.
Behind the strategy, overseen by Chief Operating Officer Matt Zames and Chief Information Officer Dana Deasy, is an undercurrent of anxiety:
though JPMorgan emerged from the financial crisis as one of few big winners, its dominance is at risk unless it aggressively pursues new technologies, according to interviews with a half-dozen bank executives.
Aristotle states that it is a fact that “all knowledge and every pursuit aims at some good,” but then continues, “What then do we mean by the good?” That, in essence, encapsulates the ethical dilemma.
We all agree that we should be good and just, but it’s much harder to decide what that entails.
“We need to decide to what extent the legal principles that we use to regulate humans can be used for machines. There is a great potential for machines to alert us to bias. We need to not only train our algorithms but also be open to the possibility that they can teach us about ourselves.”– Francesca Rossi, an AI researcher at IBM
Since Aristotle’s time, the questions he raised have been continually discussed and debated.
Today, as we enter a “cognitive era” of thinking machines, the problem of what should guide our actions is gaining newfound importance. If we find it so difficult to denote the principles by which a person should act justly and wisely, then how are we to encode them within the artificial intelligences we are creating? It is a question we need to answer soon.
Cultural Norms vs. Moral Values
Another issue we will have to contend with is deciding not only which ethical principles to encode in artificial intelligences but also how they are coded. For the most part, “Thou shalt not kill” is a strict principle; only in a few rare cases, such as for a Secret Service agent or a soldier, does it become more like a preference that is greatly affected by context.
What makes one thing a moral value and another a cultural norm? Well, that’s a tough question for even the most lauded human ethicists, but we will need to code those decisions into our algorithms. In some cases, there will be strict principles; in others, merely preferences based on context. For some tasks, algorithms will need to be coded differently according to the jurisdiction in which they operate.
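To make the distinction concrete, here is a minimal, entirely hypothetical sketch of such a two-tier encoding: a handful of rules treated as hard constraints, and the rest as preferences whose weights vary by jurisdiction. None of the rule names or weights come from any real framework.

```python
# Hypothetical two-tier rule encoding: strict principles as hard
# constraints, everything else as jurisdiction-weighted preferences.
HARD_CONSTRAINTS = {"cause_physical_harm", "break_local_law"}

JURISDICTION_WEIGHTS = {
    "EU": {"share_personal_data": -0.9, "record_interaction": -0.5},
    "US": {"share_personal_data": -0.4, "record_interaction": -0.1},
}

def evaluate(effects: set, jurisdiction: str):
    """None means forbidden outright; otherwise a preference score."""
    if effects & HARD_CONSTRAINTS:
        return None  # strict principle: never permitted, whatever the score
    weights = JURISDICTION_WEIGHTS[jurisdiction]
    return sum(weights.get(effect, 0.0) for effect in effects)

print(evaluate({"record_interaction"}, "EU"))   # -0.5: discouraged
print(evaluate({"record_interaction"}, "US"))   # -0.1: mildly discouraged
print(evaluate({"cause_physical_harm"}, "EU"))  # None: forbidden
```

The same action can thus score differently in Brussels and in Boston, while the strict principles never enter the weighing at all.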
Setting a Higher Standard
Most AI experts I’ve spoken to think that we will need to set higher moral standards for artificial intelligences than we do for humans.
Major industry players, such as Google, IBM, Amazon, and Facebook, recently set up a partnership to create an open platform between leading AI companies and stakeholders in academia, government, and industry to advance understanding and promote best practices. Yet that is merely a starting point.
“Hi, I’m LawBot, a robot designed to help victims of crime in England.”
While volunteering at a school sexual consent class, Ludwig Bull, a law student at the University of Cambridge, was inspired to build a chatbot that offers free legal advice to students. He enlisted the help of four coursemates, and LawBot was designed and built in just six weeks.
The program is still in beta, but Bull hopes it will help victims of crime, at Cambridge and beyond, to get justice.
“A victim can talk to our artificially intelligent chatbot, receive a preliminary assessment of their situation, and then decide which available actions to pursue.”
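LawBot’s code is not public; the pattern the quote describes (scripted questions leading to a preliminary assessment and a menu of next steps) can be sketched as a small decision tree. Everything below is a placeholder, not legal guidance.

```python
# Toy decision-tree chatbot in the spirit of LawBot's described flow.
# All questions, categories and options are invented placeholders.
TREE = {
    "start": {
        "question": "Were you physically harmed or threatened?",
        "yes": "violent",
        "no": "non_violent",
    },
    "violent": {
        "assessment": "This may constitute assault.",
        "options": ["Report to the police", "Seek medical attention",
                    "Contact victim support services"],
    },
    "non_violent": {
        "assessment": "This may be a property or harassment matter.",
        "options": ["Report online to the police", "Preserve evidence",
                    "Seek advice from a solicitor"],
    },
}

def assess(answers):
    """Walk the tree with a sequence of 'yes'/'no' answers."""
    node = TREE["start"]
    for answer in answers:
        node = TREE[node[answer]]
        if "assessment" in node:
            return node["assessment"], node["options"]
    return None  # ran out of answers before reaching an assessment

print(assess(["yes"]))
```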