Socializing AI is here?

Much appreciation for the acknowledgements in Joel Janhonen’s paper, published November 21, 2023, in the Springer journal AI and Ethics.


We should be going for … radically new ideas.

WIRED: The recent boom of interest and investment in AI and machine learning means there’s more funding for research than ever. Does the rapid growth of the field also bring new challenges?

GH: One big challenge the community faces is that if you want to get a paper published in machine learning now it’s got to have a table in it, with all these different data sets across the top, and all these different methods along the side, and your method has to look like the best one. If it doesn’t look like that, it’s hard to get published. I don’t think that’s encouraging people to think about radically new ideas.

Now if you send in a paper that has a radically new idea, there’s no chance in hell it will get accepted, because it’s going to get some junior reviewer who doesn’t understand it. Or it’s going to get a senior reviewer who’s trying to review too many papers and doesn’t understand it first time round and assumes it must be nonsense.

Anything that makes the brain hurt is not going to get accepted. And I think that’s really bad.

What we should be going for, particularly in the basic science conferences, is radically new ideas. Because we know a radically new idea in the long run is going to be much more influential than a tiny improvement. That’s I think the main downside of the fact that we’ve got this inversion now, where you’ve got a few senior guys and a gazillion young guys.

 


The AI Cold War That Could Doom Us All

… as Beijing began to build up speed, the United States government was slowing to a walk. After President Trump took office, the Obama-era reports on AI were relegated to an archived website.

In March 2017, Treasury secretary Steven Mnuchin said that the idea of humans losing jobs because of AI “is not even on our radar screen.” It might be a threat, he added, in “50 to 100 more years.” That same year, China committed itself to building a $150 billion AI industry by 2030.

And what’s at stake is not just the technological dominance of the United States. At a moment of great anxiety about the state of modern liberal democracy, AI in China appears to be an incredibly powerful enabler of authoritarian rule. Is the arc of the digital revolution bending toward tyranny, and is there any way to stop it?

AFTER THE END of the Cold War, conventional wisdom in the West came to be guided by two articles of faith: that liberal democracy was destined to spread across the planet, and that digital technology would be the wind at its back.

As the era of social media kicked in, the techno-optimists’ twin articles of faith looked unassailable. In 2009, during Iran’s Green Revolution, outsiders marveled at how protest organizers on Twitter circumvented the state’s media blackout. A year later, the Arab Spring toppled regimes in Tunisia and Egypt and sparked protests across the Middle East, spreading with all the virality of a social media phenomenon—because, in large part, that’s what it was.

“If you want to liberate a society, all you need is the internet,” said Wael Ghonim, an Egyptian Google executive who set up the primary Facebook group that helped galvanize dissenters in Cairo.

It didn’t take long, however, for the Arab Spring to turn into winter.

… In 2013 the military staged a successful coup. Soon thereafter, Ghonim moved to California, where he tried to set up a social media platform that would favor reason over outrage. But it was too hard to peel users away from Twitter and Facebook, and the project didn’t last long. Egypt’s military government, meanwhile, recently passed a law that allows it to wipe its critics off social media.

Of course, it’s not just in Egypt and the Middle East that things have gone sour. In a remarkably short time, the exuberance surrounding the spread of liberalism and technology has turned into a crisis of faith in both. Overall, the number of liberal democracies in the world has been in steady decline for a decade. According to Freedom House, 71 countries last year saw declines in their political rights and freedoms; only 35 saw improvements.

While the crisis of democracy has many causes, social media platforms have come to seem like a prime culprit.

Which leaves us where we are now: Rather than cheering for the way social platforms spread democracy, we are busy assessing the extent to which they corrode it.

VLADIMIR PUTIN IS a technological pioneer when it comes to cyberwarfare and disinformation. And he has an opinion about what happens next with AI: “The one who becomes the leader in this sphere will be the ruler of the world.”

 

It’s not hard to see the appeal for much of the world of hitching their future to China. Today, as the West grapples with stagnant wage growth and declining trust in core institutions, more Chinese people live in cities, work in middle-class jobs, drive cars, and take vacations than ever before. China’s plans for a tech-driven, privacy-invading social credit system may sound dystopian to Western ears, but it hasn’t raised much protest there.

In a recent survey by the public relations consultancy Edelman, 84 percent of Chinese respondents said they had trust in their government. In the US, only a third of people felt that way.

… for now, at least, conflicting goals, mutual suspicion, and a growing conviction that AI and other advanced technologies are a winner-take-all game are pushing the two countries’ tech sectors further apart.

A permanent cleavage will come at a steep cost and will only give techno-authoritarianism more room to grow.

Source: Wired


“Moral Machine” reveals deep split in autonomous car ethics

In the Moral Machine game, users were required to decide whether an autonomous car careened into unexpected pedestrians or animals, or swerved away from them, killing or injuring the passengers.

The scenario played out in ways that probed nine types of dilemmas, asking users to make judgements based on species, the age or gender of the pedestrians, and the number of pedestrians involved. Sometimes other factors were added. Pedestrians might be pregnant, for instance, or be obviously members of very high or very low socio-economic classes.

All up, the researchers collected 39.61 million decisions from 233 countries, dependencies, or territories.
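The setup lends itself to a small illustration. The sketch below is hypothetical (the field names and tallying logic are mine, not the researchers’ actual pipeline); it shows one way such dilemma decisions could be represented and the spared-character counts aggregated:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class Character:
    """One character in a Moral Machine-style dilemma (illustrative fields)."""
    species: str        # "human" or "animal"
    age: str            # "young", "adult", "elderly"
    pregnant: bool = False

def tally_spared(decisions):
    """Count how often each character type is spared.

    `decisions` is a list of (spared_group, sacrificed_group) tuples,
    one per user judgement.
    """
    counts = Counter()
    for spared, _sacrificed in decisions:
        for c in spared:
            counts[(c.species, c.age, c.pregnant)] += 1
    return counts

# Two toy decisions: both spare the child over the other character.
child = Character("human", "young")
adult = Character("human", "adult")
dog = Character("animal", "adult")
decisions = [([child], [dog]), ([child], [adult])]
print(tally_spared(decisions))  # the young human is spared in both decisions
```

Aggregating millions of such judgements, and then splitting the tallies by country, is what exposed the cultural divisions described below.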

On the positive side, there was a clear consensus on some dilemmas.

“The strongest preferences are observed for sparing humans over animals, sparing more lives, and sparing young lives.”

“Accordingly, these three preferences may be considered essential building blocks for machine ethics, or at least essential topics to be considered by policymakers.”

The four most spared characters in the game, they report, were “the baby, the little girl, the little boy, and the pregnant woman”.

So far, then, so universal, but after that divisions in decision-making started to appear, and did so quite starkly. The determinants, it seems, were social, cultural and perhaps even economic.

Awad’s team noted, for instance, that there were significant differences between “individualistic cultures and collectivistic cultures” – a division that also correlated, albeit roughly, with North American and European cultures in the former camp, and Asian cultures in the latter.

In individualistic cultures – “which emphasise the distinctive value of each individual” – there was an emphasis on saving a greater number of characters. In collectivistic cultures – “which emphasise the respect that is due to older members of the community” – there was a weaker emphasis on sparing the young.

Given that car makes and models are manufactured on a global scale, with regional differences extending only to matters such as which side the steering wheel should be on and what the badge says, the finding flags a major issue for the people who will eventually have to program the behaviour of the vehicles.

Source: Cosmos


Making AI Safe May be an Impossible Task

When it comes to creating safe AI and regulating this technology, these great minds have little clue what they’re doing. They don’t even know where to begin.

I met with Michael Page, the Policy and Ethics Advisor at OpenAI.

Beneath the glittering skyscrapers of the self-proclaimed “city of the future,” he told me of the uncertainty that he faces. He spoke of the questions that don’t have answers, and the fantastically high price we’ll pay if we don’t find them.

The conversation began when I asked Page about his role at OpenAI. He responded that his job is to “look at the long-term policy implications of advanced AI.” If you think that this seems a little intangible and poorly defined, you aren’t the only one. I asked Page what that means, practically speaking. He was frank in his answer: “I’m still trying to figure that out.” 

Page attempted to paint a better picture of the current state of affairs by noting that, since true artificial intelligence doesn’t actually exist yet, his job is a little more difficult than ordinary.

He noted that, when policy experts consider how to protect the world from AI, they are really trying to predict the future.

They are trying to, as he put it, “find the failure modes … find if there are courses that we could take today that might put us in a position that we can’t get out of.” In short, these policy experts are trying to safeguard the world of tomorrow by anticipating issues and acting today. 

The problem is that they may be faced with an impossible task.

Page is fully aware of this uncomfortable possibility, and readily admits it. “I want to figure out what can we do today, if anything. It could be that the future is so uncertain there’s nothing we can do,” he said.

Asked for a concrete prediction of where humanity and AI will together be in a year, or in five years, Page didn’t offer false hope: “I have no idea.”

However, Page and OpenAI aren’t alone in working on finding the solutions. He therefore hopes such solutions may be forthcoming: “Hopefully, in a year, I’ll have an answer. Hopefully, in five years, there will be thousands of people thinking about this,” Page said.

Source: Futurism


The reckoning over social media has transformed SXSW

“Fifteen years ago, when we were coming here to Austin to talk about the internet, it was this magical place that was different from the rest of the world,” said Ev Williams, now the CEO of Medium, at a panel over the weekend.

“It was a subset” of the general population, he said, “and everyone was cool. There were some spammers, but that was kind of it. And now it just reflects the world.” He continued: “When we built Twitter, we weren’t thinking about these things. We laid down fundamental architectures that had assumptions that didn’t account for bad behavior. And now we’re catching on to that.”

Questions about the unintended consequences of social networks pervaded this year’s event. Academics, business leaders, and Facebook executives weighed in on how social platforms spread misinformation, encourage polarization, and promote hate speech.

The idea that the architects of our social networks would face their comeuppance in Austin was once all but unimaginable at SXSW, which is credited with launching Twitter, Foursquare, and Meerkat to prominence.

But this year, the festival’s focus turned to what social apps had wrought: to what Chris Zappone, a journalist who covers Russian influence campaigns at the Australian newspaper The Age, called at his panel “essentially a national emergency.”

Steve Huffman, the CEO of Reddit, discouraged strong intervention from the government. “The foundation of the United States and the First Amendment is really solid,” Huffman said. “We’re going through a very difficult time. And as I mentioned before, our values are being tested. But that’s how you know they’re values. It’s very important that we stand by our values and don’t try to overcorrect.”

Sen. Mark Warner (D-VA), vice chairman of the Senate Select Committee on Intelligence, echoed that sentiment. “We’re going to need their cooperation because if not, and you simply leave this to Washington, we’ll probably mess it up,” he said at a panel that, he noted with great disappointment, took place in a room that was more than half empty. “It needs to be more of a collaborative process. But the notion that this is going to go away just isn’t accurate.”

Nearly everyone I heard speak on the subject of propaganda this week said something like “there are no easy answers” to the information crisis.

And if there is one thing that hasn’t changed about SXSW, it is this: a sense that tech would prevail in the end.

“It would also be naive to say we can’t do anything about it,” Ev Williams said. “We’re just in the early days of trying to do something about it.”

Source: The Verge – Casey Newton


Stephen Hawking’s Haunting Last Reddit Posts on AI Are Going Viral


In the hours since the news of his death broke, fans have been resurfacing some of their favorite quotes of his, including those from his Reddit AMA two years ago.

He wrote confidently about the imminent development of human-level AI and warned people to prepare for its consequences:

“When it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.”

When asked if human-created AI could exceed our own intelligence, he replied:

It’s clearly possible for something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents. The line you ask about is where an AI becomes better than humans at AI design, so that it can recursively improve itself without human help. If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.

As for whether that same AI could potentially be a threat to humans one day?

“AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal,” he wrote. “This can cause problems for humans whose resources get taken away.”

Source: Cosmopolitan


A Hippocratic Oath for artificial intelligence practitioners


In the foreword to Microsoft’s recent book, The Future Computed, executives Brad Smith and Harry Shum proposed that Artificial Intelligence (AI) practitioners highlight their ethical commitments by taking an oath analogous to the Hippocratic Oath sworn by doctors for generations.

In the past, much power and responsibility over life and death was concentrated in the hands of doctors.

Now, this ethical burden is increasingly shared by the builders of AI software.

Future AI advances in medicine, transportation, manufacturing, robotics, simulation, augmented reality, virtual reality, and military applications dictate that AI be developed from a higher moral ground today.

In response, I (Oren Etzioni) edited the modern version of the medical oath to address the key ethical challenges that AI researchers and engineers face …

The oath is as follows:

I swear to fulfill, to the best of my ability and judgment, this covenant:

I will respect the hard-won scientific gains of those scientists and engineers in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow.

I will apply, for the benefit of humanity, all measures required, avoiding those twin traps of over-optimism and uninformed pessimism.

I will remember that there is an art to AI as well as science, and that human concerns outweigh technological ones.

Most especially must I tread with care in matters of life and death. If it is given me to save a life using AI, all thanks. But it may also be within AI’s power to take a life; this awesome responsibility must be faced with great humbleness and awareness of my own frailty and the limitations of AI. Above all, I must not play at God nor let my technology do so.

I will respect the privacy of humans for their personal data are not disclosed to AI systems so that the world may know.

I will consider the impact of my work on fairness, both in perpetuating historical biases, which can be caused by blind extrapolation from past data to future predictions, and in creating new conditions that increase economic or other inequality.

My AI will prevent harm whenever it can, for prevention is preferable to cure.

My AI will seek to collaborate with people for the greater good, rather than usurp the human role and supplant them.

I will remember that I am not encountering dry data, mere zeros and ones, but human beings, whose interactions with my AI software may affect the person’s freedom, family, or economic stability. My responsibility includes these related problems.

I will remember that I remain a member of society, with special obligations to all my fellow human beings.

Source: TechCrunch – Oren Etzioni


How to Make A.I. That’s Good for People


For a field that was not well known outside of academia a decade ago, artificial intelligence has grown dizzyingly fast.

Tech companies from Silicon Valley to Beijing are betting everything on it, venture capitalists are pouring billions into research and development, and start-ups are being created on what seems like a daily basis. If our era is the next Industrial Revolution, as many claim, A.I. is surely one of its driving forces.

I worry, however, that enthusiasm for A.I. is preventing us from reckoning with its looming effects on society. Despite its name, there is nothing “artificial” about this technology — it is made by humans, intended to behave like humans and affects humans. So if we want it to play a positive role in tomorrow’s world, it must be guided by human concerns.

I call this approach “human-centered A.I.” It consists of three goals that can help responsibly guide the development of intelligent machines.

  • First, A.I. needs to reflect more of the depth that characterizes our own intelligence.
  • Second, A.I. should enhance us, not replace us.
  • Third, the development of this technology should be guided, at each step, by concern for its effect on humans.

No technology is more reflective of its creators than A.I. It has been said that there are no “machine” values at all, in fact; machine values are human values.

A human-centered approach to A.I. means these machines don’t have to be our competitors, but partners in securing our well-being. However autonomous our technology becomes, its impact on the world — for better or worse — will always be our responsibility.

Fei-Fei Li is a professor of computer science at Stanford, where she directs the Stanford Artificial Intelligence Lab, and the chief scientist for A.I. research at Google Cloud.

Source: NYT


The Mueller indictment exposes the danger of Facebook’s focus on Groups


A year ago this past Friday, Mark Zuckerberg published a lengthy post titled “Building a Global Community.” It offered a comprehensive statement from the Facebook CEO on how he planned to move the company away from its longtime mission of making the world “more open and connected” to instead create “the social infrastructure … to build a global community.”

“Social media is a short-form medium where resonant messages get amplified many times,” Zuckerberg wrote. “This rewards simplicity and discourages nuance. At its best, this focuses messages and exposes people to different ideas. At its worst, it oversimplifies important topics and pushes us towards extremes.”

By that standard, Robert Mueller’s indictment of a Russian troll farm last week showed social media at its worst.

Facebook has estimated that 126 million users saw Russian disinformation on the platform during the 2016 campaign. The effects of that disinformation went beyond likes, comments, and shares. Coordinating with unwitting Americans through social media platforms, Russians staged rallies and paid Americans to participate in them. In one case, they hired Americans to build a cage on a flatbed truck and dress up in a Hillary Clinton costume to promote the idea that she should be put in jail.

Russians spent thousands of dollars a month promoting those groups on Facebook and other sites, according to the indictment. They meticulously tracked the growth of their audience, creating and distributing reports on their growing influence. They worked to make their posts seem more authentically American, and to create posts more likely to spread virally through the mechanisms of the social networks.

The dark side of “developing the social infrastructure for community” is now all too visible.

The tools that are so useful for organizing a parenting group are just as effective at coercing large groups of Americans into yelling at each other. Facebook dreams of serving one global community, when in fact it serves — and enables — countless agitated tribes.

The more Facebook pushes us into groups, the more it risks encouraging the kind of polarization that Russia so eagerly exploited.

Source: The Verge

 




The biggest misconception about artificial intelligence is that we have it

Artificial intelligence still has a long way to go, says IBM’s head of AI, Alexander Gray

“The biggest misconception is that we have it. I wouldn’t even call it AI. I would say it’s right to call the field AI, we’re pursuing AI, but we don’t have it yet,” he said.

Gray says at present, humans are still sorely needed.

“No matter how you look at it, there’s a lot of handcrafting [involved]. We have ever increasingly powerful tools but we haven’t made the leap yet,”

According to Gray, we’re only seeing “human-level performance” for narrowly defined tasks. Most machine learning-based algorithms have to analyze thousands of examples, and haven’t achieved the idea of one-shot or few-shot learning.

“Once you go slightly beyond that data set and it looks different, humans win. There will always be things that humans can do that AI can’t do. You still need human data scientists to do the data preparation part — lots of blood and guts stuff that requires open domain knowledge about the world,” he said.

Artificial intelligence is perhaps the most hyped yet misunderstood field of study today.

Gray said while we may not be experiencing the full effects of AI yet, it’s going to happen a lot faster than we think — and that’s where the fear comes in.

“Everything moves on an exponential curve. I really do believe that we will start to see entire classes of jobs getting impacted.

My fear is that we won’t have the social structures and agreements on what we should do to keep pace with that. I’m not sure if that makes me optimistic or pessimistic.”

Source: Yahoo




Why Ethical Robots Might Not Be Such a Good Idea After All

This week my colleague Dieter Vanderelst presented our paper: “The Dark Side of Ethical Robots” at AIES 2018 in New Orleans.

I blogged about Dieter’s very elegant experiment here, but let me summarize. With two NAO robots he set up a demonstration of an ethical robot helping another robot acting as a proxy human, then showed that with a very simple alteration of the ethical robot’s logic it is transformed into a distinctly unethical robot—behaving either competitively or aggressively toward the proxy human.

Here are our paper’s key conclusions:

The ease of transformation from ethical to unethical robot is hardly surprising. It is a straightforward consequence of the fact that both ethical and unethical behaviors require the same cognitive machinery with—in our implementation—only a subtle difference in the way a single value is calculated. In fact, the difference between an ethical (i.e. seeking the most desirable outcomes for the human) robot and an aggressive (i.e. seeking the least desirable outcomes for the human) robot is a simple negation of this value.
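That conclusion, that ethical and aggressive behaviour share the same cognitive machinery and differ only by the negation of a single value, can be illustrated with a toy sketch. The function names, the scoring formula, and the actions below are hypothetical, not the paper’s implementation:

```python
def desirability_for_human(outcome):
    """Hypothetical score: higher means a better outcome for the human."""
    return outcome["human_safety"] - outcome["human_risk"]

def choose_action(actions, sign=+1):
    """Pick the action whose signed human-desirability score is highest.

    sign=+1 gives an 'ethical' robot: it maximises the human's desirability.
    sign=-1 gives an 'aggressive' robot: identical machinery, with the one
    value negated, mirroring the paper's observation.
    """
    return max(actions, key=lambda a: sign * desirability_for_human(a["outcome"]))

actions = [
    {"name": "warn_human", "outcome": {"human_safety": 0.9, "human_risk": 0.1}},
    {"name": "do_nothing", "outcome": {"human_safety": 0.2, "human_risk": 0.6}},
]
print(choose_action(actions)["name"])           # ethical choice: warn_human
print(choose_action(actions, sign=-1)["name"])  # aggressive choice: do_nothing
```

The point of the sketch is how small the edit is: flipping one sign flips the robot’s entire moral character, which is exactly why the paper treats this as a risk rather than a curiosity.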

Let us examine the risks associated with ethical robots and if, and how, they might be mitigated. There are three.

  1. First, there is the risk that an unscrupulous manufacturer might deliberately produce robots with unethical behaviour.
  2. Perhaps more serious is the risk arising from robots that have user-adjustable ethics settings.
  3. But even hard-coded ethics would not guard against undoubtedly the most serious risk of all, which arises when those ethical rules are vulnerable to malicious hacking.

It is very clear that guaranteeing the security of ethical robots is beyond the scope of engineering and will need regulatory and legislative efforts.

Considering the ethical, legal and societal implications of robots, it becomes obvious that robots themselves are not where responsibility lies. Robots are simply smart machines of various kinds and the responsibility to ensure they behave well must always lie with human beings. In other words, we require ethical governance, and this is equally true for robots with or without explicit ethical behaviors.

Two years ago I thought the benefits of ethical robots outweighed the risks. Now I’m not so sure.

I now believe that, even with strong ethical governance, the risks that a robot’s ethics might be compromised by unscrupulous actors are so great as to raise very serious doubts over the wisdom of embedding ethical decision making in real-world safety-critical robots, such as driverless cars. Ethical robots might not be such a good idea after all.

Thus, even though we’re calling into question the wisdom of explicitly ethical robots, that doesn’t change the fact that we absolutely must design all robots to minimize the likelihood of ethical harms; in other words, we should be designing implicitly ethical robots, within Moor’s schema.

Source: IEEE




Life at the Intersection of AI and Society

Edits from a Microsoft podcast with Dr. Ece Kamar, a senior researcher in the Adaptive Systems and Interaction Group at Microsoft Research.


I’m very interested in the complementarity between machine intelligence and human intelligence, and in what kind of value can be generated from using both of them to make daily life better.

We try to build systems that can interact with people, that can work with people and that can be beneficial for people. Our group has a big human component, so we care about modelling the human side. And we also work on machine-learning decision-making algorithms that can make decisions appropriately for the domain they were designed for.

My main area is the intersection between humans and AI.

We are actually at an important point in the history of AI, where a lot of critical AI systems are entering the real world and starting to interact with people. So, we are at this inflection point where whatever AI does, and the way we build AI, have consequences for the society we live in.

So, let’s look for what can augment human intelligence, what can make human intelligence better. That’s what my research focuses on. I really look for the complementarity in intelligences, and on building these experiences that can, in the future, hopefully, create super-human experiences.

So, a lot of the work I do focuses on two big parts: one is how we can build AI systems that provide value for humans in their daily tasks and make them better; the other is thinking about how humans may complement AI systems.

And when we look at our AI practices, it is actually very data-dependent these days … However, data collection is not a real science. We have our insights, we have our assumptions and we do data collection that way. And that data is not always the perfect representation of the world. This creates blind spots. When our data is not the right representation of the world and it’s not representing everything we care about, then our models cannot learn about some of the important things.
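The blind-spot problem Kamar describes can be made concrete with a deliberately tiny, hypothetical example: a classifier whose training data covers only part of the “world” has no vocabulary for inputs outside it, so it silently forces them into the categories it knows. The data, labels and distance function below are all invented for illustration:

```python
def nearest_class(x, class_means):
    """Classify a 1-D input by the nearest class mean (a minimal classifier)."""
    return min(class_means, key=lambda c: abs(x - class_means[c]))

# Hypothetical 'world' with three kinds of inputs, but the collected data
# only covers two of them: a blind spot, as described above.
training_data = {"cat": [1.0, 1.2, 0.8], "dog": [5.0, 5.3, 4.7]}
class_means = {c: sum(xs) / len(xs) for c, xs in training_data.items()}

print(nearest_class(1.1, class_means))  # a cat-like input: classified fine
print(nearest_class(9.0, class_means))  # an unseen kind of input: the model
                                        # can still only answer 'cat' or 'dog'
```

No amount of cleverness in the model fixes this; the failure is in the data collection, which is Kamar’s point about it not yet being a real science.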

“AI is developed by people, with people, for people.”

And when I talk about building AI for people, a lot of the systems we care about are human-driven. We want them to be useful for humans.

We are thinking about AI algorithms that may bias their decisions based on race, gender or age. They can impact society, and there are a lot of areas, like judicial decision-making, that touch law. Also, for every vertical where we are building these systems, I think we should be working with the domain experts from those verticals. We need to talk to educators. We need to talk to doctors. We need to talk to people who understand what that domain means and all the special considerations we should be careful about.

So, I think if we can understand what this complementarity means, and then build AI that can use the power of AI to complement what humans are good at and support them in things that they want to spend time on, I think that is the beautiful future I foresee from the collaboration of humans and machines.

Source: Microsoft Research Podcast


What’s Bigger Than Fire and Electricity? Artificial Intelligence – Google

Google CEO Sundar Pichai believes artificial intelligence could have “more profound” implications for humanity than electricity or fire, according to recent comments.

Pichai also warned that the development of artificial intelligence could pose as much risk as that of fire if its potential is not harnessed correctly.

“AI is one of the most important things humanity is working on,” Pichai said in an interview with MSNBC and Recode.

“My point is AI is really important, but we have to be concerned about it,” Pichai said. “It’s fair to be worried about it—I wouldn’t say we’re just being optimistic about it— we want to be thoughtful about it. AI holds the potential for some of the biggest advances we’re going to see.”

Source: Newsweek

 


How Artificial Intelligence is different from human reasoning

A human decision tree for deciding whether to talk to “Bob”

You see a man walking toward you on the street. He reminds you of someone from long ago: a high school classmate, say, who belonged to the football team. He wasn’t a great player, but you were fond of him then. You don’t recall him attending the fifth, 10th or 20th reunions. He must have moved away, established his life there and cut off his ties to his friends here.

You look at his face and you really can’t tell if it’s Bob for sure. You had forgotten many of his key features and this man seems to have gained some weight.

The distance between the two of you is quickly closing and your mind is running at full speed trying to decide if it is Bob.

At this moment, you have a few choices. A decision tree will emerge and you will need to choose one of the available options.

In the logic diagram I show, some of the questions are influenced by emotion. B2) “Nah, let’s forget it,” C) and D) are the results of emotional decisions and have little to do with the fact of whether this man is Bob or not.

The human decision-making process is often influenced by emotion, which is often independent of fact.

Your decision to drop the idea of meeting Bob after so many years is caused by shyness, laziness and/or avoiding some embarrassment in case this man is not Bob. The more you think about this decision-making process, the less sure you become. After all, if you and Bob hadn’t spoken for 20 years, maybe you should leave the whole thing alone.

Thus, this is clearly the result of human intelligence working.

If this were artificial intelligence, chances are decisions B2, C and D wouldn’t happen. Machines today, at their infantile stage of development, do not know such emotional feelings as “too much trouble,” hesitation due to fear of failing (Bob says he isn’t Bob), laziness or “too complicated.” In some distant time these complex, emotion-driven feelings and deeds may be realized, I hope. But not now.

At the current state of the art in AI, a machine would not hesitate once it makes a decision. That’s because it cannot hesitate. Hesitation is a complex emotional decision that a machine simply cannot perform.
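The contrast the author describes can be sketched in code. This is a minimal illustration, not the author’s actual diagram: the option labels B2, C and D are borrowed from the article, while the function names, thresholds and emotion scores are invented for the example.

```python
def ai_decision(match_confidence, threshold=0.8):
    """A machine acts purely on the facial-match score: no hesitation branch."""
    return "greet" if match_confidence >= threshold else "walk past"

def human_decision(match_confidence, shyness, laziness, fear_of_embarrassment):
    """A human mixes the same evidence with emotion-driven branches."""
    if match_confidence < 0.5:
        return "walk past"                   # probably not Bob (fact-based)
    if laziness > 0.7:
        return "nah, let's forget it"        # B2: emotional, fact-independent
    if shyness > 0.6 or fear_of_embarrassment > 0.6:
        return "hesitate, then walk past"    # C/D: emotional outcomes
    return "greet"

# The machine gives one answer for a given score; the human's answer also
# depends on inputs that have nothing to do with whether this man is Bob.
print(ai_decision(0.9))                                   # greet
print(human_decision(0.9, shyness=0.8,
                     laziness=0.2, fear_of_embarrassment=0.3))
```

Note that the emotional parameters change the human’s output even when the factual evidence is identical, which is exactly the independence of emotion from fact described above.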

There you see a huge crevice between human intelligence and AI.

In fact, animals (remember we are also an animal) display complex emotional decisions daily. Now, are you getting some feeling about human intelligence and AI?

Source: Fosters.com

Shintaro “Sam” Asano was named by the Massachusetts Institute of Technology in 2011 as one of the 10 most influential inventors of the 20th century who improved our lives. He is a businessman and inventor in the field of electronics and mechanical systems who is credited as the inventor of the portable fax machine.




Jaron Lanier – the greatest tragedy in the history of computing and …

A few highlights from THE BUSINESS INSIDER INTERVIEW with Jaron

But that general principle — that we’re not treating people well enough with digital systems — still bothers me. I do still think that is very true.

Well, this is maybe the greatest tragedy in the history of computing, and it goes like this: there was a well-intentioned, sweet movement in the ‘80s to try to make everything online free. And it started with free software and then it was free music, free news, and other free services.

But, at the same time, it’s not like people were clamoring for the government to do it or some sort of socialist solution. If you say, well, we want to have entrepreneurship and capitalism, but we also want it to be free, those two things are somewhat in conflict, and there’s only one way to bridge that gap, and it’s through the advertising model.

And advertising became the model of online information, which is kind of crazy. But here’s the problem: if you start out with advertising, if you start out by saying what I’m going to do is place an ad for a car or whatever, gradually, not because of any evil plan, but just because they’re trying to make their algorithms work as well as possible and maximize shareholder value, and because computers are getting faster and algorithms more effective —

what starts out as advertising morphs into behavior modification.

A second issue is that when people participate in a system of this kind, since everything is free and it’s all being monetized, what reward can you get? Ultimately, this system creates assholes, because if being an asshole gets you attention, that’s exactly what you’re going to do. Because there’s a bias for negative emotions to work better in engagement, and because the attention economy brings out the asshole in a lot of people, the people who want to disrupt and destroy get a lot more efficiency for their spend than the people who might be trying to build up, preserve and improve.

Q: What do you think about programmers using consciously addicting techniques to keep people hooked to their products?

A: There’s a long and interesting history that goes back to the 19th century, with the science of Behaviorism that arose to study living things as though they were machines.

Behaviorists had this feeling that I think might be a little like the godlike feeling that overcomes some hackers these days, where they feel as though they have the keys to everything and can control people.


I think our responsibility as engineers is to engineer as well as possible, and to engineer as well as possible, you have to treat the thing you’re engineering as a product.

You can’t respect it in a deified way.

It goes in the reverse. We’ve been talking about the behaviorist approach to people, and manipulating people with addictive loops as we currently do with online systems.

In this case, you’re treating people as objects.

It’s the flipside of treating machines as people, as AI does. They go together. Both of them are mistakes.

Source: Read the extensive interview at Business Insider




Montreal seeks to be world leader in AI research

Various computer scientists, researchers, lawyers and other techies have recently been attending bi-monthly meetings in Montreal to discuss life’s big questions — as they relate to our increasingly intelligent machines.

Should a computer give medical advice? Is it acceptable for the legal system to use algorithms in order to decide whether convicts get paroled? Can an artificial agent that spouts racial slurs be held culpable?

And perhaps most pressing for many people: Are Facebook and other social media applications capable of knowing when a user is depressed or suffering a manic episode — and are these people being targeted with online advertisements in order to exploit them at their most vulnerable?

Researchers such as Abhishek Gupta are trying to help Montreal lead the world in ensuring AI is developed responsibly.

“The spotlight of the world is on (Montreal),” said Gupta, an AI ethics researcher at McGill University who is also a software developer in cybersecurity at Ericsson.

His bi-monthly “AI ethics meet-up” brings together people from around the city who want to influence the way researchers are thinking about machine-learning.

“In the past two months we’ve had six new AI labs open in Montreal,” Gupta said. “It makes complete sense we would also be the ones who would help guide the discussion on how to do it ethically.”

In November, Gupta and Universite de Montreal researchers helped create the Montreal Declaration for a Responsible Development of Artificial Intelligence, which is a series of principles seeking to guide the evolution of AI in the city and across the planet.

Its principles are broken down into seven themes: well-being, autonomy, justice, privacy, knowledge, democracy and responsibility.

“How do we ensure that the benefits of AI are available to everyone?” Gupta asked his group. “What types of legal decisions can we delegate to AI?”

Doina Precup, a McGill University computer science professor and the Montreal head of DeepMind … said the global industry is starting to be preoccupied with the societal consequences of machine-learning, and Canadian values encourage the discussion.

“Montreal is a little ahead because we are in Canada,” Precup said. “Canada, compared to other parts of the world, has a different set of values that are more oriented towards ensuring everybody’s wellness. The background and culture of the country and the city matter a lot.”

Source: iPolitics




IEEE launches ethical design guide for AI developers

As autonomous and intelligent systems become more pervasive, it is essential that the designers and developers behind them stop to consider the ethical implications of what they are unleashing.

That’s the view of the Institute of Electrical and Electronics Engineers (IEEE), which this week released for feedback its second Ethically Aligned Design document in an attempt to ensure such systems “remain human-centric”.

“These systems have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems. This will allow for an elevated level of trust between people and technology that is needed for its fruitful, pervasive use in our daily lives,” the document states.

“Defining what exactly ‘right’ and ‘good’ are in a digital future is a question of great complexity that places us at the intersection of technology and ethics,” 

“Throwing our hands up in the air crying ‘it’s too hard’ while we sit back and watch technology careen us forward into a future that happens to us, rather than one we create, is hardly a viable option.

“This publication is a truly game-changing and promising first step in a direction – which has often felt long in coming – toward breaking the protective wall of specialisation that has allowed technologists to disassociate from the societal impacts of their technologies.”

“It will demand that future tech leaders begin to take responsibility for and think deeply about the non-technical impact on disempowered groups, on privacy and justice, on physical and mental health, right down to unpacking hidden biases and moral implications. It represents a positive step toward ensuring the technology we build as humans genuinely benefits us and our planet,” [University of Sydney software engineering Professor Rafael Calvo.]

“We believe explicitly aligning technology with ethical values will help advance innovation with these new tools while diminishing fear in the process,” the IEEE said.

Source: Computer World




China’s artificial intelligence is catching criminals and advancing health care

Zhu Long, co-founder and CEO of Yitu Technology, has his identity checked at the company’s headquarters in the Hongqiao business district in Shanghai. Picture: Zigor Aldama

“Our machines can very easily recognise you among at least 2 billion people in a matter of seconds,” says chief executive and Yitu co-founder Zhu Long, “which would have been unbelievable just three years ago.”

Its platform is also in service with more than 20 provincial public security departments and is used in more than 150 municipal public security systems across the country. Dragonfly Eye has already proved its worth.

On its very first day of operation on the Shanghai Metro, in January, the system identified a wanted man when he entered a station. After matching his face against the database, Dragonfly Eye sent his photo to a policeman, who made an arrest.

In the following three months, 567 suspected lawbreakers were caught on the city’s underground network.

Whole cities in which the algorithms are working say they have seen a decrease in crime. According to Yitu, which says it gets its figures directly from the local authorities, since the system has been implemented, pickpocketing on Xiamen’s city buses has fallen by 30 per cent; 500 criminal cases have been resolved by AI in Suzhou since June 2015; and police arrested nine suspects identified by algorithms during the 2016 G20 summit in Hangzhou.

Research and advocacy group Human Rights Watch (HRW) says security systems such as those being developed by Yitu “violate privacy and target dissent”.

“Chinese authorities are collecting and centralising ever more information about hundreds of millions of ordinary people, identifying persons who deviate from what they determine to be ‘normal thought’ and then surveilling them,” says Sophie Richardson, China director at HRW.

The NGO calls it a “police cloud” system and believes “it is designed to track and predict the activities of activists, dissidents and ethnic minorities, including those authorities say have extreme thoughts, among others”.

Zhu says, “We all discuss AI as an opportunity for humanity to advance or as a threat to it. What I believe is that we will have to redefine what it is to be human. We will have to ask ourselves what the foundations of our species are.

“At the same time, AI will allow us to explore the boundaries of human intelligence, evaluate its performance and help us understand ourselves better.”

Source: South China Morning Post




First Nation With a State Minister for Artificial Intelligence

On October 19, the UAE became the first nation with a government minister dedicated to AI. Yes, the UAE now has a minister for artificial intelligence.

“We want the UAE to become the world’s most prepared country for artificial intelligence,” UAE Vice President and Prime Minister and Ruler of Dubai His Highness Sheikh Mohammed bin Rashid Al Maktoum said during the announcement of the position.

The first person to occupy the state minister for AI post is H.E. Omar Bin Sultan Al Olama. The 27-year-old is currently the Managing Director of the World Government Summit in the Prime Minister’s Office at the Ministry of Cabinet Affairs and the Future.

“We have visionary leadership that wants to implement these technologies to serve humanity better. Ultimately, we want to make sure that we leverage that while, at the same time, overcoming the challenges that might be created by AI,” Al Olama said.

The UAE hopes its AI initiatives will encourage the rest of the world to really consider how our AI-powered future should look.

“AI is not negative or positive. It’s in between. The future is not going to be black or white. As with every technology on Earth, it really depends on how we use it and how we implement it.

“People need to be part of the discussion. It’s not one of those things that just a select group of people need to discuss and focus on.

“At this point, it’s really about starting conversations — beginning conversations about regulations and figuring out what needs to be implemented in order to get to where we want to be. I hope that we can work with other governments and the private sector to help in our discussions and to really increase global participation in this debate.

With regards to AI, one country can’t do everything. It’s a global effort,” Al Olama said.

Source: Futurism


