The Foundation we’ve built for AI-Human Collaboration.


Context Couple Teaser 2024

Contextual Reasoning as an essential human life-skill in our increasingly complex world!

And this is a skill that AI CANNOT PERFORM
but DESPERATELY NEEDS!

Phil and Pam, as the Context Couple, will start a series of short vlogs where they will explain and show what contextual reasoning is and, more importantly, how to develop and practice this essential life skill.

And they will explain and show how, by using contextual reasoning as a base, humans and AI can collaborate. 

Please follow us on this exciting journey.

 

 


Replacing THINKING with PROMPTING?

Navigating the Cognitive Age—a remarkable leap in human capability and understanding enabled by artificial intelligence.

🤖 “I’m concerned that people are completely replacing THINKING with PROMPTING. What’s left is an empty-headed techno zombie commanded by its AI overlord.

The magic of the prompt is the dynamic interplay between humans and tech—in the context of a Socratic dialogue.”

PPL –

Human-AI collaboration will require much more than prompts to the AI.

As we wrote the day we started this blog in May of 2014, “This will require imprinting AI’s genome with social intelligence for human interaction. It will require wisdom-powered coding. It must begin right now.” https://www.socializingai.com/point/

We must move far beyond current AI/LLM prompts, to hyper-personalized, situation-specific individual prompts.
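
To make that concrete, here is a minimal sketch of what a situation-specific prompt wrapper might look like. The profile fields, wording, and function name are our own illustrative assumptions, not a finished method:

```python
# Minimal sketch: assembling a hyper-personalized, situation-specific prompt.
# All field names and wording below are illustrative assumptions.

def build_prompt(question: str, profile: dict, situation: str) -> str:
    return (
        f"You are assisting {profile['name']}, whose current goal is "
        f"{profile['goal']} and whose expertise level is {profile['expertise']}.\n"
        f"Current situation: {situation}\n"
        "Respond as a Socratic partner: ask one clarifying question before "
        "offering advice, and tailor everything to the situation above.\n\n"
        f"Question: {question}"
    )

profile = {"name": "Pam", "goal": "choosing elder-care options", "expertise": "novice"}
print(build_prompt(
    "What should I ask the doctor tomorrow?",
    profile,
    "her father was discharged from the hospital this morning",
))
```

The point is not the code but the shape: the prompt carries the person and the situation, not just the question.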

 

 


Socializing AI is here?

Much appreciation to Joel Janhonen for the acknowledgements in his paper, published November 21, 2023, in the Springer journal AI and Ethics.


AI poses one of the “greatest tests of leadership for our time”

Getty Images

“But it is a test that I am confident we can meet.”
Theresa May, UK Prime Minister

The prime minister is to say she wants the UK to lead the world in deciding how artificial intelligence can be deployed in a safe and ethical manner.

Theresa May will say at the World Economic Forum in Davos that a new advisory body, previously announced in the Autumn Budget, will co-ordinate efforts with other countries.

In addition, she will confirm that the UK will join the Davos forum’s own council on artificial intelligence.

But others may have stronger claims.

Earlier this week, Google picked France as the base for a new research centre dedicated to exploring how AI can be applied to health and the environment.

Facebook also announced it was doubling the size of its existing AI lab in Paris, while software firm SAP committed itself to a 2bn euro ($2.5bn; £1.7bn) investment into the country that will include work on machine learning.

Meanwhile, a report released last month by the Eurasia Group consultancy suggested that the US and China are engaged in a “two-way race for AI dominance”.

It predicted Beijing would take the lead thanks to the “insurmountable” advantage of offering its companies more flexibility in how they use data about its citizens.

She is expected to say that the UK is recognised as first in the world for its preparedness to “bring artificial intelligence into government”.

Source: BBC

 


DeepMind’s new AI ethics unit

DeepMind made this announcement in October 2017.

Google-owned DeepMind has announced the formation of a major new AI research unit comprising full-time staff and external advisors.

DrAfter123/iStock

As we hand over more of our lives to artificial intelligence systems, keeping a firm grip on their ethical and societal impact is crucial.

DeepMind Ethics & Society (DMES), a unit comprised of both full-time DeepMind employees and external fellows, is the company’s latest attempt to scrutinise the societal impacts of the technologies it creates.

DMES will work alongside technologists within DeepMind and fund external research based on six areas: privacy, transparency and fairness; economic impacts; governance and accountability; managing AI risk; AI morality and values; and how AI can address the world’s challenges.

Its aim, according to DeepMind, is twofold: to help technologists understand the ethical implications of their work and help society decide how AI can be beneficial.

“We want these systems in production to be our highest collective selves. We want them to be most respectful of human rights, we want them to be most respectful of all the equality and civil rights laws that have been so valiantly fought for over the last sixty years.” [Mustafa Suleyman]

Source: Wired


wait … am I being manipulated on this topic by an Amazon-owned AI engine?

Image Credit: chombosan/Shutterstock

The other night, my nine-year-old daughter (who is, of course, the most tech-savvy person in the house), introduced me to a new Amazon Alexa skill.

“Alexa, start a conversation,” she said.

We were immediately drawn into an experience with a new bot or, as the technologists would say, “conversational user interface” (CUI). It was, we were told, the recent winner of an Amazon AI competition from the University of Washington.

At first, the experience was fun, but when we chose to explore a technology topic, the bot responded, “Have you heard of Net Neutrality?” What we experienced thereafter was slightly discomforting.

The bot seemingly innocuously cited a number of articles that she “had read on the web” about the FCC, Ajit Pai, and the issue of net neutrality. But here’s the thing: All four articles she recommended had a distinct and clear anti-Ajit Pai bias.

Now, the topic of Net Neutrality is a heated one and many smart people make valid points on both sides, including Fred Wilson and Ben Thompson. That is how it should be.

But the experience of the Alexa CUI should give you pause, as it did me.

To someone with limited familiarity with the topic of net neutrality, the voice seemed soothing and the information unbiased. But if you have a familiarity with the topic, you might start to wonder, “wait … am I being manipulated on this topic by an Amazon-owned AI engine to help the company achieve its own policy objectives?”

The experience highlights some of the risks of the AI-powered future into which we are hurtling at warp speed.

If you are going to trust your decision-making to a centralized AI source, you need to have 100 percent confidence in:

  • The integrity and security of the data (are the inputs accurate and reliable, and can they be manipulated or stolen?)
  • The machine learning algorithms that inform the AI (are they prone to excessive error or bias, and can they be inspected?)
  • The AI’s interface (does it reliably represent the output of the AI and effectively capture new data?)

In a centralized, closed model of AI, you are asked to implicitly trust in each layer without knowing what is going on behind the curtains.

Welcome to the world of Blockchain+AI.
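
As a rough illustration of why blockchain is pitched as an answer to the first trust layer above, here is a minimal sketch of hash-chained records, under our own simplifying assumptions (field names are invented; a real system would also distribute the ledger):

```python
import hashlib
import json
import time

# Minimal sketch of the idea behind "Blockchain+AI": hash-chaining the
# records an AI relies on so that silent tampering becomes detectable.

def make_block(record, prev_hash):
    block = {"ts": time.time(), "record": record, "prev": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify(chain):
    for i, block in enumerate(chain):
        payload = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"]:
            return False  # record was altered after the fact
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False  # chain linkage broken
    return True

chain = [make_block({"article": "net-neutrality-1", "stance": "anti"}, prev_hash="0")]
chain.append(make_block({"article": "net-neutrality-2", "stance": "pro"}, chain[-1]["hash"]))
print(verify(chain))  # True until someone edits a record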

3 blockchain projects tackling decentralized data and AI

Source: Venture Beat




First Nation With a State Minister for Artificial Intelligence

On October 19, the UAE became the first nation with a government minister dedicated to AI. Yes, the UAE now has a minister for artificial intelligence.

“We want the UAE to become the world’s most prepared country for artificial intelligence,” UAE Vice President and Prime Minister and Ruler of Dubai His Highness Sheikh Mohammed bin Rashid Al Maktoum said during the announcement of the position.

The first person to occupy the state minister for AI post is H.E. Omar Bin Sultan Al Olama. The 27-year-old is currently the Managing Director of the World Government Summit in the Prime Minister’s Office at the Ministry of Cabinet Affairs and the Future.

“We have visionary leadership that wants to implement these technologies to serve humanity better. Ultimately, we want to make sure that we leverage that while, at the same time, overcoming the challenges that might be created by AI,” Al Olama said.

The UAE hopes its AI initiatives will encourage the rest of the world to really consider how our AI-powered future should look.

“AI is not negative or positive. It’s in between. The future is not going to be black or white. As with every technology on Earth, it really depends on how we use it and how we implement it,”

“People need to be part of the discussion. It’s not one of those things that just a select group of people need to discuss and focus on.

“At this point, it’s really about starting conversations — beginning conversations about regulations and figuring out what needs to be implemented in order to get to where we want to be. I hope that we can work with other governments and the private sector to help in our discussions and to really increase global participation in this debate.

“With regards to AI, one country can’t do everything. It’s a global effort,” Al Olama said.

Source: Futurism




Forget Killer Robots—Bias Is the Real AI Danger

John Giannandrea – GETTY

John Giannandrea, who leads AI at Google, is worried about intelligent systems learning human prejudices.

… concerned about the danger that may be lurking inside the machine-learning algorithms used to make millions of decisions every minute.

“The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased.”

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it.
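
To make the “identify it” part concrete, here is a minimal sketch of one common check, comparing selection rates across groups. The data and the four-fifths threshold are illustrative assumptions, not from the article:

```python
# Minimal sketch of a disparate-impact check on a model's decisions:
# compare selection rates across groups (the common "four-fifths rule").
# The records below are invented for illustration.

decisions = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def selection_rate(records, group):
    subset = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

rate_a = selection_rate(decisions, "A")   # 2/3
rate_b = selection_rate(decisions, "B")   # 1/3
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

# Under the four-fifths guideline, a ratio below 0.8 flags
# potential disparate impact worth investigating.
print(f"A: {rate_a:.2f}, B: {rate_b:.2f}, impact ratio: {ratio:.2f}")
```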

Karrie Karahalios, a professor of computer science at the University of Illinois, presented research highlighting how tricky it can be to spot bias in even the most commonplace algorithms. Karahalios showed that users don’t generally understand how Facebook filters the posts shown in their news feed. While this might seem innocuous, it is a neat illustration of how difficult it is to interrogate an algorithm.

Facebook’s news feed algorithm can certainly shape the public perception of social interactions and even major news events. Other algorithms may already be subtly distorting the kinds of medical care a person receives, or how they get treated in the criminal justice system.

This is surely a lot more important than killer robots, at least for now.

Source: MIT Technology Review




Artificial intelligence pioneer says throw it all away and start again

Geoffrey Hinton harbors doubts about AI’s current workhorse. (Johnny Guatto / University of Toronto)

In 1986, Geoffrey Hinton co-authored a paper that, three decades later, is central to the explosion of artificial intelligence.

But Hinton says his breakthrough method should be dispensed with, and a new path to AI found.

… he is now “deeply suspicious” of back-propagation, the workhorse method that underlies most of the advances we are seeing in the AI field today, including the capacity to sort through photos and talk to Siri.

“My view is throw it all away and start again”

Hinton said that, to push materially ahead, entirely new methods will probably have to be invented. “Max Planck said, ‘Science progresses one funeral at a time.’ The future depends on some graduate student who is deeply suspicious of everything I have said.”

Hinton suggested that getting neural networks to become intelligent on their own, through what is known as “unsupervised learning,” will probably mean a different approach: “I suspect that means getting rid of back-propagation.”

“I don’t think it’s how the brain works,” he said. “We clearly don’t need all the labeled data.”
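
For readers unfamiliar with the method in question, here is a minimal sketch of back-propagation itself: a tiny two-layer network learning XOR by propagating error gradients backwards through the layers. It is illustrative only, not Hinton’s code:

```python
import numpy as np

# Minimal sketch of back-propagation: a tiny 2-layer network learning XOR.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)     # hidden activations, shape (4, 8)
    out = sigmoid(h @ W2 + b2)   # predictions, shape (4, 1)

    # Backward pass: push the error gradient back layer by layer (chain rule)
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer (MSE loss)
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden layer

    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]]
```

Every weight update depends on labeled targets `y`, which is exactly the dependence on labeled data Hinton is questioning above.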

Source: Axios


Tech Giants Grapple with the Ethical Concerns Raised by the #AI Boom

“We’re here at an inflection point for AI. We have an ethical imperative to harness AI to protect and preserve over time.” Eric Horvitz, managing director of Microsoft Research

2017 EmTech panel discussion

One shared concern was that recent advances are leading companies to put software in positions with very direct control over humans—for example in health care.

Francesca Rossi, a researcher at IBM, gave the example of a machine providing assistance or companionship to elderly people. “This robot will have to follow cultural norms that are culture-specific and task-specific,” she said. “[And] if you were to deploy in the U.S. or Japan, that behavior would have to be very different.”

In the past year, many efforts to research the ethical challenges of machine learning and AI have sprung up in academia and industry. The University of California, Berkeley; Harvard; and the Universities of Oxford and Cambridge have all started programs or institutes to work on ethics and safety in AI. In 2016, Amazon, Microsoft, Google, IBM, and Facebook jointly founded a nonprofit called Partnership on AI to work on the problem (Apple joined in January).

Companies are also taking individual action to build safeguards around their technology.

  • Gupta highlighted research at Google that is testing ways to correct biased machine-learning models, or prevent them from becoming skewed in the first place.
  • Horvitz described Microsoft’s internal ethics board for AI, dubbed AETHER, which considers things like new decision algorithms developed for the company’s in-cloud services. Although currently populated with Microsoft employees, in future the company hopes to add outside voices as well.
  • Google has started its own AI ethics board.

Source: MIT Technology Review


Will there be any jobs left as #AI advances?

A new report from the International Bar Association suggests machines will most likely replace humans in high-routine occupations.

The authors have suggested that governments introduce human quotas in some sectors in order to protect jobs.

“We thought it’d just be an insight into the world of automation and the blue-collar sector. This topic has picked up speed tremendously and you can see it everywhere and read it every day. It’s a hot topic now.” – Gerlind Wisskirchen, a lawyer who coordinated the study

For business futurist Morris Miselowski, job shortages will be a reality in the future.

“I’m not absolutely convinced we will have enough work for everybody on this planet within 30 years anyway. I’m not convinced that work as we understand it, this nine-to-five, Monday to Friday, is sustainable for many of us for the next couple of decades.”

“Even though automation began 30 years ago in the blue-collar sector, the new development of artificial intelligence and robotics affects not just blue collar, but the white-collar sector,” said Ms Wisskirchen. “You can see that when you see jobs that will be replaced by algorithms or robots depending on the sector.”

The report has recommended some methods to mitigate human job losses, including a type of ‘human quota’ in some sectors, a ‘made by humans’ label, or a tax on the use of machines.

But for Professor Miselowski, setting up human and computer ratios in the workplace would be impractical.

“We want to maintain human employment for as long as possible, but I don’t see it as practical or pragmatic in the long-term,” he said. “I prefer what I call a trans-humanist world, where what we do is we learn to work alongside machines the same way we have with computers and calculators.”

“It’s just something that is going to happen, or has already started to happen. And we need to make the best out of it, but we need to think ahead and be very thoughtful in how we shape society in the future — and that’s I think a challenge for everybody,” said Ms Wisskirchen.

Source: ABC News


We’re so unprepared for the robot apocalypse

Industrial robots alone have eliminated up to 670,000 American jobs between 1990 and 2007

It seems that after a factory sheds workers, that economic pain reverberates, triggering further unemployment at, say, the grocery store or the neighborhood car dealership.

In a way, this is surprising. Economists understand that automation has costs, but they have largely emphasized the benefits: machines make things cheaper, and they free up workers to do other jobs.

The latest study reveals that for manufacturing workers, the process of adjusting to technological change has been much slower and more painful than most experts thought. 

every industrial robot eliminated about three manufacturing positions, plus three more jobs from around town

“We were looking at a span of 20 years, so in that timeframe, you would expect that manufacturing workers would be able to find other employment,” Restrepo said. Instead, not only did the factory jobs vanish, but other local jobs disappeared too.

This evidence draws attention to the losers — the dislocated factory workers who just can’t bounce back

one robot in the workforce led to the loss of 6.2 jobs within a commuting zone (the area within which local people travel to work)

The robots also reduce wages, with one robot per thousand workers leading to a wage decline of between 0.25% and 0.5%. (Fortune)

None of these efforts, though, seems to be doing enough for communities that have lost their manufacturing bases, where people have reduced earnings for the rest of their lives.

Perhaps that much was obvious. After all, anecdotes about the Rust Belt abound. But the new findings bolster the conclusion that these economic dislocations are not brief setbacks, but can hurt areas for an entire generation.

How do we even know that automation is a big part of the story at all? A key bit of evidence is that, despite the massive layoffs, American manufacturers are making more stuff than ever. Factories have become vastly more productive.

some consultants believe that the number of industrial robots will quadruple in the next decade, which could mean millions more displaced manufacturing workers

The question now is what to do when the period of “maladjustment” lasts decades, or possibly a lifetime, as the latest evidence suggests.

automation amplified opportunities for people with advanced skills and talents

Source: The Washington Post


Teaching an Algorithm to Understand Right and Wrong


Aristotle states that it is a fact that “all knowledge and every pursuit aims at some good,” but then continues, “What then do we mean by the good?” That, in essence, encapsulates the ethical dilemma.

We all agree that we should be good and just, but it’s much harder to decide what that entails.

“We need to decide to what extent the legal principles that we use to regulate humans can be used for machines. There is a great potential for machines to alert us to bias. We need to not only train our algorithms but also be open to the possibility that they can teach us about ourselves.” – Francesca Rossi, an AI researcher at IBM

Since Aristotle’s time, the questions he raised have been continually discussed and debated. 

Today, as we enter a “cognitive era” of thinking machines, the problem of what should guide our actions is gaining newfound importance. If we find it so difficult to denote the principles by which a person should act justly and wisely, then how are we to encode them within the artificial intelligences we are creating? It is a question that we need to come up with answers for soon.

Cultural Norms vs. Moral Values

Another issue that we will have to contend with is that we will have to decide not only what ethical principles to encode in artificial intelligences but also how they are coded. As noted above, “Thou shalt not kill” is, for the most part, a strict principle. But in a few rare cases, such as for a Secret Service agent or a soldier, it is more like a preference that is greatly affected by context.

What makes one thing a moral value and another a cultural norm? Well, that’s a tough question for even the most lauded human ethicists, but we will need to code those decisions into our algorithms. In some cases, there will be strict principles; in others, merely preferences based on context. For some tasks, algorithms will need to be coded differently according to what jurisdiction they operate in.
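
One way to picture that split is a minimal sketch under our own assumptions (the rules and jurisdictions are invented for illustration): strict principles become hard constraints that filter actions outright, while cultural norms become jurisdiction-specific preference weights:

```python
# Minimal sketch: strict principles as hard constraints,
# cultural norms as context-dependent preference weights.
# All rules and jurisdictions below are illustrative assumptions.

HARD_CONSTRAINTS = [
    lambda action, ctx: action != "harm_human",   # strict principle: never permitted
]

# Soft preferences, scored per jurisdiction rather than forbidden outright.
PREFERENCES = {
    "US": {"direct_refusal": 0.0, "softened_refusal": 1.0},
    "JP": {"direct_refusal": -0.5, "softened_refusal": 1.0},
}

def permissible(action, ctx):
    return all(rule(action, ctx) for rule in HARD_CONSTRAINTS)

def choose(actions, ctx):
    allowed = [a for a in actions if permissible(a, ctx)]
    prefs = PREFERENCES[ctx["jurisdiction"]]
    return max(allowed, key=lambda a: prefs.get(a, 0.0))

print(choose(["direct_refusal", "softened_refusal"], {"jurisdiction": "JP"}))
# -> softened_refusal: same principles, different weighting by context
```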

Setting a Higher Standard

Most AI experts I’ve spoken to think that we will need to set higher moral standards for artificial intelligences than we do for humans.

Major industry players, such as Google, IBM, Amazon, and Facebook, recently set up a partnership to create an open platform between leading AI companies and stakeholders in academia, government, and industry to advance understanding and promote best practices. Yet that is merely a starting point.

Source: Harvard Business Review


New Research Center to Explore Ethics of Artificial Intelligence


The Chimp robot, built by a Carnegie Mellon team, took third place in a competition held by DARPA last year. The school is starting a research center focused on the ethics of artificial intelligence. Credit Chip Somodevilla/Getty Images

Carnegie Mellon University plans to announce on Wednesday that it will create a research center that focuses on the ethics of artificial intelligence.

The ethics center, called the K&L Gates Endowment for Ethics and Computational Technologies, is being established at a time of growing international concern about the impact of A.I. technologies.

“We are at a unique point in time where the technology is far ahead of society’s ability to restrain it.”
Subra Suresh, Carnegie Mellon’s president

The new center is being created with a $10 million gift from K&L Gates, an international law firm headquartered in Pittsburgh.

Peter J. Kalis, chairman of the law firm, said the potential impact of A.I. technology on the economy and culture made it essential that as a society we make thoughtful, ethical choices about how the software and machines are used.

“Carnegie Mellon resides at the intersection of many disciplines,” he said. “It will take a synthesis of the best thinking of all of these disciplines for society to define the ethical constraints on the emerging A.I. technologies.”

Source: NY Times


Genetically engineered humans will arrive sooner than you think. And we’re not ready

Michael Bess is a historian of science at Vanderbilt University and the author of a fascinating new book, Our Grandchildren Redesigned: Life in a Bioengineered Society. Bess’s book offers a sweeping look at our genetically modified future, a future as terrifying as it is promising.

“What’s happening is bigger than any one of us”

We single out the industrial revolutions of the past as major turning points in human history because they marked major ways in which we changed our surroundings to make our lives easier, better, longer, healthier.

So these are just great landmarks, and I’m comparing this to those big turning points because now the technology, instead of being applied to our surroundings — how we get food for ourselves, how we transport things, how we shelter ourselves, how we communicate with each other — now those technologies are being turned directly on our own biology, on our own bodies and minds.

And so, instead of transforming the world around ourselves to make it more what we wanted it to be, now it’s becoming possible to transform ourselves into whatever it is that we want to be. And there’s both power and danger in that, because people can make terrible miscalculations, and they can alter themselves, maybe in ways that are irreversible, that do irreversible harm to the things that really make their lives worth living.

“We’re going to give ourselves a power that we may not have the wisdom to control very well”

I think most historians of technology … see technology and society as co-constructing each other over time, which gives human beings a much greater space for having a say in which technologies will be pursued and what direction we will take, and how much we choose to have them come into our lives and in what ways.

Source: Vox



AI is one of top 5 tools humanity has ever had

A few highlights from the AI panel at the White House Frontiers Conference

On the impact of AI

Andrew McAfee (MIT):


To view the video, click on the picture, scroll down the page to Live Stream, and click to start the video. It may take a minute to load; then skip ahead to the time you want to watch.

(Begins @ 2:40:34)

We are at an inflection point … I think the development of these kinds of [AI] tools are going to rank among probably the top 5 tools humanity has ever had to take better care of each other and to tread more lightly on the planet … top 5 in our history. Like the book, maybe, the steam engine, maybe, written language — I might put the Internet there. We’ve all got our pet lists of the biggest inventions ever. AI needs to be on the very, very, short list.

On bias in AI

Fei-Fei Li, Professor of Computer Science, Stanford University:

(Begins @ 3:14:57)

Research repeatedly has shown that when people work in diverse groups there is increased creativity and innovation.

And interestingly, it is harder to work as a diverse group. I’m sure everybody here in the audience has had that experience. We have to listen to each other more. We have to understand the perspective more. But that also correlates well with innovation and creativity. … If we don’t have the inclusion of [diverse] people to think about the problems and the algorithms in AI, we might not only be missing the innovation boat, we might actually create bias and unfairness that are going to be detrimental to our society …

What I have been advocating at Stanford, and with my colleagues in the community is, let’s bring the humanistic mission statement into the field of AI. Because AI is fundamentally an applied technology that’s going to serve our society. Humanistic AI not only raises the awareness and the importance of our technology, it’s actually a really, really important way to attract diverse students and technologists and innovators to participate in the technology of AI.

There has been a lot of research done to show that people with diverse backgrounds put more emphasis on humanistic mission in their work and in their life. So, if in our education, in our research, we can accentuate or bring out this humanistic message of this technology, we are more likely to invite a diversity of students and young technologists to join us.

On lack of minorities in AI

Andrew Moore, Dean, School of Computer Science, Carnegie Mellon University:

(Begins @ 3:19:10)

I so strongly applaud what you [Fei-Fei Li] are describing here because I think we are engaged in a fight here for how the 21st century pans out in terms of who’s running the world … 

The nightmare, the silly, silly thing we could do … would be if … the middle of the century is built by a bunch of non-minority guys from suburban moderately wealthy United States instead of the full population of the United States.

Source: Frontiers Conference
Click on the video that says Live Stream (event will start shortly); it may take a minute to load.

(Update 02/24/17: The original timelines listed above may be different when revisiting this video.)


“Big data need big theory too”

This published paper was written by Peter V. Coveney, Edward R. Dougherty and Roger R. Highfield.

Abstract


The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry.
Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales.

Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their ‘depth’ and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data.

Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote ‘blind’ big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare.
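
The abstract’s central claim, that curve-fitting can fail outside the range of its training data, is easy to demonstrate. Here is a minimal sketch with an invented sine-law “system” and a blind polynomial fit; the data and degree are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of the paper's warning: a flexible model fit to data can
# look excellent inside the training range and fail badly outside it.

rng = np.random.default_rng(1)
x_train = np.linspace(0, 3, 30)
y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.size)  # underlying law: sine

coeffs = np.polyfit(x_train, y_train, deg=7)    # blind curve fit, no structural model

x_in = 2.5           # inside the training range
x_out = 6.0          # beyond the data: nothing structural to fall back on
print(np.polyval(coeffs, x_in), np.sin(x_in))    # close agreement
print(np.polyval(coeffs, x_out), np.sin(x_out))  # wildly divergent
```

The fitted curve has no notion of the periodic structure generating the data, which is exactly the missing “conceptual account” the authors describe.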

Source: The Royal Society Publishing


A Legal Definition Of Artificial Intelligence?

Defining the terms: artificial and intelligence

Image from Shutterstock

For regulatory purposes, “artificial” is, hopefully, the easy bit. It can simply mean “not occurring in nature or not occurring in the same form in nature”. Here, the alternative given after the “or” allows for the possible future use of modified biological materials.

This, then, leaves the knottier problem of “intelligence”.

Intelligence measures an agent’s ability to achieve goals in a wide range of environments.

Definition proposed by: Marcus Hutter (now at ANU) and Shane Legg (now at Google DeepMind) 
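
For reference, Legg and Hutter also give the definition a formal counterpart. As a sketch of their formulation, the intelligence of an agent sums the value it achieves across computable environments, each weighted by its simplicity (K is Kolmogorov complexity):

```latex
% Legg-Hutter universal intelligence of an agent \pi:
% expected value V achieved across computable environments \mu,
% each weighted by its simplicity 2^{-K(\mu)}.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```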

This informal definition signposts things that a regulator could manage, establishing and applying objective measures of ability (as defined) of an entity in one or more environments (as defined). The core focus on achievement of goals also elegantly covers other intelligence-related concepts such as learning, planning and problem solving.

But many hurdles remain.

First, the informal definition may not be directly usable for regulatory purposes because of the underlying constraints of AIXI, Hutter’s formal model of a maximally intelligent agent built on this definition. One constraint, often emphasised by Hutter, is that AIXI can only be “approximated” in a computer because of time and space limitations.

Another constraint is that AIXI lacks a “self-model” (but a recently proposed variant called “reflective AIXI” may change that).

Second, for testing and certification purposes, regulators have to be able to treat intelligence as something divisible into many sub-abilities (such as movement, communication, etc.). But this may cut across any definition based on general intelligence.

From a consumer perspective, this is ultimately all a question of drawing the line between a system defined as displaying actual AI, as opposed to being just another programmable box …

Source: Lifehacker

PL: Ouch


Inside the surprisingly sexist world of artificial intelligence

Right now, the real danger in the world of artificial intelligence isn’t the threat of robot overlords — it’s a startling lack of diversity.

There’s no doubt Stephen Hawking is a smart guy. But the world-famous theoretical physicist recently declared that women leave him stumped.

“Women should remain a mystery,” Hawking wrote in response to a Reddit user’s question about the realm of the unknown that intrigued him most. While Hawking’s remark was meant to be light-hearted, he sounded quite serious discussing the potential dangers of artificial intelligence during Reddit’s online Q&A session:

The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.

Hawking’s comments might seem unrelated. But according to some women at the forefront of computer science, together they point to an unsettling truth. Right now, the real danger in the world of artificial intelligence isn’t the threat of robot overlords—it’s a startling lack of diversity.

I spoke with a few current and emerging female leaders in robotics and artificial intelligence about how a preponderance of white men have shaped the fields—and what schools can do to get more women and minorities involved. Here’s what I learned:

  1. Hawking’s offhand remark about women is indicative of the gender stereotypes that continue to flourish in science.
  2. Fewer women are pursuing careers in artificial intelligence because the field tends to de-emphasize humanistic goals.
  3. There may be a link between the homogeneity of AI researchers and public fears about scientists who lose control of superintelligent machines.
  4. To close the diversity gap, schools need to emphasize the humanistic applications of artificial intelligence.
  5. A number of women scientists are already advancing the range of applications for robotics and artificial intelligence.
  6. Robotics and artificial intelligence don’t just need more women—they need more diversity across the board.

In general, many women are driven by the desire to do work that benefits their communities, desJardins says. Men tend to be more interested in questions about algorithms and mathematical properties.

Since men have come to dominate AI, she says, “research has become very narrowly focused on solving technical problems and not the big questions.”

Source: Quartz


The danger of tech’s far-reaching tentacles


Steve Jobs during one of his presentations of new Apple products. Photograph: Christoph Dernbach/Corbis

Excerpt from Tim Adams interview with Danny Boyle, director of Steve Jobs:

Tim Adams: We have all been complicit, I suggest, in the rise of Apple to become the world’s most valuable company, in the journey that Jobs engineered from rebellion to ubiquity and all that it entails. Did Boyle want the film to comment on that complicity?

Danny Boyle: “I think so. Ultimately it is about his character, and a father and a daughter. But you do want it to try and be part of the big story of our relationship with these giant corporations. All the companies that were easy to criticise, banks, oil companies, pharmaceutical companies, they have been replaced by tech guys. And yet the atmosphere around them remains fairly benign. Governments are not powerful enough any more to resist them and the law is not quick enough. One of the reasons I wanted to do this [direct the movie Steve Jobs] is that sense that we have to constantly bring these people to account. I mean, they have emasculated journalism for one thing. They have robbed it of its income. If you want to look at that malignly you certainly could do: they have made it so nobody can afford to write stories about them. Their tentacles are so far reaching in the way the world is structured that there is a danger they become author and critic at the same time. Exactly what Jobs used to accuse IBM of.”

Source: The Guardian

 

 


First feature film ever told from the point of view of artificial intelligence

Stephen Hawking, Elon Musk and Bill Gates will love this one! (Not)

“We made NIGHTMARE CODE to open up a highly relevant conversation, asking how our mastery of computer code is changing our basic human codes of behavior. Do we still control our tools, or are we—willingly—allowing our tools to take control of us?”

The movie synopsis: “Brett Desmond, a genius programmer with a troubled past, is called in to finish a top secret behavior recognition program, ROPER, after the previous lead programmer went insane. But the deeper Brett delves into the code, the more his own behavior begins changing … in increasingly terrifying ways.”

“NIGHTMARE CODE came out of something I learned working in video-game development,” Netter says. “Prior to that experience, I thought that any two programmers of comparable skill would write the same program with code that would be 95 percent similar. I learned instead that different programmers come up with vastly different coding solutions, meaning that somewhere deep inside every computer, every mobile phone, is the individual personality of a programmer—expressed as logic.

“But what if this personality, this logic, was sentient? And what if it was extremely pissed off?”

Available on Google Play

Source: Fangoria


What IBM Watson must never become

Below is excerpted dialog between an AI-powered robot “synth,” called Vera, and her human patient, Dr. Millican, who also happens to be one of the original developers of synths, from the hit British-American TV drama Humans.

Synth caregiver Vera – Please stick out your tongue.

Dr. Millican – You’re kidding me.

Synth caregiver Vera – Any non-compliance or variation in your medication intake must be reported to your GP.

Dr. Millican – You’re not a carer, you’re a jailer. Elster would be sick to his stomach if he saw what you have become. I’m fine. Now get lost.

Synth caregiver Vera – You should sleep now, Dr Millican; your pulse is slightly elevated.

Dr. Millican – Slightly?

Synth caregiver Vera – Your GP will be notified of any refusal to follow recommendations made in your best interests.


From the TV series Humans, episode #2


Point of this blog on Socializing AI

“Artificial Intelligence must be about more than our things. It must be about more than our machines. It must be a way to advance human behavior in complex human situations. But this will require wisdom-powered code. It will require imprinting AI’s genome with social intelligence for human interaction. It must begin right now.”
— Phil Lawson
(read more)
