Wikipedia bots act more like humans than expected

‘Benevolent bots’, or software robots designed to improve articles on Wikipedia, sometimes have online ‘fights’ over content that can continue for years, say scientists, who warn that artificial intelligence systems may behave more like humans than expected.

They found that bots interacted with one another, whether or not this was by design, and that this led to unpredictable consequences.

The researchers said that bots are more like humans than you might expect, and that they appear to behave differently in culturally distinct online environments.

The findings are a warning to those using artificial intelligence for building autonomous vehicles, cyber security systems or for managing social media.

We may have to devote more attention to bots’ diverse social life and their different cultures, researchers said.

The research found that although the online world has become an ecosystem of bots, our knowledge of how they interact with each other is still rather poor.

Although bots are automatons without the capacity for emotions, bot-to-bot interactions are unpredictable, and the bots act in distinctive ways.

Researchers found that the German edition of Wikipedia had the fewest conflicts between bots, with each bot undoing another’s edits 24 times, on average, over ten years.

This shows relative efficiency when compared with bots on the Portuguese edition of Wikipedia, which undid another bot’s edits 185 times, on average, over ten years, researchers said.

Bots on English Wikipedia undid another bot’s work 105 times, on average, over ten years, three times the rate of human reverts, they said.

The findings show that even simple autonomous algorithms can produce complex interactions that result in unintended consequences – ‘sterile fights’ that may continue for years, or reach deadlock in some cases.
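As a toy illustration (not the study’s actual code, and with hypothetical rules), two simple rule-based bots that each enforce a different “correct” spelling of the same word will revert each other indefinitely, producing exactly the kind of sterile fight described above:

```python
# Minimal sketch: two hypothetical rule-based Wikipedia-style bots, each
# convinced a different spelling is correct, patrol the same article and
# revert each other forever. Neither bot has any notion of the other's
# existence -- the "fight" emerges purely from their fixed rules.

def make_bot(preferred):
    """Return a bot that reverts the article whenever it disagrees."""
    def bot(article):
        if article["spelling"] != preferred:
            article["spelling"] = preferred   # undo the other bot's edit
            return True                       # a revert was made
        return False
    return bot

bot_a = make_bot("colour")   # enforces British spelling
bot_b = make_bot("color")    # enforces American spelling

article = {"spelling": "color"}
reverts = 0
for _ in range(100):         # simulate 100 patrol rounds
    for bot in (bot_a, bot_b):
        if bot(article):
            reverts += 1

print(reverts)  # 200 -- two reverts per round, with no end in sight
```

The deadlock arises even though each bot’s rule is trivially simple, which is the paper’s point: unintended dynamics come from the interaction, not from any individual bot’s complexity.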

“We find that bots behave differently in different cultural environments and their conflicts are also very different to the ones between human editors,” said Milena Tsvetkova, from the Oxford Internet Institute.

“This has implications not only for how we design artificial agents but also for how we study them. We need more research into the sociology of bots,” said Tsvetkova.

Source: The Statesman


Civil Rights and Big Data

Blogger’s note: We’ve posted several articles on the bias and prejudice inherent in big data, which, combined with machine learning, results in “machine prejudice,” all of which impacts humans when they interact with intelligent agents.

Apparently, as far back as May 2014, the Executive Office of the President started issuing reports on the potential in “Algorithmic Systems” for “encoding discrimination in automated decisions”. The most recent report of May 2016 addressed two additional challenges:

1) Challenges relating to data used as inputs to an algorithm;

2) Challenges related to the inner workings of the algorithm itself.

Here are two excerpts:

The Obama Administration’s Big Data Working Group released reports on May 1, 2014 and February 5, 2015. These reports surveyed the use of data in the public and private sectors and analyzed opportunities for technological innovation as well as privacy challenges. One important social justice concern the 2014 report highlighted was “the potential of encoding discrimination in automated decisions”—that is, that discrimination may “be the inadvertent outcome of the way big data technologies are structured and used.”

To avoid exacerbating biases by encoding them into technological systems, we need to develop a principle of “equal opportunity by design”—designing data systems that promote fairness and safeguard against discrimination from the first step of the engineering process and continuing throughout their lifespan.

Download the report here: Whitehouse.gov

References:

https://www.whitehouse.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence

http://www.frontiersconference.org/



Blurring the boundaries between humans and robots

Inspired by Japan’s unique spiritual beliefs, Japanese roboticists are blurring the boundaries between humans and robots.

“It is a question of where the soul is. Japanese people have always been told that the soul can exist in everything and anything. So we don’t have any problem with the idea that a robot too has a soul. We don’t make much distinction between humans and robots.” – Roboticist Hiroshi Ishiguro

Geminoid HI-1 is a doppelganger droid built by its male co-creator, roboticist Hiroshi Ishiguro. It is controlled by a motion-capture interface. It can imitate Ishiguro’s body and facial movements, and it can reproduce his voice in sync with his motion and posture. Ishiguro hopes to develop the robot’s human-like presence to such a degree that he could use it to teach classes remotely, lecturing from home while the Geminoid interacts with his classes at Osaka University.

NOTE: this video was published on YouTube on Mar 17, 2012


Get Schooled turns to the Internet to create a free teen-friendly hub where students can access info in one place

“I began my career as a high school teacher in the Bronx at a 5,000-student high school that’s since been shut down for chronic low-performance. That experience helped me understand how alone so many young people are as they are trying to figure out their future. Their parents are busy, their friends are worried about their own issues, and often they don’t have a teacher or other adult who is there to guide them.” – Marie Groark, Executive Director of GetSchooled.com

Did you know the average high school student spends less than one hour per school year with a guidance counselor mulling over college decisions? This, according to the National Association for College Admission Counseling.

Not only is this not nearly enough time to make decisions that can impact the rest of their lives, but for kids whose families can’t afford college prep, that might be their only interaction with someone equipped to steer them toward higher education.

Get Schooled has turned to the Internet to create a free teen-friendly hub where students can access relevant info in one place, from how to find and apply for scholarships to info on standardized tests to what type of school fits their personality. They cut the boredom factor with celebrity interviews and a gamification model that awards students points as they engage, redeemable for offline rewards.

We believe a role for AI, as a next step in this expanding opportunity, is to engage and collaborate with students individually about their own lives and futures: to get to know the unique perspective and situation of each student, to guide students toward what they personally need precisely when they need it, and to equip them with information tailored to their own personal journeys.

Source: Fast Company
