Wikipedia bots act more like humans than expected

‘Benevolent bots’, or software robots designed to improve articles on Wikipedia, sometimes have online ‘fights’ over content that can continue for years, say scientists, who warn that artificial intelligence systems may behave more like humans than expected.

The researchers found that bots interacted with one another, whether or not this was by design, and that these interactions led to unpredictable consequences.

The researchers said that bots are more like humans than one might expect, and that they appear to behave differently in culturally distinct online environments.

The findings are a warning to those using artificial intelligence to build autonomous vehicles and cyber-security systems, or to manage social media.

We may have to devote more attention to the diverse social life of bots and to their different cultures, the researchers said.

The research found that although the online world has become an ecosystem of bots, our knowledge of how they interact with each other is still rather poor.

Although bots are automatons without the capacity for emotion, their interactions with one another are unpredictable, and individual bots act in distinctive ways.

The researchers found that the German edition of Wikipedia had the fewest conflicts between bots, with each bot undoing another’s edits 24 times, on average, over ten years.

This shows relative efficiency when compared with bots on the Portuguese edition of Wikipedia, which undid another bot’s edits 185 times, on average, over ten years, the researchers said.

Bots on English Wikipedia undid another bot’s work 105 times, on average, over ten years, three times the rate of human reverts, they said.
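To illustrate the kind of measurement behind these figures, the sketch below tallies how often a bot reverts another bot’s edit, grouped by language edition. This is a minimal, hypothetical illustration: the record layout and field names (such as “lang”, “editor_is_bot” and “reverted_is_bot”) are assumed for the example and are not taken from the study or from Wikipedia’s actual data format.

```python
from collections import defaultdict

# Hypothetical edit records: each entry notes the edition ("lang"),
# whether the editor making the revert is a bot, and whether the
# editor being reverted is a bot. Field names are illustrative only.
edits = [
    {"lang": "de", "editor_is_bot": True, "reverted_is_bot": True},
    {"lang": "pt", "editor_is_bot": True, "reverted_is_bot": False},
    {"lang": "en", "editor_is_bot": True, "reverted_is_bot": True},
    # ... more edit records ...
]

def bot_bot_reverts_per_edition(edits):
    """Count reverts where a bot undoes another bot's edit, keyed by edition."""
    counts = defaultdict(int)
    for e in edits:
        if e["editor_is_bot"] and e["reverted_is_bot"]:
            counts[e["lang"]] += 1
    return dict(counts)

print(bot_bot_reverts_per_edition(edits))  # e.g. {'de': 1, 'en': 1}
```

In practice, averages like those quoted above would then be computed by dividing such per-edition totals by the number of bots and the length of the observation period.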

The findings show that even simple autonomous algorithms can produce complex interactions that result in unintended consequences – ‘sterile fights’ that may continue for years, or reach deadlock in some cases.

“We find that bots behave differently in different cultural environments and their conflicts are also very different to the ones between human editors,” said Milena Tsvetkova, from the Oxford Internet Institute.

“This has implications not only for how we design artificial agents but also for how we study them. We need more research into the sociology of bots,” said Tsvetkova.

Source: The Statesman
