Opinions

What Makes A 'Good' Robot Colleague?

How will human-robot working relationships play out in the future?

An animal charity in San Francisco has become a target for global criticism and local abuse, all because of the behaviour of its most recent recruit: an R2-D2-style robot.

The 1.5-metre-tall Knightscope security robot had been trundling around neighbouring car parks and alleyways recording video, stopping to say hello to passers-by - but it ended up being accused of harassing homeless people. A social media storm led to calls for acts of retribution, violence and vandalism against the charity.

The Knightscope itself was regularly tipped over, once covered with a tarpaulin, and smeared with barbecue sauce and even faeces. Problems with electronic workers are becoming more common: a similar robot in another US city inadvertently knocked over a toddler, and others have been tipped over by disgruntled office workers.

It’s an example of how robot technologies can provoke extreme, perhaps irrational, reactions. Security was a real issue for the charity. There had been a string of break-ins, incidents of vandalism and evidence of hard drug use that was making staff and visitors feel unsafe, and it seems reasonable that the charity should try to make people feel safer.

The introduction of new forms of Artificial Intelligence into people’s everyday working lives is one of the biggest challenges facing not just employers but entire societies. How far are we willing to let robots into the workplace - and how far should we?

What kinds of roles are acceptable, and which are not? Robots are already capable of providing childcare and eldercare, working in hospitals, performing surgery and delivering customer care, as well as filling a whole raft of office-based analytical roles.

Most importantly, who sets the rules for how they behave, and how they decide on priorities when interacting with people? These are fundamental issues, such as whether business tasks and objectives should be prioritised over scruples, or over the opinions and feelings of human co-workers. After all, the behaviour and the ‘right’ choices are all made in the programming.

The potential of robot-enhanced living, and the huge commercial opportunities involved, mean we will become more accepting of robots as they become a familiar, even inescapable, part of our lives. But that central issue, of what kinds of robots we want and where, needs to be dealt with now.

The debate needs to be shaped as much by ordinary members of the public, employees and their managers as by technologists and engineers. We all have a stake in deciding what makes a ‘good’ robot.

The British Standards Institution published the first standard for robot ethics, BS 8611, in 2016. But that’s just the start, and the BSI’s UK Robot Ethics Group is now looking to involve the general public in producing new standards and informing the work of developers and manufacturers.

Our relationship to AI and to robots is messy and confused. On the one hand there are attacks on robots when there’s a feeling of intrusion. On the other, we’re increasingly attached emotionally to our personal technologies - our personalised phones and tablets - and make pets of robot toys and anything that shows signs of engagement, no matter how limited and fake. There’s the potential for too much trust.

We need to be clear-sighted about the future of human-robot relationships in the workplace, and that means debating it now, before the sheer scale of consumer opportunities and cost-savings makes the decisions for us.

Dr Sarah Fletcher is a senior research fellow at the Centre for Structures, Assembly and Intelligent Automation, Cranfield University.
