Julie Andrews as Mary Poppins. Do robots need the care of an ethical nanny? © Disney/Rex

I recently offered a former colleague, the founder of an artificial intelligence start-up, some unsolicited advice. Congratulating him on the birth of his first child, I said, with the confidence of a father of two teenage boys: “Babies aren’t like the robots you’re used to raising. They enter this world with a certain nature, and that’s essentially who they remain throughout life.”

The words were scarcely out of my mouth when Mary Poppins deftly piloted her umbrella to a firm landing inside my head and took me to task. Underestimating the power of nurture calls into question the power of good or bad parents (and nannies) to change the course of their children’s lives. Upbringing can help determine whether you end up with a brat or a brainiac.

As we look towards a future with AI, what we end up with will depend on what we teach it, just as Ms Poppins would insist is the case with our children. We could as easily end up with AS (artificial stupidity) as with AI. No personality, no instinct, no consciousness, no soul. Probably no sense of humour. The blank slate of AI means that for robots, nurturing is all that counts.

I think about this when I watch my 15- and 18-year-old sons use their smartphones. The cloud absorbs massive amounts of data about them and remembers everything. As we give up more data about ourselves, we hope that AI will deliver the perfected state of existence that we yearn for as part of the human condition. We also hope that it will diagnose diseases and find cures, transport us effortlessly and harmlessly, and unlock the secrets of the universe. AI has redefined what could be possible.

But will what is possible redefine humanity? Will morality shift to accommodate new realities, or will it guide us in what we seek to make possible? Will we end up like Mickey Mouse as the clumsy sorcerer’s apprentice, frantically trying to grasp the power we have unleashed yet falling further behind? Or will we fully embrace nurture and create a moral and ethical environment in which to raise AI to its full societal potential?

AI learns from those who teach it, and risks becoming the new battleground for old conflicts about right and wrong. Digital creations could be used in ways that reflect the morals of their makers, who might lack any morals at all.

And because AI systems are trained on data, poor-quality data or a lack of transparency in how the systems learn is a risk for users. Deploying AI to consumers without understanding how it was raised could have unintended consequences when something goes wrong or the data it collects ends up in the wrong hands. This is one of the main reasons the EU enacted its General Data Protection Regulation last May.

Concerns about building ethical AI coincide with a generational shift. Companies are striving to attract and retain millennials, a generation that says it wants to make a positive impact on the world. Earning their trust requires embracing fairness, inclusiveness, transparency and accountability.

Companies are therefore looking for explicit commitments and follow-through. Many are worried that their AI will not work as intended for customers, especially as products grow in their ability to self-learn and adapt.

It is easy to be “the perfect nanny” when your toolkit includes magic. It will take more than a snap of our fingers and some hummable songs to raise our technological progeny the right way. But, like Mary Poppins, we can try to teach our technology, as we do our children, within a moral framework that fosters the right behaviours and follows a consistent set of rules. And, importantly, we must instruct it in a way that broadens its view of the world, and ours, creating a new way of seeing, believing and marvelling at all the possibilities it could hold.

There is much to be done. As we build tech that attempts to serve rather than threaten humanity, ethical frameworks for AI that reflect nurturing worthy of Mary Poppins may yet be the moral compasses we need.

David Oskandy is general counsel and secretary at Avanade, a technology consulting joint venture between Microsoft and Accenture
