
Ana Cristina Garcia waves to Captcha, a robot by Hidoba Research, during the AI for Good Global Summit on artificial intelligence, organized by the International Telecommunication Union, in Geneva, Switzerland, on May 30. Denis Balibouse/Reuters

The term “robo-adviser” might one day live up to its name, and it turns out that might not be great for investors.

A new study reveals a possible future paradox: making AI-driven investment advice appear more human in how it’s delivered might leave investors with worse returns.

In Canada, virtually none of the services considered robo-advisers actually call themselves robo-advisers (except when quoting media accolades). Check the homepage of any Canadian “robo-adviser” and you’ll find it describes itself as some version of managed investing.

But the rise of AI-driven investment management for retail investors is already under way around the world and is likely to spread to Canada. And once again, as the paper published last year in the Journal of Behavioral and Experimental Economics suggests, investors’ own behaviour can get in the way.

While AI-driven investment advice showed potential in reducing behavioural biases like the disposition effect (the tendency to hold onto losing investments too long while selling winners too quickly), adding human-like features appears to make investors less likely to seek and follow the AI robo-adviser’s guidance.
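
For readers curious how such a bias is measured, researchers commonly follow Odean’s 1998 approach: compare the proportion of winning positions an investor actually sells with the proportion of losing positions sold. The Python sketch below is a minimal illustration of that arithmetic, not the study’s actual methodology; the disposition_effect helper and the sample trade counts are hypothetical.

```python
# Sketch of Odean's (1998) disposition-effect measure:
# DE = PGR - PLR, where PGR is the proportion of gains realized
# and PLR is the proportion of losses realized. Field names and
# sample data are hypothetical illustrations.

def disposition_effect(positions):
    """Each position is (sold: bool, gain: bool), observed on days
    when the investor sold something (realized vs. paper outcomes)."""
    realized_gains = sum(1 for sold, gain in positions if sold and gain)
    paper_gains = sum(1 for sold, gain in positions if not sold and gain)
    realized_losses = sum(1 for sold, gain in positions if sold and not gain)
    paper_losses = sum(1 for sold, gain in positions if not sold and not gain)

    pgr = realized_gains / (realized_gains + paper_gains)
    plr = realized_losses / (realized_losses + paper_losses)
    return pgr - plr  # positive => winners sold more readily than losers

# Hypothetical investor: sold 8 of 10 winning positions,
# but only 3 of 10 losing positions.
positions = ([(True, True)] * 8 + [(False, True)] * 2 +
             [(True, False)] * 3 + [(False, False)] * 7)
print(f"Disposition effect: {disposition_effect(positions):.2f}")  # 0.50
```

A value near zero means winners and losers are sold at similar rates; the large positive value here is the classic pattern of clinging to losers while cashing out winners.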

This is a striking finding. The very features designed to make these platforms more approachable could push investors away from advice. Why?

When we interact with a clearly artificial system – think a basic interface with buttons and data – we approach it as a tool. We don’t worry about judgment or feel the need to protect our ego. But add a friendly avatar and a name such as “Charles,” and suddenly the dynamic shifts. Now we’re dealing with a “person,” albeit a digital one. And with that comes the psychological baggage of human interaction.

Augmenting an online investment platform with a human-like digital adviser – complete with an avatar, a name and conversational abilities that mimic human interaction – sounds like a good thing. The thinking goes: if we make these AI-powered advisers friendlier, surely investors will feel more comfortable using them.

Except the research suggests this humanization trend could backfire spectacularly.

The study showed that investors using AI robo-advisers with social design elements exhibited a stronger disposition effect compared with those using more neutral interfaces. In other words, they held onto losing investments longer and sold winners too quickly – exactly the behaviour the AI robo-advisers were trying to prevent.

(It’s important to note a difference between the study’s setting and contemporary Canadian robo-adviser platforms: the study examined the buying and selling of individual assets in a trading environment, while Canadian robo-advisers tend to offer fully managed portfolios.)

Investors in the study were also less inclined to seek advice from the humanized AI robo-advisers. Participants requested 43 per cent less guidance when the robo-adviser appeared more human than when it did not. Further, almost 30 per cent of participants in the humanized-interface group didn’t ask for advice at all, compared with 12.5 per cent in the non-humanized group.

It’s as if the psychological cost of admitting we need help to a “human-like” entity outweighs the potential benefits of the advice. This reluctance to engage translated directly into poorer investment decisions.

Herein lies the paradox. On one hand, robo-advisers and other AI-powered investment management tools have the potential to improve investment outcomes for the average person. They could be accessible, cost-effective and capable of overcoming many of the behavioural biases that plague human decision-making.

But on the other hand, if we design them in a way that makes people reluctant to use them, we’ve solved nothing. Perhaps this phenomenon could be called the irony of inaccessible rationality.


Preet Banerjee is a consultant to the wealth management industry with a focus on commercial applications of behavioural finance research.
