It is commonplace to say that computers are changing our lives. But what about changing our capacity to be good leaders? In this paper, we want to show that computers can genuinely help humans enhance their leadership. Machine learning and AI are game-changers. Robots, or more precisely "chatbots", are about to be tested to tackle the difficulties of the NHS. Robots are probably on the brink of becoming much safer drivers than human beings. But they cannot match human intuition, or relational and emotional intelligence, which are essential assets of good leaders. This article pleads for a fruitful alliance between computers and people who aim to lead, and explores some fundamentals on which to build this alliance.
The era of machine-learning
The recent increase in computing power has enabled the revival of old concepts in artificial intelligence, such as Artificial Neural Networks. This type of model, whose name comes from an architecture that loosely mimics the structure of neuron connections in the brain, has been applied very successfully to problems such as image recognition and, more recently, to beating humans at the complex game of Go. The particularity of problems such as image recognition is that the human brain is particularly well equipped to solve them easily, while the very simple basic principles behind computing have so far been considered poorly adapted to such complex, and yet easy, tasks. In contrast, computers were typically considered more powerful for calculations involving large numbers and a high volume of operations: a non-complex, and yet difficult, task.
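To make the idea concrete, here is a minimal, purely illustrative sketch in Python of the basic building block of such networks: a single artificial "neuron" that computes a weighted sum of its inputs and passes it through an activation function. The weights below are chosen by hand for the sake of the example; in a real network they are learned from data.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of the inputs,
    passed through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# A network is just layers of such units. Here, two neurons feeding a
# third compute a smooth version of the XOR function (hand-picked
# weights; real networks learn these values during training).
def tiny_network(x1, x2):
    h1 = neuron([x1, x2], [20, 20], -10)    # behaves like OR
    h2 = neuron([x1, x2], [-20, -20], 30)   # behaves like NAND
    return neuron([h1, h2], [20, 20], -30)  # behaves like AND

print(round(tiny_network(0, 1)), round(tiny_network(1, 1)))  # prints: 1 0
```

The point is not the arithmetic itself but the architecture: stacking many such simple units, and adjusting their weights from examples, is what gives these models their power.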
Another domain in which the supremacy of the human brain is increasingly challenged by the growing efficiency of algorithms is Natural Language Processing: the ability to recognise and interpret human handwriting or speech. This, coupled with information storage and fast search engines, is enabling the development of so-called Artificial Intelligences with very ergonomic user interfaces for the general public, such as Siri on iPhones or equivalents like Google Now. These portable AIs (note that the core algorithm does not run on your device but on a remote server, which is why you cannot use Siri offline) are the best known, but they are not alone. In 2011, IBM's AI platform Watson won the famous quiz show Jeopardy! against former human champions, and it is also used for a number of niche applications, such as the automatic analysis of scientific literature or helping doctors with medical diagnosis.
These recent tools rely heavily on a vast domain of computer science called machine learning, which encompasses an entire class of algorithms, based on more or less sophisticated statistical models, that have the ability to adapt to a (sometimes vast) quantity of data fed to the computer: the data is used to train the algorithm. Although the applications of machine learning we have just cited are probably some of the most striking (not to mention self-driving cars, for instance), machine learning is omnipresent in many other areas of our everyday lives: e-marketing, spam filtering, and so on. The presence of most people on one or more social media or online shopping websites (eg Amazon, eBay, …) is probably the main way in which data about your tastes and preferences is gathered by a number of companies and used in machine learning algorithms, for instance to give you tailored shopping recommendations on Amazon or targeted ads in your Facebook newsfeed.
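To illustrate what "training on data" means in practice, here is a deliberately minimal Python sketch of one of the oldest everyday applications of machine learning, spam filtering, using a naive Bayes classifier on made-up example messages. The word counts gathered from labelled messages are the "trained" part of the model: feed it different data and it behaves differently.

```python
from collections import Counter
import math

def train(messages):
    """Training step: count words per label. These counts ARE the model."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        for word in text.lower().split():
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Score each label with a naive Bayes log-likelihood (add-one smoothing)."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    best_label, best_score = None, float("-inf")
    for label in ("spam", "ham"):
        n = sum(counts[label].values())
        score = math.log(totals[label] / sum(totals.values()))
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Tiny hand-made training set; real filters are trained on millions of messages.
training_data = [
    ("win a free prize now", "spam"),
    ("free money click now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the project team", "ham"),
]
counts, totals = train(training_data)
print(classify("free prize money", counts, totals))  # prints: spam
```

The same principle, with vastly more data and more sophisticated statistical models, underlies the recommendation and advertising systems mentioned above.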
The increasing sophistication of these machine-learning tools, coupled with the increasing amount and diversity of the data on which they are trained, promises that our lives will become more and more reliant on computer-based (perhaps computer-aided) decision making. This is probably for the better, as algorithms tend to make faster and better decisions than humans. For example, self-driving cars are becoming safer than human drivers. However, some of the reactions to the recent crash of a Tesla car on autopilot are a reminder that, as much as people are often prone to blindly trust the results given by a mathematical model or a piece of software, they are also irrationally unwilling to forgive their mistakes.
Application to company management
Machine learning is also taking on growing importance in the field of company management. For example, an increasing number of companies rely on machine-learning-based software for recruitment purposes. While this is in principle a good way of reducing the subjective bias of recruiters and making more sensible decisions, there are a couple of caveats to be aware of in such an approach. Machine learning algorithms rely on training (adapting) a model based on observations from the past. For example, such software could detect that only a certain type of person has had a successful career in your company in the past, and encourage recruiters to hire the same type of people. This might be a serious issue if the company has, for example, a history of sexism in its promotion policy, as most companies of a certain age probably do. In other words, two important limitations of blindly relying on computational tools for recruitment policy can be phrased as follows: 1- using historical data to guide future decisions will tend to reproduce an old model; 2- the objectivity of computer-based decisions backfires when moral values or empathy should be driving the decisions.
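The first limitation can be made concrete with a deliberately oversimplified sketch; the records and the "model" below are entirely hypothetical. A scoring tool trained on historical promotion data simply reproduces whatever imbalance the data contains: two equally qualified candidates end up with different scores purely because the past treated their groups differently.

```python
# Hypothetical historical records: (years_of_experience, group, promoted).
# "group" stands for any attribute the past treated unequally.
history = [
    (5, "A", True), (7, "A", True), (6, "A", True),
    (5, "B", False), (8, "B", False), (6, "B", True),
]

def train_success_rates(records):
    """'Training' here is just estimating past promotion rates per group --
    a minimal model, chosen to make the problem visible."""
    rates = {}
    for group in {g for _, g, _ in records}:
        outcomes = [promoted for _, g, promoted in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def score(candidate_group, rates):
    """Score a candidate by the historical success rate of their group."""
    return rates[candidate_group]

rates = train_success_rates(history)
# Group A scores 1.0, group B only 0.33: the old pattern, faithfully reproduced.
print(score("A", rates), score("B", rates))
```

Real recruitment software is far more elaborate, but the mechanism is the same: a model fitted to a biased past will, by design, project that past onto the future.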
Human bias, an asset?
Still, computers are probably more objective than humans. But is “objectivity” really the goal?
Hannah Arendt used to say: “Fortunately, we have prejudices; otherwise, we would have to make our judgements anew every time we need them”. Was it a joke, coming from one of the most famous Jewish political theorists, the author of The Origins of Totalitarianism and of Eichmann in Jerusalem: A Report on the Banality of Evil? Not so sure, because we really do need “prejudices” (literally: pre-judgements) sometimes, in order to be efficient.
Likewise, some of our most irrational cognitive biases are nonetheless essential elements of human thinking. For example, the so-called “identifiable victim effect” enhances our capacity to feel empathy for “real people”, while the “negativity bias” helps us remember negative experiences and is thus a kind of self-protection.
Another cognitive bias, the tendency we already mentioned to be much more indulgent with human errors than with machine errors, can be seen as a safeguard, in at least two respects:
- First, it echoes the ancient proverb “Errare humanum est, perseverare diabolicum”. Or, as Admiral Nimitz, the victor of the Pacific War, used to say: “Each dog should be allowed one bite”. That is the only way to progress, since we learn a lot from our mistakes.
- But it is also a sort of counterbalance to another tendency, which leads us to overtrust machines. One of the authors of this paper has heard, at least a thousand times during his career as an engineer, a manager and later an executive coach, sentences meaning more or less “it must be true, the computer says so”. Given this tendency, we are right to be very demanding of machines.
Speaking of computers, let us remember, as underscored previously, that machines have their own biases! Sometimes this is even the way machine learning is initiated: starting from a “prejudice”, or prior. Furthermore, as powerful as they can be, computers are not almighty, and their limitations, in precision, in computing power, in the algorithms they run, act as our biases do: they make the machines' reasoning sometimes error-prone. The fact that their biases are not as thoroughly documented as ours is a good reason to remain careful. All the more so since their biases probably compound those of their creators and programmers.
Culture, a specifically human glue
Some of our cognitive biases are universal, but many are driven by our culture(s). Is that to be regretted? Our cultures are also what enables us to live together. They represent bonds as much as they represent boundaries. We need these different cultures to be able to deal with the complexity of our world. In his book When Cultures Collide, Richard Lewis refers to Professor Robin Dunbar’s hypothesis that our language may have evolved as a form of social glue to hold us together, since humans live in much larger groups than other primates. In this respect, Richard Lewis hints that “gossip” could be the ultimate form of this glue. Let us preserve this glue, whatever biases it involves, or we risk becoming no more human than a computer network!
Actually, thinking about computer-aided reasoning reminds us of this wisdom, beautifully stated by Blaise Pascal: there is nowhere on earth such a thing as absolute truth. Keeping that in mind is a keystone for addressing, with efficiency and respect, all the complexity of human relations, and thus of management. Yes, cognitive biases exist, and we have to be conscious of that. This awareness can lead us to a very healthy humility. Unlike computers, our very nature is not made of pure rational intelligence: it also includes relational and emotional intelligence. These are sometimes a hindrance to reasoning. But they are also a genuine asset for leading people.
A fruitful alliance between man and machine
So, can machines really help men and women become better leaders? One could doubt it, given the number of papers focusing on computer biases, even without sharing Stephen Hawking’s warning that “the development of full artificial intelligence could spell the end of the human race” [Prof Stephen Hawking for the BBC, 2 December 2014].
Maybe the key lies in one word used by Prof. Hawking: the development of FULL artificial intelligence. At J2-Reliance, we deeply believe in COLLABORATIVE artificial intelligence, rather than FULL. What does that mean?
Machines are really good at identifying patterns in huge amounts of information. They excel at tackling complicated issues. They are focused, and able to dig very deeply into a collection of information. Human intuition, by contrast, is really good at integrating pieces of information coming simultaneously from multiple sources. We human beings are capable of lateral thinking as well as peripheral vision. We are able to see in breadth rather than in depth. We are capable of dealing with complex problems involving numerous simple factors. And moreover, we are capable of feelings, of relational intelligence, of emotional intelligence. Why not combine our respective strengths?
Computers can help us become conscious of our biases. Because computer reasoning is often counterintuitive to us, it gives us the opportunity to take a step back. And, if we understand what the computer’s logic is based on, we can use our own reasoning to complement the computer’s ‘suggestions’. To give a very simple example, computerised data analysis has proven beyond any doubt the existence of a causal relationship between tobacco and cancer. But what computer models cannot easily take into account is, for example, the psychological needs of those who become heavy smokers and the resulting positive effect of smoking on their health: one of our clients once told us that her physician had advised her to resume smoking, because it was a lesser risk than committing suicide…
Leadership is always a complex issue, if not always a complicated one. So, wouldn’t it be possible to free leaders’ and managers’ brains from all the complicated work they have to deal with, to help them focus only on the complexity of their leadership tasks? Correctly trained computers may tell them who deserves a raise, according to the usual career paths in a given company. They may even tell them who will best fit a given position. But a good leader will add some informal criteria to those used by the computer. Since the human brain is capable of thinking ‘in breadth’, taking many different points into account, while the computer is capable of thinking ‘in depth’, let leaders treat the computer’s recommendations as one input among their own data. And let them use their intuition, their specifically human relational and emotional intelligence, to go further. The more they rely on the computer for computation, the more energy they can save for focusing on genuinely human skills. And the computer might even help them become more conscious of their own biases.
As far as we are directly concerned, at J2-Reliance we aim to offer the best of both worlds, to help managers and leaders efficiently address the challenges they have to tackle:
- a set of dedicated apps, based on reliable statistical models, focusing on detecting and defusing conflicts, assessing overall staff motivation, or any other question that can be expressed in terms of quantitative information; even though these are still under development, we are deeply convinced that this is the right direction;
- and highly professional coaching, relying on our expertise in how people communicate with one another, taking into account the whole complexity of managerial communication, including our human biases and so-called imperfections, those very ‘imperfections’ that give meaning to our commitments and are thus totally inseparable from any human enterprise.
Thus, rather than witnessing the end of the human race because of full artificial intelligence, we could promote a new era of Computer-aided Leadership, for the good of our society.
Jacques Arnol-Stephan & Damien Arnol
 This paper from A. Krizhevsky and G. Hinton, a pioneer and leader in the field of deep neural networks, describes one of the first very successful image recognition systems: https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
 Watch Watson beat former winners of the game Jeopardy!: https://www.youtube.com/watch?v=Puhs2LuO3Zc
 On public opinion about self-driving cars after the Tesla autopilot crash: http://fortune.com/2016/07/29/tesla-public-opinion/
 This example is inspired by Cathy O’Neil’s book Weapons of Math Destruction, which gives many other examples of such issues.
 “Vérité en deçà des Pyrénées, erreur au-delà”: truth on this side of the Pyrenees, error beyond.