AI, the “Invisible Hand” in Our Decision-Making Process


A few months ago, I heard SoftBank CEO Masayoshi Son state on CNBC that people should brace themselves for the proliferation of artificial intelligence, as it will change the way we live within three decades. And he is absolutely right, but do we really measure the impact it will have on humanity? Stephen Hawking, Tim Berners-Lee and even Elon Musk have explicitly voiced concerns. As early as the 1950s, the AI pioneer Alan Turing, working on his early chess program, anticipated that machines would “take control.”

Financial institutions, like many industries, are experiencing a rapid transfer of human output to robot output. The sector is digitizing because we are seeking low friction and immediacy. Anything that can be automated will be automated. Amid the hype and unavoidable buzz, some voices claim that humans will inevitably be replaced by our AI-enabled robot overlords in a “Skynet takeover”-like scenario [1].

I believe we are reaching a tipping point where data analytics and machine learning embedded in applications will replace reports, dashboards and other people-oriented output as the primary consumers of data. Software will be empowered to act on data for us, whether machine-to-machine or machine-to-consumer, rather than simply surfacing the data for people to examine and use to make decisions. [2]

Coming on little cat feet, AI has grown ubiquitous in financial services. Computers are already proficient at picking stocks, managing assets, identifying customer churn, providing clients with insight into their income and expenditure, assessing credit and reading documents through OCR, and their reach has begun to extend beyond computation and taxonomy. This will have deep implications not only for technology but also for the fundamental nature of how people make decisions.
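To make one of these applications concrete, here is a minimal churn-prediction sketch of the kind a bank might run. Everything in it is an illustrative assumption: the features, the synthetic data and the choice of scikit-learn’s random forest are mine, not a description of any real institution’s system.

```python
# Minimal churn-prediction sketch (synthetic data; illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000
# Hypothetical client features: tenure (years), products held,
# monthly transactions, complaints filed in the past year.
X = np.column_stack([
    rng.uniform(0, 20, n),    # tenure
    rng.integers(1, 6, n),    # products held
    rng.poisson(30, n),       # monthly transactions
    rng.poisson(0.5, n),      # complaints
])
# Synthetic ground truth: short tenure and complaints raise churn odds.
logit = -2.0 - 0.15 * X[:, 0] + 1.2 * X[:, 3]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```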

But do computational systems tell the truth?

AI is presented as an authority entitled to diagnose reality more reliably than we would do ourselves, and to reveal dimensions shadowed from our consciousness. A large part of algorithm science borrows an anthropological path, using human skills to assess a situation and draw a conclusion.

AI is good at solving specific tasks, but it does not have a sense or awareness of self. Consciousness is not going to emerge out of a system that is narrow in its predictions. Humans have the ability to construct counterfactuals, to imagine any kind of ‘what if’ scenario; we are able to think outside the box. AI is able to generalize by absorbing lots of data and, through ‘transfer learning’, could create things never seen before. Deep learning is merely pattern matching, a correlation engine built on neural networks. “But what about real intelligence?”, as my wise friend Nicolas Lebard, who inspired me to write this article, puts it.
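For readers unfamiliar with ‘transfer learning’, here is a minimal sketch of the idea, assuming PyTorch and a pretrained ResNet-18 (my illustrative choices): freeze the general features learned on one task and retrain only the final layer for a new one.

```python
# Transfer-learning sketch: reuse a pretrained ResNet-18 and retrain
# only its final layer for a new 2-class task (illustrative choices;
# downloads pretrained weights on first run).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                 # freeze learned features
model.fc = nn.Linear(model.fc.in_features, 2)   # new task head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch, just to show the loop shape.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```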

One may argue that AI is more a technical principle than an innovation. It is an automated analysis of diverse inputs, a real-time equation, generally used to trigger an action, executed either by humans or autonomously by the systems themselves. Are human intelligence and machine intelligence the same? Is the human brain essentially a computer? Unlike the Dartmouth proposal, which asserted that “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it”, I am not convinced a machine will ever have a mind, mental states and consciousness in the same way that a human being can. To be a bit provocative and Cartesian, like Larry Tesler, the primary inventor of copy and paste: human intelligence “is whatever machines haven’t done yet.” The scientific approach to this debate depends on the definition of “intelligence” and “consciousness”, and on exactly which “machines” we are referring to.

For more than a century, IT enabled data storage and management. With the digital era there is a clear change: less informing, more orienting of human action. Digital technology dictates the tempo of our lives. We see emerging “an automated invisible hand”, a world sorted through a feedback regime, a data-driven society. The so-called ‘AI Spring’ is now in full bloom, ignited and fuelled by the migration of people’s data into the digital universe. These aletheia mechanisms (after the Greek word for truth), relentlessly growing more complex, are steering to enforce their own law, influencing human affairs at different levels: incentivizing, prescriptive, coercive. [3]

A number of AI scientists make assertions about veracious intuition and values, and they usually seem to assume that there is a ‘truth’ or ‘right answer’ to every debate. That is not the case when we engage in ethical considerations. Social scientists can help mitigate bias, enhance fairness and bolster accountability, with the intent of strengthening ethics. A credible and effective governance structure should be set up as a framework. Failing to do so allows AI developers to push innovation inside a black box, with potentially severe consequences for humanity.
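As one hedged illustration of what ‘enhancing fairness’ can mean in practice, the sketch below computes demographic parity, one common fairness metric among several; the group attribute and approval decisions are entirely synthetic assumptions.

```python
# Demographic-parity sketch: compare approval rates across a
# protected group attribute (synthetic data; one metric of many).
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)  # hypothetical protected attribute
approved = rng.uniform(size=1000) < np.where(group == 0, 0.55, 0.40)

rate_0 = approved[group == 0].mean()
rate_1 = approved[group == 1].mean()
print(f"approval rate, group 0: {rate_0:.2f}")
print(f"approval rate, group 1: {rate_1:.2f}")
print(f"demographic parity gap: {abs(rate_0 - rate_1):.2f}")
```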

It is important to identify those biases at the outset, because if we do not get the foundation right, one can extrapolate and see the catastrophic mess it could beget. There is so much to cheer about AI technology that a balanced way of looking at it is to celebrate what is so special about it while being mindful of the shortcomings. The two elephants in the room are privacy and explainability.

Privacy should be a bone-chilling concern. If we are to enjoy AI’s benefits, we have to overcome the stumbling blocks linked to it. Promising techniques such as deep learning only work when you can use all of the data and make no presumptions about what is relevant, because the algorithm can actually surface the relevant variables and covariances that matter. We need to define some sort of backstop to preserve individual sanctity. I have not yet made up my mind whether privacy rules such as the GDPR in Europe or the PDPA in Singapore effectively impede the use of data for beneficial AI purposes.
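One candidate backstop, offered purely as an illustration rather than anything the GDPR or PDPA mandates, is differential privacy: adding calibrated noise so that aggregate statistics stay useful while any single individual’s record is masked. A minimal sketch of the Laplace mechanism on a count query:

```python
# Differential-privacy sketch: Laplace mechanism on a count query.
# Epsilon and the query are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
balances = rng.uniform(0, 100_000, 10_000)  # synthetic account balances

def private_count(condition: np.ndarray, epsilon: float) -> float:
    """Return a noisy count; the sensitivity of a count query is 1."""
    true_count = condition.sum()
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# How many clients hold more than 50k? Released with epsilon = 0.1.
print(private_count(balances > 50_000, epsilon=0.1))
```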

Regarding explainability, how do we understand a decision made by deep learning? Those systems lack transparency. They can quickly lose humans in the complexity inherent to the algorithm, and many of them carry an imprint of the unconscious biases of the scientists who helped develop them. So, do we accept lower-performance or less-than-full AI so that we can have explainability, or do we take high-performance AI with limited explainability? My bet is that we would predominantly choose high performance, but I do not see that choice as sustainable in the long run, since there is a genuine risk of losing control.

It reminds me of the years when we were outsourcing IT and processes at full blast, and unintentionally the knowledge with them; the day you need to amend a process for effectiveness, policy or regulation, you are in trouble. Likewise, when introducing AI, you run the risk of not being able to explain outcomes (you would rely solely on interpretability) and you will struggle to amend processes accordingly. If a financial institution starts to leverage AI to detect sophisticated financial cyber-crime, for instance by flagging anomalies in real time and reducing false positives, it needs to be able to explain how the filtering is done and how hits are managed. You need causal inference; you cannot rely on correlations alone.
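To make the trade-off tangible, here is a sketch contrasting an opaque anomaly detector with a one-sentence rule, using scikit-learn’s IsolationForest; the transaction features and thresholds are synthetic assumptions, not a real screening system.

```python
# Explainability sketch: an opaque anomaly detector versus a rule you
# can state in one sentence (synthetic transactions; illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Hypothetical features: amount (USD), transactions in the past hour.
X = np.column_stack([rng.lognormal(6, 1, 2000), rng.poisson(2, 2000)])

# Opaque model: flags outliers, but offers no reason for each hit.
clf = IsolationForest(random_state=0).fit(X)
black_box_hits = clf.predict(X) == -1

# Transparent rule: explainable, but likely weaker.
rule_hits = (X[:, 0] > 50_000) | (X[:, 1] > 10)

print("black-box hits:", black_box_hits.sum())
print("rule-based hits:", rule_hits.sum())
print("overlap:", (black_box_hits & rule_hits).sum())
```

The black-box model may catch subtler patterns, but only the rule can be stated to a regulator in plain language, which is precisely the tension described above.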

AI can only bring value if we make sure it serves a wider purpose for our clients (i.e. providing a seamless, affordable and continuous experience and service that empowers them) and for our own people (i.e. supporting the development of human intelligence and vision). Though there are a number of things that robots will do better than humans thanks to AI, we should stop opposing humans and artificial intelligence. With ethics and social responsibility, the relationship between humans and robots should be seen as synergistic.


References

[1] The Terminator, directed by James Cameron, 1984.

[2] Karthik Ramasamy, “2019 Data Predictions: Demise Of Big Data And Rise Of Intelligent Apps”, Forbes, February 22, 2019.

[3] Éric Sadin, L’intelligence artificielle ou l’enjeu du siècle : Anatomie d’un antihumanisme radical, 2018.

[4] “Superior Intelligence”, The New Yorker, May 14, 2018.