As Fama looks back at the year that was 2020, there is little doubt that COVID-19 created uncertainty on a global scale unlike anything we’ve witnessed before. For markets and businesses, however, it simultaneously validated the use and adoption of artificial intelligence technology across most, if not all, industries. And regardless of the positive outcomes that will result from the rollout of COVID vaccines in 2021, it’s hard to imagine AI technology playing a diminished role in a post-pandemic world. For Fama, this means wrestling with the difficult questions surrounding AI now. Part of that commitment is leaning on thought leaders who, through their own experiences, are well-positioned to help Fama better understand the history of artificial intelligence technology and where it might be headed. One such person is Neil Sahota, an IBM Master Inventor, AI Advisor to the United Nations, and author of the book Own the AI Revolution: Unlock Your Artificial Intelligence Strategy to Disrupt Your Competition.

Fama: How do you see yourself in the world of AI? What’s your role in the conversation?

Neil: I’ve gone from instigator to sherpa. The instigator part was being on the original Watson team and helping kick off the current wave we’re in. I think there’s a lot of potential we could tap into; it’s just that people don’t know how to do that. We don’t teach people how to think critically in schools, so it’s tough. By sherpa, I mean that I see my role as helping people and organizations figure out how they can actually tap into this technology. Yes, to make money, but hopefully also to do social good: we have social enterprise, social entrepreneurship.

I think everyone needs that helping hand right now to figure out what it is they should be doing and, once they figure that out, how to get started.

Fama: Could you speak about some of the work that you’re involved with at the U.N.?

Neil: I helped the U.N. start the AI for Good initiative, which is basically about using AI and other emerging technologies to help fulfill the Sustainable Development Goals (SDGs), seventeen goals that member nations have agreed to. These include things like zero hunger, ending poverty, better gender and diversity inclusion, and access to healthcare, education and justice. These are real goals the U.N. wants to make happen by the year 2030, and there’s a funding shortfall of about seven to twenty trillion dollars every year. We’ve seen that technology can actually help bridge some of that gap, and we’ve rolled out quite a few projects that have already created benefits for people on the planet. Today there are about 116 active projects going on.

Fama: Anything that you or others are working on to level the opportunity playing field for developing countries, either from a data set or AI standpoint?

Neil: Developed countries usually have the better infrastructure: the high-speed WiFi, the 5G networks, the computing power. But the interesting thing is that developing countries, I think, actually have the better opportunity. They’re not married to an old mindset or an old infrastructure, so they have the ability to leapfrog ahead when it comes to the AI world. Some of the most innovative, disruptive and successful companies and startups come from these emerging countries.

I liken it to the development of mobile technology. The real leaders that pioneered mobile payments, for example, were actually Africa and China, and they’re still way ahead of the U.S. and Europe because they never grew up with a landline culture. They embraced the latest technology because they came in with a tabula rasa, a clean slate, in how they looked at things. It’s very much the same with AI. The companies doing the most amazing work are the ones that aren’t married to the old mindset. They’re not trying to make some incremental improvement; they’re thinking about a radically different way of doing the work.

The other big thing is that local problems have global solutions. These are people who feel the pain points of climate change, or of not being able to grow enough food, or of having difficulty training people so they can get good jobs. When some of these people solve those problems, their solutions scale out to a global level. So I actually think that while developing countries are hindered by a lack of infrastructure, they probably come up with the most innovative ideas.

Fama: How do you think a technology company like Fama can ensure that its AI and algorithms are human-centered?

Neil: I think there are two things, and you’re actually doing both of them. One is [hiring] diverse and inclusive teams, so that as you develop your AI and machine learning technology you bring in different perspectives and can see how people look at something in a different way. The second is that it isn’t just about the machine trying to automate everything. It should be human and machine working in a complementary way: the AI should complement some of the human work, because the fuzziness that goes on in a background search, along with judging the intent behind some social media, requires human judgement.
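One way to make “human and machine working in a complementary way” concrete is a confidence-band router: the model auto-resolves only the clear-cut cases and queues the fuzzy middle for a human reviewer. A minimal sketch; the thresholds and the scoring stub below are illustrative assumptions, not Fama’s actual pipeline.

```python
# Toy human-in-the-loop router: the model resolves only the cases it
# is confident about; the ambiguous middle band goes to a person.
# The scoring stub and thresholds are illustrative assumptions.

def model_score(post: str) -> float:
    """Stand-in for a real classifier's probability that a post is a problem."""
    flagged_words = {"hate", "threat"}
    hits = sum(word in post.lower() for word in flagged_words)
    return min(1.0, 0.2 + 0.4 * hits)

def route(post: str, clear_low: float = 0.2, clear_high: float = 0.8) -> str:
    score = model_score(post)
    if score <= clear_low:
        return "auto-clear"    # machine is confident the post is benign
    if score >= clear_high:
        return "auto-flag"     # machine is confident the post is a problem
    return "human-review"      # the fuzzy middle requires human judgement

for post in ["Happy birthday!", "I hate Mondays", "hate and threat example"]:
    print(post, "->", route(post))
```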

Fama: How has AI evolved in the last ten years?

Neil: This coming February will mark the 10th anniversary of the Jeopardy! challenge with IBM Watson. I remember that back in those days, even before the Jeopardy! challenge, there was a lot of chaos, in that everyone was trying to figure out whether these things were even possible. Could a machine think? Could it connect those hidden dots? Could it understand natural language? If I say I’m feeling blue, will it understand what I’m talking about? We actually took great steps back then to make sure that AI was passive: it couldn’t think for itself, could only respond to queries, could only do what you taught it to do. We went out of our way to de-anthropomorphize things, meaning we made the software look like a computer and sound like a machine, not a human. Even the first Watson-powered robots, the first AI-powered robots from Aldebaran Robotics at SoftBank, had herky-jerky motions and talked like a computer, with no emotion.

The surprising thing was that around 2013 and 2014, people started complaining, asking why we didn’t make them more humanlike. I think one of the biggest changes in the last ten years is that we’ve now embraced this idea of making the technology more human, so it’s more comfortable for people to use.

There’s less concern now about machines trying to trick humans. The other big change is that in 2011 and 2012 people couldn’t even wrap their arms around what this was, let alone think about how to use it. Today, I think, the awareness is there, the understanding is there, and everyone is plagued with the question: what do I do with it? It’s a totally different computing model from what they’re used to, where you give a computer a software program and it just runs it. Now it’s, here’s a computer that can think; I can ask it questions I don’t know the answer to, and it’ll try to figure it out. People don’t know how to really tap into that effectively.

Fama: So it sounds like the U.N. is using AI to help accelerate the development of initiatives and not as a standalone application?

Neil: It is. Think about healthcare. In Africa, if I remember the numbers correctly, there’s one doctor for every 2,000 people, and I think the average person lives about seventy kilometers from a doctor, clinic or hospital. So it can be tough to get quality care. Working with the governments there, and some NGOs, through AI for Good we created these self-contained tablets (keeping in mind that mobile technology is very prevalent in Africa). The tablets have AI on them and don’t need an Internet connection. The AI has been taught some basic things about healthcare and medicine, so if someone in one of the villages falls ill or gets seriously injured, a villager can take the tablet and the AI will prompt questions. Or they can use the camera and scan the person, that sort of thing, to help the villager diagnose and, if possible, treat the patient. And if it’s very serious, the tablet can call for a helicopter and get the person to a medical facility. You’re basically saying that, through this AI tablet, I can turn any villager into a bit of a physician’s assistant.
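Structurally, the tablet he describes is an offline decision-support loop: prompt questions, score the answers, escalate when warranted. A toy sketch of that shape; the symptoms, weights and escalation thresholds are invented for illustration and are not the AI for Good tablet’s actual logic.

```python
# Toy offline triage loop: ask yes/no questions, accumulate a severity
# score, and decide whether to escalate. All questions, weights and
# thresholds here are invented for illustration.

QUESTIONS = {
    "Is the person unconscious?": 5,
    "Is there heavy bleeding?": 4,
    "Do they have a high fever?": 2,
    "Can they walk unassisted?": -1,  # a "yes" lowers urgency
}

def triage() -> str:
    severity = 0
    for question, weight in QUESTIONS.items():
        answer = input(question + " (y/n) ").strip().lower()
        if answer == "y":
            severity += weight
    if severity >= 5:
        return "EMERGENCY: call for evacuation"
    if severity >= 2:
        return "Treat locally and monitor closely"
    return "Low urgency: give basic care instructions"

if __name__ == "__main__":
    print(triage())
```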

Fama: Is it your belief that machines are much better at preventing or avoiding bias?

Neil: Only if we teach them. The big thing is that this is a people problem. The bias the machine learns comes from us, and that’s why we need diverse and inclusive teams. We need people with different perspectives, so that as we think about the data, the training, and how we educate the machine, these things come to the forefront. The big thing that I’ve been pushing with the U.N. is that we ourselves are not good at looking for bias. We could teach AI to look for bias; we could have one AI looking at another AI to point out flaws in the training. And some might ask, isn’t the AI that’s looking for bias biased itself? It very well might be. We’re never going to achieve perfection, and we should never expect perfection. But we know that machines can do some things more accurately and more fairly than we can.
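The simplest version of “one AI looking at another AI” is an automated audit over the first model’s decisions. A minimal sketch computing a demographic-parity gap; the hiring-style decisions, group labels and 0.1 tolerance are all invented for illustration.

```python
# Minimal fairness audit: compare a model's positive-outcome rate
# across groups. The predictions, groups and tolerance below are
# illustrative assumptions, not a production audit.
from collections import defaultdict

def selection_rates(predictions, groups):
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(predictions, groups):
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Example: another model's "advance to interview" decisions (1 = yes).
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.1:  # the tolerance is an assumption
    print("audit flag: decisions differ sharply across groups")
```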

Fama: Who do you believe is ultimately responsible for this? Is it a data challenge, or does responsibility go to the engineers? Or…?

Neil: We all are, to be honest. This is way more than a data set issue. We all have implicit bias built in, and that’s the real challenge. We have these stereotypes, things we don’t consciously perceive. The AI will pick up on them, and they’ll influence the way it makes decisions. That’s the problem. It’s not that people are necessarily trying to be willfully malicious (although in some cases they are, as with the Tay chatbot). The bigger issues are really about implicit bias. Look at something like Google’s hate speech detector. Very altruistic; they wanted to do this for good. And it turned out that their hate speech detector was racially biased. Why did that happen? Nobody taught it to be racist, but there were subconscious stereotypes at work, and because there wasn’t diversity on the team teaching the AI system, they missed things. So the machine perceived some things as racist that may not have been, and didn’t recognize some things that actually were.

Fama: I picked the last ten years because, for a lot of consumers, the Watson Jeopardy! challenge was the event that brought AI (specifically natural language processing) “to the masses”. Before that event, was the chaos that existed caused by computational or data shortcomings?

Neil: It was twofold. One, there was the data question, and it wasn’t a question of how we were going to make the data work; the question was whether we actually had enough data to teach the machine. AI needs lots and lots of [big] data, as we say. If you want to teach it something, you have to have enough examples, and the more variation there is (the more likely scenarios), the more data you actually need for the AI to learn effectively. That was ten years ago. Today we actually know that we don’t need big data. We need “medium” data, because as people we’re not good at understanding which data actually has relevancy. A simple example: if you want to teach AI a different language, it has to hear about a hundred million words. It seems to master the language at that point and becomes proficient in it. But a human child only needs to hear about fifteen million words. So it’s not so much about the volume; it’s that certain words and phrases are better teachers. The problem, however, is that we don’t actually know what those words and phrases are. As much as we dive into linguistics and invented languages, we don’t know what some of these actual triggers are. We don’t know enough about the human psyche to see what they are.

Which is actually a good lead-in to the second problem: the people problem. That’s probably the biggest problem out there. I’m not just talking about people who are afraid of the technology, or worried about the future, or thinking it’s Terminator time. You have people who can’t figure out what to do with it. They know they should do something; they just don’t know what it is. Or they’re working on things without actually knowing what they’re building. This is the interpretability problem: you have engineers building systems where the AI does something, and they, the engineers, don’t actually understand how the AI did it.
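His point that certain words and phrases are “better teachers” maps onto what machine-learning practitioners call active learning: rather than feeding the model everything, surface the examples it is least sure about. He doesn’t name a method, so the framing is mine; below is a toy uncertainty-sampling sketch with a stand-in confidence function.

```python
# Toy uncertainty sampling: from an unlabeled pool, pick the examples
# the current model is least confident about, on the idea that those
# are the "better teachers". The confidence function is a stand-in
# for a real model's predicted probability.

def model_confidence(example: str) -> float:
    """Stand-in for P(label | example) from a partially trained model."""
    return min(0.99, len(example) / 20)  # fake score, illustration only

def most_informative(pool, k=2):
    # Uncertainty is distance from a coin flip (probability 0.5).
    return sorted(pool, key=lambda ex: abs(model_confidence(ex) - 0.5))[:k]

pool = ["short phrase", "a much longer training sentence",
        "medium length text", "tiny"]
print("label these next:", most_informative(pool))
```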

Fama: We’ve spoken about healthcare and how that industry is benefiting from the application of AI. What other industries do you believe stand to benefit the most but haven’t adopted it as quickly as others?

Neil: Well, marketing for sure! Marketers were the first, early adopters of AI, and they’re using it to do precision marketing, meaning individual targeting. They’re using psychographic profiling, linguistics, the whole ball of wax. They know you outside and they know you inside. Another area, ironically, is sports. It’s not something people think about, but teams have gone heavily into not just analytics like sabermetrics but really using data to figure out things like team chemistry, which free agent might be a fit, where position players should play defense, and what spot on the floor you should pass to and who you should pass to. So they’ve done a really good job tapping into AI.
On the flip side, I think there are three areas with a lot of opportunity because there hasn’t been much adoption yet: legal services, talent management, and accounting. Legal services have always been slow to make changes because they make tons of money, and why fix something that’s not broken? So they’re always slow to adopt. Accounting has a similar challenge.

Talent management is a little different. People have been experimenting with it for a few years, but there’s this perception that machines can’t do some things, or can’t do them better than a human can. That’s partially true; there are things people do much better. But for some of the soft things, like telling whether someone is a good cultural or team fit, or assessing a person’s emotional state, we’ve seen that machines are better than humans.

Fama: Anything else, Neil, that you would like to say to our Fama audience?

Neil: I know there’s a lot of fear and concern, some of it very much justified, and I know people worry about the future. But at the end of the day, [AI] is an opportunity, and we can’t lose sight of that. AI, like all technology, is a tool, and it’s all about how we as people choose to wield it. You can use it to create or you can use it to destroy. If you want to use it to create for good, that means building a mindset that encourages people to look for opportunities and to work together. But we have to do that proactively. We can’t step to the side and say someone else will do it; that won’t work. We each have a role to play, so if you’re really worried about these things, each one of us needs to step up and actually make it happen.