I had the opportunity to talk with Neil Sahota for an episode of the Future Squared podcast.
Sahota, a professor at UC Irvine, speaks from a position of authority. He is a founding member of the UN’s Artificial Intelligence for Social Good Committee, an IBM Master Inventor, and author of Own the AI Revolution: Unlock Your Artificial Intelligence Strategy to Disrupt Your Competition.
What started as an inquiry on the state of AI today quickly snowballed into a conversation on robot ‘rights’, cyborgs, how AI and automation will affect the job market, and how companies can use AI to gain leverage and competitive advantage.
This episode is jam-packed with ideas, from the philosophical to the practical. You can listen to the entire episode below, as well as dive into the key lessons learned.
Sahota suggests the definition of artificial intelligence should satisfy three conditions:
Given these criteria, it is difficult to say how far away true AI (also known as artificial general intelligence, or AGI) really is.
Most efforts to date have focused on ‘narrow AI’ applications that produce more immediate value, such as Google Home. This also extends to IBM Watson which, based on the above-mentioned definition, is not true AI.
But given the exponential pace at which technology is progressing, it's 'possible' that we will see general AI in the next 10 to 15 years, according to Sahota. Estimates vary wildly, though, with some suggesting we won't see it this century.
Nonetheless, what is clear is that AI is not just speculative — it is coming, and we ignore it at our own peril.
We discussed the ‘paperclip maximizer’ thought experiment — whereby AI tasked with maximizing the number of paperclips created destroys all of humanity for consuming resources that could otherwise be used to create paperclips.
This is a form of perverse instantiation and is just one of the many sci-fi-induced anxieties about AI and the destruction of the human species.
Google’s hate-speech detection AI eventually demonstrated racist behaviour, flagging tweets by African Americans at a much higher rate than tweets by other users. This happens because training data can carry the implicit biases of the people who produce it.
Thus, to keep a lid on human bias, Sahota suggests setting ‘ground truths’ and constraints when implementing AI tech. This requires the minimization of ambiguity and massaging the data to reflect specific goals.
While this doesn’t remove the bias problem completely, it makes AI more predictable and useful.
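A minimal way to surface the kind of bias described above is to audit a model's flag rate across demographic groups. The sketch below is illustrative only: the group labels and audit data are made up, and `flagged` stands in for whatever boolean output a real classifier produces.

```python
from collections import defaultdict

def flag_rate_by_group(examples):
    """Compute the rate at which a classifier flags content, per group.

    `examples` is a list of (group, flagged) pairs, where `flagged` is the
    classifier's boolean output. A large gap between groups is a simple
    warning sign of disparate treatment.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged_count, total]
    for group, flagged in examples:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

# Hypothetical audit data (invented for illustration):
audit = [("group_a", True), ("group_a", True), ("group_a", False),
         ("group_b", False), ("group_b", True), ("group_b", False)]
rates = flag_rate_by_group(audit)
```

Here `group_a` is flagged twice as often as `group_b`, which in a real audit would prompt a closer look at the training data feeding the model.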
The most obvious and immediate benefit of using AI in business lies in automation.
Today, companies as disparate as law firms and mining companies are beginning to use AI to perform tedious tasks such as verification and paperwork.
Naturally, this has led to the fear that AI will make jobs obsolete.
However, the goal of AI should not be to replace people, but to free up our time to work on higher value and more complex tasks — or just live life!
What does this mean for the nature of work, then?
In the not-so-distant past, there were no social media marketers, Uber drivers, or influencers.
Thus, while new technologies make some jobs obsolete, they also pave the way for new and, hopefully, more fulfilling ones. It wasn’t that long ago that most of us could be found working a plow, standing on an assembly line, or on our knees in a coal mine. In fact, at the start of the 20th century, over 40% of Americans were farmers — today, that number is less than 2%.
Sahota suggests that there will be more need for philosophy and the arts, and not just engineers and scientists.
After all, we can’t teach AI how to connect with people at a deeper level — at least not in the foreseeable future. Thus, the need for the engagement of the human spirit will remain.
Sahota mentioned a curious campaign called Love in AI where scientists are trying to figure out how to program artificial empathy. These kinds of projects challenge us to understand what it means to be truly human.
For example, an AI named Sophia was granted Saudi Arabian citizenship and a passport.
While this might have been just a publicity stunt, it does raise the question of rights for robots, particularly as AI becomes more intelligent. At what point, if ever, do we deem that AI is conscious? And if it is, would it not then feel, and should it not have rights?
Sahota echoes these sentiments, suggesting that AI might gain consciousness, but that determining what kind of consciousness it will be is a different matter.
All across the value chain, you’ll find possibilities for AI augmentation and automation.
But before integrating AI into your business, Sahota suggests considering a few things:
● Start with a problem — Determine the most pertinent issue in your value chain, and plan how AI might address it.
● Get the data — To make sure your AI will have a significant positive effect on your business, you need data, and lots of it. Do you already have data on hand? If not, do you have the capacity to generate it? Can you buy relevant data? The data also needs to be cleaned.
● You need AI domain experts — Another problem with AI implementation in businesses is the lack of domain experts. The AI program, after all, is only one part of the solution.
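The "clean the data" step above can be sketched concretely. The function and field names below are hypothetical, and a real pipeline would also handle type coercion, outliers, and normalization; this minimal Python pass just drops duplicates and incomplete records.

```python
def clean_records(records, required_fields):
    """Basic cleaning pass: drop duplicate records and records that are
    missing (or have empty) required fields."""
    seen = set()
    cleaned = []
    for rec in records:
        if any(rec.get(f) in (None, "") for f in required_fields):
            continue  # skip incomplete record
        key = tuple(sorted(rec.items()))
        if key in seen:
            continue  # skip exact duplicate
        seen.add(key)
        cleaned.append(rec)
    return cleaned

# Invented example data:
raw = [
    {"id": 1, "text": "invoice ok"},
    {"id": 1, "text": "invoice ok"},   # duplicate
    {"id": 2, "text": ""},             # missing text
    {"id": 3, "text": "contract signed"},
]
cleaned = clean_records(raw, required_fields=["id", "text"])
```

Of the four raw records, only the two complete, unique ones survive — which is the point: feeding the duplicates and gaps to a model instead would quietly distort whatever it learns.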
Sahota noted that firms with smaller infrastructures will find it easier to adopt AI across the entire value chain.
As in the animal kingdom, adapt or perish.
Steve Glaveski is on a mission to unlock your potential to do your best work and live your best life. He is the founder of innovation accelerator, Collective Campus, author of several books, including Employee to Entrepreneur and Time Rich, and productivity contributor for Harvard Business Review. He’s a chronic autodidact and is into everything from 80s metal and high-intensity workouts to attempting to surf and hold a warrior three pose.