Artificial intelligence (AI) is the latest technology to leap from sci-fi imagination to transformative reality, with the potential to enhance our lives in unprecedented ways. As with the internet, the only way we will harness the capabilities and tap the possibilities of AI is by crafting smart policies that allow these innovations to flourish while guarding against unwanted impacts.
ITI understands that the tech industry has a responsibility to engage and educate lawmakers and the public around the globe, which is why we launched ITI Decodes, a series that quickly breaks down and explains complex issues such as encryption, the internet of things, global data flows, global value chains, and artificial intelligence (AI).
If you don’t know the definition of AI, you’re not alone. Although the field has existed for many years, the technology is rapidly evolving. AI is not sci-fi theory; it is being used by consumers and businesses around the world today. For example, AI is a key tool in helping doctors find the right cancer treatments, cutting energy use, and teaching self-driving cars to communicate with each other. And this is just the beginning.
However, we also understand that technology is neutral and that AI is not infallible. The tech sector knows there will be disruption and growing pains, which is why five of our member companies created a non-profit, the Partnership on Artificial Intelligence to Benefit People and Society, to bring stakeholders together to mitigate these issues, and why a much broader cross-section of companies is working to address potential challenges head-on.
To make sure we get it right, we need to have frank and honest discussions about what the technology is and how we can democratize access to it. As a number of the panelists at our ITI Decodes AI event noted, “you don’t have to be an expert in machine learning to have a voice in this conversation; you should be engaged.”
This week we did just that with our ITI Decodes Artificial Intelligence event, which convened a panel of industry experts and leaders that included Hilary Cain of Toyota, Dr. Murray Campbell of IBM, Sarah Holland of Google, and Frank Torres of Microsoft. This diverse panel offered insightful comments and reflected on questions related to the development of AI, some of which, like questions of accountability and ethics, are entirely new to this field.
A theme each member of the panel touched on was trust, and that should not come as a surprise: without consumer confidence, we will not be able to fully realize the extraordinary benefits these technologies offer. That commitment is building confidence, reflected in two-thirds or more of consumers saying “they would trust AI with handling medication reminders, travel directions, entertainment, targeted news, and manual labor and mechanics.”
But we know we need to maintain and build on this trust, because whether it is a self-driving car, a tool to help diagnose cancer, or an app that translates between languages, consumers deserve confidence that AI will work. With any technology, especially emerging ones, accidents and mistakes can and will happen. That is why I echo our panelists’ sentiment, urging lawmakers to ask the tough questions but to focus on real risks of real harm, not solely the “what-ifs,” because the latter could cut this technology off at the knees. We must consider what societal benefits we would give up if we were to unnecessarily restrict these technologies.
As we move forward, the future is ours to shape. The benefits of AI are virtually limitless if we craft policies that enable us to achieve them. We look forward to working with lawmakers, industry partners, and the general public to make sure we get this right.