BRUSSELS – Today, global tech trade association ITI called on European policymakers to work with international partners to develop balanced proposals on AI that preserve innovation while addressing critical security and ethics questions. In its response to the public consultation on the European Commission’s White Paper on Artificial Intelligence, ITI addresses the economic and social implications of AI and the tech industry’s role in safeguarding the public and individual interests at stake. ITI’s submission also assesses key areas of the proposal’s risk-based approach to AI, existing rules and legal requirements, and compliance and enforcement.

“Technological innovations bring innumerable benefits to the European economy and society,” said Guido Lobrano, ITI’s Vice President of Policy, Europe. “ITI and our members welcome the Commission’s AI White Paper as an important opportunity to discuss Europe’s vision for advancing AI innovation around a human-centric approach and helping European companies thrive, while simultaneously addressing public concerns around technological advancement. ITI and our members want to be constructive partners in realising this goal.”

In its comments, ITI supports the Commission’s idea of a risk-based approach but recommends a clear demarcation between measures targeting ‘high-risk’ and ‘low-risk’ AI applications. The definition of ‘high-risk’ should consider context-specific factors such as the complexity of the AI system or the probability and irreversibility of harm caused in worst-case scenarios. ITI also recommends that the liability regime follow an application-specific approach. Given that AI technology is already covered by existing liability legislation in most cases, new legislation should only address clearly identified regulatory gaps.

ITI encourages the Commission to avoid burdensome requirements that may create market access barriers, underscoring the importance of voluntary, industry-led standardisation at a global level. The submission also urges the Commission to avoid localisation requirements for testing bodies and instead proposes a combination of ex-ante risk self-assessment and ex-post enforcement for high-risk AI applications. The ex-ante regulatory approach proposed by the White Paper may create difficulties in training AI systems, affect products that are already on the market, and hamper R&D and early-stage products.

ITI also highlights the necessity of Europe’s global partnerships and the importance of shared values such as trust, fairness, explainability, effectiveness, safety, and human oversight in guiding future policy action on AI.

Read the full submission here.
