On August 1, 2024, the EU Artificial Intelligence (AI) Act came into force. In two years, the AI Act will be fully applicable.
The idea of an EU regulatory framework for AI has been in development for years. Back in April 2021, the European Commission first proposed the regulation. In December 2023, an agreement on the AI Act was reached between the European Parliament and the Council of the EU.
Through the regulation, the EU aims to ensure people and organisations can count on safe, transparent, traceable, non-discriminatory, and environmentally friendly AI systems under human supervision.
The AI Act is the first-ever comprehensive legal framework on AI worldwide, and many, including EU lawmaker Brando Benifei, believe it could become the “blueprint” for other nations to follow.
Who does the AI Act apply to?
The AI Act applies to every business operating an AI system in the EU, across all sectors. This includes providers, users/deployers, importers, distributors, and manufacturers. It also means that businesses outside the EU that operate any AI system within the EU are affected.
The Act classifies AI systems into four categories based on their level of risk: unacceptable, high, limited, and minimal. Most of the Act covers high-risk systems, and most of the obligations fall on providers and deployers of these systems.
The AI Act will enable increased transparency around data
Food Ingredients Global Insights sat down with Mariette Abrahams, founder and CEO of Qina, a health nutrition technology consultancy and platform, to understand how the Act will shape the future of AI in the food and beverage industry.
Abrahams explained that most food and beverage brands currently fall under the low-risk category, as these products generally do not directly use consumer data. “However, companies that leverage AI to generate nutrition and health content such as blogs, videos, social media posts, or chatbots, will need to start labelling their content according to the new regulation,” she explained. Labelling is of particular importance, considering the high amount of misinformation spread online, Abrahams added.
As food as medicine, direct-to-consumer, and e-commerce trends grow, Abrahams expects more companies in the future to, on the one hand, leverage AI to create new products, and on the other hand hold more health and personal data.
Abrahams added that this will make the AI Act increasingly important. Alongside the General Data Protection Regulation (GDPR), the Act will enable increased transparency in terms of how data is collected, used, and shared, and how it may impact wider society in the long term.
Companies can use the AI Act to build stronger brands
Various challenges and opportunities will arise as companies adapt to a new normal under the AI Act.
Abrahams believes the biggest challenges will be related to privacy, security, data quality, and transparency regarding how AI systems are trained. She explained that AI usage for product innovation, while growing in popularity, remains limited to larger companies. Smaller companies with limited resources may face more difficulties, as AI ethics has not been high on their agenda, and the AI Act will certainly pose a challenge here.
When it comes to opportunities, Abrahams identified five that have the potential to build stronger, more trustworthy, and more sustainable brands:
“Increasing transparency – which will help to build consumer trust.
“Equity – ensuring that individuals, groups, and populations are not excluded.
“Affordability – making sure that everyone can afford products that contribute to their health and wellbeing.
“Sustainability – making sure to balance the needs of people and planet.
“Greater accountability – how companies contribute to the health of consumers.”
Steps companies can take to ensure they comply with the AI Act
Compliance with the AI Act will require companies to familiarise themselves with the various regulations and ensure they have a documentation trail outlining how they comply with the regulation.
Abrahams stressed the importance of creating an AI code of ethics, which can be clearly and transparently communicated to all consumers, partners, and customers.
Establishing an internal team, or designating people to be responsible for AI, is also important, Abrahams added. Having dedicated people to lead and disseminate activities, news, and developments throughout the company will ensure that everyone is informed.
The AI Act will impact businesses in Europe as well as those selling into Europe. While Abrahams acknowledged that it is still early days, she urged food and beverage companies to seek expert advice early to reduce potential risks.
What are companies currently doing to ensure they comply?
At the end of July, Unilever outlined the steps it is taking to ensure compliance with the AI Act.
The company has implemented an AI assurance process that, before deployment, reviews AI projects to identify and mitigate risks. This process involves a cross-functional team of subject matter experts, including those from Holistic AI, a company that assists enterprises in adopting and scaling AI, using its AI Governance platform. The AI assurance process is designed to be both adaptable and scalable, allowing it to keep pace with evolving regulations and advancements in technology. As of July 2024, the process is fully integrated across the organisation, covering over 500 of its AI systems globally.
To guide the development and deployment of AI technologies, Unilever has established Responsible AI Principles, which it says align with its Code of Business Principles, legal standards, and UN regulations.
Educational initiatives, including practical guidance, training sessions, and educational programs, are provided to employees to help them understand and use AI responsibly.
The company also claims to proactively monitor legal developments to ensure its AI practices comply with regulations, including the US AI Executive Order.