Europe's Data Privacy Rules Could Give Artificial Intelligence a Global Headache

Companies experimenting with artificial intelligence technologies may find the next few years challenging as they expand their operations in Europe. That’s because by May 2018, the European Union’s tough new General Data Protection Regulation (GDPR) will come into effect, and it could pose problems for companies whose businesses rely on gathering and processing user data.

At a panel on data privacy at the annual RSA cybersecurity conference in San Francisco, Cisco chief privacy officer Michelle Dennedy explained that companies—from sports brands to pharmaceutical corporations—are gathering more data than ever from the influx of Internet-connected devices now wired into their IT infrastructure. The problem is that the upcoming regulation is especially tough on what’s known as profiling, which is essentially the ability for companies to use automation to infer certain characteristics of their individual users.


For instance, when a company uses data analytics and related automation technologies to predict whether someone is likely to be a good worker or be more prone to a specific illness, that business is engaging in profiling.
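That kind of profiling can be sketched as a simple automated scoring function. The attribute names and weights below are purely illustrative, not drawn from the article or any real system; the point is that personal data points feed an automated inference about an individual, which is exactly what the regulation scrutinizes.

```python
# Hypothetical profiling sketch: combine personal attributes into an
# automated illness-risk score. All names and weights are invented for
# illustration only.

def illness_risk_score(profile: dict) -> float:
    """Return a risk score in [0, 1] inferred from personal data."""
    weights = {"age": 0.02, "smoker": 0.30, "sedentary": 0.15}
    score = sum(weights[k] * profile.get(k, 0) for k in weights)
    return min(score, 1.0)

user = {"age": 45, "smoker": 1, "sedentary": 1}
print(illness_risk_score(user))  # an automated inference about one person
```

Even a toy function like this would count as profiling under the regulation, because it uses automation to derive a characteristic of a specific individual from their personal data.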

Because the EU regulation leans heavily on protecting individuals’ personal data, companies operating in the European Union have to be extremely careful about how they handle and process customer data. If the EU determines that a company’s use of data analytics and automation technologies ends up discriminating against certain groups of people, or if a business is unable to fix problems with its technologies, it faces stiff fines of up to 4% of its overall annual revenue.
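The scale of those fines is easy to work out. The article cites a cap of 4% of overall revenue; under the GDPR the maximum administrative fine is actually the greater of EUR 20 million or 4% of global annual turnover. A quick sketch of that arithmetic (the revenue figures are made up for illustration):

```python
# Maximum GDPR administrative fine: the greater of EUR 20 million
# or 4% of global annual turnover.

def max_gdpr_fine(annual_revenue_eur: float) -> float:
    return max(20_000_000.0, 0.04 * annual_revenue_eur)

# For a hypothetical company with EUR 2 billion in annual revenue:
print(max_gdpr_fine(2_000_000_000))  # 4% of 2 billion = 80,000,000.0
```

For smaller companies the flat EUR 20 million floor dominates, which is why the regulation is a concern even for businesses well below multinational scale.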

Companies are no longer just stockpiling data in large repositories called “data lakes,” Dennedy said. They are now using AI technologies like machine learning to build powerful software services that can automatically make decisions on their own, based on the data they have processed.

If these AI-powered services start taking on tasks like deciding whether a particular person should receive a certain benefit, or dispensing financial advice, companies are at higher risk of violating the EU regulation, she explained.

Companies need to determine carefully how to use their various types of data for different purposes without putting themselves at risk of a violation. In some cases, that may mean a company should leave out certain demographic data when debuting a specific service overseas, she said.