The limitations of AI in chip manufacturing
Image: A person wearing blue latex gloves holds a magnifying glass over a microchip. (sefa ozel via Getty Images)

This story was originally published on Manufacturing Dive.

Manufacturers are integrating artificial intelligence into their processes to boost efficiency and gain a competitive edge.

Eventually, with the right combination of robots, data and software, the technology could lead to fully autonomous semiconductor fabs, or to processes that free up the human workforce to solve problems alongside AI, according to executives at an industry conference this week.

Leaders from Intel, EMD, GlobalFoundries and other computer chip companies discussed their visions for AI at the Advanced Semiconductor Manufacturing Conference in Albany, New York. They also spoke about the challenges and limitations facing the technology as adoption grows across the industry.

From data scarcity to hallucinations, here are some of the key limitations of AI that industry leaders are facing.

Identifying where to get value from AI

Currently, tools like ChatGPT and other large language models can generate human-like text and perform language-related tasks. Some tools can generate video, images or even code, while other forms of AI can create digital replicas of factory floors, handle repetitive tasks or improve quality control through computer vision.
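To make the computer-vision use case concrete, here is a minimal sketch of automated visual quality control: a captured die image is compared against a "golden" reference, and pixels that deviate beyond a tolerance are flagged as candidate defects. The synthetic images, tolerance, and defect-area cutoff are all illustrative assumptions, not any particular fab's inspection recipe.

```python
# Minimal sketch of computer-vision quality control via image differencing.
# All values (image size, tolerance, defect cutoff) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-ins for a "golden" reference image and a freshly captured die image
# (grayscale, values 0-255). In practice these would come from a camera.
reference = rng.integers(0, 256, size=(64, 64)).astype(np.int16)
captured = reference.copy()
captured[10:14, 20:24] += 80  # inject a synthetic bright defect

# Pixels whose intensity deviates from the reference by more than a
# tolerance are treated as candidate defects.
TOLERANCE = 30
defect_mask = np.abs(captured - reference) > TOLERANCE

# Flag the die if the defective area exceeds a cutoff.
DEFECT_PIXEL_CUTOFF = 10
defect_pixels = int(defect_mask.sum())
verdict = "FAIL" if defect_pixels > DEFECT_PIXEL_CUTOFF else "PASS"
print(f"defective pixels: {defect_pixels} -> {verdict}")
```

Real inspection systems are far more sophisticated, often using trained models rather than simple differencing, but the basic pattern of comparing output against a reference and quantifying deviation is the same.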

There are countless possibilities for the use of AI in chip manufacturing. However, as companies race to incorporate these technologies into their operations, they can fall into the trap of implementing tools without understanding how those tools will improve performance.

“People tend to use that hype. Everything is AI. Let's try to do something with AI, but we don't get value out of how much we invested,” Safa Kutup Kurt, global head of plant operations and digital transformation at EMD Technologies, said during a conference panel. “We need to find that balance, and we need to really work on scalability of the solution and generating the value.”

“If we cannot quantify it, well, it's going to be difficult to scale,” Kurt added.

The ‘explainability’ of AI

AI tools can provide answers to problems, but understanding why they came to their conclusions is tricky.

“Improving just the explainability and building trust in these models is a huge way…that they need to evolve,” Jason Komorowski, senior automation and analytics engineer at Intel Corp., said during the panel.

“We can't just give people a black box and say you input what you want and it'll spit out what you need to do for it. Right? We have to be able to explain it, understand which features are being used, and how we're building and coming to our decisions,” Komorowski added.
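One common way to open up the "black box" Komorowski describes is to inspect which input features a model relies on when making decisions. The sketch below is a minimal, assumption-laden example using scikit-learn on synthetic fab sensor data; the feature names, data, and model choice are hypothetical illustrations, not Intel's actual tooling.

```python
# Minimal sketch of model explainability via feature importances.
# Synthetic data and feature names are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=0)

# Hypothetical fab sensor readings: chamber temperature, pressure, RF power.
feature_names = ["chamber_temp", "pressure", "rf_power"]
X = rng.normal(size=(500, 3))
# Make pass/fail depend mostly on the first feature so the ranking is visible.
y = (X[:, 0] + 0.2 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Report which features drive the model's decisions, rather than treating
# it as a black box: a higher score means a larger contribution.
for name, score in sorted(zip(feature_names, model.feature_importances_),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

Feature-importance reports like this are only one piece of explainability, but they let engineers check whether a model's decisions track the physical variables they expect, which is the kind of trust-building Komorowski points to.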