Phison Expands aiDAPTIV+ GPU Memory Extension Capabilities for Additional Platforms to Enable LLM Training and Improve Inferencing On-Premises


aiDAPTIV+ capabilities now support large parameter models on AI laptop PC and edge computing devices

SAN JOSE, Calif., March 18, 2025--(BUSINESS WIRE)--NVIDIA GTC – Phison Electronics (8299TT), a leading innovator in NAND flash technologies, today announced an array of expanded capabilities on aiDAPTIV+, the affordable AI training and inferencing solution for on-premises environments. aiDAPTIV+ will be integrated into an ML-series Maingear laptop, the first AI laptop PC capable of LLMOps, utilizing NVIDIA GPUs and available for concept demonstration and registration this week at NVIDIA GTC 2025. Customers will be able to fine-tune Large Language Models (LLMs) of up to 8 billion parameters using their own data. Phison also expanded aiDAPTIV+ capabilities to run on edge computing devices powered by the NVIDIA Jetson platform, for enhanced generative AI inference in edge and robotics deployments. With today’s announcement, new and current aiDAPTIV+ users can look forward to the new aiDAPTIVLink 3.0 middleware, which will provide faster Time to First Token (TTFT) recall and extend the token length for greater context, improving inferencing performance and accuracy. These expansions will unlock access for users ranging from university students and AI industry professionals learning to train LLMs, to researchers uncovering deeper insights within their own data using a PC, all the way to manufacturing engineers automating factory-floor enhancements via edge devices.
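For readers unfamiliar with the metric, Time to First Token (TTFT) is the wall-clock delay between sending a request to an LLM and receiving the first generated token; lower TTFT means a more responsive system. The sketch below is a minimal, generic illustration of how TTFT is measured against any streaming token source. It is not Phison's aiDAPTIVLink software, and the generator and delays shown are hypothetical stand-ins for a real model's output stream.

```python
import time

def stream_tokens(tokens, delay_first=0.05, delay_rest=0.01):
    """Hypothetical token generator: the first token is slowest
    (it includes prompt processing), later tokens arrive faster."""
    for i, tok in enumerate(tokens):
        time.sleep(delay_first if i == 0 else delay_rest)
        yield tok

def time_to_first_token(stream):
    """TTFT: elapsed wall-clock time from request start until the
    stream yields its first token."""
    start = time.perf_counter()
    next(iter(stream))  # block until the first token arrives
    return time.perf_counter() - start

ttft = time_to_first_token(stream_tokens(["Hello", ",", " world"]))
```

With the simulated 50 ms first-token delay above, `ttft` comes out at roughly 0.05 seconds; a middleware that speeds up first-token recall would shrink exactly this number.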

With the proliferation of AI and edge processing use cases, demand has spiked for future AI developer talent. Developers require hands-on access to LLM training solutions to learn to build tomorrow’s applications, while decision-makers in highly regulated government, research, healthcare and industrial organizations seek secure, on-premises devices they can use to train on their own data. Beyond this, focus has shifted toward improving LLM inferencing response and accuracy, with many organizations demanding best-in-market solutions at a predictable cost. aiDAPTIV+ is a budget-friendly GPU memory-extension capability that lets users train an LLM on their own data within an on-premises "closed-loop" secure network, while providing a simple user interface for interacting with and asking questions about that data.

"aiDAPTIV+ is now equivalent to having an expert on your own data in your backpack at all times," said Michael Wu, GM and President at Phison US. "Not only do you get to train and do inferencing on your own fine-tuned or RAG-enabled LLMs, but then you reap the rewards of insights. That can lead to your next application, whether that’s a groundbreaking pharmaceutical, a smarter financial forecasting model or a methodology to expedite factory output at the device level."