Beckhoff’s TF3820 | TwinCAT 3 Machine Learning Server
January 27, 2022
Beckhoff offers a machine learning (ML) and deep learning (DL) solution that is seamlessly integrated into TwinCAT 3. The TF3820 TwinCAT 3 Machine Learning Server is a high-performance execution module (inference engine) for trained ML and DL models.
The inference engine is programmed in the usual way from the PLC: from there, models are loaded, the execution hardware is configured, and inference is triggered. The model itself runs in a separate operating system process. There are almost no restrictions on the choice of ML and DL models, which ranges from clustering models to image classification and object detection.
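The PLC-side API itself consists of TwinCAT function blocks and is not reproduced here. Purely as a rough analogue of the same load-model / run-inference sequence, the following Python sketch uses the open-source onnxruntime package (an assumption for illustration, not a component of TF3820); the file name model.onnx and the input shape are placeholders.

    import numpy as np
    import onnxruntime as ort  # open-source ONNX runtime, used here only as an analogue

    # Load the exported ONNX description file (file name is a placeholder).
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

    # Inspect the model input so the data coming from the controller can be shaped to match.
    inp = session.get_inputs()[0]
    print(inp.name, inp.shape)

    # Run one inference cycle on example data; (1, 4) is a placeholder shape.
    x = np.random.rand(1, 4).astype(np.float32)
    outputs = session.run(None, {inp.name: x})
    print(outputs[0])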
The ML and DL models are trained in established frameworks such as PyTorch, TensorFlow or MATLAB®. The trained network is then loaded into the inference engine as a description file; the standardized Open Neural Network Exchange (ONNX) format is supported, so the worlds of automation and data science connect seamlessly.
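To illustrate this export path, here is a minimal PyTorch sketch that writes a trained network to an ONNX description file. The tiny network, the file name model.onnx and the opset version are placeholders chosen for the example, not anything specified by TF3820.

    import torch
    import torch.nn as nn

    # Tiny stand-in network; in practice this is the trained model from your own project.
    model = nn.Sequential(
        nn.Linear(4, 16),
        nn.ReLU(),
        nn.Linear(16, 2),
    )
    model.eval()

    # Dummy input fixing the expected input shape for the export.
    dummy_input = torch.randn(1, 4)

    # Export the network to an ONNX description file.
    torch.onnx.export(
        model,
        dummy_input,
        "model.onnx",
        input_names=["input"],
        output_names=["output"],
        opset_version=13,
    )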
A wide range of hardware is available for executing the ML and DL models. The TwinCAT 3 Machine Learning Server can run classically in parallel on CPU cores, use the integrated GPU of Beckhoff Industrial PCs, or access dedicated GPUs such as those from NVIDIA.
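In TwinCAT this hardware choice is configured from the PLC and is not shown here. Purely to illustrate the underlying concept, the following onnxruntime sketch (again an assumed stand-in, not the Beckhoff API) prefers an NVIDIA GPU via CUDA and falls back to the CPU when no GPU is present; the provider names are onnxruntime's own.

    import onnxruntime as ort

    # Execution providers available in this onnxruntime build (e.g. CUDA, CPU).
    available = ort.get_available_providers()
    print(available)

    # Prefer an NVIDIA GPU via CUDA, fall back to the CPU if none is present.
    preferred = [p for p in ("CUDAExecutionProvider", "CPUExecutionProvider") if p in available]
    session = ort.InferenceSession("model.onnx", providers=preferred)
    print(session.get_providers())  # providers actually used by this session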
The TwinCAT 3 Machine Learning Server thus provides an inference engine that combines maximum flexibility in the choice of models with high performance across hardware. Applications include predictive and prescriptive models, machine vision and robotics, for example image-based sorting or grading of products, defect classification, defect or product localization, and the calculation of gripping positions.