
Machine Learning: PyTorch 1.4 opens up to Java

Facebook has released version 1.4 of PyTorch. The current release of the machine learning framework introduces, among other things, distributed parallel training of models. It also extends the deployment on Android and iOS devices via PyTorch Mobile, which was introduced in the previous release. In addition, pruning can now be used to simplify artificial neural networks. It is also worth mentioning that the framework ships with Java bindings for the first time.

Java plays a subordinate role in machine learning (ML). Thanks to its close ties to data science, Python in particular has established itself as the dominant language; it is the primary language for PyTorch as well as for Google's ML framework TensorFlow, which was recently released in version 2.1. In addition, Facebook's framework can be used from C++.

On the Java Virtual Machine (JVM), Scala plays a greater role for ML applications than Java, not least because of its direct connection to the Apache Spark framework. Java found its way into PyTorch via the detour of PyTorch Mobile: the bindings are based on the Android interface to the mobile version. In PyTorch 1.4, the Java bindings are still considered experimental. They are only available for Linux and are limited to inference.
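Like the mobile version, the Java bindings operate on serialized TorchScript modules. The following Python sketch shows how such a module might be exported for consumption on the JVM; the model choice and file name are merely illustrative:

    import torch
    import torchvision

    # Load a pretrained model and switch it to inference mode.
    model = torchvision.models.mobilenet_v2(pretrained=True)
    model.eval()

    # Trace the model into a TorchScript module using an example input.
    example = torch.rand(1, 3, 224, 224)
    scripted = torch.jit.trace(model, example)

    # Serialize the module; the Java bindings (org.pytorch.Module) can
    # load this file and run inference on it.
    scripted.save("mobilenet_v2.pt")

On the Java side, Module.load("mobilenet_v2.pt") then returns a module whose forward method runs the inference.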

PyTorch Mobile, launched in October 2019 to bring the framework to iOS and Android devices, gains more control options for developers in the current release. They can reduce the size of the library by including only the operators that their models actually need. According to the release notes, a customized build for MobileNetV2 is 40 to 50 percent smaller than the prebuilt PyTorch Mobile library.
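How such a customized build might be prepared is sketched below: torch.jit.export_opnames lists the operators a TorchScript model actually calls, and the resulting YAML file is fed into the mobile build. The file names are illustrative, and the workflow is an assumption based on the mobile custom-build documentation rather than something spelled out in the article:

    import torch
    import yaml

    # Load the previously exported TorchScript module (illustrative name).
    scripted = torch.jit.load("mobilenet_v2.pt")

    # Collect the names of all operators the model actually uses.
    ops = torch.jit.export_opnames(scripted)

    # Write the list as YAML; a custom mobile build (for example via the
    # SELECTED_OP_LIST setting of the build scripts) then compiles only
    # these operators into the library.
    with open("mobilenet_v2_ops.yaml", "w") as f:
        yaml.dump(ops, f)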

To make the training of very large models as performant as possible, PyTorch introduces so-called distributed model parallel training. PyTorch 1.4 comes with a distributed RPC (Remote Procedure Call) framework with which functions can be executed on remote workers and remote objects can be referenced without having to copy the data.
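A minimal sketch of the new API, modeled on the tutorial: worker 0 calls torch.add synchronously on worker 1 via rpc_sync, while rpc.remote returns an RRef whose value stays on the remote side until it is explicitly fetched. The worker names and the port are illustrative:

    import os
    import torch
    import torch.distributed.rpc as rpc
    import torch.multiprocessing as mp

    def run(rank, world_size):
        # Address and port of the rendezvous point (illustrative values).
        os.environ["MASTER_ADDR"] = "localhost"
        os.environ["MASTER_PORT"] = "29500"
        rpc.init_rpc(f"worker{rank}", rank=rank, world_size=world_size)

        if rank == 0:
            # Execute torch.add on worker1; only the result is copied back.
            result = rpc.rpc_sync("worker1", torch.add,
                                  args=(torch.ones(2), torch.ones(2)))
            print(result)  # tensor([2., 2.])

            # rpc.remote returns an RRef: a reference to a value that lives
            # on worker1. No data is copied until to_here() is called.
            rref = rpc.remote("worker1", torch.zeros, args=(2, 2))
            print(rref.to_here())

        # Wait for all outstanding RPCs, then tear down the framework.
        rpc.shutdown()

    if __name__ == "__main__":
        mp.spawn(run, args=(2,), nprocs=2, join=True)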

Among other things, Facebook aims to use it to optimize the training of the RoBERTa models presented last September. The method for natural language processing (NLP) extends Google's BERT approach (Bidirectional Encoder Representations from Transformers). RoBERTa (Robustly Optimized BERT Pretraining Approach) is implemented in PyTorch and has a large number of parameters. In the current release of the ML framework, distributed parallel training is marked as experimental.

PyTorch 1.4 brings ready-made pruning methods in the nn.utils.prune module. Pruning, which can be translated as trimming, is used in machine learning to simplify decision trees or artificial neural networks.

As in botany, pruning is intended to curb wild growth. In the ML environment, it can be used to combat what is known as overfitting: an overly close fit of the model to the training data, which makes it less suitable for generalization.

PyTorch 1.4 has ready-made techniques for random pruning and magnitude-based pruning on board. The latter is a simple but usually efficient pruning algorithm that removes the weights with the smallest magnitude. In addition to the ready-made pruning classes, developers can create their own as subclasses of BasePruningMethod, as the sketch below shows.
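A short sketch of the module on a single linear layer. The first two calls use the built-in methods mentioned above; the subclass at the end is a deliberately artificial example of a custom pruning method:

    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    layer = nn.Linear(4, 3)

    # Magnitude-based pruning: zero out the 30 percent of weights with
    # the smallest absolute value.
    prune.l1_unstructured(layer, name="weight", amount=0.3)

    # Random pruning works analogously; here on the bias vector.
    prune.random_unstructured(layer, name="bias", amount=0.5)

    print(layer.weight)                        # pruned entries are zero
    print(dict(layer.named_buffers()).keys())  # weight_mask, bias_mask

    # Custom methods subclass BasePruningMethod and implement compute_mask.
    class EveryOtherPruning(prune.BasePruningMethod):
        PRUNING_TYPE = "unstructured"

        def compute_mask(self, t, default_mask):
            mask = default_mask.clone()
            mask.view(-1)[::2] = 0  # artificial rule: drop every other entry
            return mask

    # Apply the custom method via the classmethod inherited from the base.
    EveryOtherPruning.apply(layer, name="weight")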


Further changes in PyTorch 1.4 can be found in the release notes in the GitHub repository. The current release includes numerous adjustments, listed in the release notes, that lead to incompatibilities. For getting started with the RPC framework for parallel training, there is an extensive tutorial as well as some examples and documentation of the API. Simple examples of pruning can also be found in the release notes.


(RME)


