Preprint, Working Paper. Year: 2024

Quantized Approximately Orthogonal Recurrent Neural Networks

Abstract

Orthogonal recurrent neural networks (ORNNs) are an appealing option for learning tasks involving time series with long-term dependencies, thanks to their simplicity and computational stability. However, these networks often require a substantial number of parameters to perform well, which can be prohibitive in power-constrained environments, such as compact devices. One approach to address this issue is neural network quantization. Yet the construction of such quantized networks remains an open problem, acknowledged for its inherent instability. In this paper, we explore the quantization of the recurrent and input weight matrices in ORNNs, leading to Quantized approximately Orthogonal RNNs (QORNNs). We investigate one post-training quantization (PTQ) strategy and three quantization-aware training (QAT) algorithms that incorporate orthogonal constraints and quantized weights. Empirical results demonstrate the advantages of employing QAT over PTQ. The most efficient model achieves results similar to state-of-the-art full-precision ORNNs and LSTMs on a variety of standard benchmarks, even with 3-bit quantization.
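For readers unfamiliar with the ingredients mentioned in the abstract, the following is a minimal, self-contained PyTorch sketch of generic k-bit weight quantization with a straight-through estimator (STE) applied to the input and recurrent matrices of a vanilla RNN cell. It is an illustration under assumed choices (uniform symmetric quantizer, tanh cell, QR-based near-orthogonal initialization), not a reproduction of the authors' PTQ/QAT algorithms or their orthogonality-preserving training.

```python
# Illustrative sketch only: generic k-bit uniform quantization with an STE,
# applied to the weight matrices of a plain RNN cell. All names and
# hyper-parameters are assumptions, not the paper's actual methods.
import torch
import torch.nn as nn


def quantize_ste(w: torch.Tensor, n_bits: int = 3) -> torch.Tensor:
    """Uniform symmetric quantization of w to n_bits, with an STE backward pass."""
    n_levels = 2 ** (n_bits - 1) - 1                      # e.g. 3 bits -> integers in {-3,...,3}
    scale = w.detach().abs().max().clamp(min=1e-8) / n_levels
    w_q = torch.round(w / scale).clamp(-n_levels, n_levels) * scale
    # Straight-through estimator: forward uses w_q, gradient flows as if identity.
    return w + (w_q - w).detach()


class QuantizedRNNCell(nn.Module):
    """Vanilla RNN cell whose input and recurrent matrices are quantized on the fly."""

    def __init__(self, input_size: int, hidden_size: int, n_bits: int = 3):
        super().__init__()
        self.n_bits = n_bits
        self.W_in = nn.Parameter(torch.randn(hidden_size, input_size) * 0.1)
        # Initialize the recurrent matrix close to orthogonal (Q factor of a Gaussian).
        self.W_rec = nn.Parameter(torch.linalg.qr(torch.randn(hidden_size, hidden_size))[0])
        self.bias = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        W_in_q = quantize_ste(self.W_in, self.n_bits)
        W_rec_q = quantize_ste(self.W_rec, self.n_bits)
        return torch.tanh(x @ W_in_q.T + h @ W_rec_q.T + self.bias)


if __name__ == "__main__":
    cell = QuantizedRNNCell(input_size=8, hidden_size=16, n_bits=3)
    h = torch.zeros(4, 16)
    for t in range(10):                  # unroll over a short dummy sequence
        h = cell(torch.randn(4, 8), h)
    h.sum().backward()                   # gradients reach the full-precision weights via the STE
    print(h.shape, cell.W_rec.grad.shape)
```

Note that the forward pass always uses the quantized weights while the optimizer updates the underlying full-precision copies; the paper's contribution lies in combining such quantization with (approximate) orthogonality constraints, which this sketch does not enforce during training.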
Main file: qornn_arxiv.pdf (2.72 MB)
Origin: files produced by the author(s)

Dates and versions

hal-04434011, version 1 (02-02-2024)
hal-04434011, version 2 (07-06-2024)

Identifiers

Cite

Armand Foucault, Franck Mamalet, François Malgouyres. Quantized Approximately Orthogonal Recurrent Neural Networks. 2024. ⟨hal-04434011v1⟩