Robust Training of Artificial Neural Networks via p-Quasinorms

  • Data used for machine learning are often erroneous. In this thesis, p-quasinorms (p < 1) are employed as loss functions in order to increase the robustness of training algorithms for artificial neural networks. The numerical issues arising from these loss functions are addressed with enhanced optimization algorithms (proximal point methods; Frank-Wolfe methods) based on the (non-monotone) Armijo rule. Numerical experiments comprising 1100 test problems confirm the effectiveness of the approach. Depending on the parametrization, an average reduction of the absolute residuals of up to 64.6% is achieved (aggregated over 100 test problems).
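To illustrate why a p-quasinorm (p < 1) makes training less sensitive to gross data errors than a squared-error loss, the following is a minimal sketch (not from the thesis; function name, test values, and the `eps` smoothing are illustrative assumptions) comparing how much a single outlier inflates each loss:

```python
import numpy as np

def p_quasinorm_loss(residuals, p=0.5, eps=0.0):
    """Sum of |r_i|^p for p < 1 (its (1/p)-th root is the p-quasinorm).

    An optional small eps > 0 smooths the non-differentiable point at
    zero; this is a common ad-hoc trick, whereas the thesis addresses
    the numerical issues with specialized optimization methods
    (proximal point / Frank-Wolfe), which are not reproduced here.
    """
    return np.sum((np.abs(residuals) + eps) ** p)

# One gross error inflates the squared-error loss far more
# than the p-quasinorm loss, which is the robustness effect:
clean = np.array([0.1, -0.2, 0.15])
noisy = np.array([0.1, -0.2, 10.0])   # last residual is an outlier

l2_ratio = np.sum(noisy**2) / np.sum(clean**2)            # large blow-up
lp_ratio = p_quasinorm_loss(noisy) / p_quasinorm_loss(clean)  # mild blow-up
```

Because |r|^p grows sublinearly for p < 1, large residuals contribute comparatively little to the loss, so erroneous samples pull the fitted network far less than under a squared-error criterion.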

Metadata
Author:Stefan Geisen
URN:urn:nbn:de:hbz:385-1-14359
DOI:https://doi.org/10.25353/ubtr-xxxx-0afe-e122
Advisor:Ekkehard Sachs, Volker Schulz
Document Type:Doctoral Thesis
Language:English
Date of completion:2020/06/29
Publishing institution:Universität Trier
Granting institution:Universität Trier, Fachbereich 4
Date of final exam:2020/06/26
Release Date:2020/08/06
GND Keyword:Machine Learning; Neural Network; Optimization
Institutes:Fachbereich 4
Dewey Decimal Classification:5 Natural sciences and mathematics / 51 Mathematics / 510 Mathematics
Licence (German):CC BY-NC-ND: Creative Commons License 4.0 International
