Interpreting intentionally flawed models with linear probes
Publication type: In proceedings
Book title: ICCV Workshop on Statistical Deep Learning in Computer Vision
Year: 2019
Location: Seoul, Korea
Abstract: The representational differences between generalizing networks and intentionally flawed models can provide insight into the dynamics of network training. Do memorizing networks, e.g. networks that learn random label correspondences, focus on specific patterns in the data to memorize the labels? Are the features learned by a generalizing network affected by randomization of the model parameters? In high-risk applications such as the medical, legal or financial domains, highlighting the representational differences that help generalization may be even more important than model performance itself. In this paper, we probe the activations of intermediate layers with linear classification and regression. Results show that the bias towards simple solutions of generalizing networks is maintained even when statistical irregularities are intentionally introduced.
Keywords: Deep Learning, interpretability, wrong labels
Authors: Graziani, Mara
Müller, Henning
Andrearczyk, Vincent
Files
  • grazianiSDLCV19.pdf
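
The abstract above describes probing the activations of intermediate layers with linear classifiers. The sketch below illustrates one common way such a linear probe can be set up; it assumes a pretrained torchvision ResNet-18, scikit-learn's LogisticRegression, and random stand-in images with binary labels. The hooked layer (layer3) and the probe target are illustrative assumptions, not the authors' exact configuration.

import numpy as np
import torch
import torchvision
from sklearn.linear_model import LogisticRegression

# Assumed backbone: a pretrained ResNet-18; the paper's actual models may differ.
model = torchvision.models.resnet18(pretrained=True).eval()

activations = {}

def hook(name):
    def _hook(module, inputs, output):
        # Global-average-pool the spatial dimensions so each sample
        # becomes a single feature vector.
        activations[name] = output.mean(dim=(2, 3)).detach().cpu().numpy()
    return _hook

# Attach a forward hook to an intermediate layer to capture its activations.
model.layer3.register_forward_hook(hook("layer3"))

# Illustrative stand-in data: a small batch of images with binary concept labels.
images = torch.randn(64, 3, 224, 224)
labels = np.random.randint(0, 2, size=64)

with torch.no_grad():
    model(images)  # fills `activations` via the hook

# The linear probe: a logistic regression fitted on the frozen activations.
# Its accuracy indicates how linearly decodable the target is at this depth.
probe = LogisticRegression(max_iter=1000)
probe.fit(activations["layer3"], labels)
print("Probe accuracy at layer3:", probe.score(activations["layer3"], labels))

A regression probe on a continuous attribute would follow the same pattern, with a linear regressor such as sklearn.linear_model.Ridge in place of LogisticRegression.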