Invariance, Encodings, and Generalization: Learning Identity Effects With Neural Networks

Authors: Brugiapaglia S, Liu M, Tupper P


Affiliations

1 Department of Mathematics and Statistics, Concordia University, Montreal, Quebec, H3G 1M8, Canada. simone.brugiapaglia@concordia.ca
2 Department of Mathematics and Statistics, Concordia University, Montreal, Quebec, H3G 1M8, Canada. matthew.liu@mail.concordia.ca
3 Department of Mathematics, Simon Fraser University, Burnaby, British Columbia, V5A 1S6, Canada. pft3@sfu.ca

Description

Often in language and other areas of cognition, whether two components of an object are identical or not determines whether it is well formed. We call such constraints identity effects. When developing a system to learn well-formedness from examples, it is easy enough to build in an identity effect explicitly. But can identity effects be learned from data alone, without explicit guidance? We provide a framework in which we can rigorously prove that algorithms satisfying simple criteria cannot make the correct inference. We then show that a broad class of learning algorithms, including deep feedforward neural networks trained via gradient-based methods (such as stochastic gradient descent or the Adam method), satisfies our criteria, depending on the encoding of the inputs. In some broader circumstances, we are able to provide adversarial examples that the network necessarily classifies incorrectly. Finally, we demonstrate our theory with computational experiments in which we explore the effect of different input encodings on the ability of algorithms to generalize to novel inputs. These experiments exhibit effects similar to those predicted by the theory, even for more realistic methods that violate some of the conditions of our theoretical results.
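
To make the setup concrete, below is a minimal, self-contained sketch of an identity-effect learning task. It is an illustration only, not the authors' code: the alphabet size, the one-hot (localist) encoding, the two-layer network, and the plain full-batch gradient descent loop are assumed choices made for brevity, whereas the paper considers deep feedforward networks trained with optimizers such as stochastic gradient descent or Adam and compares several input encodings. A pair of symbols is labelled well formed exactly when the two symbols are identical; the network is trained on pairs built from a subset of the alphabet and then evaluated on pairs built from held-out symbols.

```python
# Illustrative sketch (not the authors' code) of a toy identity-effect task.
# A pair of symbols is "well formed" iff the two symbols are identical.
# Symbols are one-hot encoded; training uses only the first TRAIN_SYMBOLS
# symbols, and the network is then tested on pairs of held-out symbols.
import numpy as np

rng = np.random.default_rng(0)

NUM_SYMBOLS = 10      # total alphabet size (assumed for illustration)
TRAIN_SYMBOLS = 8     # symbols seen during training; the rest are novel
HIDDEN = 32           # hidden-layer width (assumed)
STEPS = 2000
LR = 0.5

def encode_pair(i, j):
    """One-hot encode each symbol and concatenate the two codes."""
    x = np.zeros(2 * NUM_SYMBOLS)
    x[i] = 1.0
    x[NUM_SYMBOLS + j] = 1.0
    return x

def make_dataset(symbols):
    """All ordered pairs over `symbols`, labelled 1 iff the symbols match."""
    X, y = [], []
    for i in symbols:
        for j in symbols:
            X.append(encode_pair(i, j))
            y.append(1.0 if i == j else 0.0)
    return np.array(X), np.array(y)

X_train, y_train = make_dataset(range(TRAIN_SYMBOLS))
X_test, y_test = make_dataset(range(TRAIN_SYMBOLS, NUM_SYMBOLS))

# Two-layer network: ReLU hidden layer, sigmoid output, cross-entropy loss.
W1 = rng.normal(scale=0.1, size=(2 * NUM_SYMBOLS, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=HIDDEN)
b2 = 0.0

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)               # ReLU hidden activations
    z = np.clip(h @ W2 + b2, -30.0, 30.0)          # output logit
    return h, 1.0 / (1.0 + np.exp(-z))             # sigmoid probability

for _ in range(STEPS):
    h, p = forward(X_train)
    grad_out = (p - y_train) / len(y_train)        # dL/dlogit (cross-entropy)
    gW2 = h.T @ grad_out
    gb2 = grad_out.sum()
    grad_h = np.outer(grad_out, W2) * (h > 0)      # backprop through ReLU
    gW1 = X_train.T @ grad_h
    gb1 = grad_h.sum(axis=0)
    W1 -= LR * gW1; b1 -= LR * gb1
    W2 -= LR * gW2; b2 -= LR * gb2

_, p_train = forward(X_train)
_, p_test = forward(X_test)
print("train accuracy:       ", np.mean((p_train > 0.5) == y_train))
print("novel-symbol accuracy:", np.mean((p_test > 0.5) == y_test))
```

With a one-hot encoding of this kind, the input units for the held-out symbols are never active during training, so their incoming weights remain at their random initial values; one would typically observe near-perfect training accuracy but roughly chance-level accuracy on the novel pairs, in line with the claim above that the ability to generalize identity effects to novel inputs depends on the encoding of the inputs.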


Links

PubMed: https://pubmed.ncbi.nlm.nih.gov/35798322/

DOI: 10.1162/neco_a_01510