
"Brugiapaglia S" Authored Publications:

1. Development and Application of Children's Sex- and Age-Specific Fat-Mass and Muscle-Mass Reference Curves From Dual-Energy X-Ray Absorptiometry Data for Predicting Cardiometabolic Risk
   Authors: Saputra ST; Van Hulst A; Henderson M; Brugiapaglia S; Faustini C; Kakinami L
   PubMed ID: 40878792 | Dept: SOH

2. Real-time motion detection using dynamic mode decomposition
   Authors: Mignacca M; Brugiapaglia S; Bramburger JJ
   PubMed ID: 40421310 | Dept: MATHSTATS

3. Near-optimal learning of Banach-valued, high-dimensional functions via deep neural networks
   Authors: Adcock B; Brugiapaglia S; Dexter N; Moraga S
   PubMed ID: 39454372 | Dept: MATHSTATS

4. Generalization limits of Graph Neural Networks in identity effects learning
   Authors: D'Inverno GA; Brugiapaglia S; Ravanelli M
   PubMed ID: 39426036 | Dept: ENCS

5. Invariance, Encodings, and Generalization: Learning Identity Effects With Neural Networks
   Authors: Brugiapaglia S; Liu M; Tupper P
   PubMed ID: 35798322 | Dept: MATHSTATS

 

Title: Invariance, Encodings, and Generalization: Learning Identity Effects With Neural Networks
Authors: Brugiapaglia S; Liu M; Tupper P
Link: https://pubmed.ncbi.nlm.nih.gov/35798322/
DOI: 10.1162/neco_a_01510
Publication: Neural Computation
PMID: 35798322
Date Added: 2022-07-08
Dept Affiliation: MATHSTATS

Affiliations:
1 Department of Mathematics and Statistics, Concordia University, Montreal, Quebec, H3G 1M8, Canada. simone.brugiapaglia@concordia.ca
2 Department of Mathematics and Statistics, Concordia University, Montreal, Quebec, H3G 1M8, Canada. matthew.liu@mail.concordia.ca
3 Department of Mathematics, Simon Fraser University, Burnaby, British Columbia, V5A 1S6, Canada. pft3@sfu.ca

Description:

Often in language and other areas of cognition, whether two components of an object are identical or not determines if it is well formed. We call such constraints identity effects. When developing a system to learn well-formedness from examples, it is easy enough to build in an identity effect. But can identity effects be learned from the data without explicit guidance? We provide a framework in which we can rigorously prove that algorithms satisfying simple criteria cannot make the correct inference. We then show that a broad class of learning algorithms, including deep feedforward neural networks trained via gradient-based algorithms (such as stochastic gradient descent or the Adam method), satisfies our criteria, depending on the encoding of inputs. In some broader circumstances, we are able to provide adversarial examples that the network necessarily classifies incorrectly. Finally, we demonstrate our theory with computational experiments in which we explore the effect of different input encodings on the ability of algorithms to generalize to novel inputs. This allows us to show similar effects to those predicted by theory for more realistic methods that violate some of the conditions of our theoretical results.
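The failure mode the abstract describes can be illustrated with a minimal sketch (my own construction, not the paper's code or exact setup): a small feedforward network is trained to label two-letter strings as "identical" (AA, BB, ...) or not (AB, CD, ...) using concatenated one-hot encodings, with two symbols held out of training entirely. Because the one-hot coordinates of the held-out symbols are always zero in training inputs, the gradients flowing into the corresponding input-layer weights vanish; here those columns are zero-initialised so the effect is exact, and the trained network assigns literally the same score to an identical novel pair and a non-identical one.

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
TRAIN = ALPHABET[:24]               # A..X appear in training; Y, Z never do

def encode(pair):
    """Concatenated one-hot encoding of a two-letter string (52 dims)."""
    v = np.zeros(52)
    v[ALPHABET.index(pair[0])] = 1.0
    v[26 + ALPHABET.index(pair[1])] = 1.0
    return v

# Training data over the 24 seen symbols: label 1 iff the letters match.
pairs = [a + b for a in TRAIN for b in TRAIN]
X = np.array([encode(p) for p in pairs])
y = np.array([1.0 if p[0] == p[1] else 0.0 for p in pairs])

# One-hidden-layer ReLU network. The W1 columns fed by the one-hot
# coordinates of Y and Z (indices 24, 25 in the first slot, 50, 51 in
# the second) start at zero -- and stay there, since those input
# coordinates are zero for every training example.
H = 64
W1 = 0.5 * rng.standard_normal((H, 52))
for i in (24, 25, 50, 51):
    W1[:, i] = 0.0
b1, w2, b2 = np.zeros(H), 0.1 * rng.standard_normal(H), 0.0

def forward(Xb):
    h = np.maximum(0.0, Xb @ W1.T + b1)          # hidden ReLU layer
    return h, 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))  # sigmoid output

for _ in range(1000):                # full-batch gradient descent, log loss
    h, p = forward(X)
    d_out = (p - y) / len(y)
    d_h = np.outer(d_out, w2) * (h > 0)
    w2 -= 1.0 * (h.T @ d_out); b2 -= 1.0 * d_out.sum()
    W1 -= 1.0 * (d_h.T @ X);   b1 -= 1.0 * d_h.sum(axis=0)

# "ZZ" (identical) and "ZY" (not identical) activate only the untrained,
# still-zero columns of W1, so both reduce to the same hidden vector b1
# and receive exactly the same score: the identity effect did not
# generalize to the novel symbols.
_, p_zz = forward(encode("ZZ")[None, :])
_, p_zy = forward(encode("ZY")[None, :])
print(p_zz[0], p_zy[0])
```

Swapping the one-hot encoding for a distributed or random encoding changes which weights receive gradient signal, which is one way to see why the paper's conclusions depend on the encoding of inputs.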





BookR developed by Sriram Narayanan
for the Concordia University School of Health
Copyright © 2011-2026