Tags → #ml
-
Analysis of one-hidden-layer neural networks via the resolvent method
We provide an alternative derivation of the asymptotic spectrum of non-linear random matrices, based on the more robust resolvent method. In particular, our approach extends previous results on random feature models to the practically important case of an additive bias.
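As a minimal numerical sketch of the object being studied (the sizes, Gaussian data, and tanh activation are my own illustrative choices, not taken from the paper), one can generate the Gram matrix of one-hidden-layer random features with an additive bias and inspect its empirical spectrum, which is what the resolvent method characterizes asymptotically:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 2000, 1000, 1500                    # samples, input dim, hidden width (assumed sizes)
X = rng.standard_normal((d, n))               # Gaussian data matrix
W = rng.standard_normal((p, d)) / np.sqrt(d)  # random first-layer weights
b = rng.standard_normal((p, 1))               # additive bias, broadcast across samples
Y = np.tanh(W @ X + b)                        # non-linear features f(WX + b)
M = Y @ Y.T / n                               # p x p sample covariance of the features
eigs = np.linalg.eigvalsh(M)                  # empirical spectrum; the resolvent method
                                              # predicts its limiting density as n,d,p grow
print(eigs.min(), eigs.max())
```

A histogram of `eigs` would then be compared against the limiting spectral density derived in the paper.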
-
Deterministic equivalent and error universality of deep random features learning
We show that the generalization error of deep random feature models coincides with that of Gaussian features with matched covariance, and derive an explicit expression for this error.
-
Asymptotics of Learning with Deep Structured (Random) Features
We derive an approximate formula for the generalization error of deep neural networks with structured (random) features, confirming a widely believed conjecture. We also show that our results capture feature maps learned by deep, finite-width neural networks trained with gradient descent.
-
Cortical Silent Period
Regression on EEG data using xResnet1d to predict the onset and offset of the cortical silent period. Work in progress.