Integrating a Non-Uniformly Sampled Software Retina with a Deep CNN Model

Piotr Ozimek and Jan P Siebert


Abstract
We present a biologically inspired method for pre-processing images fed to CNNs that reduces their memory requirements while increasing their invariance to changes in scale and rotation. Our method is based on the mammalian retino-cortical transform: a mapping between a pseudo-randomly tessellated retina model (used to sample an input image) and a CNN. The aim of this first pilot study is to demonstrate a functional retina-integrated CNN implementation, and it produced the following results: a network using the full retino-cortical transform achieved an F1 score of 0.80 on a test set in a 4-way classification task, while an identical network without the proposed method achieved an F1 score of 0.86 on the same task. The method reduced the visual data by a factor of ~7, the input data to the CNN by 40%, and the number of CNN training epochs by 64%. These results demonstrate the viability of our method and hint at the potential of exploiting functional traits of natural vision systems in CNNs.
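To make the idea concrete, the sketch below shows non-uniform retinal sampling in miniature: pseudo-random sampling nodes are placed densely near a central "fovea" and sparsely in the periphery, and an image is sampled only at those nodes, yielding a much smaller input vector. This is an illustrative toy, not the paper's actual tessellation or retino-cortical mapping; the node-placement rule, node count, and function names are all assumptions.

```python
import numpy as np

def make_retina(n_nodes=1024, fovea=0.1, seed=0):
    """Generate pseudo-random retina node positions in the unit disc.

    Radii are drawn so that node density falls off with eccentricity,
    loosely mimicking the mammalian retina (illustrative placement
    rule only, not the tessellation used in the paper).
    """
    rng = np.random.default_rng(seed)
    # Exponential radial warp: dense fovea, sparse periphery.
    u = rng.uniform(0, np.log(1.0 / fovea + 1), n_nodes)
    r = np.clip(fovea * (np.exp(u) - 1), 0, 1)
    theta = rng.uniform(0, 2 * np.pi, n_nodes)
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

def retina_sample(image, nodes):
    """Sample a grayscale image at the retina nodes (nearest pixel)."""
    h, w = image.shape
    cx, cy = w / 2, h / 2
    xs = np.clip((nodes[:, 0] * cx + cx).astype(int), 0, w - 1)
    ys = np.clip((nodes[:, 1] * cy + cy).astype(int), 0, h - 1)
    return image[ys, xs]  # compact vector that would feed the CNN

nodes = make_retina(n_nodes=1024)
img = np.arange(256 * 256, dtype=float).reshape(256, 256)
vec = retina_sample(img, nodes)
print(vec.shape)  # 1024 samples vs. 65536 input pixels
```

In the paper's full pipeline the sampled values are additionally mapped into a cortical image before being passed to the CNN; here the node vector alone illustrates the data reduction that non-uniform sampling provides.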


Files
Paper (PDF)


DOI
Coming soon


Bibtex
@inproceedings{dlid2017_4,
  title = {Integrating a Non-Uniformly Sampled Software Retina with a Deep CNN Model},
  author = {P. Ozimek and J. P. Siebert},
  booktitle = {British Machine Vision Conference Workshop: Deep Learning on Irregular Domains (DLID)},
  year = {2017},
  pages = {4.1--4.11}
}