Resizing Tiny Imagenet: An Iterative Approach Towards Image Classification

Authors

  • Praveen Kumar, Student, Department of Computer Science & Engineering, MVJ College of Engineering, Whitefield, Bangalore, India
  • Rishabh Singh, Student, Department of Computer Science & Engineering, MVJ College of Engineering, Whitefield, Bangalore, India
  • Nilesh Kumar Singh, Student, Department of Computer Science & Engineering, MVJ College of Engineering, Whitefield, Bangalore, India
  • Hitesh Agarwal, Student, Department of Computer Science & Engineering, MVJ College of Engineering, Whitefield, Bangalore, India

Keywords:

Tiny ImageNet, Residual Networks, Classification, ILSVRC, Deep Architectures

Abstract

Deep neural networks have attained near human-level performance on several image classification and object detection problems through robust, powerful, state-of-the-art architectures. Stanford's Tiny ImageNet dataset has been around for a while, and neural networks have struggled to classify its images; it is generally considered one of the harder datasets in the domain of image classification. The validation accuracy of existing systems typically maxes out at 61-62%, with a select few going beyond 68-69%. These approaches are often plagued by overfitting and vanishing gradients. In this paper, we present a method to obtain above-average validation accuracy while circumventing these problems. We use a progressive image-resizing technique that trains the model multiple times over increasing image sizes. This approach enables the model to derive more relevant and precise features as the input approaches the original image size, and it makes the training process adaptable to the available hardware resources. After reaching the original image size, we tune hyperparameters such as the learning rate and the optimizer to increase the overall validation accuracy. The final validation accuracy of the model obtained with resizing and hyperparameter tuning is 62.57%.
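
As an illustration of this training scheme, the sketch below runs the same network through stages of increasing input resolution before tuning the learning rate at the native size. The ResNet-18 backbone, dataset path, stage schedule, and learning rates are illustrative assumptions, not the authors' exact configuration.

    # Minimal sketch of progressive resizing on Tiny ImageNet (PyTorch).
    # The backbone, data path, batch size, and stage schedule below are
    # illustrative assumptions, not the configuration reported in the paper.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    DATA_DIR = "tiny-imagenet-200/train"   # assumed Tiny ImageNet folder layout
    DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

    model = models.resnet18(num_classes=200).to(DEVICE)  # 200 Tiny ImageNet classes
    criterion = nn.CrossEntropyLoss()

    def loader_at(size, batch_size=256):
        """Build a training loader that resizes every image to size x size."""
        tfm = transforms.Compose([
            transforms.Resize((size, size)),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
        ])
        return DataLoader(datasets.ImageFolder(DATA_DIR, transform=tfm),
                          batch_size=batch_size, shuffle=True, num_workers=4)

    def train_stage(size, epochs, lr):
        """One resizing stage: the same weights keep training on larger inputs."""
        loader = loader_at(size)
        optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        for _ in range(epochs):
            for images, labels in loader:
                images, labels = images.to(DEVICE), labels.to(DEVICE)
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()

    # Illustrative schedule: 32 px -> 48 px -> the native 64 px resolution,
    # with a decaying learning rate standing in for fuller hyperparameter tuning.
    for size, epochs, lr in [(32, 5, 0.1), (48, 5, 0.05), (64, 10, 0.01)]:
        train_stage(size, epochs, lr)

The smaller early stages keep per-epoch cost low on modest hardware, while the final stage at the native 64x64 resolution lets the network refine finer-grained features.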

Published

2020-11-30

How to Cite

Resizing Tiny Imagenet: An Iterative Approach Towards Image Classification. (2020). International Journal of Innovative Research in Computer Science & Technology, 8(6), 411–415. Retrieved from https://acspublisher.com/journals/index.php/ijircst/article/view/13040