Is Open Access

      Pixel-level Reconstruction and Classification for Noisy Handwritten Bangla Characters

      Preprint


          Abstract

          Classification techniques for images of handwritten characters are susceptible to noise. Quadtrees can be an efficient representation for learning from sparse features. In this paper, we improve the effectiveness of probabilistic quadtrees by using a pixel-level classifier to extract the character pixels and remove noise from handwritten character images. The pixel-level denoiser (a deep belief network) uses the map responses obtained from a pretrained CNN as features for reconstructing the characters, eliminating noise. We experimentally demonstrate the effectiveness of our approach by reconstructing and classifying noisy versions of the handwritten Bangla Numeral and Basic Character datasets.
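The quadtree representation mentioned in the abstract can be illustrated with a plain (non-probabilistic) quadtree decomposition of a binary character image: dense strokes are split into small cells while empty background collapses into large uniform leaves. This is only an illustrative sketch under assumed parameters, not the authors' probabilistic formulation; the function name `quadtree_leaves`, the uniformity threshold, and the toy image are all assumptions.

```python
import numpy as np

def quadtree_leaves(img, threshold=0.05, min_size=2):
    """Recursively split a square binary image into quadrants until each
    leaf block is nearly uniform (foreground fraction near 0 or 1) or the
    minimum block size is reached. Returns (row, col, size, mean) leaves."""
    leaves = []

    def split(r, c, size):
        block = img[r:r + size, c:c + size]
        mean = block.mean()
        uniform = mean <= threshold or mean >= 1 - threshold
        if uniform or size <= min_size:
            leaves.append((r, c, size, mean))
            return
        half = size // 2
        for dr in (0, half):          # visit the four quadrants
            for dc in (0, half):
                split(r + dr, c + dc, half)

    split(0, 0, img.shape[0])
    return leaves

# Toy 8x8 "character": a diagonal stroke. Background quadrants become
# single large leaves; the stroke is refined down to 2x2 cells.
img = np.eye(8)
leaves = quadtree_leaves(img)
```

The leaves partition the image exactly, so the sparse stroke pixels end up concentrated in a handful of small, high-mean cells that a downstream classifier can consume.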

          Related collections

          Most cited references (6)


          DeepSat


            A Semiautomated Probabilistic Framework for Tree-Cover Delineation From 1-m NAIP Imagery Using a High-Performance Computing Architecture

              Is Open Access

              A Theoretical Analysis of Deep Neural Networks for Texture Classification

              We investigate the use of Deep Neural Networks for the classification of image datasets where texture features are important for generating class-conditional discriminative representations. To this end, we first derive the size of the feature space for some standard textural features extracted from the input dataset and then use the theory of Vapnik-Chervonenkis dimension to show that hand-crafted feature extraction creates low-dimensional representations which help in reducing the overall excess error rate. As a corollary to this analysis, we derive for the first time upper bounds on the VC dimension of Convolutional Neural Networks as well as Dropout and Dropconnect networks, and the relation between the excess error rates of Dropout and Dropconnect networks. The concept of intrinsic dimension is used to validate the intuition that texture-based datasets are inherently higher dimensional than handwritten digits or other object-recognition datasets, and hence more difficult for neural networks to shatter. We then derive the mean distance from the centroid to the nearest and farthest sampling points in an n-dimensional manifold and show that the Relative Contrast of the sample data vanishes as the dimensionality of the underlying vector space tends to infinity.
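The vanishing Relative Contrast mentioned at the end of this abstract is the standard distance-concentration statement; in the notation assumed here (not taken from the paper itself), with \(D^d_{\max}\) and \(D^d_{\min}\) the distances from a query point to its farthest and nearest sample points in dimension \(d\):

```latex
\mathrm{RC}_d \;=\; \frac{D^{d}_{\max} - D^{d}_{\min}}{D^{d}_{\min}}
\;\xrightarrow[\;d \to \infty\;]{}\; 0
```

That is, nearest and farthest neighbors become indistinguishable in relative terms as dimensionality grows, which is why the intrinsic dimension of texture datasets matters for classifier difficulty.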

                Author and article information

                Published: 20 June 2018
                arXiv ID: 1806.08037
                Record ID: 32efe7ce-e1a3-43f8-8759-23da4af96bdd
                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

                History: Paper was accepted at the 16th International Conference on Frontiers in Handwriting Recognition (ICFHR 2018)
                Subjects: cs.CV, cs.LG

                Keywords: Computer vision & pattern recognition, Artificial intelligence
