Learning transfer using deep convolutional features for remote sensing image retrieval

Ahmad Alzu'bi, Abbes Amira, Naeem Ramzan

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)

Abstract

Convolutional neural networks (CNNs) have recently attracted notable interest due to their superior performance in computer vision applications, including image retrieval. This paper introduces an optimized bilinear CNN architecture for remote sensing image retrieval and investigates the capability of deep neural networks to transfer learning from general data to a domain-specific application, i.e. remote sensing image retrieval. The proposed deep learning model involves two parallel feature extractors that formulate image representations from local patches at deep convolutional layers. The extracted features are approximated into low-dimensional features by a polynomial kernel projection, and each geographic image is represented by a discriminative compact descriptor using a modified compact pooling scheme followed by feature normalization. End-to-end training is then performed to generate the final fine-tuned network model. The model is evaluated on the standard UCMerced land-use/land-cover (LULC) dataset of high-resolution aerial imagery. The conducted experiments show high performance in extracting and learning complex image features, which affirms the superiority of deep bilinear features in the context of remote sensing image retrieval.
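
The pipeline described in the abstract (two parallel convolutional feature extractors, a polynomial kernel projection to a compact descriptor, feature normalization, and end-to-end fine-tuning) can be illustrated with a minimal PyTorch sketch. This is not the authors' code: the VGG-16 backbones, the Tensor Sketch realization of the polynomial kernel projection, the 8192-dimensional output, and the signed-square-root plus L2 normalization are assumptions chosen for illustration, as the abstract does not specify these details.

```python
# Illustrative sketch (assumed details, not the paper's implementation) of a
# bilinear CNN with compact pooling via the Tensor Sketch approximation of a
# polynomial kernel, followed by signed-square-root and L2 normalization.
import torch
import torch.nn as nn
import torchvision.models as models


class CompactBilinearPooling(nn.Module):
    """Tensor Sketch approximation of the bilinear (outer-product) kernel."""

    def __init__(self, in_dim1, in_dim2, out_dim=8192):
        super().__init__()
        self.out_dim = out_dim
        # Random but fixed hash indices and signs define the two count sketches.
        for name, in_dim in (("1", in_dim1), ("2", in_dim2)):
            h = torch.randint(out_dim, (in_dim,))
            s = (torch.randint(2, (in_dim,)) * 2 - 1).float()
            sketch = torch.zeros(in_dim, out_dim)
            sketch[torch.arange(in_dim), h] = s
            self.register_buffer(f"sketch{name}", sketch)

    def forward(self, x1, x2):
        # x1, x2: (B, C, H, W) local descriptors from the two parallel streams.
        b, _, h, w = x1.shape
        x1 = x1.flatten(2).transpose(1, 2)            # (B, HW, C1)
        x2 = x2.flatten(2).transpose(1, 2)            # (B, HW, C2)
        p1 = torch.fft.rfft(x1 @ self.sketch1, dim=-1)
        p2 = torch.fft.rfft(x2 @ self.sketch2, dim=-1)
        phi = torch.fft.irfft(p1 * p2, n=self.out_dim, dim=-1)  # (B, HW, d)
        phi = phi.sum(dim=1) / (h * w)                # average-pool over locations
        phi = torch.sign(phi) * torch.sqrt(phi.abs() + 1e-10)   # signed sqrt
        return nn.functional.normalize(phi, dim=-1)   # L2 normalization


class BilinearRetrievalNet(nn.Module):
    """Two parallel conv streams whose local features are bilinearly pooled."""

    def __init__(self, out_dim=8192):
        super().__init__()

        def make_stream():
            # Truncate an ImageNet-pretrained VGG-16 at its last conv layer
            # (backbone choice is an assumption for this sketch).
            vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
            return nn.Sequential(*list(vgg.features.children())[:-1])

        self.stream_a = make_stream()
        self.stream_b = make_stream()
        self.pool = CompactBilinearPooling(512, 512, out_dim)

    def forward(self, images):
        return self.pool(self.stream_a(images), self.stream_b(images))


# Example: embed a batch of 224x224 RGB images into compact descriptors
# suitable for nearest-neighbour retrieval.
model = BilinearRetrievalNet(out_dim=8192).eval()
with torch.no_grad():
    descriptors = model(torch.randn(4, 3, 224, 224))   # shape (4, 8192)
```

In a retrieval setting such descriptors would be compared by Euclidean or cosine distance after the network has been fine-tuned end-to-end on the target remote sensing data.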

Original language: English
Pages (from-to): 1-8
Number of pages: 8
Journal: IAENG International Journal of Computer Science
Volume: 46
Issue number: 4
Early online date: 20 Nov 2019
Publication status: Published - 30 Nov 2019

Keywords

  • Deep learning
  • Image retrieval
  • Remote sensing
