A. Alvarez-Gila, J. van de Weijer, and E. Garrote, “Adversarial Networks for Spatial Context-Aware Spectral Image Reconstruction from RGB,” presented at the 1st International Workshop on Physics Based Vision meets Deep Learning at ICCV2017, Venice, Italy, 2017.

Full text preprint

Definition of the train-test splits used in the experimental evaluation

Slides from our presentation at the PBDL Workshop at ICCV 2017


Hyperspectral signal reconstruction aims at recovering the original spectral input that produced a certain trichromatic (RGB) response from a capturing device or observer. Given the heavily underconstrained, non-linear nature of the problem, traditional techniques leverage different statistical properties of the spectral signal in order to build informative priors from real-world object reflectances for constructing such an RGB-to-spectral mapping. However, most of them treat each sample independently, and thus do not benefit from the contextual information that the spatial dimensions can provide. We pose hyperspectral natural image reconstruction as an image-to-image mapping learning problem, and apply a conditional generative adversarial framework to help capture spatial semantics. This is the first time Convolutional Neural Networks (and, in particular, Generative Adversarial Networks) are used to solve this task. Quantitative evaluation shows a Root Mean Squared Error (RMSE) drop of 33.2% and a Relative RMSE drop of 54.0% on the ICVL natural hyperspectral image dataset.
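The abstract reports improvements in RMSE and Relative RMSE. As a rough illustration of how these two metrics are commonly computed for hyperspectral reconstruction, here is a minimal NumPy sketch; note that the exact averaging order and normalization used in the paper may differ, and the array shapes and `eps` guard are assumptions for this example.

```python
import numpy as np

def rmse(gt, rec):
    """Root Mean Squared Error over all pixels and spectral bands."""
    return float(np.sqrt(np.mean((gt - rec) ** 2)))

def relative_rmse(gt, rec, eps=1e-8):
    """RMSE of the per-element error relative to the ground-truth value.

    eps guards against division by zero for dark pixels (an assumption
    for this sketch; the paper may handle this differently).
    """
    return float(np.sqrt(np.mean(((gt - rec) / (gt + eps)) ** 2)))

# Toy example: a 4x4 spatial patch with 31 spectral bands
# (31 bands matches the ICVL dataset's common resampling).
rng = np.random.default_rng(0)
gt = rng.uniform(0.1, 1.0, size=(4, 4, 31))
rec = gt + rng.normal(0.0, 0.01, size=gt.shape)

print("RMSE:", rmse(gt, rec))
print("Relative RMSE:", relative_rmse(gt, rec))
```

Relative RMSE weights errors by the magnitude of the true spectral value, so it penalizes deviations in dim regions more heavily than plain RMSE.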

Figure: Adversarial RGB-to-hyperspectral image reconstruction.


@inproceedings{alvarezgila2017adversarial,
  title = {Adversarial Networks for Spatial Context-Aware Spectral Image Reconstruction from RGB},
  author = {Alvarez-Gila, Aitor and van de Weijer, Joost and Garrote, Estibaliz},
  booktitle = {2017 IEEE International Conference on Computer Vision Workshops (ICCVW)},
  year = {2017},
  month = oct,
  pages = {480--490},
  doi = {10.1109/ICCVW.2017.64}
}