<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://aitorshuffle.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://aitorshuffle.github.io/" rel="alternate" type="text/html" /><updated>2026-02-24T02:52:36-08:00</updated><id>https://aitorshuffle.github.io/feed.xml</id><title type="html">Aitor Alvarez-Gila</title><subtitle>Computer vision and machine learning / deep learning scientist. Senior researcher at Tecnalia. Photographer (not really).</subtitle><author><name>Aitor Alvarez-Gila</name><email>aitor.alvarez@tecnalia.com</email></author><entry><title type="html">PhD defended!</title><link href="https://aitorshuffle.github.io/posts/2022/07/phddefence/" rel="alternate" type="text/html" title="PhD defended!" /><published>2022-07-26T00:00:00-07:00</published><updated>2022-07-26T00:00:00-07:00</updated><id>https://aitorshuffle.github.io/posts/2022/07/phd_defence</id><content type="html" xml:base="https://aitorshuffle.github.io/posts/2022/07/phddefence/"><![CDATA[<p>Last Tuesday, July 19th 2022, I successfully defended my PhD dissertation at CVC (Barcelona), obtaining an <em>Excellent Cum Laude</em> grade. The thesis is titled “Self-supervised learning for image-to-image translation in the small data regime” and was developed at <a href="http://www.cvc.uab.es/">Computer Vision Center’s</a> <a href="http://www.cvc.uab.es/lamp/">Learning And Machine Perception (LAMP)</a> team and <a href="https://www.tecnalia.com/">Tecnalia</a>, under the supervision of Joost van de Weijer and Estibaliz Garrote. Many thanks to both!</p>

<p>Get the <a href="https://aitorshuffle.github.io/files/20220719_phd_defence/AitorAlvarezGilaThesisV1.pdf">PhD thesis manuscript</a>.</p>

<p>Get the <a href="https://aitorshuffle.github.io/files/20220719_phd_defence/20220719_phd_defense_aitor_alvarez_v1.0_public_split.pdf">slides of the presentation</a>.</p>

<p>Watch the <a href="http://www.cvc.uab.es/cvctv/?id=348">video of the presentation</a>.</p>

<p><img src="https://aitorshuffle.github.io/images/20220719_aitor_alvarez_phd_defence.jpg" alt="fig1" />
<img src="https://aitorshuffle.github.io/images/20220719_aitor_alvarez_phd.jpg" alt="fig2" />
<img src="https://aitorshuffle.github.io/images/20220719_Aitor_Alvarez_628x409_announcing.jpg" alt="fig3" /></p>]]></content><author><name>Aitor Alvarez-Gila</name><email>aitor.alvarez@tecnalia.com</email></author><category term="phd" /><category term="deep learning" /><category term="computer vision" /><category term="neural networks" /><category term="self-supervised learning" /><category term="image-to-image mapping" /><category term="probabilistic programming" /><summary type="html"><![CDATA[Last Tuesday, July 19th 2022, I successfully defended my PhD dissertation at CVC (Barcelona), obtaining an Excellent Cum Laude grade. The thesis is titled “Self-supervised learning for image-to-image translation in the small data regime” and was developed at Computer Vision Center’s Learning And Machine Perception (LAMP) team and Tecnalia, under the supervision of Joost van de Weijer and Estibaliz Garrote. Many thanks to both!]]></summary></entry><entry><title type="html">Invited talk on deep learning at the Faculty of Science and Technology (UPV/EHU)</title><link href="https://aitorshuffle.github.io/posts/2019/09/news1/" rel="alternate" type="text/html" title="Invited talk on deep learning at the Faculty of Science and Technology (UPV/EHU)" /><published>2019-09-20T00:00:00-07:00</published><updated>2019-09-20T00:00:00-07:00</updated><id>https://aitorshuffle.github.io/posts/2019/09/dl_intro_ehu_science</id><content type="html" xml:base="https://aitorshuffle.github.io/posts/2019/09/news1/"><![CDATA[<p>Next Wednesday, October 2nd, I’ll be giving an introductory talk on Deep Learning techniques at the Faculty of Science and Technology of the University of the Basque Country (UPV/EHU).</p>

<p>The talk, titled <em>Introduction to deep learning. The techniques enabling the current Artificial Intelligence revolution</em>, will take place at 11:45 in the Adela Moyua room of the faculty, on the Leioa campus.</p>

<p><img src="https://aitorshuffle.github.io/images/20190920_cartel_practicas_tecnalia_02.jpg" alt="fig1" /></p>]]></content><author><name>Aitor Alvarez-Gila</name><email>aitor.alvarez@tecnalia.com</email></author><category term="scientific dissemination" /><category term="deep learning" /><category term="talks" /><summary type="html"><![CDATA[Next Wednesday, October 2nd, I’ll be giving an introductory talk on Deep Learning techniques at the Faculty of Science and Technology of the University of the Basque Country (UPV/EHU).]]></summary></entry><entry><title type="html">New journal paper on synthetic image blurring for self-supervised deep blur detection</title><link href="https://aitorshuffle.github.io/posts/2019/08/news1/" rel="alternate" type="text/html" title="New journal paper on synthetic image blurring for self-supervised deep blur detection" /><published>2019-08-25T00:00:00-07:00</published><updated>2019-08-25T00:00:00-07:00</updated><id>https://aitorshuffle.github.io/posts/2019/08/paper_self_supervised_blur_segmentation</id><content type="html" xml:base="https://aitorshuffle.github.io/posts/2019/08/news1/"><![CDATA[<p>Our paper <a href="https://aitorshuffle.github.io/publication/2019-08-25-alvarez-gila_self-supervised_2019">“Self-Supervised Blur Detection from Synthetically Blurred Scenes”</a> just got accepted for publication in the <a href="https://www.journals.elsevier.com/image-and-vision-computing"><em>Image and Vision Computing</em></a> journal (Q1).</p>

<p><img src="https://aitorshuffle.github.io/images/alvarez-gila_self-supervised_2019_fig1_abstract.png" alt="fig1" /></p>

<p>The paper, shared work between Tecnalia and the Computer Vision Center/Universitat Autònoma de Barcelona, uses synthetic blurring to show how self-supervised and weakly supervised learning techniques can train a Convolutional Neural Network to segment the defocused or motion-blurred areas of an image, without access to images annotated with real blur.</p>
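<p>The core self-supervision trick can be sketched as a toy example. The function below is my own simplification for illustration (a box blur on a random rectangle of a grayscale image), not the paper’s actual blurring model or pipeline: degrading a random region of a sharp image yields a training input whose segmentation label (the region’s mask) comes for free, with no human annotation.</p>

```python
import numpy as np

def synthetically_blur(image, kernel=9, rng=None):
    """Build one toy self-supervision sample: box-blur a random
    rectangle of a sharp grayscale image; the rectangle's mask is
    the free segmentation label. Illustrative simplification only."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    # Choose a random rectangular region to degrade.
    y0 = int(rng.integers(0, h // 2))
    x0 = int(rng.integers(0, w // 2))
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[y0:y0 + h // 3, x0:x0 + w // 3] = 1
    # Naive box blur: average of kernel x kernel shifted copies.
    pad = kernel // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    blurred = np.zeros((h, w))
    for dy in range(kernel):
        for dx in range(kernel):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= kernel * kernel
    # Composite: blurred inside the region, untouched elsewhere.
    out = np.where(mask.astype(bool), blurred, image.astype(float))
    return out, mask  # (network input, free segmentation target)
```

A network trained to predict <code>mask</code> from <code>out</code> never needs real blur annotations, which is the point the paper develops with far richer defocus and motion blur models.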

<p>More info and full text available <a href="https://aitorshuffle.github.io/publication/2019-08-25-alvarez-gila_self-supervised_2019">here</a>.</p>]]></content><author><name>Aitor Alvarez-Gila</name><email>aitor.alvarez@tecnalia.com</email></author><category term="self-supervised learning" /><category term="weakly-supervised learning" /><category term="blur detection" /><category term="blur segmentation" /><category term="defocus blur" /><category term="motion blur" /><category term="blur" /><category term="deep learning" /><category term="cnn" /><category term="convolutional neural nets" /><summary type="html"><![CDATA[Our paper “Self-Supervised Blur Detection from Synthetically Blurred Scenes” just got accepted for publication in the Image and Vision Computing journal (Q1).]]></summary></entry><entry><title type="html">PLOS ONE paper on deep learning techniques for the detection of lethal ventricular arrhythmia on out-of-hospital cardiac arrest patients</title><link href="https://aitorshuffle.github.io/posts/2019/05/news1/" rel="alternate" type="text/html" title="PLOS ONE paper on deep learning techniques for the detection of lethal ventricular arrhythmia on out-of-hospital cardiac arrest patients" /><published>2019-05-20T00:00:00-07:00</published><updated>2019-05-20T00:00:00-07:00</updated><id>https://aitorshuffle.github.io/posts/2019/05/paper_ecg_lstm_plos_one</id><content type="html" xml:base="https://aitorshuffle.github.io/posts/2019/05/news1/"><![CDATA[<p>Our paper <a href="https://aitorshuffle.github.io/publication/2019-05-20-picon_mixed_2019a">“Mixed Convolutional and Long Short-Term Memory Network for the Detection of Lethal Ventricular Arrhythmia”</a> was just published in <a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0216756"><em>PLOS ONE</em></a>.</p>

<p><img src="https://aitorshuffle.github.io/images/20190520_journal.pone.0216756.g001.PNG" alt="fig1" /></p>

<p>This is shared work between Tecnalia’s Computer Vision Group and the <a href="https://www.ehu.eus/en/web/biores/home">Research group on Bioengineering and Resuscitation (Biores)</a> from the University of the Basque Country (UPV/EHU). The paper shows how, by applying a neural network comprising convolutional and LSTM modules, we can successfully detect lethal ventricular arrhythmia in Out-of-Hospital Cardiac Arrest (OHCA) patients.</p>

<p>More info and full text available <a href="https://aitorshuffle.github.io/publication/2019-05-20-picon_mixed_2019a">here</a>.</p>]]></content><author><name>Aitor Alvarez-Gila</name><email>aitor.alvarez@tecnalia.com</email></author><category term="ecg" /><category term="out-of-hospital cardiac arrest" /><category term="deep learning" /><category term="cnn" /><category term="convolutional neural nets" /><category term="long short term memory" /><category term="lstm" /><category term="recurrent neural nets" /><summary type="html"><![CDATA[Our paper “Mixed Convolutional and Long Short-Term Memory Network for the Detection of Lethal Ventricular Arrhythmia” was just published in PLOS ONE.]]></summary></entry><entry><title type="html">_Introduction to deep learning. The techniques enabling the current Artificial Intelligence revolution_</title><link href="https://aitorshuffle.github.io/posts/2018/10/news1/" rel="alternate" type="text/html" title="_Introduction to deep learning. The techniques enabling the current Artificial Intelligence revolution_" /><published>2018-10-09T00:00:00-07:00</published><updated>2018-10-09T00:00:00-07:00</updated><id>https://aitorshuffle.github.io/posts/2018/10/dl_intro_enpresa_digitala_biz</id><content type="html" xml:base="https://aitorshuffle.github.io/posts/2018/10/news1/"><![CDATA[<p>Next Tuesday, October 9th, I’ll be giving an introductory talk on Deep Learning techniques within <a href="https://www.spri.eus/en/">SPRI’s</a> <em>Enpresa Digitala</em> initiative.
It will take place in building 204 of the Science Park of Biscay. Info and registrations: <a href="http://www.spri.eus/euskadinnova/es/enpresa-digitala/agenda/deep-learning-tecnicas-tras-revolucion-inteligencia-artificial/14624.aspx">http://www.spri.eus/euskadinnova/es/enpresa-digitala/agenda/deep-learning-tecnicas-tras-revolucion-inteligencia-artificial/14624.aspx</a>.</p>]]></content><author><name>Aitor Alvarez-Gila</name><email>aitor.alvarez@tecnalia.com</email></author><category term="scientific dissemination" /><category term="deep learning" /><category term="talks" /><summary type="html"><![CDATA[Next Tuesday, October 9th, I’ll be giving an introductory talk on Deep Learning techniques within SPRI’s Enpresa Digitala initiative. It will take place in building 204 of the Science Park of Biscay. Info and registrations: http://www.spri.eus/euskadinnova/es/enpresa-digitala/agenda/deep-learning-tecnicas-tras-revolucion-inteligencia-artificial/14624.aspx.]]></summary></entry><entry><title type="html">New journal paper on deep learning-based plant disease detection in the wild</title><link href="https://aitorshuffle.github.io/posts/2018/04/news3/" rel="alternate" type="text/html" title="New journal paper on deep learning-based plant disease detection in the wild" /><published>2018-04-16T00:00:00-07:00</published><updated>2018-04-16T00:00:00-07:00</updated><id>https://aitorshuffle.github.io/posts/2018/04/paper_deep_plant_diseases</id><content type="html" xml:base="https://aitorshuffle.github.io/posts/2018/04/news3/"><![CDATA[<p>Our paper <a href="https://aitorshuffle.github.io/publication/2018-04-16-johannes_deep_2018">“Deep convolutional neural networks for mobile capture device-based crop disease classification in the wild”</a> just got accepted for publication in the <a href="https://www.journals.elsevier.com/computers-and-electronics-in-agriculture"><em>Computers and Electronics in Agriculture</em> journal</a>.</p>

<p><img src="https://aitorshuffle.github.io/images/20180416_basf_app.png" alt="fig1" /></p>

<p>The paper, presented in collaboration with <a href="https://www.basf.com">BASF</a>, extends <a href="https://aitorshuffle.github.io/publication/2017-05-09-johannes_automatic_2017">our previous work</a> and describes how we migrated our classical computer vision workflow-based disease classification engine to a solution based on Deep Convolutional Neural Networks, exhaustively tested under real field conditions.</p>

<p>More info and the full text will be available <a href="https://aitorshuffle.github.io/publication/2018-04-16-johannes_deep_2018">here</a> soon.</p>]]></content><author><name>Aitor Alvarez-Gila</name><email>aitor.alvarez@tecnalia.com</email></author><category term="tecnalia" /><category term="basf" /><category term="agro" /><category term="plants" /><category term="disease" /><category term="image understanding" /><category term="deep learning" /><category term="cnn" /><category term="convolutional neural nets" /><category term="neural nets" /><summary type="html"><![CDATA[Our paper “Deep convolutional neural networks for mobile capture device-based crop disease classification in the wild” just got accepted for publication in the Computers and Electronics in Agriculture journal.]]></summary></entry><entry><title type="html">Our paper _On the Duality Between Retinex and Image Dehazing_, accepted to CVPR 2018</title><link href="https://aitorshuffle.github.io/posts/2018/04/news2/" rel="alternate" type="text/html" title="Our paper _On the Duality Between Retinex and Image Dehazing_, accepted to CVPR 2018" /><published>2018-04-11T00:00:00-07:00</published><updated>2018-04-11T00:00:00-07:00</updated><id>https://aitorshuffle.github.io/posts/2018/04/duality_retinex_dehazing_cvpr2018</id><content type="html" xml:base="https://aitorshuffle.github.io/posts/2018/04/news2/"><![CDATA[<p><img src="https://aitorshuffle.github.io/images/galdran_duality_2017_fig1_screenshot.png" alt="fig1" /></p>

<p>Our paper <a href="https://aitorshuffle.github.io/publication/2018-04-11-galdran_duality_2018">On the Duality Between Retinex and Image Dehazing</a> has been accepted to <a href="http://cvpr2018.thecvf.com/">CVPR 2018</a>, to be held next June in Salt Lake City. The camera-ready version of the paper is now available on arXiv through <a href="https://arxiv.org/abs/1712.02754">this link</a>.</p>

<p>In this work, we prove that image enhancement algorithms based on the Retinex color vision model and dehazing algorithms based on Koschmieder’s model are related at a fundamental modelling level. Extensive experiments show that we can obtain state-of-the-art image dehazing results by applying Retinex implementations to intensity-inverted images and then inverting the result back.</p>]]></content><author><name>Aitor Alvarez-Gila</name><email>aitor.alvarez@tecnalia.com</email></author><category term="publications" /><category term="retinex" /><category term="dehazing" /><category term="conferences" /><category term="CVPR" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">_Introduction to deep learning_, invited lecture at ESI Bilbao (UPV/EHU)</title><link href="https://aitorshuffle.github.io/posts/2018/04/news1/" rel="alternate" type="text/html" title="_Introduction to deep learning_, invited lecture at ESI Bilbao (UPV/EHU)" /><published>2018-04-10T00:00:00-07:00</published><updated>2018-04-10T00:00:00-07:00</updated><id>https://aitorshuffle.github.io/posts/2018/04/dl_intro_class_at_ehu_esi</id><content type="html" xml:base="https://aitorshuffle.github.io/posts/2018/04/news1/"><![CDATA[<p><img src="https://aitorshuffle.github.io/images/20180410_intro_dl_ehu_esi.png" alt="fig1" /></p>

<p>Tomorrow I’ll be giving an introductory lecture on Deep Learning techniques at the School of Engineering of Bilbao, University of the Basque Country.
Back to where it all started!</p>]]></content><author><name>Aitor Alvarez-Gila</name><email>aitor.alvarez@tecnalia.com</email></author><category term="teaching" /><category term="deep learning" /><category term="talks" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Slides, updated arxiv and train-test splits available for our Adversarial RGB to Hyperspectral paper</title><link href="https://aitorshuffle.github.io/posts/2018/03/news1/" rel="alternate" type="text/html" title="Slides, updated arxiv and train-test splits available for our Adversarial RGB to Hyperspectral paper" /><published>2018-03-13T00:00:00-07:00</published><updated>2018-03-13T00:00:00-07:00</updated><id>https://aitorshuffle.github.io/posts/2018/03/pbdl2017_slides_splits_adv_rgb2hs</id><content type="html" xml:base="https://aitorshuffle.github.io/posts/2018/03/news1/"><![CDATA[<p><img src="https://aitorshuffle.github.io/images/20180313_adv_rgb2hs_slides.png" alt="fig1" /></p>

<p>The slides of our presentation at the <em>Physics-Based Vision meets Deep Learning</em> (PBDL) workshop at ICCV 2017, corresponding to our paper <em>Adversarial Networks for Spatial Context-Aware Spectral Image Reconstruction from RGB</em>, are now available <a href="https://aitorshuffle.github.io/publication/2017-10-10-alvarez-gila_adversarial_2017">here</a>.</p>

<p>In addition, we provide the <a href="http://icvl.cs.bgu.ac.il/hyperspectral/">ICVL dataset</a> train-test splits used during the experimental evaluation.
We have also updated the arXiv version of the paper, which now includes this information in the supplementary material section, along with updated quantitative results in Table 1 for the other authors’ methods.</p>

<p>Our paper <a href="https://aitorshuffle.github.io/publication/2017-10-10-alvarez-gila_adversarial_2017">Adversarial Networks for Spatial Context-Aware Spectral Image Reconstruction from RGB</a> has been accepted for oral presentation at the ICCV 2017 Workshop on <a href="https://pbdl2017.github.io/">Physics-Based Vision meets Deep Learning</a>, to be held next October 23rd in Venice, Italy.</p>

<p>The paper shows how we can use the Generative Adversarial Network (GAN) framework in a conditional setting to accurately reconstruct a hyperspectral image composed of 31 channels, taking only an RGB image as input.</p>
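<p>Schematically, the learned mapping is an image-to-image function that preserves spatial resolution while lifting 3 RGB channels to 31 spectral bands. The toy stand-in below (a per-pixel random linear lift; the function name and shapes are illustrative assumptions, not the paper’s conditional GAN) only shows the input/output dimensionality involved.</p>

```python
import numpy as np

def toy_rgb_to_spectral(rgb, weights=None, n_bands=31, rng=None):
    """Toy stand-in for the RGB-to-hyperspectral mapping: a per-pixel
    (1x1) linear lift from 3 channels to n_bands channels. The real
    model is a conditional GAN that also exploits spatial context;
    this sketch only illustrates the shapes involved."""
    rng = np.random.default_rng() if rng is None else rng
    if weights is None:
        # Random stand-in for the parameters a trained generator would learn.
        weights = rng.standard_normal((3, n_bands))
    # (H, W, 3) @ (3, n_bands) -> (H, W, n_bands)
    return rgb @ weights
```

The real difficulty, which the adversarial setting addresses, is that the 3-to-31 lift is severely underdetermined per pixel, so spatial context is needed to disambiguate it.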

<p>The full program and schedule for the workshop can be found <a href="https://pbdl2017.github.io/program.html">here</a>, and a detailed PDF version with the program for all the workshops is available <a href="http://iccv2017.thecvf.com/files/ICCV_2017_Workshops_Tutorials.pdf">here</a>.</p>]]></content><author><name>Aitor Alvarez-Gila</name><email>aitor.alvarez@tecnalia.com</email></author><category term="tecnalia" /><category term="talks" /><category term="deep learning" /><summary type="html"><![CDATA[]]></summary></entry></feed>