Segmentation of Satellite Images of Solar Panels Using Fast Deep Learning Model

M. Arif Wani, Tahir Mujtaba

Abstract


Segmenting satellite images provides an easy and cost-effective way to detect solar arrays installed on rooftops and on the ground over a region. Solar panel detection is the first step towards estimating the energy generated by distributed solar arrays connected to a conventional electric grid. Segmentation models for small devices must be lightweight in terms of computational effort. State-of-the-art deep learning segmentation models have the disadvantage of requiring long training times, a large number of floating-point operations (FLOPS) and tens of millions of parameters, which makes them less suitable for devices with limited computational power. This paper proposes a deep learning segmentation architecture that is suitable for small devices. The proposed architecture combines features of the MobileNet classification architecture and the U-Net architecture in such a way that it is efficient in terms of computational effort and produces segmentation results with good accuracy. The results of the proposed model are compared with those obtained by various state-of-the-art segmentation models. The comparison demonstrates that the proposed model is computationally efficient: it requires fewer model parameters, less training time and fewer FLOPS, while producing segmentation results with competitive accuracy.
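
The full text gives the exact layer configuration; as a rough, hypothetical illustration of the idea described above, the following PyTorch sketch combines MobileNet-style depthwise separable convolutions (a 3x3 depthwise convolution followed by a 1x1 pointwise convolution) with U-Net-style skip connections between an encoder and a decoder. The class names, layer widths and depths (DepthwiseSeparableConv, MobileUNet, the 32-64-128-256 channel progression) are illustrative assumptions, not the authors' published architecture.

# Hypothetical sketch, not the authors' code: a U-Net-style encoder-decoder
# built from MobileNet-style depthwise separable convolutions to reduce
# parameters and FLOPS. All widths and depths are illustrative assumptions.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.block = nn.Sequential(
            # Depthwise: one 3x3 filter per input channel (groups=in_ch).
            nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                      groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
            # Pointwise: 1x1 convolution mixes information across channels.
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class MobileUNet(nn.Module):
    """Encoder downsamples with strided separable convs; decoder upsamples
    and fuses U-Net-style skip connections from the encoder."""

    def __init__(self, n_classes=1):
        super().__init__()
        self.enc1 = DepthwiseSeparableConv(3, 32)               # H x W
        self.enc2 = DepthwiseSeparableConv(32, 64, stride=2)    # H/2
        self.enc3 = DepthwiseSeparableConv(64, 128, stride=2)   # H/4
        self.enc4 = DepthwiseSeparableConv(128, 256, stride=2)  # H/8
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.dec3 = DepthwiseSeparableConv(256 + 128, 128)
        self.dec2 = DepthwiseSeparableConv(128 + 64, 64)
        self.dec1 = DepthwiseSeparableConv(64 + 32, 32)
        self.head = nn.Conv2d(32, n_classes, 1)  # per-pixel panel logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        e4 = self.enc4(e3)
        # Decoder: upsample, concatenate the matching encoder feature map,
        # then refine with another separable block.
        d3 = self.dec3(torch.cat([self.up(e4), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))
        return self.head(d1)


if __name__ == "__main__":
    model = MobileUNet()
    logits = model(torch.randn(1, 3, 256, 256))
    print(logits.shape)  # torch.Size([1, 1, 256, 256])
    print(sum(p.numel() for p in model.parameters()))  # well under 1M

The efficiency comes from the separable block itself: a standard 3x3 convolution costs 9·Cin·Cout multiplications per output position, while the depthwise-plus-pointwise pair costs 9·Cin + Cin·Cout, a reduction by roughly a factor of 1/Cout + 1/9, which is the source of the parameter and FLOPS savings claimed in the abstract.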


Keywords


Computer vision, deep learning, semantic segmentation, convolutional neural networks, depthwise separable convolution, satellite imagery, solar panel arrays



DOI (PDF): https://doi.org/10.20508/ijrer.v11i1.11607.g8167


