Evaluating Vision Transformer Models for Land Cover Classification on the BigEarthNet Dataset: A Comparative Analysis
Abstract:
The use of satellite imagery for land cover classification has gained significant attention due to its applications in agriculture, urban planning, and environmental monitoring. The BigEarthNet dataset stands out as one of the largest publicly available satellite image archives, featuring multispectral Sentinel-2 image patches acquired over ten European countries. However, the dataset's scale and the complexity of its multi-label annotations present substantial challenges for conventional machine learning algorithms.
This paper investigates the performance of Vision Transformer (ViT) models for land cover classification on the BigEarthNet dataset. ViTs, a cutting-edge deep learning architecture, have demonstrated promising results in computer vision tasks. We compare the performance of ViT models against baseline results obtained using traditional machine learning algorithms on the same dataset.
Our experiments reveal that ViT models outperform the baselines in classification accuracy, particularly on complex land cover classes. We also observe that some ViT variants are more effective than others, and that their performance depends on the amount of training data and the complexity of the land cover classes.
To the best of our knowledge, this is the first study to evaluate the performance of ViT models on the BigEarthNet dataset. Our findings highlight the potential of ViT models for land cover classification on large-scale satellite image datasets. This study serves as a reference for researchers and practitioners interested in utilizing ViT models for land cover classification on the BigEarthNet dataset.
Keywords: Vision Transformer, ViT, BigEarthNet dataset, land cover classification, satellite imagery, deep learning, machine learning, remote sensing.
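The abstract does not specify which ViT variants or hyperparameters were used. As a purely illustrative sketch of the architecture under study, the following single-block, single-head ViT forward pass uses random (untrained) weights, and assumes 12-band Sentinel-2 input patches and the 19-class multi-label BigEarthNet nomenclature; the patch size and embedding dimension are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def vit_forward(image, patch=4, d=32, n_classes=19):
    """Minimal single-head, single-block ViT forward pass (random weights)."""
    H, W, C = image.shape
    # 1. Split the image into non-overlapping patches and flatten each one.
    ph, pw = H // patch, W // patch
    patches = image.reshape(ph, patch, pw, patch, C).transpose(0, 2, 1, 3, 4)
    tokens = patches.reshape(ph * pw, patch * patch * C)
    # 2. Linear patch embedding plus position embedding (random stand-ins
    #    for learned parameters).
    W_embed = rng.normal(0, 0.02, (tokens.shape[1], d))
    pos = rng.normal(0, 0.02, (ph * pw, d))
    x = tokens @ W_embed + pos
    # 3. One self-attention block (single head; layer norm and the MLP
    #    sub-block are omitted for brevity).
    Wq, Wk, Wv = (rng.normal(0, 0.02, (d, d)) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))
    x = x + attn @ v
    # 4. Mean-pool the tokens and project to per-class logits; a sigmoid
    #    yields independent probabilities, matching BigEarthNet's
    #    multi-label setting (an image can carry several land cover labels).
    W_head = rng.normal(0, 0.02, (d, n_classes))
    logits = x.mean(axis=0) @ W_head
    return 1.0 / (1.0 + np.exp(-logits))

# A 16x16 patch with 12 spectral bands, standing in for a Sentinel-2 crop.
probs = vit_forward(rng.normal(size=(16, 16, 12)))
print(probs.shape)  # (19,)
```

In a real experiment the random matrices would be learned parameters, the block would be stacked many times, and training would minimize a per-class binary cross-entropy loss rather than a softmax cross-entropy, because the land cover labels are not mutually exclusive.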