Journal of Food Science & Nutrition Category: Agriculture Type: Research Article

Incorporating Deep Learning into the Diagnosis of Banana Leaf Spot Diseases for the Protection of Banana Crops

Cihan Unal1*
1 Department of computer programming, Hacettepe University, Ankara, Turkey

*Corresponding Author(s):
Cihan Unal
Department Of Computer Programming, Hacettepe University, Ankara, Turkey
Email:cihan.unal@hacettepe.edu.tr / cihanunal42@gmail.com

Received Date: Nov 06, 2024
Accepted Date: Nov 21, 2024
Published Date: Nov 30, 2024

Abstract

Banana crops play a pivotal role in securing global food supplies and supporting economic stability. However, they are confronted with significant challenges stemming from a variety of diseases that not only diminish yields but also compromise fruit quality. Artificial intelligence, especially deep learning, plays a pivotal role in tackling this challenge by leveraging advanced algorithms and data analysis techniques to enhance disease detection and diagnosis in banana crops, thereby contributing significantly to their protection and preservation. To address this challenge, we employ the "Banana Leaf Spot Diseases (BananaLSD) Dataset," comprising images of major banana leaf spot diseases and healthy leaves, meticulously labeled by plant pathologists. Using deep learning models, including DenseNet-201, EfficientNet-b0, and VGG16, we achieved high disease classification accuracy rates, with DenseNet-201 reaching 98.12% accuracy. Model performance is analyzed using standard performance metrics and visualized with the Grad-CAM technique. These results underscore the potential of deep learning for precise banana leaf disease diagnosis, offering significant implications for crop preservation, economic stability, and global food security.

Keywords

Banana leaves; Classification; Deep learning; DenseNet-201; EfficientNet-b0; VGG16

Introduction

In global agriculture, the cultivation of bananas, which originated in Southeast Asia, holds significant importance [1]. Bananas serve as a crucial staple food, contributing approximately 16% of overall world fruit production and standing second only to citrus fruits in terms of volume [1]. These versatile fruits are categorized as dessert bananas and plantains; they are consumed directly and in various processed forms, and they contain beneficial bioactive compounds that promote human health [1]. Notably, about 15% of the world's banana production is exported to Western countries for consumption. India plays a major role, providing around 25.70% of global banana production [2]. The Philippines, Ecuador, Indonesia, and Brazil together account for about 20% of production, while the United States is the leading importer, accounting for nearly 18% of global banana imports [2]. Diseases and changing climates have a severe impact on banana plants and can potentially cause losses of up to 100% of a country's banana production and exports. Four main diseases, black Sigatoka, fusarium wilt (often called Panama wilt), Xanthomonas wilt, and bunchy top virus, pose major threats to banana plants [2]. The economy relies heavily on agricultural productivity, and the detection of plant diseases is essential due to their frequent occurrence [3]. It is therefore imperative to employ automated methods that utilize computer vision and image processing techniques for identifying and categorizing plant diseases [3-5]. In this study, the primary emphasis is directed towards the banana leaves rather than the banana fruit itself. Notable leaf-related diseases in bananas include Panama disease, Moko disease, Sigatoka disease, black spot, banana bunchy top, infectious chlorosis, banana streak virus, and banana bract mosaic virus [1,2]. In contrast to the diverse array of banana leaf diseases examined in earlier research, this study centers its attention on three diseases: (a) Sigatoka, (b) Cordana, and (c) Pestalotiopsis. Collectively, these diseases present notable challenges to banana cultivation, underscoring the necessity of adept disease management and control strategies to ensure robust plant growth and optimal fruit output.

(a) Sigatoka: Sigatoka diseases represent a group of fungal ailments that target banana plants. Notably, black sigatoka and yellow sigatoka are prominent among them. These diseases give rise to distinctive markings and injuries on leaves, subsequently hampering photosynthesis and leading to reduced yields. They present a substantial threat to worldwide banana cultivation and mandate meticulous control measures to avert noteworthy economic setbacks [1,3,6,7]. 

(b) Cordana: Cordana leaf spot, caused by the fungus Cordana musae, is an affliction of banana plants. It manifests as circular or oval marks with a yellow halo on leaves, which can eventually trigger leaf loss and diminished fruit quality. Because the disease thrives in damp and humid environments, effective management approaches are essential to counteract its impact [1,3,6,7].

(c) Pestalotiopsis: Pestalotiopsis, a fungal pathogen, prompts leaf spotting and blight across diverse plant species, including bananas. This results in the emergence of irregularly shaped blemishes featuring brown or gray centers and darker perimeters on banana leaves. Intense infections can culminate in leaf loss and the compromise of plant health overall. The mitigation of Pestalotiopsis involves the application of cultural practices alongside fungicidal treatments [1,3,6,7]. 

The primary objective of this research is to confront the substantial threat posed by various diseases to banana crops, which are of utmost importance for both global food security and economic stability. These diseases can lead to reduced yields, stunted plant growth, and even the death of banana plants. To address this pressing issue, the study employs the "Banana Leaf Spot Diseases (BananaLSD) Dataset." This dataset comprises images depicting three prominent banana leaf spot diseases, Sigatoka, Cordana, and Pestalotiopsis, alongside images of healthy leaves. To detect diseases in banana plants more effectively, this study uses three deep learning models: DenseNet-201, EfficientNet-b0, and VGG16. These models accurately identify banana leaf diseases.

Related Works

This section summarizes related studies from the literature on the classification of plant leaf diseases.

Bhuiyan et al. used a combination of deep learning and Bayesian optimization to develop a model called "BananaSqueezeNet" that analyzes images of banana leaves and identifies diseases. They compared this model against other advanced architectures, such as EfficientNetB0, MobileNetV3, ResNet-101, ResNet-50, InceptionNet-V3, and VGG16, and BananaSqueezeNet performed better, with an overall accuracy of 96.25%. Moreover, BananaSqueezeNet could also detect seven other diseases affecting banana leaves, fruits, and stems, achieving an accuracy of 95.13%. The researchers collected an image dataset called the "Banana Leaf Spot Diseases (BananaLSD)" dataset, which contains images of three specific banana leaf diseases: Pestalotiopsis, Sigatoka, and Cordana [1].

In their research, Narayanan et al. aimed to create a system combining several technologies to effectively detect and categorize diseases in banana plants. They noted that this is particularly important in India, where banana cultivation is economically significant but diseases and pests cause farmers substantial losses. The proposed system identifies diseases with an impressive 99% accuracy and helps farmers by detecting diseases early so they can take action to protect their plants. The study introduced a hybrid CNN-based system for precise banana disease detection and classification. Prominent diseases include Xanthomonas wilt, fusarium wilt, bunchy top virus, and black Sigatoka, each causing specific damage. The research explores AI, deep learning (CNN), and machine learning (SVM) for detecting and classifying diseases in diverse crops [2].

Krishnan et al. noted that agriculture is essential for food production and that bananas are economically valuable, but pests and diseases can damage banana plants; imaging technology and intelligent algorithms can help find and address these problems. They used a combination of image processing techniques to detect and identify diseases on banana leaves. In their pipeline, the images are first preprocessed by converting their color space and adjusting their size and quality. Then, Total Generalized Variation Fuzzy C-Means (TGVFCMS) segmentation is used to separate the banana leaf from the background. Finally, a Convolutional Neural Network (CNN) determines whether the leaf is diseased. This CNN outperformed other methods, achieving 93.45% accuracy [8]. In another study, Kumar et al. systematically reviewed recent research on computer vision techniques such as machine learning, deep learning, CNNs, and image processing for detecting and classifying fungal and bacterial plant diseases. Deep learning-based methods demonstrated a significantly higher average accuracy of 98.80%, in contrast to machine learning-based techniques, which achieved an average accuracy of 92.20% [9]. The same review reports that a study on plant leaves (such as rose, lemon, mango, and banana) using k-means clustering, a genetic algorithm, and SVM achieved 95.71% accuracy, while another study on banana leaves using a CNN with total generalized variation fuzzy C-means segmentation achieved 93.45% accuracy [9].

Vipinadas and Thamizharasi conducted a comprehensive analysis of disease identification and grading in banana leaves, with a specific focus on Black Sigatoka and Panama wilt disease. The study compared the performance of two prominent classifiers, the Support Vector Machine (SVM) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), using a confusion matrix-based evaluation. The results revealed a 100% accuracy rate for disease identification when employing the ANFIS classifier, while SVM achieved a respectable accuracy of 92%. Furthermore, the study extended its scope beyond disease identification, utilizing the ANFIS classifier for disease grading [7].

Another implemented model demonstrated strong performance, achieving an accuracy of 90%. This was attributed to a combination of GLCM (Gray-Level Co-occurrence Matrix) and NGTDM (Neighboring Gray-Tone Difference Matrix) features extracted from leaf images. The extracted features were categorized into three classes: healthy leaves, Sigatoka disease, and Xanthomonas disease of banana leaves. To classify these diseases, a Support Vector Machine (SVM) classifier was employed [6,10].

Aruraj et al. presented an image processing method based on Local Binary Patterns (LBP) to detect and classify diseases such as Black Sigatoka and Cordana leaf spot in banana plants, with the potential to boost agricultural productivity in Asia and Africa. In this research, LBP-based image processing was combined with machine learning algorithms such as SVM and KNN to categorize diseases in banana plants. The findings indicate an 89.10% accuracy rate for Black Sigatoka and a 90.90% accuracy rate for Cordana leaf spot classification [11].

Robert et al. proposed a deep learning approach known as Heap Auto Encoders (HAEs) for the accurate classification of diseases affecting banana leaves, tackling the existing difficulties in preserving healthy banana plants rich in potassium and health-promoting attributes. Dropout and the Rectified Linear Unit (ReLU) activation function are also implemented within the HAE framework. The HAE technique achieved a high classification accuracy of 99.35% [12].

Dat et al. reported that the accuracy of their proposed method is 89.80% for identifying color-based defects and 94.7% for identifying torn leaves. The study used image processing, segmentation, labeling, size filtering, color analysis, boundary feature extraction, pre-processing, brightness adjustment, HSV color conversion, contour analysis, Laplacian filtering, and convex hull techniques.

Table 1 presents a summary of prior research on the classification of plant leaf diseases.

No. | Method | Accuracy | Classes | Reference
1 | BananaSqueezeNet | 96.25% | 3 (same dataset) | [1]
2 | CNN + SVM | 99% | 4 | [2]
3 | TGVFCMS + CNN | 93.45% | 5 | [8]
4 | SVM | 92% | 2 | [7]
5 | GLCM + NGTDM, SVM | 90% | 3 | [6]
6 | SVM for Healthy vs. Black Sigatoka; SVM for Healthy vs. Cordana leaf spot | 89.10%; 90.90% | 2 | [11]
7 | Heap Auto Encoders, ReLU | 99.35% | 3 | [12]

Table 1: Related works for plant leaf diseases.

Material And Methods

This section encompasses the classification models employed for banana leaf disease categorization, along with the performance metrics utilized to evaluate the efficacy of these methods. This section also includes a comprehensive description of the dataset utilized in the study. 

Dataset Description 

Banana leaves are susceptible to a range of diseases, which negatively affect yield through outcomes like decreased fruit production, hindered growth, and even plant mortality. This dataset encompasses images portraying three common banana leaf spot diseases (Sigatoka, Cordana, Pestalotiopsis) as well as images of healthy leaves. This compilation serves the purpose of aiding in the identification and management of these diseases [13]. This dataset has been made available through Mendeley Data. The dataset includes images specifically centered on prevalent banana leaf spot diseases - Sigatoka, Cordana, Pestalotiopsis - alongside images of healthy leaves. It is composed of two sets: the Original Set, comprising 937 RGB images representing both disease-infected and healthy leaves, and the Augmented Set, encompassing 1600 images (400 images per category) that have been modified through techniques like blur, flip, crop, contrast adjustments, shear, translation, and rotational shearing. The dataset utilized in this study is the augmented set. These images adhere to a standard resolution of 224 x 224 pixels and were taken using smartphone cameras within the banana fields of Bangladesh during June 2021 [13] (Figure 1).

 Figure 1: Banana leaf spot diseases dataset image samples.
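As an illustrative sketch (not part of the original study), the augmented set can be loaded with PyTorch/torchvision as follows. The folder layout, the 80/20 train/test split, and the ImageNet normalization statistics are assumptions made for demonstration; the paper does not state these details.

```python
# Minimal sketch (PyTorch/torchvision) of loading the augmented BananaLSD images.
# The folder layout "BananaLSD/AugmentedSet/<class_name>/" is assumed for
# illustration; adjust it to the structure of the downloaded Mendeley archive.
import torch
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                     # dataset images are 224 x 224
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics, matching
                         std=[0.229, 0.224, 0.225]),   # the pretrained backbones used later
])

dataset = datasets.ImageFolder("BananaLSD/AugmentedSet", transform=preprocess)

# An 80/20 train/test split is assumed; the paper does not state its exact partition.
n_train = int(0.8 * len(dataset))
train_set, test_set = random_split(
    dataset, [n_train, len(dataset) - n_train],
    generator=torch.Generator().manual_seed(42))

train_loader = DataLoader(train_set, batch_size=11, shuffle=True)   # mini-batch size from Table 3
test_loader = DataLoader(test_set, batch_size=11, shuffle=False)
print(dataset.classes)   # e.g. ['cordana', 'healthy', 'pestalotiopsis', 'sigatoka']
```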

Classification with Deep Learning 

Deep learning has emerged as a transformative approach in the field of plant classification and plant disease identification. Deep learning employs deep neural networks with multiple layers to analyze plant images and extract intricate patterns from them. In the context of plant disease classification [14,15], these neural networks prove invaluable in recognizing and diagnosing diseases or pest infestations in plants by discerning subtle visual cues, leaf discolorations, or other symptoms, aiding farmers and agricultural experts in timely and accurate disease management [16]. Deep learning has thus revolutionized the way we approach plant-related challenges, offering a powerful tool to ensure healthier crops and sustainable agriculture practices [17]. The agriculture sector faces significant challenges from plant leaf diseases and destructive insects, leading to substantial economic losses. To mitigate these issues, the rapid and precise prediction of leaf diseases in crops is crucial for early treatment [18]. Fortunately, recent advancements in deep learning have greatly enhanced the performance and accuracy of object detection and recognition systems, offering promising solutions to this problem [19,20]. The flow diagram for this study is illustrated in figure 2.

 Figure 2: Banana leaf spot diseases dataset process flow diagram.

DenseNet-201 

DenseNet-201 is a deep convolutional neural network comprising a total of 201 layers. It offers the capability to utilize a pretrained variant, which has been trained on an extensive dataset of over a million images sourced from the ImageNet database. Consequently, this neural network has acquired intricate and valuable feature representations for a broad spectrum of images. Its input images are required to have dimensions of 224 by 224 pixels [21].
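As a rough sketch of this transfer-learning setup (not the authors' exact code), an ImageNet-pretrained DenseNet-201 can be adapted to the four BananaLSD classes by replacing its final classifier layer:

```python
# Illustrative sketch: adapt ImageNet-pretrained DenseNet-201 to four output classes
# (healthy, Cordana, Pestalotiopsis, Sigatoka). Not the authors' original implementation.
import torch.nn as nn
from torchvision import models

densenet = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
densenet.classifier = nn.Linear(densenet.classifier.in_features, 4)  # replace the 1000-way head
```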

EfficientNet-b0 

EfficientNet-b0, a significant advancement in neural network architecture, was introduced by Tan and Le in 2019 [22]. This approach to designing neural networks revolves around efficient scaling in terms of depth, width, and resolution [23]. With an architecture that balances these dimensions, EfficientNet-b0 [24] is well suited to this task; in this research, it was used to classify four classes, namely healthy banana leaves and three different types of leaf disease. At the core of the EfficientNet framework lie three critical scaling dimensions: depth (d), width (w), and resolution (r). These dimensions are woven into the model's architecture, yielding a balance between precision and computational efficiency [23,25].
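For reference, the compound scaling rule of Tan and Le [22] ties these three dimensions to a single coefficient φ:

```latex
% EfficientNet compound scaling [22]: one coefficient \varphi scales all three dimensions.
\[
\begin{aligned}
& d = \alpha^{\varphi}, \qquad w = \beta^{\varphi}, \qquad r = \gamma^{\varphi}, \\
& \text{subject to } \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2, \qquad \alpha \ge 1,\ \beta \ge 1,\ \gamma \ge 1,
\end{aligned}
\]
```

where α, β, and γ are constants determined by a small grid search on the baseline network, and EfficientNet-b0 is that baseline.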

VGG16 

In this study, the classification of banana leaf diseases was also undertaken using the VGG16 architecture. VGG16 comprises 13 convolutional layers with 3 × 3 filters arranged in five blocks, each block followed by a 2 × 2 max-pooling layer, with three fully connected layers after the final pooling layer, giving 16 weight layers in total [26,27]. Within the hidden layers, the ReLU activation function is applied, while the final layer employs the softmax classifier [28-31].
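A corresponding sketch for VGG16 (again an assumption, using ImageNet weights and a four-class output layer) replaces only the last fully connected layer:

```python
# Illustrative sketch: adapt ImageNet-pretrained VGG16 to four output classes.
import torch.nn as nn
from torchvision import models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, 4)  # last FC layer: 4096 -> 4
```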

Evaluating Performance

Deep learning models were utilized to classify banana leaf diseases, and the resulting performance metrics provided a thorough evaluation of the models' ability to accurately detect these diseases [32]. These metrics collectively highlight the significant potential of advanced machine learning methods, especially deep learning, to transform disease detection in agriculture. To address the pressing challenge of disease detection in banana plants, three deep learning models, namely DenseNet-201, EfficientNet-b0, and VGG16, were leveraged to classify the "Banana Leaf Spot Diseases Dataset". The aim was to accurately identify and classify these diseases in order to mitigate their detrimental effects on crop yield and quality. To evaluate the effectiveness of these deep learning models, a range of essential performance measures was utilized. These measures offer valuable perspectives on the models' capacities and their competence in making precise predictions within the realm of banana plant disease identification. The principal performance criteria comprise accuracy, precision, recall, and the F1 score, each serving a unique purpose in the assessment of model performance [33,34].

Performance Metrics

Accuracy serves as a fundamental measure, assessing the overall correctness of the model's predictions. It quantifies the proportion of correctly classified instances in relation to the total number of instances within the dataset [35]. 

Precision and recall, conversely, provide a more nuanced perspective on model performance. Precision evaluates the fraction of true positive predictions within all instances categorized as positive by the model [36]. 

Recall, often referred to as sensitivity, measures the proportion of true positives among all actual positive instances in the dataset. A high recall value signifies that the model excels at identifying the majority of real disease cases [37]. 

The F1 score, frequently described as the harmonic mean of precision and recall, achieves equilibrium between these two metrics. It furnishes a unified, all-encompassing figure that takes into account both false positives and false negatives, rendering it particularly useful when dealing with class imbalance [38]. A high F1 score signifies a model that not only accurately identifies diseased leaves but also effectively mitigates errors in the process. The formula for each evaluation metric [39] is presented in table 2.

 Table 2: Performance metrics formulas.
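For reference, the standard definitions of these metrics, written in terms of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), are:

```latex
\[
\begin{aligned}
\text{Accuracy}  &= \frac{TP + TN}{TP + TN + FP + FN}, \\[4pt]
\text{Precision} &= \frac{TP}{TP + FP}, \\[4pt]
\text{Recall}    &= \frac{TP}{TP + FN}, \\[4pt]
F_{1}\ \text{score} &= 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}.
\end{aligned}
\]
```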

Confusion Matrix 

The Confusion Matrix serves as a pivotal tool for evaluating model performance in the realm of classification tasks. This matrix takes the form of a structured grid, unveiling the intricate interplay between real-world class labels and their corresponding model predictions [40]. Confusion matrix provides a concise summary of how well the model's predictions align with the actual outcomes in a binary or multiclass classification problem [41-43]. The confusion matrix for the multiclass banana leaf diseases dataset is presented in figure 3.

 Figure 3: Multiclass confusion matrix for banana leaf dataset. 

As illustrated in figure 3, the correct predictions (Tx) comprise true positives, i.e., instances correctly identified as belonging to the positive class, and true negatives, i.e., instances correctly identified as not belonging to the positive class. The incorrect predictions (Fx) comprise false positives, i.e., instances predicted as positive that actually belong to the negative class, and false negatives, i.e., instances predicted as negative that actually belong to the positive class.
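A brief sketch of how such a multiclass confusion matrix and the per-class metrics reported later (Tables 5-7) can be computed with scikit-learn is shown below; the label lists are toy placeholders, not the study's actual predictions.

```python
# Sketch: multiclass confusion matrix and per-class precision/recall/F1 with scikit-learn.
# y_true / y_pred are toy placeholders standing in for the real test labels and predictions.
from sklearn.metrics import confusion_matrix, classification_report

labels = ["healthy", "cordana", "pestalotiopsis", "sigatoka"]
y_true = ["healthy", "sigatoka", "cordana", "pestalotiopsis", "healthy"]
y_pred = ["healthy", "sigatoka", "cordana", "sigatoka", "healthy"]

print(confusion_matrix(y_true, y_pred, labels=labels))   # rows: actual, columns: predicted
print(classification_report(y_true, y_pred, labels=labels, zero_division=0))
```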

Grad-CAM in Deep Learning

Gradient-weighted Class Activation Mapping (Grad-CAM) is a deep learning technique that enables the visualization and comprehension of the decisions made by a Convolutional Neural Network (CNN) [44]. It can be thought of as a lens that produces a detailed heatmap highlighting the regions of an image on which the neural network focuses.

Grad-CAM is a method that preserves the structure of deep models while providing interpretability without sacrificing accuracy [45]. It is a class-discriminative localization technique that produces visual explanations for CNN-based networks without modifying the architecture or re-training the network. Compared with other visualization methods, Grad-CAM is notable for producing visual explanations that are both class-discriminative and high-resolution [46].

Grad-CAM produces a heatmap that identifies the important areas of an image by examining the gradients that flow into the last convolutional layer of the CNN. It calculates the gradient of the predicted class score with respect to the feature maps of the final convolutional layer to determine the significance of each feature map for a particular class [47,48].
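Formally, following Selvaraju et al. [45], the channel weights and the resulting localization map are:

```latex
% Grad-CAM [45]: channel weights are global-average-pooled gradients of the class
% score y^c; the heatmap is the ReLU of the weighted sum of feature maps A^k.
\[
\begin{aligned}
\alpha_{k}^{c} &= \frac{1}{Z}\sum_{i}\sum_{j}\frac{\partial y^{c}}{\partial A_{ij}^{k}}, \\[4pt]
L_{\text{Grad-CAM}}^{c} &= \mathrm{ReLU}\!\left(\sum_{k}\alpha_{k}^{c}\,A^{k}\right),
\end{aligned}
\]
```

where A^k denotes the k-th feature map of the final convolutional layer, y^c the score for class c, and Z the number of spatial locations in the feature map.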

Results And Discussion

In this study, banana leaf diseases were classified using three deep learning models: DenseNet-201, EfficientNet-b0, and VGG16. Commonly favored parameters from the existing literature were employed, and to ensure consistency and avoid any potential negative impact on performance, the same parameters were applied across all three models. The parameters used for each model are shown in table 3.

Training Option | Value
Solver | sgdm
Initial Learn Rate | 0.0001
Validation Frequency | 5
Max Epochs | 8
Mini Batch Size | 11

Table 3: Training options for all three models.
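The solver and hyperparameters in Table 3 correspond to SGD with momentum ("sgdm"), a 0.0001 initial learning rate, 8 epochs, and a mini-batch size of 11. A rough PyTorch analogue, reusing the `densenet` model and `train_loader` from the earlier sketches, is given below; the momentum value of 0.9 and the placement of the validation step are assumptions, since Table 3 does not specify them.

```python
# Rough PyTorch analogue of the Table 3 settings ("sgdm" = SGD with momentum).
# Reuses `densenet` and `train_loader` from the sketches above; momentum 0.9 is
# an assumption, as Table 3 does not list it.
import torch
import torch.nn as nn

model = densenet                                   # or the VGG16 / EfficientNet-b0 variant
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)  # Initial Learn Rate

for epoch in range(8):                             # Max Epochs = 8
    for step, (images, targets) in enumerate(train_loader):  # Mini Batch Size = 11 (loader)
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
        if step % 5 == 0:                          # Validation Frequency = 5 iterations
            pass                                   # run validation on held-out data here
```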

DenseNet-201, EfficientNet-b0, and VGG16 were utilized to classify the augmented dataset, yielding significant accuracy rates: DenseNet-201 achieved an impressive 98.12%, EfficientNet-b0 recorded 87.81%, and VGG16 demonstrated a strong 97.81%. These outcomes, shown in table 4, underscore the promise of advanced deep learning methods for precise diagnosis of banana leaf spot diseases, with profound implications for safeguarding crop yields, mitigating economic losses, and ensuring global food production security. According to the results, the highest accuracy was obtained by DenseNet-201, followed by VGG16, with the lowest accuracy obtained by EfficientNet-b0.

Model | Accuracy
DenseNet-201 | 98.12%
EfficientNet-b0 | 87.81%
VGG16 | 97.81%

Table 4: Results obtained from the study.

The training curve for the DenseNet-201 model is shown in figure 4; training reached 98.12% accuracy over 928 iterations, with 116 iterations per epoch.

 Figure 4: DenseNet-201 training curve.

Figure 5 displays the confusion matrix for the DenseNet-201 model. Based on the matrix, the correctly classified samples were 80 healthy banana leaves, 80 samples of Cordana disease, 75 samples of Pestalotiopsis disease, and 79 samples of Sigatoka banana leaf disease. Five samples were misclassified as Pestalotiopsis disease although they were healthy leaves, and one sample was classified as Sigatoka although it was actually a healthy leaf. Table 5 displays the performance metrics for each class for DenseNet-201.

 Figure 5: DenseNet-201 confusion matrix.

Class | Accuracy | Precision | Recall | F1 Score
Healthy | 98.13% | 1.0 | 0.93 | 0.96
Cordana | 100% | 1.0 | 1.0 | 1.0
Pestalotiopsis | 98.44% | 0.94 | 1.0 | 0.97
Sigatoka | 99.69% | 0.99 | 1.0 | 0.99

Table 5: DenseNet-201 performance metrics.

The training curve for the EfficientNet-b0 model is shown in figure 6; training reached 87.81% accuracy over 928 iterations, with 116 iterations per epoch.

 Figure 6: EfficientNet-b0 training curve.

Figure 7 displays the confusion matrix for the EfficientNet-b0 model. Based on the matrix, the correctly classified samples were 76 healthy banana leaves, 73 samples of Cordana disease, 66 samples of Pestalotiopsis disease, and 79 samples of Sigatoka banana leaf disease. Four samples were misclassified as Cordana disease and six as Pestalotiopsis disease although they were healthy leaves, and three samples were classified as Sigatoka although they were actually healthy leaves. Five samples were classified as Pestalotiopsis and nine as Sigatoka although they were actually Cordana leaves. Three samples were classified as healthy and two as Sigatoka although they were actually Pestalotiopsis. Finally, one, three, and three samples were misclassified as Healthy, Cordana, and Pestalotiopsis respectively, although they were actually Sigatoka. Table 6 displays the performance metrics for each class for EfficientNet-b0.

 Figure 7: EfficientNet-b0 confusion matrix.

Class | Accuracy | Precision | Recall | F1 Score
Healthy | 94.69% | 0.95 | 0.85 | 0.90
Cordana | 93.44% | 0.91 | 0.84 | 0.87
Pestalotiopsis | 94.06% | 0.82 | 0.93 | 0.87
Sigatoka | 93.44% | 0.82 | 0.90 | 0.86

Table 6: EfficientNet-b0 performance metrics.

The training curve for the VGG16 model is shown in figure 8; training reached 97.81% accuracy over 928 iterations, with 116 iterations per epoch.

 Figure 8: VGG16 training curve. 

Figure 9 displays the confusion matrix for the VGG16 model. Based on the matrix, the correctly classified samples were 80 healthy banana leaves, 80 samples of Cordana disease, 75 samples of Pestalotiopsis disease, and 66 samples of Sigatoka banana leaf disease. Two samples were misclassified as Pestalotiopsis disease although they were healthy leaves, and two samples were misclassified as Pestalotiopsis although they were Cordana leaves. One sample was classified as Sigatoka although it was actually a Cordana leaf, one sample was classified as Sigatoka although it was actually a Pestalotiopsis leaf, and one sample was classified as Pestalotiopsis although it was actually a Sigatoka leaf. Table 7 displays the performance metrics for each class for VGG16.

 Figure 9: VGG16 confusion matrix.

Class | Accuracy | Precision | Recall | F1 Score
Healthy | 99.38% | 1.0 | 0.98 | 0.99
Cordana | 99.06% | 1.0 | 0.96 | 0.98
Pestalotiopsis | 98.13% | 0.94 | 0.99 | 0.96
Sigatoka | 99.06% | 0.97 | 0.99 | 0.98

Table 7: VGG16 performance metrics.

To meet the need for understanding deep learning models, several strategies have been established to offer insights into their decision-making process. Grad-CAM is particularly useful for computer vision tasks: it allows us to visually identify the specific areas of an input image that have the greatest impact on a model's decision. Figure 10 shows original dataset images alongside their Grad-CAM visualizations.

 Figure 10: Grad-CAM result for the banana leaves. 

DenseNet-201 proved to be the best model, reaching an impressive 98.12% accuracy in identifying banana leaf diseases. These results show that advanced deep learning techniques can substantially aid the detection of crop diseases and the protection of crops in agriculture. A comprehensive summary of the study's overall results for all models, including DenseNet-201, EfficientNet-b0, and VGG16, is provided in figure 11. This chart gives a detailed representation of the findings and performance metrics across the three models, offering a holistic view of the research outcomes.

 Figure 11: Comprehensive performance metrics for applied deep learning models.

Conclusion

In conclusion, banana crops are vital for global food security and economic stability, but they face significant threats from various diseases that can harm their yield and quality. This study utilized the "Banana Leaf Spot Diseases (BananaLSD) Dataset," which includes images of three major banana leaf spot diseases and healthy leaves. The dataset, consisting of 1600 augmented images, was carefully prepared and labeled by plant pathologists.

To tackle the challenge of disease detection in banana plants, three deep learning models were used: DenseNet-201, EfficientNet-b0, and VGG16. The results were impressive, with DenseNet-201 achieving a remarkable accuracy rate of 98.12%. The results obtained in this study by classifying the Banana Leaf Spot Diseases (BananaLSD) Dataset with the DenseNet-201 model were compared with those of a prior study that employed the same dataset [1]. In that previous study, a model called BananaSqueezeNet achieved an accuracy rate of 96.25%. This comparison allows the relative performance and improvement achieved by the DenseNet-201 model on this dataset to be assessed. These findings underscore the potential of advanced deep learning technology in accurately identifying banana leaf spot diseases, offering promising prospects for preserving crop yields, reducing economic losses, and ensuring global food production security.

Data Availability

The dataset can be accessed at https://data.mendeley.com/datasets/9tb7k297ff/1.

Acknowledgment

We would like to express our gratitude to Hacettepe University for their unwavering support and guidance throughout this research project.

References

  1. Bhuiyan MAB, Abdullah HM, Arman SE, Rahman SS, Mahmud KA (2023) BananaSqueezeNet: A very fast, lightweight convolutional neural network for the diagnosis of three prominent banana leaf diseases. Smart Agricultural Technology 4: 100214.
  2. Narayanan KL, Krishnan RS, Robinson YH, Julie EG, Vimal S, et al. (2022) Banana plant disease classification using hybrid convolutional neural network. Computational Intelligence and Neuroscience.
  3. Amara J, Bouaziz B, Algergawy A (2017) A deep learning-based approach for banana leaf diseases classification. BTW.
  4. Koklu M, Unlersen MF, Ozkan IA, Aslan MF, Sabanci K (2022) A CNN-SVM study based on selected deep features for grapevine leaves classification. Measurement 188: 110425.
  5. Kursun R, Cinar I, Taspinar YS, Koklu M (2022) Flower recognition system with optimized features for deep features. Mediterranean Conference on Embedded Computing (MECO).
  6. Evuri SRR (2022) Banana Leaf Disease Detection with Multi Feature Extraction Techniques Using SVM. Dublin, National College of Ireland, Ireland.
  7. Vipinadas M, Thamizharasi A (2016) Detection and Grading of diseases in Banana leaves using Machine Learning. International Journal of Scientific & Engineering Research 7: 916-924.
  8. Krishnan VG, Deepa J, Rao PV, Divya V, Kaviarasan S (2022) An automated segmentation and classification model for banana leaf disease detection. Journal of Applied Biology and Biotechnology 10: 213-220.
  9. Kumar R, Chug A, Singh AP, Singh D (2022) A Systematic analysis of machine learning and deep learning based approaches for plant leaf disease classification: A review. Journal of Sensors.
  10. Yasin ET, Kursun R, Koklu M (2024) Machine learning-based classification of mulberry leaf diseases. Proceedings of International Conference on Intelligent Systems and New Applications 2: 58-63.
  11. Aruraj A, Alex A, Subathra M, Sairamya N, George ST, et al. (2019) Detection and classification of diseases of banana plant using local binary pattern and support vector machine. International Conference on Signal Processing and Communication (ICSPC).
  12. Singh R, Athisayamani S (2020) Banana leaf diseased image classification using novel HEAP auto encoder (HAE) deep learning. Multimedia Tools and Applications 79: 41-42.
  13. Arman SE, Bhuiyan MAB, Abdullah HM, Islam S, Chowdhury TT, et al. (2023) BananaLSD: A banana leaf images dataset for classification of banana leaf diseases using machine learning. Data in Brief 50: 109608.
  14. Albattah W, Nawaz M, Javed A, Masood M, Albahli S (2022) A novel deep learning method for detection and classification of plant diseases. Complex & Intelligent Systems 8: 507-524.
  15. Li L, Zhang S, Wang B (2021) Plant disease detection and classification by deep learning-a review. IEEE Access 9: 56683-56698.
  16. Mohanty SP, Hughes DP, Salathé M (2016) Using deep learning for image-based plant disease detection. Frontiers in plant science 7: 1419.
  17. Lv Q, Zhang S, Wang Y (2022) Deep learning model of image classification using machine learning. Advances in Multimedia 2022: 12.
  18. Lamba M, Gigras Y, Dhull A (2021) Classification of plant diseases using machine and deep learning. Open Computer Science 11: 491-508.
  19. Akila M, Deepan P (2018) Detection and classification of plant leaf diseases by using deep learning algorithm. International Journal of Engineering Research & Technology (IJERT) 6: 1-5.
  20. Saleem MH, Potgieter J, Arif KM (2019) Plant disease detection and classification by deep learning. Plants 8: 468.
  21. Lu T, Han B, Chen L, Yu F, Xue C (2021) A generic intelligent tomato classification system for practical applications using DenseNet-201 with transfer learning. Scientific Reports 11: 15824.
  22. Tan M, Le Q (2019) Efficientnet: Rethinking model scaling for convolutional neural networks. International conference on machine learning PMLR.
  23. Arun Y, Viknesh G (2022) Leaf Classification for Plant Recognition Using EfficientNet Architecture. IEEE Fourth International Conference on Advances in Electronics, Computers and Communications (ICAECC).
  24. Mirandilla JPC, Bating CB, Cabatuan MK, Jose JAC (2022) Classification of Philippine Herbal Medicine Plant Using EfficientNet on Mobile Platform. TENCON 2022-2022 IEEE Region 10 Conference (TENCON).
  25. Atila U, Ucar M, Akyol K, Ucar E (2021) Plant leaf disease classification using EfficientNet deep learning model. Ecological Informatics 61: 101182.
  26. Theckedath D, Sedamkar RR (2020) Detecting affect states using VGG16, ResNet50 and SE-ResNet50 networks. SN Computer Science 1: 1-7.
  27. Yildiz MB, Yasin ET, Koklu M (2024) Fisheye freshness detection using common deep learning algorithms and machine learning methods with a developed mobile application. European Food Research and Technology 250: 1919-1932.
  28. Taspinar YS, Dogan M, Cinar I, Kursun R, Ozkan IA, et al. (2022) Computer vision classification of dry beans (Phaseolus vulgaris L.) based on deep transfer learning techniques. European Food Research and Technology 248: 2707-2725.
  29. Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. ArXiv.
  30. Banan A, Nasiri A, Taheri-Garavand A (2020) Deep learning-based appearance features extraction for automated carp species identification. Aquacultural Engineering 89: 102053.
  31. Yasin E, Koklu M (2023) Utilizing Random Forests for the Classification of Pudina Leaves through Feature Extraction with InceptionV3 and VGG19. Proceedings of the International Conference on New Trends in Applied Sciences 1: 1-8.
  32. Kursun R, Bastas KK, Koklu M (2023) Segmentation of dry bean (Phaseolus vulgaris L.) leaf disease images with U-Net and classification using deep learning algorithms. European Food Research and Technology 249: 2543-2558.
  33. Cinar I, Taspinar YS, Koklu M (2023) Development of early stage diabetes prediction model based on stacking approach. Tehnicki Glasnik 17: 153-159.
  34. Koklu M, Kursun R, Yasin ET, Taspinar YS (2023) Detection of Defects in Soybean Seeds by Extracting Deep Features with SqueezeNet. IEEE 12th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS) 1: 713-717.
  35. Bishop CM, Nasrabadi NM (2006) Pattern Recognition and Machine Learning. Springer, New York, USA.
  36. Davis J, Goadrich M (2006) The relationship between Precision-Recall and ROC curves. Proceedings of the 23rd international conference on Machine learning.
  37. Duda RO, Hart PE, Stork DG (2000) Pattern Classification. Wiley, Hoboken, USA.
  38. Caruana R (1997) Multitask learning. Machine Learning 28: 41-75.
  39. Feyzioglu A, Taspinar YS (2023) Detection of defects in rolled stainless steel plates by machine learning models. International Journal of Applied Mathematics Electronics and Computers 11: 37-43.
  40. Yasar A, Kaya E, Saritas I (2016) Classification of wheat types by artificial neural network. International Journal of Intelligent Systems and Applications in Engineering 4: 12-15.
  41. Yasin ET, Koklu M (2023) Classification of organic and recyclable waste based on feature extraction and machine learning algorithms. International Conference on Intelligent Systems and New Applications (ICISNA’23).
  42. Saritas MM, Taspinar YS, Cinar I, Koklu M (2023) Railway track fault detection with ResNet deep learning models. International Conference on Intelligent Systems and New Applications (ICISNA’23).
  43. Taspinar YS (2022) Classification and analysis of tomato species with convolutional neural networks. Selcuk Journal of Agriculture and Food Sciences 36: 515-520.
  44. Selvaraju RR, Das A, Vedantam R, Cogswell M, Parikh D, et al. (2016) Grad-CAM: Why did you say that?. ArXiv.
  45. Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, et al. (2017) Grad-CAM: Visual explanations from deep networks via gradient-based localization. IEEE international conference on computer vision 618-626.
  46. Selvaraju R, Cogswell M, Das A, Vedantam R, Parikh D, et al. (2016) Grad-CAM: visual explanations from deep networks via gradient-based localization. ArXiv.
  47. Chen L, Chen J, Hajimirsadeghi H, Mori G (2020) Adapting grad-cam for embedding networks. IEEE/CVF winter conference on applications of computer vision 2794-2803.
  48. Kursun R, Koklu M (2023) Enhancing explainability in plant disease classification using score-CAM: Improving early diagnosis for agricultural productivity. IEEE 12th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS) 1: 759-764.

Citation: Unal C (2024) Incorporating Deep Learning into the Diagnosis of Banana Leaf Spot Diseases for the Protection of Banana Crops. J Food Sci Nutr 10: 204.

Copyright: © 2024  Cihan Unal, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

