<!DOCTYPE article
PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20190208//EN"
       "JATS-journalpublishing1.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.4" xml:lang="en">
 <front>
  <journal-meta>
   <journal-id journal-id-type="publisher-id">Foods and Raw Materials</journal-id>
   <journal-title-group>
    <journal-title xml:lang="en">Foods and Raw Materials</journal-title>
    <trans-title-group xml:lang="ru">
     <trans-title>Foods and Raw Materials</trans-title>
    </trans-title-group>
   </journal-title-group>
   <issn publication-format="print">2308-4057</issn>
   <issn publication-format="online">2310-9599</issn>
  </journal-meta>
  <article-meta>
   <article-id pub-id-type="publisher-id">46258</article-id>
   <article-id pub-id-type="doi">10.21603/2308-4057-2021-2-387-396</article-id>
   <article-categories>
    <subj-group subj-group-type="toc-heading" xml:lang="ru">
     <subject>Research Article</subject>
    </subj-group>
    <subj-group subj-group-type="toc-heading" xml:lang="en">
     <subject>Research Article</subject>
    </subj-group>
    <subj-group>
     <subject>Research Article</subject>
    </subj-group>
   </article-categories>
   <title-group>
    <article-title xml:lang="en">RNN- and CNN-based weed detection for crop improvement: An overview</article-title>
    <trans-title-group xml:lang="ru">
     <trans-title>RNN- and CNN-based weed detection for crop improvement: An overview</trans-title>
    </trans-title-group>
   </title-group>
   <contrib-group content-type="authors">
    <contrib contrib-type="author">
     <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-8762-9199</contrib-id>
     <name-alternatives>
      <name xml:lang="ru">
       <surname>Jabir</surname>
       <given-names>Brahim </given-names>
      </name>
      <name xml:lang="en">
       <surname>Jabir</surname>
       <given-names>Brahim </given-names>
      </name>
     </name-alternatives>
     <email>ibra.jabir@gmail.com</email>
     <xref ref-type="aff" rid="aff-1"/>
    </contrib>
    <contrib contrib-type="author">
     <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-4617-5223</contrib-id>
     <name-alternatives>
      <name xml:lang="ru">
       <surname>Rabhi</surname>
       <given-names>Loubna </given-names>
      </name>
      <name xml:lang="en">
       <surname>Rabhi</surname>
       <given-names>Loubna </given-names>
      </name>
     </name-alternatives>
     <email>rabhi.lubna@gmail.com</email>
     <xref ref-type="aff" rid="aff-2"/>
    </contrib>
    <contrib contrib-type="author">
     <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-1418-3173</contrib-id>
     <name-alternatives>
      <name xml:lang="ru">
       <surname>Falih</surname>
       <given-names>Noureddine </given-names>
      </name>
      <name xml:lang="en">
       <surname>Falih</surname>
       <given-names>Noureddine </given-names>
      </name>
     </name-alternatives>
     <xref ref-type="aff" rid="aff-3"/>
    </contrib>
   </contrib-group>
   <aff-alternatives id="aff-1">
    <aff>
     <institution xml:lang="ru">Sultan Moulay Slimane University</institution>
     <city>Beni Mellal</city>
     <country>Марокко</country>
    </aff>
    <aff>
     <institution xml:lang="en">Sultan Moulay Slimane University</institution>
      <city>Beni Mellal</city>
     <country>Morocco</country>
    </aff>
   </aff-alternatives>
   <aff-alternatives id="aff-2">
    <aff>
     <institution xml:lang="ru">Sultan Moulay Slimane University</institution>
     <city>Beni Mellal</city>
     <country>Марокко</country>
    </aff>
    <aff>
     <institution xml:lang="en">Sultan Moulay Slimane University</institution>
     <city>Beni Mellal</city>
     <country>Morocco</country>
    </aff>
   </aff-alternatives>
   <aff-alternatives id="aff-3">
    <aff>
     <institution xml:lang="ru">Sultan Moulay Slimane University</institution>
     <city>Beni Mellal</city>
     <country>Марокко</country>
    </aff>
    <aff>
     <institution xml:lang="en">Sultan Moulay Slimane University</institution>
     <city>Beni Mellal</city>
     <country>Morocco</country>
    </aff>
   </aff-alternatives>
   <pub-date publication-format="print" date-type="pub" iso-8601-date="2021-10-15T00:00:00+03:00">
    <day>15</day>
    <month>10</month>
    <year>2021</year>
   </pub-date>
   <pub-date publication-format="electronic" date-type="pub" iso-8601-date="2021-10-15T00:00:00+03:00">
    <day>15</day>
    <month>10</month>
    <year>2021</year>
   </pub-date>
   <volume>9</volume>
   <issue>2</issue>
   <fpage>387</fpage>
   <lpage>396</lpage>
   <history>
    <date date-type="received" iso-8601-date="2021-07-22T00:00:00+03:00">
     <day>22</day>
     <month>07</month>
     <year>2021</year>
    </date>
    <date date-type="accepted" iso-8601-date="2021-08-23T00:00:00+03:00">
     <day>23</day>
     <month>08</month>
     <year>2021</year>
    </date>
   </history>
   <self-uri xlink:href="http://jfrm.ru/en/issues/1879/1961/">http://jfrm.ru/en/issues/1879/1961/</self-uri>
   <abstract xml:lang="ru">
     <p>Introduction. Deep learning is a modern technique for image processing and data analysis with promising results and great potential. Successfully applied in various fields, it has recently entered the field of agriculture to address such agricultural problems as disease identification, fruit/plant classification, fruit counting, pest identification, and weed detection. The latter was the subject of our work. Weeds are harmful plants that grow in crops, competing with them for resources such as sunlight and water and causing crop yield losses. Traditional data processing techniques have several limitations and consume a lot of time. Therefore, we aimed to take inventory of deep learning networks used in agriculture and conduct experiments to reveal the most efficient ones for weed control.&#13;
Study objects and methods. We used new advanced algorithms based on deep learning to process data in real time with high precision and efficiency. These algorithms were trained on a dataset containing real images of weeds taken from Moroccan fields.&#13;
Results and discussion. The analysis of deep learning methods and algorithms trained to detect weeds showed that the Convolutional Neural Network is the most widely used in agriculture and the most efficient in weed detection compared to others, such as the Recurrent Neural Network.&#13;
Conclusion. Since the Convolutional Neural Network demonstrated excellent accuracy in weed detection, we adopted it in building a smart system for detecting weeds and spraying them in place.</p>
   </abstract>
   <trans-abstract xml:lang="en">
     <p>Introduction. Deep learning is a modern technique for image processing and data analysis with promising results and great potential. Successfully applied in various fields, it has recently entered the field of agriculture to address such agricultural problems as disease identification, fruit/plant classification, fruit counting, pest identification, and weed detection. The latter was the subject of our work. Weeds are harmful plants that grow in crops, competing with them for resources such as sunlight and water and causing crop yield losses. Traditional data processing techniques have several limitations and consume a lot of time. Therefore, we aimed to take inventory of deep learning networks used in agriculture and conduct experiments to reveal the most efficient ones for weed control.&#13;
Study objects and methods. We used new advanced algorithms based on deep learning to process data in real time with high precision and efficiency. These algorithms were trained on a dataset containing real images of weeds taken from Moroccan fields.&#13;
Results and discussion. The analysis of deep learning methods and algorithms trained to detect weeds showed that the Convolutional Neural Network is the most widely used in agriculture and the most efficient in weed detection compared to others, such as the Recurrent Neural Network.&#13;
Conclusion. Since the Convolutional Neural Network demonstrated excellent accuracy in weed detection, we adopted it in building a smart system for detecting weeds and spraying them in place.</p>
   </trans-abstract>
   <kwd-group xml:lang="ru">
    <kwd>Digital agriculture</kwd>
    <kwd>weed detection</kwd>
    <kwd>machine learning</kwd>
    <kwd>deep learning</kwd>
    <kwd>Convolutional Neural Network (CNN)</kwd>
    <kwd>Recurrent Neural Network (RNN)</kwd>
   </kwd-group>
   <kwd-group xml:lang="en">
    <kwd>Digital agriculture</kwd>
    <kwd>weed detection</kwd>
    <kwd>machine learning</kwd>
    <kwd>deep learning</kwd>
    <kwd>Convolutional Neural Network (CNN)</kwd>
    <kwd>Recurrent Neural Network (RNN)</kwd>
   </kwd-group>
  </article-meta>
 </front>
 <body>
   <p>Jabir B. et al. Foods and Raw Materials, 2021, vol. 9, no. 2, pp. 387–396</p>
   <p><bold>INTRODUCTION</bold></p>
   <p>In our growing digital world, machine learning is at the core of data science [1]. Machine learning techniques and computing power play an essential role in the analysis of collected data. They focus on representing input data and generalizing learned predictive models to future data [2]. Data representation has a dramatic effect on the performance of a machine learner. Proper data representation can lead to high performance even with a straightforward machine learning method, whereas poor data representation combined with an advanced, complex method can lead to decreased performance [3]. Deep learning is an important branch of machine learning that has emerged to achieve impressive results in the field of artificial intelligence. Its strength lies in its ability to automatically build powerful data representations through layers of learning without human intervention, thus ensuring great precision of analysis [4]. In comparison with shallow learning algorithms, deep learning uses supervised and unsupervised techniques and machine learning approaches to automatically learn the hierarchical representation of multi-level data for feature classification [5, 6]. This composition of deep learning is inspired by the way the human brain processes natural signals. It has lately attracted the academic community due to its performance in different research fields, such as agriculture.</p>
   <p>More recently, a number of technologies common in industry have been applied to agriculture, such as remote sensing, the Internet of Things (IoT), and robotic platforms, leading to the concept of “smart agriculture” [7, 8]. Smart agriculture is important for facing agricultural production challenges in terms of productivity, environmental impact, and food security. Tackling these challenges requires analyzing agricultural ecosystems, which involves constant monitoring of different variables. These operations create data that can be used as input values and processed with various deep learning analysis techniques to identify weeds, diseases, etc.</p>
   <p>The objects of our study were two neural networks, namely the Convolutional Neural Network (CNN) and the Recurrent Neural Network (RNN). A CNN is an artificial neural network used for image recognition and processing [9]. It is specially intended for handling pixel information and is viewed as a powerful artificial intelligence (AI) image processing system that employs deep learning to perform generative and descriptive tasks. It is commonly used in machine vision, which includes image and video recognition, as well as in recommendation systems and natural language processing (NLP) [10]. An RNN, in its turn, is an artificial neural network essentially used in speech recognition and natural language processing. RNNs are intended to recognize sequential attributes and usage patterns in data in order to predict likely scenarios [11]. Therefore, using an RNN for image classification requires optimization with the long short-term memory (LSTM) technique to reduce the risk of vanishing gradients [12]. In this study, we compared these two deep learning techniques with other state-of-the-art techniques in order to create an optimized model and train it to detect weeds. We aimed to create an intelligent system that could detect weeds and spray them locally to avoid wasting herbicides and protect the environment.</p>
   <p><bold>STUDY OBJECTS AND METHODS</bold></p>
   <p>In this study, we used various methods, devices, techniques, and libraries to study deep learning in crop planting and to train the deep learning models on a database of images for relevant and smart weed detection. The following sections contain complete descriptions of these methods.</p>
   <p><bold>Deep learning.</bold> This method expands machine learning (ML), adding complexity and depth to models based on artificial neural networks (ANNs).
A neural network is a system designed to resemble the neural organization of the human brain. A more precise definition would be that a neural network is a computational model made up of artificial neurons connected to each other in a network architecture. This architecture has specific parameters called weights; by adjusting them, we can enhance the accuracy of the model. This type of network contains many layers, each with a specific mission, and their number determines the complexity of the network. A small neural network has three layers: the input layer, the hidden layer, and the output layer. Each of these layers is made up of units called “nodes” and has a given assignment, as the name suggests. The input layer is responsible for receiving the data and passing it to the following layer. The hidden layer carries out all the back-end tasks of computing and transforming the data, using various functions that allow its representation in a hierarchical manner through several degrees of abstraction [13]. A neural network can have as many hidden layers as needed. Several parameters influence the arrangement of the various layers, and the goal is always to obtain a high degree of accuracy. The output layer passes on the result of the hidden layers, as shown in Figure 1.</p>
   <p>Deep learning has various applications ranging from natural language to image processing. Its important advantage is feature learning, i.e., the automatic extraction of features from raw data: features at the higher levels of the hierarchy are formed by composing lower-level features. Deep learning can solve more complex problems well and rapidly because the more complex layers it uses permit massive parallelization.
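To make the layered structure just described concrete, here is a minimal sketch of a three-layer network in PyTorch (the layer sizes are invented for illustration and are not from this study):

```python
import torch
import torch.nn as nn

# A minimal three-layer network (input, hidden, output) as described above.
# The sizes (4 inputs, 8 hidden nodes, 3 output classes) are illustrative only.
model = nn.Sequential(
    nn.Linear(4, 8),   # input layer to hidden layer; the weights are the adjustable parameters
    nn.ReLU(),         # nonlinear activation in the hidden layer
    nn.Linear(8, 3),   # hidden layer to output layer
)

x = torch.randn(1, 4)      # one example with 4 features
out = model(x)             # forward pass through the network
print(out.shape)           # torch.Size([1, 3])
```

Training adjusts the weights inside the two Linear layers, which is exactly the weight-tuning that improves the model's accuracy.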
These complex algorithms increase classification accuracy and reduce errors, provided there are large, well-prepared, and sufficient data sets describing the problem and the layers are well constructed.</p>
   <p>The deeply hierarchical construction and great learning capacity of deep learning algorithms allow them to perform classification and prediction with high accuracy. They are versatile and adaptable to a wide range of exceptionally complex problems. Deep learning has numerous applications in data management (e.g. video, images) and can be applied to any type of information, such as natural language, speech, and continuous or point data [15].</p>
   <p>The main drawbacks of deep learning are a long learning time and the need for powerful hardware suitable for parallel programming (a Graphics Processing Unit or a Field-Programmable Gate Array), while conventional strategies such as the Scale-Invariant Feature Transform (SIFT) or the Support Vector Machine (SVM) have simpler learning procedures [16]. In any case, testing time is shorter with deep learning tools, and most of them are more accurate. The subsections below present the most common deep learning techniques.</p>
   <p>Figure 1 Artificial neural network [14]</p>
   <p><bold>Convolutional Neural Network (CNN).</bold> In deep learning, convolutional neural networks (CNNs) are a class of deep feedforward ANNs that have been effectively applied to computer vision. In contrast to an ANN, whose tedious training requirements may be unfeasible in some large-scale problems, a CNN can learn complex problems quite rapidly thanks to weight sharing and the more complex layers it uses. Convolutional neural networks can increase their likelihood of correct classification, provided sufficiently large data sets (i.e. hundreds to thousands of measurements, depending on the complexity of the problem under investigation) are available to describe the problem.
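A CNN of this kind can be sketched in a few lines of PyTorch; all channel counts and sizes below are illustrative, not taken from the paper:

```python
import torch
import torch.nn as nn

# A small CNN: convolutional layers extract features, pooling reduces
# dimensionality, and fully connected layers classify. Sizes are illustrative.
cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # feature extraction (weight sharing)
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling halves height and width
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 4),                  # classification into 4 hypothetical classes
)

x = torch.randn(1, 3, 64, 64)  # one RGB image, 64x64 pixels
logits = cnn(x)
print(logits.shape)            # torch.Size([1, 4])
```

The two pooling steps reduce the 64x64 input to 16x16 before the fully connected layer maps it to the four hypothetical weed classes.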
They are made up of convolutional, pooling, and/or fully connected layers. Convolutional layers perform operations to extract distinct features from the input images, whose dimensionality is reduced by the pooling layers, while fully connected layers perform the classification. The latter usually exploit the high-level features learned at the last layer to classify the input images into predefined classes. Many organizations have successfully applied this technique in various fields, such as agriculture, where it accounts for 80% of all methods used [17]. An example of CNN architecture is shown in Figure 2 [18]. Figure 2 shows different representations of the training dataset created by applying various convolutions in certain layers of the network. Training always begins as the most general at the level of the first layers, which are larger, and becomes more specific at the level of the deeper layers. A combination of convolutional and dense layers makes it possible to produce results with good precision.</p>
   <p>There are various “successful” architectures that researchers commonly use to start building their models instead of starting from scratch. These include AlexNet, the Visual Geometry Group network (VGG) (shown in Figure 2), GoogLeNet, and Inception-ResNet, which uses what we call ‘transfer learning.’ Besides, there are various tools and platforms that allow researchers to experiment with deep learning. The most popular are TensorFlow, Theano, Keras (an application programming interface on top of TensorFlow and Theano), PyTorch, Caffe, TFLearn, Pylearn2, and Matlab. Some of these tools (e.g. Caffe, Theano) integrate the popular architectures mentioned above (e.g. AlexNet, VGG, GoogLeNet) in the form of libraries or classes [19].</p>
   <p><bold>Recurrent Neural Network (RNN).</bold> Recurrent neural networks (RNNs) are another type of neural network, used to solve difficult machine learning problems involving sequences of inputs.
Some RNN architectures for sequence prediction problems are:
1. One-to-Many: a sequence as output, e.g. for image captioning;
2. Many-to-One: a sequence as input, e.g. for sentiment analysis; and
3. Many-to-Many: synchronized input and output sequences, e.g. for machine translation or frame-by-frame video classification.</p>
   <p>RNNs have connections with loops, adding feedback and memory to the network over time. This memory replaces traditional learning that relies on individual patterns; it allows this type of network to learn and generalize across a sequence of inputs. When an output is produced, it is copied and sent back into the recurrent network [20]. To make a decision, the network considers the current input and the output it learned from the previous input. An example of RNN architecture is shown in Figure 3, which depicts an RNN unrolled over an entire sequence. For example, if a sentence consists of five words, the network unrolls into a five-layer neural network, one layer for each word. The formulas that govern the calculations in an RNN are as follows:
– xt is the input at time t;
– U, V, and W are the parameters that the network learns from the training data;
– St is the hidden state at time t, the “memory” of the network. St is calculated from the previous hidden state and the input at the current step: St = f(U·xt + W·St−1) (1), where f is a nonlinear function such as ReLU or the hyperbolic tangent (TanH);
– Ot is the output at time t: Ot = softmax(V·St) (2). A well-known example here is predicting the next word in a sentence: the network outputs a vector of probabilities over the vocabulary.</p>
   <p>Figure 2 An example of CNN architecture (VGG)</p>
   <p><bold>Deep learning applications in agriculture.</bold> Applications of deep learning in agriculture are spread across several areas, the most popular being weed identification, land cover classification, plant recognition, fruit counting, and crop type classification. According to Figure 4, which shows deep learning models in crop planting, CNNs and RNNs account for 80% and only 5% of all methods, respectively. The low share of RNNs in agriculture is due to the fact that traditional RNNs behave unstably because of the vanishing gradient and are therefore not used in image classification. For this reason, this article discusses an advanced RNN that uses the LSTM technique for image classification in weed identification.</p>
   <p>Weeds are plants that grow spontaneously on agricultural soils where they are unwanted. Their growth causes competition with crops for space, light, and water. Herbicides are the first tool used to fight weeds, but they present secondary risks for man and nature, so we need to think about ways to reduce their effects. In this study, we proposed an intelligent system that automatically detects weeds and enables localized spraying of infected areas only. To identify weeds, we processed photos of crops and classified them in order to apply specific herbicides. Weeds can be classified by leaf size into grass categories (dicot and monocot). This division is adequate since grasses and broadleaf weeds are treated differently due to the selectivity of some herbicides to a specific group; herbicide application works best if treatment is targeted at the specific class of weed. Several studies have shown the success of CNNs in comparison with RNNs and other deep learning techniques used for weed identification [21–23].</p>
   <p><bold>Technical details.</bold>
From a technical standpoint, almost all of the research has used popular CNN architectures such as AlexNet, VGG, and Inception-ResNet, or combined CNNs with other procedures. All the experiments that exploited a well-known architecture also used a deep learning framework, with Caffe being the most famous. Noteworthily, most studies that only had small datasets to train their CNN models exploited the power of data augmentation to artificially increase the number of training images and enhance accuracy. They used translations, transpositions, and reflections, as well as modified intensities of the RGB channels, and that is what we did to prepare our dataset. Also, the majority of related works included image preprocessing steps, where each image in the dataset was scaled down to a smaller size (such as 256×256, 128×128, 96×96, or 60×60 pixels) or converted to grayscale before being used as input into the CNN architecture, in order to take advantage of transfer learning. Transfer learning exploits already existing knowledge of certain related tasks in order to increase the efficiency of learning the problem at hand, refining pre-trained models when it is impossible to train the network on the data from scratch because the training set is small or the problem is complex. We can get significant results if we rely on weights from other models previously trained on big datasets [24]. In our case, these are pre-trained CNNs that have already been trained on datasets with different numbers of classes. The authors of related work mainly used large datasets to train their CNN models, in some cases containing thousands of images. Some of them came from well-known and publicly available sources such as PlantVillage, MalayaKew, LifeCLEF, and UC Merced. In contrast, some authors produced their own datasets for their research needs, as we can see in Table 1.
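The transfer learning strategy just described can be sketched generically in PyTorch. This is a toy stand-in, not the authors' code: in practice the frozen part would be a CNN such as VGG with weights learned on a large dataset, and only the new head would be trained on the small target dataset.

```python
import torch
import torch.nn as nn

# Generic transfer learning sketch: reuse feature-extraction layers from a
# previously trained network, freeze their weights, and train only a new
# classification head. The tiny Linear "feature extractor" is a placeholder.
pretrained_features = nn.Sequential(     # stands in for pre-trained layers
    nn.Linear(100, 64),
    nn.ReLU(),
)
for p in pretrained_features.parameters():
    p.requires_grad = False              # freeze: existing knowledge is kept as-is

new_head = nn.Linear(64, 4)              # fresh layer for four hypothetical weed classes

model = nn.Sequential(pretrained_features, new_head)

trainable = [p for p in model.parameters() if p.requires_grad]
print(len(trainable))   # only the new head's weight and bias remain trainable: 2
```

Because gradients flow only into the new head, training is fast even on a small dataset, which is the point of refining a pre-trained model.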
Table 1 also shows whether the authors compared their CNN-based approach with other techniques used to solve the problem under study, as well as the precision of each model. By convention, for the model to be precise, its response must be exactly the expected response.</p>
   <p>Figure 4 Deep learning methods in crop planting</p>
   <p>Figure 3 An RNN (left) and its unrolled version (right)</p>
   <p><bold>Application of an optimized RNN in weed detection.</bold> All the studies referred to above used CNN architectures to create deep learning models that detect weeds and compared them with other models in terms of accuracy and error. RNNs, however, do not feature much in these works, which means they are hardly used in this field of agriculture, especially for image classification. That is why we aimed to create an optimized RNN model with the long short-term memory (LSTM) technique as an alternative to the traditional RNN [25]. We trained this RNN-LSTM model on our dataset in order to compare the results with those obtained by the CNN above. First, we loaded the dataset that was already used in a previous experiment. This database contained a set of weeds known in our region, spread over four classes. Then, we built an RNN model and trained it on this database using the following parameters: inputs = 28, steps = 28, neurons = 150, outputs = 10, epochs = 20, and the softmax function. LSTMs were introduced in our model in order to improve on the standard RNN.
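For reference, the recurrence that a plain RNN cell computes, Eqs. (1) and (2) above, can be written out in a few lines of NumPy (all dimensions here are illustrative, not the model's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 5-dimensional inputs, 8-dimensional hidden state,
# 10 possible outputs.
U = rng.normal(size=(8, 5))    # input-to-hidden weights
W = rng.normal(size=(8, 8))    # hidden-to-hidden weights (the "memory")
V = rng.normal(size=(10, 8))   # hidden-to-output weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

s = np.zeros(8)                       # initial hidden state
for x in rng.normal(size=(4, 5)):     # a sequence of 4 input vectors
    s = np.tanh(U @ x + W @ s)        # Eq. (1): new state from current input and previous state
    o = softmax(V @ s)                # Eq. (2): probability vector over the outputs

print(o.shape)   # (10,); the probabilities sum to 1
```

Because the gradient of this recurrence is multiplied by W at every step, it can vanish over long sequences; the LSTM gates added to our model mitigate exactly this problem.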
The RNN model we created is shown in Figure 5. To design this deep learning model, we used Python code, as shown in Figure 6.</p>
   <table-wrap id="table-1">
    <label>Table 1</label>
    <caption><p>Application of deep learning in agriculture (weed detection)</p></caption>
    <table>
     <thead>
      <tr><th>Agricultural area</th><th>Description of the problem</th><th>Data</th><th>DL architecture</th><th>DL model</th><th>Accuracy</th><th>Comparison with other methods</th><th>References</th></tr>
     </thead>
     <tbody>
      <tr><td>Weed detection</td><td>Detection and classification of weeds in soybean crops</td><td>400 images of crops captured by the authors with a drone</td><td>CNN</td><td>CaffeNet (CAFFE FW)</td><td>98%</td><td>SVM: 98%; AdaBoost: 98.2%; Random Forest: 96%</td><td>[21]</td></tr>
      <tr><td>Weed detection</td><td>Weed detection and classification by spectral band analysis</td><td>200 hyperspectral images with 61 bands</td><td>CNN</td><td>MatConvNet</td><td>94.72%</td><td>HoG: 74.34%</td><td>[22]</td></tr>
      <tr><td>Weed detection</td><td>Accelerating DL classification of weeds (8 classes) with an FPGA approach</td><td>18000 weed images from the DeepWeedX dataset</td><td>CNN</td><td>VGG-16, DenseNet-128-10</td><td>90.08%</td><td>ResNet: 95.7%</td><td>[23]</td></tr>
     </tbody>
    </table>
   </table-wrap>
   <p>Figure 5 RNN model architecture</p>
   <p>Figure 6 Python code used to create the RNN model</p>
   <p>The above code includes the initialization function __init__, which defines some variables. The fully connected layer follows the basic RNN via “self.FC”, allowing data to flow through the RNN layer and then through the fully connected layer; the function “init_hidden” initializes the hidden weights with zero values.</p>
   <p>The database used to train the proposed RNN and CNN models comprised about 3000 images taken in a wheat field with a digital camera (Sony 6000) under different lighting conditions (from morning to afternoon, in sunny and cloudy weather). We combined these images with images from the online Kaggle repository dataset. They featured four types of weeds that propagate in our region, corresponding to the four classes to be identified by our model (Fig. 7). A well-prepared database is a very important factor in deep learning.
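The code in Figure 6 is not reproduced in this text. Based on the description of __init__, self.FC, and init_hidden and the parameters listed above, a plausible reconstruction (a sketch, not the authors' exact code) might look like this:

```python
import torch
import torch.nn as nn

# Sketch of the ImageRNN described in the text, using the stated parameters:
# 28 inputs per step, 28 steps, 150 neurons, 10 outputs. The authors further
# replace the basic recurrent layer with an LSTM; here we keep the basic RNN.
class ImageRNN(nn.Module):
    def __init__(self, batch_size, n_steps=28, n_inputs=28, n_neurons=150, n_outputs=10):
        super().__init__()
        self.batch_size = batch_size
        self.n_neurons = n_neurons
        self.basic_rnn = nn.RNN(n_inputs, n_neurons)   # recurrent layer
        self.FC = nn.Linear(n_neurons, n_outputs)      # fully connected layer after the RNN

    def init_hidden(self):
        # hidden state initialized with zero values, as described in the text
        return torch.zeros(1, self.batch_size, self.n_neurons)

    def forward(self, x):                       # x: (n_steps, batch_size, n_inputs)
        hidden = self.init_hidden()
        out, hidden = self.basic_rnn(x, hidden) # data flows through the RNN layer
        return self.FC(out[-1])                 # then through the fully connected layer

model = ImageRNN(batch_size=4)
x = torch.randn(28, 4, 28)     # a batch of 4 images treated as 28 rows of 28 pixels
logits = model(x)
print(logits.shape)            # torch.Size([4, 10])
```

Each image is fed row by row as a 28-step sequence, which is how an RNN designed for sequences can be applied to image classification.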
We applied preprocessing and dataaugmentationtechniques on the same data to generateother learning examples through different manipulations(flip, orientation, contrast, crop, exposure, noise,brightness…). These techniques reduced the model’sperformance.RESULTS AND DISCUSSIONBefore training the model, we added all necessaryfunctions (Fig. 8). Firstly, we specified the deviceruntime to use during training, determined in thepython code by torch.device(...). This function givescommands to the program to use the GPU (GraphicsProcessing Unit) if it is available. Otherwise the CPU(Central Processing Unit) will be used as a defaultdevice. The GPU acts as a specialized microprocessor.It is swift and efficient for matrix multiplication andconvolution. Parallelism is often cited as an explanation.The GPU is optimized for bandwidth, while the CPUis optimized for latency. Therefore, the CPU has lesslatency, but its capacity is lower than that of the GPU.In other words, CPUs are not suited to handle massiveamounts of data, while GPUs can provide large amountsFigure 7 The dataset samplesFigure 8 Addition of necessary functionsFigure 9 Training accuracy of the RNN model Figure 10 The error rate of the RNN model393Jabir B. et al. Foods and Raw Materials, 2021, vol. 9, no. 2, pp. 387–396of memory. The CPU is responsible for performing allkinds of calculations, whereas the GPU only handlesgraphics calculations. Since our dataset was not large,we used an i7 CPU with 2.80GHz and 8G of RAM.Then, we create an instance of the model with theImageRNN(...) function, with its own configurationand parameters, the criterion represents the functionwe will use to get the loss of the designed model. Todo this process, it is sufficient to use the function:nn.CrossEntropyLoss(), which is a softmax functionutilized as boundaries log probabilities and followed bya negative log-likelihood loss activity over the output ofthe model. 
The code shows how to provide this to thecriterion.We add an optimization function that recalculates theweights based on the current loss and updates it. Thisis done using the Optim.adam function, which requiressetting the model parameters and learning rate. Todisplay the results and get the accuracy, we will use theget_accuracy(...) function, which computes the accuracyof the model given the log probabilities and target valuesfor each epoch. All these functions are shown in thefigure below.After training the model on 20 epochs, we obtainedrelevant results (Figs. 9 and 10).There are different ways and measures to evaluatethe performance of a classification model. Theperformance measures often used are precision, kappa,recall, and others [26]. We were therefore interestedin the model’s accuracy and error rate. Accuracy isa proportion of genuine expectations in relation tothe absolute number of input pictures. The error ratemeasures the difference between the model’s predictionsand the real images in the training set [27]. Figs. 9and 10 show the accuracy and error for each epoch. Inparticular, Figure 10 shows how the neural networkgradually decreased the error to arrive at 0.9. Accordingto Figure 9, the training accuracy reached 97.58% dueto a set of factors, such as the dataset, optimizationfunction, and the adjustment of weights and biases.Fig. 11 shows how the model performed on the testimages. The display of predictions on test images is atechnique to test the final solution in order to confirmthe real predictive power of the network. We computedthe accuracy on our dataset to test how well the modelperformed on the test images. Fig. 11 shows a value of96.40%, which means that the predictions on the testimages were well classified.These results indicated a good performance ofthe LSTM-RNN on our dataset. 
According to thethree figures above, this model updates with everystep, adjusting weights to reduce error and increasingaccuracy using a backpropagation algorithm andgradient descent. In addition to the studies that werebased on CNNs, we also built a CNN-based model(Fig. 12) and trained it on the same dataset that we usedin the RNN experiment.Our results were close to those reported by theauthors referred to above.The training was run on our local machine and aftera few times it reached 98% validation accuracy. Themodel showed good results after 9 hours of training.Fig. 13 shows accuracy taken from Tensboard.To sum up, CNNs are preferred for interpretingvisual data, sparse data or data that does not come insequence. Recurrent neural networks, however, aredesigned to recognize sequential or temporal data.They make better predictions by considering the orderor sequence of data concerning previous or next datanodes. Applications where CNNs are particularly usefulinclude face detection, medical analysis, drug discovery,and image analysis. RNNs are useful for linguistictranslation, entity extraction, conversational intelligence,sentiment analysis, and speech analysis. Our experimentalso showed that RNNs can be used to classify images ifwe add the LSTM technique. Based on literature and ourFigure 11 Test accuracy of the RNN modelFigure 12 CNN basic configuration394Jabir B. et al. Foods and Raw Materials, 2021, vol. 9, no. 2, pp. 387–396results, we compared the characteristics of RNNs andCNNs and summarized them in Table 2.Our experimentation clearly shows why CNNs areso widely used in agriculture despite the abundanceof other deep learning techniques. In addition, weproved that a RNN can also be used to detect weeds,but with less efficiency and more effort. 
Therefore, we recommend the CNN as the deep learning technique best suited for efficient weed detection and as the basis for smarter precision farming.</p>
<p>CONCLUSION</p>
<p>Precision agriculture encompasses several areas of application, such as plant and leaf disease detection, land cover classification, plant recognition, and weed identification, to name the most common uses. The development of precision agriculture requires new monitoring, control, and information technologies, including deep learning. This paper presents an overview and a comparative study of deep learning tools in crop planting. First, we looked at agriculture to describe its current problems, specifically weed detection. Then, we listed the technical characteristics of popular deep learning techniques. After that, we created a CNN and an RNN and trained them on our dataset to compare their overall accuracy. The results showed that the optimized RNN model (an RNN with LSTM) can also be used to classify images with acceptable accuracy. Hence, an RNN combined with LSTM is suitable for detecting weeds among other techniques, but a CNN always comes first in terms of speed and accuracy. In future work, we intend to use other metrics to compare the results, such as recall and kappa. We will also try to develop a platform combining the RNN with the CNN to achieve the best accuracy. These results will be used to build an intelligent system based on a Raspberry Pi 4 that can detect weeds in real time and spray them in their area.</p>
<p>CONTRIBUTION</p>
<p>The authors were equally involved in writing the manuscript and are equally responsible for plagiarism.</p>
<p>CONFLICT OF INTEREST</p>
<p>The authors declare no conflict of interest regarding the publication of this article.</p>
<p>ACKNOWLEDGMENTS</p>
<p>This research is part of the Digital Agriculture doctorate project that involves a group of doctors from the Limati Laboratory at Sultan Moulay Slimane University in Morocco.</p>
 </body>
 <back>
  <ref-list>
   <ref id="B1">
    <label>1.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Ferguson AL. Machine learning and data science in soft materials engineering. Journal of Physics Condensed Matter. 2018;30(4). https://doi.org/10.1088/1361-648X/aa98bd.</mixed-citation>
     <mixed-citation xml:lang="en">Ferguson AL. Machine learning and data science in soft materials engineering. Journal of Physics Condensed Matter. 2018;30(4). https://doi.org/10.1088/1361-648X/aa98bd.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B2">
    <label>2.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Momennejad I. Learning structures: Predictive representations, replay, and generalization. Current Opinion in Behavioral Sciences. 2020;32:155-166. https://doi.org/10.1016/j.cobeha.2020.02.017.</mixed-citation>
     <mixed-citation xml:lang="en">Momennejad I. Learning structures: Predictive representations, replay, and generalization. Current Opinion in Behavioral Sciences. 2020;32:155-166. https://doi.org/10.1016/j.cobeha.2020.02.017.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B3">
    <label>3.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Peng S, Sun S, Yao Y. A survey of modulation classification using deep learning: Signal representation and data preprocessing. IEEE Transactions on Neural Networks and Learning Systems. 2021. https://doi.org/10.1109/TNNLS.2021.3085433.</mixed-citation>
     <mixed-citation xml:lang="en">Peng S, Sun S, Yao Y. A survey of modulation classification using deep learning: Signal representation and data preprocessing. IEEE Transactions on Neural Networks and Learning Systems. 2021. https://doi.org/10.1109/TNNLS.2021.3085433.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B4">
    <label>4.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Salloum SA, Alshurideh M, Elnagar A, Shaalan K. Machine learning and deep learning techniques for cybersecurity: A review. Advances in Intelligent Systems and Computing. 2020;1153:50-57. https://doi.org/10.1007/978-3-030-44289-7_5.</mixed-citation>
     <mixed-citation xml:lang="en">Salloum SA, Alshurideh M, Elnagar A, Shaalan K. Machine learning and deep learning techniques for cybersecurity: A review. Advances in Intelligent Systems and Computing. 2020;1153:50-57. https://doi.org/10.1007/978-3-030-44289-7_5.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B5">
    <label>5.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Alloghani M, Al-Jumeily D, Mustafina J, Hussain A, Aljaaf AJ. A systematic review on supervised and unsupervised machine learning algorithms for data science. In: Berry MW, Mohamed A, Yap BW, editors. Supervised and unsupervised learning for data science. Cham: Springer; 2020. pp. 3-21. https://doi.org/10.1007/978-3-030-22475-2_1.</mixed-citation>
     <mixed-citation xml:lang="en">Alloghani M, Al-Jumeily D, Mustafina J, Hussain A, Aljaaf AJ. A systematic review on supervised and unsupervised machine learning algorithms for data science. In: Berry MW, Mohamed A, Yap BW, editors. Supervised and unsupervised learning for data science. Cham: Springer; 2020. pp. 3-21. https://doi.org/10.1007/978-3-030-22475-2_1.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B6">
    <label>6.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Nowicki RK, Grzanek K, Hayashi Y. Rough support vector machine for classification with interval and incomplete data. Journal of Artificial Intelligence and Soft Computing Research. 2020;10(1):47-56. https://doi.org/10.2478/jaiscr-2020-0004.</mixed-citation>
     <mixed-citation xml:lang="en">Nowicki RK, Grzanek K, Hayashi Y. Rough support vector machine for classification with interval and incomplete data. Journal of Artificial Intelligence and Soft Computing Research. 2020;10(1):47-56. https://doi.org/10.2478/jaiscr-2020-0004.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B7">
    <label>7.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Jabir B, Falih N, Sarih A, Tannouche A. A strategic analytics using convolutional neural networks for weed identification in sugar beet fields. Agris On-line Papers in Economics and Informatics. 2021;13(1):49-57. https://doi.org/10.7160/aol.2021.130104.</mixed-citation>
     <mixed-citation xml:lang="en">Jabir B, Falih N, Sarih A, Tannouche A. A strategic analytics using convolutional neural networks for weed identification in sugar beet fields. Agris On-line Papers in Economics and Informatics. 2021;13(1):49-57. https://doi.org/10.7160/aol.2021.130104.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B8">
    <label>8.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Jabir B, Falih N. Digital agriculture in Morocco, opportunities and challenges. 2020 IEEE 6th International Conference on Optimization and Applications (ICOA); 2020; Beni Mellal. Beni Mellal: Sultan Moulay Slimane University; 2020. https://doi.org/10.1109/ICOA49421.2020.9094450.</mixed-citation>
     <mixed-citation xml:lang="en">Jabir B, Falih N. Digital agriculture in Morocco, opportunities and challenges. 2020 IEEE 6th International Conference on Optimization and Applications (ICOA); 2020; Beni Mellal. Beni Mellal: Sultan Moulay Slimane University; 2020. https://doi.org/10.1109/ICOA49421.2020.9094450.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B9">
    <label>9.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Duda P, Jaworski M, Cader A, Wang L. On training deep neural networks using a streaming approach. Journal of Artificial Intelligence and Soft Computing Research. 2020;10(1):15-26. https://doi.org/10.2478/jaiscr-2020-0002.</mixed-citation>
     <mixed-citation xml:lang="en">Duda P, Jaworski M, Cader A, Wang L. On training deep neural networks using a streaming approach. Journal of Artificial Intelligence and Soft Computing Research. 2020;10(1):15-26. https://doi.org/10.2478/jaiscr-2020-0002.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B10">
    <label>10.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Zhang C, Lin Y, Zhu L, Liu A, Zhang Z, Huang F. CNN-VWII: An efficient approach for large-scale video retrieval by image queries. Pattern Recognition Letters. 2019;123:82-88. https://doi.org/10.1016/j.patrec.2019.03.015.</mixed-citation>
     <mixed-citation xml:lang="en">Zhang C, Lin Y, Zhu L, Liu A, Zhang Z, Huang F. CNN-VWII: An efficient approach for large-scale video retrieval by image queries. Pattern Recognition Letters. 2019;123:82-88. https://doi.org/10.1016/j.patrec.2019.03.015.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B11">
    <label>11.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Lin JC-W, Shao Y, Djenouri Y, Yun U. ASRNN: A recurrent neural network with an attention model for sequence labeling. Knowledge-Based Systems. 2021;212. https://doi.org/10.1016/j.knosys.2020.106548.</mixed-citation>
     <mixed-citation xml:lang="en">Lin JC-W, Shao Y, Djenouri Y, Yun U. ASRNN: A recurrent neural network with an attention model for sequence labeling. Knowledge-Based Systems. 2021;212. https://doi.org/10.1016/j.knosys.2020.106548.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B12">
    <label>12.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Shen J, Ren Y, Wan J, Lan Y. Hard disk drive failure prediction for mobile edge computing based on an LSTM recurrent neural network. Mobile Information Systems. 2021;2021. https://doi.org/10.1155/2021/8878364.</mixed-citation>
     <mixed-citation xml:lang="en">Shen J, Ren Y, Wan J, Lan Y. Hard disk drive failure prediction for mobile edge computing based on an LSTM recurrent neural network. Mobile Information Systems. 2021;2021. https://doi.org/10.1155/2021/8878364.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B13">
    <label>13.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444. https://doi.org/10.1038/nature14539.</mixed-citation>
     <mixed-citation xml:lang="en">LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444. https://doi.org/10.1038/nature14539.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B14">
    <label>14.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Araujo VJS, Guimaraes AJ, Souza PVD, Rezende TS, Araujo VS. Using resistin, glucose, age and BMI and pruning fuzzy neural network for the construction of expert systems in the prediction of breast cancer. Machine Learning and Knowledge Extraction. 2019;1(1):466-482. https://doi.org/10.3390/make1010028.</mixed-citation>
     <mixed-citation xml:lang="en">Araujo VJS, Guimaraes AJ, Souza PVD, Rezende TS, Araujo VS. Using resistin, glucose, age and BMI and pruning fuzzy neural network for the construction of expert systems in the prediction of breast cancer. Machine Learning and Knowledge Extraction. 2019;1(1):466-482. https://doi.org/10.3390/make1010028.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B15">
    <label>15.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Kulkarni A, Halgekar P, Deshpande GR, Rao A, Dinni A. Dynamic sign language translating system using deep learning and natural language processing. Turkish Journal of Computer and Mathematics Education. 2021;12(10):129-137.</mixed-citation>
     <mixed-citation xml:lang="en">Kulkarni A, Halgekar P, Deshpande GR, Rao A, Dinni A. Dynamic sign language translating system using deep learning and natural language processing. Turkish Journal of Computer and Mathematics Education. 2021;12(10):129-137.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B16">
    <label>16.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Huu PN, Ngoc TP, Manh HT. Proposing gesture recognition algorithm using HOG and SVM for smart-home applications. In: Vo N-S, Hoang V-P, Vien Q-T, editors. Industrial networks and intelligent systems. Cham: Springer; 2021. pp. 315-323. https://doi.org/10.1007/978-3-030-77424-0_26.</mixed-citation>
     <mixed-citation xml:lang="en">Huu PN, Ngoc TP, Manh HT. Proposing gesture recognition algorithm using HOG and SVM for smart-home applications. In: Vo N-S, Hoang V-P, Vien Q-T, editors. Industrial networks and intelligent systems. Cham: Springer; 2021. pp. 315-323. https://doi.org/10.1007/978-3-030-77424-0_26.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B17">
    <label>17.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Kamilaris A, Prenafeta-Boldú FX. A review of the use of convolutional neural networks in agriculture. Journal of Agricultural Science. 2018;156(3):312-322. https://doi.org/10.1017/S0021859618000436.</mixed-citation>
     <mixed-citation xml:lang="en">Kamilaris A, Prenafeta-Boldú FX. A review of the use of convolutional neural networks in agriculture. Journal of Agricultural Science. 2018;156(3):312-322. https://doi.org/10.1017/S0021859618000436.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B18">
    <label>18.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Nash W, Drummond T, Birbilis N. A review of deep learning in the study of materials degradation. npj Mater Degrad. 2018;2(1). https://doi.org/10.1038/s41529-018-0058-x.</mixed-citation>
     <mixed-citation xml:lang="en">Nash W, Drummond T, Birbilis N. A review of deep learning in the study of materials degradation. npj Mater Degrad. 2018;2(1). https://doi.org/10.1038/s41529-018-0058-x.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B19">
    <label>19.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Bousetouane F, Morris B. Off-the-shelf CNN features for fine-grained classification of vessels in a maritime environment. In: Bebis G, Boyle R, Parvin B, Koracin D, Pavlidis I, Feris R, et al., editors. Advances in visual computing. Cham: Springer; 2015. pp. 379-388. https://doi.org/10.1007/978-3-319-27863-6_35.</mixed-citation>
     <mixed-citation xml:lang="en">Bousetouane F, Morris B. Off-the-shelf CNN features for fine-grained classification of vessels in a maritime environment. In: Bebis G, Boyle R, Parvin B, Koracin D, Pavlidis I, Feris R, et al., editors. Advances in visual computing. Cham: Springer; 2015. pp. 379-388. https://doi.org/10.1007/978-3-319-27863-6_35.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B20">
    <label>20.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Ganai AF, Khursheed F. Predicting next word using RNN and LSTM cells: Stastical language modeling. 2019 Fifth International Conference on Image Information Processing (ICIIP); 2019; Shimla. Solan: Jaypee University of Information Technology; 2019. p. 469-474. https://doi.org/10.1109/ICIIP47207.2019.8985885.</mixed-citation>
     <mixed-citation xml:lang="en">Ganai AF, Khursheed F. Predicting next word using RNN and LSTM cells: Stastical language modeling. 2019 Fifth International Conference on Image Information Processing (ICIIP); 2019; Shimla. Solan: Jaypee University of Information Technology; 2019. p. 469-474. https://doi.org/10.1109/ICIIP47207.2019.8985885.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B21">
    <label>21.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">dos Santos Ferreira A, Freitas DM, da Silva GG, Pistori H, Folhes MT. Weed detection in soybean crops using ConvNets. Computers and Electronics in Agriculture. 2017;143:314-324. https://doi.org/10.1016/j.compag.2017.10.027.</mixed-citation>
     <mixed-citation xml:lang="en">dos Santos Ferreira A, Freitas DM, da Silva GG, Pistori H, Folhes MT. Weed detection in soybean crops using ConvNets. Computers and Electronics in Agriculture. 2017;143:314-324. https://doi.org/10.1016/j.compag.2017.10.027.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B22">
    <label>22.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Farooq A, Hu J, Jia X. Analysis of spectral bands and spatial resolutions for weed classification via deep convolutional neural network. IEEE Geoscience and Remote Sensing Letters. 2018;16(2):183-187. https://doi.org/10.1109/LGRS.2018.2869879.</mixed-citation>
     <mixed-citation xml:lang="en">Farooq A, Hu J, Jia X. Analysis of spectral bands and spatial resolutions for weed classification via deep convolutional neural network. IEEE Geoscience and Remote Sensing Letters. 2018;16(2):183-187. https://doi.org/10.1109/LGRS.2018.2869879.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B23">
    <label>23.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Lammie C, Olsen A, Carrick T, Azghadi MR. Low-power and high-speed deep FPGA inference engines for weed classification at the edge. IEEE Access. 2019;7:51171-51184. https://doi.org/10.1109/ACCESS.2019.2911709.</mixed-citation>
     <mixed-citation xml:lang="en">Lammie C, Olsen A, Carrick T, Azghadi MR. Low-power and high-speed deep FPGA inference engines for weed classification at the edge. IEEE Access. 2019;7:51171-51184. https://doi.org/10.1109/ACCESS.2019.2911709.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B24">
    <label>24.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Harsono IW, Liawatimena S, Cenggoro TW. Lung nodule detection and classification from Thorax CT-scan using RetinaNet with transfer learning. Journal of King Saud University - Computer and Information Sciences. 2020. https://doi.org/10.1016/j.jksuci.2020.03.013.</mixed-citation>
     <mixed-citation xml:lang="en">Harsono IW, Liawatimena S, Cenggoro TW. Lung nodule detection and classification from Thorax CT-scan using RetinaNet with transfer learning. Journal of King Saud University - Computer and Information Sciences. 2020. https://doi.org/10.1016/j.jksuci.2020.03.013.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B25">
    <label>25.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Sherstinsky A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Physica D: Nonlinear Phenomena. 2020;404. https://doi.org/10.1016/j.physd.2019.132306.</mixed-citation>
     <mixed-citation xml:lang="en">Sherstinsky A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Physica D: Nonlinear Phenomena. 2020;404. https://doi.org/10.1016/j.physd.2019.132306.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B26">
    <label>26.</label>
    <citation-alternatives>
      <mixed-citation xml:lang="ru">Gupta S, Agrawal A, Gopalakrishnan K, Narayanan P. Deep learning with limited numerical precision. Proceedings of the 32nd International Conference on Machine Learning; 2015; Lille. Lille: JMLR W&amp;CP; 2015. p. 1737-1746.</mixed-citation>
      <mixed-citation xml:lang="en">Gupta S, Agrawal A, Gopalakrishnan K, Narayanan P. Deep learning with limited numerical precision. Proceedings of the 32nd International Conference on Machine Learning; 2015; Lille. Lille: JMLR W&amp;CP; 2015. p. 1737-1746.</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B27">
    <label>27.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Pak M, Kim S. A review of deep learning in image recognition. 2017 4th International Conference on Computer Applications and Information Processing Technology. Kuta Bali; 2017. p. 367-369. https://doi.org/10.1109/CAIPT.2017.8320684.</mixed-citation>
     <mixed-citation xml:lang="en">Pak M, Kim S. A review of deep learning in image recognition. 2017 4th International Conference on Computer Applications and Information Processing Technology. Kuta Bali; 2017. p. 367-369. https://doi.org/10.1109/CAIPT.2017.8320684.</mixed-citation>
    </citation-alternatives>
   </ref>
  </ref-list>
 </back>
</article>
