Article

Hybridized Adaptive Neuro-Fuzzy Inference System with Metaheuristic Algorithms for Modeling Monthly Pan Evaporation

1 School of Economics and Statistics, Guangzhou University, Guangzhou 510006, China
2 Research Institute of Forests and Rangelands, Agricultural Research, Education and Extension Organization (AREEO), Tehran 14968-13111, Iran
3 Department of Water Engineering, Aburaihan Campus, University of Tehran, Tehran 33916-53755, Iran
4 Department of Civil Engineering, Technical University of Lübeck, 23562 Lübeck, Germany
5 School of Technology, Ilia State University, 0162 Tbilisi, Georgia
6 Faculty of Science, Agronomy Department, Hydraulics Division University, 20 Août 1955, Route El Hadaik, BP 26, Skikda 21024, Algeria
7 Department of Water Engineering, Shahid Bahonar University of Kerman, Kerman 76169-13439, Iran
* Authors to whom correspondence should be addressed.
Water 2022, 14(21), 3549; https://doi.org/10.3390/w14213549
Submission received: 6 September 2022 / Revised: 1 November 2022 / Accepted: 2 November 2022 / Published: 4 November 2022

Abstract:
Precise estimation of pan evaporation is necessary to manage available water resources. In this study, the capability of three hybridized models for modeling monthly pan evaporation (Epan) at three stations in the Dongting Lake basin, China, was investigated. Each model consisted of an adaptive neuro-fuzzy inference system (ANFIS) integrated with a metaheuristic optimization algorithm: particle swarm optimization (PSO), the whale optimization algorithm (WOA), or Harris hawks optimization (HHO). The modeling data were acquired for the period between 1962 and 2001 (480 months), grouped into several combinations, and incorporated into the hybridized models. The performance of the models was assessed using the root mean square error (RMSE), mean absolute error (MAE), Nash–Sutcliffe efficiency (NSE), coefficient of determination (R2), Taylor diagrams, and violin plots. The results showed that maximum temperature was the most influential variable for evaporation estimation compared to the other input variables. The effect of a periodicity input was also investigated, demonstrating its efficacy in improving the models' predictive accuracy. Among the models developed, ANFIS-HHO and ANFIS-WOA outperformed the other models in predicting Epan at the study stations with different combinations of input variables; between these two, ANFIS-WOA performed better. The results also demonstrated the models' capability to predict Epan at a given study station using data obtained from another station. Our study can provide insights into the development of predictive hybrid models for analyses conducted in data-scarce regions.

1. Introduction

Out of several components of the hydrological cycle that affect regional water resources and agricultural production, evaporation is known as one of the essential components [1,2]. The importance of evaporation gains even more attention when the objective is the management of water resources in arid and semi-arid regions [3,4]. Evaporation is estimated using various indirect and direct methods, such as water balance, energy balance, mass transfer, Penman method, and evaporation pan [5]. Among them, the evaporation pan is a widely used and cost-effective method that reflects the amount of evaporation in the balance between the water and energy cycles, and measures the combined effect of several climate elements, such as solar radiation (SR), air temperature (TA), rainfall, relative humidity (RH), and wind speed (SW). Therefore, many researchers have attempted to estimate pan evaporation (Epan) using direct and indirect approaches [6,7]. Although a direct estimation of Epan seems to be more accurate, practical limitations and restrictions in instrumental devices encourage meteorologists to implement indirect approaches that work based on empirical, mathematical, and soft computing, as well as machine learning (ML) techniques [8].
With the advancement of computer hardware and the improvement of soft computing techniques over the last three decades, data-driven ML techniques have been successfully used for the prediction of Epan [9]. Artificial neural networks (ANNs) [10], an adaptive neuro-fuzzy inference system (ANFIS) [11], least square support vector regression (LSSVM) [12], tree-based methods [13], a self-organizing map neural network (SOMNN) [14], multiple linear regression (MLR) [15], support vector machines (SVMs) [16], a classification and regression tree (CART) [17], an extreme learning machine (ELM) [18], gene expression programming [19], and a multivariate adaptive regression spline (MARS) [20] are among the methods that have been implemented in the literature.
In the case of the ANFIS method, many works have acknowledged its accuracy for Epan estimation [21,22]. ANFIS is a powerful tool for iteratively modeling complex real-world systems because it couples network-based training with fuzzy inference logic, converting data patterns through membership functions. ANFIS also benefits from a fast calculation speed, which helps it reach an optimal solution efficiently.
Apart from method selection, the selection of appropriate parameters is a critical step for the development of accurate and reliable Epan prediction models. Since each ML method, such as ANFIS, has several parameters, it is a difficult and time-consuming task to manually tune all parameters. Therefore, automatic parameter tuning using metaheuristic optimization algorithms has received attention from many researchers studying real-world problems. Examples of the metaheuristic optimization algorithms used for tuning a base method, such as ANN, LSSVM, SVR and ANFIS [23,24], include an electrostatic discharge algorithm [25], a water cycle optimization algorithm (WCA) [26], atom search optimization (ASO) [27], particle swarm optimization (PSO) [28], a cultural algorithm (CA) [29], a bee colony algorithm (BCA) [30], a genetic algorithm (GA) [31], biogeography-based optimization [24], and a firefly algorithm (FFA) [32]. Table 1 compares the hybridized ANFIS models with other ML methods for modeling evaporation as used in previous studies.
Owing to the capabilities of hybridized ANFIS models in capturing the nonlinear features of complex systems, this research aims to evaluate the performance of standalone ANFIS, along with three hybridized variants, namely ANFIS-PSO, ANFIS-WOA (whale optimization algorithm), and ANFIS-HHO (Harris hawks optimization), in predicting Epan at three stations located in China. In this framework, limited data (Tmin and Tmax) and extraterrestrial radiation (Ra), which can be easily calculated from the Julian date, are used to develop the models. The major novelty of this study is the assessment of the potential of three hybridized models for estimating Epan. A further contribution is the evaluation of the models' ability to extrapolate Epan at one station (Nanxian Station) using data from the other two stations (Jingzhou and Yueyang Stations).

2. Case Study

The Dongting Lake basin (DLB) in southeastern China (20°–32° N and 108°–123° E) was selected as the case study area to illustrate the methodology proposed for Epan estimation (Figure 1). Covering an area of around 261,400 km2, the basin is located to the south of the Yangtze River and covers 12% of the Yangtze River Basin; Dongting Lake itself is China's second-largest freshwater lake. The DLB is drained by four main rivers, i.e., the Li, Yuan, Zi, and Xiang Rivers. The DLB has a humid climate with highly variable precipitation (mean annual precipitation of 1380 mm), resulting in both drought periods and flood events. The mean annual temperature of the basin is 17 °C, with the lowest monthly temperature of 4.2 °C in January and the highest of 28.9 °C in July. This region is mainly known for rice production, although production has been reduced by drought events in recent years. The basin has an altitude gradient ranging from 30 m in plain areas to 2500 m in mountainous areas. Monthly minimum and maximum temperature data, as well as evaporation data, for the period between 1962 and 2001 (480 months) at the three selected stations were collected from the China Meteorological Administration (CMA). A summary of the statistical characteristics of the data is listed in Table 2. For the application of the machine learning models, the data were divided into two sets, with 75% (360 months) used for model training and the remaining 25% (120 months) for model validation [13,28,33].
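The chronological 75%/25% partition described above can be sketched in a few lines; this is a minimal illustration, not the authors' code, and the variable names are hypothetical.

```python
def chronological_split(records, train_fraction=0.75):
    """Split a time-ordered sequence into training and validation sets
    without shuffling, so the temporal order is preserved."""
    n_train = int(len(records) * train_fraction)
    return records[:n_train], records[n_train:]

# Stand-in for the 480 monthly records (1962-2001); real data would hold
# (Tmin, Tmax, Ra, Epan) values per month.
months = list(range(480))
train, test = chronological_split(months)  # 360 training, 120 validation
```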

3. Methods

3.1. Adaptive Neuro-Fuzzy Inference System (ANFIS)

The adaptive neuro-fuzzy inference system (ANFIS) is an artificial intelligence method that was inspired by a combination of ANN and fuzzy logic. For an approximation function with three input variables and one output variable, a fuzzy rules base can be expressed as follows:
$$\text{Rule 1: If } x_1 = A_1 \text{ and } x_2 = B_1 \text{ and } x_3 = C_1, \text{ then } f_1 = \alpha_1 x_1 + \beta_1 x_2 + \gamma_1 x_3 + \delta_1$$
$$\text{Rule 2: If } x_1 = A_2 \text{ and } x_2 = B_2 \text{ and } x_3 = C_2, \text{ then } f_2 = \alpha_2 x_1 + \beta_2 x_2 + \gamma_2 x_3 + \delta_2$$
$$\text{Rule 3: If } x_1 = A_3 \text{ and } x_2 = B_3 \text{ and } x_3 = C_3, \text{ then } f_3 = \alpha_3 x_1 + \beta_3 x_2 + \gamma_3 x_3 + \delta_3$$
where αi, βi, γi, and δi are the linear parameters in the consequent part of the ANFIS model; x1, x2, and x3 are the input variables; and Ai, Bi, and Ci are the fuzzy sets [36]. ANFIS consists of five layers which can be summarized as follows. In the first layer, all nodes are adaptive and their parameters (i.e., the premise parameters) are updated during the training process. For each node in the first layer, the linguistic label is calculated using the corresponding membership functions (MFs), as follows:
$$\Omega_i^1 = \mu_{A_i}(x_1), \quad i = 1, 2, 3$$
$$\Omega_i^1 = \mu_{B_{i-3}}(x_2), \quad i = 4, 5, 6$$
$$\Omega_i^1 = \mu_{C_{i-6}}(x_3), \quad i = 7, 8, 9$$
For example, if the Gaussian membership function is used, it can be applied as follows:
$$f(x; \sigma, c) = \mu_{A_i}(x_1) = e^{-\frac{(x - c)^2}{2\sigma^2}}$$
where σ and c are the parameters of the membership function (i.e., the premise parameters).
For the second layer, all nodes are fixed (not adaptive) and perform a simple multiplication; each node calculates the firing strength (ωi) of one fuzzy rule.
$$\Omega_i^2 = \omega_i = \mu_{A_i}(x_1) \times \mu_{B_i}(x_2) \times \mu_{C_i}(x_3), \quad i = 1, 2, 3$$
The output of the third layer is calculated as the normalization of the firing strengths from the previous layer:
$$\Omega_i^3 = \bar{\omega}_i = \frac{\omega_i}{\omega_1 + \omega_2 + \omega_3}, \quad i = 1, 2, 3$$
For the fourth layer, the nodes are all adaptive and they perform the following output:
$$\Omega_i^4 = \bar{\omega}_i f_i = \bar{\omega}_i (\alpha_i x_1 + \beta_i x_2 + \gamma_i x_3 + \delta_i), \quad i = 1, 2, 3$$
where αi, βi, γi, and δi are the linear parameters (i.e., the consequent parameters) of the fuzzy rules. For the fifth layer, only one node is available and it provides the final response by calculating the overall output as a summation:
$$\Omega^5 = \sum_{i=1}^{3} \bar{\omega}_i f_i = \frac{\sum_i \omega_i f_i}{\sum_i \omega_i}$$
ANFIS uses a hybrid training algorithm in two steps: (i) a forward pass, for which the least square method (LSM) is used for updating the linear parameters (i.e., consequent parameters) in the fourth layer; and (ii) the backward pass for which the gradient descent (GD) algorithm is used for nonlinear parameters (i.e., premise parameters).
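The five-layer forward pass described above can be sketched as follows; this is a minimal illustration with Gaussian membership functions and three first-order Sugeno rules, and the parameter values it expects are placeholders rather than trained values from the study.

```python
import math

def gaussian_mf(x, c, sigma):
    """Layer 1: Gaussian membership value (premise parameters c, sigma)."""
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def anfis_forward(x, premise, consequent):
    """x: the three inputs [x1, x2, x3].
    premise: per rule, one (c, sigma) pair per input.
    consequent: per rule, the linear coefficients (alpha, beta, gamma, delta)."""
    # Layer 2: firing strength = product of the rule's membership values
    w = [math.prod(gaussian_mf(xi, c, s) for xi, (c, s) in zip(x, rule))
         for rule in premise]
    # Layer 3: normalized firing strengths
    total = sum(w)
    w_bar = [wi / total for wi in w]
    # Layer 4: first-order Sugeno rule outputs
    f = [a * x[0] + b * x[1] + g * x[2] + d for (a, b, g, d) in consequent]
    # Layer 5: overall output as the sum of weighted rule outputs
    return sum(wb * fi for wb, fi in zip(w_bar, f))
```

Training then amounts to adjusting the (c, sigma) pairs and the consequent coefficients, whether by the hybrid LSM/GD scheme or by a metaheuristic, as described in Section 3.5.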

3.2. Particle Swarm Optimization (PSO)

Particle swarm optimization (PSO) was proposed by Kennedy and Eberhart [37]. PSO is a swarm-inspired metaheuristic algorithm based on simulating the migration, aggregation, and natural foraging behaviors of birds, insects, herds, and fish; each individual in the colony represents one particle. PSO is an effective tool for solving global optimization problems based on two major components: iteration and search [38]. The major goal of PSO is to seek the global optimal solution within the swarm through cooperation and information sharing among the particles. According to Kennedy and Eberhart [37], each particle in the swarm is defined by two vectors: its flying speed, i.e., velocity (Vi), and its position, i.e., location (Xi):
$$V_i = [v_{i1}, v_{i2}, v_{i3}, \ldots, v_{iD}]$$
$$X_i = [x_{i1}, x_{i2}, x_{i3}, \ldots, x_{iD}]$$
We denote by Pbest the optimal position of each particle and Gbest the cluster (i.e., entire population) historical optimal position (i.e., location) as follows:
$$P_{best} = [p_{i1}, p_{i2}, p_{i3}, \ldots, p_{iD}]$$
$$G_{best} = [p_{g1}, p_{g2}, p_{g3}, \ldots, p_{gD}]$$
During the PSO process, and in each individual flight, the particles change and update their positions and velocities as follows:
$$v_{id}^{k+1} = \omega v_{id}^k + \delta_1 \theta_1 (p_{id}^k - x_{id}^k) + \delta_2 \theta_2 (p_{gd}^k - x_{id}^k)$$
$$x_{id}^{k+1} = x_{id}^k + v_{id}^{k+1}$$
$$i = 1, 2, 3, \ldots, N \quad \text{and} \quad d = 1, 2, 3, \ldots, D$$
where i is a particle in the group; N is the total number of particles in the swarm; d is the dimension index of each particle; D is the number of variables; $v_{id}^k$ and $v_{id}^{k+1}$ are the velocities of particle i at iterations k and k+1, respectively; $x_{id}^k$ and $x_{id}^{k+1}$ are the positions of particle i at iterations k and k+1, respectively; θ1 and θ2 are random numbers ranging from 0 to 1; δ1 and δ2 are the cognitive and social acceleration coefficients; ω is an inertia weight parameter; $p_{id}^k$ is the best position of the particle; and $p_{gd}^k$ is the swarm's best position [39].
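The velocity and position updates above can be sketched as a compact PSO minimizing a simple sphere function; the coefficient values (ω = 0.7, δ1 = δ2 = 1.5) are common defaults, not the settings used in the study.

```python
import random

def pso(fitness, dim=2, n_particles=20, iters=100,
        omega=0.7, delta1=1.5, delta2=1.5, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                       # personal best positions
    pbest_val = [fitness(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # swarm best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                th1, th2 = rng.random(), rng.random()
                # velocity update: inertia + cognitive + social terms
                V[i][d] = (omega * V[i][d]
                           + delta1 * th1 * (pbest[i][d] - X[i][d])
                           + delta2 * th2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]                  # position update
            val = fitness(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(v * v for v in x))
```

In the hybrid ANFIS-PSO, the fitness function would instead be the MSE of the ANFIS output, with each particle encoding one candidate set of premise and consequent parameters.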

3.3. Whale Optimization Algorithm (WOA)

The whale optimization algorithm (WOA) is a swarm intelligence metaheuristic algorithm proposed by Mirjalili and Lewis [40], mimicking the physical movement and behavior of humpback whales in the ocean when swimming and attacking their prey. The overall WOA is formulated based on three optimization modes, namely: (i) encircling prey, (ii) the spiral bubble-net feeding maneuver, and (iii) searching for prey. The algorithm process is divided into two important stages: exploitation (encircling prey and the spiral bubble-net feeding maneuver) and exploration (searching for prey). To use WOA, as with all metaheuristic algorithms, each individual is treated as an information carrier: each whale is an agent, i.e., a possible candidate solution. Each agent iteratively seeks the best solution to the identified problem, i.e., the prey [40].
Humpback whales possess the ability to detect even small prey and encircle them. Importantly, at the beginning of the process, the whales do not know the location of the best solution; WOA hypothesizes that the identified target prey corresponds to the best candidate solution, or is approximately close to the optimum. Consequently, once the best solution is determined, the remaining agents update their positions toward this best candidate. This first process is formulated as follows [40]:
$$Z = |\beta \cdot X^*(t) - X(t)|$$
$$X(t+1) = X^*(t) - \delta \cdot Z$$
where Z represents the distance separating the search agent from the position of the best candidate; t refers to the current iteration; $X^*$ is the position vector of the best solution; β and δ are coefficient vectors; X is the position vector; |·| denotes the absolute value; and (·) denotes multiplication. The two vectors β and δ are formulated as follows:
$$\delta = 2\mu \cdot \theta - \mu$$
$$\beta = 2 \cdot \theta$$
where μ is a variable that decreases linearly from 2 to 0 over the iteration process, and θ is a random vector ranging from 0 to 1. Furthermore, to surround their prey, whales perform a spiral movement from the bottom to the top, simultaneously emitting an ensemble of bubbles of different sizes. This helix-shaped movement of the humpback whales is expressed as follows:
$$X(t+1) = \tilde{Z} \cdot e^{bl} \cdot \cos(2\pi l) + X^*(t)$$
where $\tilde{Z} = (X^*(t) - X(t))$ is the distance separating the prey and the whale; l is a variable ranging from −1 to +1; and b is a constant related to the shape of the logarithmic spiral [41]. The previous equation can be formulated as follows:
$$X(t+1) = \begin{cases} X^*(t) - \delta \cdot Z & \text{if } p < 0.5 \\ \tilde{Z} \cdot e^{bl} \cdot \cos(2\pi l) + X^*(t) & \text{if } p \geq 0.5 \end{cases}$$
where p is a random variable ranging from 0 to 1. Finally, during the exploration phase, the whales adopt a random search strategy, updating their positions with respect to a randomly chosen whale rather than the best agent. This process can be formulated as follows:
$$Z = |\beta \cdot X_{rand} - X|$$
$$X(t+1) = X_{rand} - \delta \cdot Z$$
where $X_{rand}$ is a random position vector selected from the current population [40].
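The encircling, spiral, and random-search rules above can be combined into a compact sketch; this minimizes a simple sphere function, with b = 1 for the logarithmic spiral and otherwise illustrative settings, and is not the authors' implementation.

```python
import math
import random

def woa(fitness, dim=2, n_whales=20, iters=100, b=1.0, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_whales)]
    best = min(X, key=fitness)[:]
    for t in range(iters):
        mu = 2.0 * (1 - t / iters)           # decreases linearly from 2 to 0
        for i in range(n_whales):
            p = rng.random()
            for d in range(dim):
                theta = rng.random()
                delta = 2 * mu * theta - mu   # coefficient delta
                beta = 2 * rng.random()       # coefficient beta
                if p < 0.5:
                    if abs(delta) < 1:        # exploitation: encircle the best
                        Z = abs(beta * best[d] - X[i][d])
                        X[i][d] = best[d] - delta * Z
                    else:                     # exploration: follow a random whale
                        xr = X[rng.randrange(n_whales)][d]
                        Z = abs(beta * xr - X[i][d])
                        X[i][d] = xr - delta * Z
                else:                         # spiral bubble-net movement
                    l = rng.uniform(-1, 1)
                    Zt = abs(best[d] - X[i][d])
                    X[i][d] = Zt * math.exp(b * l) * math.cos(2 * math.pi * l) + best[d]
            if fitness(X[i]) < fitness(best):
                best = X[i][:]
    return best, fitness(best)

best, best_val = woa(lambda x: sum(v * v for v in x))
```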

3.4. Harris Hawk Optimization (HHO)

The Harris hawks optimization algorithm (HHO) was developed and introduced by Heidari et al. [41]. Since it was proposed, it has become a popular metaheuristic optimization algorithm, widely used for solving highly complex nonlinear problems. HHO belongs to the category of population-based algorithms and has two major phases: exploration and exploitation. During the exploration phase, two important connected elements should be highlighted: (i) the candidate solution, which is the Harris hawk, and (ii) the best candidate solution, which corresponds to the targeted prey (i.e., the rabbit). While searching for prey (i.e., before attacking the rabbit), the Harris hawk uses two strategies (Equation (27)) with equal chance q: (i) positioning based on the other hawks in the population (q < 0.5), and (ii) positioning based on the prey (q ≥ 0.5) [41].
$$X(t+1) = \begin{cases} X_{rand}(t) - \alpha_1 |X_{rand}(t) - 2\alpha_2 X(t)| & \text{if } q \geq 0.5 \\ (X_{rabbit}(t) - X_m(t)) - \alpha_3 (\beta + \alpha_4 (\delta - \beta)) & \text{if } q < 0.5 \end{cases}$$
In this equation there are four different positions: the current position of the Harris hawk ($X(t)$), the position of the Harris hawk at the next iteration ($X(t+1)$), the mean position of the population at the current iteration ($X_m(t)$), and the position of the prey (the rabbit), $X_{rabbit}(t)$. Furthermore, α1, α2, α3, and α4 are random numbers ranging from 0 to 1; β and δ are the lower and upper bounds of the variables; and $X_{rand}$ is a randomly selected Harris hawk among all available individuals. The average position of the population is obtained using Equation (28):
$$X_m(t) = \frac{1}{N} \sum_{i=1}^{N} X_i(t)$$
where $X_i(t)$ is the position of the ith Harris hawk and N is the total number of Harris hawks in the population. Between the exploration and exploitation phases lies a transition phase, in which the escaping energy of the prey is evaluated. This energy decreases steadily as the number of iterations rises, which lets the Harris hawk select a strategy from a large number of possible choices depending on the energy loss of the prey (i.e., the rabbit). The energy loss of the prey can be formulated as follows:
$$E = 2 E_0 \left(1 - \frac{t}{T}\right)$$
where E is the escaping energy of the prey; E0 is the initial energy of the prey before the flight; and T is the total number of iterations. Mathematically, the absolute value of E distinguishes two cases: (i) if |E| ≥ 1, the algorithm is in the exploration phase, and (ii) if |E| < 1, it is in the exploitation phase. The value of E0 ranges in the interval from −1 to +1 and is updated at each iteration. If E0 decreases from 0 to −1, the prey is assumed to be weakening, and if E0 is higher than 0, the prey is assumed to be strong [41].
During the exploitation phase, the prey (i.e., the rabbit) adopts several escaping strategies in response to the different Harris hawk attack scenarios. From the combinations of the escaping and hunting strategies, four possible scenarios can be formulated for the HHO. For this mathematical formulation, a random number (r) is used to represent the success or failure of the escape plan adopted by the prey: r < 0.5 indicates a successful escape, and conversely, r ≥ 0.5 corresponds to a failed escape plan. Based on these values, a hard or soft besiege is performed, and the mathematical formulation of the two besieges is based on the value of E. An absolute value of E of at least 0.5 (|E| ≥ 0.5) indicates that the HHO adopts a soft besiege, while an absolute value less than 0.5 (|E| < 0.5) leads to a hard besiege. The soft besiege can be formulated as follows:
$$X(t+1) = \Delta X(t) - E |J X_{rabbit}(t) - X(t)|$$
$$\Delta X(t) = X_{rabbit}(t) - X(t)$$
where ΔX(t) is the distance separating the location of the prey from the location of the Harris hawk, and J is the random jump strength of the prey during its flight, calculated as follows:
$$J = 2(1 - r_5)$$
where r5 is a random number ranging from 0 to 1. The function of the hard besiege can be formulated as follows (r ≥ 0.5 and |E| < 0.5):
$$X(t+1) = X_{rabbit}(t) - E |\Delta X(t)|$$
The soft besiege with progressive rapid dives (r < 0.5 and |E| ≥ 0.5) can be formulated as follows:
$$Y = X_{rabbit}(t) - E |J X_{rabbit}(t) - X(t)|$$
If the result of the attack procedure is not positive, an update is made as follows:
$$Z = Y + S \times LF(D)$$
where LF is a Lévy flight, D is the number of dimensions of the problem, and S is a random vector. LF can be expressed as follows:
$$LF(x) = 0.01 \times \frac{\mu \times \sigma}{|\upsilon|^{1/\beta}}, \quad \sigma = \left( \frac{\Gamma(1+\beta) \times \sin(\pi \beta / 2)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\frac{\beta-1}{2}}} \right)^{1/\beta}$$
where β is a constant equal to 1.5; μ and υ are random values ranging from 0 to 1; and Γ is the gamma function. Finally, the position update can be expressed as follows [42]:
$$X(t+1) = \begin{cases} Y & \text{if } F(Y) < F(X(t)) \\ Z & \text{if } F(Z) < F(X(t)) \end{cases}$$
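The phase and besiege selection described above can be summarized in a small helper; this sketch covers only the strategy-selection logic driven by the escaping energy E and the random number r, with the fourth scenario (hard besiege with progressive rapid dives) included for completeness, and the function names are hypothetical.

```python
def escaping_energy(E0, t, T):
    """E = 2*E0*(1 - t/T); |E| >= 1 -> exploration, |E| < 1 -> exploitation."""
    return 2.0 * E0 * (1.0 - t / T)

def select_strategy(E, r):
    """Pick the HHO hunting strategy from the escaping energy E and the
    escape-success indicator r (r < 0.5: successful escape attempt)."""
    if abs(E) >= 1:
        return "exploration"
    if r >= 0.5 and abs(E) >= 0.5:
        return "soft besiege"
    if r >= 0.5 and abs(E) < 0.5:
        return "hard besiege"
    if r < 0.5 and abs(E) >= 0.5:
        return "soft besiege with progressive rapid dives"
    return "hard besiege with progressive rapid dives"
```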

3.5. ANFIS Optimization Using PSO, WOA, and HHO

To train ANFIS to a high level of efficiency and accuracy, two sets of parameters should be properly adjusted: (i) the linear parameters (i.e., consequent parameters) in the fourth layer {α, β, γ, δ}, and (ii) the nonlinear parameters (i.e., premise parameters) of the membership functions in the first layer {c, σ}. The LSM can be adopted to identify the linear parameters, while the GD algorithm is used for the nonlinear parameters. In this study, we instead used three metaheuristic algorithms to optimize both the premise and the consequent parameters. The number of parameters to be optimized depends on the number of fuzzy rules and the number of MFs for each input variable (i.e., the linguistic labels). The total number of premise parameters equals the total number of fuzzy labels multiplied by two (i.e., c and σ), while the total number of consequent parameters equals the number of fuzzy rules multiplied by the number of input variables plus one constant per rule. In the metaheuristic algorithms, the dimension of each search agent equals the total number of parameters; the initial particles (i.e., agents) are generated randomly, the fitness function is calculated, and the process continues until the optimal condition with an acceptable error is achieved. In this study, the mean squared error (MSE) was used as the fitness function. Details of the HHO, WOA, and PSO algorithms are depicted in Figure 2 as the research flowchart.
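The parameter tally described above can be made concrete; the helper below is illustrative (not from the paper) and assumes Gaussian MFs, where each fuzzy label carries two premise parameters (c, σ) and each rule carries one linear coefficient per input plus a constant.

```python
def anfis_parameter_count(n_inputs, n_mfs_per_input, n_rules):
    """Total number of parameters a metaheuristic must optimize."""
    premise = n_inputs * n_mfs_per_input * 2      # {c, sigma} per fuzzy label
    consequent = n_rules * (n_inputs + 1)         # {alpha, beta, gamma} + delta
    return premise + consequent

# Example: 3 inputs, 3 labels each, 3 rules (as in the equations of Section 3.1)
total = anfis_parameter_count(3, 3, 3)  # 18 premise + 12 consequent = 30
```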
When the hybridized models were successfully developed, their performances were compared using the following metrics [43,44]:
$$RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left[ (Y_o)_i - (Y_c)_i \right]^2}$$
$$MAE = \frac{1}{N} \sum_{i=1}^{N} \left| (Y_o)_i - (Y_c)_i \right|$$
$$NSE = 1 - \frac{\sum_{i=1}^{N} \left[ (Y_o)_i - (Y_c)_i \right]^2}{\sum_{i=1}^{N} \left[ (Y_o)_i - \bar{Y}_o \right]^2}, \quad -\infty < NSE \leq 1$$
$$R^2 = \left[ \frac{\sum_{i=1}^{N} (Y_o - \bar{Y}_o)(Y_c - \bar{Y}_c)}{\sqrt{\sum_{i=1}^{N} (Y_o - \bar{Y}_o)^2 \sum_{i=1}^{N} (Y_c - \bar{Y}_c)^2}} \right]^2$$
where $Y_c$, $Y_o$, $\bar{Y}_o$, and N are the calculated Epan, the measured Epan, the mean of the measured Epan, and the number of data points, respectively. The evaluation metrics were applied to data from the three stations in China using several control parameters; information on these parameter values is shown in Table 3. The population size and the number of iterations were set to 30 and 150, respectively, and each model was run 10 times to reach robust outcomes.
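The four metrics defined above can be computed directly; the sketch below uses illustrative names, with y_obs holding measured Epan values and y_calc the model estimates.

```python
import math

def rmse(y_obs, y_calc):
    """Root mean square error."""
    return math.sqrt(sum((o - c) ** 2 for o, c in zip(y_obs, y_calc)) / len(y_obs))

def mae(y_obs, y_calc):
    """Mean absolute error."""
    return sum(abs(o - c) for o, c in zip(y_obs, y_calc)) / len(y_obs)

def nse(y_obs, y_calc):
    """Nash-Sutcliffe efficiency: 1 minus error variance over observed variance."""
    mean_o = sum(y_obs) / len(y_obs)
    num = sum((o - c) ** 2 for o, c in zip(y_obs, y_calc))
    den = sum((o - mean_o) ** 2 for o in y_obs)
    return 1.0 - num / den

def r2(y_obs, y_calc):
    """Coefficient of determination (squared Pearson correlation)."""
    mo = sum(y_obs) / len(y_obs)
    mc = sum(y_calc) / len(y_calc)
    num = sum((o - mo) * (c - mc) for o, c in zip(y_obs, y_calc)) ** 2
    den = (sum((o - mo) ** 2 for o in y_obs)
           * sum((c - mc) ** 2 for c in y_calc))
    return num / den
```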

4. Results and Discussion

The hybridized ANFIS models were compared for estimating the monthly Epan of the Jinzhou Station using various input combinations (Table 4). The effect of periodicity was also investigated by adding the month number of the year (α, varying from 1 to 12) to the optimum input combination. From the first three input combinations, it is evident that Tmax was the most influential parameter for Epan estimation in all models. For example, ANFIS with only Tmax as input had a lower RMSE (0.6508 mm) and MAE (0.5210 mm), and higher R2 (0.8956) and NSE (0.8922), compared to ANFIS with only Tmin or Ra as input. Among the double input combinations, Tmin and Tmax had the highest accuracy (see combinations 4–6 in Table 4). Of all the input combinations, the three inputs (Tmin, Tmax, and Ra) provided the lowest RMSE and MAE, and the highest R2 and NSE, in estimating monthly Epan for all four models. The last input combination comprised the optimum inputs (Tmin, Tmax, and Ra) and α (Opt inputs, α). Table 4 reveals that the periodicity positively affected the accuracy of the ANFIS-HHO and ANFIS-WOA models, while involving α decreased the performance of the ANFIS and ANFIS-PSO models in the estimation of monthly Epan. Among the models developed here, ANFIS-WOA provided the best accuracy, and the model with Opt inputs, α (Tmin, Tmax, Ra, and α) had the lowest RMSE (0.3127 mm) and MAE (0.2561 mm) and the highest R2 (0.9726) and NSE (0.9704), followed by the ANFIS-HHO and ANFIS-PSO models. Table 4 shows that the metaheuristic algorithms improved the performance of ANFIS in both the training and testing phases; the modeling error (RMSE) decreased by about 1.1%, 24.7%, and 29.1% in the testing phases of ANFIS-PSO, ANFIS-HHO, and ANFIS-WOA, respectively.
The dominance of ANFIS-WOA can be clearly seen in the mean values of the statistics, which improved from 0.6661 to 0.5032 (RMSE), 0.5260 to 0.4030 (MAE), 0.8622 to 0.9082 (R2), and 0.8592 to 0.9054 (NSE) from ANFIS to ANFIS-WOA across the split scenarios.
Table 5 shows the training and testing outcomes of the four hybridized ANFIS models in estimating the monthly Epan of the Nanxian Station. Similar to the Jinzhou Station, Tmax was identified as the most effective variable for Epan prediction in all models. Among the double input combinations, (Tmin, Tmax) and (Tmax, Ra) generally produced the same level of accuracy and performed better than the combination of Tmin and Ra. Similar to the previous station, the three-input combination with all three parameters (Tmin, Tmax, and Ra) had the lowest RMSE and MAE and the highest R2 and NSE in both the training and testing phases of all models. Periodicity improved the estimation accuracy of the ANFIS-HHO and ANFIS-WOA models; the improvements in the testing accuracy of ANFIS-WOA were 11.4%, 10.6%, 0.6%, and 0.5% in terms of RMSE, MAE, R2, and NSE, respectively. It is clear from Table 5 that the metaheuristic algorithms improved the efficiency of ANFIS, and that the hybridized models performed much better in estimating monthly Epan using limited climatic input. The ANFIS-WOA model with the three parameters and periodicity (Tmin, Tmax, Ra, and α) achieved the lowest RMSE (0.5886 mm) and MAE (0.4629 mm) and the highest R2 (0.9526) and NSE (0.9507) in the testing phase. The WOA improved the accuracy of ANFIS with inputs Tmin, Tmax, and Ra by 23.9%, 20.7%, 4.5%, and 4.6% with respect to RMSE, MAE, R2, and NSE, respectively. The mean values of the RMSE, MAE, R2, and NSE statistics also endorsed the superior performance of ANFIS-WOA, improving from 0.9029 to 0.7712, 0.6843 to 0.5893, 0.8484 to 0.9079, and 0.8461 to 0.9057, respectively, from ANFIS to ANFIS-WOA.
Table 6 shows the modeling results for the Yueyang Station. Similar to the other two stations, Tmax was found to be the most influential variable on the performance of the models in estimating Epan. The combination of Tmin and Tmax had the best accuracy among the double input combinations in all models developed. Periodicity had a positive effect on the performance of all models. At this station, further improvements in the accuracy of ANFIS were achieved using the metaheuristic algorithms, with 11%, 17%, and 20.5% decreases in RMSE for the PSO, HHO, and WOA algorithms, respectively. The ANFIS-WOA model with three inputs and periodicity performed better than ANFIS-HHO, ANFIS-PSO, and ANFIS in estimating monthly Epan. The averages of the RMSE, MAE, R2, and NSE statistics also confirmed the dominance of ANFIS-WOA over the other ANFIS-based models in predicting monthly Epan.
Figure 3, Figure 4 and Figure 5 illustrate the scatterplots comparing the observed Epan with that predicted by the best hybridized ANFIS models for the three stations. ANFIS-WOA had the least scattered predictions, followed by ANFIS-HHO, ANFIS-PSO, and ANFIS. Figure 6 displays the Taylor diagrams of the models for the Nanxian Station. This diagram is a useful approach for assessing three statistics (RMSE, correlation, and standard deviation) at the same time. The diagram revealed that the ANFIS-WOA model had the lowest RMSE, the greatest correlation, and the standard deviation nearest to the observations, followed by the ANFIS-HHO model. Figure 7 shows the violin plots of the models for the Nanxian Station. The plots reveal that the distribution most similar to the observations belongs to the ANFIS-WOA model. While known as an algorithm with a simple structure, WOA has a fast convergence speed, high convergence accuracy, strong robustness, and stability [45]. WOA has the ability to balance exploration and exploitation to avoid local optima and reach a globally optimal solution. Many studies have demonstrated that WOA not only minimizes the cost of solving engineering problems, but also provides better optimization efficiency compared to other population-based metaheuristic algorithms [45,46].
Overall, our results demonstrated that the hybridized models mitigated the drawbacks of the standalone ANFIS and enabled more accurate modeling of Epan, in line with previous studies [14,21,23,31,32,34].
Table 7 shows the performance of the hybridized ANFIS models in estimating the monthly Epan of the Nanxian Station using climatic data from the Jingzhou Station. This type of comparison is essential, especially for data-scarce regions, and it reveals that the models' outcomes with external input data are not as accurate as those obtained with local input data. Nevertheless, the models estimated Epan with acceptable accuracy even without local input data. For example, the R2 and NSE of the ANFIS-WOA model in the testing phase ranged from 0.8152 to 0.9281 and from 0.8126 to 0.9286 for the worst and best input combinations, respectively. Again, the results demonstrated the efficiency of the metaheuristic algorithms in improving the accuracy of ANFIS, and ANFIS-WOA performed the best in estimating monthly Epan from external input data.
Table 8 compares the efficiency of the models in estimating the monthly Epan of the Nanxian Station using climatic data from the Yueyang Station. Similar results were obtained for this case: the R2 and NSE of the ANFIS-WOA model in the testing phase ranged from 0.8153 to 0.9330 and from 0.8125 to 0.9283, respectively. The improvement in the accuracy of ANFIS achieved by the metaheuristic algorithms was again evident. A comparison of the results in Table 7 and Table 8 reveals that the models estimated Epan at the Nanxian Station more accurately using the climatic input data of the Jingzhou Station. The main reason may lie in the climatic characteristics of the stations: the Yueyang Station is located very close to Dongting Lake (Figure 1), where relative humidity may strongly affect Epan, yet this study used only temperature inputs and therefore could not account for this effect.
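The cross-station experiments in Tables 7 and 8 amount to training a model that maps one station's temperature inputs to another station's Epan and then evaluating it on a held-out period. The sketch below reproduces that workflow on synthetic monthly series; the data, the simple linear model, and the 75/25 chronological split are all stand-ins of ours (the study used hybridized ANFIS models and the training/testing partition summarized in Table 2):

```python
import numpy as np

rng = np.random.default_rng(42)
months = np.arange(480)  # 1962-2001: 480 monthly values

# Hypothetical "external" station temperatures and "target" station Epan
# (synthetic stand-ins for the Jingzhou inputs and the Nanxian evaporation).
tmax_ext = 21.0 + 8.0 * np.sin(2.0 * np.pi * months / 12.0) + rng.normal(0.0, 1.0, 480)
tmin_ext = 13.0 + 8.0 * np.sin(2.0 * np.pi * months / 12.0) + rng.normal(0.0, 1.0, 480)
epan_tgt = 0.20 * tmax_ext + 0.05 * tmin_ext - 1.0 + rng.normal(0.0, 0.3, 480)

# Chronological train/test split (the 75/25 fraction is an assumption of ours)
split = int(0.75 * len(months))
X = np.column_stack([tmin_ext, tmax_ext, np.ones(480)])
coef, *_ = np.linalg.lstsq(X[:split], epan_tgt[:split], rcond=None)

pred = X[split:] @ coef  # predict the target station's Epan from external inputs
obs = epan_tgt[split:]
nse = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

Because the synthetic external inputs drive the synthetic target series, the held-out NSE is high; with real stations, the NSE drop relative to local inputs quantifies the cost of transferring a model across sites.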

5. Conclusions

This paper investigated the accuracy of the standalone ANFIS and three hybridized ANFIS models in modeling monthly Epan using limited climatic data from three stations in China. Various combinations of minimum temperature, maximum temperature, and extraterrestrial radiation were considered as model inputs to investigate the effect of each variable on Epan. Among the input variables, Tmax was found to be the most influential variable on Epan at all stations and for all models. An examination of the effect of periodicity revealed that it could improve the models' accuracy in some cases. The hybridized models were also compared in estimating one station's evaporation using input data from the other stations. It was found that the metaheuristic algorithms, specifically WOA and HHO, considerably improved the efficiency of ANFIS in modeling monthly Epan from limited climatic inputs in both applications. The ANFIS-WOA and ANFIS-HHO models, with R2 ≥ 0.81 and RMSE ≤ 1.04 mm, performed the best using data from all three stations and successfully outperformed the other models in modeling Epan with both local and external temperature data. Our study can provide insights into the development of predictive models when the analysis is based upon a narrow range of climatic variables.

Author Contributions

Conceptualization: R.M.A.I., A.J. and O.K.; formal analysis: A.J. and S.G.M.; validation: R.M.A.I., O.K., S.H. and M.Z.-K.; supervision: A.J. and O.K.; writing—original draft: R.M.A.I., A.J., S.G.M., O.K., S.H. and M.Z.-K.; visualization: R.M.A.I., A.J. and S.G.M.; investigation: S.G.M., O.K. and S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon reasonable request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zounemat-Kermani, M.; Keshtegar, B.; Kisi, O.; Scholz, M. Towards a comprehensive assessment of statistical versus soft computing models in hydrology: Application to monthly pan evaporation prediction. Water 2021, 13, 2451. [Google Scholar] [CrossRef]
  2. Jasmine, M.; Mohammadian, A.; Bonakdari, H. On the Prediction of Evaporation in Arid Climate Using Machine Learning Model. Math. Comput. Appl. 2022, 27, 32. [Google Scholar] [CrossRef]
  3. Piri, J.; Amin, S.; Moghaddamnia, A.; Keshavarz, A.; Han, D.; Remesan, R. Daily pan evaporation modeling in a hot and dry climate. J. Hydrol. Eng. 2009, 14, 803–811. [Google Scholar] [CrossRef]
  4. Schwalm, C.R.; Huntinzger, D.N.; Michalak, A.M.; Fisher, J.B.; Kimball, J.S.; Mueller, B.; Zhang, K.; Zhang, Y. Sensitivity of inferred climate model skill to evaluation decisions: A case study using CMIP5 evapotranspiration. Environ. Res. Lett. 2013, 8, 024028. [Google Scholar] [CrossRef]
  5. Quan, Q.; Liang, W.; Yan, D.; Lei, J. Influences of joint action of natural and social factors on atmospheric process of hydrological cycle in Inner Mongolia, China. Urban Clim. 2022, 41, 101043. [Google Scholar] [CrossRef]
  6. Adnan, R.M.; Malik, A.; Kumar, A.; Parmar, K.S.; Kisi, O. Pan evaporation modeling by three different neuro-fuzzy intelligent systems using climatic inputs. Arab. J. Geosci. 2019, 12, 1–14. [Google Scholar] [CrossRef]
  7. Kisi, O.; Genc, O.; Dinc, S.; Zounemat-Kermani, M. Daily pan evaporation modeling using chi-squared automatic interaction detector, neural networks, classification and regression tree. Comput. Electron. Agric. 2016, 122, 112–117. [Google Scholar] [CrossRef]
  8. Sudheer, K.; Gosain, A.; Mohana Rangan, D.; Saheb, S. Modelling evaporation using an artificial neural network algorithm. Hydrol. Process. 2002, 16, 3189–3202. [Google Scholar] [CrossRef]
  9. Wang, L.; Kisi, O.; Zounemat-Kermani, M.; Li, H. Pan evaporation modeling using six different heuristic computing methods in different climates of China. J. Hydrol. 2017, 544, 407–427. [Google Scholar] [CrossRef]
  10. Keskin, M.E.; Terzi, Ö. Artificial neural network models of daily pan evaporation. J. Hydrol. Eng. 2006, 11, 65–70. [Google Scholar] [CrossRef]
  11. Al Sudani, Z.A.; Salem, G.S.A. Evaporation Rate Prediction Using Advanced Machine Learning Models: A Comparative Study. Adv. Meteorol. 2022, 2022, 1433835. [Google Scholar] [CrossRef]
  12. Chen, J.-L.; Yang, H.; Lv, M.-Q.; Xiao, Z.-L.; Wu, S.J. Estimation of monthly pan evaporation using support vector machine in Three Gorges Reservoir Area, China. Theor. Appl. Climatol. 2019, 138, 1095–1107. [Google Scholar] [CrossRef]
  13. Guven, A.; Kişi, Ö. Daily pan evaporation modeling using linear genetic programming technique. Irrig. Sci. 2011, 29, 135–145. [Google Scholar] [CrossRef]
  14. Goyal, M.K.; Bharti, B.; Quilty, J.; Adamowski, J.; Pandey, A. Modeling of daily pan evaporation in sub tropical climates using ANN, LS-SVR, Fuzzy Logic, and ANFIS. Expert Syst. Appl. 2014, 41, 5267–5276. [Google Scholar] [CrossRef]
  15. Allawi, M.F.; Ahmed, M.L.; Aidan, I.A.; Deo, R.C.; El-Shafie, A. Developing reservoir evaporation predictive model for successful dam management. Stoch. Environ. Res. Risk Assess. 2021, 35, 499–514. [Google Scholar] [CrossRef]
  16. Malik, A.; Kumar, A.; Kisi, O. Daily pan evaporation estimation using heuristic methods with gamma test. J. Irrig. Drain. Eng. 2018, 144, 4018023. [Google Scholar] [CrossRef]
  17. Wu, L.; Huang, G.; Fan, J.; Ma, X.; Zhou, H.; Zeng, W. Hybrid extreme learning machine with meta-heuristic algorithms for monthly pan evaporation prediction. Comput. Electron. Agric. 2020, 168, 105115. [Google Scholar] [CrossRef]
  18. Yaseen, Z.M.; Al-Juboori, A.M.; Beyaztas, U.; Al-Ansari, N.; Chau, K.-W.; Qi, C.; Ali, M.; Salih, S.Q.; Shahid, S. Prediction of evaporation in arid and semi-arid regions: A comparative study using different machine learning models. Eng. Appl. Comput. Fluid Mech. 2020, 14, 70–89. [Google Scholar] [CrossRef]
  19. Emadi, A.; Zamanzad-Ghavidel, S.; Fazeli, S.; Zarei, S.; Rashid-Niaghi, A. Multivariate modeling of pan evaporation in monthly temporal resolution using a hybrid evolutionary data-driven method (case study: Urmia Lake and Gavkhouni basins). Environ. Monit. Assess. 2021, 193, 1–32. [Google Scholar] [CrossRef]
  20. Dehghanipour, M.H.; Karami, H.; Ghazvinian, H.; Kalantari, Z.; Dehghanipour, A.H. Two comprehensive and practical methods for simulating pan evaporation under different climatic conditions in iran. Water 2021, 13, 2814. [Google Scholar] [CrossRef]
  21. Shiri, J.; Dierickx, W.; Pour-Ali Baba, A.; Neamati, S.; Ghorbani, M. Estimating daily pan evaporation from climatic data of the State of Illinois, USA using adaptive neuro-fuzzy inference system (ANFIS) and artificial neural network (ANN). Hydrol. Res. 2011, 42, 491–502. [Google Scholar] [CrossRef]
  22. Kişi, Ö. Evolutionary neural networks for monthly pan evaporation modeling. J. Hydrol. 2013, 498, 36–45. [Google Scholar] [CrossRef]
  23. Ikram, R.M.A.; Goliatt, L.; Kisi, O.; Trajkovic, S.; Shahid, S. Covariance Matrix Adaptation Evolution Strategy for Improving Machine Learning Approaches in Streamflow Prediction. Mathematics 2022, 10, 2971. [Google Scholar] [CrossRef]
  24. Ikram, R.M.A.; Dai, H.L.; Ewees, A.A.; Shiri, J.; Kisi, O.; Zounemat-Kermani, M. Application of improved version of multi verse optimizer algorithm for modeling solar radiation. Energy Reports 2022, 8, 12063–12080. [Google Scholar] [CrossRef]
  25. Zhao, Y.; Foong, L.K. Predicting Electrical Power Output of Combined Cycle Power Plants Using a Novel Artificial Neural Network Optimized by Electrostatic Discharge Algorithm. Measurement 2022, 198, 111405. [Google Scholar] [CrossRef]
  26. Foong, L.K.; Zhao, Y.; Bai, C.; Xu, C. Efficient metaheuristic-retrofitted techniques for concrete slump simulation. Smart Struct. Syst. Int. J. 2021, 27, 745–759. [Google Scholar]
  27. Zhao, Y.; Zhong, X.; Foong, L.K. Predicting the splitting tensile strength of concrete using an equilibrium optimization model. Steel Compos. Struct. Int. J. 2021, 39, 81–93. [Google Scholar]
  28. Adnan, R.M.; Mostafa, R.R.; Islam, A.R.M.T.; Kisi, O.; Kuriqi, A.; Heddam, S. Estimating reference evapotranspiration using hybrid adaptive fuzzy inferencing coupled with heuristic algorithms. Comput. Electron. Agric. 2021, 191, 106541. [Google Scholar] [CrossRef]
  29. Adnan, R.M.; Mostafa, R.R.; Elbeltagi, A.; Yaseen, Z.M.; Shahid, S.; Kisi, O. Development of new machine learning model for streamflow prediction: Case studies in Pakistan. Stoch. Environ. Res. Risk Assess. 2022, 36, 999–1033. [Google Scholar] [CrossRef]
  30. Zhao, Y.; Yan, Q.; Yang, Z.; Yu, X.; Jia, B. A novel artificial bee colony algorithm for structural damage detection. Adv. Civ. Eng. 2020, 2020, 3743089. [Google Scholar] [CrossRef]
  31. Devaraj, R.; Mahalingam, S.K.; Esakki, B.; Astarita, A.; Mirjalili, S. A hybrid GA-ANFIS and F-Race tuned harmony search algorithm for Multi-Response optimization of Non-Traditional Machining process. Expert Syst. Appl. 2022, 199, 116965. [Google Scholar] [CrossRef]
  32. Bazrafshan, O.; Ehteram, M.; Latif, S.D.; Huang, Y.F.; Teo, F.Y.; Ahmed, A.N.; El-Shafie, A. Predicting crop yields using a new robust Bayesian averaging model based on multiple hybrid ANFIS and MLP models. Ain Shams Eng. J. 2022, 13, 101724. [Google Scholar] [CrossRef]
  33. Malik, A.; Kumar, A.; Kisi, O. Monthly pan-evaporation estimation in Indian central Himalayas using different heuristic approaches and climate based models. Comput. Electron. Agric. 2017, 143, 302–313. [Google Scholar] [CrossRef]
  34. Arya Azar, N.; Ghordoyee Milan, S.; Kayhomayoon, Z. Predicting monthly evaporation from dam reservoirs using LS-SVR and ANFIS optimized by Harris hawks optimization algorithm. Environ. Monit. Assess. 2021, 193, 1–14. [Google Scholar] [CrossRef] [PubMed]
  35. Seifi, A.; Ehteram, M.; Soroush, F.; Haghighi, A.T. Multi-model ensemble prediction of pan evaporation based on the Copula Bayesian Model Averaging approach. Eng. Appl. Artif. Intell. 2022, 114, 105124. [Google Scholar] [CrossRef]
  36. Khalaf, M.M. Algorithms and optimal choice for power plants based on M-polar fuzzy soft set decision making criterions. Acta Electron Malays. 2020, 4, 11–23. [Google Scholar] [CrossRef]
  37. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November 1995; pp. 1942–1948. [Google Scholar]
  38. Purba, S.; Amarilies, H.; Rachmawati, N.; Redi, A. Implementation of particle swarm optimization algorithm in cross-docking distribution problem. Acta Inform. Malays 2021, 5, 16–20. [Google Scholar] [CrossRef]
  39. Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2018, 22, 387–408. [Google Scholar] [CrossRef]
  40. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  41. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  42. Jiang, R.; Yang, M.; Wang, S.; Chao, T. An improved whale optimization algorithm with armed force program and strategic adjustment. Appl. Math. Model. 2020, 81, 603–623. [Google Scholar] [CrossRef]
  43. Zhao, Y.; Moayedi, H.; Bahiraei, M.; Foong, L.K. Employing TLBO and SCE for optimal prediction of the compressive strength of concrete. Smart Struct. Syst. 2020, 26, 753–763. [Google Scholar]
  44. Yin, L.; Wang, L.; Huang, W.; Liu, S.; Yang, B.; Zheng, W. Spatiotemporal analysis of haze in Beijing based on the multi-convolution model. Atmosphere 2021, 12, 1408. [Google Scholar] [CrossRef]
  45. Aljarah, I.; Faris, H.; Mirjalili, S. Optimizing connection weights in neural networks using the whale optimization algorithm. Soft Comput. 2018, 22, 1–15. [Google Scholar] [CrossRef]
  46. Gharehchopogh, F.S.; Gholizadeh, H. A comprehensive survey: Whale Optimization Algorithm and its applications. Swarm Evol. Comput. 2019, 48, 1–24. [Google Scholar] [CrossRef]
Figure 1. The Dongting Lake basin and the study stations (Jingzhou, Nanxian, and Yueyang) in southern China.
Figure 2. Flowchart of the methodology adopted in this study.
Figure 3. Scatterplots of the observed and predicted Epan using the hybridized ANFIS models in the testing phase using the best input combination—Jingzhou Station.
Figure 4. Scatterplots of the observed and predicted Epan using the hybridized ANFIS models in the testing phase using the best input combination—Nanxian Station.
Figure 5. Scatterplots of the observed and predicted Epan using the hybridized ANFIS models in the testing phase using the best input combination—Yueyang Station.
Figure 6. Taylor diagram of the predicted Epan using the hybridized ANFIS models in the testing phase using the best input combination—Nanxian Station.
Figure 7. Violin plots of the predicted Epan using the hybridized ANFIS models in the testing phase using the best input combination—Nanxian Station.
Table 1. Comparing the performance of standalone ANFIS, hybridized ANFIS with metaheuristic algorithms, and other ML models in modeling Epan based on the previous reports.
Study | Developed Model(s) | Performance Comparison
Jasmine et al. [2] | ANFIS and hybridized ANFIS with FFA, GA, and PSO | The ANFIS-PSO model with R2 = 0.99 and RMSE = 9.73 performed the best.
Wang et al. [9] | Multi-layer perceptron (MLP), generalized regression neural network (GRNN), fuzzy genetic (FG), LSSVM, MARS, ANFIS with grid partition (ANFIS-GP) | The ANFIS-GP model did not perform better than MLP and GRNN and provided less accurate results than SVM; therefore, the use of metaheuristic algorithms was recommended for improving ANFIS.
Malik et al. [33] | MLP, co-active ANFIS (CANFIS), radial basis neural network (RBNN), and self-organizing map neural network (SOMNN) | The hybridized CANFIS model with RMSE = 0.627 was ranked among the most accurate models.
Arya Azar et al. [34] | Least-squares support vector regression (LS-SVR), ANFIS, and ANFIS-HHO | The hybridized ANFIS-HHO model (RMSE = 2.35 and NSE = 0.95) successfully outperformed the other models.
Seifi et al. [35] | Copula-based Bayesian Model Averaging (CBMA) and hybridized ANFIS with seagull optimization algorithm (SOA), crow search algorithm (CA), FA, and PSO | The hybridized models improved prediction accuracy by 20.35–64.36%; thus, solidifying ANFIS with metaheuristic algorithms was recommended.
Table 2. The statistical characteristics of the data used in this study.
 | Jingzhou Station | | | Nanxian Station | | | Yueyang Station | |
Statistic | Whole Data | Training | Testing | Whole Data | Training | Testing | Whole Data | Training | Testing
Tmin
Mean | 13.336 | 13.016 | 13.639 | 13.562 | 13.444 | 13.918 | 14.384 | 14.193 | 14.960
Min. | −2.360 | −2.360 | 0.742 | −1.303 | −1.303 | 0.761 | −0.935 | −0.935 | 1.426
Max. | 26.039 | 26.039 | 25.165 | 27.148 | 26.706 | 27.148 | 28.165 | 27.952 | 28.165
Skewness | −0.102 | −0.850 | −0.071 | −0.051 | −0.050 | −0.047 | −0.051 | −0.046 | −0.056
Std. dev. | 7.928 | 8.743 | 7.671 | 8.325 | 8.373 | 8.170 | 8.375 | 8.444 | 8.138
Tmax
Mean | 21.397 | 21.301 | 21.685 | 20.774 | 20.681 | 21.053 | 20.769 | 20.716 | 20.929
Min. | 4.448 | 4.448 | 6.448 | 3.162 | 3.162 | 5.706 | 2.852 | 2.852 | 5.677
Max. | 36.284 | 36.284 | 34.726 | 35.084 | 35.084 | 34.445 | 35.174 | 35.174 | 34.116
Skewness | −0.138 | −0.125 | −0.176 | −0.139 | −0.130 | −0.162 | −0.122 | −0.111 | −0.154
Std. dev. | 8.446 | 8.514 | 8.232 | 8.534 | 8.614 | 8.283 | 8.511 | 8.600 | 8.236
Extraterrestrial radiation
Mean | 31.398 | 31.398 | 31.397 | 31.696 | 31.696 | 31.695 | 31.888 | 31.888 | 31.887
Min. | 19.753 | 19.753 | 19.753 | 20.382 | 20.382 | 20.382 | 20.797 | 20.797 | 20.797
Max. | 41.133 | 41.133 | 41.133 | 41.016 | 41.016 | 41.016 | 40.934 | 40.934 | 40.934
Skewness | −0.185 | −0.185 | −0.187 | −0.199 | −0.200 | −0.201 | −0.210 | −0.210 | −0.212
Std. dev. | 7.639 | 7.639 | 7.640 | 7.377 | 7.377 | 7.378 | 7.202 | 7.202 | 7.203
Evaporation
Mean | 3.630 | 3.653 | 3.562 | 3.385 | 3.256 | 3.773 | 3.956 | 3.872 | 4.207
Min. | 0.884 | 0.961 | 0.884 | 0.803 | 0.803 | 0.997 | 0.911 | 0.911 | 1.116
Max. | 10.619 | 10.619 | 7.861 | 9.087 | 9.087 | 9.045 | 11.119 | 11.119 | 11.029
Skewness | 0.605 | 0.666 | 0.332 | 0.706 | 0.753 | 0.543 | 0.846 | 0.894 | 0.729
Std. dev. | 1.816 | 1.857 | 1.683 | 1.810 | 1.751 | 1.926 | 2.185 | 2.177 | 2.189
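The statistics in Table 2 can be reproduced from the raw monthly series as follows. This sketch is ours, and it uses the population (moment-based) definitions, since the paper does not state whether bias-corrected formulas were applied:

```python
import numpy as np

def describe(x):
    """Mean, minimum, maximum, skewness, and standard deviation of a series."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s = x.std()                              # population standard deviation
    skew = np.mean((x - m) ** 3) / s ** 3    # moment-based (population) skewness
    return {"mean": m, "min": x.min(), "max": x.max(),
            "skewness": skew, "std": s}
```

A positive skewness, as found for evaporation at all three stations, indicates a right-tailed distribution with occasional high Epan months.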
Table 3. Parameter settings of ANFIS and three optimization algorithms used in this study.
Method/Algorithm | Parameter | Value
ANFIS | Error goal | 0
 | Increase rate | 1.1
 | Initial step | 0.01
 | Decrease rate | 0.9
 | Maximum epochs | 100
PSO | Cognitive component (c1) | 2
 | Social component (c2) | 2
 | Inertia weight | 0.2–0.9
HHO | β | 1.5
 | E0 | [−1, 1]
WOA | a | [0, 2]
 | a2 | [−2, −1]
All algorithms | Population | 30
 | Number of iterations | 150
 | Number of runs for each algorithm | 10
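With the settings of Table 3 (c1 = c2 = 2, inertia weight varied between 0.2 and 0.9, a population of 30, and 150 iterations), a canonical PSO run can be sketched as below. This is an illustration of ours on a generic objective, and the linearly decreasing inertia schedule is an assumption, since Table 3 only gives the weight's range:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=150, lb=-5.0, ub=5.0,
                 c1=2.0, c2=2.0, w_min=0.2, w_max=0.9, seed=0):
    """Canonical PSO with the Table 3 settings (c1 = c2 = 2, inertia 0.2-0.9)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_particles, dim))
    V = np.zeros_like(X)
    pbest = X.copy()                              # personal best positions
    pbest_val = np.array([f(x) for x in X])
    gbest = pbest[np.argmin(pbest_val)].copy()    # global best position
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters   # assumed linear inertia decay
        r1 = rng.random(size=X.shape)
        r2 = rng.random(size=X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = np.clip(X + V, lb, ub)
        vals = np.array([f(x) for x in X])
        improved = vals < pbest_val
        pbest[improved] = X[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())
```

The multiple runs listed in Table 3 would repeat such a call with different seeds and retain the best result, which dampens the stochastic variability of the metaheuristic.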
Table 4. Training and testing performances of the models for monthly Epan prediction—Jingzhou Station.
 | Training | | | | Test | | |
Model Inputs | RMSE | MAE | R2 | NSE | RMSE | MAE | R2 | NSE
ANFIS
Tmin | 0.8500 | 0.6285 | 0.7907 | 0.7884 | 0.8846 | 0.6902 | 0.7464 | 0.7436
Tmax | 0.6340 | 0.4837 | 0.8836 | 0.8817 | 0.6508 | 0.5210 | 0.8956 | 0.8922
Ra | 1.0604 | 0.7956 | 0.6740 | 0.6712 | 0.8408 | 0.6658 | 0.7545 | 0.7521
Tmin, Tmax | 0.5162 | 0.3991 | 0.9246 | 0.9193 | 0.5135 | 0.4124 | 0.9185 | 0.9163
Tmin, Ra | 0.7856 | 0.5680 | 0.8211 | 0.8184 | 0.9212 | 0.7285 | 0.7845 | 0.7816
Tmax, Ra | 0.5744 | 0.4378 | 0.9044 | 0.9017 | 0.6186 | 0.4853 | 0.9109 | 0.9070
Tmin, Tmax, Ra | 0.4593 | 0.3562 | 0.9388 | 0.9362 | 0.4412 | 0.3510 | 0.9455 | 0.9426
Opt inputs, α | 0.4698 | 0.3587 | 0.9360 | 0.9321 | 0.4582 | 0.3534 | 0.9417 | 0.9378
Mean | 0.6687 | 0.5035 | 0.8592 | 0.8561 | 0.6661 | 0.5260 | 0.8622 | 0.8592
ANFIS-PSO
Tmin | 0.8111 | 0.6063 | 0.8093 | 0.8074 | 0.7851 | 0.6249 | 0.7798 | 0.7752
Tmax | 0.6182 | 0.4753 | 0.8892 | 0.8876 | 0.5628 | 0.4577 | 0.9020 | 0.8996
Ra | 1.0502 | 0.7867 | 0.6802 | 0.6775 | 0.8327 | 0.6560 | 0.7597 | 0.7564
Tmin, Tmax | 0.5121 | 0.3917 | 0.9269 | 0.9237 | 0.5180 | 0.4244 | 0.9236 | 0.9215
Tmin, Ra | 0.7493 | 0.5501 | 0.8312 | 0.8284 | 0.7706 | 0.5989 | 0.8105 | 0.7905
Tmax, Ra | 0.5305 | 0.4110 | 0.9184 | 0.9167 | 0.5326 | 0.4419 | 0.9183 | 0.9158
Tmin, Tmax, Ra | 0.4590 | 0.3529 | 0.9389 | 0.9358 | 0.4365 | 0.3218 | 0.9561 | 0.9542
Opt inputs, α | 0.4627 | 0.3565 | 0.9382 | 0.9362 | 0.4476 | 0.3327 | 0.9521 | 0.9493
Mean | 0.6491 | 0.4913 | 0.8665 | 0.8642 | 0.6107 | 0.4823 | 0.8753 | 0.8703
ANFIS-HHO
Tmin | 0.8029 | 0.5987 | 0.8131 | 0.8115 | 0.7368 | 0.5781 | 0.8168 | 0.8143
Tmax | 0.6163 | 0.4658 | 0.8899 | 0.8862 | 0.5268 | 0.4247 | 0.9094 | 0.9067
Ra | 0.8419 | 0.6159 | 0.7794 | 0.7754 | 0.7185 | 0.5647 | 0.8263 | 0.8242
Tmin, Tmax | 0.5051 | 0.3865 | 0.9290 | 0.9268 | 0.3749 | 0.2993 | 0.9609 | 0.9573
Tmin, Ra | 0.7498 | 0.5445 | 0.8370 | 0.8341 | 0.6813 | 0.5383 | 0.8439 | 0.8422
Tmax, Ra | 0.5132 | 0.3903 | 0.9236 | 0.9205 | 0.4892 | 0.3815 | 0.9261 | 0.9236
Tmin, Tmax, Ra | 0.4569 | 0.3528 | 0.9395 | 0.9372 | 0.3646 | 0.2958 | 0.9623 | 0.9604
Opt inputs, α | 0.4439 | 0.3405 | 0.9429 | 0.9404 | 0.3322 | 0.2746 | 0.9691 | 0.9675
Mean | 0.6163 | 0.4619 | 0.8818 | 0.8790 | 0.5280 | 0.4196 | 0.9019 | 0.8995
ANFIS-WOA
Tmin | 0.6970 | 0.4977 | 0.8590 | 0.8563 | 0.7155 | 0.5646 | 0.8274 | 0.8243
Tmax | 0.5722 | 0.4326 | 0.9051 | 0.9026 | 0.5148 | 0.4160 | 0.9125 | 0.9103
Ra | 0.8342 | 0.6083 | 0.7945 | 0.7918 | 0.7119 | 0.5585 | 0.8294 | 0.8261
Tmin, Tmax | 0.4385 | 0.3181 | 0.9443 | 0.9421 | 0.3643 | 0.2961 | 0.9618 | 0.9595
Tmin, Ra | 0.6943 | 0.4914 | 0.8602 | 0.8579 | 0.6706 | 0.5291 | 0.8480 | 0.8452
Tmax, Ra | 0.4947 | 0.3565 | 0.9291 | 0.9274 | 0.4147 | 0.3435 | 0.9447 | 0.9417
Tmin, Tmax, Ra | 0.4203 | 0.3122 | 0.9488 | 0.9457 | 0.3214 | 0.2604 | 0.9691 | 0.9658
Opt inputs, α | 0.4098 | 0.3050 | 0.9513 | 0.9492 | 0.3127 | 0.2561 | 0.9726 | 0.9704
Mean | 0.5701 | 0.4152 | 0.8990 | 0.8966 | 0.5032 | 0.4030 | 0.9082 | 0.9054
Table 5. Training and testing performances of the models for monthly Epan prediction—Nanxian Station.
 | Training | | | | Test | | |
Model Inputs | RMSE | MAE | R2 | NSE | RMSE | MAE | R2 | NSE
ANFIS
Tmin | 0.6732 | 0.5114 | 0.8523 | 0.8503 | 1.0000 | 0.7716 | 0.8126 | 0.8103
Tmax | 0.5479 | 0.4118 | 0.9023 | 0.9014 | 0.8130 | 0.6330 | 0.9092 | 0.9067
Ra | 1.0960 | 0.8547 | 0.6082 | 0.6058 | 1.2974 | 0.9336 | 0.6258 | 0.6225
Tmin, Tmax | 0.4819 | 0.3566 | 0.9243 | 0.9221 | 0.7961 | 0.5879 | 0.9087 | 0.9064
Tmin, Ra | 0.6509 | 0.4962 | 0.8602 | 0.8583 | 0.9712 | 0.7511 | 0.8053 | 0.8032
Tmax, Ra | 0.5272 | 0.3971 | 0.9094 | 0.9067 | 0.7928 | 0.6259 | 0.9053 | 0.9027
Tmin, Tmax, Ra | 0.4395 | 0.3318 | 0.9370 | 0.9342 | 0.7738 | 0.5837 | 0.9114 | 0.9093
Opt inputs, α | 0.4548 | 0.3340 | 0.9328 | 0.9301 | 0.7792 | 0.5877 | 0.9091 | 0.9075
Mean | 0.6089 | 0.4617 | 0.8658 | 0.8636 | 0.9029 | 0.6843 | 0.8484 | 0.8461
ANFIS-PSO
Tmin | 0.6548 | 0.4979 | 0.8602 | 0.8583 | 0.9631 | 0.7320 | 0.8312 | 0.8283
Tmax | 0.5338 | 0.4048 | 0.9061 | 0.9042 | 0.7775 | 0.5987 | 0.9183 | 0.9156
Ra | 1.0342 | 0.7981 | 0.6512 | 0.6503 | 1.2263 | 0.8799 | 0.6744 | 0.6717
Tmin, Tmax | 0.4781 | 0.3500 | 0.9255 | 0.9228 | 0.7829 | 0.5947 | 0.9384 | 0.9352
Tmin, Ra | 0.6389 | 0.4878 | 0.8627 | 0.8601 | 0.9518 | 0.7155 | 0.8202 | 0.8179
Tmax, Ra | 0.5236 | 0.3903 | 0.9106 | 0.9084 | 0.7841 | 0.6032 | 0.9185 | 0.9156
Tmin, Tmax, Ra | 0.4333 | 0.3174 | 0.9388 | 0.9356 | 0.7635 | 0.5672 | 0.9321 | 0.9305
Opt inputs, α | 0.4395 | 0.3318 | 0.9370 | 0.9342 | 0.7692 | 0.5742 | 0.9286 | 0.9253
Mean | 0.5920 | 0.4473 | 0.8740 | 0.8717 | 0.8773 | 0.6582 | 0.8702 | 0.8675
ANFIS-HHO
Tmin | 0.6382 | 0.4943 | 0.8672 | 0.8647 | 0.8859 | 0.6745 | 0.8565 | 0.8537
Tmax | 0.5318 | 0.4014 | 0.9078 | 0.9053 | 0.7757 | 0.5975 | 0.9226 | 0.9205
Ra | 0.6944 | 0.5117 | 0.8428 | 0.8402 | 0.9909 | 0.7659 | 0.8126 | 0.8103
Tmin, Tmax | 0.4557 | 0.3478 | 0.9323 | 0.9304 | 0.7624 | 0.5782 | 0.9418 | 0.9402
Tmin, Ra | 0.6390 | 0.4950 | 0.8669 | 0.8652 | 0.8689 | 0.6570 | 0.8583 | 0.8563
Tmax, Ra | 0.5130 | 0.3831 | 0.9142 | 0.9127 | 0.7660 | 0.5912 | 0.9251 | 0.9227
Tmin, Tmax, Ra | 0.4273 | 0.3159 | 0.9405 | 0.9387 | 0.7484 | 0.5689 | 0.9413 | 0.9394
Opt inputs, α | 0.4197 | 0.3094 | 0.9426 | 0.9403 | 0.7243 | 0.5637 | 0.9432 | 0.9408
Mean | 0.5399 | 0.4073 | 0.9018 | 0.8997 | 0.8153 | 0.6246 | 0.9002 | 0.8980
ANFIS-WOA
Tmin | 0.5540 | 0.4073 | 0.8999 | 0.8973 | 0.8437 | 0.6499 | 0.8715 | 0.8702
Tmax | 0.5241 | 0.3850 | 0.9104 | 0.9082 | 0.7726 | 0.5955 | 0.9233 | 0.9214
Ra | 0.6925 | 0.5103 | 0.8436 | 0.8407 | 0.9853 | 0.7563 | 0.8153 | 0.8127
Tmin, Tmax | 0.3991 | 0.2846 | 0.9481 | 0.9456 | 0.7367 | 0.5131 | 0.9516 | 0.9493
Tmin, Ra | 0.5474 | 0.3982 | 0.9023 | 0.9007 | 0.8201 | 0.6345 | 0.8748 | 0.8721
Tmax, Ra | 0.4485 | 0.3396 | 0.9344 | 0.9324 | 0.7581 | 0.5842 | 0.9266 | 0.9234
Tmin, Tmax, Ra | 0.3687 | 0.2709 | 0.9557 | 0.9531 | 0.6643 | 0.5177 | 0.9472 | 0.9456
Opt inputs, α | 0.3643 | 0.2690 | 0.9567 | 0.9548 | 0.5886 | 0.4629 | 0.9526 | 0.9507
Mean | 0.4873 | 0.3581 | 0.9189 | 0.9166 | 0.7712 | 0.5893 | 0.9079 | 0.9057
Table 6. Training and testing performances of the models for monthly Epan prediction—Yueyang Station.
 | Training | | | | Test | | |
Model Inputs | RMSE | MAE | R2 | NSE | RMSE | MAE | R2 | NSE
ANFIS
Tmin | 0.7807 | 0.5907 | 0.8716 | 0.8694 | 0.9335 | 0.7200 | 0.8545 | 0.8524
Tmax | 0.7081 | 0.5217 | 0.8944 | 0.8917 | 0.8843 | 0.6562 | 0.8663 | 0.8638
Ra | 1.3904 | 1.0628 | 0.5920 | 0.5906 | 1.3849 | 1.0313 | 0.6239 | 0.6207
Tmin, Tmax | 0.5888 | 0.4411 | 0.9269 | 0.9243 | 0.8178 | 0.6095 | 0.8911 | 0.8893
Tmin, Ra | 0.7277 | 0.5562 | 0.8883 | 0.8856 | 0.8831 | 0.6875 | 0.8548 | 0.8524
Tmax, Ra | 0.6508 | 0.4872 | 0.9107 | 0.9082 | 0.8210 | 0.6139 | 0.8816 | 0.8793
Tmin, Tmax, Ra | 0.5479 | 0.4225 | 0.9369 | 0.9337 | 0.8178 | 0.6095 | 0.8911 | 0.8902
Opt inputs, α | 0.5227 | 0.4163 | 0.9423 | 0.9221 | 0.8013 | 0.6010 | 0.8960 | 0.8936
Mean | 0.7396 | 0.5623 | 0.8704 | 0.8657 | 0.9180 | 0.6911 | 0.8449 | 0.8427
ANFIS-PSO
Tmin | 0.7203 | 0.5499 | 0.8905 | 0.8884 | 0.8510 | 0.6482 | 0.8576 | 0.8543
Tmax | 0.6882 | 0.5097 | 0.9000 | 0.8986 | 0.8087 | 0.6412 | 0.8918 | 0.8893
Ra | 1.3282 | 1.0065 | 0.6277 | 0.6253 | 1.3177 | 0.9778 | 0.6616 | 0.6594
Tmin, Tmax | 0.5453 | 0.4245 | 0.9373 | 0.9352 | 0.7234 | 0.5715 | 0.9247 | 0.9225
Tmin, Ra | 0.7122 | 0.5466 | 0.8930 | 0.8907 | 0.8565 | 0.6624 | 0.8756 | 0.9726
Tmax, Ra | 0.6314 | 0.4759 | 0.9159 | 0.9124 | 0.7655 | 0.5968 | 0.9094 | 0.9071
Tmin, Tmax, Ra | 0.5328 | 0.4207 | 0.9401 | 0.9383 | 0.7244 | 0.5533 | 0.9275 | 0.9253
Opt inputs, α | 0.5146 | 0.4011 | 0.9441 | 0.9425 | 0.7135 | 0.5360 | 0.9300 | 0.9287
Mean | 0.7091 | 0.5419 | 0.8811 | 0.8789 | 0.8451 | 0.6484 | 0.8723 | 0.8824
ANFIS-HHO
Tmin | 0.7133 | 0.5482 | 0.8926 | 0.8901 | 0.8181 | 0.6167 | 0.8671 | 0.8643
Tmax | 0.6774 | 0.5040 | 0.9010 | 0.8993 | 0.8003 | 0.6042 | 0.8993 | 0.8972
Ra | 0.8873 | 0.6461 | 0.8338 | 0.8316 | 1.0387 | 0.7964 | 0.7984 | 0.7958
Tmin, Tmax | 0.5365 | 0.4244 | 0.9393 | 0.9374 | 0.7202 | 0.5452 | 0.9257 | 0.9235
Tmin, Ra | 0.7022 | 0.5383 | 0.8959 | 0.8932 | 0.7728 | 0.6045 | 0.8826 | 0.8804
Tmax, Ra | 0.6286 | 0.4746 | 0.9166 | 0.9145 | 0.7363 | 0.5749 | 0.9188 | 0.9157
Tmin, Tmax, Ra | 0.5108 | 0.3992 | 0.9467 | 0.9451 | 0.7093 | 0.5331 | 0.9333 | 0.9306
Opt inputs, α | 0.5041 | 0.3990 | 0.9464 | 0.9448 | 0.6649 | 0.5244 | 0.9396 | 0.9372
Mean | 0.6450 | 0.4917 | 0.9090 | 0.9070 | 0.7826 | 0.5999 | 0.8956 | 0.8931
ANFIS-WOA
Tmin | 0.6565 | 0.4804 | 0.9090 | 0.9072 | 0.8028 | 0.6025 | 0.8749 | 0.8723
Tmax | 0.6135 | 0.4371 | 0.9206 | 0.9183 | 0.7896 | 0.5957 | 0.9049 | 0.9027
Ra | 0.8862 | 0.6455 | 0.8342 | 0.8316 | 1.0365 | 0.7954 | 0.7991 | 0.7973
Tmin, Tmax | 0.4658 | 0.3378 | 0.9542 | 0.9524 | 0.7161 | 0.5378 | 0.9278 | 0.9252
Tmin, Ra | 0.6135 | 0.4524 | 0.9206 | 0.9182 | 0.7533 | 0.5795 | 0.8888 | 0.8856
Tmax, Ra | 0.5076 | 0.3780 | 0.9456 | 0.9427 | 0.7278 | 0.5647 | 0.9211 | 0.9202
Tmin, Tmax, Ra | 0.4658 | 0.3378 | 0.9542 | 0.9523 | 0.6508 | 0.5034 | 0.9433 | 0.9413
Opt inputs, α | 0.4636 | 0.3330 | 0.9546 | 0.9531 | 0.6370 | 0.5052 | 0.9422 | 0.9404
Mean | 0.5841 | 0.4253 | 0.9241 | 0.9220 | 0.7642 | 0.5855 | 0.9003 | 0.8981
Table 7. Training and testing performances of the models for monthly Epan prediction—Nanxian Station using data from Jingzhou Station.
 | Training | | | | Test | | |
Model Inputs | RMSE | MAE | R2 | NSE | RMSE | MAE | R2 | NSE
ANFIS
Tmin | 0.6863 | 0.5122 | 0.8467 | 0.8436 | 1.0422 | 0.9208 | 0.6846 | 0.6814
Tmax | 0.5509 | 0.4148 | 0.9012 | 0.8994 | 0.8229 | 0.6129 | 0.8752 | 0.8723
Ra | 1.0948 | 0.8537 | 0.6091 | 0.6072 | 1.2963 | 0.9326 | 0.6266 | 0.6237
Tmin, Tmax | 0.5229 | 0.3871 | 0.9109 | 0.9083 | 0.8157 | 0.6326 | 0.8326 | 0.8302
Tmin, Ra | 0.6536 | 0.4943 | 0.8608 | 0.8574 | 0.9752 | 0.8636 | 0.7236 | 0.7208
Tmax, Ra | 0.5340 | 0.4054 | 0.9070 | 0.9046 | 0.8122 | 0.6354 | 0.8708 | 0.8682
Tmin, Tmax, Ra | 0.5252 | 0.3871 | 0.9107 | 0.9081 | 0.8077 | 0.6133 | 0.8548 | 0.8517
Opt inputs, α | 0.5042 | 0.3775 | 0.9171 | 0.9153 | 0.8051 | 0.5966 | 0.8842 | 0.8823
ANFIS-PSO
Tmin | 0.6432 | 0.4883 | 0.8651 | 0.8623 | 0.9629 | 0.6984 | 0.8099 | 0.8073
Tmax | 0.5435 | 0.4107 | 0.9037 | 0.9014 | 0.8130 | 0.5933 | 0.8760 | 0.8732
Ra | 1.0193 | 0.7841 | 0.6612 | 0.6592 | 1.2116 | 0.8695 | 0.6841 | 0.6814
Tmin, Tmax | 0.5174 | 0.3839 | 0.9127 | 0.9103 | 0.8048 | 0.5793 | 0.8876 | 0.8852
Tmin, Ra | 0.6384 | 0.4889 | 0.8671 | 0.8652 | 0.9496 | 0.8353 | 0.8165 | 0.8137
Tmax, Ra | 0.5172 | 0.3938 | 0.9128 | 0.9101 | 0.8078 | 0.6172 | 0.8846 | 0.8824
Tmin, Tmax, Ra | 0.5032 | 0.3806 | 0.9172 | 0.9154 | 0.7661 | 0.5931 | 0.8936 | 0.8917
Opt inputs, α | 0.5004 | 0.3779 | 0.9183 | 0.9166 | 0.7561 | 0.5569 | 0.9049 | 0.9025
ANFIS-HHO
Tmin | 0.6377 | 0.4882 | 0.8674 | 0.8652 | 0.9155 | 0.6904 | 0.8456 | 0.8423
Tmax | 0.5427 | 0.4101 | 0.9040 | 0.9027 | 0.7970 | 0.5808 | 0.8956 | 0.8934
Ra | 0.6943 | 0.5118 | 0.8428 | 0.8403 | 0.9909 | 0.7659 | 0.8126 | 0.8102
Tmin, Tmax | 0.5077 | 0.3787 | 0.9159 | 0.9127 | 0.7733 | 0.5683 | 0.9124 | 0.9096
Tmin, Ra | 0.6318 | 0.4823 | 0.8698 | 0.8674 | 0.9181 | 0.7057 | 0.8474 | 0.8455
Tmax, Ra | 0.5124 | 0.3915 | 0.9144 | 0.9126 | 0.7761 | 0.5696 | 0.9102 | 0.9081
Tmin, Tmax, Ra | 0.4963 | 0.3692 | 0.9197 | 0.9173 | 0.7447 | 0.5520 | 0.9163 | 0.9145
Opt inputs, α | 0.4850 | 0.3634 | 0.9233 | 0.9204 | 0.7197 | 0.5290 | 0.9257 | 0.9223
ANFIS-WOA
Tmin | 0.5445 | 0.3947 | 0.9033 | 0.9014 | 0.9125 | 0.6886 | 0.8474 | 0.8453
Tmax | 0.5137 | 0.3791 | 0.9140 | 0.9126 | 0.7233 | 0.5627 | 0.9054 | 0.9024
Ra | 0.6925 | 0.5103 | 0.8436 | 0.8415 | 0.9854 | 0.7566 | 0.8152 | 0.8126
Tmin, Tmax | 0.4256 | 0.3038 | 0.9409 | 0.9382 | 0.7655 | 0.5545 | 0.9183 | 0.9157
Tmin, Ra | 0.5155 | 0.3794 | 0.9133 | 0.9104 | 0.9056 | 0.6803 | 0.8518 | 0.8485
Tmax, Ra | 0.4676 | 0.3478 | 0.9287 | 0.9257 | 0.7672 | 0.5526 | 0.9173 | 0.9152
Tmin, Tmax, Ra | 0.4265 | 0.3141 | 0.9407 | 0.9383 | 0.7434 | 0.5505 | 0.9216 | 0.9184
Opt inputs, α | 0.4157 | 0.3029 | 0.9436 | 0.9405 | 0.7085 | 0.5086 | 0.9281 | 0.9252
Table 8. Training and testing performances of the models for monthly Epan prediction—Nanxian Station using data from Yueyang Station.
 | Training | | | | Test | | |
Model Inputs | RMSE | MAE | R2 | NSE | RMSE | MAE | R2 | NSE
ANFIS
Tmin | 0.6251 | 0.4813 | 0.8727 | 0.8702 | 0.9422 | 0.7294 | 0.8598 | 0.8573
Tmax | 0.5666 | 0.4249 | 0.8954 | 0.8923 | 0.8696 | 0.6485 | 0.8893 | 0.8871
Ra | 1.0969 | 0.8553 | 0.6076 | 0.6054 | 1.2981 | 0.9344 | 0.6253 | 0.6228
Tmin, Tmax | 0.5031 | 0.3775 | 0.9175 | 0.9156 | 0.8303 | 0.6020 | 0.9061 | 0.9043
Tmin, Ra | 0.5992 | 0.4639 | 0.8829 | 0.8801 | 0.9159 | 0.7081 | 0.8435 | 0.8407
Tmax, Ra | 0.5478 | 0.4116 | 0.9022 | 0.9004 | 0.8343 | 0.6328 | 0.8928 | 0.8902
Tmin, Tmax, Ra | 0.4574 | 0.3545 | 0.9318 | 0.9295 | 0.7993 | 0.6249 | 0.9231 | 0.9224
Opt inputs, α | 0.4845 | 0.3649 | 0.9236 | 0.9208 | 0.8245 | 0.6295 | 0.9094 | 0.9075
ANFIS-PSO
Tmin | 0.5909 | 0.4588 | 0.8861 | 0.8843 | 0.8513 | 0.6550 | 0.8617 | 0.8593
Tmax | 0.5560 | 0.4171 | 0.8992 | 0.8971 | 0.8379 | 0.6317 | 0.8986 | 0.8962
Ra | 1.0370 | 0.7993 | 0.6493 | 0.6472 | 1.2267 | 0.8791 | 0.6744 | 0.6721
Tmin, Tmax | 0.4762 | 0.3648 | 0.9261 | 0.9245 | 0.8131 | 0.6309 | 0.9104 | 0.9075
Tmin, Ra | 0.5914 | 0.4592 | 0.8859 | 0.8827 | 0.8887 | 0.6897 | 0.8540 | 0.8523
Tmax, Ra | 0.5301 | 0.3976 | 0.9084 | 0.9060 | 0.8163 | 0.6384 | 0.9052 | 0.9027
Tmin, Tmax, Ra | 0.4545 | 0.3437 | 0.9326 | 0.9304 | 0.7676 | 0.6138 | 0.9253 | 0.9228
Opt inputs, α | 0.4663 | 0.3575 | 0.9291 | 0.9275 | 0.8065 | 0.6248 | 0.9115 | 0.9095
ANFIS-HHO
Tmin | 0.5879 | 0.4567 | 0.8873 | 0.8852 | 0.8328 | 0.6272 | 0.8757 | 0.8724
Tmax | 0.5580 | 0.4201 | 0.8985 | 0.8956 | 0.8058 | 0.6170 | 0.9045 | 0.9023
Ra | 0.6944 | 0.5117 | 0.8427 | 0.8403 | 0.9909 | 0.7659 | 0.8126 | 0.8105
Tmin, Tmax | 0.4617 | 0.3594 | 0.9305 | 0.9283 | 0.7903 | 0.5989 | 0.9165 | 0.9146
Tmin, Ra | 0.5737 | 0.4471 | 0.8927 | 0.8904 | 0.8067 | 0.6245 | 0.8755 | 0.8721
Tmax, Ra | 0.5250 | 0.3953 | 0.9101 | 0.9085 | 0.7979 | 0.6116 | 0.9153 | 0.9127
Tmin, Tmax, Ra | 0.4399 | 0.3336 | 0.9369 | 0.9337 | 0.7665 | 0.6065 | 0.9272 | 0.9258
Opt inputs, α | 0.4549 | 0.3483 | 0.9328 | 0.9302 | 0.7825 | 0.6058 | 0.9167 | 0.9149
ANFIS-WOA
Tmin | 0.5377 | 0.4062 | 0.9057 | 0.9028 | 0.8079 | 0.6223 | 0.8769 | 0.8742
Tmax | 0.5048 | 0.3599 | 0.9169 | 0.9137 | 0.7963 | 0.6159 | 0.9097 | 0.9074
Ra | 0.6925 | 0.5103 | 0.8436 | 0.8405 | 0.9852 | 0.7561 | 0.8153 | 0.8125
Tmin, Tmax | 0.4260 | 0.3142 | 0.9408 | 0.9382 | 0.7806 | 0.5840 | 0.9225 | 0.9203
Tmin, Ra | 0.5009 | 0.3771 | 0.9182 | 0.9157 | 0.7995 | 0.6216 | 0.8941 | 0.8917
Tmax, Ra | 0.4516 | 0.3390 | 0.9335 | 0.9305 | 0.7923 | 0.6023 | 0.9199 | 0.9174
Tmin, Tmax, Ra | 0.4186 | 0.3113 | 0.9422 | 0.9401 | 0.7442 | 0.5771 | 0.9299 | 0.9268
Opt inputs, α | 0.4182 | 0.3082 | 0.9430 | 0.9416 | 0.7344 | 0.5768 | 0.9330 | 0.9283