The purpose of the Mann-Kendall (MK) test (Mann 1945; Kendall 1975; Gilbert 1987) is to statistically assess whether there is a monotonic upward or downward trend of the variable of interest over time. A monotonic upward (downward) trend means that the variable consistently increases (decreases) through time, but the trend may or may not be linear. The MK test can be used in place of a parametric linear regression analysis, which tests whether the slope of the estimated linear regression line is different from zero. The regression analysis requires that the residuals from the fitted regression line be normally distributed; that assumption is not required by the MK test. That is, the MK test is a nonparametric (distribution-free) test.
Hirsch, Slack and Smith (1982, page 107) indicate that the MK test is best viewed as an exploratory analysis and is most appropriately used to identify stations where changes are significant or of large magnitude and to quantify these findings.
The following assumptions underlie the MK test:
When no trend is present, the measurements (observations or data) obtained over time are independent and identically distributed. The assumption of independence means that the observations are not serially correlated over time.
The observations obtained over time are representative of the true conditions at sampling times.
The sample collection, handling, and measurement methods provide unbiased and representative observations of the underlying populations over time.
There is no requirement that the measurements be normally distributed or that the trend, if present, be linear. The MK test can be computed if there are missing values or values below one or more limits of detection (LDs), but the performance of the test will be adversely affected by them. The assumption of independence requires that the time between samples be sufficiently large that there is no correlation between measurements collected at different times.
The MK test is used to decide whether to reject the null hypothesis (\(H_0\)) in favor of the alternative hypothesis (\(H_a\)), where
\(H_0\): No monotonic trend
\(H_a\): Monotonic trend is present
The initial assumption of the MK test is that \(H_0\) is true; the evidence in the data must be convincing beyond a reasonable doubt before \(H_0\) is rejected and \(H_a\) is accepted.
The MK test is conducted as follows (from Gilbert 1987, pp. 209-213):
1. List the data in the order in which they were collected over time, \(x_1, x_2, \ldots, x_n\), which denote the measurements obtained at times \(1, 2, \ldots, n\), respectively.
2. Determine the sign of all \(n(n-1)/2\) possible differences \(x_j - x_k\), where \(j \gt k\). These differences are
\(x_2 - x_1, x_3 - x_1, \ldots, x_n - x_1, x_3 - x_2, x_4 - x_2, \ldots, x_n - x_{n-2}, x_n - x_{n-1}\)
3. Let \(\text{sgn}(x_j - x_k)\) be an indicator function that takes on the values 1, 0, or -1 according to the sign of \(x_j - x_k\), that is,
\begin{equation} \text{sgn}(x_j - x_k) = \begin{cases} 1 & \text{if } x_j - x_k > 0 \\ 0 & \text{if } x_j - x_k = 0, \text{ or if the sign cannot be determined due to nondetects} \\ -1 & \text{if } x_j - x_k < 0 \end{cases} \end{equation}
For example, if \(x_j - x_k \gt 0\), the observation at time \(j\), denoted by \(x_j\), is greater than the observation at time \(k\), denoted by \(x_k\).
4. Compute
\begin{equation} S = \displaystyle\sum_{k=1}^{n-1}\displaystyle\sum_{j=k+1}^{n}\text{sgn}(x_j - x_k) \end{equation}
which is the number of positive differences minus the number of negative differences. If \(S\) is a positive number, observations obtained later in time tend to be larger than observations made earlier. If \(S\) is a negative number, then observations made later in time tend to be smaller than observations made earlier.
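Steps 1 through 4 can be sketched in Python. This is a minimal illustration of the computation of \(S\), not VSP's implementation:

```python
import itertools

def mk_s_statistic(x):
    """Mann-Kendall S statistic: the number of positive differences
    x_j - x_k (for j > k) minus the number of negative differences."""
    s = 0
    # itertools.combinations yields every pair of indices with k < j
    for k, j in itertools.combinations(range(len(x)), 2):
        diff = x[j] - x[k]
        if diff > 0:
            s += 1
        elif diff < 0:
            s -= 1
    return s
```

For a strictly increasing series such as [1, 2, 3, 4], all \(n(n-1)/2 = 6\) differences are positive, so \(S = 6\); reversing the series gives \(S = -6\).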
5. If \(n \le 10\), follow the procedure described in Gilbert (1987, page 209, Section 16.4.1) by looking up \(S\) in a table of probabilities (Gilbert 1987, Table A18, page 272). If this probability is less than \(\alpha\) (the probability of concluding a trend exists when there is none), then reject the null hypothesis and conclude that a trend exists. If \(S\) cannot be found in the table of probabilities (which can happen if there are tied data values), the next value farther from zero in the table is used. For example, if \(S = 12\) and there is no value for \(S = 12\) in the table, it is handled the same as \(S = 13\).
If \(n \gt\) 10, continue with steps 6 through 10 to determine whether a trend exists. This follows the procedure described in Gilbert (1987, page 211, Section 16.4.2).
6. Compute the variance of \(S\) as follows:
\begin{equation} \text{VAR}(S) = \frac{1}{18}\Big[n(n-1)(2n+5) - \displaystyle\sum_{p=1}^{g}t_p(t_p-1)(2t_p+5)\Big] \end{equation}
where \(g\) is the number of tied groups and \(t_p\) is the number of observations in the \(p\)th group. For example, in the sequence of measurements in time {23, 24, 29, 6, 29, 24, 24, 29, 23} we have \(g = 3\) tied groups, for which \(t_1 = 2\) for the tied value 23, \(t_2 = 3\) for the tied value 24, and \(t_3 = 3\) for the tied value 29. When there are ties in the data due to equal values or nondetects, \(\text{VAR}(S)\) is adjusted by the tie correction method described in Helsel (2005, p. 191).
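The variance formula with the tie correction can be sketched as follows (a minimal illustration, not VSP code):

```python
from collections import Counter

def mk_variance(x):
    """VAR(S) for the Mann-Kendall test, with the tie correction:
    each tied group of size t contributes t(t-1)(2t+5) to the sum."""
    n = len(x)
    var = n * (n - 1) * (2 * n + 5)
    for t in Counter(x).values():
        if t > 1:  # only tied groups (t_p > 1) contribute
            var -= t * (t - 1) * (2 * t + 5)
    return var / 18.0
```

For the example sequence above (\(n = 9\), tied groups of sizes 2, 3, and 3), the bracketed term is \(1656 - 150 = 1506\), giving \(\text{VAR}(S) = 1506/18 \approx 83.67\).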
7. Compute the MK test statistic, \(Z_{MK}\), as follows:
\begin{equation} Z_{MK} = \begin{cases} \dfrac{S-1}{\sqrt{\text{VAR}(S)}} & \text{if } S > 0 \\[4pt] 0 & \text{if } S = 0 \\[4pt] \dfrac{S+1}{\sqrt{\text{VAR}(S)}} & \text{if } S < 0 \end{cases} \end{equation}
A positive (negative) value of \(Z_{MK}\) indicates that the data tend to increase (decrease) with time.
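The continuity-corrected statistic in step 7 is a small piecewise computation (a sketch, not VSP code):

```python
import math

def mk_z(s, var_s):
    """Z_MK from S and VAR(S), with the +/-1 continuity correction."""
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0
```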
8. Suppose we want to test the null hypothesis
\(H_0\): No monotonic trend
versus the alternative hypothesis
\(H_a\): Upward monotonic trend
at the Type I error rate \(\alpha\), where 0 < \(\alpha\) < 0.5. (Note that \(\alpha\) is the tolerable probability that the MK test will falsely reject the null hypothesis.) Then \(H_0\) is rejected and \(H_a\) is accepted if \(Z_{MK} \geq Z_{1-\alpha}\), where \(Z_{1-\alpha}\) is the \(100(1-\alpha)\)th percentile of the standard normal distribution. These percentiles are provided in many statistics books (for example, Gilbert 1987, Table A1, page 254) and in statistical software packages.
9. To test \(H_0\) above versus
\(H_a\): Downward monotonic trend
at the Type I error rate \(\alpha\), \(H_0\) is rejected and \(H_a\) is accepted if \(Z_{MK} \leq -Z_{1-\alpha}\).
10. To test the \(H_0\) above versus
\(H_a\): Upward or downward monotonic trend
at the Type I error rate \(\alpha\), \(H_0\) is rejected and \(H_a\) is accepted if \(|Z_{MK}| \geq Z_{1-\alpha/2}\), where the vertical bars denote absolute value.
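The decision rules in steps 8 through 10 can be sketched with a hypothetical helper (the function name and `alternative` parameter are illustrative, not part of VSP; the standard normal percentile comes from Python's `statistics.NormalDist`):

```python
from statistics import NormalDist

def mk_decision(z_mk, alpha=0.05, alternative="two-sided"):
    """Reject H0 when Z_MK crosses the appropriate standard
    normal percentile (steps 8-10 above)."""
    z_crit = NormalDist().inv_cdf(1 - alpha)        # Z_{1-alpha}
    z_crit2 = NormalDist().inv_cdf(1 - alpha / 2)   # Z_{1-alpha/2}
    if alternative == "upward":
        return z_mk >= z_crit
    if alternative == "downward":
        return z_mk <= -z_crit
    return abs(z_mk) >= z_crit2                     # two-sided test
```

For example, with \(\alpha = 0.05\) the one-sided critical value is about 1.645 and the two-sided value about 1.960.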
Suppose there are missing data in the time series. For example, suppose that data are collected the first day of each month, but the data for March 1st and July 1st have been lost. In that case, VSP computes the MK test in the usual way using the smaller data set, reducing the value of \(n\) as appropriate.
VSP uses a Monte Carlo simulation to determine the required number of sampling times, \(n\), needed to detect a linear trend for specified small probabilities that the MK test will make decision errors. If a nonlinear trend is actually present, the value of \(n\) computed by VSP is only an approximation to the correct \(n\). If nondetects are expected in the resulting data, the value of \(n\) computed by VSP is likewise only an approximation, and the approximation becomes less accurate as the number of nondetects increases.
When an exponential curve trend is requested by the user, the curve is placed on a log scale where it becomes linear. VSP converts the % change per time period to a slope on the log scale using the formula:
\(\text{slope} = \text{ln}\Big(\frac{100+\text{change}\%}{100}\Big)\)
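The conversion is a one-line computation (a sketch; the function name is illustrative):

```python
import math

def pct_change_to_log_slope(pct_change):
    """Convert a % change per time period to a slope on the log scale:
    slope = ln((100 + change%) / 100)."""
    return math.log((100 + pct_change) / 100)
```

For example, a 10% change per period corresponds to a log-scale slope of \(\ln(1.1) \approx 0.0953\).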
The simulation, which is a binary search on the number of samples needed, proceeds as follows:
1. The required probability of detecting a linear trend (if present) is set at \(1-\beta\), where \(\beta\) is the user-specified probability of falsely accepting the null hypothesis.
2. The required number of samples, \(n\), is initially set to 4, which is the minimum number of samples that can be analyzed using the MannKendall test.
3. A set of \(n\) random numbers is created that conforms to the linear trend (change per unit time) that the VSP user indicates needs to be detected and to the standard deviation of normally distributed residuals about that trend line. This standard deviation is also specified by the VSP user.
a. A set of \(n\) numbers is randomly chosen from a normal distribution having a mean of zero and the specified standard deviation of the residuals. Call this set of random numbers (\(r_1, r_2, r_3, ..., r_n\)).
b. The change per sample period, i.e., the change that occurs between two adjacent sampling times, \(\Delta\), is calculated based on the user-specified trend slope and sample period.
c. A multiple of \(\Delta\) is added to each random number to create the necessary slope. The resulting numbers are (\(x_1 = r_1, x_2 = r_2 + \Delta, x_3 = r_3 + 2\Delta, \ldots, x_n = r_n + (n-1)\Delta\)).
4. The MK test (described above) is conducted on the set of numbers (\(x_1, x_2, \ldots, x_n\)) using the user-specified alpha error rate (\(\alpha\)). If the null hypothesis is rejected, which indicates that the MK test detected a trend, then one is added to the count of trend detections.
5. Steps 3 and 4 are repeated 1000 times. The count of trend detections is then divided by 1000 to compute an estimate of the probability, \(P_d\), that the MK test will detect a trend of the magnitude specified in Step 3 above.
6. \(P_d\) is compared to \(1-\beta\). If \(P_d\) equals \(1-\beta\), the target probability of detection has been achieved with \(n\) samples; the simulation ends and VSP reports that \(n\) samples are required. If \(P_d \lt 1-\beta\), then \(n\) is increased and Steps 3 through 6 are repeated. If \(P_d \gt 1-\beta\), then \(n\) is decreased and Steps 3 through 6 are repeated. The process continues until \(P_d\) equals \(1-\beta\) or \(n\) no longer changes.
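The power-estimation loop (Steps 3 through 5) can be sketched as follows. This is a simplified illustration, not VSP's implementation: it uses the two-sided MK test with the normal approximation (\(n > 10\)) and assumes no ties in the simulated data:

```python
import math
import random
from statistics import NormalDist

def mk_detects_trend(x, alpha):
    """Two-sided MK test using the normal approximation (n > 10)."""
    n = len(x)
    s = sum((x[j] > x[k]) - (x[j] < x[k])
            for k in range(n - 1) for j in range(k + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # no tie correction needed
    z = 0.0 if s == 0 else (s - math.copysign(1, s)) / math.sqrt(var_s)
    return abs(z) >= NormalDist().inv_cdf(1 - alpha / 2)

def estimate_power(n, delta, sd, alpha, reps=1000, rng=None):
    """Steps 3-5: fraction of simulated data sets in which the MK test
    detects a linear trend of delta per sample period."""
    rng = rng or random.Random(1)  # seeded for reproducibility
    hits = 0
    for _ in range(reps):
        # Step 3: normal residuals plus a linear trend of slope delta
        x = [rng.gauss(0, sd) + i * delta for i in range(n)]
        hits += mk_detects_trend(x, alpha)  # Step 4
    return hits / reps                      # Step 5: estimated P_d
```

The binary search in Step 6 would then raise or lower \(n\) until the estimated \(P_d\) reaches \(1-\beta\).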
A Sign Test is used to examine the residuals (measurements minus predicted values) to determine whether the numbers of positive and negative residuals are significantly different. If there is a significant difference, we can conclude that the model tends to over-predict or under-predict.
In the Sign Test, each predicted value is subtracted from the corresponding measurement to obtain \(n\) residuals. Any residuals of zero are discarded and the sample size is reduced accordingly. The test statistic \(S+\), the number of positive residuals, is used to test the null hypothesis \(\mu = 0.5\), where \(\mu\) is the probability that a residual will be positive. \(S+\) is compared to a binomial distribution with success probability 0.5 to determine whether \(S+\) is above the \(100(1-\alpha/2)\)th percentile or below the \(100(\alpha/2)\)th percentile. If either occurs, the null hypothesis is rejected.
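An equivalent way to express this decision is as an exact two-sided binomial p-value compared with \(\alpha\). A minimal sketch (not VSP code):

```python
from math import comb

def sign_test_pvalue(residuals):
    """Two-sided exact sign test on the nonzero residuals (p = 0.5)."""
    signs = [r for r in residuals if r != 0]  # discard zero residuals
    n = len(signs)
    s_plus = sum(r > 0 for r in signs)
    # Exact binomial tail probability for the more extreme direction
    tail = min(s_plus, n - s_plus)
    p_one = sum(comb(n, i) for i in range(tail + 1)) * 0.5 ** n
    return min(1.0, 2 * p_one)  # two-sided p-value
```

Eight residuals that are all positive give a p-value of \(2/2^8 \approx 0.008\), so the null hypothesis would be rejected at \(\alpha = 0.05\).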
A Runs Test is used to examine the residuals and determine if residuals with the same sign are random or if they tend to cluster together, such as having long runs of positive residuals followed by long runs of negative residuals.
In the Runs Test, the residuals are ordered by time and any residuals of zero are discarded, leaving \(n\) residuals. The test statistic \(R\) is the number of runs in these ordered residuals, that is, the number of groups of consecutive residuals having the same sign. The null hypothesis is that residuals of the same sign do not cluster. Where \(m\) is the number of positive residuals and \(k\) is the number of negative residuals, the conditional distribution of \(R\) is $$P(R = r) = \frac{2\binom{m-1}{r/2-1}\binom{k-1}{r/2-1}}{\binom{m+k}{k}}$$
when \(r\) is even, and $$P(R = r) = \frac{\binom{m-1}{(r-1)/2}\binom{k-1}{(r-3)/2}+\binom{m-1}{(r-3)/2}\binom{k-1}{(r-1)/2}}{\binom{m+k}{k}}$$
when \(r\) is odd (Gibbons and Chakraborti, 2003). If the probability of \(R\) or fewer runs is less than the specified alpha level, then we reject the null hypothesis and conclude that residuals of the same sign tend to cluster. Rejecting the null hypothesis suggests there may be some factor not accounted for in the model (for example, seasonality).
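The conditional distribution of \(R\) and the lower-tail probability can be sketched directly from the formulas above (illustrative helper names, not VSP code):

```python
from math import comb

def runs_pmf(r, m, k):
    """P(R = r) given m positive and k negative residuals
    (conditional runs distribution, Gibbons and Chakraborti 2003)."""
    denom = comb(m + k, k)
    if r % 2 == 0:                      # even number of runs, r = 2s
        s = r // 2
        return 2 * comb(m - 1, s - 1) * comb(k - 1, s - 1) / denom
    s = (r - 1) // 2                    # odd number of runs, r = 2s + 1
    return (comb(m - 1, s) * comb(k - 1, s - 1)
            + comb(m - 1, s - 1) * comb(k - 1, s)) / denom

def runs_test_pvalue(r_obs, m, k):
    """Probability of r_obs or fewer runs under the null hypothesis."""
    return sum(runs_pmf(r, m, k) for r in range(2, r_obs + 1))
```

As a check, with \(m = k = 2\) the possible run counts are 2, 3, and 4, each with probability 1/3, and the probabilities sum to 1.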
Esterby, S.R. Review of methods for the detection and estimation of trends with emphasis on water quality applications, Hydrological Processes 10:127-149.
Gibbons, J.D., and S. Chakraborti. 2003. Nonparametric Statistical Inference, Marcel Dekker, NY.
Gilbert, R.O. 1987. Statistical Methods for Environmental Pollution Monitoring, Wiley, NY.
Hirsch, R.M., J.R. Slack, and R.A. Smith. 1982. Techniques of trend analysis for monthly water quality data, Water Resources Research 18(1):107-121.
Kendall, M.G. 1975. Rank Correlation Methods, 4th edition, Charles Griffin, London.
Mann, H.B. 1945. Nonparametric tests against trend, Econometrica 13:163-171.