Home

TSM4 Documentation - Time Series Modelling (TSM)


Contents

1. G_t = [1 + exp{γ(z_t − τ)}]^(−1)  (8.13), so that 0 < G_t < 1. This specification includes the well-known SETAR model as a special case, by allowing autoregressive parameters to switch with z_t = Y_{t−d}. However, as in the Markov-switching models, any desired subset of the parameters ψ can be allowed to vary across regimes, driven by any variable z_t. An ST-GARCH model can also be specified by replacing h_t in (6.1) or (6.2) by h_t = G_t h_{1t} + (1 − G_t) h_{2t}  (8.14), where h_{jt} = h(ψ_j; ·) for j = 1, 2. When no additional variables enter the transition function, the intercept τ measures the transition value of the regime indicator z_t, whose location determines which regime receives most weight at date t. With γ > 0, for example, regime 1 dominates when z_t < τ and regime 2 dominates when z_t > τ. However, the switch value can depend on additional explanatory variables x_t if desired. As the smoothness parameter γ takes absolutely large values the transition becomes abrupt, and the model is similar to the ordinary threshold (TAR) type models. Note: changing the sign of γ replaces G_t with 1 − G_t, and so is equivalent to interchanging the values of ψ_1 and ψ_2. The model has two observationally equivalent versions; be careful to interpret the estimates correctly. 8.4.2 Double Transition Model. A further variant is the double transition model, in which the transition function involves two location parameters, τ_1 and τ_2 (equation 8.15).
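The transition weight is easy to reproduce outside TSM for checking or plotting. Below is a minimal Python sketch (illustrative only, not TSM code; the argument names are assumptions) of the logistic weight in (8.13) and the ST-GARCH mixing in (8.14), written with the convention that regime 1 receives weight G_t and dominates when z_t < τ for γ > 0.

```python
import numpy as np

def transition_weight(z, gamma, tau):
    """Logistic transition weight G_t = 1 / (1 + exp(gamma * (z_t - tau))).
    For gamma > 0, G_t is near 1 when z_t << tau (regime 1 dominates) and
    near 0 when z_t >> tau (regime 2 dominates); large |gamma| gives an
    abrupt, TAR-like switch."""
    return 1.0 / (1.0 + np.exp(gamma * (z - tau)))

def st_garch_variance(z, h1, h2, gamma, tau):
    """ST-GARCH mixing as in (8.14): h_t = G_t * h1_t + (1 - G_t) * h2_t."""
    G = transition_weight(z, gamma, tau)
    return G * h1 + (1.0 - G) * h2
```

Flipping the sign of gamma interchanges the roles of h1 and h2, which is the observational equivalence noted above.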
2. An EDF file can contain tabulations for parameter t-values, and for any statistic computed by TSM or coded by the user, identified by name. It can also contain tabulations for the same statistic in different sample sizes. These composite tables can be created from the results of several simulation experiments, using a Merge EDF Tables command. Tables for multiple sample sizes are interpolated, to generate an approximate p-value relevant to the actual sample in use. Suppose tabulations exist for the sample sizes T_1 < T_2 < … < T_N. If the actual sample size is T, where T_i < T < T_{i+1}, then the p-value returned is a weighted average of the T_i and T_{i+1} tabulations, with weights w and 1 − w (equation 13.1). If either T < T_1 or T > T_N, then the nearest tabulation is used, un-weighted. Note that the squared weighting coefficient places greater weight on the larger sample size than a simple linear interpolation would do, as appropriate if the distributions in question enjoy convergence to the limit at the rate T^(−1/2). 13.4 Bootstrap Inference. The program optionally computes bootstrap standard errors, and p-values for test statistics, including t-tests on parameters, diagnostic tests, and specified tests of model restrictions. See, for example, Horowitz (2000) and Li and Maddala (1996) for background information on the bootstrap. The main method adopted is the parametric bootstrap. In other words, for each bootstrap replication
3. 12.4 Information Matrix Test. This is a diagnostic for maximum likelihood estimation, with a CM-type motivation. The test in effect compares the matrices Â in (11.1) and Q̂ term by term, with the null hypothesis that the expected difference is zero. Thus, letting c_t = l_t, where l_t denotes the t-th contribution to the log-likelihood, and defining q_t = ∂l_t/∂θ  (12.4), the moments under test are of the form m̂_t = vech(q̂_t q̂_t' + ∂²l̂_t/∂θ∂θ')  (12.5), where the hats denote evaluation at estimated values and vech denotes the omission of elements redundant through symmetry. See Davidson (2000), Section 12.5.4, for further details. If either this test or the heteroscedasticity tests reject, the use of the robust covariance matrix option is strongly advised. 12.5 Nyblom-Hansen Stability Tests. These are useful general-purpose tests for the stability of an estimated model, based on the partial sums of the gradient contributions; see Nyblom (1989), Hansen (1990, 1992). They are valid for any of the time-domain estimators implemented in TSM4, and can be computed for the model as a whole, and also, optionally, for the parameters of the model one at a time. Let s_t = Σ_{s=1}^{t} q̂_s  (p×1)  (12.6), where q_t is defined in (12.4). The full-model test statistic, called the Lc statistic in Hansen (1990), is then NH = (1/n) Σ_{t=1}^{n} s_t' V̂^(−1) s_t, where V̂ = Σ_{t=1}^{n} q̂_t q̂_t'  (12.7). For testing parameters for constancy individually, the statistics have the form NH_k = (1/n) Σ_{t=1}^{n} s_{kt}² / Σ_{t=1}^{n} q̂_{kt}²  (12.8)
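As an illustration of how the full-model and individual NH statistics can be assembled from the per-observation gradient contributions, here is a hedged numpy sketch (the exact normalisation used in (12.7)-(12.8) should be checked against the TSM output; this follows the common Lc construction).

```python
import numpy as np

def nyblom_hansen_lc(scores):
    """Full-model Lc statistic from the T x p matrix of per-observation score
    (gradient) contributions evaluated at the estimates:
    s_t = sum_{r<=t} q_r;  Lc = (1/T) * sum_t s_t' V^{-1} s_t,  V = sum_t q_t q_t'."""
    q = np.asarray(scores, dtype=float)
    T = q.shape[0]
    S = np.cumsum(q, axis=0)          # partial sums of the gradient contributions
    V = q.T @ q                        # outer-product (OPG) matrix
    Vinv = np.linalg.pinv(V)
    return sum(s @ Vinv @ s for s in S) / T

def nyblom_hansen_individual(scores):
    """One statistic per parameter: (1/T) * sum_t s_{t,k}^2 / sum_t q_{t,k}^2."""
    q = np.asarray(scores, dtype=float)
    T = q.shape[0]
    S = np.cumsum(q, axis=0)
    return (S ** 2).sum(axis=0) / (T * (q ** 2).sum(axis=0))
```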
4. In the fractional model, too, the different types of intercept represent different models, not just alternative parameterizations. While γ_{01} ≠ 0 represents merely a location shift of the fractionally integrated process, γ_{02} ≠ 0 implies the presence of a deterministic trend of O(t^d). When d < 0 this term degenerates to 0 after a finite number of steps; γ_{02} would be asymptotically unidentified in that case, and should be suppressed. In the bilinear model (see Section 4.3) the two intercept cases likewise represent different models, with different dynamics. 4.2.4 Regressor Types. The variables x_{1t}, x_{2t} and x_{3t} in equation (4.1) are referred to as regressors of Type 1, Type 2 and Type 3 respectively. As in the linear regression, each Type can be specified with a different number of lags. However, in an ARMA or ARFIMA equation the Types have special significance, because of the way the variables enter the dynamics in equation (4.1). A model with only Type 1 regressors can be thought of as exhibiting 'error dynamics', since a transformation allows it to be rewritten with only the error term u_t entering in lagged form. A model with Type 2 regressors exhibits 'structural dynamics', since it has a distributed lag representation. As in a linear regression, the dependent variable can enter as a Type 2 variable, with lags, and in this case the current value is suppressed. However, specifying an AR form is a more natural way to include the lagged dependent
5. The estimated transition probabilities are also reported in the output, with standard errors. Standard errors are not reported for the probabilities p_{iM} = 1 − Σ_{j=1}^{M−1} p_{ij}, i = 1, …, M  (8.5). The series Pr(S_t = j | Y_t), for j = 1, …, M − 1, the filter probabilities, are a by-product of the estimation. From these we can also compute the sequences of smoothed probabilities Pr(S_t = j | Y_T), which are obtained from the backwards recursion Pr(S_t = j | Y_T) = Pr(S_t = j | Y_t) Σ_{i=1}^{M} p_{ji} Pr(S_{t+1} = i | Y_T) / Pr(S_{t+1} = i | Y_t), for t = T − 1, …, 1  (8.6). These series should give the best indication of which regime the system is occupying at each date in the sample; see Kim and Nelson (1999), Section 4.3.1, for details. 8.2 Explained Switching. This setup is the same as Case 1, except that the t_{ij} are functions of specified predetermined variables. Two versions of this model can be specified. In the first, the equations have the general form t_{jt} = α_{0j} + Σ_{i=2}^{M} α_{ij} D_{it} + Σ_{m=1}^{R} β_{mj} x_{m,t−J},  j = 1, …, M − 1  (8.7), where D_{it} = 1 if the current regime is i, for i = 2, …, M, and zero otherwise, and J ≥ 0 is a fixed lag that can be specified interactively (although distributed lags would need to be set up manually). In other words, the representation may depend on the current regime, through possible intercept shifts. This model nests the Markov switching model as the case where β_{mj} = 0 for all j and m. However, the way the probability of switching to regime j
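For reference, the backwards recursion (8.6) can be sketched in a few lines of Python, taking as inputs the filter probabilities Pr(S_t = j | Y_t), the one-step-ahead probabilities Pr(S_t = j | Y_{t−1}) and the transition matrix, all of which TSM computes internally. This is the standard Kim smoothing algorithm and is illustrative rather than a transcription of TSM's code.

```python
import numpy as np

def kim_smoother(filter_prob, predict_prob, P):
    """Backward recursion for smoothed regime probabilities (cf. equation 8.6).
    filter_prob[t, j]  = Pr(S_t = j | Y_t)
    predict_prob[t, j] = Pr(S_t = j | Y_{t-1})
    P[i, j]            = Pr(S_t = j | S_{t-1} = i)
    Returns smoothed[t, j] = Pr(S_t = j | Y_T)."""
    T, M = filter_prob.shape
    smoothed = np.zeros_like(filter_prob)
    smoothed[-1] = filter_prob[-1]
    for t in range(T - 2, -1, -1):
        # regime-by-regime ratio of smoothed to predicted probabilities at t+1
        ratio = smoothed[t + 1] / np.maximum(predict_prob[t + 1], 1e-300)
        smoothed[t] = filter_prob[t] * (P @ ratio)
        smoothed[t] /= smoothed[t].sum()   # guard against rounding drift
    return smoothed
```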
7. McCabe and Leybourne (2008). The HML test is based on the long-range autocovariances, and is asymptotically N(0, 1) under the null hypothesis of short memory. Two settings need to be selected: a truncation parameter c, which fixes the lowest-order lag entering the statistic, and L, the bandwidth truncation of the variance parameter. See the Harris et al. paper for details. These settings can be changed in the Options / General dialog, under Special Settings. All of these tests except the Robinson-Lobato test can be computed post-estimation, for model residuals as well as for raw data; see the Diagnostics dialog (under Options / Tests and Diagnostics) to select these options. In this case, note that the HML bias correction is implemented for nonlinear as well as linear models, by substituting derivatives for measured linear regressors. This extension is natural, but is not dealt with by HML in their paper, hence it should be used with caution. 12.11.2 Tests of I(1). Augmented Dickey-Fuller Test (Dickey and Fuller 1979, Said and Dickey 1984). The number of lags for the ADF test is chosen by a user-selectable information criterion; see Options / Tests and Diagnostics / Model Selection Criterion. If the option is set to None, the lag length must be set manually, using a scroll bar in the same dialog. Phillips-Perron Test (Phillips and Perron 1988). Elliott-Rothenberg-Stock Tests (Elliott et al. 1996). There are two
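As a quick external cross-check of the ADF procedure described here (an illustration using the statsmodels library, not part of TSM), the lag length can likewise be chosen by an information criterion:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Illustrative data: a pure random walk, for which the unit-root null holds.
y = np.cumsum(np.random.default_rng(0).standard_normal(200))

stat, pvalue, usedlag, nobs, crit, icbest = adfuller(y, regression="c", autolag="AIC")
print(f"ADF statistic {stat:.3f}, p-value {pvalue:.3f}, lags chosen by AIC: {usedlag}")
```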
8. Note that econometrics packages often report the criteria in 'Smaller = Better' style, and also divide them conventionally by T. The SIC and the HQC are consistent selection criteria, meaning that, if the true model is one of those compared, it will be selected with a probability that approaches 1 as T increases. This is not true of the AIC. 3. Sum of squared residuals ('Sum of Squares' or RSS). 4. R² ('R-squared') is defined in every case as the square of the correlation coefficient of the actual and fitted values of the dependent variable. Note that it is accordingly always defined on the interval [0, 1]. This formula agrees with the conventional formula 1 − RSS/TSS, where TSS is the sum of squared mean deviations of the dependent variable, in the case of linear least squares estimation with an intercept included. However, note that the latter formula does not generalize to nonlinear models, IV estimates, etc. The given formula is always valid, although not, of course, a consistent model selection criterion. 5. R̄² ('R-bar-squared') = 1 − (1 − R²)(T − 1)/(T − p), where p is the number of parameters fitted in the equation. This is a commonly used model selection criterion for regression models. 6. The standard deviation (SD), skewness (Sk) and kurtosis (Kt) of the residuals, and the Jarque-Bera statistic. The latter is defined as JB = T(Sk²/6 + (Kt − 3)²/24)  (9.4), and is asymptotically
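The statistics in items 4-6 are straightforward to reproduce from the actual values, fitted values and residuals. The following Python sketch is illustrative (the moment-based definitions of Sk and Kt are assumptions consistent with the JB formula above):

```python
import numpy as np

def fit_statistics(y, yhat, n_params):
    """R^2 as the squared correlation of actual and fitted values (always in [0, 1]),
    R-bar^2 = 1 - (1 - R^2)(T - 1)/(T - p), and JB = T*(Sk^2/6 + (Kt - 3)^2/24)
    computed from the residuals."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    T, p = len(y), n_params
    u = y - yhat
    r2 = np.corrcoef(y, yhat)[0, 1] ** 2
    r2bar = 1.0 - (1.0 - r2) * (T - 1) / (T - p)
    sd = u.std()
    sk = np.mean((u - u.mean()) ** 3) / sd ** 3
    kt = np.mean((u - u.mean()) ** 4) / sd ** 4
    jb = T * (sk ** 2 / 6.0 + (kt - 3.0) ** 2 / 24.0)
    return {"R2": r2, "R2bar": r2bar, "SD": sd, "Sk": sk, "Kt": kt, "JB": jb}
```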
9. Note that the generalized and regular cointegrating models are not distinguished when the d_{2ji} do not depend on i or j. Also be careful to note that the parameter d_{2ji} is unidentified if the jth variable is suppressed in the ith cointegrating relation. In this case it should be fixed to 0. Tests of zero restrictions on the ECM coefficients are also problematic, because the corresponding d coefficients are unidentified under the null hypothesis. For more information on fractional cointegration models see Davidson (2002, 2005), and also Davidson, Byers and Peel (2006) for an example using TSM4. In both models the program allows the d_j and d_{2j} parameters, for j = 2, …, N, to be estimated as the differences from d_1 and d_{21}. This allows the natural restriction that all variables are integrated/cointegrated to the same order to be easily imposed and tested. 6 Conditional Heteroscedasticity Models. 6.1 Single Equation Models. In any of equations (4.1)-(4.2), (4.21), (4.29), or (4.15)-(4.16), let u_t = h_t^(1/2) e_t, where e_t ~ i.i.d.(0, 1). h_t can be defined either by equation (6.1), in which s_t = I(u_t < 0), giving the GARCH class of models with n = 2 or the APARCH class with n > 0 unrestricted, or by equations (6.2)-(6.3), giving the EGARCH class. Only one of κ and κ* in (6.1) and (6.2) may be different from zero. Here φ(L) and
10. option, and set up a Wald test of the significance of some/all of the Regime 2 parameters. The sup-Wald statistic is computed by fixing γ at a large value (say 100) and generating a grid of values of the switch date parameter τ. The maximum of the statistic over the grid is reported in the output. Also see Sections 12.6 and 12.7 on pre-programmed tests of specification using the 'sup' principle. 9 Post-Estimation Options. 9.1 Residuals and Associated Series. Two varieties of residual can be plotted, and retrieved for further analysis. In the case of the models represented in (4.1), (4.15)-(4.16) and (5.1), these are respectively the ordinary residuals û_t and the variance-adjusted residuals ĥ_t^(−1/2) û_t, where the hats denote evaluation at the estimated parameters. If there is no conditional variance model specified, 'adjustment' simply means that the variance is normalized to unity. Otherwise, the conditional variances ĥ_t are also reported. In systems, the adjusted residuals are Ĥ_t^(−1/2) û_t. In probit and logit models the ordinary residuals are computed as y_t − F(ĥ_t), while in the probit models the adjusted residuals are what are normally called the generalized residuals, defined as (y_t − F(ĥ_t)) φ(ĥ_t) / [F(ĥ_t)(1 − F(ĥ_t))], where φ denotes the standard normal density. This is the series which is orthogonal by construction to the regressors or, more generally, to the derivatives of ĥ_t, evaluated at the maximum likelihood
11. there are Σ_i T_i observations, but, because of the repeated entries, the true degrees of freedom for a regression with these data is N − p. The averaged form of equation (3.1) has a disturbance of the form η_i + v̄_i. 3. Time differences: w*_{it} = Δw_{it} = w_{it} − w_{i,t−1} for t = 2, …, T_i, and w*_{i1} = 0. 4. Orthogonal deviations: w*_{it} is the deviation of w_{it} from the mean of its future values w_{i,t+1}, …, w_{i,T_i}, scaled so that an i.i.d. disturbance remains homoscedastic after transformation, for t = 1, …, T_i − 1. This transformation corresponds to the operations of differencing and then applying the appropriate transformation for GLS estimation of the equation, assuming that Δv_{it} is the difference of an independent sequence; see Arellano (2003), page 17. Under transformations 1, 3 or 4 applied to the variables in equation (3.1), note that η_i disappears. 3.2 Dummy Variables. Dummy variables can be generated and added to the equation automatically. The available options are as follows. 1. Individual dummies: d_{jit} = 1 if i = j, 0 otherwise. Including these dummies in the equation, for j = 1, …, N, allows the estimation of the coefficients η_j, assuming the data have not already been transformed to remove these effects. This option is only available if no data transformation is selected. 2. Time dummies: d_{sit} = 1 if t = s, 0 otherwise, for s = min(T_i) + 1, …, max(T_i). Include these dummies to estimate fixed time effects. 3. Group dummies: d_{kit} = 1 if i ∈ G_k, 0 otherwise, where G_k
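A minimal sketch of the orthogonal-deviations transform for a single individual's series, following the standard Arellano-Bond construction assumed here (TSM applies the transformation internally; this is only for illustration):

```python
import numpy as np

def forward_orthogonal_deviations(w):
    """Orthogonal-deviations transform of one individual's series w (length T):
    each observation is replaced by its deviation from the mean of its *future*
    values, rescaled so that i.i.d. disturbances remain homoscedastic. The last
    observation is lost."""
    w = np.asarray(w, dtype=float)
    T = len(w)
    out = np.empty(T - 1)
    for t in range(T - 1):
        future_mean = w[t + 1:].mean()
        scale = np.sqrt((T - t - 1) / (T - t))   # sqrt(#future / (#future + 1))
        out[t] = scale * (w[t] - future_mean)
    return out
```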
12. 5.2.1 System …
5.2.2 System Exogenous Variables
5.2.3 Simultaneous Equations
5.2.4 Nonlinear Systems
5.3 Error Correction Models
6 Conditional Heteroscedasticity Models
6.1 Single Equation Models
6.2 Definitions and Details
6.2.1 GARCH Parameterization
6.2.2 HYGARCH and FIGARCH
6.2.3 Asymmetric GARCH and Power GARCH
6.2.4 GARCH Regressors
6.2.5 EGARCH
6.2.6 HYEGARCH and FIEGARCH
6.3 Conditional Heteroscedasticity in Discrete Data Models
7 Conditionally Heteroscedastic Systems
7.1 Implemented Model Variants
7.2 Definitions and Details
7.2.1 Multivariate GARCH
7.2.2 DCC Multivariate GARCH
7.2.3 BEKK Multivariate GARCH
8 Regime Switching
8.1 Simple Markov Switching
8.2 Explained Switching
8.3 Hamilton's Markov Switching Model
8.4 The Smooth Transition (ST) Model
8.4.1 …
8.4.2 Double Transition Model
8.4.3 Structural Change: Date Transition Model
8.5 Testing for Breaks and Regimes
13. 87 87 113 Davidson J 2000 Econometric Theory Oxford Blackwell Publishers Davidson J 2002 A model of fractional cointegration and tests for cointegration using the bootstrap Journal of Econometrics 110 2 pp187 212 Davidson J 2004a Moment and memory properties of linear conditional heteroscedasticity models Journal of Business and Economics Statistics 22 1 pp 16 29 Davidson J 2004b Forecasting Markov switching dynamic processes Statistics and Probability Letters Vol 68 2 pp 137 147 Davidson J 2005 Testing for fractional cointegration the relationship between government popularity and economic performance in the UK in New Trends in Macroeconomics eds C Diebolt and C Kyrtsou Springer Verlag Davidson J 2006 Alternative bootstrap procedures for testing cointegration in fractionally integrated processes Journal of Econometrics 133 2 741 777 Davidson J 2009 When is a time series I 0 Chapter 13 of The Methodology and Practice of Econometrics eds Jennifer Castle and Neil Shepherd Oxford University Press Davidson J D Byers and D Peel 2006 Support for Governments and Leaders Fractional Cointegration Analysis of Poll Evidence from the UK 1960 2004 Studies in Nonlinear Dynamic and Econometrics 10 1 Davidson J A Monticini and D Peel 2007 Implementing the wild bootstrap using a two point distribution Economics Letters 96 3 309 315 Davidson J and P Sibbertsen
14. 9 Post-Estimation Options
9.1 Residuals and Associated Series
9.2 Model Performance and Selection Criteria
9.3 Q Tests
9.4 Ex Post Forecasts
9.5 Ex Ante Multi-step Forecasts
9.5.1 Analytic Forecasts
9.5.2 Moving Average Coefficients, Impulse and Step Responses
9.5.3 Forecast Error Variance Decomposition
9.5.4 Forecasting Regime Switching Models
9.5.5 Monte Carlo Forecasts
10 Estimation Criteria
10.1 Single Equation Methods
10.1.1 Least Squares
10.1.2 Instrumental Variables
10.1.3 Gaussian Maximum Likelihood
10.1.4 Student t Maximum Likelihood
10.1.5 Skew Student Maximum Likelihood
10.1.6 GED Maximum Likelihood
10.1.7 Whittle Maximum Likelihood
10.1.8 Probit and Logit
10.1.9 Ordered Probit and Logit
10.1.10 Poisson
10.1.11 Zero-Inflated Discrete Models
10.2 …
10.2.1 Least Generalized Variance
10.2.2 Generalized Method of Moments
10.2.3 Gaussian ML with Conditional Heteroscedasticity
10.2.4 Student t ML with Conditional Heteroscedasticity
10.2.5 GED ML with Conditional Heteroscedasticity
10.3 Markov Switching Models
15. Andrews D W K 1991 Heteroskedasticity and autocorrelation consistent covariance matrix estimation Econometrica 59 817 58 Andrews D W K 1993 Tests for parameter instability and structural change with unknown change point Econometrica 61 821 856 Andrews D W K and J C Monahan 1992 An improved heteroskedasticity and autocorrelation consistent covariance matrix estimator Econometrica 60 953 66 Andrews D W K and W Ploberger 1994 Optimal tests when a nuisance parameter is present only under the alternative Econometrica 62 1383 1414 Arellano M 2003 Panel Data Econometrics Oxford University Press Arellano M and S Bond 1991 Some tests of specification for panel data Monte Carlo evidence and an application to employment equations Review of Economic Studies 58 277 297 Bhargava A L Franzini and W Narendranathan 1982 Serial Correlation and the fixed effects model Review of Economic Studies 49 533 549 Bierens H J 1990 A consistent conditional moment test of functional form Econometrica 58 1443 1458 Bollerslev T 1986 Generalized autoregressive conditional heteroscedasticity Journal of Econometrics 31 307 27 Box G E P and Cox D R 1964 An analysis of transformations Journal of Royal Statistical Society Series B vol 26 pp 211 246 Box G E P and D A Pierce 1970 The distribution of residual autocorrelations in autoregressive integrated moving averag
16. β(L) are finite lag polynomials, analogous to φ(L) and θ(L) in (4.1). The x_{jt} for j = 4, 5, 6 are vectors of variables with coefficient vectors π_j. Also note that the vectors x_{1t}, x_{2t} or x_{3t} in equation (4.1) or (4.15)-(4.16) can include either h_t or h_t^(1/2). This is the GARCH-M model. It might often be appropriate to treat the GARCH-M variable as a Type 3 regressor. It cannot be of Type 1 when the data are differenced, as in (4.6). The basic ARCH and GARCH models are implemented as the special case of (6.1). In the usual Bollerslev (1986) notation, the GARCH model is written as β(L)h_t = κ* + α(L)u_t²  (6.4), where α(L) = β(L) − φ(L). 6.2 Definitions and Details. 6.2.1 GARCH Parameterization. In equations (6.1) and (6.2), observe that the zero-order term of the lag polynomial on the right-hand side is zero by construction, so that the model only involves lagged values of u_t. The usual GARCH formulation is obtained if only the parameters β(L) and α(L) are present. The parameterisation adopted in (6.1) is the 'ARMA in squares' form φ(L)u_t² = κ + β(L)v_t  (6.5), where v_t = u_t² − h_t. The roots of the polynomial φ(L) must be stable for covariance stationarity. The package will optionally report the estimates in the conventional Bollerslev (1986) form, as in equation (6.4). As a simple example, the GARCH(1,1) model may be written either as h_t = κ + αu²_{t−1} + βh_{t−1}  (6.6) or as u²_t = κ + (α + β)u²_{t−1} + v_t − βv_{t−1}  (6.7)
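Moving between the two GARCH(1,1) parameterizations (6.6) and (6.7) is a matter of simple arithmetic, as the following sketch illustrates (hypothetical helper functions, not TSM's reporting code):

```python
def arma_in_squares_to_bollerslev(kappa, phi, beta):
    """(6.7) -> (6.6): if u_t^2 = kappa + phi*u_{t-1}^2 + v_t - beta*v_{t-1},
    the conventional coefficients are alpha = phi - beta with the same beta."""
    alpha = phi - beta
    return kappa, alpha, beta

def bollerslev_to_arma_in_squares(kappa, alpha, beta):
    """(6.6) -> (6.7): the AR-in-squares coefficient is phi = alpha + beta."""
    phi = alpha + beta
    return kappa, phi, beta

# Example: phi = 0.95, beta = 0.85 gives alpha = 0.10; covariance stationarity
# requires the AR root in squares, phi = alpha + beta, to lie inside the unit circle.
print(arma_in_squares_to_bollerslev(0.05, 0.95, 0.85))
```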
17. Hausman (1978) bias test. Since the pre-sample shocks are set to zero, the data are created using the correction term derived by Davidson and Hashimzade (2009). This correction is only implemented for stationary series (d < 1/2), so if d ≥ 1/2 the data are integer-differenced, resampled using d − 1, then re-cumulated, using the observed initial observation to supply the initial condition. Note that the resulting series, after normalization, would converge to Type I fractional Brownian motion. 14.1.4 Specification Tests in Geweke-Porter-Hudak Estimation. The bias test of Davidson and Sibbertsen (2009) compares wide- and narrow-band GPH estimates. The statistic has an asymptotic standard normal distribution (upper-tail rejection) on the null hypothesis of a pure fractional model and, more broadly, in models without significant bias due to neglected short-run dynamics. The skip-sampling test of Davidson and Rambaccussing (2015) tests the null hypothesis of long memory against the alternative of weakly dependent data. This test is most effective (consistent and asymptotically correctly sized) when a pseudo-p-value is computed as a composite with that of the usual Wald test that the memory parameter d is zero; see the cited article for details. 14.2 Cointegration Analysis. The menu item Setup / Cointegration Analysis gives access to the maximum eigenvalue and trace tests of cointegrating rank of a set of I(1) variables, and also to tests of restrictions
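For orientation, a basic log-periodogram (GPH) estimate of d — the quantity whose wide- and narrow-band versions are compared by the bias test — can be sketched in numpy as follows (an illustration of the usual GPH regression; the bandwidth rule m = T^power is an assumption, and TSM's own settings should be used for serious work):

```python
import numpy as np

def gph_estimate(x, power=0.5):
    """Log-periodogram (GPH) estimate of d using the first m = floor(T**power)
    Fourier frequencies: regress log I(lambda_j) on -2*log(2*sin(lambda_j/2));
    the slope is the estimate of d."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    m = int(np.floor(T ** power))
    lam = 2.0 * np.pi * np.arange(1, m + 1) / T
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    I = (np.abs(dft) ** 2) / (2.0 * np.pi * T)     # periodogram ordinates
    regressor = -2.0 * np.log(2.0 * np.sin(lam / 2.0))
    X = np.column_stack([np.ones(m), regressor])
    beta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return beta[1]
```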
18. 9.2 Model Performance and Selection Criteria. The program output routinely reports the following statistics. 1. By default, the maximum value of the maximand is reported, that is, −C, where estimation is by minimizing the criterion C. This criterion may be a sum of squared residuals, or a quadratic form in method of moments estimation. Of course, in maximum likelihood estimation −C = L, where L is the maximum of the log-likelihood function defined by the relevant formula; see equations (10.3)-(10.21) below. Optionally, the sign can be reversed, so that the minimand is reported. It appears most natural to report the maximand when estimation is by maximum likelihood, but possibly less confusing to report the minimand in method of moments estimation, when there is no natural analogue of the log-likelihood function to consider. 2. By default, the following model selection criteria are reported, where in each case p is the number of fitted parameters and T is sample size: Akaike Information Criterion, AIC = −C − p; Schwarz Information Criterion, SIC = −C − 0.5p log T; Hannan-Quinn Criterion, HQC = −C − p log log T. Note that by default each of these measures provides a basis for choosing between a set of alternative models on the criterion 'Larger = Better'. By choosing to report the minimand, the signs of all these criteria can be optionally reversed, so that the selection criterion should be 'Smaller = Better'.
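A small helper showing the three criteria in the 'Larger = Better' orientation described above (illustrative Python, not TSM code; check the scaling against the program output):

```python
import numpy as np

def selection_criteria(loglik, p, T):
    """AIC, SIC and HQC with loglik the maximized log-likelihood (or minus the
    minimized criterion), p the number of fitted parameters, T the sample size."""
    return {
        "AIC": loglik - p,
        "SIC": loglik - 0.5 * p * np.log(T),
        "HQC": loglik - p * np.log(np.log(T)),
    }
```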
19. M, is a dummy variable denoting the regime prevailing at time t, and θ(S_t) represents the parameter values applying in regime S_t. Here, the vector θ is to be thought of as the concatenation of all the parameters in the specified model. The vectors θ(1), …, θ(M) are estimated, although the package allows different groups of parameters either to switch, or to be constrained equal across regimes. The current version of the program allows M = 2, 3 or 4. Three types of switching mechanism are supported in the package. See Kim and Nelson (1999) for additional details on these models. 8.1 Simple Markov Switching. Here the switching is under the control of a Markov chain updating mechanism, with fixed transition probabilities. Let f(Y_t | S_t = j, Y_{t−1}) denote the probability density of the dependent variable at time t when regime j is operating, where Y_{t−1} represents the history of the process to date t − 1, and let the probability of falling in regime j at time t evolve according to Pr(S_t = j | Y_t) = f(Y_t | S_t = j, Y_{t−1}) Pr(S_t = j | Y_{t−1}) / Σ_{i=1}^{M} f(Y_t | S_t = i, Y_{t−1}) Pr(S_t = i | Y_{t−1})  (8.2), where Pr(S_t = j | Y_{t−1}) = Σ_{i=1}^{M} p_{ij} Pr(S_{t−1} = i | Y_{t−1})  (8.3). The transition probabilities p_{ij} = Pr(S_t = j | S_{t−1} = i) are M(M − 1) fixed parameters to be estimated, subject to Σ_{j=1}^{M} p_{ij} = 1. The likelihood function maximized for estimation is L = Σ_{t=1}^{T} log Σ_{j=1}^{M} f(Y_t | S_t = j, Y_{t−1}) Pr(S_t = j | Y_{t−1})  (8.4). Note that the transition probabilities are mapped onto the real line for unrestricted estimation, so that the parameters actually estimated are unrestricted values t_{ij}.
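The forward recursion (8.2)-(8.4) is compact enough to sketch directly. The following Python function is illustrative (the regime densities are supplied as an array; initialization of the regime probabilities is left to the user):

```python
import numpy as np

def hamilton_filter(density, P, init):
    """Forward filter for a Markov-switching model (equations 8.2-8.4).
    density[t, j] = f(Y_t | S_t = j, Y_{t-1}); P[i, j] = Pr(S_t = j | S_{t-1} = i);
    init[j] = Pr(S_0 = j). Returns filter probabilities, prediction probabilities
    and the log-likelihood."""
    T, M = density.shape
    filt = np.zeros((T, M))
    pred = np.zeros((T, M))
    loglik = 0.0
    prev = np.asarray(init, dtype=float)
    for t in range(T):
        pred[t] = prev @ P                 # equation (8.3)
        joint = density[t] * pred[t]
        lik_t = joint.sum()
        loglik += np.log(lik_t)            # contribution to (8.4)
        filt[t] = joint / lik_t            # equation (8.2)
        prev = filt[t]
    return filt, pred, loglik
```

The outputs filt and pred are exactly the inputs needed by the smoothing recursion sketched earlier.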
20. where g_t is of the general form g_t = M u_t + Γ(|u_t| − m_t)  (7.3). Here u_t = H_t^(1/2) e_t (N×1), where H_t = diag(h_t) (N×N), e_t ~ i.i.d.(0, C) (N×1), and C (N×N) is a fixed correlation matrix with units on the diagonal. M and Γ are diagonal matrices with equation asymmetry parameters on the diagonal, m_t is defined in Section 6.2.5, and ι (N×1) is the unit vector. Only one of the N-vectors κ and κ* in (7.1) and (7.2) may be different from zero. Refer to the discussion of the various special cases in Section 6.2, which generalize in the natural way to the multivariate model. The fixed correlation restriction can be relaxed in the dynamic conditional correlation (DCC) model. Here it is assumed that e_t ~ i.i.d.(0, R_t), where R_t = diag(Q_t)^(−1/2) Q_t diag(Q_t)^(−1/2)  (7.4) and Q_t = (1 − α − β)C + α e_{t−1}e_{t−1}' + β Q_{t−1}  (7.5). In this implementation α ≥ 0 and β ≥ 0 are additional scalar parameters, so that for N > 2 all dynamic correlations follow the same process. Alternatively, in the BEKK model (Engle and Kroner 1995), u_t = H_t^(1/2) e_t, where e_t ~ i.i.d.(0, I_N) (N×1), and vec(H_t) follows the dynamic equation (7.6), with lag polynomial matrices A(L) (7.7) and B(L) (7.8) (N²×N²) constructed from Kronecker products of the form A_j ⊗ A_j and B_j ⊗ B_j. Only one of diag(κ) and K = diag(κ*) may be different from zero, and C and E are alternative correlation matrices.
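The DCC recursion (7.4)-(7.5) is easily reproduced for given parameter values. Below is a hedged numpy sketch taking the standardized residuals and the fixed correlation matrix C as inputs (illustrative only; initializing Q at C is an assumption):

```python
import numpy as np

def dcc_correlations(e, C, a, b):
    """Dynamic conditional correlations from the T x N matrix of standardized
    residuals e: Q_t = (1-a-b)*C + a*e_{t-1}e_{t-1}' + b*Q_{t-1}, followed by
    R_t = diag(Q_t)^(-1/2) Q_t diag(Q_t)^(-1/2)."""
    T, N = e.shape
    Q = C.copy()
    R = np.zeros((T, N, N))
    for t in range(T):
        if t > 0:
            Q = (1.0 - a - b) * C + a * np.outer(e[t - 1], e[t - 1]) + b * Q
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)
    return R
```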
21. In FGLS, τ is estimated from the residuals of the 'within' and 'between' regressions. These are respectively regressions in the individual mean deviations and the individual time means. In the latter regression the disturbances are η_i + v̄_i, with variances (depending on i) σ_η² + σ_v²/T_i. Letting σ̂_1²  (3.3) and σ̂_3²  (3.4) denote the residual variances from the regressions under transformations 1 and 3 respectively, σ_η² is estimated by the formula in (3.5). Note that the formula simplifies in the case of a balanced panel with T_i = T for all i. The second step of FGLS is performed by replacing τ by τ̂ = σ̂_η²/σ̂_v² in (3.2). Note that time effects cannot be modelled as random in this release. 3.3.3 Maximum Likelihood for Random Effects. Maximum likelihood extends the GLS approach by optimizing the concentrated Gaussian log-likelihood of the sample with respect to τ. This requires just a univariate numerical maximization (line search) over τ values. The criterion function is given in equation (3.6), where σ̃_1² is defined similarly to (3.3), except that y, x are replaced by the transformed variables y*, x*, as in (3.2). 3.3.4 Instrumental Variables / GMM. Panel data models may be estimated by IV if valid instruments are constructed by the user. Note that specialized GMM procedures for dynamic panel data models, unless they can be implemented by a suitable construction
22. and non-nested hypotheses. Econometrica 57, 307-333. White, Halbert (1980) A heteroskedasticity-consistent covariance matrix and a direct test for heteroskedasticity. Econometrica 48, 817-838.
23. are cointegrated but which contain no cointegrated subsets It is shown in Davidson 1998a that identified structural cointegrating relations must be irreducible and in any case the maximum likelihood estimates of irreducible relations are mixed Gaussian so that valid t statistics can be computed For the irreducible vectors that the MINIMAL procedure picks out implied point estimates and standard errors are reported and also for sets of more than two variables the omission tests for each included variable being the chi squared statistic obtained by omitting it Also given are the Phillips Perron statistics computed from the implied cointegrating residuals Assessing this evidence allows a decision to be made whether to treat a particular relation as irreducibly cointegrating Alternative significance levels can be chosen 10 5 2 5 or 1 to give a more or less stringent decision criterion on rejecting cointegration The rule of thumb correction suggested by Monte Carlo experiments modifies the test critical values depending on the degree of over identification implied by the null hypothesis See Davidson 1998a for additional details 14 3 Automatic Model Selection The program has two routines for automatic model selection These work by estimating all members of a specified class of models and reporting the case which optimizes a chosen model selection criterion The choices accordingly do not depend on test outcomes and
24. be estimated individually, but the series of log-likelihood contributions for any model can be retrieved and added to the data set for subsequent analysis. The statistic is then easily computed 'by hand' using TSM's various data-handling features. Use the Data Transformation and Editing dialog to form the differences of two series, then the Compute Summary Statistics dialog for the mean and standard deviation. The Calculator dialog can then compute the statistic and, finally, consult Look Up Tail Probability for an asymptotic p-value. 12.9 Cusum of Squares Test. This is a version of the Brown-Durbin-Evans (1975) test for parameter stability, computed from the regression residuals and corrected for possible autocorrelation and conditional heteroscedasticity using a HAC variance estimator, as derived by Deng and Perron (2008). The statistic has the form CUSQ = max_{1≤t≤T} |Σ_{s=1}^{t} (û_s² − σ̂²)| / √(T Ω̂)  (12.20), where Ω̂ is any of the variance estimators described in Section 11.3, computed for the sample mean deviations of the û_t². The asymptotic distribution, under the assumptions specified in Deng and Perron (2008), is the supremum of the absolute value of a Brownian bridge. The test has power against the alternative of unconditional changes in the variance of the residuals in a regression model. Note that the assumptions require the existence of moments greater than fourth order, so the test may be inappropriate for fat-tailed
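The statistic itself is simple to compute once a long-run variance estimate for the squared residuals is available. A minimal sketch follows (the normalisation mirrors the description above and should be checked against (12.20)):

```python
import numpy as np

def cusum_of_squares(u, omega2):
    """Cusum-of-squares statistic: max over t of |partial sum of (u_s^2 - mean(u^2))|
    divided by sqrt(T * omega2), where omega2 is a HAC estimate of the long-run
    variance of the squared residuals."""
    u = np.asarray(u, dtype=float)
    T = len(u)
    dev = u ** 2 - np.mean(u ** 2)   # sample mean deviations of u_t^2
    partial = np.cumsum(dev)
    return np.max(np.abs(partial)) / np.sqrt(T * omega2)
```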
25. combined with a linear AR term, corresponds to the ESTAR model. This model interpolates smoothly between two AR coefficients, depending on the path of the process. If a unit root is imposed and v < 0, it embodies 'target zone' type behaviour, in which the process resembles a random walk for small deviations, but reverts to the mean after large deviations. An alternative way of implementing nonlinear dynamics is through the smooth transition regime-switching option. These models allow the value of exogenous variables to control the dynamic regime, but not the value of the ECM residual itself, as here. Note: treatments of the ESTAR model sometimes allow a different lag on the two occurrences of x in (4.24). This option is not available in the present implementation. 4.6 User-Coded Functions. 4.6.1 Coded Formulae. A virtually unlimited range of nonlinear specifications can be implemented by entering mathematical formulae directly. In other words, a formula can be typed using natural notation, combining the arithmetic operators, parentheses and standard mathematical functions (log, exp, sin, etc.) with parameters and variables. As a simple example, to compute the nonlinear regression y_t = α + βx_t^γ + u_t  (4.27), where the data set contained corresponding variables WYE and EXE, one could simply enter the line WYE = alpha + beta*EXE^gamma. See the User's Manual for detailed coding instructions. Another possibility
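The same example can be reproduced outside TSM for comparison, e.g. with scipy's curve_fit (illustrative only; WYE and EXE are simulated stand-ins for the named variables):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, alpha, beta, gamma):
    # the coded formula: y = alpha + beta * x**gamma
    return alpha + beta * x ** gamma

rng = np.random.default_rng(1)
EXE = rng.uniform(0.5, 5.0, 200)
WYE = 1.0 + 2.0 * EXE ** 0.7 + 0.1 * rng.standard_normal(200)

params, cov = curve_fit(model, EXE, WYE, p0=[0.0, 1.0, 1.0])
print("alpha, beta, gamma =", params)
print("standard errors    =", np.sqrt(np.diag(cov)))
```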
26. dependence are provided by the Q tests for levels (Box-Pierce 1970) and squares (McLeod-Li 1983) of the data. The latter is a test for nonlinear dependence in a serially uncorrelated (white noise) series. The default statistic is the Box-Pierce (1970) formula Q(m) = n Σ_{j=1}^{m} r_j²  (9.6), which is asymptotically chi-squared with m − p − q degrees of freedom when applied to the residuals of an ARMA(p, q) model. Optionally, this can be replaced by the asymptotically equivalent Ljung-Box (1978) formula Q(m) = n(n + 2) Σ_{j=1}^{m} r_j²/(n − j)  (9.7), with claimed better small-sample properties. 9.4 Ex Post Forecasts. One-step ex post forecasts are obtained by fitting the model using data up to time T, and then computing the usual fitted equation and residuals for periods T + 1 to T + F, such that all right-hand-side variables are treated as known. The main purpose of this option is to test model stability. In the general case two test statistics are computed: Forecast Test I = Σ_{t=T+1}^{T+F} û_t² / Var(û)  (9.8), where the denominator is the usual residual variance from the sample period, and Forecast Test II  (9.9), which compares the mean of the forecast-period residuals with the mean of the sample-period residuals. Test I is an asymptotically valid version of Chow's prediction test, distributed as χ²(F) under the null hypothesis of model stability, also assuming the disturbances are Gaussian. Test II is the usual difference-of-means test on the
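Both Q statistics are easy to verify by hand from the residual autocorrelations, as in this short sketch (illustrative Python; the autocorrelations use the usual biased denominator):

```python
import numpy as np

def q_statistics(u, m):
    """Box-Pierce (9.6) and Ljung-Box (9.7) statistics from residuals u, using the
    first m autocorrelations. Compare with chi-squared(m - p - q) for ARMA(p, q)
    residuals."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    n = len(u)
    denom = np.sum(u ** 2)
    r = np.array([np.sum(u[j:] * u[:-j]) / denom for j in range(1, m + 1)])
    box_pierce = n * np.sum(r ** 2)
    ljung_box = n * (n + 2) * np.sum(r ** 2 / (n - np.arange(1, m + 1)))
    return box_pierce, ljung_box
```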
27. have regressors drawn randomly in repeated runs as in a Monte Carlo experiment it is generally necessary to generate them from recursive equations in a multi equation model although there is an option in the coded function feature to insert 1 1 d standard normal data in place of a measured variable see the Users Manual Section 1 5 for details 13 6 1 Dynamic Data Simulation By default the dynamics of the specified model are fully reproduced in the simulation procedure Inversion of the AR MA and fractional lag components is generally self explanatory Inverted lag polynomials are truncated at the start of the available data or at the truncation point specified under Dynamic Model Settings in the Options ML and Dynamics dialog if this is set Unobserved pre sample values are set to 0 or in the case of ARCH GARCH processes to the appropriate power of the innovation variance not the unconditional variance note since this may be undefined as in nonstationary cases In the cases where Y is replaced by a coded function f in 4 31 the equation is first solved for f from u then the coded function g specified in f is added to it to generate Y Note that simulations cannot be performed with the Residual coding variant in 4 32 However it is generally possible to create an equivalent model dedicated to simulation using the W reserved name to represent f in 4 32 Be careful to note too that f u
28. infinite lag structures represented by (4.9) are approximated in the sample by replacing (1 − L)^(−d) by the truncated expansion Σ_{j=0}^{t−1} b_j L^j  (4.11). In other words, the lag distribution is truncated at the beginning of the available sample. In long-memory models, the omission of the pre-sample observations can change the distribution of the estimates, even asymptotically. A technique for correcting this effect is implemented experimentally; see Davidson and Hashimzade (2009). 4.2.3 Intercept and Linear Trend Dummies. These are built-in options, and do not need to be added as dummy regressors. γ_{01} or γ_{02} in equation (4.1) are called, respectively, intercepts of Type 1 and Type 2. At most one can be present. In autoregressive models the Type chosen makes no difference to the fit, but the value and interpretation of the coefficient is different; compare equations (4.4) and (4.5). The built-in trend is of Type 1, with coefficient γ_1. A Type 2 trend dummy is not built in, but it could be included as a generated regressor in the set x_{2t} in equation (4.1). If included in equation (4.4), for example, its coefficient would have the form φ(1)γ_1. Also note that a Type 2 intercept would become a function of γ_1 in this case. If a unit root is imposed, as in equation (4.6), γ_{01} is unidentified. The Type 1 trend coefficient γ_1 becomes, in effect, an intercept, behaving exactly like a Type 2 intercept. You cannot add both of these simultaneously.
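The truncated expansion in (4.11) is generated by a one-line recursion on the binomial coefficients. The following sketch is illustrative (for the weights of (1 − L)^d pass d; for the inverse operator pass −d):

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients b_j of the expansion of (1 - L)^d:
    b_0 = 1, b_j = b_{j-1} * (j - 1 - d) / j."""
    b = np.empty(n)
    b[0] = 1.0
    for j in range(1, n):
        b[j] = b[j - 1] * (j - 1 - d) / j
    return b

def frac_filter(x, d):
    """Apply the truncated filter to a series, cutting the lag distribution off at
    the start of the sample, as described above."""
    x = np.asarray(x, dtype=float)
    b = frac_diff_weights(d, len(x))
    return np.array([np.dot(b[:t + 1], x[t::-1]) for t in range(len(x))])
```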
29. k = 1, …, M, represents a partition of the individual indices into subsets. Group membership has to be specified by setting up indicator variables in the data set. This option can be used to estimate the model subject to the restriction that η_i = η_j when i and j belong to the same group G_k. If an intercept is included in x_{it}, one dummy from each set is automatically excluded, to avoid the 'dummy variable trap'. 3.3 Estimation Methods. Three estimation methods are implemented: ordinary least squares (OLS), two-step feasible generalized least squares (FGLS), and maximum likelihood. 3.3.1 Ordinary Least Squares for Fixed Effects. In OLS the η_i terms are treated as fixed, and estimated as the coefficients of the dummies of type 1 (fixed effects). Note that the regression with data transformation of type 1 is identical to the regression with untransformed data including dummies of type 1, but in the second case the estimates of η_i are reported. 3.3.2 Generalized Least Squares for Random Effects. In the random effects model the η_i terms are treated as random variables with mean 0 and variance σ_η², and assumed to be distributed independently of x_{it} for all i and t. Therefore the disturbances η_i + v_{it} are correlated, and efficient estimation requires a GLS regression. Letting τ = σ_η²/σ_v², and assuming this known, the exact GLS estimator is obtained by computing the regression in the transformed variables w*_{it} = w_{it} − θ_i w̄_i, where θ_i = 1 − (1 + T_i τ)^(−1/2)  (3.2)
30. model case see preceding paragraph Be careful to note that these are treated as coefficients of f not Y The implicit nonlinear specification z fijo J gt 9 456 5 5 is also permitted again subject to the restriction that each element of f is a function of one and only one element of Y The warning in footnote applies here too In addition please note that models of this type cannot be simulated Simulations will be computed but will correspond to an incorrect model Instead create a separate model for simulation This can optionally use the reserved names W 1 W 2 etc to represent the disturbances on each equation 5 3 Error Correction Models Two types of vector error correction model VECM are implemented open loop and closed loop In open loop models the equilibrium relations are specified as in 4 22 or 4 30 with x7 a Px1 vector that may include any specified endogenous variables elements of Y as well as exogenous variables Closed loop models are implemented by generalizing the representation in 4 23 In other words let Z E IL L BY You TVi HM x 5 6 noting that the matrix I L of cointegrating coefficients can optionally include lag polynomial factors In the standard open loop or closed loop VECM model the N x S matrix Y is a matrix of constant loadings coefficients v weighting the ith lagged equilibrium relation in equation j of the system Optionally it can be re
31. non diagonal these variables must be included as regressors of Type 1 in the equation that is as elements of x When one or more endogenous variables are present in this set the program automatically computes the FIML estimator see Estimation Criteria below In this case the corresponding elements of II are either to be interpreted as non diagonal elements of B or in the case of the normalized left hand side variable of the equation are automatically fixed at 0 Some of these coefficients must be subject to identifying restrictions to ensure consistent estimation The user must use the Values Equation dialog to impose these restrictions on each equation typically by fixing some coefficients to 0 5 2 4 Nonlinear Systems In equation 5 1 the vector Y can be replaced by a vector f Y x where 23 James Davidson 2015 fa Y 8 fij J gt 0 X48 5 4 Exactly as for the single equation case these functional forms can be either coded with the built in formula parser or supplied as an Ox function It s important to note that these functions cannot be simultaneous in other words x may only contain predetermined exogenous or lagged endogenous variables However linear simultaneity is permitted in the sense that B is allowed to be non diagonal in 5 1 If elements of Y are included in x t the corresponding coefficients of II are treated as non diagonal elements of B or fixed at 0 just as in the linear
32. option f allows a nonlinear function of regressors x4 to appear in the dynamic part of the model The dependent variable s can appear in linear autoregressive form Note that without an autoregressive or fractional component it would be equally possible to set up this nonlinear specification through fi but otherwise the dynamics will be different similarly to the distinction between Type 1 and Type 2 regressors 4 6 5 Coded Error Correction Mechanism Pre programmed formulae for the option fz have already been defined in 4 24 4 25 and 4 26 It will always be computationally more efficient to use these pre programmed cases so do not attempt to code them unless variations are desired Also note that the pre programmed cases can be combined with either fi f2 or fa 4 6 6 Coded Moving Average Model A fairly general pre programmed form for f4 is already defined in 4 20 depending on up to five parameters Coding this function allows other variants to be implemented including system variants Note that the coded function must correspond to g A in 4 17 which allows the lag to be modified as well as the functional form Note recursive formulae require T calls to the parsing routine at each function evaluation instead of one where T denotes sample size Accordingly these models are relatively more expensive computationally than static formulations 4 6 7 Coded Equilibrium Relations Equation 4 30 provides a third option alongs
33. stacked form Let uj uji ujr for j 1 N and u vec u1 Un Also let W In Z where Z is the matrix of instruments specified by the user The basic GMM minimand is then C u A W W A WY W A u 10 17 where A S Ir S is the estimator of the error variance matrix which on the first run of this estimator is replaced by the NxN identity matrix However if the estimation is iterated giving the Run command again immediately without any intervening user actions the optimization is repeated with S containing the estimate of the covariance matrix from the current residuals This iteration can be repeated as often as desired Further if the covariance matrix option is set to Robust the default or HAC see below then the efficient GMM minimand takes the form C 4u A WM W A 10 18 where TM denotes respectively either the White Eicker or the Newey West estimator of the covariance matrix of the vectors z u for k 1 NT where z denotes the row of the matrix A W The kernel and bandwidth choice for the HAC estimator is the same as currently selected for the computation of tests and standard errors 10 2 3 Gaussian ML with Conditional Heteroscedasticity To estimate a Gaussian system featuring a conditional variance model the maximand takes the form T t Flog 2n EK det H log det C u A C A u 10 19 t 1 where H diag h and C is the NxN contemporaneous correlation matrix havi
35. that the residuals are i.i.d. They are recommended only for comparability with other packages. 11.2 Robust Formulae. In the robust formulae, recommended for general use, Â is set to the 'outer product of the gradient' (OPG). In the MLE and OLS cases this has the form Â = Σ_{t=1}^{T} (∂c_t/∂θ)(∂c_t/∂θ)'  (11.1), where c_t is the t-th term of the log-likelihood or sum of squares. In the linear least squares case this corresponds to the White (1980) formula. In GMM, ∂c_t/∂θ is replaced by z_t û_t. (N.B. these terms sum to the gradient ∂C/∂θ in each case.) 11.3 HAC Variance Estimators. The heteroscedasticity and autocorrelation consistent formulae replace the OPG covariance estimate with the Newey-West (1987) type of estimate, using a kernel function to taper the contribution of serial covariances. This formula is recommended when there is evidence of residual serial dependence in the equation. There is a choice of Parzen, Bartlett, Quadratic Spectral and Tukey-Hanning kernels, with Parzen the default. Note: the same options are available for the Whittle estimator, although in this case the terms in the OPG and HAC formulae represent coordinates in the frequency domain, not the time domain. Application in this context is at the user's discretion. 11.3.1 Bandwidth Selection. The bandwidth can be set manually, or by an automatic 'plug-in' procedure. In the latter case the bandwidth is set as m, the largest integer below
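For concreteness, here is a hedged sketch of a HAC (Newey-West type) long-run covariance computed from the gradient or moment contributions, using the Bartlett kernel (TSM's default kernel is Parzen and its plug-in bandwidth rule is not reproduced here; this is an illustration only):

```python
import numpy as np

def hac_long_run_variance(g, bandwidth):
    """Long-run covariance of the T x p matrix of contributions g with the Bartlett
    kernel: Omega = Gamma_0 + sum_{j=1..m} w_j (Gamma_j + Gamma_j'), w_j = 1 - j/(m+1)."""
    g = np.asarray(g, dtype=float)
    g = g - g.mean(axis=0)                 # mean deviations (a common choice)
    T = g.shape[0]
    omega = g.T @ g / T
    for j in range(1, bandwidth + 1):
        w = 1.0 - j / (bandwidth + 1.0)
        gamma_j = g[j:].T @ g[:-j] / T     # j-th sample autocovariance matrix
        omega += w * (gamma_j + gamma_j.T)
    return omega
```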
36. the null hypothesis are not imposed on the data Hence the usual asymptotic p values are always given in the output for these and likewise for the diagnostic Q tests and Jarque Bera test The bias correction feature can be implemented in the same manner as for the bootstrap 13 10 The Fast Double Bootstrap The fast double bootstrap is a technique proposed by Davidson and Mackinnon 2002 2007 that may under certain circumstances reduce the error in rejection probability of a bootstrap test See also Davidson 2006 for an explanation of the principle In each bootstrap replication the model is fitted to the generated data and these estimated parameters are then used to generate a further bootstrap sample The model is then fitted to these second generation data to provide the bootstrap statistics for tabulation This technique can minimise the error in rejection probability ERP due to errors of estimation particularly when the statistic in question is not asymptotically pivotal However it cannot guard against errors of specification where the model being investigated is different from the DGP of the observed data It is most useful as a check on 65 James Davidson 2015 the performance of the bootstrap If the two p values from the single and double bootstraps respectively differ substantially caution is advisable in interpreting the results 13 11 Warp speed Monte Carlo for Bootstrap Estimators Monte Carlo experiments o
37. time series o is the unobserved state process and by assumption e NID 0 7 r x1 The matrices T Z G and H and vectors d and c are usually constants but can be made time varying depending on t as shown by having columns of the data matrix regressors supply the elements at time t Models that can be cast in this form include ARMA models unobserved component models featuring local level local trend seasonal and cyclical components and cubic splines The built in features of SsfPack allow these model components to be assembled automatically and model parameters to be estimated by maximum likelihood although in this mode the user is limited to univariate models n 1 It is also possible to formulate the models by direct specification of the matrices although in this case there is no option to estimate unknown parameters Making use of the Kalman filter the equations in 14 9 can be used to estimate the states and disturbances from the sample to construct forecasts and to generate simulations The SsfPack procedures are documented in Koopman Shepherd and Doornik 1998 This article is distributed with the software in PDF format Another useful reference is Commandeur Koopman and Ooms 2011 14 5 Calculator and Matrix Calculator TSM features facilities for direct data manipulation and the constructions of statistics by hand allowing the implementation of various procedures that are difficult to implement as a m
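As a simplified illustration of what the Kalman filter does with a model in this form (ignoring the constant vectors d and c and the correlated-disturbance structure that the G and H matrices allow; this is not SsfPack code), a minimal Gaussian filter with prediction-error-decomposition likelihood might look like this:

```python
import numpy as np

def kalman_filter(y, T_mat, Z, Q, R, a0, P0):
    """Minimal filter for y_t = Z a_t + eps_t, a_{t+1} = T a_t + eta_t, with
    Var(eps_t) = R, Var(eta_t) = Q. y is an (n x m) array of observations.
    Returns filtered states and the Gaussian log-likelihood."""
    n = len(y)
    a, P = a0.copy(), P0.copy()
    filtered = np.zeros((n, len(a0)))
    loglik = 0.0
    for t in range(n):
        v = y[t] - Z @ a                              # prediction error
        F = Z @ P @ Z.T + R                           # its variance
        loglik += -0.5 * (len(v) * np.log(2 * np.pi)
                          + np.log(np.linalg.det(F))
                          + v @ np.linalg.solve(F, v))
        K = P @ Z.T @ np.linalg.inv(F)                # Kalman gain
        a = a + K @ v                                 # measurement update
        P = P - K @ Z @ P
        filtered[t] = a
        a = T_mat @ a                                 # time update
        P = T_mat @ P @ T_mat.T + Q
    return filtered, loglik
```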
38. which 63 James Davidson 2015 generates the stationary process without any requirement to set presample lags see Davidson and Hashimzade 2009 for the details Discrete data models are simulated by making conditional drawings from the distributions specified by the likelihood model Probit Tobit or a count data distribution as the case may be Note that these drawings may be serially dependent through their dependence on the conditioning variables but the models are not otherwise dynamic Markov switching models are generated by making random drawings from the relevant switch distribution This source of randomness is additional to the generation of the equation disturbances in each regime which are drawn in parallel from the specified distributions switching variances are of course possible 13 6 2 The Static Bootstrap If the Static option for bootstrap tests is selected the simulated data are generated by adding the resampled residuals to the fitted values from the estimated model In the case of static equations in which all explanatory variables are exogenous 1 e held fixed in the replications this option has no effect on the simulation procedure although it should generally run considerably faster than the default simulation procedure In dynamic equations the generated data are conditioned on the actual lagged dependent variables not the generated lagged dependent variables In this case the resampled residua
39. 002 for the partial sums The same statistic is also computed for an 1 1 d sequence which is to say the shocks used to drive the model simulation these can be either computer generated or randomly resampled residuals These bootstrap distributions are compared using the Kolmogorov Smimov test for equality of empirical distributions The reported p values are obtained from the formula in Feller 1948 56 James Davidson 2015 Observe the following points l The test is motivated as an attempt to answer the question Will asymptotic distribution results based on the assumption of I 0 provide more accurate approximate inferences than alternatives in my sample Test outcomes depend on sample size as well as the form of the data generation process DGP It provides an alternative to tests such as the KPSS whose performance tends to be excessively dependent on the choice of bandwidth Don t overlook that the test is performed on the fitted model of the series not the series itself Test performance depends on how well the DGP is able to reproduce the autocorrelation characteristics of the data to which it is fitted Careful fitting and experimentation with different models for example ARMA and ARFIMA variants is recommended The choice of shock distribution Student s t versus Gaussian for example may also affect the outcome Two variants of the I 0 test are implemented The one most easily performed is to model the de
40. 15 Kunsch H R 1987 Statistical aspects of self similar processes Proceedings of 1 World Congress of the Bernoulli Soc Eds Yu Prohorov and V V Sazanov VNU Science Press Utrecht 1 67 74 Kunsch H R 1989 The jack knife and the bootstrap for general stationary observations Annals of Statistics 17 1217 1241 Kwiatkowski D P C B Phillips P Schmidt and Y Shin 1992 Testing the null hypothesis of stationarity against the alternative of a unit root Journal of Econometrics 54 159 178 Lambert P and S Laurent 2001 Modelling financial time series using GARCH type models with a skewed Student distribution for the innovations Working Paper Li H and Maddala G S 1996 Bootstrapping time series models Econometric Reviews 15 2 1 115 Ljung G M and G E P Box 1978 On a measure of lack of fit in time series models Biometrika 65 297 303 Lo Andrew W 1991 Long term memory in stock market prices Econometrica 59 5 1279 1313 L tkepohl H 2007 New Introduction to Multiple Time Series Analysis Springer MacKinnon J G 1991 Critical values for cointegration tests Ch 13 in Long run Economic Relationships Readings in Cointegration eds R F Engle and C W J Granger Oxford Oxford University Press McLeod A I and Li W K 1983 Diagnostic checking ARMA time series models using squared residual autocorrelations Journal of Time Series Analysis 4 pp 269 273 Mou
41. Note that s_T = 0 by construction. When the model is correctly specified, the normalized partial sum process s_t should behave like the mean deviations of a martingale and hence, in the limit, like a Brownian bridge. Rejections are expected when the elements exhibit features such as shifts of mean, or other time-dependent patterns. While set up as tests for random parameter variation, the NH tests should hopefully have power against a wide range of misspecifications. 12.6 Andrews Structural Change LM Test. This is the LM version of the test developed in Andrews (1993) to detect structural change with an unknown change point. In Andrews' paper the test is defined for changes in any subset of the model parameters. In this implementation, the options are to test the full parameter set, and to test each parameter individually. This test is implemented for models estimated by least squares, maximum likelihood and GMM. The test formula is sup_π LM_T(π), where LM_T(π) is a quadratic form in the partial sums of the scores up to break fraction π, weighted by [π(1 − π)]^(−1) and an estimate of the score covariance matrix  (12.9). Consider first the least squares / maximum likelihood implementation of the test. In (12.9), the m_t are the score (gradient) contributions corresponding to the parameters under test, evaluated at the estimates under the null hypothesis, and hence having the property Σ_{t=1}^{T} m_t = 0. In the full model stability test, m_t is the complete score vector. In the individual parameter tests,
42. 2004 A Smooth Permanent Surge Process Stockholm School of Economics SSE EFI Working Paper Series in Economics and Finance No 572 Gonzalo J and O Martinez 2006 Threshold integrated moving average process Does size matter Maybe so Journal of Econometrics 135 311 347 Granger C W J and T Ter svirta 1993 Modelling Nonlinear Economic Relationships Oxford University Press Hall P 1992 The Bootstrap and Edgeworth Expansion New York Springer Verlag Hamilton J D 1989 A new approach to the economic analysis of nonstationary time series and the business cycle Econometrica 57 357 384 Hamilton J D and R Susmel 1994 Autoregressive conditional heteroscedasticity and changes in regime Journal of Econometrics 64 307 333 Hansen B E 1990 Lagrange multiplier tests for parameter instability in nonlinear models Mimeo at http www ssc wisc edu bhansen papers LMTests pdf Hansen B E 1992 Testing for parameter instability in linear models Journal of Policy Modelling 14 517 533 Hansen B E 1996 Inference when a nuisance parameter is not identified under the null hypothesis Econometrica 64 413 430 Harris D B McCabe and S Leybourne 2008 Testing for long memory Econometric Theory 24 1 143 175 Hauser M A 1999 Maximum likelihood estimators for ARMA and ARFIMA models a Monte Carlo study Journal of Statistical Planning and Inference 80 1 2 229 255 Hausman J A 1978
43. 2009 Tests of Bias in Log Periodogram Regression Economics Letters 102 83 86 Davidson J and N Hashimzade 2009 Type I and type II fractional Brownian motions a reconsideration Computational Statistics and Data Analysis 53 6 2089 2106 Davidson J and A Halunga 2013 Consistent tests of functional form in dynamic models Chapter 2 of Essays in Nonlinear Time Series Econometrics eds N Haldrup P Saikkonen and M Meitz Oxford University Press Davidson J and D Rambaccussing 2015 A test of the long memory hypothesis based on self similarity Journal of Time Series Econometrics 7 2 2015 115 142 Davidson R and E Flachaire 2008 The wild bootstrap tamed at last Journal of Econometrics 146 162 169 Davidson R and J G MacKinnon 1999b Bootstrap testing in nonlinear models International Economic Review 40 487 508 Davidson R and J MacKinnon 2002 Fast double bootstrap tests of nonnested linear regression models Econometric Reviews 21 417 427 Davidson R and J MacKinnon 2007 Improving the reliability of bootstrap tests with the fast double bootstrap Computational Statistics and Data Analysis 51 3259 3281 Davies R B 1977 Hypothesis testing when a nuisance parameter is present only under the alternative Biometrika 64 247 254 73 James Davidson 2015 Dempster A P N M Laird and D B Rubin 1977 Maximum likelihood from incomplete data via the EM algorithm Journal of the Royal Stati
44. 4 33 In the negative binomial models the variance of Y has the representation Var V xi X2 x3 0 1 ab 8 4 37 where k 1 in the NegBin I case and k 0 in the NegBin II case The Poisson is the case a See Cameron and Trivedi 1986 for details of these cases 4 7 4 Autoregressive discrete models A further dynamic variant is to replace z in 4 33 by A L z where ML 1 AL gt A EP 4 38 As an example consider the Poisson specification 4 35 and Z Yoo Fach TaZ 4 39 where all the parameters are positive It is known that the stochastic process Y is stationary and ergodic if 1 1 lt 1 see Fokianos et al 2009 The lags z z are specified formally in the software as Type 2 regressors so that this structure is distinct from the ARFIMA setup in 4 1 which is of course not available in the discrete case Therefore note that the intercept is only available in the Type 2 form in this specification 4 7 5 Zero inflated Poisson and ordered Probit In some data sets the number of zero cases may appear excessive relative to the assumed distribution of the cases To deal with this phenomenon it may be hypothesized that the observations are drawn from two regimes one of which yields a zero the other a Poisson or ordered Probit drawing The probability of the drawing coming from the first regime can be modelled by a separate regression function having the general form F w where W Yos
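As an illustration of the dynamic count specification above, the following Python sketch simulates a linear Poisson autoregression with intensity z_t = gamma0 + gamma1*Y_{t-1} + alpha1*z_{t-1}; the parameter names are chosen for the sketch only, and this is not the TSM simulation code. When gamma1 + alpha1 < 1 the series fluctuates around the implied stationary mean gamma0/(1 - gamma1 - alpha1).

import numpy as np

def simulate_poisson_ar(T, gamma0=0.5, gamma1=0.3, alpha1=0.4, seed=0):
    """Simulate a linear Poisson autoregression of the Fokianos et al. (2009) type:
        Y_t | past ~ Poisson(z_t),   z_t = gamma0 + gamma1 * Y_{t-1} + alpha1 * z_{t-1}.
    The process is stationary and ergodic when gamma1 + alpha1 < 1."""
    rng = np.random.default_rng(seed)
    y = np.zeros(T, dtype=int)
    z = np.empty(T)
    z[0] = gamma0 / (1.0 - gamma1 - alpha1)      # start at the stationary mean intensity
    y[0] = rng.poisson(z[0])
    for t in range(1, T):
        z[t] = gamma0 + gamma1 * y[t - 1] + alpha1 * z[t - 1]
        y[t] = rng.poisson(z[t])
    return y, z

y, z = simulate_poisson_ar(10_000)
# Implied unconditional mean gamma0 / (1 - gamma1 - alpha1) = 0.5 / 0.3 ~= 1.667
print(y.mean(), z.mean())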
45. 5 2 Fractional Cointegration In the usual case the vector v is constant with elements v However it can optionally be replaced by a vector of lag polynomials v L with elements oO Li for i 1 8 where d is the fractional differencing coefficient in 4 21 and O lt d3 lt d This is the fractional cointegration model The usual full cointegration case is d3 di 4 5 3 Nonlinear Error Correction and Nonlinear AR A further variant of 4 21 is to replace the ECM term oz by vf Z where f represents a vector of the same dimensions as its argument whose elements are transformations of the corresponding elements of the argument The usual linear case is f x x However other programmed options available include Exponential Smooth Transition f x x 1 exp G9 gt 0 4 24 Asymmetric fo x x Ch 4 25 Cubic Polynomial h a Ox 4 26 where is in each case an additional parameter In these cases the pair of parameters v O determine the error correcting behaviour although be careful to note that in 4 24 each parameter is unidentified when the other is 0 whereas in 4 25 and 4 26 is unidentified when v 0 This makes significance tests problematic although note that the parameters can be fixed at chosen values as well as estimated When option 4 23 is selected in conjunction with 4 24 4 25 or 4 26 the model implements a nonlinear autoregressive specification For example 4 24
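The exponential smooth transition option in (4.24) can be visualized with a short sketch. The Python function below is an illustration of the form f(x) = x(1 - exp(-theta*x^2)); it is not taken from the program code, and the parameter name theta is used only for the sketch.

import numpy as np

def estr(x, theta):
    """Exponential smooth-transition ECM transformation (cf. eq. 4.24):
    f(x) = x * (1 - exp(-theta * x**2)), theta > 0.
    Small disequilibria are damped (f behaves like theta*x**3 near zero) while
    large ones pass through almost linearly, so error correction is weak near
    equilibrium and strong far from it."""
    return x * (1.0 - np.exp(-theta * x**2))

z = np.linspace(-3, 3, 7)
print(estr(z, theta=1.0))
# The ECM term in the mean equation is then nu * f(z_{t-1}); as the text warns,
# nu and theta are each unidentified when the other is zero.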
47. 7 and gives details of the forecasts tests and other post estimation options optimization criteria simulation and bootstrap options and supplementary capabilities It does not explain how to use the program The Users Manual is included in PDF format tsm4ghp pdf as well as being available interactively via the program s Help system The Appendices to this document t sm4app pdf explain how to install and customize the program The Programming Manual t sm4prg pdf explains how to call the program functions from within a user s Ox program 1 1 Copyright Notice Time Series Modelling 4 47 is copyright James Davidson 2002 2015 http www timeseriesmodelling com Please cite Time Series Modelling 4 46 in any publications where results obtained with the program are reported Ox 7 01 or later versions J A Doornik 1994 2013 is required to run the package Ox Console is free to academic users only from http www doornik com It should also be cited in any publications please visit the web site for details Ox Professional is required for 64 bit installation The GUI version incorporates the following freely distributed copyright components OxJapi Version 2 2008 2013 Timothy Miller OxJapi 2002 Christine Choirat Rafaello Seri and Licensed under Gnu Lesser General Public Licence Version 2 1 February 1999 http www tinbergen nl cbos index html GnuDraw 6 3 Charles Bos http www tinbergen nl cbos index html GnuP
48. 7j T as before See Robinson 1995b and Kunsch 1987 Note that this is the concentrated form of the Whittle log likelihood function LY sa TA L g d log gi 14 6 M 2 j Sc so that the method treats g in 14 2 as a constant but by choosing M such that M T 5 0 the method yields consistent estimates under suitable assumptions about the properties of g A around 0 The output reports in each case the point estimate standard error and the test of significance of d Also reported in the case of the log periodogram regressions is a Hausman 1978 type test for the presence of bias in each estimator This compares the broadband estimate that would be consistent and asymptotically normal if f were constant with an estimator that would exhibit less bias if the null hypothesis was false In the GPH case this is a narrow band estimator with M 0 64 7 2 where 0 64 is a value chosen to maximize the non centrality and in the MS case it is the MS estimator with the selected P Both of these statistics are asymptotically normal under Ho The resampling inference options can be selected for any of these estimation options In this case the resampled series is generated by implementing the sieve AR bootstrap method for the fractionally differenced data using the estimated d to perform the differencing and subsequent re integration This allows a bootstrap implementation of the significance test t test on d and also the
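The following Python sketch illustrates the local Whittle procedure just described: it minimizes the concentrated objective over d using the first M periodogram ordinates and reports the usual 1/(2*sqrt(M)) asymptotic standard error. It is a generic illustration (using scipy.optimize), not the TSM routine, and it takes the bandwidth M as given.

import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, M):
    """Local Whittle (Gaussian semiparametric) estimate of d, minimizing the
    concentrated objective
        R(d) = log( mean_j lam_j^{2d} I_j ) - 2 d mean_j log lam_j
    over the first M Fourier frequencies (cf. Robinson 1995b, Kunsch 1987).
    Sketch only; M must satisfy M/T -> 0 for the asymptotics to apply."""
    x = np.asarray(x, float) - np.mean(x)
    T = len(x)
    lam = 2.0 * np.pi * np.arange(1, M + 1) / T
    I = np.abs(np.fft.fft(x)[1:M + 1])**2 / (2.0 * np.pi * T)   # periodogram ordinates
    mean_log_lam = np.mean(np.log(lam))

    def R(d):
        g = np.mean(lam**(2.0 * d) * I)
        return np.log(g) - 2.0 * d * mean_log_lam

    res = minimize_scalar(R, bounds=(-0.49, 0.99), method="bounded")
    se = 1.0 / (2.0 * np.sqrt(M))        # asymptotic standard error of the estimator
    return res.x, se

# White noise has d = 0:
rng = np.random.default_rng(1)
print(local_whittle_d(rng.standard_normal(2000), M=int(2000**0.65)))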
49. 8 Panel Data In panel data models the actual observations are simulated using the fitted model regardless of the transformation used in estimation individual mean deviations differencing etc Residuals are calculated directly from the raw data and put into individual mean deviation form for resampling A two stage resampling method is adopted For each individual in turn a random equal probability drawing with replacement is made from the set of all individuals The within variation is then created by resampling the residuals relating to this second individual Any of the implemented resampling schemes for time series can be 64 James Davidson 2015 specified for this second stage In the case of a fixed effects model the individual s own mean is then added back In the case of a random effects model on the other hand the mean to be added back is that of the second randomly drawn individual Note that when individual dummies are included in the model the individual means should be zeros The same scheme nonetheless continues to apply in principle Alternatively Gaussian disturbances can be generated These are assigned the specified within variance and in fixed effects models are augmented by the individual s own residual mean if different from 0 In random effects models they are augmented by an independent Gaussian drawing having the specified between variance 13 9 Subsampling Inference This is an al
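A minimal Python sketch of the two-stage panel resampling scheme described above follows. The function name and the i.i.d. within-resampling used at the second stage are illustrative assumptions; any of the time-series resampling schemes could be substituted there.

import numpy as np

def panel_resample(resid, fixed_effects=True, seed=0):
    """Two-stage panel resampling sketch (not the TSM code itself).

    resid : (N, T) array of residuals, one row per individual.
    Stage 1: for each individual, draw an individual j at random with replacement.
    Stage 2: resample j's within (mean-deviation) residuals over time.
    Fixed effects: add back the individual's own residual mean;
    random effects: add back the mean of the randomly drawn individual j."""
    rng = np.random.default_rng(seed)
    N, T = resid.shape
    means = resid.mean(axis=1)
    within = resid - means[:, None]
    out = np.empty_like(resid)
    for i in range(N):
        j = rng.integers(N)                                   # stage 1: draw an individual
        draw = rng.choice(within[j], size=T, replace=True)    # stage 2: within variation
        out[i] = draw + (means[i] if fixed_effects else means[j])
    return out

rng = np.random.default_rng(2)
u = rng.standard_normal((50, 20)) + rng.standard_normal((50, 1))
print(panel_resample(u).shape)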
50. Carlo is the only forecasting option implemented There are two options for reporting point forecasts as either the means of the Monte Carlo distributions for each step ahead or as the medians of these distributions In the former case 2 standard error bands are constructed using the variances of the Monte Carlo distributions In the latter case the 2 5 and 97 5 quantiles are reported to provide an approximate 95 confidence band Note that the validity of the median forecasts and confidence bands does not depend on the forecast mean and variance being well defined 40 James Davidson 2015 10 Estimation Criteria 10 1 Single Equation Methods 10 1 1 Least Squares For equations 4 1 4 2 only Cr gt u t 1 t 10 1 2 Instrumental Variables For equations 4 1 4 2 the initial estimation minimand is c Seu Zee ECH 10 1 where z is a vector of instruments selected from the data set If the covariance matrix formula is set to Robust the default or HAC see below then GMM can be computed as a two or multi stage estimator The first run minimises 10 1 Further runs then minimise C au Je Seu 10 2 where M T is the estimated covariance matrix of Zu either White or HAC according to the current selection with the residuals evaluated at the last stage estimates The kernel and bandwidth choice for the HAC estimator is the same as currently selected for the computation of tests and standard errors Note mult
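The two-step structure of (10.1)-(10.2) can be illustrated for a linear model with instruments. The Python sketch below uses the White (heteroscedasticity-robust) weight matrix at the second step; it is not the TSM implementation, and scaling constants are simplified.

import numpy as np

def gmm_linear_iv(y, X, Z):
    """Two-step GMM for a linear model y = X b + u with instruments Z (sketch).
    Step 1 is 2SLS; step 2 re-weights the moments Z'u by the inverse of their
    estimated (White) covariance, as in the minimand (10.2)."""
    # Step 1: 2SLS using fitted values from the first-stage regression
    Xhat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
    b1 = np.linalg.solve(Xhat.T @ X, Xhat.T @ y)
    u1 = y - X @ b1
    # Step 2: efficient GMM with weight matrix M = sum_t z_t z_t' u_t^2 / T
    M = (Z * u1[:, None]**2).T @ Z / len(y)
    W = np.linalg.inv(M)
    A = X.T @ Z @ W @ Z.T @ X
    return np.linalg.solve(A, X.T @ Z @ W @ Z.T @ y)

rng = np.random.default_rng(3)
T = 2000
z = rng.standard_normal((T, 2))
v = rng.standard_normal(T)
x = z @ np.array([1.0, 0.5]) + v                  # endogenous regressor (correlated with v)
y = 0.3 * x + v + rng.standard_normal(T)
print(gmm_linear_iv(y, x[:, None], z))            # should be close to 0.3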
Time Series Modelling
Version 4.47: Models and Methods

James Davidson
University of Exeter
22 September 2015

Contents
1 …
  1.1 Copyright Notice
  1.2 …
  1.3 Acknowledgements
2 …
  2.1 Regressor Types
  2.2 Instrumental Variables
  2.3 Restrictions and …
  2.4 Cointegrating Regressions
  2.5 …
3 Panel Data
  3.1 Data …
  3.2 Dummy Variables
  3.3 Estimation Methods
    3.3.1 Ordinary Least Squares for Fixed Effects
    3.3.2 Generalized Least Squares for Random Effects
    3.3.3 Maximum Likelihood for Random Effects
    3.3.4 Instrumental Variables / GMM
  3.4 Tests and Diagnostics
  3.5 System Estimation
4 Single Equation Dynamic Models
  4.1 Linear Models of the Conditional Mean
  4.2 Definitions and Details
    4.2.1 Lag Polynomials
    4.2.2 Fractional Difference Operator …
On the Kolmogorov–Smirnov limit theorems for empirical distributions. Annals of Mathematical Statistics 19, 177–189.
Fernandez, C. and M. Steel (1998) On Bayesian modelling of fat tails and skewness. Journal of the American Statistical Association 93, 359–371.
Fokianos, K., A. Rahbek and D. Tjøstheim (2009) Poisson autoregression. Journal of the American Statistical Association 104, 1430–1439.
Geweke, J. and S. Porter-Hudak (1983) The estimation and application of long memory time series models. Journal of Time Series Analysis 4, 221–237.
Giacomini, R., D. N. Politis and H. White (2013) A warp-speed method for conducting Monte Carlo experiments involving bootstrap estimators. Econometric Theory 29, 567–589.
Giraitis, L., P. Kokoszka, R. Leipus and G. Teyssière (2003) Rescaled variance and related tests for long memory in volatility and levels. Journal of Econometrics 112, 265–294.
Glosten, L., R. Jagannathan and D. Runkle (1993) On the relation between expected value and the volatility of the nominal excess return on stocks. Journal of Finance 48, 1779–1801.
Goffe, William L., Gary D. Ferrier and John Rogers (1994) Global optimization of statistical functions with simulated annealing. Journal of Econometrics 60 (1–2), 65–99.
Goncalves, S. and L. Kilian (2003) Bootstrapping autoregressions with conditional heteroscedasticity of unknown form. University of Montreal Working Paper.
Gonzalez, Andrés
Specification tests in econometrics. Econometrica 48, 1251–1271.
Horowitz, J. L. (2000) The Bootstrap. Chapter for Handbook of Econometrics Vol. 5 (eds J. J. Heckman and E. Leamer). North-Holland: Elsevier.
Hurvich, C. M., R. Deo and J. Brodsky (1998) The mean squared error of Geweke and Porter-Hudak's estimator of a long memory time series. Journal of Time Series Analysis 19, 19–46.
Johansen, S. (1988) Statistical analysis of cointegration vectors. Journal of Economic Dynamics and Control 12, 231–54.
Johansen, S. (1991) Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models. Econometrica 59, 1551–80.
Kiefer, N., T. Vogelsang and H. Bunzel (2000) Simple robust testing of regression hypotheses. Econometrica 68 (3), 695–714.
Kiefer, N. and T. Vogelsang (2002a) Heteroskedasticity-autocorrelation robust standard errors using the Bartlett kernel without truncation. Econometrica 70 (5), 2093–2095.
Kiefer, N. and T. Vogelsang (2002b) Heteroskedasticity-autocorrelation robust testing using bandwidth equal to sample size. Econometric Theory 18, 1350–1366.
Kim, C. J. and C. R. Nelson (1999) State-space Models with Regime Switching: Classical and Gibbs-sampling Approaches with Applications. MIT Press.
Koopman, S. J., N. Shephard and J. A. Doornik (1998) Statistical algorithms for models in state space using SsfPack 2.2. Econometrics Journal 1 (1), 55.
54. TX 4 40 and F denotes the Gaussian CDF in the ordered Probit case and the logistic in the Poisson and negative binomial cases 22 James Davidson 2015 5 Systems of Equations 5 1 The Basic Model Now let Y denote a N x 1 vector of jointly determined variables The generalization of equation 4 21 takes the form O L A BY o yt IL 5 1 Yo ILx Y A Z OL x u Note that multiple equations are not available for bilinear or discrete data models 5 2 Definitions and Details 5 2 1 System Notation In equation 5 1 O L 1 OL OD 5 2 and O L I 0L 0 1 5 3 are NxN matrices of lag polynomials A isa N x N diagonal matrix with elements 1 L on the diagonal for j 1 N Be careful to note that in the VAR and VMA models lags on all the endogenous variables appear in each equation by default To fit a restricted model with e g only own lags included restrictions can be imposed individually in the Values dialogs 5 2 2 System Exogenous Variables Constant coefficient matrices are yo1 Y21 and y Nx1 and I IL I are matrices with N rows conformable with x1 x2 Aa respectively Note that by default the specification of every equation is the same Individual zero restrictions can be imposed on an individual basis in the Values dialogs 5 2 3 Simultaneous Equations To specify that an equation contains current endogenous variables such that the matrix B in equation 5 1 is
55. The kernel and bandwidth settings are selectable in the Options Tests and Diagnostics dialog 12 11 1 Tests of I 0 Robinson Lobato Test This is the signed one dimensional version of the nonparametric test based on the periodogram proposed in Robinson and Lobato 1998 The statistic is t where is defined in 2 2 that paper which is standard normal under the null hypothesis This provides a test against the alternative d gt 0 with rejection in the upper tail and also a test against the alternative d lt 0 with rejection in the lower tail The p value quoted is for the former test The bandwidth m is chosen using the formula advocated in Section 3 of that paper KPSS Test This is the test is due to Kwiatkowski et al 1992 The quoted p value inequalities use the table given in the cited paper V S Test Modified version of the KPSS test due to Giraitis et al 2003 The p values are computed analytically using the formula given in the paper Lo s modified R S Test Lo 1991 This the version of Hurst s test for short memory using a kernel HAC estimator of the variance The quoted p value inequalities use the table given in the cited paper This is the same as the distribution of the Kolmogorov Smirnov test for the equality of empirical distributions note 5 Note that these authors assign the tails of their statistic incorrectly in their discussion 55 James Davidson 2015 Harris McCabe Leybourne Test Harris
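For reference, a Python sketch of the KPSS-type calculation (partial sums of demeaned data scaled by a Bartlett-kernel long-run variance) is given below. The automatic bandwidth shown is a common textbook default, not necessarily the setting used by TSM, and the code is illustrative only.

import numpy as np

def kpss(x, lags=None, detrend=False):
    """KPSS statistic (Kwiatkowski et al. 1992): cumulated demeaned (or
    detrended) data, normalized by a Bartlett-kernel long-run variance.
    Large values reject the null of (trend-)stationarity."""
    x = np.asarray(x, float)
    T = len(x)
    if detrend:
        X = np.column_stack([np.ones(T), np.arange(T)])
        e = x - X @ np.linalg.lstsq(X, x, rcond=None)[0]
    else:
        e = x - x.mean()
    if lags is None:
        lags = int(np.floor(12 * (T / 100.0)**0.25))          # assumed bandwidth rule
    s2 = e @ e / T
    for k in range(1, lags + 1):
        w = 1.0 - k / (lags + 1.0)                            # Bartlett weights
        s2 += 2.0 * w * (e[k:] @ e[:-k]) / T
    S = np.cumsum(e)
    return (S @ S) / (T**2 * s2)

rng = np.random.default_rng(4)
print(kpss(rng.standard_normal(1000)))                        # stationary series: small value
print(kpss(np.cumsum(rng.standard_normal(1000))))             # random walk: large value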
ademic Press.
Robinson, P. M. (1994) Semiparametric analysis of long-memory time series. Annals of Statistics 22 (1), 515–539.
Robinson, P. M. (1995a) Log-periodogram regression of time series with long range dependence. Annals of Statistics 23, 1048–1072.
Robinson, P. M. (1995b) Gaussian semiparametric estimation of long range dependence. Annals of Statistics 23, 1630–1661.
Robinson, P. M. and I. N. Lobato (1998) A nonparametric test for I(0). Review of Economic Studies 65 (3), 475–495.
Said, E. S. and D. A. Dickey (1984) Testing for unit roots in autoregressive-moving average models of unknown order. Biometrika 71, 599–607.
Saikkonen, P. (1991) Asymptotically efficient estimation of cointegration regressions. Econometric Theory 7, 1–21.
Stock, J. H. and Watson, M. W. (1993) A simple estimator of cointegrating vectors in higher order integrated systems. Econometrica 61, 783–820.
Subba Rao, T. (1981) On the theory of bilinear models. Journal of the Royal Statistical Society B 43, 244–245.
Tanaka, K. (1999) The nonstationary fractional unit root. Econometric Theory 15, 549–582.
Teräsvirta, T. (1998) Modeling economic relationships with smooth transition regressions, in A. Ullah and D. E. Giles (eds) Handbook of Applied Economic Statistics. New York: Dekker, pp. 507–552.
Tong, H. (1990) Non-Linear Time Series: A Dynamical System Approach. Oxford: Clarendon Press.
Vuong, Q. H. (1989) Likelihood ratio tests for model selection
57. alent to a random change of sign always yields a symmetric distribution while preserving the kurtosis Setting _1 618 13 10 2 sets the skewness equal to that of the original distribution but doubles the kurtosis The program prints the Kolmogorov Smirnov statistic comparing the sample with the bootstrap distribution as a guide to choosing the best value of a 13 5 8 Fourier Bootstrap The Fourier bootstrap applies the fast Fourier transform FFT to the residuals The transformed series is approximately serially independent but heteroscedastic in large sample with variance at frequency j given by the spectral density at j A resampling step makes a drawing from this series using the Rademacher wild bootstrap and then applies the inverse FFT to obtain the bootstrap sample This method provides for more general forms of stationary autocorrelation than the sieve AR bootstrap but is not suitable for heteroscedastic or non Gaussian processes Note the CLT implies that the bootstrap series are necessarily Gaussian in large samples whatever the parent distribution 13 5 9 Sieve AR Bootstrap Any of the bootstrap methods can be combined with a sieve autoregression to allow for dependent data see Buhlmann 1997 The residuals are whitened by fitting an autoregressive process The maximum lag order is chosen by default as 0 6T and can be set manually as an option The actual lag order p is chosen to optimize the Schwarz in
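A Python sketch of the sieve-AR resampling scheme follows: fit an AR(p) with p chosen by the Schwarz criterion (order capped at 0.6*sqrt(T) as in the default rule quoted above), resample the whitened innovations, then re-colour them through the fitted filter. It is illustrative only and simplifies the start-up of the re-colouring recursion.

import numpy as np

def fit_ar(u, p):
    """OLS fit of an AR(p) to a demeaned series; returns coefficients and residuals."""
    if p == 0:
        return np.array([]), u.copy()
    X = np.column_stack([u[p - j - 1: len(u) - j - 1] for j in range(p)])
    phi = np.linalg.lstsq(X, u[p:], rcond=None)[0]
    return phi, u[p:] - X @ phi

def sieve_ar_bootstrap(u, max_order=None, seed=0):
    """Sieve-AR bootstrap sketch (cf. Buhlmann 1997)."""
    rng = np.random.default_rng(seed)
    u = np.asarray(u, float) - np.mean(u)
    T = len(u)
    max_order = int(0.6 * np.sqrt(T)) if max_order is None else max_order
    fits = [fit_ar(u, p) for p in range(max_order + 1)]
    bic = [len(e) * np.log(e @ e / len(e)) + p * np.log(len(e))
           for p, (_, e) in enumerate(fits)]                  # Schwarz criterion
    p = int(np.argmin(bic))
    phi, e = fits[p]
    eps = rng.choice(e - e.mean(), size=T, replace=True)      # resample whitened innovations
    ub = np.zeros(T)
    for t in range(T):                                        # re-colour through the AR filter
        ub[t] = sum(phi[j] * ub[t - j - 1] for j in range(min(p, t))) + eps[t]
    return ub

rng = np.random.default_rng(5)
x = np.zeros(1000)
for t in range(1, 1000):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
print(sieve_ar_bootstrap(x)[:5])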
58. are path independent Note that these routines cannot be used in combination 14 3 1 ARMA Order Selection ARMA p q models are estimated for all pairs p and q such that p q and p q do not exceed specified bounds The initial case is p g 0 and parameters are added sequentially using either current estimates or zeros as starting values as appropriate Models must be single equations but may include any other features with the exception of regime switching 14 3 2 Regressor Selection If a model is specified to contain N regressors there are 2 alternative models containing subsets of this set All these models can be estimated in sequence and the case yielding the optimal value of a chosen selection criterion Akaike Schwarz or Hannan Quinn is 69 James Davidson 2015 reported This feature is available for all models except regime switching although note that if N is large and or the models nonlinear the time taken to complete the run could also be large 14 4 SsfPack State Space Modelling A state space modelling capability is optionally provided by making use of the SsfPack Basic suite of routines which is available as a free download for academic and teaching purposes If SsfPack is installed when the TSM installation is run access to the package is set up automatically The models that can be constructed take the general form Cn d Ta T He mx 1 14 9 y Z a Ge nx1 where y is an observed
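The cost of the exhaustive regressor search of 14.3.2 is easy to gauge with a sketch: the Python function below fits OLS to all 2^N subsets of N candidate regressors and reports the subset minimizing a chosen information criterion. It is illustrative only (plain OLS), not the TSM search over general model specifications.

import numpy as np
from itertools import combinations

def best_subset_ols(y, X, names, criterion="schwarz"):
    """Exhaustive subset search over candidate regressors by information criterion."""
    T, N = X.shape
    pen = {"akaike": 2.0, "schwarz": np.log(T),
           "hannan-quinn": 2.0 * np.log(np.log(T))}[criterion]
    best = (np.inf, ())
    for k in range(N + 1):
        for subset in combinations(range(N), k):
            Z = np.column_stack([np.ones(T)] + [X[:, j] for j in subset])
            e = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
            ic = T * np.log(e @ e / T) + pen * Z.shape[1]
            if ic < best[0]:
                best = (ic, subset)
    return best[0], [names[j] for j in best[1]]

rng = np.random.default_rng(6)
T = 300
X = rng.standard_normal((T, 5))
y = 1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.standard_normal(T)
print(best_subset_ols(y, X, names=list("abcde")))             # should select {'a', 'd'}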
59. average MA lag polynomials of order p and q respectively For example if p q 1 the equation would have the explicit form Y Y Y tu 9 4 _ 4 4 Note that the specification Y Yo Q X Yor u Ou 4 5 is equivalent implying the identity y o 1 y Either form can be estimated By setting d 1 a unit root can be imposed defining the nonstationary ARIMA p 1 9 model This is equivalent to differencing the series before fitting the ARMA model and so takes the form DAY Dn 4 6 In the ARFIMA model the simple difference is replaced by a fractional difference This is equivalent to expressing the left hand side of the equation as INL Ur DU 4 7 13 James Davidson 2015 This model reduces to the ARMA model if d O and the ARIMA model if d 1 It is stationary and invertible if d lt 0 5 See Section 4 2 3 for details of how to include mean and trend components to these models 4 2 Definitions and Details 4 2 1 Lag Polynomials o L O L are finite order lag polynomials such that for example AD l AL 4 L 4 8 where L denotes the lag operator such that Lx x Thus DY Y oY e Ne hh E By default the values 4 and 0 are reported However the sign convention can be optionally changed for the MA coefficients reporting 0 instead 4 2 2 Fractional Difference Operator This is ET EK 4 9 where bo 1 and SHUG 4 ia rei 4 10 I KEE e SS The
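The binomial expansion of the fractional difference operator is conveniently computed by recursion. The sketch below uses the standard recursion b_0 = 1, b_k = b_{k-1}(k - 1 - d)/k, which is one common way of writing (4.10), and applies the truncated filter to a series; it is illustrative rather than the program's own routine.

import numpy as np

def frac_diff_weights(d, n):
    """Coefficients b_k of the expansion (1 - L)^d = sum_k b_k L^k."""
    b = np.empty(n)
    b[0] = 1.0
    for k in range(1, n):
        b[k] = b[k - 1] * (k - 1 - d) / k
    return b

def frac_diff(x, d):
    """Apply the truncated fractional difference (1 - L)^d to a series."""
    x = np.asarray(x, float)
    b = frac_diff_weights(d, len(x))
    return np.array([b[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])

print(frac_diff_weights(0.4, 6))
# d = 1 reproduces the simple difference: weights (1, -1, 0, 0, ...)
print(frac_diff_weights(1.0, 4))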
60. bance at date t These are asymptotically valid assuming independent innovations but ignore parameter uncertainty s is estimated by the equation residual variance in models without conditional heteroscedasticity and otherwise is the multi step variance forecast generated from equations 6 1 or 6 2 as appropriate Note that in case 6 2 the unbiased forecast of log h is generated and hence the implied forecast of h is biased towards zero see Nelson 1991 for details The reported bands should be treated as lower bounds on the 2 standard error bands in this case Forecasts are computed by default using a numerical solution to the model simulation algorithm using zero shocks in contrast to stochastic simulation used for Monte Carlo 38 James Davidson 2015 forecasts and hence are available for both linear and nonlinear structures The exception is Markov switching models see below 9 5 2 Moving Average Coefficients Impulse and Step Responses There is an option to compute the sequence of solved moving average coefficients impulse responses and also the cumulated sequence step responses In the case of equation 4 1 for example the sequence is the solved coefficients of the polynomial o L O L I L 9 12 In the case of equations 6 1 or 6 2 the sequence is the solved coefficients of I BO ADA a l L 1 9 13 Since the weights are computed by perturbing the numerical solution of the model simulation algor
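The solved MA coefficients can be obtained either by perturbing the simulation algorithm, as described above, or directly by recursion in the pure ARMA case. The Python sketch below does the latter, with the convention theta(L) = 1 + theta_1 L + ..., and cumulates the weights to give step responses; it is an illustration for the linear case only.

import numpy as np

def arma_impulse_responses(phi, theta, n):
    """Solved MA coefficients (impulse responses) psi_j of phi(L) y = theta(L) u,
    from psi_0 = 1 and psi_j = theta_j + sum_{i=1}^{min(j,p)} phi_i * psi_{j-i},
    with theta_j = 0 for j > q. Step responses are the cumulative sums."""
    p, q = len(phi), len(theta)
    psi = np.zeros(n)
    psi[0] = 1.0
    for j in range(1, n):
        psi[j] = theta[j - 1] if j <= q else 0.0
        for i in range(1, min(j, p) + 1):
            psi[j] += phi[i - 1] * psi[j - i]
    return psi, np.cumsum(psi)

# AR(1) with phi = 0.8: impulse responses 0.8^j; step responses approach 1/(1-0.8) = 5
imp, step = arma_impulse_responses([0.8], [], 10)
print(imp)
print(step[-1])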
61. ces of normalized residuals with p lags of x1 null hypothesis CV model In the case of ARCH type or Markov switching models normalized means that the residuals have been divided by their estimated conditional SDs Their squares after subtracting 1 are iid 0 1 by the Tn the case where no conditional variance model is specified under Hp the statistic is computed from the regression of the squared residuals on the test variables Otherwise the test variables are added to the Tn case the indicators in the White tests are linearly dependent the statistic is computed with the maximum available principle components with degrees of freedom adjusted accordingly The common factor tests are available in the case where under Ho there are no lags on x1 and no variables of Type 2 are specified In systems of equations the diagnostic tests include the test variables e g lagged residuals from every equation in every equation Hence for example the number of degrees of freedom for an autocorrelation test with one lag in a system of two equations is four 50 James Davidson 2015 The LM and CM tests of corresponding hypotheses can give different results since they are based on different assumptions Thus CM tests have the advantage that they impose minimal assumptions about the form of the model under the alternative hypothesis but they can require the existence of higher order moments for validity
62. d distributions 12 10 Sup F Tests A number of testing situations face the problem that one or more parameters are unidentified under the null hypothesis Say a statistic F depends on a parameter 7 which cannot be estimated under H A solution to this problem is to base the test on the statistic sup en EF see e g Davies 1977 The Andrews 1993 LM test is one such case already implemented However it is not difficult to compute other statistics of this type in TSM where 7 is either one or two dimensional using the Criterion Plot feature This will evaluate the model and any specified test statistics over a specified grid of fixed values of one or two model parameters The output reports the extreme values attained over the grid for all specified test statistics including absolute f ratios diagnostic statistics user specified Wald tests etc There is no limit apart from computing time to the number of grid points that can be evaluated For applications see the sections above on regime switching and date transition models 12 11 Tests of Integration Order A range of tests on individual time series of the parameter d in the representation I d can be computed through the Setup Compute Summary Statistics dialog To compute these tests after allowing for a linear trend in the series check the Detrend option in the dialog In the cases of the nonparametric tests that employ a kernel estimate of the long run variance
63. e of pre set diagnostic tests Table 1 shows the cases available in the latter tests Note that the LM test is implemented by computing the scores of the criterion functions numerically after extending the specification of the model as indicated Therefore it does not depend on extra simplifying assumptions such as normal disturbances Table 1 Dummy Alternative Hypotheses in Diagnostic Tests A Residual Autocorrelation LM Test Lagged residuals in equation 4 1 or equation 5 1 CM Test Covariances of current and lagged normalized residuals B Neglected ARCH Lagged squared residuals in equation 6 1 or equation 7 1 Covariances of current and lagged squared normalized residuals C Nonlinear functional form RESET Integer powers of the fitted values in equation 4 1 or equation 5 1 Covariances of normalized residuals with integer powers of fitted values mean deviations D Heteroscedasticity Square of fitted values in equation 6 1 or equation 7 1 Covariances of squared normalized residuals with squared fitted values E White s Heteroscedasticity Test Squares and products of explanatory variables in equation 6 1 or equation 7 1 Covariances of squared normalized residuals with squares and products of explanatory variables F AR Common Factors p lags of variables x in equation 4 1 or equation 5 1 included in xz Covarian
64. e residual and forecast error variances It is asymptotically N 0 1 under the stability hypothesis assuming 4 moments exist where asymptotic is interpreted as min F 7 gt That is these tests are appropriate to small and large forecast periods respectively In single equation linear regression models Forecast Test II is replaced by Chow s stability test with the formula F version T F DE T F 7 Chow T F WF u gt TH T 3 o u TEF s3 k Da a poe where and represent respectively residuals from the model fitted to the forecast period and the whole period and k is the number of regression parameters The Chi squared version of the test omits the k in the denominator The theory of this test is well known If it is desired to compute formula 9 9 instead compute the regression through the dynamic equation dialog 9 10 9 5 Ex ante Multi step Forecasts Multi step ex ante forecasts can be computed beyond the end of the available data period provided the model does not contain exogenous variables other than the trend dummy or GARCH M term Two methods are available 9 5 1 Analytic Forecasts These are computed by solving the dynamic model forward with zero shocks to generate the expected path Confidence bands are computed based on the standard error formula k Sri VA j 0 Pray 9 11 where q L 1 L L L and s is the predicted conditional variance of the distur
e time series models. Journal of the American Statistical Association 65, 1509–26.
Breitung, J. and U. Hassler (2002) Inference on the cointegration rank of fractionally integrated processes. Journal of Econometrics 110, 167–185.
Breusch, T. S. and A. R. Pagan (1980) The Lagrange multiplier test and its applications to model specification in econometrics. Review of Economic Studies 47, 239–253.
Brown, R. L., J. Durbin and J. M. Evans (1975) Techniques for testing the constancy of regression relationships over time. Journal of the Royal Statistical Society Series B 37, 149–163.
Bühlmann, P. (1997) Sieve bootstrap for time series. Bernoulli 3, 123–148.
Bunzel, H., N. Kiefer and T. Vogelsang (2001) Simple robust testing of hypotheses in nonlinear models. Journal of the American Statistical Association 96, 1088–1096.
Cameron, A. C. and P. K. Trivedi (1986) Econometric models based on count data: comparisons and applications of some estimators and tests. Journal of Applied Econometrics 1, 29–53.
Commandeur, J. J. F., S. J. Koopman and M. Ooms (2011) Statistical software for state space methods. Journal of Statistical Software 41, 1–18.
Davidson, J. (1998a) A Wald test of restrictions on the cointegrating space based on Johansen's estimator. Economics Letters 59, 183–7.
Davidson, J. (1998b) Structural relations, cointegration and identification: some simple results and their application. Journal of Econometrics
e t-th term from the relevant case of L_T as defined above, evaluated at the parameters for regime j. This can be viewed as an application of the EM algorithm (Dempster, Laird and Rubin 1977), in which the latent indicators of the different regimes S_t = j are replaced by their conditional expected values, i.e. the conditional probabilities of regime j prevailing at date t.

11 Standard Errors and Covariance Matrix Formulae

Four alternative estimators are available for the standard errors and covariance matrices used to compute the standard test statistics: V1 (Standard), V2 (Robust), V3 (HAC) and V4 (KVB). Let \hat Q denote an estimate of the expected Hessian matrix of the criterion function, and \hat\Lambda an estimate of the covariance matrix of the gradient \partial C_T/\partial\theta. For the log-likelihood and least squares criteria, \hat Q is the actual numerical Hessian of C_T at the estimated point. For GMM, \hat Q is computed in effect by replacing u_t by \partial u_t/\partial\theta and its transpose in the formula for C_T. The general formulae then take the form

    V_i = \hat Q^{-1} \hat\Lambda_i \hat Q^{-1},   i = 1, 2, 3.    (11.1)

11.1 Information Matrix Formulae

For the log-likelihood cases \hat\Lambda = \hat Q, resulting in the information matrix estimate of the covariance matrix. For least squares and GMM, \hat\Lambda = s^2 \hat Q where s^2 is the residual variance. These standard formulae are not robust, depending on the assumption that the likelihood function is correctly specified (not a quasi-likelihood) and
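The three formulae V1-V3 can be sketched in a few lines of Python given the Hessian estimate and the matrix of gradient contributions. The code below is illustrative only: scaling factors of T are simplified and the Bartlett bandwidth rule is an assumption of the sketch, not the TSM default.

import numpy as np

def covariance_matrices(Q, G, s2=None, bandwidth=None):
    """Given a Hessian estimate Q and a (T x p) matrix G of gradient contributions,
    return sketches of
      V1 'standard'  (s2 or 1) * inv(Q)
      V2 'robust'    inv(Q) Lambda inv(Q),  Lambda = G'G / T
      V3 'HAC'       as V2 with a Bartlett-weighted long-run Lambda."""
    T = G.shape[0]
    Qinv = np.linalg.inv(Q)
    V1 = (s2 if s2 is not None else 1.0) * Qinv
    Lam = G.T @ G / T
    V2 = Qinv @ Lam @ Qinv
    if bandwidth is None:
        bandwidth = int(np.floor(4 * (T / 100.0)**(2.0 / 9.0)))   # assumed bandwidth rule
    Lam_hac = Lam.copy()
    for k in range(1, bandwidth + 1):
        w = 1.0 - k / (bandwidth + 1.0)                           # Bartlett weights
        Gk = G[k:].T @ G[:-k] / T
        Lam_hac += w * (Gk + Gk.T)
    V3 = Qinv @ Lam_hac @ Qinv
    return V1, V2, V3

rng = np.random.default_rng(7)
V1, V2, V3 = covariance_matrices(np.eye(3), rng.standard_normal((500, 3)))
print(np.round(V2, 2))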
67. e two cases Note three further points about the BEKK model a The case implemented is the case K 1 defined by Engle and Kroner 1995 This model conveniently has the same number of AR MA parameters as the models in 7 1 and 7 2 The cases K gt 1 are not implemented in this release b The parameterization of the intercept matrix as QCG where C is symmetric with unit diagonal and off diagonals constrained inside 1 1 exploits the common structure of the different models and is also a convenient way to impose positive definiteness Note KCK B E B represents an alternative parameterization c Becareful to note that the parameterization represented by 7 7 and 7 8 is not unique For example if all nonzero elements of B or A are negative this gives the same likelihood as when all are positive so the likelihoods are liable to be multimodal This implies global under identification although the model will be locally identified in general This fact has no practical importance provided the results are interpreted correctly 30 James Davidson 2015 8 Regime Switching The switching options allow the model driving the series to switch stochastically between alternative cases of the conditional mean and variance equations For example a process might behave as a random walk in one regime and as a stable process in another The models can be written in generic form as u OSD h 0 S 2 e 8 1 where S 1
68. ed as exogenously given for the purposes of the forecast In Hamilton s model case 3 the formula is modified as appropriate to allow for the MT distinct states Confidence bands can be computed using the standard error formula M M A A P A D St K De oem ben Stix GET rh EA E 9 16 K 1 2 F where Srex j1 x denotes the K step forecast standard error analogous to 9 11 conditional on regime configuration D Jr This formula takes account of the uncertainty about which regime an observation represents but ignores parameter uncertainty There is no short cut to evaluation of all the M terms hence it can be 39 James Davidson 2015 computationally intensive for large M and or K although if it converges rapidly it can be extrapolated See Davidson 2004b for additional details This method is currently enabled only for ARMA ARIMA ARFIMA models with ARCH GARCH conditional heteroscedasticity For cases of bilinear and ECM structures with Markov switching use the Monte Carlo forecasting option 9 5 5 Monte Carlo Forecasts Monte Carlo forecasts are available for all models linear and nonlinear Here the dynamic model is stochastically simulated F steps forward using one of three options for generating the shocks Gaussian likelihood matching and bootstrap This method is available for all the models For some models such as discrete and count data smooth transition models and user supplied models Monte
69. ed in 4 35 and T represents the gamma function In the negative binomial models the log likelihood is s EE v A Ly oel a jev wd hn wd es 10 12 where v 1 a and v q in the first and second cases respectively Note that in either case 10 12 reduces to 10 11 when a 0 10 1 11 Zero inflated Discrete Let the expressions in 10 10 10 11 and 10 12 be denoted generically as L Di 10 13 The zero inflated models introduce a distribution function F w representing the probability of the zero regime and define the log likelihood as L gt og F w ly 1 F w expC H 10 14 where 1 denotes the indicator function of its argument 10 2 System Methods 10 2 1 Least Generalized Variance For a system of equations without simultaneity conditional variance model or regime switching the basic minimand is the generalized variance determinant of the covariance matrix The criterion maximized is therefore 43 James Davidson 2015 logdet T 2 mu 10 15 Least Generalized Variance LGV is equivalent to conditional Gaussian ML for these models For a simultaneous equations model in which dependent variables are also specified as explanatory variables such that B I in 5 1 the maximand is L T log det B logdet T uu 10 16 10 2 2 Generalized Method of Moments To estimate a system of equations by GMM the minimand can be written by expressing the model in
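Referring back to the zero-inflated likelihood (10.14), the Python sketch below evaluates the zero-inflated Poisson case with a logistic zero-regime probability. The parameterization through the indices z_t (log intensity) and w_t (zero-regime index) is an assumption of the sketch, which is not the TSM code.

import numpy as np
from math import lgamma

def zip_loglik(y, z, w):
    """Zero-inflated Poisson log-likelihood sketch: with probability F(w_t)
    (logistic here) the observation comes from the zero regime, otherwise from
    a Poisson with intensity exp(z_t), so each contribution is
      log( F(w_t) * 1{y_t = 0} + (1 - F(w_t)) * exp(l_t^Poisson) )."""
    F = 1.0 / (1.0 + np.exp(-w))                                  # logistic CDF
    lam = np.exp(z)
    l_pois = -lam + y * z - np.array([lgamma(k + 1.0) for k in y])
    mix = F * (y == 0) + (1.0 - F) * np.exp(l_pois)
    return np.sum(np.log(mix))

rng = np.random.default_rng(8)
n = 1000
zero = rng.random(n) < 0.3                                        # 30% structural zeros
y = np.where(zero, 0, rng.poisson(2.0, n))
print(zip_loglik(y, z=np.full(n, np.log(2.0)), w=np.full(n, np.log(0.3 / 0.7))))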
70. elihood estimates In the ordered probit and logit models the ordinary residuals are computed as A J 1 wn a E Di 08 9 2 where ER eo KH 22 whereas the generalized residuals are again the series that is orthogonal by construction to the derivatives of Z or h 22 These are defined as A do d A A esl p gt Ly 1 A 9 3 t O e E E I a o Ja j Tri I Ja where1 _ is the indicator function equal to 1 when the argument is true and 0 otherwise F F Ch 22 forj 1 J 1 and the A are the corresponding density functions evaluated at the same points Note that 9 3 reduces to 9 1 in the binary data case In count data models the residuals are constructed as Y t In all these cases the adjusted generalized residuals are used to compute the various diagnostic tests available using the LM and conditional moment principles In switching models the residuals are computed for each regime and the reported series are the weighted average of these where the weights are the filter probabilities In switching models the series of conditional probabilities are also retrievable including the smoothed probabilities computed using Kim s algorithm see Kim and Nelson 1999 and the variable switch probabilities in the case of explained switching models specifically if p t denote the time varying transition probabilities the series available for plotting are of the form p t xo D 35
71. ends on the regime under which the variables are generated The model has the representation Y So dt SD dat Hun Dall Ur 8 9 Where u o S h e and h 1 0 0 S A pupo S p 8 10 32 James Davidson 2015 Here u 1 1 M and 0 1 0 M as well as transition probabilities Dn are parameters of the process The mechanism governing the updating of probabilities generalizes 8 2 8 4 to incorporate MIT distinct states in which each of the p lag terms fall in each of M regimes See Hamilton 1989 and Hamilton and Susmel 1994 for details Note that their row sums are constrained to lie in 0 1 but for estimation purposes they are mapped into the real line for unconstrained estimation In other words the fitted parameters are rj where e Pi xPO j 1 M 1 and p 1 J exp r Ee o 8 4 The Smooth Transition ST Model This is an alternative regime switching specification see Granger and Ter svirta 1994 Terasvirta 1998 comparable to explained Markov switching but without an explicitly probabilistic interpretation 8 4 1 Single Transition Model Let equation 4 1 be written in the implicit form Us HUE Xt 55 se 0 y 8 11 where x X11 X2 X3r X7 and y here denotes the complete set of model parameters If y and y2 represent alternative parameter values and uj u wy for j 1 2 the ST model takes the form Ut Gt uit 1 Cl Ut 8 12 where 1
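The logistic reparameterization of the transition probabilities described above can be sketched as follows: the function maps an unrestricted matrix of fitted parameters r_ij into rows of a valid transition matrix, so that estimation can proceed without inequality constraints. This is illustrative Python, not the TSM code.

import numpy as np

def probs_from_logits(r):
    """Map an unrestricted (M x M-1) matrix of parameters r_ij into a Markov
    transition matrix row by row via the multinomial-logit form
      p_ij = exp(r_ij) / (1 + sum_k exp(r_ik)),  j = 1..M-1,
      p_iM = 1 / (1 + sum_k exp(r_ik)),
    so that each row lies in the simplex whatever the values of r_ij."""
    r = np.atleast_2d(r)
    expo = np.exp(r)
    denom = 1.0 + expo.sum(axis=1, keepdims=True)
    return np.hstack([expo / denom, 1.0 / denom])

print(probs_from_logits(np.array([[2.0], [-1.0]])))   # two-regime example
print(probs_from_logits(np.zeros((3, 2))))            # three regimes, uniform rows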
72. enu item The scalar calculator provides a text box into which formulae for numerical values can be typed in conventional algebraic notation If formulae include either one or two algebraic symbols alphabetical string denoting variables the resulting functions are graphed in 2D or 3D plots respectively In this case the user is prompted for upper and lower bounds for each variable The number of points to be plotted is selectable The matrix calculator provides a dialog where moment matrices mean squares and mean products can be computed for any pair of sets of variables from the data matrix The resulting matrices square and symmetric when the same variable sets are paired can be named and stored printed out and saved in vectorized form in a spreadsheet file Matrices can also be created and edited directly in a tabular format The matrix calculator itself is a text box into which matrix expressions can be entered using conventional notation Transposes sums products including Hadamard and Kronecker 70 James Davidson 2015 products inverses and generalized inverses and vectorizations may be constructed and combined in the usual way The usual scalar functions including determinant trace and norm can be computed printed out and also saved as one dimensional matrices Eigenvalues and eigenvectors are available but for symmetric matrices only since complex numbers cannot be handled 71 James Davidson 2015 15 References
73. epending on the size of shock Large shocks tend to have permanent effects while the effect of small shocks is transient The basic scheme is of the form AY u 8 4 4 17 where g is a function depending on u that interpolates between 0 and 1 For example in the STOPBREAK model of Engle and Smith 1999 2 u GE y gt 0 4 18 so that g varies inversely with the magnitude of the shock In the STIMA model of Gonzalo and Martinez 2003 t Da lt r 4 19 9 u r 16 James Davidson 2015 where for direct comparability with the STOPBREAK model we would have 0 1 and 0 0 TSM implements a form suggested by Gonzalez 2004 called the smooth permanent surge or SPS model of which special cases can closely approximate 4 18 and 4 19 This can be written as 2 E tee 4 20 1 exp y u cu where y gt 0 and c gt c by convention This depends on five parameters that can all be freely estimated in principle although fixing certain of them at given values yields the special cases indicated Thus setting a 2 B 2 and c c 0 yields a function depending on y gt 0 which in common with the STOPBREAK function lies close to 1 when u is small and smoothly approaches 0 as u7 increases On the other hand setting c r t t and c r and y 100 or any sufficiently large value gives a close approximation to 4 19 with 0 a and 0 a f 4 5 Error Correction Models Equatio
74. epends on Xm does not depend on i In the second extended version of the model called regime dependent coefficients the equations take the form R tt tetas E LusM Liss 8 8 m 1 Note that equation 8 8 nests the model in 8 7 which can be obtained by imposing suitable restrictions However the simpler model is specified slightly differently and so is retained for compatibility In 8 7 the dummy variable format estimates the intercepts for regimes i 2 M in the form a Bun whereas in 8 8 the equivalent parameter is q However note that optionally any switching parameter for regimes i 2 M can be expressed as differences from regime 1 and hence the B can also be estimated in this setup In the current version of the program the vector xm may include the dependent variable s although in this case the condition J gt 1 is enforced Model 8 7 nests cases such as the threshold autoregression TAR model Put M 2 and R 1 suppress the dummies and for example set x1 y 1 c or alternatively x hu c for a threshold parameter c For large mu this two regime model is arbitrarily close to the case where pi Ky 1 gt c and pz Ky 1 lt c where I denotes the indicator function The parameters could be imposed or alternatively estimated as c 1 y11 8 3 Hamilton s Markov switching model Here the mean and or unconditional variance in a finite order AR ARCH or VAR VARCH process dep
75. er take care to note that the sign of has the reverse interpretation of that of u Note that the dynamic parameterization adopted precludes a separate parameter for the absolute shock m is an optional term representing E h u according to the likelihood model selected It can be replaced by zero in which case the mean of g is absorbed by the intercept parameter represented by either x or B 1 By default it is set as follows 27 James Davidson 2015 m 2 7 ifthe likelihood is Gaussian goe T v 1 2 Iv T v 2 v l Va TQ v TGITA V 6 2 6 HYEGARCH and FIEGARCH By setting 0 and d 0 in 6 2 hyperbolic decay models can be set up Take care to note that the interpretation of the parameters different from the case of 6 1 With d gt 0 the case of hyperbolic memory decay with summable hyperbolic coefficients is represented with sum 1 1 a d 1 B 1 Call this the HYEGARCH model as with HYGARCH the rate of decay varies inversely with d The FIEGARCH case is where d lt 0 and here it is necessary for a lt 0 also otherwise the hyperbolic lag coefficients have the wrong sign In this case the lag coefficients and hence the autocovariances of logh are non summable and the process exhibits true long memory in volatility if the likelihood is Student s t with degrees of freedom v if the likelihood is GED with parameter v 6 3 Conditional Heteroscedasticity in Discrete Data Models In
76. eriodogram point and 27j T Here J is fixed and M and L should diverge with sample size such that M T gt 0 and L M gt 0 No guidelines are available for choices of L and J and if in doubt set these to O and 1 respectively GPH suggested a bandwidth M O T Hurvich Deo and Brodsky 1998 showed that the optimal MSE bandwidth is of the form M CTT although the optimal C depends on the unknown short range components of the spectrum and may need to be small They also show asymptotic normality only for M o T Some evidence on the relative bias and variance in these cases is given in Davidson and Sibbertsen 2009 14 1 2 Moulines Soulier Log Periodogram Regression The Moulines and Soulier 1999 broad band log periodogram method MS sets M 7 2 but models log f by a Fourier expansion of finite order Regressors of the form cos jAx are added to the regression for j 1 P where P should diverge with T but P T 0 The optimal P depends on the form of log f and therefore it is difficult to offer guidelines although see Moulines and Soulier 1999 for discussion of different cases The obvious procedure with either estimator is to experiment and determine the sensitivity of the estimates to these choices 14 1 3 Local Whittle ML The third option is the local Whittle Gaussian maximum likelihood estimator which maximizes the function 67 James Davidson 2015 Late el A Zum 14 5 ma mm where 2
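A compact Python sketch of the narrow-band log-periodogram (GPH) regression is given below: it regresses the log periodogram on -2*log|1 - exp(i*lambda_j)| over the first M ordinates and returns the slope as the estimate of d, with the usual pi/sqrt(24M) asymptotic standard error. Pooling of adjacent ordinates (J > 1) and the trimming parameter L are handled only trivially here; this is an illustration, not the TSM routine.

import numpy as np

def gph_estimate(x, M, L=0):
    """Log-periodogram (GPH) regression of log I(lam_j) on the GPH regressor
    for frequencies j = L+1, ..., M; the slope estimates d."""
    x = np.asarray(x, float) - np.mean(x)
    T = len(x)
    j = np.arange(L + 1, M + 1)
    lam = 2.0 * np.pi * j / T
    I = np.abs(np.fft.fft(x)[j])**2 / (2.0 * np.pi * T)          # periodogram ordinates
    reg = -2.0 * np.log(np.abs(1.0 - np.exp(1j * lam)))          # GPH regressor
    X = np.column_stack([np.ones(len(j)), reg])
    beta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    se = np.pi / np.sqrt(24.0 * len(j))                          # asymptotic std. error
    return beta[1], se

rng = np.random.default_rng(9)
x = rng.standard_normal(4096)
print(gph_estimate(x, M=int(4096**0.5)))                         # white noise: d close to 0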
77. formation criterion in the specified interval The resampled residuals are then re coloured by the fitted AR p in each replication This procedure can be applied in conjunction with the simple bootstrap or wild bootstrap and dependence can also be dealt with by a combination of sieve AR pre whitening and the blocking or Fourier techniques 13 5 10 Data Resampling In this method the data rather than the residuals are resampled randomly with replacement This is only suitable for cases where the observations are serially independent hence only for cross section samples in general However note that all the variables in the model both dependent and explanatory are resampled jointly The range of 62 James Davidson 2015 the data set from which the observations are drawn can be different from the estimation sample itself 13 6 Data Generation In a simulation run equations 4 1 4 2 or 4 21 4 2 are inverted to obtain Y from u or Y from u in the system counterpart 5 1 When equations are specified in the Linear Regression dialog these are treated for the purposes of simulation as special cases of these dynamic formulations In conditional heteroscedasticity models equations 6 1 or 6 2 as appropriate are used to generate the conditional variances and get u from e or 7 1 or 7 2 to get u from e in the system case Exogenous variables denoted as x for k 1 6 in the equations are treated as fixed To
78. generation of artificial data for simulation exercises This approach requires some knowledge of programming basics and of the Ox language although significantly less than would be required to create an Ox program from scratch Moreover all the usual features of the package including estimation diagnostic testing simulation and forecasting are available just as for the pre programmed models Ox coding cannot be combined with interactive formulae nor in the current release there is any option to create functions of type f2 f3 or f4 as Ox code These features can of course be incorporated into a complete Ox coded model 4 7 Discrete Data Models 4 7 1 Probit and Logit Models In probit and logit models for binary data the probability of the binary dependent variable Y taking the value 1 is modelled as F z where F denotes respectively the standard normal and logistic CDF and Z Yo Vie SUE EE CT ANEN 4 33 is a continuously distributed latent process ECMs can be included in this equation using either of the formulations 4 22 and 4 30 for the equilibrium condition Endogenous dynamics are also possible by including Y for j gt 0 in the vector x Optionally the chi squared can be used for the distribution of the latent probit variable In this case the degrees of freedom of the distribution becomes an additional parameter to be estimated or optionally fixed 4 7 2 Ordered Probit and Logit These models allow
79. i stage estimation is disabled under criterion grid plotting and multiple ARMA estimation 10 1 3 Gaussian maximum likelihood e is standard Gaussian The criterion function maximised is e oe Je logh E 10 3 SC peer h i 10 1 4 Student maximum likelihood e is Student s t distributed with v gt 2 degrees of freedom The criterion is T log m v 2 CIE h v 1 oe E ZC 10 4 D v 1 2 T T 2 2 L T log To improve numerical stability the parameter actually estimated by TSM is vi 10 1 5 Skew student maximum likelihood e has the skewed Student s distribution with parameters v gt 2 and gt 0 See Fernandez and Steel 1998 Lambert and Laurent 2001 The criterion is Tr v 1 2 T 2s L T log T v 2 rlogn y 2 4Tlog Zi a ic ci 10 5 sh u m or 31 to ve Diog 1 O 41 James Davidson 2015 where DT 1 EEE T v 2 Vx and Fi t L shu m gt 0 otherwise The parameter E measures the skewness of the distribution which reduces to 10 4 when amp 1 10 1 6 GED maximum likelihood e has the Generalized Error Distribution GED with parameter v gt 0 see Nelson 1991 The criterion is 10 6 1 1 v T ee e a log h AN v 2 A ra v 2 2 rB v Note that GED corresponds to the Gaussian case when v 2 and is leptokurtic when v lt 2 u x kl VT where 10 1 7 Whittle maximum likelihood For equati
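The Student criterion of 10.1.4 is easy to evaluate directly. The Python sketch below computes the log-likelihood of residuals u_t with conditional variances h_t under a unit-variance Student t with v > 2 degrees of freedom, which is the standardization described above; it is illustrative only, not the TSM code.

import numpy as np
from math import lgamma, log, pi

def student_t_loglik(u, h, v):
    """Log-likelihood under a standardized (unit-variance) Student t, v > 2:
      l_t = log G((v+1)/2) - log G(v/2) - 0.5*log(pi*(v-2))
            - 0.5*log h_t - 0.5*(v+1)*log(1 + u_t^2 / ((v-2)*h_t))."""
    u, h = np.asarray(u, float), np.asarray(h, float)
    const = lgamma((v + 1.0) / 2.0) - lgamma(v / 2.0) - 0.5 * log(pi * (v - 2.0))
    return np.sum(const - 0.5 * np.log(h)
                  - 0.5 * (v + 1.0) * np.log(1.0 + u**2 / ((v - 2.0) * h)))

rng = np.random.default_rng(10)
u = rng.standard_t(8, size=1000) * np.sqrt((8 - 2) / 8)          # unit-variance t(8) draws
print(student_t_loglik(u, h=np.ones(1000), v=8))
print(student_t_loglik(u, h=np.ones(1000), v=30))                # typically lower: wrong tails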
80. ically distributed as chi squared with two degrees of freedom when the residuals are normal Gaussian and independently distributed In addition to these standard statistics some special statistics are reported for particular models 7 For discrete data models the Likelihood Ratio index is reported defined as LRI 1 L Ly 9 5 where L is the log likelihood function evaluated with all parameters except for the equation intercept and in the Markov switching case the switching probabilities replaced by 0 The LRI statistic tends to 0 when the explanatory variables have true zero coefficients and it cannot exceed 1 noting that both functions are negative being sums of log probabilities and that L gt Lyr 8 For instrumental variables and GMM estimation the Sargan overidentification test and the Durbin Wu Hausman exogeneity test are reported See e g Davidson 2000 Chapter 8 for details of these tests Statistics commonly reported by other regression packages but available only as options in TSM are the Durbin Watson statistic and the F statistic for joint significance of the slope coefficients sometimes called the F test of the Regression The version of the latter test implemented in TSM excludes trend and seasonal dummies and also lagged dependent variables from the tested set Therefore it tests only the joint significance of true exogenous variables in the regression 9 3 Q Tests Standard tests for serial
81. ide 4 22 and 4 23 to define equilibrium relations These are entered in the same way as implicit formulae for the residuals A natural application for this feature is to introduce structural shifts in coefficients which are easily coded using dummy variables Be careful to distinguish between nonlinear equilibrium conditions fs and nonlinear ECMs f4 These can coexist provided the latter is one of the pre programmed options 4 24 4 26 4 6 8 Ox Coding Although there is no practical limit to the size of interactive formulae they are still limited to a single line of code An alternative option for coding models is to write an Ox function with the requisite code and compile this with the program Such functions can be arbitrarily complex taking as many lines as necessary and employing loops conditional statements function calls etc Such code may also run significantly faster than interactive formulae of comparable complexity This option is available for creating model components of type fi which can of course represent complete models There are also options to return a complete likelihood functions and test statistics either using estimation outputs or as free standing functions of the data By creating an explicit model solution separately this approach can also be used 20 James Davidson 2015 to create simulations and forecasts for models expressed in implicit form for estimation Finally Ox code can be supplied for the
82. ilinear Models The bilinear option see Priestley 1988 Tong 1990 replaces equation 4 1 with O LW Yo MX Un y L w O L v 4 15 where w 1 L Y y yt nx V TX U 4 16 In 4 15 y L yi t y2 L wp L where p is also the order of the AR polynomial d L and ML 1 A L 4 4L similarly This is a restricted version of the BL p q m r class of models specified by Subba Rao 1981 The models are equivalent in the case p m and r 1 but for r gt 1 our case is restricted to have just p r additional parameters relative to the ARMA p q case instead of pr additional parameters where the coefficients of wa j 1 p k 1 r are unrestricted The bilinear model can also be implemented in combination with ECM terms and nonlinear features Simply modify equations 4 21 or 4 29 with the new features of 4 15 4 16 Note that p m can be implemented by fixing parameters at zero In this model note that including an intercept and or trend term in x has a different effect from using the built in dummies which play the same role in the dynamics as variables in Xi Intercept and trend dummies can be included in x2 as ordinary regressors but in this case don t forget to deselect the built in intercept and trend 4 4 Nonlinear Moving Average Models Several schemes have been proposed recently for modelling processes which switch stochastically between stationary and nonstationary behaviour d
83. ing 1 when H is true and hence the test based on y critical values is asymptotically correctly sized Under the alternative note that S must diverge at the rate of max S 5 hence ensuring the power of the optimized test In this implementation the bound in 12 15 is computed as Bound y T D 1 I sup test K gt 1 K 12 16 where is the indicator function sup test means that option 12 14 is selected and y D D and p are user selectable The set is constructed as a K dimensional hypercube whose sides have user selectable upper and lower bounds However amp is normalized to fix the range of variation of arctan x over the sample also user selectable Hence there is little reason to change these from the defaults other than to introduce asymmetry around zero 12 7 2 Score contribution Tests These tests generalize the Bierens testing procedure by using score contributions in place of residuals as the target series to be tested for correlation with 12 11 Either a p degree of freedom test can be constructed using the complete score vector or 1 degree of freedom tests based on individual score elements 12 7 3 Dynamic Specification Tests The Bierens and score contribution tests can be considered exclusively as tests of functional form in which case x in 12 11 can be thought of as variables appearing in the 53 James Davidson 2015 model only These tests can also test for omitted variab
84. ith zg 0 79 O and assuming T lt T2 say this model can be seen as defining an interval To t2 With y gt 0 the model is predominantly in regime 1 when 7 lt z lt T such that the multiplied factors have different signs and in regime 2 otherwise 8 4 3 Structural Change Date Transition Model An important application of ST models is to capture structural breaks and estimate break dates Set z t the time trend and then with 27 0 t measures the break date between consecutive regimes y can be fixed at a large value to create a sudden break or can permit smooth variation 8 5 Testing for Breaks and Regimes One problem with Markov and ST models is that the switch probabilities or switch parameters are unidentified when there are in fact no differences between the models in each regime This fact can give rise to numerical problems in optimizing the models It also means that it is difficult to test the hypothesis of no switching since parameters are unidentified under the null hypothesis The latter problem may be resolvable by computing sup F tests based on the largest value of the statistic over the range of eligible values of the unidentified parameter These statistics can be computed using TSM s Criterion Plot feature For example the sup Wald test of Andrews 1993 can be computed in this way Use the Date Transition model to parameterize the switches select the Estimate Regime Differences
85. ithm results are also available for nonlinear dynamic structures such as bilinear error correction and smooth transition models In the case of a nonlinear in variables specification the sequence of weights so produced represents a linear approximation to the actual expected response to a unit shock This option is not available for Markov switching models see below 9 5 3 Forecast Error Variance Decomposition For forecasts of multi equation models using the analytic approach error variance decompositions using the method of Liitkepohl 2007 Section 2 3 3 are optionally available This output shows for each forecast horizon the proportion of the forecast error variance attributable to the shocks to each equation in the system This option is not available for Markov switching models 9 5 4 Forecasting Regime Switching Models The following formulae are used for Markov switching models Let J jy denote the optimal forecast of y7 for the case where it is known that the process is in regime j in period T i i 1 K Then the K step forecast VX is computed as Ss M M as A S x Ke KEE e e Ju 9 14 where D d M D ER e Eer Ee ege 9 15 Note that 9 14 can be computed by a straightforward recursion and does not in practice involve M terms The sequence in 9 15 converges to the ergodic steady state probabilities if the p are constants while under explained switching case 2 the forcing variables are treat
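The recursive character of the forecast formulae can be illustrated with a deliberately simplified sketch: assuming constant transition probabilities and an AR(1) within each regime (both purely illustrative choices), regime probabilities are propagated forward and the K-step forecast is the probability-weighted combination of regime-specific forecasts. This is a schematic device only, not a reproduction of formulae (9.14)-(9.15).

```python
import numpy as np

def ms_forecast(y_T, P, mu, phi, K):
    """K-step forecasts for a toy 2-regime switching AR(1):
       y_t = mu[j] + phi[j]*y_{t-1} + e_t in regime j (illustrative form).
       P[i, j] is the probability of moving from regime i to regime j;
       regime probabilities are propagated by p_{k} = P' p_{k-1}."""
    m = len(mu)
    p = np.full(m, 1.0 / m)            # regime probabilities at T (flat here)
    yhat_j = np.full(m, y_T)           # regime-specific forecasts, started at y_T
    out = []
    for _ in range(K):
        p = P.T @ p                    # update regime probabilities
        yhat_j = mu + phi * yhat_j     # regime-specific one-step-ahead forecasts
        out.append(float(p @ yhat_j))  # probability-weighted forecast
    return np.array(out)

P = np.array([[0.9, 0.1], [0.2, 0.8]])
print(ms_forecast(y_T=1.0, P=P, mu=np.array([0.0, 1.0]),
                  phi=np.array([0.5, 0.9]), K=5).round(3))
```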
86. ity is to enter an implicit formula for the residual For example to estimate the Box Cox 1964 transformed regression P SS X ea Y d one would select the Residual option and enter the line 4 28 t WYE gamma 1 gamma alpha beta EXE gamma 1 gamma Note that in this case no should appear since the formula is an implicit representation of the residual The formula to represent 4 27 could likewise be entered under this option as WYE alpha beta EXE gamma but the Equation style is to be preferred whenever it exists since it identifies the normalized variable for the construction of simulations forecasts etc 4 6 2 Formula Types Coded formulae can enter the model in five different ways represented by the symbols fi f3 and f in LD L AO XE Yo TV TX Yo t TX 4 29 Eh E ORZ r Xa E DY Lt J gt 95459 and also by fs in Z f6 Sx 1 4 30 However note that a model can contain only one formula at a time These five options are mutually exclusive since they share the same locations to store specifications and parameter values Thus the symbols for the variables x4 and parameters E are generic here to be defined in context 4 6 3 Coded Equations As noted there are two options for representing fi the usual case being the Equation form fa 5Y 8 1 hie J gt 0 459 4 31 Thus note how the equation can have a recursive f
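A hedged numerical counterpart of the Box-Cox example: the function below evaluates the implicit residual (Y^gamma - 1)/gamma - alpha - beta*(X^gamma - 1)/gamma for candidate parameter values. The variable and parameter names are chosen for the illustration only.

```python
import numpy as np

def boxcox(z, gamma):
    """Box-Cox transform (z**gamma - 1)/gamma, with the log limit at gamma = 0."""
    return np.log(z) if abs(gamma) < 1e-12 else (z**gamma - 1.0) / gamma

def implicit_residual(y, x, alpha, beta, gamma):
    """Residual of the transformed regression: no normalized left-hand-side
       variable exists, so the model is entered in Residual rather than
       Equation form."""
    return boxcox(y, gamma) - alpha - beta * boxcox(x, gamma)

rng = np.random.default_rng(1)
x = rng.uniform(1, 5, 100)
y = np.exp(0.2 + 0.5 * np.log(x) + 0.1 * rng.standard_normal(100))
u = implicit_residual(y, x, alpha=0.2, beta=0.5, gamma=0.0)
print(u[:3].round(3))
```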
87. lected formula either simple robust or HAC Being based on the restricted model these tests typically have a diagnostic role 12 3 Moment and Conditional Moment Tests Moment tests are tests of hypotheses of the form E m 0 where m is a vector of sample moments of functions depending on the model parameters The leading cases are covariances of model disturbances or squared model disturbances with a set of test variables In general these variables are indicators of incorrect specification so that these tests also have a diagnostic role The test statistic takes the form M m Vom 12 3 where V is the asymptotic covariance matrix of the moments taking into account their dependence on unknown parameters 0 and all terms are evaluated under the null hypothesis that the estimated model is correct If Vm is computed using a formula that assumes the terms of m are uncorrelated the null hypothesis can be thought of as taking the form E wP 1 0 t 1 n where m n Zw w is a chosen function of data for observation t and parameters and denotes the history of the process to date In this form 49 James Davidson 2015 they are called conditional moment CM tests To test the less restrictive simple moment hypothesis that the unconditional mean of the terms is zero Ha should be computed using an HAC formulation in case the data are not serially independent LM and CM tests can be computed both for user selected hypotheses and a rang
88. les and especially dynamic specification in which case x can include variables not in the null model In dynamic models testing for omission of lags poses the problem of specifying a lag truncation point An option in these tests is to include an indefinite number of lags increasing with sample size while keeping the number of nuisance parameters fixed by means of a polynomial distributed lag scheme Thus define the test function as wO exp E Aaen A 12 17 where r mx1 is the vector of test variables and P E te EE es 12 18 The nuisance parameters are 6 6 of dimension K Pm Even a modest choice of P can model a large range of lag distributions See Davidson and Halunga 2012 for further details and some Monte Carlo evidence 12 8 Vuong s Test of Non Nested Models Vuong 1989 describes test procedures for comparing non nested models If i and L for t 1 7 are the log likelihood contributions for each fitted model and m 4 L are their differences the normalized likelihood ratio statistic V VTm s 12 19 where m and s are respectively the sample mean and standard deviation of m can be used as a guide to model selection When the data cannot discriminate between the rival specifications V is asymptotically distributed as standard normal under specified regularity conditions otherwise its sign indicates the ranking of the alternatives Such tests cannot be pre programmed since the models must
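The Vuong statistic itself is straightforward to compute once the two models' per-observation log-likelihoods are available. A minimal sketch, with invented inputs standing in for the fitted models:

```python
import numpy as np

def vuong_statistic(loglik1, loglik2):
    """Vuong (1989) normalized likelihood-ratio statistic:
       V = sqrt(T) * mean(m) / sd(m), with m_t = l1_t - l2_t.
       Approximately N(0,1) when the data cannot discriminate between the
       models; a large positive (negative) value favours model 1 (model 2)."""
    m = np.asarray(loglik1) - np.asarray(loglik2)
    T = m.shape[0]
    return np.sqrt(T) * m.mean() / m.std(ddof=1)

rng = np.random.default_rng(2)
l1 = rng.normal(-1.40, 0.5, 250)   # per-observation log-likelihoods, model 1
l2 = rng.normal(-1.45, 0.5, 250)   # per-observation log-likelihoods, model 2
print(round(vuong_statistic(l1, l2), 3))
```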
89. replication, an artificial sample is created by the simulation module. The object is to reproduce the distribution of the test statistic under the null hypothesis. For those tests where estimates under the alternative are used, such as t tests and Wald tests, the test statistic is centred on the pseudo-true parameter values. Thus the null distribution for the jth t statistic is computed as the distribution of

t*_j = (θ̂*_j − θ̂_j) / se(θ̂*_j)    (13.2)

where θ̂_j and θ̂*_j are respectively the point estimate from the observed data and the point estimate from the bootstrap replication. The null distribution of the Wald statistic is generated similarly, by replicating the formula

W* = (g(θ̂*) − g(θ̂))' [Ĝ* V̂* Ĝ*']⁻¹ (g(θ̂*) − g(θ̂))    (13.3)

where g(θ̂) is the vector of restrictions evaluated at the sample estimates, and the starred quantities are the values computed from the bootstrap sample. Null distributions for M and CM tests are generated similarly, by expressing the moments under test as deviations from the sample values. In general, bootstrap variants of LM tests are not available, because the null distributions of these statistics cannot be so easily simulated if the null is false in the sample data. However, the important diagnostic tests are tests for the absence of heteroscedasticity and autocorrelation in the disturbances. Since these are i.i.d. by construction in the bootstrap samples, the bootstrap p-values should be valid for specification testing, and are quoted. Use them with caution in other contexts.
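To make the centring in (13.2) concrete, here is a schematic bootstrap p-value calculation for a single regression coefficient. The residual-resampling step stands in for TSM's simulation module and ordinary least squares stands in for the estimator; both are assumptions made for the illustration.

```python
import numpy as np

def ols_slope_and_se(y, x):
    """Slope estimate and conventional standard error for y = a + b*x + u."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta
    s2 = u @ u / (len(y) - 2)
    V = s2 * np.linalg.inv(X.T @ X)
    return beta[1], np.sqrt(V[1, 1]), beta[0], u

def bootstrap_t_pvalue(y, x, b0=0.0, B=399, seed=0):
    """Bootstrap p-value for H0: b = b0, using the centred statistic
       t* = (b* - b_hat)/se* to approximate the null distribution."""
    rng = np.random.default_rng(seed)
    b_hat, se_hat, a_hat, u_hat = ols_slope_and_se(y, x)
    t_obs = (b_hat - b0) / se_hat
    t_star = np.empty(B)
    for r in range(B):
        u_star = rng.choice(u_hat, size=len(y), replace=True)  # resample residuals
        y_star = a_hat + b_hat * x + u_star                     # data from fitted model
        b_star, se_star, *_ = ols_slope_and_se(y_star, x)
        t_star[r] = (b_star - b_hat) / se_star                  # centred on b_hat
    return (np.abs(t_star) >= abs(t_obs)).mean()

rng = np.random.default_rng(3)
x = rng.standard_normal(120)
y = 0.5 + 0.2 * x + rng.standard_normal(120)
print(bootstrap_t_pvalue(y, x))
```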
90. Moulines, E. and P. Soulier (1999), "Broad band log-periodogram estimation of time series with long range dependence", Annals of Statistics 27, 1415-1439.
Newey, W. K. and K. D. West (1987), "A simple positive semi-definite heteroskedasticity and autocorrelation consistent covariance matrix", Econometrica 55, 703-8.
Newey, W. K. and K. D. West (1994), "Automatic lag selection in covariance matrix estimation", Review of Economic Studies 61, 631-653.
Nelson, D. B. (1991), "Conditional heteroscedasticity in asset returns: a new approach", Econometrica 59, 347-70.
Nyblom, J. (1989), "Testing for the constancy of parameters over time", Journal of the American Statistical Association 84, 223-230.
Osterwald-Lenum, M. (1992), "A note with quantiles of the asymptotic distribution of the maximum likelihood cointegration rank test statistics", Oxford Bulletin of Economics and Statistics 54, 461-72.
Phillips, P. C. B. and P. Perron (1988), "Testing for a unit root in time series regression", Biometrika 75, 335-346.
Politis, D. N. and J. P. Romano (1994), "The stationary bootstrap", Journal of the American Statistical Association 89, 1303-1313.
Politis, D. N., J. P. Romano and M. Wolf (1999), Subsampling, Springer-Verlag.
Psarakis, S. and J. Panaretos (1990), "The folded t distribution", Communications in Statistics - Theory and Methods 19(7), 2717-2734.
Priestley, M. B. (1988), Non-Linear and Non-Stationary Time Series Analysis, London: Academic Press.
91. lot 4 2 6 1986 1993 1998 2008 Thomas Williams Colin Kelley and others http www gnuplot info 1 2 Disclaimer This program is distributed with no warranties as to fitness for any purpose Use it at your own risk 1 3 Acknowledgements Special thanks to Tim Miller for developing the OxJapi 2 package to run the GUI under the latest Java implementation to Charles Bos for his estimable support with implementing and developing Gnudraw and with the Linux implementation to Andreea Halunga for her important programming contributions especially in connection with analytic derivatives to Paulo Dias Costa Parente for advice on teaching applications Andrea Monticini for his many contributions to the development of TSM to Ossama Mikhail for his initiative in setting up a discussion list and all those TSM users too numerous to mention who have contributed helpful suggestions and bug reports 6 James Davidson 2015 2 Linear Regression The Linear Regression dialog accessed with the RE button on the toolbar offers ordinary least squares OLS two stage least squares 2SLS and for systems of equations three stage least squares 3SLS and seemingly unrelated regressions SUR For example VARs can be easily specified in the latter mode with the specified set of lags generated automatically for each variable All these estimators are computed in one or two steps from closed formulae 2 1 Regressor Types The componen
92. ls are not corrected for conditional heteroscedasticity as is done in those models where these effects are modelled and hence re introduced as part of the simulation This means that a resampling method robust to heteroscedasticity must be adopted either the wild bootstrap or one of the block bootstrap variants Note that this simulation option is available only in the context of bootstrap tests Other applications such as Monte Carlo exercises must use the full dynamic simulation method It is likewise not available for regime switching models 13 7 Nonlinear Models The bootstrap is computationally burdensome when applied to nonlinear models estimated by numerical iteration which must be repeated in every bootstrap replication However note that the starting value for the iterations can always be chosen as a consistent point the pseudo true model used to generate the data Davidson and MacKinnon 1999 show that a relatively small number of iterations can suffice to yield a valid bootstrap distribution that improves on the error in rejection probability ERP of asymptotic criteria The optimization is performed by Newton Raphson steps using the inverse Hessian evaluated at the pseudo true point as a fixed metric so that iterations are extremely rapid It is difficult to give guidelines for choice of the convergence criterion and maximum iterations that optimally trade off speed and best ERP Experimentation is recommended 13
93. me as the DGM or different The latter option allows misspecification analysis 13 2 Parallel Processing Monte Carlo experiments can be either run interactively or launched as external batch jobs running from the Windows command line There is a facility to launch a set of identical parallel runs and later combine their outputs into a single set of results In this way dual core or quad core processors can be exploited to run experiments two or four times faster as with a single processor while at the same time keeping TSM free for other tasks 13 3 Numerical Test Distributions The empirical distribution functions EDFs of simulated test statistics can be written to a spreadsheet file If a statistic is simulated when the DGM represents a case of the null hypothesis or equivalently with the centred t statistic option selected the tabulations can be used to compute p values in subsequent runs The rejection frequencies in 58 James Davidson 2015 simulations of the alternative hypothesis can then estimate the true size corrected test powers The stored EDFs can also be used to calculate p values for models estimated from observed data in the ordinary way In one context this method might operate as an alternative implementation of the bootstrap However tabulations of non standard distributions with a general application might also be generated to extend or improve published tables or even to implement completely new tests
94. mizing a log likelihood function or other criterion function although linear models can also be estimated The options include ARMA ARIMA and ARFIMA models and error correction ECM models Conditional variance models see Section 6 include ARCH GARCH and numerous variants For systems of equations the vector generalizations of all these models are available see Sections 5 and 7 FIML for linear simultaneous systems is implemented automatically if current endogenous variables are included as explanatory variables All the specifications allow stochastic regime switching in mean and variance including Markov switching and smooth transition see Section 8 4 1 Linear Models of the Conditional Mean Let Y for t 1 T denote the time series to be modelled Consider the ARFIMA class of dynamic regression models having the general form DADE Y Yoi Vit Xi Yoo T X OCL 4 1 where V X U 4 2 and at most one of yo and mus can be different from zero The x for j 1 2 and 3 are vectors of explanatory variables entering with coefficient vectors 1 Equation 4 1 encompasses a range of options of which no more than a few are likely to be selected at once The most basic time series model for a single series is the univariate ARMA p q form Setting d 0 and suppressing the explanatory variables this might be written as DIY Yo HIM 4 3 in which o L and 0 L represent the autoregressive AR and moving
95. mposed by hand equation by equation 8 James Davidson 2015 3 Panel Data Panel regression is supported when the data file is created with a specified format The basic model is assumed to take the form ya P x A N v t T 4L Dpi 1 N 3 1 9079 The subscript i indexes individuals or cross sectional units while t indexes dates v is a disturbance with mean 0 and variance o and distributed independently of x for all i and t Panels can be unbalanced with different start and end dates for different individuals and can even be irregular with missing time periods although this possibility is not indicated explicitly in the notation of equation 3 1 Except in the irregular case dates can be seasonal with a year quarter or year month format The vector x p x 1 contains regressors which can consist of current and lagged exogenous variables and also lagged endogenous variables It can optionally include an intercept and also a trend term i e x t min T We define T T T 1 for brevity and let the total number of observations be O 3R 3 1 Data Transformations The variables w y x may be automatically subjected to one of the following transformations denoted generically by w H H H 1 T 1 Deviations from time means w w W where w T KS CW t 7 1 H T gt SS d ee 1 2 Time means w W t T 1 T where w T y Wa Thus
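As an illustration of the first transformation listed above (deviations from time means, the within transformation), here is a short sketch for an unbalanced panel held as flat arrays; the data layout is an assumption made for the example.

```python
import numpy as np

def within_transform(w, ids):
    """Deviations from time means: w_it minus the over-time mean for individual i.
       `w` is a flat array of observations and `ids` the matching individual
       index, so unbalanced panels are handled automatically."""
    w = np.asarray(w, dtype=float)
    out = np.empty_like(w)
    for i in np.unique(ids):
        mask = (ids == i)
        out[mask] = w[mask] - w[mask].mean()
    return out

ids = np.array([1, 1, 1, 2, 2, 3, 3, 3, 3])
y = np.array([2.0, 3.0, 4.0, 1.0, 5.0, 0.0, 2.0, 4.0, 6.0])
print(within_transform(y, ids))
```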
96. n 4 1 can be modified as ADA D Y Ya Vt nix Yo T LO OL 4 21 where Z isa S x 1 vector of equilibrium relations and v is a S x 1 vector of loadings coefficients The lag K 1 is selectable While in a single equation S 1 would be the typical case the only restrictions needing to be observed by the nonlinear mapping is to ensure parameters are identified in general no more unrestricted parameters than independent variables 4 5 1 Equilibrium Relations Two schemes are implemented to form the elements of Z as linear combinations of specified variables 1 Set Z x Ilx 4 22 where IT is a P x S matrix of coefficients of S equilibrium relations in P variables of which one element of each column must be normalized to 1 When the data are nonstationary and a unit root is imposed these relations are commonly called cointegrating although such a model is also compatible with stationary data The matrix must have sufficient restrictions imposed to identify its remaining elements See Davidson 1998 2000 for details The vector x7 may include the dependent variable 2 Let S 1 and define Z Ya Ui TX 4 23 where the parameters are constrained to match those in equation 4 1 This allows nonlinear autoregressions to be implemented see the next section To include regressors not subject to the implicit coefficient restrictions of 4 23 these can be included as Type 2 17 James Davidson 2015 4
97. Evaluating bootstrap inference by Monte Carlo simulation can be computationally burdensome, since the total number of estimation and data-generation cycles becomes the product of the number of Monte Carlo replications K and the number of bootstrap replications B. The warp-speed method (see Giacomini et al. 2013) collapses these stages together. The method has an affinity with the fast double bootstrap technique. In each Monte Carlo replication a single bootstrap replication is performed; in other words, the cycle of generating a bootstrap sample using the model with fitted parameters is performed just once, not B times. At the end of the experiment it follows that 2K test statistics have been computed: the K statistics computed using the Monte Carlo data, and the K statistics obtained using the bootstrap samples generated from these first K estimations. This latter set is sorted and used to derive an empirical distribution function (EDF) which stands in for the bootstrap distribution. A bootstrap p-value is generated for each Monte Carlo replication by locating the statistic from the first set in this EDF. In a correctly sized test, the distribution of these p-values in simulations of the null hypothesis should be uniform on the unit interval, so that for each α in (0, 1) the proportion not exceeding α should approach α as K → ∞. The ERP is estimated as the difference between α and the actual proportion recorded in the experiment. As a rule, the number of Monte Carlo replications should
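A schematic of the warp-speed loop just described, using a toy data-generation process (an i.i.d. mean test) in place of a fitted TSM model:

```python
import numpy as np

rng = np.random.default_rng(4)

def t_stat(y, mu0=0.0):
    """t statistic for H0: E(y) = mu0."""
    return np.sqrt(len(y)) * (y.mean() - mu0) / y.std(ddof=1)

def warp_speed(K=2000, n=100):
    """Warp-speed Monte Carlo: one bootstrap replication per Monte Carlo
       replication; the K bootstrap statistics (centred on the fitted value)
       form an EDF that stands in for the bootstrap null distribution."""
    t_mc = np.empty(K)
    t_bs = np.empty(K)
    for k in range(K):
        y = rng.standard_normal(n)                       # toy DGM, null is true
        t_mc[k] = t_stat(y)
        y_star = y.mean() + rng.choice(y - y.mean(), n)  # a single bootstrap sample
        t_bs[k] = t_stat(y_star, mu0=y.mean())           # centred bootstrap statistic
    pvals = np.array([(np.abs(t_bs) >= abs(t)).mean() for t in t_mc])
    alpha = 0.05
    return (pvals <= alpha).mean()   # should be close to alpha under the null

print(warp_speed())
```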
98. n of the instrument series are not implemented in this release 3 4 Tests and Diagnostics Two options are available for computing the covariance matrix for t values and Wald Statistics 1 Standard formula dE gt XX J 3 7 i l t 7 41 Th where s WEEK KE i rhezl 2 Robust formula dk F xix I ER G aba See ab Se i l t 1 i rer i l t 7 1 The following test statistics are reported automatically under the setup indicated 1 Jarque Bera test for normality of within disturbances All cases 11 James Davidson 2015 2 Breusch and Pagan 1980 LM test of the null hypothesis EN 0 OLS without transformations 3 Bhargava Franzini and Narendranathan 1982 modified Durbin Watson statistic This tests the null hypothesis of serial uncorrelatedness of within disturbances Not under transformation 3 4 Hausman 1978 test for correct specification in the random effects model Under the null hypothesis E m Xj Xir 0 FGLS and ML 3 5 System Estimation Systems of panel equations with fixed effects can be estimated in the same way as for one dimensional samples Either least squares SUR or instrumental variables can be specified Systems with random effects cannot be estimated in this release 12 James Davidson 2015 4 Single Equation Dynamic Models The Dynamic Equation dialog accessed with the button on the toolbar gives access to models that require opti
99. nce EE 59 13 5 Resamplino Methods E 61 13 5 1 Likelihood Model srs aetna a RR 61 13 5 2 Gaussian aan a OR o SRA ra 61 Ee EE 61 155 APOLO EE 61 13 5 5 Simple Block Bootstrap a ca urso aaa eene 61 13 5 6 Stationary Block Booisifaps sussa sas lada 61 13 5 7 RE 62 138 F rier Boo Sra Poin fcc aaa da ran idea Macha 62 13 5 9 Sieve AR E EE 62 13 5 10 Data REsp Ee 62 136 CSC 63 13 6 1 Dynamic Data Simulation EEN 63 13 6 2 WEE EE 64 13 7 Nonlinear Models sicknemi astro Maite ae e ee 64 13 8 Panel Data EE 64 UE Ee E 65 13 10 The Fast Double Bootstrap wivscc ssniccscsewesecsssvecessececaucoscssneedessavesdde 65 13 11 Warp speed Monte Carlo for Bootstrap Estimators 0 00000 66 14 Additional Ee E 67 14 1 Semiparametric Long Memor 67 14 1 1 Geweke Porter Hudak Log Periodogram Regression 67 14 1 2 Moulines Soulier Log Periodogram Regression cccceeseeeeeeeeteeeteees 67 1413 Local Whittle EE 67 E Comte oration AMAly CG 68 14 3 Automatic Model Selection i ccs5 ucia weet eit Estab bien 69 143 CARMA Order Selection EE 69 4 James Davidson 2015 14 3 2 EE 69 14 4 SsfPack State Space Modelling 2 5006 anexeatcasacvevieveccaassezes Steet 70 14 5 Calculator and Matrix Calculator 0 ccccccececscceeeeeceeeesenenes 70 ED DENTEN 72 fd op ee 78 5 James Davidson 2015 1 Introduction This document describes the econometric models that can be estimated in Time Series Modelling 4 4
100. nces on these tests see Davidson 2000 Chapter 12 Under standard regularity conditions all these tests have asymptotic y r distributions when the null hypothesis is true with the number of degrees of freedom r corresponding to the number of restrictions under test 12 1 Wald Tests Wald tests are computed using the formula W 8 GVG g 12 1 where g denotes a vector of r linear or nonlinear restrictions and G 0g 00 r x p is the Jacobian matrix of the restrictions In many cases this is just a selection matrix picking out a subset of the parameters to test that they are jointly equal to zero The hats denote evaluation at unrestricted parameter values Nonlinear restrictions are implemented by typing the specified formulae in standard notation see Section 4 6 1 and evaluating the Jacobian numerically Ordinary t values ratios of estimates to standard errors are asymptotically normal under the null hypothesis and can be thought of as Wald statistics KA after squaring These statistics are not given explicitly but the nominal p values for 2 sided tests are quoted where appropriate as for all the quoted statistics 12 2 Lagrange Multiplier Tests The basic statistic is obtained from the formula LM q Q G GVG GO q 12 2 where q denotes the gradient of the unrestricted model criterion and the dots denote evaluation at the restricted estimates The matrix V in 12 1 and 12 2 is computed by the currently se
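A small numerical illustration of the Wald formula (12.1) for linear restrictions; the estimates and covariance matrix are invented for the example, and the closed-form chi-square(2) tail is used only because two restrictions are tested.

```python
import numpy as np

def wald_test(theta_hat, V_hat, R, r):
    """Wald statistic for linear restrictions R @ theta = r:
       W = (R theta - r)' [R V R']^{-1} (R theta - r), asymptotically chi-square
       with rows(R) degrees of freedom under the null."""
    g = R @ theta_hat - r
    return g @ np.linalg.solve(R @ V_hat @ R.T, g)

theta_hat = np.array([0.8, 0.3, -0.1])          # illustrative estimates
V_hat = np.diag([0.04, 0.01, 0.0025])           # illustrative covariance matrix
R = np.array([[0.0, 1.0, 0.0],                  # test: beta2 = 0 and beta3 = 0
              [0.0, 0.0, 1.0]])
W = wald_test(theta_hat, V_hat, R, np.zeros(2))
print(W, "p-value:", np.exp(-W / 2))            # chi-square(2) upper tail = exp(-W/2)
```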
101. ng unit diagonal In this set up the off diagonal elements of C are additional parameters to be estimated and are constrained by a logistic mapping to lie in 1 1 If the equations are simultaneous with B I in 5 1 the term T log der B is added to the maximand in 10 19 to implement FIML 10 2 4 Student ML with Conditional Heteroscedasticity To estimate a system with multivariate Student s t errors the maximand is 44 James Davidson 2015 T v l 2 T T v 2 2 L T log log m v 2 10 20 T tyy ll2p l yy 1 2 5 log det H log det C v 1 log pa CH u A HEEN t 1 Note that in this set up the degrees of freedom are constrained to be equal for all equations 10 2 5 GED ML with Conditional Heteroscedasticity A system with GED errors has maximand y2 VI 2 Fy 1 2 L T log 5 luta netas E RE PS nz t v 2 E 10 21 AM v 2 If the equations are simultaneous with B Tin 5 1 the term T log der B is added to the maximands in 10 19 10 20 or 10 21 to implement FIML System versions of the skewed Student and Whittle likelihoods are not implemented 10 3 Markov Switching Models In switching regime models the likelihood functions take the form of 8 4 where the regime probabilities are generated by the recursive formulae 8 2 and 8 3 and FMS j Fa exp lis 10 22 where l is the relevant likelihood contribution in other words th
102. nstrain coefficients to any chosen values either zero or nonzero This feature allows the imposition of identifying restrictions in simultaneous systems for example Tests of coefficient restrictions based on the unrestricted regression can be computed using the Wald principle Zero restrictions linear restrictions and nonlinear restrictions can be tested In the latter case the restrictions are coded as algebraic expressions allowing any degree of flexibility Diagnostic tests based on the Lagrange multiplier and conditional moment principles can be computed using either preset specifications autocorrelation RESET heteroscedasticity and neglected ARCH as well as user selected test variables Other options include the ADF and Phillips Perron tests for cointegration the Durbin Watson statistic and a general test of model significance allowing for dummies and lagged dependent variables under the null hypothesis By dividing the sample into estimation and forecast periods Chow s forecasting and parameter stability tests can be computed Advanced users also have the option of bootstrap methods including bootstrap confidence intervals test p values and bias corrections 7 James Davidson 2015 2 4 Cointegrating Regressions To estimate cointegrating relations semi parametrically two methods are implemented the Phillips Hansen 1990 fully modified least squares estimator and the least squares estimator augmented by lags and leads of
103. many situations, for example diagnostic tests for incorrect specification.

The block bootstrap implemented for resampling of stationary dependent data is the moving-blocks method of Künsch (1989). Integers are drawn with equal probability, with replacement, from the set of admissible starting dates (1, …, n − L + 1), and used to define the beginning of blocks of consecutive observations of length L. Randomly drawn blocks are concatenated and truncated as required to form the bootstrap samples of length n.

13.5.6 Stationary Block Bootstrap

This variant of the block bootstrap, due to Politis and Romano (1994), draws blocks with random length as well as random initial observation, where the block lengths are drawn independently from the geometric distribution with parameter 1/L, such that the mean block length is L. For the purpose of drawing the blocks the residual series is wrapped, such that u_{n+1} = u_1, u_{n+2} = u_2, and so forth.

13.5.7 Wild Bootstrap

The wild bootstrap (see Gonçalves and Kilian 2003, Davidson, Monticini and Peel 2007, Davidson and Flachaire 2008) is robust to neglected heteroscedasticity. If ũ_1, …, ũ_n are the model residuals, the re-sampled variates in this case are u*_t = ũ_t η_t for t = 1, …, n, where η is a vector of i.i.d. drawings from the 2-point distribution

η_t = a with probability 1/(1 + a²),  η_t = −1/a with probability a²/(1 + a²).    (13.9)

The first two moments of the distribution are preserved for any choice of a > 0. Setting a = 1 is equivalent to drawing η_t = ±1 with equal probabilities.
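The three resampling schemes just described can be sketched in a few lines; this is illustrative code, not TSM's implementation, and u denotes a vector of centred residuals.

```python
import numpy as np

rng = np.random.default_rng(5)

def moving_blocks(u, L):
    """Moving-blocks resampling: concatenate randomly chosen blocks of length L."""
    n = len(u)
    starts = rng.integers(0, n - L + 1, size=int(np.ceil(n / L)))
    return np.concatenate([u[s:s + L] for s in starts])[:n]

def stationary_bootstrap(u, L):
    """Stationary bootstrap: random starting dates, geometric block lengths with
       mean L, drawing from the 'wrapped' series."""
    n = len(u)
    out = np.empty(n)
    t = 0
    while t < n:
        start = rng.integers(0, n)
        length = rng.geometric(1.0 / L)
        for j in range(length):
            if t == n:
                break
            out[t] = u[(start + j) % n]   # wrap around the end of the series
            t += 1
    return out

def wild_bootstrap(u, a=1.0):
    """Wild bootstrap: u*_t = u_t * eta_t with the two-point distribution
       eta = a w.p. 1/(1+a^2), eta = -1/a w.p. a^2/(1+a^2)."""
    p = 1.0 / (1.0 + a**2)
    eta = np.where(rng.random(len(u)) < p, a, -1.0 / a)
    return u * eta

u = rng.standard_normal(200)
print(moving_blocks(u, 10)[:3], stationary_bootstrap(u, 10)[:3], wild_bootstrap(u)[:3])
```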
104. on 4 3 with Gaussian errors only The criterion function see Hauser 1999 is 1 M M bo ee vp oe a Dogg 10 7 j l 8 j l where Z is the jth point of the periodogram of Y expressed in mean deviations M 7 2 and 2 1 2 2 dell d o One COS 27 T D Ona sin 2amj T i K SCH cos Zug vd d Sin Ze This algorithm is quite a lot faster than conditional least squares i e formula 10 3 without GARCH in large samples 10 8 10 1 8 Probit and Logit The log likelihood function is defined as L gt log F 0 log FY 10 9 where Y denotes the binary 0 or 1 dependent variable F denotes the standard normal or logistic CDF and Y is defined in 4 33 and 6 8 or 6 9 42 James Davidson 2015 10 1 9 Ordered Probit and Logit Letting 1 _ denote the indicator function taking the value 1 when its argument is true and 0 otherwise the log likelihood for the model of J states J 3 can be written as T L GR log F Y F t 1 J 1 KU log F D A x F Dir SE a 10 10 J 1 D Ju log I F io d I z where y 0 and F and Y are defined as before Note that for 1 lt j lt J 1 y 0 isa feasible value only if the jth category is empty Then the corresponding term is omitted from the likelihood function 10 1 10 Poisson For count data the log likelihood in the Poisson model is defined as L gt 6 Y log logI 1 10 11 where 4 is defin
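To illustrate (10.9), here is a minimal sketch of the probit/logit log-likelihood for a binary dependent variable, with a simple linear index standing in for the model's systematic part; the names and the data-generating step are illustrative.

```python
import numpy as np
from math import erf, sqrt

def binary_loglik(y, X, beta, link="probit"):
    """Log-likelihood of form (10.9): sum_t y_t*log F(z_t) + (1-y_t)*log(1-F(z_t)),
       where z_t = x_t' beta and F is the standard normal or logistic CDF."""
    z = X @ beta
    if link == "probit":
        F = 0.5 * (1.0 + np.array([erf(v / sqrt(2.0)) for v in z]))
    else:  # logit
        F = 1.0 / (1.0 + np.exp(-z))
    F = np.clip(F, 1e-12, 1 - 1e-12)   # guard the logs
    return np.sum(y * np.log(F) + (1 - y) * np.log(1 - F))

rng = np.random.default_rng(6)
X = np.column_stack([np.ones(300), rng.standard_normal(300)])
y = (X @ np.array([0.2, 0.8]) + rng.standard_normal(300) > 0).astype(float)
print(round(binary_loglik(y, X, np.array([0.2, 0.8]), "probit"), 2))
```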
105. only in the case where none of the programmed model features are specified If a pre sample period is specified in other words if some initial observations are excluded from the selected sample there are two options for setting the initial conditions see the Options Simulation and Resampling dialog In the case Fixed Presample Data the actual data set and estimated residuals are used This mode is always adopted for Monte Carlo forecasting and bootstrap calculations Otherwise it is user selectable In the case Random Presample Data the simulation run for a stationary process actually starts at date 1 start of the observed sample although the simulation is only reported from the specified start date By choosing a long enough presample period the presample data can therefore attain their stationary distribution If a unit root is specified however the cumulation starts at the initial date specified not date 1 Long memory models pose a special problem because the dependence on presample shocks is potentially large and persistent If these effects are suppressed the resulting distributions of partial sums converge to functionals of the type II fractional Brownian having nonstationary increments Even a very large number of presample lags may fail to reproduce the stationary process whose partial sums converge to the type I fractional Brownian motion An alternative simulation method is provided for this case
106. orm allowing specialized forms of nonlinear dynamics The symbol E 1 is used to represent the lagged residual in formulae Since this option allows a normalized left hand side variable to be specified fitted values simulations and forecasts can be generated This is not possible for the implicit Residual form Sa SU fij gt 0 Seel 4 32 which accordingly should only be used on cases such as 4 28 where a suitable normalization does not exist In the Equation coding style the disturbance is added automatically to the formula and does not need to appear explicitly For models to be used for stochastic simulation another style of coding is available in which the disturbance appears explicitly Thus equation 4 27 might be coded as WYE alpha beta EXE gamma W 19 James Davidson 2015 where W is a reserved name denoting the artificially generated disturbance It can also appear lagged as W i for j gt 0 The disturbance can be transformed or can enter the model nonlinearly providing much greater modelling flexibility However this coding style cannot be used for estimation purposes since such equations cannot be inverted to generate residuals Attempting to estimate an equation containing W produces an error Instead create a separate model with the Residual option The reserved name E j can be used denote the residual lagged j gt 0 periods in this case 4 6 4 Coded Component The
107. pendence of the series automatically by a sieve autoregression choosing lag length by the Akaike criterion with maximum lag length limited to OTT This is fast but needs a linear representation of the series to be adequate The second variant uses a model constructed by the investigator to represent the dependence and hence can include such features as long memory and nonlinearities as well as permitting more care in validating the model specification In this case the 1 1 d sequence is generated from the investigator s model in which all time dependence parameters are suppressed but disturbance distribution parameters are retained This variant is valid in particular for dynamic binary and count data models 57 James Davidson 2015 13 Simulation and Resampling Options A special feature of the program is the simulation capability Any model that can be estimated can also be simulated A range of methods for generating the random shocks is provided including the bootstrap and model specific distributions including the Gaussian Student t GED as well as probit logit and Poisson for the discrete data cases Simulations can be run on a one off basis and comparing the appearance of series generated from a fitted model with the original data is an excellent informal method of specification checking 13 1 Monte Carlo Experiments The Monte Carlo option provides the means to study the distributions of estimators and test stati
108. placed by the matrix Y L containing elements vl LU where dy is defined 5 2 1 above and the dz are additional parameters This is the fractional VECM FVECM model and this case is called regular fractional cointegration The equilibrium relations are potentially cointegrating in the sense that they are integrated to order d dau lt du While this value must be the same for all j di and dr can depend on j d3 can potentially differ with respect to i when variables with different integration order appear in different cointegrating relations The standard VECM model corresponds to the case d3 d 1 all j and i CAUTION it is the user s responsibility to respect this requirement The program cannot monitor a nonlinear specification If this violates the exogeneity restriction the estimates will not correspond to a valid estimator and will be inconsistent in general Simulations will also be incorrect 24 James Davidson 2015 In the closed loop model in which P N and 5 6 represents the equilibrium relations a constant loadings matrix Y can be optionally combined with a cointegrating matrix IL L dy ds ji whose typical elements are of the form the form x 1 L Here the jth element of the vector BY To Sdt SUE 5 7 enters the ith cointegrating relation in fractionally differenced form with differencing parameter du da This case of the FVECM is called generalized fractional cointegration
109. roposed by Bierens 1990 The test covariance is between the model residuals and a bounded nonlinear function of the explanatory variables Let x denote the vector of all exogenous variables in the mean model The test function takes the form w 6 exp arctan x 12 11 52 James Davidson 2015 where x denotes the standardized series with elements expressed in deviations from means and divided by standard deviations and the vector K x1 has elements drawn from the interval 1 1 Setting amp is to a fixed value amp the corresponding statistic S is asymptotically y 1 under the null hypothesis of correct specification as for any fixed choice of E but this initial choice is not necessarily optimal To remove dependence of the test on consider an integrated statistic computed by one of the following formulae where is a compact subset of R S stgas 12 12 2log fep 5 d 12 13 ST sup S 12 14 See Andrews and Ploberger 1994 for an analysis of tests of this type These integrated statistics have unknown null distributions and two approaches exist to computing p value The first is the bootstrap method described by Hansen 1996 which can however be quite computationally intensive Alternatively Bierens 1990 suggests defining the statistic S Get 12 15 S where y gt 0 and 0 lt p lt 1 Judicious choice of the bound ensures that 5 S with probability approach
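A sketch of the Bierens-type moment: it evaluates the test function (12.11) on standardized regressors and takes the largest 1-degree-of-freedom statistic over a grid of ξ values, in the spirit of the sup form (12.14). The crude variance estimate (which ignores parameter-estimation effects) and the OLS residuals are simplifications made for the illustration.

```python
import numpy as np

def bierens_stat(u, X, xi):
    """One-d.f. statistic for a fixed xi: the squared, studentized sample
       covariance between residuals u and w = exp(xi' arctan(x_tilde))."""
    Xs = (X - X.mean(0)) / X.std(0)          # standardized regressors
    w = np.exp(np.arctan(Xs) @ xi)           # test function of form (12.11)
    n = len(u)
    m = (u * w).mean()
    v = ((u * w - m) ** 2).mean() / n        # simple variance estimate (illustrative)
    return m**2 / v

def sup_bierens(u, X, grid):
    """Sup statistic over a grid of xi values, in the spirit of (12.14)."""
    return max(bierens_stat(u, X, xi) for xi in grid)

rng = np.random.default_rng(7)
x = rng.standard_normal((200, 1))
y = 0.5 * x[:, 0] + 0.3 * x[:, 0] ** 2 + rng.standard_normal(200)  # neglected nonlinearity
Z = np.column_stack([np.ones(200), x])
u = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
grid = [np.array([c]) for c in np.linspace(-1, 1, 21)]
print(round(sup_bierens(u, x, grid), 2))
```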
110. s 14 4 2 3 Intercept and Linear Trend Dummies 0 ccccescceeseeesceeteeeeeeeteeeeeeeeneees 14 42 4 Regr ssor A lt csisicsd ccananbdunapshanckaandanecdnss adcdandedsudenssstcctuanascaanosaheesaneaecnaass 15 4 2 5 In guality Cons ttait S sorsien a ei k a d 15 4 2 6 Polynomial Distributed Lags sicccecesinedacarecoedeedestuesssvindecvieees oaaseucecneewies 15 1 James Davidson 2015 A gt VHS at Mode EE 16 4 4 Nonlinear Moving Average Model 16 4 5 Error Correction koleegen Eegeregie 17 45 l Equilibrium Relations steigert egegtengch st eengEee denge aoi oei erran ATESA Ea aN 17 4 5 2 Fractional Coiite stations c i gscitsctsaasssattvchtecnsasacdetthcudemedentncgaadtvcntgutedecteans 18 4 5 3 Nonlinear Error Correction and Nonlinear AR 18 4 6 Kleer Coded Munch ONS spread Sia ees 18 AOAC Oded Formulas eissernir a aits A 004 RS e 18 46 2 F rmula EE 19 4 6 3 Coded Equations sauna dah e RAS UA SS cone diaiteaee 19 4 04 Coded O Eer EE 20 4 6 5 Coded Error Correction Mechanism 20 4 6 6 Coded Moving Average Model 20 4 6 7 Coded Equilibrium Relat Ons suas inss aa sadios aaa 20 EE ER 20 4 7 Discrete Data Models eene Ee 21 4 7 1 Probit and Logit EE 21 4 7 2 Ordered Probit and E EEN 21 4 T3 Count Data EE 22 4 7 4 Autoregressive discrete model 22 4 7 5 Zero inflated Poisson and ordered Probit cecceeceeseeseeeteeeteeeeeeeeseees 22 5 Systems of Equations EE 23 5 Kette 23 5 2 Definitions and Delas aaa naan enn quai 23
111. s Davidson 2015 1 1447 aT Bartlett kernel o 2 6614 aT Parzen kernel 113231 aT Quadratic spectral kernel 1 7462 aT Tukey Hanning kernel 11 2 where a is a data dependent factor Let the vector of variables whose covariance matrix is to be computed generally the criterion gradients be denoted x x x Following Andrews 1991 and Andrews and Monahan 1992 a can be chosen as p dree ege Bartlett kernel O KH e j l l p E 11 3 AQAA E 4950 j l 1 p e SN other cases w 2 2 dp where o is the first order autoregression coefficient of the x variable and o the corresponding residual variance TSM chooses the weight w to be the reciprocal of the sample variance of x Following Newey and West 1994 a can also be chosen as fe a V 25 j l JY Dun SE 11 4 0 25077 where is the jth order autocovariance of the series w x and w w w is the same weight vector as in a and MT 100 Bartlett kernel n 44 T 100 Quadratic Spectral kernel 11 5 4 T 100 Parzen and Tukey Hanning kernels See also Den Haan and Levin 1997 for all formulae details and recommendations 11 3 2 Pre whitening The pre whitening option computes a VAR 1 model u Au 11 6 for the series u for example the scores in parameter covariance matrix estimation and applies the kernel estimator to the residuals from this regres
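For illustration, here is a basic HAC (long-run) variance estimate with the Bartlett kernel; the fixed bandwidth rule 4(T/100)^(2/9), often quoted for this kernel, is an assumption of the sketch, and the automatic plug-in refinements described above are omitted.

```python
import numpy as np

def bartlett_hac_variance(x, bandwidth=None):
    """Long-run variance of the mean of a scalar series using the Bartlett
       kernel: gamma_0 + 2 * sum_j (1 - j/(n+1)) * gamma_j, j = 1..n.
       Default bandwidth is the common rule 4*(T/100)**(2/9) (an assumption)."""
    x = np.asarray(x, float) - np.mean(x)
    T = len(x)
    if bandwidth is None:
        bandwidth = int(np.floor(4 * (T / 100.0) ** (2.0 / 9.0)))
    v = x @ x / T                                 # gamma_0
    for j in range(1, bandwidth + 1):
        gamma_j = x[j:] @ x[:-j] / T
        v += 2.0 * (1.0 - j / (bandwidth + 1.0)) * gamma_j
    return v

rng = np.random.default_rng(8)
e = rng.standard_normal(600)
x = np.empty(600)
x[0] = e[0]
for t in range(1, 600):
    x[t] = 0.6 * x[t - 1] + e[t]                  # AR(1); long-run variance = 1/(1-0.6)^2
print(round(bartlett_hac_variance(x), 2))
```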
112. s on the cointegrating space and MINIMAL analysis The context for these tests is the Johansen 1988 1991 type cointegrating VAR model Ax TI L Ax aB x _ vtu mxl 14 7 68 James Davidson 2015 where TI L IL TI for k gt 0 0 otherwise and o and B are m x s and the cointegrating rank of the system is 0 lt s lt m One option in this dialog reports selection criteria allowing the best choice of k the additional lag length for the analysis See Davidson 2000 among many recent references for details on the theory and implementation of these tests The tables of critical values used to compute p value inequalities are taken from Osterwald Lenum 1992 Asymptotic chi squared tests based on the assumption that the cointegrating rank is known test restrictions on the Johansen matrix of cointegrating vectors B of the form 4 a such that HBa 0 14 8 where H p x m is a known matrix representing linear restrictions here exclusion restrictions Thus the null hypothesis specifies that a vector obeying the restrictions lies in the cointegrating space spanned by If so the subset of variables with non restricted coefficients Ba are cointegrating amongst themselves See Davidson 1998b or Davidson 2000 Chapter 16 6 on the theory of these tests The MINIMAL test algorithm works through all the possible exclusion tests of this type to identify the subsets of variables that are irreducibly cointegrating that is
113. sion Call this estimate U This matrix is re coloured to yield the HAC covariance matrix for u as 47 James Davidson 2015 V I AU I A 11 7 Use the pre whitening option with caution since its properties in particular cases are unclear Ifthe scores are highly collinear the VAR estimate could be poorly conditioned and I A close to singular Over compensating for autocorrelation by pre whitening and or using plug in bandwidths could effectively kill the power of the tests of I 0 11 4 KVB Inconsistent Variance Estimates A fourth option implemented is that of Kiefer Vogelsang and Bunzel 2000 KVB These authors suggest using an inconsistent estimator of the covariance matrix which yields asymptotically pivotal statistics having a non standard distribution As shown in Kiefer and Vogelsang 2002a 2002b an equivalent procedure is obtained by using the HAC estimator with the Bartlett kernel and bandwidth set equal to sample size p value inequalities are reported using the tabulation in Kiefer and Vogelsang 2002b See the cited papers for details The method is also implemented for nonlinear models hypotheses See Bunzel Kiefer and Vogelsang 2001 on these applications 48 James Davidson 2015 12 Test Statistics Three types of test statistic can be computed Wald tests Lagrange multiplier LM tests also known as score tests and moment tests M tests For the background theory and literature refere
114. Statistical Society, Series B, Vol. 39.
Deng, A. and P. Perron (2008), "The limit distribution of the CUSUM of squares test under general mixing conditions", Econometric Theory 24, 809-822.
Den Haan, W. J. and A. Levin (1997), "A practitioner's guide to robust covariance matrix estimation", Chapter 12 (pp. 291-341) of Handbook of Statistics 15, North-Holland Elsevier.
Dickey, D. A. and W. A. Fuller (1979), "Distribution of the estimators for autoregressive time series with a unit root", Journal of the American Statistical Association 74, 427-431.
Ding, Z., C. W. J. Granger and R. F. Engle (1993), "A long memory property of stock market returns and a new model", Journal of Empirical Finance 1, 83-106.
Doornik, J. A. (1999), Object-Oriented Matrix Programming Using Ox, 3rd ed., London: Timberlake Consultants Ltd and Oxford: www.nuff.ox.ac.uk/Users/Doornik.
Elliott, G., T. J. Rothenberg and J. H. Stock (1996), "Efficient tests for an autoregressive unit root", Econometrica 64(4), 813-836.
Engle, R. F. (2002), "Dynamic conditional correlation: a simple class of multivariate generalized autoregressive conditional heteroskedasticity models", Journal of Business & Economic Statistics 20(3), 339-350.
Engle, R. F. and K. Kroner (1995), "Multivariate simultaneous generalized ARCH", Econometric Theory 11, 122-50.
Engle, R. F. and A. D. Smith (1999), "Stochastic permanent breaks", Review of Economics and Statistics 81, 553-574.
Feller, W. (1948)
115. stics The reported statistics from the experiment can include the following 1 Moments of the distributions of parameter estimates mean variance skewness kurtosis 2 Bias and RMSE estimates For this option the parameter sets of the simulated and estimated models must match but restrictions can be placed on the estimated model to simulate misspecification 3 Upper tail quantiles of all test statistics including parameter absolute or signed t values diagnostic statistics and additional Wald LM and moment tests These outputs can be used to generate tables of critical values for tests for given sample sizes that may improve on the asymptotic approximations 4 Empirical distribution functions of p values These should be uniformly distributed when the null is true and the tests are correctly sized These outputs are useful to assess the size and power of tests under different data generating setups To run an experiment a model to generate the data DGM must be specified and stored using the Model Manager see the Setup menu Simply set up the model specification in the usual manner for estimation Parameter values can be the results of the most recent estimation run or they can be set manually in the Values dialogs A shock distribution must be selected in Bootstrap and Simulation Options The bootstrap options randomly resample the residuals from the latest run Also specify a model to be estimated EM which can be the sa
116. stimation This requires the parameters of the model to be specified in the usual manner Note that in Monte Carlo experiments it is possible to use one model for data generation and another for estimation and hence experiment with misspecification 13 5 2 Gaussian Independent Gaussian drawings with mean zero The standard deviation is selectable 13 5 3 Stable Independent drawings from a fat tailed stable distribution with infinite variance The parameters controlling kurtosis a and skewness B as well as a scale parameter analogous to the standard deviation are selectable 13 5 4 Formula A user supplied formula using the program s symbolic algebra capability to generate any distribution derived from standard Gaussian or uniform 0 1 components as well as elements of the data and parameter sets Mixed Gaussian distributions are just one possibility 13 5 5 Simple Block Bootstrap These options resample of the currently stored model residuals The residuals are centred and multiplied by a variance correction factor of n n k where n is sample size and k the number of fitted parameters The block length L must be set by the user and to get the simple bootstrap set this to 1 In the simple bootstrap procedure note the importance of the assumption that the fitted model is correct such that in particular the true disturbances are 1 1 d However this only needs to hold under the null hypothesis which is appropriate in ma
117. t is a scalar the corresponding score element A is the covariance matrix of the full score vector and the estimated matrix D may be computed by the robust or HAC formulae according the program settings selected M is the matrix of derivatives of m or in other words the log likelihood Hessian or columns thereof in the case of individual parameter tests Note that in the full model test M is square px p and the formula in 12 9 reduces to LM nx m 7 Sm n 12 10 T m 1 7 In the GMM implementation typically m m ye z mx1 where m p the column dimension of the instrument matrix Z S T Z Z and M Z G where G is the matrix whose rows are the derivatives of u with respect to the test parameters Note that in this case the orthogonality condition imposed by estimation takes the form M Sm 1 0 The test is implemented by maximizing the expression in 12 9 by a grid search over the interval 7 7 for 0 lt n lt m lt 1 The bounds are user selectable with default values of mt 0 1 m 0 9 Andrews 1993 table of simulated critical values is used to compute p value bounds 12 7 Consistent Specification Tests Note the tests in this section are advanced and to simplify the program interface dialog elements to control them are not displayed by default See the user s manual Section 8 2a for further information 12 7 1 Bierens Tests This is one of a class of consistent CM tests of functional form p
118. t variable in most situations e Type 3 regressors act in effect as components of the error term adjusting its mean systematically Important note when a unit root is imposed the effect on equation 4 1 is to replace Y by AY t by 1 and x by Ax However the Type 2 and 3 regressors enter as before 4 2 5 Inequality Constraints Optionally estimation can be performed subject to inequality constraints imposed by means of a logistic map If upper and lower bounds UB and LB are specified the reported parameter HB is a logistic transformation of an underlying unconstrained value 0 B LB UB LB ey 4 12 Provided the constraint does not bind with A gt 00 approximate standard errors for B are computed by the delta method 4 2 6 Polynomial Distributed Lags The Data Editing and Transformation dialog contains an option to create moving averages with the form N x i e pape E SCH SE DN Ie where x is a variable in the data set and N is a chosen lag length If all or some of these variables are included in an equation as Type 1 regressors with coefficients a this is equivalent to including lags x Xx n with coefficients i 0 4 4 13 15 James Davidson 2015 4 an gt SE E 4 14 ui N d II i 0 GD These lie on a polynomial of order lt 4 and hence are constrained to vary smoothly Note that suppressing the zero order term imposes the end point constraint By 0 4 3 B
119. ternative resampling procedure that involves resampling the actual data not randomly redrawing residuals The model is fitted to each of the T b 1 contiguous sub samples of length b which must be chosen by the user as a function of sample size and must satisfy b gt and b T 0 This is easily accomplished in TSM using the rolling regressions feature Provided the data are stationary and mixing asymptotically independent the distributions so generated suitably re normalized converge to the limiting distributions of the full sample statistics under very general conditions Subject to these requirements the subsampling method is not subject to the specification or estimation errors that can affect the parametric bootstrap On the other hand it is not suitable for nonstationary or strongly dependent data For further information on the properties of subsampling tests and guidance on the choice of subsample length see for example Politis Romano and Wolf 1999 Confidence intervals for parameters are computed using the formulae L 6 b t T U 6 b t T 13 11 where and t are defined as in 13 4 with respect to the 2 5 and 97 5 quantile of the subsampling distributions Null distributions for the Wald statistics t values and M statistics are generated in the same way as for the bootstrap However LM tests cannot be effectively tabulated by subsampling even in the case of diagnostic tests since the restrictions of
120. tests proposed in this paper a modified Dickey Fuller test DF GLS and a feasible likelihood ratio test which these authors call the Pr test The former chooses lags using and information criterion as for the regular ADF and the latter uses a kernel estimate of the variance which is computed as for the other tests in this dialog e The ADF and PP tests of I 1 can also be computed in the Model Linear Regression dialog by running a regression of the test series on intercept and optionally trend In this case the lag length for the ADF test can be chosen manually by the user e The critical values for these tests are taken from the tables reported in Elliott et al 1996 and Fuller 1976 respectively The entries in these tables for finite sample sizes are linearly interpolated to provide a value appropriate to the actual sample being analysed and for this purpose the case co is treated as equivalent to 1000 observations 12 12 Bootstrap Test of I 0 This test is proposed in Davidson 2009 Defining I 0 to be the property that the normalized partial sums of a time series converge to Brownian motion and hence yield the standard asymptotic distributions postulated by cointegration theory the object of this test is the determine whether the approximation to such distributions is adequate in a sample of given size This is done by simulating the series using a fitted model and computing the statistic defined by Breitung 2
121. the analysis of more than two discrete states provided these are ordered monotonically as functions of the explanatory variables Essentially the same model must explain the probabilities of the states apart from shifts of intercept Suppose for example that Y can assume the values 0 1 2 or 3 These might correspond to responses varying from negative to positive in a sample survey The probabilities of these states would be modelled as P Y 0 F z P Y D F y z F z POY 2 F Ys Kg z P Y 3 1 F Y 2 4 34 where F denotes the normal or logistic CDF as before z is defined by 4 33 as before and y and y are two additional parameters to be estimated Note that the y parameters are constrained to be non negative and are necessarily positive unless the corresponding category is empty In the latter case the corresponding term is omitted from the likelihood function in effect and the parameter is unidentified This problem can be overcome in estimation by fixing the parameter in question at 0 21 James Davidson 2015 4 7 3 Count Data In count data models the data are again integer valued but in this case there is no upper bound the set of possible values In the Poisson model the probabilities that Y 0 1 2 3 are modelled using the Poisson distribution with conditional mean 0 E Y 1x1 x2 x31 where optionally either z 4 35 or d exp Z 4 36 where z is defined by
122. the differenced right hand side variables as proposed by Saikkonen 1991 and Stock and Watson 1993 A range of bandwidth kernel and automatic lag lead selection options are provided to support these procedures The augmented Dickey Fuller and Phillips Perron tests for the null hypothesis of cointegration are also implemented in this dialog By regressing a variable on intercept or intercept and trend only these options can also be used to test the I 1 hypothesis See also Section 12 8 2 for alternative options for I 1 tests 2 5 System Estimation A system of equations is specified by a selecting a set of dependent variables and a set of explanatory variables The latter are allocated to one of the two Types each with its own lag length The same explanatory variables appear in each equation by default but variables can be optionally excluded from an equation by fixing their coefficients to 0 The Type 2 set can include dependent variables in which case the zero order lags are automatically suppressed To estimate the system by 3SLS requires only that a set of instruments be selected Additional instruments are included as lags up to the maximum specified by the Type 2 regressors setting The Type 1 regressor set can include both current and lagged dependent variables The current values are automatically excluded from the equation in which they appear on the left hand side Of course identifying restrictions may also need to be i
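A sketch of the leads-and-lags (dynamic OLS) idea mentioned above: the cointegrating regression of y_t on x_t is augmented by leads and lags of the differenced regressor. The fixed lead/lag length here is an illustrative choice rather than the program's automatic selection.

```python
import numpy as np

def dynamic_ols(y, x, k=2):
    """Leads-and-lags regression: y_t on a constant, x_t and
       Delta x_{t-k}, ..., Delta x_{t+k}. Returns the estimated slope on x_t."""
    dx = np.diff(x)                       # dx[i] = Delta x at time index i+1
    T = len(y)
    rows = range(k + 1, T - k)            # keep rows with all leads/lags available
    Z = []
    for t in rows:
        leads_lags = [dx[t - 1 + j] for j in range(-k, k + 1)]   # Delta x_{t+j}
        Z.append([1.0, x[t]] + leads_lags)
    Z = np.array(Z)
    yy = y[list(rows)]
    coef, *_ = np.linalg.lstsq(Z, yy, rcond=None)
    return coef[1]                        # coefficient on x_t

rng = np.random.default_rng(9)
x = np.cumsum(rng.standard_normal(400))             # I(1) regressor
y = 1.0 + 2.0 * x + rng.standard_normal(400)        # cointegrated, slope 2
print(round(dynamic_ols(y, x, k=2), 3))
```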
123. the probit and logit models the probability of the dependent variable Y taking the value 1 can be modelled as F h 2z where as before F denotes respectively the standard normal and logistic CDF and either hv 1 TX TX 1X 6 8 which we call the GARCH APARCH model by analogy with equation 6 1 or logh 1 1 x 1x h mix 6 9 t which we call the EGARCH model by analogy with equation 6 2 The different explanatory variable Types in 6 8 and 6 9 exist mainly by default as specializations of the usual models but could be used to set up different lag structures The ordered probit and logit models are generalized in just the same way with PUREE Gl Zz PO DEG E t t 1 J Land P Y J 1 F y y hz t In Poisson models equations 4 35 and 4 37 can be modified as E Y xr X7 Ot 6 10 Var F xi X70 dr 1 ad D 6 11 where d expf hr z and h is given by 6 8 or 6 9 See Psarakis and Panetaros 1990 See Nelson 1991 28 James Davidson 2015 7 Conditionally Heteroscedastic Systems 7 1 Implemented Model Variants In a system of equation with the form 5 1 the default generalizations of equations 6 1 and 6 2 are BL Dr TI x ILx B L 1 A A D D OD KA Ms u ILx SCH where s I u lt 0 and M is a diagonal matrix of parameters and B L log h I x ILx B L D L A A D I g
124. The symmetric percentile interval has bounds

L = θ̂ − s·q₃,  U = θ̂ + s·q₃    (13.7)

where q₃ is the 0.95 quantile of the bootstrap distribution of |t*_j|. This last interval is centred on the point estimate, unlike the first two, but its tails represent unequal probabilities when the distribution is skewed; otherwise the intervals should be similar in large samples. See Hall (1992), Chapter 1, for a good exposition of the bootstrap principle.

Bias correction for point estimates is also optionally implemented. The bias is estimated by B* = E*(θ̂*) − θ̂, estimating E(θ̂) − θ₀, and hence the corrected estimates take the form

θ̃ = θ̂ − B* = 2θ̂ − E*(θ̂*).    (13.8)

In these formulae, P*(·) and E*(·) denote probabilities and expectations under the bootstrap distribution, estimated in practice by Monte Carlo replication.

The number of replications used to construct the bootstrap distributions is 99 by default. While this number may be increased for more accurate Monte Carlo estimation, increasing it does not increase the accuracy of the bootstrap statistics beyond a certain point, at least in a finite sample, because the error in estimating the model parameters will start to dominate; 399 is probably sufficient for nearly every purpose.

13.5 Resampling Methods

The following randomization methods are implemented to draw the disturbances for Monte Carlo and bootstrap resampling.

13.5.1 Likelihood Model

The distribution specified for maximum likelihood estimation
125. ts The diagnostic Q tests can be interpreted similarly in the bootstrap context and bootstrap p values are given However for obvious reasons the Jarque Bera test for normality of the residuals is bootstrapped only in the case where Gaussian disturbances are specified in the resampling NOTE bootstrap p values are always marked with a in the output for easy recognition Bootstrap 95 equal tail confidence intervals L U are reported by default Note that these are not centred on the point estimate Letting 6 denote the estimate from the observed sample the formulae are respectively L 6 1 U 6 t 13 4 such that 2 5 of the estimates from the bootstrap replications 6 lie above 6 t and 2 5 lie below 6 1 To interpret this estimate note that 13 5 where P 6 is the probability under the bootstrap distribution estimated by the corresponding proportion of Monte Carlo replications and 0 is the true value denotes that the bootstrap distribution is expected to mimic the true sampling distribution to within some approximation Alternative confidence intervals are based on the distribution of the bootstrap t ratios 1 as defined in 13 2 The equal tailed percentile interval has bounds L 0 s q Ui 13 6 where q and q2 are respectively the 0 975 and 0 025 quantiles of the bootstrap distribution of the 1 and s is the standard error of the bootstrap distribution of 6 The symmetric percen
126. ts of a linear model can include intercept trend and regressors which can be allocated one of two Types The main purpose of Types is to allow different orders of lags to be specified By setting the scroll bar in the Linear Regression dialog lags can be included automatically without having to create them ahead of time and different orders can be specified for each Type For example dummy variables that should not be lagged can be entered as Type 1 variables Type 2 variables are special since this set can include the dependent variable or variables in a multiple equation model In this case the current value of the variable is automatically suppressed and lags start from 1 For variables not in the dependent set lags start from zero This makes it easy to include the lagged dependent variable as a regressor While the maximum lag is set with the scrollbar note that the specification can be fine tuned by restricting individual coefficients to 0 This is done by checking the Fixed checkboxes in the Values Equation dialog 2 2 Instrumental Variables When 2SLS is selected an instrument set must be specified which should include any exogenous variables in the equation Lags of the additional instruments will be included to match those specified for the Type 2 regressors Lagged endogenous variables are not used as instruments by default but this is a selectable option 2 3 Restrictions and Tests It is possible to co
11 Standard Errors and Covariance Matrix Formulae .......... 46
11.1 Information Matrix Formulae .......... 46
11.2 Robust … .......... 46
11.3 HAC Variance Estimators .......... 46
11.3.1 Bandwidth Selection .......... 46
11.3.2 … .......... 47
11.4 KVB Inconsistent Variance Estimates .......... 48
12 Test Statistics .......... 49
12.1 Wald Tests .......... 49
12.2 Lagrange Multiplier Tests .......... 49
12.3 Moment and Conditional Moment Tests .......... 49
12.4 Information Matrix Test .......... 51
12.5 Nyblom-Hansen Stability Tests .......... 51
12.6 Andrews Structural Change LM Test .......... 52
12.7 Consistent Specification Tests .......... 52
12.7.1 … .......... 52
12.7.2 Score Contributions .......... 53
12.7.3 Dynamic Specification Tests .......... 53
12.8 Vuong's Test of Non-Nested Models .......... 54
12.9 Cusum of Squares Test .......... 54
12.10 … .......... 55
12.11 Tests of … .......... 55
12.11.1 … .......... 55
12.11.2 … .......... 56
12.12 Bootstrap Test of … .......... 56
13 Simulation and Resampling Options .......... 58
13.1 Monte Carlo Experiments .......... 58
13.2 … .......... 58
13.3 Numerical Test … .......... 58
13.4 Bootstrap Inference
… should be set larger than both B and K in the comparable conventional experiment, but can be substantially less than BK. See Giacomini et al. (2013) for a discussion of cases where this method should work, and cases where it should be used with caution.

14 Additional Features

14.1 Semiparametric Long Memory

The menu item Setup / Semiparametric Long Memory provides a choice of three nonparametric estimators of the long memory parameter d. These methods are not a recommended substitute for maximum likelihood estimation of an ARFIMA(p, d, q) model if there is confidence that the ARMA components are correctly specified, but they impose fewer assumptions about the short run. The assumption is that the spectrum of the process takes the form

$$f(\lambda) = |1 - e^{i\lambda}|^{-2d} f^*(\lambda) \qquad (14.1)$$

where $f^*$ represents the short-range component of the dependence. This is assumed smooth in the neighbourhood of the origin, with $f^*(0) > 0$. Note the alternative representation

$$f(\lambda) = \lambda^{-2d} g(\lambda) \qquad (14.2)$$

where $g$ is likewise assumed smooth at the origin, with $g(0) > 0$.

14.1.1 Geweke/Porter-Hudak Log-Periodogram Regression

There are two variants of the log-periodogram regression method. The Geweke and Porter-Hudak (1983) estimator (GPH) is implemented, with trimming and smoothing options available as proposed in Robinson (1995). The regression

$$Y_k = c + d X_k + U_k, \qquad k = L + J,\ L + 2J,\ \ldots,\ M \qquad (14.3)$$

is computed, where $X_k = -2\log\bigl(2\sin(\lambda_k/2)\bigr)$ and

$$Y_k = \log\Bigl(\sum_{j=k-J+1}^{k} I_j\Bigr) \qquad (14.4)$$

where $I_j$ is the jth periodogram ordinate.
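As an illustration of the log-periodogram idea, the following sketch implements a plain textbook GPH regression of log periodogram ordinates on $-2\log(2\sin(\lambda_k/2))$ over the first m Fourier frequencies. It is not TSM's implementation: the trimming and pooling options of Robinson (1995) appearing in (14.3)-(14.4) are omitted, and the bandwidth choice shown is just a common default.

```python
import numpy as np

def gph(y, m=None, trim=0):
    """Textbook Geweke/Porter-Hudak log-periodogram estimate of d.
    Regresses log I(lambda_k) on -2*log(2*sin(lambda_k/2)) over the first m
    Fourier frequencies, optionally dropping the lowest `trim` of them."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    if m is None:
        m = int(T ** 0.5)                      # a common bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / T
    # periodogram ordinates I_k = |sum_t y_t e^{-i lam_k t}|^2 / (2 pi T)
    dft = np.fft.fft(y - y.mean())[1 : m + 1]
    I = (np.abs(dft) ** 2) / (2 * np.pi * T)
    keep = slice(trim, m)
    X = -2 * np.log(2 * np.sin(lam[keep] / 2))
    Y = np.log(I[keep])
    X1 = np.column_stack([np.ones(len(Y)), X])
    coef, *_ = np.linalg.lstsq(X1, Y, rcond=None)
    return coef[1]                             # slope = estimate of d

# Example: the estimate should be near zero for white noise
# print(gph(np.random.default_rng(0).standard_normal(1000)))
```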
where $v_t = u_t^2 - h_t$ and $\delta(L) = \alpha(L) + \beta(L)$. The program can report either the $(\alpha_1, \beta_1)$ pair or the $(\delta_1, \beta_1)$ pair. Whether $\omega$ or $\kappa$ is estimated is also a user-selectable option. We refer to $\kappa$ as the GARCH intercept of Type 2, by analogy with equation (4.1); see Section 4.2.3 for the details.

The lag structures in (6.1) and (6.2) are the same, for convenience and comparability. This can be thought of as a flexible dynamic form with general application, but note that the interpretation of restrictions is different in each case. The case $a > 1$ in (6.2) has no implications for covariance stationarity, as it does in equation (6.1). Abstracting from the role of possible exogenous variables, the EGARCH model is stationary if the lag coefficients in (6.2) are square-summable (see Nelson 1991). This is satisfied here provided the roots of $\beta(L)$ are stable and $d < 0.5$.

6.2.2 HYGARCH and FIGARCH

The HYGARCH model (Davidson 2004a) specifies $a$ in (6.1), referred to as the amplitude parameter. The case $a = 1$ is the FIGARCH model, with $d$ the hyperbolic memory parameter. Note that setting $0 < a < 1$ gives a stationary process, while with $a \ge 1$, which includes FIGARCH, it is nonstationary. If $d = 1$, $a$ reduces to the status of an additional autoregressive root, and so gives the IGARCH model when $a = 1$.

6.2.3 Asymmetric GARCH and Power GARCH

In equation (6.1), $s_t = 1$ if $u_t < 0$ and 0 otherwise; its coefficient is the so-called leverage (asymmetry) parameter.
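As a simple illustration of the two reporting conventions, the sketch below runs a plain GARCH(1,1) variance recursion and forms the ARMA-in-squares coefficient as $\delta_1 = \alpha_1 + \beta_1$. The identity is taken from the discussion above; the code is a generic textbook recursion, not TSM's internal implementation, and the initialization is just one reasonable choice.

```python
import numpy as np

def garch11_variance(u, omega, alpha1, beta1):
    """Conditional variance recursion h_t = omega + alpha1*u_{t-1}^2 + beta1*h_{t-1}
    for a plain GARCH(1,1)."""
    h = np.empty(len(u))
    h[0] = omega / (1.0 - alpha1 - beta1)   # start from the unconditional variance
    for t in range(1, len(u)):
        h[t] = omega + alpha1 * u[t - 1] ** 2 + beta1 * h[t - 1]
    return h

# The 'ARMA in squares' pair (delta1, beta1), assuming delta1 = alpha1 + beta1:
alpha1, beta1 = 0.08, 0.90
delta1 = alpha1 + beta1          # delta1 < 1 corresponds to covariance stationarity
```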
with zero-order terms equal to $I_N$; $A$ is an $N \times N$ diagonal coefficient matrix, and $\Delta$ is an $N \times N$ diagonal matrix with terms of the form $(1 - L)^{d_i}$ on the diagonal.

7.2.2 DCC Multivariate GARCH

This model is proposed by Engle (2002). It is available in either the standard or the ARMA-in-squares parameterizations; that is, it is possible to estimate either the pair $(\alpha, \beta)$ or $(\delta, \beta)$, where $\delta = \alpha + \beta$. In the latter case it is easy to impose the stationarity condition $\delta < 1$. This choice is controlled by the same option as for the GARCH specification proper.

7.2.3 BEKK Multivariate GARCH

This model is suggested by Engle and Kroner (1995). Both diagonal and off-diagonal elements of the conditional covariance matrices are modelled, using a linear vector ARMA structure to explain the evolution of $\mathrm{vec}(H_t)$. Considering for simplicity the case p = q = 1, note that this model is equivalent to

$$H_t = KCK' + A u_{t-1} u_{t-1}' A' + B H_{t-1} B' \qquad (7.9)$$

together with corresponding terms in any GARCH regressors specified. In the lag polynomial matrices in (7.7) and (7.8), the A and B matrices are specified in the Values dialogs, and in the output, just like the corresponding matrices in (7.1) and (7.2). In other words, each row of these matrices is assigned for reporting purposes, arbitrarily in this case, to the corresponding equation of the system. Be careful to interpret the results correctly, for the interpretation of these matrices is of course entirely different in the two cases.
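For concreteness, here is a sketch of the standard Engle (2002) DCC(1,1) correlation recursion, $Q_t = (1 - a - b)\bar Q + a\,\varepsilon_{t-1}\varepsilon_{t-1}' + b\,Q_{t-1}$ with $R_t = \mathrm{diag}(Q_t)^{-1/2} Q_t\, \mathrm{diag}(Q_t)^{-1/2}$. The parameter names, correlation targeting and initialization are the textbook ones and may differ in detail from TSM's implementation.

```python
import numpy as np

def dcc_correlations(eps, a, b):
    """Standard Engle (2002) DCC(1,1) recursion for conditional correlations.
    `eps` is a T x N array of standardized residuals; returns R_t for each t."""
    T, N = eps.shape
    Qbar = (eps.T @ eps) / T                 # unconditional correlation target
    Q = Qbar.copy()                          # Q_0 initialized at the target
    R = np.empty((T, N, N))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)            # R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2}
        Q = (1 - a - b) * Qbar + a * np.outer(eps[t], eps[t]) + b * Q
    return R
```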
The leverage (asymmetry) parameter permits positive and negative disturbances to contribute differently to the conditional variance: if it is positive, there is a larger contribution when $u_t < 0$ than otherwise. In the case where the power parameter is fixed at 2, this supplies a variant of the threshold GARCH, or GJR, model (Glosten et al. 1993). In the case where it is a free (positive) parameter, (6.1) is equivalent to the asymmetric power ARCH (APARCH) model (Ding et al. 1993). Note that all these models can be extended with the natural FIGARCH/HYGARCH generalizations.

6.2.4 GARCH Regressors

The three sets of additional variables appearing in (6.1) will be called GARCH regressors of Types 1-3, in parallel with the exogenous components of the mean equation. In (6.1) these variables ought to be nonnegative, to ensure that the model always yields a positive value for the conditional variance. There is a program option to convert the selected regressors to absolute form automatically. The intercept parameter in (6.1) corresponds to the variance when all conditional heteroscedasticity effects are absent. Note that different orders of lag of each Type can be specified, just as for the regressors in the mean equation.

6.2.5 EGARCH

The appearance of $h_t$ on both sides of (6.2) is purely formal: since the zero-order lag term is zero, in (6.2) as in (6.1), the equation is recursive. The asymmetry parameter in (6.2) represents the degree of asymmetry effects, analogous to the leverage parameter in equation (6.1), and is fixed at zero unless this model feature is selected. However, …
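As an illustration of the threshold (GJR) case just described, the sketch below runs the conditional-variance recursion with an indicator for negative lagged disturbances. It is a generic textbook recursion: the names `omega`, `alpha1`, `gamma` and `beta1` are illustrative and do not correspond to TSM's parameter names, and the initialization is ad hoc.

```python
import numpy as np

def gjr_variance(u, omega, alpha1, gamma, beta1):
    """Threshold (GJR) GARCH(1,1):
    h_t = omega + (alpha1 + gamma*1{u_{t-1}<0})*u_{t-1}^2 + beta1*h_{t-1}.
    The indicator plays the role of s_t in (6.1); `gamma` stands in for the
    leverage parameter."""
    h = np.empty(len(u))
    h[0] = u.var()                           # ad hoc starting value
    for t in range(1, len(u)):
        s = 1.0 if u[t - 1] < 0 else 0.0
        h[t] = omega + (alpha1 + gamma * s) * u[t - 1] ** 2 + beta1 * h[t - 1]
    return h
```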
