cost733class-1.2 User guide
[Figure 6.1: Selected grid points (red circles) for ERA40 domain 00: 37W-56E (32 points by 3), 30N-76N (24 points by 2). 37W/30N is the first position in the data, 56E/76N the last. The classification grid has an extent of 36 deg E-W x 24 deg N-S and is centered at 8E/52N.]

This method was developed by Jenkinson and Collison (1977) and is intended to provide an objective scheme that acceptably reproduces the subjective Lamb weather types (Jenkinson and Collison 1977, Jones et al. 1993). Daily grid point mean sea level pressure data is therefore classified as follows:

1. First the input data is analysed with respect to its grid.

[Figure 6.2: Selected grid points (red circles) for a 17x12 grid if crit:2 is used. Linear interpolation between the neighbouring points is done to obtain the values for the middle line. The grid is then no longer equally spaced in latitudinal direction.]

Each day's pressure pattern is represented by westerly, southerly and resultant flow, computed from the pressure values p1...p16 at the 16 selected grid points (Jones et al. 1993):

    W = 0.5*(p12+p13) - 0.5*(p4+p5)                              westerly flow
    S = 1.74*[0.25*(p5+2*p9+p13) - 0.25*(p4+2*p8+p12)]           southerly flow
    F = sqrt(W^2 + S^2)                                          resultant flow

The shear vorticity terms ZW, ZS and Z are given further below.
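These index calculations are easy to reproduce. The following minimal Python sketch (an illustration only, not the FORTRAN90 code of cost733class; the helper name jc_indices is hypothetical) evaluates the flow and shear vorticity formulas of Jones et al. (1993) quoted in this section:

    import numpy as np

    def jc_indices(p):
        """p: the 16 selected grid-point MSLP values, indexed 1..16
        as in Figure 6.3 (p[0] is unused)."""
        W = 0.5*(p[12]+p[13]) - 0.5*(p[4]+p[5])                   # westerly flow
        S = 1.74*(0.25*(p[5]+2*p[9]+p[13])
                - 0.25*(p[4]+2*p[8]+p[12]))                       # southerly flow
        F = np.hypot(W, S)                                        # resultant flow
        ZW = 1.07*(0.5*(p[15]+p[16]) - 0.5*(p[8]+p[9])) \
           - 0.95*(0.5*(p[8]+p[9]) - 0.5*(p[1]+p[2]))             # westerly shear vorticity
        ZS = 1.52*(0.25*(p[6]+2*p[10]+p[14]) - 0.25*(p[5]+2*p[9]+p[13])
                 - 0.25*(p[4]+2*p[8]+p[12]) + 0.25*(p[3]+2*p[7]+p[11]))  # southerly shear vorticity
        Z = ZW + ZS                                               # total shear vorticity
        return W, S, F, ZW, ZS, Z

In the published scheme the relative magnitudes of F and Z then decide between directional, purely (anti)cyclonic and hybrid types.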
cost733class -dat pth:slp.dat -clain pth:KMN_ncl9.cla -met BRIER -idx brier_KMN_ncl9.txt

3.2.3 Comparing classifications

Comparing two or more classifications is rather easy:

cost733class -clain <specification> -clain <specification> -met <method> -idx <filename> [more method-specific options]

The only difference to an evaluation or classification run is the absence of the -dat <specification> option. Instead, at least two classification catalogs (more than one per file is possible) must be provided. A comparison of two partitions based on different methods could be done with:

cost733class -clain pth:DKM_ncl9.cla -clain pth:KMN_ncl9.cla -met CPART -idx KMN_DKM_ncl9_compared.txt

3.2.4 Assignment to existing classifications

Assignments to existing classifications can be done on two different bases:
1. for a given classification catalog,
2. for predefined class centroids.
In both cases one has to provide and describe data which is then assigned to an existing classification. The other required cost733class options differ between the two cases; please refer to the corresponding sections below.

3.2.5 Rather simple preprocessing

Beyond the use cases mentioned above there are a few methods in cost733class which process input data in a different way. They are grouped together under the term miscellaneous functions. Furthermore
...switches off the compilation of the NetCDF library; thus only ASCII data can be read by the software.

./configure --enable-grib --disable-jpeg

switches on the compilation of the GRIB API; thus grib data can be read by the software. This requires the jasper package.

./configure --enable-opengl

switches on the compilation of routines which visualise some of the classification processes like SOM, SAN or CKM. This feature works only on unix systems with the opengl development packages (gl-dev, glut-dev, x11-dev) installed.

2.2.2 make

The compiling process itself is done by the tool called make:

make

If everything went well you will find the executable cost733class in the src directory. Check it by running the command:

./src/cost733class

If you have administrator privileges on your system you can install the executables into /usr/local/bin. Note that the NetCDF tools coming along with cost733class are also installed there:

sudo su
make install

Alternatively you can copy only the executable to any bin directory to have it in your command search path:

sudo cp ./src/cost733class /usr/local/bin

That's it. You can now start to run classifications, which of course needs some input data describing the atmospheric state of each time step (i.e. the objects you want to classify). You can now try the next chapter (Quick start) or look into the following chapters.
...correlation based, resulting in 26 types
GWTWS (gwtws): based on GWT (above) using 8 types, resulting in 11 types
LIT (lit): Litynski thresholds, one circulation field, dates; ncl = 9, 18, 27
JCT (jenkcoll): Jenkinson-Collison scheme
WLK (wlk): threshold based, using pressure, wind, temperature and humidity
PCT, TPC (tmodpca): t-mode principal component analysis of 10 data subsets, oblique rotation
PTT, TPT (tmodpcat): t-mode principal component analysis, varimax rotation
PXE (pcaxtr): s-mode PCA using high positive and negative scores to classify objects
KRZ (kruiz): Kruizinga PCA scheme
LND (lund): count most frequent similar patterns (-thres)
KIR (kirchhofer): count most frequent similar patterns (-thres)
ERP (erpicum): count most frequent similar patterns, angle distance, adjusting thresholds
HCL (hclust): hierarchical cluster analysis (Murtagh 1986), see parameter -crit
KMN (kmeans): k-means cluster analysis, Hartigan/Wong (1979) algorithm
CKM (ckmeans): like dkmeans but eventually skips small clusters (< 5% population)
DKM (dkmeans): k-means, simple algorithm with most different start patterns
SAN (sandra): simulated annealing and diversified randomisation clustering
SOM (som): self-organising maps (Kohonen neural network)
KMD (kmedoids): Partitioning Around Medoids
RAN (random): just produces random classification catalogues
RAC (randomcent): determines random key objects (centroids) and assigns all other objects to them
[Table 4.2: Example ASCII file contents including three date columns for year, month and day, followed by one data column.]

...to achieve exactly the same result. All flags are recognized to belong to the -dat option until another option beginning with "-" appears. Note that in this example no information about the time was given, i.e. the date of each line is unknown to the program. In this case this is unproblematic, because k-means does not care about the meaning of the objects; it just classifies them according to their attributes (the pressure values at the grid points in this case).

4.4.2 ASCII data file with date columns

cost733class -dat pth:station_zugspitze_2000-2008_Tmean.dat dtc:3 ano:33 -met BIN -ncl 10

In this example the software is told that there are three leading columns in the file holding the date of each line: year, month, day (because there are 3 columns; this hierarchy is fixed per definition and there is no way to change its order). The total number of columns (four) and the number of rows is detected by the software itself by counting the blank gaps in the lines. The beginning of this file looks like shown in table 4.2. Also there is the option ano:33 in order to use the anomalies (deviations referring to the long-term mean).
 -dat pth:$dir/era40_Z500_12Z_195709-200208_domain00.dat lon:-37:56:3 lat:30:76:2
 -dat pth:$dir/era40_TWC_12Z_195709-200208_domain00.dat lon:-37:56:3 lat:30:76:2

Another, more advanced example, omitting the moisture index, could be:

cost733class -v 2 -met WLK -per era40
 -dat pth:/alcc/ptmp/geodata/ERA40/ascii/era40_U700_12Z_195709-200208_domain00.dat lon:-37:56:3 lat:30:76:2
 -dat pth:/alcc/ptmp/geodata/ERA40/ascii/era40_V700_12Z_195709-200208_domain00.dat lon:-37:56:3 lat:30:76:2
 -dat pth:/alcc/ptmp/geodata/ERA40/ascii/era40_Z925_12Z_195709-200208_domain00.dat lon:-37:56:3 lat:30:76:2
 -dat pth:/alcc/ptmp/geodata/ERA40/ascii/era40_Z500_12Z_195709-200208_domain00.dat lon:-37:56:3 lat:30:76:2
 -step 8 -shift -thres 0.35 -crit 0 -alpha 7 -delta 0.15 -out WLK09_YR_S0_U7_V7_D00.txt -dcol 3

producing the output:

    starting cost733class ...
    method: WLK, period: era40, months: 1 2 3 4 5 6 7 8 9 10 11 12, hours: 00 06 12 18, verbose: 2
    [further lines echoing the data input settings]
    got 16436 lines from /alcc/ptmp/geodata/ERA40/ascii/era40_U700_12Z_195709-200208_domain00.dat
    got 16436 lines from /alcc/ptmp/geodata/ERA40/ascii/era40_V700_12Z_195709-200208_domain00.dat
    got 16436 lines from /alcc/p...
The Brier Skill Score (0...1) is computed from the conditional and unconditional event frequencies, where

    N : number of cases
    I : number of classes
    y : conditional event frequency (within a class)
    o : unconditional event frequency (= events/N)

Command line parameters relevant for BRIER:

- -clain <spec>: catalog input, see 4.2.2
- -dat <specification>: input data set to which the evaluation metrics are applied
- -crit <int>: if 1, the quantile -thres is applied to absolute values (default); if 2, the quantile -thres is applied to euclidean distances between patterns
- -thres <real>: quantile (0-1, default 0.9) to define events. An event is defined when the euclidean distance to the period's (seasonal, monthly) mean pattern is greater than the given quantile. If <thres> is signed negative (e.g. -0.8), then events are defined if smaller than the given quantile.
- -alpha <real>: < 0: use all values (crit:1) or patterns (crit:2); > 0: a value or pattern is processed only if itself or the mean pattern is > alpha
- -idx <character string>: base string for the naming of the output file(s)

Output:

- <idx>_brier.list: Brier scores estimated over all variables from the input data set for months, seasons and the whole year (jan feb mar apr may jun jul aug sep oct nov dec win spr sum aut yea)

9 Comparison of classifications

9.1 CPART (cpart): catalog comparison

The following indices determine the similarity of classification catalogs:
...it is low (type 9); if the mean sea level pressure is higher than delta <real> mb it is high (type 10); if the mean windspeed at 500mb is lower than beta <real> m/s it is flat (type 11).

2. The four thresholds (defaults 0.275, 0.073, 0.153 and 0.377) can be varied half-automatized, as the values for mean wind speed at 500mb and mean sea level pressure rise above or fall below a given percentile (alpha, beta, gamma or delta, 0-1): a mean windspeed at 500mb lower than alpha <real> will result in one of the following three types: if the mean sea level pressure is lower than gamma <real> it is low (type 9); if the mean sea level pressure is higher than delta <real> it is high (type 10); if the mean windspeed at 500mb is lower than beta <real> it is flat (type 11).

Options for data output:
- -cla <filename>: output filename for the classification catalog
- -idx <basename>: output basename for the means of MSLP and windspeed at 500mb per timestep
- -dcol <int>: number of date columns in both files mentioned above
- -cnt <filename>: output filename for the class centroids

Output: This method returns one file containing the classification catalog. The three prototype patterns are written to NetCDF files (proto001.nc to proto003.nc) in the directory where cost733class was executed. Parameter-wise class centroids as well as one file containing the means of MSLP and windspeed at 50...
    ...      0.36353  1
    3 3 2  66.66667  0.51063  1

For -ncl 30 [columns: pc, n, i, percentile, score boundary, count]:

    1 5 1  20.00000  1.29727  6
    1 5 2  40.00000  0.99266  6
    1 5 3  60.00000  0.73334  6
    1 5 4  80.00000  0.40355  6
    2 3 1  33.33333  0.42237  2
    2 3 2  66.66667  0.40905  2
    3 2 1  50.00000  0.07110  1

6.3 Methods using the leader algorithm

Leader algorithms (Hartigan 1975) search for leaders within the set of observations, i.e. for observations with the largest numbers of other observations being similar to them. They need a certain threshold for the definition of another observation being similar. Commonly they use the Pearson correlation coefficient as similarity metric, although other metrics could be implemented.

6.3.1 LND (lund): the Lund method

Lund (1963) published an early automated classification scheme based on pattern correlations. In a first part the so-called leader patterns are determined as follows (see the sketch below):

1. For each observation (case) the number of cases being more similar to it than a similarity threshold is counted.
2. The observation showing the maximum amount of similar cases is declared to be the leader of the first class.
3. The leader and all its similar cases are removed from the set.
4. Among the rest, the second leader (the leader of the second class) is determined in the same manner as the first leader.
5. After the 2nd leader and its similar cases are removed, the next leaders are determined until all classes have a leader.
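A minimal Python sketch of this leader search (an illustration under the assumption of a data matrix X with one case per row and Pearson correlation as similarity measure; the function name find_leaders is hypothetical, not part of cost733class):

    import numpy as np

    def find_leaders(X, ncl, thres=0.7):
        """Return the indices of the leader cases, one per class."""
        n = X.shape[0]
        sim = np.corrcoef(X)               # case-by-case Pearson correlations
        active = np.ones(n, dtype=bool)    # cases still in the pool
        leaders = []
        for _ in range(ncl):
            # count similar cases among the remaining pool (inactive rows get -1)
            counts = np.where(active, ((sim > thres) & active).sum(axis=1), -1)
            leader = int(np.argmax(counts))
            leaders.append(leader)
            # remove the leader and all cases similar to it
            active &= ~(sim[leader] > thres)
            active[leader] = False
        return leaders

In the second part of the method (described below) every case, including the removed ones, is assigned to the most similar leader.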
[Example output of the leader search, listing for each class (cl) the similarity threshold (ic), the leader observation (obs) and the number of similar cases (count):]

    cl  ic       obs    count
    3   0.63000  14423  18
    4   0.63000  1628   5
    5   0.63000  5252   3
    6   0.63000  1      1
    7   0.63000  1      1
    8   0.63000  1      1
    9   0.63000  1      1
    0 missing

    cl  ic       obs    count
    1   1.00000  1
    2   0.99000  1      1
    3   0.98000  1      1
    4   0.97000  1      1
    5   0.96000  1399   3
    6   0.95000  15704  16
    7   0.94000  9148   87
    8   0.93000  4009   280
    9   0.92000  11333  659
    15390 missing

    cl  ic       obs    count
    1   0.70000  12815  16093
    2   0.69000  9232   141
    3   0.68000  3521   111
    4   0.67000  7468   57
    5   0.66000  3780   24
    6   0.65000  7151   5
    7   0.64000  7769   3
    8   0.63000  1      1
    9   0.62000  1      1
    0 missing

    cl  ic       obs    count
    1   1.00000  1      1
    2   0.95000  15704  16
    3   0.90000  11333  2498
    4   0.85000  14935  5069
    5   0.80000  3431   4559
    6   0.75000  7323   2602
    7   0.70000  10059  1103
    8   0.65000  12865  464
    9   0.60000  7407   88
    36 missing

    cl  ic       obs    count
    1   0.91000  11333  1523
    2   0.86000  6594   5218
    3   0.81000  1157   4391
    4   0.76000  7976   3509
    5   0.71000  10059  1512
    6   0.66000  10710  456
    7   0.61000  7408   155
    8   0.56000  13201  48
    9   0.51000  1191   9
    0 missing

6.4 Hierarchical cluster analysis

Hierarchical cluster analysis can be realized in two opposite ways. The first, divisive clustering, splits up the whole sample according to some criterion into two classes in the first step; on the second hierarchy level it splits up one of these classes again into two groups, and so on. The opposite way is agglomerative clustering: each single observation is treated as a single cluster, and on each hierarchy level of the process the two nearest
cd cost733class-1.2

2.2 Using configure and make

The next step is to compile the source code to generate the executable called cost733class. For this you need to have a C and a FORTRAN90 compiler installed on your system. The compiler reads the source code from the src directory and generates the executable src/cost733class in the same directory, which can be started on the command line of a terminal. In order to prepare the compilation process for automatic execution, so-called Makefiles are generated by a script called configure. The configure script checks whether everything (tools, libraries, compilers) needed for compilation is installed on your system. If configure stops with an error, you have to install the package it is complaining about (see troubleshooting) and rerun configure until it is happy and creates the Makefiles. The package includes some example shell scripts containing the commands to configure and make the software in one step:

- compile_gnu_debug.sh: this script uses the GNU/Linux compilers (gcc, gfortran and c++) to compile the software with NetCDF support but without opengl support. Debugging information will be given in case of software errors.
- compile_gnu_debug_opengl.sh: this script additionally includes opengl support for visualization.
- compile_gnu_debug_grib.sh: this script additionally includes grib support.
- compile_gnu_debug_omp.sh: this script additionally includes OpenMP support.
The 18 types are numbered as follows (first letter: meridional class N/0/S; second letter: zonal class E/0/W; third letter: cyclonic C or anticyclonic A):

    NOC=1   NOA=2   NEC=3   NEA=4   0EC=5   0EA=6   SEC=7   SEA=8   S0C=9
    S0A=10  SWC=11  SWA=12  0WC=13  0WA=14  NWC=15  NWA=16  00C=17  00A=18

Ignoring the Cp index completely leads to 9 types (see the sketch below):

    NO=1  NE=2  0E=3  SE=4  S0=5  SW=6  0W=7  NW=8  00=9

In order to calculate the geostrophic wind it is necessary to provide the coordinates of the input data set. Also, this method is dedicated to sea level pressure, since it includes a fixed threshold. Finally, the dates of the objects in the input file have to be provided in order to calculate the annual cycle.

Options

Strictly necessary options:
- -dat <specification>: input data (the sea level pressure). Grid and time descriptions are necessary.

The following options define the classification:
- -ncl <int>: the number of types as described above. Default is 9.
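The mapping from the index classes to type numbers can be written compactly; the following Python sketch is a hedged illustration of the numbering above, not code from cost733class (the argument values 'N'/'0'/'S', 'E'/'0'/'W' and 'C'/'A' are assumed to come from the monthly tercile thresholds of Wp, Ws and Cp):

    def lit_type(wp_class, ws_class, cp_class=None):
        """Map Litynski index classes to type numbers (9 or 18 types)."""
        order9 = ["N0", "NE", "0E", "SE", "S0", "SW", "0W", "NW", "00"]
        code = wp_class + ws_class
        if cp_class is None:
            return order9.index(code) + 1       # 9 types: 1..9
        # 18 types: each direction splits into cyclonic (C) and anticyclonic (A)
        return 2 * order9.index(code) + {"C": 1, "A": 2}[cp_class]

For example, lit_type('N', '0', 'C') returns 1 (type N0C) and lit_type('0', '0', 'A') returns 18 (type 00A), matching the table above.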
clusters are merged to build a new common cluster. In cost733class the agglomerative method is implemented.

6.4.1 HCL (hclust)

The routine for agglomerative hierarchical clustering offers various criteria for deciding which clusters should be merged; the criterion can be chosen by the -crit <int> option:

1. Ward's minimum variance method
2. single linkage
3. complete linkage
4. average linkage
5. McQuitty's method
6. median (Gower's) method
7. centroid method

Ward's method reduces the overall variance and is therefore most often used (and the default).

6.5 Optimization algorithms

6.5.1 KMN (kmeans): conventional k-means with random seeds

The k-means algorithm for non-hierarchical cluster analysis works rather simply: "k" represents the number of clusters (types or classes) to be derived. This number has to be decided by the user; there is no routine to determine an appropriate number automatically. "means" denotes the average of each type, which is called the centroid in cluster analysis and plays a fundamental role in this algorithm. The algorithm begins with a so-called starting partition. In case of the kmeans implementation this starting partition is determined at random, i.e. each object is assigned to a cluster by random. Starting from this undesirable state of partitioning, the centroids are calculated as the average of all objects (e.g. daily pressure fields) within
- -ncl <int>: number of classes. Has to be an even number (2, 4, 8, 10, ...). Note however that sometimes not all classes are realized, possibly leading to empty or fewer types.
- -iter <int>: maximum number of iterations. Use -iter 0 (the default) for direct assignment of cases using the Euclidean distance. Use -iter > 0 for a k-means cluster analysis (like -met KMN) but with starting partitions derived by PXE.

Further relevant switches concern the normalization method applied to the input data before the PCA is performed:
- -crit 0: only normalize patterns for PCA (original)
- -crit 1: normalize patterns and normalize gridpoint values afterwards (default)
- -crit 2: normalize patterns and center gridpoint values afterwards

The thresholds for the definition of extreme score cases can be changed by:
- -thres <real>: threshold defining the key group (default 2.0)
- -delta <real>: score limit for the other PCs to define a uniquely leading PC (default 1.0)

6.2.4 KRZ (kruiz): Kruizinga's PCA-based types

P27 is a simple eigenvector-based classification scheme (Buishand and Brandsma 1997). The scheme uses daily 500 hPa GPH on a regular grid (the original one has 6 x 6 points with a step of 5 deg in latitude and 10 deg in longitude). The actual 500 hPa height h_tq for day t at gridpoint q is first reduced by subtracting the daily average height h_t over the grid. This operation removes a substantial part of the annual cycle.
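This daily reduction step is a one-liner; a small Python sketch (hypothetical helper, assuming the daily heights are stored in an array h of shape (ndays, ngridpoints)):

    import numpy as np

    def reduce_daily_mean(h):
        """Subtract each day's grid-average height from that day's field,
        removing a large part of the annual cycle (Kruizinga / P27 scheme)."""
        return h - h.mean(axis=1, keepdims=True)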
  sle:<level between 1000 and 10> : level selection
  arw:<integer> : area weighting: 1 = cos(latitude); 2 = sqrt(cos(latitude)); 3 = weights calculated by the area of the grid box, which is the same as cos(latitude) of option 1
  scl:<float> : scaling of the data values
  off:<float> : offset value to add after scaling
  nrm:<int> : object (row) wise normalisation: 1 = centering; 2 = std (sample); 3 = std (population)
  ano:<int> : variable (column) wise normalisation.
      Done after the selection of time steps: 1 = centering; 2 = std (sample); 3 = std (population).
      Done before the selection of time steps: 11 = centering for days of the year; 12 = std for days (sample); 13 = std (population); 21 = centering for months; 22 = std for months (sample); 23 = std (population); 31 = centering on the monthly mean (running 31-day window); 32 = std for months (sample, running 31-day window); 33 = std (population, running 31-day window)
  fil:<integer> : gaussian time filter: int > 0 = low pass; int < 0 = high pass
  pca:<float|integer> : PCA of the parameter data set; if <float>: retain this fraction of explained variance; if <int>: number of PCs
  pcw:<float|integer> : as pca but with weighting by explained variance
  seq:<sequence length for extension>
  wgt:<weighting factor>
  cnt:<file name> : file name to write centroid (composite) values to; if the extension is *.nc it is netcdf format, otherwise ascii
they are replaced by running numbers given by the fdt: and ldt: flags (see below).

Information about the time and grid coordinates of the data set is retrieved from the grid data file automatically.

4.1.4 Other data formats

At the moment there is no direct support for other data formats. Also, it may be necessary to convert NetCDF files first, if e.g. the time axis is not compatible. A good choice in many respects might be the software package CDO (climate data operators, https://code.zmaw.de/projects/cdo), which is released under the GNU General Public License v2 (GPL). Attention has to be paid to the type of calendar used. It can be adjusted by:

cdo setreftime,1-01-01,00:00,hours -setcalendar,standard input.nc output.nc

where "hours" has to be replaced e.g. by "days" or another appropriate time step, if the time step given in the input file differs.

4.2 Self-generated formats

4.2.1 Binary data format

Binary data files are those written with form="unformatted" by FORTRAN programs. For input, their structure has to be known and provided by the specifications lon:, lat:, fdt:, ldt: and eventually ddt: and mdt:. Reading of unformatted data is considerably faster than reading formatted files. Therefore it might be a good idea to use the -writedat option to create a binary file from one or more other files if many runs of cost733class are planned. If the extension of the file name is "bin", unformatted files are assumed.
(2003), originally including 40 different types. The types are defined according to the wind field (U and V) of a certain level, as well as according to the cyclonicity of the pressure fields of a first and a second level, and according to total precipitable water; the latter two (cyclonicity and water) only if the corresponding data sets are provided at the command line. In comparison to the initial OWLK method, the recent classification WLKC733, which is included in cost733cat, provides a few simplifications regarding output parameters (see Philipp et al. 2010).

The alphanumeric output consists of five letters. The first two letters denote the flow pattern in terms of a dominant wind sector, counting clockwise, i.e. 01 = NE, 02 = SE, 03 = SW, 04 = NW and 00 = undefined (varying directions). For the determination of the dominant wind sector the true wind direction obtained from the U and V components at 700 hPa is used, and circulation patterns are derived from a simple majority threshold of the weighted wind field vectors at 700 hPa. If no majority could be found, the class 00 will be selected. For counting the respective wind directions, a weighting mask putting higher weights on grid points in the centre of the domain is applied. The third and fourth letters denote Anticyclonicity or Cyclonicity at 925 hPa and 500 hPa respectively, based on the weighted mean value of the quasi-geostrophic vorticity, again putting higher weights on central grid points. The fifth letter denotes the humidity conditions based on total precipitable water.
...lon:-10:30:2.5 lat:35:60:2.5 fdt:2000:1:1 ldt:2008:12:31 ddt:1d nrm:1 fil:31 pcw:0.9 -clain pth:HCL10.cla dtc:3 -met KMN -ncl 10 -v 3

6.5.3 CKM (ckmeans): k-means with dissimilar seeds

In this classification a modified minimum-distance method is used for producing the seeds for the k-means clustering. According to the concept of optimum complexity, which is applied to the gain in explained variance with a growing number of classes, the authors of this method suggest a number of around 10 clusters (Enke and Spekat 1997). The following steps are performed:

- The initialization takes place by randomly selecting one weather situation (object).
- In a stepwise procedure the starting partition, consisting of the ten most dissimilar weather situations (days), is gradually identified (see Fig. 6.6). Similarity and dissimilarity are defined by a variant of the euclidean distance measure:

      RMSD = sqrt( (1/n) * sum_{i=1..n} R_i^2 )

  with R = difference between the class centroid value and the current day's value, and n = number of gridpoints. If more than one atmospheric field is used, a normalization is necessary to maintain comparability and to retain the additive nature of the distances (see Enke et al. 2005 for more details).
- All remaining days of the archive are assigned to their most similar class. With each day entering a class a position shift takes place, which in turn makes it necessary to re-compute the centroid positions. As a consequence the multi-dimensional
[Example RAN output: for each run the explained cluster variance and the class sizes are listed, e.g. run 10: ECV = 0.002520445169, class sizes 378 373 364 367 399 334 395 326 352; Best: 0.0040255630.]

6.6.2 RAC (randomcent)

In contrast to the method above, input data have to be provided for method RAC. An arbitrary object of these input data is used as the seed (key object, centroid) for each class. All the rest of the input data are then assigned to these centroids by determination of the minimum Euclidean distance. If the option -idx <filename> provides the name of a non-existing file, the numbers of the objects used as medoids (which are determined by random) are stored to this file. It has as many rows as there are different classifications (-nrun <integer>) and as many columns as there are types (-ncl <integer>). If, in contrast, the file is old and contains at least enough numbers to provide the seeds, these numbers will be used for seeding the types. However, they all have to be less than or equal to the number of objects to be classified (NOBS). This feature allows for comparable random medoid classifications, just differing e.g. by data preprocessing.

7 Assignments to existing classifications

7.1 ASC (assign)

Using this option, any data set with the same number of variables as a given centroid file can be assigned to the respective classes. By providing the -cntin <filename> argument the centroid data are read by the software (ASCII only). The data given
    ZW = 1.07*[0.5*(p15+p16) - 0.5*(p8+p9)]
       - 0.95*[0.5*(p8+p9) - 0.5*(p1+p2)]                         westerly shear vorticity
    ZS = 1.52*[0.25*(p6+2*p10+p14) - 0.25*(p5+2*p9+p13)
             - 0.25*(p4+2*p8+p12) + 0.25*(p3+2*p7+p11)]           southerly shear vorticity
    Z  = ZW + ZS                                                  total shear vorticity

[Figure 6.3: Grid point and index scheme from Jones et al. (1993).]

Because the Jenkinson-Collison scheme uses 16 grid points out of a grid resolved 5 deg north-south and 10 deg east-west, with an overall extent of 20 deg x 30 deg (latitude by longitude), the classification grid has to be altered if the resolution and extent of the data grid differ (see Fig. 6.1). If the data spans a broader region, the classification grid is centered in the middle of it. If you use -crit 2 (default is -crit 1), the classification grid is extended to the whole data region (see Fig. 6.2). Moreover, the 16 grid points are chosen out of the grid: only their data is read in from the input data file and considered further on.

2. In a second step the classification criteria are calculated. For this, some adjustment is done according to the relative grid point spacings and the middle latitude of the classification region (Tang et al. 2008). Then the wind flow characteristics are computed (Jones et al. 1993), whereafter each day's pressure pattern is represented by westerly, southerly and resultant flow as well as westerly, southerly and total shear vorticity.

3. Finally the classification is done. By reason of being predefined, the possible
[Figure 6.4: The red line indicates the 3D geopotential direction of the given grid point according to the surrounding field. Red indicates high values in the data, blue low ones.]

In a second step all cases are put back into the pool, and each case is assigned to the class of the nearest (most similar) leader, regardless of any threshold. In the cost733class package the default threshold is a correlation coefficient of 0.7. It may be changed using the -thres <real> option.

6.3.2 KIR (kh): Kirchhofer

The Kirchhofer method (Kirchhofer 1974, Blair 1998) works similarly to the Lund (1963) technique. However, the similarity measure does not account only for the simple overall correlation coefficient. Instead, the similarity measure is the minimum of all correlation coefficients calculated for each row (latitude) and each column (longitude) of the grid separately, and of the overall correlation coefficient (see the sketch below). This should make sure that two patterns are similar in all parts of the map. As a consequence this similarity metric is usually smaller than the overall correlation alone. Accordingly, the threshold for finding the key group (see 6.3.1) has to be smaller too. A typical value is 0.4 (the default), but it can be changed using the -thres <real> option. In order to know the grid geometry it is necessary to provide information about the coordinates (see section 4). As an alternative, it is possible to give the number of longitudes (grid columns) by the -nx <integer> option and the number of latitudes (grid rows) by the -ny <integer> option.
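A compact Python sketch of this similarity measure (a hypothetical helper for illustration; it assumes two patterns given as 2D arrays of shape (ny, nx)):

    import numpy as np

    def corr(a, b):
        """Pearson correlation of two flattened arrays."""
        return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

    def kirchhofer_similarity(x, y):
        """Minimum of the overall, all row-wise (latitude) and all
        column-wise (longitude) correlation coefficients."""
        sims = [corr(x, y)]
        sims += [corr(x[i, :], y[i, :]) for i in range(x.shape[0])]  # rows
        sims += [corr(x[:, j], y[:, j]) for j in range(x.shape[1])]  # columns
        return min(sims)

Because the minimum over many partial correlations is taken, two maps only score high if they agree everywhere, which is why the default threshold (0.4) is lower than for the plain Lund correlation (0.7).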
Component Analysis as an objective way of obtaining the number of circulation types (CTs hereafter) and the initial centroids that our final classification will have. The general procedure does not vary with respect to the usual outline of PCA: S-mode structure (gridpoints as the columns and cases as the rows of the matrix), use of the correlation matrix to obtain the new variables, and orthogonal rotation (Varimax) to better adjust the result to the climatic reality. There are a few novelties:

- spatial standardization of the gridded data before the PCA procedure;
- after applying PCA, the use of a rule (the extreme scores) which, by being fulfilled or not by each original case, leads to the final number of CTs and their centroids.

Thus, based on those cases of our sample that, according to their scores, show a spatial structure similar to some rotated PC (cases with high score values with respect to that component, i.e. higher/lower than +2/-2 respectively, and lower scores for the rest of the PCs, i.e. between -2 and +2), we obtain the centroids for each component and phase (positive/negative). These are then our provisional CTs, with clear climatic meaning, since they derive from the real data (existence in reality of the spatial patterns identified by the PCA). In case a PC does not have any real case that fulfills the rule of the extreme scores assigned to it, this spatial pattern is considered
...before "call sort_cla".

Add an entry in the help function: edit the source file arguments.f90 and scroll down to the function help, here to the Methods section. Copy the line from another method in the section explaining -met and change it to briefly describe your method, e.g.:

    write(*,*) " xyz : classifying by x according to y concerning z"

Organizing the compilation: in order to have the new subroutine compiled (otherwise the calling function in main.f90 produces "unknown function xyz"), you have to add it to the list of source code files in src/Makefile.am (line 5):

    bin_PROGRAMS = cost733class
    cost733class_SOURCES = globvar.f90 arguments.f90 avelink.f90 dkmeans.f90 \
        iguapcas.f90 pcaxtr.f90 random.f90 som.f90 assigncor.f90 kirchhofer.f90 \
        lund.f90 prototype.f90 sandra.f90 tmodpca.f90 assign.f90 datainput.f90 \
        hcluster.f90 kmeans.f90 main.f90 randomcent.f90 sandrat.f90 xyz.f90
    cost733class_LDADD = $(top_srcdir)/netcdf-4.0/libsrc/.libs/libnetcdf.a
    AM_FCFLAGS = -I$(top_srcdir)/netcdf-4.0/f90

If you want to continue in a new line, just add a backslash at the very end of the line before. Rebuild the configure script: in the main directory type:

    make distclean
    autoreconf --install

Test the new subroutine:

    ./configure
    make

If it works you can pack it (see below) for others. If not, you have to correct the subroutine source code and try again; however, now you can omit the cleaning
    5 : as 4, but the threshold is interpreted as a percentile (0 to 100)
  HCL : number of the merging criterion:
    1 : Ward's minimum variance method
    2 : single linkage
    3 : complete linkage
    4 : average linkage
    5 : McQuitty's method
    6 : median (Gower's) method
    7 : centroid method
  GWT :
    1 : raw coefficients for vorticity (default)
    2 : normalized vorticity coefficients
  GWTWS :
    1 : classification based on absolute values (default)
    2 : classification based on percentiles
  ECV :
    1 : monthly normalized data (default)
    0 : raw data for calculating the explained cluster variance
  PCT : rotation criteria:
    -1 : direct oblimin (gamma = 0, default)
  WLK :
    0 : use raw cyclonicity for deciding anticyclonic or cyclonic (default)
    1 : use anomalies of cyclonicity
  JCT :
    1 : centered classification grid with an extent of 30 deg W-E, 20 deg N-S (default)
    2 : classification grid extended to the data region
  SOM :
    1 : 1-dimensional network topology
    2 : 2-dimensional network topology
  KMD :
    0 : use Chebychev distance: d = max|xa-xb|
    1 : use Manhattan distance: d = sum|xa-xb|
    2 : use Euclidean distance: d = sqrt(sum (xa-xb)^2)
    p : use Minkowski distance of order p: d = (sum |xa-xb|^p)^(1/p)
  PXE, PXK :
    0 : only normalize patterns for PCA (original)
    1 : normalize patterns and normalize gridpoint values afterwards (default)
by percentiles of the variable distribution.

Options

Strictly necessary options:
- -dat <specification>: input data

The following options define the classification:
- -ncl <int>: determine the number of classes (bins). Default is 9.
- -svar <int>: the variable (column number) of the input data set which should be used to calculate the bin thresholds. Default is 1.
- -crit <int>: determine the threshold type, where <int> may be:
    1 : the threshold is the i-th percentile, where i is cl * (1/ncl)
    2 : the ncl bins are centered around the mean value and have the size dev*2/ncl, where dev is the largest deviation from the mean within the data set
    3 : the bin size is the data range divided by ncl; the bins are not centered
    4 : classification into 2 bins: a lower one of values less than -thres <real> for variable -svar <int>, and one above
    5 : as in crit 4, but the threshold is interpreted as a percentile threshold between 0 and 100
- -dist 0: for crit 1 and crit 5 the percentiles are calculated without the minimum value of the variable. This is useful e.g. for daily precipitation with many dry days.

Options for data output:
- -cla <filename>: output filename for the classification catalog
- -dcol <int>: number of date columns in the classification catalog
- -cnt <filename>: output filename for the class centroids

Output: This method returns one file containing
can easily run on compute servers or clusters in the background. Recently a graphical user interface (GUI) has been added by using the OpenGL menu functions; it is accessible by a right-click into the OpenGL window, however it is not very intuitive yet. Therefore the documentation will concentrate on the CLI interface.

If you type ./src/cost733class just after the compilation process, you will see the output of the help function. In order to run a classification, you have to provide command line arguments for the program, i.e. you have to type expressions behind the cost733class command, which are separated by blanks. The program will scan the command line and recognize all arguments beginning with a "-", several of which have to be followed by another expression. All expressions have to be separated by one or more blanks, which is the reason that no blank is allowed within any single expression (e.g. a file name). Also, all expressions have to be written in lower/upper case letters exactly as given in the help output.

Command lines can be rather simple; however, some methods and data configurations can be complex, so that the command line gets longer and longer the more you want to fine-tune your classification. In this case, and especially if variations of a classification should be run one after another, it can be useful to use shell scripts to run the software, which will be explained below. In order to understand which options play a role for which
- -mod: set the number of days per month to 30 (model months)

Options for data output:
- -cla <filename>: output filename for the classification catalog
- -dcol <int>: number of date columns in the classification catalog
- -cnt <filename>: output filename for the class centroids

Output: This method returns one file containing the classification catalog. Overall class centroids and centroids for each input data set are optional.

Examples

An example with default values:

cost733class -dat pth:slp.dat fmt:ascii lon:-10:30:2.5 lat:35:60:2.5 fdt:2000:1:1:12 ldt:2008:12:31:12 ddt:1d -met LIT -ncl 9 -cla LIT09.cla -dcol 3

Another example, the same but with a different number of classes and output of the centroids:

cost733class -dat pth:slp.dat fmt:ascii lon:-10:30:2.5 lat:35:60:2.5 fdt:2000:1:1:12 ldt:2008:12:31:12 ddt:1d -met LIT -ncl 27 -cla LIT27.cla -dcol 3 -cnt LIT27.cnt

6.1.5 JCT (jenkcoll): Jenkinson-Collison types

The classification scheme according to Jenkinson and Collison (1977) is based on the variability of 16 selected grid points of the pressure field around the region of interest. The possible numbers of types are 8, 9, 10, 11, 12, 18, 19, 20, 26, 27 and 28.

[Figure 6.1: map of the selected grid points (red circles) of the classification grid; see the caption given above.]
each cluster. The rest of the process is organized in iterations. In each iteration each object is checked for being in the cluster with the most similar cluster centroid. Similarity is defined as the Euclidean distance between the centroid and the object in question. If there is a centroid more similar than the one of the current cluster, the object is shifted, and the centroids of the old cluster and of the new cluster are updated (the averages are recalculated) in order to reflect the change in the membership. This means that some of the objects which have been checked before could now be in the wrong cluster, since its centroid has changed. Therefore all objects have to be checked again in the next iteration. This process keeps going on until no shifts occur and all objects are in the cluster with the most similar centroid. In this case the optimization process has converged to an optimized partition, or in other words to an optimum of the distance minimization function (see the sketch below).

Due to the existence of local optima in the minimisation function (reducing the within-cluster variance), different solutions (optimized partitions) may occur for different random starting partitions. Therefore the KMN routine runs 10 times by default and selects the best solution in terms of explained cluster variance (ECV). The number of runs can be set by:

- -nrun <int>: the number of runs for finding better solutions. For better results it is advised to set this parameter to a higher value.
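The core iteration can be sketched in a few lines of Python (a simplified illustration of the procedure described above, not the actual FORTRAN90 implementation; X is the data matrix with one object per row):

    import numpy as np

    def kmeans(X, ncl, niter=100, seed=None):
        """Plain k-means with a random starting partition.
        Empty clusters are not handled in this simplified sketch."""
        rng = np.random.default_rng(seed)
        cla = rng.integers(ncl, size=X.shape[0])      # random starting partition
        for _ in range(niter):
            # centroids: average of all objects within each cluster
            cent = np.array([X[cla == c].mean(axis=0) for c in range(ncl)])
            # reassign each object to the cluster with the nearest centroid
            dist = ((X[:, None, :] - cent[None, :, :])**2).sum(axis=2)
            new = dist.argmin(axis=1)
            if (new == cla).all():                    # converged: no shifts occur
                break
            cla = new
        return cla

Running such a sketch several times with different seeds and keeping the partition with the highest explained cluster variance corresponds to the -nrun behaviour described above.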
...for -met ASS and ASC.

-pca <float|int> : PCA of the input data (all together); if <float>: retain this fraction of explained variance; if <int>: number of PCs
-pcw <float|int> : PCA of the input data, PCs weighted by explained variance; if <float>: retain this fraction of explained variance; if <int>: number of PCs

OUTPUT:

-cla <clafile> : name of the classification output file; contains the number of the class for each object in one line. Default: <basename datfile>_<method>_ncl<int>.cla
-mcla <clafile> : multiple classification output file. Some methods generate more than one (-nrun) classification and select the best one; the option -mcla <file> makes them write out all of them.
-skipempty <F|T> : skip empty classes in the numbering scheme? T = yes, F = no. Default is T.
-writedat <char> : write the preprocessed input data into a file named <char>
-dcol <int> : write date columns to the cla output file: 1 = year; 2 = year, month; 3 = year, month, day; 4 = year, month, day, hour
-cnt <char> : write centroid data to a file named <char>; contains composites of the input data for each type in a column
-sub <char> : (method SUB) write substitute data to a file named <char>
-agg <char> : (method AGG) write aggregated data to a file named <char>
-idx <char> : write the index data used for the classification
initialized by random numbers. Then, for each iteration, the data of the objects to classify are presented one after the other to these neurons, and in each case a winner neuron is determined by the minimum Euclidean distance between the presented object and the neurons. The winner neuron and its neighbours are then modified to become more similar to the presented object, by calculating the weighted mean between the old neuron values and the object, where the weight is a tuning parameter called alpha. How many neighbours of the winner neuron are affected by this modification of the neuron values depends on the radius parameter. At the beginning of the classification process it is large, so that all neurons are affected, but it is slowly reduced, so that at the end only the winner neuron will be modified. Also the weight alpha is slowly modified during the iterations (training epochs), so that at the end the object pattern which is assigned to the winner neuron has a large impact on the modification of the neuron values. If the neuron values do not change any more (convergence), the process is finished and each object is assigned to the most similar neuron (type).

6.5.9 KMD (kmedoids): partitioning around medoids

K-medoids is similar to k-means, as it iteratively minimizes a cost function (within-cluster variability). However, it differs in that it does not use the cluster mean (i.e. centroids) for the measurement of within-type variability, but uses given data points as centers, the so-called medoids.
it is possible to just preprocess input data without calling a distinctive method:

cost733class -dat <specification> [options for selecting dates] -writedat <filename>

In this case data output can be accomplished by the -writedat <filename> option. There are several flags for preprocessing and spatial selection, which are listed and described in the relevant section. An example might be:

cost733class -dat pth:slp.dat lon:-10:30:2.5 lat:35:60:2.5 fdt:2000:1:1:12 ldt:2008:12:31:12 ddt:1d ano:3 fil:30 -per 2000:1:1:12,2005:12:31:12,1d -writedat output.bin

Note that the output data are unformatted binary files. These files can be read by cost733class in a subsequent run. This proceeding can be very helpful if many runs of cost733class are planned.

3.3 Help listing

If you run the cost733class command without any command line argument, or with the -help option, you should see the following help text giving a brief description of the various options.

USAGE: ./src/cost733class -dat <specification> [-dat <specification> ...] [options]

OPTIONS:

INPUT DATA:

-dat <char> : specification of input data. More than one -dat argument is allowed, to combine data of the same sample size but from different files in different formats. <char> consists of various specifications separated by the character "@" or one or more blanks:
    var:<name of variable for fmt:netcdf>
jump can result in strange effects: e.g. for t-mode PCA (PPT) there might be just one single class explaining this jump, while all other classes are empty. Therefore, in case of using more than one data set (i.e. multifield classification), each data set should be normalized as a whole, separately from the others (one single mean and standard deviation for all columns and rows of this single data set together). This can be achieved by using the flag

- wgt:<float>

for at least one data set, where <float> can be 1.D0 (the D is for double precision) in order to apply a weight of 1.0. If a weighting factor is given using the flag wgt:<float>, the normalized data sets are multiplied by this factor respectively. The default value is 1.D0; however, it will be applied only if the flag wgt:<float> is given for at least one data set. If this is the case, all data sets will always be normalized and weighted, eventually with 1.D0 if no individual weight is given.

In case of methods using the Euclidean distance, the square root of the weight is used in order to account for the square in the distance calculations. This allows for a correct weight of the data set in the distance measure. Also, different numbers of variables (grid points) are accounted for by dividing the data by the number of variables. Therefore the weight is applied to each data set as a whole, with respect to the others. If e.g. a weight of 1.0 is given
...<ext> : write the parameter composite (centroid) to a file of a format depending on the extension: txt = ascii (x, y, z, data); nc = netcdf.

4.3.6 Options for selecting dates

If there is at least one input data set with given flags for the date description, this information about the time dimension can be used to select dates or time steps for the procedure. Such a selection applies to all data sets if more than one is used. Note that the following switches are options beginning with a "-", because they apply to the data as a whole.

- -per <YYYY:MM:DD:HH,YYYY:MM:DD:HH,nX>: this option selects a certain time period, starting at a certain year (1st YYYY), month (1st MM), day (1st DD) and hour (1st HH), ending at another year (2nd YYYY), month (2nd MM), day (2nd DD) and hour (2nd HH), and stepping by n time units (nX). The stepping information can be omitted. The time unit X can be "y" for years, "m" for months, "d" for days, "h" for hours. Stepping for hours must be 12, 6, 3, 2 or 1. The default stepping is 1, while the time unit for the stepping, if omitted, is assumed to be the last date description value (years if only YYYY is given, months if also MM is provided, days for DD, or hours for HH).

- -mon <1,2,3,...>: in combination with the -per argument, the months used by the software can be restricted by the -mon option followed by a list of month numbers.

- -dlist <file>: alternatively it is possible to select time steps continuously or discontinuously
maybe more than is available. If this is the case, you can try to enlarge the stacksize each time you run the software by:

    ulimit -s unlimited

- If the following or a similar error occurs:

    *** glibc detected *** double free or corruption (!prev): 0x08196948 ***

  this is probably due to differing compiler library versions. You can try to get rid of it by:

    export MALLOC_CHECK_=0

- If compiled with grib support, and running cost733class results in the following error:

    GRIB_API ERROR: Unable to find boot.def (grib_context.c at line 156: assertion failure Assert(0)) Aborted

  one has to define the path to the so-called grib definitions. In a Unix environment something like:

    export GRIB_DEFINITION_PATH=<PATH>/cost733class-1.2/grib_api-1.9.18/definitions

  should do it. Use absolute paths.

- Depending on the processor type and the data size, the methods SAN and SOM may run for several hours up to days. This is not a bug.

3 Getting started

3.1 Principal usage

cost733class has been written to simplify and unify the generation of classification catalogues. Its functionality is controlled by command line arguments, i.e. options are given as keywords after the command, which is entered into a terminal or console providing a command prompt. The command line interface (CLI) makes the software suitable for shell scripts and
-ny <int> : (KIR) number of latitudes
-wgttyp <euclid|normal> : adjust weights for simulating Euclidean distances of the original data, or not

OTHER:

-v <int> : verbose; default -v 0, i.e. quiet (not working for all routines yet). 0 = show nothing; 1 = show essential information; 2 = information about routines; 3 = show detailed information about the routines' work (slows down computation significantly)
-help : generate this output and exit

4 Data input

The first step of a cost733class run is to read input data. Depending on the use case these data can have very different formats. Principally there are two kinds: foreign data were not produced by cost733class, whereas self-generated data indeed were.

4.1 Foreign formats

cost733class can read any data in order to classify it. It was originally made for weather and circulation type classification, but most of the methods are able to classify anything, regardless of the meaning of the data; only in some cases information about dates and spatial coordinates is necessary. The concept of classification is to define groups (classes or types; all these terms are used equivalently) encompassing objects (entities or cases; again, these terms mean more or less the same) belonging together. The rule for how the grouping should be established differs from method to method; however, in most of the cases the similarity between objects is utilized, while
- the time steps (time dimension) are set up for each data set; the number of observations is determined and the maximum is used for allocating RAWDAT
- the data sets are read into the RAWDAT array
- if available, the date information for the time steps of each data set is gathered
- if desired and possible, sub-grid sections are selected
- if desired, normalisation/centering of each observation (map) is done
- if desired, a time filter is applied to each variable (time series)
- if desired, map sequences are constructed, enlarging the number of variables
- if desired and possible, date selections are done; the selected data are stored in the DAT array now
- if desired, centralisation/normalisation of each variable of a data set is done (calculation of anomalies)
- if desired, a principal component analysis is done for each data set separately; the data are replaced by the scores and the number of variables changes
- if desired, the data sets are normalized as a whole (over columns and rows) and weighted by the given factors for each data set
- if desired, the data as a whole are compressed by PCA

12.5 The gnu autotools files

The package is maintained by using the gnu autotools. In order to rebuild the configure script, run:

    autoreconf --install

Only three files are needed for autotools:

- configure.ac:

    AC_PREREQ(2.50)
    AC_INIT(cost733class, 1.2, a.philipp@geo.uni-augsburg.de)
    AM_INIT_AUTOMAKE([-Wall -Werror foreign])
the first level in the file is selected.

4.3.4 Flags for data preprocessing

- seq:<integer>: this specifies whether this input data matrix should be extended in order to build sequences of observations for the classification. The integer specifies the length of the sequence. If this flag isn't provided, the sequence length is 1, i.e. no extension will be made. If it is 2, a copy of the data matrix, shifted by one row (time step) into the past, will be concatenated to the right side of the original data matrix, resulting in a doubling of the variables (columns). This copy is shifted by 1 observation (line) downwards, so that each line holds the values of the original variables and the values of the preceding observation (e.g. day). In this way each observation is additionally characterised by the information of the preceding observation, and the history of the variables is included in the classification. <integer> can be as high as desired, e.g. 12 for a sequence of 12 observations. In order to keep the original number of observations, for the first lines, which do not have preceding observations, the values of the same observation are copied to fill up the gaps (see the sketch after this list).

- wgt:<number>: if more than one data set is given, each data set can be weighted relative to the others, i.e. its values count more or less for the determination of the similarity between the observations. If this flag is omitted, the weight is 1.0.
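A Python sketch of the described sequence extension (an illustration of the behaviour, not the original code; X has one observation per row):

    import numpy as np

    def extend_sequences(X, seq):
        """Concatenate seq time-shifted copies of X column-wise.
        Row t then holds the values of observations t, t-1, ..., t-seq+1;
        leading rows without predecessors are filled with their own values."""
        blocks = []
        for shift in range(seq):
            Xs = np.roll(X, shift, axis=0)     # shift rows downwards by `shift`
            Xs[:shift] = X[:shift]             # fill the gap with the same observation
            blocks.append(Xs)
        return np.hstack(blocks)

For seq = 2 this doubles the number of columns, exactly as described above.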
the user (-ncl).

5. An oblique rotation using direct oblimin is applied to the principal components, employing a FORTRAN90 adaptation of the GPA (Gradient Projection Algorithm) according to Bernaards and Jennrich (2005).

6. The principal components are mirrored, whereafter the respective maximum absolute loadings are positive.

7. The projection of the principal component scores of each subset onto the remaining data is realized by calculating the Pearson correlation coefficients between the data and the scores.

8. To evaluate the ten classifications based on the subsets, contingency tables are used. Subsequently, chi-square is calculated between each subset's types: every subset solution is compared with the other nine ones. The subset which gains the highest sum over the nine comparisons is selected, and only its types are returned.

6.2.2 PTT (tpcat): t-mode principal component analysis using orthogonal rotation

PTT is a simplified variant of t-mode principal component analysis with subsequent assignment of the cases to the PC with the maximum loading. Since there is no split into subsamples, the demand on computer memory is relatively high, as is the run time. In contrast to PCT the PCs are rotated orthogonally. The following options are available:

- -idx <idxfile>: scores and loadings are saved to extra files

6.2.3 PXE (pcaxtr): the extreme score method

This methodological approach has as its main goal to use Principal
[The verbose log echoes the parsed parameters (e.g. 0.7, 2.0, 1.0, 0.15), reports the weighting-mask setup ("done") and then lists the defined classes with their wind sector boundaries:]

     1  00AAx   -9999.9   -9999.9
     2  01AAx     330.0      30.0
     3  02AAx      30.0      90.0
     4  03AAx      90.0     150.0
     5  04AAx     150.0     210.0
     6  05AAx     210.0     270.0
     7  06AAx     270.0     330.0

[...and correspondingly 00ACx-06ACx (8-14), 00CAx-06CAx (15-21) and 00CCx-06CCx (22-28) with the same sector boundaries.]

    catalog output: WLK27_YR_S01_U7_V7_Z925_D00.txt

    final result: WLK ecv = 0.138173112002
    csize: 402 294 80 117 443 3690 230 ...
0mb for each timestep are optional.

Examples

An example with default values:

cost733class -dat pth:hgt500.dat fmt:ascii lon:-37:56:3 lat:30:76:2 -dat pth:slp.dat fmt:ascii lon:-37:56:3 lat:30:76:2 -met GWTWS -crit 1 -alpha 7 -beta 3 -gamma 1010 -delta 1015 -cla GWTWS.cla -dcol 3

Another example:

cost733class -dat pth:hgt500.dat fmt:ascii lon:-37:56:3 lat:30:76:2 cnt:GWTWS_hgt500.cnt -dat pth:slp.dat fmt:ascii lon:-37:56:3 lat:30:76:2 cnt:GWTWS_slp.cnt -met GWTWS -crit 2 -alpha 0.275 -beta 0.073 -gamma 0.153 -delta 0.377 -cla GWTWS.cla -dcol 3

Running this command, a classification catalog will be produced, as well as parameter-wise class centroids.

6.1.4 LIT (lit): Litynski threshold based method

Litynski (1969) developed a classification scheme based on sea level pressure maps for the Polish region. The original LIT classification is based on three indices: meridional Wp, zonal Ws, and Cp = Pcentral, where Pcentral is the pressure at the central grid point of the domain. The main steps include:

- Calculate the two indices meridional Wp and zonal Ws. Wp and Ws are defined by the averaged components of the geostrophic wind vector and describe the advection of the air masses.
- Select the Cp index: Cp = Pcentral.
- Calculate lower and upper boundary values for Wp, Ws and Cp for each month as the 33rd and 66th percentiles, assuming a normal distribution.
    2 : normalize patterns and center gridpoint values afterwards
  EVPF, WSDCIM, FSIL, SIL, DRAT :
    0 : evaluate on the basis of the original data values
    1 : evaluate on the basis of daily anomaly values
    2 : evaluate on the basis of monthly anomaly values
  BRIER :
    1 : quantile applied to absolute values (default)
    2 : quantile applied to euclidean distances between patterns

-thres <real> :
  KIR and LND : distance threshold to search for key patterns; default 0.4 for kirchhofer and 0.7 for lund
  INT : threshold between bins
  WLK : fraction of gridpoints for the decision on the main wind sector; default 0.6
  PXE, PXK : threshold defining the key group; default 2.0
  BRIER : quantile (0-1, default 0.9) to define extreme events. An event is defined when the euclidean distance to the period's (seasonal, monthly) mean pattern is greater than the given quantile. If <thres> is signed negative (e.g. -0.8), then events are defined if smaller than the given quantile.

-shift :
  WLK : shift the 90-degree wind sectors by 45 degrees. Default is no shift.

-ncl <int> : number of classes; must be between 2 and 256.

-nrun <int> : number of runs for SAN, SAT, SOM, KMN for the selection of the best result. Cluster analysis is by design an unstable method for complex datasets; the more repeated runs
4.2.2.
- -dat <specification>: input data set to which the evaluation metrics are applied
- -dist <integer>: distance metric to use for calculating DRAT: 1 = Euclidean distance (default); 2 = Pearson correlation
- -idx <character string>: base string for the naming of the output file(s)

Output:
- <idx>_drat.list: DRAT indices estimated over all variables from the input data set for months, seasons and the whole year (jan feb mar apr may jun jul aug sep oct nov dec win spr sum aut yea)

8.5 FSIL (fsil): Fast Silhouette Index

The Silhouette index (SIL) according to Rousseeuw (1987) is often used for evaluating the quality of cluster separation. As the calculation of the Silhouette index after Rousseeuw (1987) is rather CPU-intensive when applied to large data sets, a modified approach is used for estimating a faster Silhouette index (FSIL). The difference between FSIL and SIL is that for FSIL, for any case (day) i, the distances to its own class (fa) and to its nearest neighbouring class (fb) are estimated as the euclidean distances to the respective class centroids, and not, as for SIL, as the average distance between the case and all cases in its own class and its closest class respectively:

    FSIL = (1/N) * sum_{i=1..N} (fb_i - fa_i) / max(fa_i, fb_i)        (8.10)

Command line parameters relevant for FSIL:
- -clain <spec>: catalog input, see 4.2.2
- -step <integer>: missing value indicator for catalog data
- -dat <specification>: input data set
[...the remaining lines of the verbose output list the class sizes (csize), which sum up to the 16436 classified cases.]

6.2 Methods based on eigenvectors

cost733class contains source code to calculate the eigenvectors of the input data set by singular value decomposition.

6.2.1 PCT (tpca): t-mode principal component analysis using oblique rotation

The classification by obliquely rotated principal components in T-mode (PCT, also called TPCA: Principal Component Analysis in T-mode) is based on Huth (2000). With regard to computer resources, and simultaneously to speed up the calculation of the PCA, the data is split into subsets. Consequently, the principal components obtained for each subset are projected onto the rest of the data.

1. The data is standardised spatially: each pattern's mean is subtracted from the data, then the patterns are divided by their standard deviations.
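In Python this spatial (pattern-wise) standardization could look as follows (a minimal sketch, assuming X holds one pattern per row):

    import numpy as np

    def standardize_patterns(X):
        """Standardize each pattern (row) to zero mean and unit variance,
        as required before the t-mode PCA."""
        mean = X.mean(axis=1, keepdims=True)
        std = X.std(axis=1, keepdims=True)
        return (X - mean) / std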
Since within this method the number of types is defined by the number of wind sectors and the input data, the -ncl flag can be omitted and is ignored. There are several special flags controlling the WLK classification beside the control via input data sets:

- step <int> : the number of wind sectors. The 360 degrees of the wind rose will be divided by <int> to determine the threshold angles for the different wind sectors.
- shift : shift the sectors defined by -step <int> by half of the sector angle, i.e. north will be in the center of a sector.
- thres <real> : fraction of grid points for the decision on the main wind sector; default is 0.6.
- alpha <real> : central weight for the weighting mask; default is 3.D0.
- beta <real> : middle zone weight for the weighting mask; default is 2.D0.
- gamma <real> : margin zone weight for the weighting mask; default is 1.D0.
- delta <real> : width factor for the weighting zones (nx*delta, ny*delta); default is 0.2.

For example, a classification with the maximum number of data set fields and a division into 8 wind sectors:

dir=/alcc/ptmp/geodata/ERA40/ascii
cost733class -met wlk -per era40 -v 2 -step 8 \
 -dat pth:$dir/era40_U700_12Z_195709-200208_domain00.dat@lon:-37:56:3@lat:30:76:2 \
 -dat pth:$dir/era40_V700_12Z_195709-200208_domain00.dat@lon:-37:56:3@lat:30:76:2 \
 -dat pth:$dir/era40_Z925_12Z_195709-200208_domain00.dat@lon:
E, S, SW, W, NW are used. Two additional types for pure cyclonic and pure anticyclonic situations lead to 10 types, and an indifferent type according to cyclonicity to 11 types. For 16 types the following numbers apply: 1-8 cyclonic, 9-16 anticyclonic; and for 24: 1-8 cyclonic, 9-16 anticyclonic, 17-24 indifferent. Adding 2 or 3 cyclonicity types then results in 18 or 19 and 26 or 27 types accordingly.

Options

Strictly necessary options:
- dat <specification> : input data.

The following options define the classification:
- ncl <int> : the number of types as described above; default is 8.
- crit <int> : vorticity index. <int> can be: 1 = no normalization of the vorticity index; 2 = normalization of the vorticity index.

Options for data output:
- cla <filename> : output filename for the classification catalog.
- dcol <int> : number of date columns in the classification catalog.
- cnt <filename> : output filename for the class centroids.

Output

This method returns one file containing the classification catalog. The three prototype patterns are written to NetCDF files (proto001.nc to proto003.nc) in the directory where cost733class was executed. Overall class centroids are optional.

Examples

An example with default values:

cost733class -dat pth:slp.dat@fmt:ascii@lon:-37:56:3@lat:30:76:2 -met GWT -ncl 8 -crit 1 -cla GWT09.cla -dcol 3
10.2 COR (cor) - Correlation

The method COR calculates the so-called reduction of error variance (RV) as well as the Spearman and the Pearson correlation coefficients for all input data sets. Computation is carried out for all variable combinations and results in three matrices which are written to one file named correlations.txt. The equation for RV is

RV = 1 - \frac{RMSE_1^2}{RMSE_2^2}    (10.1)

where RMSE_1 and RMSE_2 are the root mean square errors of the two variables to be compared.

10.3 SUB (substitute) - Substitute

This method replaces class numbers with their corresponding centroid values and results in a file with as many rows as in the given catalog file. Furthermore, each column represents a time series of one variable. Thus the number of columns equals the number of rows in the necessary centroid file.

Command line parameters relevant for SUB:
- clain <spec> : catalog input (see 4.2.2)
- cntin <filename> : centroid input
- sub <filename> : filename for output of the time series

11 Visualization

If configured and compiled with the --enable-opengl option, cost733class accepts the option -opengl and starts a window for viewing the data and watching their classification. Data input and preprocessing have to be configured via the command line interface, though. However, the choice of the classification method, the number of classes and some more options are available within a menu.
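A SUB run might thus be sketched as follows (file names are hypothetical; the centroid file may stem from a previous run using the -cnt option):

cost733class -clain pth:CKM09.cla@dtc:3 -cntin CKM09_centroids.dat -met SUB -sub CKM09_timeseries.txt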
and reconfigure steps from above (your subroutine is already implemented), but say only make again to retry the compilation process.

12.2 Packing the directory for shipping

In order to allow others to download the new package you have to clean it and create a tar archive file. The cleaning is necessary since others might use another operating system and the binaries produced by your system won't work. Cleaning is done by:

make clean

Then you can pack it using the shell script tools/pack.sh. It automatically finds the directory name (which should include the new version number) and runs tar and bzip2 to compress it:

tools/pack.sh

Now you have a new package, e.g. cost733class-99.99.00.tar.bz2, in the top directory that might be uploaded to the homepage and made available for others.

12.3 Use of variables in the subroutine

In order to write a new method subroutine you need some input for your routine. The basic input is already provided by the program and you can just use it. There are two different variable types according to the way you can access them:

- Global variables, being declared in any case. Global variables can be used if you say "use globvar" at the very beginning of your subroutine. They are declared by a module in globvar.f90, and values are provided by other subroutines, in particular datainput.f90, depending on the arguments given on the command line by the user.
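As a minimal sketch, a trivial method subroutine could then look like the following (illustrative only; it assumes the globvar module provides the number of objects NOBS and the catalog array CLA, as suggested by the screen output and code shown elsewhere in this guide - the actual names may differ):

subroutine xyz(ncl)
  use globvar            ! assumed to provide global data, e.g. NOBS and CLA
  implicit none
  integer :: ncl         ! number of classes requested by -ncl
  integer :: t
  ! dummy classification: assign objects to classes in round-robin order
  do t = 1, NOBS
     CLA(t) = mod(t-1, ncl) + 1
  end do
end subroutine xyz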
be a point.

fmt:netcdf means the data should be extracted from NetCDF files. If the file name extension of the provided file name is ".nc", NetCDF format is assumed. The information about the data dimensions is extracted from the NetCDF files.

fmt:binary denotes unformatted data. The dimensions of the data set have to be provided by the specifications lon, lat, fdt, ldt and eventually ddt and mdt. This format is assumed if the file name extension is ".bin".

The default, which is assumed if no fmt flag is given, is ASCII.

- dtc:<int> : If the file format is fmt:ascii and there are leading columns specifying the date of each observation, this flag has to be used to specify how many date columns exist. 1 means that there is only one date column (the first column) holding the year; 2 means the two first columns hold the year and the month; 3 means the first three columns hold the year, the month and the day; and 4 means the first four columns hold the year, month, day and hour of the observation.

- fdt:<YYYY:MM:DD:HH> : first date of data in the data file (date for the first line or row). Hours, days and months may be omitted. For data with fmt:netcdf in multiple files it can be an integer number indicating the first year or running number which will be inserted to replace the placeholder symbols in the file name.

- ldt:<YYYY:MM:DD:HH> : last date of data in the data file (date for the last line or row). Hours, days and months may be omitted. If so, and ld
contribution from small-scale variability and noise is ignored. The retained components were rotated using a VARIMAX procedure to facilitate the spatial interpretation of the principal components and to avoid well-known problems due to the orthogonality constraint of the eigenvector model. Actual maps mix those modes of variability whose intensity, expressed by the weights of the component scores, varies from time step to time step. Thus it is necessary to apply a clustering procedure to identify the most common combinations. First, the initial number of clusters and the mean conditions within each cluster (mean scores) are determined by Ward's hierarchical algorithm. Later, those initial clusters are used as seeds to obtain the final solution through a k-means agglomerative algorithm.

In terms of cost733class command line arguments this method can be realized by the following two commands, the first providing the hierarchical starting partition of the preprocessed data, the second applying k-means to that starting partition:

first provide starting partitions by hierarchical CA of high-pass filtered PC scores:
src/cost733class -dat pth:grid/slp.dat@lon:-10:30:2.5@lat:35:60:2.5@fdt:2000:1:1@ldt:2008:12:31 \
 @ddt:1d -nrm 1 -fil 31 -pcw 0.9 -met HCL -ncl 10 -v 3

run cost733class with k-means using the starting partition created above:
src/cost733class -dat pth:grid/slp.dat@lon:-10
distance between class centroids continually decreases (see Fig. 6.7); the variability within the individual classes, on the other hand, increases.

- After the assignment of all days has been performed and an intermediate stage has been reached, an iterative process is launched. With each pass, members of the clusters are exchanged in order to improve the separation of the classes and their compactness. Gradually a final centroid configuration emerges. In order to retain representativity, classes are kept in the process only if they do not fall short of a certain number, e.g. 5% of all days. Otherwise the class will be removed and its contents distributed among the remaining classes. The iterative process will be stopped if two successive iterations leave the contents of the classes unchanged. The centroids converge towards a final configuration which has no similarity with the starting partition.

The process is sketched in Fig. 6.7 (from Enke and Spekat, 1997). Beginning with the initial conditions (denoted a), the intermediate stage (denoted b) is reached, which is characterized on the one hand by a higher proximity of the classes, but on the other hand by large class volumes, i.e. high within-type variability, since the assignment of days is not optimized. After the iterative exchange process, the final stage (denoted c) is free of overlap, i.e. it is well separated, exhibiting high between-type
cost733class 1.2 User guide

Andreas Philipp, Christoph Beck, Pere Esteban, Frank Kreienkamp, Thomas Krennert, Kai Uwe Lochbihler, Spyros P. Lykoudis, Krystyna Pianko-Kluczynska, Piia Post, Domingo Rasilla Alvarez, Arne Spekat and Florian Streicher

University of Augsburg, Germany
2 Climate and Environment Consulting Potsdam GmbH, Germany
3 Institute of Environmental Research and Sustainable Development, National Observatory of Athens, Greece
Group of Climatology, University of Barcelona, Spain
6 Institut d'Estudis Andorrans, CENMA-IEA, Principality of Andorra
University of Tartu, Estonia
8 Institute of Meteorology and Water Management, Warsaw, Poland
Zentralanstalt fuer Meteorologie und Geodynamik, Vienna, Austria
10 University of Cantabria, Santander, Spain

13.02.2014

Contents

1 Introduction 1
2 Installation 2
  2.1 Getting the source code 2
  2.2 Using configure and make 2
    2.2.1 configure 3
    2.2.2 make 4
  2.3 Manual compilation as an alternative 4
  2.4 Troubleshooting 5
    2.4.1 For configure 5
    2.4.2 For make 5
    2.4.3 For runtime problems 5
3 Getting started 7
  3.2 Quick start 8
    3.2.1 Creating cla
object (e.g. day) or the coordinates provided by lon and lat, while other methods do not depend on it. If necessary, the given dates of a data set have to be described (for each data set, or for the first in case of same lengths) separately; see fdt, ldt and ddt or dtc. After the data set(s) is (are) loaded, some selections and preprocessing can be done, again by giving specification flags (key words applying to only one single data set, without a leading "-") and options (keywords which are applied to all data sets together, with a leading "-").

4.3.1 Specification flags

In order to read data from files which should be classified, these data have to be specified by the -dat <specification> option. It is important to distinguish between specification flags describing the given data set as contained in the input files on the one hand, and flags and options to select or preprocess these data on the other. The <specification> is a text string or sequence of key words providing all information needed to read and process one data set (note that more data sets can be read, thus more than one specification may be needed). Each kind of information within such a string or key word sequence is provided by a <flag> followed by a ":" and a value. The different information substrings have to follow the -dat option, i.e. they are recognized to belong together to one data set as long as no other option (beginning with "-") appears. They may be concatenated by the
order of the columns provided in the input data. If the cnt:<file>.nc flag has been given within the individual specification of a data set, the corresponding centroids for this parameter will be written to an extra file. Thus it is possible to easily discern the type composites for different parameters. The file format depends on the extension of the offered file name: the extension ".nc" leads to NetCDF output (which might be used for plotting, e.g. with grads), the extension ".txt" denotes ASCII output including the grid point coordinates in the first two columns (if available), while the extension ".dat" skips the coordinates in any case (useful for viewing or for further processing, e.g. with -met ASC).

5.3 Output on the screen

The verbosity of the screen output can be controlled by the -v <int> flag:

- v 0 : NONE, errors only
- v 1 : MAIN, warnings, major calls and proceeding
- v 2 : SUB, major subroutines' calls and proceeding, routines' major results
- v 3 : DET, detailed: all calls and proceeding, routines' intermediate results, no arrays
- v 4 : ALL, extensive: results, arrays etc.

At the end of each classification the explained cluster variance (ECV) is calculated and printed on the screen. Further on, the final class frequencies are printed on the screen.

5.4 Output of the input data

Providing the option -writedat <filename> writes the input data to a single file as it would be used for classification,
distribution. Interpolate these thresholds throughout the year for each day. We have the components N (for Wp), E (for Ws) and C (for Cp) when the indices Wp, Ws and Cp for a day are less than the lower boundary value. We have the components 0, 0 and 0 when the indices are between the lower and upper boundary values. We have the components S (for Wp), W (for Ws) and A (for Cp) when the indices are not less than the upper boundary value. Finally, the types are the 27 superpositions of these three components. For example, a day with Wp below the lower bound, Ws between the bounds and Cp above the upper bound gets the components N, 0 and A, i.e. type N0A = 3. Thus the 27 types are defined by:

N0C = 1   N00 = 2   N0A = 3
NEC = 4   NE0 = 5   NEA = 6
0EC = 7   0E0 = 8   0EA = 9
SEC = 10  SE0 = 11  SEA = 12
S0C = 13  S00 = 14  S0A = 15
SWC = 16  SW0 = 17  SWA = 18
0WC = 19  0W0 = 20  0WA = 21
NWC = 22  NW0 = 23  NWA = 24
00C = 25  000 = 26  00A = 27

18 types are achieved by omitting the intermediate interval for Cp, according to the following code: case N0C
has to be provided by one or more -dat <specification> arguments. The number of records or lines (e.g. time steps) in the catalog file has to fit the number of records or lines of the data set. By providing the -cnt <filename> option the centroids are written to the respective file. In a subsequent run of cost733class these centroid data can be used for assigning additional data to the centroids (see method ASC above) in order to produce a new catalog file.

8 Evaluation of classifications

Several statistical metrics characterizing classifications in terms of separability among classes and variability within classes (see e.g. Beck and Philipp, 2010) can be applied to existing catalogs provided by -clain <spec>, for a given input data set (-dat <specification>).

8.1 EVPF (evpf) - Explained variation and pseudo F value

This routine calculates the explained variation (EV) on the basis of the ratio of the sum of squares within classes (circulation types) WSS and the total sum of squares TSS:

EV = 1 - \frac{WSS}{TSS}    (8.1)

Additionally, the so-called pseudo F statistic (PF) according to Calinski and Harabasz (1974) is calculated as the ratio of the sum of squares between classes BSS and the sum of squares within classes WSS, thereby taking into account the number of cases n and classes k:

PF = \frac{BSS / (k-1)}{WSS / (n-k)}    (8.2)

Command line parameters relevant for EVPF:
e Data input chapter.

2.3 Manual compilation as an alternative

Within the package directory there is a batch file called compile_mingw.bat for Windows. This script file contains the direct compile command for the gfortran and g95 compilers without any dependencies on any libraries. It can be used if no NetCDF input is needed or if there are troubles compiling the NetCDF package.

2.4 Troubleshooting

2.4.1 For configure

- If configure complains about something missing or a wrong version number, you have to install or update the software packages concerned. For Debian and Ubuntu systems that's rather easy. First you have to find out the official name of the package which contains the files configure was complaining about. Here you can use the tool apt-file, which must be installed first, updated and run:

sudo apt-get install apt-file
apt-file update
apt-file search <filename>

It will then print the name of the package in the first column and you can use it to install the missing package:

sudo apt-get install <package>

2.4.2 For make

- If the following error message appears: "Catastrophic error: could not set locale to allow processing of multibyte characters", setting the environment variable LANG to C in the shell you use for compilation will fix it:

export LANG=C

2.4.3 For runtime problems

- Some methods need a lot of RAM and
the definition of similarity differs again. In order to distinguish between various objects and to describe similarity or dissimilarity, each object is defined by a set of attributes or variables; this is what an entity is made of. A useful model for the data representation is the concept of a rectangular matrix where each row represents one object and each column represents one attribute of the objects. In case of ASCII-formatted input data this is exactly the way the input data format is defined.

4.1.1 ASCII data file format

ASCII text files have to be formatted in such a way that they contain the values of one object (time slice) in one line and the values for one variable (commonly a grid point) in a column. For more than one variable (grid point) the following columns have to be separated by one or more blank letters. Note that tabulator characters (sometimes used by spreadsheet programs as default) are not sufficient to separate columns. The file should contain a rectangular input matrix of numbers, i.e. a constant number of columns at each row and a constant number of rows for each column. No missing values are allowed. The software evaluates an ASCII file on its own to find out the number of lines (days, objects, entities) and the number of variables (attributes, parameters, grid points). For this, the numbers in one line have to be separated by one or more blanks; depending on the compiler, commas or slashes may also be used as separators.
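Such a file might look like the following sketch for three time steps (rows) and four grid points (columns); the values are purely hypothetical:

1012.5 1013.1 1009.8 1008.2
1011.0 1012.4 1010.3 1009.9
1008.7 1010.0 1011.6 1012.8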
regarded as a statistical artifact, this theoretical CT according to PCA is eliminated and the final number of CTs of the classification diminishes. Finally, to assign each case of the sample to one of the CTs, the Euclidean distance is used. This is calculated using the multivariate coordinates of each case, expressed by its scores, and the location in the PC space of the centroid of each CT. Obviously, these centroids and the number of clusters can also be used as initial seeds for an optimization algorithm like k-means (see method PCAXTRKM, PXK). Once the entire sample has been classified we have our catalog of final CTs. For more details about the rule of the extreme scores see Esteban et al. (2006) and Philipp et al. (2010).

We could say that this approach is an attempt to reproduce the T-mode PCA approach, which uses the coefficients obtained with PCA (correlations) for distributing the cases among the different groups considered; PXE employs the PC scores to do this assignment, reducing the calculation requirements substantially. On the other hand, some uncertainty is introduced by the use of a similarity threshold based on the PC scores (normally +/-2); thus the suitability of this threshold value depends on the variability of the sample. The normally used +/-2 can be inappropriate for samples with very few cases or with little variability, making it advisable to change to, for example, a +/-1 threshold (see Esteban et al., 2005).

Options for PXE:
Huth, R. (1996). An intercomparison of computer-assisted circulation classification methods. Int. J. Climatol., 16, 893-922.

Huth, R. (2000). A circulation classification scheme applicable in GCM studies. Theor. Appl. Climatol., 67, 1-18.

Jenkinson, A. and Collison, B. (1977). An initial climatology of gales over the North Sea. In: Synop. Climatol. Branch Memo 62, Meteorological Office.

Jolliffe, I. and Philipp, A. (2010). Some recent ideas in cluster analysis. Physics and Chemistry of the Earth, 35, 309-315.

Jones, P., Hulme, M. and Briffa, K. (1993). A comparison of Lamb circulation types with an objective classification scheme. International Journal of Climatology, 13, 655-663.

Kalkstein, L. S., Tan, G. and Skindlov, J. A. (1987). An evaluation of three clustering procedures for use in synoptic climatological classification. J. Appl. Meteor., 26, 717-730.

Kalnay, E., Kanamitsu, M., Kistler, R., Collins, W., Deaven, D., Gandin, L., Iredell, M., Saha, S., White, G., Woollen, J., Zhu, Y., Leetmaa, A., Reynolds, R., Chelliah, M., Ebisuzaki, W., Higgins, W., Janowiak, J., Mo, K., Ropelewski, C., Wang, J., Jenne, R. and Joseph, D. (1996). The NCEP/NCAR 40-year reanalysis project. Bull. Amer. Meteor. Soc., 77, 437-470.

Kirchhofer, W. (1974). Classification of European 500 mb patterns. In: Arbeitsbericht de
separator symbol "@" directly together without any blank in between, or they may be separated by one or more blanks. The order of their appearance is not important; however, if one flag is provided more than once, the last one will be used. Please note that flags differ from options by the missing leading minus "-". The end of a specification of a data set is recognized by the occurrence of an option, i.e. by a leading "-". The following flags are recognized:

4.3.2 Flags for data set description

- var:<character> : This is the name of the variable. If the data are NetCDF files, this must be exactly the variable name as in the NetCDF file, else it could not be found, resulting in an error. If this flag is omitted, the program tries to find the name on its own. This name is also used to construct the file names for NCEP/NCAR Reanalysis data files (see the pth flag). In case the format is ASCII, the name is not important and this flag can be omitted, except for special methods based on special parameters (e.g. the WLK method needs uwnd, vwnd etc.). The complete var flag might look e.g. like: var:slp

- pth:<character> : In case of ASCII format this is the path of the input file, i.e. its location (directory) including the file name. Please note that, depending on the system, some shortcuts for directories may not be recognized, like the "~" symbol for the home directory. E.g.: pth:/home/user/dat/slp.txt. In case of a fmt:netcdf
Another example:

cost733class -dat pth:slp.dat@fmt:ascii@lon:-37:56:3@lat:30:76:2 -met GWT -ncl 27 -crit 2 -cla GWT27.cla -dcol 3

6.1.3 GWTWS (gwtws) - Large-scale circulation types

Based on GWT (see chapter 6.1.2). The classification is done using GWT with 8 types for the 500 mb geopotential. If the mean wind speed at 500 mb, derived from the geopotential field, is lower than 7 m/s, it is convective, resulting in one of the following three types:

- if the mean sea level pressure is lower than 1010 mb, it is "low" (type 9);
- if the mean sea level pressure is higher than 1015 mb, it is "high" (type 10);
- else, or if the mean wind speed at 500 mb is lower than 3 m/s, it is "flat" (type 11).

If the mean wind speed at 500 mb is higher than 7 m/s, it is advective and the types are identical to GWT using 8 types.

Options

Strictly necessary options:
1. -dat <specification> : input data, the 500 mb geopotential field.
2. -dat <specification> : input data, the sea level pressure as field or mean value.

The following options define the classification:
- crit <int> : handling of thresholds. <int> can be: 1 = the four thresholds (default 7 m/s, 3 m/s, 1010 mb and 1015 mb) can be varied in the following manner: a mean wind speed at 500 mb lower than alpha <real> m/s will result in one of the following three types: if the mean sea level pressure is lower than gamma <real> mb it is "low"
letter denotes dry or wet conditions according to a weighted area mean value of the total water content of the whole atmospheric column, which is compared to the long-term daily mean.

In order to achieve a classification system for 28 types (WLKC28), six main wind sector types are used (01 = 330-30 degrees and so on in 60-degree steps), plus one undefined type, which are further discriminated by cyclonicity as described above. 18 types (WLKC18) are produced by using nine wind sector types (sector 01 = 345-15, 02 = 15-75, 03 = 75-105, 04 = 105-165, 05 = 165-195, 06 = 195-255, 07 = 255-285, 08 = 285-345 degrees, 00 = undefined) and cyclonicity at 925 hPa, while the latter is omitted for producing nine types (WLKC09).

The third and fourth letters, denoting cyclonicity of the pressure fields of a first and a second level and total precipitable water respectively, are analyzed only if the corresponding data sets are provided at the command line. Thus the order of the data sets provided by the command line arguments plays an important role for this method. It is important to note that any meteorological parameters other than U and V for the first two data fields do not make any sense for this method.

For the determination of the main wind sector and cyclonicity a weighting mask is applied to the input fields, putting the highest weight on the central zone, intermediate weight on the middle zone and the lowest weight on the margin zone. The mask is shown on verbose level > 1; a sketch of how such a mask could be built is given below.
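The following is a minimal sketch of such a zone weighting (illustrative only, not the actual cost733class code; in particular the interpretation of delta as the relative zone width nx*delta, ny*delta is an assumption based on the -delta flag description):

subroutine weightmask(nx, ny, alpha, beta, gamma, delta, w)
  implicit none
  integer, intent(in)  :: nx, ny
  real(8), intent(in)  :: alpha, beta, gamma, delta
  real(8), intent(out) :: w(nx,ny)
  integer :: mx, my
  w = gamma                                   ! margin zone (lowest weight)
  mx = nint(nx*delta)
  my = nint(ny*delta)
  w(1+mx:nx-mx, 1+my:ny-my) = beta            ! middle zone
  w(1+2*mx:nx-2*mx, 1+2*my:ny-2*my) = alpha   ! central zone (highest weight)
end subroutine weightmask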
of non-hierarchical cluster analysis which accounts for the distance in time between two objects and not only for their similarity. The Euclidean distance is therefore weighted by a factor reducing the dissimilarity between two patterns if they are close together on the time axis. The impact of this weight can be controlled by -lambda <real>. High values of lambda cause temporally neighbouring patterns to be classified into the same cluster more easily. The weighting has been implemented into the SANDRA method; thus all options of that method (see above) apply here as well.

6.5.8 SOM (som) - Self-organizing feature maps (neural network according to Kohonen)

Self-organizing maps describe a way to arrange types, defined by their features (grid point values), in a structure where similar types are adjacent to each other (a map). This structure commonly is a two-dimensional matrix describing a network of neurons, i.e. types, which are connected to their neighbours in the four directions. However, this structure can also be a one-dimensional arrangement where the neurons are connected to their left and right neighbours only, which is the case for the method implemented in cost733class. The number of neurons is given by the number of types provided by the -ncl <int> flag. Each neuron has as many features as there are grid points or variables in the input data set (columns in case of ASCII input). The values of each neuron are
of the input data. The first number is the minimum latitude, where latitudes south of the equator are given by negative numbers. The second number is the maximum latitude. The third number is the grid spacing in latitudes, e.g.: lat:30:70:2.5. Note that at the moment a south-to-north order must be given in the dat file, i.e. the left columns hold the southernmost latitudes.

4.3.3 Flags for spatial data selection

- slo:<number>:<number>:<number> : If given, this flag selects a subset of grid points from the grid in the input data. The first number is the minimum longitude, where longitudes west of 0 degrees are given by negative numbers. The second number is the maximum longitude. The third number is the grid spacing in longitudes. The user must take care that the selected grid fits into the given grid.

- sla:<number>:<number>:<number> : If given, this flag selects a subset of grid points from the grid in the input data. The first number is the minimum latitude, where latitudes south of the equator are given by negative numbers. The second number is the maximum latitude. The third number is the grid spacing in latitudes. The user must take care that the selected grid fits into the given grid.

- sle:<int> : This flag specifies the atmospheric level to be read if the fmt:ncepr flag is given. This is relevant for NetCDF data which may contain data of more than one level. If omitted
calling command to main.f90 and add its source code file name to the list cost733class_SOURCES in the file src/Makefile.am. Then run:

autoreconf --install
./configure && make && src/cost733class

Note that autoreconf needs the packages autotools and libtool to be installed.

The single steps in detail: in order to implement a new subroutine for a method variant or a complete new method, the following steps are necessary.

- Create a file containing the subroutine in the src directory. This file begins with "subroutine xyz(arg1,arg2)" and ends with "end subroutine xyz". In the brackets you can provide a list of arguments needed to calculate the classification. The file has to have the extension .f90. More on variables see below.

- Add a calling function to main.f90. This calling function should, for the example above, look like this:

! xyz method
if(trim(methodname)=="xyz")then
  if(verbose>0)then
    write(*,*)
    write(*,"(a)")" calling xyz ..."
  endif
  call xyz(arg1,arg2)
  call sort_cla(ncl)
endif

The if instruction says that the routine is called if the command line argument for -met was "xyz". The write instructions give information about that to the screen if the verbose level is > 0. Then the subroutine is called, providing the arguments, and after it has finished, the resulting catalogs are sorted by the size of the classes. You can omit the last step by writing
Blair, D. (1998). The Kirchhofer technique of synoptic typing revisited. Int. J. Climatology, 18, 1625-1635.

Buishand, T. and Brandsma, T. (1997). Comparison of circulation classification schemes for predicting temperature and precipitation in the Netherlands. Int. J. Climatology, 17, 875-889.

Calinski, T. and Harabasz, J. (1974). A dendrite method for cluster analysis. Commun. Stat., 3, 1-27.

Comrie, A. (1996). An all-season synoptic climatology of air pollution in the U.S.-Mexico border region. Professional Geographer, 48, 237-251.

Dittmann, E., Barth, S., Lang, J. and Mueller-Westermeier, G. (1995). Objektive Wetterlagenklassifikation. Ber. Dt. Wetterd., 197.

Ekstroem, M., Joensson, P. and Baerring, L. (2002). Synoptic pressure patterns associated with major wind erosion events in southern Sweden 1973-1991. Climate Research, 23, 51-66.

Enke, W., Schneider, F. and Deutschlaender, T. (2005). A novel scheme to derive optimized circulation pattern classifications for downscaling and forecast purposes. Theor. Appl. Climatol., 82, 51-63.

Enke, W. and Spekat, A. (1997). Downscaling climate model outputs into local and regional weather elements by classification and regression. Climate Research, 8, 195-207.

Hartigan, J. (1975). Clustering Algorithms. Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons.

Hewitson, B. and Crane, R. (1992). Regional climates in the GISS global circulation model and synoptic-scale circulation. Journal of Climate, 5, 1002-1011.
dnl looking for compilers
AC_PROG_CC([gcc icc])
AC_LANG([C])
AC_PROG_FC([gfortran ifort])
AC_CONFIG_FILES([Makefile src/Makefile])
AC_CONFIG_SUBDIRS([netcdf-4.0])
dnl makefile
AC_OUTPUT

- Makefile.am:

SUBDIRS = netcdf-4.0 src

- src/Makefile.am:

bin_PROGRAMS = cost733class
cost733class_SOURCES = globvar.f90 arguments.f90 avelink.f90 dkmeans.f90 iguapcas.f90 pcaxtr.f90 random.f90 som.f90 assigncor.f90 kirchhofer.f90 lund.f90 prototype.f90 sandra.f90 tmodpca.f90 assign.f90 datainput.f90 hcluster.f90 kmeans.f90 main.f90 randomcent.f90 sandrat.f90
cost733class_LDADD = $(top_srcdir)/netcdf-4.0/libsrc/.libs/libnetcdf.a
AM_FCFLAGS = -I$(top_srcdir)/netcdf-4.0/f90

Bibliography

Beck, C., Jacobeit, J. and Jones, P. (2007). Frequency and within-type variations of large-scale circulation types and their effects on low-frequency climate variability in Central Europe since 1780. Int. J. Climatology, 27, 473-491.

Beck, C. and Philipp, A. (2010). Evaluation and comparison of circulation type classifications for the European domain. Physics and Chemistry of the Earth, 35(9-12), 374-387.

Bernaards, C. and Jennrich, R. (2005). Gradient projection algorithms and software for arbitrary rotation criteria in factor analysis. Educational and Psychological Measurement, 65(5), 676.

Bissolli, P. and Dittmann, E. (2003). Objektive Wetterlagenklassen. In: Klimastatusbericht 2003, DWD.
part of the classification, it is helpful to distinguish between options relevant for the data input and preprocessing and options relevant for the routines themselves. In order to better understand how the software works and how it can be used, the workflow is briefly described in the following.

In the first step the command line arguments are analysed (subroutine arguments). For each option, default values are set within the program; however, they are changed if respective options are provided by the user. The next step is to evaluate the data which should be read from a file; especially the size of the data set has to be determined here in order to reserve (allocate) computer memory for it. Then the data are read into a preliminary array (RAWDAT) and eventually are preprocessed (selected, filtered etc.) before they are put into the final, synchronized DAT array which is used by the methods.

In some cases, depending on the classification or evaluation operation, existing catalogues have to be read. This is done in the next step by the subroutine classinput. Afterwards the subroutine for the desired operation is called. If this subroutine finishes with a new catalogue, it is written to a classification catalogue file (.cla) and the program ends. Other routines, e.g. for comparison or evaluation of classification catalogues, might just produce screen output.

3.2 Quick start

As a requirement to write complex commands it is necessary
otherwise with coordinates for .txt, without for .dat.

- readncdate : read the dates of the observations from the NetCDF time variable rather than calculating them from the actual_range attribute of the time variable. This slows down the data input but can override buggy attribute entries, as e.g. in slp.2011.nc.

- per <char> : period selection; <char> is e.g. 2000:1:1:12,2008:12:31:12,1d

- mon <char> : list of months to classify (MM:MM:MM...), e.g. mon:12:01:02 classifies winter data; default is all months. Only applies if -per is defined.

- mod:LIT : set all months to be 30 days long; relevant for -met LIT.

- dlist <char> : list file name for selecting a subset of dates within the given period for classification. Each line has to hold one date in the form YYYY MM DD HH (year, month, day, hour). If the hour, day or month is irrelevant, please provide the constant dummy number 00.

- cat <spec> : classification catalog input, where <spec> consists of the following flags: pth:<path for file>, fdt:<first date>, ldt:<last date>, ddt:<time step, e.g. @ddt:1d for one day>, dtc:<number of date columns>, mdt:<list of months, e.g. @mdt:01:02:12>.

- catname <char> : file with as many lines as catalogs read in by -cat; each line contains the name of the catalog of the corresponding column in the catalog file.

- cntin <char> : centroid input file
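For example, a winter-only classification of a sub-period might be sketched as follows (file name and dates are hypothetical):

cost733class -dat pth:slp.dat@fdt:1957:09:01@ldt:2002:08:31@ddt:1d -per 1961:1:1:12,1990:12:31:12,1d -mon 12:01:02 -met KMN -ncl 9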
variability and consists of comparably compact classes, i.e. it exhibits low within-type variability (see Fig. 6.8).

6.5.4 DKM (dkmeans) - A variant of ckmeans

DKM differs from CKM in three points:

1. For finding dissimilar seed patterns for the 2nd and further clusters, not only the sum of the distances to the preceding seeds is used, but the minimum of these distances is maximised.
2. There is no 5% minimum frequency limit for keeping all clusters. In most cases, if the population temporarily decreases below 5% of the total sample size, it is filled up again later during the iterations.
3. Furthermore it is possible to select a number of runs. As the initiation of the starting partition partially follows random assignment, results differ from run to run. Therefore the solution with the highest explained cluster variance will be chosen. The default value for -nrun is 10.

6.5.5 PXK (pcaxtrkm) - k-means using PXE starting partitions

This method follows the application of PXE but differs in so far as the final CTs are established using the iterative clustering method k-means. In this way, for the

Figure 6.6: Starting seed points for the CKM classification method using the example data set slp.dat in the test directory and 9 classes. The projection of each daily pattern into the three-dimensional PCA phase space is shown by small spheres; large spheres represent the projected centroids.
this must be the variable name in the NetCDF file.
pth:<path for input data file> : in case of NetCDF it may include placeholder symbols to be replaced by numbers given by the fdt and ldt flags.
fmt:<ascii|netcdf|grib> : default is ascii. If the file name ends with ".nc", netcdf is assumed; for ".grib", ".grb", ".gribX", ".grbX" (X = 1, 2), grib is assumed.
  ascii : (default) ASCII file with one line per day (object to classify) and one column per variable (parameter defining the objects); the columns have to be delimited by one or more blanks. The number of objects and variables is scanned by this program on its own.
  netcdf : data in self-describing NetCDF format; time and grid are scanned automatically.
  grib : data in self-describing format; time and grid are scanned automatically.
dtc:<1-4> : number of leading date columns (year, month, day, hour) in an ASCII file.
fdt:<YYYY:MM:DD:HH> : first date in data set description.
ldt:<YYYY:MM:DD:HH> : last date in data set description.
ddt:<int><y|m|d|h> : time step in the data set, in years, months, days or hours.
mdt:<list of months> : months in the data set, e.g. @mdt:1:2:12.
lon:<minlon>:<maxlon>:<diflon> : longitude description.
lat:<minlat>:<maxlat>:<diflat> : latitude description.
slo:<minlon>:<maxlon>:<diflon> : longitude selection.
sla:<minlat>:<maxlat>:<diflat> : latitude selection.
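A complete data specification chaining these flags might thus be sketched as follows (file name and dates are hypothetical):

cost733class -dat pth:slp.txt@fmt:ascii@dtc:3@fdt:2000:1:1@ldt:2008:12:31@ddt:1d@lon:-37:56:3@lat:30:76:2 -met KMN -ncl 9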
similarity between two partitions (classifications) of one set of observations are calculated by this routine:

- Rand Index (RI)
- Adjusted Rand Index (ARI)
- Jaccard Index (JI)
- Mutual Information (MI)
- Normalized Mutual Information (NMI)

See Hubert and Arabie (1985), Kuncheva and Hadjitodorov (2004), Southwood (1978), Strehl and Gosh (2002), Kalkstein et al. (1987), Milligan and Cooper (1985) and Rand (1971) for more information on the different indices.

Command line parameters relevant for CPART:
- clain <spec> : catalog input (see 4.2.2)
- idx <character string> : base string for naming of the output file(s)

Output:
- <idx>_RI.fld, <idx>_ARI.fld, <idx>_JI.fld, <idx>_MI.fld, <idx>_NMI.fld : diversity indices estimated for all pairwise combinations of catalogs from -clain, for months, seasons and the whole year (jan, feb, mar, apr, may, jun, jul, aug, sep, oct, nov, dec, win, spr, sum, aut, yea).

10 Miscellaneous functions

There are some other functions in cost733class that do not fit in any of the previous chapters.

10.1 AGG (agg) - Aggregation

This subroutine provides a basic function for aggregation. At this point yearly sums per variable are supported; however, it is intended to also compute seasonal values with the -mon option. For this function a season is identical with the meteorological season (December, January and February for winter, and so forth).

Command line parameters relevant for AGG:
- agg <filename> : output filename
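An aggregation run might thus be sketched as follows (file names are hypothetical):

cost733class -dat pth:precip.dat@fdt:2000:1:1@ldt:2008:12:31@ddt:1d -met AGG -agg precip_yearly_sums.txt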
the annual cycle. The reduced 500 hPa heights p_tq are given by

p_{tq} = h_{tq} - \bar{h}_t , \quad q = 1, \ldots, n

where n is the number of gridpoints and t = 1, ..., N, with N the number of days. The vector of reduced 500 hPa heights is approximated as

\mathbf{p}_t \approx s_{1t}\mathbf{a}_1 + s_{2t}\mathbf{a}_2 + s_{3t}\mathbf{a}_3 , \quad t = 1, \ldots, N

where a1, a2, a3 are the first three principal component vectors (eigenvectors of the second moment matrix of p_t) and s1, s2, s3 are their amplitudes or scores. The flow pattern of a particular day is described by the three amplitudes s1_t, s2_t, s3_t: s1_t*a1 characterizes the east-west component of the flow, s2_t*a2 the north-south component and s3_t*a3 the cyclonicity or anticyclonicity. In the original classification the range of every amplitude is divided into three equiprobable intervals, and that gives 27 classes, as each pressure pattern is, on the basis of its amplitudes, uniquely assigned to one of the 3 x 3 x 3 possible interval combinations.

The only command line option influencing this method is the number of types:
- ncl <int> : the number of types.

No information about any grid geometry or date is required at the data input specification. Thus an example command could be:

cost733class -dat pth:hgt500.dat -met KRZ -ncl 27 -v 2

In cost733class the following numbers of types can be achieved: 8, 9, 12, 18, 27, 30. For producing 8 types the three amplitudes are divided into two intervals, which is indicated by the standard output on verbosity level 2.
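Written compactly in LaTeX (a restatement of the relations above; \bar{h}_t denotes the spatial mean height of day t):

\[
p_{tq} = h_{tq} - \bar{h}_t, \qquad
\mathbf{p}_t \approx \sum_{j=1}^{3} s_{jt}\,\mathbf{a}_j, \qquad
3 \times 3 \times 3 = 27 \ \text{classes}
\]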
TSS is the total sum of squared differences between all elements and the overall centroid (mean):

TSS = \sum_{i=1}^{n} D^2(x_i, \bar{x})    (8.6)

The input catalog file can have more than one catalog column, e.g. as produced by -met RAC. The ECV is calculated for all catalogs within this file, one after the other.

8.3 WSDCIM (wsdcim) - Within-type standard deviation and confidence interval of the mean

The within-type standard deviation (WSD) is calculated as the pooled standard deviation (SD), in order to take into account the differing sizes n_k of the k classes:

WSD = \sqrt{ \frac{ \sum_{k=1}^{K} (n_k - 1)\,SD_k^2 }{ \sum_{k=1}^{K} (n_k - 1) } }    (8.7)

The range of uncertainty of the variable's mean values associated to each classification is reflected by the confidence interval of the mean (CIM), calculated as the weighted mean (utilizing the class sizes as weights) of the type-specific confidence intervals of the mean for varying confidence levels (1 - alpha):

CIM = \frac{1}{n} \sum_{k=1}^{K} n_k \, z_{1-\alpha/2} \, \frac{SD_k}{\sqrt{n_k}}    (8.8)

Command line parameters relevant for WSDCIM:
- clain <spec> : catalog input (see 4.2.2)
- step <int> : missing value indicator for catalog data
- dat <specification> : input data set to which evaluation metrics are applied
- thres <real> : confidence level for estimating the confidence interval CIM; default value is 0.05
- idx <character string> : base string for naming of the output file(s)

Output:
- <idx>_wsd.list, <idx>_cim.list :
WSD/CIM indices estimated over all variables from the input data set for months, seasons and the whole year (jan, feb, mar, apr, may, jun, jul, aug, sep, oct, nov, dec, win, spr, sum, aut, yea).
- <idx>_wsd.fld, <idx>_cim.fld : WSD/CIM indices estimated for each individual variable from the input data set for months, seasons and the whole year.

8.4 DRAT (drat) - Ratio of distances within and between circulation types

The ratio of the mean pattern correlation (Pearson's r) between days (daily gridded fields) assigned to the same circulation type and the mean pattern correlation between days assigned to different circulation types has been proposed by Huth (1996) as a measure for the separability among circulation types. Here we generalize this evaluation metric by using either the Pearson correlation or the Euclidean distance as the measure of similarity between cases (days), to calculate the mean distance between days (daily gridded fields) assigned to the same circulation type (DI) and the mean distance between days assigned to different circulation types (DO). Note that the interpretation of values of DRAT smaller or greater than 1, respectively, differs when using ED or PC as distance metric:

DRAT = \frac{DI}{DO}    (8.9)

Command line parameters relevant for DRAT:
- clain <spec> : catalog input (see 4.2.2)
daily mean. Afterwards the method BIN is called, which by default just calculates equally spaced percentile thresholds to define 10 (-ncl 10) classes to which the data is assigned. Note that this method uses only the first column (after any date columns, of course).

4.4.3 NetCDF data selection

cost733class -dat var:hgt@pth:data/20thC_ReanV2/hgt.????.nc@fdt:1871@ldt:2008 \
 @slo:-50:50:2.0@sla:20:50:2.0@sle:0950 \
 -writedat hgt_0950_slo-50-50_sla20-50_1871-2008.bin

This command selects a subgrid from 50 degrees west to 50 degrees east by 2 degrees and from 20 degrees north to 50 degrees north by 2 degrees out of the global 20thC_ReanV2 reanalysis grid. After input, in this case, the data are written to a binary file which can be used for further classification by subsequent runs. But of course it would have been possible to provide a -met option to start a classification in this run. Note that this call needs 1.2 GB of memory.

4.4.4 Date selection for classification and centroids

src/cost733class -dat pth:grid/slp.dat \
 @lon:-10:30:2.5@lat:35:60:2.5@fdt:2000:1:1@ldt:2008:12:31@ddt:1d \
 -met CKM -ncl 8 -per 2000:1:1,2008:12:31,1d -mon 6:7:8 -v 3

src/cost733class -dat pth:grid/slp.dat@lon:-10:30:2.5@lat:35:60:2.5@fdt:2000:1:1@ldt:2008:12:31@ddt:1d -cnt DOM00.nc \
 -clain pth:CKM08.cla@dtc:3@fdt:2000:6:1@ldt:2008:8:31@mdt:6:7:8 \
 -per 2000:1:1,2008:12:31,1d -mon 6:7:8 -met CNT -v 3
assumed by the program.

4.2.2 Catalog files

For some procedures, like comparison, centroid calculation, assignment of data to existing catalogs etc., it is necessary to read catalog files from disk. This is done using the option -clain <spec>, where the specification <spec> is similar to the specification of the data input option. The following specifications are recognized:

- pth:<file name>
- fdt:<first date>
- ldt:<last date>
- ddt:<time step> (e.g. @ddt:1d for one day)
- dtc:<number of date columns>
- mdt:<list of months> (e.g. @mdt:01:02:12)

See "specifying data input" for more details on these specifications. As with the data input, more than one classification catalog file may be used by providing more than one -clain option. Note that all catalog files have to be specified with all necessary flags. Especially a wrong number of date columns could lead to errors in further steps.

4.2.3 Files containing class centroids

For the assignment to existing classifications it can be necessary to read files that contain class centroids. This is done with the -cntin <filename> option. Although there is support for NetCDF output of class centroids in cost733class, at the time only ASCII files are supposed to be read successfully. A suitable file might be generated with the -cnt <filename> option in a previous run. The extension of <filename> must be .dat. For further information
parameter to higher values, e.g. 100 or 1000. The only other option relevant for this routine is the number of clusters:
- ncl <int> : the number of clusters.

The standard output of KMN shows the run number, the number of iterations needed to converge, the explained cluster variance, the sum of within-type variances and the cluster sizes. Eventually it is reported that there are empty clusters; this can happen during the optimisation process and stops this run. However, as many runs as needed to reach the given -nrun number are triggered.

Example:

src/cost733class -dat pth:test.dat@lon:3:20:1@lat:41:52:1 -per 1958,1968 -hrs 12 -met KMN -ncl 09

6.5.2 CAP (pcaca) - k-means of time-filtered PC scores and HCL starting partition

Actually this classification scheme is not a method on its own but a combination of two methods and certain data preprocessing steps. This is the reason why there is no argument -met CAP provided or allowed. Instead this classification can be obtained by first running a hierarchical cluster analysis (-met HCL) followed by a k-means cluster analysis (-met KMN), the latter using the catalog of the former as starting partition. However, since this classification procedure was used in COST Action 733 as a method on its own, it is described in the following. It is a two-stage procedure which comprises a Principal Component Analysis to derive the dominant patterns of variability
79. n spec catalog input see 4 2 2 e dat specification input data set to which evaluation metrics are applied e crit integer file output for 1 each individual variable and all variables default or 2 just the latter e idx character string base string for naming of output file s Output e lt idx gt _ev list lt idx gt _pf list EV PF indices estimated over all variables from the input data set for months seasons and the whole year jan feb mar apr may jun jul aug sep oct nov dec win spr sum aut yea 8 Evaluation of classifications T3 e lt idx gt _ev fld lt idx gt _pf fld EV PF indices estimated for each indvidual variable from the input data set for months seasons and the whole year jan feb mar apr may jun jul aug sep oct nov dec win spr sum aut yea 8 2 ECV exvar This routine calculates the explained cluster variance ECV is the same than EV see above values for existing catalogs provided by clain lt spec gt for the given input data set dat specification In contrast to EVPF it does not discern seasons and does not create output files It just calculates the ECV prints it to the screen and exits WSS BOV STen 8 3 where k WSS 0 Y Dzy 8 4 j l i C Cj is the Class j of the k classes and the squared Euclidean distance between an element and its centroid Dix z gt Xu Xa 8 5 Ms l Il UN TSS is the tota
i.e. after application of the preprocessing steps, if any. The type of the resulting file depends on its extension: ".bin" causes unformatted binary output, whereas any other extension produces simple ASCII output. Note that this function can be used to prepare and compile data sets which can be used for input into cost733class or any other software later. Thus, if no method is specified, cost733class can be used as a pure data processing tool.

5.5 Output of indices used for classification

Some methods are based on indices which are calculated as intermediate results; e.g. PCA methods calculate scores or loadings time series. The flag -idx <filename> causes those methods to write out these indices into the according file.

5.6 OpenGL graphics output

If compiled with OpenGL support (./configure --enable-opengl), the switch -opengl opens an X11 window for visualisation of the classification process for some optimisation methods. Additionally the switch -gljpeg produces single image files for each frame, which can be used to produce an animation, e.g. by using the unix command:

ffmpeg -y -r 20 -sameq -i out%06d.jpg -vcodec mjpeg -f avi cost733ckm.avi

6 Classification methods

6.1 Methods using predefined types

6.1.1 INT (interval), BIN (binclass)

This method classifies the objects into bins which are defined either by thresholds calculated as the fraction of the range of a variable or
81. n and the number of latitudes grid rows by the ny integer option 6 Classification methods 56 Figure 6 5 For a given grid point the difference between two time steps is calcu lated from the cosine of the angle between the particular 3D geopotential directions 6 3 3 ERP erpicum Even if the scheme originally was intended only for the classification of 500hPa geopon tential or sea level pressure patterns it is also possible to apply it to other sizes as well Moreover multi field data input is available In a preprocessing step the data is standardised temporally Then for every single grid point variable and time step the 3D geopotential direction is determined see Fig 6 4 Subsequently for one variable s resulting directional fields the euclidean differences between all pairs of fields respectively time steps are computed compare Fig 6 5 That is every day e g is compared with each other Finally if one day is very similar to another this couple of days will achieve an index of nearby 1 If two days are very different their index will be set to around zero In case of multi field data input the daily mean of all the input variables is taken Now the classification can be done Therefore it is counted how often one day s similarity index raises a given threshold usually between 1 and 0 The day which reaches the highest amount of similar patterns will be the reference of the first type All days that o
82. n of time steps 4 Data input 27 3 column wise normalisation of variables using population standard deviation sum of squared deviations divided by n 1 before selection of time steps 11 column wise centralisation of variables using the daily long term mean for removing the annual cycle 12 column wise normalisation of variables using the daily long term mean and sample standard deviation for removing the annual cycle 13 column wise normalisation of variables using the daily long term mean and population standard deviation for removing the annual cycle 21 column wise centralisation of variables using the monthly long term mean for removing the annual cycle 22 column wise normalisation of variables using the monthly long term mean and sample standard deviation for removing the annual cycle 23 column wise normalisation of variables using the monthly long term mean and population standard deviation for removing the annual cycle 31 column wise centralisation of variables using the daily long term mean of 31 day wide moving windows for removing the annual cycle 32 column wise normalisation of variables using the daily long term mean and sample standard deviation of 31 day wide moving windows for removing the annual cycle 33 column wise normalisation of variables using the daily long term mean and population standard deviation of 31 day wide moving windows for removing the annual cycle e fil lt integer gt gaussian time fil
given to data set A (@wgt:1.0) and a weight of 0.5 is given to data set B (@wgt:0.5), then data set A has a doubled influence on the assignment of objects to a type compared to data set B, regardless how many variables (grid points) data set A or B has, and regardless what the units of the data have been. It is advisable to always specify the wgt:<float> flag for all data sets if multiple data sets are used. E.g.:

cost733class -dat pth:slp.txt@wgt:1.D0 -dat pth:shum.txt@wgt:1.D0 -met XYZ

4.3.8 Options for overall PCA preprocessing of all data sets together

Apart from the possibility to apply Principal Component Analysis to each parameter data set separately, the option -pca <float|int> allows a final PCA of the whole data set. As with the separate procedure, the number of PCs to retain can be given as an integer number, or, in case of a floating point number, the fraction of explained variance to retain can be specified. Also, automatic weighting of each PC with its fraction of explained variance can be obtained by using the -pcw <float|int> flag instead of -pca.

4.4 Examples

4.4.1 Simple ASCII data matrix

cost733class -dat pth:era40_MSLP.dat@fmt:ascii@lon:-37:56:3@lat:30:76:2 -met kmeans -ncl 9

This will classify data from an ASCII file era40_MSLP.dat, which holds as many columns as given by the grid of -37 to 56 degrees longitude (3 degree spacing) and 30 to 76 degrees latitude (2 degree spacing). Thus
circulation type classifications utilizing various different methods. The name refers to COST Action 733, an initiative started in the year 2005 within the ESSEM (Earth System Science and Environmental Management) domain of the COST (European Cooperation in Science and Technology) framework. The topic of COST 733 is "Harmonisation and Applications of Weather Type Classifications for European Regions". cost733class is released under the GNU General Public License v3 (GPL) and is freely available.

2 Installation

The software is developed on Unix/Linux operating systems; however, it may be possible to compile and run it on other operating systems too, although some features may not work. At the time of writing this affects the NetCDF and GRIB data formats and the OpenGL visualisation, which are not necessary to run the program.

2.1 Getting the source code

When you read this you might have downloaded the software package and unpacked it already. However, here is the way how to get and compile the software. To download, direct your web browser to:

http://cost733class.geo.uni-augsburg.de/cost733class-1.2

Here you will find instructions how to download the source code by SVN or how to get a tar package. If you have the tar package, unpack the bzip2-compressed tar file, e.g. by:

tar xvfj cost733class-1.2_RC<revision>.tar.bz2

where <revision> stands for the SVN version number. Change into the new directory.
Reanalysis data (Kalnay et al., 1996), which are described and available at http://www.cdc.noaa.gov/data/gridded/data.ncep.reanalysis.html. But also other data sets following the COARDS conventions (http://ferret.wrc.noaa.gov/noaa_coop/coop_cdf_profile.html) should be readable if they use the time unit of "hours since 1-1-1 00:00:0.0" or any other year. In case of multi-file data sets the files have to be stored in one common directory, and a running number within the file names has to be indicated by placeholder symbols which are replaced by a running number given by the fdt and ldt flags (see below). Information about the time and grid coordinates of the data set is retrieved from the NetCDF data file automatically. In case of errors resulting from buggy attributes of the time axis in NetCDF files, try -readncdate. Then the dates of the observations are retrieved from the NetCDF time variable rather than being calculated from the actual_range attribute of the time variable.

4.1.3 GRIB data format

cost733class includes the GRIB API package and is able to read GRIB version 1 & 2 files directly and use the information stored in this self-describing data format. It has been developed using 6-hourly ECMWF data, which are described and available at http://data-portal.ecmwf.int/data/d/interim_daily. In case of multi-file data sets the file paths have to contain running numbers indicated by placeholder symbols or a combination of the following strings: YYYY, YY, MM, DD, DDD
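A multi-file specification using these placeholder strings might thus be sketched as follows (path, variable name and dates are hypothetical):

cost733class -dat var:t2m@pth:data/erainterim_YYYYMM.grb@fdt:1979:1:1@ldt:2010:12:31@ddt:6h -met KMN -ncl 9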
are assigned to a type's reference pattern and won't be considered further. The following types are determined recursively, one by one, using only the remaining, ergo unclassified, days of all the preceding types. For optimization, the threshold will generally be decreased until all days are classified; starting off with 1, only a few days get classified (see Tab. 1). Furthermore, the threshold is lowered with the type: so let it be 0.70 for the first one, it could be 0.69 for the second (see Tab. 2). In conclusion, several runs are done, gradually increasing the spacing of the thresholds according to their types (compare Tab. 2 and 3). That is, for the first run there generally are the same thresholds (e.g. 0.63, Tab. 1) for all of the types. For the second run there may be 0.70, 0.69, 0.68 etc. for the first, second, third etc. type respectively (Tab. 2). Following this, for the third run there could be 0.75, 0.73, 0.71 etc., and so on. All the resulting time series of types (one series for each run, or overall decrement of the threshold) are now evaluated due to their explained cluster variance in the source data. Only the series which achieves the highest score is returned.

Tab. 1:
 cl  ic       obs  count
  1  1.00000    1      1
  2  1.00000    1      1
  3  1.00000    1      1
  4  1.00000    1      1
  5  1.00000    1      1
  6  1.00000    1      1
  7  1.00000    1      1
  8  1.00000    1      1
  9  1.00000    1      1
 16435 missing

 cl  ic       obs   count
  1  0.63000  3809  16392
  2  0.63000  1940  18
and line, revert black background.

Figure 6.7: Sketch of the iterative process for ckmeans that leads from the starting partition to the final stage. For details see main text.

For optimisation of the final clusters, the k-means method starts using the centroids obtained with the extreme-scores criterion as starting partitions.

Options for PXK:

-met <method>: PXK
-ncl <int>: number of classes. Has to be an even (pair) value: 2, 4, 8, 10, ...
-iter <int>: maximum number of iterations. Leave it unchanged for PXK in order to reach convergence (default: -iter 9999999), i.e. this option should be omitted for PXK.

Further relevant switches concern the normalization method of the input data before the PCA is performed:

-crit 0: only normalize patterns for PCA (original)
-crit 1: normalize patterns and normalize gridpoint values afterwards (default)
-crit 2: normalize patterns and center gridpoint values afterwards

Figure 6.8: Final partition for the CKM classification method using the example data set slp.dat in the test directory and 9 classes. The projection of each daily pattern into the three-dimensional PCA phase space is shown by small spheres. Large spheres represent the projected centroids.

The thresholds for the definition of extreme score cases can be changed by

-thres <real>:
More than one catalog can be produced by using the -nrun <int> option; the default value is 100. By default, the best catalog out of these (according to the explained cluster variance, see section 8) is selected and written to the -cla file. If the option -mcla <filename> is given, all catalogs will be written to that file, each catalog as one additional column. The output of this routine may look like the following:

src/cost733class -dat pth:grid/hgt500.dat -met ran -nrun 10 -mcla ran09.mcla
 creating fake date!
 DATA SET CONFIGURATION:
 NOBS: 3288
 FYEAR: 1  FMON: 1  FDAY: 1  FHOUR: 1
 LYEAR: 3288  LMON: 1  LDAY: 1  LHOUR: 1
 NVAR: 187
 NPAR: 1
 PAR: 1
 MILO: -99999.00  MALO: -99999.00  DILO: -99999.00  NLON: -99999
 MILA: -99999.00  MALA: -99999.00  DILA: -99999.00  NLAT: -99999
 VAR1: 1  VAR2: 187  NVAR: 187
 MINV: 4861.00  MAXV: 6030.00
 calling ran ...
  1  0.003850794664  377 365 355 340 388 383 354 354 372
  2  0.003683256990  367 389 356 347 354 385 382 359 349
  3  0.002744523286  357 393 372 341 357 355 387 349 377
  4  0.004025563037  327 373 365 384 406 337 364 376 356
  5  0.002995899144  360 348 379 350 364 396 376 353 362
  6  0.002865057792  395 355 428 357 357 352 336 350 358
  7  0.001475070462  349 394 349 364 373 374 350 392 343
  8  0.003685594630  380 389 377 355 356 334 357 363 377
  9  0.002208701446  358
define centroids by random and assign objects to them
 ASC | assign: no real method, just assign data to given centroids provided by -cntin
 SUB | substitute: substitute catalog numbers by values given in a -cntin file
 AGG | aggregate: build seasonal values out of daily or monthly values
 COR | correlate: calculate correlation metrics comparing the input data variables
 CNT | centroid: calculate centroids of a given catalog (-clain) and data (see -dat); see also -cnt
 ECV | exvar: evaluation of classifications by Explained Cluster Variance (see -clain, -crit)
 EVPF | evpf: evaluation in terms of explained variation and pseudo-F value (-clain)
 WSDCIM | wsdcim: evaluation in terms of within-type SD and confidence interval (-clain)
 FSIL | fsil: evaluation in terms of the Fast Silhouette Index (-clain)
 SIL | sil: evaluation in terms of the Silhouette Index (-clain)
 DRAT | drat: evaluation in terms of the distance ratio within and between classes (-clain)
 CPART | cpart: calculate comparison indices for >= 2 given partitions (-clain)
 ARI | randindex: calculate only adjusted Rand indices for two or more partitions given by -clain

-crit <int>
 INT | interval:
  1 : calculate thresholds as the i-th percentile, where i = cl * 1/ncl
  2 : bins centered around the mean value
  3 : bin size is the data range / ncl; the bins are not centered
  4 : 2 bins divided by -thres <real> for -svar <int>
  5 :
containing the classification catalog. Overall class centroids and centroids for each input data set are optional.

Examples

An example with default values:

cost733class -dat pth:slp.nc@fmt:netcdf@fdt:1951@ldt:1980 -met INT -ncl 9 -crit 1 -svar 1 -cla INT.cla -dcol 4

Another example:

cost733class -dat pth:slp.nc@fmt:netcdf@fdt:1951@ldt:1980 -met INT -ncl 20 -crit 2 -cnt INT.cnt -cla INT.cla -dcol 4

This run classifies the input data into 20 bins and writes a classification catalog and a centroid file.

6.1.2 GWT | prototype: large-scale circulation types

This method uses three prototype patterns and calculates the three Pearson correlation coefficients between each field in the input data set and the three prototypes (Beck et al., 2007). The first prototype is a strictly zonal pattern with values increasing from north to south. The second is a strictly meridional pattern with values increasing from west to east. And the third is a cyclonic pattern with a minimum in the center and increasing values towards the margin of the field. Depending on the three correlation coefficients and their combination, each input field is classified to one class. Since there is only a fixed number of combinations, not all numbers of types can be achieved. This method makes sense only for single pressure fields. The possible numbers of types are 8, 10, 11, 16, 18, 19, 24, 26, 27. For 8 types the main wind sectors N, NE, E, S
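Independent of the exact sector layout, a minimal GWT call could look like the following sketch (the file name and grid description are placeholders reused from the examples above; GWT itself only needs a single pressure field and one of the permitted type numbers):

cost733class -dat pth:slp.dat@fmt:ascii@lon:-37:56:3@lat:30:76:2 -met GWT -ncl 8 -cla GWT08.cla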
runs are used to select the best result, the more robust is the result. SOM and SAN are designed to be much more robust than KMN (default: nrun 1000, to produce reliable results).

-step <int>
 SOM: number of epochs after which the neighbourhood radius is to be reduced. For training the neurons, also neighboured neurons of the winner neuron in the network map are affected and adapted to the training pattern, to a lower degree though. The neighbourhood radius covers all neurons (classes) at the beginning and is reduced during the process until only the winner neuron is affected. This slow decrease helps to overcome local minima in the optimisation function. Default: step 10, meaning after 10 epochs the neighbourhood radius is reduced by one.
 WLK: number of wind sectors
 EVPF, WSDCIM, FSIL, SIL, DRAT, CPART: missing value indicator for catalogue data

-iter <int>
 SOM, PXE, PXK: maximum number of epochs (iterations) to run. Default: iter 2000, i.e. it should converge by itself. iter 0 for pcaxtr means that only the first assignment to the PC centroids is done; for PXK, iter 2000 means 2000 k-means iterations.

-temp <real>
 simulated annealing start temperature for SAN (default: 1)

-cool <real>
 cooling rate for SOM & SANDRA; default: cool 0.99D0; set to 0.999D0 or closer to 1.D0 to enhance and slow down tuning.

-svar <int>
 tuning parameter; INT: number of the variable (column) to use for calculating interval thresholds

-alpha <real>
 tuning parameter; WLK: central weight for the
centers of the clusters. The algorithm consists of the following steps:

1. select k objects by random to be the initial cluster centers, i.e. medoids
2. assign all other objects to them according to the minimum distance
3. calculate the cost function, which is the sum of the distances between each object and its nearest medoid
4. begin an iteration
5. for each cluster, check all objects whether they would be a better medoid; if yes, change the medoid and update the assignments and the cost function
6. if no enhancement is possible by changing the medoid, stop the iterations

The following options apply for KMD:

-ncl <int>: the number of clusters
-crit <int>: choice of the distance metric: 0 = Chebychev, 1 = Manhattan, 2 = Euclidean; <int> > 2 = Minkowski distance of order <int>
-opengl: (only if compiled with ./configure --enable-opengl) activate visualisation
-gljpeg: (only if compiled with ./configure --enable-opengl) create an image for each frame
-nrun <int>: number of runs with initial random medoids. The solution with the minimum cost function will be chosen. Default for <int> is 10.

6.6 Random classifications

6.6.1 RAN | random

This method does not use any input data but randomly selects an arbitrary number as the type number for each object. The respective number is retrieved from a random number generator between 1 and the maximum number of classes given by the -ncl <int> option.
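To make the KMD switches listed above concrete, a hypothetical run on the example data could look like this (file and catalog names are placeholders):

cost733class -dat pth:slp.dat -met KMD -ncl 9 -crit 2 -nrun 10 -cla KMD09.cla

This selects nine medoids using the Euclidean metric and keeps the best of ten randomly initialised runs.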
numbers of types are 8, 9, 10, 11, 12, 18, 19, 20, 26, 27 and 28, explained as follows. Types 1 to 8 (in the 8- to 12-type setups) use the prevailing wind directions W, NW, N, NE, E, SE, S and SW, where W = 1 etc. Types 1 to 16 (in 18, 19 or 20) are a subdivision of the 8 directional types into tendentious cyclonic and anticyclonic ones (1 to 8 cyclonic, 9 to 16 anticyclonic). Types 1 to 24 (in 26, 27 or 28) combine mainly directional types with partly cyclonic and anticyclonic ones (1 to 8 cyclonic, 9 to 16 straight, 17 to 24 anticyclonic). Types 9/10, 17/18 or 25/26 indicate pure cyclonic/anticyclonic days. The 9th, 11th, 19th or 27th type stands for a light indeterminate flow class and can be treated as unclassified, except for 10 types, where the 9th is something else. Type 12, 20 or 28 figures out a gale day. Because the method is specialized on classifying daily mean sea level pressure patterns only, some thresholds are hardcoded to distinguish flow intensities. This is of course problematic if this method is applied e.g. to geopotential heights. Therefore there are options modifying the default classification scheme.

Options

Strictly necessary options:

-dat <specification>: input data. Grid description necessary!

The following options define the classification:

-ncl <int>: the number of types as described above. Default is 9.
-thres <real>: if thres is set to -1, no weak flows
and for both NetCDF and ASCII data by the -dlist <file> option. The given file should hold as many lines as time steps should be selected, and four columns for the year, month, day and hour of each time step. These numbers should be integers, separated by at least one blank. Even if no hours are needed, they must be given as a dummy. Note that it is possible to work with no (or fake) dates, allowing for input of ASCII data sets which have been selected by another software before.

4.3.7 Using more than one data set

More than one data set can be used by just giving more than one -dat <specification> option (explained above) within the command line. The data of a second or further data set are then pasted behind the data of the first or previous data set as additional variables (e.g. grid points, or data matrix columns), thus defining each object (e.g. day, or data matrix rows) additionally. Note that these data sets must have the same number of objects (lines or records) in order to fit together. If one of the data sets is given in different physical units than the others (e.g. hPa and Kelvin), care must be taken of the effect of this discrepancy on the distance metric used for classification: 1. For all metrics, the data set with higher total variance (larger numbers) will have a larger influence. 2. For correlation-based metrics, if there is a considerable difference of the mean between two data sets, this
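For illustration of the date-list mechanism described above, a -dlist file selecting three arbitrary days at 12 UTC might look as follows (the dates are invented; the fourth column is the mandatory hour dummy):

2000  1  5 12
2000  1  6 12
2000 12 30 12

It would then be passed to the program as -dlist mydates.txt, where the file name is of course arbitrary.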
pc: number of the principal component, n: equiprobable intervals per pc, i: number of the percentile, S: percentile in percent, p: percentile as in the time series, c: shifts class.

 pc  n  i   S          p         c
  1  2  1   50.00000   0.86810   4
  2  2  1   50.00000   0.03095   2
  3  2  1   50.00000   0.07110   1

Thus observations of type 1 to 4 have scores below or equal to 0.86810 (the median) for PC 1, and those of type 5 to 8 have scores above 0.86810 in this example. Types 3, 4 and 7, 8 also have scores above 0.03095 for the second PC, while types 1, 2 and 5, 6 have lower amplitudes. Types 1, 3, 5, 7 have lower amplitudes for PC 3 than 0.07110, and types 2, 4, 6, 8 have higher scores. By varying the number of used amplitudes and the number of percentiles, the other numbers of types are created. For ncl = 9:

 pc  n  i   S          p         c
  1  3  1   33.33333   0.50503   3
  1  3  2   66.66667   0.63176   3
  2  3  1   33.33333   0.42237   1
  2  3  2   66.66667   0.40905   1

For ncl = 12:

 pc  n  i   S          p         c
  1  3  1   33.33333   1.08003   4
  1  3  2   66.66667   0.63176   4
  2  2  1   50.00000   0.03095   2
  3  2  1   50.00000   0.07110   1

For ncl = 18:

 pc  n  i   S          p         c
  1  3  1   33.33333   1.08003   6
  1  3  2   66.66667   0.63176   6
  2  3  1   33.33333   0.42237   2
  2  3  2   66.66667   0.40905   2
  3  2  1   50.00000   0.07110   1

For ncl = 27:

 pc  n  i   S          p         c
  1  3  1   33.33333   1.08003   9
  1  3  2   66.66667   0.63176   9
  2  3  1   33.33333   0.42237   3
  2  3  2   66.66667   0.40905   3
  3  3  1   33.33333
pcaxtr: the extreme score method . . . 33
6.2.4 KRZ | kruiz: Kruizinga's PCA-based types . . . 33
6.3 Methods using the leader algorithm . . . 33
6.3.1 LND | lund: the Lund method . . . 33
6.3.2 KIR | kirchhofer: the Kirchhofer method . . . 34
6.3.3 ERP | erpicum . . . 56
6.4 Hierarchical cluster analysis . . . 58
6.4.1 HCL | hclust . . . 58
6.5 Optimization algorithms . . . 59
6.5.1 KMN | kmeans: conventional k-means with random seeds . . . 59
6.5.2 CAP | pcaca: k-means of time-filtered PC-scores and HCL starting partition . . . 60
6.5.3 CKM | ckmeans: k-means with dissimilar seeds . . . 61
6.5.4 DKM | dkmeans: a variant of ckmeans . . . 62
6.5.5 PXK | pcaxtrkm: k-means using PXE starting partitions . . . 62
6.5.6 SAN | sandra: simulated annealing and diversified randomization . . . 66
6.5.7 SAT | sandrat: time constrained simulated annealing and diversified randomization . . . 66
6.5.8 SOM | som: self-organizing feature maps (neural network according to Kohonen) . . . 67
6.5.9 KMD | kmedoids: partitioning around medoids . . . 67
6.6 Random classifications . . . 68
6.6.1 RAN | random . . . 68
6.6.2 RAC | randomcent . . .
4.4 Examples . . .
4.4.1 Simple ASCII data matrix . . .
4.4.2 ASCII data file with date columns . . .
4.4.3 NetCDF data selection . . .
4.4.4 Date selection for classification and centroids . . .

5 Data output
5.1 The classification catalog . . .
5.2 Centroids or type composites . . .
5.3 Output on the screen . . .
5.4 Output of the input data . . .
5.5 Output of indices used for classification . . .
5.6 OpenGL graphics output . . .

6 Classification methods
6.1 Methods using predefined types . . .
6.1.1 INT | interval | BIN . . .
6.1.2 GWT | prototype: large-scale circulation types . . .
6.1.3 GWTWS | gwtws: large-scale circulation types . . .
6.1.4 LIT | lit: Litynski threshold based method . . .
6.1.5 JCT | jenkcol: Jenkinson-Collison types . . .
6.1.6 WLK | wlk: automatic weather type classification according to the German met service . . .
6.2 Methods based on eigenvectors . . .
6.2.1 PCT | tpca: t-mode principal component analysis using oblique rotation . . .
6.2.2 PTT | tpcat: t-mode principal component analysis using orthogonal rotation . . .
6.2.3 PXE |
real(kind=8) :: minlat(npar), maxlat(npar), diflat(npar)
These are the grid dimensions for each data set. Not necessarily allocated!

real(kind=8) :: parmean(npar), parsdev(npar)
If more than one data set is provided in -dat, each of them is normalized. The corresponding means and standard deviations are stored here.

You don't have to declare these in your subroutine, you can just use them. Except for the date coordinate and normalization variables (the latter 4 items), they are all already allocated and filled with values. The others have to be checked for whether they are allocated, because the user might have omitted, for example, to give information about longitudes and latitudes.

- local variables provided by main:

The following variables are not global variables. This means if you need one of them in your subroutine, you have to provide it as a parameter in the subroutine call (calling function and subroutine):

integer :: ncl
This is the number of types provided by the option -ncl.

12.4 Using the input data from datainput.f90

The subroutine datainput reads in the data from files and performs preprocessing of the data depending on the command line arguments. The following steps are carried out:

1. parsing of the specification strings
2. set up variables (attribute dimension) for each data set; the number of variables is calculated to achieve the total number of columns for the data matrix
der Schweizerischen Meteorologischen Zentralanstalt, volume 43, page 16. Bundesamt fuer Meteorologie und Klimatologie, MeteoSchweiz.

Kuncheva, L. and Hadjitodorov, S. T. (2004): Using diversity in cluster ensembles. 2004 IEEE International Conference on Systems, Man and Cybernetics, 2, 1214-1219.

Hubert, L. and Arabie, P. (1985): Comparing partitions. Journal of Classification, 2, 193-218.

Litynski, J. (1969): A numerical classification of circulation patterns and weather types in Poland (in Polish). Prace Panstwowego Instytutu Hydrologiczno-Meteorologicznego, 97, 3-15.

Lund, I. (1963): Map pattern classification by statistical methods. J. Appl. Meteorol., 2, 56-65.

Milligan, G. and Cooper, M. (1985): An examination of procedures for determining the number of clusters in a data set. Psychometrika, 50, 159-179.

Philipp, A., Bartholy, J., Beck, C., Erpicum, M., Esteban, P., Fettweis, X., Huth, R., James, P., Jourdain, S., Kreienkamp, F., Krennert, T., Lykoudis, S., Michalides, S. C., Pianko-Kluczynska, K., Post, P., Alvarez, D. R., Schiemann, R., Spekat, A. and Tymvios, F. S. (2010): Cost733cat - a database of weather and circulation type classifications. Physics and Chemistry of the Earth, 35(9-12), 360-373.

Rand, W. M. (1971): Objective criteria for the evaluation of clustering methods. J. Amer. Stat. Assoc., 66, 846-850.

Rousseeuw, P. (1987): Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20, 53-65.

Southwood, T. R. E. (1978): Ecological Methods. 2nd edn. London: Chapman and Hall.

Strehl, A. and Ghosh, J. (2002): Cluster ensembles: a knowledge reuse framework for combining partitions. Journal of Machine Learning Research, 3, 583-617.

Tang, L., Chen, D., Karlsson, P. E., Gu, Y. and Ou, T. (2008): Synoptic circulation and its influence on spring and summer surface ozone concentrations in southern Sweden. Boreal Env. Res., 14, 889-902.
where classification is done only for the summer months (-per 2000:1:1,2008:12:31,1d and -mon 6:7:8) and written to CKM08.cla in the first run. In the second call, CKM08.cla is used to build centroids, again only for the summer months. Note that mdt:6:7:8 has to be provided for the -clain option in order to read the catalog correctly.

5 Data output

5.1 The classification catalog

This is a file containing the list of resulting class numbers. Each line represents one classified entity. If date information on the input data was provided, the switch -dcol <int> can be used to write the datum to the classification output file as additional columns left of the class number. The argument <int> decides on the number of date columns: dcol 1 means only one column for the year, or the running number in case of fake dates (1i4); dcol 2 means year and month (1i4,1i3); dcol 3 means year, month and day (1i4,2i3); and dcol 4 means year, month, day and hour (1i4,3i3). If the -dcol option is missing, the routine tries to guess the best number of date columns. This number might be important if the catalog file is used in subsequent runs of cost733class, e.g. for evaluation etc.

5.2 Centroids or type composites

If the -cnt <filename> option is given, a file is created which contains the data of the centroids or class means. In each column of the file the data of each class is written. Each line (row) of the file corresponds to the variables in the order
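For orientation, with -dcol 3 the first lines of a catalog file for daily data could look like the following sketch (dates and class numbers are invented):

2000  1  1  5
2000  1  2  5
2000  1  3  2
2000  1  4  9

The last column is the class number; the preceding columns are the date, written according to the formats given above.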
are classified (class 9, 11, 19 or 27, depending on -ncl <int>). Use this for other variables than MSLP!

Options for data output:

-cla <filename>: output filename for the classification catalog
-dcol <int>: number of date columns in the classification catalog
-cnt <filename>: output filename for the class centroids
-idx <basename>: output basename for values of wind flow characteristics: w, s, f (westerly, southerly, resultant flow); zw, zs, z (westerly, southerly, total shear vorticity)

Output

This method returns one file containing the classification catalog. Overall class centroids, as well as a file containing values of the wind flow characteristics, are optional.

Examples

An example with default values:

cost733class -dat pth:slp.dat@fmt:ascii@lon:-10:30:2.5@lat:35:60:2.5@fdt:2000:1:1:12@ldt:2008:12:31:12@ddt:1d -met JCT -ncl 9 -cla JCT09.cla -dcol 3

Another example:

cost733class -dat pth:slp.dat@fmt:ascii@lon:-10:30:2.5@lat:35:60:2.5@fdt:2000:1:1:12@ldt:2008:12:31:12@ddt:1d -met JCT -ncl 26 -idx JCT26 -cla JCT26.cla -dcol 3

The same, but with a different number of classes and output of the values of the wind flow characteristics.

6.1.6 WLK | wlk: automatic weather type classification according to the German met service

This method is based on the OWLK objective weather type classification by Dittmann et al. (1995) and Bissolli and Dittmann
variability, and a clustering procedure to classify the time series of the principal components (Comrie, 1996; Ekstroem et al., 2002). If the entire year is to be analysed, i.e. the data is not split into seasons, the raw data fields can be subjected to a temporal filtering to remove variability on time scales longer than the typical duration of regional weather systems, while retaining the spatial patterns; otherwise the PCA can be strongly influenced by seasonal variability in the pressure data. Such temporal variability is commonly removed following a method outlined in Hewitson and Crane (1992). Instead of a running mean, however, the cost733class software can be used to apply a gaussian filter. The literature shows that filters of different lengths have been used, from 8 to 13 days for running means (corresponding to a length of 31 days for gaussian filters), depending on the area of analysis. To accomplish this, the average value of the map at each time step is first calculated, and then an n-day moving average is obtained. Finally, the difference between the original grid values and the filtered average time series values is obtained, transforming the raw data at each grid point into departures from the smoothed n-day map mean, which are used in all the subsequent analyses. PCA is used to reduce the original data (usually a large-dimension database) to a small set of new variables (principal components) that explain most of the original variability, while the contribution
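In cost733class terms, this kind of temporal filtering is available through the fil flag of the data specification (see the preprocessing flags, where a negative period requests the high-pass variant). A sketch of a corresponding call, using the 31-day filter length mentioned above and placeholder file names, might be:

cost733class -dat pth:slp.dat@fil:-31 -met CAP -ncl 9 -cla CAP09.cla

Whether CAP needs further options depends on the corresponding method section; the point here is only where the filtering step attaches to the data specification.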
is 1.0 for all data, and all data sets (each one defined by a -dat <specification>) are treated to have the same weight compared to each other, by normalizing them separately as a whole (over space and time) and multiplying them by a factor 1/nvar, where nvar is the number of variables or attributes of each data set. After that, each data set is multiplied by the user weight number.

scl:<float>: scaling factor to apply to the input data of this parameter
off:<float>: offset value which will be added to the input data after scaling
nrm:<integer>: row-wise normalisation:
 1 : row-wise centralisation of objects (patterns)
 2 : row-wise normalisation of objects (patterns), sample standard deviation
 3 : row-wise normalisation of objects (patterns), population standard deviation
ano:<integer>: where <integer> can be:
 1 : column-wise centralisation of variables (grid points) after selection of time steps
 2 : column-wise normalisation of variables using the sample standard deviation (sum of squared deviations divided by n) after selection of time steps
 3 : column-wise normalisation of variables using the population standard deviation (sum of squared deviations divided by n-1) after selection of time steps
 -1 : column-wise centralisation of variables (grid points) before selection of time steps
 -2 : column-wise normalisation of variables using the sample standard deviation (sum of squared deviations divided by n) before selection of time steps
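Taken together, a data specification using some of these flags might look like the following sketch (the file name is a placeholder and the chosen values are arbitrary):

cost733class -dat pth:hgt500.dat@scl:0.1@ano:1@wgt:1.D0 -met KMN -ncl 9

This would scale the raw values by 0.1, center each grid point column (calculated after any date selection, as described for ano:1 above) and give the data set a user weight of 1.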
69

7 Assignments to existing classifications . . . 70
7.1 ASC | assign . . . 70
7.2 CNT | centroid . . . 70

8 Evaluation of classifications . . . 72
8.1 EVPF | evpf: explained variation and pseudo F value . . . 72
8.2 ECV | exvar: Explained Cluster Variance . . . 73
8.3 WSDCIM | wsdcim: within-type standard deviation and confidence interval of the mean . . . 73
8.4 DRAT | drat: ratio of distances within and between circulation types . . . 74
8.5 FSIL | fsil: Fast Silhouette Index . . . 75
8.6 SIL | sil: Silhouette Index . . . 76
8.7 BRIER | brier: Brier Score . . . 76

9 Comparison of classifications . . . 78
9.1 CPART | cpart: catalog comparison . . . 78

10 Miscellaneous functions . . . 79
10.1 AGG | agg: Aggregation . . . 79
10.2 COR | cor: Correlation . . . 79
10.3 SUB | substitute: Substitute . . . 79

11 Visualization . . . 81
12 Development . . . 82
12.1 Implementing a new subroutine . . . 82
12.2 Packing the directory for shipping . . . 84
12.3 Use of variables in the subroutine . . . 84
12.4 Using the input data from datainput.f90 . . . 85
12.5 The gnu autotools files . . . 86

References . . . 88

1 Introduction

1.1 Introduction

cost733class is a FORTRAN software package focussed on creating and evaluating weather and circulation
threshold defining the key group (default: 2.0)
-delta <real>: score limit for the other PCs to define a uniquely leading PC (default: 1.0)

6.5.6 SAN | sandra: simulated annealing and diversified randomization

SANDRA means Simulated ANnealing and Diversified RAndomization. Essentially it is a non-hierarchical cluster analysis method like k-means. However, it usually finds better solutions than conventional k-means. The following special command line parameters are relevant for SANDRA:

-nrun <integer>: the number of diversified runs for searching the best result. Note that if compilation has been configured using the openmp option (i.e. ./configure CC=icc FC=ifort FCFLAGS=-openmp), the runs are executed in parallel. How many parallel threads are used might depend on an environment variable (e.g. NTHREAD=4) of your operating system. For details check the documentation of your compiler. The default value for -nrun <integer> is 1000. Note that this may cause very long runtimes and is no bug!

-cool <real>: this parameter controls the speed of cooling down the temperature parameter, i.e. how fast the probability for so-called wrong shifts of objects is reduced. The higher <real>, the slower the decrease. Note that it must be less than one (e.g. 0.999), else the routine will never stop.

6.5.7 SAT | sandrat: time constrained simulated annealing and diversified randomization

Time constrained clustering (Jolliffe and Philipp, 2010) is a variant of
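Returning to the SANDRA options above: a hypothetical run with slower cooling and a reduced number of parallel runs (file names are placeholders) could be:

cost733class -dat pth:slp.dat -met SAN -ncl 9 -nrun 100 -cool 0.999 -cla SAN09.cla

Increasing -cool towards 1.D0 and -nrun towards the default of 1000 makes the result more robust, at the cost of runtime.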
ssifications . . . 9
3.2.2 Evaluating classifications . . . 9
3.2.3 Comparing classifications . . . 9
3.2.4 Assignment to existing classifications . . . 9
3.2.5 Rather simple pre-processing . . . 10
3.3 Help listing . . . 10

4 Data input . . . 18
4.1 Foreign formats . . . 18
4.1.1 ASCII data file format . . . 18
4.1.2 COARDS NetCDF data format . . . 19
4.1.3 GRIB data format . . . 19
4.1.4 Other data formats . . . 20
4.2 Self-generated formats . . . 20
4.2.1 Binary data format . . . 20
4.2.2 Catalog files . . . 20
4.2.3 Files containing class centroids . . . 21
4.3 Specifying data input and preprocessing . . . 21
4.3.1 Specification flags . . .
4.3.2 Flags for data set description . . .
4.3.3 Flags for spatial data selection . . .
4.3.4 Flags for data preprocessing . . .
4.3.5 Flags for data postprocessing . . .
4.3.6 Options for selecting dates . . .
4.3.7 Using more than one data set . . .
4.3.8 Options for overall PCA preprocessing of all data sets together
If it indicates a number of rows smaller than actually given in the file, only this smaller number of lines is used, omitting the rest of the file. For data with fmt:netcdf in multiple files, it can be an integer number indicating the last year or running number, which will be inserted to replace the placeholder symbols in the filename.

ddt:<int><y|m|d|h>: time step of the dates in the data file, in years, months, days or hours, e.g. ddt:1d for daily data. If ddt is omitted but both fdt and ldt have the same resolution, it is automatically set to one for that temporal resolution; e.g. fdt:1850:01 and ldt:2008:02 while omitting ddt will lead to ddt:1m, i.e. monthly resolution.

mdt:<list>: list of months covered in the data file, e.g. mdt:01:02:12 if only winter data are given in the file. The list separator symbol may also be the comma instead of the colon.

lon:<number>:<number>:<number>: this specifies the longitude dimensions of the input data as given in the file. The first number is the minimum longitude, where longitudes west of 0 degrees are given as negative numbers. The second number is the maximum longitude. The third number is the grid spacing in longitudes, e.g. lon:-30:50:2.5. This flag denotes just the description of the input data. For NetCDF data, which are self-describing, it is superfluous.

lat:<number>:<number>:<number>: this specifies the latitude dimensions of
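Putting several of these description flags together, a complete specification for a plain daily ASCII file might read as follows (all values are only illustrative):

cost733class -dat pth:slp.dat@fmt:ascii@lon:-30:50:2.5@lat:35:60:2.5@fdt:2000:1:1@ldt:2008:12:31@ddt:1d -met KMN -ncl 9

All flags after pth: merely describe what is already in the file; as stated above, they do not reproject or resample the data.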
set to which the evaluation metrics are applied
-idx <character string>: base string for naming of the output file(s)

Output

<idx>_fsil.list: FSIL indices estimated over all variables from the input data set, for months, seasons and the whole year (jan feb mar apr may jun jul aug sep oct nov dec win spr sum aut yea).

8.6 SIL | sil: Silhouette Index

This routine provides the original Silhouette index (SIL) according to Rousseeuw (1987). In contrast to FSIL, for any case (day) i the distances to its own class (a_i) and to its nearest neighbouring class (b_i) are calculated as the average distance (in terms of the Euclidean distance) between the case and all cases in its own class and its closest class, respectively:

SIL = \frac{1}{n} \sum_{i=1}^{n} \frac{b_i - a_i}{\max(a_i, b_i)}    (8.11)

Command line parameters relevant for SIL:

-clain <spec>: catalog input (see 4.2.2)
-step <integer>: missing value indicator for catalog data
-dat <specification>: input data set to which the evaluation metrics are applied
-idx <character string>: base string for naming of the output file(s)

Output

<idx>_sil.list: SIL indices estimated over all variables from the input data set, for months, seasons and the whole year (jan feb mar apr may jun jul aug sep oct nov dec win spr sum aut yea).

8.7 BRIER | brier: Brier Score

Brier Skill Score:

BSS = 1 - \frac{BS}{BS_{ref}}    (8.12)
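A complete evaluation call for SIL, following the option list above, might look like this (the file names are placeholders and assume a catalog produced by an earlier classification run):

cost733class -dat pth:slp.dat -clain pth:KMN_ncl9.cla -met SIL -idx sil_KMN_ncl9

This should write the index file named after the -idx base string, with the monthly, seasonal and annual values described above.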
NetCDF data set consisting of multiple subsequent files, the path can include placeholder symbols to indicate that these letters should be replaced by a running number between the number given by fdt and the number given by ldt for a NetCDF multi-file data set. In case of fmt:grib the file name (path) can include placeholder symbols or a combination of the following strings: YYYY, YY, MM, DD, DDD, to indicate that these letters should be replaced by a running number between the number given by fdt and the number given by ldt, which has to have the same order and format as given in the path. E.g.:

pth:data/YYYY/MM/slp_DD.grib@fdt:2011:12:31@ldt:2012:01:01 or
pth:data/slp_YYYY_MM_DD.grib@fdt:2011:12:31@ldt:2012:01:01 or
pth:data/slp_MMDDYYYY.grib@fdt:12:31:2011@ldt:01:01:2012 or
pth:data/slp_YYYY.grib@fdt:2011:12:31@ldt:2012:01:01 or
pth:data/slp_YYYYDDD.grib@fdt:2011:001@ldt:2012:365

for GRIB multi-file data sets.

fmt:<character>: this can be either ascii, binary or netcdf. fmt:ascii means the data are organized in text files. Each line holds one observation (usually the values of a day). Each column holds one parameter specifying the observations; usually the columns represent grid points. These files have to hold a rectangular matrix of values, i.e. all lines have to have the same number of columns. Columns have to be separated by one or more blanks or by commas. Missing values are not allowed. The decimal marker must
filter of period <int>: <int> < 0 = high-pass, <int> > 0 = low-pass

arw:<integer>: area weighting of the input data grids: 0 = no (default), 1 = cos(latitude), 2 = sqrt(cos(latitude)), 3 = weights calculated by the area of the grid box, which is the same as cos(latitude) of option 1.

pca:<integer|float>: parameter-wise PCA. If provided, this flag triggers a principal component analysis for compression of the input data. The PCA retains as many principal components as needed to explain at least a fraction of <float> of the total variance of the data, or as determined by <integer>. This can speed up the classification for large data sets considerably. Useful values are 0.99 or 0.95.

pcw:<integer|float>: parameter-wise PCA with weighting of the scores by explained variance. This preprocessing step works like pca, but weights the PCs according to their explained variance. This simulates the original data set, in contrast to the unweighted PCs. In order to simulate the Euclidean distances calculated from the original input data (for methods depending on this similarity metric), the scores of each PC are weighted by sqrt(exvar(PC)) if wgttyp:euclid is set. Therefore, if pcw:1.D0 is given (which actually doesn't lead to compression, since as many PCs as original variables are used), exactly the same Euclidean distances relative to each other are calculated.

4.3.5 Flags for data postprocessing

cnt:<file>:
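Before turning to the postprocessing flags, here is a sketch of how the preprocessing flags above combine in practice (the file name and the chosen values are placeholders):

cost733class -dat pth:slp.dat@arw:1@pca:0.95 -met KMN -ncl 9

Each grid point is weighted by the cosine of its latitude, and the data set is then compressed to the leading components explaining 95% of its variance before classification.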
Thus the file has (56+37)/3+1 by (76-30)/2+1 columns (32 x 24 = 768). The ordering of the columns follows the principle: longitudes vary fastest (from west to east), latitudes vary slowest (from south to north). Thus the first column is for the southernmost latitude and the westernmost longitude, the second column for the second longitude from west on the southernmost latitude, etc. The last column is for the easternmost and northernmost grid point, as illustrated in Table 4.1. Note that the command line could also have been written as

cost733class -dat pth:era40_MSLP.dat@lon:-37:56:3@lat:30:76:2 -met kmeans -ncl 9

        column 1        column 2        ...  column 32      column 33       ...  column 768
        lon -37, lat 30 lon -34, lat 30 ...  lon 56, lat 30 lon -37, lat 32 ...  lon 56, lat 76
 t=1    mslp(-37,30,1)  mslp(-34,30,1)  ...  mslp(56,30,1)  mslp(-37,32,1)  ...  mslp(56,76,1)
 t=2    mslp(-37,30,2)  mslp(-34,30,2)  ...  mslp(56,30,2)  mslp(-37,32,2)  ...  mslp(56,76,2)
 ...
 t=nt   mslp(-37,30,nt) mslp(-34,30,nt) ...  mslp(56,30,nt) mslp(-37,32,nt) ...  mslp(56,76,nt)

Table 4.1: Example data matrix of mean sea level pressure values, formatted to hold grid points in columns (varying by lon = -37 to 56 by 3 and by lat = 30 to 76 by 2) and time steps (t = 1, ..., nt) in rows.

2000 01 01  12.1
2000 01 02  10.3
2000 01 03   6
2000 01 04   7.3
2000 01 05   7.7
2000 01 06   6.6
2000 01 07   8
2000 01 08
weighting mask (default: 3.D0)
 EVPF, WSDCIM, FSIL, SIL, DRAT: scale factor for evaluation data
 BRIER: if 0 (default), use all values (crit:1) or patterns (crit:2); if > 0, a value or pattern is processed only if itself or the mean pattern is > alpha
 GWTWS: value percentile for low winds, the main threshold for types 9, 10, 11

-beta <real>: tuning parameter
 WLK: middle zone weight for the weighting mask (default: 2.D0)
 EVPF, WSDCIM, FSIL, SIL, DRAT: offset value for evaluation data
 GWTWS: value percentile for flat winds (type 11)

-gamma <real>: tuning parameter
 WLK: margin zone weight for the weighting mask (default: 1.D0)
 WSDCIM: confidence level for estimating the confidence interval of the mean
 GWTWS: value percentile for low pressure (type 9)

-delta <real>: tuning parameter
 WLK: width factor for the weighting zones (nx*delta, ny*delta; default: 0.2)
 PXE, PXK: score limit for the other PCs to define a uniquely leading PC (default: 1.0)
 GWTWS: value percentile for high pressure (type 10)

-lambda <real>: tuning parameter
 SAT: weighting factor for time constrained clustering (default: 1.D0)

-dist <int>: distance metric (not for all methods yet)
 if <int> >= 0: Minkowski distance of order <int> (0 = Chebychev)
 if <int> = -1: 1 - correlation coefficient

-nx <int>
 KIR: number of longitudes, needed for row and column correlations
 DRAT: distance measure to use: 1 = Euclidean distance, 2 = Pearson correlation
classification catalog and the corresponding class centroids can be defined by -cla <filename> and -cnt <filename>. Depending on the used classification method, several other options can be provided by special options. For further documentation of these, consult the relevant section. Based on this, the command for a classification with the method KMN may look like:

cost733class -dat pth:slp.dat -ncl 9 -met KMN -cla KMN_ncl9.cla

3.2.2 Evaluating classifications

The basic scheme of an evaluation command is:

cost733class -dat <specification> [-dat <specification>] -clain <specification> -met <method> -idx <filename> [more method specific options]

The most important difference to a classification run is the -clain <specification> option. It defines the path to an existing classification catalog file and is mandatory for all evaluation methods. The desired evaluation method must be chosen by the -met <method> option. The results are written to one or more files, for which the base name can be given by the -idx <filename> option. Analogous to the previous command, every method has its additional options, which are explained in the corresponding sections below. At least one -dat <specification> option describes the data one wants to evaluate with. A cost733class run which evaluates the classification generated by the previous command using the Brier Score would be
114. tion of cluster analysis journal of computational and applied mathematics 20 53 65 Journal of Computational and Applied Mathematics 20 53 65 Southwood T R E 1978 Southwood T R E 1978 Ecological Methods 2nd edn London Chapman and Hall Chapman and Hall Strehl A and Gosh J 2002 Cluster ensembles a knowledge reuse framework for combining partitions Journal of Machine Learning Research 3 583 617 Tang L Chen D Karlsson P E Gu Y and Ou T 2008 Synoptic circulation and its influence on spring and summer surface ozone concentrations in southern sweden Boreal Env Res 14 889 902
115. tion see section 7 1 Considering that such files can also be easily written with any text editor this feature of cost733class makes it possible to predefine class centroids or composites and assign data to them 4 3 Specifying data input and preprocessing The data input and preprocessing steps are carried out in the following order 1 reading input data files pth 2 grid point selection slo and sla 3 data scaling scl 4 adding offset off 5 centering normalization of object nrm 6 calculation of anomalies ano 7 filtering fil 8 area weighting arw 9 sequence construction seq 10 date selection options dlist per mon hrs 11 PCA of each parameter data set separately pca pew 4 Data input 22 12 parameter weighting wgt 13 overall PCA options pca pcw For reading data sets the software first has to know how many variables columns and time steps rows are given in the files In case of ASCII files the software will find out these numbers by its own if no description is given However in this case no information about time and space is available and some functions will not work In case of NetCDF files this information is always given in the self describing file format Thus the description of space and time by specification flags on the command line can be omitted but will be available Some methods like 1it need to know about the date provided e g by per mon etc of each obje
116. tion to file named lt char gt lt ext gt 3 Getting started 13 opengl the type of the indices dependes on the method e g scores and loading for PCT this switch activates the 3D visualisation output calls for the following methods SOM crit 2 SAN CKM gljpeg glwidth glheight glpsize glcsize glxangle real glyangle real glzangle real glstep glpause This is only working without parallelization and probably on unix linux systems The software compilation has to be configured by configure enable opengl in conjunction with opengl this switch produces single jpg images which can be used to create animations width of opengl graphics window default 800 height of opengl graphics window default 800 size of data points default 0 004D0 size of centroid points default 0 03D0 angle to tilt view on data cube default 60 D0 angle to tilt view on data cube default 0 D0 angle to tilt view on data cube default 35 D0 time stepping default 10 pause length default 1 glbackground int background color O black default 1 white glrotangle angle rotation angle step for spinning cube METHODS met method method NON none just read and write data and exit INT interval BIN classify into intervals of variable svar GWI prototype prototype grosswetterlagen
117. tmp geodata ERA40 ascii era40 Z925 12Z 195709 200208 domain00 dat got 16436 alcc ptmp geodata ERA40 ascii era40 Z500 12Z 195709 200208 domain00 dat calling wlk lines from number of windir sectors fraction of gridpoint wind anomalies of cyclonicity l yes shift sectors centering to 0 deg central weight alpha middle weight beta margin weight gamma mask zone width factor delta grids for U and V have size E ES ES EIU een ES EI ESI ESI EIU EIU G ES EEE ESI E ER EE ESI ESI Ed Ces Y A ies PRP RP EP RP RP PRP RP RP Pee Pe eB Pe ee Pe PE EP w HG a oy BPR PRP PRP RP RPP PRP RP Re PRP RP RP RP PRP RE FPrRFRrFENNNNNNNNNNNNNNNNNNF A EF L3 LS b2 b2 h2 b2 h2 h2 b2 bh2 bh2 bO bO h2 b2 h2 bh2 h2 h2 Nee de divided z92 divided z500 HG HG FPRrFNNNNNNNNNNNNNNNNNNF FE FPrPFNONNNNNNNNNNNNNNNNNF KF BE PRE NNN NNNNNNNNNKNNNNNNEEE PRNNNNIIIAIAIASAL MD MOM n n 8 Bon DOR NO RO RO MEME M Hd MEME NANNY NYE rn Don M MOM NANA NAAN NON M n n n PERRA PERRA ella tal by 10 min max by 10 min max number of classes PREREENNNNZIZIZIAZIZIIAIANNNN An l nrernnanNN AZZAIZAIAZAZAAAANNNN ARA to 0 350000000000000 E 32 by RERENNNNZIZAZIAIZIAZIIAIANNNN an ABRRRAEANNNNIIAIAZIAZAZAANNNN Ana 4 2 D 2 2 2 2 2 2 5 1 1 1 ff iG Y Y T deo ti 7 Y Y 1 1 ile 57 71 1 1 1 2 2 2 2 Hn off ots T i Inf If If 7 7 2 2 22 Beas 1 1 1 2 6
118. tors between different 4 Data input 19 numbers but the comma never as decimal marker The decimal marker must always be a point Thus you don t have to tell how many lines and columns are in the file but the number of columns has to be constant throughout the file the number of blanks between two attributes may vary though This format is fully compatible to the CSV comma separated values format which can be written e g by spread sheet calculation programs Note that no empty line is allowed This can lead to read errors especially if there is an empty line at the bottom of the file which is hard to see If necessary information about time and grid coordinates describing the data set additionally have to be provided within the data set specification at the command line In this case the ordering of rows and columns in the file must fit the following scheme The first column is for the southernmost latitude and the westernmost longitude the second column for the second longitude from west on the southernmost latitude etc The last column is for the easternmost and northernmost grid point The rows represent the timesteps All specifications have to fit to the number of rows and columns in the ASCII file 4 1 2 COARDS NetCDF data format Cost733class includes the NetCDF library and is able to read NetCDF files directly and use the information stored in this self describing data format It has been developed using 6 hourly NCEP NCAR rea
119. u opened by the right mouse button Moving the mouse with the left mouse button keeping pressed down allows to rotate the datacube within several directions while the mouse wheel allows to zoom in and out Holding the shift key and moving the mouse while the left mouse button is pressed shifts the data cube Holding the ctrl button pressed and selecting a data sphere with the left mouse button draws the map of this object at the lower left corner of the window switch dimension maps at axis on and off switch the data spheres off Switch the data spheres on only show the data spheres of the first class only show the data spheres of the second class Njej oj Ja only show the data spheres of the nineth class switch auto rotation off set auto rotation angle cycling through speeds from 0 to 0 5 degree per frame switch centroid spheres on and off OH ct oy Table 11 1 Keyboard shortcuts for contolling the visualization window 12 Development 82 12 Development 12 1 Implementing a new subroutine Before you change anything you should make a copy of the complete directory in order to give it another version number and to be able to go back to the start if anything is messed up e g cp r cost733class 0 19 03 cost733class 0 19 04 where 0 19 03 is the latest version you have downloaded If you write another subroutine store it into the src directory add the callin
120. udes compiler options to run parts of the code in parallel e compile intel omp sh this script uses the intel compiler suite In order to execute these scripts you type e g compile gnu debug opengl sh or sh compile gnu debug opengl sh These scripts can be easily copied and modified to save compiler options which are often needed However it is also possible to run the two commands configure and make manually one after the other as described in the following 2 2 1 configure The configure script tries to guess which compilers should be used however it is advicable to specifiy the compilers by setting the FC and CC flags E g if you want to use the GNU compilers gfortran and gcc say configure FC gfortran CC gcc or if the intel compilers should be used configure FC ifort CC icc Note that the FORTRAN and C compilers should be able to work together i e the binaries must be compatible This is not always the case e g when mixing other and GNU compilers depending on versions Also you can use some special compiler options e g for running parts of the classifi cations in parallel configure FC ifort CC icc FCFLAGS parallel openmp In the same manner options for the C compiler can be set by the CCFLAGS option Further options for the configure script control special features of the package configure disable netcdf
121. ven by the dat lt specification gt arguments must have the same number of variables columns as in the centroid input file For each record object of the data set the dissimilari ty distance between the object and the centroids is then calculated and the class number is chosen to be the one with the minimum distance The resulting catalog is written to the output file of the cla lt filename gt argument The following distance metrics can be selected by the dist lt int gt option e int gt 0 Minkowsky distance of order int ie 1 Manhattan block 2 Eu clidean 0 Chebychev e int 1 inverse Pearson correlation coefficient for non normalized objects using population dof n 1 e int 2 inverse Pearson correlation coefficient for non normalized objects using sample dof n e int 3 dist 1 D0 sum vecl vec2 nvar 1 where vecl and vec2 are the objects e int 4 dist 1 D0 sum vecl vec2 nvar The default distance 2 Euclidean distance 7 2 CNT centroid This operation allows to create centroids of a given classification catalog using the given data The flag clain lt spec gt is used to provide an existing catalog file comprising the integer class numbers for each object in each line respectively In order to calculate the centroids as means of all objects contained within the respective class the data used 7 Assignments to existing classifications 71 for building the centroi
122. y the user Thus most of them are ready for use integer nobs the number of observation i e objects or entities to classify or lines in the ASCII input data nobs is the total number of days for a daily data set integer nvar the number of variables i e attributes or parameters describing each object This commonly corresponds to the number of columns in an ASCII input file or grid points if patterns should be classified real kind 8 dat 1 nvar 1 nobs This is a two dimensional array of the input data in double precision integer npar This is the number of different data sets parameters contained in the dat array It is usually 1 however if the user has given more than one dat argument it is higher 12 Development 85 integer kind 1 cla 1 nobs This is a one dimensional array of one byte integer numbers It is allocated but not filled with values since it is the main result from the method subrou tine Thus you have to store your classification result here For each of the nobs objects you should store a type number in cla beginning with 1 The maximum type number allowed is 256 e global variables that have to be checked integer tyear nobs tmonth nobs tday nobs thour nobs These variables eventually hold the date for each of the nobs objects You have to ckeck whether they are allocated before you can use it real kind 8 minlon npar maxlon npar diflon npa
123. y to understand the basic synopsis of cost733class There are several different use cases in which cost733class can be used 1 Creation of classifications of any numerical data following an entity attribute value model main purpose of cost733class For more information about the data model used by cost733class see section 4 Evaluation of such classifications Comparison of such classifications Assignment of existing classifications to new data Gt ES Beo ge Rather simple data pre processing All of these use cases require different commands following different structures Therefor the next sections give brief instructions on how to use cost733class for each of these five cases 3 2 1 Creating classifications For the creation of classfications a basic command follows the scheme cost733class dat specification dat specification gt met method ncl integer gt cnt lt filename gt cla file gt more method specific options Essential for a successful completion of each classification run is the dat specification option which provides necessary information about the input data For some methods it is possible to give more than one data specification option for others it is preriquesite Furthermore the classification method must be named by the met method option The number of classes can be specified with ncl integer The filenames for the out put of the classfica