This package has at least eight features that together make it different from any existing location code I'm aware of.
genloc(3) describes the input functions used to load the data objects required by the location process. genloc builds heavily on parameter files (see pf(3)), but all the input functions have a parallel database equivalent where that is sensible.
The genloc routines all use a long list of options loaded into the different programs through a parameter file. Some programs have additional parameters peculiar to that program. This man page describes the parameters common to all programs based on this package. Parameters specific to a particular interface are described only in that program's man page.
For the global parameters it is most useful educationally to divide the input parameter space into five categories: (1) control parameters, (2) network geometry parameters, (3) seismic phase description parameters, (4) data tables, and (5) initial hypocenter parameters. Parameters in each of these categories are summarized in separate sections below. In actual practice, they are normally all stored in a single parameter file. (A notable exception is dbgenloc(1), which intentionally splits the phase description information from everything else and uses a mix of parameter and database inputs.)
arrival_residual_weight_method. Sets type of residual weighting to be applied to arrival time data. Options are:
huber    - implements Huber formula
bisquare - use bisquare formula
thomson  - use Thomson's redescending formula
none     - turns off residual weighting
slowness_residual_weight_method. Same as arrival_residual_weight_method for slowness vectors. Options are identical.
min_error_scale and max_error_scale. The residual weighting functions used here are analytic functions that assume the errors are scaled to have scatter of order 1.0. Horrible things happen if this is not forced upon the data. To guarantee this happens, internally I calculate a scale factor using a well known robust statistic, the interquartile range. All data are first scaled by the reciprocal of the given time uncertainty (see the data parameter section below). The interquartile range of these scaled residuals is then computed and divided by the magic number 1.349, which is the appropriate constant for residuals with a normal distribution. There is an intrinsic problem, however, with residual weighting that these scale parameters seek to solve. The scale factor can become too large in the presence of multiple outliers, or, more commonly, too small as the solution converges. In the latter case, if min_error_scale is set too low, the solution can get caught in a downward spiral in which more and more data are discarded, reducing the rms further and further. If the data uncertainties are properly set, the minimum should range from about 1.0 to 5.0. The maximum depends upon how well refined the solution is. When only a rough initial estimate is available, this should be set to a large number like 1000.0. For a relocation from a good initial solution, max_error_scale can be made as small as one can get by with. An extreme case is to set min_error_scale=max_error_scale, in which case the error scale factor will never be adjusted.
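For illustration only, the following Python fragment sketches the kind of scale calculation just described; it is not the libgenloc implementation, and the function name and defaults are invented for the example.

import numpy as np

def robust_error_scale(residuals, uncertainties,
                       min_error_scale=1.0, max_error_scale=50.0):
    # Scale each residual by the reciprocal of its measurement uncertainty.
    r = np.asarray(residuals, dtype=float) / np.asarray(uncertainties, dtype=float)
    # Interquartile range divided by 1.349 estimates the standard deviation
    # for normally distributed residuals.
    q1, q3 = np.percentile(r, [25.0, 75.0])
    scale = (q3 - q1) / 1.349
    # Clamp to the user-supplied bounds, exactly as min_error_scale and
    # max_error_scale limit the internally computed scale factor.
    return min(max(scale, min_error_scale), max_error_scale)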
Residual weighting requires some additional practical advice. The standard advice in the literature is to use the Huber formula for data at an early stage where potentially large outliers might be present and we are not necessarily close to a minimum. Use of the more aggressive Thomson formula is generally advised only after the solution has converged. You are welcome to experiment with the bisquare function, but it has less than ideal properties. That is, it tends to downweight data too soon unless the error scale is forced to remain relatively high. This can be achieved by setting min_error_scale to a fairly high value like 10.0.
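The weight formulas themselves are standard robust statistics. The sketch below shows textbook forms of the Huber and bisquare (Tukey) weights applied to residuals already normalized by the error scale; the tuning constants and the exact expressions used inside libgenloc may differ, so treat this only as a guide to the behavior described above.

import numpy as np

def huber_weight(r, a=1.345):
    # Weight is 1 for small residuals and decays as a/|r| beyond the
    # tuning constant a, so outliers are downweighted but never discarded.
    r = np.abs(np.asarray(r, dtype=float))
    return a / np.maximum(r, a)

def bisquare_weight(r, c=4.685):
    # Tukey's bisquare: weights fall smoothly to zero at |r| = c, which is
    # why it discards data "too soon" if the error scale is allowed to shrink.
    r = np.abs(np.asarray(r, dtype=float))
    return np.where(r < c, (1.0 - (r / c) ** 2) ** 2, 0.0)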
time_distance_weighting. Boolean switch set to true or false. Setting this to true enables distance weighting for all arrival times. The actual form of the distance weight can be phase dependent, and the method for specifying the weighting function is described below.
slowness_distance_weighting. Same as time_distance_weighting, but for slowness vector data.
slowness_weight_scale_factor. Slowness data are intrinsically of a different scale than arrival time data. The weights of all slowness data are multiplied by this scale factor. This can be used to increase or decrease the importance assigned to array data relative to arrival time data. I reiterate, however, that ALL data are scaled by 1/uncertainty parameters that are loaded with the data or set from defaults (see below).
recenter. Boolean parameter that when true turns on a feature pioneered by Lienert and Fraser (BSSA, vol. 76, no. 3, p. 771-783, 1986) called recentering. In this approach the origin time is treated separately from the spatial coordinates. In their original implementation they utilized the mean value of the residuals at each step as a correction to the origin time, and then solved the spatial coordinate equations by the standard method. I've modified this slightly here by utilizing the median rather than the mean as this is a more stable approach in the presence of outliers.
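A minimal sketch of the recentering step, assuming the residuals passed in are arrival time residuals in seconds (illustrative only, not the library code):

import numpy as np

def recenter(origin_time, time_residuals):
    # Use the median of the current arrival-time residuals as the origin
    # time correction (Lienert and Fraser used the mean), then remove that
    # correction from the residuals before solving for the spatial update.
    correction = float(np.median(time_residuals))
    return origin_time + correction, np.asarray(time_residuals) - correction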
generalized_inverse. Two options are presently supported: (1) pseudoinverse, or (2) marquardt. The first is linked to a single related parameter that is required when the pseudoinverse is selected. singular_value_cutoff sets the singular value cutoff used to form the pseudoinverse. Note this is a relative cutoff value. The actual singular value cutoff is determined from the largest singular value of the matrix that is inverted. That is, if smax is the largest singular value, we delete singular values from the pseudoinverse solution smaller than smax*singular_value_cutoff. A typical value to use for most data is about 0.001 to 0.0001.
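The relative cutoff can be pictured with this sketch, which solves the linearized system A dx = r by truncated SVD; names here are illustrative rather than taken from libgenloc.

import numpy as np

def pseudoinverse_step(A, r, singular_value_cutoff=0.001):
    # Singular values are returned in descending order, so s[0] is smax.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > singular_value_cutoff * s[0]
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]
    # Pseudoinverse solution with the small singular values deleted.
    return Vt.T @ (s_inv * (U.T @ r))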
The marquardt option invokes a damped least squares method commonly attributed to Marquardt. I utilize the dynamically variable damping form of this methodology in which the damping parameter is adjusted automatically at each iteration. The basic recipe is to increase damping whenever a calculated step would lead to an increase in rms, and to decrease the damping factor in a regular way otherwise. For this to be stable, however, we require three parameters: (1) a ceiling on the damping parameter, (2) a floor on the damping parameter, and (3) an adjustment factor that determines how the damping parameter is scaled upward or downward. These three parameters are defined here by parameters called max_relative_damp, min_relative_damp, and damp_adjust_factor respectively. Note that the first two are labeled "relative" because they are not defined by an absolute scale, but are scaled by the largest singular value of the matrix being solved in the same way as the singular_value_cutoff parameter is defined above. Reasonable ranges for these three numbers are 1 to 10 for max_relative_damp, 0.0001 to 0.000001 for min_relative_damp, and 5 to 10 for damp_adjust_factor.
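The damping adjustment can be summarized with the sketch below. The rms evaluation uses the linearized residual, which is a simplification of what a real locator would do (recompute travel times); all names are invented for the example, and min_relative_damp and max_relative_damp follow the same "relative to the largest singular value" convention described above.

import numpy as np

def linearized_rms(A, r, dx):
    # rms of the predicted residual after applying the step dx.
    resid = r - A @ dx
    return float(np.sqrt(np.mean(resid ** 2)))

def marquardt_step(A, r, damp, rms_previous,
                   min_relative_damp=5.0e-6, max_relative_damp=1.0,
                   damp_adjust_factor=5.0):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    smax = s[0]
    while True:
        lam = damp * smax                       # relative -> absolute damping
        dx = Vt.T @ ((s / (s ** 2 + lam ** 2)) * (U.T @ r))
        if linearized_rms(A, r, dx) <= rms_previous:
            # Step reduces rms: accept it and relax damping for the next pass.
            return dx, max(damp / damp_adjust_factor, min_relative_damp)
        if damp >= max_relative_damp:
            # Damping is at its ceiling: give up and return the current step.
            return dx, damp
        # Predicted rms grew: stiffen the damping and recompute the step.
        damp = min(damp * damp_adjust_factor, max_relative_damp)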
step_length_scale_factor, min_step_length_scale, depth_floor, and depth_ceiling are all utilized for step length damping. At present step length damping is not optional, and is always enabled. It is utilized here only to control unstable depth estimates. The basic algorithm is that whenever a calculated step would lead to a depth adjustment that would place the source above depth_ceiling (normally 0.0) OR below depth_floor, the step length (vector magnitude, not direction) is MULTIPLIED by the step_length_scale_factor repeatedly until the solution falls inside the bounds of depth_ceiling to depth_floor. That is, if we let s=step_length_scale_factor, the program first tries the step sx. If the solution still violates the ceiling or floor, it tries s*sx, then s*s*sx, then s*s*s*sx, etc. Just as in Marquardt's method, for this to be stable the range of this scaling must be limited. Internally, the program never allows the scale factor to exceed 1.0 for obvious reasons; this constraint is enforced when the control file is read, and if step_length_scale_factor is specified as a number greater than 1.0, the program will post a warning diagnostic and reset this parameter to a default. The floor on the scale factor is controlled by the parameter min_step_length_scale, which is the smallest scale factor allowed on a calculated correction that would fall outside the depth ceiling or floor. It is easy to show that if s is the scale factor and m is the min_step_length_scale, the maximum number of adjustments that will be attempted is log(m)/log(s). It is equally important to understand what the algorithm does if the scale factor is reduced to min_step_length_scale. When this occurs, the algorithm fixes the depth at the ceiling or floor (whichever is involved) and determines the horizontal adjustment from the scaled step. As a consequence of this, the user should recognize that events at depths near the ceiling or floor may effectively have a fixed depth.
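A sketch of this scaling loop may make the bookkeeping clearer. Depth is positive down, the correction dx is (dlat, dlon, dz), and the conversion of horizontal steps from km to degrees is ignored here; names and details are illustrative only.

def apply_step_with_depth_bounds(hypocenter, dx,
                                 step_length_scale_factor=0.5,
                                 min_step_length_scale=0.01,
                                 depth_ceiling=0.0, depth_floor=700.0):
    lat, lon, z = hypocenter
    scale = 1.0
    while True:
        z_new = z + scale * dx[2]
        if depth_ceiling <= z_new <= depth_floor:
            # Depth is inside the bounds: apply the (possibly scaled) step.
            return (lat + scale * dx[0], lon + scale * dx[1], z_new)
        if scale * step_length_scale_factor < min_step_length_scale:
            break                     # the scale factor has bottomed out
        scale *= step_length_scale_factor
    # Fix the depth at the violated bound and keep the scaled horizontal move.
    z_fixed = depth_ceiling if z_new < depth_ceiling else depth_floor
    return (lat + scale * dx[0], lon + scale * dx[1], z_fixed)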
If you understand this algorithm, it should be obvious that improper setting of these parameters can easily produce a solution that will never converge. The most important guideline is that, in general, it is a bad idea to let min_step_length_scale get too small. Step-length damping is most likely to become significant for events that are shallow with bad depth control. It is important to recognize that in this situation the final solution can depend upon the choice of the step-length parameters. The key advice is that for rough estimates (e.g. an estimate made automatically by a real-time system) the parameter min_step_length_scale should be kept relatively large (I recommend 0.1 to 0.25) to speed convergence of shallow sources where steps are calculated that would place the source in the ionosphere. Conversely, for refined catalogs where one is starting from a reasonable first guess for all events, min_step_length_scale can be set to a small number like 0.001.
fix_latitude, fix_longitude, fix_depth, and fix_origin_time are boolean parameters whose purpose should be obvious. Note that any combination of these four parameters can be set to true, although setting all four true is absurd except as an expensive way to calculate travel time residuals.
maximum_hypocenter_adjustments. Sets the maximum number of times a solution will be adjusted before the program will give up and exit. A typical number is 50.
deltax_convergence_size. The iteration sequence will terminate when the vector correction in the space coordinates of the hypocenter (in units of KILOMETERS) falls below this parameter.
relative_rms_convergence_value. A common reason to terminate a solution is based on data fit. Clearly, when a solution is not improving the fit to the data significantly, further steps are not necessarily warranted. Here I use "relative" rms convergence. That is, the solution is terminated when the ratio of the difference in weighted rms residuals between the current step and the previous step to the rms residual of the previous step (i.e. delta_rms/rms) falls below this parameter. This number should not be made too small, or the solution may terminate prematurely when the rms minimum of the solution has a very flat floor. This may be proper, but the answer in this case can depend strongly on the starting solution. My general opinion is that this parameter should be used as a fallback to terminate marginal solutions that bounce around in a flat-floored rms valley and never converge when measured by spatial adjustments. I recommend setting this parameter to 0.001 to 0.0001.
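Putting the three termination criteria together, the test applied after each iteration amounts to something like this hypothetical helper:

import math

def should_terminate(dx_km, rms_current, rms_previous, iteration,
                     deltax_convergence_size=0.01,
                     relative_rms_convergence_value=0.0001,
                     maximum_hypocenter_adjustments=50):
    # Spatial convergence: the length of the hypocenter correction in km.
    if math.sqrt(sum(c * c for c in dx_km)) < deltax_convergence_size:
        return True
    # Relative rms convergence: delta_rms / rms of the previous step.
    if rms_previous > 0.0 and \
            abs(rms_previous - rms_current) / rms_previous < relative_rms_convergence_value:
        return True
    # Fallback: give up after maximum_hypocenter_adjustments iterations.
    return iteration >= maximum_hypocenter_adjustments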
There are two basic geometry tables: station coordinates, and array coordinates. These are specified by two tables that are most easily seen by showing a simple example:
seismic_stations &Tbl{
    CHM   42.9986  74.7513  0.6550
    EKS2  42.6615  73.7772  1.3600
    USP   43.2669  74.4997  0.7400
    BGK2  42.6451  74.2274  1.6400
    AML   42.1311  73.6941  3.4000
    KZA   42.0778  75.2496  3.5200
    TKM   42.8601  75.3184  0.9600
    KBK   42.6564  74.9478  1.7600
    AAK   42.6333  74.4944  1.6800
    UCH   42.2275  74.5134  3.8500
    KZA   42.0778  75.2496  3.5200
    KBK   42.6564  74.9478  1.7600
    ULHL  42.2456  76.2417  2.0400
    TKM2  42.9208  75.5966  2.0200
}
seismic_arrays &Tbl{
    GEYBB9  37.9293  58.1125  0.6629
    GEYG36  37.9293  58.1125  0.6629
}
Note that both tables have identical format and contain: name, latitude, longitude, and elevation (in km). They are set in a parameter file as a Tbl object. The only difference for the seismic_arrays table is that a given array may appear under more than one name due to different subarray configurations. In the example shown both entries are referenced to a common origin, but this may not always be the case.
All the genloc programs use this form of geometry input with the exception of relocate(1). That program reads this same information from the css3.0 site table. The parameter file can contain the geometry information for relocate, but it will simply be ignored.
A final parameter that is implemented in both the db and parameter file geometry input is elevation_datum. This parameter can be used to set the reference elevation for 0 depth to something other than sea level. Note, however, that at present the depths computed by the location program are RELATIVE TO THIS DATUM, NOT TO SEA LEVEL. This parameter defaults to 0.0, in which case the computed depths are relative to sea level.
This section of the parameter file is by far the most complex. It makes use of a novel feature of Dan Quinlan's parameter files that allows a hierarchy of Arr objects. This allows the parameter file to repeat key words nested within curly brackets. This is useful here to build the descriptions of what I call "phase handles" using a common set of parameter names for each phase. It is most easily understood by first presenting an example:
phases &Arr{
    P &Arr{
        time_distance_weight_function &Tbl{
            0.0    1.0
            10.0   1.0
            90.0   0.7
            92.0   0.0
            360.0  0.0
        }
        ux_distance_weight_function &Tbl{
            0.0    1.0
            10.0   1.0
            90.0   0.7
            92.0   0.0
            360.0  0.0
        }
        uy_distance_weight_function &Tbl{
            0.0    1.0
            10.0   1.0
            90.0   0.7
            92.0   0.0
            360.0  0.0
        }
        default_time_uncertainty 0.05
        default_slowness_uncertainty 0.01
        dt_bound_factor 0.01
        du_bound_factor 0.035
        time_station_corrections &Tbl{
            GEYBB9  0.01
            KBK     0.02
            USP    -0.2
        }
        ux_station_corrections &Tbl{
            GEYBB9   0.001
            GEYBB12  0.0015
        }
        uy_station_corrections &Tbl{
            GEYBB9  -0.001
            GEYBB12 -0.0015
        }
        travel_time_calculator ttlvz
        velocity_model &Tbl{
            3.5  0.0
            6.0  5.0
            8.0  30.0
        }
    }
    S &Arr{
        time_distance_weight_function &Tbl{
            0.0    1.0
            10.0   1.0
            90.0   0.7
            92.0   0.0
            360.0  0.0
        }
        ux_distance_weight_function &Tbl{
            0.0    1.0
            10.0   1.0
            90.0   0.7
            92.0   0.0
            360.0  0.0
        }
        uy_distance_weight_function &Tbl{
            0.0    1.0
            10.0   1.0
            90.0   0.7
            92.0   0.0
            360.0  0.0
        }
        default_time_uncertainty 0.2
        default_slowness_uncertainty 0.005
        dt_bound_factor 0.01
        du_bound_factor 0.035
        time_station_corrections &Tbl{
            GEYBB9  0.01
            KBK     0.02
            USP    -0.2
        }
        ux_station_corrections &Tbl{
            GEYBB9   0.001
            GEYBB12  0.0015
        }
        uy_station_corrections &Tbl{
            GEYBB9  -0.001
            GEYBB12 -0.0015
        }
        travel_time_calculator ttlvz
        velocity_model &Tbl{
            2.0  0.0
            3.5  5.0
            4.5  30.0
        }
    }
}
Notice the hierarchy that begins with the keyword "phases" and that the closing curly bracket does not end until the close of this example. Thus, "phases" is the highest level keyword that encloses this entire section of the input parameter file. This section can sometimes become huge as we will see in a moment.
The next level of the hierarchy is phase identifiers. A phase handle is built for each named "phase" within this block. In the example here, this is P and S, but it could be extended to as many phase names as one wished.
Within each phase identifier block, the following parameters are fixed and all are required: time_distance_weight_function, ux_distance_weight_function, uy_distance_weight_function, default_time_uncertainty, default_slowness_uncertainty, dt_bound_factor, du_bound_factor, time_station_corrections, ux_station_corrections, and uy_station_corrections. The admittedly verbose names should make most of their purposes obvious. Note that the units of all time quantities are seconds and slowness related quantities have units of s/km.
Two less obvious parameters are dt_bound_factor and du_bound_factor. These are used in computing "model error" estimates following a theory described in the following paper: Pavlis (1986) BSSA, 76, 1699-1717. dt_bound_factor is used to compute a bound on model-related travel time errors. The formula is modified from equation (25) of that paper. That is, rather than use arc length (in km) and a relative slowness error, here we use a fractional travel time error accumulation. That means the computed errors assume the travel time model error is bounded by T*dt_bound_factor. Thus, for example, if dt_bound_factor were 0.01, we would assume the travel time error for a phase with a 100 s travel time is bounded by 1 s. du_bound_factor is similar, but it has dimensions of s/km and has no distance dependence. (This problem was not discussed in the 1986 paper, but the extension is straightforward.)
The distance weight parameters define a distance weighting function as a series of ordered pairs: a distance (in degrees) followed by the weight to be assigned at that distance. These are interpolated internally using simple linear interpolation between points to define the weight at a given distance. Note these parameters are required for each phase even if distance weighting is turned off (see above), and each list MUST end with 360.0. If the last point is not given as 360.0, it will be added with a weight of 0.0 and a diagnostic will be issued.
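The interpolation is ordinary piecewise linear interpolation of the (distance, weight) pairs, equivalent to the following sketch:

import numpy as np

def distance_weight(delta_deg, weight_table):
    # weight_table is a list of (distance_degrees, weight) pairs such as
    # time_distance_weight_function above; np.interp does the linear
    # interpolation between the tabulated points.
    table = np.asarray(weight_table, dtype=float)
    return float(np.interp(delta_deg, table[:, 0], table[:, 1]))

# Using the P table from the example: full weight to 10 degrees, tapering
# to zero between 90 and 92 degrees.
p_table = [(0.0, 1.0), (10.0, 1.0), (90.0, 0.7), (92.0, 0.0), (360.0, 0.0)]
print(distance_weight(91.0, p_table))    # midway through the taper: 0.35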
Station corrections are NOT required for each station. If a correction for either time or slowness is not found for a given station-phase-data type, that term will be set to 0.0.
The bottom level of the hierarchy in this set of parameters is the travel time section. The form of this part of the parameter file for a given phase depends upon the setting of the parameter travel_time_calculator. At present this keyword should be followed by one of three strings used to define the travel time calculator: (1) ttlvz, (2) "uniform table interpolation" (the quotes are not necessary, but they emphasize the string has embedded blanks), or (3) generic. ttlvz is a simple, constant velocity, flat-earth, layered model travel time calculator. Note you can use this for any phase, but be aware that this function always calculates first arrivals. Thus, it would produce wrong answers for something like PmP, for example, but it could be used to compute phases like Pn or Lg provided one properly defined the distance weights for these phases. "uniform table interpolation" selects a general-purpose travel time table grid interpolation routine. (A program taup_convert(1) can be used to build these tables for a wide range of seismic phases using the tau-p calculator.) Finally, generic implements a generic travel time interface presently under development that would unify travel time calculation with other datascope applications like dbpick. At present, this is only used as a direct interface into the tau-p library.
I anticipate alternative travel time calculators could be inserted here in the future, but at present these are the only ones that are accepted. Different parameters are searched for in this section depending on which calculator is selected.
The example above illustrates parameters required by the ttlvz function. That is, all we require is a Tbl headed by the keyword velocity_model. Each entry in the table is an ordered (velocity, depth) pair where the depth defines the depth to the top of the layer. Note that negative depths are allowed, and highly recommended for local problems like volcanos where sources often occur above the elevation of all stations.
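To make the ttlvz idea concrete, the following much-simplified sketch computes a first arrival for a flat, constant-velocity layered model with source and receiver both at the surface, taking the minimum of the direct-wave and head-wave (refraction) times. The real ttlvz handles buried sources and station elevations, so this is only a conceptual illustration.

import math

def first_arrival_flat_layers(x_km, velocity_model):
    # velocity_model is a list of (velocity_km_s, depth_to_top_km) pairs,
    # exactly as in the velocity_model Tbl above.
    v = [layer[0] for layer in velocity_model]
    ztop = [layer[1] for layer in velocity_model]
    thickness = [ztop[i + 1] - ztop[i] for i in range(len(v) - 1)]
    times = [x_km / v[0]]                      # direct wave in the top layer
    for k in range(1, len(v)):
        if v[k] <= max(v[:k]):
            continue                           # no head wave below a velocity inversion
        # Classic refraction time: x/v_k plus the delay accumulated in the
        # overlying layers.
        delay = sum(2.0 * thickness[i] *
                    math.sqrt(1.0 / v[i] ** 2 - 1.0 / v[k] ** 2)
                    for i in range(k))
        times.append(x_km / v[k] + delay)
    return min(times)

# First arrival at 100 km offset for the P model of the example.
print(first_arrival_flat_layers(100.0, [(3.5, 0.0), (6.0, 5.0), (8.0, 30.0)]))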
For "uniform table interpolation", only one parameter follows. table_file gives the name of a parameter file containing the uniform grid table in the format described in the FILES section below. Note that ".pf" will be appended to this name since the string is passed directly to pfread.
For generic, the only required parameters are the two required by the generic interface: TTmethod and TTmodel. See tt(3) for a complete description.
This is also most easily seen by an example:
arrivals &Tbl{
    P  CHM  712788677.83217   0.028  1011
    P  KBK  712788673.29933   0.028  1013
    P  TKM  712788676.35498   0.014  1015
    P  USP  712788680.86788   0.038  1017
    S  CHM  712788726.71320  -1.000  1012
    S  KBK  712788720.20059  -1.000  1014
    S  TKM  712788725.74570  -1.000  1016
    S  USP  712788746.91177  -1.000  1018
}
slowness_vectors &Tbl{
    P  GEYBB9  -0.125  0.009  0.01  0.01  1024
}
Note the order of entries for arrival time measurements is: phase name, station, epoch time, time uncertainty, and arid. Note that the -1.0 is used to flag a point with an unknown uncertainty. Listing any negative number for the uncertainty will lead to use of the default time uncertainty parameter defined for that phase. The "arid" field (integer at the end of the example) is not required by all programs. This field is ignored by sgnloc, but is required by orbgenloc. In contrast, the relocate program doesn't even look at this parameter, but obtains these data from a css3.0 database.
By default it is assumed that slowness_vectors are tabulated as shown: phase name, array, ux, uy, delta_ux, and delta_uy. Again, if delta_ux or delta_uy is set to a negative number, the default defined for this phase will be used. Two options exist for slowness data. By default slowness is assumed to be tabulated in units of seconds/km. However, the parameter slowness_units can be used to override this. If the parameter slowness_units is found, the line is scanned for the string "degrees". If found, all slowness measurements are assumed to be in s/degree. In addition, slowness vectors by default are assumed to be tabulated by the components ux (east positive) and uy (north positive) of the slowness vector. If the parameter slowness_format appears, followed by the string "azimuth", it is assumed that the two numbers following the array name are the slowness magnitude (in s/km or s/degree depending on slowness_units) and the azimuth. Only standard azimuths measured in degrees are accepted.
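If you need to convert between the supported formats, the relationships are as sketched below; the 111.19 km/degree factor is the usual approximate conversion at the Earth's surface.

import math

KM_PER_DEGREE = 111.19          # approximate great-circle km per degree of arc

def slowness_components(slowness, azimuth_deg, units="s/km"):
    # Convert a (magnitude, azimuth) slowness measurement to the default
    # representation: ux (east positive) and uy (north positive) in s/km.
    # The azimuth is measured in degrees clockwise from north.
    if units == "s/degree":
        slowness = slowness / KM_PER_DEGREE
    az = math.radians(azimuth_deg)
    return slowness * math.sin(az), slowness * math.cos(az)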
Only the sgnloc (command line) interface actually reads data through the parameter file like this. The relocate interface does not use this set of parameters. Instead it extracts this information from the arrival table of the css3.0 schema. dbgenloc and orbgenloc do the same, but the relationship is a little more complex. A key fact about the database interface worth noting here is the manner in which these routines handle the uncertainty estimates. This is discussed in some detail in genloc(3) and dbgenloc(1) where the context is a little clearer.
initial_location_method. Switch for the initial location method to be used. The following options are accepted:
The initial location methods interact with a series of parameters that cascade from the choice of the method.
initial_latitude, initial_longitude, initial_origin_time, and initial_depth set the initial hypocenter guess manually. Latitude and longitude need to be in degrees, depth in km, and origin time must be specified as an epoch time.
initial_depth sets the depth used for the trial location. In the S-P method a single-depth search is carried out along a circular arc computed from the S-P time of the nearest station. The conversion from S-P time to the distance defining this circle is computed from the parameter S-P_velocity. The program searches this circle at the given depth, computing rms residuals at number_angles equally spaced points.
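A rough sketch of the arc search follows; the arc distance (already derived from the S-P time and S-P_velocity) and a residual evaluation callback are taken as inputs, and a simple flat-earth approximation converts distance and azimuth to latitude/longitude offsets. All of this is illustrative, not the library code.

import math

def search_arc(station_lat, station_lon, distance_km, depth_km,
               number_angles, rms_of_trial):
    # rms_of_trial(lat, lon, depth) must return the weighted rms residual
    # for a trial hypocenter at that point.
    km_per_deg = 111.19
    best = None
    for i in range(number_angles):
        az = 2.0 * math.pi * i / number_angles
        lat = station_lat + distance_km * math.cos(az) / km_per_deg
        lon = station_lon + distance_km * math.sin(az) / (
            km_per_deg * math.cos(math.radians(station_lat)))
        rms = rms_of_trial(lat, lon, depth_km)
        if best is None or rms < best[0]:
            best = (rms, lat, lon, depth_km)
    return best    # (rms, lat, lon, depth) of the best trial point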
The following parameters are used by all methods that use a grid search either explicitly or implicitly.
latitude_range, longitude_range, depth_range, nlat, nlon, ndepths. These parameters set the area used in a rectangular grid search. A grid of nlat by nlon by ndepths points is computed, centered on the center_latitude, center_longitude, and center_depth point. The program determines an initial location by searching for the minimum travel time residual on this grid. Latitudes and longitudes are, as always, in degrees and depths are in kilometers. Thus, to search the whole world with one point per degree and 0 to 500 km depths at 50 km intervals use:
center_latitude 0.0
center_longitude 0.0
center_depth 250.0
latitude_range 180.0
longitude_range 360.0
depth_range 500.0
nlat 180
nlon 360
ndepths 11
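Conceptually the grid search just evaluates the rms residual on this regular mesh and keeps the best node, as in the sketch below; rms_of_trial again stands in for the residual evaluation, and the assumption that the grid is centered on the given point matches the whole-world example above.

import numpy as np

def rectangular_grid_search(center_latitude, center_longitude, center_depth,
                            latitude_range, longitude_range, depth_range,
                            nlat, nlon, ndepths, rms_of_trial):
    lats = np.linspace(center_latitude - latitude_range / 2.0,
                       center_latitude + latitude_range / 2.0, nlat)
    lons = np.linspace(center_longitude - longitude_range / 2.0,
                       center_longitude + longitude_range / 2.0, nlon)
    depths = np.linspace(center_depth - depth_range / 2.0,
                         center_depth + depth_range / 2.0, ndepths)
    best = None
    for lat in lats:
        for lon in lons:
            for z in depths:
                rms = rms_of_trial(lat, lon, z)
                if best is None or rms < best[0]:
                    best = (rms, lat, lon, z)
    return best    # (rms, lat, lon, depth) of the best grid node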
minimum_distance, maximum_distance, minimum_azimuth, maximum_azimuth, number_points_r, number_points_azimuth, and ndepths set up a radial grid search. Azimuth values are assumed to be in degrees. The following would search a radial segment from 1 to 2 degrees away from a specified center point (see above) with a range of azimuths from 70 to 110 degrees, at 1 degree intervals in azimuth and 0.1 degree increments in distance (about 11 km), at a fixed depth of 5 km.
minimum_distance 1.0
maximum_distance 2.0
minimum_azimuth 70.0
maximum_azimuth 110.0
center_depth 5.0
number_points_r 11
number_points_azimuth 41
ndepths 1

MINUS PHASES
Sometimes it is desirable to use phases like S-P instead of using the actual arrival time. The main application of this approach is to handle stations that have timing problems, but it is conceivable one would want to use this for other reasons. Nonetheless, the underlying assumption of genloc is that the use of phases like S-P should be driven by timing problems. For this reason genloc is set up to handle minus phases like this automatically when it is told a station has a bad clock. This is triggered by two parameters.
bad_clock is a Tbl parameter that lists stations that should be viewed as always having bad clocks. To use only S-P times make this list all the stations in your network.
clock_error_cutoff is a real number cutoff on clock timing accuracy. Programs using libgenloc can call a function that checks for an extension table to css3.0 called timing that defines clock accuracy over various time spans. That table is scanned against this parameter to find time periods where the clock accuracy is worse than the value specified by this parameter (see also reftek_dbtiming(1), which builds this table for reftek data loggers). During processing, any station whose clock has a listed accuracy worse than this threshold will automatically revert to using only minus phases.
A final requirement is that any minus phase must have a full description in the phase handle section. That is, one must have built a phase handle for each desired "minus" phase. Note that the minus phase handle can be built for multiple difference-based phases (e.g. S-P, PP-P, pP-P, etc.)
DEFAULTS
All the parameters described in the CONTROL PARAMETERS section above can be omitted and the following defaults would be set:

arrival_residual_weight_method huber
slowness_residual_weight_method huber
time_distance_weighting true
slowness_distance_weighting true
slowness_weight_scale_factor 1.0
min_error_scale 1.0
max_error_scale 50.0
depth_ceiling 0.0
depth_floor 700.0
generalized_inverse marquardt
min_relative_damp 0.000005
max_relative_damp 1.0
damp_adjust_factor 5.0
recenter false
fix_latitude false
fix_longitude false
fix_depth false
fix_origin_time false
step_length_scale_factor 0.5
min_step_length_scale 0.01
maximum_hypocenter_adjustments 50
deltax_convergence_size 0.01
relative_rms_convergence_value 0.0001

The above constants are hard wired into the code, and some were found later to be less than ideal. A better starting point, in all cases, is to use the sample parameter files found in $ANTELOPE/data/pf.
FILES
The travel time tables are specified as ASCII parameter files. These files do not, however, follow the regular rules of parameter files because they are defined deep within the hierarchy of another parameter file. Instead, they follow a model similar to dblocsat where the tables are assumed to be in a standard place. In particular, the given name is assumed to be a relative path from the location defined by the environment variable GENLOC_MODELS. That is, the function involved calls pfload on a file $GENLOC_MODELS/tables/genloc/x where x is the value entered for the parameter table_file. Note that this name should not have the appending ".pf" in accordance with the usual rules for parameter files.
nx and nz define the number of points in the grid. nx is, obviously, the number of points in epicentral distance, and nz is the number of source depths.
dx and dz define the grid point spacing and have mixed units. dx is specified in degrees, and dz is specified in kilometers. These are fixed intervals that specify the regular mesh on which travel times and slowness are tabulated.
x0 and z0 are optional parameters. They both default to 0.0. They specify the distance and depth of the first point in the table. This is useful, for example, with a phase like Pn that does not exist until one is beyond a critical distance.
It is highly recommended that the parameter depth_floor be set to the minimum value of z0+(nz-1)*dz for all phases, or the calculator will not know how to handle steps that put the source below the bottom of the tables.
The tables are then specified in the parameter file as a very long Tbl tagged with the keyword uniform_grid_time_slowness_table. The entries of the table will look something like the following:
uniform_grid_time_slowness_table &Tbl{
    0.001100     0.172414  -0.000000  t
    9.585741     0.172412  -0.000000  t
    19.171297    0.172407  -0.000000  t
    28.149948    0.123692  -0.000000  c
    35.026741    0.123686  -0.000000  t
    41.903145    0.123677  -0.000000  t
    48.778996    0.123666  -0.000000  t
    55.654144    0.123652  -0.000000  t
    62.528427    0.123635  -0.000000  t
    69.401695    0.123615  -0.000000  t
    76.273788    0.123593  -0.000000  t
    83.144547    0.123567  -0.000000  t
    ...
    1152.625244  0.001050  -0.000005  t
    1152.676392  0.000788  -0.000005  t
    1152.712891  0.000526  -0.000005  t
    1152.734863  0.000263  -0.000005  t
}

where the actual table has a total of nx*nz lines. These are assumed to be arranged as scans at constant depth, so the table is expected to contain nx entries for a source at z0, followed by nx entries for a source at z0+dz, etc. The format of each line is:
time  slowness  du/dx  branch_code

Units of time are seconds, slowness units are seconds/km, and du/dx is in seconds/km per km. Note that du/dx can often be neglected, so if you wish to make a set of tables using a routine other than taup_convert, you may well be able to get by with setting that column of the table to 0.0 everywhere. du/dx is largest for direct wave branches at offsets less than the source depth. Everywhere else the dominant terms come from angle terms and terms that scale with slowness. Note, for example, that existing programs like LOCSAT implicitly ignore terms involving du/dx anyway by keying on the azimuth rather than the slowness vector components.

The branch_code is used to work around various levels of discontinuity that commonly exist in travel time tables and an ambiguity in sign. The following characters are presently recognized (anything else will generate an error and cause the table for the offending phase to be ignored):
t = turning ray
u = upward directed branch
c = crossover
j = jump discontinuity
n = not observable at this distance

t and u are used to distinguish an ambiguity in sign between direct arrivals that result from a source very close to a receiver and arrivals from more distant events. Both can have the same apparent slowness, but the sign must be known to properly compute derivatives of time and slowness with respect to depth. c and j describe two levels of discontinuities that exist in all travel time tables. A crossover is a discontinuity in slope that occurs, for example, at the Pg-Pn crossover for the generic phase "P". A jump discontinuity, in contrast, is a step discontinuity in travel time. This occurs, for example, in the core shadow where we have a jump of over 250 s between Pdiff and PKiKP. n is used to flag a phase that is simply not observable at a given distance range in the table (e.g. S waves in the core shadow, or pP at close distance ranges).

In order to analytically compute time and slowness derivatives, velocities at each of the source depths at which the travel time tables are tabulated are required. These are assumed to be present in the parameter file under the keyword velocities, which begins a Tbl with nz entries tabulating the velocity at each of the nz tabulated source depths.
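Given that layout (nz scans of nx distance points each), reading a value from the table is a matter of computing the row index and interpolating between neighboring nodes. The sketch below does a simple bilinear interpolation of one numeric column and ignores the branch codes, so it is only meaningful away from crossovers, jumps, and unobservable regions; it is not the libgenloc interpolator.

import numpy as np

def table_value(rows, nx, nz, dx, dz, x_deg, z_km, x0=0.0, z0=0.0, column=0):
    # rows is the numeric part of uniform_grid_time_slowness_table (branch
    # codes stripped): nz constant-depth scans of nx points, depth varying
    # slowest, so row index = iz*nx + ix.
    grid = np.asarray(rows, dtype=float)[:, column].reshape(nz, nx)
    ix = max(0, min(int((x_deg - x0) / dx), nx - 2))
    iz = max(0, min(int((z_km - z0) / dz), nz - 2))
    fx = (x_deg - x0) / dx - ix
    fz = (z_km - z0) / dz - iz
    return ((1 - fx) * (1 - fz) * grid[iz, ix] +
            fx * (1 - fz) * grid[iz, ix + 1] +
            (1 - fx) * fz * grid[iz + 1, ix] +
            fx * fz * grid[iz + 1, ix + 1])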
SEE ALSO
sgnloc(1), ggnloc(3)

BUGS AND CAVEATS
Several things are presently lacking and/or incomplete that the user should be warned about: (1) a cascaded grid search procedure is planned, but has not yet been implemented; (2) the set of travel time options is not as rich as it could be; and (3) the minus phase handlers (e.g. S-P, pP-P, etc.) will only work with travel time calculators connected through the generic (tt(3)) interface.

AUTHOR
Gary L. Pavlis
Antelope User Group Contributed Software