PLATES (a thin-plate, flat-Earth F-E program for anelastic plate
deformation) and PLOTPLATES (a graphics program to plot both input and
output datasets on Versatec) and Plates2AI (a graphics program to plot
input and output in Adobe Illustrator PostScript) and accessory
programs for preparing grids and scoring models;

released August 1993; updated 5 March 1999

by Peter Bird
Department of Earth and Space Sciences
University of California
Los Angeles, CA 90095-1567
(310) 825-1126
pbird@ess.ucla.edu

Boilerplate
-----------
(C) Copyright 1991, 1992, 1993, 1995, 1997, 1998, 1999 by Peter Bird
and the Regents of the University of California.  Colleagues are free
to use, modify, and distribute this software at no charge, but subject
to the following conditions:
1. If by using these programs you can obtain scientific results
   worthy of publication, you must acknowledge the source of the
   program in any seminars, proposals, and papers you prepare.
2. The programs and/or source code shall not be modified in a way
   which hides their author or origin.
3. These notices shall be distributed together with any copies of the
   source code or executable programs.  While you may recover your
   actual costs of blank media and handling, no other fee or price
   shall be charged for copies you distribute.
4. By using this package, you assume an ethical obligation to help
   the author improve it, by reporting any bugs or defects and
   providing the files necessary to reproduce the error.

Purpose
-------
To solve the momentum equation (stress-equilibrium equation) under the
quasistatic (creeping-flow) approximation, for thin faulted plates of
lithosphere (crust + mantle lithosphere) deforming anelastically on a
flat Earth.  To calculate long-term average velocities, long-term
average fault slip rates, long-term average anelastic strain rates,
and long-term average stresses.  (Here, "long-term" means averaged
over several seismic cycles.)
To provide either colored or black-and-white maps of the input data
(finite element grid, elevation, heat flow, crustal thickness, mantle
lithosphere thickness) and the computed results (velocity, slip rate,
strain rate, stress).

Limitations:
------------
In program PLATES, the bottom of the model domain is the base of the
lithosphere (the asthenosphere).  If you wish to end the model at the
Moho, you could use program FAULTS, available in directory
.../neotec/faults of this archive.  (It is especially important to
include the strength of mantle lithosphere when modeling oceanic
plates.)
   If the region you are modeling is so wide that a flat-Earth
approximation is unacceptable, then you need programs ORBWEAVE and
SHELLS, available in .../neotec/shells.  (However, these programs use
less sophisticated 3-node elements, and are therefore less accurate
when modeling a local region like one state.)
   Programs PLATES, FAULTS, and SHELLS are all for "snapshot" models
of instantaneous velocity at the present.  They do not time-step or
compute finite strains, so they cannot be used for historical
simulations.  To do this, obtain the finite element program LARAMY
from .../paleotec.  (Unfortunately, LARAMY does not permit fault
elements within the plate -- only a single subduction zone thrust at
the edge of the grid is allowed.)
   All of my finite-element programs are "thin-plate" programs which
use the isostatic approximation in place of the vertical component of
the momentum equation.  Therefore, they do not model anisostatic
effects like plate flexure in a trench, or around a seamount.
   All of my finite-element programs are anelastic.  Therefore, they
cannot be used to model earthquakes or post-earthquake relaxation,
for which you will need a (nonlinear) viscoelastic program like
Melosh's TECTON.

References
----------
Bird, P. (1988) Formation of the Rocky Mountains, western United
   States: a continuum computer model, Science, 239, 1501-1507.
Bird, P.
(1989) New finite element techniques for modeling deformation
   histories of continents with stratified temperature-dependent
   rheologies, J. Geophys. Res., 94, 3967-3990.
Bird, P. (1992) Deformation and uplift of North America in the
   Cenozoic era, in: K. R. Billingsley, H. U. Brown, III, and
   E. Derohanes (Ed.), Scientific Excellence in Supercomputing: The
   IBM 1990 Contest Prize Papers, 1, Baldwin Press, Athens, Georgia,
   67-105.
Bird, P., and X. Kong (1994) Computer simulations of California
   tectonics confirm very low strength of major faults, Geol. Soc.
   Am. Bull., 106, 159-174.
Bird, P. (1996) Computer simulations of Alaskan neotectonics,
   Tectonics, v. 15, no. 2, p. 225-236.

Directory PATH listing for Volume PLATES
(Note: I have not been consistent about upper-case/lower-case!)
-------------------------------------------------------------
\: (the "root" directory; currently "\neotec\plates")
|   READ_ME.9     (this file; different from READ-ME.1, ...)
|   FillGrid.exe  (Win32/Win95/WinNT executable of FillGrid)
|   PLATES2AI.exe (Win32/Win95/WinNT executable of Plates2AI)
|
+---SOURCE (a directory of source code in Fortran 77 or Fortran 90)
|   FillGrid.f     (utility to assign data to nodes of an .feg file)
|   PLATES.FOR     (the finite element program)
|   PLOTPLATES.FOR (the graphics program for Versatec plotting)
|   Plates2AI.F90  (the graphics program for Adobe Illustrator
|                   PostScript)
|   STRLL2XY.for   (utility to convert stress data from lon,lat
|                   to x,y)
|   STRXY2IS.for   (utility to convert stress data from x,y to
|                   element coordinates)
|   GEOXY2IS.for   (utility to convert geodetic data from x,y to
|                   element coordinates)
|   SCORE_AK.for   (program to score model results against data)
|
\---WORK (a directory of sample data files)
    AK795.feg      (a grid of Alaska, from DRAWGRID)
    AKbc795.bcs    (boundary conditions for this grid)
    BScoastx.dig   (a basemap file of Alaska, from DIGITISE)
    Fake_Q_f.dig   (digitized contours of the heat-flow map of Alaska)
    geodesy_xy.dat (geodetic data from Alaska, in x,y coordinates)
    geodesy_feis795.dat  (geodetic data from Alaska, in element
                          coordinates)
    inAK9801.in          (parameter values and plot-control switches)
    LanModel.ai          ("picture frame" generic PostScript file)
    Plates2AI.exe        (Win32/Win95/WinNT executable of Plates2AI)
    stresses_xy.dat      (stress data from Alaska, in x,y coordinates)
    stresses_feis795.dat (stress data from Alaska, in element
                          coordinates)
    sAK9549.out          (score report on Alaska model 95-49)
    tAK9801.out          (text output from a run of PLATES: Alaska
                          model 98-01)
    vAK9801.out          (node-velocity output from a run of PLATES:
                          Alaska model 98-01)

SUGGESTED STEPS IN A MODELING PROJECT BASED ON -= PLATES =-:
------------------------------------------------------------

1. COMPILING THE PROGRAMS

These programs will not run under DOS because of its 640K memory
limit.  They can be run under OS/2, Windows 95, or Windows NT on a
fast 80486DX2 or Pentium PC (using 32-bit Fortran77 and/or F90
compilers).  (The executable files FillGrid.exe and Plates2AI.exe
require 32-bit Windows, but the source code does not.)
   My experience is that PLATES takes about 8 minutes to converge with
the sample grid given (1500 nodes) when run on an IBM or Cray vector
computer.  The graphics program needs about 20 seconds to warm up and
then 5-15 seconds per plot after that.  The times for PLATES will rise
approximately as the square of the number of nodes.  Therefore, you
might want a mainframe or a fast workstation.  (Memory requirements
are not as serious; I think you can work with a grid of 1000 nodes in
5 Megabytes.)
   Because of this, both PLATES and PLOTPLATES are written as
batch-job programs which take all input from files (actually, from
Fortran I/O device numbers, which you assign to files) and leave all
output in files.  There is no interaction.  The nodal-data utility
FillGrid is for interactive use on any system.
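How you "preconnect" a file to a Fortran unit number is
system-specific.  As one hedged sketch (not from the original
distribution): many Unix-style Fortran runtimes, gfortran for
example, connect unit N by default to a file named fort.N in the
working directory, so links are enough.  The stand-in files below
are created only so the sketch is self-contained; use the real
WORK-directory files in practice.

```shell
# Sketch only: assumes a Fortran runtime that preconnects unit N to
# the file fort.N (true of gfortran and many Unix compilers).
printf 'grid\n'   > AK795.feg      # stand-in for the real grid file
printf 'params\n' > inAK9801.in    # stand-in for the parameter file
ln -sf AK795.feg   fort.1          # finite element grid -> unit 1
ln -sf inAK9801.in fort.4          # parameter file      -> unit 4
# Text output (unit 6) can simply be redirected at run time:
#   ./plates > tAK9801.out
```

On IBM mainframes and other systems, consult your compiler manual for
the equivalent of preconnection (JCL DD statements, runtime options,
or environment variables).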
Plates2AI can be run as a batch job (if you preconnect the input and
output files to the Fortran device numbers), or it can be run as an
interactive program which will prompt you for the names of needed
files.  (The interaction is text-mode only, for maximum portability.)
   To ease the job of porting this code to different machines, I have
tried to use mostly standard Fortran77.  One place where I have
cheated is in PLOTPLATES, where I used "internal writes" to convert
floating-point numbers to text strings without the bother of writing
them to an external file.  If your system doesn't allow this, just
write the numbers to a disk file using a FORMAT like (F8.3), and read
them back with a format like (A8).  Also, FillGrid.f mixes new
Fortran 90 code with inherited Fortran 77 code in fixed (72-column)
format, so it needs a Fortran 90 compiler if you compile it for some
system other than 32-bit Windows.  The recently-added graphics program
Plates2AI.F90 is in Fortran 90, so for this you also need a Fortran 90
compiler if you recompile it for some system other than 32-bit
Windows.
   If you read my code, you will quickly discover that I use up to
four dummy parameters in subroutine calls: INPUT, OUTPUT, MODIFY, and
WORK.  These are never assigned any values or used; they are just
place-holders to identify the groups of parameters which follow.
INPUT-class parameters must have a value before the call, and are
used, but not changed.  OUTPUT-class parameters are computed by the
subprogram, overwriting any initial value.  MODIFY-class parameters
must have a tentative value before the call, but the subprogram may
change this value.  WORK-class parameters do not have useful values
either before or after the call, but merely provide necessary
workspace for the subprogram.  This workspace is dimensioned in the
main program, so that PARAMETER statements may be used to set all the
dimensions consistently.
If you are unsure what values to assign to the integer array
dimensions in the PARAMETER statements, then just set them to 1,
compile PLATES (or PLOTPLATES), and try to run it.  The program will
detect the problem, and notify you how large the parameter needs to
be.  Then you can change the PARAMETER statement, recompile, and try
again.  I wish there were a better way to do this in Fortran 77!  (If
you want to convert the programs to Fortran 90, you could use dynamic
memory allocation with ALLOCATE and DEALLOCATE.)
   Since I have been working on an IBM mainframe, normal
floating-point numbers have 4-byte (32-bit) precision, and DOUBLE
PRECISION numbers have 8-byte (64-bit) precision.  If you move to a
system where the default is 64-bit precision, then by all means you
should change the DOUBLE PRECISION statements to REAL statements.
   At the heart of each program is a solver of linear systems.  Since
this needs to be both accurate and fast, I didn't write my own.
PLATES uses routines from IBM's Engineering Sciences Subroutine
Library (ESSL), which are hand-optimized for the IBM vector
processors.  In Plates2AI, I used an IMSL routine, LSLPB, from the
Microsoft version of the IMSL library that is supplied with MS
Fortran PowerStation 4.0 Pro.  You should search out the best solver
for banded real linear systems on your particular platform, and
substitute it.  The routines to be replaced are DGBF and DGBS, called
from PLATES (8-byte factor and solve of a banded real linear system,
respectively); SPBF and SPBS, called from PLOTPLATES (4-byte factor
and solve of a banded real linear system, respectively); and LSLPB,
called from Plates2AI.  In searching out replacements, bear in mind
that the linear systems in PLATES do *NOT* all have symmetric
coefficient matrices.
   Now, you will quickly discover that each solver for banded linear
systems assumes a different idiosyncratic storage scheme for the
coefficient matrix!  Therefore, I have made this easy to change.
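To make the storage issue concrete, here is one common convention:
LAPACK-style general band storage, in which element (i,j) of a matrix
with kl subdiagonals and ku superdiagonals is kept at row ku+i-j
(0-based) of a packed (kl+ku+1) x n array.  This is only an
illustration of the kind of index mapping involved; it is not the
scheme used by ESSL or IMSL, and the function name below is invented.

```python
import numpy as np

def band_index(i, j, kl, ku):
    """Row in the packed array where dense element (i, j) lives,
    under LAPACK-style general band storage (0-based indexing).
    Valid only when j - ku <= i <= j + kl."""
    return ku + i - j  # packed[ku + i - j, j] == dense[i, j]

# Pack a small banded matrix and verify the mapping by unpacking it.
n, kl, ku = 5, 1, 2
rng = np.random.default_rng(0)
dense = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - kl), min(n, i + ku + 1)):
        dense[i, j] = rng.standard_normal()

packed = np.zeros((kl + ku + 1, n))
for i in range(n):
    for j in range(max(0, i - kl), min(n, i + ku + 1)):
        packed[band_index(i, j, kl, ku), j] = dense[i, j]

unpacked = np.zeros_like(dense)
for j in range(n):
    for i in range(max(0, j - ku), min(n, j + kl + 1)):
        unpacked[i, j] = packed[band_index(i, j, kl, ku), j]

assert np.allclose(unpacked, dense)
```

Switching solvers then amounts to rewriting one small index function
of this kind, which is exactly the role the statement function in
PLATES plays.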
Instead of letting Fortran compute the storage location of a
particular element (particular row, particular column) of the
coefficient matrix, I have done it myself with a statement function,
INDEXK.  You only need to change this statement function when you
change solvers.  (Yes, it would be more modular to make INDEXK a
subprogram function instead of a statement function, because then it
would only have to be changed in one place.  Unfortunately, that is
not only slower because of the many calls performed, but doubly
slower because it blocks vectorization of any loops from which it is
called.)

2. DIGITIZE YOUR BASEMAP

Choose a standard basemap for your problem area; one that shows all
active faults, at least.  The type of map projection is not critical
as long as the local distortion is small.  (Most of the calculations
done by PLATES and its associated programs will be in the 2-D (x,y)
space of this map plane, and will not take any account of the
curvature of the Earth.)  You can use my program DIGITIZE (provided
in another directory of this FTP site), or use your own digitizing
program and then reformat the results to look like the output
produced by DIGITIZE (see Read_me.6 for details).  Assuming that you
will use the SI (mks) metric system, the units of your digitized
points should be meters (actual feature size, not reduced size on the
printed map).  [Of course, if you are only doing idealized problems
on a "generic" flat planet, it may be possible to skip this step
entirely.]

3. CREATE A FINITE-ELEMENT GRID (.feg file) WITH DrawGrid

See the instructions (Read_Me.7) for the DrawGrid program in another
directory of this FTP site.  Start DrawGrid.exe in DOS (or a system
like Windows which emulates DOS), and load in the digitized basemap
(.dig file) that you created in the previous step.  Then, you will
probably use DrawGrid commands to overlay the region with a uniform
grid of squares or hexagons as a first step.
You can then pull, stretch, cut, and replace elements until you have
them where you want them.  (In particular, element sides should
overlie all active faults.)  Then, still in DrawGrid, you can Cut
these element sides to convert them to fault elements, adjust the
strike of the faults, and set their dips.  Use the Save command
frequently, as there is no Undo command!  Also, use the Perimeter
command when finished, to check for any topological errors in the
grid.

4. RENUMBER THE NODES FOR MINIMUM BANDWIDTH

This step is accomplished by program NUMBER, described in Read_Me.7
in the DrawGrid directory of this FTP site.  While technically this
step is optional, it will result in AT LEAST ONE ORDER OF MAGNITUDE
reduction in the memory and execution-time requirements when you come
to run PLATES!

5. ASSIGN NODAL DATA TO EACH NODE IN THE .feg GRID FILE

If you look into the .feg file created by DrawGrid (and renumbered by
NUMBER), you will see that a typical node is described by a line
like:

   3 -1.500E+05 -8.660E+04 +0.00E+00 +0.00E+00 +0.00E+00 +0.00E+00

where the first number is the node number, and the next two are
(x,y).  Or, if the node is on the mid-point of a straight element
side, the (x,y) coordinates may be (0.0,0.0) [a special code], as in:

   4        0.0        0.0 +0.00E+00 +0.00E+00 +0.00E+00 +0.00E+00

In either case, the last 4 numbers on the line are all zeros.  These
need to be replaced with:
   * elevation above(+) or below(-) sea level, in meters;
   * heat-flow, in Watts/square-meter (not milli-Watts!);
   * thickness of the crust, in meters (omitting oceans);
   * thickness of the mantle lithosphere layer, in meters
     (= total lithosphere thickness minus crustal thickness).
For example, if node 3 is in Tibet, the line might be changed to:

   3 -1.500E+05 -8.660E+04 +5.12E+03 +7.05E-02 +7.52E+04 +1.23E+04

to represent a total lithosphere thickness of 87.5 km, of which
75.2 km is crust, which floats isostatically at 5.12 km elevation,
with heat-flow of 0.0705 W/m^2 (or 70.5 mW/m^2, which is not SI).
   The tricky thing is that these 4 numbers are not independent, but
are subject to two constraints:
   -> The temperature at the base of the mantle lithosphere should be
      constant everywhere; the base of the lithosphere is an
      isothermal surface (unless part of the model rests on a
      subducting slab (cold) or on a mantle plume (hot)).
   -> The density structure at each node should be isostatically
      balanced with a normal mid-ocean spreading rise at elevation
      -2700 m (unless the model is locally pulled down or up by a
      subducting slab or a mantle plume).
In general, this means that only two of the values can be chosen
independently, and the other two must be computed.  Except in special
degenerate cases (zero thermal expansion coefficient, and/or zero
radioactive heat production), the calculation will be nonlinear and
will require iteration.  My choice is to specify the maps of
elevation and of heat-flow, and compute the layer thicknesses from
them.  This has always turned out well AS LONG AS the heat-flow is
not too small at any node: 0.040 W/m^2 is a suggested minimum.
   The best way to supply these data for each node is to run my
utility program FillGrid.  (FillGrid.exe is an executable for
Win32/Win95/Win98/WinNT; or recompile FillGrid.f with a Fortran 90
compiler for other systems, such as Unix.)  This program takes its
data from gridded-data (.grd) files of elevation and heat-flow.  You
must prepare these .grd files, which have values evenly-spaced on
grids of longitude/latitude (e.g., like the ETOPO5 dataset of
5'-gridded topography of Earth).  FillGrid will print a few lines of
comments on the format it needs.
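The two constraints above can be illustrated with a drastically
simplified sketch.  This is NOT FillGrid's actual algorithm: it
ignores radioactive heat production (so the geotherm is linear),
and the conductivity, densities, and expansion coefficient are
assumed round values chosen here for illustration only.

```python
# Simplified sketch of the nonlinear balancing (NOT FillGrid itself).
# Assumptions: linear conductive geotherm, conductivity 3.0 W/m/K,
# basal temperature 1250 C, thermal expansion 3.0e-5 /K, reference
# densities 2800 (crust), 3330 (mantle), 1030 (seawater); ridge
# column = 2700 m of water over asthenosphere-density mantle.
K, T_BASE, ALPHA = 3.0, 1250.0, 3.0e-5
RHO_C0, RHO_M0, RHO_W = 2800.0, 3330.0, 1030.0
RHO_A = RHO_M0 * (1.0 - ALPHA * T_BASE)   # asthenosphere density

def layer_thicknesses(elev_m, q_w_m2, tol=0.1):
    """Return (crust, mantle-lithosphere) thicknesses in m giving an
    isothermal base and isostasy with the mid-ocean-rise column."""
    h_total = K * T_BASE / q_w_m2          # isothermal-base constraint
    grad = q_w_m2 / K                      # linear geotherm, K/m
    hc = 0.4 * h_total                     # initial guess for crust
    for _ in range(100):
        t_moho = grad * hc
        rho_c = RHO_C0 * (1.0 - ALPHA * 0.5 * t_moho)
        rho_m = RHO_M0 * (1.0 - ALPHA * 0.5 * (t_moho + T_BASE))
        # Equal pressure at the column base vs. the ridge column:
        hc_new = (RHO_W * 2700.0
                  + RHO_A * (h_total - elev_m - 2700.0)
                  - rho_m * h_total) / (rho_c - rho_m)
        if abs(hc_new - hc) < tol:
            return hc_new, h_total - hc_new
        hc = hc_new
    raise RuntimeError("no convergence")

hc, hm = layer_thicknesses(500.0, 0.060)   # 500 m elevation, 60 mW/m^2
```

Because the layer densities depend on the temperatures, which depend
on the layer thicknesses, the fixed-point iteration above is the
simplest form of the iteration the text refers to; FillGrid solves a
richer version of the same problem.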
FillGrid then uses these data to compute thicknesses of the crust and
mantle lithosphere for each node.  If there are no errors (see the
log file for details), then the resulting modified .feg file should
give a model which is everywhere isostatic with a mid-ocean rise, and
which has an isothermal surface at the base of the lithosphere.
   When you first begin to experiment with PLATES, you may want to
run some simple cases for uniform plates.  Here is an easy way to
create such a uniform model.  Let us assume that your model will fit
entirely within the region 32~44N, 126~110W (or, as we enter it:
-126~-110E).  You could create a dummy topographic .grd file like
this:

   -126  16  -110     (lon_min, d_lon, lon_max)
     32  12    44     (lat_min, d_lat, lat_max)
    100 100           (4 identical entries of 100 m elevation)
    100 100

And you could create a dummy heat-flow .grd file like this:

   -126  16  -110     (lon_min, d_lon, lon_max)
     32  12    44     (lat_min, d_lat, lat_max)
   0.06 0.06          (4 identical entries of 0.060 W/m^2,
   0.06 0.06           equal to 60 mW/m^2)

and you could supply these two files to FillGrid when prompted.
FillGrid will then compute the (uniform) crustal and
mantle-lithosphere thicknesses appropriate for 100 m elevation and
0.060 W/m^2 heat flow, and apply them to every node.
   If you do NOT want your model to be isostatic and isothermal
across its basal surface (e.g., if part of it is a thin forearc wedge
which rests on a subducting slab which is part of another plate),
then you can manually modify these nodal values of crustal thickness
and mantle-lithosphere thickness, and not worry that this causes
departures from isostasy and isothermy at the base of the model!

6. PLOT THE NODAL INPUT DATA TO CHECK FOR ERRORS

Programs PLOTPLATES (graphics output on a Versatec plotter) and
Plates2AI (graphics output as Adobe Illustrator files) can be used to
plot and check the nodal data before conducting any dynamical
simulations with PLATES.
There are 8 plot types that you can display before computing a
dynamic model with PLATES, and it is highly recommended that you use
these to check your results from step (5) above.
   In PLOTPLATES, the actual graphics are produced by calls to a
library of Fortran77-compatible subprograms for the Versatec Spectrum
11" electrostatic plotter.  This does reasonably good work in either
grey-and-white or color (the former for publication, the latter for
slides), and does it quickly and cheaply.  If you have no access to
such a device, you could supply dummy subroutines to divert these
calls and make them do what you want.  You might want to put your
graphics on a monitor, for instance.  Whatever you do, you will have
to know what the parameters in my calls to the Versatec routines are
supposed to represent.  One source of information is the "Versaplot
Random Enhanced Programmer's Reference Guide", from Versatec, 2710
Walsh Avenue, Santa Clara, CA 95051 (Form 5770-02).
   Another (usually better) option for graphics output is Plates2AI.
This is a program that I wrote in Fortran 90 to convert
finite-element input and output files to maps expressed as ASCII text
files in the Adobe Illustrator dialect of the PostScript graphics
language.  I supply both source code (Plates2AI.f90) and an
executable (Plates2AI.exe) which can be run on Windows NT or Windows
95 (or probably on OS/2, and possibly on Windows 3.1 if you have
loaded the Win32 subsystem(?)).  The output file(s) from this program
are intended to be read by Adobe Illustrator 4 for Windows 3.1 OR
Adobe Illustrator 7 for Windows 95/NT and MacOS, from which they can
be edited and printed.  The editing option is especially nice: you
can hide or emphasize groups of lines, change titles, add annotations
and arrows, shrink maps and combine them, or redefine the custom
colors to shades that you find more pleasing.
In particular, you can redefine the custom color "foreground" from
black to white and the custom color "background" from white to black,
resulting in very nice slide copy (white letters + colors on black).
   (While running Plates2AI, you will notice that it must read a file
called LanModel.ai.  I provide this.  It is a generic PostScript file
containing Adobe Illustrator macros, custom colors, and custom
shading patterns (but no actual plot).  You can edit this file with
Adobe Illustrator, but I would not advise it unless you really
understand AI.)
   Whichever graphics program you choose, notice that most of the
input values controlling the graphics are in the model parameter file
(e.g., sample inAK9801.in provided), FOLLOWING the lines read by
PLATES.  For each possible plot, there is one line.  The leading T/F
indicates whether that plot is to be produced, and this will affect
which input files are required.  (The first 9 plots can be produced
using only an .feg file with nodal data; the latter 7 plot types
require that you have already run PLATES and have a file of nodal
velocities to input.)  The second item is the contour interval, if
you want to specify it (leaving this blank means the program will
choose).  The third item is the variable value that should be
associated with the central color (or gray shade) of the color bar;
if you do not want to choose this, leave it blank and the program
will decide.  The 4th item is an integer (+1 or -1) which controls
whether dark/blue colors or light/red colors are associated with
lower variable values.  DO NOT CHANGE THE COLUMNS IN WHICH THESE
(T/F, +1/-1) ENTRIES APPEAR!
   Following the lines for each plot type are choices about pen
weights, the location of the center of the map window, the map scale,
and whether the plot is to be black/white/gray (for publication and
laser printers) or colored (for slide-makers and/or inkjet printers).
Notice that after the graphics block, there are 4 more lines read by
Plates2AI which define the conic projection that was used to
"flatten" the Earth into the (x,y) model plane.  These lines are not
read by either PLATES or PLOTPLATES, and their ONLY function is to
allow Plates2AI to put latitude/longitude fiducial lines on the maps.
Therefore, you do not need to be concerned about great accuracy.  (If
you used some map projection other than conic, you can probably still
find an equivalent conic projection that will be close enough for
this purpose.  If you can't, then just use Adobe Illustrator to
select and delete the meridians and parallels on the maps, since they
are not accurate.)
   Two plots which you should always create and check are
"temperature at the base of the plate" and "pressure anomaly at the
base of the plate".  If you used FillGrid to create your nodal data,
then ideally the temperature should be a uniform value (1250 C?), and
the pressure anomaly should be of small magnitude (a few MPa or
less).  However, there are various reasons why FillGrid could fail to
converge on an accurate solution, and if so these plots will show the
problem.  (Also consult the .log file that was created by FillGrid.)
If you edited the nodal data by hand after running FillGrid, it is
especially important to check that the basal temperature and basal
pressure anomaly are reasonable.

7. PROVIDE BOUNDARY CONDITIONS

Run the finite element program PLATES to get a list of boundary
nodes.  These will require boundary conditions.  Use your compiled
version of PLATES from step (1) above.  Preconnect Fortran unit 1 to
the finite element grid (AK795.feg, or equivalent).  Leave Fortran
units 2 and 3 unconnected, or connect them to dummy files which are
empty.  Preconnect Fortran unit 4 to the input parameter file
(inAK9801.in, or equivalent).  Provide for text output (up to 132
characters/line) from Fortran unit 6.
(Fortran unit 7 also nominally produces output, but this time the
program will not get that far.)  Now, look at the text output from
Fortran unit 6.  The finite element grid is read and echoed.  Next,
the input parameters are read.  Then, the program checks the grid for
topological errors.  If none are found, the program looks for a file
of boundary conditions.  Since you have not provided one, it will
print a list of nodes requiring boundary conditions, and stop.  What
you should do is use a text editor to extract this last table from
the output and make it the skeleton of your boundary-conditions input
file (using AKbc795.bcs as a model).
   Select boundary conditions for each node where required.  The step
above describes how to obtain a list of nodes which require boundary
conditions.  Each row should have a boundary sequence number in
column 1, and the node number (from your .feg file) in column 2.
This list should proceed in counterclockwise order around the
boundary.
   You need to be aware of some subtleties of the topology.  First,
if you have placed fault elements along the perimeter of your model,
then the boundary nodes are the ones OUTSIDE the fault, and they
belong to the ADJACENT plate, not the one whose volume you are
modeling.  Therefore, assign them the velocities of the neighboring
plate(s).  Second, wherever there are faults along the boundary,
there is the possibility that they connect in triple-junctions to
additional faults which are outside the perimeter of the model.  That
is, at every end of every boundary fault element, there is a chance
of a change in the name and Euler pole of the neighboring plate.  For
this reason, the grid will have TWO DISTINCT nodes on the outside of
the boundary at each point where two boundary faults meet.  Both
require boundary conditions.  The first one listed belongs to the
boundary fault which comes first as you go counterclockwise around
the boundary.
(To summarize: if the boundary of your model is ringed with fault
elements, then the boundary velocities you impose will be those of
adjacent plates, not the plate(s) within the model.  If there is a
triple junction of plates, you will have to specify the velocities of
both neighboring plates at that point, using two boundary nodes at
the same nominal location.  If there is no triple junction, then you
will have to specify the velocity of the neighboring plate
redundantly, twice.)
   For each node in the list, choose an integer index 0, 1, 2, or 3:

   0 = "free".  This part of the boundary is subject only to
       lithostatic normal traction, with no shear traction.  The
       amount of lithostatic pressure is computed from the layer
       thicknesses and geotherm just inside the boundary, as if the
       same structure continued indefinitely outside the boundary
       (but the rocks outside the boundary were very weak).  No
       additional data are needed following code "0".

   1 = "1-component".  This code is followed by a velocity magnitude
       (in SI, this would be in m/s) and an argument (in degrees
       COUNTERclockwise from the +x axis); both are floating-point
       numbers in free format, separated by space(s) or one comma.
       The meaning is that the component of velocity along the
       specified argument will be fixed at the specified value, but
       the other component will be left free.

   2 = "2-component".  This code is followed by a velocity magnitude
       (in SI, this would be in m/s) and an argument (in degrees
       COUNTERclockwise from the +x axis); both are floating-point
       numbers in free format, separated by space(s) or one comma.
       The meaning is that the component of velocity along the
       specified argument will be fixed at the specified value, while
       the other component will be fixed at zero.

   3 = "computed".  The node is to have a type-2 (two-component)
       velocity boundary condition, but PLATES should compute the
       velocity and argument, using the code in SUBROUTINE SIDES.
       Therefore, no additional data follow the code of "3".
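Since the velocity magnitudes above are entered in m/s when you work
in SI, while plate rates in the literature are almost always quoted
in mm/a, a unit conversion is usually the first step.  A minimal
sketch (the Julian-year length used here is an assumption; the
original programs may use a slightly different year):

```python
SECONDS_PER_YEAR = 365.25 * 24.0 * 3600.0   # Julian year, s

def mm_per_year_to_m_per_s(v_mm_a):
    """Convert a plate rate in mm/a to the m/s expected in SI."""
    return v_mm_a * 1.0e-3 / SECONDS_PER_YEAR

# e.g., a neighboring plate moving at 30 mm/a would be entered with
# a magnitude of about 9.5e-10 m/s (the argument, measured CCW from
# +x, depends on how your map projection is oriented).
v = mm_per_year_to_m_per_s(30.0)
```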
WARNING: The current code of SIDES is specific to Alaska, and will
need to be replaced before modeling other areas!!!
   Type this code (and real-number velocity and argument, if needed)
after the sequence number and the node number.  To complete the
boundary-conditions file, add an initial title line, which will be
copied to certain output files to identify the boundary-condition set
in use.  Consult AKbc795.bcs as an example of correct format.

8. DYNAMICAL EXPERIMENTS WITH -= PLATES =-

Before you begin to work with PLATES, be sure to take advantage of
all the documentation.  Read Bird (1989) for a general foundation in
the finite element techniques.  Then, read the Appendix in Bird and
Kong (1994) to learn how faults are modeled.  You will also find some
useful comments there on convergence, and how it is affected by the
input parameter OKDELV (in sample file inAK9801.in).  Also read Bird
(1996) for an example of an application of PLATES, to Alaska.  Then,
read the many comment lines at the beginning of PLATES.  You will
find explanations of the PARAMETER statements, the I/O device
numbers, and a complete glossary of variables used in the main
program.  Finally, look at the sample text output file (tAK9801.out),
where comments are added to clarify the meaning of many input
variables.
   The units in most of my files are SI.  I suggest that you use the
same.  My programs are almost entirely free from units, until the
very last steps of PLOTPLATES and Plates2AI, where m/s velocities are
converted to mm/year, and dimensions in m are converted to dimensions
in km.  If you don't mind recoding the legends of the output plots,
you are free to use cgs or English or any other consistent set of
units.
   Once you actually begin modeling a region of your own, you will
need to replace subroutine BOTTOM of PLATES, which contains
problem-specific estimates of the horizontal components of flow at
the base of the lithosphere.
Be sure to also replace the same subprogram in PLOTPLATES and
Plates2AI, or else your plots of basal drag will be wrong (although
everything else in the solutions will be correct)!

9. PLOTTING THE RESULTS OF SIMULATIONS

See the comments above (#6) about using PLOTPLATES or Plates2AI to
produce graphical plots.  Once you have run PLATES, you will have a
new file of the velocities of the nodes (like file vAK9801.out,
provided as an example for use with AK795.feg and inAK9801.in).  This
will mean that you are now able to create the full range of 16 plots,
not just the 8 which were available previously.

10. SCORING THE OUTPUT OF DYNAMIC MODELS

Some users of PLATES may only wish to compute "generic" models of
idealized cases.  In this case, scoring is not necessary.  However,
if the model is customized to a particular part of the Earth (or
other planet), it is very useful to determine, objectively and
numerically, whether one model is better than another, and whether
any model fits the data acceptably well.  Such scoring can involve
four different methods:

(a) Long-term-average fault slip rates from scarp studies, trenching,
or offset features can be compared to the slip rates on the model
faults.  In this method, the lack of earthquake cycles on the model
faults is not a problem, so long as the offset geologic features have
been offset in at least several earthquakes, not just one!

(b) Principal stress (or principal strain-rate) directions from the
model can be compared to data from boreholes, seismic fault-plane
solutions, dikes, etc. (e.g., the World Stress Map dataset of
M. L. Zoback and others).  In this method, there could be a small
problem if stress directions vary during the seismic cycle, because
the model does not include this effect, and most data are collected
at an unknown time (phase) in the seismic cycle.  One merely hopes
that this is a source of random mismatch and not systematic error.
(c) Length-rates of baselines between geodetic benchmarks can be compared to the predicted relative velocities in the model. In this method, there is a very large correction for the seismic cycle, which exists in nature but not in the model. Unless a geodetic survey "captures" a slip event on a particular fault, it is likely to show a smooth (error-function) variation of velocity in any profile across the fault. The model, however, will show a velocity discontinuity at the fault. The solution is to correct the model prior to comparing it with the geodetic data. Analytic solutions for rectangular dislocation patches in an elastic half-space can be summed to find the change in the model velocities of benchmarks due to temporary locking of the shallow parts of faults (above the brittle/ductile transition). For each fault, one finds the brittle/ductile transition depth reported by subroutine MOHR in PLATES, and uses this as the lower boundary of a dislocation patch whose slip rate is just the negative of the fault slip rate predicted by the finite element model. Adding this dislocation solution to the F-E model APPROXIMATELY converts the F-E model to a time-dependent visco-ELASTIC solution which can be compared to geodetic data.

(d) Maps of scalar strain-rate derived from seismicity can be compared to maps of scalar strain-rate derived from the model. On the model side, it is necessary to somehow smooth the infinite strain-rates within the model faults to a finite distribution which has the same integral. On the seismic-data side, it is necessary to convert magnitudes to moments, then convert moments to strain-rates, and finally to smooth these strain-rates by some kind of spatial averaging.

It is probably best if each user of PLATES writes his/her own scoring program, because the available data does not come in any standard format, and often some special-purpose code must be used to correct for its limitations.
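The locking correction described in method (c) is easiest to see in the two-dimensional limit of an infinitely long strike-slip fault, where the sum of rectangular dislocation patches reduces to the classic arctangent profile of Savage and Burford (1973). The sketch below assumes that simplified 2-D geometry (the actual correction sums 3-D rectangular patches, and these function names are my own):

```python
import math

def interseismic_velocity(x, slip_rate, locking_depth):
    """Fault-parallel surface velocity at perpendicular distance x (m) from
    an infinitely long strike-slip fault, locked from the surface down to
    locking_depth (m) and slipping at slip_rate below that depth.
    Antisymmetric in x; approaches +/- slip_rate/2 far from the fault."""
    return (slip_rate / math.pi) * math.atan(x / locking_depth)

def locking_correction(x, slip_rate, locking_depth):
    """Velocity to ADD to the long-term F-E prediction (a step of
    +/- slip_rate/2 across the fault) so that the corrected model
    approximates what geodesy sees between slip events."""
    long_term = 0.5 * slip_rate if x > 0 else -0.5 * slip_rate
    return interseismic_velocity(x, slip_rate, locking_depth) - long_term
```

In this analogy, locking_depth plays the role of the brittle/ductile transition depth reported by subroutine MOHR, and slip_rate is the fault slip rate predicted by the finite element model; note that the correction decays to zero far from the fault, so only near-fault baselines are strongly affected.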
However, for users who would like to see how I have scored my Alaska models, I have included all the necessary programs and datasets. My scoring made use of methods (a), (b), and (c) above. If similar data are available in your area, you may be able to reformat it like mine and use the scoring programs ALMOST unchanged. (However, be sure to scan for the word "Alaska" in the comments of the Fortran code, because it marks a few places where very specific features of the Alaskan datasets, like the position of the Aleutian trench, were built into my scoring program!) The scoring program that I used is SCORE_AK.for (Fortran 77), and it produced the numbers for Table 2 of Bird (1996).

The table of fault slip rates is Slip795.dat, which corresponds to Table 1 of Bird (1996). There is one line for each place where a long-term-average fault slip rate has been determined.
Column 1 is the fault element number (integer).
Column 2 is the relative position of the datum within the fault element, expressed as a dimensionless fraction of the distance along the fault toward the node named in column 3. (See the .feg file, where the fault element is defined, to see the local node numbers.)
Column 4 is the minimum right-lateral rate, in mm/a.
Column 5 is the maximum right-lateral rate, in mm/a. (Left-lateral slip is negative right-lateral slip, so a left-lateral rate of 1 to 2.5 mm/a would be expressed as " -2.5 -1.0 " in columns 4 and 5.)
Column 6 is the minimum thrusting rate, in mm/a.
Column 7 is the maximum thrusting rate, in mm/a. (Thrusting rates are expressed as throw rates, = the difference between the vertical velocities of the two sides of the fault. Rates of normal faulting are entered as negative rates of thrusting. Note that rates of dip-slip entered for a vertical strike-slip fault element will simply be ignored.)
Column 8 is a text label to identify the fault.
Column 9 is a text label to identify the bibliographic reference.
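One plausible way to turn the (minimum, maximum) rate brackets of columns 4-7 into a numerical prediction error is sketched below. This is a hedged illustration of the general idea, not the actual logic of SCORE_AK.for:

```python
def rate_error(model_rate, rate_min, rate_max):
    """Prediction error (mm/a) for one slip-rate datum: zero if the model
    rate falls inside the allowed [rate_min, rate_max] bracket, otherwise
    the distance to the nearer bound.  Left-lateral and normal senses are
    carried as negative numbers, as in Slip795.dat."""
    if model_rate < rate_min:
        return rate_min - model_rate
    if model_rate > rate_max:
        return model_rate - rate_max
    return 0.0

# A model predicting 5 mm/a right-lateral slip on a fault whose datum
# bracket is left-lateral 1 to 2.5 mm/a (stored as -2.5 to -1.0) is
# charged a 6 mm/a error; a model inside the bracket is charged nothing.
```

Averaging such errors over all data (perhaps weighted by datum quality) then gives a single score per model run.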
If one knows that a fault is "active" with a certain sense, but does not know the numerical rate, it is still possible to enter a minimum rate of 0.0001 mm/a (for right-lateral and thrust faults), or to enter a maximum rate of -0.001 mm/a (for left-lateral and normal faults).

The dataset containing the stress directions, used in scoring method (b), is Stresses_feis795.dat. It is derived from the dataset of Zoback and Zoback (1989, Geol. Soc. Am. Mem. 172).
Column 1 is the datum identifier of Zoback and Zoback.
Column 2 is the number of the finite element it is in. [ This may change whenever you edit the .feg file! ]
Column 3 and Column 4 give the internal coordinates S1 and S2 within that element; there is a third internal coordinate S3, but it is redundant because S1 + S2 + S3 = 1 by definition (as in a ternary diagram). Internal coordinate S1 has value 1 at node #1, decreasing to 0 on the opposite side. Internal coordinate S2 has value 1 at node #2, decreasing to 0 on the opposite side. (Note that #1, #2, #3 are LOCAL node numbers within the element; for the conversion to global node numbers, see how the elements were defined in the .feg file.)
[ Because manual calculation of these internal coordinates is very difficult, I provide utility program STRXY2IS.for, which will read a similar dataset (with stress locations given as x,y in meters) and compute the finite element number and internal coordinates. I also provide a utility program, STRLL2XY.for, which converts these data from longitude,latitude coordinates to x,y coordinates. It is important to check a few points carefully after each coordinate change! ]
Column 5 is the quality index of Zoback and Zoback, from A (best) to E (worst).
Column 6 holds an abbreviation of the stress-measurement method.
Column 7 is the azimuth of the most-compressive horizontal principal stress (sigma1h), expressed in degrees COUNTER-clockwise from the +x axis of the Cartesian x,y system.
[ Note that in the precursor file stresses_xy.dat, this was also true. However, in the original data file of Zoback and Zoback, the azimuth was in degrees clockwise from North; the change occurred when running utility program STRLL2XY.for. ]
Column 8 is the stress regime (strike-slip, thrusting, etc.).
Column 9 is the depth in km (usually 0.).
Column 10 is the bibliographic citation for the datum.

The dataset containing the geodetic benchmark locations and baseline length rates, used for scoring method (c), is file geodesy_feis795.dat. Each geodesy data file actually contains two tables:
- nominal benchmark locations (not including small changes);
- rates of change of baselines (with standard deviations).
Within each file, my programs recognize the end of the first table by the line:
===============================

In the first table:
Column 1 (A31) is the benchmark name, which usually begins with the name of the local network it belongs to.
Column 2 is the number of the finite element it sits in. [ This may change whenever you edit the .feg file! ]
Column 3 and Column 4 give the internal coordinates S1 and S2 within that element; there is a third internal coordinate S3, but it is redundant because S1 + S2 + S3 = 1 by definition (as in a ternary diagram). Internal coordinate S1 has value 1 at node #1, decreasing to 0 on the opposite side. Internal coordinate S2 has value 1 at node #2, decreasing to 0 on the opposite side. (Note that #1, #2, #3 are LOCAL node numbers within the element; for the conversion to global node numbers, see how the elements were defined in the .feg file.)
[ Because manual calculation of these internal coordinates is very difficult, I provide utility program GEOXY2IS.for, which will read a similar dataset (with benchmark locations given as x,y in meters) and compute the finite element number and internal coordinates. It is important to check a few points carefully after each coordinate change!
]
===============================

In the second table:
Column 1 is the name of one benchmark defining a baseline;
Column 2 is the name of the other benchmark for that baseline;
Column 3 is the lengthening (extension) rate in mm/a;
Column 4 is the standard deviation of the length rate, in mm/a.

The result of running SCORE_AK.for, using all the data files (a), (b), and (c) above, together with the Alaskan model 98-01 (files AK795.feg, inAK9801.in, and vAK9801.out), is given in text file sAK9801.out.

11. VARY PARAMETERS AND BOUNDARY CONDITIONS TO OPTIMISE THE MODEL.

Experiment with the model, trying to reduce prediction errors. At this point, you have done all the hard work of setting up the software and learning how to use it. For very little extra effort, you can use additional simulation runs to perform comparison experiments, and see how the quality of the simulation is affected by boundary conditions, fault friction coefficient, mantle flow pattern, fault dips and interconnections, etc., etc., etc.!

Use the prediction errors obtained from part (10) above as an objective guide as to which model is more realistic, and try to minimize the sizes of these various prediction errors. Remember, every model (and every parameter set) contains its own systematic errors with respect to a real planet. However, we may reasonably hope that, when we compare two simulations to see the differential effect of one parameter, the effect(s) of these systematic errors will largely cancel! In this way, we hope to get reliable insights about real planets from these imperfect models.

GOOD LUCK!

Peter Bird