Revision

$Id: ftimes-bimvw.base,v 1.30 2007/10/06 17:25:06 klm Exp $

Purpose

This recipe demonstrates how to perform basic integrity monitoring using FTimes and WebJob. FTimes will be used on a periodic basis to take system critical and full snapshots. A WebJob framework will be used to centrally manage the configuration, scheduling, and execution of these snapshots. WebJob clients will periodically download a wrapper script and take a snapshot with FTimes. Then, the results will be uploaded to a WebJob server where they will be tagged, archived, and analyzed.

The WebJob server is the snapshot repository for all clients. This is where change analysis takes place, the results of which can be reported in various ways (e.g., email alerts, web pages, etc.).

Motivation

Taking system snapshots on a regular basis can dramatically reduce the amount of time required to answer the following questions:

- What files and directories have been modified, added, or removed (referred to by FTimes as (C)hanged, (N)ew, and (M)issing)?

- What files are known and can be attributed to a trusted source?

- What files are unknown and must be reviewed?

- How do currently deployed file systems compare to the original file systems (i.e., as deployed or designed)?

- How much drift exists between systems of the same type?

- When were patches, software upgrades, or packages installed?

- What files and directories need to be restored to repair damage done by an intruder or an administrator who made a mistake?

By regularly harvesting and analyzing snapshot data, you'll be able to answer these questions and more.

Requirements

Cooking with this recipe requires an operational WebJob server. If you do not have one of those, refer to the instructions provided in the README.INSTALL file that comes with the source distribution.
The latest source distribution is available here:

    http://sourceforge.net/project/showfiles.php?group_id=40788

Each client must have basic system utilities, FTimes (3.7.0 or higher), and WebJob (1.6.0 or higher) installed. Note that the Windows portion of this recipe was only tested on the Windows XP operating system.

The server must be running UNIX and have basic system utilities, Apache, FTimes (3.7.0 or higher), and WebJob (1.6.0 or higher) installed. If you want to generate encrypted email alerts, the server will need to have GnuPG installed. GnuPG is available here:

    http://www.gnupg.org/

The commands presented throughout this recipe were designed to be executed within a Bourne shell (i.e., sh or bash).

This recipe assumes that you have read and implemented the following recipe for managing jobs on UNIX WebJob clients:

    http://webjob.sourceforge.net/Files/Recipes/webjob-run-periodic.txt

It also assumes you are familiar with configuration overrides and server-side triggers.

Time to Implement

Assuming that you have satisfied all the requirements/prerequisites, this recipe should take approximately 1 hour to implement.

Solution

The following steps take you through the process of configuring your WebJob server to perform Basic Integrity Monitoring Via WebJob -- or BIMVW for short.

1. The full snapshots directory structure (shown below) is used to implement BIMVW. This directory structure is where harvested snapshots will be tagged, archived, and analyzed. Most of this directory structure is created automatically by nph-webjob.cgi through the use of config file overrides. However, some files (e.g., ignore.rules) are optional and will need to be managed manually.

    snapshots
      |
      + <subject>
          |
          + <profile>
              |
              + <date>
              |   |
              |   - <base>.cmp
              |   - <base>.cmp.filtered
              |   - <base>.env
              |   - <base>.err
              |   - <base>.out
              |   - <base>.rdy
              |
              - analysis.log
              - baseline.1st
              - baseline.map
              - compared.cmp
              - compared.cmp.filtered
              - ignore.rules
              - snapshot.map

Each subject directory corresponds to the ID of a WebJob client.
These directories are automatically created and filled with snapshots as they roll in via nph-webjob.cgi. Typically, client IDs and hostnames have a 1:1 relationship, but it is possible to have more than one client ID per host. A profile in the FTimes context is different than a profile in the WebJob context. In the FTimes context, a profile is defined to be the set of files and directories that you wish to monitor. This recipe provides two sample profiles: "all" and "sys". These profiles are implemented as scripts, and the name of each script is the name that will be used by nph-webjob.cgi to create the corresponding profile directory in the above tree structure. For example, the "all" profile directory name will be c_hlc_ftimes_all for UNIX clients (or c_hlc_ftimes_all.bat for Windows clients) because that is the name of the script used to implement that profile. If this is not yet clear, just remember that profile directories (in the FTimes context) are created from the %cmd token in the PutNameFormat. The FTimes profiles provided by this recipe have the following meanings: all A profile for monitoring all system files. This profile is implemented as the c_hlc_ftimes_all wrapper script for UNIX clients and c_hlc_ftimes_all.bat for Windows clients. sys A profile for monitoring system critical files. This profile is implemented as the c_hlc_ftimes_sys wrapper script for UNIX clients and c_hlc_ftimes_sys.bat for Windows clients. You can create additional profiles (e.g., mission critical) by creating your own wrapper scripts. You can also customize the existing wrapper scripts to suit your needs. For example, you may want to alter the Include and Exclude lists, or you may want to enable/disable FTimes compression. Refer to the FTimes man page for details on the various configuration controls that are available. 
    http://ftimes.sourceforge.net/FTimes/ManPage.shtml

As of WebJob 1.7.0, a new type of profile is required to ensure that job directories don't pile up on the client due to dirty jobs. A dirty job is a job that creates files or directories but fails to remove them when done. This, in turn, prevents WebJob from being able to remove its run directory on exit, and since every job gets its own run directory, dirty jobs can be a problem over time. The new "rdm" profile has the following meaning:

rdm - A profile for monitoring the WebJob run directory. RDM is short for Run Directory Monitor. This profile is implemented as the c_hlc_ftimes_rdm wrapper script for UNIX clients. The purpose of this script is to create snapshots and conditionally prune away old job directories. Currently, this profile is not implemented for Windows clients.

2. Appendices 1 and 2 contain sample UNIX wrapper scripts for the "all" and "sys" profiles. Appendices 3 and 4 contain sample Windows wrapper scripts for the "all" and "sys" profiles, and Appendix 8 contains a sample UNIX wrapper script for the "rdm" profile. Extract these files on your WebJob server, copy them to the commands tree, and edit them as needed.

# WEBJOB_BASE_DIR="/var/webjob"
# WEBJOB_COMMANDS="${WEBJOB_BASE_DIR}/profiles/common/commands"

The commands below extract and install the UNIX scripts.

# sed -e '1,/^--- c_hlc_ftimes_all ---$/d; /^--- c_hlc_ftimes_all ---$/,$d' ftimes-bimvw.txt > c_hlc_ftimes_all
# sed -e '1,/^--- c_hlc_ftimes_sys ---$/d; /^--- c_hlc_ftimes_sys ---$/,$d' ftimes-bimvw.txt > c_hlc_ftimes_sys
# sed -e '1,/^--- c_hlc_ftimes_rdm ---$/d; /^--- c_hlc_ftimes_rdm ---$/,$d' ftimes-bimvw.txt > c_hlc_ftimes_rdm
# cp c_hlc_ftimes_{all,rdm,sys} ${WEBJOB_COMMANDS}/
# chmod 644 ${WEBJOB_COMMANDS}/c_hlc_ftimes_{all,rdm,sys}
# chown 0:0 ${WEBJOB_COMMANDS}/c_hlc_ftimes_{all,rdm,sys}

The commands below extract and install the Windows scripts.
Note that the scripts assume the FTimes binary (ftimes.exe) is located in the C:\ftimes\bin directory. If it is not, modify the script with the correct path to FTimes.

# sed -e '1,/^--- c_hlc_ftimes_all\.bat ---$/d; /^--- c_hlc_ftimes_all\.bat ---$/,$d' ftimes-bimvw.txt > c_hlc_ftimes_all.bat
# sed -e '1,/^--- c_hlc_ftimes_sys\.bat ---$/d; /^--- c_hlc_ftimes_sys\.bat ---$/,$d' ftimes-bimvw.txt > c_hlc_ftimes_sys.bat
# cp c_hlc_ftimes_{all,sys}.bat ${WEBJOB_COMMANDS}/
# chmod 644 ${WEBJOB_COMMANDS}/c_hlc_ftimes_{all,sys}.bat
# chown 0:0 ${WEBJOB_COMMANDS}/c_hlc_ftimes_{all,sys}.bat

If the FTimes profile for a particular client requires customization, simply copy the corresponding wrapper script to that client's commands directory and edit it as needed.

3. Create custom config file overrides for the "all", "rdm", and "sys" profiles for the UNIX and Windows scripts as appropriate.

# mkdir -p ${WEBJOB_BASE_DIR}/config/nph-webjob/commands/c_hlc_ftimes_all
# mkdir -p ${WEBJOB_BASE_DIR}/config/nph-webjob/commands/c_hlc_ftimes_rdm
# mkdir -p ${WEBJOB_BASE_DIR}/config/nph-webjob/commands/c_hlc_ftimes_sys
# mkdir -p ${WEBJOB_BASE_DIR}/config/nph-webjob/commands/c_hlc_ftimes_all.bat
# mkdir -p ${WEBJOB_BASE_DIR}/config/nph-webjob/commands/c_hlc_ftimes_sys.bat

Populate the UNIX and Windows "all" profile config files with the following content:

# vi ${WEBJOB_BASE_DIR}/config/nph-webjob/commands/c_hlc_ftimes_all/nph-webjob.cfg
# vi ${WEBJOB_BASE_DIR}/config/nph-webjob/commands/c_hlc_ftimes_all.bat/nph-webjob.cfg

--- nph-webjob.cfg ---
PutNameFormat=snapshots/%cid/%cmd/%Y-%m-%d/%cid_%Y-%m-%d_%H.%M.%S.%pid
PutTriggerEnable=Y
PutTriggerCommandLine=ftimes_bimvw -c gzip -r %rdy
--- nph-webjob.cfg ---

Note: In contrast to the "sys" profile that follows, no email reporting is enabled by default. This is because the "all" profile, without proper tuning, will typically generate too many false positives.
Through the use of ignore rules, this problem could be managed to the point where email alerts are worth enabling.

Populate the UNIX "rdm" profile config file with the following content:

# vi ${WEBJOB_BASE_DIR}/config/nph-webjob/commands/c_hlc_ftimes_rdm/nph-webjob.cfg

--- nph-webjob.cfg ---
PutNameFormat=snapshots/%cid/%cmd/%Y-%m-%d/%cid_%Y-%m-%d_%H.%M.%S.%pid
PutTriggerEnable=Y
PutTriggerCommandLine=ftimes_bimvw -b -c gzip -e "root@localhost" -r %rdy
--- nph-webjob.cfg ---

Note: The '-b' option is used to always compare to the first baseline, which, in theory, should contain very few records. This is done for emphasis -- we want to see how big the pile of dirty jobs is becoming. If this option proves to be too noisy, it can be removed.

Populate the UNIX and Windows "sys" profile config files with the following content:

# vi ${WEBJOB_BASE_DIR}/config/nph-webjob/commands/c_hlc_ftimes_sys/nph-webjob.cfg
# vi ${WEBJOB_BASE_DIR}/config/nph-webjob/commands/c_hlc_ftimes_sys.bat/nph-webjob.cfg

--- nph-webjob.cfg ---
PutNameFormat=snapshots/%cid/%cmd/%Y-%m-%d/%cid_%Y-%m-%d_%H.%M.%S.%pid
PutTriggerEnable=Y
PutTriggerCommandLine=ftimes_bimvw -c gzip -e "root@localhost" -r %rdy
--- nph-webjob.cfg ---

Note: It is important that you use the PutNameFormat specified above. In fact, the ftimes_bimvw script may fail to function if a different PutNameFormat is used. This is because several internal variables are derived from the path/name of the .rdy file.

Note: If you want to have email alerts sent to a different address, simply modify the '-e' option. You may specify multiple email addresses by enclosing them in double quotes and separating individual addresses with commas like so:

    -e "root@localhost,security@your.domain"

Note: You can optionally encrypt outbound emails using GPG.
To do this, add the '-g' option to the PutTriggerCommandLine like so:

    -g /var/webjob/config/gpg

This option enables encryption, and its argument specifies the GPG home directory (i.e., the place where the GPG key rings are located). More details regarding this option can be found in Step 7.

4. Extract ftimes_bimvw and ftimes_bimvw_add_ignore_rules from this recipe (Appendices 5 and 6), and install them in a suitable bin directory on your WebJob server.

# BIN_DIR=/usr/local/bin
# sed -e '1,/^--- ftimes_bimvw ---$/d; /^--- ftimes_bimvw ---$/,$d' ftimes-bimvw.txt > ftimes_bimvw
# cp ftimes_bimvw ${BIN_DIR}
# chown 0:0 ${BIN_DIR}/ftimes_bimvw
# chmod 755 ${BIN_DIR}/ftimes_bimvw
# sed -e '1,/^--- ftimes_bimvw_add_ignore_rules ---$/d; /^--- ftimes_bimvw_add_ignore_rules ---$/,$d' ftimes-bimvw.txt > ftimes_bimvw_add_ignore_rules
# cp ftimes_bimvw_add_ignore_rules ${BIN_DIR}
# chown 0:0 ${BIN_DIR}/ftimes_bimvw_add_ignore_rules
# chmod 755 ${BIN_DIR}/ftimes_bimvw_add_ignore_rules

Note: This recipe assumes that you chose a location that is in the PATH of the Apache user. If that's not the case, you'll need to revisit Step 3 and use full paths in each PutTriggerCommandLine.

The purpose of ftimes_bimvw is to perform change analysis, conditionally generate email alerts, and conditionally compress uploaded snapshot data.
This script has the following usage:

    ftimes_bimvw [-b] [-c {bzip2|compress|gzip|none}] [-e address[,address]]
        [-g gpg-keyring-path] [-H ftimes-home] [-i global-ignore-file]
        [-m compare-mask] -r rdy-file

where '-b' forces the script to compare the current snapshot to the first (or oldest) recorded baseline (typically baseline.1st); '-c' enables bzip2, compress, or gzip compression (gzip is the default); '-e' is a comma-delimited list of recipients that should receive a copy of all email alerts (alerts are not sent by default); '-g' enables GPG encryption of email alerts and specifies the location where the GPG key rings are located; '-H' is a path that points to the FTimes home directory (/usr/local/ftimes is the default); '-i' specifies the name of a global ignore file that contains egrep-style regular expressions used to filter out noise or false positives (each profile uses its own set of ignore rules by default -- this option can be used to supplement those local rules); '-m' is the compare mask that you want to use when performing change analysis ("none+md5" is the default); and '-r' is the full path of the client's .rdy file.

The purpose of ftimes_bimvw_add_ignore_rules is to read ignore rules from a file and add them to the local ignore.rules file for the specified profile and set of clients. This script has the following usage:

    ftimes_bimvw_add_ignore_rules [-d snapshots-dir] [-g gid] [-u uid]
        -i ignore-file -p profile client-id ...

where '-d' is the location of the snapshots directory (this is set to /var/webjob/incoming/snapshots by default); '-g' is the GID of the Apache user (apache by default); '-i' specifies the name of an ignore file that contains egrep-style regular expressions, one per line; '-p' is the name of the profile directory; and '-u' is the UID of the Apache user (apache by default).
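To make the ignore-rule workflow concrete, the sketch below mimics the core effect of an ftimes_bimvw_add_ignore_rules run: it generates one ignore rule from a compare record and appends it to each listed client's local ignore.rules file. All paths, client IDs, and the sample compare record are hypothetical, and a temporary directory stands in for /var/webjob/incoming/snapshots; the real script would also set ownership to the Apache UID/GID (its '-u' and '-g' options).

```shell
# Sketch only: emulate, under the assumptions above, what
#   ftimes_bimvw_add_ignore_rules -i new.rules -p c_hlc_ftimes_sys client1 client2
# accomplishes in a throwaway snapshots tree.
SNAPSHOTS_DIR=`mktemp -d`             # stand-in for /var/webjob/incoming/snapshots
PROFILE=c_hlc_ftimes_sys
RULES_FILE=${SNAPSHOTS_DIR}/new.rules

# Turn a hypothetical compare record into an anchored egrep expression
# using the sed transform shown elsewhere in this recipe.
echo 'changed|/var/log/messages|md5' |
  sed -e 's/\([$+./\|]\)/[\1]/g; s/\^/\\^/g; s/^/^/;' > ${RULES_FILE}

for CLIENT in client1 client2 ; do
  PROFILE_DIR=${SNAPSHOTS_DIR}/${CLIENT}/${PROFILE}
  mkdir -p ${PROFILE_DIR}
  # Append the new rules to each client's local ignore.rules file.  The
  # real script would also chown the result; omitted here (requires root).
  cat ${RULES_FILE} >> ${PROFILE_DIR}/ignore.rules
done
```

Running the fragment leaves each client's ignore.rules containing the single rule ^changed[|][/]var[/]log[/]messages[|]md5.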
Note: You can automatically create egrep-style ignore rules by running sed (as shown below) against the compare output files (either compared.cmp or compared.cmp.filtered). After running sed, you can add the appropriate regular expressions to the client's local ignore.rules file, or you may place these expressions in a file and run ftimes_bimvw_add_ignore_rules to apply them to several clients in a single operation. Alternatively, you can insert the expressions in a global ignore.rules file (see the '-i' option for ftimes_bimvw).

    sed -e 's/\([$+./\|]\)/[\1]/g; s/\^/\\^/g; s/^/^/;' compared.cmp

5. For UNIX clients, edit the hourly script (1.5 or higher) and add the following job:

    ${WEBJOB_HOME}/bin/webjob -e -f ${WEBJOB_HOME}/etc/upload.cfg c_hlc_ftimes_sys &

For Windows clients, create a scheduled task named "ftimes_sys" that runs once an hour. You can create 24 run times (one for each hour) via the "Schedule" tab in the properties window for the ftimes_sys scheduled task. The example below assumes WebJob is installed in the C:\webjob directory.

    C:\webjob\bin\webjob.exe -e -f C:\webjob\etc\upload.cfg c_hlc_ftimes_sys.bat

We recommend that you take a snapshot of system critical files once an hour.

For the "rdm" profile, add the following six-hour job:

    ${WEBJOB_HOME}/bin/webjob -e -f ${WEBJOB_HOME}/etc/upload.cfg c_hlc_ftimes_rdm -r

The default run directory for UNIX clients is usually /usr/local/webjob/run.

6. For UNIX clients, edit the daily script (1.5 or higher) and add the following job:

    ${WEBJOB_HOME}/bin/webjob -e -f ${WEBJOB_HOME}/etc/upload.cfg c_hlc_ftimes_all &

For Windows clients, create a scheduled task named "ftimes_all" that runs once a day. The example below assumes WebJob is installed in the C:\webjob directory.

    C:\webjob\bin\webjob.exe -e -f C:\webjob\etc\upload.cfg c_hlc_ftimes_all.bat

We recommend that you take a snapshot of all system files once a day.

7.
If you want to GPG-encrypt email alerts from the WebJob server, you will need to create a signing key and sign each recipient's key. This is required to make GPG work properly. Note that you can create a normal key, but it is not required. To create a signing-only key, run the following commands:

# mkdir -p /var/webjob/config/gpg
# gpg --homedir=/var/webjob/config/gpg --local-user=ftimes_bimvw_alerts --gen-key

Select the option to make this a signing-only key. Fill in the key expiry length and other requested information. Create a password when prompted for this key.

Once the signing key has been created, import and sign each recipient's key. Refer to the GPG documentation for details on how to do this.

Currently, encrypted email alerts require a group alias called ftimes_bimvw_alerts in your gpg.conf file. This alias is used by ftimes_bimvw. You need to create/edit a gpg.conf file and add the GPG Key IDs of all your recipients to the "group" line. The Key IDs should be space delimited. When finished, you should have a gpg.conf file that looks like this:

--- gpg.conf ---
group ftimes_bimvw_alerts = <key-id-1> <key-id-2> ...
no-greeting
no-secmem-warning
no-version
--- gpg.conf ---

To test that the key and group options are configured correctly, run the following command from a command prompt:

# gpg --homedir=/var/webjob/config/gpg --armor -o - -e --yes -r ftimes_bimvw_alerts /var/webjob/profiles/common/commands/hourly

This should produce ASCII-armored encrypted output and send it to your screen. If it doesn't, you'll need to remedy any errors.

Closing Remarks

When harvesting snapshots on a periodic basis, it is important to choose a job interval that is sufficiently long. This helps to ensure that jobs don't stack up on the client. A good rule of thumb is that a snapshot should take no longer than 1% of the job interval to complete. For example, if it takes 36 seconds to map all system critical files, then the overall snapshot interval should be no less than one hour (i.e., 3600 seconds).
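The 1% rule of thumb above is easy to apply mechanically. The fragment below is a minimal sketch; the 36-second figure is just the example from the text, and in practice you would substitute a duration measured by timing a real wrapper-script run (e.g., with time(1)).

```shell
# Apply the 1% rule of thumb: the job interval should be at least 100x
# the time a snapshot takes to complete.
SNAPSHOT_SECONDS=36   # hypothetical measured snapshot duration
MIN_INTERVAL=`expr ${SNAPSHOT_SECONDS} \* 100`
echo "Minimum job interval: ${MIN_INTERVAL} seconds"
```

For the example figure, this yields a minimum interval of 3600 seconds, matching the one-hour recommendation above.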
One situation you'll want to avoid is having your "all" and "sys" profiles (or any other profiles) running concurrently. There are two main reasons for this: 1) the system load of having multiple snapshots run at the same time may be more than you're willing to accept, and 2) it's best, from a forensic perspective, if ATime changes can be attributed to a particular snapshot (when multiple snapshots are running at the same time over a common set of files, this may not be possible).

Before deploying this solution (i.e., BIMVW), you need to consider your tolerance for blackouts and decide if the results will be good enough to meet your needs. A blackout occurs when a WebJob client is unable to perform its regularly scheduled snapshot due to some technical difficulty (e.g., network failure, broken script, full disk, etc.). In general, a missing snapshot here or there is not a big deal because any outstanding changes will be picked up during the next job cycle. On the other hand, this could be a real issue if your systems are under active attack. This is because the perpetrator may be causing the blackout to buy himself extra time to restore file attributes or content that changed during his attack.

Once an FTimes profile is up and running, you can filter out unwanted alerts by using an ignore file. To do this, create the following file in the appropriate subject/profile directory:

    snapshots
      |
      + <subject>
          |
          + <profile>
              |
              - ignore.rules

A sample ignore.rules file can be found in Appendix 7. The format of this file is a list of egrep-style regular expressions, one per line. An empty file contains zero expressions, so it will perform no filtering -- except on Solaris (see warning below).

Warning: Do not use an empty file when using the native version of egrep on Solaris systems, as this will filter out all changes, and that's probably not what you intended to have happen.
Instead, create an ignore.rules file and place the following line in it:

    ^category[|]name[|]changed[|]unknown

Note: You may specify a global ignore.rules file with the '-i' command line option. This option should be used to supplement local ignore rules; it does not replace them. Again, heed the warning for Solaris described above.

If you plan to use email reporting, you'll need to run tests to verify that everything is working. If your mailer is not mail(1), you'll need to edit the script and set the MAIL variable as appropriate.

Note: If you have multiple systems with large (e.g., >500K records) snapshots, you could experience heavy loads on your server. This is because FTimes attempts to acquire as much memory as is required to load the baseline into an internal lookup table.

Credits

This recipe was brought to you by Klayton Monroe and Jay Smith.

References

FTimes is available here:

    http://ftimes.sourceforge.net/FTimes/

WebJob is available here:

    http://webjob.sourceforge.net/WebJob/

Appendix 1

The following command may be used to extract this Appendix:

$ sed -e '1,/^--- c_hlc_ftimes_all ---$/d; /^--- c_hlc_ftimes_all ---$/,$d' ftimes-bimvw.txt > c_hlc_ftimes_all

--- c_hlc_ftimes_all ---
#!/bin/sh
######################################################################
#
# $Id: c_hlc_ftimes_all,v 1.3 2007/02/08 17:13:52 klm Exp $
#
######################################################################
#
# Copyright 2006-2007 The FTimes Project, All Rights Reserved.
#
######################################################################
#
# Purpose: Create a snapshot for the "all" profile.
#
######################################################################

IFS=' '
PATH=/sbin:/usr/sbin:/usr/local/sbin:/bin:/usr/bin:/usr/local/bin

PROGRAM=`basename $0`

Usage()
{
  echo 1>&2
  echo "Usage: ${PROGRAM} [-H ftimes-home]" 1>&2
  echo 1>&2
  exit 1
}

while getopts "H:" OPTION ; do
  case "${OPTION}" in
  H)
    FTIMES_HOME="${OPTARG}"
    ;;
  *)
    Usage
    ;;
  esac
done
if [ ${OPTIND} -le $# ] ; then
  Usage
fi

PATH=${FTIMES_HOME=/usr/local/ftimes}/bin:${PATH}

ftimes --maplean - -l 0 << EOF
AnalyzeRemoteFiles=N
BaseName=-
Compress=Y
FieldMask=all-magic
IncludesMustExist=N
ExcludesMustExist=N
Include=/
EOF
--- c_hlc_ftimes_all ---

Appendix 2

The following command may be used to extract this Appendix:

$ sed -e '1,/^--- c_hlc_ftimes_sys ---$/d; /^--- c_hlc_ftimes_sys ---$/,$d' ftimes-bimvw.txt > c_hlc_ftimes_sys

--- c_hlc_ftimes_sys ---
#!/bin/sh
######################################################################
#
# $Id: c_hlc_ftimes_sys,v 1.9 2008/10/12 16:26:46 klm Exp $
#
######################################################################
#
# Copyright 2006-2007 The FTimes Project, All Rights Reserved.
#
######################################################################
#
# Purpose: Create a snapshot for the "sys" (system critical) profile.
# ###################################################################### IFS=' ' PATH=/sbin:/usr/sbin:/usr/local/sbin:/bin:/usr/bin:/usr/local/bin PROGRAM=`basename $0` Usage() { echo 1>&2 echo "Usage: ${PROGRAM} [-H ftimes-home]" 1>&2 echo 1>&2 exit 1 } while getopts "H:" OPTION ; do case "${OPTION}" in H) FTIMES_HOME="${OPTARG}" ;; *) Usage ;; esac done if [ ${OPTIND} -le $# ] ; then Usage fi PATH=${FTIMES_HOME=/usr/local/ftimes}/bin:${PATH} ftimes --maplean - -l 0 << EOF AnalyzeRemoteFiles=N BaseName=- Compress=Y FieldMask=all-magic IncludesMustExist=N ExcludesMustExist=N Include=/.bash_profile Include=/.bashrc Include=/.cshrc Include=/.profile Include=/.ssh Include=/.ssh2 Include=/.tcshrc Include=/bin Include=/boot Include=/dev Include=/etc Include=/kernel Include=/lib Include=/modules Include=/opt/local/bin Include=/opt/local/etc Include=/opt/local/ftimes Exclude=/opt/local/ftimes/run Include=/opt/local/integrity Exclude=/opt/local/integrity/run Include=/opt/local/webjob Exclude=/opt/local/webjob/run Include=/rescue Include=/root Include=/sbin Include=/stand Include=/usr/bin Include=/usr/lib Include=/usr/libdata Include=/usr/libexec Include=/usr/local/bin Include=/usr/local/etc Include=/usr/local/ftimes Exclude=/usr/local/ftimes/run Include=/usr/local/integrity Exclude=/usr/local/integrity/run Include=/usr/local/lib Include=/usr/local/libexec Include=/usr/local/sbin Include=/usr/local/webjob Exclude=/usr/local/webjob/run Include=/usr/sbin Include=/usr/ucb EOF --- c_hlc_ftimes_sys --- Appendix 3 The following command may be used to extract this Appendix: $ sed -e '1,/^--- c_hlc_ftimes_all\.bat ---$/d; /^--- c_hlc_ftimes_all\.bat ---$/,$d' ftimes-bimvw.txt > c_hlc_ftimes_all.bat --- c_hlc_ftimes_all.bat --- @echo off set PATH=C:\ftimes\bin;%PATH% ( echo AnalyzeRemoteFiles=N echo BaseName=- echo Compress=Y echo FieldMask=all-magic echo IncludesMustExist=N echo ExcludesMustExist=N ) | ftimes.exe --maplean - -l 0 --- c_hlc_ftimes_all.bat --- Appendix 4 The 
following command may be used to extract this Appendix: $ sed -e '1,/^--- c_hlc_ftimes_sys\.bat ---$/d; /^--- c_hlc_ftimes_sys\.bat ---$/,$d' ftimes-bimvw.txt > c_hlc_ftimes_sys.bat --- c_hlc_ftimes_sys.bat --- @echo off set PATH=C:\ftimes\bin;%PATH% ( echo AnalyzeRemoteFiles=N echo BaseName=- echo Compress=Y echo FieldMask=all-magic echo IncludesMustExist=N echo ExcludesMustExist=N echo Include=C:\WINDOWS echo Include=C:\WINNT ) | ftimes.exe --maplean - -l 0 --- c_hlc_ftimes_sys.bat --- Appendix 5 The following command may be used to extract this Appendix: $ sed -e '1,/^--- ftimes_bimvw ---$/d; /^--- ftimes_bimvw ---$/,$d' ftimes-bimvw.txt > ftimes_bimvw --- ftimes_bimvw --- #!/bin/sh ###################################################################### # # $Id: ftimes_bimvw.base,v 1.9 2008/12/16 03:44:20 klm Exp $ # ###################################################################### # # Copyright 2006-2007 The FTimes Project, All Rights Reserved. # ###################################################################### # # Purpose: Basic Integrity Monitoring Via WebJob (BIMVW) # ###################################################################### IFS=' ' PATH=/sbin:/usr/sbin:/usr/local/sbin:/bin:/usr/bin:/usr/local/bin:/usr/local/webjob/bin PROGRAM=`basename $0` MAIL=mail XER_OK=0 XER_Usage=1 XER_RdyFile=2 XER_MissingInput=3 XER_FTimesDecoder=4 XER_FTimesCompare=5 XER_FilterChanges=6 XER_GenerateEmailAlert=7 XER_CompressFiles=8 XER_CompoundError=9 XER_FTimesIntegrity=10 XER_FTimesVersion=11 XER_FTimesLock=12 ###################################################################### # # CompressFiles # ###################################################################### CompressFiles() { MY_BASE_FILE=$1 MY_COMPRESSION_METHOD=$2 # Optional #################################################################### # # Check required inputs. 
# #################################################################### if [ -z "${MY_BASE_FILE}" ] ; then echo "${PROGRAM}: CompressFiles(): Error='Missing one or more required inputs.'" 1>&2 return ${XER_MissingInput} fi #################################################################### # # Determine which compression utility to use. # #################################################################### case "${COMPRESSION_METHOD}" in bzip2) MY_COMPRESSOR="bzip2" ;; compress) MY_COMPRESSOR="compress" ;; gzip) MY_COMPRESSOR="gzip" ;; none) return 0 ;; *) return 0 ;; esac #################################################################### # # Compress only those files uploaded by the client. # #################################################################### MY_HAVE_ERRORS=0 for MY_EXTENSION in ".env" ".err" ".out" ".rdy" ".cmp" ".cmp.filtered" ; do MY_FILE=${MY_BASE_FILE}${MY_EXTENSION} if [ -f ${MY_FILE} ] ; then MY_COMMAND_LINE="${MY_COMPRESSOR} -f ${MY_FILE}" echo ${MY_COMMAND_LINE} eval ${MY_COMMAND_LINE} if [ $? -ne 0 ] ; then MY_HAVE_ERRORS=1 fi fi done if [ ${MY_HAVE_ERRORS} -eq 1 ] ; then return ${XER_CompressFiles} fi return 0 } ###################################################################### # # CreateLockFile # ###################################################################### CreateLockFile() { my_lock_file=$1 # Customize ln(1) options based on the OS. case `uname -s` in NIKOS) # This OS is so old it doesn't support '-n'. ln_options= ;; *) ln_options="-n" ;; esac if [ -z "${my_lock_file}" ] ; then return 1 # Rats, we didn't even get to the gate. fi my_old_umask=`umask` umask 022 my_lock_dir=`dirname ${my_lock_file}` if [ ! -d ${my_lock_dir} ] ; then mkdir -p ${my_lock_dir} if [ $? -ne 0 ] ; then return 1 # Rats, we got bushwhacked. fi fi umask 077 my_temp_file=${my_lock_file}.$$ echo $$ | cat - > ${my_temp_file} if [ $? -ne 0 ] ; then return 1 # Rats, we didn't even get out of the gate. 
fi umask ${my_old_umask} ln ${ln_options} ${my_temp_file} ${my_lock_file} > /dev/null 2>&1 if [ $? -eq 0 ] ; then rm -f ${my_temp_file} return 0 # Ding ding ding, we have a winner. fi my_old_pid=`head -1 ${my_lock_file}` TestPid "${my_old_pid}" if [ $? -eq 0 ] ; then kill -0 ${my_old_pid} > /dev/null 2>&1 if [ $? -eq 0 ] ; then rm -f ${my_temp_file} return 1 # Rats, the lock is in use. fi fi # At this point, the lock is corrupt, stale, or owned by a different # user. Attempt to delete it, and go for the gold. rm -f ${my_lock_file} ln ${ln_options} ${my_temp_file} ${my_lock_file} > /dev/null 2>&1 if [ $? -eq 0 ] ; then rm -f ${my_temp_file} return 0 # Ding ding ding, we have a winner. fi rm -f ${my_temp_file} return 1 # Rats, someone else got there first. } ###################################################################### # # DeleteLockFile # ###################################################################### DeleteLockFile() { my_lock_file=$1 if [ -n "${my_lock_file}" -a -f "${my_lock_file}" ] ; then my_old_pid=`head -1 ${my_lock_file}` TestPid "${my_old_pid}" if [ $? -eq 0 ] ; then if [ ${my_old_pid} -eq $$ ] ; then rm -f ${my_lock_file} fi fi fi return 0 } ###################################################################### # # FilterChanges # ###################################################################### FilterChanges() { MY_MASTER_CMP_FILE=$1 MY_GLOBAL_IGNORE_FILE=$2 # Optional #################################################################### # # Check required inputs. # #################################################################### if [ -z "${MY_MASTER_CMP_FILE}" ] ; then echo "${PROGRAM}: FilterChanges(): Error='Missing one or more required inputs.'" 1>&2 return 1 fi #################################################################### # # Initialize derived variables. 
# #################################################################### MY_PROFILE_DIR=`dirname ${MY_MASTER_CMP_FILE}` #################################################################### # # Determine if ignore files exist. If a local ignore file does not # exist, create one -- revert to /dev/null in case of an error. If # a global ignore file was not defined or doesn't exist, revert to # /dev/null. The expected result of reverting to /dev/null in each # case is to create an open filter for that particular leg of the # filtering process. # #################################################################### MY_LOCAL_IGNORE_FILE=${MY_PROFILE_DIR}/ignore.rules if [ ! -f "${MY_LOCAL_IGNORE_FILE}" -o ! -r "${MY_LOCAL_IGNORE_FILE}" ] ; then echo '^category[|]name[|]changed[|]unknown[|]records' > ${MY_LOCAL_IGNORE_FILE} if [ ! -f "${MY_LOCAL_IGNORE_FILE}" -o ! -r "${MY_LOCAL_IGNORE_FILE}" ] ; then MY_LOCAL_IGNORE_FILE=/dev/null fi fi if [ -z "${MY_GLOBAL_IGNORE_FILE}" -o ! -f "${MY_GLOBAL_IGNORE_FILE}" -o ! -r "${MY_GLOBAL_IGNORE_FILE}" ] ; then MY_GLOBAL_IGNORE_FILE=/dev/null fi #################################################################### # # Filter out anything that matches the ignore rules. # #################################################################### MY_COMMAND_LINE="egrep -v -f ${MY_LOCAL_IGNORE_FILE} ${MY_MASTER_CMP_FILE} | egrep -v -f ${MY_GLOBAL_IGNORE_FILE}" echo ${MY_COMMAND_LINE} eval ${MY_COMMAND_LINE} > ${MY_MASTER_CMP_FILE}.filtered if [ $? -ne 0 -a $? -ne 1 ] ; then return 1 fi return 0 } ###################################################################### # # GenerateEmailAlert # ###################################################################### GenerateEmailAlert() { MY_MASTER_CMP_FILE_FILTERED=$1 MY_HOSTNAME=$2 MY_ALERT_RECIPIENTS=$3 MY_GPG_KEY_DIR=$4 # Optional #################################################################### # # Check required inputs. 
  #
  ####################################################################

  if [ -z "${MY_MASTER_CMP_FILE_FILTERED}" -o -z "${MY_HOSTNAME}" -o -z "${MY_ALERT_RECIPIENTS}" ] ; then
    echo "${PROGRAM}: GenerateEmailAlert(): Error='Missing one or more required inputs.'" 1>&2
    return 1
  fi

  ####################################################################
  #
  # Initialize derived variables.
  #
  ####################################################################

  MY_PROFILE_DIR=`dirname ${MY_MASTER_CMP_FILE_FILTERED}`
  MY_PROFILE=`basename ${MY_PROFILE_DIR}`

  ####################################################################
  #
  # Generate the report and send it out.
  #
  ####################################################################

  if [ -f ${MY_MASTER_CMP_FILE_FILTERED} ] ; then
    MY_ECHO="echo \"category|name|changed|unknown|records\""
    MY_CAT="cat ${MY_MASTER_CMP_FILE_FILTERED}"
    MY_SUBJECT="Integrity Alert (${MY_PROFILE}) -- Unexpected changes on ${MY_HOSTNAME}."
    if [ -n "${MY_GPG_KEY_DIR}" ] ; then
      MY_ENCRYPT="gpg --homedir ${MY_GPG_KEY_DIR} --armor -o - -e --yes -r ftimes_bimvw_alerts"
      MY_COMMAND_LINE="{ { ${MY_ECHO} ; ${MY_CAT} ; } | ${MY_ENCRYPT} ; } | ${MAIL} -s \"${MY_SUBJECT}\" \"${MY_ALERT_RECIPIENTS}\""
    else
      MY_COMMAND_LINE="{ ${MY_ECHO} ; ${MY_CAT} ; } | ${MAIL} -s \"${MY_SUBJECT}\" \"${MY_ALERT_RECIPIENTS}\""
    fi
    echo ${MY_COMMAND_LINE}
    eval ${MY_COMMAND_LINE}
    MY_RETURN_STATUS=$?
    if [ ${MY_RETURN_STATUS} -ne 0 ] ; then
      echo "${PROGRAM}: GenerateEmailAlert(): Error='Mail command returned non-zero exit status (${MY_RETURN_STATUS}).'" 1>&2
      return 1
    fi
  fi

  return 0
}

######################################################################
#
# ProcessSnapshot
#
######################################################################

ProcessSnapshot()
{
  MY_RDY_FILE=$1
  MY_COMPARE_MASK=$2
  MY_USE_1ST_BASELINE=$3
  MY_GLOBAL_IGNORE_FILE=$4 # Optional
  MY_ALERT_RECIPIENTS=$5 # Optional
  MY_GPG_KEY_DIR=$6 # Optional

  ####################################################################
  #
  # Check required inputs.
  #
  ####################################################################

  if [ -z "${MY_RDY_FILE}" -o -z "${MY_COMPARE_MASK}" -o -z "${MY_USE_1ST_BASELINE}" ] ; then
    echo "${PROGRAM}: ProcessSnapshot(): Error='Missing one or more required inputs.'" 1>&2
    return ${XER_MissingInput}
  fi

  ####################################################################
  #
  # Initialize derived variables.
  #
  ####################################################################

  MY_BASE_FILE=`basename ${MY_RDY_FILE} .rdy`
  MY_DATE_DIR=`dirname ${MY_RDY_FILE}`
  MY_PROFILE_DIR=`dirname ${MY_DATE_DIR}`

  MY_ENV_FILE=${MY_DATE_DIR}/${MY_BASE_FILE}.env
  MY_LOG_FILE=${MY_DATE_DIR}/${MY_BASE_FILE}.err
  MY_MAP_FILE=${MY_DATE_DIR}/${MY_BASE_FILE}.out
  MY_CMP_FILE=${MY_DATE_DIR}/${MY_BASE_FILE}.cmp
  MY_CMP_FILTERED_FILE=${MY_DATE_DIR}/${MY_BASE_FILE}.cmp.filtered

  MY_MASTER_1ST_BASELINE_FILE=${MY_PROFILE_DIR}/baseline.1st
  MY_MASTER_BASELINE_FILE=${MY_PROFILE_DIR}/baseline.map
  MY_MASTER_SNAPSHOT_FILE=${MY_PROFILE_DIR}/snapshot.map
  MY_MASTER_CMP_FILE=${MY_PROFILE_DIR}/compared.cmp
  MY_MASTER_LOCK_FILE=${MY_PROFILE_DIR}/compared.pid

  ####################################################################
  #
  # Remove old output files. This must be done as early as possible
  # so that other follow-on utilities, which rely on these files,
  # don't run when they shouldn't.
  #
  ####################################################################

  rm -f ${MY_MASTER_CMP_FILE} ${MY_MASTER_CMP_FILE}.filtered

  ####################################################################
  #
  # Check that the minimum required version of FTimes is installed.
  #
  ####################################################################

  MY_FTIMES_VERSION=`ftimes --version | awk '{print $2}'`
  MY_MAJOR_FTIMES_VERSION=`echo ${MY_FTIMES_VERSION} | cut -f1 -d.`
  MY_MINOR_FTIMES_VERSION=`echo ${MY_FTIMES_VERSION} | cut -f2 -d.`
  if [ -z "${MY_MAJOR_FTIMES_VERSION}" -o -z "${MY_MINOR_FTIMES_VERSION}" ] ; then
    echo "${PROGRAM}: ProcessSnapshot(): Error='Unable to determine current FTimes version.'" 1>&2
    return ${XER_FTimesVersion}
  fi
  # Note: The minor version is only checked when the major version is
  # exactly 3 -- otherwise, versions such as 4.0 would be rejected.
  if [ ${MY_MAJOR_FTIMES_VERSION} -lt 3 ] || [ ${MY_MAJOR_FTIMES_VERSION} -eq 3 -a ${MY_MINOR_FTIMES_VERSION} -lt 9 ] ; then
    echo "${PROGRAM}: ProcessSnapshot(): Error='FTimes 3.9.0 or higher is required, but the current version is ${MY_FTIMES_VERSION}.'" 1>&2
    return ${XER_FTimesVersion}
  fi

  ####################################################################
  #
  # Verify snapshot integrity.
  #
  ####################################################################

  MY_TARGET_HASH=`egrep "[|]OutFileHash=" ${MY_LOG_FILE} | cut -d= -f2 | tr -d '\r'`
  if [ -z "${MY_TARGET_HASH}" ] ; then
    echo "${PROGRAM}: ProcessSnapshot(): Error='Unable to obtain target OutFileHash.'" 1>&2
    return ${XER_FTimesIntegrity}
  fi
  MY_ACTUAL_HASH=`webjob -h -t md5 ${MY_MAP_FILE}`
  if [ -z "${MY_ACTUAL_HASH}" ] ; then
    echo "${PROGRAM}: ProcessSnapshot(): Error='Unable to obtain actual OutFileHash.'" 1>&2
    return ${XER_FTimesIntegrity}
  fi
  if [ X"${MY_ACTUAL_HASH}" != X"${MY_TARGET_HASH}" ] ; then
    echo "${PROGRAM}: ProcessSnapshot(): Error='Unable to verify snapshot integrity.'" 1>&2
    return ${XER_FTimesIntegrity}
  fi

  ####################################################################
  #
  # Create a lock file.
  #
  ####################################################################

  CreateLockFile "${MY_MASTER_LOCK_FILE}"
  if [ $? -ne 0 ] ; then
    echo "${PROGRAM}: ProcessSnapshot(): Error='Unable to secure a lock file.'" 1>&2
    return ${XER_FTimesLock}
  fi

  ####################################################################
  #
  # Rotate baseline/snapshot files. However, do not rotate a zero
  # length snapshot file into the baseline position as that could
  # cause a large number of false positives. If the snapshot file
  # has no content, it's assumed that the snapshot failed.
  #
  ####################################################################

  if [ -s ${MY_MASTER_SNAPSHOT_FILE} ] ; then
    if [ -s ${MY_MASTER_BASELINE_FILE} ] ; then
      if [ -s ${MY_MASTER_1ST_BASELINE_FILE} ] ; then
        : # Don't overwrite the 1st baseline if it exists and has non-zero length.
      else
        mv -f ${MY_MASTER_BASELINE_FILE} ${MY_MASTER_1ST_BASELINE_FILE}
      fi
    fi
    mv -f ${MY_MASTER_SNAPSHOT_FILE} ${MY_MASTER_BASELINE_FILE}
  fi

  ####################################################################
  #
  # Decode the current snapshot. FTimes 3.6.0 and higher will decode
  # any snapshot (compressed or not), so this should be safe to do.
  #
  ####################################################################

  MY_COMMAND_LINE="ftimes --decoder ${MY_MAP_FILE} -l 0"
  echo ${MY_COMMAND_LINE}
  eval ${MY_COMMAND_LINE} > ${MY_MASTER_SNAPSHOT_FILE}
  if [ $? -ne 0 ] ; then
    echo "${PROGRAM}: ProcessSnapshot(): Error='Unable to decode snapshot (${MY_MAP_FILE}).'" 1>&2
    DeleteLockFile "${MY_MASTER_LOCK_FILE}"
    return ${XER_FTimesDecoder}
  fi

  ####################################################################
  #
  # Check to see if the user wants to compare the current snapshot to
  # the original baseline. If yes, redefine MY_MASTER_BASELINE_FILE.
  #
  ####################################################################

  if [ ${MY_USE_1ST_BASELINE} -eq 1 ] ; then
    if [ -f ${MY_MASTER_1ST_BASELINE_FILE} ] ; then
      MY_MASTER_BASELINE_FILE=${MY_MASTER_1ST_BASELINE_FILE}
    fi
  fi

  ####################################################################
  #
  # A non-empty baseline/snapshot pair is required to perform change
  # analysis. Once the changes are tallied, filter out anything that
  # matches the ignore rules (if any). Then, if any changes remain,
  # conditionally generate an email alert.
  #
  ####################################################################

  if [ -s ${MY_MASTER_BASELINE_FILE} -a -s ${MY_MASTER_SNAPSHOT_FILE} ] ; then
    MY_COMMAND_LINE="ftimes --compare ${MY_COMPARE_MASK} ${MY_MASTER_BASELINE_FILE} ${MY_MASTER_SNAPSHOT_FILE} -l 0 | sed '/^category/d;' | sort -t\| -k 2"
    echo ${MY_COMMAND_LINE}
    eval ${MY_COMMAND_LINE} > ${MY_MASTER_CMP_FILE}
    if [ $? -ne 0 ] ; then
      echo "${PROGRAM}: ProcessSnapshot(): Error='Unable to perform change analysis.'" 1>&2
      DeleteLockFile "${MY_MASTER_LOCK_FILE}"
      return ${XER_FTimesCompare}
    else
      FilterChanges "${MY_MASTER_CMP_FILE}" "${MY_GLOBAL_IGNORE_FILE}"
      if [ $? -ne 0 ] ; then
        DeleteLockFile "${MY_MASTER_LOCK_FILE}"
        return ${XER_FilterChanges}
      fi
      cp ${MY_MASTER_CMP_FILE} ${MY_CMP_FILE}
      cp ${MY_MASTER_CMP_FILE}.filtered ${MY_CMP_FILTERED_FILE}
      if [ -n "${MY_ALERT_RECIPIENTS}" -a -s ${MY_MASTER_CMP_FILE}.filtered ] ; then
        MY_HOSTNAME=`egrep "^Hostname=" ${MY_ENV_FILE} | cut -d= -f2`
        GenerateEmailAlert "${MY_MASTER_CMP_FILE}.filtered" "${MY_HOSTNAME}" "${MY_ALERT_RECIPIENTS}" "${MY_GPG_KEY_DIR}"
        if [ $? -ne 0 ] ; then
          DeleteLockFile "${MY_MASTER_LOCK_FILE}"
          return ${XER_GenerateEmailAlert}
        fi
      fi
    fi
  else
    echo "${PROGRAM}: ProcessSnapshot(): Warning='Change analysis skipped due to empty baseline and/or snapshot.'" 1>&2
  fi

  DeleteLockFile "${MY_MASTER_LOCK_FILE}"
  return ${XER_OK}
}

######################################################################
#
# TestPid
#
######################################################################

TestPid()
{
  my_pid="$1"
  my_pid_regexp="^[0-9]+$"

  echo "${my_pid}" | egrep "${my_pid_regexp}" > /dev/null 2>&1
  if [ $? -eq 0 ] ; then
    # The PID is valid.
    return 0;
  fi

  return 1; # The PID is not valid.
}

######################################################################
#
# Usage
#
######################################################################

Usage()
{
  echo 1>&2
  echo "Usage: ${PROGRAM} [-b] [-c {bzip2|compress|gzip|none}] [-e address[,address]] [-g gpg-keyring-path] [-H ftimes-home] [-i global-ignore-file] [-m compare-mask] -r rdy-file" 1>&2
  echo 1>&2
  exit ${XER_Usage}
}

######################################################################
#
# Main
#
######################################################################

ALERT_RECIPIENTS=
USE_1ST_BASELINE=0
COMPRESSION_METHOD=gzip
COMPARE_MASK=none+md5
GLOBAL_IGNORE_FILE=
GPG_KEY_DIR=
RDY_FILE=

while getopts "bc:e:g:H:i:m:r:" OPTION ; do
  case "${OPTION}" in
  b) USE_1ST_BASELINE=1 ;;
  c) COMPRESSION_METHOD="${OPTARG}" ;;
  e) ALERT_RECIPIENTS="${OPTARG}" ;;
  g) GPG_KEY_DIR="${OPTARG}" ;;
  H) FTIMES_HOME="${OPTARG}" ;;
  i) GLOBAL_IGNORE_FILE="${OPTARG}" ;;
  m) COMPARE_MASK="${OPTARG}" ;;
  r) RDY_FILE="${OPTARG}" ;;
  *) Usage ;;
  esac
done
if [ ${OPTIND} -le $# ] ; then
  Usage
fi

PATH=${FTIMES_HOME=/usr/local/ftimes}/bin:${PATH}

if [ -z "${RDY_FILE}" ] ; then
  Usage
fi

if [ ! -f "${RDY_FILE}" ] ; then
  echo "${PROGRAM}: Error='The specified file (${RDY_FILE}) does not exist or is not regular.'" 1>&2
  exit ${XER_RdyFile}
fi

BASE_FILE=`basename ${RDY_FILE} .rdy`
DATE_DIR=`dirname ${RDY_FILE}`
PROFILE_DIR=`dirname ${DATE_DIR}`
ANALYSIS_LOG_FILE=${PROFILE_DIR}/analysis.log

ProcessSnapshot "${RDY_FILE}" "${COMPARE_MASK}" "${USE_1ST_BASELINE}" "${GLOBAL_IGNORE_FILE}" "${ALERT_RECIPIENTS}" "${GPG_KEY_DIR}" > ${ANALYSIS_LOG_FILE} 2>&1
PROCESS_RETURN_STATUS=$?

CompressFiles "${DATE_DIR}/${BASE_FILE}" "${COMPRESSION_METHOD}" >> ${ANALYSIS_LOG_FILE} 2>&1
COMPRESS_RETURN_STATUS=$?

if [ ${PROCESS_RETURN_STATUS} -eq 0 -a ${COMPRESS_RETURN_STATUS} -eq 0 ] ; then
  RETURN_STATUS=${XER_OK}
elif [ ${PROCESS_RETURN_STATUS} -ne 0 -a ${COMPRESS_RETURN_STATUS} -eq 0 ] ; then
  RETURN_STATUS=${PROCESS_RETURN_STATUS}
elif [ ${PROCESS_RETURN_STATUS} -eq 0 -a ${COMPRESS_RETURN_STATUS} -ne 0 ] ; then
  RETURN_STATUS=${COMPRESS_RETURN_STATUS}
else
  RETURN_STATUS=${XER_CompoundError}
fi

if [ ${RETURN_STATUS} -ne 0 ] ; then
  echo "${PROGRAM}: Error='Processing failed. Check the analysis log (${ANALYSIS_LOG_FILE}).'" 1>&2
fi

exit ${RETURN_STATUS}

--- ftimes_bimvw ---

Appendix 6

The following command may be used to extract this Appendix:

$ sed -e '1,/^--- ftimes_bimvw_add_ignore_rules ---$/d; /^--- ftimes_bimvw_add_ignore_rules ---$/,$d' ftimes-bimvw.txt > ftimes_bimvw_add_ignore_rules

--- ftimes_bimvw_add_ignore_rules ---
#!/bin/sh
######################################################################
#
# $Id: ftimes_bimvw_add_ignore_rules,v 1.6 2007/02/08 17:29:02 klm Exp $
#
######################################################################
#
# Copyright 2006-2007 The FTimes Project, All Rights Reserved.
#
######################################################################
#
# NAME
#
#   ftimes_bimvw_add_ignore_rules - add ignore rules
#
# SYNOPSIS
#
#   ftimes_bimvw_add_ignore_rules [-d snapshots-dir] [-g gid] [-u uid]
#     -i ignore-file -p profile client-id ...
#
# DESCRIPTION
#
#   This program adds ignore rules to clients with the specified
#   profile. These ignore rules are used by the ftimes_bimvw script
#   when comparing two FTimes maps. The regular expressions contained
#   in ignore-file will be added to the following ignore.rules files
#   based on the arguments passed to this script:
#
#     /snapshots-dir/{client-id,...}/profile/ignore.rules
#
#   This script provides an easy mechanism for applying the same
#   ignore rules to many clients within a profile, which reduces both
#   the amount of labor involved and the chance of human error (e.g.,
#   a fat-fingered rule).
#
# OPTIONS
#
#   [-d snapshots-dir]
#     Specifies the main directory for the FTimes snapshots. The
#     default value is /var/webjob/incoming/snapshots.
#
#   [-g gid]
#     Specifies the Apache daemon group ID. The default value is
#     'apache'. The ftimes_bimvw_wrapper script must be able to read
#     the ignore.rules file when executed by the Apache daemon via
#     the WebJob PutTrigger. When executed in this manner, the
#     ftimes_bimvw_wrapper script runs with the Apache GID. Thus, it
#     may be necessary to specify the correct GID via this option.
#
#   [-u uid]
#     Specifies the Apache daemon user ID. The default value is
#     'apache'. The ftimes_bimvw_wrapper script must be able to read
#     the ignore.rules file when executed by the Apache daemon via
#     the WebJob PutTrigger. When executed in this manner, the
#     ftimes_bimvw_wrapper script runs with the Apache UID. Thus, it
#     may be necessary to specify the correct UID via this option.
#
#   -i ignore-file
#     Specifies the file containing the regular expressions that
#     select files to be ignored by the ftimes_bimvw script when
#     comparing two FTimes maps.
#
#   -p profile
#     Specifies the profile (e.g., c_hlc_ftimes_sys) to which the
#     regular expressions contained in ignore-file will be applied.
#
#   client-id ...
#     Specifies one or more client names to which the regular
#     expressions contained in ignore-file will be applied.
#
######################################################################

IFS=' 
'

PATH=/sbin:/usr/sbin:/usr/local/sbin:/bin:/usr/bin:/usr/local/bin

PROGRAM=`basename $0`

######################################################################
#
# Usage
#
######################################################################

Usage()
{
  echo 1>&2
  echo "Usage: ${PROGRAM} [-d snapshots-dir] [-g gid] [-u uid] -i ignore-file -p profile client-id ..." 1>&2
  echo 1>&2
  exit 1
}

######################################################################
#
# Main
#
######################################################################

PROFILE=
SNAPSHOTS_DIR=/var/webjob/incoming/snapshots
SRC_FILE=
WWW_GID=apache
WWW_UID=apache

while getopts "d:i:g:u:p:" OPTION ; do
  case "${OPTION}" in
  d) SNAPSHOTS_DIR="${OPTARG}" ;;
  g) WWW_GID="${OPTARG}" ;;
  i) SRC_FILE="${OPTARG}" ;;
  p) PROFILE="${OPTARG}" ;;
  u) WWW_UID="${OPTARG}" ;;
  *) Usage ;;
  esac
done
if [ ${OPTIND} -gt $# ] ; then
  Usage
fi
shift `expr ${OPTIND} - 1`

if [ -z "${SNAPSHOTS_DIR}" -o -z "${PROFILE}" -o -z "${SRC_FILE}" ] ; then
  Usage
fi

if [ ! -f ${SRC_FILE} ] ; then
  echo "${PROGRAM}: Error='Ignore file (${SRC_FILE}) does not exist or is not regular.'" 1>&2
  exit 2
fi

for CLIENT_ID in $@ ; do
  DST_FILE=${SNAPSHOTS_DIR}/${CLIENT_ID}/${PROFILE}/ignore.rules
  if [ -f ${SRC_FILE} -a -f ${DST_FILE} ] ; then
    echo "---> updating ${DST_FILE} ..."
    cat ${SRC_FILE} ${DST_FILE} | sort -u > ${DST_FILE}.new
    chown ${WWW_UID}:${WWW_GID} ${DST_FILE}.new
    mv ${DST_FILE} ${DST_FILE}.old
    mv ${DST_FILE}.new ${DST_FILE}
  else
    echo "---> skipping ${DST_FILE} ... (file does not exist or is not regular)"
  fi
done

--- ftimes_bimvw_add_ignore_rules ---

Appendix 7

The following command may be used to extract this Appendix:

$ sed -e '1,/^--- ignore.rules.sample ---$/d; /^--- ignore.rules.sample ---$/,$d' ftimes-bimvw.txt > ignore.rules.sample

--- ignore.rules.sample ---
^category[|]name[|]changed[|]unknown[|]records
--- ignore.rules.sample ---

Appendix 8

The following command may be used to extract this Appendix:

$ sed -e '1,/^--- c_hlc_ftimes_rdm ---$/d; /^--- c_hlc_ftimes_rdm ---$/,$d' ftimes-bimvw.txt > c_hlc_ftimes_rdm

--- c_hlc_ftimes_rdm ---
#!/bin/sh
######################################################################
#
# $Id: c_hlc_ftimes_rdm,v 1.5 2008/10/12 16:26:46 klm Exp $
#
######################################################################
#
# Copyright 2007-2007 The FTimes Project, All Rights Reserved.
#
######################################################################
#
# Purpose: Create a snapshot for the "rdm" profile, and conditionally
#          prune away old job directories. RDM is short for Run
#          Directory Monitor.
#
######################################################################

IFS=' 
'

PATH=/sbin:/usr/sbin:/usr/local/sbin:/bin:/usr/bin:/usr/local/bin

PROGRAM=`basename $0`

######################################################################
#
# Usage
#
######################################################################

Usage()
{
  echo 1>&2
  echo "Usage: ${PROGRAM} [-f field-mask] [-H ftimes-home] [-p prune-days] [-w time-window] -r run-dir" 1>&2
  echo 1>&2
  exit 1
}

######################################################################
#
# Process arguments.
#
######################################################################

FIELD_MASK=
PRUNE_DAYS=
RUN_DIR=
TIME_WINDOW=43200 # 12 Hours

while getopts "f:H:p:r:w:" OPTION ; do
  case "${OPTION}" in
  f) FIELD_MASK="${OPTARG}" ;;
  H) FTIMES_HOME="${OPTARG}" ;;
  p) PRUNE_DAYS="${OPTARG}" ;;
  r) RUN_DIR="${OPTARG}" ;;
  w) TIME_WINDOW="${OPTARG}" ;;
  *) Usage ;;
  esac
done
if [ ${OPTIND} -le $# ] ; then
  Usage
fi

if [ -z "${RUN_DIR}" ] ; then
  Usage
fi

PATH=${FTIMES_HOME=/usr/local/ftimes}/bin:${PATH}

if [ ! -d ${FTIMES_HOME} ] ; then
  echo "${PROGRAM}: Error='Home directory (${FTIMES_HOME}) does not exist or is not a directory.'" 1>&2
  exit 2
fi

if [ -n "${PRUNE_DAYS}" ] ; then
  echo ${PRUNE_DAYS} | egrep '^[0-9]+$' > /dev/null 2>&1 ||
  {
    echo "${PROGRAM}: Error='Prune days (${PRUNE_DAYS}) does not pass muster.'" 1>&2
    exit 2
  }
  if [ ${PRUNE_DAYS} -lt 1 -o ${PRUNE_DAYS} -gt 365 ] ; then
    echo "${PROGRAM}: Error='Prune days (${PRUNE_DAYS}) must be in the range [1-365].'" 1>&2
    exit 2
  fi
fi

if [ ! -d ${RUN_DIR} ] ; then
  echo "${PROGRAM}: Error='Run directory (${RUN_DIR}) does not exist or is not a directory.'" 1>&2
  exit 2
fi

echo ${TIME_WINDOW} | egrep '^[0-9]+$' > /dev/null 2>&1 ||
{
  echo "${PROGRAM}: Error='Time window (${TIME_WINDOW}) does not pass muster.'" 1>&2
  exit 2
}
if [ ${TIME_WINDOW} -lt 100 -o ${TIME_WINDOW} -gt 31536000 ] ; then
  echo "${PROGRAM}: Error='Time window (${TIME_WINDOW}) must be in the range [100-31536000].'" 1>&2
  exit 2
fi

######################################################################
#
# Conditionally develop a filter based on the specified time window.
# If the script is not being run from within a job directory, do not
# develop a filter. Always filter out the run directory itself.
#
######################################################################

CWD=`pwd`
JOB_DIR=`basename ${CWD}`
DEC_PATTERN="[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]"
HEX_PATTERN="[0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f]"
JOB_DIR_PATTERN="webjob_${DEC_PATTERN}_${HEX_PATTERN}"
TIME_FILTER_REGEXP=
echo ${JOB_DIR} | egrep "^${JOB_DIR_PATTERN}(\.d)?\$" > /dev/null 2>&1
if [ $? -ne 0 ] ; then
  echo "${PROGRAM}: Warning='Script is being executed outside a valid job directory. Therefore, no filter can be generated/applied.'" 1>&2
else
  CHOP_COUNT=`echo ${TIME_WINDOW} | tr -d '\r\n' | wc -c | awk '{print $1}'`
  CHOP_COUNT=`expr ${CHOP_COUNT} - 2` # This is why we need a minimum time window of 100 or higher.
  SED_CHOP_PATTERN=
  while [ ${CHOP_COUNT} -gt 0 ] ; do
    SED_CHOP_PATTERN="${SED_CHOP_PATTERN}."
    CHOP_COUNT=`expr ${CHOP_COUNT} - 1`
  done
  TIME1=`basename ${CWD} | awk -F_ '{print $2}'`
  TIME2=`expr ${TIME1} - ${TIME_WINDOW}`
  MASK_TIME1=`echo ${TIME1} | sed "s/${SED_CHOP_PATTERN}$//;"`
  MASK_TIME2=`echo ${TIME2} | sed "s/${SED_CHOP_PATTERN}$//;"`
  while [ ${MASK_TIME1} -ge ${MASK_TIME2} ] ; do
    if [ -z "${TIME_FILTER_REGEXP}" ] ; then
      TIME_FILTER_REGEXP="${MASK_TIME1}"
    else
      TIME_FILTER_REGEXP="${TIME_FILTER_REGEXP}|${MASK_TIME1}"
    fi
    MASK_TIME1=`expr ${MASK_TIME1} - 1`
  done
fi
if [ -n "${TIME_FILTER_REGEXP}" ] ; then
  FTIMES_EXCLUDE_FILTER="ExcludeFilter=webjob_(?:${TIME_FILTER_REGEXP})"
else
  FTIMES_EXCLUDE_FILTER=
fi

######################################################################
#
# Map the specified run directory. Filter out recently created job
# directories based on the expression developed above.
#
######################################################################

FTIMES_VERSION=`ftimes --version 2> /dev/null | awk '{print $2}'`
case "${FTIMES_VERSION}" in
3.8.0)
  if [ -z "${FIELD_MASK}" ] ; then
    FIELD_MASK="all-sha1-sha256-magic"
  fi
  ;;
*)
  echo "${PROGRAM}: Error='FTimes version (${FTIMES_VERSION}) is not supported. Use FTimes 3.8.0.'" 1>&2
  exit 2
  ;;
esac

ftimes --maplean - -l 0 << EOF
BaseName=-
Compress=Y
FieldMask=${FIELD_MASK}
Include=${RUN_DIR}
Exclude=${CWD} # Don't map the directory associated with this job.
${FTIMES_EXCLUDE_FILTER}
EOF

######################################################################
#
# Conditionally prune stale job files and directories.
#
######################################################################

if [ -n "${PRUNE_DAYS}" ] ; then
  for dir in `find ${RUN_DIR} -type d -a -name "${JOB_DIR_PATTERN}*" -a -mtime +${PRUNE_DAYS} -prune` ; do
    find ${dir} -type f | xargs rm -f 2> /dev/null
    find ${dir} -type d -depth | xargs rmdir 2> /dev/null
  done
fi

--- c_hlc_ftimes_rdm ---
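Before pointing ftimes_bimvw_add_ignore_rules at production clients, it can help to see its update step in isolation. The following sketch (the directory, client, and rule values are hypothetical, not part of the recipe) reproduces the merge that the script performs on each client's ignore.rules file: concatenate the new rules with the existing ones, sort, and drop duplicates.

```shell
# Hypothetical demo of the ignore.rules merge step; nothing here
# touches a real snapshots directory.
LC_ALL=C # Make the sort order deterministic for this demo.
export LC_ALL

DEMO_DIR=/tmp/bimvw_demo
mkdir -p ${DEMO_DIR}

# An existing per-client ignore.rules file (header rule plus one rule).
cat > ${DEMO_DIR}/ignore.rules << 'EOF'
^category[|]name[|]changed[|]unknown[|]records
^changed[|]"/var/log/messages"
EOF

# New rules to be distributed; one is already present on the client.
cat > ${DEMO_DIR}/new.rules << 'EOF'
^changed[|]"/etc/ntp/drift"
^changed[|]"/var/log/messages"
EOF

# The same merge logic as the script's update step.
cat ${DEMO_DIR}/new.rules ${DEMO_DIR}/ignore.rules | sort -u > ${DEMO_DIR}/ignore.rules.new
mv ${DEMO_DIR}/ignore.rules ${DEMO_DIR}/ignore.rules.old
mv ${DEMO_DIR}/ignore.rules.new ${DEMO_DIR}/ignore.rules

# The merged file now holds three unique rules.
cat ${DEMO_DIR}/ignore.rules
```

Because 'sort -u' both merges and deduplicates, re-running the script with the same ignore-file is harmless -- the update is idempotent.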