Centralized YUM Repository Unix Patch Management

Repository Locations

The PVALENTINO YUM repositories live on PATCH.PVALENTINO.ORG.

All repositories live under /data01/repository/. The only sub-directory currently present is patch/, which is where all normal package updates go.

The patch directory structure is demonstrated by this example:
 
/data01/repository/patch/q3/rhel5-x86_64/
Where:
/data01/repository/ is the root of all repositories
patch/ is where all system updates go
q3/ indicates that these packages are for the 3rd Quarter scheduled patching
rhel5-x86_64/ contains the packages for the 64-bit (x86_64) edition of Red Hat Enterprise Linux 5
 

The rhel5-x86_64 directory (in this example) contains the actual repository metadata and all of the RPM package files. You cannot simply place RPM files into a directory and call it a repository -- you must generate the YUM metadata for that directory before it can be used.

Solaris patch clusters and AIX fix files should be placed in logically named directories underneath the patch date directory (q3 in this example). For Solaris, the name should be something like solaris8 for Solaris 8. For AIX, it should be aix_51 for AIX 5.1. See the Appendices for Solaris and AIX patching procedures.
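For a new quarter, the directory layout above can be created up front. A minimal sketch; the repository root is parameterized here so the commands can be tried anywhere (on PATCH it would be /data01/repository):

```shell
# Create one quarter's directory tree. REPO_ROOT defaults to a scratch
# directory for safe experimentation; production root is /data01/repository.
REPO_ROOT=${REPO_ROOT:-$(mktemp -d)}
for d in rhel4-i386 rhel4-x86_64 rhel5-i386 rhel5-x86_64 solaris8 aix_51; do
    mkdir -p "$REPO_ROOT/patch/q3/$d"
done
ls "$REPO_ROOT/patch/q3"
```

Remember that the Red Hat directories still need YUM metadata generated before they are usable repositories.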
 
Creating a Patch Repository

To create the repositories for a scheduled quarterly patch cycle, we first need to obtain all of the current packages for each release and platform of Red Hat Enterprise Linux in our environment.

Most of our systems are connected to the Red Hat Network (RHN), which, among other services, provides up-to-date packages for download. RHN is designed to only allow each system to download packages for its own release and platform; however, there is a workaround. On PATCH there is a utility called "rhnget", which can impersonate other systems registered with RHN and download the appropriate packages for their respective release and platform.
There is a directory on PATCH, /data01/systemid/, which contains the system identification files from one representative system for each release and platform in our environment:
 

rhel4-i386.systemid RHEL 4 i386

rhel4-x86_64.systemid RHEL 4 x86-64
rhel5-x86_64.systemid RHEL 5 x86-64
rhel5-i386.systemid RHEL 5 i386
 
The rhnget utility must impersonate a system that has an active RHN entitlement. If any of these systems change or are decommissioned, you will have to copy the system identification file from another entitled system of the same release and platform. Register a system for each platform with up2date --register (if you get an error, follow the steps to import the key into rpm, then retry the registration). Once a system is registered with RHN, its identification file can be found on the system at /etc/sysconfig/rhn/systemid. Simply copy this file over to the /data01/systemid/ folder on PATCH and rename it using the convention above.
 
To fetch the packages for each release/platform, log into PATCH as root and do the following:
 
Manual Process:
Create and change to the directory for the quarter:

mkdir /data01/repository/patch/q3 ; cd /data01/repository/patch/q3

Also, change the 'current' symlink, so that we can always access the current patchset with the same URL:

rm /data01/repository/patch/current

ln -s /data01/repository/patch/q3 /data01/repository/patch/current
 

Clean up any cached metadata from an existing repository:

yum clean all

yum clean metadata
yum clean dbcache
yum makecache

Retrieve the packages for each release/platform (the directories will be created automatically). Use these exact commands, in order to pick up the additional package channels that some systems are subscribed to:

rhnget --systemid=/data01/systemid/rhas21-i386.systemid rhn:///redhat-advanced-server-i386 ./rhas21-i386/

rhnget --systemid=/data01/systemid/rhel3-i386.systemid rhn:///rhel-i386-as-3 ./rhel3-i386/
rhnget --systemid=/data01/systemid/rhel4-i386.systemid rhn:///rhel-i386-as-4 ./rhel4-i386/
rhnget --systemid=/data01/systemid/rhel4-i386.systemid rhn:///rhel-i386-as-4-appstk-1 ./rhel4-i386/
rhnget --systemid=/data01/systemid/rhel4-x86_64.systemid rhn:///rhel-x86_64-as-4 ./rhel4-x86_64/
rhnget --systemid=/data01/systemid/rhel4-ppc.systemid rhn:///rhel-ppc-as-4 ./rhel4-ppc/

(You will probably want to create a simple script to run the commands, as downloading all the files will take several hours. Scripting it will let you run all of them without having to check on it constantly. Make sure to check the output occasionally to make sure that there weren't any problems during the downloads.)

Once all the files are downloaded, you can turn the directories into YUM repositories. This step will scan the RPMs in the directory and collect information from them, including their versions and dependencies.

In each of the directories under the quarterly directory where you ran the rhnget commands (e.g., /data01/repository/patch/q3/), run the following:

cd /data01/repository/patch/q3/[directory]/

createrepo -v /data01/repository/patch/q3/[directory]/

Or, for older releases (RHEL 3 and earlier), use:

yum-arch

(Replace [directory] with the name of the directory you are working on.)

The createrepo command creates the new style of YUM metadata, used by newer versions of YUM. The yum-arch command creates the old style, required by older versions of YUM. For instance, RHEL 3 systems will not run the newer versions of YUM.
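The choice between the two commands can be scripted. A dry-run sketch that prints the appropriate metadata command for each release directory (swap the echo wrappers for the real commands once the output looks right; the scratch directory and sample subdirectories below are for demonstration):

```shell
# Print the metadata command each release directory needs. QUARTER_DIR
# defaults to a scratch directory with two sample subdirectories; on PATCH
# it would be e.g. /data01/repository/patch/q3.
QUARTER_DIR=${QUARTER_DIR:-$(mktemp -d)}
mkdir -p "$QUARTER_DIR/rhel3-i386" "$QUARTER_DIR/rhel5-x86_64"
for dir in "$QUARTER_DIR"/*/ ; do
    case "$dir" in
        *rhas21*|*rhel3*) echo "yum-arch $dir" ;;      # old-style metadata
        *)                echo "createrepo -v $dir" ;; # new-style metadata
    esac
done
```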

Automated Process Script:

#!/bin/bash
# patchdownload.sh -- download a quarter's patches and rebuild the repos
# Usage: patchdownload.sh <quarter>   (e.g. q3)
if [ -z "$1" ]; then
   echo "Usage: $0 <quarter>" >&2
   exit 1
fi
if [ ! -d /data01/repository/patch/$1 ]; then mkdir /data01/repository/patch/$1; fi
cd /data01/repository/patch/$1 || exit 1
# Repoint the 'current' symlink at this quarter
rm -f /data01/repository/patch/current
ln -s /data01/repository/patch/$1 /data01/repository/patch/current
# Download each channel, retrying up to 3 times; mail the output of each attempt
result=1
count=0
while [ $result -ne 0 ] && [ $count -lt 3 ]
do
   /usr/bin/rhnget -vvv --systemid=/data01/systemid/rhel4-i386.systemid rhn:///rhel-i386-as-4 ./rhel4-i386/ 2>&1 | mail -s "$1 rhel4 patch download status" pvalentino@sysxperts.com
   result=${PIPESTATUS[0]}   # exit status of rhnget, not of mail
   count=$((count + 1))
done
if [ $result -eq 0 ]; then /usr/bin/createrepo -v ./rhel4-i386; fi
result=1
count=0
while [ $result -ne 0 ] && [ $count -lt 3 ]
do
   /usr/bin/rhnget -vvv --systemid=/data01/systemid/rhel4-x86_64.systemid rhn:///rhel-x86_64-as-4 ./rhel4-x86_64/ 2>&1 | mail -s "$1 rhel4 x86_64 patch download status" pvalentino@sysxperts.com
   result=${PIPESTATUS[0]}
   count=$((count + 1))
done
if [ $result -eq 0 ]; then /usr/bin/createrepo -v ./rhel4-x86_64; fi
result=1
count=0
while [ $result -ne 0 ] && [ $count -lt 3 ]
do
   /usr/bin/rhnget -vvv --systemid=/data01/systemid/rhel5-x86_64.systemid rhn:///rhel-x86_64-server-5 ./rhel5-x86_64 2>&1 | mail -s "$1 rhel5-x86_64 patch download status" pvalentino@sysxperts.com
   result=${PIPESTATUS[0]}
   count=$((count + 1))
done
if [ $result -eq 0 ]; then /usr/bin/createrepo -v ./rhel5-x86_64; fi
exit 0

Then create cron jobs to download the patches each quarter:
0 20 21 2 * /root/patchdownload.sh q1
0 20 3 6 * /root/patchdownload.sh q2
0 20 3 9 * /root/patchdownload.sh q3
0 20 3 11 * /root/patchdownload.sh q4
 

Congratulations, you have set up the repositories. The next step is to create the configuration files used to update the systems.

Creating YUM Configurations

These are the files used by YUM on the servers to define their package distribution and where to locate the packages. One is required for each release/platform defined.
The files should be located in the quarterly patch directory, and should be named after their directory, with a .conf on the end. For example, if you have:
 

/data01/repository/patch/current/rhel5-x86_64/

you would create the following configuration file:

/data01/repository/patch/current/rhel5-x86_64.conf
 
The file contents are fairly simple. Copy between the BEGIN FILE and END FILE markers:
 
-----BEGIN FILE-----
[main]
cachedir=/var/cache/yum
debuglevel=2
logfile=/var/log/yum.log
pkgpolicy=newest
distroverpkg=update
tolerant=1
exactarch=1
retries=20
#exclude=kernel*
[update]
name=PVALENTINO Patching - rhel5-x86_64
baseurl=http://PATCH.PVALENTINO.ORG/repository/patch/q3/rhel5-x86_64/
-----END FILE-----
 
Replace the quarter and release/platform information in the [update] section to reflect the correct information. The baseurl parameter is the web URL where the release/platform RPMs and repository metadata are located on the PATCH server.
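Since each .conf differs only in the quarter and release/platform strings, generating them can be scripted. A sketch (the gen_conf helper is illustrative, not an existing tool; the output directory is parameterized so it can be tried outside PATCH):

```shell
# Write a <release-platform>.conf in the style shown above.
gen_conf() {   # gen_conf <quarter> <release-platform> <output-dir>
    cat > "$3/$2.conf" <<EOF
[main]
cachedir=/var/cache/yum
debuglevel=2
logfile=/var/log/yum.log
pkgpolicy=newest
distroverpkg=update
tolerant=1
exactarch=1
retries=20
#exclude=kernel*
[update]
name=PVALENTINO Patching - $2
baseurl=http://PATCH.PVALENTINO.ORG/repository/patch/$1/$2/
EOF
}

# demo: write rhel5-x86_64.conf for q3 into a temp dir
tmp=$(mktemp -d)
gen_conf q3 rhel5-x86_64 "$tmp"
grep baseurl "$tmp/rhel5-x86_64.conf"
# -> baseurl=http://PATCH.PVALENTINO.ORG/repository/patch/q3/rhel5-x86_64/
```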
 
Using the Repositories
Web Access
You can access the PATCH repositories internally via HTTP, at:

http://PATCH.PVALENTINO.ORG/repository/

Updating Systems

If you haven't updated a system using this method before, you will first need to install YUM on the system (RHEL 4 or older only; RHEL 5 already includes YUM and its dependencies). Packages for Red Hat 2.1, 3, and 4 are available on PATCH. Install it with the command:
 

rpm -ivh http://PATCH.PVALENTINO.ORG/repository/yum/yum-package

Where yum-package is one of:

yum-1.0.3-0.1.el2.rf.noarch.rpm for RHAS 2.1
yum-2.0.8-0.1.el3.rf.noarch.rpm for RHEL 3
 
These packages are platform-independent (they're just Python scripts), so they will work on i386, PowerPC, or whatever else. The exception is for RHEL4, which has package dependencies. For RHEL4 systems, you will need to download one of the following:
 

yum-rhel4-i386.tar.gz for RHEL 4 on i386

yum-rhel4-ppc.tar.gz for RHEL 4 on PowerPC (PPC)
yum-rhel4-x86_64.tar.gz for RHEL 4 on x86_64
 
Untar the appropriate file and install all the RPMs it contains together (e.g., rpm -ivh *.rpm).
 

Once YUM is installed, you only have to issue the following command to start the update process:

yum -c http://PATCH.PVALENTINO.ORG/repository/patch/<quarter>/<release>.conf -y update

or just run yum -y update if the appropriate yum.conf is already in place in /etc.
 

Replace quarter with the quarter information (e.g. q3) or the current keyword, and release with the release/platform information (e.g. rhel5-x86_64)

If you want to update a system without updating its kernel packages (necessary on some systems using kernel modules tied to a specific kernel version), run the following instead (equivalently, uncomment the exclude=kernel* line in the .conf file):

yum -c http://PATCH.PVALENTINO.ORG/repository/patch/<quarter>/<release>.conf --exclude='kernel*' -y update

YUM will fetch the package information from PATCH, determine dependency requirements, and then download the packages and install them. If a new kernel version is to be installed, it will install the new kernel (not replace the old one) and make it the default kernel in /etc/grub.conf. Switching back to the old kernel is just a matter of editing /etc/grub.conf.
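Switching kernels means changing the default= line in /etc/grub.conf to point at the other title stanza (stanzas are numbered from 0). A small sketch, demonstrated on a sample file rather than the real /etc/grub.conf (the helper name is illustrative):

```shell
# Point GRUB's default= line at a different kernel stanza (0-based index).
set_default_kernel() {   # set_default_kernel <grub.conf> <index>
    sed -i "s/^default=.*/default=$2/" "$1"
}

# demo on a fabricated grub.conf
conf=$(mktemp)
printf 'default=0\ntimeout=5\ntitle New Kernel\ntitle Old Kernel\n' > "$conf"
set_default_kernel "$conf" 1   # boot the second stanza (the old kernel)
grep '^default=' "$conf"
# -> default=1
```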
 
When YUM completes, reboot the system:
init 6
 

Monitor the system console to ensure that it boots up without any issues. Make sure that all services on the system start up correctly.

If you are patching a VMWare virtual server and the kernel has been updated, you will probably need to reinstall the VMWare Tools package and reboot again in order to have the network modules properly rebuilt for the new kernel. See Appendix D for instructions on reinstalling the VMWare Tools package.
If the system is connected to the Red Hat Network, you will need to send the updated system package information to Red Hat, so it can correctly track what packages are on the system and alert us to any updates or security issues. Run the following as root on the system:
 
up2date -p
 
This completes the system patching process.
 

Troubleshooting
If, when rebooting the patched system where the kernel was upgraded, the system panics on startup, then it is possible that the system uses specific kernel modules for the disk subsystem. What you'll need to do is reboot the system to the old kernel (select it from the GRUB bootloader), and remake the ramdisk image that the new kernel uses for booting the system so that it includes the necessary kernel modules. (See also: mkinitrd man page)
Boot to the previous kernel version and log into the system as root. Go to the /lib/modules/ directory, and you'll see directories for each version of the kernel installed on the system. Basically, you need to find out what modules are in the currently booted kernel's version that aren't in the new kernel's version, and copy them over (maintaining the directory structure inside the directories.)
In general, modules built on one patch revision will work on a newer patch revision (i.e. 2.4.21-foo should work on 2.4.21-bar). A module from a 2.4 kernel, however, will almost certainly not work on a 2.6 kernel.
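Comparing the two module trees by hand is tedious. A sketch that lists module files present in the old kernel's tree but missing from the new one (the helper name and sample paths are illustrative; 2.4 kernels use .o modules, 2.6 uses .ko):

```shell
# List modules under the old kernel's /lib/modules/<version>/ tree
# that are absent from the new kernel's tree.
missing_modules() {   # missing_modules <old_tree> <new_tree>
    ( cd "$1" && find . -name '*.ko' -o -name '*.o' | sort ) > /tmp/mods.old
    ( cd "$2" && find . -name '*.ko' -o -name '*.o' | sort ) > /tmp/mods.new
    comm -23 /tmp/mods.old /tmp/mods.new   # lines only in the old list
}

# demo on two fake trees
old=$(mktemp -d); new=$(mktemp -d)
mkdir -p "$old/kernel/drivers/scsi" "$new/kernel/drivers/scsi"
touch "$old/kernel/drivers/scsi/aacraid.o" "$new/kernel/drivers/scsi/aacraid.o"
touch "$old/kernel/drivers/scsi/megaraid.o"   # only in the old tree
missing_modules "$old" "$new"
# -> ./kernel/drivers/scsi/megaraid.o
```

Any files it lists can then be copied into the new tree, preserving their relative paths (e.g. with tar or cpio -pdm).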
Once the modules are copied, change to the /boot directory. Make a backup of the initrd file for the new kernel. Then, run the mkinitrd command for the new kernel:
 

mkinitrd initrd-(kernel).img (kernel)

Where (kernel) is the version string of the new kernel (e.g., 2.4.21-47.0.1.ELsmp). Note that `uname -r` reports the currently running kernel -- which at this point is the old one -- so type the new kernel's version string out explicitly. Once completed, try rebooting the system again.
 

PATCH System Build

The following are the specifications for PATCH.PVALENTINO.ORG. This is a fairly basic Red Hat installation, with only Apache as its major service.

VMWare:

One CPU

512mb memory
One network adapter
connected to VLAN1
One 16gb virtual disk
root, 4gb
swap, 1gb
/boot, 150mb
/var, 4gb
/usr, remaining space (~6.74gb)
One 100gb virtual disk
Set up entirely as a LVM volume, mounted on /data01

OS:

Running Red Hat Enterprise Linux 4 update 8

Apache:

Apache should run at startup.
 
Appendix A: AIX and Solaris Patching
 

AIX Patching
PVALENTINO has only three AIX systems: aixapp, aixtest, and aixdb. These all currently run AIX 5.1. The following describes how to determine what patches need to be applied, how to obtain them, and how to apply them.
Getting the Patches
You first need to obtain the list of latest system fileset versions from IBM. You can download the list for AIX 5.1 from:

http://www.ibm.com/eserver/support/fixes/fixcentral/fixinfofiledownload?file=LatestFixData51

Log into aixapp as root, create a temporary directory (e.g., mkdir /tmp/patchwork), and change to that directory. Copy the LatestFixData51 file that you downloaded into this directory. Then run:

/usr/sbin/compare_report -s -r /tmp/patchwork/LatestFixData51 -l

This will generate two files: /tmp/lowerthanmaint.rpt and /tmp/lowerthanlatest1.rpt. The lowerthanlatest1.rpt file is what we need. Copy it to your workstation, and then go to:

http://www.ibm.com/eserver/support/fixes/fixcentral/comparereport?system=2&type=1&package=5&release=51&tab=0

That page lets you upload the lowerthanlatest1.rpt file and get a customized list of the fileset updates you need. Click the Browse button on the page, select the lowerthanlatest1.rpt file, and then click Submit. On the next page, make sure all three checkboxes are selected. Select the operating system revision from the dropdown (use the oslevel -r command on the target system to determine this), and then click Continue.

You will now be presented with a list of fileset updates. There should be a link at the top to "Download all filesets using the ftp command". Click on that, and then click Continue in the window that pops up. Follow the instructions to download all the .bff files -- it is recommended that you do this directly on PATCH, since the files should be stored in the repository there.
 

Installing the Patches

The first thing you should do on your target system is check that any previously applied patches were committed correctly. Log into the system as root and run installp -s. This will show all software updates that are applied but not committed. If nothing is returned, then you're ready to go. Otherwise, commit the previous updates by running installp -c all (as the root user.)

Create a directory in a filesystem on the target machine that has enough space, change to that directory, and transfer the previously downloaded .bff files to there. To begin the patching, run the following command as root:
 
installp -aX -d . all
 

After all the patches are installed, the system must be rebooted. If possible, monitor the system console, and make sure the system comes back up normally. If you don't have access to the system console, be advised that it can take 10-15 minutes for the system to become accessible again.

Two weeks after the patches are applied, you need to commit them permanently to the system. Run the following as root:
 
installp -c all
 
No reboot is required for running the commit command.
 

Solaris Patching

PVALENTINO only has three Solaris systems: sun1, sun2, and sun3. These are all running Solaris 8 on the SPARC architecture.

Getting the Patch Cluster
With Solaris, you don't need to fiddle with figuring out what software is on your system: you just download a patch cluster, unzip it, and run it. The only trick is getting the patch cluster in the first place.
Go to http://sunsolve.sun.com/ and log in. If you do not have an account, you will need to create one. Sun requires that you have a SunSolve account before they'll let you download the patch cluster.
Once you're logged in, go back to the main SunSolve page. Select PatchFinder. Under 'Recommended Solaris Patch Clusters', scroll down and select 'Solaris 8' (NOT 'Solaris 8 x86' or 'Solaris 8 Sun Alert Patch Cluster'). Select the 'Download HTTP' radio button, and then click 'Go'. You will now download the 8_Recommended.zip file. Once that is downloaded, transfer it to the PATCH repository.
 

Installing the Patch Cluster

Create a directory in a filesystem on the target machine that has enough space, change to that directory, and transfer the previously downloaded 8_Recommended.zip file there. Unzip it with the command:

unzip -qq 8_Recommended.zip

It's a large file, and these aren't very fast systems, so unzipping may take a while. Once unzipped, remove the 8_Recommended.zip file.

You must have the system console (aka serial console) to patch, since you need to reboot the system to single-user mode before patching -- you will not be able to ssh into the system.
When you are ready to patch, log into the system via the system console, shut down any applications, and reboot to single-user mode with the command:
 
reboot -- -s
 

The system will reboot, and then ask for the root password to enter maintenance mode. Enter the root password, and you will be at the shell prompt.

Run the following command to make sure all local filesystems are mounted:
 
mountall -l
 
Change to the directory where you unzipped the 8_Recommended.zip file, and then change to the 8_Recommended/ directory that was created. Run the following command to start the patch process:
 
./install_cluster | tee patchlog
 
Answer 'y' when it asks if you're ready to continue. It will then start working through patch installations.
 

You will see plenty of 'Return code 2' and 'Return code 8' messages. These are normal... return code 2 means that the patch was already installed, and return code 8 means that the patch was for a software package that wasn't installed on the system. For other patch codes, Google search for "Solaris Patch Codes" -- there are several lists out there. With the 'tee patchlog' part of the command, all output from the install_cluster script will be written out to a file named patchlog so that you can review all the messages later.
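Since the patchlog can run to thousands of lines, a quick way to review it is to tally the return codes. A sketch, demonstrated on a small fabricated sample log (the patch IDs below are made up):

```shell
# Count each distinct "Return code N" message in an install_cluster log.
summarize_codes() {   # summarize_codes <logfile>
    grep -o 'Return code [0-9]*' "$1" | sort | uniq -c | sort -rn
}

# demo on a fabricated sample log
cat > /tmp/patchlog.sample <<'EOF'
Installing 108528-29... Return code 2
Installing 110380-06... Return code 0
Installing 111310-01... Return code 8
Installing 112396-02... Return code 2
EOF
summarize_codes /tmp/patchlog.sample
```

Anything other than codes 0, 2, and 8 in the tally is worth looking up before rebooting.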

Be advised that it will usually take a good hour or more for it to run through the entire patch process.

Once the script completes, reboot the system by typing 'reboot'.

  
 

Scripts to Archive Log or Data Files
 
This script moves logs into a dated subdirectory after a specified period, zips that subdirectory after a further period, and finally deletes the zip archive after a final period.
 
Create a file called archiveLog.sh in the home directory of the user that will run the archive process, or in a directory owned by that user.
 
#!/bin/bash
#***************************************************************************************************#
# archiveLog.sh                                                                                #
# March, 2009                                                                                       #
#                                                                                                   #
# Archive log files                                                                           #
#                                                                                                   #
# Usage:                                                                                            #
# archiveLog.sh [-b base_dir] [dir 1] [dir2] [dir n] [dir_day_cnt [zip_day_cnt [del_day_cnt]]] #
#                                                                                                   #
# Default behavior:                                                                                 #
# Archive from base directory /log/                                                            #
# Archive all subdirectories                                                                        #
# Move to a subdirectory after 30 days                                                              #
# Move to a zip with the same name as the subdirectory 32 days after the                            #
# subdirectory was created.                                                                         #
# Remove zip files a year after the zip file was created                                            #
#                                                                                                   #
# Notes:                                                                                            #
# To flag a directory so that the archive process ignores it, put a file                            #
# in the directory with the name ".archive.ignore"                                              #
#***************************************************************************************************#
echo " log file archiving"
ERROR_GENERAL=1
ERROR_BAD_DIR=2
prog_name=" Archive Log"
base_dir="/log"
arc_script="/tmp/archlogtmp.sh"
# process command line args - last 3 (or first numeric) are day counts
# -b base_dir must be first 2 args if used
IFS=$'\x0A'
delim=$'\x0A'
dir_list=""
if [ ! -z "$1" ] ; then
   if [ "$1" = "-b" ] ; then
      base_dir=${2:-$base_dir}
      shift
      shift
   fi
   until [ -z "$1" ] || [[ $1 == [0-9]* ]]
   do
      dir_list="$dir_list$delim$base_dir/$1"
      shift
   done
fi
# Get the list of all directories off the base directory if none were specified
if [ -z "$dir_list" ] ; then
   dir_list=$(find $base_dir -mindepth 1 -maxdepth 1 -type d -printf "%p\n")
fi
dir_day_cnt=${1:-30}
zip_day_cnt=${2:-32}
del_day_cnt=${3:-365}
echo "Days to move to directory  : $dir_day_cnt"
echo "Days to zip to archive     : $zip_day_cnt"
echo "Days to remove archive     : $del_day_cnt"
echo "Base directory: $base_dir"
echo "Directories being archived : "
echo $dir_list
if [ ! -z "$4" ] ; then
   for i in 1 2 3 ; do shift; done
   echo "$prog_name : Warning : Unused command arguments: $@"
fi
# Check for error code and if it is not 0 display an error message and exit
# $1 result error code to test
# $2 exit error code if there is an error
# $3 custom error message - default is general error
check_error()
{
   if [ "$1" -ne "0" ]; then
      echo "$prog_name : Error $2 : ${3:-General Error}" >& 2
      exit $2
   fi
}
# Create a temporary script with all archiving steps plus helper functions
cat << EOF > $arc_script
#!/bin/bash
# Check for error code and if it is not 0 display an error message and exit
# \$1 result error code to test
# \$2 exit error code if there is an error
# \$3 custom error message - default is general error
check_error()
{
   if [[ \$1 == [0-9]* ]] && [ "\$1" -ne "0" ]; then
      echo "\$0 : Error \$2 : \${3:-General Error}" >& 2
      exit \$2
   fi
}
# Move the contents of a directory into a zip file and remove the directory
# Non-managed directories are ignored (don't have the .arch file)
# \$1 Directory to be zipped
zipDirectory()
{
   local tag="\$1/.arch"
   local ignore="\$1/.archive.ignore"
   if [[ -f "\$tag" ]] && [[ ! -f "\$ignore" ]] ; then
      echo "Compressing archive directory \$1"
      zip -m9jDq \$1-Archive \$1/*
      check_error \$? $ERROR_GENERAL "Unable to create archive '\$1-Archive.zip'"
      rm "\$tag"
      check_error \$? $ERROR_GENERAL "Unable to remove tag file '\$tag'"
      rmdir "\$1"
      check_error \$? $ERROR_GENERAL "Unable to remove directory '\$1'"
   fi
}
# Move a file from its original directory to an archive directory
# \$1 directory containing file
# \$2 File to move
# \$3 archive directory to move file into
moveFile()
{
   local tag="\$1/.arch"
   local arcdir="\$1/\$3"
   local ignore="\$1/.archive.ignore"
   if [[ ! -f "\$tag" ]] && [[ ! -f "\$ignore" ]] ; then
      if [[ ! -d \$arcdir ]] ; then
         /bin/mkdir "\$arcdir"
         check_error \$? $ERROR_GENERAL "Unable to create directory '\$arcdir'"
         echo " Archive" > "\$arcdir/.arch"
         echo "Creating new archive directory \$arcdir"
      fi
      /bin/mv "\$1/\$2" "\$arcdir"
      check_error \$? $ERROR_GENERAL "Unable to move file '\$2' to archive directory '\$arcdir'"
   fi
}
# Remove a file and display removal notice
# \$1 fully qualified file name to remove
removeArchive()
{
   echo "Removing archive \$1"
   rm "\$1"
   check_error \$? $ERROR_GENERAL "Unable to remove archive '\$1'"
}
EOF
# Make sure base directory exists
if [[ ! -d "$base_dir" ]] ; then
   check_error 1 $ERROR_BAD_DIR "Invalid base directory '$base_dir'"
fi
# Make sure all archive directories are valid
for dir in $dir_list
do
   if [[ ! -d "$dir" ]] ; then
      check_error 1 $ERROR_BAD_DIR "Invalid directory '$dir'"
   fi
done
# Look for date files older than one month and move them into an archiving sub-directory
for dir in $dir_list
do
   find "$dir" -type f -mtime +$dir_day_cnt  \! \( -name \*.zip -or -name \*.gzip \) -printf "moveFile \"%h\" \"%f\" \"%Tb-%TY\" \n " | sed -e 's/\$/\\\$/g' >>$arc_script
   check_error $? $ERROR_GENERAL "Error calling 'find' to move files for '$dir'"
done
# Look for archive directories that are more than two months and zip them
for dir in $dir_list
do
   find "$dir" -type d -mtime +$zip_day_cnt -name '???-????' -printf "zipDirectory \"%p\" \n " >>$arc_script
   check_error $? $ERROR_GENERAL "Error calling 'find' to zip directories for '$dir'"
done
# Look for zip archives that are one year old and delete them
for dir in $dir_list
do
   find "$dir" -type f -mtime +$del_day_cnt -name '???-????-Archive.zip' -printf "removeArchive \"%p\" \n ">>$arc_script
   check_error $? $ERROR_GENERAL "Error calling 'find' to remove archive zip files for '$dir'"
done
# Execute the archiving script
/bin/bash "$arc_script"
check_error $? $ERROR_GENERAL "Error running generated archive script"
echo "$prog_name : Archival processing complete"
exit 0
 

Create a file called archiveLogJob.sh in the same directory
 
/bin/bash /home/webadmin/archiveLog.sh dir1 dir2 30 32 365
 
dir1 and dir2 correspond to the names of the directories you want to archive; this script assumes the top level is /log.
The 30, 32, and 365 correspond to days to move, days to zip, and days to delete, respectively.
 
Usage:
archiveLog.sh [-b base_dir] [dir 1] [dir2] [dir n] [dir_day_cnt [zip_day_cnt [del_day_cnt]]]
 
Finally create a cron job to run the scripts on an appropriate schedule
 
0 15 * * 5  /home/user/scripts/archiveLogJob.sh > /home/user/archiveLogJob.log
 

Example Apache with SSL and reverse proxy configuration
 
httpd.conf
 

ServerTokens OS

ServerRoot "/etc/httpd"
PidFile run/httpd.pid

# Keepalive settings

Timeout 120
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15

<IfModule prefork.c>

StartServers       8
MinSpareServers    5
MaxSpareServers   20
ServerLimit      256
MaxClients       256
MaxRequestsPerChild  4000
</IfModule>

<IfModule worker.c>

StartServers         2
MaxClients         150
MinSpareThreads     25
MaxSpareThreads     75
ThreadsPerChild     25
MaxRequestsPerChild  0
</IfModule>

Listen 80

LoadModule auth_basic_module modules/mod_auth_basic.so

LoadModule auth_digest_module modules/mod_auth_digest.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authn_alias_module modules/mod_authn_alias.so
LoadModule authn_anon_module modules/mod_authn_anon.so
LoadModule authn_dbm_module modules/mod_authn_dbm.so
LoadModule authn_default_module modules/mod_authn_default.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule authz_owner_module modules/mod_authz_owner.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_dbm_module modules/mod_authz_dbm.so
LoadModule authz_default_module modules/mod_authz_default.so
LoadModule ldap_module modules/mod_ldap.so
LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
LoadModule include_module modules/mod_include.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule logio_module modules/mod_logio.so
LoadModule env_module modules/mod_env.so
LoadModule ext_filter_module modules/mod_ext_filter.so
LoadModule mime_magic_module modules/mod_mime_magic.so
LoadModule expires_module modules/mod_expires.so
LoadModule deflate_module modules/mod_deflate.so
LoadModule headers_module modules/mod_headers.so
LoadModule usertrack_module modules/mod_usertrack.so
LoadModule setenvif_module modules/mod_setenvif.so
LoadModule mime_module modules/mod_mime.so
LoadModule dav_module modules/mod_dav.so
LoadModule status_module modules/mod_status.so
LoadModule autoindex_module modules/mod_autoindex.so
LoadModule info_module modules/mod_info.so
LoadModule dav_fs_module modules/mod_dav_fs.so
LoadModule vhost_alias_module modules/mod_vhost_alias.so
LoadModule negotiation_module modules/mod_negotiation.so
LoadModule dir_module modules/mod_dir.so
LoadModule actions_module modules/mod_actions.so
LoadModule speling_module modules/mod_speling.so
LoadModule userdir_module modules/mod_userdir.so
LoadModule alias_module modules/mod_alias.so
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
LoadModule cache_module modules/mod_cache.so
LoadModule suexec_module modules/mod_suexec.so
LoadModule disk_cache_module modules/mod_disk_cache.so
LoadModule file_cache_module modules/mod_file_cache.so
LoadModule mem_cache_module modules/mod_mem_cache.so
LoadModule cgi_module modules/mod_cgi.so
LoadModule version_module modules/mod_version.so

Include conf.d/*.conf

#ExtendedStatus On

User webadmin

Group webadmin

# Main configuration

# UseCanonicalName: When set "On", Apache will use the value of the

# ServerName directive. Otherwise apache will use the client provided host name
UseCanonicalName Off

DirectoryIndex index.html index.htm index.php

AccessFileName .htaccess

#

# The following lines prevent .htaccess and .htpasswd files from being
# viewed by Web clients.
#
<Files ~ "^\.ht">
    Order allow,deny
    Deny from all
</Files>

#

# TypesConfig describes where the mime.types file (or equivalent) is
# to be found.
#
TypesConfig /etc/mime.types
DefaultType text/plain

<IfModule mod_mime_magic.c>

#   MIMEMagicFile /usr/share/magic.mime
    MIMEMagicFile conf/magic
</IfModule>

HostnameLookups Off

# CACHE CONFIG AND KERNEL ACCELERATORS

<Directory "/www/">

        EnableMMAP off
        EnableSendfile off
</Directory>

#CacheRoot /web_cache

#CacheDirLevels 5
#CacheDirLength 3
#MCacheSize 409600
#MCacheMinObjectSize 1
#MCacheMaxObjectSize 256000

#CacheEnable disk /

#CacheEnable mem /

ErrorLog /log/nohost_error.log

LogLevel warn

LogFormat "%h %l %u %t \"%r\" %>s %b  \"%{Referer}i\" \"%{User-Agent}i\"" combined

LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

SetEnvIf Remote_Addr "127\.0\.0\.1" dontlog

SetEnvIf Remote_Addr "-" dontlog

SetEnvIf Host "^$" dontlog

SetEnvIf Request_URI \.gif dontlog

SetEnvIf Request_URI \.jpg dontlog
SetEnvIf Request_URI \.jpeg dontlog
SetEnvIf Request_URI \.png dontlog
#
CustomLog /log/nohost_access.log combined
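Incidentally, the four image-extension rules above can be collapsed into a single pattern. A hypothetical equivalent (same unanchored matching as the separate lines):

```apache
# Sketch: one SetEnvIf covering .gif, .jpg, .jpeg and .png
SetEnvIf Request_URI "\.(gif|jpe?g|png)" dontlog
```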

ServerSignature Off

Alias /icons/ "/var/www/icons/"

<Directory "/var/www/icons">

    Options Indexes MultiViews
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

# IndexOptions: Controls the appearance of server-generated directory
# listings.
IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable

AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip

AddIconByType (TXT,/icons/text.gif) text/*

AddIconByType (IMG,/icons/image2.gif) image/*
AddIconByType (SND,/icons/sound2.gif) audio/*
AddIconByType (VID,/icons/movie.gif) video/*

AddIcon /icons/binary.gif .bin .exe

AddIcon /icons/binhex.gif .hqx
AddIcon /icons/tar.gif .tar
AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv
AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip
AddIcon /icons/a.gif .ps .ai .eps
AddIcon /icons/layout.gif .html .shtml .htm .pdf
AddIcon /icons/text.gif .txt
AddIcon /icons/c.gif .c
AddIcon /icons/p.gif .pl .py
AddIcon /icons/f.gif .for
AddIcon /icons/dvi.gif .dvi
AddIcon /icons/uuencoded.gif .uu
AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl
AddIcon /icons/tex.gif .tex
AddIcon /icons/bomb.gif core

AddIcon /icons/back.gif ..

AddIcon /icons/hand.right.gif README
AddIcon /icons/folder.gif ^^DIRECTORY^^
AddIcon /icons/blank.gif ^^BLANKICON^^

DefaultIcon /icons/unknown.gif

ReadmeName README.html

HeaderName HEADER.html

IndexIgnore .??* *~ *# HEADER* README* RCS CVS *,v *,t

AddLanguage ca .ca

AddLanguage cs .cz .cs
AddLanguage da .dk
AddLanguage de .de
AddLanguage el .el
AddLanguage en .en
AddLanguage eo .eo
AddLanguage es .es
AddLanguage et .et
AddLanguage fr .fr
AddLanguage he .he
AddLanguage hr .hr
AddLanguage it .it
AddLanguage ja .ja
AddLanguage ko .ko
AddLanguage ltz .ltz
AddLanguage nl .nl
AddLanguage nn .nn
AddLanguage no .no
AddLanguage pl .po
AddLanguage pt .pt
AddLanguage pt-BR .pt-br
AddLanguage ru .ru
AddLanguage sv .sv
AddLanguage zh-CN .zh-cn
AddLanguage zh-TW .zh-tw

LanguagePriority en ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv zh-CN zh-TW

ForceLanguagePriority Prefer Fallback

AddDefaultCharset UTF-8

AddType application/x-compress .Z

AddType application/x-gzip .gz .tgz

AddHandler type-map var

#
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml

Alias /error/ "/var/www/error/"

<IfModule mod_negotiation.c>

<IfModule mod_include.c>
    <Directory "/var/www/error">
        AllowOverride None
        Options IncludesNoExec
        AddOutputFilter Includes html
        AddHandler type-map var
        Order allow,deny
        Allow from all
        LanguagePriority en es de fr
        ForceLanguagePriority Prefer Fallback
    </Directory>

</IfModule>

</IfModule>

BrowserMatch "Mozilla/2" nokeepalive

BrowserMatch "MSIE 4\.0b2;" nokeepalive downgrade-1.0 force-response-1.0
BrowserMatch "RealPlayer 4\.0" force-response-1.0
BrowserMatch "Java/1\.0" force-response-1.0
BrowserMatch "JDK/1\.0" force-response-1.0

BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully

BrowserMatch "MS FrontPage" redirect-carefully
BrowserMatch "^WebDrive" redirect-carefully
BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully
BrowserMatch "^gnome-vfs/1.0" redirect-carefully
BrowserMatch "^XML Spy" redirect-carefully
BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully
FileETag MTime Size
ProxyRequests Off
TraceEnable Off
NameVirtualHost *:80
Include conf/sites/*
 
In the sites folder, create your vhost file, e.g. www.sysxperts.com.conf:
<VirtualHost _default_:80>
        ServerName www.sysxperts.com
        ServerAlias www1-sysxperts www1-sysxperts.sysxperts.com
        ServerAdmin pvalentino@sysxperts.com

        ErrorLog /var/log/httpd/www1-sysxperts-error_log

        CustomLog /var/log/httpd/www1-sysxperts-access_log combined env=!dontlog

        RewriteEngine On

        RewriteRule ^/myapp/?(.*)$ https://%{HTTP_HOST}/myapp/$1 [R,L]
        RewriteRule ^/myapp2/?(.*)$ https://%{HTTP_HOST}/myapp2/$1 [R,L]

        Include conf/all_vhosts.conf

        DocumentRoot /www/www.sysxperts.com

        <Directory "/www/www.sysxperts.com/">

                Options +Includes -Indexes
                AllowOverride None
                AddOutputFilter INCLUDES .htm
                AddOutputFilter INCLUDES .html
                Order Allow,Deny
                Allow From All
        </Directory>
</VirtualHost>

Listen www.sysxperts.com:443

<VirtualHost www.sysxperts.com:443>
        ServerName www.sysxperts.com
        ServerAlias www1-sysxperts www1-sysxperts.sysxperts.com
        ServerAdmin pvalentino@sysxperts.com

        ErrorLog /var/log/httpd/www1-sysxperts-error_log

        CustomLog /var/log/httpd/www1-sysxperts-access_log combined env=!dontlog

        RewriteEngine On

        RewriteRule ^/$ http://%{HTTP_HOST}/ [R,L]

        SSLEngine On

        SSLCertificateFile    ssl/www.sysxperts.com.crt
        SSLCertificateKeyFile ssl/www.sysxperts.com.key
        Include conf/ssl.conf
        Include conf/all_vhosts.conf

        DocumentRoot /www/www.sysxperts.com

        <Directory "/www/www.sysxperts.com/">

                Options +Includes -Indexes
                AllowOverride None
                AddOutputFilter INCLUDES .htm
                AddOutputFilter INCLUDES .html
                Order Allow,Deny
                Allow From All
        </Directory>

        RewriteRule /myapp$ /myapp/ [R,L]

        <Location "/myapp/">
                ProxyPass http://myapp.sysxperts.com:8080/myapp/
                ProxyPassReverse http://myapp.sysxperts.com:8080/myapp/
                ProxyPassReverse /
        </Location>

        RewriteRule /myapp2$ /myapp2/ [R,L]

        <Location "/myapp2/">
                ProxyPass http://myapp2.sysxperts.com:8080/myapp2/
                ProxyPassReverse http://myapp2.sysxperts.com:8080/myapp2/
                ProxyPassReverse /
        </Location>
</VirtualHost>
 
The include file all_vhosts.conf should contain entries common to all virtual hosts, e.g.:
# Rewrite engine must be turned on prior to including this config file
RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
RewriteRule .* - [F]
The ssl.conf include file should contain entries common to all SSL vhosts, e.g.:
SSLProtocol -ALL +SSLv3 +TLSv1
SSLCipherSuite ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM
 
For PCI scans, you may also need to add an .htaccess file to /var/www/manual/images with Options -Indexes, or you can disable the manual altogether.
 


Example setup for Apache Site Maintenance pages for proxied applications
 
Create a directory in /etc/httpd/conf called scontrol, owned by root, and under it a subdirectory called control, owned by the account Apache runs as.
 
/etc/httpd/conf/scontrol/control
 
In /etc/httpd/conf/scontrol, create the following files, owned by root:
 
myapp_inet_set_default.sh
#!/bin/sh
cd /etc/httpd/conf/scontrol
ln -sf control/myapp.default.conf myapp.conf
/etc/init.d/httpd reload
 
myapp_inet_set_offline.sh
#!/bin/sh
cd /etc/httpd/conf/scontrol
ln -sf control/myapp.offline.conf myapp.conf
/etc/init.d/httpd reload
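The only moving part in these scripts is the symlink flip; it can be sanity-checked in a throwaway directory (no httpd involved, file names and contents are placeholders):

```shell
# Simulate the flip the two scripts perform, using a scratch directory
dir=$(mktemp -d)
mkdir "$dir/control"
echo "default config" > "$dir/control/myapp.default.conf"
echo "offline config" > "$dir/control/myapp.offline.conf"
cd "$dir"

# myapp_inet_set_default.sh does this:
ln -sf control/myapp.default.conf myapp.conf
cat myapp.conf    # prints "default config"

# myapp_inet_set_offline.sh does this:
ln -sf control/myapp.offline.conf myapp.conf
cat myapp.conf    # prints "offline config"
```

Because the Include path (myapp.conf) never changes, httpd simply picks up whichever target the link points at on the next reload.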
 
In the control subfolder, create the following files, owned by the account Apache runs under:
 
myapp.default.conf
RewriteRule /myapp$ /myapp/ [R,L]
<Location "/myapp/">
    ProxyPass http://myapp.sysxperts.com:8080/myapp/
    ProxyPassReverse http://myapp.sysxperts.com:8080/myapp/
    ProxyPassReverse /
</Location>
myapp.offline.conf
RewriteRule ^/myapp/? http://www.sysxperts.com/main/myappmaintenance.html [NC,R,L]
Create the myappmaintenance.html page and put it in the main subfolder under your DocumentRoot, or anywhere else you like, provided the RewriteRule points there.
 
 
In the vhost configuration (/etc/httpd/conf/sites/www.sysxperts.com.conf), find the Rewrite and Location entries for the app and replace them with:
Include conf/scontrol/myapp.conf
 
Create a cron job under root to turn on the site maintenance page at the start of the maintenance window, and another to turn the app back on at the end; or run the scripts manually as needed.
 

28 19 * * * /etc/httpd/conf/scontrol/myapp_inet_set_offline.sh

0 0 * * * /etc/httpd/conf/scontrol/myapp_inet_set_default.sh
  
This example turns on the site maintenance page at 7:28 PM and brings the app back online at midnight.
 

Apache Security


In httpd.conf:

    TraceEnable Off
   
In ssl.conf:

    SSLProtocol -ALL +SSLv3 +TLSv1
    SSLCipherSuite ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM


In vhosts:

    Include conf/ssl.conf
    RewriteEngine on
    RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
    RewriteRule .* - [F]



Perl script to publish web content from test to prod via SSH (Linux with Apache)
 
Requirements: public-key SSH authentication between the two servers.
 
Create the file /etc/fpub.allowed on the test server with:
 
#source:destination paths

    /web/docs:/web/docs

 
Create the file /usr/bin/fpub with:
#!/usr/bin/perl -w

use strict;

use Cwd qw(abs_path);

use Sys::Syslog qw(:standard :macros);
use File::Spec;

openlog('fpub', 'ndelay', LOG_USER);

# User to run as, system to scp to (should be put in a config file)

my $user = 'unixuser';
my $target = 'destinationservername';

my $relfile = $ARGV[0];

# Make sure that a file was specified

if (! $relfile) {
        print "No file specified for publishing.\n";
        print "Usage: fpub /file/to.publish\n";
        print "Also, use \"fpub list_allowed\" to see allowed publishing locations.\n";
        exit 1;
}

# List allowed publish locations if requested by the user

if ($relfile eq 'list_allowed') {
        print "Listing allowed publish locations:\n\n";
        open my $allow, '<', '/etc/fpub.allowed'
                or die "Could not open allowed locations file /etc/fpub.allowed: $!";

        while (<$allow>) {
                print;
        }

        close $allow;
        print "\n";
        exit 0;
}

# Get the absolute path of the file

my $file = File::Spec->rel2abs($relfile);

print "Publishing file: $file\n";

my $username = getpwuid($>);

syslog(LOG_INFO, "fpub ($username) publishing file $file");

my $scpret = system("sudo -u $user fpub_scp $file $target");

# system() returns the raw wait status; the child's exit code is in the high byte
my $scpexit = $scpret >> 8;

if ($scpexit == 100) {
        syslog(LOG_ERR, "fpub ($username) fpub.allowed error, could not publish file: $file");
} elsif ($scpexit == 120) {
        syslog(LOG_WARNING, "fpub ($username) publish disallowed for file: $file");
} elsif ($scpexit == 130) {
        syslog(LOG_ERR, "fpub ($username) fpub.allowed contains relative paths");
} elsif ($scpexit != 0) {
        syslog(LOG_ERR, "fpub ($username) unspecified error.  File: $file");
}

closelog();

Use examples:
fpub index.html                            # from within the directory where index.html exists
find . -type f -mtime -1 -exec fpub {} \;  # publish all docs updated within the last 24 hours