Extracting and Copying Mail and Calendar Appointments from a Corrupted Microsoft Windows Live Mail Installation (Calendar EDB/ESE Database Files)

In order to install another of the Live series of packages (Movie Maker) on Windows 7, a colleague updated their Windows Live Mail package as part of the install process. The install then hung and crippled the Mail program, preventing it from starting with a useless “Windows Live Mail has stopped working” message.

As they were using POP3 to download mail and not synchronising with the calendar server very often, they were worried about losing everything, including all their very important calendar appointments. The first step was to try System Restore, which didn’t work (all the system restore points we tried came back with the same error in Windows Live Mail). Next we had to look at manually moving and editing files.

We decided that the next best thing would be to set up Windows Live Mail on another machine before manually copying the data from the corrupted machine over to the new one. You can alternatively just make backups of your data and try uninstalling/re-installing Windows Live Mail.

Mail retrieval from the existing install is easy as all you need to do is copy some physical files across from the C:\Users\USERNAME directory. Just move the subfolders (containing mail) from “C:\Users\USERNAME\AppData\Local\Microsoft\Windows Live Mail” to the other machine with a good copy of Windows Live Mail. You will need to set up your email retrieval settings again but your old mail should just appear in all the correct folders.
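If you would rather script the copy than drag folders around by hand, a robocopy command along these lines should do it (the destination path “D:\MailBackup” is just an example; “/E” copies all subfolders including empty ones):

robocopy "C:\Users\USERNAME\AppData\Local\Microsoft\Windows Live Mail" "D:\MailBackup" /E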

Retrieving a corrupted calendar is a lot more tricky and needs some free third-party tools. Despite the availability of several (very poor) tools I couldn’t find a way of extracting the calendar data in a format that could be easily imported into the working Windows Live Mail program. Most tools simply refused to open the data file. As a result I had to extract the data and my colleague had to manually enter the appointments again (which is still better than losing everything). I went a step further and wrote a little PHP script to display the data more easily so they didn’t have as hard a time of it.

By far the most success I had was with NirSoft ESEDatabaseView which could open the corrupted Live Mail Calendar database file where all other programs failed. The “WLCalendarStore.edb” file containing the database of calendar appointments was found at “C:\Users\USERNAME\AppData\Local\Microsoft\Windows Live Mail\Calendars\DBStore\WLCalendarStore.edb”. I downloaded ESEDatabaseView and ran the executable from the zip. Then I opened the corrupted “WLCalendarStore.edb” and selected the “calendarItem” table from the dropdown. Now this is great, but everything is in HEX format and needs to be converted to normal text!

I first extracted the HEX encoded CalendarItem data to CSV (click on an item in the list of CalendarItem, Ctrl-A, Ctrl-S, then Save as Type “Comma Delimited Text File (*.csv)”). It’s up to you how best to convert this HEX information from the output CSV, but I used the following PHP script to convert and display the easy to understand “ServerIcal” column, which in my export was column 17 (zero-indexed, matching the $data[17] lookup in the script). Note that the PHP has two parts: one that outputs nicely human-readable data and another that just outputs the raw iCal-style data:


<?php

// helper function to convert a hex string (e.g. "48656c6c6f") to text
function hex2str($hex) {
    $str = '';
    for ($i = 0; $i < strlen($hex); $i += 2) {
        $str .= chr(hexdec(substr($hex, $i, 2)));
    }
    return $str;
}

// show appointments as easy to read HTML
echo "<h1>Appointments</h1>";

if (($handle = fopen("calendaritem.csv", "r")) !== FALSE) {
    while (($data = fgetcsv($handle, 1000000, ",")) !== FALSE) {

        // get the ServerIcal column (zero-based index 17)
        $ical = $data[17];

        // strip out all '00' padding bytes and spaces from the hex dump
        $ical = str_replace(" 00 ", "", $ical);
        $ical = str_replace(" ", "", $ical);

        // convert the remaining hex pairs to a string
        $line = hex2str($ical);

        // tidy up HTML to make it easily human readable
        $line = substr($line, strpos($line, "DTSTART"));
        $line = substr($line, 0, strpos($line, "UID"));
        $line = str_replace("SUMMARY:", "SUMMARY: <strong>", $line);
        $line = str_replace(PHP_EOL, "</strong>" . PHP_EOL, $line);
        $line = str_replace("DTSTART;VALUE=DATE:", "DTSTART;VALUE=DATE: <strong>", $line);
        $line = str_replace(" DTEND;", "</strong>  DTEND;", $line);
        $line = str_replace("DTEND;VALUE=DATE:", "DTEND;VALUE=DATE: <strong>", $line);
        $line = str_replace("  SUMMARY: ", "</strong>   SUMMARY: ", $line);
        $line = str_replace("DTSTART;VALUE=DATE:", "Start Date:", $line);
        $line = str_replace("DTEND;VALUE=DATE:", "End Date:", $line);

        // output as HTML
        echo "<br/>$line";
    }

    fclose($handle);
}

// show raw appointment data
echo "<h1>Raw Data</h1>";

if (($handle = fopen("calendaritem.csv", "r")) !== FALSE) {
    while (($data = fgetcsv($handle, 1000000, ",")) !== FALSE) {

        // get the ServerIcal column
        $ical = $data[17];

        // strip out the padding bytes and spaces as before
        $ical = str_replace(" 00 ", "", $ical);
        $ical = str_replace(" ", "", $ical);

        // convert to string
        $line = hex2str($ical);

        // output as HTML
        echo "<br/><br/>$line";
    }

    fclose($handle);
}

?>
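To run the script, save it as something like “calendar.php” (a name I’ve made up for this example) in the same directory as the exported “calendaritem.csv”, then run it from the command line and redirect the output to an HTML file you can open in a browser:

php calendar.php > appointments.html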

The output as HTML was easily readable enough that my colleague could manually enter all the appointments again.

Obviously I would much prefer to export the database to something that could be directly imported, but this functionality isn’t available in Windows Live Mail and the database was so corrupted we couldn’t even just copy it over to the new machine. My colleague is now looking at some of the many alternatives to POP3 and manual syncing of calendar items in Windows Live Mail.

Quickly and Safely Move a Microsoft SQL Server Database (MDF and LDF Files) to a New Physical Location (Including Setting Read Write Access)

To move a Microsoft SQL Server database to a new physical location you need to detach, copy, reattach and set permissions for the MDF (data) and LDF (log) files associated with the database. I needed to do this because site backup regulations required each type of database file (MDF/LDF) to sit on a drive that was backed up separately. I was using SQL Server 2008 R2.

Microsoft’s recommended way of doing this is to use SQL Management Studio. Create a query and detach your database “mydb” by entering and running the following:

use master
go
sp_detach_db 'mydb'
go

Now copy the MDF and LDF files to their new location and reattach (locations and file names are for this example only):

use master
go
sp_attach_db 'mydb','E:\DATA\mydb.mdf','F:\DATA\mydb_log.ldf'
go

You can check the basic properties of the database (and that the file locations have been correctly set) using:

use mydb
go
sp_helpfile
go

The final thing I needed to do was to set the file/folder permissions so that the database could go from read-only to read/write. I first set the folder permissions for my two new folders (“E:\DATA\” and “F:\DATA\” as above). To do this I needed to add the following user to the security settings with full control:

SQLServerMSSQLUser$COMPUTERNAME$MSSQLSERVER

Setting the folder permissions didn’t actually propagate to the files (I got a permissions error) so I set the permissions for the “E:\DATA\mydb.mdf” and “F:\DATA\mydb_log.ldf” files individually in the same way.
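If you would rather script this than click through the Security tab, icacls commands along these lines should do the same job from an elevated prompt (a sketch only, substitute your own computer name and paths; “(OI)(CI)F” grants full control with inheritance on a folder and “F” grants full control on a single file):

icacls E:\DATA /grant "SQLServerMSSQLUser$COMPUTERNAME$MSSQLSERVER:(OI)(CI)F"
icacls E:\DATA\mydb.mdf /grant "SQLServerMSSQLUser$COMPUTERNAME$MSSQLSERVER:F"
icacls F:\DATA\mydb_log.ldf /grant "SQLServerMSSQLUser$COMPUTERNAME$MSSQLSERVER:F"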

Now open up another query window and type the following to enable read/write access to the database:

use master
go
alter database mydb set read_write with no_wait
go

Now your database is set up exactly as it was previously, only the associated files have moved physical location.

Quick Subversion (SVN) Server Setup on Ubuntu Server 12.04

Setting up an Apache Subversion (SVN) server for access over svn:// with client applications like TortoiseSVN is actually pretty simple. The official Ubuntu Documentation covers a lot more than this simple setup, but this is enough to get something up and running quickly without worrying about WebDAV or HTTP access. The odyniec.net tutorial is also really useful and provides the init.d startup script I used to make the SVN server run at boot.

The steps are: install subversion, create the repository directories, set access control and set subversion to run at boot.

To install subversion on Ubuntu just run:

sudo apt-get install subversion

Now create a directory to hold your subversion repositories, in my case I used “/home/svn”:

sudo mkdir /home/svn

Create a repository folder, for example “svnrepo1”, within this directory:

sudo mkdir /home/svn/svnrepo1

Now you can use the “svnadmin” program that comes as part of the subversion package to create an SVN repository within this folder:

sudo svnadmin create /home/svn/svnrepo1

The configuration file for the repository is created as “/home/svn/svnrepo1/conf/svnserve.conf” and contains the option to enable password protection as well as a lot of other useful settings. The important lines to uncomment to force password access are:

anon-access = none
auth-access = write

By setting “anon-access” to “none” you force people to enter passwords when connecting to the SVN server. Now set up password-protected access by uncommenting the following:

password-db = passwd

Setting “password-db” to “passwd” means the list of users and passwords in the “/home/svn/svnrepo1/conf/passwd” file will be used to check whether someone has access. In a lot of cases it makes sense to keep this “passwd” file somewhere else so it can be used for all your repositories. In my case I set it to:

password-db = /home/svn/passwd

Just make sure to set the passwd file to be only readable by root:

sudo chmod 600 /home/svn/passwd

The “passwd” file is actually a very simple text file and looks something like:

[users]
harry = harrypassword
sally = sallypassword

Once the SVN server is configured and a repository set up as above you can run the SVN server using:

svnserve -d --foreground -r /home/svn

To make sure the SVN server starts at boot you need to set up init.d. Do this by creating and editing a file “/etc/init.d/svnserve” (I use “nano” to do my text editing on the command line):

sudo nano /etc/init.d/svnserve

Now paste in the contents of the odyniec.net init.d script. This script covers everything you need to start, stop and restart the “svnserve” program at boot so your SVN server can listen for all svn:// connections. There are alternatives to using this script, but this works and is simple to set up. Make sure you change the line with “DAEMON_ARGS” to point to the right place, in this case “/home/svn”:

DAEMON_ARGS="-d -r /home/svn"
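For reference, a heavily stripped-down sketch of what such an init script boils down to is shown below (the odyniec.net script is more robust; treat this purely as an illustration of the start/stop mechanics):

#!/bin/sh
### BEGIN INIT INFO
# Provides:          svnserve
# Required-Start:    $remote_fs $network
# Required-Stop:     $remote_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Subversion svnserve daemon
### END INIT INFO

DAEMON=/usr/bin/svnserve
DAEMON_ARGS="-d -r /home/svn"
PIDFILE=/var/run/svnserve.pid

case "$1" in
  start)
    # start svnserve in daemon mode, recording its PID so it can be stopped later
    start-stop-daemon --start --pidfile $PIDFILE --exec $DAEMON -- $DAEMON_ARGS --pid-file $PIDFILE
    ;;
  stop)
    start-stop-daemon --stop --pidfile $PIDFILE
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac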

Now tell Ubuntu to update its startup routine to include this new script:

sudo update-rc.d svnserve defaults

Reboot the server to make sure everything is working as expected.

You can now start, stop or restart the automatically booted SVN server using the following commands:

sudo /etc/init.d/svnserve start
sudo /etc/init.d/svnserve stop
sudo /etc/init.d/svnserve restart

Connecting to your SVN server can be done using something like TortoiseSVN and the URL you use to connect to the “svnrepo1” repository you just set up is:

svn://your.serv.er.ip/svnrepo1
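If you have the command line client installed, the equivalent checkout of a working copy would be:

svn checkout svn://your.serv.er.ip/svnrepo1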

There is a lot more you can do with the SVN configuration, such as adding group support etc, but this is the quickest way to set up a standard SVN server on Ubuntu to accept svn:// connections using “svnserve”.

“Unblock” an entire directory of files (rather than individually) when copying files between NTFS locations in Windows Server

We copied a few hundred files between a Windows Server 2003 machine and a Windows Server 2008 machine in order to migrate an ASP.NET web application. There were a few hoops to jump through, as once everything was set up I instantly hit:

Request for the permission of type ‘System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b27b5d561934e089’ failed. (C:\inetpub\wwwroot\WebApp\web.config line 90)

This was related to a “<httpModules>” element in the “Web.config” file.

On searching for similar problems I found a blog post on MSDN which stated that this could be due to a DLL file that needed to be unblocked (right click, “Unblock” button) after copying from another location. Unfortunately, we couldn’t go through and unblock each individual file to try and get round this as there were so many.

The solution (thanks to superuser/StackExchange) is to zip up all the files into one compressed archive before transferring them. Then the NTFS flagging of “unsafe” files that need to be unblocked only applies to that one archive, which you can unblock in a single click before extracting, simple!
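As an aside, on machines with PowerShell 3.0 or later (not the default on Server 2008, so this wasn’t an option for us) the “Unblock-File” cmdlet can clear the flag for an entire tree in one go:

Get-ChildItem -Path C:\inetpub\wwwroot\WebApp -Recurse | Unblock-File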

This solved our System.Web.AspNetHostingPermission error straight away, implying that the blocking of files by NTFS for security reasons can affect ASP.NET migration.

Simple and secure MySQL database backup to gzip using mysqldump in Linux

As part of a larger daily backup cron job script I needed to quickly backup my MySQL databases to individual compressed “gzip” .GZ files. The command to do this is very easy, just run the command and pipe it to “gzip”:

mysqldump -u USERNAME -pPASSWORD DATABASENAME | gzip > OUTPUTFILE.gz

This requires you to actually put in the USERNAME and PASSWORD on the command line, which is obviously a bad idea due to logging of commands and other security reasons.

The MySQL-recommended way of doing this is to instead use a separate file containing the login details. You use “mysqldump” with the argument “--defaults-extra-file” (which must be the first option on the command line) and specify the location of a configuration file such as “/root/mysqldetails.cnf”. It is a good idea to create this file, “chown” it to root and “chmod” it to “0400”, which will make it readable only by the “root” user:

chown root:root /root/mysqldetails.cnf
chmod 0400 /root/mysqldetails.cnf

The file itself is a very simple text file and just looks something like:

[client]
host = localhost
user = USERNAME
password = PASSWORD

Now that this file has been created and the permissions set correctly, the mysqldump command looks like:

mysqldump --defaults-extra-file=/root/mysqldetails.cnf DATABASENAME | gzip > OUTPUTFILE.gz

The result is OUTPUTFILE.gz which is a compressed copy of your DATABASENAME database, without showing anyone the username and password required to access the database. The “mysqldump” command is very useful and more information can be found in the MySQL documentation.
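To give an idea of how this slots into a daily cron job, here is a minimal sketch (the database names and backup directory are placeholders for this example):

#!/bin/sh
# nightly MySQL backup sketch: one dated gzip file per database
BACKUPDIR=/backup/mysql
for DB in mydb1 mydb2; do
    mysqldump --defaults-extra-file=/root/mysqldetails.cnf "$DB" | gzip > "$BACKUPDIR/$DB-$(date +%F).sql.gz"
done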

Encrypt a USB drive in Linux and automatically mount it on startup using a keyfile and dm-crypt

The easiest way of doing this is to use dm-crypt’s “cryptsetup” on your USB drive, create a keyfile, then set the options in “/etc/fstab” and “/etc/crypttab”. By using a keyfile you can get the drive to mount automatically without having to type in your encryption passphrase. I was doing this on a bare install of CentOS 6.3 but the steps should be similar on other distros with “cryptsetup” installed.

I needed to back up some important (and confidential) files to a USB portable drive that I wanted to encrypt with full disk encryption. You can do this in a variety of ways but the method here was the easiest I found. More information can be found at Brad’s Blog and HowtoForge.

Encrypting and mounting your USB drive

First you need to physically plug in your USB drive to the machine and then unmount it if it automatically mounts. I performed all the commands here using the root user. In my case, when I plugged in the USB drive it was found as “/dev/sdb” and automatically mounted by CentOS. To unmount:

umount /dev/sdb

Now the USB drive needs to be formatted using “cryptsetup” and the “luksFormat” command:

cryptsetup luksFormat /dev/sdb

The tool will give you a warning about overwriting data, which you need to confirm by typing an uppercase “YES”. You then type in and confirm your LUKS passphrase, which will be used to unlock the drive in future. This passphrase is also used later when creating the keyfile.

Now you can create a device mapper for the drive using “cryptsetup” and the “luksOpen” command. I called my mapper “secretvol” in this example so the drive will be mapped to “/dev/mapper/secretvol”. You will be prompted for the passphrase:

cryptsetup luksOpen /dev/sdb secretvol

Now before you can mount your newly mapped device you need to format the file system (I used ext3):

mkfs.ext3 /dev/mapper/secretvol

Now you can mount the USB drive. Make sure you have created the mount point (in my case “/mnt/encrypteddrive”) first then mount it with:

mkdir /mnt/encrypteddrive
mount /dev/mapper/secretvol /mnt/encrypteddrive

To test this all works properly, reboot your machine, then unlock and mount your USB drive manually (you will be prompted for the passphrase):

cryptsetup luksOpen /dev/sdb secretvol
mount /dev/mapper/secretvol /mnt/encrypteddrive

To unmount and lock the drive, close the device mapper with the “luksClose” command:

umount /dev/mapper/secretvol
cryptsetup luksClose secretvol

Creating a keyfile to avoid entering your passphrase manually

A keyfile is good as it means you can unlock your USB drive without having to manually type the passphrase. To create a keyfile “/root/keyfile” for your device using “cryptsetup” and the “luksAddKey” command, enter the following (you will need to enter your passphrase). The first command creates a random 4096 byte file, the second makes it readable only by root and the third adds the keyfile as an additional key for the drive using “luksAddKey”:

dd if=/dev/urandom of=/root/keyfile bs=1024 count=4
chmod 0400 /root/keyfile
cryptsetup luksAddKey /dev/sdb /root/keyfile
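You can confirm the new key was added to a key slot with the “luksDump” command:

cryptsetup luksDump /dev/sdb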

Now you can unlock your previously created drive without manually entering the passphrase using:

cryptsetup luksOpen --key-file /root/keyfile /dev/sdb secretvol

And mount with:

mount /dev/mapper/secretvol /mnt/encrypteddrive

Automatically unlock and mount your encrypted USB drive at system startup

Now that you have a keyfile you can set up your linux install to automatically unlock and mount the USB drive by editing a couple of files.

Edit your “/etc/crypttab” file:

nano /etc/crypttab

Add the line below to add the “/dev/mapper/secretvol” device:

secretvol /dev/sdb /root/keyfile luks

NOTE: You can also use the UUID of your drive in “/etc/crypttab” to make sure the right disk is used regardless of the order in which the kernel detects your disks. In cases where you may be adding or removing disks this is really important, as your drive may appear as “sdb”, “sdc” or some other “sdX” device depending on detection order. To find the right UUID type:

ls -l /dev/disk/by-uuid

Which in my case told me that my UUID for “sdb” (my USB drive) was “6858274d-2370-4377-9426-d786c3e7a410”. The line in “/etc/crypttab” that you should use in this case to add “/dev/mapper/secretvol” is:

secretvol /dev/disk/by-uuid/6858274d-2370-4377-9426-d786c3e7a410 /root/keyfile luks

Now edit your “/etc/fstab” file:

nano /etc/fstab

Add the line below to automatically mount the device to “/mnt/encrypteddrive”:

/dev/mapper/secretvol /mnt/encrypteddrive ext3 defaults 0 2

Now to test this, reboot your machine and navigate to “/mnt/encrypteddrive” where your USB drive will be mounted automatically for you. Easy!

Run .bat batch and .cmd files as scheduled tasks in Windows with a local user (avoid the “Could not start” error)

Running scheduled tasks as a local user means you can lock down user permissions and avoid giving broad admin rights to your local users. I have a scheduled task that needed to be run by a local user by running a .cmd (.bat works as well) batch file every day.

I created the local user with a password and added the scheduled task to run my .cmd file every day at 4am. When adding the scheduled task I put in the correct user details and password and then tried to run it, which failed with a “could not start” error.

The reason for this is that by default, new local users do not have read and execute permissions on “cmd.exe”, which is used by the Windows task scheduler to start .cmd and .bat files in scheduled tasks. The fix is to navigate to your “system32” directory (probably “c:\windows\system32”), right-click on the “cmd.exe” application, go to the Security tab and add your new local user with “Read & Execute” permissions.
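In principle you can script the same change with “icacls” from an elevated prompt (here “taskuser” is a hypothetical local account name; if the command is refused you may need to take ownership of the file first):

icacls C:\Windows\System32\cmd.exe /grant taskuser:RX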

Once the security settings for “cmd.exe” are set to allow your local user to run it, the task scheduler will now allow your .cmd/.bat scheduled task to run with that local user and everything will work fine.

Simple setup of Oracle 11g Release 2 on CentOS 6.3, including pdksh and all dependencies, in VirtualBox

I’ve installed Oracle Database 11g Release 2 a few times on various Linux installs and apart from a few quirks it is a pretty similar process on most. The absolute bare bones default install, as described here, is easy to set up and doesn’t take that long. You can see more detail, including all the recommended steps if you follow the instructions in the Oracle install guide. I will describe installing 32bit Oracle Database 11g Release 2 on CentOS 6.3 32bit with the UI installed so we can use the Oracle installer directly. My computer’s name was “localhost.localdomain” as I was testing this in a development VirtualBox install.

First download Oracle 11g Release 2 from their website. For a linux install it comes as two zip files, for which you must accept the license before downloading. The exact version I downloaded was “Oracle Database 11g Release 2 (11.2.0.1.0) for Linux x86”.

Now you need to prepare your CentOS install by adding the required users and user groups for the install process. In my setup I am following Oracle’s recommendations, running the following commands as root to add the “oinstall” and “dba” user groups:

groupadd oinstall
groupadd dba

Now add the “oracle” user, who we will be using to run the Oracle 11g install and give the user the correct group membership:

useradd -g oinstall -G dba oracle

Now create a directory and set the appropriate permissions where you are going to install Oracle. In my case I have installed it in the “oracle” user’s home directory under “/home/oracle/app”:

mkdir -p /home/oracle/app
chown -R oracle:oinstall /home/oracle/app/
chmod -R 775 /home/oracle/app/

Now extract the Oracle ZIP files downloaded earlier into somewhere sensible. I chose “/home/oracle/database”. Navigate to the directory and run the install script as your new oracle user:

su oracle

cd /home/oracle/database
./runInstaller

NOTE: In my case, because this CentOS install was a VirtualBox virtual machine I needed to explicitly set the $DISPLAY variable to the local machine before the UI for the installer would run. This is done by running the following command and restarting my shell:

export DISPLAY=:0.0

Now the installer will start up. You can ignore entering your email in the first step “Configure Security Updates” and leave the default setting of “Create and configure a database” in the second step “Installation Option”.

For the “System Class” step of the install I just left it as the default “Desktop Class” and in the “Typical Installation” step I left everything as default apart from setting the administrative password. The default settings put the oracle base in “/home/oracle/app/oracle” with a global database name of “orcl.localdomain”. For the “Create Inventory” step I left the default folder of “/home/oracle/app/oraInventory” and the group name “oinstall”.

Now we get on to the interesting part of the install, which is the “Prerequisite Checks” stage. If you are running the install on a brand new copy of CentOS you will need to set a few system variables and install a set of prerequisites.

NOTE: You may not need to, but I needed to add more swap space to my CentOS install this time around in order to meet the prerequisites. Run the following commands as root to create a 2048 MB swap file called “/swapfile” on your hard drive and set CentOS to use it for swap space:

dd if=/dev/zero of=/swapfile bs=1024 count=2097152
mkswap /swapfile
swapon /swapfile
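You can check the new swap space is active with:

swapon -s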

Now set CentOS to always use this swap space at boot by editing your “/etc/fstab” file using the command:

nano /etc/fstab

And add the following line:

/swapfile  swap  swap  defaults  0  0

Once you have passed the swap space test in the “Prerequisite Checks” stage of the Oracle install, you can start to fix all those “Failed” messages. Click on the button “Fix & Check Again” and a window will pop up to tell you about the handy “runfixup.sh” script that will be placed at “/tmp/CVU_11.2.0.1.0_oracle/runfixup.sh”. In your shell, navigate to the directory as root and run the script:

cd /tmp/CVU_11.2.0.1.0_oracle/
./runfixup.sh

The “runfixup.sh” script will fix all the system variables for you so you don’t need to set them manually. Now all that remains is to fix the dependencies, most of which can be installed using “yum” with the following command:

yum install gcc gcc-c++ compat-libstdc++-33 elfutils-libelf-devel libaio-devel libstdc++-devel unixODBC unixODBC-devel

Now the only remaining prerequisite that causes a “Failed” message is “pdksh-5.2.14”, which has been removed from the CentOS repositories after CentOS 5 (see here). The replacement is “ksh”, but if you install this package using “yum install ksh” you will still get the “Failed” dependency check for “pdksh-5.2.14” in the Oracle installer, and “ksh” will conflict with “pdksh” if you then try to install it.

The solution is to install “pdksh” manually from RPM, which can be found at a variety of mirrors. I used the following command to install the “pdksh” package:

rpm -ivh ftp://ftp.pbone.net/mirror/archive.download.redhat.com/pub/redhat/linux/6.1/en/os/i386/RedHat/RPMS/pdksh-5.2.14-1.i386.rpm

Now Oracle should pass all the prerequisite checks and you will see the “Summary” step of the install where you can click the “Finish” button. It may take a while but Oracle Database will install with all the required settings ready for you to use out of the box.

The final step is to execute the configuration scripts as root; the installer will prompt you to run these after you have unlocked any extra users you might need beyond the defaults (you don’t need to unlock any at this stage). The two scripts can be run as follows:

cd /home/oracle/app/oraInventory/
./orainstRoot.sh

cd /home/oracle/app/oracle/product/11.2.0/dbhome_1/
./root.sh

To test your install worked, you can log in to the web-based management interface for your computer (“localhost.localdomain” in my case) with the user name “SYS”, connecting as “SYSDBA” and using the password you set during the Oracle install. Remember to open port 1158 on your firewall if you need to:

https://localhost:1158/em/

Now you can start to use Oracle. I highly recommend looking through the documentation from Oracle themselves to help get yourself used to the Oracle way of doing things. There are loads of client applications that can help, like the command line based Oracle Instant Client and the Oracle SQL Developer UI program. Oracle have a lot of good walkthroughs for working with their tools which are available as part of their Learning Library.

Quickly enable NTFS support in CentOS 6.3 using EPEL, yum and ntfs-3g

Enabling NTFS support in CentOS 6.3 takes only two shell commands and can be done in seconds by installing the EPEL repository and the “ntfs-3g” package.

I needed to transfer some files from my USB drive to CentOS 6.3 (using the USB device option in VMware) and got an error message about an unknown filesystem “ntfs”. The drive I was using was formatted in Windows using the NTFS filesystem and couldn’t be read by my CentOS 6.3 install by default.

First you need to install the Extra Packages for Enterprise Linux (EPEL) repository, which is done by typing the following into the shell as root assuming you are using 32bit CentOS 6.3:

rpm -Uvh http://mirror.overthewire.com.au/pub/epel/6/i386/epel-release-6-7.noarch.rpm

If you are using 64bit CentOS use:

rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

Accept all the prompts to install the repository to get access to a large number of extra packages for CentOS. You can add the “ntfs-3g” package with the following command:

yum install ntfs-3g

Accept all the prompts and you are done, your NTFS formatted drives can now be read from and written to using CentOS.
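If a drive doesn’t mount automatically you can mount it manually using the ntfs-3g driver (the device name “/dev/sdb1” and mount point are just examples; check “dmesg” to find yours):

mkdir -p /mnt/usbdrive
mount -t ntfs-3g /dev/sdb1 /mnt/usbdrive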

Use Google PageSpeed Insights and WebPagetest to benchmark your website speed and get performance suggestions for free

As part of optimising the speed of this site I found a link to Google PageSpeed Insights from their Webmaster Tools page. This is a free service and gives you some great feedback on high, medium and low priority changes you can make to your site to improve response and overall user experience. You can use their web interface and just type in a URL to check the PageSpeed score.

For reference, as of today this site got a PageSpeed score of 94 out of 100, which is excellent.

I also found another good free tool, WebPagetest, which breaks down the speed of your site from various locations and allows you to really see which part of your site load is slowing things down. You can do this kind of profiling in your browser’s developer tools, but being able to test from different locations makes WebPagetest a lot more useful.