Compare Two MySQL Database Structures Using mysqldump and diff

To compare two MySQL databases on Linux/OS X you can use a very simple script that calls the “mysqldump” and “diff” commands to write the difference in structure between the two databases to an output file. In this case I was comparing my local database with one on a remote server. You will be prompted for the passwords; to avoid this you can replace “-p” with “-pYOURPASSWORD” (less secure):

mysqldump --no-data --skip-comments --skip-extended-insert --host= -u root -p DATABASENAME > file1.sql
mysqldump --no-data --skip-comments --skip-extended-insert --host= -u root -p DATABASENAME > file2.sql
diff file1.sql file2.sql > diffoutput.txt
rm -f file1.sql
rm -f file2.sql

Create Certificates and Use SSL with MySQL and the ADOdb Database Abstraction Library for PHP

This post is mainly for my own records and is the combination of 3 tutorials from KrWiki at waterlovinghead, Carl’s Whine Rack and Mad Irish. All 3 are almost 100% what I was looking for.

I needed to set up a secure database connection between our live web server and database server, both running Ubuntu Server. The two machines are completely separate, each with its own IP address. The procedure for setting up an SSL connection between them is similar to setting up a standard SSL certificate in Apache. First you create the certificates and sign them, then you configure the client (the web server) and the server (the database server) to use them, and finally you create a special MySQL user that is forced to use SSL and connect using ADOdb. You don’t need to use ADOdb, but it makes things a lot easier (see my previous posts).

It’s very important that your MySQL server has SSL enabled (most builds do by default). You can test this by connecting to your server and running the SQL command:

show variables like 'have_ssl';

This should show “have_ssl” as either “DISABLED” or “YES”. If it says “NO” you will need to enable SSL (which might mean recompiling MySQL).

Creating the certificates

You need to create certificates for both the server (our database server) and the client (our web server). There are always a few ways of doing this described in various tutorials but this way is the easiest.

First create a directory somewhere that you will store your certificates in, e.g. “/var/ssl_certificates/” then go into the directory and type the following:

openssl genrsa 2048 > ca-key.pem

This generates a 2048-bit RSA private key for the Certificate Authority (CA). Now you need to create a CA certificate, which will be used later to sign our server and client certificates. It is possible to use an external CA, but in this case we are self-signing, which makes things easier.

openssl req -new -x509 -nodes -days 1000 -key ca-key.pem > ca-cert.pem

You will be asked for a few details: country name, state or province name, locality name, organisation name, organisational unit name, common name and email address. Set the common name to your database server’s hostname. The ca-cert.pem file created is our CA certificate.

Now you can create the server certificate request:

openssl req -newkey rsa:2048 -days 1000 -nodes -keyout server-key.pem > server-req.pem

This will ask for the same details as the CA certificate but will also ask you for a challenge password and an optional company name. In our case we are not going to use the challenge password, as we would have to enter it every time the MySQL server started up.

Now sign the request using our CA certificate and key and generate the server certificate:

openssl x509 -req -in server-req.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > server-cert.pem

Now we need to create a client certificate request. We need to create one for each client, but in this case we only need to create one for the web server:

openssl req -newkey rsa:2048 -days 1000 -nodes -keyout client-key.pem > client-req.pem

You will be asked the same questions as for the server certificate request; this time, set the common name to the client’s hostname. Now sign this request and generate the client certificate (note the serial here is 02; certificates signed by the same CA should not share a serial number):

openssl x509 -req -in client-req.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 02 > client-cert.pem

So now we should have several files in our directory. These are:

  • ca-key.pem – The certificate authority (CA) key
  • ca-cert.pem – The certificate authority (CA) certificate
  • server-key.pem – The server key
  • server-req.pem – The server certificate request
  • server-cert.pem – The server certificate
  • client-key.pem – The client key
  • client-req.pem – The client certificate request
  • client-cert.pem – The client certificate

Now that we have the required certificates for both the server and the client, we can set up each side.

Setting up the MySQL server and client to allow SSL

All you need to do to set up the MySQL server to accept SSL connections is edit the MySQL my.cnf configuration file (usually in either /etc/my.cnf or /etc/mysql/my.cnf). Add the following lines in the [mysqld] section to point MySQL at the required files (in our case in /var/ssl_certificates/):

ssl-ca=/var/ssl_certificates/ca-cert.pem
ssl-cert=/var/ssl_certificates/server-cert.pem
ssl-key=/var/ssl_certificates/server-key.pem
Now restart the MySQL service to check everything works:

/etc/init.d/mysql restart

(On Red Hat-style systems the path is /etc/rc.d/init.d/mysqld instead.)

If everything is working fine it should just start up like normal.

Now to set up the client (our web server) we need to copy the 3 required client files (ca-cert.pem, client-cert.pem and client-key.pem) to a directory on the client (e.g. /var/ssl_certificates/ again). Then edit my.cnf on the client (again usually in either /etc/my.cnf or /etc/mysql/my.cnf) and add the following lines in the [client] section, which you may need to add yourself:

ssl-ca=/var/ssl_certificates/ca-cert.pem
ssl-cert=/var/ssl_certificates/client-cert.pem
ssl-key=/var/ssl_certificates/client-key.pem
So now the client and server are set up to use the SSL certificates you created for all SSL connections between the two. Now we set up a user for the database who is forced to use SSL.

Setting up a user to use SSL only

It’s easiest to set up the user on the command line using the “mysql” command on the database server. So log in as your root user (or equivalent) on the database server and create a test database “test_ssl”:

create database test_ssl;

Now create a test user “test_user” with password “testpass”, restricted to the client’s hostname, and force it to use SSL when logging in:

grant all privileges on test_ssl.* to 'test_user'@'' identified by 'testpass' require ssl;
flush privileges;
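REQUIRE SSL only insists that the connection is encrypted. If you want the server to also demand a valid client certificate, MySQL's GRANT syntax supports REQUIRE X509 as well; a hedged sketch (the hostname below is a placeholder):

```sql
-- REQUIRE X509 additionally demands the client present a certificate
-- signed by a CA the server trusts ('client.host' is a placeholder).
grant all privileges on test_ssl.* to 'test_user'@'client.host'
    identified by 'testpass' require x509;
flush privileges;
```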

Now we can create a test table “test” and insert some data so we can query it to test everything is working from PHP:

create table test (test_id int auto_increment primary key, test_data varchar(255));
insert into test set test_data='foo';

Now we go back to ADOdb and PHP to test this all works.

Connect to the database over SSL using ADOdb

Simply create a .php file on your web server with the following code:

include('adodb5/'); // database libraries (must point to where your ADOdb libraries are kept)

$dsn = 'mysqli://test_user:testpass@';
$dbh = NewADOConnection($dsn);

$arr = $dbh->GetAll("SELECT * from test");
print_r($arr);

This should connect to our database as test_user over SSL and print the following in your web browser:

Array ( [0] => Array ( [0] => 1 [test_id] => 1 [1] => foo [test_data] => foo ) )

The great thing about this is that, by using ADOdb, we don’t need to worry about any kind of special connection string. The only difference between an SSL connection and a standard connection is that SSL uses the “mysqli” driver in the data source name rather than “mysql”; the rest of the code stays the same.

NOTE: It’s worth double checking that everything is working by executing "show status like 'ssl_cipher'" as a SQL query in the PHP above:

include('adodb5/'); // database libraries (must point to where your ADOdb libraries are kept)

$dsn = 'mysqli://test_user:testpass@';
$dbh = NewADOConnection($dsn);

$arr = $dbh->GetAll("show status like 'ssl_cipher'");
print_r($arr);

which gives:

Array ( [0] => Ssl_cipher [Variable_name] => Ssl_cipher [1] => DHE-RSA-AES256-SHA [Value] => DHE-RSA-AES256-SHA )

Using ADODB to easily connect to MySQL and Microsoft SQL Server from PHP

In previous posts I’ve talked about connecting to Microsoft SQL Server from PHP using ODBC and using mssql_connect to connect directly. The most important thing I’ve realised is not to re-invent the wheel: ADOdb has proven to be a really great cross-platform way of connecting to databases.

The best thing about ADOdb is that just by changing the connection string I can talk to MySQL and Microsoft SQL Server without changing any of my code. I’ve used it for performance and migration testing and it’s been perfect, with none of the strange hangs I saw with a standard ODBC connection. I’m currently using Ubuntu Server 10.10 to host all the PHP, which has overall been a pretty good experience apart from those ODBC hangs.

If you are on Ubuntu Server you may need to install some ODBC packages to get ADOdb working. Some of these packages are not strictly necessary, but I would recommend them for testing ODBC connections:

sudo apt-get install php5-odbc unixodbc unixodbc-dev freetds-dev sqsh tdsodbc

Now you need to create a SQL user in either your MySQL or SQL Server database. Make sure you map the SQL Server user as a database owner on the DB you want to connect to and give them the same default schema. The reason for this is that I couldn’t get the ADOdb connection to work with the database specified in the connection string (unlike with MySQL).

Download adodb5 and place the directory in the root of your web server, then add the ADOdb include to the top of your PHP file:

include('adodb5/'); // must point to where your ADOdb libraries are kept
Now open up your PHP file and insert the following for SQL Server with IP “the.ser.ver.ip”:

$dbcstring = "mssql://username:password@the.ser.ver.ip";

Or the following for MySQL:

$dbcstring = "mysql://username:password@the.ser.ver.ip/databasename?persist";

Create a connection to the database:

$DB = NewADOConnection($dbcstring);

Now you can use ADOdb’s built-in commands to easily pull data from the database and display it on screen. It doesn’t matter which database you are connecting to; the commands are now the same. This, along with the ease of use, is the main advantage of ADOdb in my opinion:

$arr = $DB->GetAll("SELECT column FROM table");
foreach ($arr as $row) {
    echo $row[0] . "<br/>";
}

It’s worth reading the guide on adodb for the commands you can use, such as $DB->GetRow(), $DB->GetOne() and $DB->Execute(). The great thing is that adodb makes everything so simple you can treat your data as arrays or you can go further and make use of some of the more advanced functionality. Either way, by using adodb you massively cut down on code rewrites when you migrate to a new database.
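For illustration, here is a hedged sketch of those other calls against the test table from the SSL post above (same placeholder DSN; GetRow, GetOne and Execute are real ADOdb methods, but the queries are just examples):

```php
include('adodb5/'); // must point to where your ADOdb libraries are kept

$dbh = NewADOConnection('mysqli://test_user:testpass@');

// GetRow returns a single row as an array.
$row = $dbh->GetRow("SELECT test_id, test_data FROM test");

// GetOne returns a single scalar value.
$count = $dbh->GetOne("SELECT COUNT(*) FROM test");

// Execute returns a recordset you can step through.
$rs = $dbh->Execute("SELECT test_data FROM test");
while (!$rs->EOF) {
    echo $rs->fields[0] . "<br/>";
    $rs->MoveNext();
}
```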

Migrating data from MySQL to Microsoft SQL Server (and away from MySQL slowness)

I hit a stumbling block with MySQL that has made me seriously rethink using it in future. A simple JOIN query between two tables (one with 1k rows, one with 660k rows) was taking a very long time in MySQL. I had a look at indexing but couldn’t get the execution time below 220 seconds on the test machine, no matter how the query was structured. I even considered adding redundant columns to the dataset to allow indexing on integers rather than varchars. While trying a few things I decided to test the exact same query on the same data set in Microsoft SQL Server Express 2005, where it took 4 seconds (1 second once the query had been cached).

The difference is obviously massive, far more than I was expecting. I then ran a few tests using DISTINCT, GROUP BY and COUNT(*) and noticed that in every instance, MySQL was slower than SQL Server, even on identical test machines. Considering the database is expected to grow massively over the next year we decided that it would be sensible to consider alternatives to MySQL. We already have a license for SQL Server so the decision was pretty easy (also, all my testing is currently done using SQL Server Express 2005).

Migrating the data across is not easy using .sql exports from MySQL Workbench, as the data set is large (60 MB of .sql) and the output file needs editing for SQL Server compatibility. Annoyingly, SQL Server Express 2005 doesn’t support multi-row INSERT ... VALUES statements (please upgrade to SQL Server 2008!), making it even more difficult.

I found a post on CodeProject by Niklas Henricson that explains how easy it is to migrate tables from MySQL to SQL Server using an ODBC connector from MySQL. You can download the ODBC connector from MySQL and create a connection which can then be easily used in SQL Server Management Studio to transfer data between MySQL and SQL Server. A summary of Niklas’ steps are:

  • Open your ODBC Data Source Administrator from the Control Panel -> Administrative Tools. Under the tab labelled as “System DSN”, press the “Add” button.
  • On the “Create New Data Source” dialog that appeared, choose MySQL ODBC 5.1 Driver and then press the “Finish” button.
  • After that, a MySQL connection configuration dialog will appear. Enter your MySQL database account information, preferably the “root” account, which has full access to your MySQL databases. Do not change the port from 3306 unless you defined a different one during your MySQL server installation.
  • Press the “Test” button to ensure your connection settings are set properly and then the “OK” button when you’re done.
  • In this state, you are ready to establish a link towards MySQL database from your Microsoft SQL Server Management Studio. Open a query window and run the following SQL statement:

EXEC master.dbo.sp_addlinkedserver
    @server = N'MYSQL',
    @srvproduct = N'MySQL',
    @provider = N'MSDASQL',
    @provstr = N'DRIVER={MySQL ODBC 5.1 Driver}; SERVER=the.ser.ver.ip; DATABASE=databasename; USER=username; PASSWORD=password; OPTION=3'

  • In the query window, run the following SQL statement to import table tablename from the MySQL database databasename, into the database in Microsoft SQL called SQLServerdatabasename.

SELECT * INTO SQLServerdatabasename.dbo.tablename
FROM openquery(MYSQL, 'SELECT * FROM databasename.tablename')

The potential for ODBC based migration of data is brilliant and openquery can be scripted to run across all the database tables. I am currently looking at a way of bulk importing an entire database from MySQL to SQL Server and will update this post in the future.
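As a sketch of that scripting idea (hedged: it assumes the MYSQL linked server above, and you should review the generated statements before running them), you can ask information_schema on the MySQL side for the table list and build one SELECT ... INTO per table:

```sql
-- Build one SELECT ... INTO statement per MySQL table; databasename and
-- SQLServerdatabasename are the same placeholders used above.
SELECT 'SELECT * INTO SQLServerdatabasename.dbo.' + mt.table_name +
       ' FROM openquery(MYSQL, ''SELECT * FROM databasename.' +
       mt.table_name + ''')' AS migrate_stmt
FROM openquery(MYSQL,
    'SELECT table_name FROM information_schema.tables
     WHERE table_schema = ''databasename''') AS mt;
```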

There are a few things to be wary of, such as keys, data types and the creation of identity specifications, but these can all be sorted out once the data is migrated across. The speed is pretty good and shouldn’t be too much of a concern, as this is a one-time operation.