LDAP with auth_pam and PHP to authenticate against MySQL

In the quest to secure MySQL, as well as to reduce the number of complicated passwords to remember, many organizations are looking into external authentication, especially using LDAP. For a free and open source option, Percona's PAM authentication plugin is the standard choice.

tl;dr: I go through how to compile php-cli for use with the auth_pam plugin.

Background


There are two plugins that can be used. From the documentation, the two plugins are:

  • Full PAM plugin called auth_pam. This plugin uses dialog.so. It fully supports the PAM protocol with arbitrary communication between client and server.
  • Oracle-compatible PAM plugin called auth_pam_compat. This plugin uses mysql_clear_password, which is part of the Oracle MySQL client. It has some limitations; for example, it supports only one password input, and you must use the -p option to pass the password to auth_pam_compat.

Percona’s MySQL client supports both plugins natively. That is, you can use auth_pam or auth_pam_compat and use the “mysql” tool (or “mysqldump”, or mysql_upgrade, etc.) and you are good to go. Given the choice, we would all use auth_pam, under which clients DO NOT use mysql_clear_password.

The main problem is that not all clients support auth_pam. Workarounds have called for using auth_pam_compat over SSL, which is a perfectly reasonable way to handle the risk of cleartext passwords – encrypt the connection.

However, what if you want to use auth_pam?

The problem with auth_pam

Back in 2013, Percona posted about how to install and configure auth_pam and auth_pam_compat. I will not rehash that setup, except to say that most organizations no longer use /etc/shadow, so the setup involves getting the correct /etc/pam.d/mysqld in place on the server.

That article has this gem:

As of now, only Percona Server's mysql client and an older version of HeidiSQL (version 7), a GUI MySQL client for Windows, are able to authenticate over PAM via the auth_pam plugin by default.

So, if you try to connect to MySQL using Perl, PHP, Ruby, Python and the like, you will receive this error: “Client does not support authentication protocol requested by server; consider upgrading MySQL client.”

Fast forward 4 years, to now, and this is still an issue. Happily, the article goes on to explain how to recompile clients to get them to work:

The good news is that if the client uses libmysqlclient library to connect via MySQL, you can recompile the client’s source code to use the libmysqlclient library of Percona Server to make it compatible. This involves installing Percona Server development library, compiler tools, and development libraries followed by compiling and installing the client’s source code.

And, it helpfully goes step by step on how to recompile perl-DBD-mysql to get it working with LDAP authentication (as well as without – it still works for users who do not use LDAP).

But what if you are using PHP to connect to MySQL?

PHP and auth_pam


If you try to connect, you get this error:
SQLSTATE[HY000] [2054] The server requested authentication method unknown to the client

So let us try to mirror the perl recompilation process in PHP.

Step 1

“Install Percona yum repository and Percona Server development library.” This is not a problem; do whatever you need to do to install Percona-Server-devel for your version.
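
For example, this sketch uses Percona's repository package and the 5.5 devel package name; adjust the version to match your server:

yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
yum install Percona-Server-devel-55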

Step 2

Install package-building tools so you can build a package – optional, but useful if you ever want to install this new client without having to recompile. As in the example, I chose the RPM package format, so I installed rpm-build.

Step 3

Download and install the source RPM for the client package. This is where I started running into trouble. What I did not realize was that PHP does not divide out its packages like Perl does. Well, it does, but php-mysqlnd is compiled as part of the core, even though it is a separate package.

Downloading the main PHP package


So I downloaded the source RPM for PHP at https://rpms.remirepo.net/SRPMS/, and installed it into the sources directory:
cd SRPMS
wget https://rpms.remirepo.net/SRPMS/php-7.0.22-2.remi.src.rpm
cd ../SOURCES
rpm -Uvh ../SRPMS/php-7.0.22-2.remi.src.rpm

This unpacks a main file, php-7.0.22.tar.xz, plus a bunch of supplemental files (like patches, etc).

What it does NOT contain is a spec file, which is critical for building the packages.

Getting a spec file


I searched around and found one at https://github.com/iuscommunity-pkg/php70u/blob/master/SPECS/php70u.spec – this is for 7.0.21, so beware of mixing different versions of spec files and source code. Once that was in place, I changed the mysql lines to point at /usr/bin/mysql_config, as per the PHP documentation's “Choosing a MySQL library” page. Note that I went with the “not recommended” library, but in this case, we WANT to compile with libmysqlclient.
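
To give a flavor of the change, the relevant configure flags point the MySQL extensions at mysql_config instead of the bundled mysqlnd – an illustrative excerpt, not the full spec:

# illustrative excerpt – the rest of the configure invocation is unchanged
./configure \
  --with-mysqli=/usr/bin/mysql_config \
  --with-pdo-mysql=/usr/bin/mysql_config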

Compiling php-cli, not php-mysqlnd


In addition, I discovered that compiling php-mysqlnd with the new libraries did not work. Perhaps it was something I did wrong, as at that point I was still compiling the whole PHP package and every module in it.

However, what I *did* discover is that if I recompiled the php-cli package with libmysqlclient, I was able to get a connection via PHP using LDAP authentication, via a tool written by someone else – with no changes to the tool.

Final spec file


So here is the spec file I eventually came up with. I welcome any optimizations to be made!

Step 4

“Install compilers and dependencies”.
On my host I had to do a bunch of installations to get the requirements installed (your mileage may vary), including the Percona Server package for the /usr/lib64/mysql/plugin/dialog.so file:
yum install Percona-Server-server-55-5.5.55-rel38.8.el6.x86_64 libtool systemtap-sdt-devel unixODBC-devel

Step 5

“Build the RPM file”. Such an easy step to state, but it took about a week of back and forth with building the RPM file (which configures, tests and packages up everything), so I went between this step and updating the spec file a lot.

cd rpmbuild/SPECS/
rpmbuild -bb php-cli.spec

Then I installed my new php-cli package and tested it, and it worked!
# rpm -e php-cli --nodeps
# rpm -Uvh /root/rpmbuild/RPMS/x86_64/php70u-cli-7.0.22-2.ius.el6.x86_64.rpm --nodeps
Preparing… ########################################### [100%]
1:php70u-cli ########################################### [100%]

I hope you have similar success, and if you have updates to the spec files and lists of packages to install, please let me know!

Query Reviews (part 2): pt-query-digest

Query reviews (part 1): Overview

The first post in the series gave an overview of what a query review is and the value it can bring you. So now let's talk about how one is done – specifically, how to do a query review using pt-query-digest.

The point of a query review is that it is a comprehensive review of queries. Imagine if you could get a list of all queries that run on your system, and then you systematically looked at each query to determine if it is optimized. That is the basic concept behind a query review.

So, how do you get a list of queries?

pt-query-digest can use a slow query log, binary log, general log or tcpdump. I usually use a slow query log with long_query_time set to 0, so I can capture all the successful queries and their timings. If this is too much overhead, consider using Percona Server’s log_slow_rate_limit and log_slow_rate_type parameters to only log every nth session/query. This means that if you have 5000 queries per second, you can set the slow logging rate to every 100th query, and reduce the write overhead for the slow query log to 50 queries per second (instead of all 5000 queries).
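
For example, here is a sketch of turning full capture on dynamically; note that the log_slow_rate_* variables are Percona Server-specific:

# log every successful query, server-wide
mysql -e "SET GLOBAL slow_query_log = ON"
mysql -e "SET GLOBAL long_query_time = 0"
# optional Percona Server sampling: log every 100th query to cut write overhead
mysql -e "SET GLOBAL log_slow_rate_type = 'query'"
mysql -e "SET GLOBAL log_slow_rate_limit = 100"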

So you have your log, now what? Well, we need to process it. The --type option is where you set what your log type is (binlog, genlog, slowlog, tcpdump). Default is slowlog.

By default, pt-query-digest will give you a report of the top 95% worst queries. You can change that with the --limit parameter – note that --limit just limits the output; pt-query-digest still processes all the queries in the log file. If --limit is followed by an integer, it will limit the output to the top X queries; if it's followed by a percentage (e.g. 10%) it will output the top percentage of queries.

As this is a query review of all queries, we will want to set the limit to 100%.

There are a lot of other options that pt-query-digest has, but many of them are there so we can distill and get queries that meet a certain criteria. The point of a query review is to look at ALL queries, so we do not need to use those options.

In fact, the only other options we need are related to the review itself. Because a review is systematic, we need a place to store information related to the review. How about a database for that? In fact, pt-query-digest has a --review option that takes parameters to store the information into a table.

Here is the command I recently used to start a query review. It was run from the shell commandline, and I used --no-report because I did not want anything other than the table and its rows created:
[sheeri.cabral@localhost]$ pt-query-digest --no-report --type slowlog --limit 100% --review h=localhost,u=sheeri.cabral,D=test,t=query_review --create-review-table --ask-pass mysql_slow.log

You can see that --review has a number of arguments, comma-separated, to identify a table on a host to put the queries into. I used the --create-review-table flag to create the table, since it did not already exist, and --ask-pass because I do not type in passwords in a shell command.

pt-query-digest then spends some time analyzing the file then creating and populating the table. Here’s a sample row in the table:

*************************** 1. row ***************************
checksum: 11038208160389475830
fingerprint: show global status like ?
sample: show global status like 'innodb_deadlocks'
first_seen: 2017-06-03 11:20:59
last_seen: 2017-06-03 11:32:15
reviewed_by: NULL
reviewed_on: NULL
comments: NULL

The checksum and fingerprint are ways to make the query portable, no matter what values are used. The fingerprint takes out all the differences among iterations of the query, and puts ? in its place. So if you have a query that’s used over and over, like
SELECT first_name FROM customers WHERE id in (1,2,3)
the fingerprint would look like
SELECT first_name FROM customers WHERE id in (?+)

The sample provides a way for us to copy and paste into an EXPLAIN (or my favorite, EXPLAIN FORMAT=JSON) statement, so that we can assess the query.
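
For instance, taking the hypothetical customers query from above (EXPLAIN FORMAT=JSON requires MySQL 5.6 or later):

mysql -e "EXPLAIN FORMAT=JSON SELECT first_name FROM customers WHERE id IN (1,2,3)\G"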

So then we can go through the process of optimizing the query. In the end, this query has nothing to tweak to optimize, so I update the reviewed_on date, the reviewed_by person, and the comments:

mysql> UPDATE test.query_review set reviewed_on=NOW(), reviewed_by='sheeri.cabral', comments='no mechanism to optimize' WHERE checksum=11038208160389475830;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0

On to the next query – we shall get the next query that has not yet been reviewed:
mysql> select * from test.query_review where reviewed_on is null limit 1\G

If you have already done some query reviews, your WHERE clause may look something like where reviewed_on is null OR reviewed_on < NOW()-interval 6 month.

And then look at that query for optimization. Lather, rinse, repeat. This is a GREAT way to get familiar with how developers (and ORMs) are writing queries.

Some tricks and tips: first, take a look at all the queries less than 50 characters or so – you can easily update those to be all set to reviewed, with whatever message you want (see the UPDATE sketch after the output below).
mysql> select fingerprint from query_review where length(sample)<50;
+----------------------------------------------+
| fingerprint                                  |
+----------------------------------------------+
| administrator command: Ping                  |
| set session `wait_timeout` = ?               |
| show tables                                  |
| rollback                                     |
| select * from information_schema.processlist |
| select @@session.tx_isolation                |
| show status                                  |
| start transaction                            |
| select user()                                |
| show query_response_time                     |
| set autocommit=?                             |
| show full processlist                        |
| show databases                               |
| administrator command: Statistics            |
| show plugins                                 |
| show slave status                            |
| commit                                       |
| set names ?                                  |
| show global status like ?                    |
| administrator command: Quit                  |
| show /*!? global */ status                   |
| set names utf?                               |
| select @@version_comment limit ?             |
| show engine innodb status                    |
| select database()                            |
+----------------------------------------------+
25 rows in set (0.00 sec)
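
After eyeballing a list like this, one UPDATE can mark them all reviewed – a sketch reusing the table and columns from above:

mysql -e "UPDATE test.query_review
    SET reviewed_on = NOW(), reviewed_by = 'sheeri.cabral',
        comments = 'trivial query, nothing to optimize'
    WHERE LENGTH(sample) < 50 AND reviewed_on IS NULL"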

One great feature is that you can add columns to the table. For example, maybe you want to add an “indexes” column to the table, and list the index or indexes used. Then after the query review is complete, you can look at all the indexes in use, and see if there is an index defined in a table that is NOT in use.
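
For example, a sketch of adding that column (the name and size here are made up):

mysql -e 'ALTER TABLE test.query_review ADD COLUMN `indexes` VARCHAR(255) DEFAULT NULL'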

You can review all the queries and run a query review every 6 months or every year, to look at any new queries that have popped up, or queries that have been removed (note first_seen and last_seen in the table).

You can also see how the query performance changed over time using the --history flag to pt-query-digest, which can populate a table with statistics about each query. But that is a topic for another post!

Query reviews are excellent ways to look comprehensively at your queries, instead of just the “top 10” slow, locking, most frequent, etc. queries. The EXPLAINing is long and slow work but the results are worth it!

Why does the MySQL optimizer not do what I think it should?

In May, I presented two talks – “Are you getting the best out of your indexes?” and “Optimizing Queries Using EXPLAIN”. I now have slides and video for both of them.

The first talk about indexing should probably be titled “Why is MySQL doing this?!!?!!?” It gives insight into why the MySQL optimizer chooses indexes that you do not expect; especially when it does not use an index you expect it to.

The talk has something for everyone – for beginners it explains B-trees and how they work, and for the more seasoned DBA it explains concepts like average value group size, and how the optimizer uses those concepts applied to metadata to make decisions.

Slides are at http://technocation.org/files/doc/2017_05_MySQLindexes.pdf.
Click the slide image below to go to the video at https://www.youtube.com/watch?v=e39-UfxQCCs
[Slide from MySQL indexing talk]

The EXPLAIN talk goes through everything in EXPLAIN – both the regular and JSON formats – and describes what the fields mean, and how you can use them to figure out how to best optimize your query. There are examples that show where you can find red flags, so that when you EXPLAIN your own queries, you can be better prepared for gotchas. The EXPLAIN talk references the indexing talk in a few places (both talks were given to the same audience, about a week apart), so I highly recommend you watch that one first.

Slides are at .
Click the slide image below to go to the video at https://www.youtube.com/watch?v=OlclCoWXplg
[Slide image from the EXPLAIN talk]

Proactive MySQL: Query Reviews (part 1, overview)

Query Reviews part 2: pt-query-digest

One task that can really help reduce future problems is to do a periodic query review. I've been using pt-query-digest to do this since 2010 (back when it was part of Maatkit, as mk-query-digest!), and while I have presented the idea several times at conferences, I have never blogged about it.

I am going to share a secret with you – I blog not just to share information with YOU, but to share information with ME. Future me. This comes up because I am working on a query review at Salesforce.com for one of our busiest shards. I went to go look up a blog post on using pt-query-digest, because it's a handy cheat sheet… and… I never did it. So, here goes!

This is the first blog post in what will be a series of blog posts, because it is a long topic and there are a few tools that can be used these days for analysis.

What is a query review?
At its simplest, it’s a review of queries. In this context, a query review is where you proactively review the performance of ALL successful queries sent to a server.

Why should you do a query review?
A query review can find possibly problematic queries BEFORE they are a problem. As an example, you can easily find queries that do not use an index, and make indexes before the tables become so large that they have problems.

Another example is the case when you have a query that does the same thing over and over, many times per second – for example, a query that counts the sessions table every time a user hits the page, to say “x people online now”. That query could be rewritten to not do a count every time – maybe do a count once every minute or 5 minutes and put the result inside a “user count” table, and then have each page hit query that table. Or use an intermediate cache.
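
As a sketch of that rewrite – the table and column names here are invented for illustration – a cron job refreshes the count once a minute, and page hits read the cached row:

# run once a minute from cron instead of on every page hit
mysql -e 'REPLACE INTO user_count (id, num_online, updated_at)
          SELECT 1, COUNT(*), NOW() FROM sessions'
# each page hit then reads the cheap cached value
mysql -e 'SELECT num_online FROM user_count WHERE id = 1'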

Another reason to do a query review is that sometimes a sample of a particular query is difficult to find when you need one – a query review captures and stores a sample of every query for you.

Who should do a query review?
A query review is not for a junior DBA. You need to have some knowledge of how to optimize queries, how indexing works, when an index is valuable, and when an index is necessary. For example, a query not using an index on a text field in the “countries” table is much better than a query not using an index on an integer in the “customer” table, because countries won’t grow to be huge but hopefully your customers table will.

What does a query review look like?
Necessary for a query review is gathering “all” queries. This can be done in many ways:

  • general log – logs all queries when they are sent to the server
    • Pros
      • Built into all versions and forks of MySQL.
      • Gets ALL queries sent to the server, even ones that have an error. You can see if there are lots of queries with syntax errors, which can help find code/ORM bugs.
      • Gets ALL attempted logins, even if they fail, so it is useful for security audits and finding technical debt.
      • You can turn it on dynamically since MySQL 5.1 (I think? at any rate, your version probably has it dynamic).
    • Cons
      • Because the logging happens when the query is sent, there is no indication whether the query completed successfully, or how long it took.
      • MySQL overhead
      • Write intensive to disk
      • Grows at a large rate
  • slow query log with long_query_time turned to 0 – gets all *successful* queries
    • Pros
      • Built into all versions and forks of MySQL.
      • Can be turned on dynamically (since, I think, MySQL 5.1, same as the general log).
      • Gets lots of information, including lock timing, query execution timing, rows returned, and whether the query was successful or not.
    • Cons
      • Does not get ALL attempted queries – only gets some errors (e.g., not syntax errors)
      • Does not get failed logins
      • Write intensive to disk
      • Grows at a large rate
      • MySQL overhead
  • tcpdump and other traffic sniffers (wireshark, sniffers built into programs like MONyog, etc.) – see the capture sketch after this list
    • Pros
      • Built into every platform
      • Gets all MySQL traffic, including attempted logins and queries with syntax errors
      • Gets information like execution time, users and hosts
      • No additional MySQL overhead
    • Cons
      • Must be root to run tcpdump
      • Write intensive to disk
      • Grows at a large rate
  • PERFORMANCE_SCHEMA, pt-query-digest --processlist, proxies, audit logs
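
Here is the capture sketch referenced in the tcpdump item above; the invocation comes from the pt-query-digest documentation (run as root, port 3306 assumed):

tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
pt-query-digest --type tcpdump mysql.tcp.txt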

Well, that’s a lot of words for one blog post, so I’ll end part 1: overview here. Future posts in the series will cover how to use the tools to do a query review.

Conference Tips!

For those going to Percona Live next week, I am re-sharing this blog post from September 2012 (from a now-defunct blog):

As many folks know, I do a bit of traveling, both going to conferences and speaking at them (MySQL and others). So I have compiled a list of tips and tricks, from the basics like “do not forget to eat breakfast” to putting your business cards inside your bag. I have a list with pictures that I will add to as I think of more. I hope you enjoy this tumblr-style list of conference tips!

Do you have any other tips? Add them in the comments!

What does pt-show-grants look like?

[note: A broken link from a defunct blog made me dig up this post from archive.org and re-post it here; beginners may find this informative!]

The OurSQL Podcast did an episode on some of the lesser-known but very useful tools in the Percona Toolkit. pt-show-grants is one of those tools that I use pretty frequently. While the manual page has an explanation of all the features and a few examples, you don't really see the output, and often you decide whether or not to use a tool based on what it gives you as output.

So here is a small example of an actual command I ran today using pt-show-grants. I wanted to find the grants for a particular user. To do that without pt-show-grants, I'd have to log in to MySQL and run

mysql> SELECT host FROM mysql.user WHERE user='aus4_dev';

And then use that host information in a SHOW GRANTS statement:

mysql> SHOW GRANTS FOR aus4_dev@HOST;

But I would have to do this for each HOST – if there were 2 hosts, I'd have to run the SHOW GRANTS command twice.

Happily, pt-show-grants has an option called --only, which will show you all user@host combinations for the username you specify. I have login information stored in a .my.cnf on this particular dev machine, and except for the password and host, this is an exact copy/paste of what I typed and the output:

[scabral@dev1.db ~]$ bin/pt-show-grants --only aus4_dev
-- Grants dumped by pt-show-grants
-- Dumped from server Localhost via UNIX socket, MySQL 5.1.52-log at 2012-03-01 08:52:01
-- Grants for 'aus4_dev'@'10.0.0.1'
GRANT USAGE ON *.* TO 'aus4_dev'@'10.0.0.1' IDENTIFIED BY PASSWORD '*1234567890ABCDEF1234567890ABCDEF12345678';
GRANT ALL PRIVILEGES ON `aus4_dev`.* TO 'aus4_dev'@'10.0.0.1';
-- Grants for 'aus4_dev'@'10.0.0.2'
GRANT USAGE ON *.* TO 'aus4_dev'@'10.0.0.2' IDENTIFIED BY PASSWORD '*1234567890ABCDEF1234567890ABCDEF12345678';
GRANT ALL PRIVILEGES ON `aus4_dev`.* TO 'aus4_dev'@'10.0.0.2';

By default, if I did not put in the --only option, it would show me all the users that I was allowed to see. There is also an --ignore option, so if you want to show all users except a particular username, you can do that as well.

Being able to find all user@host users and their grants given a particular username is very handy and eliminates the need to go into the database to find the hostnames.

[original post date: 3/1/2012]

Testing Advanced Log Flushing for Percona Audit

We use Percona's audit log plugin (https://www.percona.com/doc/percona-server/5.5/management/audit_log_plugin.html) to keep a record of all our logins. Recently we did one of those tasks that everyone knows they should do, but few ever do: change the application user's password.

When we change the application password, we add a new application user with the proper permissions and a new password, update the information in our repository and wait a while. Using this procedure, any failures mean the old user is used, and more importantly – failure does not impact the end user.

We check the audit logs to see if there were failures – that is, if the old user is still being used. When it is no longer in use, we can drop the user.

For reference, here are our settings:

[mysqlaudit]# grep audit /etc/my.cnf
# Percona audit plugin options
audit_log_format = JSON
audit_log_rotate_on_size = 1073741824 # 1G
audit_log_rotations = 10
audit_log_file = /var/log/mysqlaudit/audit.log
audit_log_buffer_size = 4194304
audit_log_strategy = ASYNCHRONOUS
audit_log_policy = LOGINS

This means that we automatically rotate the log when it grows past 1G, keeping 10 audit logs. The other option is to flush the logs manually, but we do not want our log files to get very large, and we don't need to keep audit logs for a very long time.

The ideal behavior: we update our app to the new user, and then flush the logs. That way we could start a new audit log and only have to search the current audit log for the old user. I'm sure people are thinking, “well, it's set to rotate by size, not manually, so you just cannot do it.” However, binary logs are set the same way, and FLUSH LOGS does indeed rotate binary logs manually, even when automatic rotation by size is set.

The tl;dr is that there is currently no way to do this without restarting MySQL. The audit_log_rotate_on_size variable is not dynamic, so we could not set it to manual without restarting MySQL. Here are some other tests we did to see if we could force an audit log flush while using the auto rotate:

We tried moving the audit logs and flushing:

[ ~]# cd /var/log/mysqlaudit/
[mysqlaudit]# ls -l
total 1459572
-rw-rw---- 1 mysql mysql 420839439 Mar 31 11:04 audit.log
-rw-rw---- 1 mysql mysql 1073749720 Mar 25 08:50 audit.log.01
[mysqlaudit]# mv audit.log.01 audit.log.02
[mysqlaudit]# mv audit.log audit.log.01
[mysqlaudit]# ls
audit.log.01 audit.log.02

Don't worry, this does not affect writing the file – MySQL still has the file open, and it still writes to it, now called audit.log.01:
[mysqlaudit]# ls -l
total 1459652
-rw-rw---- 1 mysql mysql 420925253 Mar 31 11:07 audit.log.01
-rw-rw---- 1 mysql mysql 1073749720 Mar 25 08:50 audit.log.02
[mysqlaudit]# mysql -u sheeri.cabral -p -e "FLUSH LOGS;"
Enter password:
[mysqlaudit]# ls -l
total 1459688
-rw-rw---- 1 mysql mysql 420958983 Mar 31 11:07 audit.log.01
-rw-rw---- 1 mysql mysql 1073749720 Mar 25 08:50 audit.log.02

Note that this also proves that “FLUSH LOGS” does not close and open the audit log.

Can we force it? Let’s try by setting the audit_log_policy to NONE and then to LOGINS (what we have it as by default):

[mysqlaudit]# mysql -u sheeri.cabral -p -e "set global audit_log_policy=NONE; set global audit_log_policy=LOGINS"
Enter password:
[mysqlaudit]# ls -l
total 1459768
-rw-rw---- 1 mysql mysql 421043317 Mar 31 11:10 audit.log.01
-rw-rw---- 1 mysql mysql 1073749720 Mar 25 08:50 audit.log.02

Here’s another failed test – let’s see if we can disable then enable the plugin:
[mysqlaudit]# mysql -u sheeri.cabral -p -e "UNINSTALL PLUGIN audit_log"
Enter password:
[mysqlaudit]# ls -rlth
total 1.4G
-rw-rw---- 1 mysql mysql 1.1G Mar 25 08:50 audit.log.01
-rw-rw---- 1 mysql mysql 403M Mar 31 11:44 audit.log
[mysqlaudit]# mysql -u sheeri.cabral -p -e "INSTALL PLUGIN audit_log SONAME 'audit_log.so';"
Enter password:
ERROR 1125 (HY000) at line 1: Function 'audit_log' already exists
[mysqlaudit]# mysql -u sheeri.cabral -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1375345
Server version: 5.5.51-38.1-log Percona Server (GPL), Release 38.1, Revision b4a63b4

Copyright (c) 2009-2016 Percona LLC and/or its affiliates
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

[(none)]> \P grep -i audit
PAGER set to 'grep -i audit'
[(none)]> SHOW PLUGINS;
| audit_log | DELETED | AUDIT | audit_log.so | GPL |
41 rows in set (0.00 sec)

[(none)]> INSTALL PLUGIN audit_log SONAME 'audit_log.so';
ERROR 1125 (HY000): Function 'audit_log' already exists

So, I ended up needing to restart MySQL if I wanted to re-enable the plugin.

Sometimes the failures are the most illuminating!

[I realize that the plugin probably could not hack FLUSH LOGS but it would be great to get FLUSH AUDIT or something similar…]

MySQL DevOps First Step: Revision Control

MySQL environments are notorious for being understaffed – MySQL is everywhere, and an organization is lucky if they have one full-time DBA, as opposed to a developer or sysadmin/SRE responsible for it.

That being said, MySQL is a complex program and it’s useful to have a record of configuration changes made. Not just for compliance and auditing, but sometimes – even if you’re the only person who works on the system – you want to know “when was that variable changed?” In the past, I’ve relied on the timestamp on the file when I was the lone DBA, but that is a terrible idea.

I am going to talk about configuration changes in this post, mostly because change control for configuration (usually /etc/my.cnf) is sorely lacking in many organizations. Having a record of data changes falls under backups and binary logging, and having a record of schema changes is something many organizations integrate with their ORM, so they are out of scope for this blog post.

Back to configuration – it is also helpful for disaster recovery purposes to have a record of what the configuration was. You can restore your backup, but unless you set your configuration properly, there will be problems (for example, an incompatible innodb_log_file_size will cause MySQL not to start).

So, how do you do this? Especially if you have no time?

While configuration management systems like chef, puppet and cfengine are awesome, they take setup time. If you have them, they are gold – use them! If you do not have them, you can still do a little bit at a time and improve incrementally.

If you really are at the basics, get your configurations into a repository system. Whether you use rcs, cvs, subversion or git (or anything else), make a repository and check in your configuration. The configuration management systems give you bells and whistles like being able to make templates and deploy to machines.

It is up to you what your deployment process is – to start, something like “check in the change, then copy the file to production” might be good enough. Remember, we're taking small steps here. It's not a great system, but it's certainly better than not having any revision control at all!
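
As a concrete starting point, here is a minimal sketch assuming git; the paths, commit messages and the db1 host are placeholders to adapt:

# one-time setup: a repository holding the canonical my.cnf
mkdir ~/mysql-config && cd ~/mysql-config
git init
cp /etc/my.cnf .
git add my.cnf
git commit -m "baseline production my.cnf"

# per-change workflow: edit the repo copy, commit, then copy to production
vi my.cnf
git commit -am "raise innodb_buffer_pool_size"
scp my.cnf db1:/etc/my.cnf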

A great system will use some kind of automated deployment, as well as monitoring to make sure that your running configuration is the same as your configuration file (using pt-config-diff, https://www.percona.com/doc/percona-toolkit/3.0/pt-config-diff.html). That way, there are no surprises if MySQL restarts.
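
The basic invocation, straight from the pt-config-diff documentation, compares a config file against a running server:

# prints any variables whose values differ between the file and the server
pt-config-diff /etc/my.cnf h=localhost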

But having a great system is a blog post for another time.

Generating a MySQL Password

One of the services our database engineers provide is adding users to MySQL. We have some nice Chef recipes, so all I have to do is update a few files, including adding in the MySQL password hash.

Now, when I added myself, I just logged into MySQL and generated a password hash. But when my SRE (systems reliability engineer) colleague needed to generate a password, he did not have a MySQL system he could login to.

The good news is it’s easy to generate a MySQL password hash. The MySQL password hash is simply a SHA1 hash of a SHA1 hash, with * at the beginning. Which means you do not need a MySQL database to create a MySQL password hash – all you need is a programming language that has a SHA1 function (well, and a concatenate function).

And I found it, of course, on this post at StackExchange (http://unix.stackexchange.com/a/234592/217471). So you don’t have to click through, here is what it says – and I have tested all these methods and I get the same password hash. I have changed their example of “right” to “PASSWORD HERE” so it’s more readable and obvious where the password goes, in case you copy and paste from here.

Some one-liners:

**MySQL** (may require you to add -u (user) -p):

mysql -NBe "select password('PASSWORD HERE')"

**Python**:

python -c 'from hashlib import sha1; print "*" + sha1(sha1("PASSWORD HERE").digest()).hexdigest().upper()'

**Perl**:

perl -MDigest::SHA1=sha1_hex -MDigest::SHA1=sha1 -le 'print "*". uc sha1_hex(sha1("PASSWORD HERE"))'

**PHP**:

php -r 'echo "*" . strtoupper(sha1(sha1("PASSWORD HERE", TRUE))). "\n";'
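
**Shell** (not from the original answer – my own sketch of the same double-SHA1 construction, assuming openssl is installed):

printf '%s' 'PASSWORD HERE' | openssl sha1 -binary | openssl sha1 | awk '{print "*" toupper($2)}'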

Hopefully these help you – they enabled my colleagues to easily generate what’s needed without having to find (or create) a MySQL instance that they can already login to.

Upgrading from MySQL 5.1 to MariaDB 5.5

In my last post, a tale of two MySQL upgrades, a few folks asked if I would outline the process we used to upgrade, and what kind of downtime we had.

Well, the processes were different for each upgrade, so I will tackle them in separate blog posts. The first step was to upgrade all our MySQL 5.1 machines to MariaDB 5.5. As mentioned in the previous post, MariaDB's superior performance for subqueries is why we switched – and we switched back to MySQL for 5.6 to take full advantage of the performance_schema.

It is not difficult to blog about our procedure, as we have documentation on each process. My first tip would be to do that in your own environment. This also enables other folks to help, even if they are sysadmins and not normally DBAs. You may notice the steps contain items that might be obvious to someone who has done maintenance before – we try to write them detailed enough that if you were doing it at 3 am and a bit sleep-deprived, you could follow the checklist and not miss anything. This also helps junior and aspiring DBAs not miss any steps.

The major difference between MySQL 5.1 and MySQL 5.5 (and its forks, like MariaDB) is that FLOAT values are displayed differently. On MySQL 5.1, a float value could appear in scientific notation (e.g. 9.58084e-05); in 5.5, it does not (e.g. 0.0000958084). This makes checksumming difficult, as all FLOAT values will show differences even when they are the same number. There is a workaround for this, devised by Shlomi Noach.

We have an n+1 architecture for databases at Mozilla – this means that we have an extra server. If we need 1 master and 3 slaves, then n+1 is 1 master and 4 slaves. Because of this, there are 2 different ways we upgrade: the first slave we upgrade, and subsequent slaves/masters.

These steps are copied and pasted from our notes, with minor changes (for example, item #2 is “send out maintenance notices” but in our document we have the e-mail addresses to send to).

Assumptions: Throughout these notes we use /var/lib/mysql, as that is our standard place for MySQL. You may need to change this to suit your environment. We are also using Red Hat Enterprise Linux for our operating system, so this procedure is tailored to it (e.g. yum install/yum remove). We control packages using the freely available puppet mysql module we created.

For the first slave
The overall procedure is to perform a logical backup of the database, create a new empty installation of the new server version, and import the backup. Replication does work from MySQL 5.1 to MariaDB 5.5 and back (at least, on the 25 or so clusters we have, replication worked in both directions; your mileage may vary).

1. Make sure the slave has the same data as the master with checksums (the previous checksum is fine, they should be running every 12 hours).

2. Send out maintenance notices.

3. Take the machine out of any load balanced services, if appropriate

4. Set appropriate downtimes in Nagios

5. Start a screen session on the server

6. Do a SHOW PROCESSLIST to see if there are any slaves of the machine. If so, move them to another master if they are needed. [we have a different checklist for this]

7. Do a SHOW SLAVE STATUS to see if this machine is a slave.
a. If this machine is a slave, ensure that its master will not delete its binlogs while the upgrade is occurring.

b. If this machine is a slave, do a SLAVE STOP; and copy the master.info file somewhere safe [or the slave_master_info table if using that]

8. Stop access to the machine from anyone other than root (assuming you are connecting from root):

UPDATE mysql.user SET password=REVERSE(password) WHERE user!='root'; FLUSH PRIVILEGES;

9. See what the default character set is for the server and databases:
SHOW VARIABLES LIKE 'character_set_server'; SHOW VARIABLES LIKE 'character_set_database';
SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA WHERE DEFAULT_CHARACTER_SET_NAME!='utf8' AND SCHEMA_NAME NOT IN ('mysql');

If applicable, change the server defaults to UTF8 and change databases to utf8 with ALTER DATABASE dbname DEFAULT CHARACTER SET utf8;

10. Check to see how big the data is:
mysql> SELECT SUM(DATA_LENGTH)/1024/1024/1024 AS sizeGb FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA!='information_schema';

11. Determine how you can export the data, given the size. You may be able to export without compression, or you may need to do a mysqldump | gzip -c > file.sql, then compress the old data files instead of just moving them aside.

12. Do a du -sh * of the datadir and save it for later, if you want to compare the size of the database to see how much space is returned after defragmenting.

13. Export the data from all databases, preserving character set, routines, triggers and events. Record the time for documentation's sake. I'm assuming the character set from step 9 is utf8 (if it's something like latin1, you'll need to put --default-character-set=latin1 in the command). If the machine has slaves, make sure to use --master-data=1. If you need to compress, change the shell command accordingly:
time mysqldump --all-databases --routines --triggers --events > `date +%Y-%m-%d`_backup.sql

14. Stop MySQL

15. Copy the config file (usually /etc/my.cnf) to a safe place (like /etc/my.cnf.51)

16. Do a rpm -qa | egrep -i "percona|mysql". Do a yum remove for the mysql/percona packages. It's OK if it also removes related packages, like perl-DBD, but make a note of them, because you will want to reinstall them later. Sample:
yum remove Percona-Server-client Percona-Server-shared-compat Percona-XtraDB-Cluster-devel Percona-Server-server

17. Move the /var/lib/mysql directory to /var/lib/mysql-old. Compress any files that need compression (if you had to compress the export, remember you will need room to decompress the sql file later). If you absolutely cannot keep the files, see if you can copy them somewhere. We really want to preserve the old data directory just in case we need to revert.

18. Decompress the sql file, if applicable.

19. Install the proper packages by changing puppet to use maridb55 instead of mysql51 or percona51. Verify with rpm -qa | egrep -i "percona|mysql|maria"

[this may be different in your environment; we use the freely available puppet mysql module we created.]

20. Run mysql_install_db

21. Make any changes to /etc/my.cnf (e.g. run puppet). When going from MySQL 5.1 to 5.5, there are no particular global changes Mozilla made.

– when we went from MySQL 5.0 to MySQL 5.1, we did a global change to reflect the new slow query log options.

22. chown -R mysql:mysql /var/lib/mysql/

23. chmod 775 /var/lib/mysql

24. Start MySQL and check the error logs for any warnings. Get rid of any warnings/errors, and make sure MySQL is running.

25. Turn off binary logging. Import the export, timing how long it takes, for reference:

time mysql < YYYY_MM_DD_backup.sql

26. Restart MySQL and look for errors; you may need to run mysql_upgrade.

27. Turn on binary logging, if applicable.

28. Test.

29. If this machine was a slave, re-slave it. Let it catch up, making sure there are no data integrity errors, and no replication errors.

30. Reinstate permissions on the users:
UPDATE mysql.user SET password=REVERSE(password) WHERE user!='root'; FLUSH PRIVILEGES;

31. Re-slave any slaves of this machine, if needed.

32. Turn back on Nagios, making sure all the checks are green first.

33. Run a checksum on the master to propagate to this slave, and double-check data integrity on the slave. Note that you will want to use --ignore-columns with the output of this command in the checksum, to avoid false positives from the scientific notation change (see http://www.sheeri.com/mysql-5-1-vs-mysql-5-5-floats-doubles-and-scientific-notation/)

Find FLOAT/DOUBLE fields to ignore in checksum: SELECT GROUP_CONCAT(DISTINCT COLUMN_NAME) FROM INFORMATION_SCHEMA.COLUMNS WHERE DATA_TYPE IN ('float','double') AND TABLE_SCHEMA NOT IN ('mysql','information_schema','performance_schema');
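
A sketch of gluing that together (the host DSN is a placeholder):

# capture the FLOAT/DOUBLE column list, then hand it to pt-table-checksum
IGNORE_COLS=$(mysql -NBe "SELECT GROUP_CONCAT(DISTINCT COLUMN_NAME) FROM INFORMATION_SCHEMA.COLUMNS WHERE DATA_TYPE IN ('float','double') AND TABLE_SCHEMA NOT IN ('mysql','information_schema','performance_schema')")
pt-table-checksum --ignore-columns="$IGNORE_COLS" h=master.example.com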

34. Put the machine back into the load balancer, if applicable.

35. Inform folks the upgrade is over

On the first upgrade, we did what is usually recommended – a logical export with mysqldump, and then an import. With other upgrades in the same replication hierarchy, we can take advantage of xtrabackup to stream the data from an already-upgraded machine directly to the machine to be upgraded.

The general procedure here is similar to the above, except that a logical export is not taken. After the preparation steps are taken, a new empty MariaDB 5.5 server is installed. Then we use xtrabackup to back up the existing MariaDB 5.5 server and restore it onto the machine we are upgrading.

For subsequent slaves, and the master

  1. Coordinate with affected parties ahead of time
  2. Send out any notices for downtime
  3. Take the machine out of any load balanced services, if appropriate. If the machine is a master, this means failing over the master first, so that this machine becomes a regular slave. [we have a different checklist for how to failover]
  4. Set appropriate downtimes in Nagios, including for any slaves
  5. Start a screen session on the server
  6. Do a SHOW PROCESSLIST to see if there are any slaves of the machine. If so, move them to another master if they are needed.
  7. Do a SHOW SLAVE STATUS to see if this machine is a slave.
    1. If this machine is a slave, ensure that the master will not delete its binlogs while the upgrade is occurring.
    2. If this machine is a slave, do a SLAVE STOP; and copy the master.info file somewhere safe
  8. Save a list of grants from pt-show-grants, just in case there are users/permissions that need to be preserved.  [this is done because sometimes masters and slaves have different users, though we try to keep everything consistent]
  9. Figure out how big the backup will be by doing a du -sh on the datadir of the already-upgraded machine to be backed up, and make sure the new machine has enough space to keep the old version and have the new version as well.
  10. Stop MySQL on the machine to be upgraded.
  11. Copy the config file (usually /etc/my.cnf) to a safe place (like /etc/my.cnf.51)
  12. Do a rpm -qa | egrep -i "mysql|percona". Do a yum remove for the mysql packages (at least mysql-server, mysql). It's OK if it also removes related packages, like perl-DBD, but make a note of them, because you will want to reinstall them later.
  13. Move the /var/lib/mysql directory to /var/lib/mysql-old. Compress any files that need compression. If you absolutely cannot keep the files, see if you can copy them somewhere. We really want to preserve the old data directory just in case we need to revert.
  14. Install the proper packages by changing puppet to use maridb55 instead of mysql51 or percona51, running puppet manually. Verify with rpm -qa | egrep -i "percona|mysql|maria"
  15. Run mysql_install_db
  16. Make any changes to /etc/my.cnf (or run puppet). When going from MySQL 5.1 to 5.5, there are no particular changes.
  17. chown -R mysql:mysql /var/lib/mysql/
  18. chmod 775 /var/lib/mysql
  19. Start MySQL and check the error logs for any warnings. Get rid of any warnings/errors, and make sure MySQL is started.
  20. Stop MySQL, and move or delete the datadir that was created on upgrade.
  21. If you are directly streaming the backup to the machine to be upgraded, do this on the machine to be upgraded:
    cd $DATADIR
    nc -l 9999 | tar xfi -
  22. On the machine to be backed up (that is already upgraded), in a screen session, making sure you get any slave info:
    time innobackupex --slave-info --stream=tar $DATADIR | nc (IP/hostname) 9999
  23. Once xtrabackup is complete, fix permissions on the datadir:
    chown -R mysql:mysql /var/lib/mysql/
    chmod 775 /var/lib/mysql
  24. Prepare the backup:
    time innobackupex --apply-logs --target-dir=/var/lib/mysql
  25. Fix permissions on the datadir again:
    chown -R mysql:mysql /var/lib/mysql/
    chmod 775 /var/lib/mysql
  26. Restart MySQL and look for errors
  27. Test.
  28. If this machine was a slave, re-slave it. Let it catch up, making sure there are no data integrity errors, and no replication errors.
  29. Re-slave any slaves of this machine, if needed.
  30. Turn back on Nagios, making sure all checks are green first.
  31. Put the machine back into the load balancer, if applicable.
  32. Inform folks the upgrade is over

It's long and detailed, but not particularly difficult.