About Me

I am currently working as a DB Consultant with Ashnik, where we build open source and cloud-based enterprise-class solutions. Prior to this I worked as a DBA at Misys India (Banking Services), working on IBM DB2, Oracle, and Microsoft SQL Server. I was selected as an IBM Champion for Data Management in 2011. I like a bit of photography, cycling, and poetry, and I just love gadgets!

Tuesday, July 30, 2013

Recoverability of PostgreSQL-2: Setup of Hot Standby, failover and rebuilding the master node

In my last post, I discussed certain concepts of backup, recovery, and high availability. As promised, in this post I will cover the specifics of implementing a Hot Standby server, along with failover and switchover scenarios.

Step 1: Installation

The first step is to install PPAS (the same version) on two different servers (running the same OS and OS version). For this blog, I tried these steps on two servers, each running Postgres Plus Advanced Server 9.2.1 on RHEL 6.1. For the sake of simplicity, instead of overwriting the default DB cluster/instance on the secondary DB server, I created a separate DB instance:
  • Primary DB Server:
    •  Name: dbserver1
    •  IP Address: 192.168.160.147
    •  Data directory: /opt/PostgresPlus/9.2AS/data
    •  Port: 5444
  • Standby DB Server:
    •  Name: dbserver2
    •  IP Address: 192.168.160.150
    •  Data directory: /opt/PostgresPlus/9.2AS/data2
    •  Port: 5222
Step 2: Parameter Setup

Change the parameters below in postgresql.conf:
  •   wal_level = hot_standby
  •   max_wal_senders = 3
  •   wal_keep_segments = 128 (optional, depending on load)
  •   replication_timeout = 5s (optional)
  •   hot_standby = on (required, but effective only on the hot standby server)
To ease the pain of backup-restore and failover-failback, I created two backups of postgresql.conf on the primary server (under the data directory):
1.   postgresql.conf.dbserver1
2.   postgresql.conf.dbserver2

Both files are identical, the only difference being the value of port (dbserver1=5444 and dbserver2=5222).
The replication-related parameters and hot_standby are kept the same in both files, since the replication parameters make no difference on the secondary server (unless you use cascaded replication) and the hot_standby value is ignored on the primary server.
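
A minimal sketch of creating the two copies on the primary, assuming the paths above (only the port line should differ between them):
    cd /opt/PostgresPlus/9.2AS/data
    cp postgresql.conf postgresql.conf.dbserver1    # keeps port = 5444
    cp postgresql.conf postgresql.conf.dbserver2    # then edit its port to 5222
    diff postgresql.conf.dbserver1 postgresql.conf.dbserver2   # should show only the port line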

Add a new entry in pg_hba.conf:
host  replication   all   192.168.160.0/24      trust
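
Note that pg_hba.conf changes take effect on a reload, while wal_level, max_wal_senders, and hot_standby require a restart; for example, on the primary:
    pg_ctl -D /opt/PostgresPlus/9.2AS/data reload            # picks up pg_hba.conf
    pg_ctl -D /opt/PostgresPlus/9.2AS/data restart -m fast   # needed for wal_level/max_wal_senders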

Step 3: Create recovery.conf

Create two dummy recovery.conf files, namely recovery.conf.dbserver1_is_master and recovery.conf.dbserver2_is_master, one for each server acting as the master.

recovery.conf.dbserver1_is_master (to be used on dbserver2 while dbserver1 is the master)
standby_mode = 'on'
primary_conninfo = 'host=192.168.160.147 port=5444 user=enterprisedb password=ashnik'
restore_command = 'scp enterprisedb@192.168.160.147:/mnt/arch/%f %p'   # optional; needs archiving to be enabled on the primary
recovery_target_timeline = 'latest'   # optional
trigger_file = '/opt/PostgresPlus/9.2AS/data2/recover.trigger'

recovery.conf.dbserver2_is_master (to be used on dbserver1 while dbserver2 is the master)
standby_mode = 'on'
primary_conninfo = 'host=192.168.160.150 port=5222 user=enterprisedb password=ashnik'
restore_command = 'scp enterprisedb@192.168.160.150:/mnt/arch/%f %p'   # optional; needs archiving to be enabled on the primary
recovery_target_timeline = 'latest'   # optional
trigger_file = '/opt/PostgresPlus/9.2AS/data/recover.trigger'

For ease of management, you should keep the passwords the same on both servers. Here the password is used in plain text, but one can always use a password file or an md5 password.
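
For instance, a ~/.pgpass file for the enterprisedb OS user would let you drop the password from primary_conninfo entirely (the file must be chmod 600):
    # ~/.pgpass   format: hostname:port:database:username:password
    192.168.160.147:5444:replication:enterprisedb:ashnik
    192.168.160.150:5222:replication:enterprisedb:ashnik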

Step 4: Backup

You need to take a cold backup of your primary server and restore it on the secondary server (see the sketch after the list below). Along with the rest of the files in the backup, you will also get the files below:
  1.    postgresql.conf.dbserver1
  2.    postgresql.conf.dbserver2
  3.    recovery.conf.dbserver1_is_master
  4.    recovery.conf.dbserver2_is_master
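
A minimal sketch of that cold backup, assuming passwordless ssh between the two servers and that /opt/PostgresPlus/9.2AS/data2 does not exist yet on dbserver2:
    # on dbserver1: a cold backup needs the primary to be shut down
    pg_ctl -D /opt/PostgresPlus/9.2AS/data stop -m fast
    scp -r /opt/PostgresPlus/9.2AS/data enterprisedb@192.168.160.150:/opt/PostgresPlus/9.2AS/data2
    # on dbserver2: drop any stale PID file and keep permissions strict
    rm -f /opt/PostgresPlus/9.2AS/data2/postmaster.pid
    chmod 700 /opt/PostgresPlus/9.2AS/data2
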
Step 5: Setup of Standby Server

Now copy postgresql.conf.dbserver2 as postgresql.conf on dbserver2.
Then copy recovery.conf.dbserver1_is_master as recovery.conf.
Start the DB cluster on the secondary server first, to make sure it does not lose any transactions. You will see an error in the log complaining about the non-availability of the primary server, which can be ignored at this point. Confirm that the secondary database instance is up and running and that you can connect and fire read queries on the database.
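
Put together, a sketch of those steps on dbserver2 (paths as above):
    cd /opt/PostgresPlus/9.2AS/data2
    cp postgresql.conf.dbserver2 postgresql.conf
    cp recovery.conf.dbserver1_is_master recovery.conf
    pg_ctl -D /opt/PostgresPlus/9.2AS/data2 start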

Step 6: Start-up the Primary Server

Once the standby is confirmed to be up, start the primary server.

Check the primary and secondary servers. Connect to each of them to confirm the connectivity, role, and read/write accessibility.

You can use the “pg_ctl status” OS command and the “pg_is_in_recovery()” SQL function to confirm the status of each database.
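
For example (assuming PPAS's default edb database):
    pg_ctl -D /opt/PostgresPlus/9.2AS/data status            # on dbserver1
    pg_ctl -D /opt/PostgresPlus/9.2AS/data2 status           # on dbserver2
    psql -p 5444 -d edb -c "SELECT pg_is_in_recovery();"     # primary: returns f
    psql -p 5222 -d edb -c "SELECT pg_is_in_recovery();"     # standby: returns t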

Now let’s try to do a failover.

Step 7: Create a Failure

So, first there has to be a failure. On the primary site, kill the postmaster process:
        kill -9 <postmaster PID>
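
If you need the postmaster's PID, the first line of the postmaster.pid file in the data directory holds it, so on the primary something like this does the job:
        kill -9 $(head -1 /opt/PostgresPlus/9.2AS/data/postmaster.pid)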

Now connect to the secondary database and check whether you can do write operations, or simply check whether it is still in recovery state with the pg_is_in_recovery() function. So, do you still get “t” as the output? Are you still unable to perform write operations on the secondary database?

Well, don’t be surprised: PostgreSQL does not fail over on its own. There have always been two schools of thought about any failover mechanism: one which says “auto failover” and the other which says “manual failover”. Given a choice, I always prefer a manual failover for a DR site. This ensures that my DR site does not mistakenly treat a network failure as a disaster. Moreover, the DR site is not just the database; one also needs to make sure the application and client connectivity are modified accordingly. Hence it’s best to keep it manual. Auto-failover is useful if you want to do hot streaming replication for HA (which I will discuss in a later post in this series).

Step 8: Switch the Secondary/Slave Database to Read/Write

On the secondary server, create the trigger file (as specified in recovery.conf):
touch /opt/PostgresPlus/9.2AS/data2/recover.trigger

Now connect again and check whether your current live sessions and new sessions to the secondary database are able to do write operations. If yes, then great! You have just completed a successful failover.
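
A quick sketch of such a check (failover_test is just a throwaway table name):
    psql -p 5222 -d edb -c "SELECT pg_is_in_recovery();"    # should now return f
    psql -p 5222 -d edb -c "CREATE TABLE failover_test (id int); DROP TABLE failover_test;"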

Step 9: Rebuilding the Master Database

Now we need to rebuild the master (assuming that the master database server is back up).

First of all, clean up the database cluster on the old primary server:
            rm -Rf /opt/PostgresPlus/9.2AS/data

Now take a backup from the new primary (dbserver2) to rebuild the master:
pg_basebackup -D /opt/PostgresPlus/9.2AS/data -h 192.168.160.150 -W -p 5222 -Fp --xlog-method=stream

Once the backup is complete, copy postgresql.conf.dbserver1 as postgresql.conf, remove recovery.done, and copy recovery.conf.dbserver2_is_master as recovery.conf (a combined sketch follows).
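
Put together, a sketch of the rebuild on dbserver1 (paths and file names as above):
    rm -Rf /opt/PostgresPlus/9.2AS/data
    pg_basebackup -D /opt/PostgresPlus/9.2AS/data -h 192.168.160.150 -W -p 5222 -Fp --xlog-method=stream
    cd /opt/PostgresPlus/9.2AS/data
    cp postgresql.conf.dbserver1 postgresql.conf
    rm -f recovery.done
    cp recovery.conf.dbserver2_is_master recovery.conf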

Step 10: Start the Old Primary DB as the New Slave

Now start the DB cluster on the master node. Once the start-up is successful, make sure everything is alright by connecting to the database and firing the command below:
SELECT pg_is_in_recovery();   -- expected output is "t"

Then fire the command below on both dbserver1 and dbserver2; the output should be the same on both:
SELECT txid_current_snapshot();
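
For example, from either machine (again assuming the default edb database):
    psql -h 192.168.160.147 -p 5444 -d edb -c "SELECT txid_current_snapshot();"
    psql -h 192.168.160.150 -p 5222 -d edb -c "SELECT txid_current_snapshot();"
    # the two outputs should match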

Carefully inspect the log files on the secondary node (dbserver1) to confirm that recovery is in progress and there are no issues in the replication.

I have dodged a few details in this post:
1.     Details of the pg_basebackup command (used to take a backup), which will be covered in one of my upcoming posts in this series
2.     Details of archive logging and its setup, which is going to be the next post

Thursday, July 11, 2013

Recoverability of PostgreSQL-1: Recovery Concepts

Recently one of my colleagues told me that it’s very difficult (he actually said "impossible") to do failback and switchback in PostgreSQL. Well, I thought of giving it a try. In my upcoming posts, I will discuss the process and details of my experiment. But before that, it’s very important to understand certain concepts that people generally get confused about, ending up with a messed-up recovery strategy. Let me discuss a few terms which are very important for designing your backup-recovery and disaster-recovery processes:

1) Archiving: Archiving your transaction log (xlog/WAL) segments. This is not a substitute for creating a DR site; it is required for taking a hot/online backup. The same is true for at least MS SQL Server and DB2.

2) Backup: A process in which you take a backup of your database. It can be logical, where you take a backup of tables and rows, or physical, where you back up the data files and other configuration files needed by your database/instance/cluster. Physical backups can be cold/offline or online/hot.

3) Crash Recovery: Your database service/server crashed and your database is recovering from it. Generally, if your disk is intact, this can be achieved with your WAL segments alone. Whereas if your disk has crashed and you are restoring your database from a backup, you will need the archived WAL files and the latest WAL files to apply the changes made since the backup was taken.

4) Disaster Recovery: You had a crash after which your database server is not available, and you have to either rebuild the whole system from scratch using a backup or start using a DR site you had built for just this scenario.

5) Failover: A term which refers to using your DR site when there is a major failure at your primary site.

6) Switchover: A term which refers to switching the roles of your primary and DR sites, either because the primary site has come up again after a recent major failure or due to some maintenance work for which you have to switch the roles.

7) Switchback: A term which means switching/restoring the roles of the "new primary" and "new DR" sites back, i.e. the original primary site regains control. This is generally a planned activity, and everyone desires minimal downtime for it.

8) Failback: A term generally used interchangeably with switchback, but I would reserve it for those scenarios where your "new primary" (or "switched primary") server goes down for some reason and you have to promote the "new DR" (which was originally the primary site) back to primary.

9) Warm Standby: A standby server which is an exact (almost real-time) replica of your primary server, is always available in recovery mode, and cannot be accessed until a recovery is triggered.

10) Hot Standby: A standby server which is an exact (almost real-time) replica of your primary server, is always available in recovery mode, and cannot be accessed for write commands until a recovery is triggered. It can, however, be used for read queries.

11) WAL Archive Replication: A replication process where the WAL segments which have been archived are copied and replayed on the standby server.

12) Streaming Replication: A replication process where the WAL segments are copied directly to the standby servers (the standby server need not wait for the WAL files to be archived).
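
To make the distinction concrete, here is a minimal sketch of the knobs each approach revolves around in PostgreSQL 9.x (host names are placeholders):
    # WAL archive replication: the primary archives segments, the standby restores them
    archive_mode = on                                        # primary, postgresql.conf
    archive_command = 'cp %p /mnt/arch/%f'                   # primary, postgresql.conf
    restore_command = 'scp user@primary:/mnt/arch/%f %p'     # standby, recovery.conf

    # Streaming replication: the standby pulls WAL directly over a replication connection
    standby_mode = 'on'                                      # standby, recovery.conf
    primary_conninfo = 'host=primary port=5444 user=enterprisedb'   # standby, recovery.conf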

So folks, these are some of the terms that should give you better insight into the concepts. And if you already have a recovery strategy, then hopefully these terms have helped you understand your process better.
I shall be back with a test which does failover, switchover (after a failover), and then switchback.

PS: Any comments or corrections about the terms and their definitions/explanations used here are more than welcome.