The big, big picture
I've recently been tinkering with best practices for backing up and recovering DB2 data servers. I noticed that many people focus mainly on their backup strategy to ensure the safety and availability of their data. But even the best backup strategy is of little use unless it is paired with a strong recovery strategy: one that lets you quickly restore access to data after a software, hardware, or user failure.
My goal is to turn the two DB2 servers powering BigDataUniversity.com into an example of DB2 backup and restore best practices, with:
- an effective backup strategy that
  - provides continuous availability during backup,
  - and, ideally, allows the backup to run on the HADR standby instead of the primary (hello, utopia!);
- a rapid recovery strategy, including continuous availability during restore;
- and a strategy for backup retention.
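To make the backup side concrete, here is a rough sketch of what those goals translate to at the command line. An online backup keeps the database available while the image is taken, and DB2's recovery history can be pruned to enforce retention. The database name, paths, image file name, and cutoff timestamp are assumptions for illustration, not the actual BigDataUniversity configuration:

```shell
# Online backup keeps the database available while the image is taken.
# ONLINE requires archive logging (LOGARCHMETH1) to be enabled.
db2 "BACKUP DATABASE BDU ONLINE TO /db2/backups COMPRESS INCLUDE LOGS"

# Verify the integrity of the resulting backup image
db2ckbkp /db2/backups/BDU.0.db2inst1.DBPART000.20130101120000.001

# Retention: drop recovery history entries (and their files) older than a cutoff
db2 "PRUNE HISTORY 20130101 AND DELETE"
```

Note that the "backup on the HADR standby" part of the wish list is exactly the utopian bit: the standby stays in rollforward mode, so the backup above has to run on the primary.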
s3cmd makes your life easy
Part of the optimization is persisting the backups on S3 and using s3cmd to automate their deployment. On the Ubuntu-powered Amazon instances, everything went smoothly. But as soon as I wanted to experiment from my Mac, I first looked into ruby-s3cmd. The ruby gem felt like it was bogging me down, so I went back to s3cmd and simply installed it:
- first, download s3cmd
- then switch to root just for the install:
sudo python setup.py install
- finally, run
s3cmd --configure
to set up your AWS S3 credentials
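Once configured, pushing a backup image to S3 and sweeping out old local images takes just a couple of commands. The bucket name, file names, and the 30-day retention window below are made-up examples, not the real setup:

```shell
# Upload a backup image to S3 (hypothetical bucket and file names)
s3cmd put /db2/backups/BDU.0.db2inst1.DBPART000.20130101120000.001 \
      s3://my-backup-bucket/db2/

# See what is already in the bucket
s3cmd ls s3://my-backup-bucket/db2/

# Local retention: remove backup images older than 30 days
find /db2/backups -name '*.001' -mtime +30 -delete
```

Wrap those in a cron job and the whole deployment runs hands-free.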