Somehow I am managing to keep up with my busy office life and on-call support... which, it seems, is literally 24x7...
I find myself out of topics while talking to old mates... Office and being single are the only topics I can blabber about... This blog should give me a change... :)
So this is not going to be about HADR tips or how you should decide your backup schedules... It is more about how you can save time while taking backups and how you can improve the speed of restores (and hence reduce non-availability time)...
To start with, it is generally advised that you take backups when your system is not being accessed or is having low traffic... While restoring, I won't offer the same advice, you don't actually have much of a choice... Your system is not available and the best you can do is reduce the time it takes...
You throttle the priority with
SET UTIL_IMPACT_PRIORITY FOR utility_id TO num
which will throttle the backup utility, or you can set the same by specifying the "UTIL_IMPACT_PRIORITY" clause in the backup command as well...
We should also recollect that if your UTIL_IMPACT_LIM dbm cfg is set too low, then no matter how high you set UTIL_IMPACT_PRIORITY, it cannot go beyond the threshold set by UTIL_IMPACT_LIM...
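Putting that together, a rough sketch from the command line would look like this (the utility ID 3, the priority 10 and the database name "sample" are just illustrative values)...

# find the ID of the running backup utility
db2 LIST UTILITIES SHOW DETAIL
# throttle utility ID 3 down to a low priority
db2 SET UTIL_IMPACT_PRIORITY FOR 3 TO 10
# or start the backup already throttled
db2 BACKUP DATABASE sample UTIL_IMPACT_PRIORITY 10
# the instance-wide cap that no throttled utility can exceed
db2 UPDATE DBM CFG USING UTIL_IMPACT_LIM 10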
To make the backup operation faster it is good to increase your backup/restore buffer size ("BACKBUFSZ"/"RESTBUFSZ" in the dbm cfg), or you can override it in the backup/restore command with "BUFFER buffer_size"... Also you should choose a good (not your lucky) number for the number of backup/restore buffers to be used... This has to be decided by your db size and the RAM available... It can be set with "WITH n BUFFERS"... Multiple buffers especially boost the performance of backup when using PARALLELISM... Parallelism decides the number of buffer manipulators...
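For example, a backup and a matching restore with tuned buffers might look like this (the buffer count, the buffer size of 4096 4K pages and the paths are only indicative, size them to your db and RAM)...

# backup with 4 buffers of 4096 pages each and 2 buffer manipulators
db2 BACKUP DATABASE sample TO /backup/sample WITH 4 BUFFERS BUFFER 4096 PARALLELISM 2
# restore with the same buffer settings
db2 RESTORE DATABASE sample FROM /backup/sample WITH 4 BUFFERS BUFFER 4096 PARALLELISM 2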
Then comes compression, my favorite... If you think compression slows down the backup because of the CPU processing involved in the compression algorithm, then you are correct in your analysis but not in the final conclusion it leads you to...
The compression algorithm takes some extra CPU cycles, but you save a great deal on I/O cycles, which are more expensive (in terms of time and their availability)... Hence compression will not just save space but time as well... Plus your buffers get utilized more efficiently... The same applies while you are restoring a backup: you spend less time on reads and more data fits in your buffers, and hence your restore is faster...
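A compressed backup is just one extra keyword (database name and path are again illustrative)...

# the image is compressed as it is written, saving I/O and space
db2 BACKUP DATABASE sample TO /backup/sample COMPRESS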
Additionally, going by basic h/w and OS concepts, having the backup file on a disk other than the one where your db resides (or is going to be created, during restore) is often faster... Of course there are other reasons as well for which you should not store backups on the same disk where the DB resides...
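As a small illustration (assuming /mnt/bkpdisk1 and /mnt/bkpdisk2 are file systems sitting on storage separate from the database), you can even give the backup more than one target and the image gets split across them...

# write the backup image across two targets on separate disks
db2 BACKUP DATABASE sample TO /mnt/bkpdisk1, /mnt/bkpdisk2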
Incremental/delta backups help you take smaller backups, saving time by backing up only what has changed (or the delta of the change)... While restoring, though, you have to restore a full backup first and then apply the changes one by one, which makes the restore slower... But if you use backup/restore to ship data periodically (say once a day or week) to a different server, incremental/delta backups are best... because they take less time to be backed up and then less time to be restored (as you are restoring only the diff/incremental backup)...
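A minimal sketch of such a cycle, assuming the database is called sample and backups go to /backup/sample (note that TRACKMOD must be ON, followed by one full backup, before incrementals are allowed)...

# enable tracking of modified pages (needed for incremental backups)
db2 UPDATE DB CFG FOR sample USING TRACKMOD ON
# weekly full backup
db2 BACKUP DATABASE sample TO /backup/sample
# daily incremental: everything changed since the last full backup
db2 BACKUP DATABASE sample INCREMENTAL TO /backup/sample
# or a delta: only what changed since the last backup of any kind
db2 BACKUP DATABASE sample INCREMENTAL DELTA TO /backup/sample
# on the target server, let DB2 work out the restore chain itself
db2 RESTORE DATABASE sample INCREMENTAL AUTOMATIC FROM /backup/sample TAKEN AT <timestamp>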
Finally, snapshot backups using NetApp/Tivoli etc. can speed up the backup process drastically... They are a bit expensive to invest in, but if you have a very (and I mean very) large database your investment will be well paid off...
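With DB2 Advanced Copy Services set up against such storage (that part is the real investment), the command itself stays short, along the lines of the sketch below (database name again illustrative)...

# hand the copy work over to the storage layer
db2 BACKUP DATABASE sample USE SNAPSHOT
# and the restore is just as brief
db2 RESTORE DATABASE sample USE SNAPSHOT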
To close this, a backup is nothing but making an image of your database in a single file... The database is stored in datafiles which are spread over various blocks on your disk... Now if those blocks are scattered, the backup will be quite slow, which gives you another reason to keep your db well re-orged... Also, if your db is spread across various disks/file systems it can boost the performance of backups and restores as well...
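A quick way to check for and fix that fragmentation (the schema and table names below are just placeholders)...

# see which tables/indexes have gone out of shape
db2 REORGCHK UPDATE STATISTICS ON TABLE ALL
# reorganize an offending table
db2 REORG TABLE myschema.mytable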
I will be discussing my daily experiences and tips I have learned (from Google, my seniors and various other blogs) for better DB2 performance...
About Me
- Sameer
- Singapore
- I am currently working as a DB Consultant with Ashnik. We are into Open Source and Cloud based enterprise class solutions. Prior to this I was working as a DBA @ Misys India (Banking Services), working on IBM DB2, Oracle and Microsoft SQL Server. Selected as IBM Champion for Data Management 2011. I like a bit of photography, cycling, poetry and I just love gadgets!!