About Me

I am currently working as a DB Consultant with Ashnik. We provide open-source and cloud-based enterprise-class solutions. Prior to this I worked as a DBA at Misys India (Banking Services), working on IBM DB2, Oracle and Microsoft SQL Server. Selected as an IBM Champion for Data Management 2011. I like a bit of photography, cycling, poetry, and I just love gadgets!


My DB2 success story: Five things to check when you see high memory usage on the DB server

Hi, so I am back with my blog sequence on DB2's Got Talent... I know I have been a little late with this one, but I couldn't help it with India playing the Cricket World Cup... They finally won it, after 28 long years, and became the first host nation to do so... So I had given up everything else to watch the team create history...

By the way, the grand finale concluded recently and the last four finalists are waiting for your votes... There are some really cool tips to adopt in your DBA life that can lift the quality of your work, so do watch the replay and vote before 7th April...

I hope Scott and DBI will come up with more of this... Some of you might find it interesting that JB (one of the contestants who made it to the mega finals) has been invited by Susan Visser to write a book... To get a glimpse of her, check this out: she was Susan's guest blogger...

Well, let me continue with my would-have-been presentation for the episode just before the grand finale... All the contestants talked about their success stories, each concentrating on one single thing with DB2 that made them a hero (well, that is what I assume from all the presentations)...

So, for me, being a junior DBA, there have not been many such moments... One I have already shared in my previous blog... Let me attempt to share another one...

We had a client running DB2 8.2.8, and we had planned a migration to DB2 9.5... But before the transition could kick off, a big performance bottleneck appeared... All of a sudden they were reporting high I/O, high memory consumption and very heavy swap-in/swap-out activity...
Eyebrows were raised and, much like any other issue, this too made people point towards the database... Our team was consulted, and with my seniors on leave that day, I found myself in a lot of mail exchanges... Let me share the information I requested:

First step: I requested a snapshot of memory usage by the DB2 instance and by the database hosting the production DB:
db2mtrk -i -d [-v]

It confirmed that the database was not using too much memory; it was using only what had been allocated to the bufferpools, the caches and the database shared memory...
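For reference, a minimal sketch of that check (flag meanings per the db2mtrk documentation; the -r interval is an optional extra I find handy for watching growth over time):

```shell
# Report memory pool usage for the instance (-i) and active databases (-d).
# -v prints sizes in bytes; -r 60 repeats the report every 60 seconds,
# which is useful for spotting steady growth in a particular pool.
db2mtrk -i -d -v -r 60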

The second thing I did was request a check of which tablespaces the tables were lying in, and of the bufferpools used by each of the different tablespaces we had suggested...

This confirmed that not all the tables were in the same tablespace, and that not all tablespaces pointed to the same bufferpool. That laid to rest my doubt that perhaps only one bufferpool was servicing all the I/O while the other bufferpools had simply reserved their share of memory without using it... Moreover, a single bufferpool serving all tables/tablespaces would mean too many flush events...
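If you want to run the same check yourself, a catalog query along these lines (a sketch using the SYSCAT views; the tablespace and bufferpool names in your output will of course differ) maps each tablespace to its bufferpool:

```sql
-- List each tablespace with the bufferpool it is bound to, so you can see
-- whether all I/O funnels through a single bufferpool.
SELECT T.TBSPACE, T.PAGESIZE, B.BPNAME, B.NPAGES
FROM   SYSCAT.TABLESPACES T
JOIN   SYSCAT.BUFFERPOOLS B
       ON T.BUFFERPOOLID = B.BUFFERPOOLID
ORDER BY B.BPNAME, T.TBSPACE;
```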

The third step of my investigation was to check whether CIO (Concurrent I/O) was enabled at the filesystem level, and whether filesystem caching was switched off at the tablespace level...

The result confirmed the cause of the high I/O and the excessive memory usage...
We found that filesystem caching was enabled for all the tablespaces (DB2 8.2.8 had this as the default, so take my advice and migrate to a newer version, where the default is to create tablespaces with filesystem-level caching off)... Moreover, CIO was disabled at the Unix level as well...
If filesystem-level caching is enabled, your data is first cached by the OS in memory, and then the database (i.e. the DB2 application) caches it again in the bufferpool area DB2 creates inside the application memory the kernel has allocated to it... Firstly, this causes too much memory to be used; secondly, time is wasted on the double caching...

Correcting this helped a lot...
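The fix itself is a one-liner per tablespace; a sketch (TS_DATA is a placeholder name, substitute your own tablespaces):

```sql
-- Disable OS filesystem caching for this tablespace so pages are cached
-- only once, in the DB2 bufferpool, instead of twice (OS cache + bufferpool).
ALTER TABLESPACE TS_DATA NO FILE SYSTEM CACHING;
```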

The fourth thing to be inspected was the prefetch size at the database and tablespace level; if this value is not apt, there may be too many I/O waits involved (though this was not in the problem definition, I thought it would be good to check)... The monitor element tablespace_prefetch_size will get you that info (I had to use the table-function interface, as I was working with the 8.2.8 version; it is much easier to get in v9.7)... Ideally, 1.5-3 times your block size is a good number for the prefetch size...
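A quick way to eyeball the prefetch settings from the catalog, plus the corresponding fix (TS_DATA and the value 64 are purely illustrative; PREFETCHSIZE is expressed in pages):

```sql
-- Compare prefetch size against extent size for every tablespace.
SELECT TBSPACE, EXTENTSIZE, PREFETCHSIZE
FROM   SYSCAT.TABLESPACES;

-- Raise the prefetch size where sequential scans are waiting on I/O.
ALTER TABLESPACE TS_DATA PREFETCHSIZE 64;
```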

The fifth thing I confirmed was whether any scope had been given for block I/O... You can reserve some space in your bufferpool for blocked I/O while creating the bufferpool... Blocked I/O is when DB2 performs sequential prefetching of pages into the block area of the buffer pool...
You can check the performance of blocked I/O using the block_ios and vectored_ios monitor elements belonging to your bufferpool...
For optimal performance, it is recommended (source: DB2 Info Center for v8) that you bind tablespaces with the same extent size to a block-based buffer pool with a block size equal to the extent size...
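As a sketch of that recommendation (all names and numbers here are assumptions for illustration): a bufferpool with a block area whose BLOCKSIZE matches a 32-page extent size might be created like this:

```sql
-- 50,000 4K pages in total; 6,400 of them reserved as the block area,
-- carved into blocks of 32 pages to match the tablespaces' extent size.
CREATE BUFFERPOOL BP_BLOCKED SIZE 50000 PAGESIZE 4K
       NUMBLOCKPAGES 6400 BLOCKSIZE 32;
```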

All of these checks can help you too, whenever you see a sharp increase in memory usage by DB2, high memory usage on the server hosting the DB2 database, very high I/O, or too many I/O waits...

Querying all these parameters regularly and fixing them (even when, or rather especially when, you are not facing any issues) will help you keep memory and I/O problems away...

My mantra for DB2 DBAs in production: a snapshot a day keeps performance issues away... i.e. run one snapshot a day, work through the results, and fix the problems you anticipate from them...
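In practice that daily snapshot can be as simple as a small script dropped into cron (SAMPLE and the output path are placeholders):

```shell
# Take one full snapshot a day and archive it by date for trend analysis.
db2 connect to SAMPLE
db2 get snapshot for all on SAMPLE > /var/db2snaps/snap.$(date +%F).out
db2 terminate
```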

Well, I am myself looking forward to DBI's much-awaited webinar on DB2 LUW Performance: 10 SQL Snapshots You MUST Run Today!
Register Now!!!

To end, I hope DBI won't mind me pretending to be a pseudo-contestant despite being out of the show now... With all due regard, I apologize if I have in any way overlooked any terms/conditions/legalities of this event, which has been a mega success... there will be many more to follow...
