It has been quite a busy quarter for me... the office has kept me pre-occupied with my job. But here I am, back with all the things I wanted to share...
We have already seen how to wisely choose optimum values for various tablespace-related parameters and how to decide on tablespace sizing. Now we will proceed to the last thread in this discussion and see how one can optimize bufferpools so that they match the expectations set by the tablespace optimizations we covered in the previous two posts.
To start with, as a rule of thumb you should have a separate bufferpool for the temporary tablespace, and if you have a separate user temporary tablespace (which I strongly recommend), you should have a different bufferpool for that too. If you do not, you will see a lot of sort overflows, low bufferpool utilization in snapshots, or high read/write activity at the disk level.
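A minimal sketch of that separation, assuming DB2 LUW DDL; all pool and tablespace names, page sizes, and pool sizes here are hypothetical placeholders you would size for your own workload:

```sql
-- Dedicated pools for system temp and user temp (names/sizes are placeholders)
CREATE BUFFERPOOL BP_SYSTEMP SIZE 5000 PAGESIZE 8K;
CREATE BUFFERPOOL BP_USRTEMP SIZE 2000 PAGESIZE 8K;

-- System temporary tablespace (sorts, joins) on its own pool...
CREATE SYSTEM TEMPORARY TABLESPACE TEMPSPACE2
  PAGESIZE 8K BUFFERPOOL BP_SYSTEMP;

-- ...and a user temporary tablespace (declared global temporary tables)
-- on a separate pool, so temp activity never competes with regular data.
CREATE USER TEMPORARY TABLESPACE USERTEMP1
  PAGESIZE 8K BUFFERPOOL BP_USRTEMP;
```

Note that a tablespace can only use a bufferpool with a matching page size, so the pools and tablespaces above are created with the same PAGESIZE.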
You should also ideally have different bufferpools for index and data tablespaces. This helps you avoid situations where an index tree is forced out of the bufferpool to make space for a table read operation.
More importantly, you should have separate bufferpools for tablespaces storing transactional/operational tables (which are generally huge in volume and cardinality) and for tablespaces storing configuration data tables. Configuration data is generally queried again and again; for example, every transaction you do in a bank fetches the transaction code (cash deposit, withdrawal, etc. are transaction codes). Now suppose the tablespace storing the table of transaction codes is mapped to the same bufferpool as the tablespace storing the table that logs every transaction (your cash withdrawal, your friend's credit-card payment, and so on). In such a scenario, when a user runs an enquiry on transactions, it will flush the transaction-code data out of the pool, and that data will have to be read again for the next transaction. If transaction enquiries are a frequent activity, performance will suffer because of this faulty bufferpool design.
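Putting the last two points together, a sketch of the three-way split might look like this (again DB2 DDL; all names and sizes are hypothetical, and each ALTER assumes the tablespace and target pool share a page size):

```sql
-- Three pools so index pages, hot config data, and bulk transactional
-- data do not evict one another (names/sizes are placeholders).
CREATE BUFFERPOOL BP_DATA   SIZE 50000 PAGESIZE 8K;
CREATE BUFFERPOOL BP_INDEX  SIZE 20000 PAGESIZE 8K;
CREATE BUFFERPOOL BP_CONFIG SIZE 2000  PAGESIZE 8K;

-- Map each tablespace to its dedicated pool
ALTER TABLESPACE TS_TXN_DATA  BUFFERPOOL BP_DATA;
ALTER TABLESPACE TS_TXN_INDEX BUFFERPOOL BP_INDEX;
ALTER TABLESPACE TS_CONFIG    BUFFERPOOL BP_CONFIG;
```

The small dedicated BP_CONFIG pool means the frequently re-read transaction-code pages stay resident even while a large transaction enquiry churns through BP_DATA.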
You can keep a watch on bufferpool utilization with the SYSIBMADM.SNAPBP snapshot view. Ideally, I would say the data bufferpool hit ratio should be above 90% (though 85% might be acceptable in certain cases). For index bufferpools, if it is below 95% you need to worry; ideally it should be around 97%.
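A sketch of how to compute those hit ratios from SYSIBMADM.SNAPBP, using the standard logical-vs-physical-read formula (the NULLIF guards are there only to avoid division by zero on idle pools):

```sql
-- Hit ratio = (logical reads - physical reads) / logical reads * 100,
-- computed separately for data and index pages per bufferpool.
SELECT BP_NAME,
       DEC((1 - (POOL_DATA_P_READS * 1.0)
                 / NULLIF(POOL_DATA_L_READS, 0)) * 100, 5, 2)
         AS DATA_HIT_PCT,
       DEC((1 - (POOL_INDEX_P_READS * 1.0)
                 / NULLIF(POOL_INDEX_L_READS, 0)) * 100, 5, 2)
         AS INDEX_HIT_PCT
FROM SYSIBMADM.SNAPBP;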
Unlike other performance-improvement techniques such as indexes and table partitioning, the need for which can be easily guessed over a period of monitoring, a flaw in tablespace design or bufferpool allocation cannot always be caught so easily. The snapshots or other monitor data might not be indicative or point directly at it. I suggest thinking about these well in advance. Many people think this is more of a production call, but a production DBA knows less than you (the development DBA, involved in table design) about the number of tables, the volume of data, and the nature of the queries.
I shall some time try to sum up how to use various monitoring techniques to keep track of the parameters set for your bufferpools and tablespaces.
Very informative. The paragraph starting "Generally transactional..." is not very clear. Can you elaborate on why a block-based bufferpool will help?
Imagine yourself in a library: books are blocks of information, with each page of a book representing, say, a row of data (for simplification). Now suppose you have to gather some information on DB2, and you are reading the Guides, which are split across 9 different books. So you go and fetch one book from the library and keep it on your personal bookshelf. While preparing your content you realise you need volume #2 as well, so you go and fetch that too; when you come back, the place adjacent to your DB2 volume #1 is occupied by some other book, so you search for another location and place it there. Imagine if you now have to go and get the subsequent volumes as well...
Now, if you had visited the library once and got all 9 volumes at once (prefetching)...
you would obviously spend less time picking up 9 books in one single trip than in nine trips to nine different shelves...
But still, all the books have to be placed in different locations on your multi-storey bookshelf: you have to bend down, pick up the books one by one, and then keep each in an empty slot one by one...
Instead, if before going to the library you had emptied one whole storey of your bookshelf and kept all the books at once in that emptied location, you would obviously spend less time: nine books picked in one single attempt and placed in one single attempt at one single location. That reserved storey is what a block-based bufferpool gives you...
Sorry, but no: transactional systems (OLTP) should be performing random (synchronous) I/O and NOT prefetch/scan (asynchronous) I/O. Scanning in OLTP is foolish, costly, inappropriate, raises CPU and I/O consumption, and puts applications at risk of locking problems. Please read the DB2 LUW Performance blogs at http://www.dbisoftware.com/blog/db2_performance.php
Regards, Scott
Hi Scott,
Well, I agree to a certain extent... But these days hardly any system is purely OLTP (maybe I am yet to see more application environments and live implementations of products from different domains). They will have reports and online enquiries. Transactional data is mostly "write-only" data in transactional systems; it is read once in a while, and then only specific records. But most of the reads on such systems happen during a report or an enquiry (I may be too constrained here due to lack of experience with many systems)...
I will agree with you that for a purely transactional workload there should not be any prefetching, and hence I will be editing the post to get rid of "Generally" in that paragraph...
"Well I agree to certain extent... "?
Your inexperience with OLTP systems does not authorize you to confuse people. Scott is absolutely right. It is ridiculous that you are trying to argue an unfounded statement.
Well, I will still stick to my statement that in most real-life scenarios, reserving some amount of your pools for block I/O will help with batches, enquiries, and reports. In a pure OLTP environment it would be foolish to force prefetching/blocked I/O, as the reads are mostly random. But where you have batches and reports, reserving 5-10% of the total pool will help. I will emphasize the point that one has to monitor before and after reserving pool pages for blocked I/O...
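For what a "5-10% block area" might look like in DB2 DDL (the pool name and sizes below are hypothetical; NUMBLOCKPAGES should be a multiple of BLOCKSIZE and cannot exceed 98% of the pool):

```sql
-- Mixed-workload pool: mostly page-based for random OLTP reads, with
-- ~10% (2048 of 20000 pages) reserved as a block-based area so that
-- sequential prefetch for batches/reports lands in contiguous blocks.
CREATE BUFFERPOOL BP_MIXED SIZE 20000 PAGESIZE 8K
  NUMBLOCKPAGES 2048 BLOCKSIZE 32;
```

After the change, the before/after monitoring mentioned above can lean on the block-I/O monitor elements in the bufferpool snapshot to check whether the block area is actually being used by sequential prefetch.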
I agree with you, Sam. In real-life scenarios systems are not pure OLTP; there are always reports running against tables, which do sequential I/O. Saying that in theory OLTP should not be doing this is great, but you can say that only if you own the company. :) In workplaces you end up making compromises; most of the time one DBA is fighting a few developers/business analysts who want things done a certain way. Hey, you know what, in return the developers will actually be willing to send you SQL to tune! :-D
Scott, that plug really wasn't needed.
Peter.
Peter, you brought a smile to my face with that SQL tuning example... Recently someone in development wanted our team to review basic JOIN queries (a simple two-table join) to confirm that they would not be a "performance bottleneck", whereas they will never let us change the table design and indexing during development, which might avoid some performance bottlenecks or code changes in the future... :-D