About Me

I am currently working as a DB Consultant with Ashnik. We are into open-source and cloud-based enterprise-class solutions. Prior to this I was working as a DBA at Misys India (Banking Services), working on IBM DB2, Oracle and Microsoft SQL Server. Selected as IBM Champion for Data Management 2011. I like a bit of photography, cycling, poetry and I just love gadgets!!


An Alternative to TRUNCATE for pre-v9.7

Many a time you need to truncate a table at run time, i.e. while processing a batch, clearing a temp table, or flushing a table which stores some periodic stats etc... DB2 9.7 has an amazingly fast command which will do that in the blink of an eye, called TRUNCATE... But oops, you are not using DB2 v9.7... you upgraded to DB2 9 or DB2 9.5 recently and cannot take the effort required to port your application to the new version and do all the testing again... No sweat... I will be sharing a few tips which will help you imitate TRUNCATE TABLE.
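For reference, the 9.7 statement looks like this (a sketch; tablename is a placeholder, and TRUNCATE must be the first statement in its unit of work):

```sql
-- DB2 9.7+ only; TRUNCATE must be the first statement in the unit of work
TRUNCATE TABLE tablename IMMEDIATE;
COMMIT;
```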

1) Which is generic for all platforms...


but I am not very sure whether the space acquired by the table is retained in this case
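Option 1 is presumably the classic NOT LOGGED INITIALLY trick hinted at by the "NOT LOGGED" remark below (a sketch, not verbatim; tablename is a placeholder):

```sql
-- Reconstructed from context; run with auto-commit off so the ALTER and
-- the COMMIT fall in the same unit of work
ALTER TABLE tablename ACTIVATE NOT LOGGED INITIALLY WITH EMPTY TABLE;
COMMIT;
```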

2) Obviously

delete from tablename;

3) This will be specific to an OS platform and I will not suggest using it if you have to code it in your application:

i) *NIX --> import from /dev/null of del replace into tablename
ii) Windows --> import from nul of del replace into tablename

This operation, though faster than a normal delete, is logged and hence will not be as performant as the first... Some people argue that the second option is not recoverable, but we are looking at an alternative to TRUNCATE (which is not recoverable either) and hence recovery is not my priority... Moreover, I have committed the transaction as soon as I fire the "NOT LOGGED" command for "tablename", hence it should not be an issue...


  1. How is option 2 recoverable?

    If you do a rollforward beyond the point in time of the truncate:
    -- with options 1 and 2, the table will be marked as inaccessible
    -- with option 3, the table will be accessible, and data you insert after the "truncate" is recoverable.

    Generally speaking, option 3 is preferable.

    1. There was a "not" missing!!! :)

      Options 1 and 2 are never recoverable, and in fact if something goes wrong before the next backup you end up re-creating the table. But isn't that OK? I will be using "truncate" only on tables whose data is either required temporarily (the case with staging tables) or derived from various different tables and stays the same till the next load, so I am not much bothered about losing this table and its data (I either do not always need it, or I can re-create it post-recovery). But yeah, if someone has a requirement to "truncate" a table in a scenario other than these two, he cannot use options 1 and 2. I am not sure anyone would be comfortable using option 3 either, as there is one issue (small, but good enough for me not to use it in my code) with the import-from-/dev/null approach: the command differs between Unix and Windows! :-(

      I do not know why the hell some enterprises have to run on Windows servers :-(

  2. If you are after compatibility between Unix and Windows, you can try something like -

    echo "" > empty.txt
    import from empty.txt of del replace into table1
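One caveat worth checking (an assumption on my part): echo "" > empty.txt writes a newline, so the DEL-format import may try to load that blank line as a record. A zero-byte file sidesteps the question:

```shell
# Create a truly empty (zero-byte) file instead of one containing a newline
: > empty.txt            # POSIX shells; on Windows: type nul > empty.txt
wc -c < empty.txt        # byte count is 0
# then: db2 "import from empty.txt of del replace into table1"
```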

  3. I never thought of it! Just tried it! Works fine!!! :)

  4. What is the best way to delete records with a WHERE clause? I have a requirement where I need to delete about 1M records from a table which has 2.5M records. I am struggling to do so, as DB2 simply hangs after some time.

    1. Well, I am no longer a DB2 guy... but from my general database experience I can suggest you create an index (temporarily) on the column used in the WHERE clause. If that too is not working, then:

      export the rows you want to keep (those not matching the delete condition) to flat files

      truncate the table

      load/import the data back from the flat files

      Or do the deletion in chunks: use a windowing function and row enumerator, delete only 100K records at a time, and repeat until the delete returns 0 records.
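A sketch of that chunked delete in DB2 syntax (table, column, value and batch size are all placeholders; DB2 lets you delete from a fullselect, which is handy here):

```sql
-- Delete in batches of 100K; rerun until SQLCODE +100 (no rows found)
DELETE FROM (
    SELECT 1 FROM tablename
    WHERE  somecolumn = 'somevalue'     -- hypothetical filter
    FETCH FIRST 100000 ROWS ONLY
);
COMMIT;
```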