One of the recurring needs for a DBA or developer is to delete a huge volume of data from a table.
A single large delete often fails with rollback-segment or datafile-related errors.
To work around this, you can use this simple piece of PL/SQL, which deletes the data in batches and commits after every 1000 rows.
I have taken GL_BALANCES as the example table, and I am deleting the data for the 2004 periods (PERIOD_NAME values ending in '-04').
declare
  x number := 0;
  -- driving cursor: rowids of every row belonging to a 2004 period
  cursor b0 is
    select d.rowid
    from gl.gl_balances d
    where period_name like '%-04';
begin
  for b in b0 loop
    delete from gl.gl_balances d2
    where d2.rowid = b.rowid;
    x := x + 1;
    -- commit after every 1000 deleted rows, then reset the counter
    if x = 1000 then
      commit;
      x := 0;
    end if;
  end loop;
  commit;  -- pick up the final partial batch
end;
/
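The single-row DELETE inside the loop is the slow part. A bulk-collect variant of the same idea, sketched below and untested against your data, fetches up to 1000 rowids per pass and removes them with one FORALL statement per batch; the commit behaviour (and the caveat discussed in the comments) is unchanged:

declare
  -- collection type to hold one batch of rowids
  type rid_tab is table of rowid index by pls_integer;
  rids rid_tab;
  cursor b0 is
    select d.rowid
    from gl.gl_balances d
    where period_name like '%-04';
begin
  open b0;
  loop
    -- grab up to 1000 rowids per pass
    fetch b0 bulk collect into rids limit 1000;
    exit when rids.count = 0;
    -- one DELETE statement per batch instead of 1000 single-row deletes
    forall i in 1 .. rids.count
      delete from gl.gl_balances
      where rowid = rids(i);
    commit;
  end loop;
  close b0;
  commit;
end;
/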
2 comments:
I thought that combining huge selects with frequent commits could lead to "snapshot too old" (ORA-01555) errors. See this comment from Steve Karam:
"Lastly (and true even for manual undo management), if you commit more frequently, you make it more possible for ORA-01555 errors to occur. Because your work will be scattered among more undo segments, you increase the chance that a single one may be overwritten if necessary, thus causing an ORA-01555 error for those that require it for read consistency."
Blogged at http://www.oraclealchemist.com/oracle/do-frequent-commits-fix-ora-01555/
Don,
Thanks for your constructive feedback.
I agree with you that the combination of a huge select and frequent commits can cause ORA-01555.
Firstly, I'm not selecting more than once; the select runs a single time, when the cursor is opened.
Secondly, I assume this process will run during off-hours, when application users' selects are minimal or absent.
If you have a huge table of, say, 60-70 GB and need to delete 20 GB of data, this code can come in handy during those off-hours.
But I agree with you that it is more vulnerable to ORA-01555.
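If the fetch-across-commit exposure is a concern, the cursor can be dropped entirely: delete in ROWNUM-limited batches and re-issue the DELETE until nothing is left. A rough sketch using the same table and predicate (each pass re-scans for matching rows, so an index on PERIOD_NAME helps):

begin
  loop
    -- delete at most one batch of matching rows per pass
    delete from gl.gl_balances
    where period_name like '%-04'
    and rownum <= 1000;
    -- stop when a pass finds no rows left
    exit when sql%rowcount = 0;
    commit;
  end loop;
  commit;
end;
/

Because no query stays open across the commits, the driving statement itself can't hit ORA-01555.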
Another method to delete huge amounts of data is to create a new table containing only the filtered data you need to keep, then rename the new table to the old table's name, as sketched below.
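A rough sketch of that swap, assuming a maintenance window with no activity on the table (the _keep name is just illustrative, and indexes, constraints, grants and triggers must be recreated on the new table afterwards):

-- keep only the rows you need; NOLOGGING keeps redo generation small
create table gl.gl_balances_keep nologging as
  select *
  from gl.gl_balances
  where period_name not like '%-04';

-- swap the new table in under the old name
alter table gl.gl_balances rename to gl_balances_old;
alter table gl.gl_balances_keep rename to gl_balances;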
Sachin