Monday, March 19, 2012

Big Database needs pruning

I have a big SQL database running on SQL Server 2000. The manager wasn't
watching the file sizes, and the db made it up to 144 gig with the log up
to 80 gig. We have been working on pruning this pig, but it has completely
consumed the disk. My expertise is not SQL, but I am the guy for this issue.
So I am in a quandary. A Delete statement was run to prune the db down, but
that just caused the tlog to chew up what disk space was left. Things got
slow, so we ended up restarting the SQL service, and we figure the checkpoint
piece is running in the background rebuilding the pointers.
Question #1)
Is there a way to determine if checkpointing is going on, rebuilding the
pointers, etc.? The reason for the curiosity is that Enterprise Manager (EM)
won't respond right now. I can manage the DBs (except for the DB pig) via
Query Analyzer. It is amazing how quickly you can learn T-SQL when you have
to keep things up and running.
Question #2)
If/When I get control back I want to chuck the Log via
a) sp_detach_db
b) Delete the Log file
c) sp_attach_single_file_db
Will this work?
Question #3)
There are months and months of data that are easily purgeable by date, but I
want to avoid making the log file large and clogging up the system again.
Can I use a statement like
Delete from tableA where date_time between '2006-10-01 00:00:00' and
'2006-10-14 23:59:59'
I am unsure how SQL Server will handle this statement. If only 5% of the
records in the DB match this query, will the log file only grow by that
amount? I assume it must, for backout purposes.
--
Thanks for any help
Paul"Paul Bergson [MVP-DS]" <pbergson@.allete_nospam.com> wrote in message
news:OD3Vvd8dHHA.2128@.TK2MSFTNGP04.phx.gbl...
> <original question quoted in full - snipped>
I've always used the following statements to shrink big logs.
backup log DBName with truncate_only
go
use DBName
go
dbcc shrinkfile(2, 100, TRUNCATEONLY)
go
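One assumption in that shrinkfile command worth calling out: the "2" is the file id
of the transaction log, which is usually, but not always, the case. A quick sketch of
how to confirm it before shrinking (same DBName as above):
use DBName
go
-- sp_helpfile lists each file's logical name, fileid, physical path, and usage;
-- the row marked "log only" is the transaction log. dbcc shrinkfile also accepts
-- the logical file name in place of the numeric file id.
exec sp_helpfile
go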
You should also look into backing up the log file on a regular basis, or
changing to the Simple recovery model if your recovery requirements allow it.
|||
This really isn't providing any answers to my questions. Do you know about
the Checkpoint, etc.?
--
Paul Bergson
MVP - Directory Services
MCT, MCSE, MCSA, Security+, BS CSci
2003, 2000 (Early Achiever), NT
http://www.pbbergs.com
Please no e-mails, any questions should be posted in the NewsGroup
This posting is provided "AS IS" with no warranties, and confers no rights.
"Isaac Alexander" <isaacNOSPAM@.goNOSPAMprocura.com> wrote in message
news:efbR2j8dHHA.2316@.TK2MSFTNGP04.phx.gbl...
> "Paul Bergson [MVP-DS]" <pbergson@.allete_nospam.com> wrote in message
> news:OD3Vvd8dHHA.2128@.TK2MSFTNGP04.phx.gbl...
>>I have a big sql database running on sql server 2000. The manager wasn't
>>watching the file sizes and the db made it up to 144 gig and the log is up
>>to 80 gig. We have been working on pruning this pig but it just has
>>completely consumed the disk. My expertise is not SQL, but I am the guy
>>for this issue.
>> So I am in a quandary. A Delete statement was run to prune the db down,
>> but that just caused the tlog to chew up the disk space that was left on
>> the disk. So things got slow so we ended up restarting the sql service
>> and figure the Checkpoint piece is running in the background rebuilding
>> the pointers.
>> Question #1)
>> Is there a way to determine if Checkpointing is going on rebuilding the
>> pointers,etc... ? Reason for the curiosity is that Enterprise Manager
>> (EM) won't respond right now. I can manage the DB's (except for the DB
>> pig) via SQL Analyzer. It is amazing how quick you can learn tsql when
>> you gotta keep things up and running.
>> Question #2)
>> If/When I get control back I want to chuck the Log via
>> a) sp_detach_db
>> b) Delete the Log file
>> c) sp_attach_single_file_db
>> Will this work?
>>
>> Question #3)
>> There are months and months of data that is easily purgable by date but I
>> want to avoid making the log file large and clogging up the system again.
>> Can I use a statement like
>> Delete from tableA where date_time between '2006-10-01 00:00:00' and
>> '2006-10-14 23:59:59'
>> I am unsure how the SQL server will handle this statement. If there are
>> only 5% of the records in the DB that match this query will the log file
>> only grow by that amount? I assume it must for backout purposes.
>> --
> I've always used the following statements to shrink big logs.
> backup log DBName with truncate_only
> go
> use DBName
> go
> dbcc shrinkfile(2, 100, TRUNCATEONLY)
> go
> You should also look into backing up the log file on a regular basis or
> changing to the Simple recovery model if your model allows it.
>|||"Paul Bergson [MVP-DS]" <pbergson@.allete_nospam.com> wrote in message
news:OD3Vvd8dHHA.2128@.TK2MSFTNGP04.phx.gbl...
>I have a big sql database running on sql server 2000. The manager wasn't
>watching the file sizes and the db made it up to 144 gig and the log is up
>to 80 gig. We have been working on pruning this pig but it just has
>completely consumed the disk. My expertise is not SQL, but I am the guy
>for this issue.
> So I am in a quandary. A Delete statement was run to prune the db down,
> but that just caused the tlog to chew up the disk space that was left on
> the disk. So things got slow so we ended up restarting the sql service
> and figure the Checkpoint piece is running in the background rebuilding
> the pointers.
> Question #1)
> Is there a way to determine if Checkpointing is going on rebuilding the
> pointers,etc... ? Reason for the curiosity is that Enterprise Manager
> (EM) won't respond right now. I can manage the DB's (except for the DB
> pig) via SQL Analyzer. It is amazing how quick you can learn tsql when
> you gotta keep things up and running.
> Question #2)
> If/When I get control back I want to chuck the Log via
> a) sp_detach_db
> b) Delete the Log file
> c) sp_attach_single_file_db
> Will this work?
>
> Question #3)
> There are months and months of data that is easily purgable by date but I
> want to avoid making the log file large and clogging up the system again.
> Can I use a statement like
> Delete from tableA where date_time between '2006-10-01 00:00:00' and
> '2006-10-14 23:59:59'
> I am unsure how the SQL server will handle this statement. If there are
> only 5% of the records in the DB that match this query will the log file
> only grow by that amount? I assume it must for backout purposes.
> --
> Thanks for any help
> Paul
>|||"Paul Bergson [MVP-DS]" <pbergson@.allete_nospam.com> wrote in message
news:OD3Vvd8dHHA.2128@.TK2MSFTNGP04.phx.gbl...
> <problem description snipped>
> Question #1)
> Is there a way to determine if Checkpointing is going on rebuilding the
> pointers,etc... ? Reason for the curiosity is that Enterprise Manager
> (EM) won't respond right now. I can manage the DB's (except for the DB
> pig) via SQL Analyzer. It is amazing how quick you can learn tsql when
> you gotta keep things up and running.
Check the SQL error log, but my guess is that the DB is still in recovery mode.
(Sorry, I don't recall off the top of my head the command to get this via QA.)
I THINK
select * from master.dbo.sysdatabases
will give the info, but I don't recall which status value you need to look for.
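As a rough sketch of checking that from Query Analyzer (using PigDB, the
placeholder database name used further down in this thread), DATABASEPROPERTYEX
reports the current state directly on SQL Server 2000:
-- Returns ONLINE, RECOVERING, SUSPECT, OFFLINE, etc.
select DATABASEPROPERTYEX('PigDB', 'Status') as db_status
go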
> Question #2)
> If/When I get control back I want to chuck the Log via
> a) sp_detach_db
> b) Delete the Log file
> c) sp_attach_single_file_db
> Will this work?
It MIGHT. But don't do that. It's one of those "99% of the time it'll
work... and 1% of the time you're hosed" situations.
You may want to do an actual transaction log backup (especially if up to now
you have a consistent chain of backups.)
Otherwise, do a backup log PigDB with truncate_only
Then DBCC shrinkfile on the DB log.
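A minimal sketch of that sequence, assuming PigDB is the database; the logical
log file name below is an assumption, so check sp_helpfile for the real one:
backup log PigDB with truncate_only
go
use PigDB
go
exec sp_helpfile
go
-- substitute the logical log file name reported by sp_helpfile;
-- 100 is a target size in MB, adjust to taste
dbcc shrinkfile (PigDB_log, 100)
go
Note that truncate_only discards log records without backing them up, so it breaks
the log backup chain; a full database backup afterwards is a good idea if you rely
on log backups.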
>
> Question #3)
> There are months and months of data that is easily purgable by date but I
> want to avoid making the log file large and clogging up the system again.
> Can I use a statement like
> Delete from tableA where date_time between '2006-10-01 00:00:00' and
> '2006-10-14 23:59:59'
>
Certainly.
You can even do this in a loop with a SET ROWCOUNT and do backups of the log
in between.
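A rough sketch of that loop on SQL Server 2000, reusing tableA and the date range
from the question; the batch size and backup path are made-up values:
-- keep each delete (and therefore each logged transaction) small
set rowcount 10000
while 1 = 1
begin
    delete from tableA
    where date_time between '2006-10-01 00:00:00' and '2006-10-14 23:59:59'
    if @@rowcount = 0 break
    -- back up the log between batches so the space can be reused instead of growing
    backup log PigDB to disk = 'D:\Backups\PigDB_log.bak'
end
set rowcount 0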
> I am unsure how the SQL server will handle this statement. If there are
> only 5% of the records in the DB that match this query will the log file
> only grow by that amount? I assume it must for backout purposes.
Unfortunately, off the top of my head, I'm not 100% sure how to calculate
this (there are ways) but I'd start with that assumption.
Hope this helps.
> --
> Thanks for any help
> Paul
>
--
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
|||
I can't believe I didn't think to look in the logs, duh... Only 72 hours to
go, what a stinking pig.
I have a follow-up, but it refers to the pruning code itself. I will repost.
Paul
http://www.pbbergs.com
Please no e-mails, any questions should be posted in the NewsGroup
This posting is provided "AS IS" with no warranties, and confers no rights.
"Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in message
news:%23RdOZD9dHHA.1240@.TK2MSFTNGP04.phx.gbl...
> <Greg's reply quoted in full - snipped>
|||
Paul Bergson [MVP-DS] wrote:
> <problem description snipped>
> Question #1)
> Is there a way to determine if Checkpointing is going on rebuilding the
> pointers,etc... ? Reason for the curiosity is that Enterprise Manager (EM)
> won't respond right now. I can manage the DB's (except for the DB pig) via
> SQL Analyzer. It is amazing how quick you can learn tsql when you gotta
> keep things up and running.
From Books Online
Table master..sysdatabases:
Field [status]:
Status bits, some of which can be set by the user with sp_dboption (read
only, dbo use only, single user, and so on):
1 = autoclose; set with sp_dboption.
4 = select into/bulkcopy; set with sp_dboption.
8 = trunc. log on chkpt; set with sp_dboption.
16 = torn page detection, set with sp_dboption.
32 = loading.
64 = pre recovery.
128 = recovering.
256 = not recovered.
512 = offline; set with sp_dboption.
1024 = read only; set with sp_dboption.
2048 = dbo use only; set with sp_dboption.
4096 = single user; set with sp_dboption.
32768 = emergency mode.
4194304 = autoshrink.
1073741824 = cleanly shutdown.
Multiple bits can be on at the same time.
Field [status2]:
16384 = ANSI null default; set with sp_dboption.
65536 = concat null yields null , set with sp_dboption.
131072 = recursive triggers, set with sp_dboption.
1048576 = default to local cursor, set with sp_dboption.
8388608 = quoted identifier, set with sp_dboption.
33554432 = cursor close on commit, set with sp_dboption.
67108864 = ANSI nulls, set with sp_dboption.
268435456 = ANSI warnings, set with sp_dboption.
536870912 = full text enabled, set with sp_fulltext_database.
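Putting those bits to use, here is a quick sketch (again using the PigDB
placeholder from earlier in the thread) that decodes the recovery-related
status bits with a bitwise AND:
-- 64 = pre recovery, 128 = recovering, 256 = not recovered (from the table above)
select name, status,
       case when status & 64  <> 0 then 'pre recovery'
            when status & 128 <> 0 then 'recovering'
            when status & 256 <> 0 then 'not recovered'
            else 'online / other'
       end as recovery_state
from master.dbo.sysdatabases
where name = 'PigDB'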
> Question #2)
> If/When I get control back I want to chuck the Log via
> a) sp_detach_db
> b) Delete the Log file
> c) sp_attach_single_file_db
> Will this work?
You'd be better off doing the following:
backup log <dbname> with no_log
dbcc shrinkdatabase('<dbname>',TRUNCATEONLY)
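One caveat to add here: with no_log, like truncate_only, throws log records away
rather than backing them up, so the log backup chain is broken afterwards. If you
rely on log backups, follow up with a full backup; a sketch with a hypothetical
backup path (PigDB again standing in for the real database name):
backup database PigDB to disk = 'D:\Backups\PigDB_full.bak'
go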
> Question #3)
> There are months and months of data that is easily purgable by date but I
> want to avoid making the log file large and clogging up the system again.
> Can I use a statement like
> Delete from tableA where date_time between '2006-10-01 00:00:00' and
> '2006-10-14 23:59:59'
Yes
> I am unsure how the SQL server will handle this statement. If there are
> only 5% of the records in the DB that match this query will the log file
> only grow by that amount? I assume it must for backout purposes.
|||
"Paul Bergson [MVP-DS]" <pbergson@.allete_nospam.com> wrote in message
news:OSMpANFeHHA.1244@.TK2MSFTNGP04.phx.gbl...
>I can't believe I didn't think to look in the logs, duh... Only 72 hours
>to go, what a stinking pig
>
Yeah, I've done the same thing regarding forgetting ;-)
> I have a follow up but it refers to code pruning it self. I will repost
>
--
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
