
Monday, March 19, 2012

Big Data size and SQL Server

Can SQL Server 2005 Enterprise 64-bit handle 250-300 terabytes of data, assuming my DBA is a guru?

Thank you - Ashok

Your question is not clear; can you rephrase it?

If you are asking about the biggest database SQL Server can handle, see the link below...

http://www.microsoft.com/sql/prodinfo/compare/wintercorp.mspx

|||

My question is: we are planning a new project that will merge several databases, among other things. We think the new data size can go up to 250 terabytes. There will be hardware for that size of data, and I think this is a big size for any database, whether Oracle, Sybase or SQL Server. We are looking closely at SQL Server 2005 and SSIS, but at the same time we are trying to find out whether the software has any limitation in handling this size once it is set up in different partitions, etc.

Thank you - Ashok

Saturday, February 25, 2012

Better Way To Handle The Code?

Hi experts,
I have a table which stores data on a multi-row basis for a particular
record. The structure of the table is as follows:
CREATE TABLE [dbo].[Table_Header] (
[Rec_Id] [numeric](18, 0) NOT NULL ,
[Param_Name] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Param_Value] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
Example data is shown as follows:
Rec_Id Param_Name Param_Value
1 Product TV
1 Category Electrical
2 Product Telephone
2 Category Electrical
3 Product TV
3 Category Electrical
Suppose I would like to filter out those records (Rec_Id) where Product = TV
and Category = Electrical. One way I can think of using T-SQL is:
Select A.Rec_Id From Table_Header A, Table_Header B
Where A.Rec_Id = B.Rec_Id And
A.Param_Name = 'Product' And
A.Param_Value = 'TV' And
B.Param_Name = 'Category' And
B.Param_Value = 'Electrical'
The Select statement above returns me results: 1 and 3. But it will grow to
be quite complicated if I have many parameters to filter in my select
statement. Any better way to handle this coding?
Thanks,
LBT
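One common alternative for this kind of name/value filtering is "relational division" with GROUP BY/HAVING: one scan of the table, one predicate per pair, and no extra self-join as the parameter count grows. A sketch against the table above (untested, and it assumes (Rec_Id, Param_Name) is unique, so each pair can match at most once):

```sql
-- Keep only Rec_Ids that match every listed (name, value) pair.
SELECT Rec_Id
FROM Table_Header
WHERE (Param_Name = 'Product'  AND Param_Value = 'TV')
   OR (Param_Name = 'Category' AND Param_Value = 'Electrical')
GROUP BY Rec_Id
HAVING COUNT(*) = 2  -- must equal the number of pairs listed above
```

With the sample data this returns Rec_Ids 1 and 3, the same as the self-join version.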
What is the PK on your table?
Do you have any indexes defined on the table?
"LBT" <LBT@.discussions.microsoft.com> wrote in message
news:A1E1B426-7986-4535-BF4B-BDD35532F273@.microsoft.com...
|||Hi
You can normalize your table.
You can have 2 tables here Product and Category
Product:
RecID | Product_Name
1 | TV
2 | Telephone
3 | TV
Category:
RecID | Category_Name
1 | Electrical
2 | Electrical
3 | Electrical
Based on the requirement, you can query as follows:
Select Category.Rec_id
FROM Category
INNER JOIN Product ON Product.Rec_id = Category.Rec_id
AND Product_Name IN ('TV')
WHERE Category_Name IN ('Electrical')
This should serve the purpose.
best Regards,
Chandra
http://chanduas.blogspot.com/
http://groups.msn.com/SQLResource/
---
"LBT" wrote:

|||> I have a table which stores data in multi-rows basis for a particular

> record
Why? That's usually a really poor design. There are better ways to
represent types and subtypes. Example:
CREATE TABLE Products (
  sku INTEGER PRIMARY KEY,
  product_type_code CHAR(2) NOT NULL
    CHECK (product_type_code IN ('EL','EN','SP')),
  productname VARCHAR(40) NOT NULL UNIQUE,
  UNIQUE (sku, product_type_code)
  /* Columns common to all products */
)
CREATE TABLE EntertainmentProducts (
  sku INTEGER PRIMARY KEY,
  product_type_code CHAR(2) DEFAULT 'EN' NOT NULL
    CHECK (product_type_code = 'EN'),
  medium CHAR(3) NOT NULL CHECK (medium IN ('DVD','VHS','CD')),
  FOREIGN KEY (sku, product_type_code)
    REFERENCES Products (sku, product_type_code)
  /* Columns specific to this subtype... */
)
CREATE TABLE ElectronicProducts (
  sku INTEGER PRIMARY KEY,
  product_type_code CHAR(2) DEFAULT 'EL' NOT NULL
    CHECK (product_type_code = 'EL'),
  FOREIGN KEY (sku, product_type_code)
    REFERENCES Products (sku, product_type_code)
  /* Columns specific to this subtype */
)
CREATE TABLE SportsProducts (
  sku INTEGER PRIMARY KEY,
  product_type_code CHAR(2) DEFAULT 'SP' NOT NULL
    CHECK (product_type_code = 'SP'),
  FOREIGN KEY (sku, product_type_code)
    REFERENCES Products (sku, product_type_code)
  /* Columns specific to this subtype */
)

> it will grow to
> be quite complicated if I have many parameters to filter in my select

> statement.
Correct! That's why it's better to go for a more normalized approach.
David Portas
SQL Server MVP
--|||The PK is the combination of Rec_Id and Param_Name.
"Uri Dimant" wrote:

> LBT
> What is a PK on your table?
> Do you have any indexes defined on the table?
> "LBT" <LBT@.discussions.microsoft.com> wrote in message
> news:A1E1B426-7986-4535-BF4B-BDD35532F273@.microsoft.com...
> ,
> TV
> to
>
>|||Thanks for the suggestion. In fact, there will be a lot of parameters in my
actual application, where I think normalization is not the exact solution
(sorry for not making my question clear in the previous post). And because
the parameter list will change from time to time, I need to store data in
the manner described (or I would have to change the table structure every
time the parameters change).
"Chandra" wrote:
|||1. What objects are being stored in this table? It looks like a generic
System table, holding attribute names and values for a database. I would
recommend you get a good book on relational data modeling and take a quick
look at it. You might consider normalizing the data structure a bit.
Instead of having a Param_Name column in a table, you might want to create a
Products table and a Categories table, but I don't know exactly what you are
doing, so it's hard to tell.
"LBT" wrote:

|||I think I have provided a confusing example. My mistake :)
Actually, the table structure is designed in such a way as to store a variety of
data structures sent in by the user. Say, for example, the user might send in
"Rec_Id [1], Product [TV], Category [Electrical]", "Rec_Id [2], Product
[Telephone], Category [Electrical]", etc.
At a later time, the user can send in a data structure with new add-in
parameter(s), e.g. "Rec_Id [3], Product [TV], Category [Electrical], Power
[1000kW], Color [Black]". The challenge is that I cannot change the table
structure (this is the dynamic solution requested by the customer) every time a
new parameter is introduced. With this table design, I can simply store "Power"
into Param_Name and "1000kW" into Param_Value (no adding of columns to the
table structure is allowed).
So, with this table design, is the only way I can perform record filtering (the
filtering parameters, however, will be fixed, namely Product and Category in
this case) as stated in my post? I just wonder whether there is any better
way to perform the query.
In fact, I have to create and configure a few reference/definition tables so
that my application will be dynamic enough to cater for this requirement. But
it would be lengthy to explain the entire situation.
Thanks
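For what it's worth, one way to keep the query shape fixed while the parameter list varies is to put the wanted (name, value) pairs into a table and divide by its count. A sketch (the #Filter table is hypothetical, and it assumes (Rec_Id, Param_Name) is unique):

```sql
-- Hypothetical filter table: one row per required (name, value) pair.
CREATE TABLE #Filter (
    Param_Name  VARCHAR(100) NOT NULL,
    Param_Value VARCHAR(255) NOT NULL
)
INSERT INTO #Filter VALUES ('Product', 'TV')
INSERT INTO #Filter VALUES ('Category', 'Electrical')

-- A Rec_Id qualifies only if it matches every row in #Filter.
SELECT H.Rec_Id
FROM Table_Header AS H
JOIN #Filter AS F
  ON F.Param_Name = H.Param_Name
 AND F.Param_Value = H.Param_Value
GROUP BY H.Rec_Id
HAVING COUNT(*) = (SELECT COUNT(*) FROM #Filter)
```

Adding a parameter then means adding a row to #Filter rather than rewriting the statement.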
"CBretana" wrote:
|||In a way, you are in the dilemma which often results from trying to achieve
conflicting objectives...
The entire concept of relational databases is, in one sense (I need to be
careful here), designed to allow you to "structure" your data so that it
closely matches the real-world data objects, or the abstractions which represent
those objects, in order to allow the kind of data "processing" (like the
arbitrary filtering, sorting, etc.) that you are trying to accomplish. Old-style
(pre-relational) database systems were comparatively inadequate at
these sorts of things *because* the data was not stored in a structured,
relational way.
But you are *trying* to store your data in an unstructured way, to allow as
much flexibility as possible in what kind of data is allowed into the database.
These two objectives are fundamentally inconsistent, and that is one reason why
you are experiencing difficulties.
"LBT" wrote:
|||If the customer DEMANDS a design like this then they should understand
and accept its weaknesses with regard to data integrity, performance
and increased complexity. However, I would always discuss the
alternatives with them first. Change control for adding new attributes
should be their friend, not their enemy. If they are seeking a zero-maintenance
solution then tell them to forget it - of course no such
thing exists - or at least when a legacy application reaches that stage
of maturity then it's probably already in need of replacement.
The reason that proper change control is required is that users
generally make very poor database designers. If you allow users to
decide the format of business data you will lose any integrity and
future value therein. Do you really expect users to comprehend and
analyze functional dependencies and standardise and conform attributes
in the system? If they were capable of that then they must be database
pros in which case they wouldn't need this kind of kludge. Corporate
data is too important to be left to users to manage.
David Portas
SQL Server MVP
--

Tuesday, February 14, 2012

Best Way to handle UID to another server

I have a need: when an Update, Insert or Delete (UID) is done to a record in DB "A", the appropriate UID should be applied to a different table in a different DB "B".

My first thought was to have a trigger on the table in DB "A" simply call a stored procedure on DB "B" and do the UID.

However, my question is: what is the best approach, and what's the best way to establish the connection to DB "B" for the UID from within DB "A"? We can't use linked servers; a DSN-less connection string would be the preferred way to connect. I'm not sure how to execute that within a trigger.

Is even using a trigger to call a stored proc the best way?

What about transactional replication - which I've never attempted - is that a better way?

Just looking for some guidance here so I don't needlessly burn time down a path that isn't recommended or simply won't work.

Thanks as always for your input,

Peter|||I would never suggest using triggers that reference objects outside of their own database. A recipe for disaster.
A better strategy would be to have the trigger load your data into a staging table, and then have a job scheduled once per minute to transfer the data from the staging table to the remote server. This is very robust if you can handle latency of up to one minute (scheduled jobs can be set to run once per minute).
If you require zero latency, then I would look into replication as a solution.|||Thanks Blindman:

I did some BOL checking and replication sounds good, but the table the data needs to go into has different field names and is in use in another application. The target DB happens to be on the same server.

What about have a trigger call a stored procedure and pass the fields in question along with the key to the SP?

The SP would make the connection to the other DB and do the UID.

You remain helpful,

Peter|||I still don't like it. What you are trying to avoid is having your trigger fail and system "A" come to a crashing halt when system "B" goes down or the connection fails. Your solution needs to be strong enough to fail safely, or you are just doubling or tripling the possible points of failure.
The advantage of the staging tables is that, if the connection to system "B" is lost then system "A" continues merrily on, loading data into the staging tables. When system "B" comes back online the data in the staging tables is automatically transferred.
No downtime for system "A".
No data-loss for system "B".
No late-night hours getting both systems back in synch for "DBA".
If you can stand up to a 1 minute latency, this is the way to go.
If you can't handle the latency, then replicate to duplicate tables in system "B" and put the triggers on THEM.|||Blindman:

I can stand a latency of even up to 5 minutes.

My concerns:

1. The table I'm feeding updates to has a different name, naming scheme and field types. I'll need to manipulate the data I'm "pushing" to Server B regardless of the approach I take.
2. Given #1, I didn't know whether replication can handle that. Perhaps "Transactional Replication" isn't the proper type of replication for this.
3. I'm totally new to replication and it doesn't look trivial via BOL.

Any additional help here is appreciated.

I'm even considering having the application update the Server B tables directly. I don't like that for a number of reasons, including that if someone updates the back-end directly (unlikely, but possible), it wouldn't update Server B.

Peter|||The staging table method should handle all your concerns. I would recommend transforming the data when it is shuttled from the staging tables to the destination database, rather than when it is inserted into the staging tables, as you want your triggers to be as streamlined as possible.

Best way to handle License issues

Here is the scenario:

We are developing an ETL-type data load application, and we're thinking of building an SSIS package to assist with this.

The application and data files to be loaded would be on the client Windows XP Workstation. The SQL Server 2005 instance would be on a networked server elsewhere.

The XP Workstation would NOT have sql server 2005.

What is the best way to handle this? Can this be achieved fairly easily?

Kevin

You’ll have to be a bit more specific on how you want to do your licensing... on a per seat basis? Per server?

No matter the model, I would highly recommend taking a read of this article on the mechanisms provided by the .NET Framework to help you implement a licensing mechanism.

|||I'm sorry I wasn't clear. I do not want to CREATE licensing for my application. I want to find out what the best, most cost-effective solution for me would be.

Example:
Buy 1 SQL Server 2005 and call SSIS packages from the Client?
I think in this scenario I would only need 1 SQL Server Standard license?

Kevin
|||Ahh... that I’m afraid I cannot help you with much... instead I would suggest starting with the SQL Server 2005 Licensing: Frequently Asked Questions if you have not already found it.|||

If the ETL application is to be on the client then you will need to have a licensed SQL Server install there - although you need only install SSIS and tools, not the server itself.

A better - and more economical - scenario for you may be to have SSIS running on the SQL Server box. No additional license required. You can create a SQL Agent job to execute the package and invoke that job remotely from the client using T-SQL.

Either way, of course, you will have data moving over the network from client to server - and that may be a bottleneck for you.

Donald
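As a sketch of the suggestion above (the job name is a placeholder): once the SSIS package is wrapped in a SQL Agent job on the server, the client only needs an ordinary connection and one call to the documented msdb procedure:

```sql
-- Run from the client against the server; starts the Agent job asynchronously.
EXEC msdb.dbo.sp_start_job @job_name = N'Load Data Files'
```

Note that sp_start_job returns as soon as the job has started, so the client would have to poll (e.g. via msdb.dbo.sp_help_job) if it needs to know when the load finishes.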

Best way to handle Index Rebuilding/Reindexing.

Hi,
Currently we defragment our indexes on a regular basis, when fragmentation
statistics reach a certain level. What I want to find out is:
Is there a way to add speed control during the rebuilding and defragging
process? What I mean by this is that I typically run this at slow times, but
if something comes up and the database gets busy, I want to make sure most of
the resources (including I/O) are taken away from the defrag/reindexing
processes and given back to data processing. Perhaps if the server stays busy
for a certain amount of time, simply cancel the defrag and/or reindex job. In my
environment, I simply cannot afford to not have I/O available for realtime
transactions. I need to do this for both SQL Server 2000 and SQL Server
2005. Any ideas?
Thank you.|||Not in current versions of SQL Server. You can affect how many CPUs are
used when rebuilding indexes with the MAXDOP option. So if you don't want an index
rebuild to monopolize all the processors, you can set MAXDOP to less than
the max. But in terms of I/O there is no way to govern that. The bottom line is
that if your requirements are such that you need to reindex and do other I/O,
you need to ensure your hardware can handle the max load you need to
sustain.
--
Andrew J. Kelly SQL MVP
Solid Quality Mentors
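For illustration, the SQL Server 2005 syntax for capping the processors used by a rebuild (object names here are placeholders; on SQL Server 2000, DBCC DBREINDEX has no per-statement MAXDOP as far as I know, so the server-wide 'max degree of parallelism' setting applies instead):

```sql
-- Rebuild with at most 2 CPUs; sorting in tempdb moves some of the
-- sort I/O off the user database if tempdb is on separate spindles.
ALTER INDEX ALL ON dbo.MyLargeTable
REBUILD WITH (MAXDOP = 2, SORT_IN_TEMPDB = ON)
```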
"Dragon" <noSpam_Badill@.hotmail.com> wrote in message
news:OQJIPvC8HHA.2208@.TK2MSFTNGP06.phx.gbl...
|||Thank you Andrew.
Is there a way to quantify non-rebuild / non-defrag activity? I am
thinking that while performing index maintenance, I can run another job that
keeps track of other traffic. If it exceeds a pre-set threshold, it will
kill the index jobs.
Unfortunately, changing hardware to support all my load is currently not
possible, as I have over 150 SQL Servers, each at a different site, and
replacing them would mean $$$$$$.
Thanks.
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:uShzc9C8HHA.5752@.TK2MSFTNGP04.phx.gbl...
|||First, SQL 2005 Enterprise provides some options for online index operations;
these options are optimized to reduce the impact on the server.
Second, when a server is "idle", starting an index defrag will change the
status from idle to heavy I/O, so this impacts your other processes. If you
want to stop the index action when there is activity, you will be stopped,
because you generate activity with the defrag itself, so it's hard to
detect whether the activity is user based or job based.
(You can use sys.dm_exec_connections to know the I/O generated by the
defrag process, but you have to do a lot of calculations and you have to
know the connection id of the job... it starts to become complicated.)
Maybe you can test the number of users connected to the database; if there
are too many users, wait a minute and test again...
So you have a first job, which is the defrag job itself.
Then you create another job, a monitoring job; this one will stop the
first job if there are too many users connected to the database. You can
also add a schedule to start the first job again 1 hour later, for example,
using the monitoring job.
You can also test the wait duration of the user sessions: if the wait
duration is too high and your defrag job is in progress, then you stop the
job (sys.dm_os_waiting_tasks dynamic view).
hope this will give you some starting point...
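A minimal sketch of that monitoring step (the job name and threshold are assumptions): check the waiting tasks, and stop the defrag job if user sessions have been waiting too long.

```sql
-- If any user session has been waiting more than 5 seconds, stop the defrag job.
IF EXISTS (SELECT 1
           FROM sys.dm_os_waiting_tasks
           WHERE session_id > 50            -- skip system/background sessions
             AND wait_duration_ms > 5000)   -- threshold is an assumption
    EXEC msdb.dbo.sp_stop_job @job_name = N'Index Defrag'
```

Scheduled every minute, this gives roughly the behavior described above.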
"Dragon" <noSpam_Badill@.hotmail.com> wrote in message
news:OQJIPvC8HHA.2208@.TK2MSFTNGP06.phx.gbl...
|||Thank you Jeje for your suggestions. I like the wait duration concept. I
will have to give it a try.
Unfortunately, I will not be able to use the "number of users" method. None
of my users connect directly to my databases. We have application servers
that sit in the middle. All my users connect to an application server, which
in turn talks to the databases. So my number of connections stays pretty much
the same.
Thanks.
"Jeje" <willgart@.hotmail.com> wrote in message
news:AECDD977-BA04-46A6-8BA6-6871E2DB3786@.microsoft.com...
|||Killing a DBREINDEX job will roll back all the changes, so it would probably
be better to let it finish. If you go the DEFRAG route you can stop that at
any time with no ill effect. But be aware that this command is always fully
logged and depending on how fragmented the table is to begin with it may log
much more than the index size itself when running. The Defrag operation is
always single threaded so you don't have CPU contention on servers with
multiple CPU's (not counting HT) but you still have lots of I/O. The best
defense for this kind of issue is to make sure the log files are on their own
RAID 1 or 10 and the data is on another RAID 10. If tempdb is on yet another
RAID set, you can use the sort-in-tempdb option for the rebuilds. But again, the
bottom line is the hardware must be up to the task, you can't squeeze blood
from a stone. Another key to minimizing these types of issues are to only
rebuild or defrag the indexes that absolutely need it. Don't rebuild them
all each time. Look first to see which are fragmented beyond a certain point
and only do those. There are examples in BOL under DBCC SHOWCONTIG and the
2005 replacement for how to do this.
--
Andrew J. Kelly SQL MVP
Solid Quality Mentors
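For reference, the 2005 replacement mentioned above is the sys.dm_db_index_physical_stats function. A sketch that lists only the indexes fragmented beyond a chosen threshold (30% here is just a common rule of thumb, not a prescription):

```sql
-- SQL Server 2005: list indexes over 30% fragmentation in the current database.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.avg_fragmentation_in_percent > 30
  AND s.index_id > 0   -- skip heaps
```

Feeding this list into the rebuild/defrag job keeps the nightly I/O proportional to what actually needs work.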
"Dragon" <noSpam_Badill@.hotmail.com> wrote in message
news:O6r5gmD8HHA.5404@.TK2MSFTNGP02.phx.gbl...
> Thank you Andrew.
> Is there a way to quantify non-index-rebuilt / non-index-defrag activity?
> I am thinking while performing index maintenance, I can run another job
> that keeps track of other traffic. If it exceeds a pre-set threshold, it
> will kill index jobs.
> Unfortunately changing hardware to support all my load is currently not
> possible as I have over 150 SQL servers, each at a different site and
> replacing them will mean $$$$$$.
> Thanks.
>
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:uShzc9C8HHA.5752@.TK2MSFTNGP04.phx.gbl...
>> Not in current versions of SQL Server. You can affect how many CPU's are
>> used by rebuilding indexes with the MAXDOP. So if you don't want an index
>> rebuild to monopolize all the processors you can set the MAXDOP to less
>> than the max. But in terms of I/O there is no way to govern that. Bottom
>> line is that if your requirements are such that you need to reindex and
>> do other I/O you need to ensure your hardware can handle the max load you
>> need to sustain.
>> --
>> Andrew J. Kelly SQL MVP
>> Solid Quality Mentors
>>
>> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
>> news:OQJIPvC8HHA.2208@.TK2MSFTNGP06.phx.gbl...
>> Hi,
>> Currently we defrage our indexes on a regular basis and defrag then when
>> statics are at a certain level. What I want to find out is:
>> Is there a way to add speed control during rebuilding and defragging
>> process? What I mean by this is that typically I run this at slow times
>> but if something comes up and database gets busy, I want to make sure
>> most of the recourses (including i/o) are taken away for the
>> defrag/reindexing processes and given back to data processing. Perhsp if
>> server stays busy for a certain amount of time, simply cancel the defrag
>> and/or reindex job. In my environment, I simply cannot afford to not
>> have i/o available for realtime transactions. I need to do this for both
>> SQL server 2000 and for SQL server 2005. Any ideas?
>> Thank you.
>>
>|||Thank you Andrew.
Currently we have a scheduled job that determines whether or not an index
needs to be defragged. If it does, it performs the defrag. Rebuilding is done
manually only. I would like to add some sort of intelligence to my defrag
script.
Say, for example, I have 100 indexes across all my databases and the script
sees that 5 of them need to be defragged. That might not be an issue, but if
one day 20 of them need it, that might be. Is there any other way to control
I/O a little better? I am not CPU bound by any means on any of my sites. Disk
bound? Yes, for sure. But short of replacing all existing systems and adding
clusters ($ in millions), I need to come up with some method better than what
I have.
Thanks for any help.
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:OE5X03I8HHA.4476@.TK2MSFTNGP06.phx.gbl...
> Killing a DBREINDEX job will roll back all the changes so it would
> probably be better to let it finish. If you go the DEFRAG route you can
> stop that at any time with no ill effect. But be aware that this command
> is always fully logged and depending on how fragmented the table is to
> begin with it may log much more than the index size itself when running.
> The Defrag operation is always single threaded so you don't have CPU
> contention on servers with multiple CPU's (not counting HT) but you still
> have lots of I/O. The best defense for this kind of issue is to make sure
> the log file are on their own Raid 1 or 10 and the data is on another Raid
> 10. If tempdb is on yet another raid you can use the sort in Tempdb option
> for the rebuilds. But again the bottom line is the hardware must be up to
> the task, you can't squeeze blood from a stone. Another key to minimizing
> these types of issues are to only rebuild or defrag the indexes that
> absolutely need it. Don't rebuild them all each time. Look first to see
> which are fragmented beyond a certain point and only do those. There are
> examples in BOL under DBCC SHOWCONTIG and the 2005 replacement for how to
> do this.
> --
> Andrew J. Kelly SQL MVP
> Solid Quality Mentors
>
> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
> news:O6r5gmD8HHA.5404@.TK2MSFTNGP02.phx.gbl...
>> Thank you Andrew.
>> Is there a way to quantify non-index-rebuilt / non-index-defrag activity?
>> I am thinking while performing index maintenance, I can run another job
>> that keeps track of other traffic. If it exceeds a pre-set threshold, it
>> will kill index jobs.
>> Unfortunately changing hardware to support all my load is currently not
>> possible as I have over 150 SQL servers, each at a different site and
>> replacing them will mean $$$$$$.
>> Thanks.
>>
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:uShzc9C8HHA.5752@.TK2MSFTNGP04.phx.gbl...
>> Not in current versions of SQL Server. You can affect how many CPU's are
>> used by rebuilding indexes with the MAXDOP. So if you don't want an
>> index rebuild to monopolize all the processors you can set the MAXDOP to
>> less than the max. But in terms of I/O there is no way to govern that.
>> Bottom line is that if your requirements are such that you need to
>> reindex and do other I/O you need to ensure your hardware can handle the
>> max load you need to sustain.
>> --
>> Andrew J. Kelly SQL MVP
>> Solid Quality Mentors
>>
>> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
>> news:OQJIPvC8HHA.2208@.TK2MSFTNGP06.phx.gbl...
>> Hi,
>> Currently we defrage our indexes on a regular basis and defrag then
>> when statics are at a certain level. What I want to find out is:
>> Is there a way to add speed control during rebuilding and defragging
>> process? What I mean by this is that typically I run this at slow times
>> but if something comes up and database gets busy, I want to make sure
>> most of the recourses (including i/o) are taken away for the
>> defrag/reindexing processes and given back to data processing. Perhsp
>> if server stays busy for a certain amount of time, simply cancel the
>> defrag and/or reindex job. In my environment, I simply cannot afford to
>> not have i/o available for realtime transactions. I need to do this for
>> both SQL server 2000 and for SQL server 2005. Any ideas?
>> Thank you.
>>
>>
>|||As has already been noted, once you're in a rebuild, not much. However, I've
been thinking about using a combination of WAITFOR, @@CPU_BUSY and @@IO_BUSY
to determine server load before starting something running. Haven't tried it
yet.
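That load check could be sketched like this (the 5-second sample window and the busy thresholds are illustrative assumptions, not tested values):

```sql
-- Sketch: sample @@CPU_BUSY / @@IO_BUSY over a short window and only start
-- index maintenance when the server looks quiet.
DECLARE @cpu1 int, @io1 int, @cpu2 int, @io2 int;

SELECT @cpu1 = @@CPU_BUSY, @io1 = @@IO_BUSY;
WAITFOR DELAY '00:00:05';              -- illustrative sample window
SELECT @cpu2 = @@CPU_BUSY, @io2 = @@IO_BUSY;

-- Both counters are cumulative ticks since SQL Server started, so the
-- deltas approximate activity during the sample window.
IF (@cpu2 - @cpu1) < 100 AND (@io2 - @io1) < 100   -- illustrative thresholds
    PRINT 'Server looks quiet: start the defrag job';
ELSE
    PRINT 'Server busy: postpone the defrag job';
```

The thresholds would need calibrating per server, since tick rates depend on the hardware.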
"Dragon" <noSpam_Badill@.hotmail.com> wrote in message
news:OQJIPvC8HHA.2208@.TK2MSFTNGP06.phx.gbl...
> Hi,
> Currently we defrage our indexes on a regular basis and defrag then when
> statics are at a certain level. What I want to find out is:
> Is there a way to add speed control during rebuilding and defragging
> process? What I mean by this is that typically I run this at slow times
> but if something comes up and database gets busy, I want to make sure most
> of the recourses (including i/o) are taken away for the defrag/reindexing
> processes and given back to data processing. Perhsp if server stays busy
> for a certain amount of time, simply cancel the defrag and/or reindex job.
> In my environment, I simply cannot afford to not have i/o available for
> realtime transactions. I need to do this for both SQL server 2000 and for
> SQL server 2005. Any ideas?
> Thank you.
>|||Thanks Jay.
Do you know of a way to pause a defrag? Perhaps I can put it in a loop
and let it run for a few seconds, then pause it for a second so that
other transactions can get through... Just an idea.
"Jay" <spam@.nospam.org> wrote in message
news:OdIzD4U8HHA.1208@.TK2MSFTNGP05.phx.gbl...
> As has already been noted, once you're in a rebuild, not much. However,
> I've been thinking about using a combination of: WAITFOR, @.@.CPU_BUSY &
> @.@.IO_BUSY to determin server load before starting something running.
> Haven't tried it yet.
>
> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
> news:OQJIPvC8HHA.2208@.TK2MSFTNGP06.phx.gbl...
>> Hi,
>> Currently we defrage our indexes on a regular basis and defrag then when
>> statics are at a certain level. What I want to find out is:
>> Is there a way to add speed control during rebuilding and defragging
>> process? What I mean by this is that typically I run this at slow times
>> but if something comes up and database gets busy, I want to make sure
>> most of the recourses (including i/o) are taken away for the
>> defrag/reindexing processes and given back to data processing. Perhsp if
>> server stays busy for a certain amount of time, simply cancel the defrag
>> and/or reindex job. In my environment, I simply cannot afford to not have
>> i/o available for realtime transactions. I need to do this for both SQL
>> server 2000 and for SQL server 2005. Any ideas?
>> Thank you.
>|||It's not possible to pause a defrag.
Use the reorganize option:
ALTER INDEX ... REORGANIZE
which is an online operation and doesn't lock out end users.
Or use:
ALTER INDEX ... REBUILD WITH (ONLINE = ON)
I don't remember which one runs in a transaction and which one doesn't.
Without a transaction, stopping the defrag process doesn't lose the
already-defragmented pages, while in a transaction the work is lost.
Online operations are designed to reduce the impact on end users; there is
no lock. So if you only want to pause the defrag process for a second, it
isn't worth trying to stop the process.
Do you suffer locking issues today when you do your defrag?
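One way to interleave that reorganize work with user activity (SQL 2005 syntax; the index/table names and the 10-second pause are placeholders):

```sql
-- Sketch (SQL 2005): reorganize one index at a time and pause between
-- steps so user transactions get the I/O back in between.
-- Index and table names below are placeholders for illustration.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;

WAITFOR DELAY '00:00:10';   -- let other work catch up; duration illustrative

ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;
```

Because REORGANIZE works in small online transactions, cancelling the batch mid-way keeps whatever pages have already been defragmented.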
"Dragon" <noSpam_Badill@.hotmail.com> wrote in message
news:eJ1bBKV8HHA.5504@.TK2MSFTNGP02.phx.gbl...
> Thanks Jay.
> Do you know of a way to pause defrag? Perhaps I can put it through a loop
> and let it run for a few seconds and then pause it for a second so that
> other transactions can get through... Just an idea.
> "Jay" <spam@.nospam.org> wrote in message
> news:OdIzD4U8HHA.1208@.TK2MSFTNGP05.phx.gbl...
>> As has already been noted, once you're in a rebuild, not much. However,
>> I've been thinking about using a combination of: WAITFOR, @.@.CPU_BUSY &
>> @.@.IO_BUSY to determin server load before starting something running.
>> Haven't tried it yet.
>>
>> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
>> news:OQJIPvC8HHA.2208@.TK2MSFTNGP06.phx.gbl...
>> Hi,
>> Currently we defrage our indexes on a regular basis and defrag then when
>> statics are at a certain level. What I want to find out is:
>> Is there a way to add speed control during rebuilding and defragging
>> process? What I mean by this is that typically I run this at slow times
>> but if something comes up and database gets busy, I want to make sure
>> most of the recourses (including i/o) are taken away for the
>> defrag/reindexing processes and given back to data processing. Perhsp if
>> server stays busy for a certain amount of time, simply cancel the defrag
>> and/or reindex job. In my environment, I simply cannot afford to not
>> have i/o available for realtime transactions. I need to do this for both
>> SQL server 2000 and for SQL server 2005. Any ideas?
>> Thank you.
>>
>|||Well, a cluster will do absolutely nothing for performance; it is a hardware
failover option only. Not being CPU bound is usually another indication
that the disks are a problem: if the data can't be read or written fast enough,
the CPUs can't work as hard. You can't reduce the I/O that a defrag operation
requires while it is doing work. As I stated, you can stop the
defrag process at any time. You may want to run it for a period of time and
wait some number of seconds or minutes in between to give others a chance to
catch up.
What is the current disk situation like? How many drives, what type of
arrays, where are the files, etc.?
--
Andrew J. Kelly SQL MVP
Solid Quality Mentors
"Dragon" <noSpam_Badill@.hotmail.com> wrote in message
news:%2373ozBQ8HHA.2752@.TK2MSFTNGP06.phx.gbl...
> Thank you Andrew.
> Currently we have a scheduled job that determines whether or not an index
> needs to be defraged. If it does, it performs the defrag. Rebuilding is
> done manually only. I would like to add some sort of intelligence into my
> defrag script.
> Say for example, I have 100 indexes on between all my databases and
> scripts see that 5 of them need to be defraged. This might not be an issue
> but if one day 20 of them need to be defraged, this might be an issue. Is
> there any other way to control I/O a little better? I am not CPU bound by
> any mean on any of my sites. Disk bound? Yes for sure. But short of
> replacing all existing systems and adding clusters ($ in millions), I need
> to come up with some method better than what I have.
> Thanks for any help.
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:OE5X03I8HHA.4476@.TK2MSFTNGP06.phx.gbl...
>> Killing a DBREINDEX job will roll back all the changes so it would
>> probably be better to let it finish. If you go the DEFRAG route you can
>> stop that at any time with no ill effect. But be aware that this command
>> is always fully logged and depending on how fragmented the table is to
>> begin with it may log much more than the index size itself when running.
>> The Defrag operation is always single threaded so you don't have CPU
>> contention on servers with multiple CPU's (not counting HT) but you still
>> have lots of I/O. The best defense for this kind of issue is to make sure
>> the log file are on their own Raid 1 or 10 and the data is on another
>> Raid 10. If tempdb is on yet another raid you can use the sort in Tempdb
>> option for the rebuilds. But again the bottom line is the hardware must
>> be up to the task, you can't squeeze blood from a stone. Another key to
>> minimizing these types of issues are to only rebuild or defrag the
>> indexes that absolutely need it. Don't rebuild them all each time. Look
>> first to see which are fragmented beyond a certain point and only do
>> those. There are examples in BOL under DBCC SHOWCONTIG and the 2005
>> replacement for how to do this.
>> --
>> Andrew J. Kelly SQL MVP
>> Solid Quality Mentors
>>
>> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
>> news:O6r5gmD8HHA.5404@.TK2MSFTNGP02.phx.gbl...
>> Thank you Andrew.
>> Is there a way to quantify non-index-rebuilt / non-index-defrag
>> activity? I am thinking while performing index maintenance, I can run
>> another job that keeps track of other traffic. If it exceeds a pre-set
>> threshold, it will kill index jobs.
>> Unfortunately changing hardware to support all my load is currently not
>> possible as I have over 150 SQL servers, each at a different site and
>> replacing them will mean $$$$$$.
>> Thanks.
>>
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:uShzc9C8HHA.5752@.TK2MSFTNGP04.phx.gbl...
>> Not in current versions of SQL Server. You can affect how many CPU's
>> are used by rebuilding indexes with the MAXDOP. So if you don't want an
>> index rebuild to monopolize all the processors you can set the MAXDOP
>> to less than the max. But in terms of I/O there is no way to govern
>> that. Bottom line is that if your requirements are such that you need
>> to reindex and do other I/O you need to ensure your hardware can handle
>> the max load you need to sustain.
>> --
>> Andrew J. Kelly SQL MVP
>> Solid Quality Mentors
>>
>> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
>> news:OQJIPvC8HHA.2208@.TK2MSFTNGP06.phx.gbl...
>> Hi,
>> Currently we defrage our indexes on a regular basis and defrag then
>> when statics are at a certain level. What I want to find out is:
>> Is there a way to add speed control during rebuilding and defragging
>> process? What I mean by this is that typically I run this at slow
>> times but if something comes up and database gets busy, I want to make
>> sure most of the recourses (including i/o) are taken away for the
>> defrag/reindexing processes and given back to data processing. Perhsp
>> if server stays busy for a certain amount of time, simply cancel the
>> defrag and/or reindex job. In my environment, I simply cannot afford
>> to not have i/o available for realtime transactions. I need to do this
>> for both SQL server 2000 and for SQL server 2005. Any ideas?
>> Thank you.
>>
>>
>|||Since I have so many SQL Servers at various sites installed over
the years, each site is different when it comes to the disk situation. In the
old days we used Legato-based clustering, which is a disk hog on its own. In
that case we may have only 6 disks in a flat RAID 5. I know this is probably
the worst setup you can have, but that is how it is on those sites, and short
of adding an external array with more disks there is only so much I can do.
Newer sites use MS clustering and typically have 14-24 disks in various
configurations.
My concern is not with sites that are on MS clustering, but with
older sites (with Legato clustering). In most cases each server is dual CPU,
single core, or dual core at newer sites, so I am sure none of the sites are
CPU bound. Disk bound is a different issue: all my Legato-based sites are
disk bound. In Legato clusters there are no shared disks; instead one of
the logical drives is mirrored across servers to create a cluster. This
mirroring creates additional load on the I/O subsystem. This is why I said
that short of installing new clusters, I can't do much. All my SQL Servers
are some type of cluster for redundancy. Most are still on SQL 2000.
Thanks.
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:%23gQVAgV8HHA.2752@.TK2MSFTNGP06.phx.gbl...
> Well a Cluster will do absolutely nothing for performance, it is a
> hardware fail over option only. Not being CPU bound is usually another
> indication that the disks are a problem. If the data can't read or write
> fast enough then the CPU's can't work as hard. You can't reduce the I/O
> that is required for a Defrag operation when it is doing work. As I stated
> you can stop the defrag process at any time. You may want to run it for a
> period of time and wait x many seconds or minutes in between to give
> others a chance to catch up.
> What is the current disk situation like? How many drives, what type of
> arrays, where are the files etc?
> --
> Andrew J. Kelly SQL MVP
> Solid Quality Mentors
>
> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
> news:%2373ozBQ8HHA.2752@.TK2MSFTNGP06.phx.gbl...
>> Thank you Andrew.
>> Currently we have a scheduled job that determines whether or not an index
>> needs to be defraged. If it does, it performs the defrag. Rebuilding is
>> done manually only. I would like to add some sort of intelligence into my
>> defrag script.
>> Say for example, I have 100 indexes on between all my databases and
>> scripts see that 5 of them need to be defraged. This might not be an
>> issue but if one day 20 of them need to be defraged, this might be an
>> issue. Is there any other way to control I/O a little better? I am not
>> CPU bound by any mean on any of my sites. Disk bound? Yes for sure. But
>> short of replacing all existing systems and adding clusters ($ in
>> millions), I need to come up with some method better than what I have.
>> Thanks for any help.
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:OE5X03I8HHA.4476@.TK2MSFTNGP06.phx.gbl...
>> Killing a DBREINDEX job will roll back all the changes so it would
>> probably be better to let it finish. If you go the DEFRAG route you can
>> stop that at any time with no ill effect. But be aware that this command
>> is always fully logged and depending on how fragmented the table is to
>> begin with it may log much more than the index size itself when running.
>> The Defrag operation is always single threaded so you don't have CPU
>> contention on servers with multiple CPU's (not counting HT) but you
>> still have lots of I/O. The best defense for this kind of issue is to
>> make sure the log file are on their own Raid 1 or 10 and the data is on
>> another Raid 10. If tempdb is on yet another raid you can use the sort
>> in Tempdb option for the rebuilds. But again the bottom line is the
>> hardware must be up to the task, you can't squeeze blood from a stone.
>> Another key to minimizing these types of issues are to only rebuild or
>> defrag the indexes that absolutely need it. Don't rebuild them all each
>> time. Look first to see which are fragmented beyond a certain point and
>> only do those. There are examples in BOL under DBCC SHOWCONTIG and the
>> 2005 replacement for how to do this.
>> --
>> Andrew J. Kelly SQL MVP
>> Solid Quality Mentors
>>
>> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
>> news:O6r5gmD8HHA.5404@.TK2MSFTNGP02.phx.gbl...
>> Thank you Andrew.
>> Is there a way to quantify non-index-rebuilt / non-index-defrag
>> activity? I am thinking while performing index maintenance, I can run
>> another job that keeps track of other traffic. If it exceeds a pre-set
>> threshold, it will kill index jobs.
>> Unfortunately changing hardware to support all my load is currently not
>> possible as I have over 150 SQL servers, each at a different site and
>> replacing them will mean $$$$$$.
>> Thanks.
>>
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:uShzc9C8HHA.5752@.TK2MSFTNGP04.phx.gbl...
>> Not in current versions of SQL Server. You can affect how many CPU's
>> are used by rebuilding indexes with the MAXDOP. So if you don't want
>> an index rebuild to monopolize all the processors you can set the
>> MAXDOP to less than the max. But in terms of I/O there is no way to
>> govern that. Bottom line is that if your requirements are such that
>> you need to reindex and do other I/O you need to ensure your hardware
>> can handle the max load you need to sustain.
>> --
>> Andrew J. Kelly SQL MVP
>> Solid Quality Mentors
>>
>> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
>> news:OQJIPvC8HHA.2208@.TK2MSFTNGP06.phx.gbl...
>> Hi,
>> Currently we defrage our indexes on a regular basis and defrag then
>> when statics are at a certain level. What I want to find out is:
>> Is there a way to add speed control during rebuilding and defragging
>> process? What I mean by this is that typically I run this at slow
>> times but if something comes up and database gets busy, I want to
>> make sure most of the recourses (including i/o) are taken away for
>> the defrag/reindexing processes and given back to data processing.
>> Perhsp if server stays busy for a certain amount of time, simply
>> cancel the defrag and/or reindex job. In my environment, I simply
>> cannot afford to not have i/o available for realtime transactions. I
>> need to do this for both SQL server 2000 and for SQL server 2005. Any
>> ideas?
>> Thank you.
>>
>>
>>
>|||Thank you jeje. Unfortunately online reindexing is only available in SQL
2005 Enterprise. Most of my servers are still on SQL 2000 Standard.
I experience a lot of delays in response when a defrag is active; at that time
I am disk bound. I know I need a faster/better disk system, but until that is
done (over 100 servers), I need to find some type of software solution.
"Jeje" <willgart@.hotmail.com> wrote in message
news:36C4CB13-2DEE-4E42-8FAC-55C41AA78B23@.microsoft.com...
> not possible to pause a defrag.
> use the reorganize option:
> alter index reorganize
> which is an online operation and don't lock the end user.
> or use
> alter index rebuild with (online = on)
> I don't remember which one use a transaction and which one don't use a
> transaction.
> so without a transaction, stopping the defrag process don't loose the
> current defragmented pages, while in a transaction... its lost.
> online operation are designed to reduce the impact to the end user, there
> is no lock.
> so if you want to pause the defrag process for just 1 second, its not
> relevant to try to stop the process.
> do you suffer locking issues today when you do your defrag?
>
> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
> news:eJ1bBKV8HHA.5504@.TK2MSFTNGP02.phx.gbl...
>> Thanks Jay.
>> Do you know of a way to pause defrag? Perhaps I can put it through a loop
>> and let it run for a few seconds and then pause it for a second so that
>> other transactions can get through... Just an idea.
>> "Jay" <spam@.nospam.org> wrote in message
>> news:OdIzD4U8HHA.1208@.TK2MSFTNGP05.phx.gbl...
>> As has already been noted, once you're in a rebuild, not much. However,
>> I've been thinking about using a combination of: WAITFOR, @.@.CPU_BUSY &
>> @.@.IO_BUSY to determin server load before starting something running.
>> Haven't tried it yet.
>>
>> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
>> news:OQJIPvC8HHA.2208@.TK2MSFTNGP06.phx.gbl...
>> Hi,
>> Currently we defrage our indexes on a regular basis and defrag then
>> when statics are at a certain level. What I want to find out is:
>> Is there a way to add speed control during rebuilding and defragging
>> process? What I mean by this is that typically I run this at slow times
>> but if something comes up and database gets busy, I want to make sure
>> most of the recourses (including i/o) are taken away for the
>> defrag/reindexing processes and given back to data processing. Perhsp
>> if server stays busy for a certain amount of time, simply cancel the
>> defrag and/or reindex job. In my environment, I simply cannot afford to
>> not have i/o available for realtime transactions. I need to do this for
>> both SQL server 2000 and for SQL server 2005. Any ideas?
>> Thank you.
>>
>>
>|||ALTER INDEX ... REORGANIZE is the option that doesn't lose any work when you
stop it.
And the BOL says it is always performed online, so the online behavior cannot
be changed, and this method will work on both Standard and Enterprise editions.
"Dragon" <noSpam_Badill@.hotmail.com> wrote in message
news:ulAY6rY8HHA.3900@.TK2MSFTNGP02.phx.gbl...
> Thank you jeje. Unfortunately online reindexing is only available in SQL
> 2005 Ent. Most of my servers are still on SQL 2000 Std.
> I experience a lot of delays in response when defrag is active. At that
> time I am disk bound. I know I need faster/better disk system but until
> that is done (over 100 servers), I need to find some type of software
> solution.
>
> "Jeje" <willgart@.hotmail.com> wrote in message
> news:36C4CB13-2DEE-4E42-8FAC-55C41AA78B23@.microsoft.com...
>> not possible to pause a defrag.
>> use the reorganize option:
>> alter index reorganize
>> which is an online operation and don't lock the end user.
>> or use
>> alter index rebuild with (online = on)
>> I don't remember which one use a transaction and which one don't use a
>> transaction.
>> so without a transaction, stopping the defrag process don't loose the
>> current defragmented pages, while in a transaction... its lost.
>> online operation are designed to reduce the impact to the end user, there
>> is no lock.
>> so if you want to pause the defrag process for just 1 second, its not
>> relevant to try to stop the process.
>> do you suffer locking issues today when you do your defrag?
>>
>> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
>> news:eJ1bBKV8HHA.5504@.TK2MSFTNGP02.phx.gbl...
>> Thanks Jay.
>> Do you know of a way to pause defrag? Perhaps I can put it through a
>> loop and let it run for a few seconds and then pause it for a second so
>> that other transactions can get through... Just an idea.
>> "Jay" <spam@.nospam.org> wrote in message
>> news:OdIzD4U8HHA.1208@.TK2MSFTNGP05.phx.gbl...
>> As has already been noted, once you're in a rebuild, not much. However,
>> I've been thinking about using a combination of: WAITFOR, @.@.CPU_BUSY &
>> @.@.IO_BUSY to determin server load before starting something running.
>> Haven't tried it yet.
>>
>> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
>> news:OQJIPvC8HHA.2208@.TK2MSFTNGP06.phx.gbl...
>> Hi,
>> Currently we defrage our indexes on a regular basis and defrag then
>> when statics are at a certain level. What I want to find out is:
>> Is there a way to add speed control during rebuilding and defragging
>> process? What I mean by this is that typically I run this at slow
>> times but if something comes up and database gets busy, I want to make
>> sure most of the recourses (including i/o) are taken away for the
>> defrag/reindexing processes and given back to data processing. Perhsp
>> if server stays busy for a certain amount of time, simply cancel the
>> defrag and/or reindex job. In my environment, I simply cannot afford
>> to not have i/o available for realtime transactions. I need to do this
>> for both SQL server 2000 and for SQL server 2005. Any ideas?
>> Thank you.
>>
>>
>>
>|||Well, you are right that a single RAID 5 with all the files on it is the
worst situation in terms of performance, short of a single drive with no
array. RAID 5 is notoriously bad for write-heavy operations. How
large are these tables? If they are small enough you can try changing the
recovery model to SIMPLE or BULK_LOGGED and using DBCC DBREINDEX. This
generates minimal activity in the log, unlike a defrag, which can generate a
tremendous amount of activity. The operation will be cleaner and potentially
faster than a defrag, depending on the table size, the number of indexes, and
how badly fragmented they are. The downside is that this is an offline
operation in 2000. But again, if the tables are small enough they may
only be inaccessible for brief periods of time and cause minor blocking.
This will reduce some of the I/O due to the decreased log activity.
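The recovery-model switch could look like this (SQL 2000; database, table, and backup-path names are placeholders, and note that a BULK_LOGGED window limits point-in-time recovery, so bracket it with log backups):

```sql
-- Sketch (SQL 2000): minimize log I/O during a rebuild by switching the
-- recovery model around DBCC DBREINDEX. All names are placeholders.
BACKUP LOG MyDb TO DISK = 'E:\Backups\MyDb_log_before.trn';  -- preserve chain

ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;

-- With no index name given, DBREINDEX rebuilds all indexes on the table.
-- This is an offline operation in SQL 2000: the table is locked while it runs.
DBCC DBREINDEX ('dbo.Orders');

ALTER DATABASE MyDb SET RECOVERY FULL;

BACKUP LOG MyDb TO DISK = 'E:\Backups\MyDb_log_after.trn';   -- resume chain
```

Taking the log backup immediately after switching back to FULL restarts a fully-logged restore chain.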
--
Andrew J. Kelly SQL MVP
Solid Quality Mentors
"Dragon" <noSpam_Badill@.hotmail.com> wrote in message
news:uG1ZZmY8HHA.3900@.TK2MSFTNGP02.phx.gbl...
> Since I have so many SQL servers at various different sites installed over
> the years, each site is different when it comes to disk situcation. In old
> days, we used Legato based clustering which is a disk hog on its own. In
> that case we may have only 6 disks in a flat raid 5. I know this is
> probably the worst setup you can do, but this is how it is on those sites
> and short of adding an external array with more disks, there is only so
> much I can do. Newer sites use MS clustering and typically have 14-24
> disks in various config.
> My concerns is not with sites that are on MS Clustering, but instead with
> older sites (with Legato Clustering). In most cases each server is dual
> CPU single core or dual core in newer sites. So I am sure none of the
> sites are CPU bound. Disk bound is a different issue.All my Legato based
> sites are disk bound. In Legato clusters, there are no shared disks but
> instead one of the logical drives is mirrored accross servers to create a
> cluster. This mirroring create additional load on the i/o subsystem. This
> is why I said that short of installing new clusters, I can't do much. All
> my sql servers are some type of cluster for redundency. Most are still on
> SQL 2000.
> Thanks.
>
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:%23gQVAgV8HHA.2752@.TK2MSFTNGP06.phx.gbl...
>> Well a Cluster will do absolutely nothing for performance, it is a
>> hardware fail over option only. Not being CPU bound is usually another
>> indication that the disks are a problem. If the data can't read or write
>> fast enough then the CPU's can't work as hard. You can't reduce the I/O
>> that is required for a Defrag operation when it is doing work. As I
>> stated you can stop the defrag process at any time. You may want to run
>> it for a period of time and wait x many seconds or minutes in between to
>> give others a chance to catch up.
>> What is the current disk situation like? How many drives, what type of
>> arrays, where are the files etc?
>> --
>> Andrew J. Kelly SQL MVP
>> Solid Quality Mentors
>>
>> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
>> news:%2373ozBQ8HHA.2752@.TK2MSFTNGP06.phx.gbl...
>> Thank you Andrew.
>> Currently we have a scheduled job that determines whether or not an
>> index needs to be defraged. If it does, it performs the defrag.
>> Rebuilding is done manually only. I would like to add some sort of
>> intelligence into my defrag script.
>> Say for example, I have 100 indexes on between all my databases and
>> scripts see that 5 of them need to be defraged. This might not be an
>> issue but if one day 20 of them need to be defraged, this might be an
>> issue. Is there any other way to control I/O a little better? I am not
>> CPU bound by any mean on any of my sites. Disk bound? Yes for sure. But
>> short of replacing all existing systems and adding clusters ($ in
>> millions), I need to come up with some method better than what I have.
>> Thanks for any help.
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:OE5X03I8HHA.4476@.TK2MSFTNGP06.phx.gbl...
>> Killing a DBREINDEX job will roll back all the changes so it would
>> probably be better to let it finish. If you go the DEFRAG route you can
>> stop that at any time with no ill effect. But be aware that this
>> command is always fully logged and depending on how fragmented the
>> table is to begin with it may log much more than the index size itself
>> when running. The Defrag operation is always single threaded so you
>> don't have CPU contention on servers with multiple CPU's (not counting
>> HT) but you still have lots of I/O. The best defense for this kind of
>> issue is to make sure the log file are on their own Raid 1 or 10 and
>> the data is on another Raid 10. If tempdb is on yet another raid you
>> can use the sort in Tempdb option for the rebuilds. But again the
>> bottom line is the hardware must be up to the task, you can't squeeze
>> blood from a stone. Another key to minimizing these types of issues are
>> to only rebuild or defrag the indexes that absolutely need it. Don't
>> rebuild them all each time. Look first to see which are fragmented
>> beyond a certain point and only do those. There are examples in BOL
>> under DBCC SHOWCONTIG and the 2005 replacement for how to do this.
>> --
>> Andrew J. Kelly SQL MVP
>> Solid Quality Mentors
>>
>> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
>> news:O6r5gmD8HHA.5404@.TK2MSFTNGP02.phx.gbl...
>> Thank you Andrew.
>> Is there a way to quantify non-index-rebuilt / non-index-defrag
>> activity? I am thinking while performing index maintenance, I can run
>> another job that keeps track of other traffic. If it exceeds a pre-set
>> threshold, it will kill index jobs.
>> Unfortunately changing hardware to support all my load is currently
>> not possible as I have over 150 SQL servers, each at a different site
>> and replacing them will mean $$$$$$.
>> Thanks.
>>
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:uShzc9C8HHA.5752@.TK2MSFTNGP04.phx.gbl...
>> Not in current versions of SQL Server. You can affect how many CPU's
>> are used by rebuilding indexes with the MAXDOP. So if you don't want
>> an index rebuild to monopolize all the processors you can set the
>> MAXDOP to less than the max. But in terms of I/O there is no way to
>> govern that. Bottom line is that if your requirements are such that
>> you need to reindex and do other I/O you need to ensure your hardware
>> can handle the max load you need to sustain.
>> --
>> Andrew J. Kelly SQL MVP
>> Solid Quality Mentors
>>
>> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
>> news:OQJIPvC8HHA.2208@.TK2MSFTNGP06.phx.gbl...
>>> Hi,
>>>
>>> Currently we defrage our indexes on a regular basis and defrag then
>>> when statics are at a certain level. What I want to find out is:
>>>
>>> Is there a way to add speed control during rebuilding and defragging
>>> process? What I mean by this is that typically I run this at slow
>>> times but if something comes up and database gets busy, I want to
>>> make sure most of the recourses (including i/o) are taken away for
>>> the defrag/reindexing processes and given back to data processing.
>>> Perhsp if server stays busy for a certain amount of time, simply
>>> cancel the defrag and/or reindex job. In my environment, I simply
>>> cannot afford to not have i/o available for realtime transactions. I
>>> need to do this for both SQL server 2000 and for SQL server 2005.
>>> Any ideas?
>>>
>>> Thank you.
>>>
>>
>>
>>
>|||All my servers are in Simple mode. There is no way I have have it full and
have enough storage. Table sizes differ greatlely. It is a high transaction
environment so data gets added and changes a lot. Most of the data is
removed from the system after 3 months via (daily) automated jobs. This
keeps the system a bit lean. Total disk usage for all my databases is never
more than 50GB (per server, per site). This is 24/7/365 operation so most of
the offline stuff can't be done. I have only "slow" times but no close
times. I can probably reindex small tables but for larger ones I have to
think twice. This is why we try to defrag indexes as much as possible (only
the ones that need it) and reindex only when defrag is useless.
I know it seems like I am trying to make it hard for anyone to propose any
solution, but unfortunatley this is the environment I have to deal with.
Thanks for all your help.
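For the "only the ones that need it" part, the BOL pattern works well: capture DBCC SHOWCONTIG output into a table and act only on indexes past a threshold. A sketch for SQL 2000 (the temp table layout matches the TABLERESULTS output; the 30% threshold is an assumption, tune to taste):

```sql
-- Sketch: find indexes whose logical fragmentation exceeds a threshold,
-- so only those are defragged. SQL 2000 style; on 2005 you would query
-- sys.dm_db_index_physical_stats instead.
DECLARE @maxfrag decimal
SET @maxfrag = 30.0   -- assumed threshold

CREATE TABLE #fraglist (
    ObjectName sysname, ObjectId int, IndexName sysname NULL, IndexId int,
    Lvl int, CountPages int, CountRows int, MinRecSize int, MaxRecSize int,
    AvgRecSize int, ForRecCount int, Extents int, ExtentSwitches int,
    AvgFreeBytes int, AvgPageDensity int, ScanDensity decimal, BestCount int,
    ActualCount int, LogicalFrag decimal, ExtentFrag decimal)

DECLARE @tablename sysname
DECLARE tables CURSOR FOR
    SELECT TABLE_SCHEMA + '.' + TABLE_NAME
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_TYPE = 'BASE TABLE'

OPEN tables
FETCH NEXT FROM tables INTO @tablename
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO #fraglist
    EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''')
              WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
    FETCH NEXT FROM tables INTO @tablename
END
CLOSE tables
DEALLOCATE tables

-- Only these indexes need attention on this pass
SELECT ObjectName, IndexName, LogicalFrag
FROM #fraglist
WHERE LogicalFrag >= @maxfrag
  AND IndexId BETWEEN 1 AND 254

DROP TABLE #fraglist
```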
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:eZVRqha8HHA.5404@.TK2MSFTNGP02.phx.gbl...
> Well you are right in that a single Raid 5 with all the files on it is the
> worst situation in terms of performance short of a single drive with no
> array. Raid 5's are notoriously bad for high write type operations. How
> large are these tables? If they are small enough you can try changing the
> recovery mode to SIMPLE or Bulk Logged and use DBCC DBREINDEX. This will
> generate minimal activity in the log unlike a Defrag which can have a
> tremendous amount of activity. The operation will be cleaner and
> potentially faster depending ont he table size, number of indexes and how
> badly fragmented they are over a defrag operation. The down side is this
> is an off line operation in 2000. But again if the tables are small enough
> they may only be inaccessible for brief periods of time and cause minor
> blocking. This will reduce some of the I/O due to the decreased Log
> activity.
> --
> Andrew J. Kelly SQL MVP
> Solid Quality Mentors
>
> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
> news:uG1ZZmY8HHA.3900@.TK2MSFTNGP02.phx.gbl...
>> Since I have so many SQL servers at various different sites installed
>> over the years, each site is different when it comes to disk situcation.
>> In old days, we used Legato based clustering which is a disk hog on its
>> own. In that case we may have only 6 disks in a flat raid 5. I know this
>> is probably the worst setup you can do, but this is how it is on those
>> sites and short of adding an external array with more disks, there is
>> only so much I can do. Newer sites use MS clustering and typically have
>> 14-24 disks in various config.
>> My concerns is not with sites that are on MS Clustering, but instead with
>> older sites (with Legato Clustering). In most cases each server is dual
>> CPU single core or dual core in newer sites. So I am sure none of the
>> sites are CPU bound. Disk bound is a different issue.All my Legato based
>> sites are disk bound. In Legato clusters, there are no shared disks but
>> instead one of the logical drives is mirrored accross servers to create a
>> cluster. This mirroring create additional load on the i/o subsystem. This
>> is why I said that short of installing new clusters, I can't do much. All
>> my sql servers are some type of cluster for redundency. Most are still on
>> SQL 2000.
>> Thanks.
>>
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:%23gQVAgV8HHA.2752@.TK2MSFTNGP06.phx.gbl...
>> Well a Cluster will do absolutely nothing for performance, it is a
>> hardware fail over option only. Not being CPU bound is usually another
>> indication that the disks are a problem. If the data can't read or write
>> fast enough then the CPU's can't work as hard. You can't reduce the I/O
>> that is required for a Defrag operation when it is doing work. As I
>> stated you can stop the defrag process at any time. You may want to run
>> it for a period of time and wait x many seconds or minutes in between to
>> give others a chance to catch up.
>> What is the current disk situation like? How many drives, what type of
>> arrays, where are the files etc?
>> --
>> Andrew J. Kelly SQL MVP
>> Solid Quality Mentors
>>
>> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
>> news:%2373ozBQ8HHA.2752@.TK2MSFTNGP06.phx.gbl...
>> Thank you Andrew.
>> Currently we have a scheduled job that determines whether or not an
>> index needs to be defraged. If it does, it performs the defrag.
>> Rebuilding is done manually only. I would like to add some sort of
>> intelligence into my defrag script.
>> Say for example, I have 100 indexes on between all my databases and
>> scripts see that 5 of them need to be defraged. This might not be an
>> issue but if one day 20 of them need to be defraged, this might be an
>> issue. Is there any other way to control I/O a little better? I am not
>> CPU bound by any mean on any of my sites. Disk bound? Yes for sure. But
>> short of replacing all existing systems and adding clusters ($ in
>> millions), I need to come up with some method better than what I have.
>> Thanks for any help.
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:OE5X03I8HHA.4476@.TK2MSFTNGP06.phx.gbl...
>> Killing a DBREINDEX job will roll back all the changes so it would
>> probably be better to let it finish. If you go the DEFRAG route you
>> can stop that at any time with no ill effect. But be aware that this
>> command is always fully logged and depending on how fragmented the
>> table is to begin with it may log much more than the index size itself
>> when running. The Defrag operation is always single threaded so you
>> don't have CPU contention on servers with multiple CPU's (not counting
>> HT) but you still have lots of I/O. The best defense for this kind of
>> issue is to make sure the log file are on their own Raid 1 or 10 and
>> the data is on another Raid 10. If tempdb is on yet another raid you
>> can use the sort in Tempdb option for the rebuilds. But again the
>> bottom line is the hardware must be up to the task, you can't squeeze
>> blood from a stone. Another key to minimizing these types of issues
>> are to only rebuild or defrag the indexes that absolutely need it.
>> Don't rebuild them all each time. Look first to see which are
>> fragmented beyond a certain point and only do those. There are
>> examples in BOL under DBCC SHOWCONTIG and the 2005 replacement for how
>> to do this.
>> --
>> Andrew J. Kelly SQL MVP
>> Solid Quality Mentors
>>
>> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
>> news:O6r5gmD8HHA.5404@.TK2MSFTNGP02.phx.gbl...
>> Thank you Andrew.
>> Is there a way to quantify non-index-rebuilt / non-index-defrag
>> activity? I am thinking while performing index maintenance, I can run
>> another job that keeps track of other traffic. If it exceeds a
>> pre-set threshold, it will kill index jobs.
>> Unfortunately changing hardware to support all my load is currently
>> not possible as I have over 150 SQL servers, each at a different site
>> and replacing them will mean $$$$$$.
>> Thanks.
>>
>> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> news:uShzc9C8HHA.5752@.TK2MSFTNGP04.phx.gbl...
>>> Not in current versions of SQL Server. You can affect how many CPU's
>>> are used by rebuilding indexes with the MAXDOP. So if you don't want
>>> an index rebuild to monopolize all the processors you can set the
>>> MAXDOP to less than the max. But in terms of I/O there is no way to
>>> govern that. Bottom line is that if your requirements are such that
>>> you need to reindex and do other I/O you need to ensure your
>>> hardware can handle the max load you need to sustain.
>>>
>>> --
>>> Andrew J. Kelly SQL MVP
>>> Solid Quality Mentors
>>>
>>>
>>> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
>>> news:OQJIPvC8HHA.2208@.TK2MSFTNGP06.phx.gbl...
>>> Hi,
>>>
>>> Currently we defrage our indexes on a regular basis and defrag then
>>> when statics are at a certain level. What I want to find out is:
>>>
>>> Is there a way to add speed control during rebuilding and
>>> defragging process? What I mean by this is that typically I run
>>> this at slow times but if something comes up and database gets
>>> busy, I want to make sure most of the recourses (including i/o) are
>>> taken away for the defrag/reindexing processes and given back to
>>> data processing. Perhsp if server stays busy for a certain amount
>>> of time, simply cancel the defrag and/or reindex job. In my
>>> environment, I simply cannot afford to not have i/o available for
>>> realtime transactions. I need to do this for both SQL server 2000
>>> and for SQL server 2005. Any ideas?
>>>
>>> Thank you.
>>>
>>>
>>
>>
>>
>>
>|||Dragon (noSpam_Badill@.hotmail.com) writes:
> All my servers are in Simple mode. There is no way I have have it full
> and have enough storage. Table sizes differ greatlely. It is a high
> transaction environment so data gets added and changes a lot. Most of
> the data is removed from the system after 3 months via (daily) automated
> jobs. This keeps the system a bit lean. Total disk usage for all my
> databases is never more than 50GB (per server, per site). This is
> 24/7/365 operation so most of the offline stuff can't be done. I have
> only "slow" times but no close times. I can probably reindex small
> tables but for larger ones I have to think twice. This is why we try to
> defrag indexes as much as possible (only the ones that need it) and
> reindex only when defrag is useless.
A 24/7/365 high-transaction environment with simple recovery? Interesting.
Very interesting. "Yes, we have a lot of transactions, but if a disk
crashes, we can easily recreate them once we restored a backup. Or we
don't care that we lose them."
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx|||For whatever reasons, you have let the environment drive the other factors
such as recovery mode, maintenance, etc. As we have been saying over and over
again, you simply can't achieve peak performance and concurrency without the
proper hardware to begin with, no matter how much you want to. The fact that
you have all of the files on a single Raid 5 will limit the I/O you can do
with SQL Server. At the very least you need to separate the Logs from the
Data onto their own separate physical Raid 1 or Raid 10 array. Adding just
two disks to a system like this and moving all the log files will likely
yield a tremendous gain in throughput. If these systems have caching
controllers then make sure the cache is set to 100% write back and 0% read.
Add as much cache as possible. But other than that there is little you can
do other than what has already been suggested. Erland highlights an
excellent point about the logs. Running in Simple mode can not be driven by
lack of disk space. It is the business requirements that must drive these
decisions. Can you really afford to loose a days worth of transactions?
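To illustrate the log-separation step: on SQL 2000, moving a user database's log file onto its own array is typically done with detach/attach (a sketch only; the database name and paths are assumptions):

```sql
-- Sketch: relocate a database's log onto a dedicated Raid 1/10 array.
-- 'MyDb' and the paths are placeholders; the database is offline while moved.
USE master
EXEC sp_detach_db 'MyDb'
-- ...copy MyDb_log.ldf to the new drive at the OS level, then reattach:
EXEC sp_attach_db 'MyDb',
    'D:\Data\MyDb.mdf',
    'L:\Logs\MyDb_log.ldf'
```

On 2005 the same move can be done with ALTER DATABASE ... MODIFY FILE plus a restart of the database, but detach/attach works on both versions.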
--
Andrew J. Kelly SQL MVP
Solid Quality Mentors
"Erland Sommarskog" <esquel@.sommarskog.se> wrote in message
news:Xns99A56ECB396FDYazorman@.127.0.0.1...
> Dragon (noSpam_Badill@.hotmail.com) writes:
>> All my servers are in Simple mode. There is no way I have have it full
>> and have enough storage. Table sizes differ greatlely. It is a high
>> transaction environment so data gets added and changes a lot. Most of
>> the data is removed from the system after 3 months via (daily) automated
>> jobs. This keeps the system a bit lean. Total disk usage for all my
>> databases is never more than 50GB (per server, per site). This is
>> 24/7/365 operation so most of the offline stuff can't be done. I have
>> only "slow" times but no close times. I can probably reindex small
>> tables but for larger ones I have to think twice. This is why we try to
>> defrag indexes as much as possible (only the ones that need it) and
>> reindex only when defrag is useless.
> A 24/7/365 high-transaction environment with simple recovery? Interesting.
> Very interesting. "Yes, we have a lot of transactions, but if a disk
> crashes, we can easily recreate them once we restored a backup. Or we
> don't care that we lose them."
>
>
> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server 2005 at
> http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
> Books Online for SQL Server 2000 at
> http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx|||Pause a defrag, no. Defrag one index at a time and spread the load over n
days, sure.
But then, so do you.
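Something along these lines would combine the one-at-a-time approach with a pause and a crude busy check between indexes (a sketch only; the #worklist table, the 30-second delay, and the 50%-busy threshold are all assumptions):

```sql
-- Sketch: work through a prepared list of fragmented indexes one at a time.
-- After each index, pause and sample @@CPU_BUSY/@@IO_BUSY; if the server was
-- busy during the pause, stop and let the next scheduled run finish the list.
DECLARE @obj sysname, @idx sysname
DECLARE @cpu1 int, @io1 int

DECLARE work CURSOR FOR
    SELECT ObjectName, IndexName FROM #worklist   -- assumed worklist table
OPEN work
FETCH NEXT FROM work INTO @obj, @idx
WHILE @@FETCH_STATUS = 0
BEGIN
    -- INDEXDEFRAG is online and can be interrupted without a rollback
    EXEC ('DBCC INDEXDEFRAG (0, ''' + @obj + ''', ''' + @idx + ''')')

    SELECT @cpu1 = @@CPU_BUSY, @io1 = @@IO_BUSY
    WAITFOR DELAY '00:00:30'          -- let other transactions through

    -- ticks -> milliseconds via @@TIMETICKS (microseconds per tick);
    -- bail out if more than 15 s of the 30 s window was busy
    IF ((@@CPU_BUSY - @cpu1) * @@TIMETICKS / 1000 > 15000)
       OR ((@@IO_BUSY - @io1) * @@TIMETICKS / 1000 > 15000)
    BEGIN
        RAISERROR ('Server busy - stopping defrag pass early', 10, 1)
        BREAK
    END

    FETCH NEXT FROM work INTO @obj, @idx
END
CLOSE work
DEALLOCATE work
```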
"Dragon" <noSpam_Badill@.hotmail.com> wrote in message
news:eJ1bBKV8HHA.5504@.TK2MSFTNGP02.phx.gbl...
> Thanks Jay.
> Do you know of a way to pause defrag? Perhaps I can put it through a loop
> and let it run for a few seconds and then pause it for a second so that
> other transactions can get through... Just an idea.
> "Jay" <spam@.nospam.org> wrote in message
> news:OdIzD4U8HHA.1208@.TK2MSFTNGP05.phx.gbl...
>> As has already been noted, once you're in a rebuild, not much. However,
>> I've been thinking about using a combination of: WAITFOR, @.@.CPU_BUSY &
>> @.@.IO_BUSY to determin server load before starting something running.
>> Haven't tried it yet.
>>
>> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
>> news:OQJIPvC8HHA.2208@.TK2MSFTNGP06.phx.gbl...
>> Hi,
>> Currently we defrage our indexes on a regular basis and defrag then when
>> statics are at a certain level. What I want to find out is:
>> Is there a way to add speed control during rebuilding and defragging
>> process? What I mean by this is that typically I run this at slow times
>> but if something comes up and database gets busy, I want to make sure
>> most of the recourses (including i/o) are taken away for the
>> defrag/reindexing processes and given back to data processing. Perhsp if
>> server stays busy for a certain amount of time, simply cancel the defrag
>> and/or reindex job. In my environment, I simply cannot afford to not
>> have i/o available for realtime transactions. I need to do this for both
>> SQL server 2000 and for SQL server 2005. Any ideas?
>> Thank you.
>>
>|||Thank you Jay.
"Jay" <spam@.nospam.org> wrote in message
news:ezYWDxn8HHA.5980@.TK2MSFTNGP04.phx.gbl...
> Pause a defrag, no. Defrag one index at a time and spread the load over n
> days, sure.
> But then, so do you.
> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
> news:eJ1bBKV8HHA.5504@.TK2MSFTNGP02.phx.gbl...
>> Thanks Jay.
>> Do you know of a way to pause defrag? Perhaps I can put it through a loop
>> and let it run for a few seconds and then pause it for a second so that
>> other transactions can get through... Just an idea.
>> "Jay" <spam@.nospam.org> wrote in message
>> news:OdIzD4U8HHA.1208@.TK2MSFTNGP05.phx.gbl...
>> As has already been noted, once you're in a rebuild, not much. However,
>> I've been thinking about using a combination of: WAITFOR, @.@.CPU_BUSY &
>> @.@.IO_BUSY to determin server load before starting something running.
>> Haven't tried it yet.
>>
>> "Dragon" <noSpam_Badill@.hotmail.com> wrote in message
>> news:OQJIPvC8HHA.2208@.TK2MSFTNGP06.phx.gbl...
>> Hi,
>> Currently we defrage our indexes on a regular basis and defrag then
>> when statics are at a certain level. What I want to find out is:
>> Is there a way to add speed control during rebuilding and defragging
>> process? What I mean by this is that typically I run this at slow times
>> but if something comes up and database gets busy, I want to make sure
>> most of the recourses (including i/o) are taken away for the
>> defrag/reindexing processes and given back to data processing. Perhsp
>> if server stays busy for a certain amount of time, simply cancel the
>> defrag and/or reindex job. In my environment, I simply cannot afford to
>> not have i/o available for realtime transactions. I need to do this for
>> both SQL server 2000 and for SQL server 2005. Any ideas?
>> Thank you.
>>
>>
>|||I think you have to understand the the hardware environment to decide what
can be added. Currently if I have a Legato cluster (which I do in most of my
systems) it means I do not have an external shared storage device. Legato
mirror's a server's internal drives across servers. When this was
implemented this was the best/cost saving/redundency providing solution out
there. All our new system do not have this configuration. new systems have
MS clustering with some type of shared array or SAN and have proper disk
configuration for data, logs, indexes, temp dB etc.
In older Legato system, we do have raid controllers with cache (256-512MB)
and they are currently set to 100% write. There is no possiblity of adding
additional disks as these server support a maximum of 6 disks and that's
what they got. If I want to add any additional hardware performance, I have
to go change the OS to enterprise as we as SQL to Enterprise (in SQL 2000),
add a shared disk array and create a cluster. This is a no brainer for newer
sites but older ones (and lots of them) will take time.
I know if there is no 'soft' solution or patching-up for the performance
then their isn't one. This is why I want to make sure I ask all the experts
here if there is anything I can do.
As for simple recovery vs full recover, full recovery will require me to
perform multiple backups all day long. In most cases I do not have enough
storage to make these backups. Even if I do, I am addinging additional I/O
on the disks (for backups) which in my case can be a killer. Currently I
perform one full SQL DB backups everyday which then get backed up to tape
drives. Unfortunately, I am stuck with this environment until replacement
hardware is installed at all these sites. This could be 2-3 years before all
are changed.
Thanks.
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:%23JQaYgi8HHA.600@.TK2MSFTNGP05.phx.gbl...
> For what ever reasons you have let the environment drive the other factors
> such as recovery mode, maintenance etc. As we have been saying over and
> over again you simply can't achieve peak performance and concurrency
> without the proper hardware to begin with no matter how much you want to.
> The fact that you have all of the files on a single Raid 5 will limit the
> I/O you can do with SQL Server. At the very least you need to separate the
> Logs from the Data onto their own separate physical Raid 1 or Raid 10
> array. Adding just two disks to a system like this and moving all the log
> files will likely yield a tremendous gain in throughput. If these systems
> have caching controllers then make sure the cache is set to 100% write
> back and 0% read. Add as much cache as possible. But other than that there
> is little you can do other than what has already been suggested. Erland
> highlights an excellent point about the logs. Running in Simple mode can
> not be driven by lack of disk space. It is the business requirements that
> must drive these decisions. Can you really afford to loose a days worth of
> transactions?
> --
> Andrew J. Kelly SQL MVP
> Solid Quality Mentors
>
> "Erland Sommarskog" <esquel@.sommarskog.se> wrote in message
> news:Xns99A56ECB396FDYazorman@.127.0.0.1...
>> Dragon (noSpam_Badill@.hotmail.com) writes:
>> All my servers are in Simple mode. There is no way I have have it full
>> and have enough storage. Table sizes differ greatlely. It is a high
>> transaction environment so data gets added and changes a lot. Most of
>> the data is removed from the system after 3 months via (daily) automated
>> jobs. This keeps the system a bit lean. Total disk usage for all my
>> databases is never more than 50GB (per server, per site). This is
>> 24/7/365 operation so most of the offline stuff can't be done. I have
>> only "slow" times but no close times. I can probably reindex small
>> tables but for larger ones I have to think twice. This is why we try to
>> defrag indexes as much as possible (only the ones that need it) and
>> reindex only when defrag is useless.
>> A 24/7/365 high-transaction environment with simple recovery?
>> Interesting.
>> Very interesting. "Yes, we have a lot of transactions, but if a disk
>> crashes, we can easily recreate them once we restored a backup. Or we
>> don't care that we lose them."
>>
>>
>> --
>> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
>> Books Online for SQL Server 2005 at
>> http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
>> Books Online for SQL Server 2000 at
>> http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
>|||Dragon (noSpam_Badill@.hotmail.com) writes:
> As for simple recovery vs full recover, full recovery will require me to
> perform multiple backups all day long. In most cases I do not have enough
> storage to make these backups. Even if I do, I am addinging additional I/O
> on the disks (for backups) which in my case can be a killer.
That would be log backups, which would not take as many resources as
full backups, unless your databases are being reloaded all the time
with new data.
The real killer question is: what if a database goes belly up? What
would it cost your organisation to lose one day of data? What will
your manager say when you tell him "sorry, that data is gone forever"?
Of course, it's apparent from your description that your systems are
suffering from a severely underdimensioned budget. And I'm afraid that
the best advice I can give is that you take the political fight in
your organisation to change this - before disaster strikes.
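For reference, a log backup under Full recovery is a one-liner you would run from a frequent Agent job (the database name and path are placeholders):

```sql
-- Sketch: frequent log backup under FULL recovery. Unlike a full backup,
-- it only reads the log written since the previous log backup, so its I/O
-- cost tracks the transaction volume, not the database size.
BACKUP LOG MyDb
    TO DISK = 'L:\Backup\MyDb_log.trn'
```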
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx