
Tuesday, March 27, 2012

Binding WebService as Datasource

I've painstakingly managed to get an XmlDocument from a web service and run this to retrieve data.

However, I cannot bind any of the returned elements to anything. Any help appreciated.


Are you using the XML data processing extension? What is the Query that you are using? More specifically, what is the Element Path for the query?

For more information, take a look at the following article:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql90/html/RepServXMLDS.asp

Ian
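
For anyone hitting the same wall: with the XML data processing extension, the fields you can bind come straight from the Element Path in the query, so that is usually the first thing to check. A minimal sketch (the element and attribute names here are made up for illustration):

<Query>
<ElementPath>Customers/Customer {@ID, Name, Balance}</ElementPath>
</Query>

The fields should then show up in the dataset as ID, Name and Balance, ready to bind.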

binding input parameters leads to SQL_ERROR

I am trying to read the results from a query like
select a, b
from mytbl
where col0 = ?
The first time I run SQLExecute, everything works fine, but when I call
SQLExecute a second time with a modified input parameter, it returns
SQL_ERROR. Below is the code I'm running with the error checking removed and
the names simplified.
SQLAllocHandle(SQL_HANDLE_STMT, hconn, &m_hstmt);
SQLPrepare(m_hstmt, (UCHAR *)m_query.GetBuffer(), m_query.GetLength());
SQLBindParameter(m_hstmt, 1, SQL_PARAM_INPUT, SQL_C_LONG, SQL_INTEGER, 0, 0,
&where_param, 0, 0);
SQLBindCol(m_hstmt, 1, SQL_C_BIT, addr1, 0, 0);
SQLBindCol(m_hstmt, 2, SQL_C_BIT, addr2, 0, 0);
SQLExecute(m_hstmt);   //works ok
SQLFetchScroll(m_hstmt, SQL_FETCH_NEXT, 0);
where_param = 10;
SQLExecute(m_hstmt);   //fails
SQLFetchScroll(m_hstmt, SQL_FETCH_NEXT, 0);
My goal is to run execute many times with a variety of parameters. I was
trying to have the odbc driver read from the address specified in
SQLBindParameter to set the value of the input parameter. I'm further
confused by the problem because when I try to retrieve error information by
calling SQLGetDiagRec, nothing is returned. Thanks for your help.
Scott
The problem goes away when I call SQLCloseCursor after completing the fetch.
The MSDN SQLBindParameter documentation at
http://msdn.microsoft.com/library/de...dparameter.asp
gives no indication that SQLCloseCursor should be needed.
I suppose that I am now just concerned that I am introducing extra
processing overhead by calling SQLCloseCursor.
Scott
"ScottD" wrote:

> [original post snipped]
|||Are you making sure that you've fetched all of the rows? Even if only
one row is returned by the query, you should call SQLFetchXXX() until
it returns SQL_NO_DATA. Otherwise the driver doesn't necessarily know
that you've finished fetching rows on the current resultset/cursor.
Alternatively, you can use SQLMoreResults() to flush the remaining data
on the wire and position on the next resultset (if there is one,
otherwise it cleans up the connection).
SQLCloseCursor shouldn't add too much overhead. I'm assuming you're
using the default (firehose) cursor. If so, SQLCloseCursor just makes
sure that all of the data from the previous statement has been
consumed, and the connection is ready for the next statement.
Brannon
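
To make the fix concrete, here is a minimal sketch of the pattern Scott and Brannon converged on: drain the result set and close the cursor before re-executing the prepared statement. The names follow Scott's snippet; error checking is stripped out, so treat it as illustrative rather than production code.

#include <windows.h>
#include <sql.h>
#include <sqlext.h>

/* Re-execute a prepared, parameterized statement with a new value.
   The cursor from the previous execution must be released first,
   either by fetching until SQL_NO_DATA or by calling SQLCloseCursor. */
void run_twice(SQLHDBC hconn)
{
    SQLHSTMT hstmt;
    SQLINTEGER where_param = 5;
    char addr1, addr2;

    SQLAllocHandle(SQL_HANDLE_STMT, hconn, &hstmt);
    SQLPrepare(hstmt, (SQLCHAR *)"select a, b from mytbl where col0 = ?", SQL_NTS);
    SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_LONG, SQL_INTEGER,
                     0, 0, &where_param, 0, NULL);
    SQLBindCol(hstmt, 1, SQL_C_BIT, &addr1, 0, NULL);
    SQLBindCol(hstmt, 2, SQL_C_BIT, &addr2, 0, NULL);

    for (int i = 0; i < 2; i++) {
        SQLExecute(hstmt);
        /* drain every row of the result set... */
        while (SQL_SUCCEEDED(SQLFetchScroll(hstmt, SQL_FETCH_NEXT, 0)))
            ;   /* consume addr1/addr2 here */
        /* ...and close the cursor so the next SQLExecute succeeds */
        SQLCloseCursor(hstmt);
        where_param = 10;   /* the driver re-reads the bound address */
    }
    SQLFreeHandle(SQL_HANDLE_STMT, hstmt);
}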

Binding data to text box

I want to bind some data to a text box from a SQL Server DB, but when I run the page I get an error. Here is my code.

<form id="form1" runat="server">
<div>
<asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:imacsConn %>"
    SelectCommand="SELECT Reportnumber FROM [SummaryBlue] WHERE REPORTNUMBER = @REPORTNUMBER">
    <SelectParameters>
        <asp:QueryStringParameter Name="REPORTNUMBER" QueryStringField="REPORTNo" Type="String" />
    </SelectParameters>
</asp:SqlDataSource>
<asp:TextBox ID="TextBox1" runat="server" Columns="<%$ ConnectionStrings:imacsConn %>"></asp:TextBox>
</div>
</form>

Error:

Exception Details: System.FormatException: Input string was not in a correct format.

Source Error:

Line 25:     </SelectParameters>
Line 26: </asp:SqlDataSource>
Line 27: <asp:TextBox ID="TextBox1" runat="server" Columns="<%$ ConnectionStrings:imacsConn %>"></asp:TextBox></div>
Line 28: </form>

The Columns property of a text box is how many characters wide it should be (an integer property). You are trying to assign it a string.

<asp:TextBox ID="TextBox1" runat="server" Columns="<%$ ConnectionStrings:imacsConn %>">

|||

Hi xbhavesh,

From your description, I understand you want to bind data to a single TextBox. Based on my experience and research, a SqlDataSource can be bound to list data-bound controls, but a TextBox is not one of them, so we cannot simply bind data to a TextBox from a SqlDataSource.

I found two solutions and hope they are helpful to you.

1. Bind the DataSource to a GridView, and then bind the GridView cell value to the TextBox. Don't forget to set the GridView to not visible and to call the DataBind method on Page_Load. Here is the sample code:

<asp:TextBox ID="TextBox1" runat="server" Text='<%# GridView1.Rows[0].Cells[0].Text %>'></asp:TextBox>

<asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:testConnectionString %>"
    SelectCommand="SELECT [id] FROM [Table1] WHERE ([id] = 2)">
</asp:SqlDataSource>

<asp:GridView ID="GridView1" Visible="false" runat="server" DataSourceID="SqlDataSource1">
</asp:GridView>

protected void Page_Load(object sender, EventArgs e)
{
    TextBox1.DataBind();
}

2. Create a custom control that inherits from DataBoundControl:

http://msdn2.microsoft.com/en-us/library/system.web.ui.webcontrols.databoundcontrol.aspx

P.S. We should bind to the Text property, not Columns.

Sunday, March 25, 2012

binary sort versus DOCI

Hello:
Whether you are using SQL 2000 or SQL 2005, what is the best script to run
in Query Analyzer to determine the sort order that SQL was originally
installed under (binary versus dictionary order case insensitivity)?
Thanks!
childofthe1980s

On 27 Oct, 17:41, childofthe1980s
<childofthe19...@.discussions.microsoft.com> wrote:
> Hello:
> Whether you are using SQL 2000 or SQL 2005, what is the best script to run
> in Query Analyzer to determine the sort order that SQL was originally
> installed under (binary versus dictionary order case insensitivity)?
> Thanks!
> childofthe1980s
SERVERPROPERTY('Collation')
returns the name of the server's default collation.
--
David Portas

|||Thank you, David!
childofthe1980s
"David Portas" wrote:
> [quoted text snipped]
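
For quick reference, David's suggestion is a one-liner, and DATABASEPROPERTYEX gives the per-database equivalent; both work on SQL 2000 and SQL 2005:

-- Server-wide default collation (e.g. a _BIN binary sort vs. a _CI_AS dictionary sort)
SELECT SERVERPROPERTY('Collation') AS ServerCollation

-- Collation of a specific database, which can differ from the server default
SELECT DATABASEPROPERTYEX('master', 'Collation') AS DatabaseCollation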

Thursday, March 22, 2012

Bigint vs int

I had set up my tables using BigInts for my Surrogate keys (identity) to
make sure I could never run out of numbers.
I am thinking maybe that was overkill and may slow my system down (I know it
takes up more space, but that is not really an issue).
Are the Bigints going to cause me a problem down the line?
Also, if I start changing the BigInts to Ints in my Database and Stored
Procedure and miss some of the Stored procedures, would that cause me any
problems. I also have the ASP.Net pages that call the Stored Procedure set
to BigInts - would there be a problem if I miss some of these? In
other words, would there be a problem with mixing and matching the types
until I get them all taken care of?
Thanks,
Tom

>I had set up my tables using BigInts for my Surrogate keys (identity) to
>make sure I could never run out of numbers.

> Are the Bigints going to cause me a problem down the line?
Problems? No. The only issue is that all of your keys are twice as large
as for an int. This will not "hurt" anything, but it is a waste of
resources. Tables with GUIDs as keys can perform just fine also, and they
take 16 bytes. It all comes down to how much disk space and memory you need. It
will be slightly more for bigints.
The real question is whether you really have a need for 2 billion (well, 4
billion if you don't mind negatives after you exhaust the positive values)
values in your tables? I only have had two tables in my life where this was
possible (and only one has reached this amount) and they were both used to
track clicks on a pretty large web click tracking software table. I am
actually more likely to end up with small- and tiny- int keys than I am
bigint. But you have to ask yourself, if there is a possibility of that
kind of data, what do you want to do?
If this is an OLTP system, I would suggest you probably will want to delete
old data before doing this. A billion of anything is quite a bit.

> Also, if I start changing the BigInts to Ints in my Database and Stored
> Procedure and miss some of the Stored procedures, would that cause me any
> problems. I also have the ASP.Net pages that call the Stored Procedure
> set to BigInts - would there be a problem if I miss some of these. In
> other words, would there be a problem with mixing and matching the types
> until I get them all taken care of?
Not "big" problems, unless you try to pass a value that fits in a bigint and
not in an int. If you aren't using stored procs, you may have some issues
with type mismatches cause funky plans, but not enough to avoid switching.
----
Louis Davidson - http://spaces.msn.com/members/drsql/
SQL Server MVP
"tshad" <tscheiderich@.ftsolutions.com> wrote in message
news:uI2qqGhmFHA.1372@.TK2MSFTNGP10.phx.gbl...
> [original question snipped]

|||You can actually find an answer to your question using a calculator. Consider
the number of rows inserted daily and you can predict when you 'run out of
numbers'.
My guess: not in your life-time - even with integer.
ML

|||"Louis Davidson" <dr_dontspamme_sql@.hotmail.com> wrote in message
news:OWXwVvhmFHA.3312@.tk2msftngp13.phx.gbl...
>
> Problems? No. The only issue is that all of your keys are twice as large
> as for an int. This will not "hurt" anything, but it is a waste of
> resources. Tables with GUIDs as keys can perform just fine also, and they
> take 16 byts. It is all into how much disk space and memory you need. It
> will be slightly more for bigints.
>
But if they are not used in any of my indexes, would there be any
performance problems?

> The real question is whether you really have a need for 2 billion (well, 4
> billion if you don't mind negatives after you exhaust the positive values)
> values in your tables? I only have had two tables in my life where this
> was possible (and only one has reached this amount) and they were both
> used to track clicks on a pretty large web click tracking software table.
> I am actually more likely to end up with small- and tiny- int keys than I
> am bigint. But you have to ask yourself, if there is a possibility of
> that kind of data, what do you want to do?
Actually, I don't know if I would come close, but I don't want to build this
system - which could take a couple of years to finish, only to find out that
the numbers are filling up faster than I expected.

> If this is an OLTP system, I would suggest you probably will want to
> delete old data before doing this. A billion of anything is quite a bit.
I would definitely be deleting the records as I go, but the identity field
would still be increasing by one even if I dumped half the records.

>
> Not "big" problems, unless you try to pass a value that fits in a bigint
> and not in an int. If you aren't using stored procs, you may have some
> issues with type mismatches cause funky plans, but not enough to avoid
> switching.
If the issues aren't bad issues then I could just start making the changes
and deal with the differences as I go along.
Thanks,
Tom.

> --
> ----
--
> Louis Davidson - http://spaces.msn.com/members/drsql/
> SQL Server MVP
>
> "tshad" <tscheiderich@.ftsolutions.com> wrote in message
> news:uI2qqGhmFHA.1372@.TK2MSFTNGP10.phx.gbl...
>

|||"ML" <ML@.discussions.microsoft.com> wrote in message
news:EB9108DE-3804-46E8-A4FF-D31A58D48666@.microsoft.com...
> You can actually find an answer to your question using a calculator.
> Consider
> the number of rows inserted daily and you can predict when you 'run out of
> numbers'.
> My guess: not in your life-time - even with integer.
>
Probably true.
Which is why I am in the process of looking closer at my tables and making
the changes now, before I go much further.
Thanks,
Tom
> ML

|||> But if they are not used in any of my indexes, would there be any
> performance problems?
If you are not indexing an identity value, then what are you using it for?
Identity values are pretty much only good as keys to go and fetch a single
row.

> Actually, I don't know if I would come close, but I don't want to build
> this system - which could take a couple of years to finish, only to find
> out that the numbers are filling up faster than I expected.
This is a valid point. I would suggest that you do some math to see how
long this will last. Like if you do 5 million new rows a year, you have
well over 100 years before you reach 500 million, so 400 years (400
* 5 million = 2 billion, right? I have worked a lot this week!) before you
will have to switch to use the negative values for the next 400 years :)
But it is up to you to decide just how many rows you will need and size
appropriately ;)
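
Louis's back-of-the-envelope math is easy to script if you want to plug in your own numbers; a quick sketch, using his example rate of 5 million rows a year:

-- Years until an int identity seeded at 1 runs out, at an assumed insert rate
DECLARE @max_int bigint, @rows_per_year bigint
SET @max_int = 2147483647        -- upper bound of int
SET @rows_per_year = 5000000     -- assumed rate: 5 million inserts a year
SELECT @max_int / @rows_per_year AS years_until_int_exhausted   -- roughly 429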

> If the issues aren't bad issues then I could just start making the changes
> and deal with the differences as I go along.
If you actually don't need them in indexes (UNIQUE and PRIMARY KEY
constraints use indexes also) then it really won't make very much difference
at all.

> I would definitely be deleting the records as I go, but the identity field
> would still be increasing by one even if I dumped half the records.
True, but you might be able to start your values over to fill in the gaps
once you hit the max value if the data that gets deleted is in increasing
identity value. Especially if you are using this as a surrogate key. Usage
will dictate. If you post your structures and how these values are actually
being used it might help... Good luck :)
----
Louis Davidson - http://spaces.msn.com/members/drsql/
SQL Server MVP
"tshad" <tscheiderich@.ftsolutions.com> wrote in message
news:%23vCQUQimFHA.3568@.TK2MSFTNGP10.phx.gbl...
> "Louis Davidson" <dr_dontspamme_sql@.hotmail.com> wrote in message
> news:OWXwVvhmFHA.3312@.tk2msftngp13.phx.gbl...
> [quoted text snipped]
|||On Fri, 5 Aug 2005 18:13:38 -0700, tshad wrote:

>"Louis Davidson" <dr_dontspamme_sql@.hotmail.com> wrote in message
>news:OWXwVvhmFHA.3312@.tk2msftngp13.phx.gbl...
>But if they are not used in any of my indexes, would there be any
>performance problems?
Hi Tom,
Using a surrogate key without including it in an index is pointless.
The reason to add a surrogate key, is to serve as a surrogate for the
business key. This can be useful if other tables link to the table and
the business key is long - instead of duplicating the business key in
all referring tables, you duplicate the surrogate key. This of course
requires that the surrogate key is declared either PRIMARY KEY or
UNIQUE.
All PRIMARY KEY and UNIQUE constraints auto-generate an index. This is
not the case for FOREIGN KEY constraints, but in most cases, it makes
sense to manually add an index on the FOREIGN KEY column(s).
The surrogate key will be in the index for its own PRIMARY KEY or
UNIQUE constraint, in all referring tables, and in all indexes you define
for the FOREIGN KEY constraints. If the PRIMARY KEY or UNIQUE for the
surrogate key is clustered, there'll be a copy of the surrogate key in
each nonclustered index for the same table as well.
Extra bytes in the surrogate key result in more space taken for all the
tables and all the indexes where the surrogate key is used. That means
that less rows and/or less index entries fit on a page, which will
affect the performance (more logical reads, and a lower cache-hit ratio,
resulting in more physical reads as well).
How much this will affect your total performance is hard to tell. Adding
two bytes to a 3000-byte row is not quite as significant as adding two
bytes to a 20-byte row.
Don't make your surrogate keys longer than they need to be, "just to be
on the safe side", as this WILL affect performance. On the other hand,
don't make your surrogate keys too short in order to save some
performance, as the amount of work it takes you to grow the keys later
is disproportional to the performance saved.
And if you use IDENTITY, then always start numbering at the lowest value
the datatype permits. The standard setting (starting at one) disregards
half the available values. Sure, you can roll over to the negatives once
the positives are exhausted - but why would you?
Just my € 0,02.
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)

|||> And if you use IDENTITY, then always start numbering at the lowest value
> the datatype permits. The standard setting (starting at one) disregards
> half the available values. Sure, you can roll over to the negatives once
> the positives are exhausted - but why would you?
This has always been hard to do in my experience, and I have fought with so
many programmers/dba's about this. I would agree that if you calculate that
you will need over 2 billion rows before, like, twice your expected system
life expectancy it might be possible, but most of the time the large number
of digits is too much to try to type while the system is in its infancy
and people are troubleshooting problems.
----
Louis Davidson - http://spaces.msn.com/members/drsql/
SQL Server MVP
"Hugo Kornelis" <hugo@.pe_NO_rFact.in_SPAM_fo> wrote in message
news:mkk9f19ss8ge7igqclbpmb4gvkmi2e1p85@.4ax.com...
> [quoted text snipped]
|||That helps a lot.
I will probably start moving some of my bigints to ints, as everyone is
probably correct that the overkill causes a few more performance problems
just to "be safe".
As you say, the surrogate key would really have to be in the indexes, or what
would the point of the surrogate be? I hadn't started putting my indexes
together yet, so didn't notice that I would have to use them in the
indexes. So it would be better for the surrogates to be smaller if
possible.
Thanks,
Tom

> [quoted text snipped]
|||"Louis Davidson" <dr_dontspamme_sql@.hotmail.com> wrote in message
news:ObhzBZimFHA.708@.TK2MSFTNGP09.phx.gbl...
> If you are not indexing an identity value, then what are you using it for?
> Identity values are pretty much only good as keys to go and fetch a single
> row.
You're right. I would have to use the surrogate in the index.

>
> This is a valid point. I would suggest that you do some math to see how
> long this will last. Like if you do 5 million new rows a year, you have
> well over 100 years before you reach 500 million, so 400 years (400
> * 5 million = 2 billion, right? I have worked a lot this week!) before you
> will have to switch to use the negative values for the next 400 years :)
> But it is up to you to decide just how many rows you will need and size
> appropriately ;)
I will need to do that as I haven't really worked that out.

>
> If you actually don't need them in indexes (UNIQUE and PRIMARY KEY
> constraints use indexes also) then it really won't make very much
> difference at all.
>
> True, but you might be able to start your values over to fill in the gaps
> once you hit the max value if the data that gets deleted is in increasing
> identity value. Especially if you are using this as a surrogate key. Usage
> will dictate. If you post your structures and how these values are actually
> being used it might help... Good luck :)
Here is a stripped down version of some of my tables.
What I have is mulitple companies. Each company can have many users.
There would be many positions available. The JobTitle cannot be unique as
different companies could have the same JobTitles - so I need to use an
PositionID as my surrogate key and then index the JobTitle + PositionID.
There are many Applicants and each Applicant can have multiple positions
(ApplicantPosition table). And each ApplicantPosition table can have many
skills (ApplicantSkills table).
The tables could be set up as follows (except I will probably be changing
many of the bigints to ints):
CREATE TABLE [dbo].[Companies] (
[CompanyID] [bigint] IDENTITY (1, 1) NOT NULL Primary Key,
[Name] [varchar] (45) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX Name_ID_Company_Idx on Companies(Name,CompanyID)
CREATE TABLE [dbo].[logon] (
[UserID] [bigint] IDENTITY (1, 1) NOT NULL Primary Key ,
[CompanyID] [bigint] NULL ,
[FirstName] [varchar] (15) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[LastName] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Email] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[UserName] [varchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Password] [varchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[DateCreated] [datetime] NULL
) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX Name_ID_Logon_Idx on logon(UserName,UserID)
CREATE NONCLUSTERED INDEX Name_Password_Logon_Idx on logon(UserName,Password)
CREATE TABLE [dbo].[Position] (
[PositionID] [int] IDENTITY (1, 1) NOT NULL Primary Key,
[CompanyID] [bigint] NOT NULL ,
[JobTitle] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[DatePosted] [datetime] NULL ,
[DateUpdated] [datetime] NULL
) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX Name_ID_Position_Idx on Position(JobTitle,PositionID)
CREATE TABLE [dbo].[Applicant] (
[CompanyID] [bigint] NOT NULL ,
[UserID] [bigint] NOT NULL ,
[ApplicantID] [bigint] IDENTITY (1, 1) NOT NULL Primary Key,
[PositionID] [int] NOT NULL ,
[Name] [varchar] (45)
) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX Name_ID_Applicant_Idx on Applicant(Name,ApplicantID)
CREATE TABLE [dbo].[ApplicantPosition] (
[CompanyID] [bigint] NULL ,
[UserID] [bigint] NOT NULL ,
[ApplicantPositionID] [bigint] IDENTITY (1, 1) NOT NULL Primary Key,
[ApplicantID] [bigint] NOT NULL ,
[PositionID] [int] NOT NULL ,
[FirstName] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[LastName] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Status] [char] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX Name_ID_ApplicantPosition_Idx on
ApplicantPosition(LastName,FirstName,ApplicantPositionID)
CREATE TABLE [dbo].[ApplicantSkills] (
[CompanyID] [bigint] NOT NULL ,
[UserID] [bigint] NULL ,
[ApplicantSkillsID] [bigint] IDENTITY (1, 1) NOT NULL Primary Key,
[ApplicantID] [bigint] NOT NULL ,
[Skill] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Level] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX Skill_ID_ApplicantSkillsID on
ApplicantSkills(Skill,ApplicantSkillsID)
Thanks,
Tom
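
As a footnote to Hugo's advice about seeding IDENTITY at the datatype's minimum, a sketch on a hypothetical table (the trade-off being the awkward negative key values Louis mentions):

-- An int identity seeded at the type's minimum doubles the usable range
-- compared with the default seed of 1.
CREATE TABLE dbo.ClickLog (
ClickID int IDENTITY (-2147483648, 1) NOT NULL Primary Key,
ClickedAt datetime NOT NULL DEFAULT (GETDATE())
)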


Monday, March 19, 2012

BIDS is not acceptable - too "heavy"

Hello everybody!

I have a question - is it possible to visualize an execution of the SSIS package when it is being run from SQL Agent? "Visualize" means to show a data-flow "live" - similar to the visualization provided by BI Dev.Studio when you run a package there, with coloured boxes, blinking etc.

I searched the Web but found nothing - neither MS-related nor utilities from third parties. Is it possible in any way?

Thanks in advance,

Andrey.

P.S. Parsing log-files is an option, but we would like to try first something less "painful" and more universal...

To the best of my knowledge, no such tool exists.

Why not simply run the package from BIDS when you require the visualization?

|||

Just because we do not want to install BIDS in order to run a couple of SSIS packages. We thought of some utility or "light-weight" version of it invoked when the package is run.

And thanks for the information.

Regards,

A.G.

|||Yep, no tool exists, other than BIDS. Is the real-time log/output of the package not enough?|||

Thanks a lot! We will think about other solution - perhaps, log-based.

Regards,

A.G.

|||Sorry I couldn't help - good luck!|||

Hi:

I have had requests for such a UI too. Once you show users a package running in BIDS, they just love it, and want to see it running that way.

It is also a useful aid in Requirements and Design reviews.

Is there some place we can request Microsoft for a feature like this?

TIA

Kar

|||connect.microsoft.com

|||Hello.

To be honest, I'm not sure that MS will ever implement your request. This tool is not interesting for marketing. It has no real sales potential.

Regards,
Andrey

Wednesday, March 7, 2012

BETWEEN statement

I was wondering if someone can correct me on this. When I run this query I get no results displayed. By default, I made sure I used all the values from beginning to end to test it.

USE POS410
GO
SELECT SSN#, Hire_date
FROM Employee
WHERE Hire_date BETWEEN 1/1/1995 AND 12/31/2005
GO

1/1/1995 is an arithmetic expression meaning "1 divided by 1 divided by 1995"

this will round off to zero

you will probably get some results if you write this --

WHERE Hire_date BETWEEN '1995-01-01' AND '2005-12-31'

|||that did the trick! thank you
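
One caveat worth adding: if Hire_date is a datetime that can carry a time of day, BETWEEN ... AND '2005-12-31' stops at midnight and misses anyone hired later that day. A half-open range avoids this (same table as above):

SELECT SSN#, Hire_date
FROM Employee
WHERE Hire_date >= '19950101'
AND Hire_date < '20060101'

The unseparated YYYYMMDD format has the side benefit of being interpreted the same way under any language or DATEFORMAT setting.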

Friday, February 24, 2012

Best way to upgrade from SQL Server 2000 Standard to Enterprise

Hi all.
One of our customers needs SQL to run on more than 2GB of memory. They are
running on SQL2000 standard edition and therefore need to upgrade to
Enterprise edition. Is it just to install the new CD's over the old system
or do we need to do it as a reinstall of the server and some backup/restore
of the database? Our customers talked about some wizards to do this. Any
"best practice" advice anyone?
Thanx
Geir

Geir,
it's really simple. Just run the setup, select the upgrade option and if
necessary add some (new) components. It shouldn't take more than 2
minutes, and if I remember correctly there's no need to reapply service packs.
But as always, make sure you have a backup of all your databases
including the system databases before you start.
Markus|||Hi
The amount of memory you can use also depends on the version of the OS you
are using, which you haven't specified.
If you haven't already check out
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_ts_8dbn.asp
http://www.sql-server-performance.com/awe_memory.asp
http://support.microsoft.com/kb/274750
You should be able to install over your existing system, make sure that you
have backups of all your databases and a recovery plan for each stage in the
process, so that you know exactly how to get your system back up and running
in the event of problems being encountered.
John
"Geir Holme" wrote:
> [original question snipped]
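
Once Enterprise Edition is in, memory above 2GB on SQL 2000 still has to be switched on explicitly. A minimal sketch of the usual AWE configuration (the memory cap is an example figure for an 8GB box - see the AWE links above for the details):

-- Enable AWE so SQL Server 2000 Enterprise can use memory beyond 2GB
-- (needs the advanced options switch and a service restart to take effect)
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'awe enabled', 1
EXEC sp_configure 'max server memory', 6144   -- example cap in MB
RECONFIGURE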


best way to tolerate an unreliable WAN connection?

I'm looking at a problem two sites have - they both run the same ERP system
on one server, and their WAN and internet connections are unreliable. What
is the preferred method to keep people from sitting on their thumbs when the
WAN is unavailable? What about if you want to be able to change data at both
locations while the connection is down?
Paul
I would check with the ERP vendor to see if they support
replication...either through SQL Server or through their own processes.
Many ERP/CRM vendors do have a "remote server" concept built-in for just
such a situation
Kevin Hill
President
3NF Consulting
www.3nf-inc.com/NewsGroups.htm
www.DallasDBAs.com/forum - new DB forum for Dallas/Ft. Worth area DBAs.
www.experts-exchange.com - experts compete for points to answer your
questions
"Paul" <noone@.executespammers.org> wrote in message
news:OhI52pujFHA.3288@.TK2MSFTNGP09.phx.gbl...
> [original question snipped]
|||I messed with replication a while back. It made making changes to the database
(new columns, SP's etc) a pain. I was hoping there might be an alternative,
maybe a third-party product.
The ERP system is a custom one. I'm trying to find a solution for a problem
my friend is having. Definitely anything he settles on to fix it would be of
interest to me, because I do custom ERP systems as well. We're fortunate
enough here though where WAN downtime hasn't been an issue thus far.
Paul
"Kevin3NF" <KHill@.NopeIDontNeedNoSPAM3NF-inc.com> wrote in message
news:u6VabyujFHA.2852@.TK2MSFTNGP15.phx.gbl...
> [quoted text snipped]
|||If you want the system usable in two places that are not connected (during a
WAN outage), you are definitely looking into replication. Unless one of the
systems is going to be read-only. In that case you may be able to look into
log shipping, or 3rd party products like double-take from
www.nsisoftware.com (no affiliation)
Kevin Hill
President
3NF Consulting
www.3nf-inc.com/NewsGroups.htm
www.DallasDBAs.com/forum - new DB forum for Dallas/Ft. Worth area DBAs.
www.experts-exchange.com - experts compete for points to answer your
questions
"Paul" <noone@.executespammers.org> wrote in message
news:%231ALn%23ujFHA.3348@.tk2msftngp13.phx.gbl...
> [quoted text snipped]
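
Since the requirement is changes at both sites while the link is down, merge replication is the usual fit on SQL 2000. A rough sketch of the first two setup steps (the database and publication names are placeholders; a real setup also needs articles, a snapshot agent and subscriptions):

-- Enable the database for merge publishing, then create a merge publication
EXEC sp_replicationdboption
    @dbname = N'ERPDB',
    @optname = N'merge publish',
    @value = N'true'

EXEC sp_addmergepublication
    @publication = N'ERP_Merge',
    @description = N'Two-site merge publication for the ERP database'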
|||Hi
Why not get the network sorted out. If the WAN goes down, hit ISDN/Dial Up
and keep the location working.
Implementing workarounds because the network guys can't keep their act
together is a difficult and costly thing.
Regards
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Paul" <noone@.executespammers.org> wrote in message
news:OhI52pujFHA.3288@.TK2MSFTNGP09.phx.gbl...
> [original question snipped]
|||Yeah, that's the advice I gave.
According to the guy suffering from this, one site is out in the sticks and
has only Verizon for a provider and (apparently) one option for
connectivity. I told him to try to set up a failover connection over a
regular cable broadband connection. Never seen it done, but I don't see why
it couldn't work. People may turn their noses up at residential connections
but it's better than nothing, and some of them are damn fast. I've been
looking at Verizon's FIOS and salivating. 30Mb/s!
Paul
"Mike Epprecht (SQL MVP)" <mike@.epprecht.net> wrote in message
news:u8tKsY3jFHA.3144@.TK2MSFTNGP12.phx.gbl...
> [quoted text snipped]


Sunday, February 19, 2012

Best way to schedule a package to run during the day

Hi,

What is the best way to schedule a package to run every 30 minutes during the day? Would it be using SQL Agent jobs? If so, what are the things I need to consider?

Thank you,

Aldo

Hi ya,

Yes, it would be a SQL Server Agent job. You would need to build the package and then deploy it to either SQL Server or the file system. You will get errors if the credentials of the account running the Agent service are different from your package credentials. If that is the case, search for either "SQL Agent proxy" or "SSIS jobs not running in SQL Server Agent" and you will find loads of posts.

The rest is pretty easy. You would create a new job in Agent with a daily schedule and set it to recur every 30 minutes.

Hope this helps


Cheers

Rizwan

|||Thank you!
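
For anyone preferring T-SQL over the wizard, here is a minimal sketch of the Agent job setup Rizwan describes, assuming the package has been deployed to the file system; the job name and package path below are hypothetical.

-- A minimal sketch: one Agent job with an SSIS step and a daily
-- schedule recurring every 30 minutes. Names/paths are placeholders.
USE msdb;
GO
EXEC dbo.sp_add_job
    @job_name = N'Run MyPackage every 30 minutes';

EXEC dbo.sp_add_jobstep
    @job_name = N'Run MyPackage every 30 minutes',
    @step_name = N'Run the package',
    @subsystem = N'SSIS',
    @command = N'/FILE "C:\Packages\MyPackage.dtsx"';

EXEC dbo.sp_add_jobschedule
    @job_name = N'Run MyPackage every 30 minutes',
    @name = N'Every 30 minutes',
    @freq_type = 4,              -- daily
    @freq_interval = 1,          -- every day
    @freq_subday_type = 4,       -- sub-day units of minutes
    @freq_subday_interval = 30;  -- every 30 minutes

EXEC dbo.sp_add_jobserver
    @job_name = N'Run MyPackage every 30 minutes';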

best way to run 45 queries in a row?

Hi,
I have created 45 queries that I need to run which will separate and
stratify data. Each one has to be run one after the other. Right now I sit
there and manually run them in the SQL Query Analyzer screen. Is there
something I can create to run them automatically, one after the other?
Thanks
If you have to do this more than once, you can create a multi-step job or a DTS
package.
http://sqlservercode.blogspot.com/
"John Jasper" wrote:

> Hi,
> I have created 45 queries that I need to run which will separate and
> stratify data. Each one has to be run one after the other. Right now I sit
> there and manually run them in the SQL Query Analyzer screen. Is there
> something I can create to run them automatically, one after the other?
> Thanks
|||You could copy and paste them all into one stored procedure, with some error
handling thrown in to exit and alert you if one failed...
Kevin Hill
President
3NF Consulting
www.3nf-inc.com/NewsGroups.htm
"John Jasper" <JohnJasper@.discussions.microsoft.com> wrote in message
news:74DA6630-0986-4014-AE2E-DACEDF9200D9@.microsoft.com...
> Hi,
> I have created 45 queries that I need to run which will separate and
> stratify data. Each one has to be run one after the other. Right now I sit
> there and manually run them in the SQL Query Analyzer screen. Is there
> something I can create to run them automatically, one after the other?
> Thanks
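
For reference, a minimal sketch of the stored-procedure approach Kevin describes, written SQL 2000-style with @@ERROR checks since TRY/CATCH did not exist yet; the procedure, table, and column names are hypothetical.

-- A sketch only: wrap the 45 queries in one procedure and stop at the
-- first failure (SQL 2000 style, no TRY/CATCH). Names are placeholders.
CREATE PROCEDURE dbo.usp_StratifyData
AS
DECLARE @err int

-- Query 1 of 45
INSERT INTO dbo.Stratum01 (KeyCol, ValueCol)
SELECT KeyCol, ValueCol FROM dbo.SourceData WHERE Band = 1

SET @err = @@ERROR
IF @err <> 0
BEGIN
    RAISERROR ('Query 1 failed with error %d.', 16, 1, @err)
    RETURN @err
END

-- Query 2 of 45: repeat the same pattern for each remaining query
GO

From there the procedure can run as a single Agent job step, so any failure shows up in the job history.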


Thursday, February 16, 2012

best way to log out when running the job

Hi,
When I run the job, I am not sure whether it could be done at the table
level to log users out. I would not want to keep them out of the whole
database...only the tables I am updating. Also, once the job is running, I
would like to prevent any users from accessing the tables I am updating until
the end of the job.
Is there anyway to do that?
Thanks
Ed|||You can take an exclusive lock on the table and hold it until you are done.
That will not allow anyone to access the table as they will be blocked. Not
sure if they will like you for doing that though<g>.
Andrew J. Kelly SQL MVP
"Ed" <Ed@.discussions.microsoft.com> wrote in message
news:933F98D6-BA8A-46D6-A136-35C59BE7F5D6@.microsoft.com...
> Hi,
> When I run the job, I am not sure whether it could be done at the table
> level to log users out. I would not want to keep them out of the whole
> database...only the tables I am updating. Also, once the job is running,
> I
> would like to prevent any users from accessing the tables I am updating
> until
> the end of the job.
> Is there anyway to do that?
> Thanks
> Ed|||Andrew,
Thanks for your answer. I would also like to know what is the best way to
log users out before the job is running. Again, I only need to log them out
of the tables that I will be updating not the whole database.
Thanks again
Ed
"Andrew J. Kelly" wrote:

> You can take an exclusive lock on the table and hold it until you are done.
> That will not allow anyone to access the table as they will be blocked. Not
> sure if they will like you for doing that though<g>.
> --
> Andrew J. Kelly SQL MVP
>
> "Ed" <Ed@.discussions.microsoft.com> wrote in message
> news:933F98D6-BA8A-46D6-A136-35C59BE7F5D6@.microsoft.com...
>
>|||You can't "Log out" users from a table. They log in to the server and
potentially have access to one or more db's in that server. A user only
takes out locks while they are accessing the data. So any locks they may
have on any specific table are released once they are done with the query or
the transaction is committed or rolled back. Why do you care if they are in
the table or not? Maybe if you explain in more detail what you are
attempting to do we can suggest an appropriate action.
Andrew J. Kelly SQL MVP
"Ed" <Ed@.discussions.microsoft.com> wrote in message
news:3937FA07-1F29-497C-99AA-608E89D0E807@.microsoft.com...
> Andrew,
> Thanks for your answer. I would also like to know what is the best way
> to
> log users out before the job is running. Again, I only need to log them
> out
> of the tables that I will be updating not the whole database.
> Thanks again
> Ed
> "Andrew J. Kelly" wrote:
|||The major reason why I would like to "log out" users is because the job has
to be run in the daytime and the job has to be finished as quickly as
possible. With so many users using the resources of the server (huge data
warehouse accessed by so many people), I would like to "log out" all
users and block access to all of the tables being updated except for the job
itself.
Any suggestions?
Thanks
Ed
"Andrew J. Kelly" wrote:

> You can't "Log out" users from a table. They log in to the server and
> potentially have access to one or more db's in that server. A user only
> takes out locks while they are accessing the data. So any locks they may
> have on any specific table are released once they are done with the query or
> the transaction is committed or rolled back. Why do you care if they are in
> the table or not? Maybe if you explain in more detail what you are
> attempting to do we can suggest an appropriate action.
> --
> Andrew J. Kelly SQL MVP
>
> "Ed" <Ed@.discussions.microsoft.com> wrote in message
> news:3937FA07-1F29-497C-99AA-608E89D0E807@.microsoft.com...
>
|||Well you can KILL a connection that has a lock on the table you want but
that obviously ruins their day. Other than that there is no way to remove a
user's locks on a table until that user has finished accessing it and
committed or rolled back their transaction if they had one. As I stated
earlier the only real way to prevent them from accessing the table is to
take out an exclusive lock. You may have to wait a while until all current
users in that table have finished before the lock takes hold. But no new
users can access it either. It almost sounds like you need to place the db
in single-user mode, do the work, and return it as soon as possible. You
can specify a termination option to kick out the current users after a
period of time with the alter database command.
Andrew J. Kelly SQL MVP
"Ed" <Ed@.discussions.microsoft.com> wrote in message
news:5C6CCABD-739B-472F-892B-4F3B3F53DB31@.microsoft.com...
> The major reason why I would like to "Log Out" users is because the job
> has
> to be run in the daytime and the job has to be finished as quickly as
> possible. With so many users using the resources of the server (huge data
> warehouse accessed by so many people), I would like to "log out" all
> users and block access to all of the tables being updated except for the job
> itself.
> Any suggestions?
> Thanks
> Ed
> "Andrew J. Kelly" wrote:
>
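
For reference, a minimal sketch of the two options Andrew mentions; the table and database names are placeholders.

-- Option 1: hold an exclusive lock on just the table being updated
-- for the length of the transaction (table name is hypothetical).
BEGIN TRAN
SELECT TOP 1 1 FROM dbo.BigFactTable WITH (TABLOCKX, HOLDLOCK)
-- ... run the updates here; other sessions are blocked until COMMIT ...
COMMIT TRAN

-- Option 2: put the whole database in single-user mode, rolling back
-- anyone still connected after a grace period (db name is hypothetical).
ALTER DATABASE BigWarehouse SET SINGLE_USER WITH ROLLBACK AFTER 30 SECONDS
-- ... run the job ...
ALTER DATABASE BigWarehouse SET MULTI_USER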

Tuesday, February 14, 2012

Best way to HIDE passwords that are in a table

Hello All,
If I am storing passwords in a table within SQL Server 2000, is there a way
to ENCRYPT or HIDE the passwords, so that when I run a SELECT statement, I
cannot see them?
If this is not possible, what is the recommended way to HIDE passwords from
the SQL Server Administrator?
thanks...|||SQL 2000 does not have built-in tools for encryption, but it supports 3rd party
solutions.
check out the link below
http://www.sqlservercentral.com/col...oolkitpart1.asp
and also this one is pretty popular; not free, but not expensive:
http://www.xpcrypt.com/
Good day,
Bulent
"MSUTech" wrote:

> Hello All,
> If I am storing passwords in a table within SQL Server 2000. Is there a way
> to ENCRYPT or HIDE the passwords, so that when I run a SELECT statement, I
> can not see them?
> If this is not possible, what is the recommended way to HIDE passwords from
> the SQL Server Administrator?
> thanks...|||Hello,
Please use password encryption on your application side and store the
encrypted data in the database.
Thanks
Hari
"MSUTech" <MSUTech@.discussions.microsoft.com> wrote in message
news:5D6620EA-05C0-4044-9D1B-497261028F16@.microsoft.com...
> Hello All,
> If I am storing passwords in a table within SQL Server 2000. Is there a
> way
> to ENCRYPT or HIDE the passwords, so that when I run a SELECT statement, I
> can not see them?
> If this is not possible, what is the recommended way to HIDE passwords
> from
> the SQL Server Administrator?
> thans...|||One common method of storing passwords securely in a database is to use a
one-way hash function on the password and store the hashed value in the
database instead of encrypting it. You'll probably want to use an
SHA-family (or other secure) hash function (MD5-family hashes and many other
hash functions are not considered secure). Using this method you'll do
something like this:
-Initially you hash the password and store the hashed value in the database
in a VARBINARY or BINARY column.
-When you need to validate, accept an input string from the user and hash
that string using the same algorithm. Then compare the hashed user input
string to the hash of the password stored in the database.
The down-side is that once you hash a password you can't retrieve its
original value as you can with encryption/decryption (hence the term
"one-way hash"). This is generally not a problem, however. You can perform
the hashes client-side with .NET or the Windows CryptoAPI or server-side
with extended procedures like these
http://www.sqlservercentral.com/col...oolkitpart1.asp
Other options include using .NET or CryptoAPI to encrypt/decrypt your
password client side, or using server-side extended procedures to do the
job.
"MSUTech" <MSUTech@.discussions.microsoft.com> wrote in message
news:5D6620EA-05C0-4044-9D1B-497261028F16@.microsoft.com...
> Hello All,
> If I am storing passwords in a table within SQL Server 2000. Is there a
> way
> to ENCRYPT or HIDE the passwords, so that when I run a SELECT statement, I
> can not see them?
> If this is not possible, what is the recommended way to HIDE passwords
> from
> the SQL Server Administrator?
> thanks...|||Not that this is immediately helpful, but encryption is built into SQL 2005
so that you can do this without having to move it to the business layer. In
2000, as others have pointed out, you would build this into the CRUD object or
the business layer accessing the field.
--
Ryan Hanisco
MCSE, MCTS: SQL 2005, Project+
Chicago, IL
Remember: Marking helpful answers helps everyone find the info they need
quickly.
"MSUTech" wrote:

> Hello All,
> If I am storing passwords in a table within SQL Server 2000. Is there a way
> to ENCRYPT or HIDE the passwords, so that when I run a SELECT statement, I
> can not see them?
> If this is not possible, what is the recommended way to HIDE passwords from
> the SQL Server Administrator?
> thanks...
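
A minimal sketch of the one-way hash approach described above. Note that HASHBYTES only exists from SQL Server 2005 on; in SQL 2000 the hashing has to happen client-side or in an extended procedure, as the replies point out. All names here are hypothetical.

-- Store a salted SHA-1 hash instead of the password (SQL 2005+).
CREATE TABLE dbo.AppUsers (
    UserName varchar(50)   NOT NULL PRIMARY KEY,
    Salt     varbinary(16) NOT NULL,
    PwdHash  varbinary(20) NOT NULL   -- SHA-1 produces 20 bytes
)

-- Saving a password: hash salt + password, never store the plain text.
DECLARE @salt varbinary(16)
SET @salt = CAST(NEWID() AS varbinary(16))  -- simple salt for the sketch
INSERT INTO dbo.AppUsers (UserName, Salt, PwdHash)
VALUES ('alice', @salt,
        HASHBYTES('SHA1', @salt + CAST('S3cret!' AS varbinary(128))))

-- Validating: hash the supplied string the same way and compare.
SELECT CASE WHEN PwdHash = HASHBYTES('SHA1',
                Salt + CAST('S3cret!' AS varbinary(128)))
            THEN 'password OK' ELSE 'bad password' END
FROM dbo.AppUsers
WHERE UserName = 'alice'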

Best way to get TOP 10 CPU intensive SQL

I run a shared hosting server where multiple people share a single SQL
server (each of them has their own database). I need a way to monitor and
find out what % of the resources each user is taking, as we don't want any
one person abusing the system. What is the best way to do this?
Jay
Jay
Run the SQL Server Profiler tool.
"Jay Janarthanan" <jay@.objectcube.com> wrote in message
news:e4AaSxinEHA.648@.tk2msftngp13.phx.gbl...
> I run a shared hosting server where multiple people share a single SQL
> server (each of them has their own database). I need a way to monitor and
> find out what % of the resources each user is taking, as we don't want any
> one person abusing the system. What is the best way to do this?
> Jay
>
|||See if this helps:
Identifying SQL Server Perfromance Issues Using Profiler
http://vyaskn.tripod.com/analyzing_profiler_output.htm
Vyas, MVP (SQL Server)
http://vyaskn.tripod.com/
"Jay Janarthanan" <jay@.objectcube.com> wrote in message
news:e4AaSxinEHA.648@.tk2msftngp13.phx.gbl...
> I run a shared hosting server where multiple people share a single SQL
> server (each of them has their own database). I need a way to monitor and
> find out what % of the resources each user is taking, as we don't want any
> one person abusing the system. What is the best way to do this?
> Jay
>
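
Alongside Profiler, a quick rough cut is possible straight from master..sysprocesses on SQL 2000. A minimal sketch follows; note the counters are per-connection and reset when a connection closes.

-- Rough per-login totals of CPU and physical I/O for current
-- connections (SQL 2000; superseded by the DMVs in SQL 2005).
SELECT loginame,
       COUNT(*)         AS connections,
       SUM(cpu)         AS total_cpu,
       SUM(physical_io) AS total_physical_io
FROM master..sysprocesses
WHERE spid > 50          -- skip system processes
GROUP BY loginame
ORDER BY total_cpu DESC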
