Thursday, March 22, 2012
Bii.exe V1.0-1 Utility for SQL Server 2000
From http://developmentnow.com/g/118_2004_11_0_18_0/microsoft-public-sqlserver-server.htm
Posted via DevelopmentNow.com Groups
http://www.developmentnow.com
I'm not aware of a way to load images into a bigint column, but I am curious:
what kind of images do you have that fit into an 8-byte column?
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples is subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm
"Emmanuel" <nospam@.developmentnow.com> wrote in message
news:273fc90b-d4fa-4fbb-a9e3-7c35d65974f9@.developmentnow.com...
>I tried using this utility to upload images into a table with a bigint
>datatype, but it failed saying it does not support such datatypes. Is there
>another version of the bii utility that supports all the data types
>supported by SQL Server 2000?
> From
> http://developmentnow.com/g/118_2004_11_0_18_0/microsoft-public-sqlserver-server.htm
> Posted via DevelopmentNow.com Groups
> http://www.developmentnow.com
Bigint vs int
I had set up my tables using BigInts for my Surrogate keys (identity) to
make sure I could never run out of numbers.
I am thinking maybe that was overkill and may slow my system down (I know it
takes up more space, but that is not really an issue).
Are the Bigints going to cause me a problem down the line?
Also, if I start changing the BigInts to Ints in my Database and Stored
Procedures and miss some of the Stored Procedures, would that cause me any
problems? I also have the ASP.Net pages that call the Stored Procedures set
to BigInts - would there be a problem if I miss some of these? In
other words, would there be a problem with mixing and matching the types
until I get them all taken care of?
Thanks,
Tom|||>I had set up my tables using BigInts for my Surrogate keys (identity) to
>make sure I could never run out of numbers.
> Are the Bigints going to cause me a problem down the line?
Problems? No. The only issue is that all of your keys are twice as large
as for an int. This will not "hurt" anything, but it is a waste of
resources. Tables with GUIDs as keys can perform just fine also, and they
take 16 bytes. It all comes down to how much disk space and memory you need. It
will be slightly more for bigints.
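The size difference is easy to demonstrate directly; a minimal sketch comparing the storage of the three candidate key types (DATALENGTH reports the bytes a value occupies):

```sql
-- Compare the on-disk size of candidate surrogate key types.
SELECT DATALENGTH(CAST(1 AS int))     AS IntBytes,    -- 4
       DATALENGTH(CAST(1 AS bigint))  AS BigintBytes, -- 8
       DATALENGTH(NEWID())            AS GuidBytes    -- 16
```

Every index and foreign key that carries the key repeats those bytes, which is where the difference accumulates.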
The real question is whether you really have a need for 2 billion (well, 4
billion if you don't mind negatives after you exhaust the positive values)
values in your tables? I only have had two tables in my life where this was
possible (and only one has reached this amount) and they were both used to
track clicks on a pretty large web click tracking software table. I am
actually more likely to end up with smallint and tinyint keys than I am
bigint. But you have to ask yourself, if there is a possibility of that
kind of data, what do you want to do?
If this is an OLTP system, I would suggest you probably will want to delete
old data before doing this. A billion of anything is quite a bit.
> Also, if I start changing the BigInts to Ints in my Database and Stored
> Procedure and miss some of the Stored procedures, would that cause me any
> problems. I also have the ASP.Net pages that call the Stored Procedure
> set to BigInts - would there be a problem if I miss some of these. In
> otherwords, would there be a problem with mixing and matching the types
> until I get them all taken care of?
Not "big" problems, unless you try to pass a value that fits in a bigint and
not in an int. If you aren't using stored procs, you may have some issues
with type mismatches causing funky plans, but not enough to avoid switching.
----
Louis Davidson - http://spaces.msn.com/members/drsql/
SQL Server MVP
"tshad" <tscheiderich@.ftsolutions.com> wrote in message
news:uI2qqGhmFHA.1372@.TK2MSFTNGP10.phx.gbl...
|||You can actually find an answer to your question using a calculator. Consider
the number of rows inserted daily and you can predict when you 'run out of
numbers'.
My guess: not in your life-time - even with integer.
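As a sketch of that calculation (the insert rate is an assumed figure; substitute your own workload):

```sql
-- Assumed workload: 100,000 inserts per day. Integer division is fine
-- here since we only want the order of magnitude.
DECLARE @RowsPerDay bigint
SET @RowsPerDay = 100000
SELECT 2147483647 / @RowsPerDay / 365          AS YearsToExhaustInt,    -- ~58 years
       9223372036854775807 / @RowsPerDay / 365 AS YearsToExhaustBigint  -- ~252 billion years
```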
ML|||"Louis Davidson" <dr_dontspamme_sql@.hotmail.com> wrote in message
news:OWXwVvhmFHA.3312@.tk2msftngp13.phx.gbl...
>
> Problems? No. The only issue is that all of your keys are twice as large
> as for an int. This will not "hurt" anything, but it is a waste of
> resources. Tables with GUIDs as keys can perform just fine also, and they
> take 16 byts. It is all into how much disk space and memory you need. It
> will be slightly more for bigints.
>
But if they are not used in any of my indexes, would there be any
performance problems?
> The real question is whether you really have a need for 2 billion (well, 4
> billion if you don't mind negatives after you exhaust the positive values)
> values in your tables? I only have had two tables in my life where this
> was possible (and only one has reached this amount) and they were both
> used to track clicks on a pretty large web click tracking software table.
> I am actually more likely to end up with small- and tiny- int keys than I
> am bigint. But you have to ask yourself, if there is a possibility of
> that kind of data, what do you want to do?
Actually, I don't know if I would come close, but I don't want to build this
system - which could take a couple of years to finish, only to find out that
the numbers are filling up faster than I expected.
> If this is an OLTP system, I would suggest you probably will want to
> delete old data before doing this. A billion of anything is quite a bit.
I would definately be deleting the records as I go, but the identity field
would still be increasing by one even if I dumped half the records.
>
> Not "big" problems, unless you try to pass a value that fits in a bigint
> and not in an int. If you aren't using stored procs, you may have some
> issues with type mismatches cause funky plans, but not enough to avoid
> switching.
If the issues aren't bad issues than I could just start making the changes
and deal with the differences as I go along.
Thanks,
Tom.
|||"ML" <ML@.discussions.microsoft.com> wrote in message
news:EB9108DE-3804-46E8-A4FF-D31A58D48666@.microsoft.com...
> You can actually find an answer to your question using a calculator.
> Consider
> the number of rows inserted daily and you can predict when you 'run out of
> numbers'.
> My guess: not in your life-time - even with integer.
>
Probably true.
Which is why I am in the process of looking closer at my tables and making
the changes now, before I go much further.
Thanks,
Tom
> ML|||> But if they are not used in any of my indexes, would there be any
> performance problems?
If you are not indexing an identity value, then what are you using it for?
Identity values are pretty much only good as keys to go and fetch a single
row.
> Actually, I don't know if I would come close, but I don't want to build
> this system - which could take a couple of years to finish, only to find
> out that the numbers are filling up faster than I expected.
This is a valid point.  I would suggest that you do some math to see how
long this will last.  Say you do 5 million new rows a year: you have
well over 100 years before you reach 500 million, and 400 years (400
* 5 million = 2 billion, right?) before you will have to switch to use the
negative values for the next 400 years :)
But it is up to you to decide just how many rows you will need and size
appropriately ;)
> If the issues aren't bad issues than I could just start making the changes
> and deal with the differences as I go along.
If you actually don't need them in indexes (UNIQUE and PRIMARY KEY
constraints use indexes also) then it really won't make very much difference
at all.
> I would definately be deleting the records as I go, but the identity field
> would still be increasing by one even if I dumped half the records.
True, but you might be able to start your values over to fill in the gaps
once you hit the max value if the data that gets deleted is in increasing
identity value. Especially if you are using this as a surrogate key. Usage
will dictate. If you post your structures and how these values are actually
being used it might help... Good luck :)
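Restarting the numbering after a purge can be done with DBCC CHECKIDENT; a sketch against a hypothetical table (only safe if the deleted low values really are gone, since IDENTITY alone does not enforce uniqueness):

```sql
-- Hypothetical table: reseed so the next identity value generated is 1.
DBCC CHECKIDENT ('ClickLog', RESEED, 0)
```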
----
Louis Davidson - http://spaces.msn.com/members/drsql/
SQL Server MVP
"tshad" <tscheiderich@.ftsolutions.com> wrote in message
news:%23vCQUQimFHA.3568@.TK2MSFTNGP10.phx.gbl...
|||On Fri, 5 Aug 2005 18:13:38 -0700, tshad wrote:
>"Louis Davidson" <dr_dontspamme_sql@.hotmail.com> wrote in message
>news:OWXwVvhmFHA.3312@.tk2msftngp13.phx.gbl...
>But if they are not used in any of my indexes, would there be any
>performance problems?
Hi Tom,
Using a surrogate key without including it in an index is pointless.
The reason to add a surrogate key, is to serve as a surrogate for the
business key. This can be useful if other tables link to the table and
the business key is long - instead of duplicating the business key in
all referring tables, you duplicate the surrogate key. This of course
requires that the surrogate key is declared either PRIMARY KEY or
UNIQUE.
All PRIMARY KEY and UNIQUE constraints auto-generate an index. This is
not the case for FOREIGN KEY constraints, but in most cases, it makes
sense to manually add an index on the FOREIGN KEY column(s).
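For example (hypothetical Customer/Order tables), the constraint and its supporting index are separate statements:

```sql
-- The FOREIGN KEY constraint does not create an index by itself...
ALTER TABLE [Order] ADD CONSTRAINT FK_Order_Customer
    FOREIGN KEY (CustomerID) REFERENCES Customer (CustomerID)

-- ...so add one manually to support joins and lookups on the column.
CREATE NONCLUSTERED INDEX IX_Order_CustomerID ON [Order] (CustomerID)
```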
The surrogate key will be in the index for its own PRIMARY KEY or
UNIQUE constraint, in all referring tables, and in all indexes you define
for the FOREIGN KEY constraints. If the PRIMARY KEY or UNIQUE for the
surrogate key is clustered, there'll be a copy of the surrogate key in
each nonclustered index for the same table as well.
Extra bytes in the surrogate key result in more space taken for all the
tables and all the indexes where the surrogate key is used. That means
that fewer rows and/or fewer index entries fit on a page, which will
affect the performance (more logical reads, and a lower cache-hit ratio,
resulting in more physical reads as well).
How much this will affect your total performance is hard to tell. Adding
two bytes to a 3000-byte row is not quite as significant as adding two
bytes to a 20-byte row.
Don't make your surrogate keys longer than they need to be, "just to be
on the safe side", as this WILL affect performance. On the other hand,
don't make your surrogate keys too short in order to save some
performance, as the amount of work it takes you to grow the keys later
is disproportional to the performance saved.
And if you use IDENTITY, then always start numbering at the lowest value
the datatype permits. The standard setting (starting at one) disregards
half the available values. Sure, you can roll over to the negatives once
the positives are exhausted - but why would you?
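A sketch of that setting (hypothetical table), seeding the IDENTITY at the type's minimum instead of the default of 1:

```sql
-- int ranges from -2,147,483,648 to 2,147,483,647; start at the bottom
-- so the full ~4 billion values are usable.
CREATE TABLE ClickLog (
    ClickID int IDENTITY (-2147483648, 1) NOT NULL PRIMARY KEY,
    ClickedAt datetime NOT NULL
)
```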
Just my € 0,02.
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)|||> And if you use IDENTITY, then always start numbering at the lowest value
> the datatype permits. The standard setting (starting at one) disregards
> half the available values. Sure, you can roll over to the negatives once
> the positivies are exhausted - but why would you?
This has always been hard to do in my experience, and I have fought with so
many programmers/DBAs about this. I would agree that if you calculate that
you will need over 2 billion rows before like twice your expected system
life expectancy it might be possible, but most of the time the large number
of digits are too much to try to type while the system is in its infancy
and people are troubleshooting problems.
----
Louis Davidson - http://spaces.msn.com/members/drsql/
SQL Server MVP
"Hugo Kornelis" <hugo@.pe_NO_rFact.in_SPAM_fo> wrote in message
news:mkk9f19ss8ge7igqclbpmb4gvkmi2e1p85@.4ax.com...|||That helps a lot.
I will probably start moving some of my bigints to ints, as everyone is
probably right that the overkill causes a few more performance problems
just to "be safe".
As you say, the surrogate key would really have to be in the indexes, or what
would the point of the surrogate be. I hadn't started putting my indexes
together yet, so I didn't notice that I would have to use them in the
indexes. So it would be better for the surrogates to be smaller if
possible.
Thanks,
Tom
|||"Louis Davidson" <dr_dontspamme_sql@.hotmail.com> wrote in message
news:ObhzBZimFHA.708@.TK2MSFTNGP09.phx.gbl...
> If you are not indexing an identity value, then what are you using it for?
> Identity value are pretty much only good as keys to go and fetch a single
> row.
You're right. I would have to use the surrogate in the index.
>
> This is a valid point.  I would suggest that you do some math to see how
> long this will last.  Say you do 5 million new rows a year: you have
> well over 100 years before you reach 500 million, and 400 years (400
> * 5 million = 2 billion, right?) before you will have to switch to use the
> negative values for the next 400 years :)
> But it is up to you to decide just how many rows you will need and size
> appropriately ;)
I will need to do that as I haven't really worked that out.
>
> If you actually don't need them in indexes (UNIQUE and PRIMARY KEY
> constraints use indexes also) then it really won't make very much difference
> at all.
>
> True, but you might be able to start your values over to fill in the gaps
> once you hit the max value if the data that gets deleted is in increasing
> identity value. Especially if you are using this as a surrogate key. Usage
> will dictate. If you post your structures and how these values are actually
> being used it might help... Good luck :)
Here is a stripped down version of some of my tables.
What I have is multiple companies. Each company can have many users.
There would be many positions available. The JobTitle cannot be unique as
different companies could have the same JobTitles - so I need to use a
PositionID as my surrogate key and then index the JobTitle + PositionID.
There are many Applicants and each Applicant can have multiple positions
(ApplicantPosition table). And each ApplicantPosition table can have many
skills (ApplicantSkills table).
The tables could be set up as follows (except I will probably be changing
many of the bigints to ints):
CREATE TABLE [dbo].[Companies] (
[CompanyID] [bigint] IDENTITY (1, 1) NOT NULL Primary Key,
[Name] [varchar] (45) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX Name_ID_Company_Idx on Companies(Name,CompanyID)
CREATE TABLE [dbo].[logon] (
[UserID] [bigint] IDENTITY (1, 1) NOT NULL Primary Key ,
[CompanyID] [bigint] NULL ,
[FirstName] [varchar] (15) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[LastName] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Email] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[UserName] [varchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Password] [varchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[DateCreated] [datetime] NULL
) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX Name_ID_Logon_Idx on logon(UserName,UserID)
CREATE NONCLUSTERED INDEX Name_Password_Logon_Idx on
logon(UserName,Password)
CREATE TABLE [dbo].[Position] (
[PositionID] [int] IDENTITY (1, 1) NOT NULL Primary Key,
[CompanyID] [bigint] NOT NULL ,
[JobTitle] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[DatePosted] [datetime] NULL ,
[DateUpdated] [datetime] NULL
) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX Name_ID_Position_Idx on
Position(JobTitle,PositionID)
CREATE TABLE [dbo].[Applicant] (
[CompanyID] [bigint] NOT NULL ,
[UserID] [bigint] NOT NULL ,
[ApplicantID] [bigint] IDENTITY (1, 1) NOT NULL Primary Key,
[PositionID] [int] NOT NULL ,
[Name] [varchar] (45) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX Name_ID_Applicant_Idx on
Applicant(Name,ApplicantID)
CREATE TABLE [dbo].[ApplicantPosition] (
[CompanyID] [bigint] NULL ,
[UserID] [bigint] NOT NULL ,
[ApplicantPositionID] [bigint] IDENTITY (1, 1) NOT NULL Primary Key,
[ApplicantID] [bigint] NOT NULL ,
[PositionID] [int] NOT NULL ,
[FirstName] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[LastName] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Status] [char] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX Name_ID_ApplicantPosition_Idx on
ApplicantPosition(LastName,FirstName,ApplicantPositionID)
CREATE TABLE [dbo].[ApplicantSkills] (
[CompanyID] [bigint] NOT NULL ,
[UserID] [bigint] NULL ,
[ApplicantSkillsID] [bigint] IDENTITY (1, 1) NOT NULL Primary Key,
[ApplicantID] [bigint] NOT NULL ,
[Skill] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Level] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX Skill_ID_ApplicantSkillsID on
ApplicantSkills(Skill,ApplicantSkillsID)
Thanks,
Tom
Bigint stored as varchar has issues...
I'm importing contact information into SQL Server from Excel using
OPENROWSET. The issue I'm having is with how the phone numbers get stored.
The phone numbers have no extra characters like dashes, so they appear like
9495551212, which is equivalent to 949-555-1212. The phone number is being
imported to a varchar field, which implicitly converts it to a format like
7.70947e+009. The final destination field is intended to hold the data as it
is originally, so it's a bigint datatype.
My first thought was to use CAST or CONVERT. But I get:
Server: Msg 8114, Level 16, State 5, Line 1
Error converting data type varchar to bigint.
Any ideas?|||Try,
select str(cast('7.70947e+009' as float), 10, 0)
AMB
|||Problem #1: 7.70947e+009 is not a representation of an integer. It is
expressed with exponentials, so it is a float. You first have to cast it
to a float:
select cast(cast ('7.70947e+009' as float) as bigint)
Problem #2,( and this is the kicker) The result will be:
7709470000
Which is probably not a valid phone number. You are losing some of the
significant digits. Try casting it to a bigint first, then to a
varchar(10).
SELECT cast(cast(phonenumber as bigint) as varchar(10))
FROM OPENDATASOURCE('Microsoft.Jet.OLEDB.4.0',
'Data Source=c:\test.xls;Extended Properties=Excel 8.0')...sheet1$
This worked for me.
----
Louis Davidson - drsql@.hotmail.com
SQL Server MVP
Compass Technology Management - www.compass.net
Pro SQL Server 2000 Database Design -
http://www.apress.com/book/bookDisplay.html?bID=266
Note: Please reply to the newsgroups only unless you are interested in
consulting services. All other replies may be ignored :)
"Eric" <Eric@.discussions.microsoft.com> wrote in message
news:6BCA27F2-B212-4B4A-9E47-FAAEA48B7A14@.microsoft.com...
|||That did the trick! Thank you so much...
|||You could try adding IMEX=1 to the connection properties to see if that
helps.
select *
from OpenRowSet(
'Microsoft.Jet.OLEDB.4.0',
'Excel 8.0;Database=C:\filename.xls;HDR=YES;IMEX=1;'
,Sheet1$
)
|||Hi
Make sure that on the Excel spreadsheet, the column is not "general" but
rather a "text" type cell.
General cells are evaluated by the Excel driver and if they meet numeric
criteria, they are passed as numeric to SQL Server. This is an Excel issue.
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Eric" <Eric@.discussions.microsoft.com> wrote in message
news:6BCA27F2-B212-4B4A-9E47-FAAEA48B7A14@.microsoft.com...
> Hi,
> I'm importing contact information into SQL Server from Excel using
> OPENROWSET. The issue I'm having is with how the phone numbers get stored.
> The phone numbers have no extra characters like dashes, so they appear
like
> 9495551212, which is equivelant to 949-555-1212. The phone number is being
> imported to a varchar field, which implicitly converts it to a format like
> 7.70947e+009. The final destination field is intended to hold the data as
it
> is originally, so it's a bigint datatype.
> My first thought was to use CAST or CONVERT. But I get:
> Server: Msg 8114, Level 16, State 5, Line 1
> Error converting data type varchar to bigint.
> Any ideas?
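If the data has already landed as scientific-notation text, one workaround (a sketch only; the clean fix is still correcting the Excel column type as described above) is to route the conversion through float, accepting that any digits Excel already rounded away are gone for good:

```sql
-- Sketch: varchar in scientific notation cannot be CAST straight to bigint,
-- but it can go through float first; precision lost upstream stays lost
DECLARE @s varchar(20)
SET @s = '9.49555e+009'
SELECT CAST(CAST(@s AS float) AS bigint)  -- 9495550000
```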
bigint problem
I have two SQL Server Enterprise servers running SP4 on Windows Server 2003
SP1.
I have a DTS package that does various processing on one machine; then, at
the end of the package, I copy tables to the other SQL Server box.
In one of the tables being copied, I have a bigint data type. The table
copies and shows that the bigint datatype is still part of the table
definition, but the values in the bigint column come out as a mix of
negatives and positives.
Does anyone have any ideas as to what might be causing this?
thanks in advance,
Troy
Values in a bigint column can be negative as well as positive, but I assume
that you have only positive values on one side and end up with negative and
positive values on the other? If that's the case it looks to me like your
bigints are accidentally treated as ints somewhere along the way, at a
binary level.
Jacco Schalkwijk
SQL Server MVP
"Troy Sherrill" <tsherrill@.nc.rr.com> wrote in message
news:ekzUk8kfFHA.460@.TK2MSFTNGP09.phx.gbl...
> Hi,
> I have two sql server enterprise servers running sp4 on windows server2003
> sp1
> I have a DTS package that does some various processing on one machine,
> then
> at the end of the package. I copy tables to the other sql server box.
> In one of the tables being copied, I have a bigint data type. The table
> copies and shows that the bigint datatype is still part of the table
> definition, but the values in the bigint column are negatives and
> positives.
> Does anyone have any ideas as to what might be causing this?
> thanks in advance,
> Troy
>
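A minimal sketch of the effect Jacco describes: if only the low 4 bytes of a bigint get interpreted as a signed int somewhere in the pipeline, a positive value above the int range comes back negative (the value here is illustrative):

```sql
-- A positive bigint whose low 32 bits have the sign bit set
DECLARE @big bigint
SET @big = 3000000000  -- positive, but exceeds the int maximum of 2147483647
-- Reinterpret just the low 4 bytes as a signed int:
SELECT CAST(SUBSTRING(CAST(@big AS binary(8)), 5, 4) AS int)  -- -1294967296
```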
|||What provider are you using? It may not understand a BigInt datatype.
Andrew J. Kelly SQL MVP
"Troy Sherrill" <tsherrill@.nc.rr.com> wrote in message
news:ekzUk8kfFHA.460@.TK2MSFTNGP09.phx.gbl...
> Hi,
> I have two sql server enterprise servers running sp4 on windows server2003
> sp1
> I have a DTS package that does some various processing on one machine,
> then
> at the end of the package. I copy tables to the other sql server box.
> In one of the tables being copied, I have a bigint data type. The table
> copies and shows that the bigint datatype is still part of the table
> definition, but the values in the bigint column are negatives and
> positives.
> Does anyone have any ideas as to what might be causing this?
> thanks in advance,
> Troy
>
BigInt performance on SQL 64
Has anyone done some benchmarks of BigInt vs. Int on SQL Server
2000/2005 64 bit? If so I would like to know what your conclusion was.
Performance in regards to what? A bigint is twice the size of an Int so it
will take up twice the memory and disk space for tables and indexes. You
really shouldn't be looking at performance to make a decision such as this.
You need to consider the data that you will be storing in that column. What
will the business rules dictate? Why use a BigInt when an Int will suffice?
Why use an Int if it can't hold your largest value?
Andrew J. Kelly SQL MVP
"Will" <whbmitchell@.gmail.com> wrote in message
news:1139525534.880875.88630@.o13g2000cwo.googlegroups.com...
> Has anyone done some benchmarks of BigInt vs. Int on SQL Server
> 2000/2005 64 bit? If so I would like to know what your conclusion was.
>|||I have. BigInt will store a much larger number than Int will. So storing a
really big number in an Int will be much slower than with a BigInt :)
Seriously, I too would like to hear if there is any difference.
Since it is 8 bytes instead of 4, it will be a bit slower on any version,
for many reasons; at the very least, a bit more disk space will be required.
Fact is, if you need a BigInt, use it; otherwise use an int. I wouldn't
expect to do much math on a number > 2 billion (certainly not extensively),
and that is where a minute difference might be noticed based on 32- or
64-bit hardware. Even on 64-bit hardware the Int "shouldn't" be slower than
the BigInt, since the math would be done in the same hardware, just probably
without the "tricks" needed to deal with a number bigger than the native
registers.
Do you have any specific instances where you think it was slower?
----
Louis Davidson - http://spaces.msn.com/members/drsql/
SQL Server MVP
"Arguments are to be avoided: they are always vulgar and often convincing."
(Oscar Wilde)
"Will" <whbmitchell@.gmail.com> wrote in message
news:1139525534.880875.88630@.o13g2000cwo.googlegroups.com...
> Has anyone done some benchmarks of BigInt vs. Int on SQL Server
> 2000/2005 64 bit? If so I would like to know what your conclusion was.
>
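The storage half of the argument is easy to verify directly; a quick sketch:

```sql
-- bigint simply costs twice the bytes of int per value
SELECT DATALENGTH(CAST(1 AS int)) AS int_bytes,        -- 4
       DATALENGTH(CAST(1 AS bigint)) AS bigint_bytes   -- 8
```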
bigint or uniqueidentifier performance metrics?
If the use of the uniqueidentifier is now preferred, across how many
relations is it optimal to use as the type for primary keys? Is there some
point other than performance why one should not worry about reverting to the
use of bigint for related tables? I've only read a couple of stories about
it so far. What do you think?
<%= Clinton Gallagher
Clinton
I would try to avoid using GUID as a Primary Key.
GUIDs take up 16 bytes of storage, more than an identity column, which in
turn makes the index larger, which increases I/O reads, which can hurt
performance. While the performance hit will be minimal if you do decide to
add a clustered index to a GUID column, every little bit adds up.
"clintonG" <csgallagher@.REMOVETHISTEXTmetromilwaukee.com> wrote in message
news:%2388qeVoOGHA.1832@.TK2MSFTNGP11.phx.gbl...
> If the use of the uniqueidentifier is now preferred, across how many
> relations is it optimal to use as the type for primary keys? Is there some
> point other than performance why one should not worry about reverting to
> the use of bigint for related tables? I've only read a couple of stories
> about it so far. What do you think?
> <%= Clinton Gallagher
>|||That's what I read, Uri, but then in the next article GUIDs are praised, not to
mention their use in the aspnet_Users table as well as other tables in the
ASP.NET 2.0 Membership, Roles and Profiles implementation Microsoft has
provided.
I think I'll just use their GUID as a foreign key and revert to bigint for
primary keys. It seems there are benefits in grabbing the last inserted
record using @@IDENTITY, which GUID does not support.
I'm also trying to figure out how to use GUID as a primary key -- anyway --
and auto-generate the value using the newid( ) default binding. I've tried
learning to use newid( ) by entering data into the table but all I get to
see is red circles with ! and error messages or the red circles and no error
messages.
Thanks for your attention...
<%= Clinton Gallagher
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:eqYqhBqOGHA.2300@.TK2MSFTNGP15.phx.gbl...
> Clinton
> I would try to avoid using GUID as a Primary Key.
> GUIDs take up 16-bytes of storage, more than an Identify column, which in
> turn make the index larger, which increases I/O reads, which can hurt
> performance. While the performance hit will be minimal if you do decide to
> add a clustered index to a GUID column, every little bit adds up
>
> "clintonG" <csgallagher@.REMOVETHISTEXTmetromilwaukee.com> wrote in message
> news:%2388qeVoOGHA.1832@.TK2MSFTNGP11.phx.gbl...
>|||This is kind of a confusing question, since you really need to give us more
information. Do you have tables that need to store > 4 billion rows? In
2005 there is a way to create an always-increasing GUID using
newsequentialid that can help a bit with the problems that using a GUID for
a key can create, but GUIDs make fine surrogate keys for tables, other than
performance and typability (the latter of which is just a programmer
convenience, so should not really be important to the question of whether
to choose them or not).
If you need the benefits of a GUID (like the UI can create them and be
guaranteed uniqueness,) then it might be worth it, otherwise use the correct
integer type. If you are going to have > 4 billion rows in many of your
tables, I hope you have spent a lot of money on a fast disk subsystem anyhow
because you will need it!
----
Louis Davidson - http://spaces.msn.com/members/drsql/
SQL Server MVP
"Arguments are to be avoided: they are always vulgar and often convincing."
(Oscar Wilde)
"clintonG" <csgallagher@.REMOVETHISTEXTmetromilwaukee.com> wrote in message
news:%2388qeVoOGHA.1832@.TK2MSFTNGP11.phx.gbl...
> If the use of the uniqueidentifier is now preferred, across how many
> relations is it optimal to use as the type for primary keys? Is there some
> point other than performance why one should not worry about reverting to
> the use of bigint for related tables? I've only read a couple of stories
> about it so far. What do you think?
> <%= Clinton Gallagher
>|||Well, good point. I should be comparing an int rather than bigint.
If it's true that deleting a record can leave gaps in the sequence of the
key's value when using an int, which can impact sorting, the same must be
true when using the uniqueidentifier.
Furthermore, as I understand it even when using newsequentialid( ) there
is no @@IDENTITY to get the value of the last inserted record. Do you
know if this is true?
Other than uniqueness, I see no reason to use uniqueidentifier yet.
That's the basis of this inquiry.
<%= Clinton Gallagher
"Louis Davidson" <dr_dontspamme_sql@.hotmail.com> wrote in message
news:OwfDDvxOGHA.2040@.TK2MSFTNGP14.phx.gbl...
> This is kind of a confusing question, since you really need to give us
> more information. Do you have tables that you need to store > 4 billion
> row? In 2005 there is a way to create an always increasing guid using
> newsequentialid that can help a bit with the problems that using a GUID
> for a key can create, but GUIDs make fine surrogate keys for tables, other
> than performance and typability (the latter of which is just a programmer
> convienience, so should not be really important to the question of whether
> to choose them or not)
> If you need the benefits of a GUID (like the UI can create them and be
> guaranteed uniqueness,) then it might be worth it, otherwise use the
> correct integer type. If you are going to have > 4 billion rows in many
> of your tables, I hope you have spent a lot of money on a fast disk
> subsystem anyhow because you will need it!
> --
> ----
--
> Louis Davidson - http://spaces.msn.com/members/drsql/
> SQL Server MVP
> "Arguments are to be avoided: they are always vulgar and often
> convincing."
> (Oscar Wilde)
> "clintonG" <csgallagher@.REMOVETHISTEXTmetromilwaukee.com> wrote in message
> news:%2388qeVoOGHA.1832@.TK2MSFTNGP11.phx.gbl...
>|||> Furthermore, as I understand it even when using newsequentialid( ) there
> is no @.@.IDENTITY to get the value of the last inserted record. Do you
> know if this is true?
One of the differences between uniqueidentifier and identity is that a
uniqueidentifier value can be assigned by the application instead of SQL
Server. This eliminates the need to retrieve the surrogate key value after
insertion.
Hope this helps.
Dan Guzman
SQL Server MVP
"clintonG" <csgallagher@.REMOVETHISTEXTmetromilwaukee.com> wrote in message
news:%2313yEZzOGHA.2624@.TK2MSFTNGP12.phx.gbl...
> Well, good point. I should be comparing an int rather than bigint.
> If its true that deleting a record can leave gaps in the sequence of the
> key's
> value when using an int which can impact sorting the same must be true
> when using the uniqueidentifier.
> Furthermore, as I understand it even when using newsequentialid( ) there
> is no @.@.IDENTITY to get the value of the last inserted record. Do you
> know if this is true?
> Other than uniqueness, I see no reason to use uniqueidentifier yet..
> That's the basis of this inquiry.
> <%= Clinton Gallagher
> "Louis Davidson" <dr_dontspamme_sql@.hotmail.com> wrote in message
> news:OwfDDvxOGHA.2040@.TK2MSFTNGP14.phx.gbl...
>|||That was helpful to be reminded of as the DNF will indeed generate a GUID
consistent with the uniqueidentitifer.
<%= Clinton Gallagher
"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:OOChgd0OGHA.2696@.TK2MSFTNGP14.phx.gbl...
> One of the differences between uniqueidentifier and identity is that a
> uniqueidentifier value can be assigned by the application instead of SQL
> Server. This eliminates the need to retrieve the surrogate key value
> after insertion.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "clintonG" <csgallagher@.REMOVETHISTEXTmetromilwaukee.com> wrote in message
> news:%2313yEZzOGHA.2624@.TK2MSFTNGP12.phx.gbl...
>
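A sketch pulling the thread's advice together (table and constraint names are made up for illustration; NEWSEQUENTIALID and the OUTPUT clause both assume SQL Server 2005 or later):

```sql
-- Sequential GUID default avoids random-GUID index fragmentation, and
-- OUTPUT returns the generated key, since @@IDENTITY only covers
-- identity columns, not uniqueidentifier defaults
CREATE TABLE dbo.Customer
(
    CustomerID uniqueidentifier NOT NULL
        CONSTRAINT DF_Customer_ID DEFAULT NEWSEQUENTIALID()
        CONSTRAINT PK_Customer PRIMARY KEY,
    CustomerName nvarchar(100) NOT NULL
)

INSERT INTO dbo.Customer (CustomerName)
OUTPUT inserted.CustomerID   -- the GUID just generated for this row
VALUES (N'Test customer')
```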
Tuesday, March 20, 2012
Bigint handling
I have a subreport that gives me the error:
An error occurred while executing the subreport
'PlanLocationbyItemNumberLocations_subreport': Error during processing of
'item_id' report parameter.
Preview complete -- 0 errors, 1 warnings
This problem only occurs when the value of the bigint in field "item_id"
(which is passed as a parameter to the subreport) is very large. Since I
can't find a report parameter option of bigint, only integer, what options do
I have?
You can use Float instead of Integer.
--
| Thread-Topic: Bigint handling
| From: "=?Utf-8?B?S0I=?=" <KB@.discussions.microsoft.com>
| Subject: Bigint handling
| Date: Tue, 26 Oct 2004 14:25:08 -0700
| Newsgroups: microsoft.public.sqlserver.reportingsvcs
|
| I have a subreport that gives me the error :
| An error occurred while executing the subreport
| 'PlanLocationbyItemNumberLocations_subreport': Error during
processing of
| 'item_id' report parameter.
| Preview complete -- 0 errors, 1 warnings
|
| This problem only occurs when the value of the bigint in field "item_id"
| (which is passed as a parameter to the subreport) is very large. Since I
| can't find a report parameter option of bigint, only integer, what
options do
| I have?
|
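The Float workaround carries a caveat worth noting: float has a 53-bit mantissa, so bigint values above 2^53 do not round-trip exactly:

```sql
-- Sketch: a bigint just past 2^53 loses its last digit through float
DECLARE @id bigint
SET @id = 9007199254740993                 -- 2^53 + 1
SELECT CAST(CAST(@id AS float) AS bigint)  -- 9007199254740992
```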
bigint field vs int
I am creating an application where the number of records will be beyond 100,000, so I need to set the primary keys to bigint instead of int.
I need to know about the performance and the difference:
will it affect performance much, and second, will I have to change any code like delete/update?
A standard Int ranges from -2,147,483,648 through 2,147,483,647, so you can use it to store values much larger than 100,000.
If you do use bigints, you would need to use Int64 in your code
|||Hi,
There will be some performance difference between int and bigint. If you use bigint, performance is slower on a 32-bit processor, but on a 64-bit processor it should not be a problem.
Int will give you up to 2,147,483,647 while Bigint gives up to 9,223,372,036,854,775,807. The only difference in your stored procedure is that you will need to change the parameter declarations from INT to BIGINT.
>i need to know about the performance and difference
Obviously more working memory will be required for BIGINT instead of INT (8 bytes instead of 4 for each value); however, SQL Server will still work very efficiently provided there is enough RAM. If you only have enough RAM for INT, changing to BIGINT will put you at a disadvantage - so make sure you have enough RAM.
Bigint autoincrement question
i wonder if i can get an bigint autoincrement field where the number begins with the current year + 1 autonumber
Does someone know if it is possible and if yes, how?
Already thanx.
Cheers, Wim
create table test_t
(col1 bigint identity (2004, 1), col2 char (5)
)
go
declare @counter tinyint
set @counter = 1
while @counter <> 5
begin
insert into test_t
select 'test'
set @counter = @counter + 1
end
go
select * from test_t
go|||Originally posted by aldo_2003
create table test_t
(col1 bigint identity (2004, 1), col2 char (5)
)
go
declare @.counter tinyint
set @.counter = 1
while @.counter <> 5
begin
insert into test_t
select 'test'
set @.counter = @.counter + 1
end
go
select * from test_t
go
But it doesn't change the year automatically|||How do you mean automatically?
Can you clarify ?|||Originally posted by aldo_2003
How do you mean automatically ?
Can you clarify ?
The id should always start with the current year, so when it is 2005 the id should start with 2005 1 + increment|||Example please?|||Originally posted by aldo_2003
Example please?
It should be something like
create table test_t
(col1 bigint identity (datepart(yy, getdate())1, 1), col2 char (5)
)
go|||An example of what it would look like in a table.|||You should store your year value and your ID value separately, and then use a trigger to increment the ID value.
[YearColumn] defaults to year(getdate())
[IDColumn] is set by a trigger (Insert only!) to:
set [IDColumn] = isnull((select Max([IDCOLUMN]) from YourTable where [YearColumn] = Year(Getdate())), 0) + 1
Then you can combine the two values in your SQL code, or create a calculated column that combines the two.|||I like your solution, but the problem is that I can't set that column to primary key|||You can add a unique index to a computed column. You may be able to make it a primary key, though I'm not certain about that. In any case, you could set the Year column and the ID column together as a composite primary key.
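A sketch of the trigger approach described above (names are the placeholder ones from the post; the INSTEAD OF form and ROW_NUMBER assume SQL Server 2005 or later, and the MAX-based numbering assumes inserts are serialized):

```sql
CREATE TABLE dbo.YourTable
(
    YearColumn int NOT NULL,
    IDColumn   int NOT NULL,
    Payload    char(5) NOT NULL,
    CONSTRAINT PK_YourTable PRIMARY KEY (YearColumn, IDColumn)
)
GO
CREATE TRIGGER trg_YourTable_Insert ON dbo.YourTable
INSTEAD OF INSERT
AS
BEGIN
    -- Restart numbering at 1 whenever the year rolls over
    INSERT INTO dbo.YourTable (YearColumn, IDColumn, Payload)
    SELECT YEAR(GETDATE()),
           ISNULL((SELECT MAX(IDColumn) FROM dbo.YourTable
                   WHERE YearColumn = YEAR(GETDATE())), 0)
             + ROW_NUMBER() OVER (ORDER BY (SELECT 1)),
           Payload
    FROM inserted
END
```

The two columns can then be concatenated wherever the combined year-plus-number value needs to be displayed.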
bigint as unsigned value?
Nothing, actually.
Just be sure you're not going out of the range of bigint.
To ensure that, you can put a constraint on the column
that does not accept negative values.
|||
I must still be missing something, as the following code fails with a failure to convert an unsigned int to a signed int...
using System;
using System.Collections.Generic;
using System.Text;
using System.Data;
using System.Data.SqlClient;
namespace dbTest
{
class Program
{
static void Main(string[] args)
{
//Create a connection
// Data Source=(local);Database=AdventureWorks;" _
// & "Integrated Security=SSPI;"
string strConnString = @"Data Source=(local);Database=test;Integrated Security=SSPI";
SqlConnection objConn = new SqlConnection(strConnString);
// Create the query
string strSQL = "INSERT INTO dbo.Test (BI) VALUES(@BI)";
SqlCommand objCmd = new SqlCommand(strSQL, objConn);
// Create parameter
SqlParameter UlongDB;
UInt64 quadValue = 0xFFFFFFFFFFFFFFFF;
UlongDB = new SqlParameter("@BI", SqlDbType.BigInt);
UlongDB.Value = quadValue;
objCmd.Parameters.Add(UlongDB);
// Insert the record
try
{
objConn.Open();
objCmd.ExecuteNonQuery();
}
catch (Exception e)
{
Console.WriteLine("Exception is: " + e.Message);
}
finally
{
objConn.Close();
}
}
}
}
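On the server side there is no unsigned 64-bit type, so storing the full UInt64 range means letting the upper half appear negative; the bit pattern itself survives intact. A sketch of the reinterpretation (client code would correspondingly pass the value through an unchecked cast to a signed 64-bit integer):

```sql
-- 0xFFFFFFFFFFFFFFFF (UInt64.MaxValue) stored as bigint reads back as -1,
-- but all 64 bits are preserved
SELECT CAST(0xFFFFFFFFFFFFFFFF AS bigint) AS reinterpreted  -- -1
```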
bigint - Arithmetic overflow
I am getting
Server: Msg 232, Level 16, State 3, Line 1
Arithmetic overflow error for type int, value =
1152921504606847000.000000.
when I do
select power(2,60) in query analyzer
then I tried
declare @b bigint
select @b = power(2,60)
same error.
I tried
select convert(bigint,power(2,60)) no luck
Is there any way to tell sql server to use bigint when it
calculate power(2,60)
Ram.
This worked:
declare @b bigint
select @b = power(cast(2 as bigint),60)
select @b
-------
1152921504606846976
(1 row(s) affected)|||Originally posted by rdjabarov
This worked:
declare @.b bigint
select @.b = power(cast(2 as bigint),60)
select @.b
-------
1152921504606846976
(1 row(s) affected)
much thanks !!!!!
bigint - Arithmetic overflow
I am getting
Server: Msg 232, Level 16, State 3, Line 1
Arithmetic overflow error for type int, value = 1152921504606847000.000000.
when I do
select power(2,60) in query analyzer
then I tried
declare @b bigint
select @b = power(2,60)
same error.
I tried
select convert(bigint,power(2,60)) no luck
Is there any way to tell sql server to use bigint when it
calculate power(2,60)
Ram.
declare @t bigint
select @t = 2
select power(@t,60)
>--Original Message--
>Hi,
>I am getting
>Server: Msg 232, Level 16, State 3, Line 1
>Arithmetic overflow error for type int, value =
>1152921504606847000.000000.
>when I do
>select power(2,60) in query analyzer
>then I tried
>declare @.b bigint
>select @.b = power(2,60)
>same error.
>I tried
>select convert(bigint,power(2,60)) no luck
>Is there any way to tell sql server to use bigint when it
>calculate power(2,60)
>Ram.
>
>
>.
>|||power function will return a value of the same type as its input
type, so instead of the constant 2 (which is an integer) use a
variable defined as bigint; try the following:
declare @a bigint, @b bigint
select @a = 2
select @b = power(@a,60)
print @b
hope this helps.
>--Original Message--
>Hi,
>I am getting
>Server: Msg 232, Level 16, State 3, Line 1
>Arithmetic overflow error for type int, value =
>1152921504606847000.000000.
>when I do
>select power(2,60) in query analyzer
>then I tried
>declare @.b bigint
>select @.b = power(2,60)
>same error.
>I tried
>select convert(bigint,power(2,60)) no luck
>Is there any way to tell sql server to use bigint when it
>calculate power(2,60)
>Ram.
>
>
>.
>
big[er]int Datatype?
I have a need to store a numeric value greater than the 2^63 allowed by the
bigint data type...any options?
TIA
Decimal?
--
Hope this helps.
Dan Guzman
SQL Server MVP
--
SQL FAQ links (courtesy Neil Pike):
http://www.ntfaq.com/Articles/Index.cfm?DepartmentID=800
http://www.sqlserverfaq.com
http://www.mssqlserver.com/faq
--
"Steven Frank" <stevef@.relation.com> wrote in message
news:upMgP1rqDHA.1084@.tk2msftngp13.phx.gbl...
> Hello,
> I have a need to store a numeric value greater than the 2^63 allowed
by the
> bigint data type...any options?
> TIA
>|||How much bigger? E.g. did you try DECIMAL or NUMERIC?
If you don't need to perform calculations, you might be safer just storing
as a VARCHAR.
A
"Steven Frank" <stevef@.relation.com> wrote in message
news:upMgP1rqDHA.1084@.tk2msftngp13.phx.gbl...
> Hello,
> I have a need to store a numeric value greater than the 2^63 allowed by
the
> bigint data type...any options?
> TIA
>|||Well don't I feel like a dork... I thought bigint was the King!
Thanks a lot!
"Dan Guzman" <danguzman@.nospam-earthlink.net> wrote in message
news:O3zlX7rqDHA.1084@.tk2msftngp13.phx.gbl...
> Decimal?
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> --
> SQL FAQ links (courtesy Neil Pike):
> http://www.ntfaq.com/Articles/Index.cfm?DepartmentID=800
> http://www.sqlserverfaq.com
> http://www.mssqlserver.com/faq
> --
> "Steven Frank" <stevef@.relation.com> wrote in message
> news:upMgP1rqDHA.1084@.tk2msftngp13.phx.gbl...
> > Hello,
> >
> > I have a need to store a numeric value greater than the 2^63 allowed
> by the
> > bigint data type...any options?
> >
> > TIA
> >
> >
>|||Well, I've spoken too soon.
The decimal works great for storage, but I *do* have to do calculations,
specifically a logical AND (&) operation which does not allow the use of the
decimal data type. Any ideas for this one?
Thanks again!
Steve
"Aaron Bertrand - MVP" <aaron@.TRASHaspfaq.com> wrote in message
news:uoRd29rqDHA.2012@.TK2MSFTNGP12.phx.gbl...
> How much bigger? E.g. did you try DECIMAL or NUMERIC?
> If you don't need to perform calculations, you might be safer just storing
> as a VARCHAR.
> A
>
> "Steven Frank" <stevef@.relation.com> wrote in message
> news:upMgP1rqDHA.1084@.tk2msftngp13.phx.gbl...
> > Hello,
> >
> > I have a need to store a numeric value greater than the 2^63 allowed by
> the
> > bigint data type...any options?
> >
> > TIA
> >
> >
>|||> The decimal sorks great for storage, but I *do* have to do calculations,
> specifically a logical AND (&) operation which does not allow the use of
the
> decimal data type. Any ideas for this one?
Use CAST/CONVERT when performing the calculations?|||I've tried that and it blows up because you can't cast it to a bigint because
it's too big...
"Aaron Bertrand - MVP" <aaron@.TRASHaspfaq.com> wrote in message
news:%23LoVn0TrDHA.1488@.TK2MSFTNGP12.phx.gbl...
> > The decimal sorks great for storage, but I *do* have to do calculations,
> > specifically a logical AND (&) operation which does not allow the use of
> the
> > decimal data type. Any ideas for this one?
> Use CAST/CONVERT when performing the calculations?
>
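Since the & operator tops out at bigint, one possible workaround for wider masks is to split each value into two 64-bit halves and AND the halves independently (the values below are purely illustrative):

```sql
-- High and low halves of two 128-bit values, ANDed pairwise
SELECT CAST(0x0F0F0F0F0F0F0F0F AS bigint) & CAST(0x0000FFFF0000FFFF AS bigint) AS hi_part,
       CAST(0x00FF00FF00FF00FF AS bigint) & CAST(0x0F0F0F0F0F0F0F0F AS bigint) AS lo_part
```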
big task?
I have a table with 3 columns:
duration (int)
startdatetime (bigint)
userid (int)
I want to get that row's startdatetime where the sum of duration becomes equal
to or greater than 1000.
How can I do that? I can only think of a cursor solution.
Thanks
In the future, instead of columnname (datatype) if you could give a real
table structure and data it would be easier to get a proper solution. I get
the feeling that this is a bit more complex than you are letting on, so you
might have to rework the code where I use a TOP to include max values and
groupings, but based on the table you gave:
drop table test
go
create table test
(
duration int,
startdatetime bigint primary key,
userid int
)
go
insert into test
select 400, 500, 1
union all
select 500, 600, 1
union all
select 100, 650, 1
union all
select 100, 700, 1
go
select top 1 *
from ( select *,
(select sum(duration) -- sum of durations, not of start times
from test as t2
where t2.startDateTime <= test.startDateTime) as cumulativeDuration
from test) as sums
where cumulativeDuration >= 1000
order by startDateTime
Returns:
duration startdatetime userid cumulativeDuration
-------- ------------- ------ ------------------
100      650           1      1000
----
Louis Davidson - http://spaces.msn.com/members/drsql/
SQL Server MVP
"Arguments are to be avoided: they are always vulgar and often convincing."
(Oscar Wilde)
"JY" <jy1970us@.yahoo.com> wrote in message
news:O%bHf.5429$J%6.383634@.news20.bellglobal.com...
>i have a table with 3 columns:
> duration (int)
> startdatetime (bigint)
> userid (int)
> i want to get that row's startdatetime where sum of duration becomes equal
> to or greater than 1000.
> how can i do that? i can only think of a cursor solution.
> Thanks
>
>
>|||On Fri, 10 Feb 2006 21:31:09 -0500, JY wrote:
>i have a table with 3 columns:
>duration (int)
>startdatetime (bigint)
>userid (int)
>i want to get that row's startdatetime where sum of duration becomes equal
>to or greater than 1000.
>how can i do that? i can only think of a cursor solution.
>Thanks
Hi JY,
Here's an alternative solution, based on the table and data created and
posted by Louis Davidson:
SELECT TOP 1 a.duration, a.startdatetime, a.userid,
SUM(b.duration) AS cumulativeDuration
FROM test AS a
INNER JOIN test AS b
ON b.startdatetime <= a.startdatetime
GROUP BY a.duration, a.startdatetime, a.userid
HAVING SUM(b.duration) >= 1000
ORDER BY a.startdatetime
Hugo Kornelis, SQL Server MVP