
Sunday, March 25, 2012

Binary_Checksum - How secure is it?

Hi,
In one of my client's database some confidential information is stored in an
encrypted format and a different column has the original value in a
Binary_Checksum format.
For example, if 'abc' is a password, it's first encrypted and put in Column1.
But a Binary_Checksum of 'abc' is stored in Column2 for comparison purposes.
I am just wondering whether this is secure. Can't the Binary_Checksum value
(26435) be reversed to get the original 'abc'?
Thank you.
Regards,
Karthik|||Hi Karthik,
The hashes generated are only 32 bits long, which is tiny. BOL states that
the probability of a collision is higher than that of the MD5 one-way hash
function, which itself is considered insecure at this point. This means a
hacker has a much greater chance of guessing the password with brute force
or dictionary attacks. Consider the following sample run in SQL 2005:
SELECT BINARY_CHECKSUM('AAAAAAAAAAAAAAAAA')
SELECT BINARY_CHECKSUM('A')
The first thing to notice is the simplicity of the algorithm.
BINARY_CHECKSUM('A') returns 65, which is the ASCII code for... what else?
'A'. Both of the SELECT statements above produce the same result. Someone
trying to hack this particular system where you use BINARY_CHECKSUM to hash
the password will have a pretty easy time of getting some generated string
to match that hash. I would switch to another hash algorithm like SHA or
something.
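A minimal sketch of that switch, assuming the HASHBYTES function available from SQL Server 2005 onward (SHA1 is used here purely as an illustration; this is not code from the thread):

```sql
-- HASHBYTES produces a real one-way hash (SQL Server 2005+).
-- 'SHA1' yields 160 bits versus BINARY_CHECKSUM's 32.
SELECT HASHBYTES('SHA1', 'AAAAAAAAAAAAAAAAA')
SELECT HASHBYTES('SHA1', 'A')
-- Unlike the BINARY_CHECKSUM pair above, these two results differ.
```

In practice you would also salt the input before hashing, so that identical passwords don't produce identical stored hashes.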
"Karthik" <Karthik@.discussions.microsoft.com> wrote in message
news:0AC981BD-2919-496D-9572-D7F5EA38F3A8@.microsoft.com...
> Hi,
> In one of my client's database some confidential information is stored in
> an
> encrypted format and a different column has the original value in a
> Binary_Checksum format.
> For example, if 'abc' is a password, its first encrypted and put in
> Column1.
> But a Binary_Checksum of 'abc' is stored in Column2 for comparison
> purposes.
> I am just wondering whether this is secure. Can't the Binary_Checksum
> value
> (26435) be reversed to get the original 'abc'?
> Thank you.
> Regards,
> Karthik|||Karthik (Karthik@.discussions.microsoft.com) writes:
> In one of my client's database some confidential information is stored
> in an encrypted format and a different column has the original value in
> a Binary_Checksum format.
> For example, if 'abc' is a password, its first encrypted and put in
> Column1. But a Binary_Checksum of 'abc' is stored in Column2 for
> comparison purposes.
> I am just wondering whether this is secure. Can't the Binary_Checksum
> value (26435) be reversed to get the original 'abc'?
I believe the checksum algorithm is not very sophisticated at all; it only
performs some XOR operations. Then again, it's a destructive
transformation: there are many strings that get the same checksum, so
it's not completely trivial to guess the original text. Unless, of
course, you already have an idea of what it could be.
So it's certainly not as secure as a real encrypted value.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||You need to educate them on the definition of "encrypted" versus "encoded".
It's generally accepted that encrypted means some type of key is required to
decrypt the data, whether it's a certificate or a password. Encoded means no
key is required. You're talking about encoded here.
Ray

> In one of my client's database some confidential information is stored in
> an
> encrypted format|||Any checksum is simply a character-by-character XOR. This is the same idea as
RAID parity, .zip checksums, etc. It is not intended to be used for
encryption.
It is based on the fact that the XOR operator is commutative and self-inverting:
A XOR B = C = B XOR A,
B XOR C = A = C XOR B, and
C XOR A = B = A XOR C,
which is why PARITY works.
Encryption works the same way, which is why you can decrypt. However, to
make it more secure, instead of XORing the characters together, the original
values are XORed with a fixed algorithm hash. The method of constructing
this hash is what determines the strength of the algorithm.
The hash for a checksum is 0, which is extremely simple to hack: I just told
you what it was; no big secret.
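A quick illustration of that property using T-SQL's ^ (bitwise XOR) operator; this is only a sketch of the parity idea described above, not code from the thread:

```sql
DECLARE @A int, @B int, @C int
SELECT @A = 65, @B = 66        -- ASCII 'A' and 'B'
SET @C = @A ^ @B               -- the "checksum" of A and B
SELECT @C ^ @B AS RecoveredA   -- XORing with B again yields 65
SELECT @C ^ @A AS RecoveredB   -- XORing with A again yields 66
```

Because each value can be recovered from the other two, the transformation hides nothing once any part is known.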
Sincerely,
Anthony Thomas
"Karthik" <Karthik@.discussions.microsoft.com> wrote in message
news:0AC981BD-2919-496D-9572-D7F5EA38F3A8@.microsoft.com...
> Hi,
> In one of my client's database some confidential information is stored in
> an
> encrypted format and a different column has the original value in a
> Binary_Checksum format.
> For example, if 'abc' is a password, its first encrypted and put in
> Column1.
> But a Binary_Checksum of 'abc' is stored in Column2 for comparison
> purposes.
> I am just wondering whether this is secure. Can't the Binary_Checksum
> value
> (26435) be reversed to get the original 'abc'?
> Thank you.
> Regards,
> Karthik|||Hi Mike, Erland, Ray and Anthony,
Thank you for all the valuable input. I will present these facts to my
client. Hopefully he will agree to a proper hash rather than
BINARY_CHECKSUM().
Thank you!
Regards,
Karthik
"Mike C#" wrote:

> Hi Karthik,
> The hashes generated are only 32 bits long, which is tiny. BOL states that
> the probability of a collision is higher than that of the MD5 one-way hash
> function, which itself is considered insecure at this point. This means a
> hacker has a much greater chance of guessing the password with brute force
> or dictionary attacks. Consider the following sample run in SQL 2005:
> SELECT BINARY_CHECKSUM('AAAAAAAAAAAAAAAAA')
> SELECT BINARY_CHECKSUM('A')
> The first thing to notice is the simplicity of the algorithm.
> BINARY_CHECKSUM('A') returns 65, which is the ASCII code for.. what else?
> 'A'. Both of the SELECT statements above produce the same result. Someone
> trying to hack this particular system where you use BINARY_CHECKSUM to hash
> the password will have a pretty easy time of getting some generated string
> to match that hash. I would switch to another hash algorithm like SHA or
> something.
> "Karthik" <Karthik@.discussions.microsoft.com> wrote in message
> news:0AC981BD-2919-496D-9572-D7F5EA38F3A8@.microsoft.com...
>
>

Saturday, February 25, 2012

Better Way to format day?

I'm trying to find a better way to format the day of the week. What I have stored is the numerical day of the week: 1=Sunday, 2=Monday, etc. What I want to display is Sunday, Monday, etc. Is there a format I can use that is better than the CASE statement below?

SELECT CASE DATEPART(dw, GETDATE())
           WHEN 1 THEN 'Sunday'
           WHEN 2 THEN 'Monday'
           WHEN 3 THEN 'Tuesday'
           WHEN 4 THEN 'Wednesday'
           WHEN 5 THEN 'Thursday'
           WHEN 6 THEN 'Friday'
           WHEN 7 THEN 'Saturday'
       END AS DOW

Thanks,

Brent|||Try this

SELECT datename(dw,getdate())

:)
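DATENAME(dw, GETDATE()) covers today's date, but the original question stores a day number (1=Sunday). A sketch for that case, using 1900-01-07 (which was a Sunday) as an anchor date; DATENAME(weekday, ...) then returns the name regardless of the server's DATEFIRST setting:

```sql
DECLARE @DayNum int
SET @DayNum = 4
-- 1900-01-07 was a Sunday, so day 1 maps to it and day N lands N-1 days later
SELECT DATENAME(weekday, DATEADD(day, @DayNum - 1, '19000107')) AS DOW
-- returns 'Wednesday'
```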

Thursday, February 16, 2012

Best way to load data from text files

Hi,
I have a problem I'm hoping someone can give me some pointers with.

I need to load data from several text files into one table. The format of the files is simple: each line is comma separated, with double quotes around each element, e.g.

"parameter 1","value 1","parameter 2","value 2"...
"parameter 12","value 12","parameter 13","value 13"...

However, the files themselves will have different numbers of columns, e.g. file 1 may have 8 columns and file 2 may have 16 columns.

I'm going to load the data into a table that has at least as many columns as the longest file. The table columns are all varchar, and are named simply as [Col001] [Col002] [Col003] etc...

The first two columns of this table must be left empty during the load (I use these later on), so the data entry will start at [Col003].

My question is: what is the best way to do this? I thought perhaps using a BULK INSERT in a stored procedure might do the trick, but I haven't used it before and haven't got very far. I gather another approach might be to use the bcp utility. Someone has also suggested a DTS package, but the filenames will be suffixed with a current date/time stamp, so I don't think that will work.

My preferred approach would be BULK INSERT, but I'm open to any pointers.

Many Thanks
Greg|||You could use BCP or BULK INSERT, but you would need to know, a priori, how many columns were in the current input file and then use a different format file that matched.|||I would bring them into a staging table as the first step. Then I would have a control proc count the number of commas in a row and then call a proc set up to handle that number of columns. The called proc would then put the columns into the table designed to hold the largest number of columns.

That way, if someone comes along later and adds another file with n more columns, you merely create another proc to handle n more columns, alter your table to hold n more columns, and modify your control proc to count up to n more columns and call the appropriate sproc.
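The comma-counting step of the control proc could be sketched like this (the staging table and column names here are hypothetical, assuming one raw file line per row):

```sql
-- Count delimiters in each staged raw line to decide which
-- fixed-width proc to call. Removing the commas and comparing
-- lengths gives the comma count; adding 1 gives the column count.
SELECT RawLine,
       LEN(RawLine) - LEN(REPLACE(RawLine, ',', '')) + 1 AS ColumnCount
FROM StagingRaw
```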

Sunday, February 12, 2012

Best way to format a letter so that text can break across a page

I'm currently using SQL Reporting Services to produce letters. Most of the
formatting can be achieved; however, I'm having problems with page breaks.
I've currently set up the report using Text Boxes in the following format:
Name
Address
Salutation
Main Content
From
I need to ensure that the Main Content follows on directly from the
Salutation field. This works fine if the letter is 1 or 3 pages long.
However, in the case where the Main Content requires a few lines on page 2 it
places all the main content onto Page 2.
How can I ensure that the Main Content always follows directly on from the
Salutation? Is it possible to do this using a table instead of text boxes?
Any help would be greatly appreciated!|||I should have mentioned that the report is being produced in PDF format.
I read in a newsgroup article that PDF won't split a single row between 2
pages unless the row itself is longer than one page. Is this still the case
or is there a workaround? If this is true, then one solution might be to
bring back the main content text in sections although this isn't too
appealing!
"SV" wrote:
> I'm currently using SQL reporting services to produce letters. Most of the
> formatting can be achieved however I'm having problems with page breaks.
> I've currently set up the report using Text Boxes in the following format:
> Name
> Address
> Salutation
> Main Content
> From
> I need to ensure that the Main Content follows on directly from the
> Salutation field. This works fine if the letter is 1 or 3 pages long.
> However, in the case where the Main Content requires a few lines on page 2 it
> places all the main content onto Page 2.
> How can I ensure that the Main Content always follows directly on from the
> Salutation? Is it possible to do this using a table instead of text boxes?
> Any help would be greatly appreciated!

Friday, February 10, 2012

BEST UDF to handle char or varchar to smalldatetime conversion pr

Hi Experts,
I am importing data from text files; the data is about 800GB, yes, that's right,
in different files and in different varchar formats, like 01/01/2004, jan 17,
200, mm/dd/yy. It has bad data and good data, but there are millions and
millions of records, and it has all kinds of problems anybody can think of.
Is there any site or way to get a function which handles these problems,
or, if any of you ever had a chance to see this kind of function, please pass it to me.
Thanking you in advance for your help.
-Permood|||You can use ISDATE to determine if the specified string can be converted to
a SQL Server datetime datatype: Examples:
SELECT ISDATE('1/01/2004')
SELECT ISDATE('jan 17, 200')
SELECT ISDATE('mm/dd/yy')
SELECT ISDATE('01/01/04')
You might consider performing data type validation and conversion as part of
your ETL process before importing data into SQL Server. This will likely
perform better than using an intermediate SQL Server staging table when you
have a lot of bad data and large volumes.
--
Hope this helps.
Dan Guzman
SQL Server MVP
"-Permood" <Permood@.discussions.microsoft.com> wrote in message
news:FD3FF986-932F-4FA0-9AD1-E027327AA100@.microsoft.com...
> Hi Experts,
> I am importing data from text files, data is about 800GB, yeah thats
> right.
> in different file and in different varchar format. like 01/01/2004 jan 17,
> 200, mm/dd/yy. it had bad data and good data, but there are millions and
> millions records, and it has all kind of problems any body can think of.
> is there any site or way to get any function which handle these problems,
>
> or if any of you ever had chance to see this kinda function and pass to
> me.
>
> thanking you in advace for your help..
> -Permood
>|||Thanks Dan,
Here is the problem: I am very new in my company, and some 'DEVELOPERS' don't
want to change their way of doing things; they have decided to use SQL staging
and transformation. I don't have much experience with SQL Server either. So I
wanted to see if something is already there, I mean any function which can
handle most date problems, and then I can enhance it according to my needs.
But thank you for your time and suggestions.
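For what it's worth, the ISDATE guard Dan describes can be wrapped into a reusable scalar UDF. This is only a sketch (the function name is made up), and note the caveat that ISDATE can accept dates outside smalldatetime's 1900-2079 range, so the CONVERT can still fail for extreme values:

```sql
CREATE FUNCTION dbo.TryConvertSmallDateTime (@value varchar(50))
RETURNS smalldatetime
AS
BEGIN
    -- Return NULL for anything ISDATE rejects, so bad rows can be
    -- filtered or logged instead of aborting the whole load.
    IF ISDATE(@value) = 1
        RETURN CONVERT(smalldatetime, @value)
    RETURN NULL
END
```

Used as, e.g., SELECT dbo.TryConvertSmallDateTime(RawDateColumn) in the staging query, it lets the good rows through and flags the rest as NULL.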
"Dan Guzman" wrote:
> You can use ISDATE to determine if the specified string can be converted to
> a SQL Server datetime datatype: Examples:
> SELECT ISDATE('1/01/2004')
> SELECT ISDATE('jan 17, 200')
> SELECT ISDATE('mm/dd/yy')
> SELECT ISDATE('01/01/04')
> You might consider performing data type validation and conversion as part of
> your ETL process before importing data into SQL Server. This will likely
> perform better than using an intermediate SQL Server staging table when you
> have a lot of bad data and large volumes.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "-Permood" <Permood@.discussions.microsoft.com> wrote in message
> news:FD3FF986-932F-4FA0-9AD1-E027327AA100@.microsoft.com...
> > Hi Experts,
> >
> > I am importing data from text files, data is about 800GB, yeah thats
> > right.
> > in different file and in different varchar format. like 01/01/2004 jan 17,
> > 200, mm/dd/yy. it had bad data and good data, but there are millions and
> > millions records, and it has all kind of problems any body can think of.
> >
> > is there any site or way to get any function which handle these problems,
> >
> >
> > or if any of you ever had chance to see this kinda function and pass to
> > me.
> >
> >
> > thanking you in advace for your help..
> > -Permood
> >
>
>
