SQL Server Storing SHA256 String as Question Marks - sql-server-2008

I have followed this article on how to implement password hashing and salting:
http://www.codeproject.com/Articles/608860/A-Beginners-Tutorial-for-Understanding-and-Impleme
I have implemented all the code from the article in my MVC 5 web application; however, whenever I store the PasswordHash and Salt, both strings are saved in my User table as question marks, e.g. ????????????????
The database I'm using is SQL Server 2008 R2. The two fields in my User table both have a datatype of NVARCHAR(100).
I should also mention the data is being persisted to the database using Entity Framework 5.
Has anyone seen this before? I'm thinking it might be a datatype problem, i.e. that it shouldn't be NVARCHAR, but I don't really know.
Any help with this would be great.
Thanks.

There's a problem in Utility.cs:
public static string GetString(byte[] bytes)
{
    // Reinterprets raw bytes as UTF-16 code units; many of the resulting
    // "characters" are invalid or unprintable.
    char[] chars = new char[bytes.Length / sizeof(char)];
    System.Buffer.BlockCopy(bytes, 0, chars, 0, bytes.Length);
    return new string(chars);
}
The function is fed random bytes, but this is not how you create a random string. Characters are not meant to store arbitrary binary data, and strings built this way will be rejected or mangled by many components.
Use Convert.ToBase64String instead, and don't trust random articles on the web; validate what you find against your own understanding before using it.
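For illustration, a minimal sketch (not the article's code) of hashing a salted password and Base64-encoding both values so they survive storage in an NVARCHAR column; the 32-byte salt size here is an arbitrary choice:

using System;
using System.Security.Cryptography;
using System.Text;

public static class HashingSketch
{
    // Illustrative only: concatenate salt + password bytes, hash with SHA-256,
    // and return text-safe Base64 strings for storage.
    public static void CreateHash(string password, out string saltBase64, out string hashBase64)
    {
        byte[] salt = new byte[32];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(salt); // cryptographically random salt
        }

        byte[] passwordBytes = Encoding.UTF8.GetBytes(password);
        byte[] combined = new byte[salt.Length + passwordBytes.Length];
        Buffer.BlockCopy(salt, 0, combined, 0, salt.Length);
        Buffer.BlockCopy(passwordBytes, 0, combined, salt.Length, passwordBytes.Length);

        using (var sha256 = SHA256.Create())
        {
            byte[] hash = sha256.ComputeHash(combined);
            saltBase64 = Convert.ToBase64String(salt);
            hashBase64 = Convert.ToBase64String(hash);
        }
    }
}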

SHA256 hashes are not strings; they are byte arrays. Use byte[] in your client code and VARBINARY on the database side (a SHA-256 hash is always 32 bytes).
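If you prefer that approach, a possible sketch with EF 5 code-first; the UserAccount class, property names, and lengths below are hypothetical, not taken from the question:

using System.Data.Entity;

// Hypothetical entity: byte[] properties map to VARBINARY columns.
public class UserAccount
{
    public int Id { get; set; }
    public byte[] PasswordHash { get; set; } // raw 32-byte SHA-256 output
    public byte[] Salt { get; set; }
}

public class AppContext : DbContext
{
    public DbSet<UserAccount> UserAccounts { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Constrain the VARBINARY lengths explicitly (sizes are illustrative).
        modelBuilder.Entity<UserAccount>().Property(u => u.PasswordHash).HasMaxLength(32);
        modelBuilder.Entity<UserAccount>().Property(u => u.Salt).HasMaxLength(64);
        base.OnModelCreating(modelBuilder);
    }
}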

Related

Is data returned from a MySQL Connector/C query not in native C data format?

If I execute a query against the MySQL Connector/C library the data I'm getting back all appears to be in straight char * format, including numerical data types.
For example, if I execute a query that returns 4 columns, all of which are INTEGER in MySQL, rather than getting back 4 bytes worth of data (each byte representing a single column's value), I'm actually getting back 4 ASCII-encoded character bytes, where a 1 is actually a byte with the numeric value 49 in it (ASCII for '1').
Is this accurate, or am I just missing something completely?
Do I really need to then atoi each returned byte into an int in my code, or is there a mechanism to get the native C data types out of the MySQL client directly?
I guess my real question is: is the mysql_store_result structure converting that data to ASCII encoded representations in a way that can be bypassed by my application code?
I believe the data is sent over the wire as text in the MySQL protocol (I just confirmed this with Wireshark). That means mysql_store_result() is not converting the data; it's simply passing it on as it was received. MySQL really does send integers as text. I agree this has always seemed like an odd design.
MySQL originally only offered the Text Protocol that you are currently using, in which (as you note) results are encoded as strings. MySQL v4.1 (released in April 2003) introduced the Prepared Statement protocol, which (amongst other things) transmits results in a binary format.
See C API Prepared Statements for more information on how to use the latter protocol with Connector/C.

Varchar or Blob object for very large string? in Mysql through Eclipselink

I have an application where I am going to store the JSON string in MySql database through Eclipselink JPA.
The JSON string can be of any length; most of the time it will be a string from a JSON file of around 200 to 300 lines.
What is the best way to store the string? To use varchar or Blob?
Please provide an example if any.
You should not save it as a BLOB, as that type is primarily used for image data or other binary data. Use VARCHAR(), or use TEXT (which can hold up to 65,535 characters) if you are unsure how many characters you might need to store.
There was a thread previously discussing WHEN to use varchar or text: Thread
To store text, use a TEXT column (or even LONGTEXT); BLOBs are for binary data.
Also, if you're on MySQL 5.7+, there's now a JSON data type, which is validated as correct JSON, stored more efficiently, and comes with convenient manipulation functions.

errors, incorrect values when comparing sha512 as3, with vb.net

I've been testing an SHA512 class. I need to generate a hash from a string within Flash CS5, but I need it to match the hash produced by ASP.NET (VB). It appears to be adding a zero somewhere in the string, and I don't know why.
These are the files I'm using: Porting SHA512 Javascript implemention to Actionscript.
The hashed string is the name "Karla" in this example.
Example (ASP.NET), where the brackets show where the difference is:
C4DB628AD520AFF7308ED19E91635E8E24A6C7CFD4DB2F71BBE2FA6CD63770B315A839143037BB9DB16784C0BDCEB622ECAA4077D4D8(1787)D5023E86734748
(AS3):
C4DB628AD520AFF7308ED19E91635E8E24A6C7CFD4DB2F71BBE2FA6CD63770B315A839143037BB9DB16784C0BDCEB622ECAA4077D4D8(17087)D5023E86734748
There's additional info below, in the link I provided, but I do not think it's related to what I need; I don't think I'm using HMAC, just a straight string hash. However, when I do it in VB.NET I get the bytes from the string first and then hash the bytes.
I had a feeling that the AS3 code converts the string automatically in the SHA512 class?
Hoping someone has come across this issue as well.
Thanks for any help with this.
Neither one of those hashes is correct. The correct SHA512 hash for the string "Karla" is:
C4DB628AD520AFF7308ED19E91635E8E24A6C7CFD4DB2F71BBE2FA6CD63770B315A839143037BB9DB16784C0BDCEB622ECAA4077D4D817087D5023E867347408
However, I would wager that the AS3 hash is actually correct (the JavaScript version generates the correct hash, see here) and was just pasted incorrectly.
In two places the computed hash contains the byte 0x08, but in the ASP.NET version the high 4 bits of that byte are being lost: it is appended to the output string as just "8", not "08".
Basically, your ASP.NET hash generator is trashing bytes less than 0x10, ignoring the leading zero, and giving you malformed hashes.
Another way to tell that there is something amiss with your ASP.NET hash is that it's only 126 characters (504 hex-encoded bits) long.
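As a sketch of the kind of bug described above (illustrative C#, not the asker's actual VB.NET code): a single-digit hex format specifier silently drops leading zeros, while a two-digit specifier keeps them.

using System;
using System.Security.Cryptography;
using System.Text;

class HexEncodingDemo
{
    static void Main()
    {
        byte[] hash;
        using (var sha512 = SHA512.Create())
        {
            hash = sha512.ComputeHash(Encoding.UTF8.GetBytes("Karla"));
        }

        var buggy = new StringBuilder();
        var correct = new StringBuilder();
        foreach (byte b in hash)
        {
            buggy.Append(b.ToString("X"));    // 0x08 becomes "8": leading zero lost
            correct.Append(b.ToString("X2")); // 0x08 becomes "08"
        }

        Console.WriteLine(buggy.Length);   // shorter than 128 whenever any byte < 0x10
        Console.WriteLine(correct.Length); // always 128
    }
}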

how to avoid scientific number format using float datatype in mysql?

I am using MySQL as my backend DB and Hibernate and Spring on the front end. I am using the float type in many of the tables, which only holds about seven significant digits; beyond that the value is saved in scientific number format, which is a big problem. I can't use double as it would affect most of the tables.
I have tried using DecimalFormat but it didn't work:
protected void initBinder(HttpServletRequest request, ServletRequestDataBinder binder)
        throws Exception {
    CustomDateEditor dateEditor = new CustomDateEditor(new SimpleDateFormat("dd/MM/yyyy"), true);
    binder.registerCustomEditor(Date.class, null, dateEditor);

    DecimalFormat decimalFormat = new DecimalFormat();
    DecimalFormatSymbols symbols = new DecimalFormatSymbols();
    symbols.setDecimalSeparator('.');
    decimalFormat.setDecimalFormatSymbols(symbols);
    decimalFormat.setMaximumFractionDigits(2);
    binder.registerCustomEditor(Float.class,
            new CustomNumberEditor(Float.class, decimalFormat, true));
}
Can anyone tell me how to avoid the scientific notation with float without changing its type?
There is no "scientific number format using float"; you only see scientific notation when you (implicitly or explicitly) change the representation of the data, most likely to a character representation.
Hibernate maps data stored in a database to java data types - it doesn't do any conversion which would cause the representation of the number to change to scientific notation.
java.text.DecimalFormat is a tool for converting numbers into strings. If you want to change the format of the string it generates, then tell it what format you want to use, e.g.
DecimalFormat decimalFormat = new DecimalFormat("###,###,###,###,###.0#");
BTW, you shouldn't use floats for currency values.

SQL Server Integration Services - Incremental data load hash comparison

I am using SQL Server Integration Services (SSIS) to perform an incremental data load, comparing a hash of the to-be-imported and existing row data. I am using this:
http://ssismhash.codeplex.com/
to create the SHA512 hash for comparison. When trying to compare the imported hash with the existing hash from the database using a Conditional Split task (the expression is NEW_HASH == OLD_HASH), I get the following error upon entering the expression:
The data type "DT_BYTES" cannot be used with binary operator "==". The type of one or both of the operands is not supported for the operation. To perform this operation, one or both operands need to be explicitly cast with a cast operator.
Attempts at casting each column to a string (DT_WSTR, 64) before comparison have resulted in a truncation error.
Is there a better way to do this, or am I missing some small detail?
Thanks
Have you tried expanding the length beyond 64? I believe DT_BYTES is valid up to 8000 characters. I verified that the following are legal cast destinations for DT_BYTES based on the Books Online article:
DT_I4
DT_UI4
DT_I8
DT_UI8
DT_STR
DT_WSTR
DT_GUID
DT_IMAGE
I also ran a test in BIDS and verified it had no problem comparing the values once I cast them to a sufficiently long data type.
SHA512 is a bit much, as your chances of an actual collision are about 1 in 2^256. SHA512 always outputs 512 bits, which is 64 bytes. I have a similar situation where I check the hash of an incoming binary file; I use a Lookup Transformation instead of a Conditional Split.
This post is older but in order to help other users...
The answer is that in SSIS you cannot compare binary data using the == operator.
What I've seen is that people will most often convert (and store) the hashed value as varchar or nvarchar, which can be compared in SSIS.
I believe the other users have answered your issue with "truncation" correctly.
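If you go the string route, a minimal sketch of what that conversion might look like in an SSIS Script Component; the HashBytes and HashString column names are hypothetical, not from the question:

// Hypothetical SSIS Script Component (C#, transformation): converts an incoming
// DT_BYTES hash column into a hex string that downstream components can compare.
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    // 64 hash bytes become a 128-character hex string, so the output
    // column needs to be at least DT_WSTR(128).
    Row.HashString = BitConverter.ToString(Row.HashBytes).Replace("-", "");
}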