I am working on an MS Access application to store customer data.
All data are stored in a SQL database.
One of the input fields is used to store the ID number of a card with a magnetic stripe.
Instead of typing the long number, I purchased a USB magnetic card scanner.
The scanner works, but after I scan a card it gives me the card number with unwanted characters at the front and back of the string, for example #1234567890123456789012345-1-1-1#.
How can I get rid of the additional characters, leaving only the 25 characters between the 2nd and 26th positions?
You can use
strData = Mid(strData,2,25)
after reading the data.
I would also recommend creating a procedure that recognizes scanner input. Use the form's Form_KeyPress event and start buffering characters when the first received character is #, continuing until you receive the last character. After that, you can set focus to the scanner input field and display only the required characters from the received string. This way you can scan the data regardless of the current focus and show the user only the meaningful characters. I can provide an example for a regular laser scanner with AIM service codes (3 service characters at the beginning).
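A minimal sketch of that approach in Access VBA, assuming the form's Key Preview property is set to Yes and the scanner field is a text box named txtCardID (a made-up name):

Private mBuffer As String
Private mCapturing As Boolean

Private Sub Form_KeyPress(KeyAscii As Integer)
    ' Capture characters framed by "#" regardless of which control has focus.
    ' txtCardID is a made-up control name for this example.
    Dim ch As String
    ch = Chr(KeyAscii)

    If Not mCapturing Then
        If ch = "#" Then                          ' start of a scan
            mCapturing = True
            mBuffer = ""
            KeyAscii = 0                          ' swallow the delimiter
        End If
    Else
        If ch = "#" Then                          ' end of the scan
            mCapturing = False
            Me.txtCardID = Left(mBuffer, 25)      ' same 25 characters as Mid(strData, 2, 25)
        Else
            mBuffer = mBuffer & ch
        End If
        KeyAscii = 0                              ' keep scanner characters out of the current control
    End If
End Sub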
What is data in the following format called:
I guess this is a string representation of some lower level data store. I'm not sure what it would be called though.
A source database field of type INT is read through an OLE DB Source. It is eventually written to a Flat File Destination. The destination Flat File Connection Manager > Advanced page reports it as a four-byte signed integer [DT_I4].
This data type made me think it indicated binary. Clearly, it does not. I was surprised that it was not the more generic numeric [DT_NUMERIC].
I changed this type setting to single-byte signed integer [DT_I1]. I expected this to fail, but it did not. The process produced the same result, even though the value of the field was always > 127. Why did this not fail?
Some of the values that are produced are
1679576722
1588667638
1588667638
1497758544
1306849450
1215930367
1215930367
1023011178
1932102084
Clearly, outside the range of a single-byte signed integer [DT_I1].
As a related question, is it possible to output binary data to a flat file? If so, what settings and where should be used?
Data type validation
I think this issue is related to the connection manager that is used, since data type validation (outside the pipeline) is not done by Integration Services; it is done by the relevant service provider:
OLEDB for Excel and Access
SQL Database Engine for SQL Server
...
When it comes to the Flat File Connection Manager, it doesn't guarantee any data type consistency, since all values are stored as text. As an example, add a Flat File Connection Manager, select a text file that contains names, change the columns' data types to Date, and go to the Columns preview tab: it will show all columns without any issue. The connection manager only takes care of the row delimiter, column delimiter, text qualifier, and the other common properties used to read from a flat file (similar to the TextFieldParser class in VB.NET).
The only case where data types may cause an exception is when you are using a Flat File Source, because the Flat File Source creates External Columns with the metadata defined in the Flat File Connection Manager and links them to the original columns (you can see this when you open the Advanced Editor of the Flat File Source). When SSIS tries to read from the flat file source, the External Columns will throw the exception.
Binary output
You should convert the column to binary within the package and map it to the destination column. As an example, you can use a Script Component to do that:
public override void myInput_ProcessInputRow(myInputBuffer Row)
{
    // Convert the incoming string column to a byte array (DT_BYTES)
    // before it reaches the destination.
    Row.ByteValues = System.Text.Encoding.UTF8.GetBytes(Row.name);
}
I haven't tested whether this will work with a Derived Column or Data Conversion transformation.
References
Converting Input to (DT_BYTES,20)
DT Bytes in SSIS
After re-reading the question to make sure it matched my proof-edits, I realized that it doesn't appear that I answered your question - sorry about that. I have left the first answer in case it is helpful.
SSIS does not appear to enforce destination metadata; however, it will enforce source metadata. I created a test file with values ranging from -127 to 400. I tested this with the following scenarios:
Test 1: Source and destination flat file connection managers with signed 1 byte data type.
Result 1: Failed
Test 2: Source is 4 byte signed and destination is 1 byte signed.
Result 2: Pass
SSIS's pipeline metadata validation only cares that the input metadata matches the width of the pipeline; it appears not to care what the output is. It does, however, give you the ability to set the destination metadata to whatever the downstream target expects, so that it can check and warn you if the destination's (i.e., SQL Server's) metadata does not match.
This was an unexpected result - I expected it to fail as you did. Intuitively, though, the fact that it did not fail still makes sense. Since we are writing to a CSV file, there is no way to enforce required metadata. But if we hook this up to a SQL Server destination and the metadata doesn't match, then SQL Server will frown upon the out-of-bounds data (see my other answer).
Now, I would still set the metadata of the output to match what is in the pipeline, as this matters for distinguishing string versus numeric data types. If you set a datetime column as an integer, there will be no text qualifier, which may cause an error in the next process that reads the file. Conversely, if you set an integer column as a varchar, it will get a text qualifier.
I think the fact that destination metadata is not enforced is a bit of a weak link in SSIS. But it can be mitigated by setting it to match the pipeline buffer, which happens automatically as long as the destination is the last task dropped onto the design surface. That said, if you update the pipeline metadata after development is complete, you are in for a real treat getting it updated throughout the entire pipeline: some tasks have to be opened and closed, while others have to be deleted and re-created, before they pick up the new metadata.
Additional Information
TL;DR: TinyInt is stored as an unsigned data type in SQL Server, which means it supports values between 0 and 255. So a value greater than 127 is acceptable, up to 255; anything over that will result in an error.
The byte size determines the number of possible values, and signed/unsigned determines whether that range is split between negative and positive values.
1 byte = TinyInt in SQL Server
1 byte is 8 bits = 256 combinations
Signed Range: -128 to 127
Unsigned Range: 0 to 255
It is important to note that SQL Server does not let you choose the signedness of its integer data types. There is no way to declare the integer data types (i.e., TinyInt, Int, and BigInt) as signed or unsigned:
TinyInt is unsigned
Int and BigInt are signed
See reference below: Max Size of SQL Server Auto-Identity Field
If we attempt to set a TinyInt to any value that is outside of the Unsigned Range (e.g., -1 or 256), then we get an arithmetic overflow error.
This is why you were able to set a value greater than 127.
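A quick T-SQL illustration of that boundary (run the statements one at a time; the variable is made up for the example):

DECLARE @t TINYINT;   -- illustrative variable
SET @t = 255;         -- succeeds: top of the unsigned range
SET @t = 256;         -- fails with an arithmetic overflow error
SET @t = -1;          -- also fails: TinyInt cannot hold negative values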
Int and BigInt produce equivalent overflow error messages when values fall outside their signed ranges.
With respect to Identity columns, if we declare an Identity column as Int (i.e., 32 bit ~= 4.3 billion combinations) and set the seed to 0 with an increment of 1, then SQL Server will only go to 2,147,483,647 rows before it stops, which is the maximum signed value. But, we are short by half the range. If we set the seed to -2,147,483,648 (don't forget to include 0 in the range) then SQL Server will increment through the full range of combinations before stopping.
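For instance, a hypothetical table seeded at the minimum signed value makes the whole Int range available to the identity column:

CREATE TABLE dbo.Example   -- hypothetical table name
(
    Id INT IDENTITY(-2147483648, 1) PRIMARY KEY,
    Payload VARCHAR(50) NULL
);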
References:
SSIS Data Types and Limitations
Max Size of SQL Server Auto-Identity Field
If I execute a query against the MySQL Connector/C library, the data I'm getting back all appears to be in plain char * format, including numeric data types.
For example, if I execute a query that returns 4 columns, all of which are INTEGER in MySQL, rather than getting back 4 bytes worth of data (each byte representing a single column's row value), I'm actually getting back 4 ASCII-encoded character bytes, where a 1 is actually a byte with the numeric value 49 (ASCII for '1').
Is this accurate, or am I just missing something completely?
Do I really need to then atoi that returned byte into an int in my code or is there a mechanism to get the native C data types out of the MySQL client directly?
I guess my real question is: is the mysql_store_result structure converting that data to ASCII encoded representations in a way that can be bypassed by my application code?
I believe the data is sent on the wire as text in the MySQL protocol (I just confirmed this with Wireshark). That means mysql_store_result() is not converting the data; it's simply passing it on as it was received. MySQL actually sends integers as text. I agree this has always seemed like an odd design to me as well.
MySQL originally only offered the Text Protocol that you are currently using, in which (as you note) results are encoded as strings. MySQL v4.1 (released in April 2003) introduced the Prepared Statement protocol, which (amongst other things) transmits results in a binary format.
See C API Prepared Statements for more information on how to use the latter protocol with Connector/C.
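A rough C sketch of the prepared-statement approach, with error handling omitted; the connection conn, table t, and column i are made up for the example:

#include <stdio.h>
#include <string.h>
#include <mysql.h>

/* Fetch an INT column as a native C int via the binary (prepared statement) protocol.
   Assumes conn is already connected; table t and column i are placeholders. */
void fetch_native_ints(MYSQL *conn)
{
    const char *sql = "SELECT i FROM t";
    MYSQL_STMT *stmt = mysql_stmt_init(conn);

    mysql_stmt_prepare(stmt, sql, (unsigned long)strlen(sql));
    mysql_stmt_execute(stmt);

    int value = 0;                           /* receives the column as a native int */
    MYSQL_BIND result;
    memset(&result, 0, sizeof(result));
    result.buffer_type = MYSQL_TYPE_LONG;    /* 32-bit integer, matches INT */
    result.buffer = &value;

    mysql_stmt_bind_result(stmt, &result);
    mysql_stmt_store_result(stmt);

    while (mysql_stmt_fetch(stmt) == 0)      /* no atoi needed */
        printf("%d\n", value);

    mysql_stmt_close(stmt);
}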
Has anyone else seen this, or can you verify this behavior?
I'm using PayPal's new REST API. It is a fact that some CVV numbers on credit cards start with a 0 (zero). Yet sending a request to the PayPal REST API with a CVV number starting with zero fails. This is because the "cvv2" value within a "funding_instrument" object is expected to be a number, and a number starting with zero is invalid JSON. When I try to execute my request anyway, I get an "INTERNAL_SERVICE_ERROR" error as my response.
In an attempt to correct this, I wrapped my CVV number in quotation marks to treat it as a string and then resubmitted my request. This time I got a "VALIDATION_ERROR" response telling me that the CVV number must be numeric. So unless there's some way to escape a leading zero in a number in JSON, there's no way to accept cards via the PayPal REST API whose CVV contains a zero as its first digit.
Any help?
This is a bug in our new REST API: the cvv2 field is defined as an integer instead of a string, so it cannot accommodate values that begin with zeros (e.g., 011, 001). We are working on the fix and will update this thread once it is rolled out.
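For illustration only, once cvv2 is treated as a string the request fragment could carry the leading zero; the card values below are placeholders and the surrounding structure is abbreviated:

{
  "credit_card": {
    "number": "4111111111111111",
    "expire_month": 11,
    "expire_year": 2030,
    "cvv2": "012"
  }
}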
The only integer whose decimal representation starts with a "0" is zero, which is perfectly legal in JSON. The problem you describe is impossible. You do have to convert the CVV2 code from whatever representation you have to a canonical decimal number because that is required by the JSON specification.
You never actually got the CVV number from the user (or whatever the source is); you tried to convert its representation directly into JSON. Converting representations directly will get you into trouble; instead, convert through numbers.
"012" on a credit card represents the number twelve. The number twelve is represented in JSON as "12". When trying to convert a number from one representation to another, it's almost always best to convert it to a number first.
"012" is not a legal representation of any number according to the JSON specification. Trying to send it violates that specification and indicates you never actually got the CVV number but instead tried to use its representation as if it was the number represented. This is like eating a recipe and is likely to give you, and the PayPal API, indigestion.
Update: Apparently, the bug is in the PayPal API. CVV codes are not numbers. There is no such thing as a "CVV number". The PayPal API requires you to supply something that does not exist and fails when there is no number that corresponds to the CVV code.
I'm trying to read some data from an SQL Server 2008 database into an Excel 2007 spreadsheet with C#, using this connection string:
Provider=Microsoft.ACE.OLEDB.12.0;Data Source=foo.xlsx;Extended Properties='Excel 12.0 XML;HDR=YES'
One of the columns in the database is a VARCHAR(1000). When I try recreating the schema in the spreadsheet, it seems like Excel's VARCHAR only supports up to 255. This page suggests that the "Total number of characters that a cell can contain" is around 32K, so in principle, it should be possible to get a longer string in.
Is there a simple way to work around the 255 char limit?
Although XLOPER12 now supports strings up to 32,767 Unicode characters long, xlfEvaluate (and other Excel C API functions) is still limited to 255 characters in Excel 2010. It will return xltypeErr if it is passed an XLOPER12 with a string longer than 255 characters.
All strings the user sees in Excel have, for many versions now, been stored internally as Unicode strings. Unicode worksheet strings can be up to 32,767 (2^15 - 1) characters in length and can contain any valid Unicode character.
When the C API was first introduced, worksheet strings were byte strings limited in length to 255 characters, and the C API reflected these limitations. With Excel 2007, the C API was updated to handle Excel's long Unicode strings. This means that DLL functions registered in the right way can accept Unicode arguments and return Unicode strings.
Note:
Byte strings are still fully supported in the C API for backward compatibility; however, they still have the same 255-character limit. There is no easy solution other than to truncate the string or divide it into multiple cells.
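If dividing across cells is acceptable, a small helper along these lines (hypothetical, in the C# the question already uses) can break the value into pieces that fit the limit, one piece per cell:

using System;
using System.Collections.Generic;

static class StringChunker
{
    // Hypothetical helper: splits a long value into pieces of at most 255
    // characters so each piece can be written to its own cell/column.
    public static IEnumerable<string> Chunk(string value, int size = 255)
    {
        if (string.IsNullOrEmpty(value))
        {
            yield return string.Empty;
            yield break;
        }

        for (int i = 0; i < value.Length; i += size)
            yield return value.Substring(i, Math.Min(size, value.Length - i));
    }
}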