CVV numbers starting with 0 (zero) fail during execution? - json

Has anyone else seen this or can you verify seeing this behavior?
I'm using PayPal's new REST API. It is a fact that some CVV numbers on credit cards start with a 0 (zero), yet sending a request to the PayPal REST API with a CVV number starting with zero fails. This is because the "cvv2" value within a "funding_instrument" object is expected to be a number, and a number starting with zero is invalid JSON. When I try to execute my request anyway I get an "INTERNAL_SERVICE_ERROR" error as my response.
In an attempt to correct this I wrapped my CVV number in quotation marks to treat it as a string and then resubmitted my request. This time I get a "VALIDATION_ERROR" response telling me that the CVV number must be numeric. So unless there's some way to escape a leading zero in a number in JSON there's no way to accept cards via PayPal REST API where the CVV contains a zero as its first digit.
Any help?

This is a bug in our new REST API: the cvv2 field is defined as an integer instead of a string, which would accommodate the values that begin with zeros (e.g. 011, 001). We are working on the fix and will update this thread once it is rolled out.

The only integer whose decimal representation starts with a "0" is zero, which is perfectly legal in JSON. The problem you describe is impossible. You do have to convert the CVV2 code from whatever representation you have to a canonical decimal number because that is required by the JSON specification.
You never actually got the CVV number from the user (or whatever the source is). You tried to convert the representation directly into JSON. Converting representations directly will get you into trouble -- instead convert through numbers.
"012" on a credit card represents the number twelve. The number twelve is represented in JSON as "12". When trying to convert a number from one representation to another, it's almost always best to convert it to a number first.
"012" is not a legal representation of any number according to the JSON specification. Trying to send it violates that specification and indicates you never actually got the CVV number but instead tried to use its representation as if it was the number represented. This is like eating a recipe and is likely to give you, and the PayPal API, indigestion.
Update: Apparently, the bug is in the PayPal API. CVV codes are not numbers. There is no such thing as a "CVV number". The PayPal API requires you to supply something that does not exist and fails when there is no number that corresponds to the CVV code.

Related

What is the correct syntax to fold a JSON string?

I am using Delphi 2009 to build up a string variable containing a simple JSON string from values I get from a database. This results in a string of the form below (although the real string could be much longer):
{"alice@example.com": {"first":"Alice", "id": 2},"bob@example.com": {"first":"Bob", "id":1},"cath@example.com": {"first":"Cath", "id":3},"derek@example.com": {"first":"Derek", "id": 4}}
This string gets sent as a header called Recipient-Variables in an email to a company.
The instructions I have for sending the emails to the company say
Note The value of the “Recipient-Variables” header should be
valid JSON string, otherwise we won’t be able to parse it. If
your “Recipient-Variables” header exceeds 998 characters,
you should use folding to spread the variables over multiple lines.
I have looked at these SO posts to try to understand what is meant by folding but cannot really understand the replies as they often seem to be referencing a particular editor.
notepad++ user defined regions with folding
Folding JSON at specific points
Can you customize code folding?
Please can somebody use my example to show me what syntax I should use or what characters I need to insert in my string to comply with the instruction and fold my JSON string, say in between the records for bob and cath?
(BTW I understand what is meant by folding when viewing JSON or other code in an editor but I don't understand how a simple JSON string needs to be formatted in order for the folding to happen at a specific place)
I finally found the answer myself so I'm posting it here to help others, just in case.
The answer is given in RFC 2822, published in 2001 by the Network Working Group (P. Resnick, editor):
https://www.rfc-editor.org/rfc/rfc2822#page-11
The document ...
specifies a syntax for text messages that are sent between computer
users, within the framework of "electronic mail" messages.
...and in particular describes how emails are constructed, including how to deal with long headers.
Section 2.2.3 covers long header fields (over 998 characters) and says such headers need to be folded by inserting a CRLF followed immediately by some white space, e.g. a space character.
If the receiving server follows the same standard it will strip out the CRLF before parsing the header; the white space left behind is harmless to a JSON parser, since it falls between tokens.
Though structured field bodies are defined in such a way that
folding can take place between many of the lexical tokens (and even
within some of the lexical tokens), folding SHOULD be limited to
placing the CRLF at higher-level syntactic breaks. For instance, if
a field body is defined as comma-separated values, it is recommended
that folding occur after the comma separating the structured items in
preference to other places where the field could be folded, even if
it is allowed elsewhere.
Later, in section 3.2.3 it explains how comments may be combined with folding white space.
So it seems that if generating the string through code, it is necessary to fold long header lines by detecting a higher-level syntactic boundary, such as a comma, that is less than 998 characters from the start of the header (or from the last fold point) and inserting the three bytes 0x0D 0x0A 0x20 (CRLF followed by a space). This can be done after the header has been constructed or on the fly as it is generated.
As a follow up, I now notice that the Overbytes ICS component I am using (TSslSmtpCli) has a boolean property FoldHeaders so this might do all the work for me.

Value that is printed by "jq ." is different from value that is present in JSON file [duplicate]

Why this ("Filter" in jqplay.org):
{"key":633447818234478180}
returns this ("Result" in jqplay.org):
{"key": 633447818234478200}
Original JSON doesn't matter.
Why is it changing 180 into 200? How can I overcome this? Is this a bug? A number too big?
I believe this is because jq can only represent legal JSON data and the number you've given is outside the range that can be represented without loss of precision. See also
What is JavaScript's highest integer value that a number can go to without losing precision?
If you need to work with larger numbers as strings in jq you may want to try this library:
jq-bigint: a big integer library for working with possibly-signed, arbitrarily long decimal strings. Written by Peter Koppstein (@pkoppstein) and released under the MIT license.
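The rounding jq shows is just ordinary IEEE-754 double behavior, and it can be reproduced in any language that stores the value as a 64-bit float. A minimal sketch in Go:

```go
package main

import "fmt"

func main() {
	// 633447818234478180 needs more bits than the 53-bit significand of
	// a float64 offers, so converting it rounds to the nearest
	// representable value.
	const key int64 = 633447818234478180
	asDouble := float64(key)
	fmt.Println(int64(asDouble))        // 633447818234478208
	fmt.Println(int64(asDouble) == key) // false: precision was lost
}
```

Note that jq prints 633447818234478200 rather than ...208 because it emits the shortest decimal string that round-trips to the same double; the stored value is the same either way, and the original 180 ending is unrecoverable.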

Substring when inserting data from magnetic scanner to MS Access input field

I'm working on an MS Access application to store customer data.
All data are stored in a SQL database.
One of the input fields is used to store the ID number of a card with a magnetic strip.
Instead of typing the long number I purchased a USB magnetic scanner.
The scanner works, but after I scan a card it gives me the card number with unwanted characters at the front and back of the string, for example #1234567890123456789012345-1-1-1#.
How can I get rid of the additional characters, leaving only the 25 characters between the 2nd and 26th character?
You can use
strData = Mid(strData,2,25)
after reading the data.
Also, I would recommend creating a procedure for recognizing scanner input. Use the form's Form_KeyPress event and start buffering characters when the first received character is # until you receive the last one. After that you can set focus to the scanner input field and display only the required characters from the received string. This way you can scan the data regardless of the current focus and show the user only the meaningful characters. I can provide an example for a regular laser scanner with AIM service codes (3 service characters at the beginning).
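For readers outside the Access/VBA world, the same delimiter-stripping logic looks like this in Go; the function name and the assumption that the payload is always 25 characters after a leading '#' are taken from the example string in the question, not from any scanner specification:

```go
package main

import (
	"fmt"
	"strings"
)

// trimScan keeps only the 25-character card number between the scanner's
// leading '#' and the trailing "-1-1-1#" suffix, equivalent to the VBA
// Mid(strData, 2, 25) above.
func trimScan(raw string) string {
	raw = strings.TrimPrefix(raw, "#")
	if len(raw) < 25 {
		return raw // unexpected input: return what we have
	}
	return raw[:25]
}

func main() {
	fmt.Println(trimScan("#1234567890123456789012345-1-1-1#"))
	// 1234567890123456789012345
}
```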

Golang serialize/deserialize an Empty Array not as null

Is there a way to serialize an empty array attribute (not null) of a struct and deserialize it back to an empty array (not null again)?
Considering that an empty array is actually a pointer to null, is the perceptible initial difference between an empty array and pointer to null completely lost after serialize/deserialize?
The worst practical scenario is when I show an empty array attribute to my REST client as JSON, "att":[], at first, and then, after caching the record to Redis and recovering it, the same attribute is shown to my client as "att":null, breaking the contract and causing a lot of confusion.
Summing up: is it possible to show Customer 2's addresses as an empty JSON array after serialize/deserialize? => https://play.golang.org/p/TVwvTWDyHZ
I am pretty sure the easiest way you can do it is to change your line
var cust1_recovered Customer
to
cust1_recovered := Customer{Addresses: []Address{}}
Unless I am reading your question incorrectly, I believe this is your desired output:
ORIGINAL Customer 2 {
"Name": "Customer number 2",
"Addresses": []
}
RECOVERED Customer 2 {
"Name": "Customer number 2",
"Addresses": []
}
Here is a playground to verify with: https://play.golang.org/p/T9K1VSTAM0
The limitation here, as @mike pointed out, is that if Addresses is truly nil before you encode, once you decode you do not get the JSON equivalent null, but instead end up with an empty list.
No, it's not possible. To understand why, let's look at the Go spec. For it to output two different results for empty vs. nil, any serialization method would need to be able to tell the difference between the two. However, according to the Go spec,
Two array types are identical if they have identical element types and
the same array length.
Since neither contains any elements and have the same element type, the only difference could be in length, but it also states that
The length of a nil slice, map or channel is 0
So through comparison, it would be unable to tell. Of course, there are methods other than comparison, so to really put the nail in the coffin, here's the portion that shows they have the same underlying representation. The spec also guarantees that
A struct or array type has size zero if it contains no fields (or
elements, respectively) that have a size greater than zero.
so the actual allocated structure of a zero length array has to be of size zero. If it's of size zero, it can't store any information about whether it's empty or nil, so the object itself can't know either. In short, there is no difference between a nil array and a zero length array.
The "perceptible initial difference between an empty array and pointer to null" is not lost during serialization/deserialization, it's lost from the moment initial assignment is complete.
For another solution, we have forked encoding/json to add a new method called MarshalSafeCollections(). This method marshals slices/arrays/maps as their respective empty values ([]/{}). Since most of our instantiation happens in the data layer, we did not want to add code that fixes issues in our HTTP response layer. The changes to the library are minimal and track Go releases.

Find out the type of a Protobuf message (Google Chrome Sync)

I'm trying to connect to Google Chrome sync (that synchronizes your Chrome settings and your currently opened tabs).
For now I'm concentrating on the tab syncing. I connected to the Google Talk servers and I'm receiving messages from the tango bot whenever I navigate to a new webpage in Chrome.
But I have difficulties decoding those messages as they are encoded in Google's protobuf format, because there are tons of different protobuf classes dedicated to Chrome Sync and I think there's no way of figuring out the type of a binary protobuf message?
A typical message would look like this (base64 encoded, with my mail address XXXX'd out):
CAAilQEKQAoGCgQIAxACEiUKBgoECAMQARISCZwF6dZYmkeFEXZLABNN3/yMGgcIhSwQAxgBINP80ri/JyoIMTgxOTgxMjYaUQpPCgwI7AcSB1NFU1NJT04QARiw64/I0se0AiIyVzpDaGZDeU9JWUZXdXFuUmRXaGtJWk94VkRSM1lmTGU1M0FoRGVxT2EwOHVQUHcyOD0wASoGCgQIAxACMAI4AUIrCG8SJxAEGAIiFGRlbHXXXXXXXXdAZ21haWwuY29tQgl0YW5nb19yYXdIAQ==
I tried decoding it with some of the protobuf classes (that I compiled for Java), but with none of them I got any useful data.
Does anyone have more information on this topic? Some insight on how to find the right protobuf class for decoding a certain binary message would be great. It would even help me to some point to be able to decode that exact message I gave as an example above.
There is very little public documentation and the Chromium source code is really difficult to look through if you're not a C++ guy…
(I'm developing in Java, if that matters)
Yes, that is broadly possible; however, it cannot be done with the data you have posted because you have corrupted it irretrievably in your attempt to remove your email address. Protobuf is very sensitive to that; I tried replacing the XXXXXXXX with the base-64 for a 6-letter email address, but the byte immediately before that is 199, and 199 cannot be legal there (the data immediately before string contents is the length of the string encoded as a varint, and a varint can never end with the most-significant bit of the last byte set, because the MSB is a continuation flag).
If you have raw protobuf binary, you can try running it through protoc --decode_raw, and see what it says; that may give you enough to start reconstructing the layout. Alternatively, you can try parsing it manually with your preferred implementation's "reader" API (if it has one). For example, using protobuf-net and ProtoReader, I was able to piece together (the numbers in brackets are the offsets after reading each field-header):
{
(1) field 1: varint, value 0 if int
(3) field 4: string, looks like sub-message
// everything after this point is really really suspect
(6) field 1, string, looks like sub-message
(8) field 1, string, looks like sub-message
(16) field 2, string, looks like sub-message
(55) field 4, varint, 1357060030035 assuming int64
(62) field 5, string; "18198126"
(72) field 3, string, looks like sub-message
(64) field 1, string, looks like some encoded session data
(155) field 5: string, looks like sub-message
(157) field 1: string, looks like sub-message
(163) field 6: varint, value 2 if int
(165) field 7: varint, value 1 if int
(167) field 8: string, looks like sub-message
(169) field 1: varint, value 111 if int
(171) field 2: string, looks like sub-message
}
The problem is that due to the corruption (because of your replacement), it is impossible to say much beyond that field 4; past that point, everything could be complete gibberish because the lengths are off, so I have very little confidence in it. The main point of the above is simply to illustrate: yes, you can parse protobuf data without knowing the schema in advance, to reverse engineer a schema - but it requires:
patience and a little guesswork to interpret each field (each wire-type can mean multiple things)
if you know what values are being stored, without necessarily knowing how each maps to a field, then you have a head start; for example, if you know you are being sent something with the values 22, 1325, "hello world", and 123.45F, then you should be able to figure out the mapping easily enough
intact data (which is sadly missing in this case)