I used Laravel to save JSON into the database in the following format:
[{"s":"0.000","e":"","t":"\u672c\u65e5\u306f\u3054\u6765\u5854\u304f\u3060\u3055\u3044\u307e\u3057\u3066"},{"s":"0.001","e":"28.000","t":""},{"s":"0.002","e":"29.000","t":"\u3069\u3046\u305e\u6771\u4eac\u306e\u4eca\u3092\u3054\u3086\u3063\u304f\u308a\u304a\u697d\u3057\u307f\u304f\u3060\u3055\u3044\u307e\u305b\u30022"}]
The Japanese text becomes "\u3069\u3046\u305e\u...".
Can you tell me how to convert it back to Japanese?
"\u672c\u65e5\u306f\u3054\u6765\u5854\u304f\u3060\u3055\u3044\u307e\u3057\u3066" represents the same string as "本日はご来塔くださいまして". It just depends on the JSON stringifier whether or not it escapes certain characters. Since you're using laravel, I'm assuming you are generating the JSON in PHP. If you are using json_encode you can use the JSON_UNESCAPED_UNICODE option to get JSON with the Japanese UTF-8 characters instead of the escape sequences.
Either way when you parse the JSON you will get the same string. When you display that sting make sure you interpret it as UTF-8.
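As a quick illustration of that point (using Jackson in Java rather than PHP, purely because any compliant parser behaves the same way), the escaped and the unescaped form decode to the identical string:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class EscapeDemo {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // Same value written two ways: with \uXXXX escapes and with raw UTF-8.
        JsonNode escaped = mapper.readTree("{\"t\":\"\\u672c\\u65e5\"}");
        JsonNode literal = mapper.readTree("{\"t\":\"本日\"}");

        // Prints "true 本日" - the parser produces identical strings.
        System.out.println(escaped.get("t").asText().equals(literal.get("t").asText())
                + " " + escaped.get("t").asText());
    }
}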
In one of my web requests, I get the following response body data:

{
"JobId":1528,
"CaseId":61687,
"CaseName":"CaseName_3923",
"FirmId":4175,
"FirmName":"FirmName7922442",
"CaseFirmName":"CaseFirmName7922442",
"LastUpdatedDate":"0001-01-01T00:00:00Z"
}
I need to use this whole response body in the next web request, and for that I want to remove the initial stray characters.
Is there any way or setting in JMeter by which I can remove these characters? In fact I tried a JSON Extractor with the settings below, but it is not working, so I assume the initial characters are preventing the JobId value from being assigned to the variable vJobId.
JSON Extractor:
Apply to: Main sample only
Name of created variable: vJobId
JSON Path expression: $.JobID
Match No.: 1
The strange characters at the beginning of your JSON structure are a wrongly encoded BOM (Byte Order Mark). It seems that you received UTF-8 data that is being shown as an ISO-8859-1 encoded string.
So the first thing to do would be to find the place where the encoding goes wrong and correct that. If that is not an option, you could try to re-encode the data back to UTF-8 yourself by using a JSR223 PostProcessor before your JSON Extractor with the following Groovy code:
// Re-encode the mis-decoded response body and store it for the JSON Extractor
vars.put("correctedResult",
    new String(prev.responseDataAsString.getBytes("ISO-8859-1"), "UTF-8"));
This post-processor converts the wrongly encoded string back to UTF-8 and stores the result in the JMeter variable correctedResult. In the JSON Extractor, set "JMeter Variable Name to use" to correctedResult so that it works on the newly encoded value instead of the original data.
But clearly, finding the reason for the wrong encoding is the better way.
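If it helps to see why those characters look the way they do, here is a small standalone Java illustration (not JMeter code) of a UTF-8 BOM being decoded as ISO-8859-1 and then repaired the same way the Groovy snippet above does:

import java.nio.charset.StandardCharsets;

public class BomDemo {
    public static void main(String[] args) {
        // The three bytes of a UTF-8 BOM, as they arrive before the JSON body.
        byte[] bomBytes = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF};

        // Decoded as ISO-8859-1 they appear as three stray characters.
        String wrong = new String(bomBytes, StandardCharsets.ISO_8859_1);
        System.out.println(wrong + "{\"JobId\":1528}"); // ï»¿{"JobId":1528}

        // Re-encoding to ISO-8859-1 bytes and decoding as UTF-8 collapses them
        // back into the single, invisible U+FEFF character.
        String repaired = new String(wrong.getBytes(StandardCharsets.ISO_8859_1),
                StandardCharsets.UTF_8);
        System.out.println((int) repaired.charAt(0)); // 65279 == 0xFEFF
    }
}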
You can use a Regular Expression Extractor instead:
Use Regular Expression:
JobId":(\d+)
Match No. 1
It will match the first JobId number in the response.
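As a rough illustration of what that extractor effectively does (plain java.util.regex here, outside JMeter, with the response body shortened):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JobIdRegexDemo {
    public static void main(String[] args) {
        String body = "{\"JobId\":1528,\"CaseId\":61687,\"CaseName\":\"CaseName_3923\"}";

        // Same pattern as in the Regular Expression Extractor.
        Matcher m = Pattern.compile("JobId\":(\\d+)").matcher(body);
        if (m.find()) {
            System.out.println(m.group(1)); // prints 1528, i.e. the vJobId value
        }
    }
}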
In my JSON file, one of the fields has to carry the content of another file (a string).
The string has CRLFs, single/double quotes, tabs.
Is there a way to consider my whole string as a raw string so I don't have to escape special characters?
Is there an equivalent in JSON to the string raw delimiter in C++?
In C++, I would just put the whole file content inside R"( ... )".
Put simply, no, there isn't. Depending on what parser you use, it may have a feature that allows this, and/or there may be a variant of JSON that allows it (examples of variants include JSONP and JSON-C, though I'm not aware of one that specifically supports the features you are looking for), but the JSON standard that is ubiquitous on the web does not support multiline strings or unescaped special characters.
A workaround for the lack of raw string support in JSON is to Base64 encode your string before adding it to your JSON.
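A minimal sketch of that workaround (shown here in Java with java.util.Base64; the "content" field name is made up for the example):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class RawStringInJson {
    public static void main(String[] args) {
        String raw = "line one\r\nline two with \"quotes\", 'apostrophes' and \ttabs";

        // Encode before placing the value in the JSON document.
        String encoded = Base64.getEncoder()
                .encodeToString(raw.getBytes(StandardCharsets.UTF_8));
        String json = "{\"content\":\"" + encoded + "\"}";
        System.out.println(json); // no escaping needed inside the Base64 value

        // Decode after parsing the JSON on the consuming side.
        String decoded = new String(Base64.getDecoder().decode(encoded),
                StandardCharsets.UTF_8);
        System.out.println(decoded.equals(raw)); // true
    }
}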
I am using Delphi XE7 and I am having trouble converting objects into JSON. I can get some objects to give back what I think is proper JSON, e.g. TTestObject:
{"Test":{"Field":"TestField","Operation":"TestOperation","values":
["Value1","Value2","Value3","Value4"]}}
JObj := TJSONObject.Create;
JObj.AddPair('Test', ATestObject.JSONObj);
memo1.Lines.Add(JObj.ToJSON);
JObj.Free;
However, when I try to get JSON back from my objects that have properties that are objects as well, I get JSON with \ characters.
{"Exceptions":{"TestObject1":"
{\"Mode\":\"0\",\"Value\":\"100.50\",\"Days\":\"10\"}","TestObject2":"
{\"Mode\":\"0\",\"Days\":\"0\",\"UnitsSold\":\"
...
What is causing this?
The JSON is perfectly valid. Your nested objects are being serialized to JSON text and then embedded as string values. Those strings contain double quote characters, and since double quotes are reserved as string delimiters they need to be escaped. Hence the backslash escape characters.
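If it helps to see the effect outside Delphi, here is a small sketch (using Jackson in Java, with the property names borrowed from the question) contrasting a nested object added as a pre-serialized string with one added as an actual object:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class NestedJsonDemo {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // Nested object serialized first, then stored as a plain string value.
        ObjectNode asString = mapper.createObjectNode();
        asString.put("TestObject1", "{\"Mode\":\"0\",\"Value\":\"100.50\"}");
        System.out.println(mapper.writeValueAsString(asString));
        // {"TestObject1":"{\"Mode\":\"0\",\"Value\":\"100.50\"}"}

        // Nested object attached as an object node: no backslashes appear.
        ObjectNode asObject = mapper.createObjectNode();
        asObject.putObject("TestObject1").put("Mode", "0").put("Value", "100.50");
        System.out.println(mapper.writeValueAsString(asObject));
        // {"TestObject1":{"Mode":"0","Value":"100.50"}}
    }
}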
I use Jackson to parse JSON data. Now I have a problem handling a \uXXXX issue.
The data I get is like:
{"UID":"here_\ud83d\udc3b"}
After I use ObjectMapper.readValue(jsonContent, UserId.class); to convert the JSON to an instance of UserId, the UID property is not literally "here_\ud83d\udc3b". Jackson converts \ud83d\udc3b into two chars holding the Unicode value.
My question is: is it possible to let Jackson skip this "Unicode transformation" and keep the literal value "\ud83d\udc3b" as it is?
No. JSON parsers are required to handle Unicode escapes to produce underlying Unicode characters.
When writing, on the other hand, some characters may also be encoded using similar Unicode escapes.
So if you need to use escaping, you need to re-encode such values yourself.
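As a rough sketch of that re-encoding step (assuming Jackson's JsonGenerator.Feature.ESCAPE_NON_ASCII is available in your version; the UID field is taken from the question):

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Map;

public class EscapeOnWriteDemo {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        mapper.getFactory().enable(JsonGenerator.Feature.ESCAPE_NON_ASCII);

        // Reading always produces the real character (the bear emoji here).
        Map<?, ?> parsed = mapper.readValue("{\"UID\":\"here_\\ud83d\\udc3b\"}", Map.class);
        System.out.println(parsed.get("UID")); // here_🐻

        // Writing it back re-escapes everything outside ASCII.
        System.out.println(mapper.writeValueAsString(parsed));
        // {"UID":"here_\uD83D\uDC3B"} (hex digit case may differ)
    }
}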
I have a WCF application written in C# that delivers my data in JSON or XML, depending on what the user asks for in the query string. Here is a quick snippet of the code that delivers the data:
// Passing false tells UTF8Encoding not to emit a byte order mark (BOM)
Encoding utf8 = new System.Text.UTF8Encoding(false);
return WebOperationContext.Current.CreateTextResponse(data, "application/json", utf8);
When I deliver the data using the above method, the special characters are all messed up, so Chávez looks like Chávez. On the other hand, if I create the utf8 variable above with the BOM, or use Encoding.UTF8, the special characters come through fine. But then some of my consumers complain that their code throws exceptions when consuming my API. This, of course, is happening because of the BOM in the feed. Is there a way for me to get the special characters displayed correctly without the BOM in the feed?
It looks like the output is correct, but whatever you are using to display it expects ANSI-encoded text. Chávez is what you get when you encode Chávez in UTF-8 and interpret the resulting bytes as if they were Latin-1 (ISO-8859-1).
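A quick illustration of that round trip (plain Java rather than the C# code, just to show the byte-level effect):

import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    public static void main(String[] args) {
        byte[] utf8Bytes = "Chávez".getBytes(StandardCharsets.UTF_8);

        // Correct: decode the bytes as UTF-8.
        System.out.println(new String(utf8Bytes, StandardCharsets.UTF_8)); // Chávez

        // Wrong: decode the same bytes as ISO-8859-1 (what the display did).
        System.out.println(new String(utf8Bytes, StandardCharsets.ISO_8859_1)); // Chávez
    }
}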