Escaping symbols in a JSON array using AngularJS

I'm passing some data through a JSON array to an HTML table. My problem is with the encoding of some French letters like é, è, and à.
My JSON array looks like this:
paniersFromJsonFile = [
    {
        "reference": "62010",
        "LibelleDeLaPiece": "BOUCLIER",
        "origine": "Pièce d'origine",
        "distributeur": "datasier"
    },
    {
        "reference": "60100",
        "LibelleDeLaPiece": "Grille",
        "origine": "Pièce d'origine",
        "distributeur": "mvc"
    }
]
On screen the accented characters display incorrectly (screenshot omitted).
Thanks

Yes, you need to save the file as UTF-8 (as gautam said) without a BOM (Byte Order Mark at the beginning of the file); Notepad++ offers that option.
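If you prefer to fix the file programmatically rather than in an editor, here is a minimal Python sketch that re-encodes a file as UTF-8 without a BOM; the filename data.json is just a placeholder:

import io

# "utf-8-sig" transparently strips a leading BOM if one is present.
with open("data.json", "r", encoding="utf-8-sig") as f:
    text = f.read()

# Plain "utf-8" writes the content back without a BOM.
with open("data.json", "w", encoding="utf-8") as f:
    f.write(text)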

Related

Save JSON with special characters (currency symbols)

I'm using Scrapy to extract some values from a website. One of the fields is "price" and I have something like:
...
item = {
    "title": title,
    "url": url,
    "price": {
        "symbol": the_currency_symbol,
        "amount": float_number
    }
}
yield item
I set the output to be a JSON file and I yield a dictionary item.
The problem is that when I open the output JSON file, I see this:
{
    "title": title,
    "url": url,
    "price": {
        "symbol": "\u00a3",
        "amount": 12.99
    }
}
How can I see the correct currency symbol in the JSON file?
Scrapy normally produces JSON feeds in ASCII encoding. Your JSON file has the correct data; to see the currency symbol properly, you can convert the output JSON file to UTF-8 encoding.
You can make Scrapy generate JSON in UTF-8 encoding by setting FEED_EXPORT_ENCODING="utf-8". For more help, see the answers to the question Scrapy json response convert in utf-8 encode and the Scrapy documentation: https://docs.scrapy.org/en/latest/topics/feed-exports.html#std-setting-FEED_EXPORT_ENCODING.
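In a standard Scrapy project this is a one-line addition to settings.py:

# settings.py -- make feed exports use UTF-8 instead of ASCII \uXXXX escapes
FEED_EXPORT_ENCODING = "utf-8"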
If you do not want to run the scraper again, you can use a tool like jq (https://stedolan.github.io/jq/) on your JSON file: jq . file.json > outfile.json (note the . filter). Then outfile.json will have the proper symbols.

JSON file to Redshift using JSONPATHS file - Invalid jsonpath format

I'm trying to load a JSON file from S3 into Redshift using COPY with a JSONPaths file. The file contains multiple records.
Loading the entire set in one go throws an error:
Invalid operation: Invalid JSONPath format. Supported notations are 'dot-notation' and 'bracket-notation'
The JSONPaths file:
{"jsonpaths":
[
"$.item[:].col1",
"$.item[:].col2",
"$.item[:].col3"
]
}
Sample file:
{"item":
    [
        {
            "col1": "A",
            "col2": "b",
            "col3": "d"
        },
        {
            "col1": "123",
            "col2": "red",
            "col3": "456"
        }
    ]
}
Working file:
{"jsonpaths":
    [
        "$.item[0].col1",
        "$.item[0].col2",
        "$.item[0].col3"
    ]
}
What am I doing wrong to cause this error?
As per the documentation, there are two ways of specifying JSONPaths: dot notation and bracket notation.
In this example, the user has used dot notation, but the arrays have been indexed using a colon (:). The correct way to index JSON array elements is with a number, which is why the second JSONPaths file works.
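For reference, the same paths can also be written in bracket notation, which Redshift accepts per the documentation; a sketch of the equivalent file:

{"jsonpaths":
    [
        "$['item'][0]['col1']",
        "$['item'][0]['col2']",
        "$['item'][0]['col3']"
    ]
}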

NiFi expression language dealing with special characters in JSON keys

So I have some JSON in which the keys might look like this:
{
    "name": "John",
    "num:itparams:enterprise:2.0:content": {
        "housing": "5"
    },
    "num rooms": "12"
}
I get this JSON from an HTTP request, and I need to use the EvaluateJsonPath processor to create attributes from the values.
name is easy; I just use $.name.
But how would I access the other two? I imagine you would put them in quotes somehow to escape the special characters, but just doing $."num:itparams:enterprise:2.0:content" doesn't work.
You can use bracket notation for keys that contain special characters, such as
$['num:itparams:enterprise:2.0:content'].housing
which will give you the evaluated result 5.
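Outside NiFi, the same lookup is plain dictionary access; a quick Python illustration, assuming the raw string holds the JSON above:

import json

raw = '{"name": "John", "num:itparams:enterprise:2.0:content": {"housing": "5"}, "num rooms": "12"}'
doc = json.loads(raw)

# Keys containing colons or spaces are ordinary dictionary keys in Python.
print(doc["num:itparams:enterprise:2.0:content"]["housing"])  # -> 5
print(doc["num rooms"])                                       # -> 12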

How to read invalid JSON format from Amazon Firehose

I've got a horrible scenario in which I want to read the files that Kinesis Firehose creates on our S3.
Kinesis Firehose creates files that don't put every JSON object on a new line, but simply concatenate the JSON objects in one file:
{"param1":"value1","param2":numericvalue2,"param3":"nested {bracket}"}{"param1":"value1","param2":numericvalue2,"param3":"nested {bracket}"}{"param1":"value1","param2":numericvalue2,"param3":"nested {bracket}"}
This scenario is not supported by a normal JSON.parse, and I have tried working with the following regex: .scan(/({((\".?\":.?)*?)})/)
But the scan only seems to work in scenarios without nested brackets.
Does anybody know a working/better/more elegant way to solve this problem?
The regex in the initial answer is for unquoted JSON, which happens sometimes. This one:
({((\\?\".*?\\?\")*?)})
works for quoted and unquoted JSON.
Besides this, I improved it a bit to keep it simpler, since you can have integer and normal values; anything within string literals will be ignored thanks to the double capturing group.
https://regex101.com/r/kPSc0i/1
Modify the input to be one large JSON array, then parse that:
require "json"

input = File.read("input.json")
json = "[#{input.rstrip.gsub(/\}\s*\{/, '},{')}]"
data = JSON.parse(json)
You might want to combine the first two to save some memory:
json = "[#{File.read('input.json').rstrip.gsub(/\}\s*\{/, '},{')}]"
data = JSON.parse(json)
This assumes that } followed by some whitespace followed by { never occurs inside a key or value in your JSON encoded data.
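If you'd rather avoid the string rewrite and that assumption entirely, here is a minimal Python sketch using the standard library's json.JSONDecoder.raw_decode, which consumes one object at a time and is not confused by braces inside string values:

import json

def parse_concatenated(text):
    """Parse a stream of concatenated JSON objects with no delimiters."""
    decoder = json.JSONDecoder()
    objects = []
    idx = 0
    while idx < len(text):
        # Skip any whitespace between objects.
        while idx < len(text) and text[idx].isspace():
            idx += 1
        if idx == len(text):
            break
        # raw_decode returns the parsed object and the index where it ended.
        obj, idx = decoder.raw_decode(text, idx)
        objects.append(obj)
    return objects

# Usage: parse_concatenated(open("input.json").read())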
As you concluded in your most recent comment, put_record_batch in Firehose requires you to manually add delimiters to your records so they can be easily parsed by the consumers. You can add a newline, or some special character that is used solely for parsing, % for example, which should never appear in your payload.
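A minimal sketch of that batch call with newline delimiters; the stream name is a placeholder, and note that PutRecordBatch accepts at most 500 records per call:

import json
import boto3

def send_batch_to_firehose(records, stream_name):
    firehose_client = boto3.client('firehose')
    # Append a newline to each record so consumers can split on it.
    entries = [{'Data': json.dumps(record) + '\n'} for record in records]
    firehose_client.put_record_batch(
        DeliveryStreamName=stream_name,  # placeholder for your stream name
        Records=entries
    )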
Another option would be sending record by record. This is only viable if your use case does not require high throughput. For that, you may loop over every record and load it as a stringified data blob. If done in Python, we would have a list records holding all our JSON objects.
import json
import boto3

def send_to_firehose(records):
    firehose_client = boto3.client('firehose')
    for record in records:
        # Serialize each record individually before sending.
        data = json.dumps(record)
        firehose_client.put_record(
            DeliveryStreamName=<your stream>,  # placeholder for your stream name
            Record={'Data': data}
        )
Firehose by default buffers the data before sending it to your bucket, so you should end up with something like the following, which is easy to parse and load into memory in your preferred data structure.
[
    {
        "metadata": {
            "schema_id": "4096"
        },
        "payload": {
            "zaza": 12,
            "price": 20,
            "message": "Testing sending the data in message attribute",
            "source": "coming routing to firehose"
        }
    },
    {
        "metadata": {
            "schema_id": "4096"
        },
        "payload": {
            "zaza": 12,
            "price": 20,
            "message": "Testing sending the data in message attribute",
            "source": "coming routing to firehose"
        }
    }
]

JSON escape characters with backslash and forward slash

Part of my JSON is coming out with a backslash before each forward slash after serialization.
My question is, is this a valid encoding? I'm having issues with the API too for some reason, so I'm trying to see where the problem is.
"_links": {
"altAssetUrl": {
"href": "\/publication\/d40a4e4c-d6a3-45ae-98b3-924b31d8712a\/altasset\/48baad57-81a5-4d32-a2a1-e52c5cbe964d\/"
},
"contentUrl": {
"href": "\/publication\/d40a4e4c-d6a3-45ae-98b3-924b31d8712a\/article\/test\/contents;contentVersion=1521071354969\/"
}
},
In another area I noticed special characters %2F etc.
"socialShareUrl": "https:\/\/example.com\/ssp?entityRef=%2Fpublication%2Fd40a4e4c-d6a3-45ae-98b3-924b31d8712a%2Farticle%2Ftest",
Please advise on what I can do to fix this escaping of slashes; I'm using .NET.
My question is, is this a valid encoding?
Yes.
var json = '"This has a slash\\/"';
console.log("Raw JSON: " + json);
var str = JSON.parse(json);
console.log("String result of parsing JSON: " + str);
In another area I noticed special characters %2F etc.
Perfectly normal URL encoding.
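Both points can be verified with a short Python check (a quick sketch):

import json
from urllib.parse import unquote

# "\/" is a valid JSON escape for a forward slash.
print(json.loads('"\\/"'))  # prints: /

# %2F is the percent-encoded form of "/" in URLs.
print(unquote("%2F"))       # prints: /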