I tried map and flat_map to restructure a JSON file with Ruby (here is a sample of it), from:
a = {
  "_nb": {
    "$nb": "55dd0"
  },
  "conf": "linux"
}
to
{
  "_nb": "55dd0",
  "conf": "linux"
}
or
{
  "$nb": "55dd0",
  "conf": "linux"
}
Could anyone point me in the right direction, please?
NOTE:
So far I have implemented this solution, which returns a NoMethodError. Hope this will help you both, user1934428 & Cary Swoveland.
File.open("src.json", "w") do |f|
hash_data = JSON.parse(File.read(a))
hash = hash_data.to_s
hash.each do |key, value|
if key == "_id"
hash[value] = value.values.first
end
end
f.puts(hash)
end
A first step might be:
hash = {
  "_nb": {
    "$nb": "55dd0"
  },
  "conf": "linux"
}
hash.transform_values { |value| value.is_a?(Hash) ? value.values.first : value }
#=> {:_nb=>"55dd0", :conf=>"linux"}
Note: This only works when the nesting is not deeper than one level and when each nested hash has only one key/value pair.
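If your real data nests deeper than one level, a recursive variant along these lines might work (just a sketch, assuming every nested hash should collapse to its first value):
def collapse(hash)
  hash.transform_values do |value|
    value.is_a?(Hash) ? collapse(value).values.first : value
  end
end

collapse(hash)
#=> {:_nb=>"55dd0", :conf=>"linux"}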
So I have a JSON:
{
  "code": "Q0934X",
  "name": "PIDBA",
  "longlat": "POINT(23.0 33.0)",
  "altitude": 33
}
And I want to change the key code to Identifier.
The desired output is this:
{
  "Identifier": "Q0934X",
  "name": "PIDBA",
  "longlat": "POINT(23.0 33.0)",
  "altitude": 33
}
How can I do this in the shortest way? Thanks
It appears that both the JSON you have and your desired result are JSON strings. If the one you have is json_str you can write:
json = JSON.parse(json_str).tap { |h| h["Identifier"] = h.delete("code") }.to_json
puts json
#=> {"name":"PIDBA","longlat":"POINT(23.0 33.0)","altitude":33,"Identifier":"Q0934X"}
Note that Hash#delete returns the value of the key being removed.
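For example, in irb:
h = { "code" => "Q0934X", "name" => "PIDBA" }
h.delete("code") #=> "Q0934X"
h                #=> {"name"=>"PIDBA"}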
Perhaps transform_keys is an option.
The following seems to work for me (ruby 2.6):
json = JSON.parse(json_str).transform_keys { |k| k === 'code' ? 'Identifier' : k }.to_json
But this may work for Ruby 3.0 onwards (if I've understood the docs):
json = JSON.parse(json_str).transform_keys({ 'code' => 'Identifier' }).to_json
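On a Ruby where that form is supported, a quick sanity check with literal hashes might look like this (string keys, since JSON.parse returns string keys by default):
{ "code" => "Q0934X", "name" => "PIDBA" }.transform_keys({ "code" => "Identifier" })
#=> {"Identifier"=>"Q0934X", "name"=>"PIDBA"}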
I have a big .json file with geodata. Below is a piece of this file; it has a repeating structure.
I want to save the info about "id" and "area_value" and remove or replace the other data,
with this structure:
{'Number':['id'],'Area sq.m.': ['area_value'],'Forest cov':'None','Status':'None'}
What method would be optimal for solving this problem?
Thanks!
{
  "type":"FeatureCollection",
  "crs":{
    "type":"name",
    "properties":{
      "name":"EPSG:4326"
    }
  },
  "features":[
    {
      "type":"Feature",
      "properties":{
        "date_create":"15.03.2008",
        "statecd":"06",
        "cc_date_approval":null,
        "children":null,
        "adate":"23.08.2017",
        "cc_date_entering":"01.01.2014",
        "rifr_cnt":null,
        "parcel_build_attrs":null,
        "rifr":null,
        "sale_date":null,
        "area_unit":"055",
        "util_code":null,
        "util_by_doc":null,
        "area_value":115558.0,
        "application_date":null,
        "sale":null,
        "cad_unit":"383",
        "kvartal":"69:3:11",
        "parent_id":"69:3:11:248",
        "sale_cnt":null,
        "sale_doc_date":null,
        "date_cost":null,
        "category_type":"003008000000",
        "rifr_dep":null,
        "kvartal_cn":"69:03:0000011",
        "parent_cn":"69:03:0000011:248",
        "cn":"69:03:0000011:245",
        "is_big":false,
        "rifr_dep_info":null,
        "sale_dep":null,
        "sale_dep_uo":null,
        "parcel_build":false,
        "id":"69:3:11:245",
        "address":"\u0422\u0432\u0435\u0440\u0441\u043a\u0430\u044f \u043e\u0431\u043b\u0430\u0441\u0442\u044c, \u0440-\u043d. \u0411\u0435\u043b\u044c\u0441\u043a\u0438\u0439, \u0441/\u043f. \u0415\u0433\u043e\u0440\u044c\u0435\u0432\u0441\u043a\u043e\u0435, \u0434. \u041e\u0441\u0438\u043f\u043e\u0432\u043e",
        "area_type":"009",
        "parcel_type":"parcel",
        "sale_doc_num":null,
        "sale_doc_type":null,
        "sale_price":null,
        "cad_cost":139698.06,
        "fp":null,
        "center":{
          "x":33.14727379331379,
          "y":55.87764081906541
        }
      }
    }
You can try something along the following lines:
import json
# some JSON:
x = '{ "name":"John", "age":30, "city":"New York"}'
# parse x:
y = json.loads(x)
# the result is a Python dictionary:
print(y["age"])
I'm trying to do this using xmltodict.unparse. I have this structure in JSON:
"organisations": [
{"organisation": "org1"},
{"organisation": "org2"},
{"organisation": "org3"}
]
But it comes out like this in XML:
<organisations><organisation>org1</organisation></organisations>
<organisations><organisation>org2</organisation></organisations>
<organisations><organisation>org2</organisation></organisations>
I wanted like this:
<organisations>
  <organisation>org1</organisation>
  <organisation>org2</organisation>
  <organisation>org2</organisation>
</organisations>
I'm using xmltodict.unparse:
def dict_to_xml(d, pretty_print=False, indent=DEFAULT_INDENT, document_root="root"):
    if len(d.keys()) != 1:
        d = {
            document_root: d
        }
    res = xmltodict.unparse(d, indent=indent, short_empty_elements=True)
    if pretty_print:
        res = pretty_print_xml(res).strip()
    return res
Does anyone know what to do without hacking xmltodict?
Thanks
I don't know much about XML, but I got curious about this question and noticed:
Lists that are specified under a key in a dictionary use the key as a tag for each item.
https://github.com/martinblech/xmltodict#roundtripping
My approach was to reverse engineer the result you're after:
import json
import xmltodict

expected = '''
<organisations>
  <organisation>org1</organisation>
  <organisation>org2</organisation>
  <organisation>org2</organisation>
</organisations>
'''
print(json.dumps(xmltodict.parse(expected), indent=4))
output:
{
    "organisations": {
        "organisation": [
            "org1",
            "org2",
            "org2"
        ]
    }
}
And "round tripping" that, gives the result you're after:
reverse = {
    "organisations": {
        "organisation": [
            "org1",
            "org2",
            "org2"
        ]
    }
}
print(xmltodict.unparse(reverse, pretty=True))
output:
<?xml version="1.0" encoding="utf-8"?>
<organisations>
	<organisation>org1</organisation>
	<organisation>org2</organisation>
	<organisation>org2</organisation>
</organisations>
HTH!
I am trying to convert a JSON file which contains nested objects into a flat structure.
Below is the JSON file:
{
  "localbusiness":{
    "name": "toto",
    "phone": "+11234567890"
  },
  "date":"05/02/2016",
  "time":"5:00pm",
  "count":"4",
  "userInfo":{
    "name": "John Doe",
    "phone": "+10987654321",
    "email":"john.doe@unknown.com",
    "userId":"user1234333"
  }
}
My goal is to save this in a database such as MongoId. I would like to use map to get something like:
localbusiness_name => "toto",
localbusiness_phone => "+11234567890",
date => "05/02/2016",
...
userInfo_name => "John Doe"
...
I have tried map but it's not splitting the nested localbusiness or userInfo hashes.
def format_entry
  ps = @params.map do |h|
    ps.merge!(h)
    @@logger.info("entry #{h}")
  end
  @@logger.info("formatting the data #{ps}")
  ps
end
I do not really know how to parse each entry and rebuild the key names.
It looks to me like you are trying to "flatten" the inner hashes into one big hash. "Flatten" isn't quite the right word, because you want to prepend the outer hash's key to each sub-hash's key. This requires looping through the hash, and then looping again through each sub-hash. The code example below only works if the nesting is one layer deep; if you have multiple layers, I would suggest making two methods, or a recursive method (a sketch of one appears after this answer's code).
@business = { # This is a hash, not a JSON blob, but you can take JSON and call JSON.parse(blob) to turn it into a hash.
  "localbusiness":{
    "name": "toto",
    "phone": "+11234567890"
  },
  "date":"05/02/2016",
  "time":"5:00pm",
  "count":"4",
  "userInfo":{
    "name": "John Doe",
    "phone": "+10987654321",
    "email":"john.doe@unknown.com",
    "userId":"user1234333"
  }
}
@squashed_business = Hash.new
@business.each do |k, v|
  if v.is_a? Hash
    v.each do |key, value|
      @squashed_business.merge!((k.to_s + "_" + key.to_s) => value)
    end
  else
    @squashed_business.merge!(k => v)
  end
end
I noticed that you are getting "unexpected" outcomes when enumerating over a hash @params.each { |h| ... } because it gives you both a key and a value. Instead you want to do @params.each { |key, value| ... } as I did in the above code example.
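For reference, a recursive variant could look something like this (just a sketch; it stringifies every key and joins nested keys with "_"):
def flatten_hash(hash, prefix = nil)
  hash.each_with_object({}) do |(k, v), out|
    key = prefix ? "#{prefix}_#{k}" : k.to_s
    if v.is_a?(Hash)
      out.merge!(flatten_hash(v, key))
    else
      out[key] = v
    end
  end
end

flatten_hash(@business)
#=> {"localbusiness_name"=>"toto", "localbusiness_phone"=>"+11234567890", "date"=>"05/02/2016", ...}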
I'm a beginner with Ruby and am learning to iterate over and parse JSON files. These are the contents of input.json:
[
  {
    "scheme": "http",
    "domain_name": "www.example.com",
    "path": "path/to/file",
    "fragment": "header2"
  },
  {
    "scheme": "http",
    "domain_name": "www.example2.org",
    "disabled": true
  },
  {
    "scheme": "https",
    "domain_name": "www.stack.org",
    "path": "some/path",
    "query": {
      "key1": "val1",
      "key2": "val2"
    }
  }
]
How do I parse it and print the output as:
http://www.example.com/path/to/file#header2
https://www.stack.org/some/path?key1=val1&key2=val2
Any learning references would be very helpful.
Hopefully this code is self-explanatory:
require 'uri'
require 'json'

entries = JSON.parse(File.read('input.json'))
entries.reject { |entry| entry["disabled"] }.each do |entry|
  puts URI::Generic.build({
    :scheme => entry["scheme"],
    :host => entry["domain_name"],
    :fragment => entry["fragment"],
    :query => entry["query"] && URI.encode_www_form(entry["query"]),
    :path => entry["path"] && ("/" + entry["path"])
  }).to_s
end
# Output:
# http://www.example.com/path/to/file#header2
# https://www.stack.org/some/path?key1=val1&key2=val2
The first step is to turn this JSON into Ruby data (DATA here reads whatever follows __END__ in the script; you could equally pass File.read('input.json')):
require 'json'
data = JSON.load(DATA)
Then you need to iterate over this and reject all those that are flagged as disabled:
data.reject do |entry|
  entry['disabled']
end
You can chain this together with an operation that leverages the URI library to build your output:
require 'uri'
uris = data.reject do |entry|
  entry['disabled']
end.map do |entry|
  case (entry['scheme'])
  when 'https'
    URI::HTTPS
  else
    URI::HTTP
  end.build(
    host: entry['domain_name'],
    path: normalized_path(entry['path']),
    query: entry['query'] && URI.encode_www_form(entry['query']),
    fragment: entry['fragment']
  ).to_s
end
#=> ["http://www.example.com/path/to/file#header2", "https://www.stack.org/some/path?key1=val1&key2=val2"]
This requires a function called normalized_path to deal with nil or invalid paths and fix them:
def normalized_path(path)
  case (path and path[0,1])
  when nil
    '/'
  when '/'
    path
  else
    "/#{path}"
  end
end
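Just to illustrate how it behaves, a few sample calls:
normalized_path(nil)          #=> "/"
normalized_path("some/path")  #=> "/some/path"
normalized_path("/absolute")  #=> "/absolute"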