I need to convert a JSON response into a well-formatted string, more like a table.
Example,
[{id: 1, name: "Panda", description: ""}, {id: 2, name: "koala", description: ""},...]
To be converted as,
Id | Name       | Description
1  | Panda      |
2  | Koala test |
I also need the text to wrap automatically inside the cell width.
Any help would be appreciated.
Thank you.
First you need to parse the JSON result (symbolize_names gives you symbol keys, matching the lookups below).
require 'json'
data = JSON.parse(result, symbolize_names: true)
Then use that parsed result to generate the output:
data = [
  { id: 1, name: 'Panda', description: '' },
  { id: 2, name: 'koala', description: '' }
]
puts 'Id | Name | Description'
data.each do |entry|
  puts "#{entry[:id]} | #{entry[:name]} | #{entry[:description]}"
end
Hope this helps!
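One thing the snippet above does not cover is the asker's wrapping requirement. As a rough, language-agnostic sketch of that part (shown here in Python with textwrap; the column widths and sample description are assumptions for illustration, not a drop-in for the Ruby above):
import json
import textwrap

widths = {"id": 4, "name": 12, "description": 12}  # assumed cell widths

def rows_for(record):
    # Wrap each cell to its column width, padding shorter cells with blank lines.
    wrapped = {k: (textwrap.wrap(str(record.get(k, "")), w) or [""]) for k, w in widths.items()}
    height = max(len(v) for v in wrapped.values())
    for i in range(height):
        yield " | ".join((wrapped[k][i] if i < len(wrapped[k]) else "").ljust(w)
                         for k, w in widths.items())

data = json.loads('[{"id": 1, "name": "Panda", "description": "likes bamboo and long naps"}]')
print(" | ".join(k.capitalize().ljust(w) for k, w in widths.items()))
for record in data:
    for line in rows_for(record):
        print(line)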
I have this json data:
consumption_json = """
{
"count": 48,
"next": null,
"previous": null,
"results": [
{
"consumption": 0.063,
"interval_start": "2018-05-19T00:30:00+0100",
"interval_end": "2018-05-19T01:00:00+0100"
},
{
"consumption": 0.071,
"interval_start": "2018-05-19T00:00:00+0100",
"interval_end": "2018-05-19T00:30:00+0100"
},
{
"consumption": 0.073,
"interval_start": "2018-05-18T23:30:00+0100",
"interval_end": "2018-05-18T00:00:00+0100"
}
]
}
"""
and I would like to convert the results list to an Arrow table.
I have managed this by first converting it to a Python data structure, using Python's json library, and then converting that to an Arrow table.
import json
import pyarrow as pa

consumption_python = json.loads(consumption_json)
results = consumption_python['results']
table = pa.Table.from_pylist(results)
print(table)
pyarrow.Table
consumption: double
interval_start: string
interval_end: string
----
consumption: [[0.063,0.071,0.073]]
interval_start: [["2018-05-19T00:30:00+0100","2018-05-19T00:00:00+0100","2018-05-18T23:30:00+0100"]]
interval_end: [["2018-05-19T01:00:00+0100","2018-05-19T00:30:00+0100","2018-05-18T00:00:00+0100"]]
But, for reasons of performance, I'd rather just use pyarrow exclusively for this.
I can use pyarrow's json reader to make a table.
import pyarrow.json  # import the JSON submodule so pa.json.read_json is available

reader = pa.BufferReader(bytes(consumption_json, encoding='ascii'))
table_from_reader = pa.json.read_json(reader)
And 'results' is a struct nested inside a list. (Actually, everything seems to be nested).
print(table_from_reader['results'].type)
list<item: struct<consumption: double, interval_start: timestamp[s], interval_end: timestamp[s]>>
How do I turn this into a table directly?
Following this answer (https://stackoverflow.com/a/72880717/3617057), I can get closer...
import pyarrow.compute as pc
flat = pc.list_flatten(table_from_reader["results"])
print(flat)
[
-- is_valid: all not null
-- child 0 type: double
[
0.063,
0.071,
0.073
]
-- child 1 type: timestamp[s]
[
2018-05-18 23:30:00,
2018-05-18 23:00:00,
2018-05-18 22:30:00
]
-- child 2 type: timestamp[s]
[
2018-05-19 00:00:00,
2018-05-18 23:30:00,
2018-05-17 23:00:00
]
]
flat is a ChunkedArray whose underlying arrays are StructArray. To convert it to a table, you need to convert each chunk to a RecordBatch and concatenate them into a table:
pa.Table.from_batches(
    [
        pa.RecordBatch.from_struct_array(s)
        for s in flat.iterchunks()
    ]
)
If flat is just a StructArray (not a ChunkedArray), you can call:
pa.Table.from_batches(
    [pa.RecordBatch.from_struct_array(flat)]
)
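Putting the pieces of this answer together, a minimal end-to-end sketch (same consumption_json document as above; assumes a pyarrow version that provides pyarrow.json, pyarrow.compute.list_flatten and RecordBatch.from_struct_array):
import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.json  # JSON reader submodule

# Read the whole document, then flatten the nested 'results' column into its own table.
reader = pa.BufferReader(consumption_json.encode('ascii'))
table_from_reader = pa.json.read_json(reader)

flat = pc.list_flatten(table_from_reader['results'])  # ChunkedArray of structs
results_table = pa.Table.from_batches(
    [pa.RecordBatch.from_struct_array(chunk) for chunk in flat.iterchunks()]
)
print(results_table)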
I have a dataframe in which two columns are JSON objects. Something like this:
id choice name host
002 {'option': 'true'} Bob {'city': {'name': 'A'}}
003 {'option': 'false'} Ana {'city': {'name': 'B'}}
004 {'option': 'false'} Nic {'city': {'name': 'C'}}
I want columns choice and host to contain only the final string (true, false, A, B, C...).
I was able to do it for column host with the following:
df['host'] = (df.loc[:, 'host']
              .apply(lambda x: x['city']['name']))
This was successful. However, when I apply something similar to column choice
df['choice'] = (df.loc[:, 'choice']
                .apply(lambda x: x['option']))
I get TypeError: 'NoneType' object is not subscriptable.
How can I get a choice column with "true" and "false"?
Let us use str.get
df.choice.str.get('option')
0 true
1 false
2 false
Name: choice, dtype: object
df.host.str.get('city').str.get('name')
0 A
1 B
2 C
Name: host, dtype: object
First make sure the values in your two columns are dict objects; if they are strings, do the conversion via ast.literal_eval:
import ast
df[['choice','host']] = df[['choice','host']].applymap(ast.literal_eval)
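Note that the TypeError in the question suggests some choice cells are None; str.get handles those gracefully and just gives NaN. If you prefer the apply approach, here is a null-safe sketch (assuming missing cells should simply stay missing):
# Null-safe variant of the original apply calls: skip anything that isn't a dict
# (e.g. None/NaN cells) instead of indexing into it and raising.
df['choice'] = df['choice'].apply(lambda x: x.get('option') if isinstance(x, dict) else None)
df['host'] = df['host'].apply(lambda x: x['city']['name'] if isinstance(x, dict) else None)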
I am trying to select, from a dictionary, the list of sids that end with the digit 1.
This is what I am trying, but I am getting an empty list as output.
- name: Get list of sid that are open in READ WRITE mode
  set_fact:
    sid_output: "{{ om.results | selectattr(\"sid\", \"match\", \"1$\") | map(attribute='sid') | list}}"
Here is the output from my dictionary:
{
'msg':u'All items completed',
'changed':True,
'results':[
{
'_ansible_parsed':True,
'stderr_lines':[
],
u'cmd':u'echo \"set pagesize 0\\nselect trim(open_mode) from v\\\\$database;\" | /u01/app/oracle/product/11.2.0/dbinst_1/bin/sqlplus -S / as sysdba',
u'end': u'2019-05-15 12:04:30.478084',
'_ansible_no_log': False,
u'stdout':u'READ WRITE',
'_ansible_item_result':True,
u'changed':True,
u'sid':u'dw1',
'failed':False,
u'delta': u'0:00:00.073102',
u'stderr': u'',
u'rc':0,
u'invocation':{
u'module_args':{
u'creates':None,
u'executable':None,
u'_uses_shell':True,
u'_raw_params':u'echo \"set pagesize 0\\nselect trim(open_mode) from v\\\\$database;\" | /u01/app/oracle/product/11.2.0/dbinst_1/bin/sqlplus -S / as sysdba',
u'removes':None,
u'argv':None,
u'warn':True,
u'chdir':None,
u'stdin':None
}
},
'stdout_lines':[
u'READ WRITE'
],
u'start': u'2019-05-15 12:04:30.404982',
'_ansible_ignore_errors': None,
'_ansible_item_label':u'dw1'
}
]
}
Your problem is the use of the match test. From the documentation on search vs. match:
‘match’ requires a complete match in the string, while ‘search’ only requires matching a subset of the string.
In other words, if you try to test using some_match is match('foo.*bar'), the string must start with foo and contain bar somewhere after it. If you test some_match is match('bar$'), then you're looking for strings that are exactly equal to bar.
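The Ansible match and search tests are thin wrappers around Python's re module, so the difference is easy to reproduce directly; a quick sketch with the pattern and value from the question:
import re

# 'match' anchors at the start of the string; 'search' looks anywhere in it.
print(re.match(r'1$', 'dw1'))   # None: 'dw1' does not start with a match for '1$'
print(re.search(r'1$', 'dw1'))  # <re.Match object ...>: the trailing '1' is found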
Since you're using:
selectattr(\"sid\", \"match\", \"1$\")
This means it will only match strings that exactly equal 1. You want to use search instead. And while you're at it, get rid of all those escaped double-quotes:
- set_fact:
    sid_output: "{{ om.results | selectattr('sid', 'search', '1$') | map(attribute='sid') | list}}"
With your example data, this will set sid_output to ["dw1"].
I'm very new to rails, and am a little stuck on the logic for this problem.
I have one table (using MySQL) of employees, each with a manager_id key that refers to the employee they report to. So, for example, the employee with the title "CEO" and an id of 1 has a manager_id of nil, and the employee with the title "CTO" has a manager_id of 1. My records look like this:
id: 1, first_name: "Bob", last_name: "Boss", title: "CEO", manager_id: null
id: 2, first_name: "Pat", last_name: "Guy", title: "CTO", manager_id: 1
id: 3, first_name: "John", last_name: "Dude", title: "VP of engineering", manager_id: 2
and my JSON structure should look like this:
[
  {id: 1, first_name: "Bob", last_name: "Boss", title: "CEO", manager_id: null, descendents: [
    {id: 2, first_name: "Pat", last_name: "Guy", title: "CTO", manager_id: 1, descendents: [
      {id: 3, first_name: "John", last_name: "Dude", title: "VP of engineering", manager_id: 2, descendents: [....]}
    ]},
    {..more CEO descendents...}
  ]}
]
I'm trying to create a nested JSON structure that starts at the CEO, lists all employees that report to them, and each of those employees' descendants. I was trying to write a script that does this, but I keep getting infinite recursive calls. This is what I have:
# start at some root
@root = Employee.find_by title: 'CEO'
# convert to hash table
@results[0] = @root.attributes
# add direct_reports key
@results[0]["direct_reports"] = []

def getBelow(root = @root)
  @reports = Employee.where("manager_id = ?", @root[:id])
  if @reports.blank?
    return []
  else
    @reports = @reports.map(&:attributes)
    @reports.each do |person|
      person["direct_reports"] = []
      getBelow(person)
    end
    @reports = Employee.where("manager_id = ?", @root[:id])
    root["direct_reports"] = @reports
  end
  return @root
end

@list = getBelow(@results[0])
If I'm passing in each new person object, shouldn't they all eventually end when @reports.blank? becomes true?
An alternative I was thinking of was to use table associations inspired by this blog post
https://hashrocket.com/blog/posts/recursive-sql-in-activerecord
but that seems a little too complicated.
Some issues in the getBelow method:
You are always using @root instead of using the param (root), so you are always starting again from the 'CEO'.
You are calling getBelow recursively, but you are not using the result.
You call @reports = Employee.where("manager_id = ?", @root[:id]) twice.
You return @root.
As Jorge Najera said, there are gems that handle a tree structure easily. If you want to build it on your own, this is my suggestion:
# start at some root
@root = Employee.find_by manager_id: nil
# convert to hash table
@tree = @root.attributes
# add direct_reports key
@tree["direct_reports"] = getBelow(@root)

def getBelow(manager)
  branch = []
  Employee.where("manager_id = ?", manager.id).each do |employee|
    node = employee.attributes
    node["direct_reports"] = getBelow(employee)
    branch << node
  end
  branch
end
This was not tested, so I think you'll get some errors, but I believe the idea is fine.
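If it helps to see the recursion with the database out of the way, here is the same idea as a plain-Python sketch over an in-memory list standing in for the Employee table (records taken from the question; the helper name get_below is just illustrative):
# Build the nested structure recursively from a flat list of employee dicts.
employees = [
    {"id": 1, "first_name": "Bob", "title": "CEO", "manager_id": None},
    {"id": 2, "first_name": "Pat", "title": "CTO", "manager_id": 1},
    {"id": 3, "first_name": "John", "title": "VP of engineering", "manager_id": 2},
]

def get_below(manager_id):
    branch = []
    for emp in employees:
        if emp["manager_id"] == manager_id:
            node = dict(emp)  # copy, like calling .attributes
            node["direct_reports"] = get_below(emp["id"])
            branch.append(node)
    return branch

root = next(e for e in employees if e["manager_id"] is None)
tree = dict(root)
tree["direct_reports"] = get_below(root["id"])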
I am using Postgres' json data type but want to do a query/ordering with data that is nested within the json.
I want to order or query with .where on the json data type. For example, I want to query for users that have a follower count > 500 or I want to order by follower or following count.
Thanks!
Example:
model User

data: {
  "photos"=>[
    {"type"=>"facebook", "type_id"=>"facebook", "type_name"=>"Facebook", "url"=>"facebook.com"}
  ],
  "social_profiles"=>[
    {"type"=>"vimeo", "type_id"=>"vimeo", "type_name"=>"Vimeo", "url"=>"http://vimeo.com/", "username"=>"v", "id"=>"1"},
    {"bio"=>"I am not a person, but a series of plants", "followers"=>1500, "following"=>240, "type"=>"twitter", "type_id"=>"twitter", "type_name"=>"Twitter", "url"=>"http://www.twitter.com/", "username"=>"123", "id"=>"123"}
  ]
}
For anyone who stumbles upon this: I have come up with a list of queries using ActiveRecord and Postgres' JSON data type. Feel free to edit this to make it clearer.
Documentation for the JSON operators used below: https://www.postgresql.org/docs/current/functions-json.html.
# Sort based on the JSON data:
Post.order("data->'hello' DESC")
=> #<ActiveRecord::Relation [
#<Post id: 4, data: {"hi"=>"23", "hello"=>"22"}>,
#<Post id: 3, data: {"hi"=>"13", "hello"=>"21"}>,
#<Post id: 2, data: {"hi"=>"3", "hello"=>"2"}>,
#<Post id: 1, data: {"hi"=>"2", "hello"=>"1"}>]>
# Where inside a JSON object:
Record.where("data ->> 'likelihood' = '0.89'")
# Example json object:
r.column_data
=> {"data1"=>[1, 2, 3],
"data2"=>"data2-3",
"array"=>[{"hello"=>1}, {"hi"=>2}],
"nest"=>{"nest1"=>"yes"}}
# Nested search:
Record.where("column_data -> 'nest' ->> 'nest1' = 'yes' ")
# Search within array:
Record.where("column_data #>> '{data1,1}' = '2' ")
# Search within a value that's an array:
Record.where("column_data #> '{array,0}' ->> 'hello' = '1' ")
# this only find for one element of the array.
# All elements:
Record.where("column_data ->> 'array' LIKE '%hello%' ") # bad
Record.where("column_data ->> 'array' LIKE ?", "%hello%") # good
According to this http://edgeguides.rubyonrails.org/active_record_postgresql.html#json
there's a difference in using -> and ->>:
# db/migrate/20131220144913_create_events.rb
create_table :events do |t|
  t.json 'payload'
end

# app/models/event.rb
class Event < ActiveRecord::Base
end
# Usage
Event.create(payload: { kind: "user_renamed", change: ["jack", "john"]})
event = Event.first
event.payload # => {"kind"=>"user_renamed", "change"=>["jack", "john"]}
## Query based on JSON document
# The -> operator returns the original JSON type (which might be an object), whereas ->> returns text
Event.where("payload->>'kind' = ?", "user_renamed")
So you should try Record.where("data ->> 'status' = '200'") (->> returns text, so compare against a string or add a cast), or whichever operator suits your query (http://www.postgresql.org/docs/current/static/functions-json.html).
Your question doesn't seem to correspond to the data you've shown, but if your table is named users and data is a field in that table with JSON like {count:123}, then the query
SELECT * FROM users WHERE (data->>'count')::int > 500;
will work. Take a look at your database schema to make sure you understand the layout and check that the query works before complicating it with Rails conventions.
JSON filtering in Rails
Event.create(payload: [{ "name": 'Jack', "age": 12 },
                       { "name": 'John', "age": 13 },
                       { "name": 'Dohn', "age": 24 }])
Event.where('payload #> ?', '[{"age": 12}]')
#You can also filter by name key
Event.where('payload #> ?', '[{"name": "John"}]')
#You can also filter by {"name":"Jack", "age":12}
Event.where('payload #> ?', {"name":"Jack", "age":12}.to_json)
You can find more about this here