How do I access a JSON file after it has been decoded - json

I can't figure out how to access this output from JSON
p (2 items)
0:"message" -> "ok"
1:"results" -> List (1 item)
key:"results"
value:List (1 item)
[0]:Map (4 items)
0:"uid" -> "1"
1:"name" -> "TallyMini"
2:"camera" -> "2"
3:"Version" -> "0.1.0"
lst.add(convertDataToJson[1]['value']['name']);
I have tried various combinations of the indexes 'result', 'list', 'value', and 'name'.

I guess you are using http and getting a JSON object back from the server. In this case, after you call json.decode(), Flutter understands the JSON as a Map (Map<String, dynamic> json).
So, to get the value of the name field, you first need to access the results field: json['results']. It is a list, so index into it and then access the name field.
If my answer is not helpful, please show me more of your code.

You can try it like this:
String name = jsonData['results'][0]['name'];
Please let me know whether it works.
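Putting both answers together, here is a minimal Dart sketch (assuming the raw body comes from an http response; the response variable is an illustrative name, lst is the list from the question):

import 'dart:convert';

// Decode the raw body into a Map<String, dynamic>.
final Map<String, dynamic> jsonData = json.decode(response.body);

// 'results' is a List whose elements are Maps with the device fields.
final List<dynamic> results = jsonData['results'];
final String name = results[0]['name']; // "TallyMini"
lst.add(name);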

Related

How do we modify a JSON array object regardless of its position?

The problem
Each entity owns an id and a json field. That json field simply stores a JSON list of objects.
Entity{ id, json }
"1, '[{"tag": "Player"}, {"position": {"x": 20, "y": 20}}]'"
The order of those JSON objects is not always the same, and I want to update the JSON object inside the array where "tag": "Player". I basically want to change the tag.
I tried to use JSON_REPLACE, but it didn't work because that function does not seem to accept the $** wildcard. And I can't use $[0] because the JSON object is not always at the first position. This is what I tried:
UPDATE entity
SET jsonComponents = JSON_REPLACE(
    jsonComponents,
    '$**.tag',
    'NewTag'
)
WHERE entity.id = 1;
The Question
How are we supposed to modify/remove a JSON object inside a pure JSON list if we don't know where it is located? How can we modify/remove a JSON object inside a list regardless of its position in the list?
I'd be very glad for any help on this topic; I couldn't find anything about it...
The solution
If we don't know the path of the JSON object we want to modify, we simply query for the path using JSON_SEARCH:
UPDATE entity
SET jsonComponents = JSON_REPLACE(
    jsonComponents,
    JSON_UNQUOTE(JSON_SEARCH(jsonComponents, 'one', 'Player')),
    'NewTag'
)
WHERE entity.id = 1;
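To see what JSON_REPLACE is being handed, you can run the search on its own. JSON_SEARCH returns the path of the first match as a quoted JSON string, which is why JSON_UNQUOTE is needed before it is usable as a path (a sketch against the sample row above):

SELECT JSON_SEARCH(jsonComponents, 'one', 'Player') AS path
FROM entity
WHERE entity.id = 1;
-- path = "$[0].tag" (or "$[1].tag", wherever the Player object happens to sit)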

How to get ordered results from Couchbase

I have in my bucket a document containing a list of IDs (childList).
I would like to query over this list and keep the results ordered as in my JSON. My query (using the Java SDK) looks like this:
String query = new StringBuilder().append("SELECT B.name, META(B).id AS id ")
        .append("FROM " + bucket.name() + " A ")
        .append("USE KEYS $id ")
        .append("JOIN " + bucket.name() + " B ON KEYS ARRAY i FOR i IN A.childList END;").toString();
This query returns rows that I transform into my domain objects, building a list like this:
n1qlQueryResult.allRows().forEach(n1qlQueryRow -> /* add to the return list */ ...);
The problem is the output order is important.
Any ideas?
Thank you.
Here is a rough idea of a solution without N1QL, provided you always start from a single A document:
List<JsonDocument> listOfBs = bucket
        .async()
        .get(idOfA)
        .flatMap(doc -> Observable.from(doc.content().getArray("childList")))
        .concatMapEager(id -> bucket.async().get((String) id)) // ids stream as Object, hence the cast
        .toList()
        .toBlocking().first();
You might want another map before the toList to extract the name and id, or even to perform your domain object transformation (see the sketch after the steps below).
The steps are:
use the async API
get the A document
extract the list of children and stream these ids
asynchronously fetch each child document and stream them, keeping them in the original order
collect all into a List<JsonDocument>
block until the list is ready and return that List.
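A hedged sketch of that extra map step, projecting each child document down to its id and name (the use of JsonObject as the projection type is an assumption; adapt to your domain object):

List<JsonObject> namesAndIds = bucket
        .async()
        .get(idOfA)
        .flatMap(doc -> Observable.from(doc.content().getArray("childList")))
        .concatMapEager(id -> bucket.async().get((String) id))
        .map(doc -> JsonObject.create()                      // project each child document
                .put("id", doc.id())                         // the document key, like META(B).id
                .put("name", doc.content().getString("name")))
        .toList()
        .toBlocking().first();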

Firebase Database Search Query

I am trying to search my database using a string, such as "A". I was just watching this Firebase tutorial Common SQL Queries converted for the Firebase Database - The Firebase Database For SQL Developers #4 and it explains that, in order to search the database for a string (in a certain location), you must use:
firebase.database().ref.child("child_name_here")
    .queryOrdered(byChild: "child_name_here")
    .queryStarting(atValue: "value_here_uppercase")
    .queryEnding(atValue: "value_here_uppercase\\uf8ff")
You must use two \\ in the ending value as an escape character in order to get one \.
When I try this with my Firebase database, it does not work. Here is my database:
{
  "Schools": {
    "randomUID": {
      "location" : "anyTown, anyState",
      "name" : "anyName"
    }
  }
}
Here is my query:
databaseReference.child("Schools")
    .queryOrdered(byChild: "name")
    .queryStarting(atValue: "A")
    .queryEnding(atValue: "A\\uf8ff") ...
When I go to print the snapshot from Firebase, I get nothing back.
If I get rid of the ending .queryEnding(atValue: "A\\uf8ff"), the database returns all of the schools in the Schools node.
How can I search the Firebase database using a String?
queryStarting() and queryEnding() can be used for numbers. For example, you can get objects with someField varying from 3 to 10.
For searching a string: you can search for a whole string using queryEqualToValue().
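For instance, an exact-match lookup against the Schools node might look like this in Swift (a sketch; queryEqual(toValue:) is the current Swift spelling of queryEqualToValue):

databaseReference.child("Schools")
    .queryOrdered(byChild: "name")
    .queryEqual(toValue: "anyName")
    .observeSingleEvent(of: .value) { snapshot in
        // snapshot contains only the schools whose name is exactly "anyName"
    }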
This shows all customers whose name matches Wick. (It's not Swift, but it may give you an idea.)
// sample (JavaScript)
let query = 'Wick';
clientsRef.orderByChild('name')
  .startAt(query)
  .endAt(query + '\uf8ff')
  .once('value', (snapshot) => {
    // ...
  });
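Translating that back to Swift points at the likely bug in the question: the string "A\\uf8ff" produces a literal backslash followed by the characters uf8ff, not the private-use code point. In Swift the code point is written \u{f8ff}. A sketch of the corrected query (assuming databaseReference points at the database root):

databaseReference.child("Schools")
    .queryOrdered(byChild: "name")
    .queryStarting(atValue: "A")
    .queryEnding(atValue: "A\u{f8ff}") // \u{f8ff}, not "\\uf8ff"
    .observeSingleEvent(of: .value) { snapshot in
        for child in snapshot.children {
            // each child is a school whose name starts with "A"
        }
    }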

Parse complex JSON string contained in Hadoop

I want to parse a string of complex JSON in Pig. Specifically, I want Pig to understand my JSON array as a bag instead of as a single chararray. I found that complex JSON can be parsed using Twitter's Elephant Bird or Mozilla's Akela library. (I found some additional libraries, but I cannot use a 'Loader'-based approach since I use HCatalog Loader to load data from Hive.)
But the problem is the structure of my data; each value of the Map structure contains the value part of complex JSON. For example:
1. My table looks like this (WARNING: the type of 'complex_data' is not STRING but a MAP<STRING, STRING>!):
CREATE TABLE temp_table
(
    user_id      BIGINT COMMENT 'user ID.',
    complex_data MAP<STRING, STRING> COMMENT 'complex json data'
)
COMMENT 'temp data.'
PARTITIONED BY (created_date STRING)
STORED AS RCFILE;
2. And 'complex_data' contains the following (the value I want to get is marked with two *s, so basically #'d'#'f' from each element of PARSED_STRING(complex_data#'c')):
{ "a": "[]",
"b": "\"sdf\"",
"**c**":"[{\"**d**\":{\"e\":\"sdfsdf\"
,\"**f**\":\"sdfs\"
,\"g\":\"qweqweqwe\"},
\"c\":[{\"d\":21321,\"e\":\"ewrwer\"},
{\"d\":21321,\"e\":\"ewrwer\"},
{\"d\":21321,\"e\":\"ewrwer\"}]
},
{\"**d**\":{\"e\":\"sdfsdf\"
,\"**f**\":\"sdfs\"
,\"g\":\"qweqweqwe\"},
\"c\":[{\"d\":21321,\"e\":\"ewrwer\"},
{\"d\":21321,\"e\":\"ewrwer\"},
{\"d\":21321,\"e\":\"ewrwer\"}]
},]"
}
3. So, I tried the following (the same approach for Elephant Bird):
REGISTER '/path/to/akela-0.6-SNAPSHOT.jar';
DEFINE JsonTupleMap com.mozilla.pig.eval.json.JsonTupleMap();
data = LOAD 'temp_table' USING org.apache.hive.hcatalog.pig.HCatLoader();
values_of_map = FOREACH data GENERATE complex_data#'c' AS attr:chararray; -- IT WORKS
-- dump values_of_map shows correct chararray data per each row
-- eg) ([{"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... },
{"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... },
{"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... }])
([{"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... },
{"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... },
{"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... }]) ...
attempt1 = FOREACH data GENERATE JsonTupleMap(complex_data#'c'); -- THIS LINE CAUSES AN ERROR
attempt2 = FOREACH data GENERATE JsonTupleMap(CONCAT(CONCAT('{\\"key\\":', complex_data#'c'), '}')); -- IT ALSO DOES NOT WORK
I guessed that attempt1 failed because the value doesn't contain full JSON. However, when I CONCAT as in attempt2, additional \ marks are generated (so each line starts with {\"key\":). I'm not sure whether these additional marks break the parsing rule or not. In any case, I want to parse the given JSON string so that Pig can understand it. If you have any method or solution, please feel free to let me know.
I finally solved my problem by using the jyson library with a jython UDF.
I know that I could solve it in Java or other languages, but I think that jython with jyson is the simplest answer to this issue.
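For reference, a hedged sketch of what such a UDF can look like (the file names, jar version, and the extract_f helper are all illustrative; jyson exposes JysonCodec with loads/dumps, and Pig predefines the outputSchema decorator for Jython scripts):

# extract_f.py -- Jython UDF; pulls each 'd'.'f' out of the JSON array string
from com.xhaus.jyson import JysonCodec as json

@outputSchema("values: {t: (f: chararray)}")
def extract_f(complex_json):
    if complex_json is None:
        return None
    bag = []
    for obj in json.loads(complex_json):
        d = obj.get('d')
        if d is not None:
            bag.append((d.get('f'),))
    return bag

Registered and used from Pig roughly like this:

REGISTER '/path/to/jyson-1.0.2.jar';
REGISTER 'extract_f.py' USING jython AS myudfs;
data = LOAD 'temp_table' USING org.apache.hive.hcatalog.pig.HCatLoader();
result = FOREACH data GENERATE user_id, myudfs.extract_f(complex_data#'c');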

Elixir - Creating JSON object from 2 collections

I'm using Postgrex in Elixir, and when it returns query results, it returns them in the following struct format:
%{columns: ["id", "email", "name"], command: :select, num_rows: 2, rows: [{1, "me#me.com", "Bobbly Long"}, {6, "email#tts.me", "Woll Smoth"}]}
It should be noted I am using Postgrex directly WITHOUT Ecto.
The columns (table headers) are returned as a collection, but the results (rows) are returned as a list of tuples (which seems odd, as they could get very large).
I'm trying to find the best way to programmatically create JSON objects for each result in which the JSON key is the column title and the JSON value the corresponding value from the tuple.
I've tried creating maps from both, merging and then serialising to JSON objects but it seems there should be an easier/better way of doing this.
Has anyone dealt with this before? What is the best way of creating a JSON object from a separate collection and tuple?
Something like this should work:
result = Postgrex.query!(...)

Enum.map(result.rows, fn row ->
  Enum.zip(result.columns, Tuple.to_list(row))
  |> Enum.into(%{})
  |> JSON.encode()
end)
This will result in a list of JSON objects, where each row in the result set becomes one JSON object.
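A worked version for the exact struct in the question (assuming the Jason library for encoding; Poison works the same way, and JSON.encode above stands in for whichever encoder you use):

result = %{
  columns: ["id", "email", "name"],
  rows: [{1, "me#me.com", "Bobbly Long"}, {6, "email#tts.me", "Woll Smoth"}]
}

json_rows =
  Enum.map(result.rows, fn row ->
    result.columns
    |> Enum.zip(Tuple.to_list(row))   # [{"id", 1}, {"email", "me#me.com"}, ...]
    |> Map.new()                      # %{"email" => "me#me.com", "id" => 1, ...}
    |> Jason.encode!()                # "{\"email\":\"me#me.com\",\"id\":1,...}"
  end)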