Flutter: how to make a list from MySQL data?

From the MySQL query I get data like this:
(Fields: {IDAufgaben: 2630, Aufgabe: erste Aufgabe},
Fields: {IDAufgaben: 2627, Aufgabe: Testen})
json.decode gives a FormatException, I think because the quotes are missing.
How can I turn the MySQL data I receive into a Dart list?
Thanks a lot for the help, I am a newbie in Flutter and Dart…

The field values should have quote marks too, but when you take the output from the terminal log the quotation marks are not included. The solution is to convert the data to JSON using jsonEncode, like this:
import 'dart:convert';

final myField = {"IDAufgaben": "2630", "Aufgabe": "erste Aufgabe"};
// Encode the map as indented JSON so keys and values are properly quoted.
print(JsonEncoder.withIndent(" ").convert(myField));
// The result in the terminal is:
{
"IDAufgaben": "2630",
"Aufgabe": "erste Aufgabe"
}
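
More generally, you do not have to round-trip the printed output through json.decode at all. If the rows come from the mysql1 package, each row exposes its columns as a map, so you can build a Dart list straight from the result set. A minimal sketch, assuming an open mysql1 connection named conn and a table named Aufgaben (both names are placeholders):

// Query the table and collect every row into a plain Dart list of maps.
final results = await conn.query('SELECT IDAufgaben, Aufgabe FROM Aufgaben');
final aufgaben = <Map<String, dynamic>>[];
for (final row in results) {
  aufgaben.add({
    'IDAufgaben': row.fields['IDAufgaben'],
    'Aufgabe': row.fields['Aufgabe'],
  });
}
print(aufgaben); // [{IDAufgaben: 2630, Aufgabe: erste Aufgabe}, ...]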

Related

Redshift JSON Parsing

I have some JSON data in Redshift table of type character varying. An example entry is:
[{"value":["*"], "key":"testData"}, {"value":"["GGG"], key: "differentData"}]
I want to return values based on keys. How can I do this? I'm attempting to do something like
json_extract_path_text(column, 'value') but unfortunately it errors out. Any ideas?
So the first issue is that your string isn't valid JSON. There are mismatched and missing quotes. I think you mean:
[{"value":["*"], "key":"testData"}, {"value":["GGG"], "key": "differentData"}]
I don't know if this is a data issue or a transcription error, but these functions won't work unless the JSON text is valid.
The next thing to consider is that at the top level this JSON is an array, so you will need the json_extract_array_element_text() function to pick out an element of the array. For example:
json_extract_array_element_text('json string', 0)
So, putting this together, we can extract the first "value" with (untested):
json_extract_path_text(
  json_extract_array_element_text(
    '[{"value":["*"], "key":"testData"}, {"value":["GGG"], "key": "differentData"}]', 0
  ),
  'value'
)
This should return the string ["*"].
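
If the JSON lives in a table column, the same nesting applies. A hedged sketch, where my_table and json_col are placeholder names for your table and your character varying column:

SELECT
  json_extract_path_text(
    json_extract_array_element_text(json_col, 0),
    'value'
  ) AS first_value
FROM my_table;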

Firebase Database Search Query

I am trying to search my database using a string, such as "A". I was just watching this Firebase tutorial Common SQL Queries converted for the Firebase Database - The Firebase Database For SQL Developers #4 and it explains that, in order to search the database for a string (in a certain location), you must use:
firebase.database().ref.child("child_name_here")
.queryOrdered(byChild: "child_name_here")
.queryStarting(atValue: "value_here_uppercase")
.queryEnding(atValue: "value_here_uppercase\\uf8ff")
You must use two \\ in the ending value as an escape character in order to get one \.
When I try this with my Firebase database, it does not work. Here is my database:
{
"Schools": {
"randomUID": {
"location" : "anyTown, anyState",
"name" : "anyName"
}
}
}
Here is my query:
databaseReference.child("Schools")
.queryOrdered(byChild: "name")
.queryStarting(atValue: "A")
.queryEnding(atValue: "A\\uf8ff") ...
When I go to print the snapshot from Firebase, I get nothing back.
If I get rid of the ending .queryEnding(atValue: "A\\uf8ff"), the database returns all of the schools in the Schools node.
How can I search the Firebase database using a String?
queryStarting() and queryEnding() can also be used for numbers. For example, you can get objects with someField varying from 3 to 10.
For searching a string, you can match the whole string using queryEqualToValue(), as in the sketch below.
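For completeness, a hedged Swift sketch of such an exact-match query, reusing the "Schools" structure from the question ("anyName" is just a placeholder value):

// Exact match: returns only the children whose "name" equals the value.
databaseReference.child("Schools")
    .queryOrdered(byChild: "name")
    .queryEqual(toValue: "anyName")
    .observeSingleEvent(of: .value) { snapshot in
        print(snapshot.childrenCount) // number of exact matches
    }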
For prefix search, the following shows all customers whose name starts with Wick. (It's not Swift, but it may give you an idea.)
// Sample (JavaScript): prefix search on the "name" child.
let query = 'Wick';
clientsRef.orderByChild('name')
  .startAt(query)
  .endAt(query + '\uf8ff') // '\uf8ff' sorts after every other character
  .once('value', (snapshot) => {
    ....
  });
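
Back in Swift, a hedged sketch of the same prefix search. One detail worth checking: in Swift source the code point is written \u{f8ff}, so the question's string "A\\uf8ff" contains a literal backslash rather than the intended character, which alone could explain the empty snapshot:

// Prefix search: every school whose name starts with "A".
databaseReference.child("Schools")
    .queryOrdered(byChild: "name")
    .queryStarting(atValue: "A")
    .queryEnding(atValue: "A\u{f8ff}") // the actual \uf8ff character, not "\\uf8ff"
    .observeSingleEvent(of: .value) { snapshot in
        for case let child as DataSnapshot in snapshot.children {
            print(child.value ?? "no value")
        }
    }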

Escape quotes inside quoted fields when parsing CSV in Flink

In Flink, parsing a CSV file using readCsvFile raises an exception when encountering a field containing quotes like "Fazenda São José ""OB"" Airport":
org.apache.flink.api.common.io.ParseException: Line could not be parsed: '191,"SDOB","small_airport","Fazenda São José ""OB"" Airport",-21.425199508666992,-46.75429916381836,2585,"SA","BR","BR-SP","Tapiratiba","no","SDOB",,"SDOB",,,'
I've found in this mailing list thread and this JIRA issue that quotes inside a field should be escaped with the \ character, but I don't have control over the data to modify it. Is there a way to work around this?
I've also tried using ignoreInvalidLines() (which is the less preferable solution) but it gave me the following error:
08:49:05,737 INFO org.apache.flink.api.common.io.LocatableInputSplitAssigner - Assigning remote split to host localhost
08:49:05,765 ERROR org.apache.flink.runtime.operators.BatchTask - Error in task code: CHAIN DataSource (at main(Job.java:53) (org.apache.flink.api.java.io.TupleCsvInputFormat)) -> Map (Map at main(Job.java:54)) -> Combine(SUM(1), at main(Job.java:56) (2/8)
java.lang.ArrayIndexOutOfBoundsException: -1
at org.apache.flink.api.common.io.GenericCsvInputFormat.skipFields(GenericCsvInputFormat.java:443)
at org.apache.flink.api.common.io.GenericCsvInputFormat.parseRecord(GenericCsvInputFormat.java:412)
at org.apache.flink.api.java.io.CsvInputFormat.readRecord(CsvInputFormat.java:111)
at org.apache.flink.api.common.io.DelimitedInputFormat.nextRecord(DelimitedInputFormat.java:454)
at org.apache.flink.api.java.io.CsvInputFormat.nextRecord(CsvInputFormat.java:79)
at org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:176)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
at java.lang.Thread.run(Thread.java:745)
Here is my code:
DataSet<Tuple2<String, Integer>> csvInput = env.readCsvFile("resources/airports.csv")
    .ignoreFirstLine()
    .ignoreInvalidLines()
    .parseQuotedStrings('"')
    .includeFields("100000001")
    .types(String.class, String.class)
    .map((Tuple2<String, String> value) -> new Tuple2<>(value.f1, 1))
    .groupBy(0)
    .sum(1);
If you cannot change the input data, then you should not call parseQuotedStrings(). The parser will then simply look for the next field delimiter and return everything in between as a string (including the quotation marks). You can then remove the leading and trailing quotation marks in a subsequent map operation, as sketched below.
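
A minimal sketch of that workaround, keeping the field selection from the question (untested; MapFunction comes from org.apache.flink.api.common.functions):

DataSet<Tuple2<String, Integer>> csvInput = env.readCsvFile("resources/airports.csv")
    .ignoreFirstLine()
    // No parseQuotedStrings(): every field is returned verbatim,
    // including its surrounding quotation marks.
    .includeFields("100000001")
    .types(String.class, String.class)
    .map(new MapFunction<Tuple2<String, String>, Tuple2<String, Integer>>() {
        @Override
        public Tuple2<String, Integer> map(Tuple2<String, String> value) {
            // Strip one leading and one trailing quotation mark, if present.
            String field = value.f1.replaceAll("^\"|\"$", "");
            return new Tuple2<>(field, 1);
        }
    })
    .groupBy(0)
    .sum(1);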

Parse complex JSON string contained in Hadoop

I want to parse a string of complex JSON in Pig. Specifically, I want Pig to understand my JSON array as a bag instead of as a single chararray. I found that complex JSON can be parsed by using Twitter's Elephant Bird or Mozilla's Akela library. (I found some additional libraries, but I cannot use a 'Loader'-based approach since I use the HCatalog loader to load data from Hive.)
But, the problem is the structure of my data; each value of Map structure contains value part of complex JSON. For example,
1. My table looks like this (WARNING: the type of 'complex_data' is not STRING but MAP<STRING, STRING>!):
TABLE temp_table
(
user_id BIGINT COMMENT 'user ID.',
complex_data MAP <STRING, STRING> COMMENT 'complex json data'
)
COMMENT 'temp data.'
PARTITIONED BY(created_date STRING)
STORED AS RCFILE;
2. And 'complex_data' contains the following (the values I want to get are marked with two *s, so basically #'d'#'f' from each PARSED_STRING(complex_data#'c')):
{ "a": "[]",
"b": "\"sdf\"",
"**c**":"[{\"**d**\":{\"e\":\"sdfsdf\"
,\"**f**\":\"sdfs\"
,\"g\":\"qweqweqwe\"},
\"c\":[{\"d\":21321,\"e\":\"ewrwer\"},
{\"d\":21321,\"e\":\"ewrwer\"},
{\"d\":21321,\"e\":\"ewrwer\"}]
},
{\"**d**\":{\"e\":\"sdfsdf\"
,\"**f**\":\"sdfs\"
,\"g\":\"qweqweqwe\"},
\"c\":[{\"d\":21321,\"e\":\"ewrwer\"},
{\"d\":21321,\"e\":\"ewrwer\"},
{\"d\":21321,\"e\":\"ewrwer\"}]
},]"
}
3. So, I tried... (same approach for Elephant Bird)
REGISTER '/path/to/akela-0.6-SNAPSHOT.jar';
DEFINE JsonTupleMap com.mozilla.pig.eval.json.JsonTupleMap();
data = LOAD 'temp_table' USING org.apache.hive.hcatalog.pig.HCatLoader();
values_of_map = FOREACH data GENERATE complex_data#'c' AS attr:chararray; -- IT WORKS
-- dump values_of_map shows correct chararray data per each row
-- eg) ([{"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... },
{"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... },
{"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... }])
([{"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... },
{"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... },
{"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... }]) ...
attempt1 = FOREACH data GENERATE JsonTupleMap(complex_data#'c'); -- THIS LINE CAUSES AN ERROR
attempt2 = FOREACH data GENERATE JsonTupleMap(CONCAT(CONCAT('{\\"key\\":', complex_data#'c'), '}')); -- IT ALSO DOES NOT WORK
I guessed that "attempt1" failed because the value doesn't contain the full JSON. However, when I CONCAT as in "attempt2", I generate an additional \ mark (so each line starts with {\"key\": ). I'm not sure whether these additional marks break the parsing rule or not. In any case, I want to parse the given JSON string so that Pig can understand it. If you have any method or solution, please feel free to let me know.
I finally solved my problem by using the jyson library with a Jython UDF.
I know that I could solve it with Java or another language.
But I think that Jython with jyson is the simplest answer to this issue, as sketched below.
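
For illustration, a hedged sketch of what such a Jython UDF can look like. The field names follow the question's data; the file name, function name, and output schema are assumptions, and jyson's jar must be on Pig's classpath:

# udfs.py -- registered with: REGISTER 'udfs.py' USING jython AS myudfs;
from com.xhaus.jyson import JysonCodec as json

@outputSchema("values:bag{t:tuple(f:chararray)}")
def extract_f(json_string):
    # Parse the JSON array stored in complex_data#'c' and emit each #'d'#'f'.
    if json_string is None:
        return None
    out = []
    for element in json.loads(json_string):
        d = element.get('d')
        if d and 'f' in d:
            out.append((d['f'],))
    return out

In the Pig script the call then looks like: values = FOREACH data GENERATE myudfs.extract_f(complex_data#'c');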

Difference Between Two Mongo Queries

What is the difference between these two Mongo queries?
db.test.find({"field" : "Value"})
db.test.find({field : "Value"})
The mongo shell accepts both.
There is no difference in your example.
The problem happens when your field names contain characters which cannot be part of an identifier in JavaScript (because the query engine runs in a JavaScript REPL/shell).
For example, user-name, because there is a hyphen in it.
Then you would have to query like db.test.find({"user-name" : "Value"})
For the mongo shell there is no actual difference, but in other language contexts it does matter.
The actual point here is writing valid JSON. I try to do this in my own responses on this forum and others, since JSON is a data format that can easily be "parsed" into native data structures, whereas the alternate "JavaScript" notation may not translate so easily.
There are certain cases where the quoting is required, as in:
db.test.find({ "field-value": 1 })
or:
db.test.find({ "field.value": 1 })
As the keys would otherwise be "invalid JavaScript".
But the real point here is adhering to the JSON form.
You can understand with an example: suppose that you have a test collection with two records:
{
'_id': ObjectId("5370a826fc55bb23128b4568"),
'name': 'nanhe'
}
{
'_id': ObjectId("5370a75bfc55bb23128b4567"),
'your name': 'nanhe'
}
db.test.find({'your name':'nanhe'});
{ "_id" : ObjectId("5370a75bfc55bb23128b4567"), "your name" : "nanhe" }
db.test.find({your name:'nanhe'});
SyntaxError: Unexpected identifier