How to parse a string in MySQL

So the problem is I have a column that contains a snapshot:
<p>
<t8>xx</t8>
<s7>321</s7>
<s1>6</s1>
<s2>27</s2>
<s4>73</s4>
<t1>noemail@noemail.com</t1>
<t2>xxxxx</t2>
<t3>xxxxxx</t3>
<t11>xxxxxxxx</t11>
<t6>xxxxxxxx</t6>
<t7>12345</t7>
<t9>1234567890</t9>
</p>
I need to parse this string in MySQL so that I can count the number of times that noemail.com occurs. I am not familiar with parsing, so please explain as best you can.

You can do it by removing the searched substring and comparing the lengths. For example:
set @str = '<p>
<t8>xx</t8>
<s7>321</s7>
<s1>6</s1>
<s2>27</s2>
<s4>73</s4>
<t1>noemail@noemail.com</t1>
<t2>xxxxx</t2>
<t3>xxxxxx</t3>
<t11>xxxxxxxx</t11>
<t6>xxxxxxxx</t6>
<t7>12345</t7>
<t9>1234567890</t9>
</p>';
set @find = 'noemail@noemail.com';
select (length(@str) - length(replace(@str, @find, '')))/length(@find) AS NumberOfTimesEmailAppears;
I think there is sadly no more elegant solution. (Note that a database system is not designed to parse strings: that is mostly the job of a scripting language.)
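If the parsing ever gets more involved, that remark points to doing the counting in a scripting language instead. Here is a minimal client-side sketch in Python, assuming the mysql-connector-python package and a hypothetical table snapshots with a snapshot column (adjust the names to your schema):
# Count how often a substring occurs across every row of a column, client-side.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="me",
                               password="secret", database="mydb")
cursor = conn.cursor()
cursor.execute("SELECT snapshot FROM snapshots")  # hypothetical table/column names

needle = "noemail.com"
total = sum(row[0].count(needle) for row in cursor if row[0] is not None)
print(total)

cursor.close()
conn.close()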

Related

Extract string from csv file after reading in Prolog

Good evening,
I am trying to read a csv file in Prolog containing all the countries in the world. Executing this code:
read_KB(R) :- csv_read_file("countries.csv",R).
I get a list of Terms of this type:
R = [row('Afghanistan;'), row('Albania;'), row('Algeria;'), row('Andorra;'), row('Angola;'), row('Antigua and Barbuda;'), row('Argentina;'), row('Armenia;'), row(...)|...].
I would like to extract only the name of each country as a string and put all of them into a list of strings.
I tried this with only the first row, executing:
read_KB(L) :- csv_read_file("/Users/dylan/Desktop/country.csv", R),
              give(R, L).
give([X|T],X).
I obtain only a term of the form row('Afghanistan;').
You can use maplist/3:
read_KB(Names) :-
    csv_read_file('countries.csv', Rows, [separator(0';)]),
    maplist([row(Name,_), Name]>>true, Rows, Names).
The answer given by @slago can be simplified by using arg/3 instead of a lambda expression, making it slightly more efficient:
read_KB(Names) :-
    csv_read_file('countries.csv', Rows, [separator(0';)]),
    maplist(arg(1), Rows, Names).

Flutter: how to make a list from MySQL data?

From the MySQL query I get data like this:
(Fields: {IDAufgaben: 2630, Aufgabe: erste Aufgabe},
Fields: {IDAufgaben: 2627, Aufgabe: Testen})
json.decode gives a FormatException, I think because the quotes are missing.
How can I convert the MySQL data I receive into a Dart list?
Thanks a lot for the help, I am a newbie in Flutter and Dart…
The data should contain quote marks too, but if you take it from the terminal log the quotation marks are not included. The solution is to convert it to JSON using a JSON encoder, like this:
import 'dart:convert'; // for JsonEncoder

final myField = {"IDAufgaben": "2630", "Aufgabe": "erste Aufgabe"};
print(JsonEncoder.withIndent(" ").convert(myField));
// the terminal output is:
{
"IDAufgaben": "2630",
"Aufgabe": "erste Aufgabe"
}

Replace template smart tags <<tag>> with [tag] in MySQL

I have a table named templateType. It has a column named Template_Text.
The template text contains many smart tags of the form <<tag>>, and I need to replace << with [ and >> with ] using MySQL.
Edit from OP's comments:
It is a template with a large text containing multiple smart tags. For example: " I <<Fname>> <<Lname>>, <<UserId>> <<Designation>> of xyz organization, Proud to announce...."
Here I need to replace these << with [ and >> with ], so it will look like
" [Fname] [Lname], [UserId] ...."
Based on your comments, your MySQL version does not support the REGEXP_REPLACE() function, so a generic solution is not feasible.
Assuming that your string does not contain any << and >> other than those in the <<%>> pattern, we can use the REPLACE() function.
I have also added a WHERE condition so that we only update rows that match the given substring criteria.
Update templateType
SET Template_Text = REPLACE(REPLACE(Template_Text, '<<', '['), '>>', ']')
WHERE Template_Text LIKE '%<<%>>%'
In case the problem is more complex, you may get some ideas from this answer: https://stackoverflow.com/a/53286571/2469308
A couple of replace calls should work:
SELECT REPLACE(REPLACE(template_text, '<<', '['), '>>', ']')
FROM template_type

Firebase Database Search Query

I am trying to search my database using a string, such as "A". I was just watching this Firebase tutorial Common SQL Queries converted for the Firebase Database - The Firebase Database For SQL Developers #4 and it explains that, in order to search the database for a string (in a certain location), you must use:
firebase.database().ref.child("child_name_here")
.queryOrdered(byChild: "child_name_here")
.queryStarting(atValue: "value_here_uppercase")
.queryEnding(atValue: "value_here_uppercase\\uf8ff")
You must use two \\ in the ending value as an escape character in order to get one \.
When I try this with my Firebase database, it does not work. Here is my database:
{
  "Schools": {
    "randomUID": {
      "location" : "anyTown, anyState",
      "name" : "anyName"
    }
  }
}
Here is my query:
databaseReference.child("Schools")
.queryOrdered(byChild: "name")
.queryStarting(atValue: "A")
.queryEnding(atValue: "A\\uf8ff") ...
When I go to print the snapshot from Firebase, I get nothing back.
If I get rid of the ending .queryEnding(atValue: "A\\uf8ff"), the database returns all of the schools in the Schools node.
How can I search the Firebase database using a String?
queryStarting() and queryEnding() can be used for numbers. For example, you can get objects with someField varying from 3 to 10.
For searching a string, you can search for the whole string using queryEqualToValue().
This shows all customers whose name starts with Wick. (It's not Swift, but it may give you an idea.)
// sample
let query = 'Wick'
clientsRef.orderByChild('name')
  .startAt(query)
  .endAt(query + '\uf8ff')
  .once('value', (snapshot) => {
    // ....
  })

Parse complex Json string contained in Hadoop

I want to parse a string of complex JSON in Pig. Specifically, I want Pig to understand my JSON array as a bag instead of as a single chararray. I found that complex JSON can be parsed by using Twitter's Elephant Bird or Mozilla's Akela library. (I found some additional libraries, but I cannot use a 'Loader'-based approach since I use HCatLoader to load data from Hive.)
But the problem is the structure of my data; each value of the map structure contains the value part of a complex JSON document. For example:
1. My table looks like this (WARNING: the type of 'complex_data' is not STRING but a MAP<STRING, STRING>!):
TABLE temp_table
(
user_id BIGINT COMMENT 'user ID.',
complex_data MAP <STRING, STRING> COMMENT 'complex json data'
)
COMMENT 'temp data.'
PARTITIONED BY(created_date STRING)
STORED AS RCFILE;
2. And 'complex_data' contains the following (the values I want to get are marked with two *s, so basically #'d'#'f' from each PARSED_STRING(complex_data#'c')):
{ "a": "[]",
"b": "\"sdf\"",
"**c**":"[{\"**d**\":{\"e\":\"sdfsdf\"
,\"**f**\":\"sdfs\"
,\"g\":\"qweqweqwe\"},
\"c\":[{\"d\":21321,\"e\":\"ewrwer\"},
{\"d\":21321,\"e\":\"ewrwer\"},
{\"d\":21321,\"e\":\"ewrwer\"}]
},
{\"**d**\":{\"e\":\"sdfsdf\"
,\"**f**\":\"sdfs\"
,\"g\":\"qweqweqwe\"},
\"c\":[{\"d\":21321,\"e\":\"ewrwer\"},
{\"d\":21321,\"e\":\"ewrwer\"},
{\"d\":21321,\"e\":\"ewrwer\"}]
},]"
}
3. So, I tried... (same approach for Elephant Bird)
REGISTER '/path/to/akela-0.6-SNAPSHOT.jar';
DEFINE JsonTupleMap com.mozilla.pig.eval.json.JsonTupleMap();
data = LOAD 'temp_table' USING org.apache.hive.hcatalog.pig.HCatLoader();
values_of_map = FOREACH data GENERATE complex_data#'c' AS attr:chararray; -- IT WORKS
-- dump values_of_map shows correct chararray data per each row
-- eg) ([{"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... },
--      {"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... },
--      {"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... }])
--     ([{"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... },
--      {"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... },
--      {"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... }]) ...
attempt1 = FOREACH data GENERATE JsonTupleMap(complex_data#'c'); -- THIS LINE CAUSES AN ERROR
attempt2 = FOREACH data GENERATE JsonTupleMap(CONCAT(CONCAT('{\\"key\\":', complex_data#'c'), '}')); -- IT ALSO DOES NOT WORK
I guessed that "attempt1" failed because the value doesn't contain the full JSON. However, when I CONCAT as in "attempt2", additional \ marks are generated (so each line starts with {\"key\": ). I'm not sure whether these additional marks break the parsing rule or not. In any case, I want to parse the given JSON string so that Pig can understand it. If you have any method or solution, please feel free to let me know.
I finally solved my problem by using the jyson library with a jython UDF.
I know that I could solve it by using Java or other languages.
But I think that jython with jyson is the simplest answer to this issue.
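For anyone looking for a starting point, here is a rough sketch of what such a jython UDF could look like, assuming jyson is on the classpath; the file name, UDF name, and output schema are hypothetical, and the field names 'd' and 'f' are taken from the sample above:
# extract_f.py -- hypothetical jython UDF sketch, not the original poster's code
from pig_util import outputSchema
from com.xhaus.jyson import JysonCodec as json  # jyson's JSON codec for jython

@outputSchema('fs:bag{t:tuple(f:chararray)}')
def extract_f(json_string):
    # Parse the JSON array stored under complex_data#'c' and return a bag
    # holding the 'f' value found under each item's 'd' object.
    if json_string is None:
        return None
    items = json.loads(json_string)
    return [(item['d']['f'],) for item in items if 'd' in item]
It would then be registered in the Pig script with something like REGISTER 'extract_f.py' USING jython AS myudf; and called as myudf.extract_f(complex_data#'c').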