MySQL Query and POSIXct in Loop - mysql

I have the following MySQL query using RMySQL. All the database connection parameters are set beforehand and the query works fine. Is there a way to put it in a loop so that I get multiple zoo objects, one for each dpname? Thanks!
library(RMySQL)
library(zoo)

# look up the handle for one dpname, then fetch its time series
dpname.df <- "%Name%"
paste.query.df <- paste("select handle from db.connect where dpname like '", dpname.df, "'", sep = '')
handle.df <- dbGetQuery(dbLT, paste.query.df)
paste.query2.df <- paste("select time,value from db.data where handle='", handle.df, "' and time between '", x, "' and '", y, "'", sep = '')
df <- dbGetQuery(dbLT, paste.query2.df)
df$time <- as.POSIXct(df$time, format = "%Y-%m-%d %H:%M:%S")
df.zoo <- zoo(df[, -1], df[, 1])
I tried to write a function and run it with mapply:
query <- function(x, y, dpname.df)
{
  paste.query.df <- paste("select handle from db.connect where dpname like '", dpname.df, "'", sep = '')
  handle.df <- dbGetQuery(dbLT, paste.query.df)
  paste.query2.df <- paste("select time,value from db.data where handle='", handle.df, "' and time between '", x, "' and '", y, "'", sep = '')
  dbGetQuery(dbLT, paste.query2.df)
}
which I can run with mapply(query, x, y, dpname.df).
But I can't get a separate output for each query. Is it possible to supply another list with the output names? Then I could also move the zoo and POSIXct steps into my function. Thanks!
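A minimal sketch of one way this could look, assuming dbLT is an open connection, x, y and dpname.df are vectors of the same length, and each dpname pattern matches exactly one handle (the names used here are purely illustrative):
library(RMySQL)
library(zoo)

query <- function(x, y, dpname) {
  q1 <- paste0("select handle from db.connect where dpname like '", dpname, "'")
  handle <- dbGetQuery(dbLT, q1)$handle[1]
  q2 <- paste0("select time,value from db.data where handle='", handle,
               "' and time between '", x, "' and '", y, "'")
  df <- dbGetQuery(dbLT, q2)
  df$time <- as.POSIXct(df$time, format = "%Y-%m-%d %H:%M:%S")
  zoo(df[, -1], df[, 1])                     # return one zoo object per call
}

# SIMPLIFY = FALSE keeps the result as a list of zoo objects, one per dpname;
# naming the list afterwards gives you the "output names"
result <- mapply(query, x, y, dpname.df, SIMPLIFY = FALSE)
names(result) <- dpname.df
result is then a named list, so result[["some name"]] returns the zoo object for that dpname pattern.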

Related

Extract string from csv file after reading in Prolog

Good evening,
I am trying to read a CSV file in Prolog containing all the countries in the world. Executing this code:
read_KB(R) :- csv_read_file("countries.csv",R).
I get a list of Terms of this type:
R = [row('Afghanistan;'), row('Albania;'), row('Algeria;'), row('Andorra;'), row('Angola;'), row('Antigua and Barbuda;'), row('Argentina;'), row('Armenia;'), row(...)|...].
I would like to extract only the names of each country in form of a String and put all of them into a list of Strings.
I tried to get only the first row by executing this:
read_KB(L) :- csv_read_file("/Users/dylan/Desktop/country.csv",R),
give(R,L).
give([X|T],X).
I obtain only a single term of the form row('Afghanistan;').
You can use maplist/3:
read_KB(Names) :-
    csv_read_file('countries.csv', Rows, [separator(0';)]),
    maplist([row(Name,_), Name]>>true, Rows, Names).
The answer given by @slago can be simplified by using arg/3 instead of a lambda expression, making it slightly more efficient:
read_KB(Names) :-
    csv_read_file('countries.csv', Rows, [separator(0';)]),
    maplist(arg(1), Rows, Names).
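With the sample data shown above, either version should yield something like the following (a sketch, assuming the file parses into row/2 terms as described):
?- read_KB(Names).
Names = ['Afghanistan', 'Albania', 'Algeria', 'Andorra', 'Angola'|...].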

Values from xml data field in mysql

I would like to know if there is a query to select values from all of my XML data fields. There are around 1k rows that contain XML data, all with almost the same structure. With ExtractValue I was able to extract one data field, but as soon as more than one row is part of my subquery it breaks.
Here is an example of the XML data inside my DB:
<EDLXML version="1.0.0" type="variable">
<properties id="template_variables">
<deliveredDuration>4444</deliveredDuration>
<deliveredNum>1</deliveredNum>
<comment/>
<projectname>cdfkusen</projectname>
<name>kral_schalke_trenink</name>
<order_id>372846</order_id>
<cutlistId>2763_ID</cutlistId>
<bcutlistId>51ddgf7a6-1268-1gdfged-95e6-5254000e8e1a</bcutlistId>
<num>1</num>
<duration>177760</duration>
<quotaRelevantDuration>0</quotaRelevantDuration>
<organisationUid>OrgName</organisationUid>
<organisationQuota>333221233</organisationQuota>
<organisationUsedQuota>123</organisationUsedQuota>
<organisationContingentIrrelevantQuotaUsed>54</organisationContingentIrrelevantQuotaUsed>
<userDbId>7xxxx84-eb9b-11fdsb-9ddd1-52cccccde1a</userDbId>
<userId>xxxx</userId>
<userRights>RH_DBC</userRights>
<firstName>DThom</firstName>
<lastName>Test</lastName>
<userMail>xxx#ccc.cz</userMail>
<language>English</language>
<orderTimestamp>1659448080</orderTimestamp>
<stitching>false</stitching>
<transcode>NO</transcode>
<destination>Standard</destination>
<collaboration>private</collaboration>
<premiumUser>false</premiumUser>
<priority>normal</priority>
<userMail2>xxx#ccc.cz</userMail2>
<cutlistItems>
<cutListId>125124_KFC</cutListId>
<cutListItemId cutlistItemDeliveryStatus="&#10004" cutlistItemDStatusMessage="delivered">112799</cutListItemId>
<bmarkerId>8f16ff80-1269-11ed-95e6-5254000e8e1a</bmarkerId>
<videoId>2912799</videoId>
<counter>1</counter>
<frameInSpecified>true</frameInSpecified>
<frameIn>15638</frameIn>
<frameOutSpecified>true</frameOutSpecified>
<frameOut>20082</frameOut>
<tcIn>00:10:25:13</tcIn>
<tcOut>00:13:23:07</tcOut>
<duration>177760</duration>
<BroadcastDate>2021-07-24</BroadcastDate>
<eventDate>2021-07-24</eventDate>
<resolutionFacet>HD</resolutionFacet>
<provider>DBC</provider>
<technicalrightholders>RH_DBC</technicalrightholders>
<rights>DBC</rights>
<materialType>DP</materialType>
<targetFilename>kral_schalke_trenink</targetFilename>
</cutlistItems>
</properties>
</EDLXML>
I get the right value from the query if I do:
SELECT ExtractValue((SELECT job_xml from cutlist where job_xml is not null LIMIT 1), '//deliveredNum');
But when I change the LIMIT amount I get back: Subquery returns more than 1 row.
ExtractValue expects two string arguments. When your subquery returns more than one row, you are no longer passing a single string as the first argument (you are passing a set of results).
Instead of calling ExtractValue once for your entire query, call it once for every row, like:
SELECT ExtractValue(job_xml, '//deliveredNum')
FROM cutlist
WHERE job_xml IS NOT NULL
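If you want several of the XML fields at once, the same pattern extends to multiple ExtractValue calls per row; a sketch, using element names taken from the sample XML above:
SELECT
    ExtractValue(job_xml, '//deliveredNum') AS deliveredNum,
    ExtractValue(job_xml, '//projectname')  AS projectname,
    ExtractValue(job_xml, '//order_id')     AS order_id
FROM cutlist
WHERE job_xml IS NOT NULL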

Rails dynamically add condition to mysql query

How do I add a condition dynamically to a SQL query?
For example, if I have one element it will look like
query = ['one_element']
User.where('name LIKE ?', "%#{query[0]}%")
but if there is more than one
User.where('name LIKE ? and name LIKE ? and ...', "%#{query[0]}%", "%#{query[1]}%", ...) # and so on
I'm using MySQL.
So my main goal is to split the search query if it contains more than 2 words and search by each word separately in one SQL query:
not where(name: 'john dou') but where(name: 'john' and name: 'dou')
If you have a version of MySQL that supports RLIKE or REGEXP_LIKE:
User.where("RLIKE(name, :name_query)", name_query: query.join("|"))
Otherwise, you'll have to build the relation manually with ActiveRecord's or operator:
# In the User model
scope :name_like, ->(name_part) {
  return self.all if name_part.blank?
  where("name LIKE :name_query", name_query: "%#{name_part}%")
}

scope :names_like, ->(names) {
  relation = User.name_like(names.shift)
  names.each { |name| relation = relation.or(name_like(name)) }
  relation
}
Then you can pass it an array of any name partials you want:
query = ["john", "dou"]
User.names_like(query)
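For reference, with query = ["john", "dou"] the chained scopes above should generate SQL roughly of this shape (exact quoting depends on the adapter):
SELECT users.* FROM users WHERE (name LIKE '%john%' OR name LIKE '%dou%')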
First split the string by its separator, like query.split(' '); this will give you an array of words. Then you can use it like below in Rails:
User.where(name: ['John', 'dou'])
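If every word has to match instead (an AND of LIKEs, as in the question's where(name: 'john' and name: 'dou') example), one alternative sketch is to chain where calls, which ActiveRecord combines with AND:
words = 'john dou'.split(' ')
relation = words.reduce(User.all) { |rel, word| rel.where('name LIKE ?', "%#{word}%") }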

Returning MySQL data as an OBJECT rather than an ARRAY (Knex)

Is there a way to get the output of a MySQL query to list rows in the following structure
{
  1: { voo: bar, doo: dar },
  2: { voo: mar, doo: har }
}
as opposed to
[
  { id: 1, voo: bar, doo: dar },
  { id: 2, voo: mar, doo: har }
]
which I then have to loop through to create the desired object?
I should add that within each row I am also concatenating results to form an object, and from what I've experimented with you can't GROUP_CONCAT inside a GROUP_CONCAT. As follows:
knex('table').select(
  'table.id',
  'table.name',
  knex.raw(
    `CONCAT("{", GROUP_CONCAT(DISTINCT
      '"',table.voo,'"',':','"',table.doo,'"'),
    "}") AS object`
  )
)
.groupBy('table.id')
Could GROUP BY be leveraged in any way to achieve this? Generally I'm inexperienced at SQL and don't know what's possible and what's not.
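One way to get the keyed shape is to reshape the rows in JavaScript after the query resolves; a minimal sketch (inside an async function), assuming each row comes back as a plain object with an id plus the other columns:
const rows = await knex('table').select('table.id as id', 'table.voo as voo', 'table.doo as doo');

// key each row by its id, dropping the id from the value object
const byId = rows.reduce((acc, { id, ...rest }) => {
  acc[id] = rest;
  return acc;
}, {});
byId then has the shape from the first snippet, e.g. { 1: { voo: 'bar', doo: 'dar' }, ... }.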

Django MySQL Query Json field

I have a MySQL database with a table containing a JSON field called things. The JSON looks like this
things = {"value1": "phil", "value2": "jill"}
I have a collection of objects that I have pulled from the database via
my_things = Name_table.objects.values()
Now, I'd like to filter the my_things collection by one of the JSON fields. I've tried this
my_things = my_things.filter(things__contains={'value': 'phil'})
which returned an empty collection. I've also tried
my_things = my_things.filter(things={'value': 'phil'})
and
my_things = my_things.filter(things__exact={'value': 'phil'})
I'm using Django 1.10 and MySQL 5.7.
Thoughts?
It depends on how exactly you store the JSON in the field. If you use django-jsonfield, then your things will be stored as a string without spaces, with the keys and values inside quotation marks: '{"value1":"phil","value2":"jill"}'.
Then, per the docs:
my_things = my_things.filter(things__contains='"value1":"phil"')
should return your filtered QuerySet, because
>>> tmp_str = '{"value1":"phil","value2":"jill"}'
>>> '"value1":"phil"' in tmp_str
True