I am using DBIx::Class and I have a query like this:
$groups = $c->model('DB::Project')->search(
    { "sessions.user_id" => $c->user->id, done_yn => 'y' },
    {
        select => [ "name", "id", \'SUM(UNIX_TIMESTAMP(end_time)-UNIX_TIMESTAMP(start_time)) as total_time' ],
        join   => 'sessions',
    }
);
I'd like to be able to get the value of SUM(UNIX_TIMESTAMP(end_time)-UNIX_TIMESTAMP(start_time)), but because this is not a real column in the table, referencing total_time for a DBIx::Class::Row object doesn't seem to work. Does anyone know how I can get these temporary columns? Thanks!
The select docs describe perfectly how to achieve what you're trying to accomplish.
It's also recommended to avoid literal SQL when possible; here you can use { sum => \'UNIX_TIMESTAMP(end_time)-UNIX_TIMESTAMP(start_time)' } instead.
The 'as' in the literal SQL isn't what gives the column a name DBIx::Class can see; you have to use either the as search attribute or, better, the columns shortcut instead of select+as.
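A minimal sketch (untested) that combines both suggestions, using the columns shortcut with the { sum => ... } function syntax and reading the virtual column back with get_column:

my $groups = $c->model('DB::Project')->search(
    { 'sessions.user_id' => $c->user->id, done_yn => 'y' },
    {
        columns => [
            'name',
            'id',
            # alias => selection; no literal 'as' needed
            { total_time => { sum => \'UNIX_TIMESTAMP(end_time) - UNIX_TIMESTAMP(start_time)' } },
        ],
        join => 'sessions',
    }
);

# Virtual columns don't get accessors on the Row object, so read them with get_column:
while ( my $row = $groups->next ) {
    print $row->get_column('total_time'), "\n";
}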
I have a query like this...
SELECT *
FROM `000027`,`000028`
WHERE `000027`.id=(SELECT max(`000027`.id) FROM `000027`)
AND `000028`.id=(SELECT max(`000028`.id) FROM `000028`)
which returns something like this in phpMyAdmin...
id time value id time value
However, in react.js it only returns one of these, like this...
id time value
Two questions: why is it doing this, and how can I get it to return both instead of one?
My node.js code...
const sqlSelect = "SELECT * FROM `000027`,`000028` WHERE `000027`.id=(SELECT max(`000027`.id) FROM `000027`) AND `000028`.id=(SELECT max(`000028`.id) FROM `000028`)"
dbPlant.query(sqlSelect, (err, result) => {
    console.log(result)
    res.send(result)
    res.end()
})
and it sends this back with only one RowDataPacket when it should be two, or two of each of those values...
[
RowDataPacket {
id: 652,
time: 2021-01-24T17:28:01.000Z,
value: '262'
}
]
Your two tables have some column names in common. It's okay to have repeated column names in a result set in the mysql client, but some programming interfaces map each row of a result set into a hash, where the column names are the keys. So if you have duplicate column names, one naturally overwrites the other.
The remedy is to define column aliases for one or the other of each duplicate, so they are mapped into distinct keys in the result set.
You must do this one column at a time. Sorry, you can't use SELECT * anymore (you shouldn't use SELECT * anyway). There is no "automatic alias all columns" option.
SELECT
`000027`.id AS id27,
`000027`.time AS time27,
`000027`.value AS value27,
`000028`.id AS id28,
`000028`.time AS time28,
`000028`.value AS value28
FROM `000027`,`000028`
WHERE `000027`.id=(SELECT max(`000027`.id) FROM `000027`)
AND `000028`.id=(SELECT max(`000028`.id) FROM `000028`)
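With those aliases, the single combined row comes back with six distinct keys instead of three overwritten ones. As a sketch, reusing the handler from the question and assuming sqlSelect now holds the aliased query above:

dbPlant.query(sqlSelect, (err, result) => {
    if (err) return res.status(500).send(err)
    // result[0] now exposes both tables' values, e.g.
    // result[0].id27, result[0].time27, result[0].value27,
    // result[0].id28, result[0].time28, result[0].value28
    res.send(result)
})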
How to add conditions dynamically to a SQL query
For example, if I have one element then it will look like
query=['one_element']
User.where('name LIKE ?, %"#{query[0]}"%')
but if there is more than one
User.where('name LIKE ? and LIKE ? and Like... , %"#{query}"%', ..so on)
I'm using MySQL.
My main goal is to split the search query if it contains more than two words and search by each word separately in one SQL query,
not where(name: 'john dou') but where(name: 'john' AND name: 'dou').
If you have a version of MySQL that supports RLIKE or REGEXP_LIKE:
User.where("RLIKE(name, :name_query)", name_query: query.join("|"))
Otherwise, you'll have to build it manually with the ActiveRecord or method:
# In the User model
scope :name_like, ->(name_part) {
  return self.all if name_part.blank?
  where("name LIKE :name_query", name_query: "%#{name_part}%")
}

scope :names_like, ->(names) {
  # Seed the relation with the first partial, then OR in the rest
  relation = User.name_like(names.shift)
  names.each { |name| relation = relation.or(name_like(name)) }
  relation
}
Then you can pass it an array of any name partials you want:
query = ["john", "dou"]
User.names_like(query)
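For those two partials the relation resolves to roughly this SQL (a sketch; the exact quoting ActiveRecord emits may differ):

SELECT `users`.* FROM `users`
WHERE (name LIKE '%john%' OR name LIKE '%dou%')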
First split the string by its separator, like query.split(' '); this will give you an array of words. Then you can use it like below in Rails.
User.where(name: ['John', 'dou'])
Is there a way to get the output of a MySQL query to list rows in the following structure
{
1:{voo:bar,doo:dar},
2:{voo:mar,doo:har}
}
as opposed to
[
{id:1,voo:bar,doo:dar},
{id:2,voo:mar,doo:har}
]
which I then have to loop through to create the desired object?
I should add that within each row I am also concatenating results to form an object, and from what I've experimented with, you can't GROUP_CONCAT inside a GROUP_CONCAT. As follows:
knex('table').select(
    'table.id',
    'table.name',
    knex.raw(
        `CONCAT("{", GROUP_CONCAT(DISTINCT
        '"', table.voo, '"', ':', '"', table.doo, '"'),
        "}") AS object`
    )
)
.groupBy('table.id')
Could GROUP BY be leveraged in any way to achieve this? Generally I'm inexperienced at SQL and don't know what's possible and what's not.
I've been trying to use dynamic columns with an instance of MariaDB v10.1.12.
First, I send the following query:
INSERT INTO savedDisplays (user, name, body, dataSource, params) VALUES ('Marty', 'Hey', 'Hoy', 'temp', COLUMN_CREATE('type', 'tab', 'col0', 'champions', 'col1', 'averageResults'));
Where params' type was defined as a blob, just like the documentation suggests.
The query is accepted, the table updated. If I COLUMN_CHECK the results, it tells me it's fine.
But when I try to select:
"SELECT COLUMN_JSON(params) AS params FROM savedDisplays;
I get a {type: "Buffer", data: Array} containing binary returned to me, instead of the {"type":"tab", "col0":"champions", "col1":"averageResults"} I expect.
EDIT: I can use COLUMN_GET just fine, but I need every column inside the params field, and I need to check the type property first to know what kind of and how many columns there are in the JSON / params field. I could probably make it work still, but that would require multiple queries, as opposed to only one.
Any ideas?
Try:
SELECT CONVERT(COLUMN_JSON(params) USING utf8) AS params FROM savedDisplays
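On the node.js side the aliased column then arrives as a plain JSON string instead of a Buffer, so it can be parsed directly. A sketch, assuming a mysql-style callback API and a connection object named conn:

conn.query(
    "SELECT CONVERT(COLUMN_JSON(params) USING utf8) AS params FROM savedDisplays",
    (err, rows) => {
        if (err) throw err
        const params = JSON.parse(rows[0].params)
        // e.g. { type: 'tab', col0: 'champions', col1: 'averageResults' }
        console.log(params.type)
    }
)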
In MariaDB 10 this works on any table:
SELECT CONVERT(COLUMN_JSON(COLUMN_CREATE('t', text, 'v', value)) USING utf8)
as json FROM test WHERE 1 AND value LIKE '%12345%' LIMIT 10;
output in node.js
[ TextRow { json: '{"t":"test text","v":"0.5339044212345805"}' } ]
I want to import a lot of information from a CSV file into Elasticsearch.
My issue is that I don't know how to use an equivalent of substring to select part of a CSV column.
In my case I have a field date (YYYYMMDD) and I want to have (YYYY-MM-DD).
I use filter, mutate, and gsub like this:
filter
{
    mutate
    {
        gsub => ["date", "[0123456789][0123456789][0123456789][0123456789][0123456789][0123456789][0123456789][0123456789]", "[0123456789][0123456789][0123456789][0123456789]-[0123456789][0123456789]-[0123456789][0123456789]"]
    }
}
But my result is wrong.
I can identify my string, but I don't know how to extract part of it.
My target is to have something like:
gsub => ["date", "[0123456789][0123456789][0123456789][0123456789][0123456789][0123456789][0123456789][0123456789]","%{date}(0..3}-%{date}(4..5)-%{date}"(6..7)]
%{date}(0..3) : select from the first to the fourth character of the CSV date column
You can use the ruby plugin to do the conversion. As you say, you will have a date field, so we can use it directly in Ruby:
filter {
    ruby {
        code => "
            date = Time.strptime(event['date'], '%Y%m%d')
            event['date_new'] = date.strftime('%Y-%m-%d')
        "
    }
}
The date_new field is the format you want.
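Note that event['date'] is the pre-5.0 event API. On Logstash 5 and later the same idea uses the get/set accessors instead (sketch):

filter {
    ruby {
        code => "
            date = Time.strptime(event.get('date'), '%Y%m%d')
            event.set('date_new', date.strftime('%Y-%m-%d'))
        "
    }
}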
First, you can use a regexp range to match a sequence, so rather than [0123456789], you can do [0-9]. If you know there will be 4 numbers, you can do [0-9]{4}.
Second, you want to "capture" parts of your input string and reorder them in the output. For that, you need capture groups:
([0-9]{4})([0-9]{2})([0-9]{2})
where parens define the groups. Then you can reference those on the right side of your gsub:
\1-\2-\3
\1 is the first capture group, etc.
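Put together in the mutate filter from the question, the gsub would look roughly like this (a sketch using the character ranges and capture groups described above):

filter {
    mutate {
        gsub => ["date", "([0-9]{4})([0-9]{2})([0-9]{2})", "\1-\2-\3"]
    }
}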
You might also consider getting these three fields when you do the grok{}, and then putting them together again later (perhaps with add_field).