Second Subquery Inside INSERT Into saves int 0 - mysql

Read it carefully: we have this query, which inserts values into the table called users. For the member_id value we run a subquery that selects the member's id from the table admin_users. The reason there are single quotes with + is that we are trying to manipulate the query. At the moment the first subquery works correctly, but what happens with the second subquery?
The second subquery selects pass from the table settings; the settings table and the pass column definitely exist and there is only one record, but this second subquery inside the INSERT INTO is not returning anything. When the INSERT INTO finishes executing, all the values are stored correctly except the notes column, which ends up as 0. I don't know why, but if you delete all the ''+ the whole SQL statement works correctly; in this case, though, we cannot delete the ''+ because we are altering the query. I need a solution for this issue.
INSERT INTO `users` (`username`,`password`,`number`,`member_id`,`exp_date`,`notes`) VALUES ('balvin','sjeneoeoe','3',''+(select id from `admin_users` where username = 'TEST')+'','1644622354','' + (select pass from `settings`));#;');
I have also tried modifying the second subquery like this, but it didn't work:
'' + (select pass from `settings` LIMIT 1)
'' + (select pass from `settings` GROUP BY pass LIMIT 1)
'' + (select pass from `settings` where id = 1 LIMIT 1)
Perhaps the error is in the datatype of the pass column in settings or of the notes column in users.
CREATE TABLE `users` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`member_id` int(11) DEFAULT NULL,
`username` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`password` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`exp_date` int(11) DEFAULT NULL,
`notes` mediumtext COLLATE utf8_unicode_ci NOT NULL,
`number` int(11) NOT NULL DEFAULT '1',
PRIMARY KEY (`id`),
KEY `member_id` (`member_id`),
KEY `exp_date` (`exp_date`),
KEY `username` (`username`),
KEY `password` (`password`)
) ENGINE=InnoDB AUTO_INCREMENT=1702894 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
CREATE TABLE `settings` (
`id` int(11) NOT NULL,
`name` mediumtext COLLATE utf8_unicode_ci NOT NULL,
`pass` mediumtext COLLATE utf8_unicode_ci NOT NULL,
...
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

Write your insert as a select, not values.
Untested, but something like:
INSERT INTO `users` (`username`,`password`,`number`,`member_id`,`exp_date`,`notes`)
select 'balvin','sjeneoeoe','3',
(select id from `admin_users` where username = 'TEST'),
'1644622354',
(select pass from `settings`);
Note each sub-query must return a single row.
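As a side note, the 0 in notes most likely comes from the + itself: in MySQL, + is always numeric addition, so both operands are converted to numbers; a non-numeric pass value becomes 0, while the numeric id from admin_users survives the conversion. A small sketch, assuming pass holds a non-numeric string such as 'secret':
SELECT '' + 'secret';  -- both sides are cast to numbers, result is 0
SELECT '' + '42';      -- a numeric string survives the cast, result is 42
SELECT CONCAT('', (SELECT pass FROM `settings` LIMIT 1));  -- string context, keeps the text
If the ''+ is only there to splice the subquery in, CONCAT('', ...) keeps the value as a string instead of forcing it to a number.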

Related

How to store translations in MySQL to use with joins?

I have a table that contains all translations of words:
CREATE TABLE `localtexts` (
`Id` int(11) NOT NULL,
`Lang` char(2) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT 'pe',
`Text` varchar(300) DEFAULT NULL,
`ShortText` varchar(100) NOT NULL,
`DbVersion` timestamp NOT NULL DEFAULT current_timestamp(),
`Status` int(11) NOT NULL DEFAULT 1
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
As example there is a table that refers to localtexts:
CREATE TABLE `composes` (
`Status` int(11) NOT NULL DEFAULT 1,
`Id` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
The table above has a foreign key Id to localtexts.Id. When I need to get a word in English I do:
SELECT localtexts.text,
composes.status
FROM composes
LEFT JOIN localtexts ON composes.Id = localtexts.Id
WHERE localtexts.Lang = 'en';
I'm concerned about the performance of this approach when there are a lot of tables to join with localtexts.
You might find that adding the following index to the localtexts table would speed up the query:
CREATE INDEX idx ON localtexts (Lang, id, text);
This index covers the WHERE clause, join, and SELECT.
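To confirm the index is actually being used, a quick check (assuming the index above has been created) is to run EXPLAIN on the query and look for idx in the key column and Using index under Extra:
EXPLAIN SELECT localtexts.Text, composes.Status
FROM composes
LEFT JOIN localtexts ON composes.Id = localtexts.Id
WHERE localtexts.Lang = 'en';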

mysql procedure : Result consisted of more than one row with select statement

I am creating a procedure for my regularly repeated job.
Within it, there is a step that inserts multiple rows from one table into a temporary table.
CREATE TABLE `tmpUserList` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_type` varchar(45) COLLATE utf8_unicode_ci NOT NULL,
`first_name` varchar(45) COLLATE utf8_unicode_ci NOT NULL,
`last_name` varchar(45) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
..... some more queries.
INSERT INTO tmpUserList (
SELECT id, user_type,first_name,last_name, from user where id in (usersId)
);
SELECT * FROM tmpUserList; -- return the result
But it gives me the error: Result consisted of more than one row
Correct INSERT SELECT syntax:
INSERT INTO tmpUserList(id, user_type,first_name,last_name)
SELECT id, user_type,first_name,last_name
FROM user
WHERE id IN (usersId);
If usersId contains multiple values you could use:
WHERE FIND_IN_SET(id, usersId); -- table scan
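For illustration, a sketch of that variant with a hypothetical comma-separated usersId value:
SET @usersId = '3,7,42';  -- hypothetical list of ids
INSERT INTO tmpUserList (id, user_type, first_name, last_name)
SELECT id, user_type, first_name, last_name
FROM user
WHERE FIND_IN_SET(id, @usersId);  -- matches ids 3, 7 and 42, but cannot use an index on id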
Related: MySQL Prepared statements with a variable size variable list

Mysql update query with inner table join

I have a problem with an update query in MySQL.
When I try this
update planned_expense p1
set deleted=1
where case_id=204
and deleted =0
and type='MONTHLY'
and planned_date>='2017-04-01'
and id > (select min(id) from planned_expense p2 where
p2.case_id = p1.case_id
and p2.planned_date = p1.planned_date
and p2.account = p1.account
and p2.type = p1.type and p2.deleted = 0)
I get
You can't specify target table 'p1' for update in FROM clause
And when I try
update planned_expense p1
set deleted=1
where case_id=204
and deleted =0
and type='MONTHLY'
and planned_date>='2017-04-01'
and id > (select min(id)
from (select * from planned_expense p2
where p2.case_id=p1.case_id and
p2.planned_date=p1.planned_date
and p2.account=p1.account and p2.type=p1.type and p2.deleted=0) p3)
I get
Unknown column 'p1.case_id' in 'where clause'
What should I write in order to update those records?
Thank you.
This is my table
DROP TABLE IF EXISTS `bes-ers`.`planned_expense`;
CREATE TABLE `bes-ers`.`planned_expense` (
`ID` bigint(20) NOT NULL AUTO_INCREMENT,
`case_id` bigint(20) DEFAULT NULL,
`deleted` tinyint(1) DEFAULT '0',
`planned_date` datetime DEFAULT NULL,
`addition_mark` int(11) DEFAULT NULL,
`code` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`planned_amount` decimal(22,4) DEFAULT NULL,
`account` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`type` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`search_field` varchar(4096) COLLATE utf8_unicode_ci DEFAULT NULL,
`description` varchar(4096) COLLATE utf8_unicode_ci DEFAULT NULL,
`created_by_id` bigint(20) DEFAULT NULL,
`locked` tinyint(4) DEFAULT '0',
PRIMARY KEY (`ID`),
KEY `FK_planned_expenses_1` (`case_id`),
KEY `FK_planned_expenses_created_by_id` (`created_by_id`),
KEY `planned_expense_planned_date` (`planned_date`),
KEY `planned_expense_type` (`type`),
KEY `planned_expense_deleted` (`deleted`),
CONSTRAINT `FK_planned_expenses_1` FOREIGN KEY (`case_id`) REFERENCES `bankruptcy_case` (`ID`) ON DELETE SET NULL ON UPDATE SET NULL,
CONSTRAINT `FK_planned_expenses_created_by_id` FOREIGN KEY (`created_by_id`) REFERENCES `user` (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=13172954 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
MySQL doesn't allow updating a table that is also used in the subquery's SELECT, but you can avoid this problem by using a derived (dynamic) select table, e.g.:
update planned_expense
set deleted=1
where case_id=204
and deleted =0
and type='MONTHLY'
and planned_date>='2017-04-01'
and id > ( select t.min_id from (select min(id) min_id
from planned_expense p2
INNER JOIN planned_expense p1 on p2.case_id=p1.case_id
and p2.planned_date=p1.planned_date
and p2.account=p1.account
and p2.type=p1.type and p2.deleted=0) t )
Try adding a composite index such as:
create index idx_test on planned_expense (case_id, planned_date, account)
As explained by #scaisEdge, you have to use a join. To speed up your query,
try creating a composite key as per your WHERE clause.
Something like:
KEY case_id_deleted_type_planned_date_id (case_id, deleted, type, planned_date, id)
Use EXPLAIN to check whether your query is using the proper indexes or not.
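For example, prefixing the final statement with EXPLAIN (supported for UPDATE since MySQL 5.6) and checking the key column is a quick way to verify, assuming the composite index has been created:
EXPLAIN UPDATE planned_expense
SET deleted=1
WHERE case_id=204 AND deleted=0 AND type='MONTHLY' AND planned_date>='2017-04-01'
AND id > ( SELECT t.min_id FROM (SELECT MIN(id) min_id
FROM planned_expense p2
INNER JOIN planned_expense p1 ON p2.case_id=p1.case_id
AND p2.planned_date=p1.planned_date
AND p2.account=p1.account
AND p2.type=p1.type AND p2.deleted=0) t );
-- the key column for planned_expense should show the composite index once it exists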
Thank you all, I did what I wanted by creating a temp table and then working with it.

mysql match against not matching text in brackets

I'm trying to use MATCH ... AGAINST to return search results.
I have a problem, though, where it's not returning results when the matching text is in brackets. Just getting rid of the brackets isn't an option, I'm afraid.
So, running the SQL Fiddle below, I would expect it to return two results, not one:
CREATE TABLE `courses` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`course_code` varchar(100) NOT NULL,
`course_name` varchar(100) DEFAULT NULL,
`startdate` varchar(100) DEFAULT NULL,
`starttimestamp` varchar(45) DEFAULT NULL,
`prospectus_title` varchar(500) DEFAULT NULL,
PRIMARY KEY (`id`),
FULLTEXT KEY `info` (`course_name`,`prospectus_title`,`course_code`)
) ENGINE=MyISAM AUTO_INCREMENT=981074 DEFAULT CHARSET=utf8;
INSERT INTO `courses` (`id`, `course_code`, `course_name`, `startdate`, `starttimestamp`, `prospectus_title`) VALUES
('1', '1234', 'vrqtest', 'time','time', 'vrqtest'),
('2', '5678', '(vrq)test', 'time','time','(vrq)test');
SELECT * FROM courses force index(info)
WHERE starttimestamp IS NOT NULL AND (
MATCH ( course_name ) against ('vrq*' in boolean mode ))
SQL Fiddle
That returns only the first record, but it should also return the second.
Any ideas?
Here's why:
https://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_ft_min_word_len
ft_min_word_len
The minimum length of the word to be included in a
FULLTEXT index. Defaults to 4
The vrq token in (vrq)test is only three characters, so it is too short to be indexed. Add an extra character (e.g. (vrqa)test) and it is matched.
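If shorter tokens need to be searchable, one option (a sketch, assuming you control the server configuration) is to lower ft_min_word_len and rebuild the MyISAM full-text index:
# in my.cnf / my.ini, then restart the server
[mysqld]
ft_min_word_len = 3
-- rebuild the FULLTEXT index on the MyISAM table afterwards:
REPAIR TABLE courses QUICK;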

MySQL optimize count query

I've got a question about MySQL performance.
These are my tables:
(about 140.000 records)
CREATE TABLE IF NOT EXISTS `article` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`label` varchar(256) COLLATE utf8_unicode_ci NOT NULL,
`title` varchar(256) COLLATE utf8_unicode_ci NOT NULL,
`intro` text COLLATE utf8_unicode_ci NOT NULL,
`content` text COLLATE utf8_unicode_ci NOT NULL,
`date` int(11) NOT NULL,
`active` int(1) NOT NULL,
`language_id` int(11) NOT NULL,
`category_id` int(11) NOT NULL,
`indexed` int(1) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=132911 ;
(about 400.000 records)
CREATE TABLE IF NOT EXISTS `article_category` (
`article_id` int(11) NOT NULL,
`category_id` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
RUNNING THIS COUNT QUERY:
SELECT SQL_NO_CACHE COUNT(id) as total
FROM (`article`)
LEFT JOIN `article_category` ON `article_category`.`article_id` = `article`.`id`
WHERE `article`.`language_id` = 1
AND `article_category`.`category_id` = '<catid>'
This query takes a lot of resources, so I am wondering how to optimize it.
After executing, it's being cached, so after the first run I am fine.
RUNNING THE EXPLAIN FUNCTION:
AFTER CREATING AN INDEX:
ALTER TABLE `article_category` ADD INDEX ( `article_id` , `category_id` ) ;
After adding indexes and changing LEFT JOIN to JOIN the query runs a lot faster!
Thanks for these fast replies :)
QUERY I USE NOW (I removed the language_id because it was not that necessary):
SELECT COUNT(id) as total
FROM (`article`)
JOIN `article_category` ON `article_category`.`article_id` = `article`.`id`
AND `article_category`.`category_id` = '<catid>'
I've read something about forcing an index, but I think that's not necessary anymore because the tables are already indexed, right?
Thanks a lot!
Martijn
You haven't created the necessary indexes on the tables.
Table article_category - create a compound index on (article_id, category_id)
Table article - create a compound index on (id, language_id)
If this doesn't help, post the EXPLAIN output.
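For reference, a sketch of the suggested indexes (the index names are just illustrative):
ALTER TABLE `article_category` ADD INDEX `idx_article_category` (`article_id`, `category_id`);
ALTER TABLE `article` ADD INDEX `idx_article_language` (`id`, `language_id`);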
The columns used in a JOIN condition should have an index, so you need to index article_id.