Sorry, this one is hard to explain in the title.
I have a simple table like this:
CREATE TABLE `categories` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`title` varchar(125) NOT NULL,
`desc` text NOT NULL,
`ordering` int(10) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`id`)
)
The ordering column is normally set by the client: users can drag and reorder these categories, so it is not auto-incremented.
My question is: when I want to insert a row outside the client using a direct SQL INSERT, is there a way to get the max of the ordering column in the same statement?
Something like:
INSERT INTO `categories` (title, desc, ordering)
VALUES ('test title', 'description', (SELECT max(ordering) FROM `categories`)+1);
I've tried a dozen variations on this theme with no success.
The trick to get this to work is to avoid the VALUES clause and instead use a SELECT as the row source for the INSERT. Note that desc is a reserved word in MySQL, so it has to be quoted with backticks:
INSERT INTO `categories` (`title`, `desc`, `ordering`)
SELECT 'test title', 'description', MAX(`ordering`)+1 FROM `categories`
NOTE: This may work with MyISAM tables, which disallow concurrent inserts. But for other engines that allow concurrent INSERTs, this approach is likely "broken by design": there is no guarantee that two INSERT statements running concurrently won't generate the same value for the ordering column.
The statement is also broken when the categories table is empty, because MAX(ordering) returns NULL. An IFNULL function fixes that:
SELECT 'test title', 'description', IFNULL(MAX(ordering),0)+1 FROM `categories`
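Putting the pieces together, here is a sketch you can run with Python's sqlite3 module (the statement's syntax is essentially the same in MySQL; the table is a stripped-down stand-in for categories):

```python
import sqlite3

# Stand-in for the categories table; "desc" is double-quoted because it is
# a reserved word (MySQL would use backticks instead).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE categories (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        title TEXT NOT NULL,
        "desc" TEXT NOT NULL,
        ordering INTEGER NOT NULL DEFAULT 0
    )
""")

insert = """
    INSERT INTO categories (title, "desc", ordering)
    SELECT ?, ?, IFNULL(MAX(ordering), 0) + 1 FROM categories
"""
# IFNULL(MAX(ordering), 0) + 1 yields 1 on the empty table...
conn.execute(insert, ("first", "d1"))
# ...and current-max-plus-one afterwards.
conn.execute(insert, ("second", "d2"))

print(conn.execute("SELECT title, ordering FROM categories ORDER BY id").fetchall())
# [('first', 1), ('second', 2)]
```

The concurrency caveat above still applies: this only hands out unique ordering values if inserts are serialized.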
Try this:
insert into `categories` (`title`, `desc`, `ordering`)
select 'test title','description', max(ordering) + 1 FROM `categories`
SCENARIO:
I have one large table (let's call it "WordTable") with a list of words (let's call the field "theWord") that could have 10,000+ records.
I also have a large table (let's call it "MySentences") with a VARCHAR field (let's call the field "theSentence") that contains many varied sentences - it could have millions of records.
QUESTION:
What SQL could I write for the MySQL database to give me a list of which records in MySentences.theSentence contain any of the words from WordTable.theWord ?
Since there are many records in both tables, using numerous Like statements is not feasible. Would FullText Search allow some capability here?
By the way, a "sentence" does not always have spaces; it could be just an unbroken run of letters.
Here are some MySQL scripts to illustrate the scenario:
CREATE TABLE `MySentences` (
`id` int(11) NOT NULL,
`theSentence` varchar(1000) NOT NULL
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=latin1;
INSERT INTO `MySentences` (`id`, `theSentence`) VALUES
(1, 'hereisatestsentence'),
(2, 'asdfasdfadsf'),
(3, 'today is a blue sky'),
(4, 'jk2k2lkjskylkjdsf'),
(5, 'ddddddd'),
(6, 'nothing'),
(7, 'sometest');
CREATE TABLE `WordTable` (
`id` int(11) NOT NULL,
`theWord` varchar(50) NOT NULL
) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=latin1;
INSERT INTO `WordTable` (`id`, `theWord`) VALUES
(1, 'test'),
(2, 'house'),
(3, 'blue'),
(4, 'sky');
ALTER TABLE `MySentences`
ADD PRIMARY KEY (`id`);
ALTER TABLE `WordTable`
ADD PRIMARY KEY (`id`);
ALTER TABLE `MySentences`
MODIFY `id` int(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=8;
ALTER TABLE `WordTable`
MODIFY `id` int(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=5;
I made a query using the LIKE operator in the JOIN clause, which finds any sentence that contains a word. The LIKE pattern uses % wildcards, which match anything.
SELECT
A.theSentence, B.theWord
FROM
MySentences A
INNER JOIN WordTable B ON A.theSentence LIKE CONCAT('%',B.theWord,'%');
If you are interested in just the sentences that were matched, you can use DISTINCT to see each sentence once:
SELECT
DISTINCT A.theSentence
FROM
MySentences A
INNER JOIN WordTable B ON A.theSentence LIKE CONCAT('%',B.theWord,'%');
You can split your string into rows using something like this:
SQL split values to multiple rows
You need a separator character (probably a space or underscore), but be careful: you may also have to strip punctuation such as , . : ;
Then you join that result to your WordTable to find which words are present.
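As a rough sketch of the split-then-match idea in plain Python (word list and sentences taken from the example tables; note that whole-word matching misses words embedded in unbroken strings):

```python
import re

# Word list and sentences from the example tables.
words = {"test", "house", "blue", "sky"}
sentences = [
    "today is a blue sky",
    "a test, with punctuation;",
    "hereisatestsentence",  # no separators: whole-word matching misses 'test'
]

results = {}
for s in sentences:
    # Split on anything that is not a letter, mimicking "remove , . : ;".
    tokens = set(re.split(r"[^a-zA-Z]+", s.lower()))
    results[s] = sorted(words & tokens)
    print(s, "->", results[s])
# today is a blue sky -> ['blue', 'sky']
# a test, with punctuation; -> ['test']
# hereisatestsentence -> []
```

This is why the LIKE approach, slow as it is, is the only one of the two that handles space-free "sentences".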
I have two tables with different schemas:
Base A, table T1:
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`title` varchar(255) NOT NULL DEFAULT '',
`uid` int(11) NOT NULL DEFAULT '0',
`language` varchar(12) NOT NULL DEFAULT ''
Base B, table T2:
`ID` int(11) NOT NULL AUTO_INCREMENT,
`Type` varchar(10) COLLATE utf8_unicode_ci DEFAULT NULL,
`UserID` int(11) NOT NULL,
`Name` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
I need to transfer data from T1 to T2 in this way:
A.T1.id -> B.T2.ID
A.T1.title -> B.T2.Name
A.T1.uid -> B.T2.UserID
As you can see fields A.T1.language and B.T2.Type are not needed.
I think I should do this migration through a CSV dump, but that is all I have come up with so far.
Any ideas?
UPDATE
Thank you guys for your answers. Please forgive me for not being clear enough; I should have emphasized that my tables are in different databases, and even on different servers. So it is not as simple as inserting rows from one table into another.
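Given the tables live on different servers, one option besides a CSV dump is a small client-side script that opens one connection per database and copies rows across. This sketch uses two in-memory SQLite databases purely for illustration; with MySQL you would open two connections (for example with mysql-connector-python) instead:

```python
import sqlite3

# Two separate databases standing in for the two servers.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")

src.execute("CREATE TABLE T1 (id INTEGER, title TEXT, uid INTEGER, language TEXT)")
src.execute("INSERT INTO T1 VALUES (1, 'hello', 42, 'en')")

dst.execute("CREATE TABLE T2 (ID INTEGER, Type TEXT, UserID INTEGER, Name TEXT)")

# Read from the source, truncating title to T2's 100-char limit;
# language/Type are simply left out of the mapping.
rows = src.execute("SELECT id, uid, substr(title, 1, 100) FROM T1").fetchall()
dst.executemany("INSERT INTO T2 (ID, UserID, Name) VALUES (?, ?, ?)", rows)
dst.commit()

print(dst.execute("SELECT ID, UserID, Name FROM T2").fetchall())
# [(1, 42, 'hello')]
```

For millions of rows you would fetch and insert in batches rather than loading everything into memory at once.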
You can do it with a combination of INSERT and SELECT queries. However, since T1's title column is VARCHAR(255) and T2's Name column is only VARCHAR(100), over-long titles could be a problem.
The following query does the migration, but any title longer than 100 characters will be truncated to 100:
INSERT INTO T2
(ID, Name, UserID)
SELECT id, SUBSTR(title, 1, 100), uid
FROM T1
Use the INSERT ... SELECT syntax as in
INSERT INTO `B`.`T2` (`ID`, `Name`, `UserID`)
SELECT `id`, `title`, `uid` FROM `A`.`T1`
Are they on the same database? In that case:
INSERT INTO T2 (ID, Name, UserID)
SELECT id, title, uid FROM T1
Is there any reason why you have to add the display width to your INT fields, such as INT(10) instead of just INT?
The explanation of your data transfer does not quite match your table definitions.
Anyway, the problem you might run into here is that some of your column sizes differ, so you either have to make them the same or SUBSTRING the values into the destination table when the source string is longer than the destination column. For instance, if you try to insert "This is my string" into a VARCHAR(10) column, you will get a truncation error.
To insert the data into the destination table you can use this:
INSERT INTO [Destination Table] (ID, Name, uid)
SELECT
ID,
SUBSTRING(title, 1, 100) as 'Name',
uid
FROM
[Source Table]
This will work, but you will be sacrificing data in the Name column. I would suggest giving your destination columns the same data types and sizes as your source table.
I have a MySQL table that looks like this:
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`company_id` int(8) unsigned NOT NULL,
`term_type` varchar(255) NOT NULL DEFAULT '',
`term` varchar(255) NOT NULL DEFAULT '',
I would like to be able to do this...
INSERT IGNORE INTO table ( company_id, term_type, term )
VALUES( a_company_id, 'a_term_type', 'a_term' )
... but I'd like the insert to be ignored when the same combination of company_id, term_type and term already exists. I am aware that if I have a unique index on a single field when I try to insert a duplicate value, the insert will be ignored. Is there a way to do the combo that I'm attempting? Could I use a multi-column index?
I'm trying to avoid doing a SELECT to check for this combination before every insert. As I'm processing hundreds of millions of rows of data into this table.
Maybe something like this:
ALTER TABLE table ADD UNIQUE (company_id, term_type,term);
If you use the IGNORE keyword, errors that occur while executing the INSERT statement are treated as warnings instead. For example, without IGNORE, a row that duplicates an existing UNIQUE index or PRIMARY KEY value in the table causes a duplicate-key error and the statement is aborted. With IGNORE, the row still is not inserted, but no error is issued.
So if you have a multi-column unique key or primary key, it works.
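To illustrate, here is a small Python sketch using sqlite3 as a stand-in for MySQL; SQLite spells INSERT IGNORE as INSERT OR IGNORE, but the multi-column UNIQUE behaves the same way:

```python
import sqlite3

# Table with the three-column unique index from the question.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE terms (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        company_id INTEGER NOT NULL,
        term_type TEXT NOT NULL,
        term TEXT NOT NULL,
        UNIQUE (company_id, term_type, term)
    )
""")

ins = "INSERT OR IGNORE INTO terms (company_id, term_type, term) VALUES (?, ?, ?)"
conn.execute(ins, (1, "type", "word"))
conn.execute(ins, (1, "type", "word"))  # exact duplicate combo: silently skipped
conn.execute(ins, (2, "type", "word"))  # different combo: inserted

print(conn.execute("SELECT COUNT(*) FROM terms").fetchone()[0])
# 2
```

No SELECT-before-insert is needed; the index does the duplicate check for you.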
Whenever I insert data into a MySQL table, I first run a SELECT for that data to avoid duplicate records, and only insert the record if the query returns nothing.
But I think this may not be a professional way to do the job.
How would you do it?
If the reason you don't wish to use primary keys or unique indexes is the error they generate (which is an issue if you are inserting multiple rows in a single query), you can use the following syntax:
insert ignore into [tablename] () VALUES ()
You can also use ON DUPLICATE KEY UPDATE to update certain fields instead of silently ignoring the row:
insert into [tablename] () VALUES () ON DUPLICATE KEY UPDATE
For more information, have a look at http://dev.mysql.com/doc/refman/5.1/en/insert.html
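As a rough illustration of the upsert behavior, here is a Python sketch using sqlite3; SQLite (3.24+) spells MySQL's ON DUPLICATE KEY UPDATE as ON CONFLICT ... DO UPDATE, but the effect is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, hits INTEGER NOT NULL)")

# Each run either inserts the row or bumps the existing counter.
for _ in range(3):
    conn.execute("""
        INSERT INTO counters (name, hits) VALUES ('home', 1)
        ON CONFLICT(name) DO UPDATE SET hits = hits + 1
    """)

print(conn.execute("SELECT hits FROM counters WHERE name = 'home'").fetchone()[0])
# 3
```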
You can try the following example. I suggest trying it first in your testing environment before implementing it in your actual scenario.
Follow the below steps:
Step 1: create a table
CREATE TABLE IF NOT EXISTS `users` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(30) NOT NULL,
`email` varchar(50) NOT NULL,
`password` varchar(20) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;
Step 2: Run this query multiple times and check that only one row is inserted:
INSERT INTO `users` (`name`, `email`, `password`)
SELECT 'manish', 'manish@example.com', 'secret'
FROM DUAL
WHERE NOT EXISTS (SELECT 1
                  FROM `users`
                  WHERE `email` = 'manish@example.com');
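A quick way to convince yourself the insert-if-absent pattern works is to run it against SQLite from Python (SQLite lets you omit FROM DUAL; the table and values here mirror the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL,
        email TEXT NOT NULL,
        password TEXT NOT NULL
    )
""")

stmt = """
    INSERT INTO users (name, email, password)
    SELECT 'manish', 'manish@example.com', 'secret'
    WHERE NOT EXISTS (SELECT 1 FROM users WHERE email = 'manish@example.com')
"""
conn.execute(stmt)
conn.execute(stmt)  # second run matches the EXISTS check, inserts nothing

print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])
# 1
```

Note that under concurrency this check-then-insert is still racy without a unique index; the constraint-based answers below are more robust.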
The "professional" way to do this will be using a primary key constraint.
$qry="INSERT username INTO users";
if(!mysql_query($qry))
{
if(mysql_errno()=1062)
{
echo 'Unique costraint violation!';
}
else
{
//other error
}
}
I have a form on a website which has a lot of different fields. Some of the fields are optional while some are mandatory. In my DB I have a table which holds all these values, is it better practice to insert a NULL value or an empty string into the DB columns where the user didn't put any data?
By using NULL you can distinguish between "put no data" and "put empty data".
Some more differences:
A LENGTH of NULL is NULL, a LENGTH of an empty string is 0.
NULLs are sorted before the empty strings.
COUNT(message) will count empty strings but not NULLs
You can search for an empty string using a bound variable but not for a NULL. This query:
SELECT *
FROM mytable
WHERE mytext = ?
will never match a NULL in mytext, whatever value you pass from the client. To match NULLs, you have to use a different query:
SELECT *
FROM mytable
WHERE mytext IS NULL
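These differences are easy to verify with a quick sqlite3 session in Python (the table name and data are made up for the demo; MySQL behaves the same way in these cases):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (mytext TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?)", [("hello",), ("",), (None,)])

eq_empty = conn.execute("SELECT COUNT(*) FROM mytable WHERE mytext = ?", ("",)).fetchone()[0]
eq_none  = conn.execute("SELECT COUNT(*) FROM mytable WHERE mytext = ?", (None,)).fetchone()[0]
is_null  = conn.execute("SELECT COUNT(*) FROM mytable WHERE mytext IS NULL").fetchone()[0]
nonnull  = conn.execute("SELECT COUNT(mytext) FROM mytable").fetchone()[0]

print(eq_empty, eq_none, is_null, nonnull)
# 1 0 1 2  ('=' finds the empty string, never the NULL; COUNT(col) skips NULLs)
```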
One thing to consider, if you ever plan on switching databases, is that Oracle does not support empty strings. They are converted to NULL automatically and you can't query for them using clauses like WHERE somefield = '' .
One thing to keep in mind is that NULL might make your codepaths much more difficult. In Python for example most database adapters / ORMs map NULL to None.
So things like:
print "Hello, %(title)s %(firstname) %(lastname)!" % databaserow
might result in "Hello, None Joe Doe!" To avoid it you need something like this code:
if databaserow.title:
print "Hello, %(title)s %(firstname) %(lastname)!" % databaserow
else:
print "Hello, %(firstname) %(lastname)!" % databaserow
Which can make things much more complex.
It is better to insert NULL, for consistency in your MySQL database: foreign key columns can store NULL but NOT an empty string.
You will have issues with an empty string in a foreign key constraint.
You may have to insert a fake parent record with a unique empty string just to satisfy the constraint, which is bad practice.
See also: Can a foreign key be NULL and/or duplicate?
I don't know what best practice would be here, but I would generally err in favor of the null unless you want null to mean something different from empty-string, and the user's input matches your empty-string definition.
Note that I'm saying YOU need to define how you want them to be different. Sometimes it makes sense to have them different, sometimes it doesn't. If not, just pick one and stick with it. Like I said, I tend to favor the NULL most of the time.
Oh, and bear in mind that if the column is NULL, the record will not appear in practically any query that filters (in SQL terms, has a WHERE clause) on that column, unless the filter is explicitly for a NULL column, of course.
If you use multiple columns in a unique index and at least one of these columns is mandatory (i.e. a required form field), setting the other columns in the index to NULL may leave you with duplicated rows, because NULL values are not considered equal in a unique index. In this case, use empty strings in the other columns of the unique index to avoid duplicated rows.
COLUMNS IN A UNIQUE INDEX:
(event_type_id, event_title, date, location, url)
EXAMPLE 1:
(1, 'BBQ', '2018-07-27', null, null)
(1, 'BBQ', '2018-07-27', null, null) // allowed and duplicated.
EXAMPLE 2:
(1, 'BBQ', '2018-07-27', '', '')
(1, 'BBQ', '2018-07-27', '', '') // NOT allowed as it's duplicated.
Here are some codes:
CREATE TABLE `test` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`event_id` int(11) DEFAULT NULL,
`event_title` varchar(50) DEFAULT NULL,
`date` date DEFAULT NULL,
`location` varchar(50) DEFAULT NULL,
`url` varchar(200) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `event_id` (`event_id`,`event_title`,`date`,`location`,`url`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
Now insert this to see it will allow the duplicated rows:
INSERT INTO `test` (`id`, `event_id`, `event_title`, `date`, `location`,
`url`) VALUES (NULL, '1', 'BBQ', '2018-07-27', NULL, NULL);
INSERT INTO `test` (`id`, `event_id`, `event_title`, `date`, `location`,
`url`) VALUES (NULL, '1', 'BBQ', '2018-07-27', NULL, NULL);
Now insert this and check that it's not allowed:
INSERT INTO `test` (`id`, `event_id`, `event_title`, `date`, `location`,
`url`) VALUES (NULL, '1', 'BBQ', '2018-07-28', '', '');
INSERT INTO `test` (`id`, `event_id`, `event_title`, `date`, `location`,
`url`) VALUES (NULL, '1', 'BBQ', '2018-07-28', '', '');
So, there is no right or wrong here. It's up to you to decide what works best with your business rules.
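For what it's worth, the same experiment can be reproduced from Python with sqlite3, which treats NULLs in a unique index the same way MySQL does:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE test (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        event_id INTEGER, event_title TEXT, date TEXT, location TEXT, url TEXT,
        UNIQUE (event_id, event_title, date, location, url)
    )
""")

ins = "INSERT INTO test (event_id, event_title, date, location, url) VALUES (?,?,?,?,?)"

row_with_nulls = (1, "BBQ", "2018-07-27", None, None)
conn.execute(ins, row_with_nulls)
conn.execute(ins, row_with_nulls)  # allowed: NULLs never compare equal

row_with_empties = (1, "BBQ", "2018-07-28", "", "")
conn.execute(ins, row_with_empties)
try:
    conn.execute(ins, row_with_empties)  # rejected: '' == '' violates UNIQUE
except sqlite3.IntegrityError as e:
    print("rejected:", e)

print(conn.execute("SELECT COUNT(*) FROM test").fetchone()[0])
# 3
```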