Cannot update a field on MySQL - mysql

I have this MySQL table:
video (id int, name varchar(30), view_count int)
I tried to run the following query on my computer:
update `video` set view_count = view_count + 1 where id = 1;
It works fine.
However, after I moved the database to another server, the above update query sometimes works and sometimes doesn't. There is no error message, but the value doesn't change.
I tried to run:
update `video` set name = 'testing', view_count = view_count + 1 where id = 1;
The name can be updated successfully, but the view_count doesn't change.
Does anyone know what the problem is?
Here is the output from MySQL prompt:
update `video` set view_count = view_count + 1 where id = 1;
Query OK, 1 row affected (0.13 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> show create table video \G;
*************************** 1. row ***************************
Table: video
Create Table: CREATE TABLE `video` (
`id` int(10) unsigned NOT NULL auto_increment,
`name` varchar(30) default NULL,
`view_count` int(10) default NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=9 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
mysql> select * from video;
+----+------+------------+
| id | name | view_count |
+----+------+------------+
|  1 | ABC  |       8510 |
+----+------+------------+

MyISAM takes a full table lock for any DML (INSERT, UPDATE, DELETE).
If the video table were MyISAM, that would take care of this problem, because only one connection at a time can hold the exclusive write lock on the table while executing a single DML statement, even if 1000 clients are trying to execute
update `video` set name = 'testing', view_count = view_count + 1 where id = 1;
InnoDB uses MVCC on any data you are updating. Since the default isolation level is REPEATABLE READ, 1000 connections may see the same value for view_count just before attempting to increment it. Thus, if DB connection #1 sees view_count as 12 and increments it, it becomes 13. However, if DB connection #2 also saw view_count as 12, it will increment it and get 13 AGAIN!!!
Keep in mind that I am only speculating based on multiple DB Connections attempting the exact same UPDATE query.
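If lost updates like that really were the concern, the usual InnoDB remedy is a locking read inside a transaction. A minimal sketch of that pattern, using the schema above (my illustration, not part of the original answer):
START TRANSACTION;
SELECT view_count FROM video WHERE id = 1 FOR UPDATE;  -- row stays locked until COMMIT
UPDATE video SET view_count = view_count + 1 WHERE id = 1;
COMMIT;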

Check that view_count doesn't contain a NULL value: in SQL, NULL + 1 evaluates to NULL, so the update would leave the column NULL and the value would never appear to change.
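If the column really is NULL, one possible fix is to coalesce the value during the increment and then forbid NULLs going forward; a sketch against the schema above (my illustration, not part of the original answer):
UPDATE video SET view_count = IFNULL(view_count, 0) + 1 WHERE id = 1;
ALTER TABLE video MODIFY view_count INT UNSIGNED NOT NULL DEFAULT 0;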

Related

trying to UPDATE foreign key in two tables joined on 4 columns

I have two tables:
mysql> show create table named \G
*************************** 1. row ***************************
Table: named
Create Table: CREATE TABLE `named` (
`table_name` varchar(40) NOT NULL,
`table_pk` int NOT NULL,
`name_type` varchar(6) NOT NULL,
`naml` varchar(200) DEFAULT NULL,
`namf` varchar(45) DEFAULT NULL,
`namt` varchar(10) DEFAULT NULL,
`nams` varchar(10) DEFAULT NULL,
`named_only_pk` int DEFAULT NULL,
PRIMARY KEY (`table_name`,`table_pk`,`name_type`),
KEY `naml` (`naml`(20)),
KEY `naml_2` (`naml`,`namf`,`namt`,`nams`),
KEY `named_only_pk` (`named_only_pk`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin
1 row in set (0.01 sec)
mysql> show create table named_only \G
*************************** 1. row ***************************
Table: named_only
Create Table: CREATE TABLE `named_only` (
`pk` int DEFAULT NULL,
`naml` varchar(200) DEFAULT NULL,
`namf` varchar(45) DEFAULT NULL,
`namt` varchar(10) DEFAULT NULL,
`nams` varchar(10) DEFAULT NULL,
KEY `naml` (`naml`,`namf`,`namt`,`nams`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin
1 row in set (0.01 sec)
The named table has 30M rows and the named_only table has 2M rows.
The named_only table has a unique constraint on (naml,namf,namt,nams).
The named table came from other tables. The named_only table was filled by:
replace into named_only (naml,namf,namt,nams) select naml,namf,namt,nams from named;
I set the pk values in named_only after the rows are added with:
update named_only cross join (select @pk:=0) as init set named_only.pk = @pk := @pk + 1;
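One caveat with that user-variable trick: without an ORDER BY, the order in which rows are numbered is undefined. A deterministic variant, sketched under the assumption of MySQL 5.x user-variable semantics:
SET @pk := 0;
UPDATE named_only SET pk = (@pk := @pk + 1) ORDER BY naml, namf, namt, nams;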
I am trying to set up named_only_pk in named as a foreign key to the pk in named_only.
I know a query that will work. I can do:
update named n1, named_only n2 set n1.named_only_pk = n2.pk where
n1.naml = n2.naml and n1.namf = n2.namf and n1.namt = n2.namt and n1.nams = n2.nams;
This is not on a powerhouse machine, but my laptop.
But it seems as though this is going to take a week. I created indexes of (naml,namf,namt,nams) on both tables. It was still taking a while, with no indication of how long it would take.
I tried different variations of selects with limits and could not get anything to work.
I tried exporting the named_only table and using that (and awk) to build SQL insert statements but this seems error-prone.
But is this the only way to do this operation in small enough bunches that it does not try to suck up all the memory in the known universe?
Any suggestions?
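For what it's worth, one way to keep the batches small is to run the join-update over bounded ranges of named_only.pk, repeating with the next range until max(pk) is covered; a sketch (my own, assuming pk is populated and indexed):
UPDATE named n1
JOIN named_only n2
ON n1.naml = n2.naml AND n1.namf = n2.namf
AND n1.namt = n2.namt AND n1.nams = n2.nams
SET n1.named_only_pk = n2.pk
WHERE n2.pk BETWEEN 1 AND 100000;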
ADDED 2020/06/16:
Creating an insert trigger did not work. I created:
DELIMITER $$
CREATE TRIGGER named_only_after_insert
AFTER INSERT ON `named_only` for each row
begin
update named n1 set n1.named_only_pk = NEW.pk
where n1.naml = NEW.naml and
n1.namf = NEW.namf and
n1.namt = NEW.namt and
n1.nams = NEW.nams;
END$$
DELIMITER ;
I tried to insert 10 rows (making the op quicker?) and got:
mysql> insert into named_only (naml,namf,namt,nams) select naml,namf,namt,nams from named where named_only_pk is NULL limit 10;
ERROR 1442 (HY000): Can't update table 'named' in stored function/trigger because it is already used by statement which invoked this stored function/trigger.
Sigh. Ok. What if it is coming from a third table, a copy of named?
mysql> insert into named_only (naml, namf, namt, nams) select naml, namf, namt, nams from named2 where pk < 10;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
10 is too many?
mysql> insert into named_only (naml, namf, namt, nams) select naml, namf, namt, nams from named2 where pk < 2;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
Obviously not. There is no other query running, so nothing else is holding a lock on the table.
And I am now doing this on an AWS large Ubuntu image. So my laptop is not the issue.
Well, I think I will have to go outside SQL for this. Which seems lame, but oh well.
ADDED LATER:
Yes, it was solvable, but not using SQL.
So here it is:
% echo "select naml,namf,namt,nams,pk from named_uniq;" | mysql -u root -p -B --skip-column-names ca_sos_20200615 > n1.txt
% echo "select naml,namf,namt,nams,table_name,table_pk,name_type from named;" | mysql -u root -p -B --skip-column-names ca_sos_20200615 > n2.txt
% cat n1.txt n2.txt | sort > n3.txt
% awk 'BEGIN{FS="\t"}{if (NF == 4) fk=$4; if (NF == 7) print $5"\t"$6"\t"$7"\t"$1"\t"$2"\t"$3"\t"$4"\t"fk}' n3.txt > n4.txt
Theoretically I can just rename n4.txt to named and import it. Nope. Nothing so easy, because that is a big insert.
So:
% split --lines=10000 --suffix-length=7 n4.txt n4_
/bin/ls -1 n4_* | awk '{print "echo \""$0"\"; cp "$0" named; echo \"load data local infile '\''/home/ubuntu/named'\'' into table named;\" | mysql -u root --password=root --local-infile ca_sos_20200615"}' | /bin/sh 2>/dev/null
And all that in under 3 hours.
Now, how can this be done with SQL?

MySQL auto increment value set to zero

Here is a table
CREATE TABLE `mytable` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`val` char(128) NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
UNIQUE KEY (val)
) ENGINE=InnoDB DEFAULT CHARSET=ascii;
Any idea why this is happening? I expected it to set id to zero in the first query itself.
MariaDB > insert into mytable set id=0, val="" on duplicate key update id=0, val=val;
Query OK, 1 row affected (0.01 sec)
MariaDB > select * from mytable;
+----+-------+
| id | val |
+----+-------+
| 1 | |
+----+-------+
1 row in set (0.00 sec)
MariaDB > insert into mytable set id=0, val="" on duplicate key update id=0, val=val;
Query OK, 2 rows affected (0.01 sec)
MariaDB > select * from mytable;
+----+-------+
| id | val |
+----+-------+
| 0 | |
+----+-------+
1 row in set (0.00 sec)
MariaDB > insert into mytable set id=0, val="" on duplicate key update id=0, val=val;
Query OK, 0 rows affected (0.01 sec)
Any explanation will be appreciated.
Update: I am aware of AUTO_INCREMENT=0, but the real question here is that the query explicitly sets id=0, so why does the first query set it to 1? It seems MySQL is OK with setting it to 0 in the duplicate case.
Thanks
When inserting a new record, setting an AUTO_INCREMENT column to 0 means "generate a new value for this column" (ref). Values for AUTO_INCREMENT columns start from 1. Thus:
insert into mytable set id=0, val="" on duplicate key update id=0, val=val;
is equivalent to:
insert into mytable set id=1, val="";
The second insert you call would create a duplicate key (for the val field, not the id field). This causes the update statement to be run, thus updating id to zero. The "2 rows affected" message appears because the on duplicate key update statement returns 2 in case an existing row is updated (ref).
The third insert does nothing. Both keys are duplicate, but the existing row doesn't need to be updated because its values are already what you expect them to be. In this case the on duplicate key update statement returns "0 rows affected".
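The three outcomes can be watched directly from the client with ROW_COUNT(), which reports the same number as the "rows affected" message (a small sketch, my addition):
insert into mytable set id=0, val="" on duplicate key update id=0, val=val;
select row_count();  -- 1 = new row inserted, 2 = existing row updated, 0 = nothing changed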
By default, the starting value for AUTO_INCREMENT is 1, and it will increment by 1 for each new record.
To let the AUTO_INCREMENT sequence start with another value, use the following SQL statement:
ALTER TABLE mytable AUTO_INCREMENT=0;
The answer about NO_AUTO_VALUE_ON_ZERO is correct, although a bit incomplete. There is an sql_mode option that allows an explicit value of zero to be stored in an AUTO_INCREMENT field. By default, 0 is treated the same as NULL. If you add the NO_AUTO_VALUE_ON_ZERO option, you are allowed to specify a zero value in that field. I have this in my cnf file:
sql_mode='NO_AUTO_VALUE_ON_ZERO,STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION'
Please check the NO_AUTO_VALUE_ON_ZERO option.
By default, a zero value cannot be inserted into an AUTO_INCREMENT column.
If you set NO_AUTO_VALUE_ON_ZERO, you can insert an explicit zero into an AUTO_INCREMENT column.
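A minimal sketch of the difference (my addition; NO_AUTO_VALUE_ON_ZERO is a standard sql_mode flag in MySQL and MariaDB):
SET SESSION sql_mode = CONCAT(@@sql_mode, ',NO_AUTO_VALUE_ON_ZERO');
INSERT INTO mytable SET id = 0, val = 'zero';  -- id is stored as 0 instead of auto-generating the next value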

"Lost" 30% of records after partitioning

I've got a MyISAM table of 90 million records over 18GB of data, and tests suggest it's a candidate for partitioning.
Original schema:
CREATE TABLE `email_tracker` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`hash` varchar(65) COLLATE utf8_unicode_ci NOT NULL,
`userId` int(11) NOT NULL,
`dateSent` datetime NOT NULL,
`dateViewed` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `userId` (`userId`),
KEY `dateSent` (`dateSent`),
KEY `dateViewed` (`dateViewed`),
KEY `hash` (`hash`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
1 row in set (0.01 sec)
I've previously partitioned the table on a test server with "ALTER TABLE email_tracker PARTITION BY HASH..." and run typical queries against it, and there were no problems with the queries. To avoid locking the table on the production DB, I'm testing again on the test server using this approach as we can afford to lose some tracking data while this runs:
RENAME TABLE email_tracker TO email_tracker_orig; CREATE TABLE email_tracker LIKE email_tracker_orig;
CREATE TABLE email_tracker_part LIKE email_tracker_orig;
ALTER TABLE email_tracker_part DROP PRIMARY KEY, ADD PRIMARY KEY (id, userId);
ALTER TABLE email_tracker_part PARTITION BY HASH (id + userId) partitions 30;
INSERT INTO email_tracker_part (SELECT * FROM email_tracker_orig);
The _orig table has 90,795,103 records. After the query, the _part table only has 68,282,298. And I have no idea why that might be. Any ideas?
mysql> select count(*) from email_tracker_orig;
+----------+
| count(*) |
+----------+
| 90795103 |
+----------+
1 row in set (0.00 sec)
mysql> select count(*) from email_tracker_part;
+----------+
| count(*) |
+----------+
| 68274818 |
+----------+
1 row in set (0.00 sec)
(On subsequent tests, the _part table contains slightly different numbers of records, which is weirder still.)
Edit #1: Just realised that half of the partitions are empty due to auto-increment-increment = 2 for replication, so I'm going to repartition BY KEY (userId) and see how that works out.
Edit #2: Still the same after re-partitioning, so I'm trying to identify the missing rows to establish a pattern.
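To see how rows actually land in each partition (and to spot the empty ones mentioned in Edit #1), information_schema can be queried; a sketch using the table name from this test:
SELECT PARTITION_NAME, TABLE_ROWS
FROM information_schema.PARTITIONS
WHERE TABLE_SCHEMA = DATABASE()
AND TABLE_NAME = 'email_tracker_part';
-- TABLE_ROWS is exact for MyISAM partitions, an estimate for InnoDB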
I am not sure of your requirements, but the MySQL documentation states that "the use of hashing expressions involving multiple columns is not particularly recommended." I would recommend that you just partition by id. Partitioning by id + userId doesn't give any obviously better distribution of rows across the partitions.
Looks like the INSERT query merely terminated prematurely - exactly 40 mins in this case. Just re-running this for the missing records is doing the trick:
INSERT INTO email_tracker_part (SELECT * FROM email_tracker_orig WHERE id > 148893974);
There's nothing in the my.cnf that suggests a timeout of 40 mins, and I've been running longer queries than this on this test server, but I have my solution so I'll close this even though the underlying reason remains unclear to me.

The insert APPEARS to work, but doesn’t in mysql

this is a very strange issue...
select count(*) from imprint_users;
+----------+
| count(*) |
+----------+
|      461 |
+----------+
INSERT INTO coresource.imprint_users (imprint_sid, users_sid) VALUES (2387,165);
Query OK, 1 row affected (0.00 sec)
select count(*) from imprint_users;
+----------+
| count(*) |
+----------+
|      461 |
+----------+
1) I cannot see anything in the MySQL error log.
2) I checked the status of the table just in case:
+--------------------------+-------+----------+----------+
| Table | Op | Msg_type | Msg_text |
+--------------------------+-------+----------+----------+
| coresource.imprint_users | check | status | OK |
+--------------------------+-------+----------+----------+
3) Here is the create statement:
CREATE TABLE "imprint_users" (
"imprint_sid" bigint(20) NOT NULL,
"users_sid" bigint(20) NOT NULL,
PRIMARY KEY ("imprint_sid","users_sid"),
KEY "FK__IMPRINT_U__IMPRI__47E69B3D" ("imprint_sid"),
KEY "FK__IMPRINT_U__USERS__48DABF76" ("users_sid"),
CONSTRAINT "fk__imprint_u__impri__47e69b3d" FOREIGN KEY ("imprint_sid") REFERENCES "imprint" ("imprint_sid"),
CONSTRAINT "fk__imprint_u__users__48dabf76" FOREIGN KEY ("users_sid") REFERENCES "users" ("users_sid")
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
4) This is in a master-master (M-M) setup and we are writing to M1.
Server version: 5.6.24-72.2-log Percona Server (GPL), Release 72.2,
Any help or direction would be appreciated.
Try a test.
First flush tables, and if autocommit is set to zero, set it to 1:
flush tables;
set autocommit = 1;
select count(*) from imprint_users;
INSERT INTO coresource.imprint_users (imprint_sid, users_sid) VALUES (2387,165);
select count(*) from imprint_users;
If the problem still remains, check whether any TRIGGER or EVENT is working in the background. So disable all triggers and the entire event scheduler (all events):
SET @disable_triggers = 1;
SET GLOBAL event_scheduler = OFF;
Then run the query and test again.
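For completeness, standard MySQL commands exist to list triggers and events rather than guessing (a sketch, using the schema name from the question):
SHOW TRIGGERS FROM coresource;        -- lists any triggers, e.g. on imprint_users
SHOW EVENTS FROM coresource;          -- lists any scheduled events
SELECT @@autocommit, @@tx_isolation;  -- verify the session settings in effect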

Mysql 5.5 Table partition user and friends

I have two tables in my DB that have millions of rows now; selection and insertion are getting slower and slower.
I am using Spring + Hibernate + MySQL 5.5, have read about sharding as well as partitioning the table, and I like the idea of partitioning my tables.
My current DB structure is:
CREATE TABLE `user` (
`id` BIGINT(20) NOT NULL,
`name` VARCHAR(255) DEFAULT NULL,
`email` VARCHAR(255) DEFAULT NULL,
`location_id` bigint(20) default NULL,
`updated_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
KEY `FK3DC99772C476E06B` (`location_id`),
CONSTRAINT `FK3DC99772C476E06B` FOREIGN KEY (`location_id`) REFERENCES `places` (`id`)
) ENGINE=INNODB DEFAULT CHARSET=utf8
CREATE TABLE `friends` (
`id` BIGINT(20) NOT NULL AUTO_INCREMENT,
`user_id` BIGINT(20) DEFAULT NULL,
`friend_id` BIGINT(20) DEFAULT NULL,
`updated_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `unique_friend` (`user_id`,`friend_id`)
) ENGINE=INNODB DEFAULT CHARSET=utf8
Now I am testing how best to use partitioning. For the user table, I thought the following would be good based on my usage:
CREATE TABLE `user_partition` (
`id` BIGINT(20) NOT NULL,
`name` VARCHAR(255) DEFAULT NULL,
`email` VARCHAR(255) DEFAULT NULL,
`location_id` bigint(20) default NULL,
`updated_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
KEY `FK3DC99772C476E06B` (`location_id`)
) ENGINE=INNODB DEFAULT CHARSET=utf8
PARTITION BY HASH(id DIV 100000)
PARTITIONS 30;
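As a quick sanity check that this scheme prunes to a single partition for my intended lookups by id, EXPLAIN PARTITIONS (available in MySQL 5.5) can be used; a sketch:
EXPLAIN PARTITIONS SELECT * FROM user_partition WHERE id = 999999;
-- the "partitions" column of the output should name only one partition if pruning works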
I created procedures to load data into the two tables and check the performance of both:
DELIMITER //
CREATE PROCEDURE load_partition_table()
BEGIN
DECLARE v INT DEFAULT 0;
WHILE v < 1000000
DO
INSERT INTO user_partition (id,NAME,email)
VALUES (v,CONCAT(v,' name'),CONCAT(v,'@yahoo.com')),
(v+1,CONCAT(v+1,' name'),CONCAT(v+1,'@yahoo.com')),
(v+2,CONCAT(v+2,' name'),CONCAT(v+2,'@yahoo.com')),
(v+3,CONCAT(v+3,' name'),CONCAT(v+3,'@yahoo.com')),
(v+4,CONCAT(v+4,' name'),CONCAT(v+4,'@yahoo.com')),
(v+5,CONCAT(v+5,' name'),CONCAT(v+5,'@yahoo.com')),
(v+6,CONCAT(v+6,' name'),CONCAT(v+6,'@yahoo.com')),
(v+7,CONCAT(v+7,' name'),CONCAT(v+7,'@yahoo.com')),
(v+8,CONCAT(v+8,' name'),CONCAT(v+8,'@yahoo.com')),
(v+9,CONCAT(v+9,' name'),CONCAT(v+9,'@yahoo.com'))
;
SET v = v + 10;
END WHILE;
END
//
CREATE PROCEDURE load_table()
BEGIN
DECLARE v INT DEFAULT 0;
WHILE v < 1000000
DO
INSERT INTO user (id,NAME,email)
VALUES (v,CONCAT(v,' name'),CONCAT(v,'@yahoo.com')),
(v+1,CONCAT(v+1,' name'),CONCAT(v+1,'@yahoo.com')),
(v+2,CONCAT(v+2,' name'),CONCAT(v+2,'@yahoo.com')),
(v+3,CONCAT(v+3,' name'),CONCAT(v+3,'@yahoo.com')),
(v+4,CONCAT(v+4,' name'),CONCAT(v+4,'@yahoo.com')),
(v+5,CONCAT(v+5,' name'),CONCAT(v+5,'@yahoo.com')),
(v+6,CONCAT(v+6,' name'),CONCAT(v+6,'@yahoo.com')),
(v+7,CONCAT(v+7,' name'),CONCAT(v+7,'@yahoo.com')),
(v+8,CONCAT(v+8,' name'),CONCAT(v+8,'@yahoo.com')),
(v+9,CONCAT(v+9,' name'),CONCAT(v+9,'@yahoo.com'))
;
SET v = v + 10;
END WHILE;
END
//
The results were surprising: insert/select on the non-partitioned table gave better results.
mysql> select count(*) from user_partition;
+----------+
| count(*) |
+----------+
| 1000000 |
+----------+
1 row in set (0.40 sec)
mysql> select count(*) from user;
+----------+
| count(*) |
+----------+
| 1000000 |
+----------+
1 row in set (0.00 sec)
mysql> call load_table();
Query OK, 10 rows affected (20.31 sec)
mysql> call load_partition_table();
Query OK, 10 rows affected (21.22 sec)
mysql> select * from user where id = 999999;
+--------+-------------+------------------+---------------------+
| id | name | email | updated_time |
+--------+-------------+------------------+---------------------+
| 999999 | 999999 name | 999999@yahoo.com | 2012-11-27 08:06:54 |
+--------+-------------+------------------+---------------------+
1 row in set (0.00 sec)
mysql> select * from user_no_part where id = 999999;
+--------+-------------+------------------+---------------------+
| id | name | email | updated_time |
+--------+-------------+------------------+---------------------+
| 999999 | 999999 name | 999999@yahoo.com | 2012-11-27 08:03:14 |
+--------+-------------+------------------+---------------------+
1 row in set (0.00 sec)
So, two questions:
1) What's the best way to partition the user table so that inserts and selects also become fast, and is removing the FOREIGN KEY on location_id correct? I know partitioning can be good only if we access based on the partition key; in my case I want to read the table only by id. Why are inserts slower on the partitioned table?
2) What's the best way to partition the friends table? I want to partition friends on the basis of user_id, as I want to place all of a user's friends in the same partition and always access it using a user_id. Should I drop the primary key on friends.id, or add user_id to the primary key?
First, I would recommend if possible that you upgrade to MySQL 5.6.5 or later to ensure you are taking advantage of partitioning properly and with the best performance. This is not always possible due to GA concerns, but my experience is that there was a difference in performance between 5.5 and 5.6, and 5.6 offers some other types of partitioning.
1) My experience is that inserts and updates ARE faster on partitioned sets as well as selects AS LONG AS YOU ARE INCLUDING THE COLUMN THAT YOU ARE PARTITIONING ON IN THE QUERY. If I ask for a count of all records across all partitions, I see slower responses. That is to be expected because the partitions are functioning LIKE separate tables, so if you have 30 partitions it is like reading 30 tables and not just one.
You must include the value you are partitioning on in the primary key AND it must remain stable during the life of the record.
2) I would include user_id and id in the primary key - assuming that your friends table's user_id and id do not change at all once the record is established (i.e. any change would be a delete/insert). In my case it was "redundant" but more than worth the access speed. Whether you choose user_id/id or id/user_id depends on your most frequent access pattern.
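In DDL terms that advice might look like the following sketch against the friends table from the question (my illustration; untested, and it assumes any existing NULL user_id values have been cleaned up first):
ALTER TABLE friends
MODIFY user_id BIGINT NOT NULL,  -- primary key columns cannot be NULL
DROP PRIMARY KEY,
ADD PRIMARY KEY (user_id, id),
ADD KEY (id);                    -- the AUTO_INCREMENT column must still lead some index
ALTER TABLE friends
PARTITION BY HASH (user_id)
PARTITIONS 8;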
A final note. I tried to create LOTS of partitions when I first started breaking my data into partitions, and found that just a few seemed to hit the sweet spot - 6-12 partitions seemed to work best for me. YMMV.
1. For the user table:
I'll answer with what you need: I suggest you remove the FOREIGN KEY and the PRIMARY KEY.
I know this is crazy, but asking the database to keep track of the current id, the last id, and the next id takes longer than creating the id manually.
Alternatively, you can create the int id manually from Java.
Use this SQL query to insert quickly:
INSERT INTO user (id,NAME,email)
VALUES ('CREATE ID WITH JAVA', 'NAME', 'EMAIL@YAHOO.COM')
I can't say whether my query will work faster or not, because it all depends on your machine's performance; make sure you run it on a server, since a server can finish tasks faster.
As for selects: on the page where the profile info is located, you will need one row for the one user identified by the profile id.
Use MySQL's LIMIT if you only need one row; if you need more than one, just change the limit value, like this.
For one row:
select * from user where id = 999999 limit 1;
And for seven rows:
select * from user where id = 999999 limit 7;
I think a query with LIMIT will work faster than one without it, and remember that LIMIT works with INSERT too.
2. For the friends partition:
The answer is to drop the primary key. A table with no primary key is no problem.
Once again, create the id with Java; Java is designed to be fast with interfaces, your code includes a while loop, and Java can handle it.
For example, when you need to retrieve all of a user's friend data, use this query to perform faster:
select fr.friend_id, usr.* from friends as fr INNER JOIN user as usr
ON fr.friend_id = usr.id
where fr.user_id = 999999 LIMIT 10;
And I think this is enough.
Sorry, I can only explain the MySQL side and not the Java side; I'm not an expert in Java, but I understand it.
1) If you always (or mostly) use only id to select data, it is obvious to use this field as the basis of the partitioning condition. As it is a number, there is no need for a hash function; simply use range partitioning. How many partitions to create (what numbers to choose as borders) you need to find out yourself, but as @TJChambers mentioned before, around 8-10 should be efficient enough.
Inserts are slower because you tested them wrong.
You simply insert 1000000 rows one after another without any randomness, and the only difference is that for the partitioned table MySQL needs to calculate the hash, which is extra time.
But since in your case id is the basis of the partitioning condition, you will never gain anything on inserts, as all new rows go at the end of the table.
If you had, for example, a table of GPS locations partitioned by lat and lon, you could see a difference on inserts, for example if each partition were a different continent.
A difference would also be visible if you had a table with some random (real) data and were inserting random values, not linear ones.
Your select on the partitioned table is slower because, again, you tested it wrong.
@TJChambers wrote about it before me: your query needs to work on all partitions (it is like working with many tables), so the time extends. Try a WHERE clause that works with data from just one partition to see the difference.
For example, run:
select count(*) from user_partition where id<99999;
and
select count(*) from user where id<99999;
You will see a difference.
2) This one is hard. There is no way to partition it without data redundancy (at least no idea comes to my mind), but if access time (select speed) is the most important thing, the best way may be to partition it the same way as the user table (range on one of the ids) and insert 2 rows for each relationship: (a,b) and (b,a). It will double the number of rows, but if you partition into more than 4 parts, you will work on fewer records per query anyway, and you will have just one condition to check, with no need for an OR.
I tested it with this schema:
CREATE TABLE `test`.`friends` (
`a` INT NOT NULL ,
`b` INT NOT NULL ,
INDEX ( `a` ),
INDEX ( `b` )
) ENGINE = InnoDB;
CREATE TABLE `test`.`friends_part` (
`a` INT NOT NULL ,
`b` INT NOT NULL ,
INDEX ( `a` , `b` )
) ENGINE = InnoDB
PARTITION BY RANGE (a) (
PARTITION p0 VALUES LESS THAN (1000),
PARTITION p1 VALUES LESS THAN (2000),
PARTITION p2 VALUES LESS THAN (3000),
PARTITION p3 VALUES LESS THAN (4000),
PARTITION p4 VALUES LESS THAN (5000),
PARTITION p5 VALUES LESS THAN (6000),
PARTITION p6 VALUES LESS THAN (7000),
PARTITION p7 VALUES LESS THAN (8000),
PARTITION p8 VALUES LESS THAN (9000),
PARTITION p9 VALUES LESS THAN MAXVALUE
);
delimiter //
DROP procedure IF EXISTS fill_friends//
create procedure fill_friends()
begin
declare i int default 0;
declare a int;
declare b int;
while i<2000000
do
set a = rand()*10000;
set b = rand()*10000;
insert into friends values(a,b);
set i = i + 1;
end while;
end
//
delimiter ;
delimiter //
DROP procedure IF EXISTS fill_friends_part//
create procedure fill_friends_part()
begin
insert into friends_part (select a,b from friends);
insert into friends_part (select b as a, a as b from friends);
end
//
delimiter ;
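Usage, assuming the procedures above (fill the plain table first, since the partitioned one is copied from it):
CALL fill_friends();
CALL fill_friends_part();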
Queries I have run are:
select * from friends where a=317 or b=317;
result set: 475
times: 1.43, 0.02, 0.01
select * from friends_part where a=317;
result set: 475
times: 0.10, 0.00, 0.00
select * from friends where a=4887 or b=4887;
result set: 483
times: 1.33, 0.01, 0.01
select * from friends_part where a=4887;
result set: 483
times: 0.06, 0.01, 0.00
I didn't bother about uniqueness of the data, but in your example you may use a unique index.
I used the InnoDB engine as well, but MyISAM is better if most of the queries are selects and you are not going to do many writes.
There is no big difference between the 2nd and 3rd runs, probably because of caching, but there is a visible difference for the 1st run. It is faster because we are breaking one of the prime rules of database design, but the end justifies the means, so it may be a good solution for really big tables. If you are going to have fewer than 1M records, I think you can survive without partitioning.