So in order to restrict my app's user SQL login, I turned off the CREATE, DROP, INDEX, ALTER, TMP and LOCK privileges. One of my tables will have rows deleted by the users, so the auto-increment column will most likely end up with a bunch of gaps, some very large. I would just have the user re-index the table after a delete, but that would require enabling DROP and/or ALTER, which for security reasons I do not want to do.
Now suppose the user wants to access id number 50 from this table, but there's a gap from 48 to 54, for instance. How would I go about this?
SELECT * FROM my_table WHERE id > 48 ORDER BY id LIMIT 1
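If what you actually want is the 50th row by position rather than the row whose id is 50, an OFFSET form works regardless of gaps (a minimal sketch):
SELECT * FROM my_table ORDER BY id LIMIT 1 OFFSET 49;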
Is there a more efficient, less laborious way of copying all records from one table to another than doing this:
INSERT INTO product_backup SELECT * FROM product
Typically, the product table will hold around 50,000 records. Both tables are identical in structure and have 31 columns in them. I'd like to point out this is not my database design; I have inherited a legacy system.
There's just one thing you're missing: especially if you're using InnoDB, you want to explicitly add an ORDER BY clause to your SELECT statement to ensure you're inserting rows in primary key (clustered index) order:
INSERT INTO product_backup SELECT * FROM product ORDER BY product_id
Consider removing secondary indexes on the backup table if they're not needed. This will also save some load on the server.
Finally, if you are using InnoDB, you can reduce the number of row locks required by explicitly locking both tables. Note that a second LOCK TABLES statement releases any locks held from the first, so both tables must be locked in a single statement:
LOCK TABLES product_backup WRITE, product READ;
INSERT INTO product_backup SELECT * FROM product ORDER BY product_id;
UNLOCK TABLES;
The locking stuff probably won't make a huge difference, as row locking is very fast (though not as fast as table locks), but since you asked.
mysqldump -R --add-drop-table db_name table_name > filepath/file_name.sql
This will take a dump of the specified tables with a drop option, so the existing table is deleted when you import it. Then run:
mysql db_name < filepath/file_name.sql
DROP the destination table:
DROP TABLE DESTINATION_TABLE;
CREATE TABLE DESTINATION_TABLE AS (SELECT * FROM SOURCE_TABLE);
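One caveat: CREATE TABLE ... AS (SELECT ...) copies the data and basic column definitions but not the indexes. A sketch of an alternative that preserves them, using the same table names:
DROP TABLE IF EXISTS DESTINATION_TABLE;
CREATE TABLE DESTINATION_TABLE LIKE SOURCE_TABLE;
INSERT INTO DESTINATION_TABLE SELECT * FROM SOURCE_TABLE;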
I don't think this is worthwhile for a 50k-row table, but:
If you have a dump of the database, you can reload the table from it. Since you want to load the table into another one, you can change the table name in the dump with a sed command:
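For example (a sketch; the dump file names are made up, and the table names are taken from the question above):
sed 's/`product`/`product_backup`/g' dump.sql > dump_renamed.sql
mysql db_name < dump_renamed.sql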
Here are some hints:
http://blog.tsheets.com/2008/tips-tricks/mysql-restoring-a-single-table-from-a-huge-mysqldump-file.html
An alternative (depending on your design) would be to use triggers on the original table inserts so that the duplicated table gets the data as well.
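A minimal sketch of such a trigger (hypothetical: it assumes product has columns product_id, name and price; a real one would list all 31 columns):
DELIMITER //
CREATE TRIGGER product_after_insert
AFTER INSERT ON product
FOR EACH ROW
BEGIN
  -- mirror every new row into the backup table
  INSERT INTO product_backup (product_id, name, price)
  VALUES (NEW.product_id, NEW.name, NEW.price);
END//
DELIMITER ;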
And a better alternative would be to create another MySQL instance and either run it in a master-slave configuration or in a daily dump master/load slave fashion.
I have a MySQL table of Users, and a table of Actions performed by the Users (linked to a User by the primary key, userid). The Actions table has an auto-incrementing key indx. Whenever I add a new row to that table, I then update the latest column of the relevant Users row with the indx of the row I just added to the Actions table. So something like:
INSERT INTO actions(indx,actionname,userid) VALUES(default, "myaction", 1);
UPDATE users SET latest=LAST_INSERT_ID() WHERE userid=1;
The idea being that I can check for updates for a User by seeing if latest is higher than the last time I checked.
My issue is that if more than one connection to the database is open and both try to add an Action for the same User at the same time, connection 2 could conceivably run its INSERT and UPDATE between the INSERT and UPDATE of connection 1, and the latest entry of the user they're both trying to update would no longer hold the indx of the most recent Actions entry.
I've been reading up on transactions, isolation levels, etc., but haven't really found a way around this (though my understanding of how they work exactly is pretty shaky, so maybe I just misunderstood). I think I need a way to lock the Actions table until the Users table is updated. This application only gets used by a few hundred users at most, so I don't think the performance hit from momentarily locking the table would be too bad.
So is that something that can be done in MySQL? Is there a better solution? I imagine this general pattern must be pretty common: one table with many varieties of rows, and a second table with a row that tracks metadata for each variety in table A and needs to be updated atomically each time the first table is changed. So I'm hoping there's a solution that isn't too complex.
Use SELECT ... FOR UPDATE to lock the row in order to serialize access to the table and prevent race conditions:
START TRANSACTION;
SELECT any_column FROM users WHERE userid=1 FOR UPDATE;
INSERT INTO actions(indx,actionname,userid) VALUES(default, "myaction", 1);
UPDATE users SET latest=LAST_INSERT_ID() WHERE userid=1;
COMMIT;
However, this will slow down your INSERT rate, because all these transactions from all sessions will be serialized.
The better option is not to store the last ID in the users table at all. Just use SELECT MAX(indx) FROM actions WHERE userid = xxxx everywhere this number is required. With an index on actions(userid) this query will be very fast (assuming that indx is the primary key of that table), and the inserts will not be slowed down.
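A minimal sketch of that approach (the index name is made up; in InnoDB a secondary index also carries the primary key, so the MAX can be resolved from the index alone):
CREATE INDEX ix_actions_userid ON actions (userid);
-- latest action for user 1, no column in users needed
SELECT MAX(indx) FROM actions WHERE userid = 1;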
I was creating a big table (30 columns * 10 million rows) via a join operation, and it kept running for a few days (on a big server), so I had to terminate the creation query.
However, this table still exists, and I am unable to run DROP, DELETE, SELECT (even a simple SELECT COUNT(*)), RENAME, or even DESCRIBE on it!
I can't drop the whole database because I have other important tables.
So how can I get rid of that table from my database?
I tried many solutions, but none of them worked:
drop table badtable;
delete from badtable where 1 = 1; -- always true
delete from badtable where 1 = 1 limit 100; -- never terminated
Any ideas?
It is an InnoDB table, correct? When you terminated the CREATE query, it set about rolling back what you had done. Yuck.
By now, the situation has uncorked itself, yes?
Next time, build the table in, say, 1000-row chunks -- each chunk being COMMITted.
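A hypothetical sketch of that chunked build (the source tables src_a/src_b and the key a_id are made-up names, not from the question):
SET autocommit = 0;
-- one chunk: only rows whose key falls in the current range
INSERT INTO big_table
SELECT a.*, b.extra_col
FROM src_a a
JOIN src_b b ON b.a_id = a.a_id
WHERE a.a_id BETWEEN 1 AND 1000;
COMMIT;
-- then repeat with BETWEEN 1001 AND 2000, and so on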
Have you tried ANALYZE TABLE or REPAIR TABLE? If there's an error you can repair the table, then try deleting it.
I have a table with more than 40 million records. I want to delete about 150,000 of them with a SQL query:
DELETE
FROM t
WHERE date="2013-11-24"
but I get error 1206 (The total number of locks exceeds the lock table size).
I searched a lot and changed the buffer pool size:
innodb_buffer_pool_size=3GB
but it didn't work.
I also tried locking the table, but that didn't work either:
LOCK TABLES t WRITE;
DELETE
FROM t
WHERE date="2013-11-24";
UNLOCK TABLES;
I know one solution is to split the deletion into batches, but I want that to be my last option.
I am using MySQL Server; the server OS is CentOS and the server has 4GB of RAM.
I'll appreciate any help.
You can use LIMIT on your DELETE and try deleting data in batches of, say, 10,000 records at a time:
DELETE
FROM t
WHERE date="2013-11-24"
LIMIT 10000
You can also include an ORDER BY clause so that rows are deleted in the order specified by the clause:
DELETE
FROM t
WHERE date="2013-11-24"
ORDER BY primary_key_column
LIMIT 10000
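To run this to completion, just repeat the statement until it affects zero rows. A hypothetical sketch that wraps the loop in a stored procedure (the procedure name is made up):
DELIMITER //
CREATE PROCEDURE purge_day()
BEGIN
  REPEAT
    DELETE FROM t
    WHERE date = '2013-11-24'
    ORDER BY primary_key_column
    LIMIT 10000;
  UNTIL ROW_COUNT() = 0 END REPEAT;
END//
DELIMITER ;
CALL purge_day();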
There are a lot of quirky ways this error can occur. I will try to list one or two, and perhaps the analogy will hold true for someone reading this at some point.
On larger datasets, even after raising innodb_buffer_pool_size, you can hit this error when an adequate index is not in place to isolate the rows in the WHERE clause. Or in some cases with the primary index (see this) and the comment from Roger Gammans:
From the MySQL 5.0 documentation for InnoDB:
If you have no indexes suitable for your statement and MySQL must scan
the entire table to process the statement, every row of the table
becomes locked, which in turn blocks all inserts by other users to the
table. It is important to create good indexes so that your queries do
not unnecessarily scan many rows.
A picture of how this error can occur, and can be difficult to solve, comes from this simple schema:
CREATE TABLE `students` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`thing` int(11) NOT NULL,
`campusId` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `ix_stu_cam` (`campusId`)
) ENGINE=InnoDB;
A table with 50 million rows. FKs are not shown; they are not the issue. This table was originally for showing query performance, which is also not important here. Yet, in initializing thing=id in blocks of 1M rows, I had to apply a LIMIT during each block update to prevent other problems, using:
update students
set thing=id
where thing!=id
order by id desc
limit 1000000 ; -- 1 Million
This was all going well until it got down to, say, 600,000 rows left to update, as seen by:
select count(*) from students where thing!=id;
Why I was doing that count(*) stemmed from repeated
Error 1206: The total number of locks exceeds the lock table size
I could keep lowering the LIMIT shown in the above update, but in the end I would be left with, say, 1200 unequal rows in the count, and the problem just continued.
Why did it continue? Because the system filled the lock table as it scanned this large table. Sure, within the implicit transaction it might have changed those last 1200 rows to be equal, but because the lock table filled up, in reality the transaction would abort with nothing set. And the process would stalemate.
Illustration 2:
In this example, let's say 288 rows of the 50 million row table shown above could still be updated. Due to the end-game problem described, I would often hit a problem running this query twice:
update students set thing=id where thing!=id order by id desc limit 200 ;
But I would not have a problem with these:
update students set thing=id where thing!=id order by id desc limit 200;
update students set thing=id where thing!=id order by id desc limit 88 ;
Solutions
There are many ways to solve this, including but not limited to:
A. Creating another index on a column indicating whether the data has been updated, perhaps a boolean, and incorporating that into the WHERE clause. Yet on huge tables, the creation of somewhat temporary indexes may be out of the question.
B. Populating a 2nd table with the ids yet to be cleaned, coupled with an UPDATE ... JOIN pattern (see the sketch after this list).
C. Dynamically changing the LIMIT value so as not to overrun the lock table. The overrun can occur when there are simply no more rows to UPDATE or DELETE (your operation), the LIMIT has not been reached, and the lock table fills up in a fruitless scan for rows that simply don't exist (seen above in Illustration 2).
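A hypothetical sketch of option B (the staging table name is made up):
-- stage the ids that still need work
CREATE TABLE todo_ids (id INT NOT NULL PRIMARY KEY);
INSERT INTO todo_ids
SELECT id FROM students WHERE thing != id;
-- the join confines the update to exactly those rows
UPDATE students s
JOIN todo_ids t ON t.id = s.id
SET s.thing = s.id;
DROP TABLE todo_ids;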
The main point of this answer is to offer an understanding of why this happens, so any reader can craft an end-game solution that fits their needs (versus, at times, fruitless changes to system variables, reboots, and prayers).
The simplest way is to create an index on the date column. I had 170 million rows and was deleting 6.5 million. I ran into the same problem and solved it by creating a non-clustered index on the column I was using in the WHERE clause; then I executed the delete query and it worked.
Drop the index afterwards if you don't need it.
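A minimal sketch of that approach, using the table from the question (the index name is made up):
CREATE INDEX ix_t_date ON t (date);
DELETE FROM t
WHERE date = '2013-11-24';
-- optional cleanup once the delete is done
DROP INDEX ix_t_date ON t;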
I have a MySQL MyISAM table (say tbl) consisting of 2 unsigned int fields, say f1 and f2. There is an index on f2, and the table is very large (approximately 320,000,000+ rows). I update this table periodically (with approximately 100,000 new rows a week), and, in order to be able to search it without doing an ORDER BY (which would be very time consuming in real-time queries), I physically ORDER the table according to the way in which I want to retrieve its rows.
So, I perform an ALTER TABLE tbl ORDER BY f1 DESC. (I know I have enough physical space on the server for a copy of the table.) I have read that during this operation a temporary table is created, and SELECT statements on the current rows are not affected.
However, I have found that this is not the case: SELECT statements on the table that run at the same time as the ALTER TABLE get blocked and do not terminate. After the ALTER TABLE tbl completes (about 40 minutes on the production server), SELECT statements on tbl start executing fine again.
Is there any reason why the "ALTER table tbl ORDER BY f1 DESC" seems to be blocking other clients from querying tbl?
Altering a table will always grab a lock on the table, preventing SELECTs from running.
I'll admit that I didn't even know you could do that with an ALTER TABLE.
What are you trying to get from the table? For example, all records in a given range? 320 million rows is not a trivial number. I'll give you my gut reactions:
1. Switch to InnoDB (allows #2, and also gives transactions, but without #2 it may hurt performance)
2. Partition the table, which makes it act like a number of slightly smaller tables (see the sketch below)
3. Consider a redesign, such as having a "working set" table and a "historical" table, basically manual partitioning. If you usually look for recently inserted data, this (along with partitioning) will help a lot. If your lookups are evenly distributed, this probably won't make a difference.
4. Consider adding a new column you could use in conjunction to narrow down selects (so instead of searching on date alone, search on date and customer ID)
Since I don't know what you're storing, some of these (such as #4) may not apply.
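A hypothetical sketch of #2, range-partitioning on the search column f1 (the boundary values are made up):
ALTER TABLE tbl PARTITION BY RANGE (f1) (
    PARTITION p0 VALUES LESS THAN (100000000),
    PARTITION p1 VALUES LESS THAN (200000000),
    PARTITION p2 VALUES LESS THAN MAXVALUE
);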
There are some other things you could try. OPTIMIZE TABLE might help and take less time, but I doubt it; I think internally it's implemented as a dump and reload, at least on the InnoDB side.