Cannot create table due to quota limit - create-table

I am trying to create a table in Azure SQL Data Warehouse, which has a limit of 10,000 tables per database.
create table dbo.todd (now_ts datetime);
Error Msg: Msg 105000, Level 16, State 1, Line 1
The operation failed due to a quota of no more than 10000 Tables per Database.
I executed the queries below to get the number of tables:
SELECT count(*)
FROM [sys].[tables]
results = 178
and
SELECT count(*)
FROM [sys].[external_tables]
results = 6
Am I missing something?

This can be caused by a certain pattern of external table creation/deletion. A fix is currently rolling out to prevent this pattern from causing a false positive in table creation.
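If you want to double-check what the quota might be counting, one option (a hedged sketch; it assumes your sys.tables view exposes the is_external flag) is to break the table count down by that flag:
SELECT is_external, count(*) AS table_count
FROM [sys].[tables]
GROUP BY is_external;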

Related

Doing a more efficient COUNT

I have a page that loads some high-level statistics. Nothing fancy, just about 5 metrics. There are two particular queries that take about 5 seconds each to load:
+ SELECT COUNT(*) FROM mybooks WHERE book_id IS NOT NULL
+ SELECT COUNT(*) FROM mybooks WHERE is_media = 1
The table has about 500,000 rows. Both columns are indexed.
This information changes all the time, so I don't think that caching here would work. What are some techniques to use that could speed this up? I was thinking:
Create a denormalized stats table that is updated whenever the columns are updated.
Load the slow queries via ajax (this doesn't speed it up, but it allows the page to load immediately).
What would be suggested here? The requirement is that the page loads within 1s.
Table structure:
id (pk, autoincrementing)
book_id (bigint)
is_media (boolean)
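For reference, a hypothetical DDL matching that structure (the exact types and index names are assumptions):
CREATE TABLE mybooks (
    id       BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    book_id  BIGINT NULL,
    is_media BOOLEAN,
    INDEX idx_book_id (book_id),    -- "both columns are indexed"
    INDEX idx_is_media (is_media)
);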
The stats table is probably the biggest/quickest bang for buck. Assuming you have full control of your MySQL server and don't already have job scheduling in place to take care of this, you could remedy this by using the MySQL event scheduler. As Vlad mentioned above, your data will be a bit out of date. Here is a quick example:
Example stats table
CREATE TABLE stats(stat VARCHAR(20) PRIMARY KEY, count BIGINT);
Initialize your values
INSERT INTO stats(stat, count)
VALUES('all_books', 0), ('media_books', 0);
Create your event that updates every 10 minutes
DELIMITER |
CREATE EVENT IF NOT EXISTS updateBookCountsEvent
ON SCHEDULE EVERY 10 MINUTE STARTS NOW()
COMMENT 'Update book counts every 10 minutes'
DO
BEGIN
    UPDATE stats
    SET count = (SELECT COUNT(*) FROM mybooks)
    WHERE stat = 'all_books';

    UPDATE stats
    SET count = (SELECT COUNT(*) FROM mybooks WHERE is_media = 1)
    WHERE stat = 'media_books';
END |
DELIMITER ;
Check to see if it executed
SELECT * FROM mysql.event;
No? Check to see if the event scheduler is enabled
SELECT @@GLOBAL.event_scheduler;
If it is off, you'll want to enable it at startup using the parameter --event-scheduler=ON or by setting it in your my.cnf. See this answer or the docs.
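Once the event is populating the stats table, the page itself only needs a cheap primary-key lookup instead of the two COUNT scans, along these lines:
-- Read the precomputed counts instead of scanning mybooks
SELECT stat, `count`
FROM stats
WHERE stat IN ('all_books', 'media_books');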
There are a couple of things you can do to speed up the query.
Run OPTIMIZE TABLE on your mybooks table.
Change your book_id column to be an INT UNSIGNED, which allows for 4.2 billion values and takes 4 bytes instead of 8 (BIGINT), making the table and index more efficient.
Also, I'm not sure if this will help, but rather than doing COUNT(*) I would count the column used in the WHERE clause. So, for example, your first query would be SELECT COUNT(book_id) FROM mybooks WHERE book_id IS NOT NULL.
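A rough sketch of those first two suggestions (only run the ALTER if every existing book_id fits in 32 bits):
-- Rebuild the table and refresh index statistics
OPTIMIZE TABLE mybooks;

-- Shrink book_id from BIGINT (8 bytes) to INT UNSIGNED (4 bytes)
ALTER TABLE mybooks MODIFY book_id INT UNSIGNED;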

MySQL: delete all data from a table which has millions of records with the same name

Hi, I want to delete all records from a table which has 10 million records, but it hangs and gives me the following error:
Lock wait timeout exceeded; try restarting transaction
I am using the following query:
delete from table where name = '' order by id limit 1000
in a for loop.
Please suggest how to optimize it.
You said "i want to delete all record of a table which have 10 millions record". Then why not use the TRUNCATE command instead, which has minimal/no logging overhead?
TRUNCATE TABLE tbl_name
You can use the DELETE statement as well, but in your case the condition checking (where name = '' order by id limit 1000) is not necessary since you want to get rid of all rows. However, DELETE has the overhead of writing to the transaction log, which can matter for a volume of millions of records.
Per your comment, you have no option other than going with delete from table1 where name = 'naresh'. You can delete in chunks using the LIMIT clause, e.g. delete from table1 where name = 'naresh' limit 1000. So if name='naresh' matches 25,000 rows, each statement will delete only 1,000 of them.
You can include the same in a loop as well, like below (not tested; minor tweaks might be required). DECLARE and WHILE only work inside a stored program in MySQL, so the loop is wrapped in a procedure here (the procedure name is just a placeholder):
DELIMITER $$
CREATE PROCEDURE delete_naresh_rows()   -- placeholder name for the wrapper
BEGIN
    DECLARE v1 INT;
    SELECT COUNT(*) INTO v1 FROM table1 WHERE name = 'naresh';
    WHILE v1 > 0 DO
        DELETE FROM table1 WHERE name = 'naresh' LIMIT 1000;  -- delete up to 1000 rows per pass
        SET v1 = v1 - 1000;
    END WHILE;
END $$
DELIMITER ;
So in the above code, the loop will run 25 times, deleting 1,000 rows each time (assuming the name='naresh' condition matches 25K rows); invoke it with CALL delete_naresh_rows();.
If you want to delete all records (empty the table), you can use
TRUNCATE TABLE `table_name_here`
Maybe it will work for you (not tried with a big database).

Update table with count MySQL Workbench - large data - improvement needed

I am trying to generate some data as follows (everything is done in MySQL Workbench 6.0.8.11354 build 833):
What I have:
users table (~9,000,000 entries):
SUBSCRIPTION: text (3 options)
NUMBER: number
STATUS: text (3 options)
CODE: number
TEST1: yes/no
TEST2: yes/no
TEST3: yes/no
MANUFACTURER: number (6 options)
TYPE: text (50 options)
PROFILE: text (30 options)
What I need:
stats1 table (data that I want and I need to create):
PROFILE,TYPE,SUBSCRIPTION,STATUS,MANUFACTURER,TEST1YES,TEST1NO,TEST2YES,TEST2NO,TEST3YES,TEST3NO
profile1,type1,subscription1,status1,man1,count,count,count,count,count,count
profile1,type2,subscription2,status2,man2,count,count,count,count,count,count
Each PROFILE, TYPE, SUBSCRIPTION, STATUS, MANUFACTURER combination is unique.
What I did so far:
Created the stats1 table
Executed the following query in order to populate the table (I ended up with ~500 distinct entries):
insert into stats1 (PROFILE,TYPE,SUBSCRIPTION,STATUS,MANUFACTURER) select DISTINCT users.PROFILE,users.TYPE,users.SUBSCRIPTION,users.STATUS,users.MANUFACTURER from users;
Executed the following script to count the values for TEST1YES, for each of the ~500 entries:
update stats1 SET TEST1YES = (select count(*) from users where (users.TEST1='yes' and users.PROFILE=stats1.PROFILE and users.TYPE=stats1.TYPE and users.SUBSCRIPTION=stats1.SUBSCRIPTION and users.STATUS=stats1.STATUS and users.MANUFACTURER=stats1.MANUFACTURER));
I receive the following error in Workbench:
Error Code: 2013. Lost connection to MySQL server during query 600.573 sec
This is a known bug in Workbench and, even so, the server continues to execute the query.
However, the query runs for more than 70 minutes in the background (as I saw in client connections / management) and I need to run 5 more queries like this, for the rest of the columns.
Is there a better / faster / more efficient way for performing the count for those 6 columns in stats1 table?
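A hedged sketch of an alternative worth testing: a single GROUP BY pass over users can fill all six counters at once (MySQL sums a boolean expression as 0/1). It assumes the column names shown above and that stats1 is repopulated from scratch, e.g. after a TRUNCATE:
TRUNCATE TABLE stats1;

INSERT INTO stats1
    (PROFILE, TYPE, SUBSCRIPTION, STATUS, MANUFACTURER,
     TEST1YES, TEST1NO, TEST2YES, TEST2NO, TEST3YES, TEST3NO)
SELECT PROFILE, TYPE, SUBSCRIPTION, STATUS, MANUFACTURER,
       SUM(TEST1 = 'yes'), SUM(TEST1 = 'no'),
       SUM(TEST2 = 'yes'), SUM(TEST2 = 'no'),
       SUM(TEST3 = 'yes'), SUM(TEST3 = 'no')
FROM users
GROUP BY PROFILE, TYPE, SUBSCRIPTION, STATUS, MANUFACTURER;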

MySQL deadlock on column update

I have run into this weird problem where a simple query fails due to a deadlock.
Here is the query
UPDATE myproblematictable SET mycolumn = (mycolumn-0) WHERE id = '59'
The weird issue is that this query fails only when my PHP server is located on a slower machine on a remote network.
Before I run this query, the following happens:
transaction starts
insert new row in table 5
select 1 row from myproblematictable
insert new row in table 6
update table 4
UPDATE myproblematictable SET mycolumn = (mycolumn-0) WHERE id = '<id>'
update table 3
Commit Transaction
The strange thing is that the same query fails each time with the following error
Error Number: 1213
Deadlock found when trying to get lock; try restarting transaction
The InnoDB status command (SHOW ENGINE INNODB STATUS) does not seem to mention myproblematictable.
any clues?
This can be the result of another query updating the tables in a different order. I would try to see if there's a pre-determined order in which the tables should be updated and, if so, rewrite the order of the updates.
If not, I would suggest finding the offending query (or queries) and seeing what order they update the tables in. What type of table engine are you using? Keep in mind that MyISAM locks the entire table.
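If it helps to see exactly which two statements are colliding, MySQL can show (and, from 5.6 on, log) the deadlock details:
-- The LATEST DETECTED DEADLOCK section lists both transactions and the locks they waited on
SHOW ENGINE INNODB STATUS;

-- Optionally record every future deadlock in the error log (MySQL 5.6+)
SET GLOBAL innodb_print_all_deadlocks = ON;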

Google Cloud SQL: Unable to execute statement

My Google Cloud SQL table currently has 1,126,571 rows, with a minimum of 30 thousand added every day. When I execute the query:
select count(distinct sno) as tot from visits
the SQL prompt generates the following error:
Error 0: Unable to execute statement
Is a Cloud SQL query subject to a 60-second timeout exception? How can I overcome the problem when the table becomes large?
Break the table into two tables: one to receive new visits (the transaction table) and one for reporting. Index the reporting table. Transfer and clear data on a regular basis.
The transaction table will remain relatively small and thus fast to count. The reporting table will be fast to count because of the index.
Add an INDEX on your sno column and it will improve performance:
ALTER TABLE visits ADD INDEX (sno)
Try to split your select query into many parts: for example, the first select query is limited to 50,000 rows, then the second select query starts at 50,000 and is limited to another 50,000, and so on.
You can do that with this scenario:
1- Get the record count.
2- Make a loop and make it end at the record count.
3- In each iteration, make the select query fetch 50,000 records and append the results to a data table (depending on your programming language).
4- In the next iteration, start selecting from where the previous one ended; for example, the second query selects the next 50,000 records, and so on.
You can specify your select starting index with this SQL statement:
SELECT * FROM mytable ORDER BY somefield LIMIT 50000 OFFSET 0;
Then you will get all the data that you want.
NOTE: test to find the maximum number of records that can be loaded in 60 seconds; this will reduce the number of loops and therefore increase performance.
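A hedged sketch of that paging pattern in plain SQL, assuming sno is indexed so the ORDER BY stays cheap (the distinct values are then accumulated and counted on the client side):
-- Iteration 1: first 50,000 rows
SELECT sno FROM visits ORDER BY sno LIMIT 50000 OFFSET 0;
-- Iteration 2: next 50,000 rows
SELECT sno FROM visits ORDER BY sno LIMIT 50000 OFFSET 50000;
-- ...keep increasing OFFSET by 50,000 until a query returns fewer than 50,000 rows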