Django 1.1 creating a high number of temporary tables with MySQL 5.1

We are using Django 1.1/MySQL 5.1 for one of our projects. It seems that as the load on the web server (Apache/mod_wsgi) increases, the number of temporary tables created on MySQL also increases, triggering heavy alerts on our monitoring infrastructure. To give you an example, when the number of connected clients goes from 100 to 300, the number of temporary tables goes from 500 to 1000. The entire database uses InnoDB tables. Here are a few things I would like to know:
Is it normal to have such a large number of temporary tables?
What are the normal/optimal limits on the number of temporary tables that are allowed?
How do I minimize the number of temporary tables that Django creates?
I know some of these are pretty vague, as the above questions depend on the hardware, memory, and other aspects of the DB machine and the web server. But I am trying to get a sense of what is going on and what causes so many temporary tables to be created.
Thanks
Venkat.

Django doesn't create any temporary tables by itself; it's all in the application that your colleagues have programmed. MySQL creates internal temporary tables for queries that use GROUP BY, DISTINCT, or ORDER BY in ways it can't satisfy from an index, so look at the SQL your views actually generate.
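To see where they come from, one way (standard MySQL commands; the GROUP BY query below is only a stand-in for whatever your application runs) is to watch the temporary-table counters and EXPLAIN the suspect statements:

    -- How many internal temporary tables the server has created;
    -- Created_tmp_disk_tables counts the ones that spilled to disk
    SHOW GLOBAL STATUS LIKE 'Created_tmp%';

    -- "Using temporary" in the Extra column flags a query that needs one
    -- (django_app_article is a made-up table name for illustration)
    EXPLAIN SELECT author_id, COUNT(*)
    FROM django_app_article
    GROUP BY author_id
    ORDER BY COUNT(*) DESC;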

Related

Side effect of large number of MySQL tables in a database

Is it OK to keep 10000+ tables in a MySQL database?
I'm making a messaging/chat script, so I'm thinking about partitioning the data over several tables, since it will be a huge amount of data after some days.
Is that OK, or does it have some drawbacks?
Well, since a table can hold millions of rows, I was thinking maybe a database can hold a large number of tables too.
Or the question could be put like this: how does Facebook store their huge amount of daily chat messages?
I'm a newbie in MySQL, please help.
MySQL has no limit on the number of tables. The underlying file system may have a limit on the number of files that represent tables. Individual storage engines may impose engine-specific constraints. InnoDB permits up to 4 billion tables.
Even so, the typical DBMS will 'handle' such large databases, but there is more strain on the system catalog than usual in such systems.
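As a sanity check, you can count what a schema actually holds with a standard information_schema query (my_chat_db is a placeholder for your own schema name):

    SELECT COUNT(*) AS table_count
    FROM information_schema.tables
    WHERE table_schema = 'my_chat_db';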
I have a huge number of tables in one database with no ill effects, other than that displaying the table list in phpMyAdmin takes a while.
It's possible, but I would avoid it unless you have a really good use case for it. It raises all kinds of scalability and maintainability issues. Your table size is mainly limited by available disk space.
If you really need to do it...
You'll need to increase the maximum number of file descriptors that your OS will allow to be open, since MyISAM tables use two file descriptors per table. (If you're using Linux, read the section about ulimit in the man page for bash for how to do this.)
Also, there's a MySQL config variable called table_cache (renamed table_open_cache in newer versions) that limits the number of tables MySQL keeps open at once. You'll need to make sure it's large enough for the number of tables you need.
You won't want to use a plain FLUSH TABLES anymore (unless you're the kind of person who likes to watch paint dry), so you'll need to flush tables individually, e.g. before shutdown, as sketched below.
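A quick sketch of the relevant knobs (the variable and status names are standard MySQL; the table name is illustrative):

    -- How big the open-table cache is and how hard it is churning
    SHOW GLOBAL VARIABLES LIKE 'table%cache';   -- table_cache, or table_open_cache on newer servers
    SHOW GLOBAL STATUS LIKE 'Open%tables';      -- compare Open_tables with Opened_tables

    -- Flush a single table instead of all of them at once
    FLUSH TABLES chat_messages_0042;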
Again, I would avoid using so many tables. You're probably better off making your schema support what you need in a handful of tables, and consider archiving, warehousing (or deleting!) old data if you're concerned about storing too much data.

Drawbacks of using manually created temporary tables in MySQL

I have many queries that use manually created temporary tables in MySQL.
I want to understand if there are any drawbacks associated with this.
I ask this because I use temporary tables for queries that fetch the data shown on the home screen of a web application, in the form of multiple widgets. In an organization with a significant number of users, this involves creating and dropping temporary tables numerous times. How does this affect the MySQL database server?
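For concreteness, the pattern I mean looks roughly like this (table and column names are invented for illustration):

    -- Build a scratch table for one widget, read it, then throw it away
    CREATE TEMPORARY TABLE widget_top_users AS
        SELECT user_id, COUNT(*) AS actions
        FROM activity_log
        GROUP BY user_id
        ORDER BY actions DESC
        LIMIT 10;

    SELECT * FROM widget_top_users;

    DROP TEMPORARY TABLE widget_top_users;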
Speaking about databases in general: execution plans can't be optimal when you frequently create, use, and drop tables. Generating an execution plan takes time, and the database has to redo that work from scratch for every request, because the tables it runs against keep appearing and disappearing.

Changing Joomla tables engine to InnoDB

I wrote an application in Java that adds articles to a Joomla site.
My problem is that inserting an article requires five queries (adding the article to the content table, inserting the corresponding node into the assets table, updating other nodes in the assets table, and setting the asset id for the inserted article). Because my Java application runs on a remote machine, lots of problems can make any of these queries fail, and if any of them fails, the entire assets table breaks.
I thought about using transactions and a manual commit to solve this, but Joomla's storage engine (MyISAM) doesn't support transactions, so I thought about converting the storage engine of those two tables to InnoDB.
Is this correct? Won't it cause problems for Joomla (for example, in joins with other tables that are still MyISAM)?
Will it affect the site and make it slower?
Is there any other solution (e.g. sending all 5 queries to the server to run in sequence)?
Thanks
Some thoughts:
I am not completely sure, but I don't think Joomla should have any issues with InnoDB. Mixing MyISAM and InnoDB tables in the same JOIN is perfectly legal in MySQL; the only catch is that the MyISAM tables ignore any surrounding transaction. But I'm not a Joomla guy, so I still can't be sure about it.
Why not use triggers, e.g. an AFTER INSERT trigger on the content table?
You may also write a stored procedure that runs all 5 INSERTs, but again, on MyISAM there will not be any transaction support.
Or create a single table to hold all the data from the 5 INSERT queries; this table is only of an intermediate nature. Then create another stored procedure that migrates the data from this intermediate table to the respective tables.
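If you do convert, a minimal sketch of the idea (table and column names are simplified stand-ins, not the real Joomla schema, and the asset id is assumed to be AUTO_INCREMENT):

    -- Convert the two tables involved to InnoDB
    ALTER TABLE content ENGINE = InnoDB;
    ALTER TABLE assets  ENGINE = InnoDB;

    -- Now the five statements succeed or fail as a unit
    START TRANSACTION;
    INSERT INTO content (title, body) VALUES ('New article', 'Body text');
    UPDATE assets SET lft = lft + 2 WHERE lft > 10;   -- make room in the nested set
    UPDATE assets SET rgt = rgt + 2 WHERE rgt > 10;
    INSERT INTO assets (name, lft, rgt) VALUES ('com_content.article.1', 11, 12);
    UPDATE content SET asset_id = LAST_INSERT_ID() WHERE title = 'New article';
    COMMIT;   -- on any error, issue ROLLBACK instead and nothing is half-applied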
Hope the above makes sense!

Max tables in a MySQL database

Is it bad to have too many tables in a database? I have about 160 tables in one database. Is it better to split them across several databases rather than using a single one? A single database is more convenient for me.
There are no server limits on the number of tables in a MySQL database. You will definitely have no problems with 160 tables, and you don't need to split them into multiple databases.
You will not gain performance by splitting your tables into multiple databases. If performance remains an issue, you could consider using per-table tablespaces in order to place some sets of tables on different physical disks.
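Per-table tablespaces come from the innodb_file_per_table setting; each InnoDB table then gets its own .ibd file, which is what makes placing sets of tables on different disks practical:

    -- Check whether each table gets its own tablespace file
    SHOW GLOBAL VARIABLES LIKE 'innodb_file_per_table';

    -- Enable it (dynamic from MySQL 5.5 on; on older servers set it in my.cnf).
    -- Only tables created or rebuilt afterwards are affected.
    SET GLOBAL innodb_file_per_table = ON;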
According to the MySQL reference manual:
MySQL has no limit on the number of tables. The underlying file system may have a limit on the number of files that represent tables. Individual storage engines may impose engine-specific constraints. InnoDB permits up to 4 billion tables.
160 tables isn't radically huge.
16,000 might be...probably would be...more unreasonable - such databases exist in ERP or CRM systems (even into the 40-50K tables range, but many of those tables are not actually used, or are only barely used).
Even so, the typical DBMS will 'handle' such large databases, but there is more strain on the system catalog than usual in such systems.
160 is still OK; it can make SQL queries faster than piling too much content into a single table. In my case I have 8,545,214 tables in a single MySQL database. I didn't want to store millions of users' posts in one table, so I used a separate table for each user's posts; that makes MySQL faster than searching a single table with millions of rows.
WordPress Multisite creates dozens of tables for every new subsite in the same database.
So you are perfectly fine with only 160 tables.
It might be an issue to manage them with phpMyAdmin or other software that lists and scrolls through the tables, but if you work with the code it should not be a problem.

Best approach to relating databases or tables?

What I have:
A MySQL database running on Ubuntu that maintains a large table of articles (similar to WordPress).
A need to relate a given article to another set of data; this set of data will be fairly large.
There may be various sets of data that will be related.
The question:
Is it better to contain these various large sets of data within the same database as the articles, which will have a lot of traffic on it?
or
Is it better to create different databases (on the same server) that relate by a primary key to the main database with the articles?
Put them all in the same DB initially, until you find that there is a performance issue. Much easier than prematurely optimising.
Modern RDBMS are very good at optimising data access.
If you need to connect frequently and read both sets of records, you should put them in the same database; the server then won't have to run permission checks separately for each of your databases.
If you have serious traffic, you should consider using persistent connections for those queries.
If you don't need to read them together frequently, consider putting them on different machines, so that the high traffic on the bigger database won't cause slowdowns on the other.
Different databases on the same server give you all the problems of a distributed architecture without any of the benefits of scaling out. One database per server is the way to go.
When you say 'same database' and 'different databases related' don't you mean 'same table' vs 'different tables'?
If that's the question, I'd say:
One table for articles.
If these 'other sets of data' are all of the same structure, put them all in the same table; if not, one table per kind of data.
Everything in the same database.
If you grow big enough to make database size a performance issue (after many millions of records and lots of queries a second), consider table partitioning, or maybe replacing the biggest table with a key/value store (CouchDB, MongoDB, Redis, Tokyo Cabinet, etc.), which can be a little faster than MySQL and a lot easier to distribute for performance.
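For reference, table partitioning in MySQL looks like this (the schema is invented for illustration):

    -- Range-partition an articles-style table by year, so old data can be
    -- archived or dropped one partition at a time
    CREATE TABLE articles_by_year (
        id        INT NOT NULL,
        published DATE NOT NULL,
        title     VARCHAR(255)
    )
    PARTITION BY RANGE (YEAR(published)) (
        PARTITION p2009 VALUES LESS THAN (2010),
        PARTITION p2010 VALUES LESS THAN (2011),
        PARTITION pmax  VALUES LESS THAN MAXVALUE
    );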