Drawbacks of using manually created temporary tables in MySQL - mysql

I have many queries that use manually created temporary tables in MySQL.
I want to understand if there are any drawbacks associated with this.
I ask this because I use temporary tables for queries that fetch the data shown on the home screen of a web application in the form of multiple widgets. In an organization with a significant number of users, this means temporary tables are created and dropped many times. How does this affect the MySQL database server?
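For context, the pattern in question looks roughly like this (the table and column names below are made up for illustration):

```sql
-- One temporary table per page load. The table is visible only to
-- this connection and is dropped automatically when the connection
-- closes, but here it is dropped explicitly at the end of the request.
CREATE TEMPORARY TABLE tmp_widget_totals AS
    SELECT user_id, COUNT(*) AS open_items
    FROM tickets
    WHERE status = 'open'
    GROUP BY user_id;

SELECT * FROM tmp_widget_totals WHERE user_id = 42;

DROP TEMPORARY TABLE tmp_widget_totals;
```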

Speaking about databases in general: execution plans can't be optimal when you frequently create, use, and remove tables. Generating an execution plan takes time, and the optimizer has no usable statistics for a table that has only just been created, so with the approach you describe the server cannot build a good plan for each fresh temporary table.
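One mitigation, if you keep this pattern, is to give the optimizer something to work with by declaring an index on the temporary table up front (again, names here are illustrative):

```sql
-- Define the temporary table with a key instead of using CREATE ... AS,
-- so lookups against it can use the index rather than a full scan.
CREATE TEMPORARY TABLE tmp_totals (
    user_id    INT NOT NULL,
    open_items INT NOT NULL,
    PRIMARY KEY (user_id)
);

INSERT INTO tmp_totals
    SELECT user_id, COUNT(*)
    FROM tickets
    GROUP BY user_id;
```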

Related

Could a federated table impact database performance?

I have some questions before implementing the following scenario:
I have Database A (it contains multiple tables with lots of data, and it is queried by multiple clients).
This database contains a users table on which I need to create some triggers, but the database is managed by a partner and we don't have permission to create triggers.
Database B is managed by me. It is much lighter, its queries come from only one source, and I need access to the users table data from Database A so I can create triggers and take action on every update, insert, or delete in the users table of Database A.
My main concern is: how could this federated table impact performance on Database A? Database B is not the problem.
Both databases are in the same geographic location, just on different servers.
My goal is to make it possible to take action on every transaction against the users table in Database A.
Queries that read federated tables definitely have performance issues.
https://dev.mysql.com/doc/refman/8.0/en/federated-usagenotes.html says:
A FEDERATED table does not support indexes in the usual sense; because access to the table data is handled remotely, it is actually the remote table that makes use of indexes. This means that, for a query that cannot use any indexes and so requires a full table scan, the server fetches all rows from the remote table and filters them locally. This occurs regardless of any WHERE or LIMIT used with this SELECT statement; these clauses are applied locally to the returned rows.
Queries that fail to use indexes can thus cause poor performance and network overload. In addition, since returned rows must be stored in memory, such a query can also lead to the local server swapping, or even hanging.
(emphasis mine)
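Concretely, a FEDERATED table is only a local definition that points at a table on the remote server; the host, credentials, and column list below are placeholders:

```sql
-- Local stand-in for dbA.users on server A. Every read goes over the
-- network, and per the quoted docs the WHERE clause is applied locally
-- after the rows have been fetched.
CREATE TABLE users_remote (
    id   INT NOT NULL,
    name VARCHAR(64),
    PRIMARY KEY (id)
) ENGINE=FEDERATED
  CONNECTION='mysql://app_user:secret@server-a:3306/dbA/users';
```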
The reason the federated engine was created was to support applications that need to write to tables at a rate greater than a single server can support. If you are inserting to a table and overwhelming the I/O of that server, you can use a federated table so you can write to a table on a different server.
Reading from federated tables is likely to be worse than reading local tables, and cannot be optimized with indexes.
If you need good performance, you should use replication or a CDC tool, to maintain a real table on server B that you can query as a local table, not a federated table.
Another solution would be to cache the users table in the client application, so you don't have to read it on every query.

Comparison between MySQL Federated, Trigger, and Event Schedule?

I have a very specific problem that requires multiple MySQL DB instances, and I need to "sync" all data from each DB/table into one DB/table.
Basically, [tableA.db1, tableB.db2, tableC.db3] into [TableAll.db4].
Some of the DB instances are on the same machine, and some are on a separate machine.
About 80,000 rows are added to each table per day, and there are 3 tables (one per DB).
So about 240,000 rows are "synced" into the single table per day.
I've just been using the Event Scheduler to copy the data from each DB into the "All-For-One" DB every hour.
However, I've been wondering lately if that's the best solution.
I considered using triggers, but I've been told they put a heavy burden on the DB.
Using a statement-level trigger might be better, but that depends too much on how the statement is formed.
Then I heard about FEDERATED (in Oracle terms, a "DB link"),
and I thought I could use it to link each table and create a VIEW over those tables.
But I don't know much about databases, so I don't really know the implications of each method.
So, my question is..
Considering that the "All-For-One" DB only needs to be read-only,
which method would be better, performance- and resource-wise, for copying data from multiple databases into one database regularly?
Thanks!
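For reference, the hourly Event Scheduler approach mentioned above can be sketched like this, at least for the source DBs that live on the same server (database, table, and timestamp column names are assumptions for illustration):

```sql
-- The scheduler must be running: SET GLOBAL event_scheduler = ON;
CREATE EVENT sync_table_a
ON SCHEDULE EVERY 1 HOUR
DO
    -- Copy only the last hour's rows into the "All-For-One" table,
    -- assuming a created_at timestamp exists on the source table.
    INSERT INTO db4.TableAll
    SELECT *
    FROM db1.tableA
    WHERE created_at >= NOW() - INTERVAL 1 HOUR;
```

Tables on a separate machine cannot be reached this way directly, which is where FEDERATED or replication would come in.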

Best practice for sharing data over different MySQL/MariaDB databases

I have an app that uses a DB with 50+ tables. Now I'm in a situation where I need to install another instance of the app. The problem is that I would like to share some tables as "common" data, e.g. "brands" or "cities" or "countries", between both apps (for now there are only 2, but there may soon be more).
I searched and found that I can put such tables in a "common" DB and have views in each DB instance that point to the corresponding tables.
The main app queries rely heavily on those common tables, so I'm concerned this will slow my queries down, since views don't have indexes.
Are there better practices? I'm now looking at replication in the MySQL manual. Is that the way to go? Replicate tables from the common DB to the app instances' DBs?
Can this be one-direction replication (only tables in the "common" DB can be changed and are then replicated to the other DBs)?
Thanks for any advice.
Why don't you create materialized views, alias Flexviews? Then you could create indexes on them and would not have to worry about whether they would slow you down.
More on Flexviews at MariaDB's site:
Flexviews

Should I use a MySQL view or a report cronjob

At my work my colleagues always build report cronjobs for heavy tables. With the cronjob we fetch all of one day's data per user and insert the totals into a report table. The report overview page is not accurate, because it lags by up to 1 hour.
The cronjob runs 24 times a day (every hour).
Would it be better to use a MySQL view? When a record is added to the master table, the MySQL view is updated, right? That is a very tough operation. Will it affect the users using the dashboard?
Kind regards,
Joost
Okay so some terminology first.
The cron jobs are most likely appending data to existing tables (perhaps using an upsert method like INSERT ... ON DUPLICATE KEY UPDATE). The data you are writing to the existing tables can be indexed, just like normal MySQL tables, and it is also persistent on disk.
Views, on the other hand, are really nothing more than saved queries in MySQL. Every time you open a view, you run the query again. Views aren't really useful for performance optimization so much as they are useful as short names for small, efficient queries that would otherwise be a pain to remember. Views cannot have indices (although, being saved queries, the underlying query can still make use of the indices on the tables it references), and they are not persisted to disk. Every time you load the view, you run the query that defines it again.
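To make the distinction concrete (table and column names here are hypothetical):

```sql
-- A view is just a stored SELECT; nothing is precomputed.
CREATE VIEW daily_totals AS
    SELECT user_id, DATE(created_at) AS day, SUM(amount) AS total
    FROM orders
    GROUP BY user_id, DATE(created_at);

-- Reading the view re-runs the aggregation over `orders` every time:
SELECT * FROM daily_totals WHERE day = CURRENT_DATE;
```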
Now, in between views and tables populated by Cron jobs, you also could install a plugin for MySQL called Flexviews (https://github.com/greenlion/swanhart-tools). Flexviews allows MySQL to use what are called materialized views (eg http://en.wikipedia.org/wiki/Materialized_view). Materialized views are basically views that are persisted to disk as tables. And, since they are tables, they can also use indices.
Materialized views are not native to MySQL, but the developer who maintains that plugin is well known in the MySQL community, and he tends to write good, reliable SQL tools. Obviously it would be a mistake to test the plugin in a production environment, or without using backups. But there are plenty of folks who use Flexviews in production to accomplish exactly what it seems like you'd like to do: obtain near-real-time updates of dashboard/summary tables in a way that doesn't murder DB performance.
I'd definitely check Flexviews out... you can learn more about it
here: http://www.percona.com/blog/2011/03/23/using-flexviews-part-one-introduction-to-materialized-views/
and here: http://www.percona.com/blog/2011/03/25/using-flexviews-part-two-change-data-capture/

Django 1.1 creating a high number of temporary tables with MySQL 5.1

We are using Django 1.1/MySQL 5.1 for one of our projects. It seems that as the load on the webserver (Apache/mod_wsgi) increases, the number of temporary tables created in MySQL also increases, triggering heavy alerts from our monitoring infrastructure. To give you an example, when the number of connected clients grows from 100 to 300, the number of temporary tables goes from 500 to 1000. The entire database uses InnoDB tables. Here are a few things I would like to know:
Is it normal to have such a high number of temporary tables?
What are the normal/optimal limits on the number of temporary tables allowed?
How do I minimize the number of temporary tables that Django creates?
I know some of these are pretty vague, as the questions above depend on hardware, memory, and other aspects of the DB machine and the webserver. But I am trying to get a sense of what is going on and what causes so many temporary tables to be created.
Thanks
Venkat.
Django doesn't create any temporary tables by itself. It's all in the application that your colleagues have programmed.
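One way to investigate is to look at MySQL's own counters and query plans; the server creates implicit temporary tables for sorting and grouping, and those are counted separately from anything the application creates explicitly:

```sql
-- Server-side counters for implicit temporary tables:
-- Created_tmp_tables      : temp tables created in memory
-- Created_tmp_disk_tables : those that spilled to disk
SHOW GLOBAL STATUS LIKE 'Created_tmp%';

-- A query that needs an implicit temp table is flagged in its plan;
-- `t` and `col` below are placeholders for one of the app's queries.
EXPLAIN SELECT col, COUNT(*) FROM t GROUP BY col ORDER BY COUNT(*);
-- Look for "Using temporary" in the Extra column of the output.
```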