For simplicity, say there are two tables, A and B, with unique indexes on non-key INT columns COLUMN_A and COLUMN_B respectively. I would like to do something like the following pseudo-SQL. (Note that this would be in a stored procedure but, according to https://mariadb.com/kb/en/lock-tables/, LOCK TABLES is not allowed there.)
IF COLUMN_A != x for all rows of A
Single query to insert a row into B with COLUMN_B = x (if it does not already exist)
The problem is that other parts of the code could read A and B, see that x does not exist in either, and try to insert x into A between the IF statement and the insertion query. The race condition seems to necessitate a read/write lock on A. I don't think InnoDB's internal locking would prevent this from happening (i.e. any locks taken during the IF statement would be released before the insertion executes).
Crucially, COLUMN_A and COLUMN_B are formally unrelated, so there doesn't appear to be a straightforward way to enforce a uniqueness constraint across them (given that a view involving both A and B probably wouldn't be updatable). (I would be fine creating some sort of "relationship" between them as long as they remain in separate tables, but I'm not sure anything exists that would do this.) Is it necessary to have a table lock on A in this case?
Something like this would seem like a better solution: "Can I use row locks on rows that have not been created yet?" But that question is about Postgres, and the features don't appear to be available in MySQL.
Thank you.
Put the condition into the INSERT statement rather than using the IF statement.
INSERT INTO B (col1, col2, col3, ...)
SELECT val1, val2, val3, ...
FROM DUAL
WHERE NOT EXISTS (
    SELECT *
    FROM A
    WHERE column_a = x   -- x unquoted, since COLUMN_A is an INT column
);
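The same single-statement pattern could guard the inserts into A that the question worries about; a minimal sketch, with x again standing in for the value being inserted:

INSERT INTO A (column_a, ...)
SELECT x, ...
FROM DUAL
WHERE NOT EXISTS (
    SELECT *
    FROM B
    WHERE column_b = x
);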
I am working with MySQL databases. There are two columns in a particular table (column1 and column2) and 10,000,000+ rows. I want to get all entries where column1 is one of a list of 50,000 numbers. I am currently using this query:
SELECT * FROM db.table WHERE column1 IN (/* list of 50,000 numbers */);
Is there a faster query than this?
I cannot talk about MySQL, only SQL Server, but the same principle may apply.
On SQL Server an IN list has a serious problem: no statistics. Which means that with a non-trivial number of values, the query plan is a table scan.
It is better to make a temporary table, load the IDs into it (AND put a unique index on it, which provides statistics), and then JOIN between the two tables. That gives the query analyzer more to work with.
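A minimal sketch of that approach translated to MySQL (id_list, your_table and the example values are placeholders):

-- Load the values into an indexed temporary table, then join:
CREATE TEMPORARY TABLE id_list (
    id INT NOT NULL,
    UNIQUE KEY (id)   -- the unique index is what gives the optimizer statistics
);
INSERT INTO id_list (id) VALUES (1), (2), (3);  -- ... all 50,000 values

SELECT t.*
FROM your_table AS t
JOIN id_list AS l ON l.id = t.column1;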
INDEX(column1)
Are there only 2 columns in the table? If not, then don't use SELECT *, but spell out the column names.
Please provide EXPLAIN SELECT ...
Sample Table:
+----+-------+-------+-------+-------+-------+---------------+
| id | col1  | col2  | col3  | col4  | col5  | modifiedTime  |
+----+-------+-------+-------+-------+-------+---------------+
| 1  | temp1 | temp2 | temp3 | temp4 | temp5 | 1554459626708 |
+----+-------+-------+-------+-------+-------+---------------+
The above table has 50 million records.
(col1, col2, col3, col4, col5 are VARCHAR columns)
(id is the PK)
(modifiedTime)
Every column is indexed.
For example: I have two tabs on my website.
FirstTab - I print the count of the above table with the following criteria [col1 like "value1%" and col2 like "value2%"]
SecondTab - I print the count of the above table with the following criteria [col3 like "value3%"]
As I have 50 million records, the count with those criteria takes too much time to get the result.
Note: I would change the records (rows in the table) sometimes: insert new rows, delete records that are no longer needed.
I need a feasible solution instead of querying the whole table, e.g. caching the older count. Is anything like this possible?
While I'm sure it's possible in MySQL, here's a solution for Postgres, using triggers.
The count is stored in another table, and there's a trigger on each insert/update/delete that checks whether the new row meets the condition(s), and if it does, adds 1 to the count. Another part of the trigger checks whether the old row meets the condition(s), and if it does, subtracts 1.
Here's the basic code for the trigger that counts the rows with temp2 = '5':
CREATE OR REPLACE FUNCTION updateCount() RETURNS TRIGGER AS
$func$
BEGIN
    -- New row: add 1 if it matches the condition
    IF TG_OP = 'INSERT' OR TG_OP = 'UPDATE' THEN
        EXECUTE 'UPDATE someTableCount SET cnt = cnt + 1 WHERE 1 = (SELECT 1 FROM (VALUES($1.*)) x(id, temp1, temp2, temp3) WHERE x.temp2 = ''5'')'
        USING NEW;
    END IF;
    -- Old row: subtract 1 if it matched the condition
    IF TG_OP = 'DELETE' OR TG_OP = 'UPDATE' THEN
        EXECUTE 'UPDATE someTableCount SET cnt = cnt - 1 WHERE 1 = (SELECT 1 FROM (VALUES($1.*)) x(id, temp1, temp2, temp3) WHERE x.temp2 = ''5'')'
        USING OLD;
    END IF;
    RETURN NEW;
END
$func$ LANGUAGE plpgsql;
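To activate it, the function still has to be attached to the data table; a sketch, assuming the table is named someTable:

CREATE TRIGGER someTable_count
AFTER INSERT OR UPDATE OR DELETE ON someTable
FOR EACH ROW EXECUTE PROCEDURE updateCount();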
Here's a working example on dbfiddle.
You could of course modify the trigger code to use dynamic WHERE expressions and store a count for each one in the table, like:
CREATE TABLE someTableCount
(
whereExpr text,
cnt INT
);
INSERT INTO someTableCount VALUES ('temp2 = ''5''', 0);
In the trigger you'd then loop through the conditions and update accordingly.
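A sketch of what that loop might look like for the INSERT/UPDATE branch (hypothetical; assumes a rec record; declaration in the function and uses Postgres's format()):

-- Inside the trigger function body, in place of the hard-coded EXECUTE:
FOR rec IN SELECT whereExpr FROM someTableCount LOOP
    EXECUTE format(
        'UPDATE someTableCount SET cnt = cnt + 1
         WHERE whereExpr = %L
           AND 1 = (SELECT 1 FROM (VALUES($1.*)) x(id, temp1, temp2, temp3) WHERE %s)',
        rec.whereExpr, rec.whereExpr)
    USING NEW;
END LOOP;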
FirstTab - I print the count of the above table with the following criteria [col1 like "value1%" and col2 like "value2%"]
That would benefit from a 'composite' index:
INDEX(col1, col2)
because it would be "covering". (That is, all the columns needed in the query are found in a single index.)
SecondTab - I print the count of the above table with the following criteria [col3 like "value3%"]
You apparently already have the optimal (covering) index:
INDEX(col3)
Now, let's look at it from a different point of view. Have you noticed that search engines no longer give you an exact count of matching rows? You are finding out why -- it takes too long to do the tally, no matter what technique is used.
Since "col1" gives me no clue of your app, nor any idea of what is being counted, I can only throw out some generic recommendations:
Don't give the counts.
Precompute the counts, save them somewhere and deliver 'stale' values. This can be handy if there are only a few different "values" being counted. It is probably not practical for arbitrary strings.
Say "about nnnn" in the output.
Play some tricks to decide whether it is practical to compute the exact value or just say "about".
Say "more than 1000".
etc
If you would like to describe the app and the columns, perhaps I can provide some clever tricks.
You expressed concern about "insert speed". This is usually not an issue, and the benefit of having the 'right' index for SELECTs outweighs the slight performance hit for INSERTs.
It sounds like you're trying to use a hammer when a screwdriver is needed. If you don't want to run batch computations, I'd suggest using a streaming framework such as Flink or Samza to add and subtract from your counts when records are added or deleted. This is precisely what those frameworks are built for.
If you're committed to using SQL, you can set up a job that performs the desired count operations every given time window, and stores the values to a second table. That way you don't have to perform repeated counts across the same rows.
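For example, with MySQL's event scheduler (all names here are hypothetical, and the scheduler must be enabled with SET GLOBAL event_scheduler = ON):

CREATE TABLE count_cache (
    tab          VARCHAR(20) PRIMARY KEY,
    cnt          BIGINT,
    refreshed_at DATETIME
);

CREATE EVENT refresh_counts
ON SCHEDULE EVERY 5 MINUTE
DO
    -- Recompute and overwrite the cached count for the first tab:
    REPLACE INTO count_cache
    SELECT 'tab1', COUNT(*), NOW()
    FROM the_table
    WHERE col1 LIKE 'value1%' AND col2 LIKE 'value2%';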
As a general rule of thumb when it comes to optimisation (and yes, one SQL server node with 50M+ entries per table needs it!), here is a list of a few possible optimisation techniques, some fairly easy to implement, others needing more serious modifications:
optimize your MySQL field types and sizes, e.g. use INT instead of VARCHAR if the data can be represented as numbers, use SMALLINT instead of BIGINT, etc. If you really need VARCHAR, use the smallest possible length for each field,
look at your dataset; are there any repeating values? Say one of your fields has only 5 unique values across 50M rows: save those values to a separate table and just link its PK to this sample table (see the sketch after this list),
MySQL partitioning, a basic understanding of which is shown at this link; the general idea is to implement some kind of partitioning scheme, e.g. a new partition is created by a CRON job every day at "night" when server utilization is at its minimum, or when you reach another 50k INSERTs or so (btw, some extra effort will also be needed for UPDATE/DELETE operations on different partitions),
caching is another very simple and effective approach, since you are requesting (almost) the same data (I am assuming your value1%, value2%, value3% are always the same?) over and over again. So do a SELECT COUNT() once in a while, and then use a differential count to get the actual number of selected rows,
an in-memory database can be used alongside traditional SQL DBs to hold often-needed data: a simple key-value-pair style could be enough: Redis, Memcached, VoltDB, MemSQL are just some of them. Also, MySQL has a MEMORY engine,
use other types of DBs, e.g. a NoSQL DB like MongoDB, if your dataset/system can make use of a different concept.
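A sketch of the normalization idea from the second bullet (table/column names and types are hypothetical):

-- col1 has only a handful of distinct values, so store them once:
CREATE TABLE col1_lookup (
    col1_id SMALLINT PRIMARY KEY,
    col1    VARCHAR(255) NOT NULL UNIQUE
);

-- The big table then keeps only the small FK instead of the repeated VARCHAR:
ALTER TABLE sample_table
    ADD COLUMN col1_id SMALLINT,
    ADD FOREIGN KEY (col1_id) REFERENCES col1_lookup (col1_id);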
If you are looking for aggregation performance and don't really care about insert times, I would consider changing your row-oriented DBMS for a column-oriented DBMS.
A column DBMS stores data as columns, meaning each column is indexed independently of the others. This allows much faster aggregations; I switched from Postgres to MonetDB (an open-source column DBMS), and summing one field of a 6-million-row table dropped from ~60 s to 50 ms. I chose MonetDB because it supports SQL querying and ODBC connections, which were a plus for my use case, but you will see similar performance improvements with other column DBMSs.
There is a downside to column storage, which is that you lose performance on insert, update and delete queries, but from what you said, I believe it won't affect you much.
In Postgres, you can get an estimated row count from the internal statistics that are managed by the query planner:
SELECT reltuples AS approximate_row_count FROM pg_class WHERE relname = 'mytable';
Here you have more details: https://wiki.postgresql.org/wiki/Count_estimate
You could create a materialized view first. Something like this:
CREATE MATERIALIZED VIEW mytable AS
SELECT * FROM the_table WHERE col1 LIKE 'value1%' AND col2 LIKE 'value2%';
You can also materialize the count queries directly. If you have 10 tabs, then you have to materialize 10 views:
CREATE MATERIALIZED VIEW count_tab1 AS
SELECT count(*) FROM the_table WHERE col1 LIKE 'value1%' AND col2 LIKE 'value2%';
CREATE MATERIALIZED VIEW count_tab2 AS
SELECT count(*) FROM the_table WHERE col2 LIKE 'value2%' AND col3 LIKE 'value3%';
...
After each insert, you should refresh the views (asynchronously):
REFRESH MATERIALIZED VIEW count_tab1;
REFRESH MATERIALIZED VIEW count_tab2;
...
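If readers must not be blocked while the refresh runs, Postgres also offers REFRESH MATERIALIZED VIEW CONCURRENTLY, which requires a unique index on the view; a sketch for count_tab1 (the index name is a placeholder):

-- count(*) yields a column named "count"; on a one-row view any unique index works:
CREATE UNIQUE INDEX count_tab1_uidx ON count_tab1 (count);
REFRESH MATERIALIZED VIEW CONCURRENTLY count_tab1;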
As noted in the critique, you have not posted what you have tried, so I will assume the question is limited to exactly what you posted. Kindly report results for exactly that much.
How much time are you currently spending on the subset of the problem, i.e. the count of [col1 like "value1%" and col2 like "value2%"], and second, [col3 like "value3%"]?
The trick is to scan the data source once and to make the data source smaller by creating an index. So first create an index on (col1, col2, col3, id). The purpose of col3 and id is so that the database scans just the index. And I would get both counts in the same SQL:
SELECT SUM(CASE
               WHEN col1 LIKE 'value1%' AND col2 LIKE 'value2%' THEN 1
               ELSE 0
           END) AS cnt_condition_1,
       SUM(CASE
               WHEN col3 LIKE 'value3%' THEN 1
               ELSE 0
           END) AS cnt_condition_2
FROM your_table          -- placeholder name; "table" itself is a reserved word
WHERE (col1 LIKE 'value1%' AND col2 LIKE 'value2%')
   OR (col3 LIKE 'value3%');
So the 50M-row table is probably very wide right now. This should trim it down - on a reasonable server I would expect the above to return in a few seconds. If it does not, and each condition returns < 10% of the table, the second option would be to create multiple indexes, one per scenario, and do a count for each, so that an index is used in each case.
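For concreteness, the indexes mentioned could be declared like this (a sketch; your_table and the index names are placeholders):

-- Covering index for the single-scan query above:
CREATE INDEX idx_all ON your_table (col1, col2, col3, id);

-- Per-scenario alternative, if each condition returns < 10% of the table:
CREATE INDEX idx_c1_c2 ON your_table (col1, col2);
CREATE INDEX idx_c3    ON your_table (col3);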
If there are no bulk inserts/bulk updates happening in your system, can you try vertical partitioning of your table? With vertical partitioning, you can separate the data blocks of col1 and col2 from the rest of the table's data, so your search space will shrink.
Also, indexing every column doesn't seem to be the best approach. Index only where it is absolutely needed. In this case, I would say INDEX(col1, col2) and INDEX(col3).
Even after indexing, you need to look into the fragmentation of those indexes and adjust them accordingly to get the best results, because sometimes the 50 million index entries for one column can sit as one huge chunk, which will restrict the multiprocessing capabilities of your SQL server.
Each database has its own peculiarities in how to "enhance" its RDBMS. I can't speak for MySQL or SQL Server, but for PostgreSQL you should consider making the indexes that you search GIN (Generalized Inverted Index) indexes.
-- Plain text columns need the pg_trgm operator class for GIN to support LIKE
-- (your_table and the index names are placeholders):
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX idx_col1_gin ON your_table USING gin (col1 gin_trgm_ops);
CREATE INDEX idx_col2_gin ON your_table USING gin (col2 gin_trgm_ops);
CREATE INDEX idx_col3_gin ON your_table USING gin (col3 gin_trgm_ops);
More information can be found here.
This will work:
SELECT COUNT(*)
FROM (
    SELECT *
    FROM tablename
    WHERE col1 LIKE 'value1%'
      AND col2 LIKE 'value2%'
      AND col3 LIKE 'value3%'
) t                       -- MySQL requires an alias on the derived table
WHERE REGEXP_LIKE(t.col1, '^value1(.*)$')
  AND REGEXP_LIKE(t.col2, '^value2(.*)$')
  AND REGEXP_LIKE(t.col3, '^value3(.*)$');
Try not to apply an index on all the columns, as that slows down the processing of a SQL query; have indexes on the required columns only.
Suppose I have a website, and its database has one table:
table_1(a1 (primary key), a2, a3, a4, a5, a6, a7). On my website, most transactions use only attributes (a1, a2, a3); a4, a5, a6 and a7 are rarely used. So I want to know which is the better design approach for accessing the data:
A) Keep the table as it is and use the query SELECT a1, a2, a3 FROM table_1;
B) Create 2 separate tables: table_1(a1, a2, a3) and table_2(a1, a4, a5, a6, a7).
Which approach has the lower cost or load on the database?
For read queries over (a1, a2, a3), obviously "B" is (not noticeably) cheaper.
But everything else is worse, except if (a4, a5, a6, a7) are, in most cases, NULL and you use a (1 -> 0,1) cardinality between the two tables (that is: for each a1 in table_1 there are 0 or 1 tuples with the same value of a1 in table_2 and, of course, all values of a1 in table_2 exist in table_1).
Anyway, as I said, any possible advantage will be minimal compared to the complexity, the maintainability issues, and also the efficiency reduction (for inserts and whenever you need data from both tables).
So, if I were you, I would select layout "A" without any doubt.
B provides a lower cost than A, because with option A you waste space on a4, a5, a6, a7. But if you choose option B, you must create a foreign key (a1) to connect to table_1. Then your SQL queries become cheaper.
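A sketch of option B's schema (the column types are assumptions, since the question doesn't give them):

CREATE TABLE table_1 (
    a1 INT PRIMARY KEY,
    a2 VARCHAR(50),
    a3 VARCHAR(50)
);

CREATE TABLE table_2 (
    a1 INT PRIMARY KEY,   -- same key as table_1, giving the 1 -> 0,1 cardinality
    a4 VARCHAR(50),
    a5 VARCHAR(50),
    a6 VARCHAR(50),
    a7 VARCHAR(50),
    FOREIGN KEY (a1) REFERENCES table_1 (a1)
);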
I have a very simple SELECT like this:
SELECT * FROM table
WHERE column1 IN (5, 20, 30);
There is an index on column1; after EXPLAINing the query, the index is used and everything looks OK.
But if there are more than three values in the list, like this:
SELECT * FROM table
WHERE column1 IN (5, 20, 30, 40);
the index is not used and the SELECT runs through all records. Am I doing something wrong? Thanks.
How many rows does MySQL think there are in the table?
MySQL often (usually correctly!) assumes it will be quicker to do a sequential scan of the rows rather than mess around with the more complex access via an index.
It varies from DBMS to DBMS, but the tradeoff point is somewhere around 30% of the rows.
I.e. if the optimiser expects more than 30% of the rows to be selected, it will sequentially scan the whole table, as this is usually faster than doing lots of direct accesses via indexes.
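To see what the optimiser decided, and to test the alternative plan, you could try something like this in MySQL (your_table and column1_idx are placeholders):

-- Check whether the index is chosen for the longer list:
EXPLAIN SELECT * FROM your_table WHERE column1 IN (5, 20, 30, 40);

-- Force the index to compare its timing against the table scan:
SELECT * FROM your_table FORCE INDEX (column1_idx)
WHERE column1 IN (5, 20, 30, 40);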