I was wondering what the best way of storing user queries correlated with timestamps in MySQL was. Let's say I have just two inputs, a user's "query" and "timestamp"...
I could create a MySQL table with fields (id, query, count, timestamp_list), where:
id is a unique identifier for the query,
query is the literal query string,
count is the (constantly-UPDATEd) number of times that query is entered, and
timestamp_list is a LONGTEXT or something with a list of times that query was searched.
Is there a better way to correlate these using indexing I'm not familiar with? It seems like storing a list of timestamps in a LONGTEXT is dumb, but easy; perhaps I should create a separate table like:
id
query_id (correlates to id in first table)
timestamp
And I can join results with the first table. Any thoughts? Thanks!
If you need to record the timestamp when each query was performed, I'd suggest you have 2 tables:
tbl_queries
- id INT
- query VARCHAR
tbl_queries_performed
- id INT AUTOINCREMENT
- query_id INT
- timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
Each time you want to record a query, check whether it already exists in tbl_queries (inserting it if not), then save an entry in tbl_queries_performed with the corresponding query_id.
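A minimal sketch of that schema (table names from the answer; the column sizes, the UNIQUE key, and the foreign key are assumptions added for illustration):

```sql
-- One row per distinct query string.
CREATE TABLE tbl_queries (
    id    INT UNSIGNED NOT NULL AUTO_INCREMENT,
    query VARCHAR(255) NOT NULL,
    PRIMARY KEY (id),
    UNIQUE KEY uq_query (query)   -- lets you look up / insert by query string
) ENGINE=InnoDB;

-- One row per execution of a query.
CREATE TABLE tbl_queries_performed (
    id       INT UNSIGNED NOT NULL AUTO_INCREMENT,
    query_id INT UNSIGNED NOT NULL,
    ts       TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (id),
    KEY idx_query_id (query_id),
    FOREIGN KEY (query_id) REFERENCES tbl_queries (id)
) ENGINE=InnoDB;
```

With the UNIQUE key in place, INSERT IGNORE followed by a SELECT (or INSERT ... ON DUPLICATE KEY UPDATE) is one way to get the query_id without a separate existence check. The per-query count from the original design then becomes a simple COUNT(*) grouped by query_id, so it never needs to be maintained by hand.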
I have an innodb table with 100M records like this:
id name pid cid createdAt
int char int int timestamp
id is the primary key, and pid has a secondary index.
the most often query is select count(*) from table1 where pid='pid'
My question is: does this query do a full table scan?
count(*) and count(column) do different things, and it's worth being precise about which one you want.
COUNT(col_name) counts rows where col_name is not NULL, so COUNT(name) counts records whose name field is non-null, for example. If the counted column is not indexed, this can force a full table scan.
COUNT(*), by contrast, counts all rows regardless of their content, NULLs included; it never excludes a row. In MySQL, COUNT(1) is treated as exactly equivalent to COUNT(*), so neither form is faster than the other; use whichever reads more clearly.
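A quick illustration of the difference (the table and column names here are made up):

```sql
CREATE TABLE t (name VARCHAR(20));
INSERT INTO t (name) VALUES ('a'), (NULL), ('b');

SELECT COUNT(*)    FROM t;  -- 3: all rows, NULLs included
SELECT COUNT(name) FROM t;  -- 2: only rows where name is not NULL
SELECT COUNT(1)    FROM t;  -- 3: equivalent to COUNT(*)
```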
If you want to know what the query does, then look at the "explain" plan.
If you want to speed the query in question, then create an index on table1(pid).
The query should scan the index rather than the table.
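Since pid is already indexed in the question's table, EXPLAIN should confirm an index-only scan; the exact plan depends on your data and MySQL version, but the shape is roughly:

```sql
-- Expect the pid index to appear under `key`, with Extra: Using index,
-- meaning the count is answered from the index rather than a full table scan.
EXPLAIN SELECT COUNT(*) FROM table1 WHERE pid = 42;
```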
I have an existing MySQL table where one of the columns is time int(10).
The field has many records like
1455307434
1455307760
Is it a date-time, encoded somehow?
What should the SELECT query be so that it displays an actual date?
Those values are Unix timestamps (seconds since the epoch), not encrypted. Use FROM_UNIXTIME():
SELECT FROM_UNIXTIME(mycolumn)
FROM mytable
I am working on a project related to a database. I want to find out the highest value from the primary key column of a same table (say tbrmenuitem) which is stored in multiple databases.
So, is it possible with one query, or do I have to fire several queries in sequence: a first query to get the table name in each database, a second to find that table's primary key, and then a MAX() on the primary key column?
You can query tables in other databases on a server similar to how you would any other tables. You just need to qualify the table name with the name of the schema (database).
SELECT MAX(max) FROM (
SELECT MAX(id_column) AS max
FROM test2.test2table
UNION ALL
SELECT MAX(id_column) AS max
FROM test.test1table
) AS t
What this does is select the MAX() of a column from the table test2table in the test2 database.
SELECT MAX(id_column) AS max
FROM test2.test2table
It then performs a UNION on that result, with the result of a similar query performed on the test1table table in the test database.
UNION ALL
SELECT MAX(id_column) AS max
FROM test.test1table
This is then wrapped in an outer query that pulls the maximum of the values returned by the UNION.
I have a MySQL table with the following columns:
ID: int(11) [this is the primary key]
Date: date
and I run the MySQL query:
SELECT * from table WHERE Date=CURDATE() and ID=1;
This takes between 0.6 and 1.2 seconds.
Is there any way to optimize this query to get results quicker?
My objective is to find out if I already have a record for today for this ID.
Add an index on (ID, Date); a single composite index covers both conditions in this query.
See CREATE INDEX manual.
You could add a limit 1 at the end, since you are searching for a primary key the max results is 1.
And if you only want to know whether it exists or not, you could replace * with ID to select only the ID.
Furthermore, if you haven't already, you really need to add indexes.
SET @cur_date = CURDATE();
...WHERE Date = @cur_date ...
and then create an index on (Date, ID) (order is important; it should match the order you query on).
In general, evaluating functions before the query and storing the results in variables lets the optimizer treat them as plain constants rather than expressions, which can allow it to choose a faster plan.
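Putting those suggestions together (table and column names are from the question, quoted because table and Date are reserved words; the EXISTS form is one way to phrase "do I already have a record"):

```sql
-- Composite index matching the predicate columns, in query order.
CREATE INDEX idx_date_id ON `table` (`Date`, ID);

SET @cur_date = CURDATE();

-- Returns 1 if a record exists for today and this ID, 0 otherwise.
SELECT EXISTS (
    SELECT 1
    FROM `table`
    WHERE `Date` = @cur_date AND ID = 1
) AS already_recorded;
```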
I have a table with a nullable datetime field.
I'll execute queries like this:
select * from TABLE where FIELD is not null
select * from TABLE where FIELD is null
Should I index this field or is not necessary? I will NOT search for some datetime value in that field.
It's probably not necessary.
The only possible edge case where an index can be used (and be of help) is when the ratio of NULL to non-NULL rows is heavily skewed (e.g. you have 100 NULL datetimes in a table with 100,000 rows). In that case select * from TABLE where FIELD is null would use the index and be considerably faster for it.
In short: yes.
Slightly longer: yeeees. ;-)
(From http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html) - "A search using col_name IS NULL employs indexes if col_name is indexed."
It would depend on the number of unique values and the number of records in the table. If you're just searching on whether or not a column is null, you'll probably have one query use the index and one not, depending on the overall proportion of NULLs in the table.
For example: suppose 99% of the records in the table have NULL in the queried column, you put an index on that column, and then execute:
SELECT columnIndexed FROM blah WHERE columnIndexed is null;
The optimizer most likely won't use the index, because reading the index and then fetching the associated rows would cost more than just scanning the table directly. Index usage is based on statistical analysis of the table, and one major factor is the cardinality of the values. In general, indexes work best and give the best performance when they select a small subset of the rows in the table. So if you change the above query to select where columnIndexed is not null, you're bound to use the index.
For more details check out the following: http://dev.mysql.com/doc/refman/5.1/en/myisam-index-statistics.html
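The contrast above can be checked directly with EXPLAIN (the table and column names follow the hypothetical example where 99% of rows are NULL; actual plans depend on your data and table statistics):

```sql
CREATE INDEX idx_col ON blah (columnIndexed);

-- Matches only the rare non-NULL rows: the optimizer will likely use idx_col.
EXPLAIN SELECT columnIndexed FROM blah WHERE columnIndexed IS NOT NULL;

-- Matches ~99% of rows: the optimizer will likely prefer a full table scan.
EXPLAIN SELECT columnIndexed FROM blah WHERE columnIndexed IS NULL;
```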