I am working on a MySQL query for a report. The idea is to have a simple table, say 'reportTable', with the values being fetched from various places. I could then use the reportTable more easily without remembering lots of joins etc., and also share this table with other projects.
Should I break down the inner insert part of the query so it does chunks at a time? I will be adding probably tens of thousands of rows.
INSERT INTO reportTable
-- long query grabbing results from various places
SELECT var1 FROM schema1.table1
UNION ALL
SELECT var2 FROM schema2.table1
-- etc.
This addresses your concern that inserting the data takes too long and so on. I understood it as: you rebuild your table each time. So, instead of doing that, just fetch the data that is new and not already in your table. Since looking up whether the data is already present in your report table might be expensive too, just get the delta. Here's how:
Make sure that a column like this is present in every table you pull from:
ALTER TABLE yourTable ADD COLUMN created timestamp DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;
The ON UPDATE clause is of course optional; I don't know if you need to keep track of changes. If so, leave a comment and I can provide you with a solution with which you can keep a history of your data.
Now you need a small table that holds some meta information.
CREATE TABLE deltameta (tablename varchar(50), LSET timestamp, CET timestamp);
LSET is short for Last Successful Extraction Time, CET for Current Extraction Time.
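You also need to seed one row per source table, otherwise the UPDATE below matches nothing. A minimal example (the initial LSET of '1970-01-01' is just an assumption so the first run picks up everything):
INSERT INTO deltameta (tablename, LSET, CET)
VALUES ('theTableFromWhichYouGetData', '1970-01-01 00:00:00', CURRENT_TIMESTAMP);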
When you get your data it works like this:
UPDATE deltameta SET CET = CURRENT_TIMESTAMP WHERE tablename = 'theTableFromWhichYouGetData';
SELECT @varLSET := LSET, @varCET := CET FROM deltameta WHERE tablename = 'theTableFromWhichYouGetData';
INSERT INTO yourReportTable
SELECT whatever FROM aTable WHERE created >= @varLSET AND created < @varCET;
UPDATE deltameta SET LSET = CET WHERE tablename = 'theTableFromWhichYouGetData';
If anything goes wrong during the insert, your script stops and you get the same data the next time you run it. Additionally, you can work with transactions here if you need to roll back. Again, write a comment if you need help with this.
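A minimal sketch of the transactional version (the four statements are the ones from above):
START TRANSACTION;
-- the four delta-load statements from above go here
COMMIT;
-- on any error: ROLLBACK; leaves LSET untouched, so the next run retries the same window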
I may be wrong, but you seem to be describing a basic view. You can read an introduction to views here: http://techotopia.com/index.php/An_Introduction_to_MySQL_Views, and here are the MySQL view docs: http://dev.mysql.com/doc/refman/5.0/en/create-view.html
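A minimal sketch (the columns and the join condition are assumptions, since the original query wasn't shown):
CREATE VIEW reportView AS
SELECT t1.var1, t2.var2
FROM schema1.table1 t1
JOIN schema2.table1 t2 ON t1.id = t2.id;
You can then SELECT from reportView as if it were a table, without repeating the joins.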
I am trying to build an API and one of the endpoints will return a random row from my database. In the database I have a table in which I want a "views" column to be updated every time I run a SELECT query on a row.
My table looks something like this:
CREATE TABLE `movies` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`title` varchar(256) NOT NULL,
`description` text,
`views` int(11) NOT NULL DEFAULT 0,
PRIMARY KEY (`id`)
);
The row is selected by ordering the table with rand() and then limiting the result by 1, like so:
SELECT * FROM table ORDER BY rand() LIMIT 1;
Is something like the below possible?
SELECT * FROM table ORDER BY rand() LIMIT 1
UPDATE table SET views = +1 WHERE (selected row?);
I'm new to SQL queries, so I don't know if this is the best way or even possible at all. Should I run a new query after this one has completed that updates the value instead?
Usually, every table has a primary key, i.e. a unique ID for every single row. Since the result of your SELECT query is only 1 row, you can always issue a subsequent update query like UPDATE table SET views = views + 1 WHERE id = <returned_record_id>. Here we assume that the column id is a primary key column. This pair of queries needs to be issued by the application code. If you want to achieve SELECT + UPDATE functionality as a single SQL statement, consider using stored procedures.
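A sketch of the pair (the literal 42 stands in for whatever id the first query actually returned):
SELECT * FROM movies ORDER BY rand() LIMIT 1;
-- application code reads the id from the returned row, then:
UPDATE movies SET views = views + 1 WHERE id = 42;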
While the aforementioned approach is technically possible, it may have performance problems. First off, ORDER BY rand() often performs poorly. Also, running an update on every select could have bad performance implications.
No, what you want is not possible as written: SELECT and UPDATE commands cannot be combined in a single statement.
You can do it separately.
You need to create a procedure for this in your database, like:
CREATE PROCEDURE `procedure_name`()
BEGIN
    -- pick one random row and remember its id
    SET @random_id := (SELECT id FROM movies ORDER BY rand() LIMIT 1);
    -- count the view, then return the full row
    UPDATE movies SET views = views + 1 WHERE id = @random_id;
    SELECT * FROM movies WHERE id = @random_id;
END
and then call it
call procedure_name();
This is just one way; there are many ways to write such a procedure.
Thanks
Unfortunately, what you want to do is not possible, at least not without a lot of work. SQL in general, and MySQL in particular, offers a capability called triggers.
Triggers allow you to take actions when something happens in the database. For instance, if you want to check that values are correct, you can write an insert/update trigger to check the values and reject improper ones. Or, if you want to stash deleted records into an audit table, a trigger is the way to go.
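For example, a delete-audit trigger might look like this (movies_audit_log is a hypothetical table with matching columns):
CREATE TRIGGER movies_after_delete AFTER DELETE ON movies
FOR EACH ROW
INSERT INTO movies_audit_log (movie_id, title, deleted_at)
VALUES (OLD.id, OLD.title, NOW());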
What you are describing could be implemented using a trigger on a "select". Such a beast does not exist.
What are your options? Well, the simplest is to do this in your application. When a movie is selected, then you can update views. Of course, that only increments the views where you have the code.
You can move this code into a stored procedure. This simplifies the application code. It just has to "know" to use the stored procedure. But, there is no enforcement mechanism.
You can make this more enforceable by using permissions. Basically, don't allow access to the underlying table except through the stored procedure. This is closest to what you want.
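A sketch of that setup (app_user, mydb, and the procedure name are placeholders):
REVOKE SELECT, UPDATE ON mydb.movies FROM 'app_user'@'%';
GRANT EXECUTE ON PROCEDURE mydb.procedure_name TO 'app_user'@'%';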
If I have a table that has these rows:
animal (primary)
-------
man
dog
cow
and I want to delete all the rows and insert my new rows (that may contain some of the same data), such as:
animal (primary)
-------
dog
chicken
wolf
I could simply do something like:
delete from animal;
and then insert the new rows.
But when I do that, for a split second, 'dog' won't be accessible through the SELECT statement.
I could simply INSERT IGNORE the new data and then delete the rest, one by one, but that doesn't feel like the right solution when I have a lot of rows.
Is there a way to insert the new data and then have MySQL automatically delete the rest afterward?
I have a program that selects data from this table every 5 minutes (and the code I'm writing now will be updating this table once every 30 minutes), so I would like to be as accurate as possible at all times, and I would rather have too many rows for a split second than too few rows for the same time.
Note: I know that this may seem like it is unnecessary but I just feel like if I leave too many of those unlikely possibilities in different places, there will be times where things go wrong.
You may want to use TRUNCATE instead of DELETE here. TRUNCATE is faster than DELETE and resets the table back to its empty state (meaning AUTO_INCREMENT counters are reset to their original values as well).
Not sure why you're having problems with selecting a value that was deleted and re-added, maybe I'm missing some context. But if you're wiping the table clean, you might want to use truncate instead.
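A minimal sketch, assuming the table and its column are both named animal:
TRUNCATE TABLE animal;
INSERT INTO animal (animal) VALUES ('dog'), ('chicken'), ('wolf');
Note this still leaves a brief window where the table is empty, which is the gap you're worried about.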
You could add another timestamp column and change the select statement to accommodate this scenario, where it needs to check for the latest value.
If this is for school, I would argue that you need a timestamp and that is what your professor is looking for. You shouldn't need to truncate a table to get the latest values, you need to adjust the thinking behind the table and how you are querying data. Hope this helps!
Check out these:
How to make a mysql table with date and time columns?
Why not update values instead?
My other questions would be:
How are you loading this into the table?
What does that code look like?
Can you change the way you Select from the table?
What values are being "updated" and change in such a way that you need to truncate the entire table?
If you don't want to add a new column, there is another method:
1. First, update the table in a way that marks all existing rows for deletion in the future. For example:
UPDATE `table_name` SET `animal`=CONCAT('MUST_BE_DELETED_', `animal`)
2. Second, insert the new rows.
3. Finally, remove all marked rows:
DELETE FROM `table_name` WHERE `animal` LIKE 'MUST_BE_DELETED_%'
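Put together, a sketch of the whole sequence (the inserted values are examples; with InnoDB's default isolation, readers won't see the marked intermediate state before COMMIT):
START TRANSACTION;
UPDATE `table_name` SET `animal` = CONCAT('MUST_BE_DELETED_', `animal`);
INSERT INTO `table_name` (`animal`) VALUES ('dog'), ('chicken'), ('wolf');
DELETE FROM `table_name` WHERE `animal` LIKE 'MUST_BE_DELETED_%';
COMMIT;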
You could implement this by having the updated_on column as timestamp and you may even utilize some default values, but let's go with an example without them.
I presume the table would look something like this:
CREATE TABLE `new_table` (
`animal` varchar(255) NOT NULL,
`updated_on` timestamp,
PRIMARY KEY (`animal`)
) ENGINE=InnoDB
This is just a dummy table example. What's important are the two queries later on.
You would simply perform a query to insert the data, such as:
insert into my_table(animal)
select animal from my_view where animal = 'dogs'
on duplicate key update
updated_on = current_timestamp;
Please notice that my_view is your table/view/query by which you supply the values to insert into your table. Also notice that you need a primary/unique key constraint on your animal column in this example for it to work.
Then, you proceed with the following query, to "purge" (delete) the old values:
delete from my_table
where updated_on < (
select *
from (
select max(updated_on) from my_table
) as max_date
);
Please notice that you could make a separate view to obtain this max_date value for the updated_on entry. This entry indicates the timestamp of your last updated/inserted values in the previous query, so you can use it in a where clause to delete the old records that you don't want/need anymore.
IMPORTANT NOTE:
Since you are doing multiple queries and they're supposed to be a single operation, I'd advise you to run them within a single transaction and to perform a proper rollback on the various potential failure outcomes (i.e. in case of MySQL exceptions). You might wish to use a stored procedure for that.
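A minimal sketch of such a procedure (the name refresh_my_table is a placeholder; the two queries are the ones from above):
DELIMITER //
CREATE PROCEDURE refresh_my_table()
BEGIN
    -- roll back and re-raise the error if anything fails
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        ROLLBACK;
        RESIGNAL;
    END;
    START TRANSACTION;
    -- 1) the insert ... on duplicate key update query from above
    -- 2) the delete (purge) query from above
    COMMIT;
END //
DELIMITER ;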
I have a table with a column containing unix time. I wish to create a new column that contains the day of the week for this time. For example, 1436160600 would be a Monday in this column.
I have created a new column, entitled "day_of_week"
alter table master add column test varchar(20);
I now wish to update this new column with the appropriate values.
I found the MySQL UNIX_TIMESTAMP() function (http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_unix-timestamp)
I then attempted the following
update master set day_of_week = _sent_time_stamp(from_unixtime(unix_timestamp, %W));
where _sent_time_stamp is the column containing the Unix time values
But this results in an Error 1064.
Can anyone advise?
Solution: convert the epoch to a datetime.
alter table master add column test_date datetime ;
update master set test_date = from_unixtime(_sent_time_stamp) ;
Then convert the datetime to the day of the week using the DAYNAME() function:
alter table master add column test_day varchar(20) ;
update master set test_day = dayname(test_date) ;
I know this post is old, but the accepted answer is sadly wasteful, and I hope that future people seeking this answer may be more enlightened.
No need to add a new column to the table just for some temporary value. To achieve what you requested, you can simply do this:
UPDATE master
SET test_day = dayname(from_unixtime(_sent_time_stamp)) ;
However, even the goal is wasteful in that we're simply storing two representations of the same data. What you can do instead is create a view:
CREATE VIEW master_vw AS
(SELECT mstr.*, DAYNAME(FROM_UNIXTIME(mstr._sent_time_stamp)) AS test_day
FROM master mstr) ;
Now, you can SELECT from this view anytime you like, and the value of test_day will always be in sync with the value of _sent_time_stamp. And no new column to maintain and whatnot.
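For example (picking an arbitrary day):
SELECT * FROM master_vw WHERE test_day = 'Saturday';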
There is a use case for actually storing the test_day column: execution of the view will take a minuscule amount of additional processing versus selecting from a table, and you cannot index the virtual test_day column as you could in a table. So if you have millions of rows and you need to quickly get one that's (say) 'Saturday', then perhaps the table approach is more suitable.
But in cases where the column is just a convenience, or where it is simply a different representation of data that already exists, you'll do well to consider the View approach.
Is there any way to essentially keep track of how many times a row has been pulled from a SQL table?
For example, in my table I have a column count. Every time a SQL statement pulls a particular row (let's call it rowA), rowA's 'count' value should increase by 1.
Either in the settings of the table or in the statement would be fine, but I can't find anything like this.
I know that I could split it into two statements to achieve the same thing, but I would prefer to only send one.
The best way to do this is to restrict read-access of the table to a stored procedure.
This stored procedure would take various inputs (filter options) to determine which rows are returned.
Before the rows are returned, their counter field is incremented.
Note that the update and the select command share the same where clause.
DELIMITER //
CREATE PROCEDURE Select_From_Table1(IN pMyParameter VARCHAR(20)) -- sample filter parameter
BEGIN
    -- First, update the counter, only on the rows that match our filter
    UPDATE MyTable SET Counter = Counter + 1
    WHERE MyFilterField LIKE CONCAT('%', pMyParameter, '%'); -- sample filter enforcement

    -- Now, return those rows
    SELECT *
    FROM MyTable
    WHERE MyFilterField LIKE CONCAT('%', pMyParameter, '%'); -- sample filter enforcement
END //
DELIMITER ;
A decent alternative would be to handle it on the application side in your data-access layer.
I don't think this is possible as I couldn't find anything but I thought I would check on here in case I am not searching for the correct thing.
I have a settings table in my database which has two columns. The first column is the setting name and the second column is the value.
I need to update all of these at the same time. I wanted to see if there was a way to update these values at the same time, in one query, like the following:
UPDATE table SET col1='setting name' WHERE col2='1 value' AND SET col1='another name' WHERE col2='another value';
I know the above isn't a correct SQL format but this is the sort of thing that I would like to do so was wondering if there was another way that this can be done instead of having to perform separate SQL queries for each setting I want to update.
Thanks for your help.
You can use INSERT INTO ... ON DUPLICATE KEY UPDATE to update multiple rows with different values.
You do need a unique index (like a primary key) to make the "duplicate key" part work.
Example:
INSERT INTO table (a,b,c) VALUES (1,2,3),(4,5,6)
ON DUPLICATE KEY UPDATE b = VALUES(b), c = VALUES(c);
-- VALUES(x) points back to the value you gave for field x
-- so for b it is 2 and 5, for c it is 3 and 6 for rows 1 and 4 respectively (if you assume that a is your unique key field)
If you have a specific case I can give you the exact query.
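For instance, assuming your settings table is settings(name, value) with name as the primary key (the names and values here are placeholders):
INSERT INTO settings (name, value) VALUES
    ('setting1', 'value1'),
    ('setting2', 'value2')
ON DUPLICATE KEY UPDATE value = VALUES(value);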
UPDATE table
SET col2 =
    CASE col1
        WHEN 'setting1'
        THEN 'value'
        ELSE col2
    END
-- further columns are assigned with a comma, not a second SET keyword:
  , col1 = ...
I decided to use multiple queries all in one go, so the code would go like:
UPDATE table SET col2='value1' WHERE col1='setting1';
UPDATE table SET col2='value2' WHERE col1='setting2';
etc.
I've just done a test where I insert 1500 records into the database. Doing it without starting a DB transaction took 35 seconds. I blanked the database and did it again, but this time starting a transaction first and finishing it once the 1500th record was inserted, and it took 1 second. So it definitely seems like doing it in a DB transaction is the way to go.
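For reference, a sketch of the transactional version (statement list abbreviated):
START TRANSACTION;
UPDATE table SET col2='value1' WHERE col1='setting1';
UPDATE table SET col2='value2' WHERE col1='setting2';
-- ... remaining updates ...
COMMIT;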
You need to run separate SQL queries and make use of Transactions if you want to run as atomic.
UPDATE table SET col1=if(col2='1 value','setting name','another name') WHERE col2='1 value' OR col2='another value'
@Frits Van Campen,
The INSERT INTO ... ON DUPLICATE KEY UPDATE trick works for me.
I have been doing this for years when I want to update more than a thousand records from an Excel import.
The only problem with this trick is that when there is no record to update, the method inserts a record instead of ignoring it, and in some instances that is a problem. Then I need to add another field, and after the import I have to delete all the records that were inserted instead of updated.