I want to update a column every 20 minutes, but it doesn't work the way I want. I use this SQL:
UPDATE visitors SET
is_online = '0'
WHERE is_online = '1'
AND DATE_ADD(date_lastactive, INTERVAL 20 MINUTE) < NOW()
The database looks like this:
CREATE TABLE IF NOT EXISTS `visitors` (
`id` int(10) NOT NULL AUTO_INCREMENT,
`ipaddress` text NOT NULL,
`page` text NOT NULL,
`page_get` text NOT NULL,
`date_visited` datetime NOT NULL,
`date_lastactive` datetime NOT NULL,
`date_revisited` datetime NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `id` (`id`)
)
I have tried changing the < to >, but then it updates on every page refresh.
How can I fix my problem?
Thanks in advance.
If you need to run this query every 20 minutes, independently of your site visitors and page loads, you have to use a system scheduler: cron on Unix, or Task Scheduler on Windows.
Just code a simple shell script.
You cannot write an SQL query that repeats itself every 20 minutes; there is no such combination of plain MySQL statements.
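One built-in exception worth knowing about: MySQL 5.1+ ships an Event Scheduler that can run a statement on a fixed schedule without cron. A sketch against the `visitors` table from the question (note the scheduler is disabled by default and must be switched on):

```sql
-- The Event Scheduler must be enabled first:
-- SET GLOBAL event_scheduler = ON;

CREATE EVENT mark_visitors_offline
ON SCHEDULE EVERY 20 MINUTE
DO
  UPDATE visitors
  SET is_online = '0'
  WHERE is_online = '1'
    AND date_lastactive < NOW() - INTERVAL 20 MINUTE;
```

If you stay with cron instead, the shell script only needs to pipe the same UPDATE statement into the `mysql` client.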
I have a plain flat table with the structure below:
CREATE TABLE `oc_pipeline_logging` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`source` TEXT,
`comments` TEXT,
`data` LONGTEXT,
`query` TEXT,
`date_added` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`ip` VARCHAR(255) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=MYISAM AUTO_INCREMENT=20 DEFAULT CHARSET=latin1
In this table I basically log every error from wherever it occurs in the code.
Now, the data column in the table above is defined as LONGTEXT, and I currently have records holding almost 32 MB each in this column.
So when I run a plain SELECT query it takes a lot of time to fetch the results.
e.g.:
SELECT * FROM oc_pipeline_logging limit 10
In fact, when I run the above query in the terminal I get the error below:
mysql> SELECT COMMENTs,DATA FROM oc_pipeline_logging WHERE id = 18;
ERROR 2020 (HY000): Got packet bigger than 'max_allowed_packet' bytes
But the same query runs fine in SQLyog, though it takes a lot of time.
How can I execute this query faster and fetch my rows quickly?
I tried the same at my end and got the same type of error.
The solution is to increase the limit in my.ini:
max_allowed_packet=2048M
You can change the limit accordingly; I hope this will resolve the problem.
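Two caveats worth adding: the server silently caps max_allowed_packet at 1 GB, so 2048M will not take full effect, and the mysql command-line client has its own copy of the setting that may also need raising. A sketch of checking and raising it at runtime, plus a way to sidestep the huge packet entirely when you only need the size of the LONGTEXT column (column names taken from the question):

```sql
-- Check the current limit (in bytes):
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Raise it without a restart; the server caps this at 1 GB.
-- New connections pick up the new value:
SET GLOBAL max_allowed_packet = 1073741824;

-- Alternatively, avoid transferring the full LONGTEXT when
-- only its size matters:
SELECT comments, LENGTH(data) AS data_bytes
FROM oc_pipeline_logging
WHERE id = 18;
```

For the command-line client, the equivalent option is `mysql --max_allowed_packet=1G`.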
For some reason my slow query log is reporting the following query as "not using indexes" and for the life of me I cannot understand why.
Here is the query:
update scheduletask
set active = 0
where nextrun < date_sub( now(), interval 2 minute )
and enabled = 1
and active = 1;
Here is the table:
CREATE TABLE `scheduletask` (
`scheduletaskid` int(11) NOT NULL AUTO_INCREMENT,
`schedulethreadid` int(11) NOT NULL,
`taskname` varchar(50) NOT NULL,
`taskpath` varchar(100) NOT NULL,
`tasknote` text,
`recur` int(11) NOT NULL,
`taskinterval` int(11) NOT NULL,
`lastrunstart` datetime NOT NULL,
`lastruncomplete` datetime NOT NULL,
`nextrun` datetime NOT NULL,
`active` int(11) NOT NULL,
`enabled` int(11) NOT NULL,
`creatorid` int(11) NOT NULL,
`editorid` int(11) NOT NULL,
`created` datetime NOT NULL,
`edited` datetime NOT NULL,
PRIMARY KEY (`scheduletaskid`),
UNIQUE KEY `Name` (`taskname`),
KEY `IDX_NEXTRUN` (`nextrun`)
) ENGINE=InnoDB AUTO_INCREMENT=34 DEFAULT CHARSET=latin1;
Add another index like this
KEY `IDX_COMB` (`nextrun`, `enabled`, `active`)
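On the existing table, that index can be added with ALTER TABLE; a sketch (the index name is arbitrary):

```sql
ALTER TABLE scheduletask
    ADD INDEX IDX_COMB (nextrun, enabled, active);
```

One design note: because nextrun is compared with a range (<), MySQL can only seek on the index up to that column, so the trailing enabled and active parts are not used for the lookup. Ordering the equality columns first, as in (enabled, active, nextrun), lets all three columns be used.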
I'm not sure how many rows your table has, but the following might apply as well:
Sometimes MySQL does not use an index, even if one is available. One
circumstance under which this occurs is when the optimizer estimates
that using the index would require MySQL to access a very large
percentage of the rows in the table. (In this case, a table scan is
likely to be much faster because it requires fewer seeks.)
Try using the EXPLAIN command in MySQL:
http://dev.mysql.com/doc/refman/5.5/en/explain.html
I think EXPLAIN only works on SELECT statements; try:
explain select * from scheduletask where nextrun < date_sub( now(), interval 2 minute ) and enabled = 1 and active = 1;
Maybe if you use nextrun = ..., it will match the key IDX_NEXTRUN. Your WHERE clause has to use one of your indexed columns: scheduletaskid, taskname, or nextrun.
Sorry for the short answer, but I don't have time to write a complete solution.
I believe you can fix your issue by saving date_sub( now(), interval 2 minute ) in a user variable before using it in the query; see here maybe: MySql How to set a local variable in an update statement (Syntax?).
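A sketch of that approach with a MySQL user variable, keeping the query from the question otherwise unchanged (the variable is evaluated once, before the UPDATE runs):

```sql
SET @cutoff := DATE_SUB(NOW(), INTERVAL 2 MINUTE);

UPDATE scheduletask
SET active = 0
WHERE nextrun < @cutoff
  AND enabled = 1
  AND active = 1;
```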
I am working on a way to record click times for each user on my website.
I currently have 600,000+ records and am trying to think of a way to go about this.
CREATE TABLE IF NOT EXISTS `clicktime` (
`id` int(5) NOT NULL AUTO_INCREMENT,
`page` int(11) DEFAULT NULL,
`user` varchar(20) DEFAULT NULL,
`time` bigint(20) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=686277 ;
I will have to do ten of these searches per page, since my blog shows a snippet of ten posts at once.
SELECT time
FROM clicktime
WHERE `page` = '112'
AND `user` = 'admin'
ORDER BY `id` ASC LIMIT 1
The part that looks like it's costing me is the WHERE `page` = '112'.
How can I make this work faster, it is taking up to 3 seconds to pull each call?
Though there are multiple things that could be better here (time being a BIGINT, for instance), the thing that will help you in the short term is simply to add an index on your user field.
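That index can be added in place; a sketch (the index names are arbitrary, and since the query filters on both page and user, a composite index covering the whole WHERE clause is an option too):

```sql
-- Single-column index, as suggested:
ALTER TABLE clicktime ADD INDEX idx_user (user);

-- Or a composite index matching the full WHERE clause:
ALTER TABLE clicktime ADD INDEX idx_page_user (page, user);
```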
I'm going to try to explain this as best I can; I will provide more information quickly if needed.
I'm storing data for each hour in military time. I only need to store a day's worth of data. My table structure is below:
CREATE TABLE `onlinechart` (
`id` int(255) NOT NULL AUTO_INCREMENT,
`user` varchar(100) DEFAULT NULL,
`daytime` varchar(10) DEFAULT NULL,
`maxcount` smallint(20) DEFAULT NULL,
`lastupdate` varchar(100) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=innodb AUTO_INCREMENT=2 DEFAULT CHARSET=latin1
The "user" column is unique to each user, so I will have a list for each user.
The "daytime" column stores the day and hour together. For today at hour 16 it would be "2116": the day is 21 and the current hour is 16.
The "maxcount" column holds the data for each hour; I'm tracking just one total number per hour.
The "lastupdate" column is just a timestamp I'm using to delete data that is more than 24 hours old.
I have the tracking script running fine in PHP. It keeps 24 rows of data for each user and deletes anything older than 24 hours. My problem is how to write a query that starts from the current day/hour and pulls the past 24 hours of maxcount values, displaying them in order.
Thanks
You will run into an issue handling this near the end of each month, when the day number wraps around. It's advisable to switch to MySQL's native date/time types (described here: http://dev.mysql.com/doc/refman/5.0/en/datetime.html). Then you can grab maxcount by doing something such as:
SELECT * FROM onlinechart WHERE daytime >= ? ORDER BY maxcount
The question mark should be replaced by the current timestamp minus 86400 (the number of seconds in a day).
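Assuming the daytime column is converted to a native DATETIME or TIMESTAMP as suggested, the past 24 hours for one user could be pulled in order like this (the user value is a placeholder):

```sql
SELECT daytime, maxcount
FROM onlinechart
WHERE user = 'some_user'
  AND daytime >= NOW() - INTERVAL 24 HOUR
ORDER BY daytime;
```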
For reference, this is my current table:
`impression` (
`impressionid` bigint(19) unsigned NOT NULL AUTO_INCREMENT,
`creationdate` datetime NOT NULL,
`ip` int(4) unsigned DEFAULT NULL,
`canvas2d` tinyint(1) DEFAULT '0',
`canvas3d` tinyint(1) DEFAULT '0',
`websockets` tinyint(1) DEFAULT '0',
`useragentid` int(10) unsigned NOT NULL,
PRIMARY KEY (`impressionid`),
UNIQUE KEY `impressionsid_UNIQUE` (`impressionid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=447267 ;
It keeps a record of all the impressions on a certain page. After one day of running it has gathered 447,266 views. That is a lot of records.
Now I want the number of visitors per minute. I can easily get them like this:
SELECT COUNT( impressionid ) AS visits, DATE_FORMAT( creationdate, '%m-%d %H%i' ) AS DATE
FROM `impression`
GROUP BY DATE
This query takes a long time, of course. Right now around 56 seconds.
So I'm wondering what to do next. Do I:
Create an index on creationdate (I don't know if that will help, since I'm grouping on a function of that column)
Create new fields that store hours and minutes separately.
The last one would mean duplicate data, and I hate that. But maybe it's the only way in this case?
Or should I go about it in some different way?
If you run this query often, you could denormalize the calculated value into a separate column (perhaps maintained by a trigger on insert/update), then group by that.
Your idea of hours and minutes is a good one too, since it lets you group a few different ways other than just minutes. It's still denormalization, but it's more versatile.
Denormalization is fine, as long as it's justified and understood.
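A sketch of the trigger approach, assuming a new creationminute column is added to hold the pre-formatted value (the column, index, and trigger names are illustrative):

```sql
-- Add the denormalized column plus an index on it:
ALTER TABLE impression
    ADD COLUMN creationminute CHAR(10),
    ADD INDEX idx_creationminute (creationminute);

-- Keep it filled automatically on every insert:
CREATE TRIGGER impression_set_minute
BEFORE INSERT ON impression
FOR EACH ROW
    SET NEW.creationminute = DATE_FORMAT(NEW.creationdate, '%m-%d %H%i');
```

The original aggregate then becomes `GROUP BY creationminute`, which can scan the index instead of formatting creationdate for every row. Existing rows would need a one-off `UPDATE impression SET creationminute = DATE_FORMAT(creationdate, '%m-%d %H%i')` to backfill.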