When I run this line of code:
Movie.increment_counter :views, @movie.id
the views column is incremented twice (+2, not +1). In the terminal I see this query run against the database:
UPDATE `movies` SET `views` = COALESCE(`views`, 0) + 1 WHERE `movies`.`id` = 8
If I run this query directly against MySQL, the views value is incremented correctly, just once (+1).
Any tips on what I'm missing or haven't set up?
Are you tracking page views by any chance? I ran into this as well: for every page load I would see the page view counter increment by three rather than one, but only in production.
It turned out to be Google's AdSense code loading the page remotely. I noticed that they hit the page twice for every time one of my users hit the page, effectively resulting in 3 page views. I suspect they do this to verify that the content on the page meets their guidelines and to help match ads to the page content. Check your httpd logs for Mediapartners-Google. I bet that's what's going on.
General advice: use Google Analytics (or a similar service) for tracking page views. In my case I still needed to track this in a DB because I implement autocomplete based on the "popularity" of certain page actions, but you might not need this.
Related
I have a field named hits in my table which records user interaction with objects on my website. For example: if a user views an object's preview, hits is increased by 1; if a user enters the object's page, it is increased by 3; and so on.
Everything works like a charm on my local development server. But on the production server (> 50 users online), the hits field sometimes increases by the right value and then, within several seconds, decreases by some small random value (1 or 2). The bug doesn't always occur. I suspect it may be related to the MyISAM engine I'm currently using for this table.
Below is the code implementing the update query (CodeIgniter):
// Build "hits = hits + N" as a raw SQL expression; FALSE disables escaping
$this->db->set('hits', 'hits+' . (int) $count, FALSE);
$this->db->where('id', $id);
$this->db->update('gallery');
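For reference, the SQL this generates looks roughly like the following (the values 3 and 42 are just for illustration):
UPDATE `gallery` SET hits = hits+3 WHERE `id` = 42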
So I have 2 questions:
How to fix this bug?
How can I perform multiple queries to my table to duplicate this situation on my local development server?
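For the second question, one way to simulate concurrent updates locally is MySQL's bundled mysqlslap tool; a sketch, assuming your database is named yourdb:
mysqlslap --no-drop --create-schema=yourdb --concurrency=50 --iterations=10 --query="UPDATE gallery SET hits = hits+1 WHERE id = 1"
This replays the same UPDATE from 50 concurrent client connections, which is usually enough to surface race conditions; --no-drop keeps mysqlslap from dropping the schema when it finishes.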
Thanks to all for the responses; I got useful information from your answers either way. The problem was in a cron task: I had a function running every minute during every third month, and it was cutting these hits. I fixed that cron task and everything works now.
Check your cron tasks twice. Thank you all for your help.
OK, so what is the best practice when it comes to paginating in MySQL? Let me make it clearer: say that at a given time I have 2000 records, with more being inserted, and I am displaying 25 at a time. I know I have to use LIMIT to paginate through the records, but what am I supposed to do for the total count of my records? Do I count the records every time a user clicks to request the next 25? Please don't tell me the answer straight up, but rather point me in the right direction. Thanks!
The simplest solution would be to just continue working with the result set normally as new records are inserted. Presumably, each page you display will use a query looking something like the following:
SELECT *
FROM yourTable
ORDER BY someCol
LIMIT 25
OFFSET 100
As the user pages back and forth, if new data were to come in it is possible that a page could change from what it was previously. From a logical point of view, this isn't so bad. For example, if you had an alphabetical list of products and a new product appeared, then the user would receive this information in a fairly nice way.
As for counting, your code can allow moving to the next page so long as data is there to support a new page being added. Having new records added might mean more pages required to cover the entire table, but it should not affect your logic used to determine when to stop allowing pages.
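Concretely, the total count only needs one query when the pager is rendered; a sketch, assuming 25 rows per page:
SELECT CEIL(COUNT(*) / 25) AS total_pages
FROM yourTable
For a table on the order of 2000 rows this is cheap enough to run on every request, so re-counting per click is a perfectly reasonable starting point.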
If your table has a date or timestamp column representing when a record was added, then you might actually be able to restrict the entire result set to a snapshot in time. In this case, you could prevent new data from entering over a given session.
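A sketch of that snapshot idea, assuming the table has a created_at column and you stored the session's start time when the user first opened page 1:
SELECT *
FROM yourTable
WHERE created_at <= '2015-01-01 12:00:00'
ORDER BY someCol
LIMIT 25
OFFSET 100
The literal timestamp stands in for the stored session value, so every page within the session sees the same stable result set.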
3 suggestions
1. Refresh only the data grid when the next button is clicked (via AJAX), or store the count in the session for the chosen search parameters.
2. Use memcache, which is more advanced and can be shared across all users. Generate a unique key based on the filter parameters and keep the count under it, so you won't hit the database. When a new record gets added, clear the existing memcache key. This requires memcache to be running.
3. Create an index; then even if you hit the database for the count alone, there won't be much impact on performance (see the sketch below).
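A minimal sketch of the third suggestion, assuming the count is filtered on a hypothetical status column:
CREATE INDEX idx_status ON yourTable (status)

SELECT COUNT(*)
FROM yourTable
WHERE status = 'active'
With the index in place, MySQL can satisfy the count from the index alone instead of scanning the whole table.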
I am looking for a way to create a trigger after any changes occur in a table on any row or field.
I want my web app to automatically refresh if there have been any changes to the data since it was last loaded. For this I need a "modified_on" attribute for the table which applies to the whole table, not just a row.
Not sure what database triggers have to do with this problem, as they are not going to be able to trigger any behavior at the web-application level. You will need to build logic in your web application to inspect the data looking for a change. Most likely, this would take the form of some client-triggered refresh process (i.e. AJAX), which would need to call an application script that takes information from the client on when it last checked for an update and compares it against the most recently updated row(s) in the table. As long as you have a timestamp/datetime field on the table that is updated each time a row changes, you can retrieve all updated rows via a simple query such as
SELECT {fields} FROM {table}
WHERE {timestamp field} > '{last time checked}'
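MySQL can maintain such a timestamp field automatically; a minimal sketch, assuming you are free to alter the table (the column name updated_at is just an example):
ALTER TABLE {table}
ADD COLUMN updated_at TIMESTAMP
DEFAULT CURRENT_TIMESTAMP
ON UPDATE CURRENT_TIMESTAMP
With this in place, every INSERT and UPDATE refreshes updated_at without any application code.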
If you want, you could use the SELECT above to update only those rows in the application view which need updating, rather than re-rendering the whole table (this would minimize response bandwidth/download time, rendering time, etc.). If you simply want to check whether the table has been updated since some point in time, but don't care about individual rows, you can just check that the above query returns 1 or more rows.
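Alternatively, for the single table-level "modified_on" value the question asks about, one aggregate query is enough (same assumed timestamp column):
SELECT MAX({timestamp field}) AS modified_on
FROM {table}
The client compares the returned value to the one it saw on its previous check and refreshes only when it has advanced.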
If you don't want the client application view to have to check at regular intervals (as would likely be done with AJAX), you might also consider websockets or similar to enable bi-directional client-server communication, but this still wouldn't change the fact that your server-side application would need to query the database to look for changed records.
I have a XenForo forum with the Waindigo Custom Fields addon. I created a custom field group with 3 fields: Address, Latitude and Longitude.
A thread is created with initial values as in the example below:
- Address A
- Lat A
- Long A
Later on I want to change these values. Because XenForo and the Custom Fields addon don't provide a function to change them, I decided to make the change directly in the database. The MySQL table is xf_thread_field_value. I changed the values of the 3 rows corresponding to that thread.
But after refreshing the browser (CTRL+F5), the values are still the same. I tried running the cache cron in the admin panel, but no luck.
You should try rebuilding the post cache. XenForo keeps posts from the last few days cached for performance (the exact number of days is configurable in the Options).
To rebuild the caches, go to Tools -> Rebuild Caches.
I have a few servers, and each server runs a bot program. The bots are all connected to the same MySQL database. What they do is: connect to the DB, query it and grab a .csv file containing username;password rows, log in to the accounts on a specific website, do some automated work, and mark the finished accounts as Done in the database.
I'm having a hard time deciding the best way to ensure that all bots can pull data from the same DB pool, without conflicts and without leaving any account behind.
My ideas were:
Pre-define which row ranges each bot will work on.
For example:
Bot1 = row 0 to row 999
Bot2 = row 1000 to row 1999
Bot3 = row 2000 to row 2999 ...
This can be a problem because if I need to scale, I will have to keep pre-defining ranges for every new bot.
Make a column called bot => make each bot select 500 rows and write a predefined value into the bot column on all 500 rows => only work on the rows WHERE bot = 'pre-defined value'.
This would work, but there may still be some collisions. Besides that, my bots need to work on CSV files, so they can't actually work on the fly against the database.
My concern is scalability. I want to be able to add as many servers as I want and have them all work nicely with each other.
Suggestions?
I was reading about the MySQL lock() function, but I don't think it would work in this case due to the way my bots get the accounts (.csv files).
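One way to remove the collision risk from the second idea is to make the claim itself a single atomic UPDATE, so two bots can never tag the same rows; a sketch, assuming a hypothetical accounts table with bot and status columns:
-- Each bot claims up to 500 unassigned rows in one atomic statement
UPDATE accounts
SET bot = 'bot1'
WHERE bot IS NULL AND status = 0
LIMIT 500

-- It then exports only its own rows to the CSV it works from
SELECT username, password
FROM accounts
WHERE bot = 'bot1' AND status = 0
Because each row is tagged by exactly one UPDATE, adding more servers just means handing each one a unique bot value; no pre-defined ranges are needed.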
Thanks for the answers.
What I did was:
I made a PHP API that gets a random row with status = 0 and prints its information in XML format. My bot issues a GET request to this PHP script to fetch a random row and stores the response in a variable. Later, I use regex to scrape each column out of that variable into its own variable. After my bot finishes running on that account, it sets status = 1 on the row it just grabbed, using my PHP API, so other bots won't touch that row again. This way I can have several servers running with no collision problems and no problems with CSVs; the only downside is more load on my MySQL server, but that won't be too hard to solve.
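For reference, the queries behind such an API could be as simple as the following sketch (same hypothetical accounts table; the PHP layer just wraps them):
-- Hand out one random unclaimed account
SELECT id, username, password
FROM accounts
WHERE status = 0
ORDER BY RAND()
LIMIT 1

-- Mark it done once the bot finishes (id taken from the previous query)
UPDATE accounts
SET status = 1
WHERE id = 42
Marking the row as claimed at hand-out time (rather than only when the bot finishes) would further shrink the window in which two bots could receive the same row.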