I am running Mercurial on my SourceForge project and updating the repo using TortoiseHg on Windows. Whenever I update files, their commit times are always off by a few hours. For example, I just updated a file about 5 minutes ago, and it says it was updated 6 hours ago; the file I updated about 6 hours ago says it was updated about 30 minutes ago.
What could be causing this?
Probably a time zone difference between you and the SourceForge servers, where one or both sides is reporting local time?
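Mercurial stores each commit's timestamp as a UTC instant plus a recorded offset, so the confusion usually comes from the web UI re-rendering the same instant in a different zone. A minimal sketch (the two offsets are just illustrative):

```python
from datetime import datetime, timezone, timedelta

# A commit made "5 minutes ago", stored as a single UTC instant.
commit_utc = datetime(2014, 3, 20, 18, 5, tzinfo=timezone.utc)

# Rendered in the committer's local zone (say UTC-5)...
local = commit_utc.astimezone(timezone(timedelta(hours=-5)))
# ...and in the server's zone (say UTC+1): same instant, labels 6 hours apart.
server = commit_utc.astimezone(timezone(timedelta(hours=1)))

print(local.isoformat())   # 2014-03-20T13:05:00-05:00
print(server.isoformat())  # 2014-03-20T19:05:00+01:00
print(server - local)      # 0:00:00 -- the underlying instant is identical
```

So a file can look "6 hours old" on one page and "5 minutes old" on another without either timestamp being wrong; only the zone used for display differs.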
I'll try to keep this as succinct as possible.
I was asked by my boss to make a duplicate of one of our apps with an empty DB but the same schema/structure for a venture he is doing with another company.
So like any good developer, I backed everything up on the server and made a snapshot of the DB in the RDS dashboard. This was at approx 10 pm.
Then I spun up a new ec2 and MySQL instance for the new venture and got everything networked and running there.
That's when everything went wrong.
Somehow my production DB for our app appeared to be a mirror of the new one I had made for the other company, and it had no data. (I'm still looking into how this happened, but I suspect MySQL Workbench is to blame.)
At this time I am feeling fine and dandy and just go grab the snapshot I made a few minutes earlier and restore it.
Lo and behold, the data is old and outdated. Now I'm freaking out, but I knew I had a manual backup from earlier in the day, done before running some pretty large insert scripts (10 am). I ran an import from that, and the data seemed to be correct as of the time I backed it up, 10 am that day.
So not all was lost: only a day's worth of updates and inserts, but that is a lot of money for the company.
To get to the main question: how is it even remotely possible that a snapshot taken at 10 pm has older data than a manual backup done 12 hours earlier?
I have 61 websites on a live Magento instance. 10 or so are no longer used but we get about a thousand orders per week and have been running on Magento for over 2 years.
I went to delete one of the websites in Manage Stores. There is a little drop-down for creating a DB backup that I left set to Yes, thinking at first glance that it just meant backing up the reference in the DB.
It has been about an hour now, and our sites are virtually down: no one can place orders, and the admin is inaccessible.
Is there a safe way to stop this process?
Is there a way to at least trace progress, to see if there is some sort of SQL lock? I do not want to wait for hours only to find out that it will never finish. I've read around here that the built-in Magento backup tools are not good at all.
I release Chrome extension updates on a regular basis. We have observed a lot of machines running old versions of the extension even days after an update.
I tried logging the version number on update for our users. The following are the stats, starting from the day the update was released:
Days since update | % of users updated on that day
------------------|-------------------------------
0                 | 34
1                 | 28
2                 | 12
3                 | 7
4                 | 3
5                 | 3
6                 | 1.7
7                 | 1.2
The extension is published using the Google developer dashboard. I requested no additional permissions since the last version.
I have the following queries.
Is this normal?
Google says that apps/extensions get auto-updated within an hour of releasing an update, once the browser is closed/restarted.
Does it mean 40% of my users don't even restart Chrome in 2 days?
Is there a way to force an update onto all machines on the same day?
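Reading the table as a cumulative curve makes the first question easier to check. A quick sketch with the posted figures (assuming the daily percentages are all of the same user base):

```python
# % of users updating on each day since release (from the table above)
daily = [34, 28, 12, 7, 3, 3, 1.7, 1.2]

cumulative = []
total = 0.0
for pct in daily:
    total += pct
    cumulative.append(round(total, 1))

print(cumulative)
# [34.0, 62.0, 74.0, 81.0, 84.0, 87.0, 88.7, 89.9]

# By the end of day 1, 100 - 62 = 38% had still not updated, which matches
# the "roughly 40% don't restart Chrome in 2 days" reading; about 10% of
# users are still on the old version a full week after release.
```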
With no warning e-mail, it seems that europe-west1 zone B has gone down for maintenance for 16 days, until 1 April 2014. Given that GCE is a cloud-based service and that I have the automatic 'migrate on maintenance' setting enabled, I assumed I had nothing to worry about. However, after the VM was terminated last night and I reread 'Designing Robust Systems', it seems I was badly mistaken/misled! It will take 3 days of work to rebuild a new server, and I have 20 students with data locked up for two weeks in the middle of the semester. Does anybody have any suggestions?
I made the exact same mistake. I have been in contact with Google, and there is NOTHING we can do but wait for the maintenance window to end.
Also, my instance has now been removed, so I will have to re-create the instance and attach it to my persistent disk.
MS SQL Server 2008 Standard, ShadowProtect Server Edition 4.0.0.5885 --
On Friday, our client discovered that records were missing from the database. I discovered that the Thursday night SQL backup contained all the missing records. User error is ruled out for multiple reasons.
All missing records fall within an 8-day range
The date range began 22 days before Friday and ended 14 days before Friday
All adds and all changes made during the 8-day range are missing from 14 separate tables
All missing records are present in the Thursday 11pm backup
The application logs show no unusual incidents as far as I can see.
I find nothing unusual in the Applications list in the MS SQL Server Event Viewer.
We are running ShadowProtect Server to make hourly image backups of the two server drives, which include the database. The same sort of incident occurred 4 months ago.
One theory is that the ShadowProtect Server 4 disk-image software, which runs hourly differential backups, somehow caused the data loss during its 9:00 am Friday backup. I am not aware of any activity, other than normal user access, between the normal 11 pm Thursday database backup and the discovery of the missing records on Friday.
Thank you for your help. As you can imagine, the client is very concerned.
If you want to know what is deleting the records, or when they were deleted, the database should have audit tables set up that include the usernames and dates of changes. Then you can look at the audit logs to see when records were deleted and by whom or by what process. All databases that contain business-critical information should have auditing. Unfortunately, setting up auditing after the event has happened is too late to find out who did it this time. You might be able to find a third-party product to look through the transaction logs; at a minimum, it might tell you what time the deletes happened, even if not who ran them. You also should be doing transaction log backups every 15 minutes or so.
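The audit-table idea above can be sketched in miniature with a delete trigger. SQLite is used here only so the demo is self-contained; a real SQL Server setup would also record the user name (e.g. via SUSER_SNAME() in the trigger), which SQLite has no notion of:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL);

-- Audit table: one row per deleted record, with its values and a timestamp.
CREATE TABLE orders_audit (
    order_id   INTEGER,
    amount     REAL,
    deleted_at TEXT DEFAULT (datetime('now'))
);

CREATE TRIGGER orders_on_delete AFTER DELETE ON orders
BEGIN
    INSERT INTO orders_audit (order_id, amount) VALUES (OLD.id, OLD.amount);
END;
""")

conn.execute("INSERT INTO orders (id, amount) VALUES (1, 99.50), (2, 10.00)")
conn.execute("DELETE FROM orders WHERE id = 1")

# The delete is recorded even though the row is gone from orders.
deleted = conn.execute("SELECT order_id, amount FROM orders_audit").fetchall()
print(deleted)  # [(1, 99.5)]
```

With something like this in place before an incident, "what was deleted and when" is answerable directly from the audit table instead of from forensic work on transaction logs.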
I'm not familiar with ShadowProtect Server, but the missing data sounds exactly like a script was run (with cascade delete turned on), and it seems unlikely to be ShadowProtect. If it were interfering, I would expect a more random kind of damage than one that can easily be done by a SQL query. Do you allow direct access to your tables? You could have someone trying to harm the data or hide fraudulent activity. Threats to the data are not always from outside sources or from applications that would show up in the event log. Who has access to delete data in the production database? I would suspect a disgruntled employee.
We never did find out the cause of the lost records. We reinstalled the database in another MS SQL Server instance, upgraded the database to a new release, and migrated the data from the old database to the new one. That seems to have fixed the issue.
I had a similar issue on a VM. The error was pointing to the database, but SQL Server wasn't actually running; for some unknown reason it had stopped. It seems the VM was restarted and the service didn't start automatically.
After starting the service with Windows Administrative Tools, the server was back online and the database was there.