How to suppress warnings before manually deleting cookies
From time to time I want to delete most cookies, leaving account cookies for easy log in.
The problem is that before deleting each cookie, a warning dialogue appears.
This makes the task painfully slow.
Can the warnings be turned off, and if so, how?
I am trying to find a scalable way to have my desktop application run a command when a change is made in the database.
The application is for running a remote command on your PC. The user logs into the website and can choose to run the command. Currently, users have to download a desktop application that checks the database every few seconds to see whether a value has changed. The value can only be changed when they log in to the website and press a button.
For now it seems to be working fine since there aren't many users, but once I hit 100+ users, having them hit the database every few seconds is not good. What might be a better approach?
It's true that polling for changes is too expensive, especially if you have many clients. The queries are often very costly, and it's tempting to run the queries frequently to make sure the client gets notified promptly after a change. It's better to avoid polling the database.
One suggestion in the comments above is to use a UDF called from a trigger. But I don't recommend this, because a trigger runs when you do an INSERT/UPDATE/DELETE, not when you COMMIT the change. So a client could be notified of a change, and then when they check the database the change appears to not be there, because either the transaction was rolled back, or else the transaction simply hasn't been committed yet.
Another reason the trigger solution is not good is that MySQL triggers execute once for each row changed, not once for each INSERT/UPDATE/DELETE statement. So you could cause notification spam, if you do an UPDATE that affects thousands of rows.
A different solution is to use a message queue like RabbitMQ or ActiveMQ or Amazon SQS (there are many others). When a client commits their INSERT/UPDATE/DELETE, they confirm the commit succeeded, then post a message on a message queue topic. Many clients can be notified efficiently this way. But it requires that every client who commits changes to the database write code to post to the message queue.
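As a rough sketch of that "commit first, then notify" flow, here is what it might look like in PHP with RabbitMQ via the php-amqplib library. The queue name, table, and connection details are all assumptions for the example, not something taken from your setup:

```php
<?php
// Sketch: commit the change, then publish a notification for listening clients.
// Assumes RabbitMQ and php-amqplib (composer require php-amqplib/php-amqplib).
require 'vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$userId = 42; // example: the user whose "run command" flag was just set

$pdo = new PDO('mysql:host=localhost;dbname=app', 'dbuser', 'dbpass');
$pdo->beginTransaction();
$pdo->prepare('UPDATE commands SET run_requested = 1 WHERE user_id = ?')
    ->execute([$userId]);
$pdo->commit(); // only notify once the change is definitely committed

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();
$channel->queue_declare('command_requests', false, true, false, false); // durable queue
$channel->basic_publish(
    new AMQPMessage(json_encode(['user_id' => $userId])),
    '',                 // default exchange
    'command_requests'  // routing key = queue name
);
$channel->close();
$connection->close();
```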
Another solution is for clients to subscribe to MySQL's binary log and read it as a change data capture log. Every committed change to the database is logged in the binary log. You can make clients read this, and it has no more impact on the database server than a replication client (MySQL can easily support hundreds of replicas).
A hybrid solution is to consume the binary log, and turn those changes into events in a message queue. This is how a product like Debezium works. It reads the binary log, and posts events to an Apache Kafka message queue. Then other clients can wait for events on the Kafka queue and respond to them.
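To give a feel for the consuming side of that hybrid, here is a minimal sketch using the php-rdkafka extension. The broker address and topic name are assumptions (Debezium typically names topics prefix.database.table):

```php
<?php
// Sketch: a client waiting for change events on a Kafka topic populated by Debezium.
// Assumes the php-rdkafka extension is installed.
$conf = new RdKafka\Conf();
$conf->set('group.id', 'desktop-app-notifier');
$conf->set('metadata.broker.list', 'localhost:9092');

$consumer = new RdKafka\KafkaConsumer($conf);
$consumer->subscribe(['dbserver1.app.commands']); // hypothetical Debezium topic name

while (true) {
    $message = $consumer->consume(10000); // wait up to 10 s for an event
    if ($message->err === RD_KAFKA_RESP_ERR_NO_ERROR) {
        // Parse the Debezium change envelope and react to it (e.g. run the command).
        $event = json_decode($message->payload, true);
        printf("Change event received: %s\n", $message->payload);
    }
}
```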
I'm a web developer, and I often run scripts to fix things that might time out due to server or browser settings. In the past, Chrome would just spin as long as it took until the script was done, even if that took an hour, but they changed things: now Chrome imposes its own cutoff time if the server doesn't respond fast enough, while the server continues to execute the script.
Now, this is annoying; it forces me to log events to a file rather than just dump them to the screen. But the worst part is that Chrome thinks it is a great idea to try reconnecting to the URL after it times out. That then starts executing the same script again, even though the first run is probably still going.
The issue here is that I often create scripts to run ONCE and never again, and if the script is run more than once, it could completely destroy things.
Say I create a script to remove the first 4 characters from each field in a 1-million-row table. Running the script via Chrome would eventually time out, and then Chrome would run the script again, several times, without letting you know. Suddenly, data that was already reduced is being reduced again, destroying it.
This is a serious concern that was never an issue before because Chrome wouldn't automatically try to reload a page that failed to load. So, I'm looking for a way to disable this new feature and stop Chrome from automatically reloading on a failed page load. It displays an error page saying "Click here to reload", but it completely ignores the user and decides to reload whether you click it or not.
I just ran a script to copy files from an EC2 instance to an S3 bucket as part of some cleanup, but I see from the logs that it actually ran 4 times before I closed the tab - even though I never asked it to reload. That meant it copied these same files 4 times. Fortunately, in this case, it just wasted S3 access, since it overwrote the existing files.
Yes, I realize that there are many ways of preventing the script from running more than once, from flock to renaming the file immediately after executing it. The issue is speed. These fix scripts are not intended to be full-blown applications complete with all the bells and whistles; they are meant to be a fast way to apply a fix. I would rather change a setting in Chrome to disable this new behavior so that I can continue to work as I have for over 10 years.
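For example, even the lightest guard I could paste into every one-off script would be something like this flock sketch (the lock-file path is arbitrary), and that's exactly the kind of boilerplate I'm trying to avoid:

```php
<?php
// Minimal guard sketch: refuse to start if another copy already holds the lock,
// and keep running even if Chrome gives up on the connection.
ignore_user_abort(true); // keep executing if the browser drops the request
set_time_limit(0);       // no PHP execution time limit

$lock = fopen('/tmp/one-off-fix.lock', 'c');
if ($lock === false || !flock($lock, LOCK_EX | LOCK_NB)) {
    exit("Another copy of this script is already running.\n");
}

// ... the actual one-off fix goes here ...

flock($lock, LOCK_UN);
fclose($lock);
```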
This is referring to an auto-reload, and I'm not calling it a "refresh" because the page never loaded in the first place. This has nothing to do with the millions of questions about refreshes, which is all I get when I try to search for this problem.
This will probably resolve the issue:
Go to chrome://flags/
Set the flag "Enable Offline Auto-Reload Mode" (or "Offline Auto-Reload Mode") to Disabled
Set the flag "Only Auto-Reload Visible Tabs" to Disabled
Relaunch the browser
Now a page that fails with ERR_CONNECTION_RESET no longer reloads itself automatically for me.
I'm implementing PayPal Payments Standard on the website I'm working on. The question is not specific to PayPal; I just want to present it through my real problem.
PayPal can notify your server about a payment in two ways:
PayPal IPN - after each payment, PayPal sends a (server-to-server) notification to a URL (chosen by you) with the transaction details.
PayPal PDT - after a payment (if you set this up in your PayPal account), PayPal redirects the user back to your site, passing the transaction id in the URL, so you can query PayPal about that transaction to get its details.
The problem is that you can't be sure which happens first:
whether your server is notified by IPN, or
whether the user is redirected back to your site.
Whichever is happening first, I want to be sure I'm not processing a transaction twice.
So, in both cases, I query my DB for the transaction id coming from PayPal (and the payment status too, actually, but that doesn't matter now) to see if I have already saved and processed that transaction. If not, I process it and save the transaction id, along with the other transaction details, into my database.
QUESTION
What happens if I start processing the first request (let's say it's the PDT, so the user was redirected back to my site, but my server hasn't been notified by IPN yet), but before I actually save the transaction to the database, the second request (the IPN) arrives and tries to process the transaction too, because it doesn't find it in the DB?
I would love to make sure that while I'm writing a transaction into database, no other queries can read the table, looking for that given transaction id.
I'm using InnoDB, and I don't want to lock the whole table for the duration of the write.
Can this be solved simply with transactions, or do I have to lock that row "manually"? I'm really confused, and I hope some more experienced MySQL developers can help me understand this and solve the problem.
Native database locks are almost useless in a Web context, particularly in situations like this. MySQL connections are generally NOT persistent: when a script shuts down, so does the MySQL connection, all locks are released, and any in-flight transactions are rolled back.
e.g.
situation 1: You direct a user to PayPal's site to complete the purchase.
When they head off to PayPal, the script which sent the HTTP redirect terminates and shuts down. Locks/transactions are released/rolled back, and as far as the DB is concerned they come back with a clean slate. Their record is no longer locked.
situation 2: PayPal does a server-to-server response. This is done via a completely separate HTTP connection, utterly distinct from the connection established by the user to your server. That means any locks you establish in the yourserver<->user connection are distinct from the paypal<->yourserver session, so the PayPal response may run into rows locked by the user's connection. And of course, there's no way of predicting when the PayPal response comes in. If the network gods smile upon you and PayPal's not swamped, you get a response very quickly, possibly while the user<->you connection is still open. If things are slow and the response is delayed, that response MAY encounter unlocked tables/rows because the user<->server session has completed.
You COULD use persistent MySQL connections, but they open up a whole other world of pain. E.g. consider the case where your script has a bug which gets triggered halfway through processing. You connect, do some transaction work, set up some locks... and then the script dies. Because the MySQL connection is persistent, MySQL will NOT see that the client script has died, and it will keep the transactions/locks in flight. But the connection is still sitting there in the shared pool, waiting for another session to pick it up. When it inevitably is picked up, that new script has no idea that it has been handed this old "stale" connection. It will step into the middle of a mess of locks and transactions it has no idea exist. You can VERY easily get yourself into a deadlock situation like this, because your buggy scripts have dumped garbage all over the system and other scripts cannot cope with that garbage.
Basically, unless you implement your own locking mechanism on top of the system, e.g. UPDATE users SET locked=1 WHERE id=XXX, you cannot use native DB locking mechanisms in a Web context except in 1-shot-per-script contexts. Locks should never be attempted over multiple independent requests.
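For the transaction-id race in the question, one common application-level pattern is to let a UNIQUE key arbitrate: both the PDT and the IPN handler try to insert the transaction id, and only the request that actually inserts the row goes on to process the payment. A rough sketch, with hypothetical table and column names:

```php
<?php
// Sketch: claim a PayPal transaction id atomically via a UNIQUE/PRIMARY key.
// Assumes a table like: transactions(txn_id VARCHAR(64) PRIMARY KEY, status VARCHAR(20), ...)
$pdo = new PDO('mysql:host=localhost;dbname=shop', 'dbuser', 'dbpass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$txnId = $_REQUEST['txn_id'] ?? $_REQUEST['tx'] ?? ''; // IPN posts txn_id, PDT returns tx

$stmt = $pdo->prepare(
    'INSERT IGNORE INTO transactions (txn_id, status) VALUES (:txn, :status)'
);
$stmt->execute([':txn' => $txnId, ':status' => 'processing']);

if ($stmt->rowCount() === 1) {
    // This request won the race: the row did not exist before, so verify the
    // payment with PayPal, process it, then update the row with the details.
} else {
    // The other notification (IPN or PDT) already claimed this txn_id: do nothing.
}
```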
I am trying to implement a 'remember me' system with cookies that will remember a user across browsers, meaning that if a user logs into a website using browser A and checks 'remember me', and then logs in from browser B using 'remember me', he will continue to be automatically logged in regardless of which browser he uses (checking 'remember me' in browser B will not break his persistent login in browser A).
To do this, I set up my database so that multiple keys can be stored alongside a user id. When a user logs onto my website, the cookie's value is checked. If that value is found in the database, the user is assigned a new cookie and that cookie key entry in the database is updated to match. Other keys are left alone so that other browsers' login persistence will not be affected. When a user logs out manually, the cookie is checked, the corresponding entry in the database is deleted, and then the cookie is deleted.
The problem comes up when a user manually deletes his cookie. If the user does this, I have no way of deleting the corresponding entry in the database. It will simply become a permanent entry in my database. This was not a problem when I was not trying to support cross-browser 'remember me', but has become a problem by allowing multiple cookie keys to be stored.
Is there any way that I can fix / avoid this?
There is a ton of information out there on persistent logins, but persistent logins across browsers never seem to be covered, so any help would be great. (Also feel free to critique my approach and any security issues. It seemed way more secure when I was only allowing one 'remember me' per user, but persistent logins across browsers seem like functionality that users would want.)
I am using MySQL and PHP.
I agree with #llion's suggestion of setting an expiry on the cookies, in which case you can schedule a process to clear expired cookies out of the DB. However, you can make this appear to the user almost as though the cookies are indefinitely persistent by extending their life whenever you see them.
For the benefit of any other readers interested in this question, I really hope that you are only storing hashes of the cookie in your DB.
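In other words, something along the lines of this hypothetical table, where only a SHA-256 hash of the cookie value ever touches the database (all names are just examples):

```php
<?php
// Sketch of a per-browser "remember me" token table: one row per user/browser pair.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'dbuser', 'dbpass');
$pdo->exec('
    CREATE TABLE IF NOT EXISTS remember_tokens (
        id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        user_id    INT UNSIGNED NOT NULL,
        token_hash CHAR(64)     NOT NULL,  -- SHA-256 hex of the cookie value
        expires_at DATETIME     NOT NULL,
        UNIQUE KEY uq_token (token_hash),
        KEY idx_user (user_id)
    ) ENGINE=InnoDB
');
```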
I would suggest going with a "remember me (long enough)" solution. Set an expiry on the sessions but make it a lengthy one. Depending on how often you expect users to log in, this could be anything from 8 hours to a week to a year plus. Each time they visit with a valid cookie you update the expiry behind the scenes, and it appears persistent. If they delete cookies then eventually their session will be removed.
(If you're not actually using sessions, which it doesn't sound like you are, you'd need to add some maintenance coding around this. Probably best to learn about sessions instead of reinventing the wheel.)
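A minimal sketch of that "extend it whenever you see it" idea, reusing the hypothetical remember_tokens table above and assuming only a hash of the cookie is stored:

```php
<?php
// Sketch: validate the remember-me cookie, rotate the token, and push the expiry forward.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'dbuser', 'dbpass');

$raw  = $_COOKIE['remember_me'] ?? '';
$hash = hash('sha256', $raw);

$stmt = $pdo->prepare(
    'SELECT user_id FROM remember_tokens WHERE token_hash = :h AND expires_at > NOW()'
);
$stmt->execute([':h' => $hash]);
$userId = $stmt->fetchColumn();

if ($raw !== '' && $userId !== false) {
    // Log the user in, then rotate the token and extend its life so it feels persistent.
    $new = bin2hex(random_bytes(32));
    $pdo->prepare('UPDATE remember_tokens
                      SET token_hash = :n, expires_at = DATE_ADD(NOW(), INTERVAL 30 DAY)
                    WHERE token_hash = :h')
        ->execute([':n' => hash('sha256', $new), ':h' => $hash]);
    setcookie('remember_me', $new, time() + 60 * 60 * 24 * 30, '/', '', true, true);
}
```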
To answer your question clearly:
There is no way for you to know about rogue remember_me tokens out in the wild; the only real solution is to make your remember_me tokens last only a couple of weeks, and then have a cron job or daemon kill the expired ones.
This fixes the DB overcrowding, which seems to be the core of your request.
Please note that you are facing a real-world limitation: there is no way to tell when a user has deleted the cookie, since the browser fires no background notification (or anything else), so the only approach is to kill tokens regularly when they go unused and to refresh the expiration date each time they are used.
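The cron-job cleanup itself can be tiny; a sketch, assuming a remember_tokens table with an expires_at column (hypothetical names):

```php
<?php
// Sketch of a cron-run cleanup that deletes expired remember-me tokens.
// Example crontab entry:  0 3 * * *  php /path/to/purge_tokens.php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'dbuser', 'dbpass');
$deleted = $pdo->exec('DELETE FROM remember_tokens WHERE expires_at < NOW()');
echo "Purged {$deleted} expired remember-me tokens\n";
```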
The system you describe is (if done right) more secure than long-lived PHP sessions, so I suggest you keep your current approach, secure it with series+token pairs, and kill long-lived tokens that have gone unused for a couple of weeks.
Hope that helps you.
Hmm, what happens if he is on another machine and uses a browser with the same login? It's sure to happen; in our house I do this all the time. I have 3 boxes downstairs and my mother has 2 machines upstairs.
Maybe you can guarantee a session is unique using microtime and the UA string from navigator.userAgent,
but you can't get the computer name. You could possibly get their IP address through the JS API (http://www.w3.org/TR/2010/WD-system-info-api-20100202/#network), but using this might trigger some sort of warning dialog in the browser. Nope, it doesn't work.
Java can get the IP.
Had an interesting error today and couldn't find anything online about it, so I wondered if any of you had seen this behavior before.
We had an out-of-memory error and the CPU usage was spiking this morning on our reports server; a clean reboot seemed to rectify the issue. However, since then all the email subscriptions have been sending multiple times. What do I mean by this? As far as SSRS is concerned, the subscription ran once at its normal time (10am). This has been proven by scrutinizing the logs to see if another execution occurred (it didn't), and by renaming the stored procedure that the report references so that any new execution would fail, yet the mail was still resent. I then checked the Exchange queues and turned on logging for the connection, and I could see a new mail being resubmitted to the Exchange mail queue every 30 minutes.
The question is: what process is causing that mail to be resubmitted to the Exchange server, and how, other than another reboot, do we stop the emails from resending?
Thanks in advance
-- Further --
Having done more digging, we have noticed that the [ReportServer].[dbo].Notifications table is populated with all of the reports that are sending multiple times, with the Attempts column incrementing every time a duplicate email is sent.
We still don't know why these are resending.
It seems to be down to the logging level... If you switch the Report Server service logging level down to level 2 (exceptions, restarts and warnings), this error seems to manifest itself; however, when the logging level is switched back up to 3 or above, the error seems to disappear. Similar behavior is noted here: http://social.msdn.microsoft.com/Forums/en-NZ/sqlreportingservices/thread/b78bb6e2-0810-4afd-ba6b-8b09a243f349
Check the SQL Agent jobs (named with GUIDs) for the subscriptions. Maybe the schedules on those got messed up somehow.