OK, so basically I've got a CakePHP script that inserts over 7 million records into a database. With that many records, I'm running into timeout issues. This is on a personal server, so the memory limit is set to 2000MB; memory isn't really a problem for the approach I want to take.
The database rows come from a huge file. Since the file was too big for the memory limit, I've split it into 101 pieces of 10,000 lines each.
I want the page to refresh after every 10 records and, when it comes back, resume inserting records where it left off.
Any ideas?
I've tried the $this->redirect() route, but it created never-ending scripts that had to be stopped by manually restarting the server.
Why are you not using a shell for that?
To avoid the redirect loop, you could try redirecting between two actions, or try attaching a timestamp to the URL. I'm not sure whether that will work; the shell would be the much better approach anyway.
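For illustration, here's a minimal sketch of what such a shell could look like. This assumes CakePHP 2.x, a hypothetical Record model, and chunk files living under app/tmp/chunks/; all of those names are placeholders, not anything from the question.

<?php
// app/Console/Command/ImportShell.php
// Run from the command line with: Console/cake import
class ImportShell extends AppShell {
    public $uses = array('Record'); // hypothetical model for the target table

    public function main() {
        // A shell runs outside the web server, so there is no request
        // timeout and nothing to refresh or redirect.
        foreach (glob(TMP . 'chunks/*.txt') as $chunkFile) {
            $handle = fopen($chunkFile, 'r');
            $rows = array();
            while (($line = fgets($handle)) !== false) {
                $rows[] = array('Record' => array('data' => trim($line)));
                if (count($rows) >= 1000) { // save in batches to keep memory flat
                    $this->Record->saveMany($rows);
                    $rows = array();
                }
            }
            if (!empty($rows)) {
                $this->Record->saveMany($rows);
            }
            fclose($handle);
            $this->out('Done: ' . $chunkFile);
        }
    }
}

Because it never touches a web request, the script can run for hours without hitting max_execution_time, and you can resume after an interruption by simply moving aside the chunk files that have already been processed.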
I am trying to implement a function that uploads around 40 million records to a MySQL database hosted on AWS. However, my write statement gets stuck at 94% indefinitely.
This is the command I'm using to upload, with rewriteBatchedStatements and useServerPrepStmts enabled in the connection properties:

df_intermediate.write.mode("append").jdbc(jdbcUrl, "user", connectionProperties)

This statement works for a small number of rows (50,000) but cannot handle this larger amount. I've also increased the maximum number of connections on the MySQL side.
EDIT: I'm running this on GCP n1-standard-16 machines.
What could be the reasons that the write gets stuck at 94%?
I don't think this really has anything to do with Scala; you are just saying you want to add many, many rows to a DB. The quick answer is to not put all of these in one transaction, but to commit them, say, 100 at a time. Try this on a non-production SQL database first to see whether it works.
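For what it's worth, Spark's JDBC writer already batches inserts (the batchsize option, default 1000, controls how many rows go into each JDBC batch, and numPartitions controls write parallelism), so those knobs are worth checking too. To show the commit-in-chunks idea itself, here is a minimal sketch in plain PHP with PDO, matching the other threads on this page; the table, column names, and sample data are made up:

<?php
// Sketch: commit inserts in small chunks instead of one giant transaction.
// Assumes a hypothetical table points(x, y); adjust the DSN and credentials.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$stmt = $pdo->prepare('INSERT INTO points (x, y) VALUES (?, ?)');

$rows = array(array(1, 2), array(3, 4)); // sample data; in practice, stream from your source

$pdo->beginTransaction();
$count = 0;
foreach ($rows as $row) {
    $stmt->execute(array($row[0], $row[1]));
    if (++$count % 100 === 0) {    // commit every 100 rows, as suggested above
        $pdo->commit();
        $pdo->beginTransaction();
    }
}
$pdo->commit();                    // commit the final partial chunk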
I'm new to MySQL, and something really weird has happened that I can't figure out.
Recently, INSERT queries on some of the tables have become extremely slow. Weirdly enough, the query time is always around 60 seconds.
The tables only have 10k to 35k entries each, so I don't think they are that big (though they are indeed the biggest ones in the database).
The slowness affects only INSERT queries; DELETE, UPDATE, and SELECT all execute in 0.000x seconds.
Can someone help me figure out why this is happening?
UPDATE: I turned on the general log and noticed that all my INSERT queries are followed by 'DO sleep(60)'. It seems my server got hacked?
Where can I find the malicious script that injects the sleep() command after each INSERT?
If you use code to build the queries, copy the code base off the server to your machine (ideally in a VM, just in case) and search for the changes within the code. Alternatively, you could restore the code base from source control (you use source control, right?!).
If it's stored procedures you use, you'll need to change them back to a working version without the sleep. Check previous backups to try to find out when this happened; that might help a wider investigation into how they got in and what they did.
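Since the DO sleep(60) shows up as its own statement in the general log, it most likely comes from compromised application code, but it's also worth ruling out a malicious trigger or stored routine planted in the database itself. These information_schema views are standard MySQL, so a quick check could look like this:

-- Look for triggers that might fire around INSERTs
SELECT TRIGGER_NAME, EVENT_OBJECT_TABLE, ACTION_STATEMENT
FROM information_schema.TRIGGERS
WHERE ACTION_STATEMENT LIKE '%sleep%';

-- Look for stored procedures or functions containing sleep
SELECT ROUTINE_SCHEMA, ROUTINE_NAME, ROUTINE_TYPE
FROM information_schema.ROUTINES
WHERE ROUTINE_DEFINITION LIKE '%sleep%';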
You'll also need to think about the wider implications of this. Do you store user data? If so, you'll need to inform your users that your database has been compromised, and that they should therefore assume their accounts are too and change their passwords.
Finally, wipe the server. A hacked server is no longer in your control (or that's how you should look at it). Wipe it, reinstall everything, and put in changes to help prevent the same hack happening again.
I am using OpenCart, and it is extremely slow; I have about 3000 products. The category listing page is especially slow. It doesn't seem to be a server issue, as I've checked with various servers. I suspect that lots of MySQL queries are running unnecessarily. Is there any way to get a list of all the queries executed on that page using some kind of PHP function? Or perhaps an OpenCart function?
There's actually a vQmod file you can install that will create a log file of all the database queries and the time each takes to execute. You can find the thread and XML file here
You can also set the general_log variable in MySQL to ON. This will log every query sent to the MySQL server to a file, which can be useful for debugging.
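For example (these are standard server variables; the log file path is just an illustration):

-- Enable the general query log at runtime; no server restart needed
SET GLOBAL general_log_file = '/tmp/all-queries.log';
SET GLOBAL general_log = 'ON';

-- Turn it off again when you're done; the log grows very quickly
SET GLOBAL general_log = 'OFF';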
I have just reloaded my laptop and am now trying to set up my localhost again.
Currently I am trying to re-create the database.
The issue is, the script is 169,328 KB.
This keeps crashing whatever I use to run the query, and I get the error: MySQL server has gone away.
Everyone seems to be suggesting splitting the script. Since this is just me setting my localhost back up, I have instead temporarily increased max_allowed_packet.
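For reference, a sketch of how that can be done; the 512 MB value is just an example, and you'd set it in my.cnf as well if it should survive a restart:

-- Raise the limit at runtime (needs SUPER); new connections pick it up
SET GLOBAL max_allowed_packet = 536870912; -- 512 MB

-- Or pass it to the client when importing the dump:
-- mysql --max_allowed_packet=512M mydb < dump.sql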
A script that size would explain the error.
Perhaps you should open the script and see if you can chunk it into smaller, more manageable pieces.
I don't know what you're doing about transactions, but perhaps the rollback segment (or its MySQL equivalent) is getting too large. If that's the case, break the script into several transactions that you can safely commit individually.
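In a plain SQL script, that just means sprinkling commits through it, something like this (the batch boundaries are arbitrary):

-- Disable autocommit, then commit explicitly every few thousand rows
SET autocommit = 0;
-- ... first batch of INSERT statements ...
COMMIT;
-- ... next batch of INSERT statements ...
COMMIT;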
If you're looking to avoid the error message, consider one of these remedies:
ensure that your environment or commands aren't triggering one of the documented causes of 'MySQL server has gone away'.
split your large script into smaller scripts, which you could then run in sequence.
Is there an application or some code I can use to check which caching functions are turned on?
On this app I'm working on, I thought there was MySQL caching, but since I started using SELECT SQL_NO_CACHE in one of my queries (applied yesterday), the caching has not stopped. This leads me to suspect it's a PHP caching function that's at work.
I looked over php.ini for any possible cache features, but didn't see anything that stood out.
Which leads me to this: is there an app I can download, or a shell command I can run, that will tell me which caching functions are on, or what may be causing the caching?
You probably already know that MySQL has a query caching mechanism. For instance, if you have a table named users and run a query like this:
SELECT COUNT(*) FROM `users`
It may take 3 seconds to run. However, if you run the query again, it may take only 0.02 seconds. That's because MySQL cached the results from the first query. MySQL will clear its cache, though, whenever you update the users table in any way: inserting new rows, updating a row, and so on. So it's doubtful that MySQL is the problem here.
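If you want to verify directly whether the query cache is even enabled, you can ask the server; these are standard variables on MySQL 5.x (note the query cache was removed entirely in MySQL 8.0):

-- Is the query cache configured and turned on?
SHOW VARIABLES LIKE 'query_cache%';

-- Is it actually being hit?
SHOW STATUS LIKE 'Qcache%';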
My hunch is that your browser is caching the data. It's also possible that logic in your code grabs the old row, updates it, and then displays the old row's data in the form. I really can't say without seeing your code.
You probably need to close and restart your browser. I'd bet it's your browser caching, not the back end.