When I first start editing a file remotely in Sublime Text 2 (the git repo is hosted on a VirtualBox copy of Ubuntu), saves are speedy as expected. After having the file/connection open for a while, it starts to lock up every time I save, forcing me to wait 3-4 seconds for the save to go through to the remote connection.
How do I solve this lag?
I solved this by unmounting/remounting the connection to my remote repo directory. After doing this, I was able to save with no lag or waiting.
I have a Windows Server 2019 machine with 16 GB RAM and 400 GB HD space. One of the apps saves images uploaded by users, which used up all of our HD space. Among other things, I found two directories with 1 million and 1.2 million text files, so I deleted those as a first step to free up some space. All of them went into the recycle bin (at the time I didn't know I could use Shift+Del to delete them directly, bypassing the recycle bin). We now have about 30 GB of free space on the hard drive, but cannot empty the recycle bin.
If I open the recycle bin, it just hangs while calculating the number of files and the space used. As it does this, it slowly eats up memory until all memory is used and the server crashes. If I right-click the recycle bin and select Empty, nothing appears to happen, but if I look at Task Manager I can see that Windows Explorer is slowly eating up memory until the system crashes again. So even though I don't open the GUI, the recycle bin is still calculating things, which eats up memory until it crashes.
I tried doing this with PowerShell, running Clear-RecycleBin with the -Force parameter. It appears to hang in the command window, and I can see in Task Manager that it's processing and, once again, eating up memory until the system crashes.
I'm really stuck here. How can I empty the recycle bin without making it first count the files and estimate the size of the data it will remove?
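One commonly suggested workaround (a sketch, not something verified against this exact setup) is to bypass the shell's recycle-bin enumeration entirely and delete the hidden per-drive $Recycle.Bin folder from an elevated command prompt; Windows recreates the folder automatically the next time something is deleted. The drive letter below is an assumption, and you would repeat the command for each drive that holds deleted files:

```
rd /s /q C:\$Recycle.Bin
```

Because rd removes the directory tree directly, Explorer never has to count the files or estimate their size first.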
Thanks.
I don't get it. Every time I've deleted all the content of a MySQL directory in var/lib/mysql/mywebsite, my website still runs on any device or browser without any negative effect. If I look in phpMyAdmin, the database is completely empty! If I delete the whole directory (the directory itself included), it has an effect (the site is gone), but after restoring this directory with a different database, the old version appears on the website again instead of the newly restored database!
BUT if I clear the tables of that same database in phpMyAdmin, it affects my site immediately... why is that? Could it be that there is some kind of DB caching on my vServer?
After:
service mysqld restart
everything is back to normal, meaning the right content from the right database.
Would be nice if somebody could help me with this.
CentOS 6.9 (Final)
Plesk Onyx 17.5.3 Update Nr. 4
Wordpress 4.7.4 (without caching enabled / caching plugins)
On Unix, if a process has a file open while another process deletes it, the process can still access the file -- it doesn't really go away until all processes close it. The mysqld process already has the database files open. So deleting the directory doesn't affect the contents of the database until you restart it, which forces it to reopen all the files.
Also, mysqld loads indexes into memory when possible, and doesn't reread them from the file if it doesn't need to.
In general, you should avoid directly manipulating the files used by a database while the daemon is running; the results can be very unpredictable. Preferably use commands in the database client to manage the database contents, but if you need to restore it from a file-level backup, shut down the daemon first.
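You can see these unlink semantics with a throwaway file; this sketch stands in for mysqld holding its data files open:

```shell
# A process holding a file descriptor keeps the data alive even after the
# directory entry is removed -- just like the running mysqld.
echo "old database contents" > /tmp/unlink_demo.txt
exec 3</tmp/unlink_demo.txt   # open fd 3 on the file (the "running daemon")
rm /tmp/unlink_demo.txt       # unlink it: the name is gone from the directory
cat <&3                       # but the data is still readable via the open fd
exec 3<&-                     # closing the last fd finally frees the blocks
```

Until that last close, the blocks stay allocated and readable, which is exactly why deleting var/lib/mysql out from under a running server appears to have no effect.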
Try DROP DATABASE in phpMyAdmin, or execute it from your console:
DROP DATABASE database_name;
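If you would rather run it from a shell than from phpMyAdmin, the same statement can be issued through the mysql client (database_name and the credentials are placeholders; DROP DATABASE is irreversible, so take a dump first):

```shell
mysqldump -u root -p database_name > database_name.sql   # safety dump first
mysql -u root -p -e "DROP DATABASE database_name;"
```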
I have a backup from two weeks ago that will be a last resort, but the DB files themselves seem fine and it appears it's just the ibdata1 file that is having issues. As stated, I'm using Xampp and MySQL crashes right after I start it. Here is the error log: http://textuploader.com/7vfd
I hadn't done anything out of the ordinary; it seemingly just up and stopped working.
I looked up InnoDB recovery, but the solutions I tried required MySQL to be functional, which it isn't due to the corruption. Is there a way to salvage what are presumably intact IBD files with a bad ibdata1 file?
Edit: I was aware of using innodb_force_recovery = # and I had tried it...except I had tried it in the wrong my.ini. I had to use the one in the bin folder. It appears to be fixed now.
The ibdata1 file contains the tablespace information and other metadata about your MySQL database(s).
You can try innodb_force_recovery = 1 all the way up to innodb_force_recovery = 6 to see if that rectifies the problem. Change this in your my.cnf (my.ini on Windows) file and then attempt to restart your MySQL server again.
If you are able to start MySQL using the recovery flag, your database will be in read-only mode. You should do a mysqldump of the data, stop MySQL, re-install fresh, create your database again, and import the data back in.
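That dump-and-rebuild cycle looks roughly like this (a sketch for a typical Linux install; the service name and data-directory path are assumptions, and on XAMPP you would stop/start MySQL from the control panel instead):

```shell
# 1. With innodb_force_recovery set, the server is up read-only: dump everything.
mysqldump -u root -p --all-databases > all_databases.sql
# 2. Stop the damaged server and move the old data directory aside.
sudo service mysqld stop
sudo mv /var/lib/mysql /var/lib/mysql.broken
# 3. Reinstall/reinitialize MySQL, remove innodb_force_recovery from my.cnf,
#    start the fresh server, then import the dump.
sudo service mysqld start
mysql -u root -p < all_databases.sql
```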
Here is a link with more info on InnoDB recovery: dev.mysql.
If you use Linux...
Another, more complex option is to use the Percona recovery toolkit. This will realign your tablespaces. Although, from experience, it's a bit of a challenge to navigate and takes some time to implement if you are a newb.
However, akuzminsky, the creator of the toolkit (how cool is that!), mentioned that he has made significant improvements to the toolkit.
Link to download toolkit Percona.com
Link with a walkthough from chriSQL.
Link to akuzminsky's website TwinDB.
Unless that data is mission-critical, I would just revert to the backup from two weeks prior. The time and effort you may end up putting into recovering this data may outweigh the benefit.
In my case, the my.ini didn't have the innodb_force_recovery option, so I added it to the file and the server was able to start normally.
The location of the ini file in my case was:
C:\xampp\mysql\bin\my.ini
innodb_force_recovery = 1
In my case, the data got corrupted after a disk check-up by Windows, which corrupted the database in the process.
Hope this helps someone out there as well.
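For reference, the option goes in the [mysqld] section of the my.ini that your server actually reads (the path in the comment is the XAMPP default mentioned above):

```ini
; C:\xampp\mysql\bin\my.ini
[mysqld]
; Start at 1 and only raise the value if the server still will not start;
; levels above 4 can make data loss permanent, so dump your tables first.
innodb_force_recovery = 1
```

Remember to remove the line again once you have dumped and rebuilt the database, since the server stays read-only while it is set.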
In my case, using XAMPP v3.3.3, I enabled innodb_force_recovery = 6 at the top of the my.ini configuration file and started the service from the XAMPP dashboard.
I also cleaned up the logs, disabled innodb_force_recovery = 6, and started it again. It works like a charm.
I just created a free PHP gear...
Is the instance automatically configured to roll logs and delete old logs (to make sure we don't go over the disk quota)?
Can you please tell me how often logs are rolled and when old ones get deleted?
thanks
At this moment (April 2014), Apache RotateLogs does not seem to be used anymore. This commit changed it to use logshifter, which reportedly defaults to rotating every 10 MB with a maximum of 10 log files.
So, to answer your question, it seems like things are automatically configured to roll logs and delete old logs to prevent us from going over disk quota.
BTW, the new logshifter setup combines the access_log and error_log into one log file instead of keeping them separate.
At this moment (Feb 2014), all OpenShift Apache-based cartridges use the Apache RotateLogs program to rotate logs every midnight:
/usr/sbin/rotatelogs <gear-dir>/php/logs/access_log-%Y%m%d-%H%M%S-%Z 86400
The log files are not deleted automatically. However, you can delete them manually using rhc app-tidy <app> command. (Read more about rhc tools.)
If you are concerned about logs eating all your gear capacity, you might consider using the monit community cartridge to trigger automatic email notifications when the app hits 80% of its gear storage quota, or to tidy your app automatically. If you have already created your app, you can add the monit cartridge with the following commands:
rhc env set MONIT_ALERT_EMAIL=my@email.com -a YOUR_APP
rhc cartridge-add http://goo.gl/jiIB2C -a YOUR_APP
And last but not least, feel free to open a new bug report or new feature request for OpenShift.
I have an Ubuntu LAMJ server running Tomcat 6.
One of my JSP applications freezes every couple of days, and I am having trouble figuring out why. I have to restart Tomcat to get that one app going again, as it won't come back on its own. I am getting nothing in my own log4j logs for that app, and can't see anything in catalina.out either.
This application shares a javax.sql.DataSource resource with another, via a Context element in the server.xml file. I don't think this is the cause of the problem, but I may as well mention it.
Could anyone point me in the right direction to find the cause of this intermittent issue?
thanks in advance,
Christy
Get a Thread dump of the running server
There are two options
Use VisualVM
In your %JAVA_HOME%/bin folder there will be a file called jvisualvm. Run this and connect to your Tomcat server. Click the Threads tab and then "Thread Dump".
Manually from the Command Line
Open up a command line and find the process ID for your Tomcat:
ps -ef | grep java
Once you identify the process ID of the running Tomcat instance, run
kill -3 <pid>
replacing <pid> with the process ID. This sends the thread dump to your Tomcat's stdout, most likely the catalina.out file.
edit - As per Mark's comments below:
It is normal to take 3 thread dumps ~10s apart and compare them. It makes it much easier to see which threads are 'stuck' and which ones are moving.
Once you have the thread dump, you can analyse it for stuck threads. Stuck threads may not turn out to be the problem, but at least you can see what is going on inside the server and analyse it further.
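The "three dumps ~10 seconds apart" advice can be scripted; this is a minimal sketch that assumes a JDK's jstack is on the PATH and that Tomcat is the only JVM on the box, reusing the ps | grep pattern from above:

```shell
# The [j] trick stops grep from matching its own process.
PID=$(ps -ef | grep '[j]ava' | awk '{print $2}' | head -n 1)
# Take three dumps ten seconds apart so moving threads can be told
# apart from stuck ones when the files are diffed.
for i in 1 2 3; do
  jstack "$PID" > "threaddump-$i.txt"
  sleep 10
done
```

Unlike kill -3, jstack writes the dump to its own stdout rather than Tomcat's, so you do not have to dig through catalina.out afterwards.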