I've been unable to get mysql command-line history working for my user, even though it works fine for other users on the same machine (MySQL 5.5 on Debian "wheezy"). I'm at my wit's end, and am hoping someone here can help me...
Whenever I start mysql, I have no history to scroll back to (it just gives a visual bell alert when I try up-arrow for example).
However, as I'm using it, history works fine (within a single session). I.e., I can go back to earlier commands since I started mysql. But, the second I quit mysql, I lose all my history and have to start over again the next time.
Needless to say, this is extremely frustrating!
To troubleshoot, I did three things, none of which made any difference at all:
(1) I explicitly set the environment variable (using bash):
% MYSQL_HISTFILE=~/.mysql_history
% echo $MYSQL_HISTFILE
/var/home/userx/.mysql_history
... and I double-checked that the permissions were set correctly (both on the file and on the directory -- note, I created an empty file just to be sure it wasn't having trouble creating it itself):
drwxr-xr-x 53 userx userx 4096 Jan 24 15:26 /var/home/userx
-rw------- 1 userx userx 0 Jan 31 04:14 /var/home/userx/.mysql_history
I confirmed that it is "-rw-------" and that the file is owned by the user in question (me), identical to the setup for all the other users on the same machine for whom it works fine. That said, the mysql client documentation doesn't say you need to set this environment variable unless you want to change the location (so I've of course also tried it without setting the variable; see the note on exporting just below).
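One nuance on the environment-variable test above: only an exported variable is inherited by the mysql client process. A minimal sketch of the exported form (and since ~/.mysql_history is the default location anyway, this only matters if you relocate the file):

% export MYSQL_HISTFILE=~/.mysql_history
% mysql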
(2) I tried setting/tweaking various logging-related configs in /etc/mysql/my.cnf (from looking at the documentation; however, all of the settings seem to truly be about logging, not about command-line history).
None of the settings in /etc/mysql/*.cnf seems to have anything to do with command-line history (only with server-level logging, e.g. to /var/log/mysql...).
To be sure, I reverted everything back to how it was in the standard installation (installed via apt-get on Debian wheezy), so my mucking around can't actually have been the cause. (Note: it works fine for other users on the exact same machine!)
(3) I tried examining/tweaking various variables within mysql itself (based on various things I've seen posted). But these are hard to find good information on, and since it works for other users on the same machine, I'm skeptical whether this will matter. Anyway, here's what I did here:
First, to get a list of all currently-set variables, I did:
% echo "show variables" | mysql > /tmp/vars
Looking through them, I didn't see anything that seemed to be relevant. But here're some examples (it's too long to dump all of them here; let me know if there's one variable or one search I can do that may yield the answer, though):
% grep -i hist /tmp/vars
performance_schema_events_waits_history_long_size 10000
performance_schema_events_waits_history_size 10
profiling_history_size 15
... as well as ...
% grep -i log /tmp/vars    (note: irrelevant binlog entries omitted)
back_log 50
expire_logs_days 10
general_log OFF
general_log_file /var/lib/mysql/xxx.log
innodb_log_group_home_dir ./
innodb_mirrored_log_groups 1
log OFF
log_error
log_output FILE
log_queries_not_using_indexes OFF
log_slave_updates OFF
log_slow_queries OFF
log_warnings 1
slow_query_log OFF
slow_query_log_file /var/lib/mysql/rimu3-slow.log
None of these seems relevant, and nothing I tried based on various web searches about changing variables solved the problem -- and, remember, none of the other users on the same machine has this issue. So unless there is some variable or other server state I haven't found that refers specifically to my user, or some access policy somewhere I haven't discovered that does (for example), this just isn't explainable.
The only documentation I can find anywhere about the mysql_history file is here. But it doesn't tell you how to enable mysql_history! (It only says how to disable it, or to change where it goes, which also doesn't change anything for me.)
To wrap up: I have confirmed that my troubleshooting didn't leave anything set incorrectly; I'm back to the standard environment variables, server configs, variables, etc.
I'm really completely stumped here. Any help would be immensely appreciated!
Steve
I found the problem.
There are actually two files used for mysql history: ~/.mysql_history and ~/.mysql_history.TMP. I only discovered the second file by tracing the client with strace:
open("/var/home/userx/.mysql_history.TMP", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0600) = -1 EACCES (Permission denied)
Knowing that there are two files, the issue becomes much clearer:
% ls -l ~/.mysql_history*
-rw------- 1 userx userx 0 Jan 31 04:14 /var/home/userx/.mysql_history
-rw------- 1 root root 279506 May 11 2014 /var/home/userx/.mysql_history.TMP
(And yes, this problem goes all the way back to May 2014, which makes it all make a lot more sense now.)
In my case, I had root access via sudo, so I could easily fix it:
% sudo chown userx:userx /var/home/userx/.mysql_history.TMP
And the subsequent use of mysql worked perfectly (though all my history in between was still lost forever). :-(
The root problems are:
(a) the mysql documentation makes no mention of this file (which, arguably, the client shouldn't even need), and
(b) the mysql client gives no error message, either at start-up or at exit, to let the user know it can't write to this file.
--
So, there you have it, in a nutshell:
(1) The mysql documentation fails to mention anywhere that the client also uses .mysql_history.TMP, which needs the same ownership and permissions.
(2) Using a wrapper like rlwrap worked around this (see my comment above) because it apparently doesn't use that .TMP version of the file.
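(For reference, the workaround was essentially just wrapping the client, roughly like this -- rlwrap keeps its own history handling, which, as noted, never touches the .TMP file; some people add -a to force rlwrap's line editing since mysql has its own:)

% rlwrap mysql -u userx -p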
Steve
Related
I am a little confused about start-up scripts and the command-line options. I am building a small Raspberry Pi based server for my Node applications. In order to provide maximum protection against power failures and flash write corruption, the root file system is read-only, and that includes the home directory of my main user, where the production versions of my apps (two of them) are stored. Because the .pm2 directory there is no good for logs etc., I currently set the PM2_HOME environment variable to a place in /var (which has 512 KB of unused space around it to ensure writes to it are safe). The eco-system.json file also reads this environment variable to determine where to place its logs.
In case I need it, I also have a secondary user with a read/write home directory on another partition (also protected by buffer space around it). This contains development versions of my application code, which, for the convenience of setting up environments etc., I also want to monitor with PM2. If I need to investigate a problem I can log in as that user and run and test the application there.
Since this is a headless box, with watchdog and kernel-panic restarts built in, I want pm2 to start during boot and at minimum restart the two production apps. Ideally it should also start the two development versions of the app, but I can live without that if it's impossible.
I can switch the read only root partition to read/write - indeed it does so automatically when I ssh into my production user account. It switches back to read only automatically when I log out.
So I went to this account to try and create a startup script. It then said (unsurprisingly) that I had to run a sudo command like so:
sudo su -c "env PATH=$PATH:/usr/local/bin pm2 startup ubuntu -u pi --hp /home/pi"
The key issue for me here is the --hp switch. I went searching for some clue as to what it means. It's clearly a home directory, but it doesn't match PM2_HOME - which is set to /var/pas in my case to keep it out of the read-only area. I don't want to spray my home directory with files that shouldn't be there, so I am asking for some guidance here.
I found out by experiment what it does with an "ubuntu" start-up script: it uses the --hp value to set PM2_HOME in the script, appending "/.pm2" to it.
However, there is nothing stopping you from editing the script once it has been created and setting PM2_HOME to whatever you want.
So effectively it's a helper for generating the script, but only that and nothing more special.
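For illustration, the relevant line in the generated script looks roughly like this, and you can simply repoint it (the script name /etc/init.d/pm2-init.sh and the /var/pas path are just examples from my setup; details vary by PM2 version):

# excerpt from the generated init script, e.g. /etc/init.d/pm2-init.sh
export PM2_HOME="/home/pi/.pm2"     # what "--hp /home/pi" produces
# ...which you can change by hand to, say:
export PM2_HOME="/var/pas"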
I've been trying to set up two databases as master & slave.
I followed the famous guide here:
https://dev.mysql.com/doc/refman/5.1/en/replication-howto-existingdata.html
But no luck on my slave server.
The issue I'm having is setting the server-id variable.
No matter where I define it, it doesn't seem to take effect (I've looked into all the possible .cnf files that might allow me to define the variable).
I tried to define it like so:
[mysqld]
server-id = 2
I also tried setting it with SET GLOBAL server_id, but of course that didn't persist the setting across a restart.
When I do:
SHOW VARIABLES LIKE 'server_id'
It returns
server_id 0
Among the cnf files I've looked into are:
/etc/mysql/mysql.conf.d/mysqld.cnf
/etc/mysql/mysql.conf.d/mysqld_safe_syslog.cnf
/etc/mysql/conf.d/mysql.cnf
/etc/mysql/debian.cnf
/etc/mysql/mysql.cnf
/usr/my.cnf
/usr/my-new.cnf
/usr/etc/my.cnf
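For reference, one way to see which option files the server actually reads, and in what order (a quick sketch; the binary name may differ on your install):

% mysqld --verbose --help 2>/dev/null | grep -A 1 'Default options'
% mysqld --print-defaults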
My MySQL server is running on Ubuntu.
And if it matters, I start it by typing:
service mysql start
I'd love to know where else I could look to fix this issue.
Thanks a bunch!
Additional Notes:
MySQL Ignoring the global conf file
I received this warning a couple of times, and it went away when I reverted the permissions on the /etc/mysql folder to 644. Everything stated above was attempted with both 644 and 777 permissions; with 644 the warning disappears.
I know it's too late, but this is for those who make the same mistake.
It's server_id and not server-id:
[mysqld]
server_id = 2
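After adding that under a section the server actually reads, restart and verify -- a quick check, assuming the usual Ubuntu service name:

% sudo service mysql restart
% mysql -e "SHOW VARIABLES LIKE 'server_id'"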
For me it worked after renaming the file from /etc/mysql/conf.d/my.cnf to /etc/mysql/conf.d/my.ini.
I'm not sure of the exact reason for that (feel free to edit this answer and add to it).
I tried out various combinations:
Keeping the variable as: server-id and server_id
Keeping the group/section as: [mysql] and [mysqld]
Renaming the file to: my.cnf, mysqld.cnf, mysql.cnf, my.ini
You can keep trying combinations of the above options; one of them should definitely work. :)
I've been having some problems with my application not loading the views (sometimes).
I am running a Debian server with php-fpm and nginx (PHP 5.6.8 and nginx 1.8.0), both compiled from source. On top of that I am running Laravel 4.2.
So far I've had the problem in both Chrome and Firefox (Chrome simply stops loading and shows the error; Firefox does not show an error but renders an incomplete version of the view).
So far I've checked the permissions of both nginx and PHP; they both run as the same user (www-data:www-data).
My php-fpm socket is configured as:
[sitename]
listen = /var/run/php5-fpm/sitename.sock
listen.backlog = -1
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
; Unix user/group of processes
user = folderuser
group = www-data
; Choose how the process manager will control the number of child processes.
pm = dynamic
pm.max_children = 75
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 20
pm.max_requests = 500
; Pass environment variables
env[HOSTNAME] = $HOSTNAME
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp
Note that I set user to folderuser because the folder where the files for the site are located is owned by folderuser (folderuser:www-data).
Furthermore, permissions inside the Laravel folders are configured as 755 (775 for the cache and upload folders so that www-data can write cache files).
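For reference, that layout corresponds roughly to these commands (the site path is a placeholder; app/storage is where Laravel 4.x keeps its writable cache/session/log files):

% sudo chown -R folderuser:www-data /path/to/sitename
% sudo chmod -R 755 /path/to/sitename
% sudo chmod -R 775 /path/to/sitename/app/storage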
I have disabled any kind of serverside php cache (except for zend opcache).
I've also tried disabling the "prefetch resources to load pages more quickly" feature in Chrome, which did not solve the problem.
As a last resort I've tried this solution:
/*
|--------------------------------------------------------------------------
| Fix for Chrome / PHP 5.4 issue
| http://laravel.io/forum/02-08-2014-another-problem-only-with-chrome
|--------------------------------------------------------------------------
*/
App::after(function($request, $response)
{
    $content = $response->getContent();
    $contentLength = strlen($content);

    $response->header('Content-Length', $contentLength);
});
I also tried some variants of this script, but then I got content-length mismatches (more often than the net::ERR_INCOMPLETE_CHUNKED_ENCODING errors).
So to sum up: I've checked permissions and user/group settings server-side, disabled server-side caching (except for Zend OPcache), messed around with Chrome settings, and tried a script for Laravel, none of which solved the issue. Note that the issue happens at random intervals on random pages of the site.
I really do not know what the next step towards solving my problem would be as the solutions above are the only ones I've found on the internet.
I would really appreciate some help.
Edit: I have a beta version of the same application running off another server with the exact same configuration (the only difference is hardware, mainly more memory); the issue does not occur there.
Also, I forgot to mention that the application does not currently run over HTTPS. The beta version, however, does.
Edit: The server where the issue is present has 2048 MB of RAM; the beta server has 8192 MB.
Edit: I inspected the response with Fiddler when the error occurred; the response is simply cut off at some point for no apparent reason.
You might want to check whether the folder /var/lib/nginx is owned by www-data too. I had this problem: when the response page was too big, the nginx worker process tried to buffer it in this folder and failed, because the folder was owned by nginx while the worker process ran as www-data. Running chown -R www-data:www-data /var/lib/nginx fixed the problem.
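A quick way to confirm this is the cause (the buffer directory and log path depend on how nginx was built -- check nginx -V and your error_log directive):

% sudo ls -ld /var/lib/nginx /var/lib/nginx/*
% sudo tail -f /path/to/nginx/error.log
  (look for lines like "open() ... failed (13: Permission denied) while reading upstream")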
If anyone finds this in the future: my net::ERR_INCOMPLETE_CHUNKED_ENCODING errors were the result of having run out of disk space. Have a look at your disk usage and see if that's the cause!
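Quick checks, for what it's worth:

% df -h     # a filesystem at or near 100% use is the giveaway
% df -i     # running out of inodes produces the same failed writes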
I've seen a similar problem on my nginx server running on the latest Debian. I'm running a WordPress site with Advanced Custom Fields installed. The Advanced Custom Fields documentation says the problem can be related to the max_input_vars value in php.ini. I increased my value from 1000 to 3000 and that fixed the issue on one of my sites.
You can check out this link to see if it might help you. http://www.advancedcustomfields.com/faq/limit-number-fields/
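For reference, the change itself is one php.ini directive followed by a PHP reload (the value and the service name here are just examples -- use whatever fits your forms and your setup):

; in php.ini
max_input_vars = 3000

then reload PHP, e.g.:

% sudo service php5-fpm restart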
I am not a MySQL expert.
I have a script that installs MySQL, starts mysqld, and then uses mysql to do some initialization.
Currently, in order to have this work, I start the server in the background and poll for its socket, roughly like this (in shell):

mysqld_safe ... &                     # start in the background so we can continue
while [ ! -S /tmp/mysql.sock ]; do    # poll for the server's Unix socket
    sleep 0.1
done
mysql -u root ... < initialize.sql    # and so forth
This seems to work (!) but has multiple problems:
polling smells funny,
I am not smart enough about MySQL to know whether looking at that hard-coded pathname /tmp/mysql.sock is smart at all.
And yet it's a lot easier than trying to (for example) consume and parse the stdout (or is it stderr?) of mysqld_safe to figure out whether the server has started.
My narrow question is whether there's a way to issue a blocking start of mysqld: can I issue any command that blocks until the database has started, and then exits (and detaches, maybe leaving a PID file), and has a companion stop command? (Or maybe allows me to read the PID file and issue my own SIGTERM?)
My broader question is, am I on the right track, or is there some totally different and easier (to be "easier" for me it would have to be lightweight; I'm not that interested in installing a bunch of tools like Puppet or DbMaintain/Liquibase or whatever) approach to solving the problem I articulated? That is, starting with a .gz file containing MySQL, install a userland MySQL and initialize a database?
Check out the init shell script for mysqld. They do polling, in a function called wait_for_pid().
That function checks for the existence of the pid file, and if it doesn't exist yet, sleeps for 1 whole second, then tries again. There's a timeout that defaults to 900 seconds, at which point it gives up waiting and concludes that it's not going to start (and outputs a totally unhelpful message "The server quit without updating PID file").
You don't have to guess where the pid file is. If you're starting mysqld_safe, you should tell it where it should create the pid file, using the --pid-file option.
One tricky part is that the pid file isn't created until mysqld initializes. This can take a while if it has to perform crash recovery using the InnoDB log files, and the log files are large. So it could happen that 900 seconds of timeout isn't long enough, and you get a spurious error, even though mysqld successfully starts a moment after the timeout.
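Putting those two ideas together, a minimal sketch of the pattern (the pid-file path and the 900-second cap are just examples mirroring the init script):

PIDFILE=/var/run/mysqld/bootstrap.pid
mysqld_safe --pid-file="$PIDFILE" &
waited=0
while [ ! -s "$PIDFILE" ] && [ "$waited" -lt 900 ]; do
    sleep 1
    waited=$((waited + 1))
done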
You can also read the error log or the console output of mysqld. It should eventually output a line that says "ready for connections."
To read until you get this line, and then terminate the read, you could use:
tail -n 0 -f "$ERROR_LOG" | sed -e '/ready for connections/q'
(where $ERROR_LOG is the path to the mysqld error log; -n 0 skips any "ready for connections" line left over from an earlier start)
You can use
mysqladmin -h localhost status
or use a pure bash solution like wait-for-it
./wait-for-it.sh --timeout 10 -h localhost -p 3306
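Or, as a small polling sketch around mysqladmin (mysqladmin ping exits non-zero until the server accepts connections; the 30-second cap is arbitrary):

for i in $(seq 1 30); do
    mysqladmin ping >/dev/null 2>&1 && break
    sleep 1
done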
I've successfully installed MySQL 5.6 on my F19. Although the installation was successful, I'm unable to start the mysql service.
When I ran
service mysql start
It returned the following error:
Starting MySQL..The server quit without updating PID file (/var/lib/mysql/sandboxlabs.pid).
I switched SELinux to permissive mode, and the service started smoothly. But I did some research about disabling SELinux and found that it's a bad idea. So, is there any way to add a custom MySQL policy? Or should I just leave SELinux in permissive mode?
The full answer depends on your server configuration and how you're using MySQL. However, it's completely feasible to modify your SELinux policy to allow MySQL to run. In most cases, this sort of operation can be performed with a small number of shell commands.
Start by looking at /var/log/audit/audit.log. You can use audit2allow to generate a permission-granting policy around the log messages themselves. On Fedora 19, this utility is in the policycoreutils yum package.
The command
# grep mysql /var/log/audit/audit.log | audit2allow
...will output the policy code that would need to be compiled in order to allow the mysql operations that were prevented and logged in audit.log. You can review this output to determine whether you'd like to incorporate such permissions into your system's policy. It can be a bit esoteric but you can usually make out a few file permissions that mysql would need in order to run.
To enable these changes, you need to create the policy module as a compiled module:
# grep mysql /var/log/audit/audit.log | audit2allow -M mysql
...will output the saved plaintext code to mysql.te and the compiled policy code to mysql.pp. You can then use the semodule tool to import this into your system's policy.
# semodule -i mysql.pp
Once you've done this, try starting mysqld again. You might need to repeat this process a few times since mysqld might still falter on some new access permission that wasn't logged in previous runs. This is because the server daemon encounters these permission checks sequentially and if it gets tripped on one, it won't encounter the others until you allow access to the initial ones. Have patience -- sometimes you will need to create mysql1.pp mysql2.pp mysql3.pp ... and so on.
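In practice the loop looks something like this (the module names are arbitrary; just use a new name each round rather than overwriting the previous module):

# grep mysql /var/log/audit/audit.log | audit2allow -M mysql2
# semodule -i mysql2.pp
# service mysql start
...and if it still fails, repeat with mysql3, and so on.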
If you're really interested in combining these into a unified policy, you can take the .te files and "glue" these together to create a unified .te file. Compiling this file is only slightly more work -- you need the Makefile from /usr/share/selinux/devel/Makefile in order to convert this into a .pp file.
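For that route, the compile step with the devel Makefile is short (mysql_local is just an example module name; the combined .te file sits in your current directory):

# make -f /usr/share/selinux/devel/Makefile mysql_local.pp
# semodule -i mysql_local.pp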
For more information:
If you're a more graphical type, there's also a great article by RedHat magazine on compiling policy here. There's also a great blog article which takes you through the creation of a policy here. Note the emphasis on using /usr/share/selinux/devel/Makefile to compile your own .te, .fc, and .if files (selinux source written in M4).