I have a problem where my shutdown-runlevel MySQL bash scripts are never executed. The scripts in rcS.d, however, always work.
I placed scripts in the init.d folder to be run at different runlevels. Currently, the only one I have gotten to work is the bash script that inserts a row recording that the server has booted up.
The problem is that when I use the same method to run a script at runlevels 0 or 6, for example, the script is never run. I think it might be a priority issue; look at the picture below (the priority is the same as the kill priorities for apache, etc.). Are the scripts run in alphabetical order, and could I bump all the priorities up by one somehow?
I'm guessing the problem is that the services are killed before my script runs: apache is killed first and mysql second. The same script works at boot time; it is just a bash file running a mysql insert command.
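A minimal sketch of how to check and adjust the kill order; the link names and numbers below are assumptions, so check what ls actually shows on your system:

# At shutdown/reboot the K* links in rc0.d/rc6.d run in ascending numeric
# (then alphabetical) order, each invoked with the "stop" argument.
ls /etc/rc0.d /etc/rc6.d
# Assumed output fragment:  K01apache2  K02mysql  K05mylogger

# For the INSERT to work, the logging script has to run while mysql is
# still up, i.e. it needs a lower K number than mysql's:
sudo mv /etc/rc0.d/K05mylogger /etc/rc0.d/K01mylogger
sudo mv /etc/rc6.d/K05mylogger /etc/rc6.d/K01mylogger

# Note: because these links are called with "stop", the init.d script must
# do its work in the stop) branch of its case statement.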
So I have a mysql Docker container up and running with three log files (general, error, and slow-query log) enabled. They are written to /var/log/mysql/ (a path inside the mysql container), which is actually a directory on the Docker host (named 'log') mounted into the container as a volume specified in the docker-compose.yml.
We chose this approach because we didn't want the general and slow-query logs combined on stdout, and we prefer a daily rotation of the three separate log files, since it makes it easier to find a certain query that was issued, say, four days ago.
Since the mysql image (afaik) doesn't come with logrotate and/or cron, we decided to have another service in the docker-compose.yml named logrotator, which starts cron in its entrypoint, which in turn regularly runs logrotate with a given logrotate.conf. The 'log' directory is also mounted into the logrotator container, so it can do its rotation job on the mysql log files.
Now it seems like mysql needs a "mysqladmin flush-logs" after each rotation to start writing into a new file descriptor, but the logrotator container cannot issue this command inside the mysql container.
To make it short(er): I'm sure there are better ways to accomplish separate log files with log rotation. Just how? Any ideas are much appreciated. Thanks.
Update:
Since we're using mysql 5.7 as of now, and hence probably cannot use the solution proposed by @buaacss (which might absolutely work), we decided to stay with a "cron" container. Additionally, we installed docker.io inside the cron container and mounted the Docker host's /var/run/docker.sock into the cron container. This allows us to use "docker exec" to issue commands (in this case 'mysqladmin flush-logs') from the cron container to be executed in the mysql container. Problem solved.
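For reference, a minimal sketch of what the logrotate config driving this could look like inside the cron/logrotator container; the container name ('mysql'), the log path as mounted in this container, the rotation settings, and the use of MYSQL_ROOT_PASSWORD from the mysql container's environment are all assumptions, not taken from the question:

# /etc/logrotate.d/mysql, evaluated by the cron container's logrotate run.
# The path below is wherever the shared 'log' volume is mounted in this container.
/var/log/mysql/*.log {
    daily
    rotate 7
    missingok
    compress
    delaycompress
    sharedscripts
    postrotate
        # Needs the docker CLI in this container and /var/run/docker.sock mounted;
        # 'mysql' is the assumed name of the database container.
        docker exec mysql sh -c 'mysqladmin -uroot -p"$MYSQL_ROOT_PASSWORD" flush-logs'
    endscript
}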
You can indeed use SIGHUP instead of the FLUSH LOGS statement, as described in the documentation:
https://dev.mysql.com/doc/refman/5.6/en/log-file-maintenance.html
However, it may have undesired side effects, e.g. it also writes a large status report to the error log.
So, as I mentioned in a comment, they developed a lightweight version of SIGHUP, namely SIGUSR1, to fulfill the requirements below:
FR1: When SIGUSR1 is sent to the server, it must flush the error log.
FR2: When SIGUSR1 is sent to the server, it must flush the general log.
FR3: When SIGUSR1 is sent to the server, it must flush the slow query log.
FR4: SIGUSR1 must not send MySQL status report.
(Currently, when SIGHUP is sent to the server, a large report of information, the status report, is printed to stdout.)
FR5: The server must not fail when SIGUSR1 is sent, even though the slow log is not enabled.
FR6: The server must not fail when SIGUSR1 is sent, even though slow log output is set to a table (log_output).
FR7: The server must not fail when SIGUSR1 is sent, even though the general log is set to OFF.
NFR1: SIGHUP must be indistinguishable from how SIGHUP behaved before.
Unfortunately, this signal handling is only available in MySQL 8.0 or above.
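For completeness, on MySQL 8.0+ the signal can be delivered to the server in the container like this; the container name 'mysql' is an assumption, and mysqld being PID 1 holds for the official image because its entrypoint execs the server:

# Send SIGUSR1 to the container's main process (mysqld) to flush the logs:
docker kill --signal=SIGUSR1 mysql

# Outside Docker, the equivalent against a local server would be roughly:
# kill -USR1 "$(cat /var/run/mysqld/mysqld.pid)"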
There has been an issue with Google Compute Engine instances created with containers: the startup script runs up to 10-20 times.
Case 1:
The container is built with Docker, then pushed to the online registry, and then an instance is created with that container. The startup script "Test.py" is passed in at container creation instead of being baked into the Dockerfile directly. The following command is used to create an instance with a container and arguments:
gcloud compute instances create-with-container busybox-vm --container-image gcr.io/example-project-id/ttime2 --container-command python --container-arg="/Test.py" --container-arg="Args"
Case 2:
Including the startup script (Test.py) and corresponding arguments within the docker image itself, and then instantiating an instance also resulted in multiple runs of the script.
Notes:
The startup script is run as a sub-process so the standard output can easily be sent to a remote server, where it can be monitored for debugging purposes.
The startup script is executed multiple times before the first execution is finished (as the end of the script kills the instance successfully).
When running this docker build locally, it performs as expected with just one code execution.
I've experienced this multiple startup-script execution with several different Docker images.
Only one instance is created.
A solution, it seems, would be to check for subprocesses as they spawn and kill any duplicates; I'm just not sure how I'd identify them.
Edit: If you have some general tips on tackling problems with containers that are "crashlooping", I'd like to accept that as an answer. I was personally able to add the flag --container-restart-policy="never" to the above gcloud command to get a large variety of tests to work (not sure why), so I'm done with this issue for now.
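For reference, this is roughly what the Case 1 command looks like with that flag added (everything else is unchanged from the command above):

gcloud compute instances create-with-container busybox-vm \
  --container-image gcr.io/example-project-id/ttime2 \
  --container-command python \
  --container-arg="/Test.py" \
  --container-arg="Args" \
  --container-restart-policy="never"    # don't restart the container when it exits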
This could be one of many reasons. A good way to diagnose would be to:
Change the command to --container-command "sleep 50000" and create a VM.
SSH into the VM and run sudo -i.
Run docker ps -a until you see a container of yours appear.
Get its container ID and run docker exec -it <ID> bash (change to sh if necessary). Your container should be sleeping; this will let you get inside it.
Execute Test.py from within your container to see if there's an error.
This requires your image to have sleep.
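Put together, the flow might look roughly like this (the VM name and image are from the question above; the zone, the container ID, and the split of "sleep 50000" into command and argument are assumptions):

gcloud compute instances create-with-container busybox-vm \
  --container-image gcr.io/example-project-id/ttime2 \
  --container-command sleep --container-arg="50000"

gcloud compute ssh busybox-vm          # then, on the VM:
sudo -i
docker ps -a                           # repeat until your container appears
docker exec -it <ID> sh                # or bash, if the image includes it
python /Test.py                        # run the script by hand and check for errors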
When I run mysqld directly, it prints a whole lot of information about what it's doing.
As I understand it, this is not the correct way to run a MySQL server; you should use service mysql start instead (on older servers at least).
Any searches for "mysqld log" come up with logs for queries, but I want to know what the program is doing as it starts. (I'm trying to set up MariaDB 10.1.14 with Galera replication.)
I want to be able to run service mysql start and then watch what's happening in the background.
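A few places to watch the server's own startup messages, depending on how the distribution configures logging; the paths below are typical defaults, not guaranteed, so treat them as assumptions to verify:

# Ask the server where its error log goes (an empty value means stderr/syslog):
mysql -e "SHOW VARIABLES LIKE 'log_error';"

# Common Debian/Ubuntu locations; follow whichever exists on your system:
sudo tail -f /var/log/mysql/error.log
sudo tail -f /var/log/syslog | grep -iE 'mysqld|mariadb'

# On systemd-based systems, the service's output can be followed with:
sudo journalctl -u mysql -f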
I am trying to set up a Django website on EC2; basically I want to start the MySQL server and uWSGI after a reboot.
In order to make MySQL start on reboot, I did:
sudo cp /opt/mysql/server-5.6/support-files/mysql.server /etc/init.d/
sudo update-rc.d mysql.server defaults
In order to make Uwsgi start on reboot, I created a file /etc/init/uwsgi.conf:
description "ubuntu uwsgi instance"
start on runlevel [2345]
stop on runlevel [06]
exec uwsgi --ini /home/ubuntu/uwsgi.ini
However, the problem is that I need MySQL to start first; right now it looks like uWSGI starts first and tries to connect to MySQL, which fails, and MySQL never gets started.
Could anyone help me on how to solve this issue?
Thanks in advance
When your computer starts up, it doesn't run the init.d scripts directly. Instead, depending on what's called the "runlevel", it runs the scripts in /etc/rcN.d (where N is the runlevel). You can determine the current runlevel with the runlevel command; mine returns 2 in normal operation. That means that when the computer started up, it ran the scripts in /etc/rc2.d. The contents of rc2.d are just symlinks to scripts in /etc/init.d, named according to whether they should be started or stopped, and the order they should be run.
Use the runlevel command to find out what runlevel your computer is at (probably 2), then look in /etc/rc2.d for a link named something like S20uwsgi, which will be a symlink to /etc/init.d/uwsgi, and rename it so that it sorts after the mysql entry while keeping the S prefix (for example S99uwsgi); that will cause it to run last.
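A sketch of what that looks like in practice, assuming (as this answer does) that uwsgi is managed by an init.d script with an S-link in rc2.d; the exact link names below are made up, so check the output of ls first:

runlevel                                    # prints something like "N 2"
ls -l /etc/rc2.d | grep -Ei 'mysql|uwsgi'   # note the current S numbers
# Keep the S prefix but give uwsgi a higher number than mysql so it starts later:
sudo mv /etc/rc2.d/S20uwsgi /etc/rc2.d/S99uwsgi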
There's more information about init.d scripts and runlevels at https://www.linux.com/news/enterprise/systems-management/8116-an-introduction-to-services-runlevels-and-rcd-scripts
Even if you start MySQL before uWSGI, you're not assured it will be available by the time uWSGI is handling requests.
At startup MySQL does some checks on the database, loads InnoDB indexes, recovers from the transaction log, or it may even hang.
You shouldn't rely on that approach.
Instead, add application logic that ensures you correctly handle unavailability of the database, e.g. retrying or showing an error page asking the user to retry.
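One simple way to apply the retry idea at startup time (complementing, not replacing, error handling inside the application) is to poll MySQL before launching uWSGI. A minimal sketch, assuming mysqladmin is installed and using the uwsgi.ini path from the question above:

#!/bin/sh
# Wait until the MySQL server answers, then hand over to uWSGI.
until mysqladmin ping --silent; do
    echo "MySQL is not up yet, retrying in 2 seconds..."
    sleep 2
done
exec uwsgi --ini /home/ubuntu/uwsgi.ini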
I have a Jenkins (Hudson) server setup that runs tests on a variety of slave machines. What I want to do is reconfigure the slave (using remote APIs), reboot the slave so that the changes take effect, then continue with the rest of the test. There are two hurdles that I've encountered so far:
Once a Jenkins job begins to run on the slave, the slave cannot go down or break its network connection to the server, otherwise Jenkins immediately fails the test. Normally, I would say this is completely desirable behavior. But in this case, I would like Jenkins to accept the disruption until the slave comes back online and Jenkins can reconnect to it, or the slave reconnects to Jenkins.
In a job that has been attached to the slave, I need to run some build tasks on the Jenkins master - not on the slave.
Is this possible? So far, I haven't found a way to do this using Jenkins or any of its plugins.
EDIT - Further Explanation
I really, really like the Jenkins slave architecture. Combined with the plugins already available, it makes it very easy to get jobs to a slave, run, and the results pulled back. And the ability to pick any matching slave allows for automatic job/test distribution.
In our situation, we use virtualized (VMware) slave machines. It was easy enough to write a script that would cause Jenkins to use VMware PowerCLI to start the VM up when it needed to run on a slave, then ship the job to it and pull the results back. All good.
EXCEPT part of the setup of each test is to slightly reconfigure the virtual machine in some fashion: disable UAC, log on as a different user, have a different driver installed, etc. Each of these changes requires that the test VM/slave be rebooted before the changes take effect. Although I can write slave on-demand scripts (Launch Method=Launch slave via execution of command on the master) that handle this reconfiguration and restart, it has to be done BEFORE the job is run. That's where the problem occurs: I cannot configure the slave that early, because the type of configuration change depends on the job being run, which occurs only after the slave is started.
Possible Solutions
1) Use multiple slave instances on a single VM. This wouldn't work - several of the configurations are mutually exclusive, but Jenkins doesn't know that. So it would try to start one slave configuration for one job, another slave for a different job - and both slaves would be on the same VM. Locks on the jobs don't prevent this since slave starting isn't part of the job.
2) (Optimal) A build step that allows a job to know that its slave connection MIGHT be disrupted. The build step may have to include some options so that Jenkins knows how to reconnect the slave (will the slave reconnect automatically, will Jenkins have to run a script, will simple SSH suffice). The build step would handle the disconnect of the slave, ignore the disconnect that would usually fail the job, and then perform the reconnect. Once the slave is back up and running, the next build step can occur. Perhaps a timeout could fail the job if the slave can't be reconnected within a certain amount of time.
Current Solution (less than optimal)
Right now, I can't use the slave function of Jenkins. Instead, I use a series of build steps - run on the master - that use Windows and PowerShell scripts to power on the VM, make the configurations, and restart it. The VM has a SSH server running on it and I use that to upload test files to the test VM, then remote execute them. Then download the results back to Jenkins for handling by the job. This solution is functional - but a lot more work than the typical Jenkins slave approach. Also, the scripts are targeted towards a single VM; I can't easily use a pool of slaves.
Not sure if this will work for you, but you might try making the Jenkins agent node programmatically tell the master node that it's offline.
I had a situation where I needed to make a Jenkins job that performs these steps (all while running on the master node):
revert the Jenkins agent node VM to a powered-off snapshot
tell the master that the agent node is disconnected (since the master does not seem to automatically notice the agent is down, whenever I revert or hard power off my VMs)
power the agent node VM back on
as a "Post-build action", launch a separate job restricted to run on the agent node VM
I perform the agent disconnect step with a curl POST request, but there might be a cleaner way to do it:
curl -d "offlineMessage=&json=%7B%22offlineMessage%22%3A+%22%22%7D&Submit=Yes" http://JENKINS_HOST/computer/THE_NODE_TO_DISCONNECT/doDisconnect
Then when I boot the agent node, the agent launches and automatically connects, and the master notices the agent is back online (and will then send it jobs).
I was also able to toggle a node's availability on and off with this command (using 'toggleOffline' instead of 'doDisconnect'):
curl -d "offlineMessage=back_in_a_moment&json=%7B%22offlineMessage%22%3A+%22back_in_a_moment%22%7D&Submit=Mark+this+node+temporarily+offline" http://JENKINS_HOST/computer/NODE_TO_DISCONNECT/toggleOffline
(Running the same command again puts the node status back to normal.)
The above may not apply to you, since it sounds like you want to do everything from one Jenkins job running on the agent node. And I'm not sure what happens if an agent node disconnects or marks itself offline in the middle of running a job. :)
Still, you might poke around in this Remote Access API doc a bit to see what else is possible with this kind of approach.
Very easy. You create a master job that runs on the master; from the master job you call the client job as a build step (it's a new kind of build step and I love it). You need to check the option so that the master job waits for the client job to finish. Then you can run your script to reconfigure your client and run the second test on the client.
An even better strategy is to have two nodes running on your slave machines. You need to configure two nodes in Jenkins. I used that strategy successfully with a Unix slave. The reason was that I needed different environment variables to be set up, and I didn't want to push that into the jobs. I used SSH clients, so I don't know if it is possible with different client types. Then you might be able to run both tests at the same time, or you can chain the jobs or use the master strategy mentioned above.