I would like to view SQS messages before they get picked up by sqsd on my Elastic Beanstalk instance. Is there a way, once SSH'ed into the instance, to stop and start sqsd as a service altogether?
For debugging purposes you can stop sqsd by running sudo service aws-sqsd stop on the instance, and check its status with sudo service aws-sqsd status. Note that this is not recommended on a production environment, but it is fine while debugging.
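For reference, a minimal sketch of the whole debugging cycle on the instance; the stop and status commands come straight from the answer above, and the same init script typically also accepts start (and restart), though the exact set may vary by platform version:
# pause message consumption so you can inspect the queue
sudo service aws-sqsd stop
# confirm the daemon is no longer running
sudo service aws-sqsd status
# resume consumption once you are done debugging
sudo service aws-sqsd start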
I am getting a connection timeout when running the installation command on the bootstrap node; it says the Kubernetes API call timed out.
Are there any networking configuration suggestions I might be missing?
This is obviously very hard to debug without having access to your environment. Some tips to debug the OKD installation:
Before starting the installation, make sure your environment meets all the prerequisites. Often the problem lies with a faulty DNS / DHCP / networking setup. If in doubt, deploy a separate VM into the same network to check that everything works as expected (a quick DNS check is sketched below).
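For example, a DNS sanity check from that helper VM might look like the following; the cluster name and base domain are placeholders, and the exact records required depend on your OKD version (see the prerequisites documentation):
# the API and wildcard apps records must resolve to your load balancer / VIPs
dig +short api.mycluster.example.com
dig +short api-int.mycluster.example.com
dig +short test.apps.mycluster.example.com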
The bootstrap node and the Master Nodes are deployed with the SSH key you specify, so in vCenter, get the IP of the machines that are already deployed and use SSH to connect to them. Once on the machine, use sudo crictl ps and sudo crictl logs <container-id> to review the logs for the running containers, focusing on the components:
kube-apiserver
etcd
machine-controller
In your case, the API is not coming up, so reviewing the logs of the above components will likely show the root cause (see the sketch below).
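A minimal sketch of that debugging session, assuming the default core user and a placeholder bootstrap IP:
# connect with the SSH key you provided in the install config
ssh core@<bootstrap-ip>
# list the containers that are running on the node
sudo crictl ps
# inspect the logs of a specific container, e.g. the kube-apiserver
sudo crictl logs <container-id>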
I have a service started by Upstart or systemd and I want to hot-update it. After the update, the new service process should be running and the old process started by Upstart or systemd should be killed. Finally, the new process should be monitored by Upstart or systemd just like the old one was.
You didn't say what your service does. I am answering for the easy case, a network service with short-lived connections, such as an HTTP server.
Have systemd own the socket. Search for "systemd socket activation" (there are several Stack Overflow answers on it); I describe how to do it in Go here: https://www.darkcoding.net/software/systemd-socket-activation-in-go/. A minimal unit sketch follows after these steps.
While the service is running, replace the binary on disk with the new one.
systemctl restart <myservice>
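A minimal sketch of the unit pair for socket activation, assuming a hypothetical service called myservice listening on port 8080; the names, port and paths are placeholders:
# /etc/systemd/system/myservice.socket
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# /etc/systemd/system/myservice.service
[Service]
# systemd opens and keeps the listening socket, then hands it to the process,
# so a restart swaps the binary without dropping the listening socket
ExecStart=/usr/local/bin/myservice

[Install]
WantedBy=multi-user.target
With that in place, replacing the binary and running systemctl restart leaves the listening socket open in systemd while the process is swapped; incoming connections simply queue until the new process accepts them.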
In practice there will often also be some state in your service that you need to persist on shutdown and load on startup.
The service shutting down might need to wait briefly until all of its in-flight requests complete.
For the more difficult case with many long-lived TCP connections (such as an XMPP server), it's no longer about systemd: your old and new processes have to coordinate to pass the sockets from one to the other. I describe it in Online upgrades in Go, but it's a lot more work.
I am learning the Laravel 5.4 "queues" chapter and have a question about the queue:restart command. When I test it on my Windows 10 machine, the command seems to just kill the queue worker, not restart it. Does queue:restart not work on Windows, or does it only kill the worker rather than restart it? Thanks.
The queue:restart command never actually restarts a worker, it just tells it to shut down. It is supposed to be combined with a process manager like Supervisor that will restart the process when it quits. The same thing happens when queue:work hits the configured memory limit.
To keep the queue:work process running permanently in the background, you should use a process monitor such as Supervisor to ensure that the queue worker does not stop running.
Source: https://laravel.com/docs/5.4/queues#running-the-queue-worker
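For reference, a minimal Supervisor program entry might look like the following; the paths, options and process count are illustrative and need to be adapted to your project:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/myapp/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/myapp/storage/logs/worker.log
With autorestart=true, Supervisor starts the worker again whenever queue:restart (or a memory limit) makes it exit, which is exactly the behaviour the command is designed around.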
I can't restart my OpenShift application (Node.js + PostgreSQL). I am on the free plan. The error in OpenShift Online is:
Could not stop Postgres
Failed to execute: 'control restart' for /var/lib/openshift/{my_id}/postgresql
I found some advice saying I only need to wait and tech support will resolve it without any action on my part, but the service has already been down for three days. I also asked on the Red Hat Bugzilla and my issue was assigned to an admin, but there has been no response.
I've solved it. I thought users did not have permission to kill processes from the OpenShift shell, but they do:
ps aux|grep postgres
kill <pid>
then restart the app from the OpenShift web console.
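A slightly more explicit version of the same sequence; the kill -9 fallback is an assumption for the case where the process ignores the default signal, and <pid> must be replaced with the value printed by ps:
# list the postgres processes owned by the gear; the [p] keeps grep from matching itself
ps aux | grep [p]ostgres
# terminate the stuck server process
kill <pid>
# if it refuses to exit, force it
kill -9 <pid>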
I'm using Google Compute Engine and have an auto-scaling instance group that spins up new VMs as needed, all sitting behind a load balancer. I'm also using Google Cloud SQL in the same project, and the VMs need to connect to the Cloud SQL instance.
Since the IPs of the VMs are dynamic, I can't just plug them into the SQL access config, so I followed the Cloud SQL Proxy setup along with the notes from this very similar question:
How to connect from a pool of Google Compute Engine instances to Cloud SQL DB in the same project?
I can now log into a single test VM and run:
./cloud_sql_proxy -instances=PROJ_NAME:TIMEZONE:SQL_NAME=tcp:3306
and everything works great and that VM connects to the cloud SQL instance.
The next step is where I'm having issues. How can I set up the VM so that it automatically starts the proxy, whether it's built from an instance template or just restarted? The obvious answer seems to be to put the above in the VM's startup script, but that doesn't seem to be working. With my single test VM I can SSH in and manually run the cloud_sql_proxy command and everything works, but if I include the below in my startup script and restart the VM, it doesn't connect:
#! /bin/bash
./cloud_sql_proxy -instances=PROJ_NAME:TIMEZONE:SQL_NAME=tcp:3306
Any suggestions? I seriously can't believe it's this hard to connect to Cloud SQL from a VM in the same project...
The startup script you have shown doesn't include the download step for cloud_sql_proxy.
You need to download the proxy first and then launch it, so your startup script should look like this:
sudo wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64
sudo mv cloud_sql_proxy.linux.amd64 cloud_sql_proxy
sudo chmod +x cloud_sql_proxy
sudo ./cloud_sql_proxy -instances=PROJ_NAME:TIMEZONE:SQL_NAME=tcp:3306 &
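For the script to run automatically, it also has to be attached as startup-script metadata on the instance (or specified when creating the instance template). A sketch with gcloud; the instance and file names are placeholders:
# save the script above as startup.sh, then attach it to an existing VM
gcloud compute instances add-metadata my-vm --metadata-from-file startup-script=startup.sh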
I chose crontab to run cloud_sql_proxy automatically when the VM starts up:
crontab -e
and add
@reboot cloud_sql_proxy blah blah.
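A concrete version of that crontab entry, assuming the proxy binary was placed at /usr/local/bin/cloud_sql_proxy and reusing the connection string from above (both are placeholders for your own path and instance name):
@reboot /usr/local/bin/cloud_sql_proxy -instances=PROJ_NAME:TIMEZONE:SQL_NAME=tcp:3306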