I need to use supervisord to spin up a daphne server, which needs to connect to the database. However, the credentials are in /opt/python/current/env. I tried running source /opt/python/current/env and then reloading/restarting supervisord, but it still isn't picking up the environment variables. Any ideas?
Instead of using source directly in the supervisord command, I put it into a bash script that also activates the virtual env, and it works fine now.
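For reference, a minimal sketch of such a wrapper script, assuming the env file holds plain KEY=value lines and guessing at the virtualenv path and ASGI module (adjust both to your setup):

#!/bin/bash
# run_daphne.sh: wrapper that supervisord's command= points at.
set -a                                    # auto-export everything sourced below
source /opt/python/current/env            # load the DB credentials
set +a
source /opt/python/run/venv/bin/activate  # activate the virtualenv (path is an assumption)
exec daphne -b 0.0.0.0 -p 8000 myproject.asgi:application  # myproject is illustrative

Then point supervisord at it with command=/home/ubuntu/run_daphne.sh; the exec keeps daphne as the process supervisord actually supervises.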
I am trying to use HSQLDB in server mode, but cannot get the ACL to work.
I started a server (creating a fresh database) with this command line:
java -cp $CLASSPATH:/usr/share/java/hsqldb.jar org.hsqldb.server.Server --database.0 file:~/workspaces/foo/db/fooserver --dbname.0 fooserver
I can connect to it with HSQL Database Manager and issue a SHUTDOWN.
Next, I created an ACL file in ~/workspaces/foo/db/fooserver.acl with the following content:
deny 127.0.0.1
I successfully tested it with java -cp $CLASSPATH:/usr/share/java/hsqldb.jar org.hsqldb.server.ServerAcl ~/workspaces/foo/db/fooserver.acl, and it tells me 127.0.0.1 is denied access.
Now I created ~/workspaces/foo/db/server.properties (as there was no server.properties file yet) with the following content:
server.acl=fooserver.acl
However, when I now launch the server, I can still connect to the database.
HSQLDB version is 2.4.1, as shipped with Ubuntu 18.04.
Other things I have tried:
This mailing list post suggests using server.acl_filepath instead of server.acl. Behavior is still the same.
I have tried adding either property to fooserver.properties. Still no effect, and the property gets deleted when I stop the server.
What am I missing?
First of all, if you use a server.properties file that is not located in the directory where you execute the java command, you must specify the path to that properties file.
In the same scenario, in the server.properties file you need to use the same path that you successfully tested with ServerAcl. So it should be:
server.acl=~/workspaces/foo/db/fooserver.acl
It would be easier to specify the properties and acl files if you issue the java command from the directory that contains both files. In that case you can use a short filename instead of the full path.
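For example, a hedged sketch of that layout, reusing the paths from the question:

cd ~/workspaces/foo/db
printf 'server.acl=fooserver.acl\n' > server.properties
java -cp $CLASSPATH:/usr/share/java/hsqldb.jar org.hsqldb.server.Server --database.0 file:fooserver --dbname.0 fooserver

Because the server is launched from the directory containing server.properties and fooserver.acl, the short file names resolve correctly.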
See the Guide http://hsqldb.org/doc/2.0/guide/listeners-chapt.html
I am trying to use a shutdown script to create a new instance from within the instance that is shutting down.
The script has four tasks:
1. create an empty file
2. get the name of the new instance to be created
3. generate a name for the next new instance to be spawned
4. create a new instance from within this instance with the generated name
Here is the script:
#!/bin/bash
touch /home/ubuntu/newfile.txt
new_instance_name=$(curl http://metadata.google.internal/computeMetadata/v1/instance/attributes/next_instance_name -H "Metadata-Flavor: Google")
next_instance_name="instance-"$(printf "%04d" $((10#${new_instance_name: -4}+1)))
gcloud beta compute --project=xxxxxxxxx instances create $new_instance_name --zone=us-central1-c --machine-type=f1-micro --subnet=default --network-tier=PREMIUM --metadata=next_instance_name=$next_instance_name --maintenance-policy=MIGRATE --service-account=XXXXXXXX-compute@developer.gserviceaccount.com --scopes=https://www.googleapis.com/auth/cloud-platform --image=image-1 --image-project=xxxxxxxx --boot-disk-size=10GB --boot-disk-type=pd-standard --boot-disk-device-name=$new_instance_name
The script is made executable using chmod +x, and the file name of the script is /home/ubuntu/shutdown_script.sh. The metadata shutdown-script for this instance is also set to /home/ubuntu/shutdown_script.sh.
All parts of the script run fine when I run it manually from within the instance: a new file is created, and a new instance is created when the current instance shuts down.
But when the script is invoked automatically when I stop the instance via the API, it only creates the file from the touch command; no new instance is created as before.
Am I doing something wrong here?
So I was able to reproduce the behavior you described. I ran a bash script similar to the one you provided as a shutdown script, and it would only create the empty file called "newfile.txt".
I then decided to capture the output of the gcloud command to see what was happening. I had to tweak the bash script to fit my project. Here is the bash script I ran, redirecting the output to a file:
#!/bin/bash
touch /home/ubuntu/newfile.txt
gcloud beta compute --project=xxx instances create instance-6 --zone=us-central1-c --machine-type=f1-micro --subnet=default --maintenance-policy=MIGRATE --service-account=xxxx-compute@developer.gserviceaccount.com --scopes=https://www.googleapis.com/auth/cloud-platform --boot-disk-size=10GB --boot-disk-type=pd-standard --boot-disk-device-name=instance-6 > /var/output.txt 2>&1
The output I received was the following:
ERROR: (gcloud.beta.compute.instances.create) Could not fetch resource: - Insufficient Permission
This means that my default service account did not have the appropriate scopes to create the VM instance.
I then stopped my VM instance and edited the scopes to give the service account full access as described here. Once I changed the scopes, I started the VM instance back up and then stopped it again. At this point, it successfully created the VM instance called "instance-6". I would not suggest giving the default service account full access; I would suggest specifying which scopes it should have, but make sure it has full access to Compute Engine if you want the shutdown script to work.
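For reference, a hedged sketch of making the same scope change from the CLI (the instance name is illustrative, and the instance must be stopped first):

gcloud compute instances stop my-instance --zone=us-central1-c
gcloud compute instances set-service-account my-instance --zone=us-central1-c --scopes=https://www.googleapis.com/auth/cloud-platform
gcloud compute instances start my-instance --zone=us-central1-c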
If the shutdown script works when you stop the VM instance using the command:
$ sudo shutdown -h now
And does not work when stopping the VM instance from the Cloud Console by pressing the “Stop” button, then I suspect this behavior is to be expected.
A shutdown script has a limited period of time to run when you stop a VM instance; however, this limit does not apply if you request the shutdown using the “sudo shutdown” command. You can read more about this behavior here.
If you would like to know more about the shutdown period, you can read about it here.
I had already given my instance the proper scope by granting the service account full access (which is a bad practice).
But the actual problem was solved when I reinstalled google-cloud-sdk using
sudo apt-get install google-cloud-sdk
Before reinstalling, when I ran those scripts manually after sshing into the instance, the gcloud command was being resolved from the preinstalled /snap/bin/gcloud. But when the script runs as a startup or shutdown script, for some reason it cannot access the preinstalled /snap/bin/ directory. After I reinstalled the Google Cloud SDK using apt-get, the gcloud command was resolved from /usr/bin/gcloud, which I think is accessible by the startup or shutdown script.
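A hedged workaround along the same lines is to pin gcloud to an absolute path inside the script instead of relying on PATH (the paths below are assumptions; verify yours with which gcloud):

#!/bin/bash
# Pin gcloud so the shutdown script does not depend on the minimal
# PATH it inherits (which appears to omit /snap/bin).
GCLOUD=/usr/bin/gcloud                            # assumption: apt-get install location
[ -x "$GCLOUD" ] || GCLOUD=/snap/bin/gcloud       # fall back to the snap binary
"$GCLOUD" --version > /var/gcloud_check.txt 2>&1  # confirm it resolves at shutdown time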
I am running my web server on Elastic Beanstalk and using Papertrail for logging. I am using the official .ebextensions script to set up Papertrail during deployment, but I have a problem. I use environment variables as part of the hostname that remote_syslog sends to Papertrail, and while this works fine during deployment, when the 01_set_logger_hostname container command is triggered, I run into problems whenever I change environment variables through the environment's configuration: an eb config call only restarts the application server; it does not re-run any of the deployment scripts, including the ebextensions container commands.
"/tmp/set-logger-hostname.sh":
mode: "00555"
owner: root
group: root
encoding: plain
content: |
#!/bin/bash
logger_config="/etc/log_files.yml"
appname=`{ "Ref" : "AWSEBEnvironmentName" }`
instid=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`
myhostname=${SOME_VARIABLE}_${appname}_${instid}
if [ -f $logger_config ]; then
# Sub the hostname
sed "s/hostname:.*/hostname: $myhostname/" -i $logger_config
fi
As you can see, since my hostname depends on ${SOME_VARIABLE}, I need to refresh the hostname whenever ${SOME_VARIABLE} is modified following eb config.
Is there a way to trigger a script to be run whenever an eb config command is run, so that I can not only restart my web application but also reconfigure and restart remote_syslog with the updated hostname?
This is now possible on Amazon Linux 2 based environments with configuration deployment platform hooks.
For example, you can create a shell script .platform/confighooks/predeploy/predeploy.sh that will run on every configuration change. Make sure the file is marked executable in git, or Elastic Beanstalk will give you a permission denied error.
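A minimal sketch of such a hook, assuming the set-logger-hostname.sh script from the question is still written to /tmp and that remote_syslog runs under systemd (both assumptions to verify):

#!/bin/bash
# .platform/confighooks/predeploy/predeploy.sh
# Re-apply the Papertrail hostname on every configuration change.
/tmp/set-logger-hostname.sh      # the script laid down by the .ebextensions file above
systemctl restart remote_syslog  # assumption: remote_syslog runs as a systemd service

To mark the hook executable in git: git update-index --chmod=+x .platform/confighooks/predeploy/predeploy.sh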
I have successfully deployed my Laravel 5 app to AWS EC2. I have also created a MySQL database with AWS RDS and associated it with my app instance.
Now I want to set my environment variables so the app uses Homestead's default values on my local machine in development, and my AWS database when deployed in production.
From here I've made a major edit to my original question to reflect what I've learned since asking it
The classic .env in a Laravel project for local development looks roughly like this:
APP_ENV=local
APP_DEBUG=true
APP_KEY=BF3nmfzXJ2T6XU8EVkyHtULCtwnakK5k (Note, not a real key)
DB_HOST=localhost
DB_DATABASE=homestead
DB_USERNAME=homestead
DB_PASSWORD=secret
CACHE_DRIVER=file
SESSION_DRIVER=file
QUEUE_DRIVER=sync
MAIL_DRIVER=smtp
MAIL_HOST=mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=null
MAIL_PASSWORD=null
For production, I've finally understood that I simply create a new .env file with my production variables. When using AWS, my .env file would roughly look like this:
APP_ENV=production
APP_DEBUG=false
APP_KEY=BF3nmfzXJ2T6XU8EVkyHtULCtwnakK5k (Note, not a real key)
DB_HOST=aaxxxxxxxxxxxxx.cyxxxxxxxxxx.eu-central-1.rds.amazonaws.com:3306
DB_DATABASE=MyAppsDatabaseName
DB_USERNAME=MyAWSRDSUserName
DB_PASSWORD=NotARealPassword
CACHE_DRIVER=file
SESSION_DRIVER=file
QUEUE_DRIVER=sync
MAIL_DRIVER=smtp
MAIL_HOST=mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=null
MAIL_PASSWORD=null
My question/problem now
I use the AWS EB CLI to deploy my app from git. But how do I deploy my production .env file without having to push it to git first?
Russ Matney above gave the right answer, so he gets the checkmark. I'll write my own answer here to add details on how I made things work. I assume you already have your database set up and have all the credentials you need.
1. Go to your Elastic Beanstalk dashboard
2. Next, go to your software configuration
3. Add your production environment variables here. Remember to set the document root to /public, and add :3306 at the end of your database endpoint to avoid a PDOException error.
4. Next, SSH into your app's EB instance. See details here, or try the following:
$ ssh -i path/to/your/key/pair/pem/file.pem ec2-user@ec2-11-11-11-111.eu-central-1.compute.amazonaws.com
Note that ec2-11-11-11-111.eu-central-1.compute.amazonaws.com is your app's public DNS; you'll find yours in the EC2 console.
5. cd to your app: $ cd /var/app/current
6. Give read/write access to your storage folder, or the app can't write to the logs folder, which results in an error when running the migrations. To give access: $ sudo chmod -R ugo+rw storage
7. Finally! Run your migrations and any other artisan commands you please: $ php artisan migrate
You could create a new .env on your ec2 instance and add all the env vars in there. One option would be ssh-ing into the box and creating the file via vi or cat. Or you could write a script to remotely pull the .env in from an external location.
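For example, a hedged one-liner for the cat option, piping a local production env file over ssh (the key path, DNS, app path, and .env.production file name are all placeholders):

ssh -i path/to/your/key/pair/pem/file.pem ec2-user@ec2-11-11-11-111.eu-central-1.compute.amazonaws.com 'cat > /var/app/current/.env' < .env.production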
You could also ssh into the box and export all of your env vars directly, e.g. export APP_ENV=production (assuming that's the right command for your OS).
Adding env vars to your environment depends on the OS your EC2 instance is running, so the solution will depend on that. EC2 has a concept of 'tags' which might be useful, but the docs show they limit the number of tags to 10, so you may have to do it manually, per EC2 instance :/
See here for one method that uses tags to pull in and set env vars (non-Laravel specific).
I just went through this yesterday while getting Laravel running on Elastic Beanstalk; the solution was clean. You can actually set the env vars directly via the AWS console (EB app/environment -> Configuration -> Software Configuration -> Environment Properties).
Update:
The key concept to understand is that Laravel just uses phpdotenv to dump vars from the .env file into PHP's global $_ENV, whereas any env vars that already exist are automatically included in $_ENV when PHP starts the server (docs). So the .env file itself is unnecessary, really just a dev convenience (unless I've just been spoiled by Elastic Beanstalk so far).
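You can verify this yourself: a variable exported before PHP starts is visible exactly as if it had come from .env (a hedged sketch, using the placeholder endpoint from above):

export DB_HOST=aaxxxxxxxxxxxxx.cyxxxxxxxxxx.eu-central-1.rds.amazonaws.com
php -r 'var_dump(getenv("DB_HOST"));'  # the same value Laravel's env('DB_HOST') would see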
I have tried to set up a simple cron job running on OpenShift, but once I have pushed the file and then logged in to search for it, it does not seem to be there, and there is no log output.
I created an application from: https://github.com/smarterclayton/openshift-go-cart
I then installed the cron 1.4 cartridge.
I created a file at .openshift/cron/minutely/awesome_job and set it as 755
I added the following contents:
#! /bin/bash
date > $OPENSHIFT_LOG_DIR/last_date_cron_ran
I pushed to the server
Logged in via ssh and ran find /var/lib/openshift/53760892e0b8cdb5e9000b22 -name awesome_job, which finds nothing.
Any ideas? I am at a loss as to why it is not working.
Make sure the execution bit is set on your cron file.
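For example (a hedged snippet; the executable bit must be committed to git, not just set locally, to survive the push):

chmod 755 .openshift/cron/minutely/awesome_job
git update-index --chmod=+x .openshift/cron/minutely/awesome_job
git commit -m "Make cron job executable"
git push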
The issue was not with cron but with the golang cartridge I was using which was removing the .openshift directory.
https://github.com/smarterclayton/openshift-go-cart/issues/10
You should also put a file named "jobs.allow" under your .openshift/cron/minutely/ so that your cron jobs will be executed.
For your ref: https://forums.openshift.com/daily-cron-jobs-not-getting-triggered-automatically
And the reason you cannot find your awesome_job via ssh login is that it lives under /var/lib/openshift/53760892e0b8cdb5e9000b22/app-root/runtime/repo/.openshift, and the find command does not search files under folders whose names are prefixed with a dot.
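For instance, pointing find directly at the hidden directory (reusing the path above) turns the file up once it is actually deployed:

find /var/lib/openshift/53760892e0b8cdb5e9000b22/app-root/runtime/repo/.openshift -name awesome_job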