I'm running a Bitnami MEAN instance and added a few lines to the startup-script metadata entry. I don't think it's running. I tailed the log files per the instructions here: https://cloud.google.com/compute/docs/startupscript after adding some debug statements, and I don't see any of my debug statements in the log. Do startup-script commands not run on a Bitnami VM? Maybe it's overwritten somewhere by another script. I don't see it documented in the Bitnami docs.
I also ran the startup script manually from a terminal:
sudo google_metadata_script_runner --script-type startup
and it worked. So maybe Bitnami overrides the default startup script?
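For reference, a minimal sketch of how the script's output can be checked, assuming a Debian-based image with the GCE guest environment installed (instance name and zone are placeholders):
# On the VM itself: startup script output usually lands in the journal.
sudo journalctl -u google-startup-scripts.service
# Or, from a workstation, read the serial console output:
gcloud compute instances get-serial-port-output my-bitnami-vm --zone us-central1-a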
To check whether the instance is picking up startup scripts, you can do the following:
1. Go to the VM instances page.
2. Click the instance to which you want to add a startup script.
3. Click the Edit button at the top of the page.
4. Under Custom metadata, look for the startup-script key.
If there is a startup-script key, then the instance is loading and executing startup scripts, and the problem lies in how you loaded the script. (A gcloud alternative to these console steps is sketched below.)
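As a sketch, and assuming the gcloud CLI is configured for your project (instance name and zone below are placeholders), the same metadata can be inspected from the command line:
# Print the instance metadata and check for a startup-script key.
gcloud compute instances describe my-bitnami-vm --zone us-central1-a --format="yaml(metadata)"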
Related
I am trying to use a shutdown-script to create a new instance from within the instance that is shutting down.
The script has four tasks:
1. creates an empty file
2. gets the name of the new instance to be created from instance metadata
3. generates a name for the next new instance to be spawned
4. creates a new instance from within this instance, using the name fetched in step 2 and passing along the name generated in step 3
Here is the script:
#!/bin/bash
# Marker file to confirm the shutdown script actually ran.
touch /home/ubuntu/newfile.txt
# Read the name for the new instance from this instance's custom metadata.
new_instance_name=$(curl http://metadata.google.internal/computeMetadata/v1/instance/attributes/next_instance_name -H "Metadata-Flavor: Google")
# Derive the name of the instance to be spawned after that one (increment the 4-digit suffix).
next_instance_name="instance-"$(printf "%04d" $((${new_instance_name: -4}+1)))
# Create the new instance, passing the follow-up name along as metadata.
gcloud beta compute --project=xxxxxxxxx instances create $new_instance_name --zone=us-central1-c --machine-type=f1-micro --subnet=default --network-tier=PREMIUM --metadata=next_instance_name=$next_instance_name --maintenance-policy=MIGRATE --service-account=XXXXXXXX-compute@developer.gserviceaccount.com --scopes=https://www.googleapis.com/auth/cloud-platform --image=image-1 --image-project=xxxxxxxx --boot-disk-size=10GB --boot-disk-type=pd-standard --boot-disk-device-name=$new_instance_name
This script is made executable using chmod +x, and the file name of the script is /home/ubuntu/shtudown_script.sh. The shutdown-script metadata for this instance is also set to /home/ubuntu/shtudown_script.sh.
All parts of the script run fine when I run it manually from within the instance, so a new file is created and a new instance is created as well.
But when it is invoked through the API when I stop the instance, it only creates the file via the touch command; no new instance is created as before.
Am I doing something wrong here?
So I was able to reproduce the behavior you described. I ran a bash script similar to the one you have provided as a shutdown script, and it would only create the empty file called "newfile.txt".
I then decided to capture the output of the gcloud command to see what was happening. I had to tweak the bash script to fit my project. Here is the bash script I ran, which copies the output to a file:
#!/bin/bash
touch /home/ubuntu/newfile.txt
# Same create command, with stdout and stderr redirected to a file for inspection.
gcloud beta compute --project=xxx instances create instance-6 --zone=us-central1-c --machine-type=f1-micro --subnet=default --maintenance-policy=MIGRATE --service-account=xxxx-compute@developer.gserviceaccount.com --scopes=https://www.googleapis.com/auth/cloud-platform --boot-disk-size=10GB --boot-disk-type=pd-standard --boot-disk-device-name=instance-6 > /var/output.txt 2>&1
The output I received was the following:
ERROR: (gcloud.beta.compute.instances.create) Could not fetch resource: - Insufficient Permission
This means that my default service account did not have the appropriate scopes to create the VM instance.
I then stopped my VM instance and edited the scopes to give the service account full access, as described here. Once I changed the scopes, I started the VM instance back up and then stopped it again. At this point, it successfully created the VM instance called "instance-6". I would not suggest giving the default service account full access; I would suggest specifying which scopes it should have, but make sure that it has full access to Compute Engine if you want the shutdown script to work.
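If you prefer the command line over the console, the scopes can also be changed there. A minimal sketch, assuming the instance is already stopped (instance name, zone and service account are placeholders; https://www.googleapis.com/auth/compute is the Compute Engine read-write scope):
# Change the scopes of a stopped instance, then start it again.
gcloud compute instances set-service-account my-instance --zone us-central1-c --service-account XXXXXXXX-compute@developer.gserviceaccount.com --scopes https://www.googleapis.com/auth/compute
gcloud compute instances start my-instance --zone us-central1-c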
If the shutdown script works when you stop the VM instance using the command:
$ sudo shutdown -h now
but does not work when stopping the VM instance from the Cloud Console by pressing the “Stop” button, then I suspect this behavior is to be expected.
A shutdown script has a limited period of time to run when you stop a VM instance; however, this limit does not apply if you request the shutdown using the “sudo shutdown” command. You can read more about this behavior here.
If you would like to know more about the shutdown period, you can read about it here.
I had already given my instance the proper scope by granting the service account full access (which is a bad practice).
But the actual problem was solved when I reinstalled google-cloud-sdk using
sudo apt-get install google-cloud-sdk
Before reinstalling, when I ran those scripts by SSHing into the instance, they used the gcloud command from the preinstalled snap directory, /snap/bin/gcloud. But when the same commands run from the startup or shutdown script, for some reason they cannot access the /snap/bin/ directory. After reinstalling the Google Cloud SDK with apt-get, the gcloud command is resolved from /usr/bin/gcloud, which the startup and shutdown scripts can access.
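A quick way to see this for yourself; a sketch, assuming the apt-get install has already been done (the output comments are illustrative):
# Compare what an interactive SSH session resolves with what non-interactive scripts see.
which gcloud                # e.g. /snap/bin/gcloud in the SSH session
ls -l /usr/bin/gcloud       # present after the apt-get install
# A defensive option is to call gcloud by absolute path inside the startup/shutdown script:
/usr/bin/gcloud compute instances list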
How can I set enabled="true" on the datasource in standalone.xml of an OpenShift v3 WildFly container, like below?
<datasource jndi-name="java:jboss/datasources/MySQLDS" enabled="true" use-java-context="true" pool-name="MySQLDS" use-ccm="true">
I set the OPENSHIFT_MYSQL_ENABLED environment variable to "true" but nothing happened.
The reference site for this is the URL below:
https://developer.jboss.org/wiki/DataserviceBuilderOnOpenShiftV3Online
I was dealing with the same problem: the environment variable OPENSHIFT_MYSQL_ENABLED is ignored by the variable substitution process, so I had to activate the data source by hand, and that's what I did:
(I'm going to assume you have the OC tools installed on your system)
1. Log into OC: oc login
2. List all pods and find the WildFly instance: oc get pods
3. Enter the container's remote shell: oc rsh <<pod-name>>
4. Edit the standalone.xml file: vi /wildfly/standalone/configuration/standalone.xml (a non-interactive alternative is sketched after these steps)
5. Search for the word "datasource" by typing /datasource in the vi editor, then press Enter.
6. Find the "enabled" attribute of your data source and update its value from false to true (to do so, press i to switch to vi insert mode).
7. Save the file by pressing Esc, then typing :x.
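If you'd rather not edit the file interactively, steps 4–7 can be approximated with a single sed command. A sketch, assuming the attribute currently reads enabled="false" and the data source is the MySQLDS one from the question (back up the file first):
# Run inside the pod (oc rsh <<pod-name>>); flips enabled to true only on the MySQLDS line.
sed -i '/jndi-name="java:jboss\/datasources\/MySQLDS"/s/enabled="false"/enabled="true"/' /wildfly/standalone/configuration/standalone.xml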
I'm using the OpenShift community edition, so restarting the container is always a hassle: it takes a very long time to find available resources (like memory and CPU) and start the server again. However, you won't have your data source enabled unless you restart the server. That said, you don't need to restart the container; just reload WildFly using the jboss-cli.sh command-line tool. (I didn't try killing the process and starting it again, so if you did try, please comment on whether it worked.)
The following steps should be executed in the container's terminal, using oc rsh <<podname>> or the terminal on the web console.
1. Enter jboss-cli using the command /wildfly/bin/jboss-cli.sh
2. Type connect to log into the WildFly console; you'll be prompted for a user and password. If you do not have credentials, exit this console and create a management user by executing the script /wildfly/bin/add-user.sh
3. Check your data source properties by typing data-source read-resource --name=<<YOUR_DATASOURCE_NAME>> --include-runtime=true --recursive=true and look at the "enabled" property.
4. If your data source is disabled, enable it by entering the command data-source enable --name=<<YOUR_DATASOURCE_NAME>>
5. Reload WildFly by entering the reload command. Once WildFly reboots, you'll need to access jboss-cli.sh and log into the console again.
6. Test your data source connection using the command data-source test-connection-in-pool --name=<<YOUR_DATASOURCE_NAME>>. If the command output is true, your data source is up and running. (A consolidated session sketch follows these steps.)
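Putting the steps above together, a session might look like the following sketch, where MySQLDS stands in for your data source name:
# Inside the pod (oc rsh <<pod-name>>)
/wildfly/bin/jboss-cli.sh
connect
data-source read-resource --name=MySQLDS --include-runtime=true --recursive=true
data-source enable --name=MySQLDS
reload
# reconnect after the reload, then verify:
data-source test-connection-in-pool --name=MySQLDS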
OpenShift v3 is based on Docker containers, so I'm afraid that if you restart the container, this configuration will probably be lost. The most appropriate solution would be to include these actions in the Docker image's startup script, which I don't yet know how to do together with the OpenShift platform.
Hope it helps!
Using the newest Windows 10 IoT Core on a Raspberry Pi, I can replace the (single) headed startup/default app via the PowerShell command "IotStartup add headed", or I can use the AppXManager to achieve the same. Then I reboot, and the new default/startup headed app appears in the AppXManager as it should.
Some time later, my watchdog headless process (background task) decides to reboot (using ShutdownManager.BeginShutdown(ShutdownKind.Restart, new TimeSpan(0));). After the reboot, the DEFAULT/ORIGINAL IoTCoreDefaultApp is sometimes (but not always!) restored as the startup app, and the headed startup app that I explicitly set up is not started.
How can I ensure that IoT Core does not replace my headed startup app with the default one upon reboot? I'd prefer not to delete IoTCoreDefaultApp permanently at this stage of development.
This appears to work now in the newest update release of IoT Core. One can either use the Device Portal or the "iotstartup" command in PowerShell. Also, if required, the scripts that handle the default app startup are now in C:/AppInstall.
Try using the IoT Core Dashboard app to connect to your device, and then use the admin page to select the startup app. HTH
In the web GUI for the default app, you can click a button to start the app selected in the drop-down.
What command is hidden under that button to start the universal app? I would like to do this programmatically from a remote console.
Also, is the source code available for the default app / web GUI?
You can use PowerShell to set an app as startup, run it in headless mode, etc.
Find the details of connecting it with PowerShell here: https://ms-iot.github.io/content/en-US/win10/samples/PowerShell.htm
Also, running in headless mode uses less memory and power and gives a faster boot.
Also, here are the command-line utils for enabling a startup process:
https://ms-iot.github.io/content/en-US/win10/tools/CommandLineUtils.htm
-> Default App Sample: https://github.com/ms-iot/samples/tree/develop/IoTCoreDefaultApp
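As a sketch of what those command-line utils cover (the app name below is a placeholder), setting the headed startup app from a remote PowerShell session might look like:
iotstartup list                     # list installed headed and headless apps
iotstartup add headed MyHeadedApp   # register the chosen app as the headed startup app
shutdown /r /t 0                    # reboot so the change takes effect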
Let me know if you need anything else.
I have an application on the OpenShift free plan with only one gear. I want to change it to a scalable application and make use of all 3 free gears.
I read this blog post from OpenShift and found that there is a way to do it: I should clone my current application into a new, scalable one that will use the 2 remaining gears, and then delete the original application. Thus, the new one will have 3 free gears.
The way the blog suggests doing it is: rhc create-app <clone> --from-app <existing> --scaling
I get the following error: invalid option --from-app
Update
After running the command gem update rhc, I no longer get the error above, but... a new application with the given name was created with the same starting package (Python 2.7) as the existing one, yet all the files are missing. It actually creates a blank application, not a clone of the existing one.
Update 2
Here is the structure of the folder:
-.git
-.openshift
-wsgi
---static
---views
---application
---main.py
-requirements.txt
-setup.py
From what we discussed on IRC, your problem was caused by missing SSH configuration on your Windows machine:
Creating application xxx ... done
Waiting for your DNS name to be available ...done
Setting deployment configuration ... done
No system SSH available. Please use the --ssh option to specify the path to your SSH executable, or install SSH.
I've double-checked it, and it appears to be working without any problem.
The only requirement is to have the latest rhc client and PuTTY or any other SSH client. I'd recommend going through this tutorial once again and double-checking everything to make sure it is all working properly.
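If the "No system SSH available" message comes back, the rhc --ssh option named in that message can be passed explicitly. A hypothetical Windows example (the ssh.exe path is a placeholder for wherever your SSH client lives):
rhc create-app <clone> --from-app <existing> --scaling --ssh "C:\path\to\ssh.exe"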
Make sure you are using the newest version of the rhc gem (gem update rhc) so that you have access to that feature from the command line.
The --from-app option will essentially do an rhc snapshot save & snapshot restore (among other things), as you can see here from the source:
if from_app
  say "Setting deployment configuration ... "
  rest_app.configure({:auto_deploy => from_app.auto_deploy, :keep_deployments => from_app.keep_deployments , :deployment_branch => from_app.deployment_branch, :deployment_type => from_app.deployment_type})
  success 'done'
  snapshot_filename = temporary_snapshot_filename(from_app.name)
  save_snapshot(from_app, snapshot_filename)
  restore_snapshot(rest_app, snapshot_filename)
  File.delete(snapshot_filename) if File.exist?(snapshot_filename)
  paragraph { warn "The application '#{from_app.name}' has aliases set which were not copied. Please configure the aliases of your new application manually." } unless from_app.aliases.empty?
end
However, this will not copy over anything in your $OPENSHIFT_DATA_DIR directory, so if you're storing files there, you'll need to copy them over manually.
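One way to do that manual copy, sketched with hypothetical gear SSH users and hosts (take the real SSH URLs from rhc app show <app>; on the gear, $OPENSHIFT_DATA_DIR corresponds to ~/app-root/data/):
# Pull the data directory down from the old gear, then push it to the new one.
mkdir -p ./data-backup
scp -r OLD_GEAR_UUID@existing-namespace.rhcloud.com:'app-root/data/*' ./data-backup/
scp -r ./data-backup/* NEW_GEAR_UUID@clone-namespace.rhcloud.com:app-root/data/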