Using startup script from .net api - google-compute-engine

I'm trying to launch an instance with a startup script using the Compute Engine .NET API.
Here's the code I'm using:
var start = new Google.Apis.Compute.v1.Data.Metadata.ItemsData();
start.Key = "startup-script";
start.Value = "C:\\Users\\User\\Desktop\\script.sh";
newinst.Metadata = new Google.Apis.Compute.v1.Data.Metadata();
newinst.Metadata.Items = new List<Google.Apis.Compute.v1.Data.Metadata.ItemsData>();
newinst.Metadata.Items.Add(start);
and this is my script:
#! /bin/sh
gsutil cp gs://bucket/file dir
dir is an existing directory in the image. The instance launches, but there's no trace of the command having run.
Further info: from the log output, it looks like a script is found in the metadata and the instance thinks it's running it, but no commands are actually executed.

For anyone interested, what I needed here was to add:
newinst.Metadata.Kind = "compute#metadata";
before executing the InsertRequest, or the instance won't use the script.


Your query returned no results

I am using the code below to create an Elastic Beanstalk environment. It was working fine, but I suddenly started getting an error after restarting Jenkins. The terraform apply command runs from Jenkins; the data block below is from main.tf. For more info: I am installing Terraform using the commands below. I have read this question, but the scenario is different.
sh 'wget https://releases.hashicorp.com/terraform/0.14.5/terraform_0.14.5_linux_amd64.zip'
sh 'unzip terraform_0.14.5_linux_amd64.zip'
sh 'mv ./terraform /usr/bin/'
sh 'terraform init'
sh "terraform apply -auto-approve -var \'env=${ENVNAME}\' -var \'appversion=${APPVERSION}\' -var \'sshkey=${SSHKEY}\'"
data "aws_elastic_beanstalk_solution_stack" "multi_docker" {
most_recent = true
name_regex = "^64bit Amazon Linux (.*) Multi-container Docker (.*)$"
}
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
data.aws_elastic_beanstalk_solution_stack.multi_docker: Refreshing state...
Error: Your query returned no results. Please change your search criteria and try again.
Based on the comments.
The aws_elastic_beanstalk_solution_stack configuration is correct. However, multi-container Docker EB environments are not supported in all regions. The solution was to use a region that supports multi-container Docker.
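A quick way to verify is to list the solution stacks a region actually offers; a minimal sketch using boto3 (the region name is just an example; requires AWS credentials):
import boto3

# List the solution stacks offered in a given region and keep the
# multi-container Docker ones (example region name)
client = boto3.client("elasticbeanstalk", region_name="eu-west-1")
stacks = client.list_available_solution_stacks()["SolutionStacks"]
multi_docker = [s for s in stacks if "Multi-container Docker" in s]
print(multi_docker or "no multi-container Docker stacks in this region")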

MySQL login-path issues with clustercheck script used in xinetd

# default: on
# description: mysqlchk
service mysqlchk
{
# this is a config for xinetd, place it in /etc/xinetd.d/
disable = no
flags = REUSE
socket_type = stream
type = UNLISTED
port = 9200
wait = no
user = root
server = /usr/bin/mysqlclustercheck
log_on_failure += USERID
only_from = 0.0.0.0/0
#
# Passing arguments to clustercheck
# <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>"
# Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local"
# Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local"
# 55-to-56 upgrade: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.extra"
#
# recommended to put the IPs that need
# to connect exclusively (security purposes)
per_source = UNLIMITED
}
It is kind of strange: the script works fine when run manually, but when it runs via /etc/xinetd.d/ it does not work as expected.
In the mysqlclustercheck script, instead of the --user= and --password= syntax, I am using the --login-path= syntax.
The script runs fine when I run it from the command line, but the xinetd status was showing signal 13. After debugging, I found that even a simple command like this is not working:
mysql_config_editor print --all >>/tmp/test.txt
No output is generated when it is run via xinetd (mysqlclustercheck).
Have you tried the following instead of /usr/bin/mysqlclustercheck?
server = /usr/bin/clustercheck
I am wondering if you could test your binary location with the Linux which command.
A long time has passed since this question was asked, but it just came to my attention.
First of all, as mentioned, the Percona cluster check script is called clustercheck, so make sure you are using the correct name and the correct path.
Secondly, since the script runs fine from the command line, it seems to me that the path of the mysql client command is not known to xinetd when it runs the cluster check script.
Since the mysqlclustercheck script, as offered by Percona, uses only the binary name mysql without specifying the absolute path, I suggest you do the following:
Find where mysql client command is located on your system:
ccloud#gal1:~> sudo -i
gal1:~ # which mysql
/usr/local/mysql/bin/mysql
gal1:~ #
Then edit the /usr/bin/mysqlclustercheck script, and in the following line:
MYSQL_CMDLINE="mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \
place the exact path of the mysql client command you found in the previous step.
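With the path from the example session above, the edited line would look something like this (adjust the path to your own system):
MYSQL_CMDLINE="/usr/local/mysql/bin/mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \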
I also see that you are not using MySQL connection credentials for connecting to the MySQL server. The mysqlclustercheck script, as offered by Percona, uses a user/password pair in order to connect to the MySQL server.
So normally, you would execute the script from the command line like:
gal1:~ # /usr/sbin/clustercheck haproxy haproxyMySQLpass
HTTP/1.1 200 OK
Content-Type: text/plain
Where haproxy/haproxyMySQLpass are the MySQL connection user/pass for the HAProxy monitoring user.
Additionally, you should specify them in your script's xinetd settings like:
server = /usr/bin/mysqlclustercheck
server_args = haproxy haproxyMySQLpass
Last but not least, the signal 13 you are getting is because you are trying to write output in a script run by xinetd. If, for example, in your mysqlclustercheck you add a statement like
echo "debug message"
you are probably going to see the broken pipe signal (SIGPIPE, 13 in POSIX).
Finally, I had issues with this script on SLES 12.3, and I eventually managed to run it not as 'nobody' but as 'root'.
Hope it helps.

web2py function not triggered on user request

Using web2py (Version 2.8.2-stable+timestamp.2013.11.28.13.54.07) on 64-bit Windows, I have the following problem:
There is an exe program that is started on user request (first a txt file is created, then p is triggered).
p = subprocess.Popen(['woshi_engine.exe', scriptId], shell=True, stdout = subprocess.PIPE, cwd=path_1)
While the exe file is running, it creates a txt file.
The program is stopped on user request by deleting the file the program needs as input.
While the exe is running, there are other requests the user can trigger. The requests do reach the server (I used Microsoft Network Monitor to check that), but the function is not triggered.
I tried using the scheduler, but with no success; same problem.
I am really stuck here with this problem.
Thank you for your help.
With the help of the web2py Google group, here is the solution.
I used the scheduler and created a scheduler.py file with the following code:
def runWoshiEngine(scriptId, path):
    import os, sys
    import time
    import subprocess
    p = subprocess.Popen(['woshi_engine.exe', scriptId], shell=True, stdout=subprocess.PIPE, cwd=path)
    return dict(status=1)

from gluon.scheduler import Scheduler
scheduler = Scheduler(db)
In my controller function:
task = scheduler.queue_task(runWoshiEngine, [scriptId, path])
You also have to import the scheduler (from gluon.scheduler import Scheduler).
Then I run the scheduler from the command prompt with the following (so, if I understood correctly, you have two instances of web2py running: one for the web server, one for the scheduler):
web2py.py -K woshiweb -D 0 (-D 0 is for verbose logging, so it can be removed)
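If you need to check on the queued task later, the scheduler exposes a task_status helper; a minimal sketch, assuming the task variable returned by queue_task above:
result = scheduler.task_status(task.id, output=True)
# result.scheduler_task is the task record; once the task has completed,
# result.result holds the dict returned by runWoshiEngine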

How do I set up elasticsearch on Windows OS?

I tried to set up elasticsearch on my Windows 7 PC. I installed elasticsearch and curl, and it's working: localhost:9200 responds fine.
Now I am struggling to search in a file located at c:\user\rajesh\raj.txt.
My doubt is: where do I mention that I have to search in this file? elasticsearch.yml? Which parameter do I need to set to point to this text file?
Indexing is working with curl, but mapping gives a NullPointerException. Do I need to install something else?
I tried to install the Sense plugin for Chrome, but it says it has moved to Marvel, and from there I was unable to install Marvel!
From what I can tell, you've installed Elasticsearch and you're now expecting to be able to search within files on your local file system. This isn't how ES works. You need to create a mapping for an index and then populate that index with the content you want to search. If you're looking to index files on your local file system rather than data you have pulled from a database, you should look into the File System River Plugin for Elasticsearch, http://www.pilato.fr/fsriver/. It deals with all of the indexing of file-system-based documents automatically, once you've got it set up correctly.
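For illustration, registering a file system river followed roughly this shape (a sketch based on the fsriver docs of that era; the river name "mydocs", the directory, and the update rate are assumptions):
import requests

# Register an fsriver that periodically indexes a local directory
river = {
    "type": "fs",
    "fs": {
        "url": "C:/user/rajesh",    # directory to index
        "update_rate": 3600000      # rescan every hour (milliseconds)
    }
}
requests.put("http://localhost:9200/_river/mydocs/_meta", json=river)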
EDIT:
I also see you're trying to set up Kibana and Marvel/Sense. To set up Kibana just follow the instructions here: http://www.elasticsearch.org/overview/kibana/installation/
To set up Marvel, open PowerShell, cd to C:\elasticsearch\bin, and run plugin.bat -i elasticsearch/marvel/latest; then you'll need to restart your cluster. Once you've done that, if you go to http://localhost:9200/_plugin/marvel/ you'll see your Marvel dashboard. You'll also see a tab for "Sense", which is the other plugin you referred to.
If you are using elasticsearch for retrieving data from a DB like PostgreSQL, then go to bin/rivers.bat and edit it as:
curl -XPUT localhost:9200/_river/actor_jdbc_river/_meta -d "{\"type\":\"jdbc\",\"jdbc\":{\"strategy\":\"simple\",\"poll\":\"1h\",\"driver\":\"org.postgresql.Driver\",\"url\":\"jdbc:postgresql://10.5.2.132:5432/prodDB\",\"user\":\"UserName\",\"password\":\"Password\",\"sql\":\"select t.id as _id,t.name from topic as t \",\"digesting\" : true},\"index\":{\"index\":\"jdbc\",\"type\":\"actor_jdbc_river1\"}}"
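For readability, the escaped JSON payload in that command expands to:
{
  "type": "jdbc",
  "jdbc": {
    "strategy": "simple",
    "poll": "1h",
    "driver": "org.postgresql.Driver",
    "url": "jdbc:postgresql://10.5.2.132:5432/prodDB",
    "user": "UserName",
    "password": "Password",
    "sql": "select t.id as _id,t.name from topic as t",
    "digesting": true
  },
  "index": {
    "index": "jdbc",
    "type": "actor_jdbc_river1"
  }
}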
Then create a client on the Java side to access the data in the river.
Here the cluster name is the same as the one mentioned in config/elasticsearch.yml (testDBsearch):
private static Client createClient() {
    // Create the transport client with the cluster name from elasticsearch.yml
    Settings settings = ImmutableSettings.settingsBuilder().put("cluster.name", "testDBsearch").build();
    TransportClient transportClient = new TransportClient(settings);
    transportClient = transportClient.addTransportAddress(new InetSocketTransportAddress("10.5.2.132", 9300));
    return (Client) transportClient;
}

public static void main(String[] args) {
    Client client = createClient();
    String queryString = "python";
    search(client, 100, queryString);
}

public static void search(Client client, int size, String queryString) {
    queryString = queryString + "*";
    try {
        SearchResponse responseActor;
        responseActor = client.prepareSearch("jdbc").setTypes("actor_jdbc_river1").setSearchType(SearchType.DEFAULT)
                .setQuery(QueryBuilders.queryString(queryString)
                        .field("designation", new Float(2.0)).field("name", new Float(5.0)).field("email")
                        .defaultOperator(Operator.OR))
                .setFrom(0).setSize(size).setExplain(true)
                .execute().actionGet();
        for (SearchHit hit : responseActor.getHits()) {
            System.out.println(hit.getSourceAsString());
            System.out.println(hit.getScore());
            System.out.println("---------------------------");
        }
    } catch (Exception e) {
        System.out.println("Error in elastic search " + queryString + " Error :" + e);
    }
}
Clean installation of elasticsearch on Windows:
1) Check whether your system has the latest Java version.
2) Download and extract elasticsearch from "download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/zip/elasticsearch/2.3.3/elasticsearch-2.3.3.zip".
3) Set the JAVA_HOME environment variable to "C:\Program Files (x86)\Java\jdk1.8.0_91".
4) Check the JAVA_HOME environment variable using the "service" command in the bin directory of elasticsearch to verify that it is set properly.
5) Install the Windows service using the command service.bat install.
6) Uncomment network.host in the elasticsearch configuration file and set its value to localhost:
network.host: localhost in elasticsearch.yml (config file)
7) Run elasticsearch: "C:\elasticsearch-2.3.3\bin\elasticsearch"
If you get an error while running elasticsearch saying to update the JVM to the latest version, the Java on your system is not the latest version (install and run the latest Java version).
8) Install the elasticsearch-head plugin to visualize things in elasticsearch:
run the command "plugin install elasticsearch-head"
If it fails to install elasticsearch-head, then use the command:
plugin install "github.com/mobz/elasticsearch-head/archive/master.zip"
9) Open elasticsearch in the browser using the link "localhost:9200/_plugin/head/"
(screenshot: elasticsearch visual interface)

How to automatically exit/stop the running instance

I have managed to create an instance and SSH into it. However, I have a couple of questions regarding Google Compute Engine.
I understand that I will be charged for the time my instance is running, that is, until I exit out of the instance. Is my understanding correct?
I wish to run a batch job (a Java program) on my instance. How do I make my instance stop automatically after the job is complete (so that I don't get charged for the additional time it may run)?
If I start the job and disconnect my PC, will the job continue to run on the instance?
Regards,
Asim
Correct, instances are charged for the time they are running (to the minute, with a minimum of 10 minutes). Instances run from the time they are started via the API until they are stopped via the API. It doesn't matter whether any user is logged in via SSH or not. For most automated use cases, users never log in; programs are installed and started via startup scripts.
You can view your running instances via the Cloud Console, to confirm if any are currently running.
If you want to stop your instance from inside the instance, the easiest way is to start the instance with the compute-rw Service Account Scope and use gcutil.
For example, to start your instance from the command line with the compute-rw scope:
$ gcutil --project=<project-id> addinstance <instance name> --service_account_scopes=compute-rw
(this is the default when manually creating an instance via the Cloud Console)
Later, after your batch job completes, you can remove the instance from inside the instance:
$ gcutil deleteinstance -f <instance name>
You can put the halt command at the end of your batch script (assuming that you output your results to persistent disk).
After halt, the instance will have a state of TERMINATED and you will not be charged.
See https://developers.google.com/compute/docs/pricing
and scroll down to "instance uptime".
You can auto-shutdown the instance after model training: just run a few extra lines of code after the model training is complete.
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
service = discovery.build('compute', 'v1', credentials=credentials)
# Project ID for this request.
project = 'xyz' # Project ID
# The name of the zone for this request.
zone = 'xyz' # Zone information
# Name of the instance resource to stop.
instance = 'xyz' # instance id
request = service.instances().stop(project=project, zone=zone, instance=instance)
response = request.execute()
Add this to your model training script. When the training is complete, the GCP instance automatically shuts down.
More info on official website:
https://cloud.google.com/compute/docs/reference/rest/v1/instances/stop
If you want to stop the instance using a Python script, you can do it this way:
from google.cloud.compute_v1.services.instances import InstancesClient
from google.oauth2 import service_account
instance_client = InstancesClient.from_service_account_file(<location-path>)
zone = <zone>
project = <project>
instance = <instance_id>
instance_client.stop(project=project, instance=instance, zone=zone)
In the above script, I have assumed you are using a service account for authentication. For documentation of the libraries used, you can go here:
https://googleapis.dev/python/compute/latest/compute_v1/instances.html
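The stop call returns a long-running operation; if the script needs to block until the instance has actually stopped, recent google-cloud-compute releases let you wait on it (a sketch, assuming the client and variables from the script above):
operation = instance_client.stop(project=project, zone=zone, instance=instance)
operation.result()  # blocks until the stop operation has completed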