Why can I install this Jelastic manifest through the dashboard import function but not through the Jelastic API? - manifest

I have the following very simple manifest:
type: install
name: very simple manifest
onInstall:
- log: installing manifest
I can install it from the Jelastic Dashboard. There is an import function in the main menu where I can copy / paste that manifest content and it gets installed. In the Jelastic console, I can see
[15:36:38 manifest.settings]: BEGIN INSTALLATION: very simple manifest
[15:36:39 manifest.settings]: BEGIN HANDLE EVENT: {"topic":"application/install","envAppid":""}
[15:36:39 manifest.settings:1]:> installing manifest
[15:36:39 manifest.settings]: END HANDLE EVENT: application/install
[15:36:39 manifest.settings]: END INSTALLATION: very simple manifest
and the Jelastic dashboard confirms installation.
Now, when I do the same, but via the Jelastic REST API, i.e. using the endpoint
http://my-jelastic-provide.com/1.0/marketplace/jps/REST/install
with the relevant data, it doesn't install. Instead, I get the strange error message
Can't find environment by domain [jelasticclient-master-0954606]
where jelasticclient-master-0954606 is the envName I set.
However, if I change my manifest to e.g.
type: install
name: very simple manifest
nodes:
  count: 1
  cloudlets: 4
  nodeGroup: cp
  image: alpine:latest
skipNodeEmails: true
onInstall:
- log: installing manifest
then it installs perfectly. What am I missing?
I am using Jelastic v6.0.2.

Your "very simple manifest" doesn't suppose any environment name to be passed.
That's why when you pass it you get an error "Can't find environment by domain [domain-name]" (Example1).
When you don't have the "nodes" parameter in the manifest (as in your second example), you shouldn't pass any environment name (Example2) or should pass the existing environment name (response is in Example3).
Example1:
curl -X POST 'https://jca.host-domain/1.0/marketplace/jps/rest/install' \
-d 'envName=jelasticclient-master-0954606' \
-d session=*** \
-d skipNodeEmails=1 \
-d ownerUid=UID \
--data-urlencode 'jps={ "type": "install", "name": "very simple manifest", "onInstall": [ { "log": "installing manifest" } ] }'
The response is:
{"result":11,"response":{"result":11,"source":"JEL","error":"domain [jelasticclient-master-0954606] doesn't exist"},"source":"JEL","error":"domain [jelasticclient-master-0954606] doesn't exist"}
When the environment name is not passed (Example2),
curl -X POST 'https://jca.host-domain/1.0/marketplace/jps/rest/install' \
-d session=*** \
-d skipNodeEmails=1 \
-d ownerUid=UID \
--data-urlencode 'jps={ "type": "install", "name": "very simple manifest", "onInstall": [ { "log": "installing manifest" } ] }'
the response is
{"result":0,"uniqueName":"3c819586-2ef7-4691-9faa-d3059459d20e","response":{"result":0,"uniqueName":"3c819586-2ef7-4691-9faa-d3059459d20e","successText":"","appid":""},"appid":"","successText":""}
When an environment with envName=jelasticclient-master-0954606 already exists, the response to the same request as in Example1 looks like this (Example3):
{"result":0,"uniqueName":"b52a8db9-8850-4b66-958a-3dee3345b923","response":{"result":0,"uniqueName":"b52a8db9-8850-4b66-958a-3dee3345b923","successText":"","appid":"7b0c465f6c9573b8d8ce3ed59591781b"},"appid":"7b0c465f6c9573b8d8ce3ed59591781b","successText":""}
In other words, if you pass an environment name when deploying this "very simple manifest", the manifest is installed like an add-on (because there is no "nodes" parameter in it), but there is no existing environment "jelasticclient-master-0954606" to install this "add-on" into.
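If it helps, here is a rough two-step sketch of that workflow via the same endpoint (host, session, and ownerUid are placeholders, as in the examples above). First, install a manifest that contains a "nodes" section, mirroring your second manifest, so the environment gets created under the chosen envName:
curl -X POST 'https://jca.host-domain/1.0/marketplace/jps/rest/install' \
-d 'envName=jelasticclient-master-0954606' \
-d session=*** \
-d skipNodeEmails=1 \
-d ownerUid=UID \
--data-urlencode 'jps={ "type": "install", "name": "very simple manifest", "nodes": { "count": 1, "cloudlets": 4, "nodeGroup": "cp", "image": "alpine:latest" }, "onInstall": [ { "log": "installing manifest" } ] }'
Then the add-on-style manifest from Example1 installs against that now-existing environment, which is exactly the Example3 case:
curl -X POST 'https://jca.host-domain/1.0/marketplace/jps/rest/install' \
-d 'envName=jelasticclient-master-0954606' \
-d session=*** \
-d skipNodeEmails=1 \
-d ownerUid=UID \
--data-urlencode 'jps={ "type": "install", "name": "very simple manifest", "onInstall": [ { "log": "installing manifest" } ] }'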

Related

cbimport not importing file which is extracted from cbq command

I extracted data with the cbq command below, which was successful.
cbq -u Administrator -p Administrator -e "http://localhost:8093" --script='SELECT * FROM `sample` WHERE customer.id=="12345"' -q | jq '.results' > temp.json;
However, when I try to import the same data in JSON format into the target cluster using the command below, I get an error.
cbimport json -c http://{target-cluster}:8091 -u Administrator -p Administrator -b sample -d file://C:\Users\{myusername}\Desktop\temp.json -f list -g %docId%
JSON import failed: 0 documents were imported, 0 documents failed to be imported
JSON import failed: input json is invalid: ReadArray: expect [ or , or ] or n, but found {, error found in #1 byte of ...|{
"requ|..., bigger context ...|{
"requestID": "2fc34542-4387-4643-8ae3-914e316|...
The extracted temp.json looks like this (pretty-printed):
{
  "requestID": "6ef38b8a-8e70-4c3d-b3b4-b73518a09c62",
  "signature": {
    "*": "*"
  },
  "results": [
    {
      "{Bucket-name}": {my-data}
    }
  ],
  "status": "success",
  "metrics": {
    "elapsedTime": "4.517031ms",
    "executionTime": "4.365976ms",
    "resultCount": 1,
    "resultSize": 24926
  }
}
It looks like the file extracted by the cbq command has control fields like requestID, metrics, status, etc., and the JSON is pretty-printed. If I manually remove all fields except {my-data}, put that into a JSON file, and make the JSON compact, then it works. But I want to automate this in a single run. Is there a way to do it with the cbq command?
I can't find any other utility, or a way to use a WHERE condition with cbexport, to do this on Couchbase (documents exported using cbexport can be imported using cbimport easily).
For the cbq command, you can use the --quiet option to disable the startup connection messages and --pretty=false to disable pretty-printing. Then, to extract just the documents in cbimport's JSON lines format, I used jq.
This worked for me -- selecting documents from travel-sample._default._default (for the jq filter, where I have _default, you would put the Bucket-name, based on your example):
cbq --quiet --pretty=false -u Administrator -p password --script='select * from `travel-sample`._default._default' | jq --compact-output '.results|.[]|._default' > docs.json
Then, importing into test-bucket1:
cbimport json -c localhost -u Administrator -p password -b test-bucket1 -d file://./docs.json -f lines -g %type%_%id%
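To automate your exact example in one run, a sketch along the same lines might look like this (assuming the bucket is `sample`, so the jq filter unwraps the sample key, and that each document actually contains a docId field for the -g key generator; adjust the cluster addresses to your setup):
cbq --quiet --pretty=false -u Administrator -p Administrator -e "http://localhost:8093" --script='SELECT * FROM `sample` WHERE customer.id=="12345"' | jq --compact-output '.results|.[]|.sample' > temp.json
cbimport json -c http://{target-cluster}:8091 -u Administrator -p Administrator -b sample -d file://temp.json -f lines -g %docId%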
cbq documentation: https://docs.couchbase.com/server/current/tools/cbq-shell.html
cbimport documentation: https://docs.couchbase.com/server/current/tools/cbimport-json.html
jq documentation: https://stedolan.github.io/jq/manual/#Basicfilters

How to generate a 'label' using a json file in app configuration service?

I'm trying to import a JSON file into the Azure App Configuration service using the CLI command:
az appconfig kv import
Sample json file
{
  "Pss": {
    "account/getall/get": "read",
    "account/setall/put": "write",
    "account/someendpoint/somevalue": "profile"
  }
}
I can see the following preview in the CLI:
Adding:
{"key": "Pss:account/getall/get", "value": "\"read\""}
{"key": "Pss:account/setall/put", "value": "\"write\""}
{"key": "Pss:account/someendpoint/somevalue", "value": "\"profile\""}
Labels are created as (No label) in the App Configuration service.
Could you please suggest what changes need to be made to the JSON file to generate label values?
Thanks in advance.
The command below will help you get the label name:
az appconfig kv import --name hkappconfig --label testingLabelName --source file --path /home/hari/Import.json --format json --separator . --content-type "application/json"
By adding the --label labelName attribute to the az cli import command, you will get the label name in the Configuration explorer of App Configuration.
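To double-check the result, you can list the imported key-values filtered by that label (hkappconfig and testingLabelName are just the example names used above):
az appconfig kv list --name hkappconfig --label testingLabelName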

google cloud ML Engine model Version creation and set as default

I want to create a Version for an ML Engine Model via the REST API and set it as default. Kindly help me and point out the mistake I am making. I am sending the request below and hitting the POST API below.
I am trying it via the Google Auth Playground.
POST URL: https://ml.googleapis.com/v1/projects//models//versions
Request Body :
{
  "name": "v4",
  "description": "This is test Version created by API",
  "isDefault": True,
  "deploymentUri": "gs://car-hertz/vans-uk-hertz/output/v1/F0/export/exporter/1531390162/",
  "runtimeVersion": "1.4",
  "framework": enum(TENSORFLOW),
  "pythonVersion": "2.7"
}
In the REST API docs for the Version resource, you can see in the description of the framework field that:
Valid values are TENSORFLOW, SCIKIT_LEARN, and XGBOOST
enum(Framework) is just the field's type. Also, from that same link, the isDefault field is output only, so you shouldn't include it in the request to create a model version. From the docs for the create method:
If you want a new version to be the default, you must call
projects.models.versions.setDefault.
So, to create a new model version and set it as default via REST API:
Put the request payload in a json file:
{
  "name": "v4",
  "description": "This is test Version created by API",
  "deploymentUri": "gs://car-hertz/vans-uk-hertz/output/v1/F0/export/exporter/1531390162/",
  "runtimeVersion": "1.4",
  "framework": "TENSORFLOW",
  "pythonVersion": "2.7"
}
Create the version by running the following in a shell (I like to use a gcurl alias):
alias gcurl='curl -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" -H "Content-Type: application/json" '
gcurl -X POST -T "$REQUEST_FILEPATH" https://ml.googleapis.com/v1/projects/$PROJECT/models/$MODEL/versions
Set the above as the default:
gcurl -X POST https://ml.googleapis.com/v1/projects/$PROJECT/models/$MODEL/versions/v4:setDefault
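If you want to verify which version is now the default, a quick sketch is to list the versions with the same gcurl alias (each Version in the list response carries an isDefault flag):
gcurl https://ml.googleapis.com/v1/projects/$PROJECT/models/$MODEL/versions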

How to use Google Natural Language processing from Google Cloud Storage?

I have a sample request here. It is JSON:
{
  "document": {
    "type": "PLAIN_TEXT",
    "content": "Joanne Rowling, who writes under the pen names J. K. Rowling and Robert Galbraith, is a British novelist and screenwriter who wrote the Harry Pott$
  },
  "encodingType": "UTF8"
}
I found a tutorial in Google's Natural Language documentation on reading from Google Cloud Storage:
curl -X POST \
  -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
  -H "Content-Type: application/json; charset=utf-8" \
  --data "{
    'document':{
      'type':'PLAIN_TEXT',
      'gcsContentUri':'gs://reubucket/textData'
    }
  }" "https://language.googleapis.com/v1/documents:analyzeEntitySentiment"
And the error that I got is
ERROR: (gcloud.auth) Invalid choice: '*************-_m6csS1Wzlj1pyC_J7vzC0'.
Usage: gcloud auth [optional flags] <group | command>
group may be application-default
command may be activate-service-account | configure-docker | list |
login | revoke
How do I call the command with my API key?
I need a way to change the "content" to entries from my CSV file.
Thank you.
Here is an example of the error that I am receiving, please help:
mufaroshumba@reucybertextmining:~/myFolder$ gcloud auth activate-service-account --key-file="/home/mufaroshumba/myFolder/reucybertextmining-74fa66372251.json"
Activated service account credentials for: [starting-*******[CENSORED]@reucybertextmining.iam.gserviceaccount.com]
mufaroshumba@reucybertextmining:~/myFolder$ curl "https://language.googleapis.com/v1/documents:analyzeSentiment?key=${API_KEY}" \ -s -X POST -H "Content-Type: application/json" --data-binary @request.json
{
  "error": {
    "code": 401,
    "message": "Permission to access the GCS address is denied.",
    "status": "UNAUTHENTICATED",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.BadRequest",
        "fieldViolations": [
          {
            "field": "document.gcs_content_uri",
            "description": "Permission to access the GCS address is denied."
          }
        ]
      }
    ]
  }
}
curl: (6) Could not resolve host: -s
mufaroshumba@reucybertextmining:~/myFolder$
I then used this website trying to get
It looks like your auth is not set up correctly. If you just run this command:
gcloud auth application-default print-access-token
it should be giving you a token, but it seems like it's not. Please follow the steps here to make sure that this command is working first:
https://cloud.google.com/natural-language/docs/quickstart#quickstart-analyze-entities-cli
Then, as long as you have permission to access the GCS bucket, you should be able to get content out of it. Note that the API expects to see the actual text content in the GCS file, not a CSV.
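For reference, a cleaned-up form of the call from the tutorial might look like this (a sketch assuming request.json holds the gcsContentUri document shown above; keep the line continuations intact, since in your pasted command the backslash ended up mid-line, which appears to be why curl reported "Could not resolve host: -s"):
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  --data-binary @request.json \
  "https://language.googleapis.com/v1/documents:analyzeEntitySentiment"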

Is there a way to switch cwd by changing environment in PM2 - node.js

I am using PM2 to manage the execution of a couple of micro-apps on node.
Goal:
I would like to be able to automatically switch settings and the cwd value based on the environment the app is executing in.
For example: on my local machine CWD should be ~/user/pm2, while on the server it needs to be E:\Programs\PM2.
Is there any way to do this using JSON config options with PM2? Is there a better way to manage the variables for different environments?
You can save a shell script, say pm2_dev.sh, containing the cd command as its first line.
#!/bin/bash
cd /foo/bar
pm2-dev run my-app.js
Or you can pass the target directory as an argument to your script:
# pm2_dev.sh ~/user/pm2
The script should then be:
#!/bin/bash
cd $1
pm2-dev run my-app.js
If you do not want to switch environments via a shell script, you can follow the approach from the PM2 documentation:
{ "apps" : [{
"script" : "worker.js",
"watch" : true,
"env": {
"NODE_ENV": "development",
},
"env_production" : {
"NODE_ENV": "production"
} },{
"name" : "api-app",
"script" : "api.js",
"instances" : 4,
"exec_mode" : "cluster" }] }
When running your application, you should use the --env option, as described here:
--env specify environment to get
specific env variables (for JSON declaration)
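For example, assuming the configuration above is saved as ecosystem.json (a hypothetical filename), the production settings could be selected like this:
pm2 start ecosystem.json --env production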
Finally, you can wrap the configuration in a JS module that conditionally returns parameters based on the current environment:
module.exports = (function(env){
  if( env === 'development' )
    return { folder: '~/user/pm2' };
  else if( env === 'production' )
    return { folder: 'E:\\Programs\\PM2' };
}(process.env.NODE_ENV));
Then you can require the config file and access it, being sure that it always returns the correct config.