Start execution of existing SageMaker pipeline using Python SDK - aws-sdk

The SageMaker documentation explains how to run a pipeline, but it assumes I have just defined it and still have the pipeline object available.
How can I run an existing pipeline with the Python SDK?
I know how to read a pipeline with the AWS CLI (i.e. aws sagemaker describe-pipeline --pipeline-name foo). Can the same be done with Python code? Then I would have the pipeline object ready to use.

If the pipeline has already been created, you can use the Python Boto3 SDK to make the StartPipelineExecution API call:
import boto3

client = boto3.client("sagemaker")

# Only PipelineName is required; the remaining arguments are optional.
response = client.start_pipeline_execution(
    PipelineName='string',
    PipelineExecutionDisplayName='string',
    PipelineParameters=[
        {
            'Name': 'string',
            'Value': 'string'
        },
    ],
    PipelineExecutionDescription='string',
    ClientRequestToken='string',
    ParallelismConfiguration={
        'MaxParallelExecutionSteps': 123
    }
)
If you prefer AWS CLI, the most basic call is:
aws sagemaker start-pipeline-execution --pipeline-name <name-of-the-pipeline>
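To answer the second part of the question: the describe call is available in Boto3 as well, so you can inspect an existing pipeline from Python just like with the CLI. A minimal sketch, where the pipeline name foo is only a placeholder:
import boto3

client = boto3.client("sagemaker")

# Equivalent of `aws sagemaker describe-pipeline --pipeline-name foo`
pipeline = client.describe_pipeline(PipelineName="foo")
print(pipeline["PipelineArn"])
print(pipeline["PipelineDefinition"])  # the pipeline definition as a JSON string
Note that this returns plain dictionaries rather than a sagemaker.workflow Pipeline object; for starting an execution, start_pipeline_execution above only needs the pipeline name.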

Related

Changing an ApiGateway restapi stage deployment via the cli or sdk

I have a system in place for creating new deployments, but I would like to be able to change a stage to use a previous deployment. You can do this via the AWS console, but it appears it's not an option for v1 API Gateways via the SDK or CLI?
This can be done via the CLI for V1 APIs. You will have to run two commands, get-deployments and update-stage: get the deployment ID from the output of the first and use it in the second.
$ aws apigateway get-deployments --rest-api-id $API_ID
$ aws apigateway update-stage --rest-api-id $API_ID --stage $STAGE_NAME --patch-operations op=replace,path=/deploymentId,value=$DEPLOYMENT_ID
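The same two calls are also exposed through the SDKs, so this can be scripted without the console. A rough sketch with Boto3; the REST API ID, stage name, and the way the target deployment is chosen are placeholders:
import boto3

client = boto3.client("apigateway")

# List existing deployments for the v1 REST API
deployments = client.get_deployments(restApiId="abc123")["items"]

# Pick the deployment to roll back to, e.g. by id or createdDate
deployment_id = deployments[0]["id"]

# Point the stage at that deployment
client.update_stage(
    restApiId="abc123",
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/deploymentId", "value": deployment_id}
    ],
)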

Keycloak logging to JSON format message field

I have been trying to set up keycloak logging to be scraped by fluentd to be used in elasticsearch. So far I have used the provided CLI string to use in my helm values script.
cli:
  # Custom CLI script
  custom: |
    /subsystem=logging/json-formatter=json:add(exception-output-type=formatted, pretty-print=true, meta-data={label=value})
    /subsystem=logging/console-handler=CONSOLE:write-attribute(name=named-formatter, value=json)
However, as you can see in the picture provided, the generated logs are entirely JSON apart from the core of the log, the message field. Currently the message field is provided as comma-separated key-value pairs. Is there any way to tell Keycloak, JBoss or WildFly that it needs to provide the message in JSON too? This would allow me to search through the data efficiently in Elastic.
Check this project on GitHub: keycloak_jsonlog_eventlistener: Outputs Keycloak events as JSON into the server log.
Keycloak JSON Log Eventlistener
Primarily written for the JBoss Keycloak Docker image, it will output Keycloak events as JSON into the Keycloak server log.
The idea is to parse logs once they get to logstash via journalbeat.
Tested with Keycloak version 8.0.1

Compute Engine accessing DataStore get Invalid Credentials (code: 401)

I am following the tutorial on
https://cloud.google.com/datastore/docs/getstarted/start_nodejs/
trying to use Datastore from my Compute Engine project.
Step 2 in the tutorial mentions that I do not have to create new service account credentials when running from Compute Engine.
I run the sample with:
node test.js abc-test-123
where abc-test-123 is my project ID, and that project has all Cloud API access enabled, including the Datastore API.
After uploading the code and executing the sample, I got the following error:
Adams: { 'rpc error': { [Error: Invalid Credentials] code: 401,
errors: [ [Object] ] } }
Update:
I worked around this by changing the default sample code to use the JWT credential flow (with a generated .json key file), and things are working now.
Update 2:
This is the scope config when I run
gcloud compute instances describe abc-test-123
And the result:
serviceAccounts:
  scopes:
  - https://www.googleapis.com/auth/cloud-platform
According to the doc:
You can set scopes only when you create a new instance, and cannot change or expand the list of scopes for existing instances. For simplicity, you can choose to enable full access to all Google Cloud Platform APIs with the https://www.googleapis.com/auth/cloud-platform scope.
I still welcome any answer about why the original code does not work in my case.
Thanks for reading.
This most likely means that when you created the instance, you didn't specify the right scopes (datastore and userinfo-email according to the tutorial). You can check that by executing the following command:
gcloud compute instances describe <instance>
Look for serviceAccounts/scopes in the output.
There are two ways to create an instance with the right scopes:
gcloud compute instances create $INSTANCE_NAME --scopes datastore,userinfo-email
Using the web console: in the Access & security settings, enable User Info and Datastore.
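If you want to double-check from inside the VM which scopes its service account actually received, you can ask the GCE metadata server directly. A minimal sketch in Python; the endpoint path and the Metadata-Flavor header are the standard GCE metadata conventions:
import urllib.request

# Ask the GCE metadata server which OAuth scopes the default service account has
url = ("http://metadata.google.internal/computeMetadata/v1/"
       "instance/service-accounts/default/scopes")
req = urllib.request.Request(url, headers={"Metadata-Flavor": "Google"})
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # one scope URL per line
If the datastore and userinfo-email scopes (or cloud-platform) are missing here, the 401 is expected regardless of what the client code does.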

Pass json into AWS CLI Invalid JSON

I have a script to add some custom metric data, and it works great if I write the metric data to a file and then read that in, like:
aws cloudwatch put-metric-data --namespace "ec2" --metric-data file://metric2.json
But if I have the script just print the JSON and call it like this:
aws cloudwatch put-metric-data --namespace "ec2" --metric-data $(python aws-extra-metrics.py)
I get the following error:
Error parsing parameter '--metric-data': Invalid JSON:
Is there any way around this? I would prefer not to have to write it to a file every time, as this will be run from a cron job.
We are running Ubuntu.
Is the Python script generating the JSON? The difference is between passing a file name and passing the file content.
You could try:
python aws-extra-metrics.py > metric2.json && aws cloudwatch put-metric-data --namespace "ec2" --metric-data file://metric2.json
or
aws cloudwatch put-metric-data --namespace "ec2" --metric-data $(python aws-extra-metrics.py)
you may need quotes around the invocation of the python script
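Another option, since the metric data is already being built in Python, is to skip the CLI and the shell quoting entirely and publish the metrics from the script itself with Boto3. A rough sketch; the metric name, value, and dimension are made-up placeholders:
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish the custom metric directly instead of printing JSON for the CLI
cloudwatch.put_metric_data(
    Namespace="ec2",
    MetricData=[
        {
            "MetricName": "ExampleMetric",  # placeholder name
            "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
            "Value": 42.0,
            "Unit": "Count",
        }
    ],
)
This also plays nicely with cron, since there is no intermediate file and no shell quoting to worry about.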

How to capture JSON result from Azure CLI within NodeJS script

Is there a way to capture the JSON objects from the Azure NodeJS CLI from within a NodeJS script? I could do something like exec('azure vm list') and write a promise to process the deferred stdout result, or I could hijack the process.stream.write method. But looking at the CLI code, which is quite extensive, I thought there might be a way to pass a callback to the cli function or some other option that might directly return the JSON result. I see you are using the winston logger module, which I am somewhat familiar with; perhaps there is a hook there that could be used.
azure vm list does have a --json option:
C:\>azure vm list -h
help:    List Azure VMs
help:
help:    Usage: vm list [options]
help:
help:    Options:
help:      -h, --help               output usage information
help:      -s, --subscription <id>  use the subscription id
help:      -d, --dns-name <name>    only show VMs for this DNS name
help:      -v, --verbose            use verbose output
help:      --json                   use json output
You can get the JSON result in the callback of an exec(...) call. Would this work for you?
Yes you can; check this gist: https://gist.github.com/4415326 and you'll see how to do it without using exec. You basically override the logger hanging off the CLI.
As a side note, I am about to publish a new module, azure-cli-buddy, that will make it easy to call the CLI using this technique and to receive results in JSON.