Has anyone used an AWS Systems Manager parameter in Data Pipeline to assign a value to a pipeline parameter?

"id": "myS3Bucket",
"type": "String",
"default": "\"aws ssm get-parameters --names variable --query \"Parameters[*].{myS3Bucket:Value}\"\""
I tried the above. I created a variable in AWS Parameter Store and was able to retrieve its value with this command in the AWS CLI, but I am not able to retrieve the value and pass it into my pipeline.

It's not feasible the way you are trying to achieve it: Data Pipeline treats the value as a plain String, not as an AWS CLI command to execute to produce the value. When declaring a String parameter, you can either define a static value such as s3://tst-data-bucket, or a dynamic value derived from another parameter (runtime, compile time, or both), such as s3://#{anotherparameter}/#{actualStartTime}. To achieve your desired result, fetch the SSM parameter yourself and pass the value in the Data Pipeline activation command, however you activate your pipeline, whether through a Lambda function or a CLI/bash script.
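For example, a minimal bash sketch of that approach (the pipeline ID df-EXAMPLE is a placeholder; "variable" is the parameter name from the question):

#!/usr/bin/env bash
# Fetch the value from SSM Parameter Store.
BUCKET=$(aws ssm get-parameters --names variable --query "Parameters[0].Value" --output text)
# Supply it at activation time instead of hard-coding it in the pipeline definition.
aws datapipeline activate-pipeline --pipeline-id df-EXAMPLE --parameter-values myS3Bucket="$BUCKET"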


How to pass a different CSV value for every occurrence in JMeter

I am trying to pass a different CSV value, in sequential order, for every occurrence in JMeter.
I applied:
Loop Count
Counter
Beanshell Sampler
Value from CSV
JMS Point-to-Point Request
With this I am able to pass a different value for every occurrence with multiple users.
But my script fails when I run multiple users with multiple iterations: it does not pick up the values sequentially.
My Beanshell Sampler code:
// Read the target variable name and the current CSV value,
// then store the CSV value under that name.
String variablename = vars.get("variable");
String csvvalue = vars.get("valuefromcsv");
vars.put(variablename, csvvalue);
It doesn't seem to me that you need to use scripting at all; it's quite sufficient to set up a CSV Data Set Config and:
Point it to the CSV file
Set "Sharing Mode" to All Threads
This way each thread (virtual user) will pick up the next value from the CSV file on each iteration.
Also be aware that starting from JMeter 3.1 it is recommended to use JSR223 Test Elements and the Groovy language for scripting.
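If you do still need the scripted assignment, a minimal JSR223/Groovy equivalent of the Beanshell code above would be (same variable names as in the question):

// JSR223 Sampler, language: Groovy
String variablename = vars.get('variable')
String csvvalue = vars.get('valuefromcsv')
vars.put(variablename, csvvalue)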
You don't need to write a script for this. You can add "Config Element -> CSV Data Set Config" for the HTTP request. On the CSV Data Set Config page, set the values below:
Variable Names: give the column names
Delimiter: ,
Recycle on EOF: True
Stop thread on EOF: False
Sharing mode: All threads
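For example, assuming a hypothetical file values.csv with two columns and no header row:

var1,100
var2,200
var3,300

Setting Variable Names to variable,valuefromcsv makes each row available as ${variable} and ${valuefromcsv} on each iteration.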

How to parse JSON from Terraform null_resource into map using data external block

I am trying to parse JSON key/value pairs into a map I can use in Terraform during a lookup.
I've created a null_resource with a local-exec provisioner to run my AWS CLI command, then parsed the output with jq to clean it up. The JSON looks good; the correct key/value pairs are displayed when run from the CLI. I created an external data block to convert the JSON into a TF map, but I'm getting an Incorrect attribute error from TF.
resource "null_resource" "windows_vars" {
provisioner "local-exec" {
command = "aws ssm --region ${var.region} --profile ${var.profile} get-parameters-by-path --recursive --path ${var.path} --with-decryption | jq '.Parameters | map({'key': .Name, 'value': .Value}) | from_entries'"
}
}
data "external" "json" {
depends_on = [null_resource.windows_vars]
program = ["echo", "${null_resource.windows_vars}"]
}
output "map" {
value = ["${values(data.external.json.result)}"]
}
I expected the key/value pairs to be added to a TF map I could use elsewhere.
I got the following error:
Error: Incorrect attribute value type
on instances/variables.tf line 33, in data "external" "json":
33: program = ["echo", "${null_resource.windows_vars}"]
Inappropriate value for attribute "program": element 1: string required.
JSON output looks like this:
{
  "/vars/windows/KEY_1": "VALUE_1",
  "/vars/windows/KEY_2": "VALUE_2",
  "/vars/windows/KEY_3": "VALUE_3",
  "/vars/windows/KEY_4": "VALUE_4"
}
I actually answered my own question. I am using a data external block to run my AWS CLI command and referencing the block in my module.
data "external" "json" {
program = ["sh", "-c", "aws ssm --region ${var.region} --profile ${var.profile} get-parameters-by-path --recursive --path ${var.path} --with-decryption | jq '.Parameters | map({'key': .Name, 'value': .Value}) | from_entries'"]
}
The ${var.amis["win2k19_base"]} expression does a lookup on a map of AMI IDs I use, and I am using that as the key in the Parameter Store path for the value I am looking for.
Inside my module I am using this:
instance_var = data.external.json.result["${var.path}${var.amis["win2k19_base"]}"]
Thank you for the great suggestions.
An alternative way to address this would be to write a data-only module which encapsulates the data fetching and has its own configured aws provider to fetch from the right account.
Although it's usually not recommended for a child module to have its own provider blocks, that is allowed and can be okay if the provider in question is only being used to fetch data sources, because Terraform will never need to "destroy" those. The recommendation against nested module provider blocks is that it will cause trouble if you remove a module while the resource objects declared inside it still exist, and then there's no provider configuration left to use to destroy them.
With that said, here's an example of the above idea, intended to be used as a child module which can be imported by any configuration that needs access to this data:
variable "region" {}
variable "profile" {}
variable "path" {}
provider "aws" {
region = var.region
profile = var.profile
}
data "aws_ssm_parameter" "foo" {
name = var.path
}
output "result" {
# For example we'll just return the entire thing, but in
# practice it might be better to pre-process the output
# into a well-defined shape for consumption by the calling
# modules, so that they can rely on a particular structure.
value = jsondecode(data.aws_ssm_parameter.foo)
}
I don't think the above is exactly equivalent to the original question since AFAIK aws_ssm_parameter does not do a recursive fetch at the time of writing, but I'm not totally sure. My main purpose here was to show the idea of using a nested module with its own provider configuration as an alternative way to fetch data from a specific account/region.
A more direct response to the original question is that provisioners are designed as one-shot actions and so it's not possible to access any data they might return. The external data source is one way to run an external program to gather some data, if a suitable data source is not already available.
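As a minimal sketch of that contract: the external data source runs the program and expects it to print a single JSON object whose values are all strings; Terraform then exposes that object as the result map.

data "external" "example" {
  # The program just emits a fixed JSON object here for illustration.
  program = ["sh", "-c", "echo '{\"greeting\": \"hello\"}'"]
}

# data.external.example.result["greeting"] == "hello"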

Variables within AWS Lambda Payload

I have an AWS Lambda function I need to invoke from the command line. For the payload, I am passing a JSON style object with three unique key/value pairs.
aws lambda invoke --function-name .. --payload '{.....}'
Every time I invoke it, the keys in that payload will be the same, but the associated values will be different.
What is non-negotiable is that this needs to be invoked (for now) from the command line, so no console solutions will work at this moment. Instead of re-typing the updated values each time I need to invoke it (which is a few times per day), what is my best option for handling this?
Thanks!
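One common approach, sketched below: keep the payload in a JSON file, edit only the values between runs, and pass the file with the CLI's file:// syntax. The function and file names are placeholders; on AWS CLI v2 the --cli-binary-format flag keeps the JSON from being treated as base64.

# payload.json holds the three key/value pairs; edit the values before each invoke.
aws lambda invoke \
    --function-name my-function \
    --payload file://payload.json \
    --cli-binary-format raw-in-base64-out \
    response.json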

pass array as a key value to Aws Lambda

I'm trying to pass the below event keys to an AWS Lambda Python function.
Payload='{"OS":"ubuntu","region":"us-east-1","subnetids":"'subnet-123','subnet-456','subnet-789','subnet-101112'","vpcid":"vpc-abcd"}')
I'm facing an issue passing subnetids to the Lambda function, as it is a list and not a single item.
In the actual function I'm also not sure how to read this payload, since the value itself is an array.
I can read OS and region as event["OS"] and event["region"], but not subnetids: if I try event["subnetids"] it reads as a single value, not as a list of subnets.
Please suggest!
Your example isn't proper JSON. To make it valid, you'll need to wrap your subnets in an array and change the quoting, like:
Payload='{"OS":"ubuntu","region":"us-east-1","subnetids":["subnet-123", "subnet-456","subnet-789","subnet-101112"],"vpcid":"vpc-abcd"}'
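Inside the handler, subnetids then arrives as an ordinary Python list; a minimal sketch:

def lambda_handler(event, context):
    os_name = event["OS"]            # "ubuntu"
    region = event["region"]         # "us-east-1"
    subnet_ids = event["subnetids"]  # a list: ["subnet-123", "subnet-456", ...]
    for subnet_id in subnet_ids:
        print(subnet_id)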

How can I execute Web Service Task in SSIS without using output variable?

I have created an SSIS project, running a web service task to execute a function. Everything runs fine, but I get an error on the assignment of the output variable:
The type of variable being assigned to the variable differs from the current variable type.
I actually do not need it to return an output variable; however, the task properties do not give me an option to not have one.
Currently the webservice is a void type, but I also tried having it return true and set the variable type to a boolean. I got the same error. In this case, I am not sure what I need to do to assign the variable, but I'd rather it just not be looking for an output variable at all.
Can someone help me figure out how to either
not have an output variable or
assign an output of true / 1 / "" whatever arbitrarily so that it does not return an error.
This is an old question but I ran into the same problem, so it is worth posting an answer.
As you say, SSIS does not actually allow you to omit the output variable. If your web service method returns void, you need to set up a variable of type Object and use that as the output variable.