How to convert Pulumi Output&lt;T&gt; to string?

I am dealing with creating an AWS API Gateway. I am trying to create a CloudWatch log group and name it API-Gateway-Execution-Logs_${restApiId}/${stageName}. I have no problem with the REST API creation.
My issue is in converting restApi.id, which is of type pulumi.Output, to a string.
I have tried these two versions, which are proposed in their PR #2496:
const restApiId = apiGatewayToSqsQueueRestApi.id.apply((v) => `${v}`);
const restApiId = pulumi.interpolate `${apiGatewayToSqsQueueRestApi.id}`
Here is the code where it is used:
const cloudWatchLogGroup = new aws.cloudwatch.LogGroup(
    `API-Gateway-Execution-Logs_${restApiId}/${stageName}`,
    {},
);
stageName is just a string.
I have also tried to apply again, like:
const restApiIdString = restApiId.apply((v) => v);
I always get this error from pulumi up:
aws:cloudwatch:LogGroup API-Gateway-Execution-Logs_Calling [toString] on an [Output<T>] is not supported.
Please help me convert the Output to a string.

@Cameron answered the naming question; I want to answer the question in your title.
It's not possible to convert an Output<string> to a string, or any Output<T> to T.
Output<T> is a container for a future value T which may not be resolved even after the program execution is over. For example, your restApiId is generated by AWS at deployment time, so if you run your program in preview, there is no value for restApiId yet.
Output<T> is like a Promise<T> which will eventually be resolved, potentially after some resources are created in the cloud.
Therefore, the only operations available on an Output<T> are:
Convert it to another Output<U> with apply(f), where f: T -> U
Assign it to an Input<T> to pass it to another resource constructor
Export it from the stack
Any value manipulation has to happen within an apply call, as in the sketch below.
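A minimal sketch of all three operations (the resource names are illustrative):
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const api = new aws.apigateway.RestApi("my-api");

// 1. Convert it to another Output with apply(f)
const idUpper: pulumi.Output<string> = api.id.apply((id) => id.toUpperCase());

// 2. Pass it where an Input<T> is expected (interpolate is sugar over apply)
const logGroup = new aws.cloudwatch.LogGroup("my-log-group", {
    name: pulumi.interpolate`API-Gateway-Execution-Logs_${api.id}/dev`,
});

// 3. Export it from the stack
export const restApiId = api.id;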

So long as the Output is resolvable while the Pulumi script is still running, you can use an approach like the one below:
import { Output } from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import * as fs from "fs";

// create a GCP registry
const registry = new gcp.container.Registry("my-registry");
const registryUrl = registry.id.apply(_ => gcp.container.getRegistryRepository().then(reg => reg.repositoryUrl));

// create a GCP storage bucket
const bucket = new gcp.storage.Bucket("my-bucket");
const bucketURL = bucket.url;

// resolve an Output by wrapping it in a Promise
function GetValue<T>(output: Output<T>) {
    return new Promise<T>((resolve, reject) => {
        output.apply(value => {
            resolve(value);
        });
    });
}

(async () => {
    fs.writeFileSync("./PulumiOutput_Public.json", JSON.stringify({
        registryURL: await GetValue(registryUrl),
        bucketURL: await GetValue(bucketURL),
    }, null, "\t"));
})();
To clarify, this approach only works when you're doing an actual deployment (i.e. pulumi up), not merely a preview (as explained here).
That's good enough for my use case, though, as I just want a way to store the registry URL and such after each deployment, so that other scripts in my project know where to find the latest version.
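If you only need the values after a deployment, an alternative that avoids writing the file yourself is to export them as stack outputs and read them with the Pulumi CLI:
// index.ts
export const registryURL = registryUrl;
export const bucketURL = bucket.url;
After pulumi up, other scripts can run pulumi stack output registryURL to get the resolved value.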

Short Answer
You can specify the physical name of your LogGroup by specifying the name input, and you can construct this from the API Gateway id output using pulumi.interpolate. You must use a static string as the first argument to your resource. I would recommend using the same name you're providing to your API Gateway resource as the name for your Log Group. Here's an example:
const apiGatewayToSqsQueueRestApi = new aws.apigateway.RestApi("API-Gateway-Execution");
const cloudWatchLogGroup = new aws.cloudwatch.LogGroup(
    "API-Gateway-Execution", // this is the logical name and must be a static string
    {
        name: pulumi.interpolate`API-Gateway-Execution-Logs_${apiGatewayToSqsQueueRestApi.id}/${stageName}`, // this is the physical name and can be constructed from other resource outputs
    },
);
Longer Answer
The first argument to every resource type in Pulumi is the logical name, which Pulumi uses to track the resource internally from one deployment to the next. By default, Pulumi auto-names the physical resources from this logical name. You can override this behavior by specifying your own physical name, typically via a name input to the resource. More information on resource names and auto-naming is here.
The specific issue here is that logical names cannot be constructed from other resource outputs. They must be static strings. Resource inputs (such as name) can be constructed from other resource outputs.
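For example (a sketch; the random suffix in the first comment is chosen by Pulumi and varies per deployment):
import * as aws from "@pulumi/aws";

// Auto-named: the physical bucket name becomes something like "my-bucket-a1b2c3d"
const autoNamed = new aws.s3.Bucket("my-bucket");

// Explicitly named: for S3 buckets the physical-name input is called "bucket",
// so the physical name here is exactly "my-company-assets"
const explicitlyNamed = new aws.s3.Bucket("assets", { bucket: "my-company-assets" });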

Encountered a similar issue recently; adding this for anyone who comes looking.
For Pulumi Python, some policies require the input to be stringified JSON. Say you're writing an SQS queue and a DLQ for it; you might initially write something like this:
import json
import pulumi_aws

dlq = pulumi_aws.sqs.Queue("dlq")
queue = pulumi_aws.sqs.Queue(
    "queue",
    # broken: json.dumps cannot serialize the Output held in dlq.arn
    redrive_policy=json.dumps({
        "deadLetterTargetArn": dlq.arn,
        "maxReceiveCount": "3"
    })
)
The issue we see here is that the json lib errors out, stating that type Output cannot be serialized. When you print() dlq.arn, you'd see a memory address for it, like <pulumi.output.Output object at 0x10e074b80>.
In order to work around this, we have to leverage pulumi's Output helpers and write a callback function:
import json
import pulumi_aws
from pulumi import Output

def render_redrive_policy(arn):
    # runs only once the ARN has resolved to a plain string
    return json.dumps({
        "deadLetterTargetArn": arn,
        "maxReceiveCount": "3"
    })

dlq = pulumi_aws.sqs.Queue("dlq")
queue = pulumi_aws.sqs.Queue(
    "queue",
    redrive_policy=Output.all(arn=dlq.arn).apply(
        lambda args: render_redrive_policy(args["arn"])
    )
)
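For comparison, the equivalent in TypeScript is a single apply call, since JSON.stringify runs only after the ARN has resolved (a sketch with hypothetical resource names):
import * as aws from "@pulumi/aws";

const dlq = new aws.sqs.Queue("dlq");
const queue = new aws.sqs.Queue("queue", {
    // serialize inside apply, once dlq.arn has resolved to a plain string
    redrivePolicy: dlq.arn.apply((arn) => JSON.stringify({
        deadLetterTargetArn: arn,
        maxReceiveCount: "3",
    })),
});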

Related

How exactly does the ipfs cat method find and display the contents of files using a CID by making use of the DHT?

I have done a lot of research on the internet to learn how exactly the ipfs cat and get methods find and download files from other peers using a CID. I want to fully understand how this process works: "The cat method first searches your own node for the file requested, and if it can't find it there, it will attempt to find it on the broader IPFS network" (https://proto.school/regular-files-api/04).
This is the ipfs source code for cat:
async function * cat (ipfsPath, options = {}) {
    ipfsPath = normalizeCidPath(ipfsPath)

    if (options.preload !== false) {
        const pathComponents = ipfsPath.split('/')
        preload(CID.parse(pathComponents[0]))
    }

    const file = await exporter(ipfsPath, repo.blocks, options)

    // File may not have unixfs prop if small & imported with rawLeaves true
    if (file.type === 'directory') {
        throw new Error('this dag node is a directory')
    }

    if (!file.content) {
        throw new Error('this dag node has no content')
    }

    yield * file.content(options)
}
I deduce that the two important arguments that allow for peer routing and file fetching are repo.blocks and preload. repo.blocks is created during ipfs.create() and then passed as a parameter to ipfs.createCat(), the method that actually creates the cat method. preload is also created by ipfs.create() and passed as an argument to ipfs.createCat() so that it can be used in ipfs.cat(). What confuses me most is which of preload or repo.blocks is actually responsible for CID querying. I analyzed the underlying methods for this part of cat:
const pathComponents = ipfsPath.split('/')
preload(CID.parse(pathComponents[0]))
and learned that this is the part of ipfs.cat that makes HTTP connections to other peers. However, this part:
const file = await exporter(ipfsPath, repo.blocks, options)
includes sub-methods like
const block = await blockstore.get(cid, options);
const node = dagPb.decode(block);
which also seem to be related to CID querying through the use of distributed hash tables. blockstore.get did not make use of any methods that seemed to connect to other peers or search for peers that have a CID, but I am still very confused about whether these methods have any relation to CID querying. I would highly appreciate any help on how the cat method works under the hood from someone who is an expert in ipfs, or at least resources I can use to learn the material myself.
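(For context, the generator above is consumed like this; the instance and CID below are placeholders:)
import { create } from 'ipfs-core'

const ipfs = await create()
const decoder = new TextDecoder()
let data = ''
// cat yields the file's content as Uint8Array chunks
for await (const chunk of ipfs.cat('QmPlaceholderCid')) {
    data += decoder.decode(chunk, { stream: true })
}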

How to pass Pulumi's Output<T> to the container definition of a task within ECS?

A containerDefinition within a Task Definition needs to be provided as a single valid JSON document. I'm creating a generic ECS service that should handle dynamic data. Here is the code:
genericClientService(environment: string, targetGroupArn: Output<string>) {
    return new aws.ecs.Service(`${this.domainName}-client-service-${environment}`, {
        cluster: this.clientCluster.id,
        taskDefinition: new aws.ecs.TaskDefinition(`${this.domainName}-client-${environment}`, {
            family: `${this.domainName}-client-${environment}`,
            containerDefinitions: JSON.stringify(
                clientTemplate(
                    this.defaultRegion,
                    this.domainName,
                    this.taskEnvVars?.filter((object: { ENVIRONMENT: string }) => object.ENVIRONMENT === environment),
                    this.ecrRepositories
                )
            ),
            cpu: "256",
            executionRoleArn: taskDefinitionRole.arn,
            memory: "512",
            networkMode: "awsvpc",
            requiresCompatibilities: ["FARGATE"],
        }).arn,
        desiredCount: 1,
        ...
Information is needed from an already-built resource, this.ecrRepositories, which represents a list of the ECR repositories needed. The problem is that if you retrieve a repository URL with the necessary apply() method, it returns an Output<string>. Normally this would be fine, but since containerDefinitions needs to be a valid JSON document, Pulumi can't handle it, because calling toJSON on an Output<T> is not supported:
Calling [toJSON] on an [Output<T>] is not supported. To get the value of an Output as a JSON value or JSON string consider either: 1: o.apply(v => v.toJSON()) 2: o.apply(v => JSON.stringify(v)) See https://pulumi.io/help/outputs for more details. This function may throw in a future version of @pulumi/pulumi.
Neither of the suggested workarounds above will work, as the dynamically passed variables are wrapped within a toJSON function callback. Because of this, it won't matter how you pass resource information, since it will always be an Output<T>.
Is there a way to deal with this issue?
Assuming clientTemplate works correctly and the error happens in the snippet that you shared, you should be able to solve it with:
containerDefinitions: pulumi.all(
    clientTemplate(
        this.defaultRegion,
        this.domainName,
        this.taskEnvVars?.filter((object: { ENVIRONMENT: string }) => object.ENVIRONMENT === environment),
        this.ecrRepositories
    )).apply(JSON.stringify),
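The idea: pulumi.all lifts an array that contains Output values into a single Output of a plain array, so JSON.stringify only runs after every value has resolved. A self-contained sketch of the same pattern, with a hypothetical ECR repository standing in for this.ecrRepositories:
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const repo = new aws.ecr.Repository("client-repo");

const containerDefinitions = pulumi
    .all([repo.repositoryUrl])
    .apply(([repositoryUrl]) => JSON.stringify([{
        name: "client",
        image: `${repositoryUrl}:latest`,
        essential: true,
    }]));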

Storing a JSON value: security threats using a WordPress plugin

I'm performing an audit against OWASP best practices; my goal is to identify all major security threats occurring from when I send the data from the frontend until it is saved in the database.
Context.
JSON data: it's a tree that grows/shrinks according to UI actions; the JSON is formatted for a frontend function.
Frontend: a custom UI; it generates a list of team members in a JS object and appends to/removes from it. The data input is not stored in any HTML elements to prevent XSS; however, I'm not sure if there is any potential XSS in the code:
Function to create the element:
const newTeam = {
    name,
    emoji,
    parent_id: parentTeamId,
    children: [],
};
const newTree = insertTeam( newTeam );
Function to add the element to the nested groups:
export function insertTeam( team, root = tree ) {
    if ( root.id === team.parent_id ) {
        return {
            ...root,
            children: [
                ...root.children,
                {
                    ...team,
                    // Using a simple time-based ID for now.
                    id: `${ root.id }-${ Date.now() }`,
                },
            ],
        };
    }
    return {
        ...root,
        children: root.children.map( ( childTree ) =>
            insertTeam( team, childTree )
        ),
    };
}
The data is stored in a hidden field in a form; the final format looks like this:
var_dump output:
string(756) "{\"id\":1,\"name\":\"MyCustomGroup.\",\"emoji\":\"🐕\",\"parent_id\":null,\"children\":[{\"id\":2,\"name\":\"Food\",\"emoji\":\"🥩\",\"parent_id\":1,\"children\":[]},{\"id\":3,\"name\":\"Canine Therapy\",\"emoji\":\"😌\",\"parent_id\":1,\"children\":[{\"id\":5,\"name\":\"Games\",\"emoji\":\"🎾\",\"parent_id\":3,\"children\":[{\"name\":\"rocket\",\"emoji\":\"🚀\",\"parent_id\":5,\"id\":\"5-1632455609334\",\"children\":[]}]}]},{\"name\":\"frog\",\"emoji\":\"🐸\",\"parent_id\":1,\"id\":\"1-1632456503102\",\"children\":[]},{\"name\":\"bear\",\"emoji\":\"🐻\",\"parent_id\":1,\"id\":\"1-1632456578430\",\"children\":[{\"name\":\"a\",\"emoji\":\"a\",\"parent_id\":\"1-1632456578430\",\"children\":[],\"id\":\"1-1632456578430-1632665530415\"}]}]}"
The backend: the backend is a WordPress plugin. To insert the data I'm using $wpdb->insert to process the passed string, and for cleanup/sanitization I'm using:
wp_kses( $obj, array() )
I'm not an expert in security, but I can detect threats for XSS attacks. What else am I missing? Also, if you have any recommendations, they're welcome. Thanks.
OWASP security standards, as the name suggests, are only a compilation of standard security checks for web applications. In fact, the npm audit command checks for outdated dependencies or known issues; it doesn't accomplish an audit on the fly. Security issues are raised from several sources, like Node.
I think you should try something like SonarQube; it's a great testing tool, and you can start from there.

Read local JSON files when dynamically creating functional tests in Intern

I am creating functional tests dynamically using Intern v4 and Dojo 1.7. To accomplish this, I am assigning registerSuite to a variable and attaching each test to the tests property of registerSuite:
var registerSuite = intern.getInterface('object').registerSuite;
var assert = intern.getPlugin('chai').assert;

// ...........a bunch more code .........

registerSuite.tests['test_name'] = function() {
    // READ JSON FILE HERE
    var JSON = 'filename.json';
    // ....... a bunch more code ........
}
That part is working great. The challenge I am having is that I need to read information from a different JSON file for each test I am dynamically creating. I cannot seem to find a way to read a JSON file while the Dojo JavaScript is running (I want to call it in the registerSuite.tests function where it says // READ JSON FILE HERE). I have tried Dojo's xhr.get, Node's fs, and Intern's this.remote.get; nothing seems to work.
I can get a static JSON file with define(['dojo/text!./generated_tests.json']), but this does not help me, because there are an unknown number of JSON files with unknown filenames, so I don't have the information I would need to reference them in the define block.
Please let me know if my description is unclear. Any help would be greatly appreciated!
Since you're creating functional tests, they'll always run in Node, so you have access to the Node environment. That means you could do something like:
var registerSuite = intern.getPlugin('interface.object').registerSuite;
var assert = intern.getPlugin('chai').assert;

var tests = {};

tests['test_name'] = function () {
    // note: don't name this variable JSON, or it will shadow the global JSON object
    var data = require('filename.json');
    // or require.nodeRequire('filename.json')
    // or JSON.parse(require('fs').readFileSync('filename.json', {
    //     encoding: 'utf8'
    // }))
}

registerSuite('my suite', tests);
Another thing to keep in mind is that assigning values to registerSuite.tests won't (or shouldn't) actually do anything. You'll need to call registerSuite, passing it your suite name and tests object, to actually register the tests.
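Building on that: if the JSON files aren't known in advance, you can enumerate them with Node's fs at suite-construction time and register one test per file. A sketch, assuming the fixtures live in a hypothetical ./test-data directory:
var fs = require('fs');
var path = require('path');

var tests = {};
var dataDir = './test-data'; // assumed location of the generated JSON files

fs.readdirSync(dataDir)
    .filter(function (file) { return path.extname(file) === '.json'; })
    .forEach(function (file) {
        tests['test_' + path.basename(file, '.json')] = function () {
            var data = JSON.parse(fs.readFileSync(path.join(dataDir, file), 'utf8'));
            // ... drive assertions from data ...
        };
    });

registerSuite('my suite', tests);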

chrome native messaging: how to receive > 1MB

What would be a good way to work with Chrome's incoming 1MB limit for native messaging extensions? The data we would be sending to the extension is JSON-serialized GPX, if that matters.
When the original message is >1MB, it seems like this question really has two parts:
how to partition the data on the sending end (i.e. the client)
this part should be pretty trivial. Even if we need to split into separate self-contained complete GPX strings, that is pretty straightforward (a sketch follows the update below).
how to join the <1MB messages back into the original >1MB
is there a standard known solution for this question? We can call background.js (i.e. the function passed to chrome.runtime.onMessageExternal.addListener) once for each <1MB incoming message, but how would we combine the strings from those repeated calls into one response for the extension?
UPDATE 8-18-16:
What we've been doing is just appending each message 'chunk' to a buffer variable in background.js, and not sending anything back to Chrome until disconnection:
var gpxText = "";

port.onMessage.addListener(function(msg) {
    // msg must be a JSON-serialized simple string;
    // append each incoming msg to the collective gpxText string
    // but do not send it to Chrome until disconnection
    // console.log("received " + msg);
    gpxText += msg;
});

port.onDisconnect.addListener(function(msg) {
    if (gpxText != "") {
        sendResponse(JSON.parse(gpxText));
        gpxText = "";
    } else {
        sendResponse({status: 'error', message: 'something really bad happened'});
    }
    // build the response object here with msg, status, error tokens, and always send it
    console.log("disconnected");
});
We will have to make that appending a bit smarter to handle and send both status and message keys/values, but that should be easy.
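For the partitioning side (part 1 of the question, referenced above), a fixed-size split is enough, since the receiver just concatenates. A sketch; the 512KB chunk size is an assumption chosen to stay under the limit, which applies to bytes, so leave headroom for multi-byte characters:
var CHUNK_SIZE = 512 * 1024; // assumed; comfortably under Chrome's 1MB limit

function splitIntoChunks(gpxText) {
    var chunks = [];
    for (var i = 0; i < gpxText.length; i += CHUNK_SIZE) {
        chunks.push(gpxText.slice(i, i + CHUNK_SIZE));
    }
    return chunks;
}
// the native host would then write each chunk as its own
// length-prefixed native messaging message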
I have this same issue and have been scouring the web for the past couple of days to figure out what to do. In my application, I am currently shipping a JSON string over to the background script in chunks, having to create a subprotocol to handle this special case. For example, my initial request might look like:
{action:"getImage",guid:"123"}
and the response for <1MB might look like:
{action:"getImage",guid:"123",status:"success",return:"ABBA..."}
where ABBA... represents a base64 encoding of the bytes. When >1MB, however, the response will look like:
{action:"getStream",guid:"123",status:"success",return:"{action:\"getImage\",guid:\"123\",return:\"ABBA...",more:true}
and upon receipt of a payload with action === 'getStream', the background page will immediately issue a new request like:
{action:"getStream",guid:"123"}
and the next response might look like:
{action:"getStream",guid:"123",status:"success",return:"...DEAF==",more:false}
so your onMessage handler would look something like:
var streams = {}; // must be initialized, or streams[ guid ] below will throw

function onMessage( e ) {
    var guid = e.guid;
    if ( e.action === 'getStream' ) {
        if ( !streams[ guid ] ) streams[ guid ] = '';
        streams[ guid ] += e[ 'return' ];
        if ( e.more ) {
            postMessage( { action: 'getStream', guid: guid } );
            // no more processing to do here, bail
            return;
        }
        e = JSON.parse( streams[ guid ] );
        streams[ guid ] = null;
    }
    // do something with e as if it was never chunked
    ...
}
It works, but I am somewhat convinced that it is slower than it should be (though this could be due to the slowness of the STDIO signaling and, in my particular app, the additional signaling that has to happen for each new chunk).
Ideally I'd like to stream the file with a more efficient protocol supported natively by Chrome. I looked into WebRTC, but it would mean that I'd need to implement the API in my native messaging host (as best I can tell), which is not an option I'm willing to take on. I played with 'passing' the contents by file, as such:
if ( e.action === 'getFile' ) {
    xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function( e ) {
        if ( e.target.readyState === 4 ) {
            // parse the JSON the native host wrote before re-dispatching
            onMessage( JSON.parse( e.target.responseText ) );
        }
    };
    xhr.open( 'GET', chrome.extension.getURL( '/' + e.file ), true );
    xhr.send();
    return;
}
where I have my native message host write a .json file out to the extension's install directory, and it seems to work, but there is no way for me to reliably derive the path (without fudging things and hoping for the best), because as best I can tell, the location of the extension's install path is determined by your Chrome user profile, and there's no API I could find to give me that path. Additionally, there's a 'version' folder created under your extension id which includes an _0 suffix that I don't know how to calculate (is the _0 constant for some future use? does it tick up when an extension is published anew to the web store but the version is not adjusted?).
At this point I'm out of ideas and I'm hoping someone will stumble across this question with some guidance.