What would be a good way to work with Chrome's incoming 1MB limit for native messaging extensions? The data that we would be sending to the extension is json-serialized gpx, if that matters.
When the original message is >1MB, it seems like this question really has two parts:
how to partition the data on the sending end (i.e. the client)
this part should be pretty trivial. Even if we need to split into separate self-contained complete gpx strings, that is pretty straightforward (see the sketch after this list).
how to join the <1MB messages back into the original >1MB message
is there a standard, known solution for this question? We can call background.js (i.e. the function passed to chrome.runtime.onMessageExternal.addListener) once for each <1MB incoming message, but how would we combine the strings from those repeated calls into one response for the extension?
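For the first part, splitting is just a matter of slicing the serialized string into pieces that are safely under the limit, in whatever language the native host is written in. A rough sketch in JavaScript (the chunk size, gpxObject, and sendChunk are placeholders for however your host actually frames its native-messaging output):
// split a large serialized gpx/JSON string into chunks safely under Chrome's 1MB limit
function splitIntoChunks(serialized, chunkSize) {
    var chunks = [];
    for (var i = 0; i < serialized.length; i += chunkSize) {
        chunks.push(serialized.slice(i, i + chunkSize));
    }
    return chunks;
}

// leave generous headroom below 1MB for JSON quoting/escaping overhead
var chunks = splitIntoChunks(JSON.stringify(gpxObject), 900 * 1024);
chunks.forEach(function (chunk) {
    sendChunk(chunk); // placeholder: write one native-messaging frame per chunk
});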
UPDATE 8-18-16:
What we've been doing is just appending each message 'chunk' to a buffer variable in background.js, and not sending it back to Chrome until disconnection:
var gpxText="";
port.onMessage.addListener(function(msg) {
// msg must be a JSON-serialized simple string;
// append each incoming msg to the collective gpxText string
// but do not send it to Chrome until disconnection
// console.log("received " + msg);
gpxText+=msg;
});
port.onDisconnect.addListener(function(msg) {
if (gpxText!="") {
sendResponse(JSON.parse(gpxText));
gpxText="";
} else {
sendResponse({status: 'error', message: 'something really bad happened'});
}
// build the response object here with msg, status, error tokens, and always send it
console.log("disconnected");
});
We will have to make that appending a bit smarter to handle and send both status and message keys/values, but that should be easy.
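As a rough sketch (the status/message shape here is just our own convention, not anything Chrome defines), the disconnect handler could wrap the re-assembled payload like this:
port.onDisconnect.addListener(function () {
    var response;
    if (gpxText !== "") {
        try {
            // wrap the re-assembled payload in our own status/message envelope
            response = { status: 'ok', message: JSON.parse(gpxText) };
        } catch (e) {
            response = { status: 'error', message: 'could not parse re-assembled JSON: ' + e.message };
        }
    } else {
        response = { status: 'error', message: 'no data received from native host' };
    }
    gpxText = "";
    sendResponse(response);
    console.log("disconnected");
});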
I have this same issue and have been scouring the web for the past couple of days to figure out what to do. In my application, I am currently shipping a JSON string over to the background script in chunks, having had to create a subprotocol to handle this special case. For example, my initial request might look like:
{action:"getImage",guid:"123"}
and the response for <1MB might look like:
{action:"getImage",guid:"123",status:"success",return:"ABBA..."}
where ABBA... represents a base64 encoding of the bytes. when >1MB, however, the response will look like:
{action:"getStream",guid:"123",status:"success",return:"{action:\"getImage\",guid:\"123\",return:\"ABBA...",more:true}
and upon receipt of a payload with action === 'getStream' and more === true, the background page will immediately issue a new request like:
{action:"getStream",guid:"123"}
and the next response might look like:
{action:"getStream",guid:"123",status:"success",return:"...DEAF==",more:false}
so your onMessage handler would look something like:
var streams = {};
function onMessage( e ) {
var guid = e.guid;
if ( e.action === 'getStream' ) {
if ( !streams[ guid ] ) streams[ guid ] = '';
streams[ guid ] += e[ 'return' ];
if ( e.more ) {
postMessage( { action: 'getStream', guid: guid } );
// no more processing to do here, bail
return;
}
e = JSON.parse( streams[ guid ] );
streams[ guid ] = null;
}
// do something with e as if it was never chunked
...
}
It works, but I am somewhat convinced that it is slower than it should be (though this could be due to the inherent sluggishness of the stdio signaling and, in my particular app, the additional signaling that has to happen for each new chunk).
Ideally I'd like to stream the file in a more efficient protocol supported natively by Chrome. I looked into WebRTC, but it would mean that I'd need to implement the API into my native messaging host (as best I can tell), which is not an option I'm willing to take on. I played with 'passing' the contents by file as such:
if ( e.action === 'getFile' ) {
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function( e ) {
if ( e.target.readyState === 4 ) {
onMessage( e.target.responseText );
}
};
xhr.open( 'GET', chrome.extension.getURL( '/' + e.file ), true );
xhr.send();
return;
}
where I have my native messaging host write a .json file out to the extension's install directory, and it seems to work, but there is no way for me to reliably derive the path (without fudging things and hoping for the best), because as best I can tell the location of the extension's install path is determined by your Chrome user profile, and there's no API I could find to give me that path. Additionally, there's a 'version' folder created under your extension ID which includes an _0 that I don't know how to calculate (is the _0 constant for some future use? Does it tick up when an extension is published anew to the web store but the version is not adjusted?).
At this point I'm out of ideas and I'm hoping someone will stumble across this question with some guidance.
Related
I am working on creating an AWS API Gateway. I am trying to create a CloudWatch Log group and name it API-Gateway-Execution-Logs_${restApiId}/${stageName}. I have no problem with the REST API creation.
My issue is in converting restApi.id, which is of type pulumi.Output, to a string.
I have tried these two versions, which are proposed in their PR #2496:
const restApiId = apiGatewayToSqsQueueRestApi.id.apply((v) => `${v}`);
const restApiId = pulumi.interpolate `${apiGatewayToSqsQueueRestApi.id}`
Here is the code where it is used:
const cloudWatchLogGroup = new aws.cloudwatch.LogGroup(
`API-Gateway-Execution-Logs_${restApiId}/${stageName}`,
{},
);
stageName is just a string.
I have also tried to apply again, like:
const restApiIdString = restApiId.apply((v) => v);
I always get this error from pulumi up:
aws:cloudwatch:LogGroup API-Gateway-Execution-Logs_Calling [toString] on an [Output<T>] is not supported.
Please help me convert Output to string
@Cameron answered the naming question; I want to answer the question in your title.
It's not possible to convert an Output<string> to string, or any Output<T> to T.
Output<T> is a container for a future value T which may not be resolved even after the program execution is over. For instance, your restApiId is generated by AWS at deployment time, so if you run your program in preview, there's no value for restApiId yet.
Output<T> is like a Promise<T> which will be eventually resolved, potentially after some resources are created in the cloud.
Therefore, the only operations with Output<T> are:
Convert it to another Output<U> with apply(f), where f: T -> U
Assign it to an Input<T> to pass it to another resource constructor
Export it from the stack
Any value manipulation has to happen within an apply call.
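For example, here is a small sketch (resource and variable names are just illustrative) of those three operations:
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const restApi = new aws.apigateway.RestApi("my-api");

// 1. derive a new Output<string> from the id with apply
const logGroupName = restApi.id.apply(id => `API-Gateway-Execution-Logs_${id}/prod`);

// ...or the equivalent shorthand using interpolate
const logGroupName2 = pulumi.interpolate`API-Gateway-Execution-Logs_${restApi.id}/prod`;

// 2. pass the Output directly as an Input to another resource
const logGroup = new aws.cloudwatch.LogGroup("my-api-logs", { name: logGroupName });

// 3. export it from the stack
export const apiId = restApi.id;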
So long as the Output is resolvable while the Pulumi script is still running, you can use an approach like the one below:
import {Output} from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import * as fs from "fs";
// create a GCP registry
const registry = new gcp.container.Registry("my-registry");
const registryUrl = registry.id.apply(_=>gcp.container.getRegistryRepository().then(reg=>reg.repositoryUrl));
// create a GCP storage bucket
const bucket = new gcp.storage.Bucket("my-bucket");
const bucketURL = bucket.url;
function GetValue<T>(output: Output<T>) {
return new Promise<T>((resolve, reject)=>{
output.apply(value=>{
resolve(value);
});
});
}
(async()=>{
fs.writeFileSync("./PulumiOutput_Public.json", JSON.stringify({
registryURL: await GetValue(registryUrl),
bucketURL: await GetValue(bucketURL),
}, null, "\t"));
})();
To clarify, this approach only works when you're doing an actual deployment (i.e. pulumi up), not merely a preview (as explained here).
That's good enough for my use-case though, as I just want a way to store the registry-url and such after each deployment, for other scripts in my project to know where to find the latest version.
Short Answer
You can specify the physical name of your LogGroup by specifying the name input and you can construct this from the API Gateway id output using pulumi.interpolate. You must use a static string as the first argument to your resource. I would recommend using the same name you're providing to your API Gateway resource as the name for your Log Group. Here's an example:
const apiGatewayToSqsQueueRestApi = new aws.apigateway.RestApi("API-Gateway-Execution");
const cloudWatchLogGroup = new aws.cloudwatch.LogGroup(
"API-Gateway-Execution", // this is the logical name and must be a static string
{
name: pulumi.interpolate`API-Gateway-Execution-Logs_${apiGatewayToSqsQueueRestApi.id}/${stageName}` // this the physical name and can be constructed from other resource outputs
},
);
Longer Answer
The first argument to every resource type in Pulumi is the logical name and is used for Pulumi to track the resource internally from one deployment to the next. By default, Pulumi auto-names the physical resources from this logical name. You can override this behavior by specifying your own physical name, typically via a name input to the resource. More information on resource names and auto-naming is here.
The specific issue here is that logical names cannot be constructed from other resource outputs. They must be static strings. Resource inputs (such as name) can be constructed from other resource outputs.
Encountered a similar issue recently. Adding this for anyone that comes looking.
For Pulumi Python, some policies require the input to be stringified JSON. Say you're writing an SQS queue and a DLQ for it; you may initially write something like this:
import json

import pulumi_aws

dlq = pulumi_aws.sqs.Queue("dlq")
queue = pulumi_aws.sqs.Queue(
    "queue",
    redrive_policy=json.dumps({
        "deadLetterTargetArn": dlq.arn,
        "maxReceiveCount": "3"
    })
)
The issue we see here is that the json lib errors out, stating that type Output cannot be serialized. When you print() dlq.arn, you'd see a memory address for it like <pulumi.output.Output object at 0x10e074b80>.
In order to work around this, we have to leverage the Output class and write a callback function:
import json

import pulumi_aws
from pulumi import Output

def render_redrive_policy(arn):
    return json.dumps({
        "deadLetterTargetArn": arn,
        "maxReceiveCount": "3"
    })

dlq = pulumi_aws.sqs.Queue("dlq")
queue = pulumi_aws.sqs.Queue(
    "queue",
    redrive_policy=Output.all(arn=dlq.arn).apply(
        lambda args: render_redrive_policy(args["arn"])
    )
)
I am writing a private plugin for NodeBB (open forum software). In NodeBB's webserver.js file there is a line that seems to be hogging all incoming JSON data.
app.use(bodyParser.json(jsonOpts));
I am trying to get at the raw body of incoming JSON data for one of my endpoints. However, the challenge is that I cannot remove or modify the line above.
The following code works ONLY if I temporarily remove the line above.
var rawBodySaver = function (req, res, buf, encoding) {
if (buf && buf.length) {
req.rawBody = buf.toString(encoding || 'utf8');
}
}
app.use(bodyParser.json({ verify: rawBodySaver }));
However, as soon as I put the app.use(bodyParser.json(jsonOpts)); middleware back into the webserver.js file, it stops working. So it seems like body-parser only runs the first parser that matches the incoming content type and then skips all the rest?
How can I get around that? I could not find any information in their official documentation.
Any help is greatly appreciated.
**Update**
The problem I am trying to solve is to correctly handle an incoming Stripe webhook event. The official Stripe documentation suggests I do the following:
// Match the raw body to content type application/json
app.post('/webhook', bodyParser.raw({type: 'application/json'}), (request, response) => {
  const sig = request.headers['stripe-signature'];
  let event;

  try {
    event = stripe.webhooks.constructEvent(request.body, sig, endpointSecret);
  } catch (err) {
    return response.status(400).send(`Webhook Error: ${err.message}`);
  }
  // ...handle the event, then acknowledge receipt
  response.json({received: true});
});
Both methods, the original at the top of this post and the official Stripe-recommended way, construct the Stripe event correctly, but only if I remove the middleware in webserver.js. So my understanding now is that you cannot have multiple middleware handle the same incoming data. I don't have much wiggle room when it comes to the first middleware, except being able to modify the argument (jsonOpts) that is passed to it, which comes from a .json file. I tried adding a verify field but I couldn't figure out how to add a function as its value. I hope this makes sense, and sorry for not stating what problem I am trying to solve initially.
The only solution I can find without modifying the NodeBB code is to insert your middleware in a convenient hook (that will be later than you want) and then hack into the layer list in the app router to move that middleware earlier in the app layer list to get it in front of the things you want to be in front of.
This is a hack so if Express changes their internal implementation at some future time, then this could break. But, if they ever changed this part of the implementation, it would likely only be in a major revision (as in Express 4 ==> Express 5) and you could just adapt the code to fit the new scheme or perhaps NodeBB will have given you an appropriate hook by then.
The basic concept is as follows:
Get the router you need to modify. It appears it's the app router you want for NodeBB.
Insert your middleware/route as you normally would to allow Express to do all the normal setup for your middleware/route and insert it in the internal Layer list in the app router.
Then, reach into the list, take it off the end of the list (where it was just added) and insert it earlier in the list.
Figure out where to put it earlier in the list. You probably don't want it at the very start of the list because that would put it ahead of some helpful system middleware that makes things like query parameter parsing work. So, the code skips past the built-in middleware names it knows (query, expressInit) and inserts your layer right before the first middleware it doesn't recognize.
Here's the code for a function to insert your middleware.
function getAppRouter(app) {
// History:
// Express 4.x throws when accessing app.router and the router is on app._router
// But, the router is lazy initialized with app.lazyrouter()
// Express 5.x again supports app.router
// And, it handles the lazy construction of the router for you
let router;
try {
router = app.router; // Works for Express 5.x, Express 4.x will throw when accessing
} catch(e) {}
if (!router) {
// Express 4.x
if (typeof app.lazyrouter === "function") {
// make sure router has been created
app.lazyrouter();
}
router = app._router;
}
if (!router) {
throw new Error("Couldn't find app router");
}
return router;
}
// insert a method on the app router near the front of the list
function insertAppMethod(app, method, path, fn) {
let router = getAppRouter(app);
let stack = router.stack;
// allow function to be called with no path
// as insertAppMethod(app, method, fn);
if (typeof path === "function") {
fn = path;
path = null;
}
// add the handler to the end of the list
if (path) {
app[method](path, fn);
} else {
app[method](fn);
}
// now remove it from the stack
let layerObj = stack.pop();
// now insert it near the front of the stack,
// but after a couple pre-built middleware's installed by Express itself
let skips = new Set(["query", "expressInit"]);
for (let i = 0; i < stack.length; i++) {
if (!skips.has(stack[i].name)) {
// insert it here before this item
stack.splice(i, 0, layerObj);
break;
}
}
}
You would then use this to insert your method from any NodeBB hook that provides you the app object sometime during startup. It will create your /webhook route handler and then insert it earlier in the layer list (before the other body-parser middleware).
let rawMiddleware = bodyParser.raw({type: 'application/json'});
insertAppMethod(app, 'post', '/webhook', (request, response, next) => {
rawMiddleware(request, response, (err) => {
if (err) {
next(err);
return;
}
const sig = request.headers['stripe-signature'];
let event;
try {
event = stripe.webhooks.constructEvent(request.body, sig, endpointSecret);
// you need to either call next() or send a response here
} catch (err) {
return response.status(400).send(`Webhook Error: ${err.message}`);
}
});
});
The bodyParser.json() middleware does the following:
Check the content type of an incoming request to see if it is application/json.
If it is that type, then read the body from the incoming stream to get all the data from the stream.
When it has all the data from the stream, parse it as JSON and put the result into req.body so follow-on request handlers can access the already-read and already-parsed data there.
Because it reads the data from the stream, there is no longer any data left in the stream. Unless it saves the raw data somewhere (I haven't looked to see if it does), the original raw data is gone - it's been read from the stream already. This is why you can't have multiple different middleware all trying to process the same request body. Whichever one goes first reads the data from the incoming stream, and then the original data is no longer there in the stream.
To help you find a solution, we need to know what end problem you're really trying to solve. You will not be able to have two middlewares both looking for the same content type and both reading the request body. But you could replace bodyParser.json() with a single middleware that does both what it does now and whatever else you need - in the same middleware, not in separate middleware.
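For illustration, here is a rough sketch of such a combined middleware using body-parser's verify option (which runs on the raw buffer before parsing); the jsonOpts object here is just a placeholder for whatever options NodeBB already passes in:
const bodyParser = require('body-parser');

const jsonOpts = {}; // placeholder for the real options object

app.use(bodyParser.json(Object.assign({}, jsonOpts, {
    // verify runs before parsing, so we can stash the raw bytes
    verify: function (req, res, buf, encoding) {
        if (buf && buf.length) {
            req.rawBody = buf.toString(encoding || 'utf8');
        }
    }
})));
// downstream handlers now get the parsed body in req.body and the raw body in req.rawBody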
What I'm trying to do is save all the static content of my website in a JSON file that I want to read in Angular. I was thinking about two ways to do it:
Call a json file directly from AngularJS
Send the json file from node to AngularJS
I don't know how to do either of them. I've tried the second way like this (no luck):
Node.js code:
app.get( '/content', function ( require, response ) {
response.setHeader('Content-Type', 'application/json');
response.json( readJSONFile( './client/content.json', function ( err, json ) {
if ( err ) {
throw err;
}
console.log( json );
} )
)
} );
function readJSONFile( filename, callback ) {
require( "fs" ).readFile( filename, function ( err, data ) {
if ( err ) {
callback( err );
return;
}
try {
callback( null, JSON.parse( data ) );
} catch ( exception ) {
callback( exception );
}
} );
}
When I access localhost:3000/content and check the network tab in the browser, the file is sent, but I can't see the data in the preview tab; I'm not sure what I'm doing wrong...
Also, how could I make a service to get this data from the server in AngularJS?
So your question can be broken down into 2 (quite) separate questions. Let's go one by one:
1. How to serve a json file from a Node.js app
The reason your code didn't work was that you did things in the "wrong order". Your readJSONFile function is an asynchronous one: the content of the JSON file is only available after a while, when the callback of Node's readFile function is invoked. Here, you are trying to use the immediate return value of readJSONFile as a parameter to response.json, which is undefined. And that is why you couldn't see any data in the preview tab.
The solutions:
a. You can use readFileSync instead for simplicity:
app.get( '/content', function ( request, response ) {
    response.json( JSON.parse( require('fs').readFileSync('./client/content.json', 'utf8') ) );
} );
b. Or to live by Node.js' asynchrony signature (also a better practice), you could do something like this:
app.get( '/content', function ( request, response ) {
readJSONFile( './client/content.json', function ( err, json ) {
if ( err ) {
throw err;
}
return response.json( json );
} );
} );
I'd suggest you google readFile and readFileSync to get a more solid understanding of these native Node.js APIs. Either way, both approaches should give you some data in the network preview tab. Now what's left is the next question:
2. How to make requests to an HTTP end-point from AngularJS
To be honest this is quite a "big question", not because it's difficult or too complicated to explain, but because there are many different ways to do it. The quick answer would be to tell you to use either Angular's native $http service, or ngResource, or the versatile Restangular; and while I could go on and on about those here in this post, I think there have been much better answers on StackOverflow about this topic, for example this one.
Again, I'd personally recommend that you research a bit more on the 3 keywords above. If things still look murky afterwards, you can always come back here and post a more detailed question, for example: "How can I make an HTTP request to the server from AngularJS using <NAME>", with <NAME> being $http, ngResource or Restangular, which I (or any other active user) would then be more than happy to answer!
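For completeness, a minimal sketch of such a service using $http against the /content endpoint above (the module and service names are made up):
angular.module('myApp').factory('ContentService', ['$http', function ($http) {
    return {
        // GET /content and resolve with the parsed JSON body
        getContent: function () {
            return $http.get('/content').then(function (response) {
                return response.data;
            });
        }
    };
}]);

// usage in a controller:
// ContentService.getContent().then(function (content) { $scope.content = content; });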
My web application should be able to store, update, and also load JSON data on a server.
However, the data may contain some big arrays where, each time they are saved, only a new entry has been appended.
My solution:
send updates to the server with a key path within the JSON data.
Currently I'm sending the data with an XMLHttpRequest via jQuery, like this:
/**
* Asynchronously writes a file on the server (via PHP script).
* @param {String} file complete filename (path/to/file.ext)
* @param content content that should be written. May be a JS object.
* @param {Array} updatePath (optional), JSON only. Not the entire file is written;
* only the given path within the object is updated. By default the path is expected to contain an array and the
* content is appended to it.
* @param {String} key (optional) in combination with updatePath. If a key is provided, the content is written
* to a field with that name in the object located at updatePath in the old content.
*
* @returns {Promise}
*/
io.write = function (file, content, updatePath, key) {
if (utils.isObject(content)) content = JSON.stringify(content, null, "\t");
file = io.parsePath(file);
var data = {f: file, t: content};
if (typeof updatePath !== "undefined") {
if (Array.isArray(updatePath)) updatePath = updatePath.join('.');
data.a = updatePath;
if (typeof key !== "undefined") data.k = key;
}
return new Promise(function (resolve, reject) {
$.ajax({
type: 'POST',
url: io.url.write,
data: data,
success: function (data) {
data = data.split("\n");
if (data[0] == "ok") resolve(data[1]);
else reject(new Error((data[0] == "error" ? "PHP error:\n" : "") + data.slice(1).join("\n")));
},
cache: false,
error: function (j, t, e) {
reject(e);
//throw new Error("Error writing file '" + file + "'\n" + JSON.stringify(j) + " " + e);
}
});
});
};
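For context, here is roughly how this gets called (the file names, paths, and values below are made up for illustration), following the JSDoc above:
// overwrite a whole file with a serialized object
io.write('data/session.json', {user: 'anna', page: 3});

// append an entry to the array at tracks.0.points inside data/route.json
io.write('data/route.json', {lat: 48.1, lon: 11.5}, ['tracks', '0', 'points']);

// write a single field: set 'lastSaved' on the object at meta
io.write('data/route.json', Date.now(), ['meta'], 'lastSaved');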
On the server, a PHP script manages the rest like this:
receives the data and checks if it's valid
checks if the given file path is writable
if the file exists and is .json:
read it and decode the JSON
return an error on invalid JSON
if there is no update path given:
just write the data
if there is an update path given:
return an error if the update path in the JSON data can't be traversed (or the file didn't exist)
update the data at the update path
write the pretty-printed JSON to file
However, I'm not perfectly happy with it, and problems have kept coming up over the last few weeks.
My Questions
Generally: how would you approach this problem? Alternative suggestions, databases? Any libraries that could help?
Note: I would prefer solutions that just use PHP or some standard Apache stuff.
One problem was that sometimes multiple writes on the same file were triggered. To avoid this I used Promises on the client side (wrapped, because I read that jQuery's deferred objects aren't Promises/A+ compliant), but I don't feel 100% sure it is working. Is there a (file) lock in PHP that works across multiple requests?
Every now and then the JSON files break, and it's not clear to me how to reproduce the problem. By the time a file breaks, I don't have a history of what happened. Any general debugging strategies for a client/server saving/loading process like this?
I wrote a comet-enabled web server that does diffs on updates of JSON data structures, for exactly the same reason. The server keeps a few versions of a JSON document and serves clients that are on different versions with the updates they need to get to the most recent version of the JSON data.
Maybe you could reuse some of my code, written in C++ and CoffeeScript: https://github.com/TorstenRobitzki/Sioux
If you have concurrent write access to your data structure, are you sure that whoever writes to the file has the right version of the file in mind when reading it?
I would like to know what I can do to upload attachments in CouchDB using an update function.
Here is an example of my update function to add documents:
function(doc, req){
if (!doc) {
if (!req.form._id) {
req.form._id = req.uuid;
}
req.form['|edited_by'] = req.userCtx.name
req.form['|edited_on'] = new Date();
return [req.form, JSON.stringify(req.form)];
}
else {
return [null, "Use POST to add a document."]
}
}
And an example for removing documents:
function(doc, req){
if (doc) {
for (var i in req.form) {
doc[i] = req.form[i];
}
doc['|edited_by'] = req.userCtx.name
doc['|edited_on'] = new Date();
doc._deleted = true;
return [doc, JSON.stringify(doc)];
}
else {
return [null, "Document does not exist."]
}
}
Thanks for your help.
It is possible to add attachments to a document using an update function by modifying the document's _attachments property. Here's an example of an update function which will add an attachment to an existing document:
function (doc, req) {
// skipping the create document case for simplicity
if (!doc) {
return [null, "update only"];
}
// ensure that the required form parameters are present
if (!req.form || !req.form.name || !req.form.data) {
return [null, "missing required post fields"];
}
// if there isn't an _attachments property on the doc already, create one
if (!doc._attachments) {
doc._attachments = {};
}
// create the attachment using the form data POSTed by the client
doc._attachments[req.form.name] = {
content_type: req.form.content_type || 'application/octet-stream',
data: req.form.data
};
return [doc, "saved attachment"];
}
For each attachment, you need a name, a content type, and body data encoded as base64. The example function above requires that the client sends an HTTP POST in application/x-www-form-urlencoded format with at least two parameters: name and data (a content_type parameter will be used if provided):
name=logo.png&content_type=image/png&data=iVBORw0KGgoA...
To test the update function:
Find a small image and base64 encode it:
$ base64 logo.png | sed 's/+/%2b/g' > post.txt
The sed script encodes + characters so they don't get converted to spaces.
Edit post.txt and add name=logo.png&content_type=image/png&data= to the top of the document.
Create a new document in CouchDB using Futon.
Use curl to call the update function with the post.txt file as the body, substituting in the ID of the document you just created.
curl -X POST -d @post.txt http://127.0.0.1:5984/mydb/_design/myddoc/_update/upload/193ecff8618678f96d83770cea002910
This was tested on CouchDB 1.6.1 running on OSX.
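If the caller is a browser rather than curl, a rough sketch of posting an attachment to this update handler could look like the following (the database, design document, and document ID in the URL are placeholders):
// read a File (e.g. from an <input type="file">), base64-encode it,
// and POST it to the update handler as application/x-www-form-urlencoded data
function uploadAttachment(updateUrl, file) {
    return new Promise(function (resolve, reject) {
        var reader = new FileReader();
        reader.onload = function () {
            // reader.result looks like "data:image/png;base64,AAAA..."; keep only the base64 part
            var base64 = reader.result.split(',')[1];
            var body = 'name=' + encodeURIComponent(file.name) +
                '&content_type=' + encodeURIComponent(file.type || 'application/octet-stream') +
                '&data=' + encodeURIComponent(base64);
            fetch(updateUrl, {
                method: 'POST',
                headers: {'Content-Type': 'application/x-www-form-urlencoded'},
                body: body
            }).then(resolve, reject);
        };
        reader.onerror = reject;
        reader.readAsDataURL(file);
    });
}

// usage (placeholder URL):
// uploadAttachment('http://127.0.0.1:5984/mydb/_design/myddoc/_update/upload/<docid>', someFile);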
Update: @janl was kind enough to provide some details on why this answer can lead to performance and scaling issues. Uploading attachments via an upload handler has two main problems:
The upload handlers are written in JavaScript, so the CouchDB server may have to fork() a couchjs process to handle the upload. Even if a couchjs process is already running, the server has to stream the entire HTTP request to the external process over stdin. For large attachments, the transfer of the request can take significant time and system resources. For each concurrent request to an update function like this, CouchDB will have to fork a new couchjs process. Since the process runtime will be rather long because of what is explained next, you can easily run out of RAM, CPU or the ability to handle more concurrent requests.
After the _attachments property is populated by the upload handler and streamed back to the CouchDB server (!), the server must parse the response JSON, decode the base64-encoded attachment body, and write the binary body to disk. The standard method of adding an attachment to a document -- PUT /db/docid/attachmentname -- streams the binary request body directly to disk and does not require the two processing steps.
The function above will work, but there are non-trivial issues to consider before using it in a highly-scalable system.
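For comparison, the standard attachment PUT mentioned above looks like this (the document ID, revision, and filename are placeholders); the binary body is streamed straight to disk with no update function or base64 step in between:
curl -X PUT http://127.0.0.1:5984/mydb/<docid>/logo.png?rev=<current-rev> \
     -H "Content-Type: image/png" \
     --data-binary @logo.png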