I've implemented a redux-saga effect takeLeading that will ignore subsequent actions while the saga is currently running:
export const takeLeading = (patternOrChannel, saga, ...args) => fork(function*() {
  while (true) {
    const action = yield take(patternOrChannel);
    yield call(saga, ...args.concat(action));
  }
});
I use this for API fetching in my application, where each endpoint in my API has its own action type. So for GET methods it's useful to block if the request has already been dispatched somewhere else in the app. The saga looks like:
return function* () {
  yield all([takeLeading(GET_USER_ID, callApiGen), takeLeading(GET_WIDGET_ID, callApiGen)]);
}
The obvious problem is that if I want to get two different user IDs, the second request will block because it too has action type GET_USER_ID. Short of making a different action type for each possible parameter, is there a way to implement some takeLeadingForFunc(<action>, (action) => <id>, saga) that keeps the concise format of specifying one effect per request type but doesn't block when the <id> is different? I was trying to wrap takeLeading with takeEvery to implement something, but couldn't quite get it.
EDIT:
I got something like this to work:
export const takeLeadingForFunc = (f) => (patternOrChannel, saga, ...args) => fork(function*() {
  let takeLeadings = {};
  while (true) {
    const action = yield take(patternOrChannel);
    if (!(f(action) in takeLeadings)) {
      yield call(saga, ...args.concat(action));
      takeLeadings[f(action)] = yield takeLeading((ac) => f(ac) === f(action) && ac.type === action.type, saga, ...args);
    }
  }
});
Which takes an extractor function f that should return a primitive. This feels kind of hacky, so I was wondering if there's a more idiomatic way to do this.
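For concreteness, here is the kind of alternative shape I'm imagining (a sketch only; takeLeadingBy and keyFn are made-up names, and fork/take/call are standard redux-saga effects): track the in-flight keys in a plain Set and fork per action, so a new key runs immediately while duplicates of a running key are dropped.

import { fork, take, call } from 'redux-saga/effects';

// Hypothetical keyed takeLeading: actions whose key is already in
// flight are ignored; an action with a new key starts its saga at once.
export const takeLeadingBy = (patternOrChannel, keyFn, saga, ...args) =>
  fork(function* () {
    const running = new Set(); // keys with a saga currently in flight
    while (true) {
      const action = yield take(patternOrChannel);
      const key = keyFn(action);
      if (running.has(key)) continue; // leading semantics per key
      running.add(key);
      yield fork(function* () {
        try {
          yield call(saga, ...args.concat(action));
        } finally {
          running.delete(key); // free the key for the next action
        }
      });
    }
  });

Usage would stay one line per endpoint, e.g. takeLeadingBy(GET_USER_ID, (action) => action.payload.id, callApiGen), where the payload shape is an assumption.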
I'm relatively new to Puppeteer and I'm trying to understand the patterns that can be used to build more complex APIs with it. I am building a CLI where I run a WebGL app in Puppeteer and call various functions in it, and with my current implementation I have to copy and paste a lot of setup code.
Usually, in every CLI command I have to set up Puppeteer, set up the app and get access to its api object, and then run an arbitrary command on that api and get the data back in Node.
It looks something like this.
const { page, browser } = await createBrowser(); // Here I set up the browser and add some script tags.

let data;
await page.exposeFunction('extractData', (extracted) => {
  data = extracted; // assign to the outer variable, don't shadow the parameter
});

await page.evaluate(async (input) => {
  // Setup work
  const requestEvent = new CustomEvent('requestAppApi', {
    detail: { api: undefined },
  });
  window.dispatchEvent(requestEvent);
  const api = requestEvent.detail.api;
  // Then I call some arbitrary function that will always return some
  // data, which gets extracted by the exposed function.
  const data = api.arbitraryFunction(input);
  window.extractData(data);
}, input);
What I would like is to wrap all of the setup code in a function, so that I could call it and just specify what to do with the api object once I have it.
My initial idea was to have a function that will take a callback that has this api object as a parameter.
const { page, browser } = await createBrowser();
await page.exposeFunction('runCommand', async (input) =>
  setupApiObject((api) =>
    api.callSomeFunction(input), input)
);
However, this does not work. I understand that Puppeteer requires any communication between the Node context and the browser to be serialized as JSON, and obviously a function can't be. What's tripping me up is that I don't actually want to call these methods in the Node context, just have a way to reuse them. The actual data transfer is already handled by page.exposeFunction.
How would a more experienced puppeteer dev accomplish this?
I'll answer my own question here, since I managed to figure out a way to do it. Basically, you can use page.evaluate to create a function on the window object that can later be reused.
So I did something like:
await page.evaluate(() => {
  window.useApiObject = function (callback) {
    // Perform setup code that produces the api object
    callback(api);
  };
});
Meaning that later on I could use that method in the browser context and avoid redoing the setup code.
await page.evaluate(() => {
  window.useApiObject((api) => {
    api.someMethod();
  });
});
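Putting the two pieces together, a fuller sketch (assuming the CustomEvent handshake from my question is the setup work) might look like:

await page.evaluate(() => {
  window.useApiObject = function (callback) {
    // Setup work: ask the app for its api object via the event handshake
    const requestEvent = new CustomEvent('requestAppApi', {
      detail: { api: undefined },
    });
    window.dispatchEvent(requestEvent);
    callback(requestEvent.detail.api);
  };
});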
onMounted(() => {
  productService.value
    .getProducts()
    .then((data) => (products.value = data));
  console.log(products);
});
When I print products with console.log, here is what I have:
[capture of the console]
I see that the data I want is in _rawValue, but I don't know how to access it.
I tried Object.values(products), console.log(products._rawValue), and console.log(products.rawValue); they print undefined.
Do you know what function to call?
Thanks
There are 2 issues:
#1 - You're using console.log(products), which shows you the reactive object. What you need instead is console.log(products.value), which will only show the value, and that should match the content of data.
#2 - You might find that 👆 now shows an empty result. That's happening because you're calling the console log after the async call starts but before it finishes, so it runs before products has a chance to update. To fix that, you can log as part of the async function:
onMounted(() => {
  productService.value
    .getProducts()
    .then((data) => {
      products.value = data;
      console.log(products.value);
    });
});
If you're using the products inside a template, you don't need to worry about what's before or after the async function since it will re-render the component on change.
Also, you probably don't need to define productService as a ref; the class is likely not something that needs to be reactive, so you can use a simple assignment and then skip the .value when calling getProducts.
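For illustration, a minimal sketch of that suggestion (the ProductService class and its import path are assumptions):

import { ref, onMounted } from 'vue';
import ProductService from './ProductService'; // hypothetical path

const products = ref([]);
const productService = new ProductService(); // plain assignment, no ref()

onMounted(() => {
  productService.getProducts().then((data) => {
    products.value = data;
    console.log(products.value);
  });
});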
With axios, what I do is take the data out with response.data. You could try:
onMounted(() => {
  productService.value.getProducts().then((response) => {
    products.value = response.data;
    console.log(products.value.length);
  });
});
I see that JSON.stringify and JSON.parse are both synchronous.
I would like to know if there is a simple npm library that does this in an asynchronous way.
Thank you
You can make anything "asynchronous" by using Promises:
function asyncStringify(str) {
  return new Promise((resolve, reject) => {
    resolve(JSON.stringify(str));
  });
}
Then you can use it like any other promise:
asyncStringify(str).then(ajaxSubmit);
Note that because the code is not actually asynchronous, the promise will be resolved right away (there's no blocking operation in stringifying a JSON value; it doesn't require any system call).
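To make that concrete, here is a small illustrative timing check: the JSON.stringify call still runs synchronously inside the Promise executor, so the event loop is blocked for essentially the whole measured time:

const big = Array.from({ length: 1e6 }, (_, i) => ({ i }));
console.time('stringify');
asyncStringify(big).then(() => console.timeEnd('stringify'));
// Nothing else on the event loop can run until the stringify finishes.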
You can also use the async/await API if your platform supports it:
async function asyncStringify(str) {
  return JSON.stringify(str);
}
Then you can use it the same way:
asyncStringify(str).then(ajaxSubmit);
// or use the "await" API
const strJson = await asyncStringify(str);
ajaxSubmit(strJson);
Edited: One way of adding true asynchronous parsing/stringifying (maybe because we're parsing something too complex) is to pass the job to another process (or service) and wait for the response.
You can do this in many ways (like creating a new service that exposes a REST API); I will demonstrate here a way of doing it with message passing between processes:
First create a file that will take care of doing the parsing/stringifying. Call it async-json.js for the sake of the example:
// async-json.js
function stringify(value) {
  return JSON.stringify(value);
}

function parse(value) {
  return JSON.parse(value);
}

process.on('message', function(message) {
  let result;
  if (message.method === 'stringify') {
    result = stringify(message.value);
  } else if (message.method === 'parse') {
    result = parse(message.value);
  }
  process.send({ callerId: message.callerId, returnValue: result });
});
All this process does is wait for a message asking it to stringify or parse a JSON value, then respond with the right result.
Now, in your code, you can fork this script and send messages back and forth. Whenever a request is sent, you create a new promise; whenever a response comes back for that request, you resolve the promise:
const fork = require('child_process').fork;
const asyncJson = fork(__dirname + '/async-json.js');
const callers = {};

asyncJson.on('message', function(response) {
  callers[response.callerId].resolve(response.returnValue);
  delete callers[response.callerId]; // clean up the settled caller
});

function callAsyncJson(method, value) {
  const callerId = Math.floor(Math.random() * 1000000);
  const callPromise = new Promise((resolve, reject) => {
    callers[callerId] = { resolve: resolve, reject: reject };
    asyncJson.send({ callerId: callerId, method: method, value: value });
  });
  return callPromise;
}
function JsonStringify(value) {
  return callAsyncJson('stringify', value);
}

function JsonParse(value) {
  return callAsyncJson('parse', value);
}
JsonStringify({ a: 1 }).then(console.log.bind(console));
JsonParse('{ "a": "1" }').then(console.log.bind(console));
Note: this is just one example, but knowing this you can figure out other improvements or other ways to do it. Hope this is helpful.
Check out another npm package:
async-json is a library that provides an asynchronous version of the standard JSON.stringify.
Install:
npm install async-json
Example:
var asyncJSON = require('async-json');

asyncJSON.stringify({ some: "data" }, function (err, jsonValue) {
  if (err) {
    throw err;
  }
  // jsonValue === '{"some":"data"}'
});
Note: I didn't test it; you need to manually check its dependencies and required packages.
By asynchronous I assume you actually mean non-blocking asynchronous - i.e., if you have a large (megabytes large) JSON payload to stringify or parse, you don't want your web server to hard freeze and block newly incoming web requests for 500+ milliseconds while it processes the object.
Option 1
The generic answer is to iterate through your object piece by piece, and to call setImmediate whenever a threshold is reached. This then allows other functions in the event queue to run for a bit.
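To illustrate the idea, here is a toy sketch limited to arrays (not what yieldable-json actually does internally): stringify the items in chunks and hand control back to the event loop between chunks.

function stringifyArrayAsync(arr, chunkSize = 1000) {
  return new Promise((resolve) => {
    const parts = [];
    let i = 0;
    (function step() {
      const end = Math.min(i + chunkSize, arr.length);
      for (; i < end; i++) parts.push(JSON.stringify(arr[i]));
      if (i < arr.length) setImmediate(step); // yield to the event loop
      else resolve('[' + parts.join(',') + ']');
    })();
  });
}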
For JSON (de)serialization, the yieldable-json library does this very well. It does, however, drastically sacrifice processing speed (which is somewhat intentional).
Usage example from the yieldable-json readme:
const yj = require('yieldable-json')
yj.stringifyAsync({key:"value"}, (err, data) => {
  if (!err)
    console.log(data)
})
Option 2
If processing speed is extremely important (such as with real-time data), you may want to consider spawning multiple Node processes instead. I've used the PM2 process manager with great success, although the initial setup was quite daunting. Once it works, however, the final result is magic, and it does not require modifying your source code, just your package.json file. It acts as a proxy, load balancer, and monitoring tool for Node applications. It's somewhat analogous to Docker swarm, but bare metal, and does not require a special client on the server.
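If you'd rather stay in-process, here is a sketch using Node's built-in worker_threads module (a different technique than PM2, offloading the stringify to another thread):

const { Worker, isMainThread, parentPort } = require('worker_threads');

if (isMainThread) {
  function stringifyAsync(value) {
    return new Promise((resolve, reject) => {
      const worker = new Worker(__filename); // re-run this file as a worker
      worker.once('message', resolve);
      worker.once('error', reject);
      worker.postMessage(value);
    });
  }
  stringifyAsync({ a: 1 }).then(console.log); // prints {"a":1}
} else {
  parentPort.once('message', (value) => {
    parentPort.postMessage(JSON.stringify(value)); // done off the main thread
  });
}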
An Immutable object can be an instance of:
Immutable.List
Immutable.Map
Immutable.OrderedMap
Immutable.Set
Immutable.OrderedSet
Immutable.Stack
There is an open ticket to improve the API, which is on the roadmap for 4.0. Until it is implemented, I suggest you use Immutable.Iterable.isIterable() (docs).
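For example, with the v3 API:

const Immutable = require('immutable');

Immutable.Iterable.isIterable(Immutable.Map());  // true
Immutable.Iterable.isIterable(Immutable.List()); // true
Immutable.Iterable.isIterable({});               // false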
Using instanceof is not reliable (e.g. it returns false when different modules use different copies of Immutable.js).
I have learned that using instanceof to determine whether an object is Immutable is unsafe:
Module A:
var Immutable = require('immutable');
module.exports = Immutable.Map({foo: "bar"});
Module B:
var Immutable = require('immutable');
var moduleA = require('moduleA');
moduleA instanceof Immutable.Map // will return false
The Immutable.js API defines the following methods to check if an object is an instance of an Immutable collection:
Map.isMap()
List.isList()
Stack.isStack()
OrderedMap.isOrderedMap()
Set.isSet()
OrderedSet.isOrderedSet()
and
Iterable.isIterable()
The docs for the latter say it returns:
True if an Iterable, or any of its subclasses.
List, Stack, Map, OrderedMap, Set and OrderedSet are all subclasses of Iterable.
Immutable.js has an isImmutable() function since v4.0.0-rc.1:
import { isImmutable, Map, List, Stack } from 'immutable';
isImmutable([]); // false
isImmutable({}); // false
isImmutable(Map()); // true
isImmutable(List()); // true
isImmutable(Stack()); // true
isImmutable(Map().asMutable()); // false
If you use one of the previous versions, you can check if an object is Immutable this way:
Immutable.Iterable.isIterable(YOUR_ENTITY)
because all immutables inherit from the Iterable object
And this way you can find out what type of Immutable Iterable a variable is:
const obj0 = 'xxx';
const obj1 = Immutable.fromJS({x: 'XXX', z: 'ZZZ'});
const obj2 = Immutable.fromJS([ {x: 'XXX'}, {z: 'ZZZ'}]);
const types = ['List', 'Stack', 'Map', 'OrderedMap', 'Set', 'OrderedSet'];
const type0 = types.find(currType => Immutable[currType][`is${currType}`](obj0));
const type1 = types.find(currType => Immutable[currType][`is${currType}`](obj1));
const type2 = types.find(currType => Immutable[currType][`is${currType}`](obj2));
console.log(`Obj0 is: ${type0}`); // Obj0 is: undefined
console.log(`Obj1 is: ${type1}`); // Obj1 is: Map
console.log(`Obj2 is: ${type2}`); // Obj2 is: List
<script src="https://cdnjs.cloudflare.com/ajax/libs/immutable/3.8.1/immutable.js"></script>
Checking specific types will generally cause more work later on. Usually I would wait to lock types in by checking for Map or List, but...
My motivation here is mostly that calling .get on undefined blows up really hard, and initializing properly all over the place helps but doesn't catch all edge cases. I just want the data or undefined, without any breakage. Specific type checking causes me more work later if I want to make changes.
This looser version solves many more edge cases (most if not all Immutable types extend Iterable, which has .get, and all data is eventually gotten) than a specific type check does (which usually only saves you when you try to update on the wrong type, etc.).
import { Iterable } from 'immutable';

/* getValid: checks for a valid ImmutableJS Iterable.
   Returns the valid Iterable, valid Iterable child data, or undefined.
   Iterable.isIterable(maybeIterable) && maybeIterable.getIn(['data', key], Map()) becomes
   getValid(maybeIterable, ['data', key], Map())
   But wait! There's more! As a result:
   getValid(maybeIterable) returns the maybeIterable or undefined,
   and we can still say getValid(maybeIterable, null, Map()) to get the maybeIterable or Map() */
export const getValid = (maybeIterable, path, getInstead) =>
  Iterable.isIterable(maybeIterable) && path
    ? (typeof path === 'object' && maybeIterable.getIn(path, getInstead)) || maybeIterable.get(path, getInstead)
    : (Iterable.isIterable(maybeIterable) && maybeIterable) || getInstead;
// Here is an untested version that a friend requested. It is slightly easier to grok.
export const getValid = (maybeIterable, path, getInstead) => {
  if (Iterable.isIterable(maybeIterable)) { // Check if it is valid
    if (path) { // Check if it has a key
      if (typeof path === 'object') { // Check if it is an 'array' path
        return maybeIterable.getIn(path, getInstead); // Get your stuff
      } else {
        return maybeIterable.get(path, getInstead); // Get your stuff
      }
    } else {
      return maybeIterable || getInstead; // No key? Just return the valid Iterable
    }
  } else {
    return undefined; // Not valid, return undefined, perhaps should return false here
  }
};
Just give me what I am asking for or tell me no. Don't explode. I believe underscore does something similar also.
This may work in some cases:
typeof object.toJS === 'function'
You can use this duck-typing method if you check immutable vs plain objects (JSON), for example.
I'm making a todo list. When first entering an item and adding it to the list, the server works great. It takes the parameters that the user selects and passes them into a list on the server that can be viewed by rendering Item.list(), which looks like so:
[{"class":"server.Item","id":1,"assignedTo":"User 1","comments":null,"completed":false,"creator":"User 1","name":"Task 1","priority":"1","type":"Personal"},
{"class":"server.Item","id":2,"assignedTo":"User 2","comments":null,"completed":false,"creator":"User 2","name":"Er","priority":"3","type":"Work"},
{"class":"server.Item","id":3,"assignedTo":"User 1","comments":null,"completed":false,"creator":"User 2","name":"Ga","priority":"1","type":"Work"}]
The user then has the option to edit the task later. On the client side this works fine, but then I need the user to be able to save the new, updated task.
This is my current update function:
def updateList() {
    def newItem = Item.findById(request.JSON.id)
    newItem.assignedTo = request.JSON.assignedTo
    newItem.comments = request.JSON.comments
    newItem.completed = request.JSON.completed
    newItem.creator = request.JSON.creator
    newItem.name = request.JSON.name
    newItem.priority = request.JSON.priority
    newItem.type = request.JSON.type
    newItem.save(flush: true)
    render newItem as JSON
}
This doesn't work, however. I get a null pointer exception that says "Cannot set property 'assignedTo' on null object". I'm assuming that findById is not finding anything for the ID in the JSON object, and thus there is no object to assign values to; however, I don't know what the problem is, considering the items are in fact being put into Item.list().
This is called with the following JS function on the client side:
$scope.updateList = function() {
  angular.forEach($scope.items, function (item) {
    // serverList.save({command: 'updateList'}, item);
    $http.post('http://localhost:8080/server/todoList/updateList', item)
      .success(function(response) {})
      .error(function(response) { alert("Failed to update"); });
  });
};
This might depend on your Grails version, but you should be able to do this:
def update(Item item) {
    if (!item) {
        // return a 404
    } else {
        // you should really use a service and not save
        // in the controller
        itemService.update(item)
        respond item
    }
}
Grails is smart enough to look that item up, since there is an ID in the JSON params, and to populate the object correctly.
Sort of a workaround for anyone else that may need to do this in a basic manner: what I've done that works is clear the list when "Update List" is clicked, then read back in the values that are currently in the client-side list.
Grails:
def clearList() {
    Item.executeUpdate('delete from Item')
    render Item.list()
}

def updateList() {
    def newItem = new Item(request.JSON)
    newItem.save(flush: true)
    render newItem as JSON
}
Javascript:
$scope.updateList = function() { // Update list on the server
  serverList.get({command: 'clearList'});
  angular.forEach($scope.items, function (item) {
    serverList.save({command: 'updateList'}, item);
  });
};