basic reducer possibly mutating app state - ecmascript-6

I am using the spread operator in my Redux reducer, hoping to maintain the state as immutable objects.
However, I am managing to make the simplest unit test fail.
I assume the error probably has to do with immutability, but am I not using the spread operator correctly?
Here is my unit test:
describe('app logic', () => {
  it('initialises app', () => {
    const newState = reducer(INITIAL_STATE, {type: "NEXT"})
    const expectState = {
      start: true,
      render_component: null,
      requests: {},
      results: {},
    }
    console.log('newState', newState)
    console.log('expected state', expectState)
    expect(newState).to.equal(expectState)
  })
})
and here is my reducer
export const INITIAL_STATE = {
  start: false,
  render_component: null,
  requests: {},
  results: {}
}

export const next = (state) => {
  if (state === INITIAL_STATE) {
    return {
      ...state,
      start: true,
    }
  }
  return state
}

export function reducer(state = INITIAL_STATE, action) {
  switch (action.type) {
    case 'NEXT':
      return next(state)
    default:
      return state
  }
}
I print the two objects, and they look the same.
I get this error:
1) app logic initialises app:
AssertionError: expected { Object (start, render_component, ...) } to equal { Object (start, render_component, ...) }

Not sure exactly which testing library you are using, but usually a name like .equal is used to test strict equality ( === ), which means (at least in the case of objects) that the two things being compared must actually reference the exact same object. So, for example,
const original = { a: 1 }; // creates a new object, assign it
const testMe = { a: 1 }; // creates another new object, assign it
console.log( original === testMe ) // false
evaluates to false, because while the objects have the same content, they do not reference the exact same object. They are separate, independently created, objects that happen to have the same content. Compare that to
const original = {a: 1}; // create a new object
const testMe = original; // create another reference to the same object
console.log( original === testMe ); // true
So when you return
return {
  ...state,
  start: true,
}
you are creating and returning a new object, so it naturally cannot reference the same object that you created and assigned to the variable expectState.
If what you are interested in is not strict equality, but rather just that the content of the two objects is the same, there exist other methods than .equal, usually named something with deep (since they go deep into the objects/arrays/whatever to check if the values are the same).
Chai.js has examples of both expect(x).to.equal(y) and expect(x).to.deep.equal(y) in their docs: http://chaijs.com/api/bdd/#method_equal
Your testing library probably has very similar, if not identical, syntax.
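If it is indeed Chai, a minimal fix for the test above (a sketch) is to switch to the deep-equality assertion:

// compare structure and values rather than object identity
expect(newState).to.deep.equal(expectState)
// Chai also offers .eql as a deep-equality shorthand:
expect(newState).to.eql(expectState)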

Can I define a GraphQL field to be any valid JSON?

Is it possible to specify that a field in GraphQL should be a blackbox, similar to how Flow has an "any" type? I have a field in my schema that should be able to accept any arbitrary value, which could be a String, Boolean, Object, Array, etc.
I've come up with a middle-ground solution. Rather than trying to push this complexity onto GraphQL, I'm opting to just use the String type and JSON.stringifying my data before setting it on the field. So everything gets stringified, and later in my application when I need to consume this field, I JSON.parse the result to get back the desired object/array/boolean/etc.
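A sketch of that round-trip (field names here are illustrative):

// client side, before setting the field:
const payload = JSON.stringify({ any: ['shape', 42, true] })
// ...later, when consuming the field from a query result:
const value = JSON.parse(result.myField) // back to the original structure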
@mpen's answer is great, but I opted for a more compact solution:
const { GraphQLScalarType } = require('graphql')
const { Kind } = require('graphql/language')

const ObjectScalarType = new GraphQLScalarType({
  name: 'Object',
  description: 'Arbitrary object',
  parseValue: (value) => {
    return typeof value === 'object' ? value
      : typeof value === 'string' ? JSON.parse(value)
      : null
  },
  serialize: (value) => {
    return typeof value === 'object' ? value
      : typeof value === 'string' ? JSON.parse(value)
      : null
  },
  parseLiteral: (ast) => {
    switch (ast.kind) {
      case Kind.STRING: return JSON.parse(ast.value)
      case Kind.OBJECT: throw new Error(`Not sure what to do with OBJECT for ObjectScalarType`)
      default: return null
    }
  }
})
Then my resolvers look like:
{
  Object: ObjectScalarType,
  RootQuery: ...
  RootMutation: ...
}
And my .gql looks like:
scalar Object

type Foo {
  id: ID!
  values: Object!
}
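With that in place, a resolver for Foo can return an arbitrary object for values; a sketch (the foo field and its data are hypothetical):

const resolvers = {
  Object: ObjectScalarType,
  RootQuery: {
    // whatever object this returns passes straight through the scalar's serialize()
    foo: () => ({ id: '1', values: { any: ['shape', 42, true] } })
  }
}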
Yes. Just create a new GraphQLScalarType that allows anything.
Here's one I wrote that allows objects. You can extend it a bit to allow more root types.
import {GraphQLScalarType} from 'graphql';
import {Kind} from 'graphql/language';
import {log} from '../debug';
import Json5 from 'json5';

export default new GraphQLScalarType({
  name: "Object",
  description: "Represents an arbitrary object.",
  parseValue: toObject,
  serialize: toObject,
  parseLiteral(ast) {
    switch(ast.kind) {
      case Kind.STRING:
        return ast.value.charAt(0) === '{' ? Json5.parse(ast.value) : null;
      case Kind.OBJECT:
        return parseObject(ast);
    }
    return null;
  }
});
function toObject(value) {
  if(typeof value === 'object') {
    return value;
  }
  if(typeof value === 'string' && value.charAt(0) === '{') {
    return Json5.parse(value);
  }
  return null;
}

function parseObject(ast) {
  const value = Object.create(null);
  ast.fields.forEach((field) => {
    value[field.name.value] = parseAst(field.value);
  });
  return value;
}

function parseAst(ast) {
  switch (ast.kind) {
    case Kind.STRING:
    case Kind.BOOLEAN:
      return ast.value;
    case Kind.INT:
    case Kind.FLOAT:
      return parseFloat(ast.value);
    case Kind.OBJECT:
      return parseObject(ast);
    case Kind.LIST:
      return ast.values.map(parseAst);
    default:
      return null;
  }
}
For most use cases, you can use a JSON scalar type to achieve this sort of functionality. There are a number of existing libraries you can just import rather than writing your own scalar -- for example, graphql-type-json.
If you need a more fine-tuned approach, then you'll want to write your own scalar type. Here's a simple example that you can start with:
const { GraphQLScalarType, Kind } = require('graphql')

const Anything = new GraphQLScalarType({
  name: 'Anything',
  description: 'Any value.',
  parseValue: (value) => value,
  parseLiteral,
  serialize: (value) => value,
})

function parseLiteral (ast) {
  switch (ast.kind) {
    case Kind.BOOLEAN:
    case Kind.STRING:
      return ast.value
    case Kind.INT:
    case Kind.FLOAT:
      return Number(ast.value)
    case Kind.LIST:
      return ast.values.map(parseLiteral)
    case Kind.OBJECT:
      return ast.fields.reduce((accumulator, field) => {
        accumulator[field.name.value] = parseLiteral(field.value)
        return accumulator
      }, {})
    case Kind.NULL:
      return null
    default:
      throw new Error(`Unexpected kind in parseLiteral: ${ast.kind}`)
  }
}
Note that scalars are used both as outputs (when returned in your response) and as inputs (when used as values for field arguments). The serialize method tells GraphQL how to serialize a value returned in a resolver into the data that's returned in the response. The parseLiteral method tells GraphQL what to do with a literal value that's passed to an argument (like "foo", or 4.2 or [12, 20]). The parseValue method tells GraphQL what to do with the value of a variable that's passed to an argument.
For parseValue and serialize we can just return the value we're given. Because parseLiteral is given an AST node object representing the literal value, we have to do a little bit of work to convert it into the appropriate format.
You can take the above scalar and customize it to your needs by adding validation logic as needed. In any of the three methods, you can throw an error to indicate an invalid value. For example, if we want to allow most values but don't want to serialize functions, we can do something like:
if (typeof value === 'function') {
  throw new TypeError('Cannot serialize a function!')
}
return value
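In context, that guard sits inside serialize; a sketch reusing the parseLiteral defined above (the scalar name here is illustrative):

const SafeAnything = new GraphQLScalarType({
  name: 'SafeAnything', // hypothetical name
  description: 'Any value except functions.',
  parseValue: (value) => value,
  parseLiteral,
  serialize: (value) => {
    // refuse to serialize functions, pass everything else through
    if (typeof value === 'function') {
      throw new TypeError('Cannot serialize a function!')
    }
    return value
  },
})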
Using the above scalar in your schema is simple. If you're using vanilla GraphQL.js, then use it just like you would any of the other scalar types (GraphQLString, GraphQLInt, etc.) If you're using Apollo, you'll need to include the scalar in your resolver map as well as in your SDL:
const resolvers = {
  ...
  // The property name here must match the name you specified in the constructor
  Anything,
}

const typeDefs = `
  # NOTE: The name here must match the name you specified in the constructor
  scalar Anything

  # the rest of your schema
`
Just send a stringified value via GraphQL and parse it on the other side, e.g. use this wrapper class.
export class Dynamic {
  @Field(type => String)
  private value: string;

  getValue(): any {
    return JSON.parse(this.value);
  }

  setValue(value: any) {
    this.value = JSON.stringify(value);
  }
}
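A hypothetical usage of that wrapper:

// the wrapped value travels over GraphQL as a plain String field
const d = new Dynamic()
d.setValue({ nested: { data: [1, 2, 3] } })
// ...and on the receiving side:
console.log(d.getValue().nested.data) // [1, 2, 3]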
For a similar problem I've created a schema like this:
"""`MetadataEntry` model"""
type MetadataEntry {
"""Key of the entry"""
key: String!
"""Value of the entry"""
value: String!
}
"""Object with metadata"""
type MyObjectWithMetadata {
"""
... rest of my object fields
"""
"""
Key-value entries that you can attach to an object. This can be useful for
storing additional information about the object in a structured format
"""
metadata: [MetadataEntry!]!
"""Returns value of `MetadataEntry` for given key if it exists"""
metadataValue(
"""`MetadataEntry` key"""
key: String!
): String
}
And my queries can look like this:
query {
listMyObjects {
# fetch meta values by key
meta1Value: metadataValue(key: "meta1")
meta2Value: metadataValue(key: "meta2")
# ... or list them all
metadata {
key
value
}
}
}
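A resolver for metadataValue can then be a simple lookup over the already-resolved entries; a sketch (the resolver map shape is assumed):

const resolvers = {
  MyObjectWithMetadata: {
    // find the entry with the requested key, or return null
    metadataValue: (parent, { key }) => {
      const entry = parent.metadata.find((e) => e.key === key)
      return entry ? entry.value : null
    }
  }
}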

Tedious hooks when querying via REST. Any ideas?

I've just started using Feathers to build a REST server. I need your help with some querying tips. The documentation says:
When used via REST URLs all query values are strings. Depending on the service the values in params.query might have to be converted to the right type in a before hook. (https://docs.feathersjs.com/api/databases/querying.html)
This puzzles me. Does find({query: {value: 1}}) mean value === "1" rather than value === 1? Here is example client-side code that puzzles me:
const feathers = require('@feathersjs/feathers')
const fetch = require('node-fetch')
const restCli = require('@feathersjs/rest-client')

const rest = restCli('http://localhost:8888')
const app = feathers().configure(rest.fetch(fetch))

async function main () {
  const Items = app.service('myitems')
  await Items.create( {name:'one', value:1} )

  // works fine. returns [ { name: 'one', value: 1, id: 0 } ]
  console.log(await Items.find({query:{ name:"one" }}))

  // wow! no data returned. []
  console.log(await Items.find({query:{ value:1 }})) // []
}
main()
Server side code is here:
const express = require('@feathersjs/express')
const feathers = require('@feathersjs/feathers')
const memory = require('feathers-memory')

const app = express(feathers())
  .configure(express.rest())
  .use(express.json())
  .use(express.errorHandler())
  .use('myitems', memory())

app.listen(8888)
  .on('listening', () => console.log('listen on 8888'))
I've made hooks, which all work fine, but it is too tedious and I think I missed something. Any ideas?
Hook code:
app.service('myitems').hooks({
  before: {
    find: async (context) => {
      const value = context.params.query.value
      if (value) context.params.query.value = parseInt(value, 10)
      return context
    }
  }
})
This behaviour depends on the database and ORM you are using. Some that have a schema (like feathers-mongoose, feathers-sequelize and feathers-knex) will convert values like that automatically.
Feathers itself does not know about your data format and most adapters (like the feathers-memory you are using here) do a strict comparison so they will have to be converted. The usual way to deal with this is to create some reusable hooks (instead of one for each field) like this:
const queryToNumber = (...fields) => {
  return context => {
    const { params: { query = {} } } = context;

    fields.forEach(field => {
      const value = query[field];

      if(value) {
        query[field] = parseInt(value, 10)
      }
    });
  }
}
app.service('myitems').hooks({
  before: {
    find: [
      queryToNumber('age', 'value')
    ]
  }
});
Or use something like JSON schema, e.g. through the validateSchema common hook.
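For instance, Ajv's coerceTypes option can do the string-to-number conversion declaratively. A sketch of a hand-rolled query-coercion hook (validateQuery here is hypothetical, not the feathers-hooks-common API):

const Ajv = require('ajv')

// coerceTypes rewrites "1" to 1 wherever the schema expects a number
const ajv = new Ajv({ coerceTypes: true })

const validateQuery = schema => {
  const validate = ajv.compile(schema)
  return context => {
    // validation mutates the query in place, coercing types as it goes
    if (!validate(context.params.query || {})) {
      throw new Error(ajv.errorsText(validate.errors))
    }
    return context
  }
}

app.service('myitems').hooks({
  before: {
    find: [
      validateQuery({
        type: 'object',
        properties: { value: { type: 'number' } }
      })
    ]
  }
})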

node.js - if statement not working as expected

This piece of node.js code is run against a Spark History Server API.
What it's supposed to do is find any job whose name matches the value passed in via uuid and return the id for only that job.
What the code below actually does is return the id for every job whenever the uuid is found in any job name.
I think this has something to do with the way I'm parsing the JSON but I'm not entirely sure.
How do I change this so it works as I would like it to?
var arrFound = Object.keys(json).filter(function(key) {
  console.log("gel json[key].name" + json[key].name);
  return json[key].name;
}).reduce(function(obj, key){
  if (json[key].name.indexOf(uuid)) {
    obj = json[key].id;
    return obj;
  }
reduce is the wrong method for that. Use find or filter. You can even do that in the filter callback that you already have. And then you can chain a map to that to get the id property values for each matched key:
var arrFound = Object.keys(json).filter(function(key) {
  console.log("gel json[key].name " + json[key].name);
  return json[key].name && json[key].name.includes(uuid);
}).map(function(key) {
  return json[key].id;
});

console.log(arrFound); // array of matched id values
Note also that your use of indexOf is wrong. indexOf returns the match position, or -1 when there is no match, so using it directly as a condition is truthy for every result except a match at position 0. You need to compare the value with -1 (not found). But nowadays you can use includes, which returns a boolean.
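A quick illustration of why the bare indexOf check misfires:

'job: abc'.indexOf('abc')  // 5  -> truthy (found)
'abc: job'.indexOf('abc')  // 0  -> falsy, even though it matched!
'no match'.indexOf('abc')  // -1 -> truthy, even though it did not match
'no match'.includes('abc') // false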
Note that with Object.values you list the objects instead of the keys, which is more interesting in your case:
var arrFound = Object.values(json).filter(function(obj) {
  console.log("gel obj.name " + obj.name);
  return obj.name && obj.name.includes(uuid);
}).map(function(obj) {
  return obj.id;
});

console.log(arrFound); // array of matched id values
While the accepted answer provides working code, I feel it's worth pointing out that reduce is a good way to solve this problem, and to me makes more sense than chaining filter and map:
const jobs = {
  1: {
    id: 1,
    name: 'job: 2a2912c5-9ec8-4ead-9a8f-724ab44fc9c7'
  },
  2: {
    id: 2,
    name: 'job: 30ea8ab2-ae3f-4427-8e44-5090d064d58d'
  },
  3: {
    id: 3,
    name: 'job: 5f8abe54-8417-4b3c-90f1-a7f4aad67cfb'
  },
  4: {
    id: 4,
    name: 'job: 30ea8ab2-ae3f-4427-8e44-5090d064d58d'
  }
}

const matchUUID = uuid =>
  (acc, job) => job.name.includes(uuid) ? [ ...acc, job.id ] : acc

const target = '30ea8ab2-ae3f-4427-8e44-5090d064d58d'
const matchTarget = matchUUID(target)

// [ 2, 4 ]
console.log(Object.values(jobs).reduce(matchTarget, []))
reduce is appropriate for these kinds of problems: taking a larger, more complex or complete value, and reducing it to the data you require. On large datasets, it could also be more efficient since you only need to traverse the collection once.
If you're Node version-constrained or don't want to use array spread, here's a slightly more 'traditional' version:
var result = Object.keys(jobs).reduce(
  function (acc, key) {
    if (jobs[key].name.includes(uuid)) {
      acc.push(jobs[key].id)
    }
    return acc
  },
  []
)
Note use of Object.keys, since Object.values is ES2017 and may not always be available. String.prototype.includes is ES2015, but you could always use indexOf if necessary.
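For completeness, the indexOf fallback only needs the comparison against -1 (a sketch of the same reduce):

// ES5-safe variant: indexOf(uuid) !== -1 instead of includes(uuid)
var result = Object.keys(jobs).reduce(function (acc, key) {
  if (jobs[key].name.indexOf(uuid) !== -1) {
    acc.push(jobs[key].id)
  }
  return acc
}, [])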

Create keyed Maps from nested Lists with Immutable.js

I am working with a dataset that cannot be modified on the server side. So I am trying to setup the local data model on the client in a way that I can easily traverse through the model when updating parts of the data.
Therefore I am trying to create a multi-leveled Map from multi-leveled Maps including Lists, that themselves include Maps, etc. (see schematics at the end of this post).
What I am trying to get is a Map containing other Maps, with the key of the included Map being the value of the object (again please see schematics at the end of this post).
I got it to work on the first level:
const firstLevel = data.toMap().mapKeys((key, value) => value.get('value'));
See it in action here: https://jsfiddle.net/9f0djcb0/4/
But there is a maximum of 3 levels of nested data and I can't get my head around how to get the transformation done. Any help appreciated!
The schematic datasets:
// This is what I got
const dataset = [
  {
    field: 'lorem',
    value: 'ipsum',
    more: [
      {
        field: 'lorem_lvl1',
        value: 'ispum_lvl1',
        more: [
          {
            field: 'lorem_lvl2',
            value: 'ispum_lvl2',
            more: [
              {
                field: 'lorem_lvl3',
                value: 'ispum_lvl3',
              }
            ]
          }
        ]
      }
    ]
  },
  {
    field: 'glorem',
    value: 'blipsum'
  },
  {
    field: 'halorem',
    value: 'halipsum'
  }
];
This is where I want to go:
// This is what I want: each entry keyed by its own `value`
const dataset_wanted = {
  ipsum: {
    field: 'lorem',
    value: 'ipsum',
    more: {
      ispum_lvl1: {
        field: 'lorem_lvl1',
        value: 'ispum_lvl1',
        more: {
          ispum_lvl2: {
            field: 'lorem_lvl2',
            value: 'ispum_lvl2',
            more: {
              ispum_lvl3: {
                field: 'lorem_lvl3',
                value: 'ispum_lvl3',
              }
            }
          }
        }
      }
    }
  },
  glorem: {
    field: 'glorem',
    value: 'blipsum'
  },
  halorem: {
    field: 'halorem',
    value: 'halipsum'
  }
};
Retrieving nested structures using getIn is better.
const data = Immutable.fromJS(dataset[0]);

const firstLevel = data.getIn(['more']);
const twoLevel = firstLevel.getIn([0, 'more']);
const threeLevel = twoLevel.getIn([0, 'more']);

console.log(firstLevel.toJS(), twoLevel.toJS(), threeLevel.toJS());
As for a more general solution, I rewrote the earlier answer as a recursive approach:
function mapDeep(firstLevel) {
  return firstLevel.map((obj) => {
    if (obj.has('more')) {
      const sec = obj.get('more').toMap().mapKeys((key, value) => value.get('value'));
      const objNext = mapDeep(sec);
      obj = obj.set('more', objNext);
    }
    return obj;
  });
}
The first level still needs to be mapped manually beforehand:
const firstLevel = data.toMap().mapKeys((key, value) => value.get('value'));
const secondLevel = mapDeep(firstLevel);
Again, see it in action: https://jsfiddle.net/9f0djcb0/12/
This is good enough for me for now. It still feels like this could be solved more elegantly (and more performantly). Cheers :)
So after some time passed I came up with a solution that works for me:
let sec, third, objThird;

// 1st level: simple mapping
const firstLevel = data.toMap().mapKeys((key, value) => value.get('value'));

// 2nd level: walk through updated firstLevel's subobjects and do the mapping again:
const secondLevel = firstLevel.map((obj) => {
  if (obj.has('more')) {
    sec = obj.get('more').toMap().mapKeys((key, value) => value.get('value'));

    // 3rd level: walk through updated secondLevel's subobjects and do the mapping again:
    objThird = sec.map((o) => {
      if (o.has('more')) {
        third = o.get('more').toMap().mapKeys((key, value) => value.get('value'));
        o = o.set('more', third);
      }
      return o;
    });

    obj = obj.set('more', objThird);
  }
  return obj;
});
See it in action here: https://jsfiddle.net/9f0djcb0/7/
This has been working nicely so far, though it is pretty hard-coded. If anyone has a more elegant solution to this, I am happy to learn about it!

How to alter keys in immutable map?

I've a data structure like this (generated by normalizr):
const data = fromJS({
  templates: {
    "83E51B08-5F55-4FA2-A2A0-99744AE7AAD3":
      {"uuid": "83E51B08-5F55-4FA2-A2A0-99744AE7AAD3", test: "bla"},
    "F16FB07B-EF7C-440C-9C21-F331FCA93439":
      {"uuid": "F16FB07B-EF7C-440C-9C21-F331FCA93439", test: "bla"}
  }
})
Now I'm trying to figure out how to replace the UUIDs in both the key and the value of the template entries. Basically, how can I achieve the following output:
const data = fromJS({
  templates: {
    "DBB0B4B0-565A-4066-88D3-3284803E0FD2":
      {"uuid": "DBB0B4B0-565A-4066-88D3-3284803E0FD2", test: "bla"},
    "D44FA349-048E-4006-A545-DBF49B1FA5AF":
      {"uuid": "D44FA349-048E-4006-A545-DBF49B1FA5AF", test: "bla"}
  }
})
The .mapEntries() method seems like a good candidate, but I'm struggling with how to use it:
// this doesn't work ... :-(
const result = data.mapEntries((k, v) => {
  const newUUID = uuid.v4()
  return (newUUID, v.set('uuid', newUUID))
})
Maybe someone can give me a hand here?
mapEntries is the correct method. From the documentation, the mapping function has the following signature:
mapper: (entry: [K, V], index: number, iter: this) => [KM, VM]
This means that the first argument is the entry passed in as an array of [key, value]. Similarly, the return value of the mapper function should be an array of the new key and the new value. So your mapper function needs to look like this:
([k, v]) => {
  const newUUID = uuid.v4()
  return [newUUID, v.set('uuid', newUUID)]
}
This is equivalent to the following (more explicit) function:
(entry) => {
  const key = entry[0]; // note that key isn't actually used, so this isn't necessary
  const value = entry[1];
  const newUUID = uuid.v4()
  return [newUUID, value.set('uuid', newUUID)]
}
One thing to note is that the templates are nested under the templates property, so you can't map data directly -- instead you'll want to use the update function.
data.update('templates', templates => templates.mapEntries(...))
So putting everything together, your solution should look like the following:
const result = data.update('templates', templates =>
  templates.mapEntries(([k, v]) => {
    const newUUID = uuid.v4()
    return [newUUID, v.set('uuid', newUUID)]
  })
);
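A quick sanity check of the result (assuming uuid comes from the uuid package, as in the question):

// every key should now equal its entry's own uuid field
result.get('templates').forEach((v, k) => {
  console.log(k === v.get('uuid')) // true
})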