I am searching for an ESLint rule that will enforce the following behavior:
aPromiseCall()
    .then(() => {
        // logic
    })
    .then(() => {
        // logic
    })
    .catch(() => {
    })
Notice that each .then should be indented by 4 spaces and on a separate line.
Use the following rule:
"indent": ["error", 4, { "MemberExpression": 1 }]
"MemberExpression" (default: 1) enforces indentation level for multi-line property chains. This can also be set to "off" to disable checking for MemberExpression indentation.
https://eslint.org/docs/rules/indent#memberexpression
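For reference, a minimal .eslintrc.js sketch with that rule in place; everything except the indent entry is an assumption added for illustration:
// minimal sketch: only the "indent" rule comes from the answer above
module.exports = {
  rules: {
    indent: ['error', 4, { MemberExpression: 1 }],
  },
};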
I have the script "lint:prettier": "prettier --check --ignore-unknown \"**/*.*\"". When I run it locally, I get a successful result, but in GitHub Actions I get errors.
GitHub Actions lint-action workflow:
- name: Run linters
  run: npm run lint:prettier
.prettierrc.js:
module.exports = {
  semi: true,
  trailingComma: 'all',
  singleQuote: true,
  printWidth: 120,
  tabWidth: 2,
  endOfLine: 'auto',
};
.eslintrc.js:
module.exports = {
  parser: '@typescript-eslint/parser', // Specifies the ESLint parser
  extends: [
    'plugin:react/recommended', // Uses the recommended rules from eslint-plugin-react
    'plugin:@typescript-eslint/recommended', // Uses the recommended rules from @typescript-eslint/eslint-plugin
    'plugin:prettier/recommended', // Enables eslint-plugin-prettier and displays prettier errors as ESLint errors. Make sure this is always the last configuration in the extends array.
  ],
  parserOptions: {
    ecmaVersion: 2018, // Allows for the parsing of modern ECMAScript features
    sourceType: 'module', // Allows for the use of imports
    ecmaFeatures: {
      jsx: true, // Allows for the parsing of JSX
    },
  },
  rules: {
    'react/prop-types': [2, { ignore: ['children'] }],
  },
  settings: {
    react: {
      version: 'detect', // Tells eslint-plugin-react to automatically detect the version of React to use
    },
  },
};
How can I fix it?
I am trying to transpile my ES6 code via Babel. I am using the next/babel preset along with preset-env, targeting browsers: defaults.
The Next.js preset comes with @babel/plugin-proposal-object-rest-spread in its plugins array, so I'm wondering why I get an error when testing on Edge that says Expected identifier, string or number. Looking in the compiled JS for the error, I see it happens where {...t} occurs.
Here is my babel.config.js:
module.exports = {
  presets: [
    [
      'next/babel',
      {
        '@babel/preset-env': {
          targets: {
            browsers: 'defaults'
          },
          useBuiltIns: 'usage'
        }
      }
    ]
  ],
  plugins: [
    '@babel/plugin-proposal-optional-chaining',
    '@babel/plugin-proposal-nullish-coalescing-operator',
    ['styled-components', { ssr: true, displayName: true, preprocess: false }],
    [
      'module-resolver',
      {
        root: ['.', './src']
      }
    ]
  ],
  env: {
    development: {
      compact: false
    }
  }
};
Any help on this would be greatly appreciated!
In the end my problem was related to a package that was not being transpiled by babel. My solution was to use NextJS' next-transpile-modules plugin to get babel to transpile the package code into something that would work on the browsers I need.
Here's an example of my NextJS webpack config with the package I need transpiled specified:
const withTM = require('next-transpile-modules');
module.exports = withTM({
  transpileModules: ['swipe-listener']
});
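As an aside, later releases of next-transpile-modules moved to passing the module list directly to the require call; a hedged sketch, assuming one of those newer plugin versions rather than the one used above:
// assumption: newer next-transpile-modules API (module list passed to require)
const withTM = require('next-transpile-modules')(['swipe-listener']);
module.exports = withTM({
  // the rest of the Next.js config goes here
});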
The SCRIPT1028: Expected identifier, string or number error can occur in two situations.
(1) This error gets triggered if you use a trailing comma after the last property in a JavaScript object.
Example:
var message = {
  title: 'Login Unsuccessful',
};
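The fix for case (1) is simply to drop that final comma; a minimal sketch:
// same object, without the trailing comma after the last property
var message = {
  title: 'Login Unsuccessful'
};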
(2) This error gets triggered if you use a JavaScript reserved word as a property name.
Example:
var message = {
  class: 'error'
};
The solution is to quote the class property name as a string. You will need to use bracket notation, however, to access the property in your script.
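A short sketch of that workaround, quoting the reserved word and reading it back with bracket notation:
var message = {
  'class': 'error' // quoted so older engines don't choke on the reserved word
};
console.log(message['class']); // bracket notation to access the property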
Reference:
ERROR : SCRIPT1028: Expected identifier, string or number
The following works in Node v8.11.4 and in Babel-transpiled JavaScript running in Chrome:
const myFunc = ({
  aryOfObjs,
  combinedObj = Object.assign({}, ...aryOfObjs),
}) => console.log(combinedObj);
myFunc({
  aryOfObjs: [
    { foo: 'bar' },
    { biz: 'baz' },
  ]
}); // => { foo: 'bar', biz: 'baz' }
In ECMAScript 2015, is this guaranteed to work as shown above?
I know Node and Babel aren't 100% ECMAScript 2015 compliant, but I believe they both implement the object destructuring spec. I can't find anything explicit on MDN that says this is supported, nor in the official ECMAScript 2015 spec.
Yes, this is valid ES2015 code. aryOfObjs is a variable introduced into the function scope, and Object.assign({}, ...aryOfObjs) is an expression evaluated in that scope so it can access any of those variables. The only time this would be an error is if they were accessed out of order, like
const myFunc = ({
  combinedObj = Object.assign({}, ...aryOfObjs),
  aryOfObjs,
}) => console.log(combinedObj);
which would throw an error because aryOfObjs has not been initialized yet.
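To make the left-to-right evaluation concrete, the original working version above behaves roughly like this hand-desugared sketch (illustrative only, not what Babel actually emits):
const myFuncDesugared = (param) => {
  // bindings are created in declaration order, left to right
  const aryOfObjs = param.aryOfObjs;
  const combinedObj = param.combinedObj !== undefined
    ? param.combinedObj
    : Object.assign({}, ...aryOfObjs); // aryOfObjs is already initialized here
  console.log(combinedObj);
};
myFuncDesugared({ aryOfObjs: [{ foo: 'bar' }, { biz: 'baz' }] }); // => { foo: 'bar', biz: 'baz' }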
I have two different syntaxes. I can access my getters and my actions using mapGetters() and mapActions(). The first one doesn't destructure the state parameter and works. The second one does destructure the state, but the mutation doesn't mutate the state, while the getter can still access the state and the action has no problem destructuring the context.
I don't understand why.
Am I misusing ES6 / Vue.js / Vuex / Quasar?
Vue.use(Vuex)
export default new Vuex.Store({
  state: {
    counter1: 0,
    counter2: 0
  },
  getters: {
    counter1: state => state.counter1,
    counter2: ({ counter2 }) => counter2
  },
  mutations: {
    increment1: state => state.counter1++,
    increment2: ({ counter2 }) => counter2++
  },
  actions: {
    increment1: context => context.commit('increment1'),
    increment2: ({ commit }) => commit('increment2')
  }
})
A friend of mine gave me the answer. I misused ES6.
Destructuring { counter2 } doesn't give me a reference to state.counter2; it copies the value into a local variable.
It makes sense that I can't change state.counter2 by changing that local counter2.
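A minimal sketch of the corrected mutation, keeping the same store shape and setup as above and mutating through the state object itself:
// assumes the same Vue/Vuex setup as the store above
export default new Vuex.Store({
  state: {
    counter1: 0,
    counter2: 0
  },
  mutations: {
    increment1: state => state.counter1++,
    // no destructuring here, so the increment hits the store's reactive state
    increment2: state => state.counter2++
  }
})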
Edit: it seems I'm trying to create a concat-style gulp pipe that first transforms the contents of the various files.
I have been using through2 to write a custom pipe. Scope: using gulp-watch, run a task that loads all of the files, transforms them in memory, and outputs a few files.
through2 comes with a "highWaterMark" option that defaults to 16 files being read at a time. My pipe doesn't need to be memory-optimized (it reads dozens of <5kb JSONs, runs some transforms, and outputs 2 or 3 JSONs), but I'd like to understand the preferred approach.
I'd like to find a good resource / tutorial explaining how such situations are handled; any lead is welcome.
Thanks,
Ok, found my problem.
When using through2 to create a custom pipe, in order to "consume" the data (and not hit the highWaterMark limit), one simply has to add an .on('data', () => ...) handler, like in the following example:
const gulp = require('gulp');
const through = require('through2');
// searchPatternFolder, composeDictionary and writeDictionary are defined elsewhere in the original gulpfile
function processAll18nFiles(done) {
  const dictionary = {};
  let count = 0;
  console.log('[i18n] Rebuilding...');
  gulp
    .src(searchPatternFolder)
    .pipe(
      through.obj({ highWaterMark: 1, objectMode: true }, (file, enc, next) => {
        const { data, path } = JSON.parse(file.contents.toString('utf8'));
        next(null, { data, path });
      })
    )
    // this line fixes my issue, the highWaterMark doesn't cause a limitation now
    .on('data', ({ data, path }) => ++count && composeDictionary(dictionary, data, path.split('.')))
    .on('end', () =>
      Promise.all(Object.keys(dictionary).map(langKey => writeDictionary(langKey, dictionary[langKey])))
        .then(() => {
          console.log(`[i18n] Done, ${count} files processed, language count: ${Object.keys(dictionary).length}`);
          done();
        })
        .catch(err => console.log('ERROR ', err))
    );
}
Note: mind the "done" parameter, which forces the developer to call done() when the task has finished.
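For completeness, a hedged sketch of how such a function might be wired up as a gulp task; the task name 'i18n' and the gulp 4 exports style are assumptions, not part of the original setup:
// assumption: gulp 4 style registration with a made-up task name
exports.i18n = processAll18nFiles;
// or, with the older API:
// gulp.task('i18n', processAll18nFiles);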