Why can't I import a default export with "import ... as" with BabelJS - ecmascript-6

In version 5.6.4 of BabelJS, I seemingly cannot "import ... as." Here are examples of what I am trying to do:
In file 'test.js':
export default class Test {};
In file 'test2.js' (in the same directory):
import Test as Test2 from './test';
I have also tried to do:
import {Test as Test2} from './test';
Even though the docs say nothing about this restriction here:
http://babeljs.io/docs/learn-es2015/#modules
and braces are only used for the non-default syntax here:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import
Has anyone done this successfully?
EDIT: It is absolutely because of the default keyword. So, in this case, the question becomes: does anyone have links to documentation, from ECMA or Babel, stating that a default import cannot be aliased?

You can import the default export by either
import Test2 from './test';
or
import {default as Test2} from './test';
The default export doesn't have Test as a name that you would need to alias - you just need to import the default under the name that you want.
The best documentation I've found so far is the article "ECMAScript 6 modules: the final syntax" on Axel Rauschmayer's blog.

Related

spacy_wordnet -> lang extra fields not permitted

I was following a tutorial for WordNet and running this code:
import spacy
print(spacy.__version__)
from spacy_wordnet.wordnet_annotator import WordnetAnnotator
nlp = spacy.load('en_core_web_sm')
nlp.add_pipe("spacy_wordnet", after='tagger', config={'lang': nlp.lang})
but I'm getting this error:
spacy_wordnet -> lang extra fields not permitted
How can I fix it? I'm using VS Code, Python 3.1.0 and spaCy 3.3.0.

How do you set existing_data_behavior in pyarrow?

I'm getting the error below. How do I change the behavior when writing a dataset with write_dataset?
pyarrow.lib.ArrowInvalid: Could not write to <my-output-dir> as the directory is not empty and existing_data_behavior is to error
Update: If you are using exactly version 6.0.0 then this was a bug (see below). If you are using a version >= 6.0.1 then you can specify it as part of the write_dataset call:
import pyarrow as pa
import pyarrow.dataset as ds
tab = pa.Table.from_pydict({"x": [1, 2, 3], "y": ["x", "y", "z"]})
partitioning = ds.partitioning(schema=pa.schema([pa.field('y', pa.utf8())]), flavor='hive')
ds.write_dataset(tab, '/tmp/foo_dataset', format='parquet', partitioning=partitioning)
# This write would fail because data exists and the default
# is to not allow a potential overwrite
ds.write_dataset(tab, '/tmp/foo_dataset', format='parquet', partitioning=partitioning)
# By specifying existing_data_behavior we can change that
# default to return to the previous behavior
ds.write_dataset(tab, '/tmp/foo_dataset', format='parquet', partitioning=partitioning, existing_data_behavior='overwrite_or_ignore')
Legacy 6.0.0 Answer
This is unfortunately a bug: https://issues.apache.org/jira/browse/ARROW-14620
The default behavior changed in 6.0.0 so that the write_dataset method will not proceed if data exists in the destination directory. The flag to override this behavior did not get included in the python bindings.
Workarounds are to use an older version or delete all files in the directory first.
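On 6.0.0, the delete-first workaround can be sketched with the standard library alone; the directory path and file name below are purely illustrative:

```python
import os
import shutil
import tempfile

# Illustrative destination directory that already contains data
out_dir = os.path.join(tempfile.mkdtemp(), "foo_dataset")
os.makedirs(out_dir)
open(os.path.join(out_dir, "part-0.parquet"), "w").close()

# pyarrow 6.0.0 workaround: clear the directory so write_dataset
# sees an empty destination and does not raise ArrowInvalid
shutil.rmtree(out_dir)
os.makedirs(out_dir)

print(os.listdir(out_dir))
```

After clearing the directory, a subsequent write_dataset call proceeds as it did on versions before 6.0.0.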

PhpStorm: How to use single quotation mark instead double for auto import?

I develop with PhpStorm.
For TypeScript projects, I like the "Auto Import" feature when I type a module name.
But when I want to load (for instance) NgbModule, I get the following auto import:
import {NgbModule} from "@ng-bootstrap/ng-bootstrap";
How can I configure PhpStorm to use single quotes instead of double quotes? Like this:
import {NgbModule} from '@ng-bootstrap/ng-bootstrap';
Please set Use single quotes always in Preferences | Editor | Code Style | TypeScript | Punctuation.

Python 2 to 3: telling 2to3 "I got this"

With linters or coverage.py, you can tell the tool to ignore certain parts of your code.
For example, #pragma: no cover tells coverage not to count an exception branch as missing:
except (Exception,) as e:  # pragma: no cover
    if cpdb(): pdb.set_trace()
    raise
Now, I know I can exclude specific fixers from 2to3. For example, to avoid fixing imports below, I can use 2to3 test_import_stringio.py -x imports.
But can I use code annotations/directives to keep a fixer active except at certain locations? For example, this bit of code is already adjusted to work on both 2 and 3:
#this import should work fine in 2 AND 3.
try:
    from io import StringIO
except ImportError:
    #pragma-for-2to3: skip
    from StringIO import StringIO
But 2to3 helpfully converts it anyway, because no such directive/pragma exists, and now this won't work in 2:
#this test should work fine in 2 and 3.
try:
    from io import StringIO
except ImportError:
    #pragma-for-2to3: skip
    from io import StringIO
The reason I am asking is that I want to avoid a big-bang approach. I intend to refactor the code bit by bit, starting with the unit tests, to run under 2 and 3.
I am guessing this is not possible and am just looking at my options. What I'll probably end up doing is run the converter only on imports (with -f imports, for example), check what it did, apply those changes manually to the code, and then exclude imports from future consideration with -x imports.

Any library that can help me create a JSON file with dummy records

I am looking for a library (in Java) that can help me generate a dummy JSON file to test my code. For example, the JSON file could contain random user profile data: name, address, zip code.
I searched Stack Overflow and found the following link: How to generate JSON string in Java?
I think the suggested library https://github.com/DiUS/java-faker seems useful, but because of security constraints I cannot use this particular library. Are there any other recommendations?
You can use, for instance, Python's Faker, like this:
#!/usr/bin/env python3
from json import dumps
from faker import Faker

fake = Faker()

def user():
    return dict(
        name=fake.name(),
        address=fake.address(),
        bio=fake.text()
    )

print('[')
try:
    while True:
        print(dumps(user()))
        print(',')
except KeyboardInterrupt:
    # XXX: a JSON array cannot end with a comma
    print(dumps(user()))
    print(']')
You can use it like this:
python3 fake_user.py > users.json
Use Ctrl+C to stop it when the file is big enough.
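If Faker itself is also off-limits under the same security constraints, a dependency-free sketch using only the standard library can produce similar dummy records; the field names and word pools below are made up for illustration:

```python
import json
import random

# Small made-up pools to draw dummy values from
FIRST = ["Alice", "Bob", "Carol", "Dave"]
STREETS = ["Oak St", "Maple Ave", "Pine Rd"]

def fake_user(rng):
    # Build one illustrative user profile record
    return {
        "name": rng.choice(FIRST),
        "address": "%d %s" % (rng.randint(1, 999), rng.choice(STREETS)),
        "zipcode": "%05d" % rng.randint(0, 99999),
    }

rng = random.Random(42)  # seeded so the test data is reproducible
users = [fake_user(rng) for _ in range(3)]
text = json.dumps(users, indent=2)
print(text)
assert json.loads(text) == users  # round-trips as valid JSON
```

The same approach translates directly to Java with java.util.Random and any in-house JSON writer, which may be easier to get past a security review than an external faker dependency.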