RuntimeError: Lock objects should only be shared between processes through inheritance - pickle

I'm getting a RuntimeError: Lock objects should only be shared between processes through inheritance when writing an xarray.DataArray to disk with to_netcdf().
Everything works until writing to disk. I did find a workaround, which is to use dask.config.set(scheduler='single-threaded').
Is everyone supposed to set scheduler='single-threaded' before writing to disk?
Am I missing something?
I tested two schedulers:
1) from dask.distributed import Client; client = Client()
2) import dask.multiprocessing; dask.config.set(scheduler=dask.multiprocessing.get)
python=2.7, xarray=0.10.9, traceback:
File "/home/py_user/miniconda2/envs/v0/lib/python2.7/site-packages/xarray/core/dataarray.py", line 1746, in to_netcdf
return dataset.to_netcdf(*args, **kwargs)
File "/home/py_user/miniconda2/envs/v0/lib/python2.7/site-packages/xarray/core/dataset.py", line 1254, in to_netcdf
compute=compute)
File "/home/py_user/miniconda2/envs/v0/lib/python2.7/site-packages/xarray/backends/api.py", line 724, in to_netcdf
unlimited_dims=unlimited_dims, compute=compute)
File "/home/py_user/miniconda2/envs/v0/lib/python2.7/site-packages/xarray/core/dataset.py", line 1181, in dump_to_store
store.sync(compute=compute)
...
File "/home/py_user/miniconda2/envs/v0/lib/python2.7/multiprocessing/synchronize.py", line 95, in __getstate__
assert_spawning(self)
File "/home/py_user/miniconda2/envs/v0/lib/python2.7/multiprocessing/forking.py", line 52, in assert_spawning
' through inheritance' % type(self).__name__

As @jhamman mentioned in the comments, this may have been fixed in a newer version of xarray.
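A minimal sketch of scoping that workaround with a `with` block instead of setting it globally (plain dask shown here; the same block would wrap the `to_netcdf` call):

```python
import dask
import dask.array as da

# Everything computed inside this block uses the single-threaded
# scheduler, so no locks ever need to be pickled across processes.
with dask.config.set(scheduler="single-threaded"):
    total = float(da.ones((4, 4), chunks=2).sum().compute())

print(total)  # 16.0
```

Outside the block, the previously configured scheduler applies again, so the rest of the pipeline can keep running in parallel.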

Related

ib_insync "contract can't be hashed"

A process I have that's been running for a couple of years suddenly stopped working.
I had avoided updating much in the way of Python and packages precisely to avoid that.
I've now updated ib_insync to the latest version, and no improvement. Debugging a little gives me this:
The code:
import ib_insync as ibis
ib = ibis.IB()
contract = ibis.Contract()
contract.secType = 'STK'
contract.currency = 'USD'
contract.exchange = 'SMART'
contract.localSymbol = 'AAPL'
ib.qualifyContracts(contract)
Result:
File "/Users/macuser/anaconda3/lib/python3.6/site-packages/ib_insync/client.py", line 244, in send
if field in empty:
File "/Users/macuser/anaconda3/lib/python3.6/site-packages/ib_insync/contract.py", line 153, in __hash__
raise ValueError(f"Contract {self} can't be hashed")
ValueError: Contract Contract(secType='STK', exchange='SMART', currency='USD', localSymbol='AAPL') can't be hashed
Exception ignored in: <bound method IB.__del__ of <IB connected to 127.0.0.1:7497 clientId=6541>>
Traceback (most recent call last):
File "/Users/macuser/anaconda3/lib/python3.6/site-packages/ib_insync/ib.py", line 233, in __del__
File "/Users/macuser/anaconda3/lib/python3.6/site-packages/ib_insync/ib.py", line 281, in disconnect
File "/Users/macuser/anaconda3/lib/python3.6/logging/__init__.py", line 1306, in info
File "/Users/macuser/anaconda3/lib/python3.6/logging/__init__.py", line 1442, in _log
File "/Users/macuser/anaconda3/lib/python3.6/logging/__init__.py", line 1452, in handle
File "/Users/macuser/anaconda3/lib/python3.6/logging/__init__.py", line 1514, in callHandlers
File "/Users/macuser/anaconda3/lib/python3.6/logging/__init__.py", line 863, in handle
File "/Users/macuser/anaconda3/lib/python3.6/logging/__init__.py", line 1069, in emit
File "/Users/macuser/anaconda3/lib/python3.6/logging/__init__.py", line 1059, in _open
NameError: name 'open' is not defined
| => python --version
Python 3.6.4 :: Anaconda, Inc.
I am not OP, but have had a similar problem. I am attempting to send an order to IB using ib_insync.
from ib_insync import Stock, LimitOrder

contract = Stock('DKS', 'SMART', 'USD')
order = LimitOrder('SELL', 1, 1)
try:
    ib.qualifyContracts(contract)
    trade = ib.placeOrder(contract, order)
except Exception as e:
    print(e)
Here is the exception that is returned:
Contract Stock(symbol='DKS', exchange='SMART', currency='USD') can't be hashed
I understand that lists and other mutable objects can't be hashed, but I don't understand why this wouldn't work. It pretty clearly follows the examples in the ib_insync docs.
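For reference, the general rule alluded to here can be shown with built-ins alone; the library-specific part is that ib_insync's Contract defines its own hashing, which (judging by the traceback) raises until the contract is resolved:

```python
# Immutable built-ins are hashable...
print(hash("AAPL") == hash("AAPL"))  # True

# ...mutable containers are not.
try:
    hash(["STK", "USD"])
except TypeError as e:
    print(e)  # unhashable type: 'list'
```

So the ValueError here is not Python's usual unhashable-type error; it is raised deliberately by the library, which is why a library update can change the behaviour.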
FYI - I was able to resolve this issue by updating to the latest ib_insync version. Perhaps this will help you as well.

Pandoc Mermaid filter

I am trying to use this pandoc filter to convert markdown to HTML.
This is the example file:
gantt
dateFormat YYYY-MM-DD
title Adding GANTT diagram functionality to mermaid
section A section
Completed task :done, des1, 2014-01-06,2014-01-08
Active task :active, des2, 2014-01-09, 3d
Future task : des3, after des2, 5d
Future task2 : des4, after des3, 5d
section Critical tasks
Completed task in the critical line :crit, done, 2014-01-06,24h
Implement parser and jison :crit, done, after des1, 2d
Create tests for parser :crit, active, 3d
Future task in critical line :crit, 5d
Create tests for renderer :2d
Add to mermaid :1d
This is the command that I am running:
pandoc file.md -f markdown -o out.html --filter=pandoc-mermaid
This is the error message:
File "D:\Anaconda3\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "D:\Anaconda3\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "D:\Anaconda3\Scripts\pandoc-mermaid.exe\__main__.py", line 7, in <module>
File "D:\Anaconda3\lib\site-packages\pandoc_mermaid_filter.py", line 38, in main
toJSONFilter(mermaid)
File "D:\Anaconda3\lib\site-packages\pandocfilters.py", line 130, in toJSONFilter
toJSONFilters([action])
File "D:\Anaconda3\lib\site-packages\pandocfilters.py", line 164, in toJSONFilters
sys.stdout.write(applyJSONFilters(actions, source, format))
File "D:\Anaconda3\lib\site-packages\pandocfilters.py", line 195, in applyJSONFilters
altered = walk(altered, action, format, meta)
File "D:\Anaconda3\lib\site-packages\pandocfilters.py", line 123, in walk
return {k: walk(v, action, format, meta) for k, v in x.items()}
File "D:\Anaconda3\lib\site-packages\pandocfilters.py", line 123, in <dictcomp>
return {k: walk(v, action, format, meta) for k, v in x.items()}
File "D:\Anaconda3\lib\site-packages\pandocfilters.py", line 110, in walk
res = action(item['t'],
File "D:\Anaconda3\lib\site-packages\pandoc_mermaid_filter.py", line 31, in mermaid
subprocess.check_call([MERMAID_BIN, "-i", src, "-o", dest])
File "D:\Anaconda3\lib\subprocess.py", line 359, in check_call
retcode = call(*popenargs, **kwargs)
File "D:\Anaconda3\lib\subprocess.py", line 340, in call
with Popen(*popenargs, **kwargs) as p:
File "D:\Anaconda3\lib\subprocess.py", line 858, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "D:\Anaconda3\lib\subprocess.py", line 1311, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified
Error running filter pandoc-mermaid:
Filter returned error status 1
The folder where the executable is located is apparently added to PATH. Any ideas on how I could fix it?
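WinError 2 means CreateProcess could not find the executable the filter tried to launch. A quick stdlib check of what is actually resolvable from this process's PATH (mmdc is the usual mermaid-cli executable name; adjust if yours differs):

```python
import shutil

# shutil.which returns the resolved full path, or None if the
# name is not findable via this process's PATH.
print(shutil.which("mmdc"))
print(shutil.which("pandoc"))
```

Note that on Windows a tool installed only as a .cmd shim (e.g. mmdc.cmd) may resolve in an interactive console but still fail when launched by bare name through CreateProcess, which only tries .exe.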
Specifications:
Windows 10 Home
pandoc 2.14.0.1
Thanks
This doesn't solve the problem, but here is a different way to achieve the same result, in case it's useful to others who arrive here:
Mermaid-cli has the feature to act as a preprocessor for Markdown.
As such, you can do the following:
mkdir build
npx -p @mermaid-js/mermaid-cli mmdc -i file.md -o build/output-svg.md
# Generates lots of output-svg-1.svg files linked from the doc
cd build
pandoc output-svg.md -o output.html
It defaults to SVG images (which generally work fine for HTML but fail when targeting PDF). It has a less well documented feature to force PNG generation:
npx -p @mermaid-js/mermaid-cli mmdc -i file.md --outputFormat=png -o build/output-png.md
# Generates lots of output-png-1.png files linked from the doc
cd build
pandoc output-png.md -o output.pdf
I found success with the NPM library which adds a similar Mermaid filter for Pandoc: raghur/mermaid-filter.
You should be able to do:
pandoc file.md -f markdown -o out.html --filter=mermaid-filter
IMPORTANT: On Windows, you need to specify the filter as --filter mermaid-filter.cmd instead of --filter mermaid-filter (I missed this at first and was very confused)

How to debug my HTTPie or Postman so Akamai FastPurge API can make successful calls?

I wanted to use HTTPie or Postman to piece together a request for the Akamai FastPurge API, so I could see the structure of the HTTP request as a whole, with the aim of building a Java module that builds the same HTTP request.
I've tried several tutorials provided by Akamai* and nothing seemed to work.
The credentials for the FastPurge API are in their own edgegrid file, and I edited the config.json of HTTPie as suggested by Akamai.
It doesn't matter whether CP codes or URLs are used; the result is the same.
The API does have read-write authorisation, and the credentials are correctly read from the file.
I also tried using Postman, but Postman doesn't integrate the EdgeGrid authorization, so I have to do it by hand without knowing what it looks like. There is a tutorial on setting up an Akamai API in Postman, but it lacks the critical part: the pre-request script.**
*: https://developer.akamai.com/akamai-101-basics-purging , https://community.akamai.com/customers/s/article/Exploring-Akamai-OPEN-APIs-from-the-command-line-using-HTTPie-and-jq?language=en_US , https://developer.akamai.com/api/core_features/fast_purge/v3.html , https://api.ccu.akamai.com/ccu/v2/docs/index.html
**: https://community.akamai.com/customers/s/article/Using-Postman-to-access-Akamai-OPEN-APIs?language=en_US
The errors I receive in HTTPie are either this:
C:\python>http --auth-type=edgegrid -a default: :/ccu/v3/invalidate/url/production objects:='["https://www.example.com","http://www.example.com"]' --debug
HTTPie 1.0.3-dev
Requests 2.21.0
Pygments 2.3.1
Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 22:20:52) [MSC v.1916 32 bit (Intel)]
c:\python\python.exe
Windows 10
<Environment {
"colors": 256,
"config": {
"__meta__": {
"about": "HTTPie configuration file",
"help": "https://httpie.org/doc#config",
"httpie": "1.0.3-dev"
},
"default_options": "['--verbose', '--traceback', '--auth-type=edgegrid', '--print=Hhb', '--timeout=300', '--style=autumn', '-adefault:']"
},
"config_dir": "%home",
"is_windows": true,
"stderr": "<colorama.ansitowin32.StreamWrapper object at 0x066D1A70>",
"stderr_isatty": true,
"stdin": "<_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>",
"stdin_encoding": "utf-8",
"stdin_isatty": true,
"stdout": "<colorama.ansitowin32.StreamWrapper object at 0x066D19D0>",
"stdout_encoding": "utf-8",
"stdout_isatty": true
}>
Traceback (most recent call last):
File "c:\python\lib\site-packages\httpie\input.py", line 737, in parse_items
value = load_json_preserve_order(value)
File "c:\python\lib\site-packages\httpie\utils.py", line 7, in load_json_preserve_order
return json.loads(s, object_pairs_hook=OrderedDict)
File "c:\python\lib\json\__init__.py", line 361, in loads
return cls(**kw).decode(s)
File "c:\python\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "c:\python\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python\Scripts\http-script.py", line 11, in <module>
load_entry_point('httpie==1.0.3.dev0', 'console_scripts', 'http')()
File "c:\python\lib\site-packages\httpie\__main__.py", line 11, in main
sys.exit(main())
File "c:\python\lib\site-packages\httpie\core.py", line 210, in main
parsed_args = parser.parse_args(args=args, env=env)
File "c:\python\lib\site-packages\httpie\input.py", line 152, in parse_args
self._parse_items()
File "c:\python\lib\site-packages\httpie\input.py", line 358, in _parse_items
data_class=ParamsDict if self.args.form else OrderedDict
File "c:\python\lib\site-packages\httpie\input.py", line 739, in parse_items
raise ParseError('"%s": %s' % (item.orig, e))
httpie.input.ParseError: "objects:='[https://www.example.com,http://www.example.com]'": Expecting value: line 1 column 1 (char 0)
or that:
C:\python>http POST --auth-type=edgegrid -a default: :/ccu/v3/invalidate/cpcode/production < file.json
usage: http [--json] [--form] [--pretty {all,colors,format,none}]
[--style STYLE] [--print WHAT] [--headers] [--body] [--verbose]
[--all] [--history-print WHAT] [--stream] [--output FILE]
[--download] [--continue]
[--session SESSION_NAME_OR_PATH | --session-read-only SESSION_NAME_OR_PATH]
[--auth USER[:PASS]] [--auth-type {basic,digest,edgegrid}]
[--proxy PROTOCOL:PROXY_URL] [--follow]
[--max-redirects MAX_REDIRECTS] [--timeout SECONDS]
[--check-status] [--verify VERIFY]
[--ssl {ssl2.3,tls1,tls1.1,tls1.2}] [--cert CERT]
[--cert-key CERT_KEY] [--ignore-stdin] [--help] [--version]
[--traceback] [--default-scheme DEFAULT_SCHEME] [--debug]
[METHOD] URL [REQUEST_ITEM [REQUEST_ITEM ...]]
http: error: unrecognized arguments: :/ccu/v3/invalidate/cpcode/production
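Regarding the first traceback: judging by the item HTTPie echoes back (the inner double quotes around the URLs are gone), the quoting was mangled before parsing, which is a classic cmd.exe issue since single quotes have no special meaning there; the JSON parser then sees bare URLs. A stdlib illustration:

```python
import json

# What HTTPie apparently received: quotes around each URL stripped.
bad = '[https://www.example.com,http://www.example.com]'
try:
    json.loads(bad)
except json.JSONDecodeError as e:
    print("rejected:", e)

# Correctly quoted JSON parses fine.
good = '["https://www.example.com","http://www.example.com"]'
print("parsed:", json.loads(good))
```

On cmd.exe, wrapping the whole request item in double quotes and backslash-escaping the inner ones is usually what's needed to get the second form through to HTTPie intact.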
A typical successful response looks like this:
{
"httpStatus" : 201,
"detail" : "Request accepted.",
"estimatedSeconds" : 420,
"purgeId" : "95b5a092-043f-4af0-843f-aaf0043faaf0",
"progressUri" : "/ccu/v2/purges/95b5a092-043f-4af0-843f-aaf0043faaf0",
"pingAfterSeconds" : 420,
"supportId" : "17PY1321286429616716-211907680"
}
I would really appreciate it if somebody could either help me in correcting my setup on either HTTPie or Postman, or give me the complete structure an Akamai FastPurge HTTP request has so I can skip the hassle with HTTPie or Postman.
Thank you very much, your help is much appreciated!
Postman now has an Akamai EdgeGrid authorization type. It takes the contents of your .edgerc file.
I set it on the collection and let all of the calls inherit the authorization.
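For building the same call outside HTTPie/Postman: the FastPurge v3 request is a POST to /ccu/v3/invalidate/url/{network} on your account-specific *.purge.akamaiapis.net host, with a small JSON body, signed with an EdgeGrid Authorization header (Akamai publishes EdgeGrid signing libraries for both Java and Python, so the header itself is best left to those). A sketch of the path and body, with a made-up host:

```python
import json

# Hypothetical host -- the real, account-specific value comes from
# the 'host' entry in your .edgerc credentials file.
host = "akab-xxxxxxxxxxxxxxxx.purge.akamaiapis.net"
path = "/ccu/v3/invalidate/url/production"

# The v3 body is just an "objects" list: URLs for the /url/
# endpoint, CP codes for the /cpcode/ variant.
body = json.dumps({"objects": [
    "https://www.example.com",
    "http://www.example.com",
]})

print("POST https://" + host + path)
print("Content-Type: application/json")
print(body)
```

The EdgeGrid signature is the only non-trivial part of the request; everything else is an ordinary JSON POST.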

Re long list split

I have been struggling with this problem for quite a while. Basically, I need to split a long HTML list (obviously I will not put it in here because it is massive). I tried the str.split() method, but you can only pass one separator at a time. So I found the re.split() function. But when I try to split with it, it throws this at me:
Traceback (most recent call last):
File "/root/Plocha/FlaskWebsiteHere/Mrcasa_na_C.py", line 34, in <module>
a = re.split(' / |</h3><p>|: 1\) ', something_paragraf[0])
File "/usr/lib/python3.5/re.py", line 203, in split
return _compile(pattern, flags).split(string, maxsplit)
TypeError: expected string or bytes-like object
I have tried to resolve this, but with no luck. Please help!
SnailingGoat
Maybe something like this?
import re

# 'soup' is assumed to be an existing BeautifulSoup object
something_paragraf = soup.find_all("div", {"class": "ukolRes"})
# convert the bs4.element.ResultSet to str
html = ''.join([s.text for s in something_paragraf])
# modify this regex split to suit your needs:
# split the string on punctuation
multiple_strings = re.split('(?<=[.,!/?]) +', html)
print(multiple_strings)
output:
['Matematika /', 'splnit do 21.', 'září (čtvrtek)\nnaučit se bezpodmínečně všechny tři mocninné vzorce\núkol byly příklady v sešitě na rozložení a složení vzorců\n\nAnglický jazyk /'
...
'lepidlo Kores,', 'tužky,', 'propiska či pero (modré)\n']
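As for the original TypeError in the question: re.split requires a string, so a bs4 element has to be converted first (joining .text values, as above, is one way). Once it gets a string, a single call handles all three delimiters; a small made-up sample shaped like the question's pattern:

```python
import re

# Illustrative text containing all three delimiters from the question.
text = 'Matematika / ukol</h3><p>due: 1) Friday'
parts = re.split(r' / |</h3><p>|: 1\) ', text)
print(parts)  # ['Matematika', 'ukol', 'due', 'Friday']
```

The alternatives in the pattern are tried left to right at each position, so the order only matters when one delimiter is a prefix of another.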

Django south from MySQL to postgresql

I first started using MySQL in one of my apps and I am now thinking of moving from MySQL to PostgreSQL.
I have South installed for migrations.
When I set up a new DB in Postgres, I successfully synced my apps but came to a complete halt in one of my last migrations.
> project:0056_auto__chg_field_project_project_length
Traceback (most recent call last):
File "./manage.py", line 11, in <module>
execute_manager(settings)
File "/Users/ApPeL/.virtualenvs/fundedbyme.com/lib/python2.7/site-packages/django/core/management/__init__.py", line 438, in execute_manager
utility.execute()
File "/Users/ApPeL/.virtualenvs/fundedbyme.com/lib/python2.7/site-packages/django/core/management/__init__.py", line 379, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Users/ApPeL/.virtualenvs/fundedbyme.com/lib/python2.7/site-packages/django/core/management/base.py", line 191, in run_from_argv
self.execute(*args, **options.__dict__)
File "/Users/ApPeL/.virtualenvs/fundedbyme.com/lib/python2.7/site-packages/django/core/management/base.py", line 220, in execute
output = self.handle(*args, **options)
File "/Library/Python/2.7/site-packages/south/management/commands/migrate.py", line 105, in handle
ignore_ghosts = ignore_ghosts,
File "/Library/Python/2.7/site-packages/south/migration/__init__.py", line 191, in migrate_app
success = migrator.migrate_many(target, workplan, database)
File "/Library/Python/2.7/site-packages/south/migration/migrators.py", line 221, in migrate_many
result = migrator.__class__.migrate_many(migrator, target, migrations, database)
File "/Library/Python/2.7/site-packages/south/migration/migrators.py", line 292, in migrate_many
result = self.migrate(migration, database)
File "/Library/Python/2.7/site-packages/south/migration/migrators.py", line 125, in migrate
result = self.run(migration)
File "/Library/Python/2.7/site-packages/south/migration/migrators.py", line 99, in run
return self.run_migration(migration)
File "/Library/Python/2.7/site-packages/south/migration/migrators.py", line 81, in run_migration
migration_function()
File "/Library/Python/2.7/site-packages/south/migration/migrators.py", line 57, in <lambda>
return (lambda: direction(orm))
File "/Users/ApPeL/Sites/Django/fundedbyme/project/migrations/0056_auto__chg_field_project_project_length.py", line 12, in forwards
db.alter_column('project_project', 'project_length', self.gf('django.db.models.fields.IntegerField')())
File "/Library/Python/2.7/site-packages/south/db/generic.py", line 382, in alter_column
flatten(values),
File "/Library/Python/2.7/site-packages/south/db/generic.py", line 150, in execute
cursor.execute(sql, params)
File "/Users/ApPeL/.virtualenvs/fundedbyme.com/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 44, in execute
return self.cursor.execute(query, args)
django.db.utils.DatabaseError: column "project_length" cannot be cast to type integer
I am wondering if there is some workaround for this?
Your current migration works this way:
Alter column "project_length" to another type.
It is broken because you are making an ALTER that is not supported by PostgreSQL.
You must fix your migration. You can change it to the following migration (it will work, but it can probably be done more simply):
Create another column project_length_tmp with the type you want project_length to have, and some default value.
Make a data migration from column project_length to project_length_tmp (see data migrations in the South docs).
Remove column project_length.
Rename column project_length_tmp to project_length.
It is a kind of complicated migration, but it has two major strengths:
1. It will work on all databases.
2. It is compatible with your old migration, so you can just override the old migration (change the file) and it will be fine.
Approach 2
Another approach to your problem would be just to remove all your migrations and start from scratch. If you have only a single deployment of your project, this will work fine for you.
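The four steps above can be sketched with stdlib sqlite3 standing in for PostgreSQL (table and column names taken from the question; steps 3 and 4 are left as comments because old SQLite builds lack DROP/RENAME COLUMN, and in South they map to db.delete_column and db.rename_column anyway):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE project_project (project_length TEXT)")
conn.execute("INSERT INTO project_project VALUES ('42')")

# 1. Add a temporary column with the target type and a default.
conn.execute(
    "ALTER TABLE project_project "
    "ADD COLUMN project_length_tmp INTEGER DEFAULT 0")
# 2. Data-migrate: copy values across, casting as you go.
conn.execute(
    "UPDATE project_project "
    "SET project_length_tmp = CAST(project_length AS INTEGER)")
# 3. Drop the old column   (South: db.delete_column).
# 4. Rename the new column (South: db.rename_column).

print(conn.execute(
    "SELECT project_length_tmp FROM project_project").fetchone())  # (42,)
```

The explicit CAST in step 2 is where you get to control the conversion that PostgreSQL refused to do implicitly.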
You don't provide any details of the SQL being executed, but it seems unlikely that it's an ALTER TYPE failing - assuming the SQL is correct.
=> CREATE TABLE t (c_text text, c_date date, c_datearray date[]);
CREATE TABLE
=> INSERT INTO t VALUES ('abc','2011-01-02',ARRAY['2011-01-02'::date,'2011-02-03'::date]);
INSERT 0 1
=> ALTER TABLE t ALTER COLUMN c_text TYPE integer USING (length(c_text));
ALTER TABLE
=> ALTER TABLE t ALTER COLUMN c_date TYPE integer USING (c_date - '2001-01-01');
ALTER TABLE
=> ALTER TABLE t ALTER COLUMN c_datearray TYPE integer USING (array_upper(c_datearray, 1));
ALTER TABLE
=> SELECT * FROM t;
c_text | c_date | c_datearray
--------+--------+-------------
3 | 3653 | 2
(1 row)
There's not much you can't do. I'm guessing it's incorrect SQL being generated from this Django module you are using.