I'm unable to get an Ethereum LES GetProofsV2 (or GetProofs) request to produce a ProofsV2 (or Proofs) response with anything but an empty array of Nodes. The request is targeted at a Geth node open to LES connections.
The request arguments are [B_32 (blockHash), B_32 (key), B_32 (key2), P (fromLevel)]. I'm providing a blockHash from a recently mined block, a key that is the Keccak-256 hash of a 20-byte account (with transactions from months ago), a key2 of 32 zero bytes, and a fromLevel of 0. I'm expecting to get back an array of Nodes, with the last node holding the RLP encoding of the AccountState ({nonce, balance, ...}). Is that expectation correct? Or would I only expect to get an array of Nodes if key had transactions in the specified blockHash? (In fact, I tried both: a blockHash with and without transactions; Nodes is still [].)
Specifically, this is the request data:
ETH: LES-ProofsV2-Req: L 1: [
ETH: LES-ProofsV2-Req: L 4: [
ETH: LES-ProofsV2-Req: I 32: 0x0a89dd55d38929468c1303b92ab43ca57269ac864175fc6208ae739ffcc17c9b
ETH: LES-ProofsV2-Req: I 32: 0x272cf200ca06815ab2170fde0901e7da10ab1dddc31223184f139def3b09f670
ETH: LES-ProofsV2-Req: I 32: 0x0000000000000000000000000000000000000000000000000000000000000000
ETH: LES-ProofsV2-Req: I 0: 0x
ETH: LES-ProofsV2-Req: ]
ETH: LES-ProofsV2-Req: ]
Where the blockHash 0x0a89... is blockNumber 5732521 and the key 0x272c... is the hash of account 0x49f4C50d9BcC7AfdbCF77e0d6e364C29D5a660DF.
For completeness, the response is:
ETH: LES-PROOFSV2: L 3: [
ETH: LES-PROOFSV2: I 0: 0x
ETH: LES-PROOFSV2: I 4: 0x11d1a228
ETH: LES-PROOFSV2: L 0: []
ETH: LES-PROOFSV2: ]
What is needed to make a successful GetProofsV2 request?
I'm working on an aggregated config-file parsing tool, which I hope can support .json, .yaml, and .toml files. So I ran the following tests:
The example.json config file is:
{
    "DEFAULT":
    {
        "ServerAliveInterval": 45,
        "Compression": true,
        "CompressionLevel": 9,
        "ForwardX11": true
    },
    "bitbucket.org":
    {
        "User": "hg"
    },
    "topsecret.server.com":
    {
        "Port": 50022,
        "ForwardX11": false
    },
    "special":
    {
        "path": "C:\\Users",
        "escaped1": "\n\t",
        "escaped2": "\\n\\t"
    }
}
The example.yaml config file is:
DEFAULT:
    ServerAliveInterval: 45
    Compression: yes
    CompressionLevel: 9
    ForwardX11: yes
bitbucket.org:
    User: hg
topsecret.server.com:
    Port: 50022
    ForwardX11: no
special:
    path: C:\Users
    escaped1: "\n\t"
    escaped2: \n\t
and the example.toml config file is:
[DEFAULT]
ServerAliveInterval = 45
Compression = true
CompressionLevel = 9
ForwardX11 = true
['bitbucket.org']
User = 'hg'
['topsecret.server.com']
Port = 50022
ForwardX11 = false
[special]
path = 'C:\Users'
escaped1 = "\n\t"
escaped2 = '\n\t'
Then, the test code (with output) is:
import pickle, json, yaml
# TOML, see https://github.com/hukkin/tomli
try:
    import tomllib  # Python 3.11+
except ModuleNotFoundError:
    import tomli as tomllib

path = "example.json"
with open(path) as file:
    config1 = json.load(file)
assert isinstance(config1, dict)
pickled1 = pickle.dumps(config1)

path = "example.yaml"
with open(path, 'r', encoding='utf-8') as file:
    config2 = yaml.safe_load(file)
assert isinstance(config2, dict)
pickled2 = pickle.dumps(config2)

path = "example.toml"
with open(path, 'rb') as file:
    config3 = tomllib.load(file)
assert isinstance(config3, dict)
pickled3 = pickle.dumps(config3)
print(config1==config2) # True
print(config2==config3) # True
print(pickled1==pickled2) # False
print(pickled2==pickled3) # True
So, my question is: since the parsed objects are all dicts, and these dicts are equal to each other, why are their pickled bytes not the same? That is, why is the pickled form of the dict parsed from JSON different from the other two?
Thanks in advance.
The difference is due to two things:
1. The json module memoizes object keys with the same value (it's not interning them, but the scanner object contains a memo dict that it uses to dedupe identical key strings within a single parsing run), while yaml does not (it just makes a new str each time it sees the same data).
2. pickle faithfully reproduces the exact structure of the data it's told to dump, replacing subsequent references to the same object with a back-reference to the first time it was seen. (Among other benefits, this makes it possible to dump recursive data structures, e.g. lst = []; lst.append(lst), without infinite recursion, and reproduce them faithfully when unpickled.)
Issue #1 isn't visible in equality testing (strs compare equal when they hold the same data, not just when they are the same exact object in memory). But when pickle sees "ForwardX11" the first time, it inserts the pickled form of the string and emits an opcode that assigns a number to that object. If that exact object is seen again (same memory address, not merely the same value), instead of reserializing it, pickle emits a simpler opcode that just says "go find the object associated with the number from last time and put it here as well". If it's a different object though, even one with the same value, it's new, and gets serialized separately (and assigned another number, in case the new object is seen again).
Simplifying your code to demonstrate the issue, you can inspect the generated pickle output to see how this is happening:
s = r'''{
"DEFAULT":
{
"ForwardX11": true
},
"FOO":
{
"ForwardX11": false
}
}'''
s2 = r'''DEFAULT:
ForwardX11: yes
FOO:
ForwardX11: no
'''
import io, json, yaml, pickle, pickletools
d1 = json.load(io.StringIO(s))
d2 = yaml.safe_load(io.StringIO(s2))
pickletools.dis(pickle.dumps(d1))
pickletools.dis(pickle.dumps(d2))
The output from that code for the json-parsed input, at least on Python 3.7 (the default pickle protocol and exact pickling format can change from release to release), is (with # comments added inline to point out the important parts):
0: \x80 PROTO 3
2: } EMPTY_DICT
3: q BINPUT 0
5: ( MARK
6: X BINUNICODE 'DEFAULT'
18: q BINPUT 1
20: } EMPTY_DICT
21: q BINPUT 2
23: X BINUNICODE 'ForwardX11' # Serializes 'ForwardX11'
38: q BINPUT 3 # Assigns the serialized form the ID of 3
40: \x88 NEWTRUE
41: s SETITEM
42: X BINUNICODE 'FOO'
50: q BINPUT 4
52: } EMPTY_DICT
53: q BINPUT 5
55: h BINGET 3 # Looks up whatever object was assigned the ID of 3
57: \x89 NEWFALSE
58: s SETITEM
59: u SETITEMS (MARK at 5)
60: . STOP
highest protocol among opcodes = 2
while the output for the yaml-loaded data is:
0: \x80 PROTO 3
2: } EMPTY_DICT
3: q BINPUT 0
5: ( MARK
6: X BINUNICODE 'DEFAULT'
18: q BINPUT 1
20: } EMPTY_DICT
21: q BINPUT 2
23: X BINUNICODE 'ForwardX11' # Serializes as before
38: q BINPUT 3 # and assigns code 3 as before
40: \x88 NEWTRUE
41: s SETITEM
42: X BINUNICODE 'FOO'
50: q BINPUT 4
52: } EMPTY_DICT
53: q BINPUT 5
55: X BINUNICODE 'ForwardX11' # Doesn't see this 'ForwardX11' as being the exact same object, so reserializes
70: q BINPUT 6 # and marks again, in case this copy is seen again
72: \x89 NEWFALSE
73: s SETITEM
74: u SETITEMS (MARK at 5)
75: . STOP
highest protocol among opcodes = 2
Printing the id of each such string would get you similar information, e.g., replacing the pickletools lines with:
for k in d1['DEFAULT']:
    print(id(k))
for k in d1['FOO']:
    print(id(k))
for k in d2['DEFAULT']:
    print(id(k))
for k in d2['FOO']:
    print(id(k))
will show a consistent id for both 'ForwardX11's in d1, but differing ones for d2; a sample run produced (with inline comments added):
140067902240944 # First from d1
140067902240944 # Second from d1 is *same* object
140067900619760 # First from d2
140067900617712 # Second from d2 is unrelated object (same value, but stored separately)
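The same identity effect can be shown without any parser at all. A minimal sketch, building one dict that reuses a single str object and one that mixes in an equal-but-distinct copy:

```python
import pickle

# Same value, different identity: runtime concatenation produces a new
# str object, while reusing the same name reuses the same object.
shared = "ForwardX11"
distinct = "".join(["Forward", "X11"])  # equal to shared, but a separate object

d_shared = {"DEFAULT": {shared: True}, "FOO": {shared: False}}
d_mixed = {"DEFAULT": {shared: True}, "FOO": {distinct: False}}

assert d_shared == d_mixed           # equality only compares values
p_shared = pickle.dumps(d_shared)
p_mixed = pickle.dumps(d_mixed)

# The repeated identical object pickles as a short back-reference (BINGET),
# so the shared version comes out smaller.
print(len(p_shared) < len(p_mixed))  # True
```

This is the whole json-vs-yaml difference in miniature: the values never differ, only the object graph does, and pickle records the object graph.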
While I didn't bother checking whether toml behaved the same way, given that it pickles the same as the yaml, it's clearly not attempting to dedupe strings; json is uniquely weird there. It's not a terrible idea that it does so, mind you: the keys of a JSON object are logically equivalent to attributes on an object, and for huge inputs (say, 10M objects in an array with the same handful of keys), deduping might save a meaningful amount of memory on the final parsed output (e.g. on CPython 3.11 x86-64 builds, replacing 10M copies of "ForwardX11" with a single copy would reduce roughly 590 MB of string data to just 59 bytes).
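You can observe the json scanner's memo directly by checking key identity after a single parse. A small sketch (this relies on a CPython implementation detail, not a documented guarantee):

```python
import io
import json

# Two objects in one document share the key string "SharedKey".
s = '{"first": {"SharedKey": 1}, "second": {"SharedKey": 2}}'
d = json.load(io.StringIO(s))

# Collect every inner key; the scanner's memo dict dedupes them,
# so both entries are the very same str object.
keys = [k for inner in d.values() for k in inner]
print(keys[0] is keys[1])  # True on CPython
```

Running the same check on a yaml.safe_load of an equivalent document would print False, which is exactly why its pickle reserializes the second key.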
As a side-note: This "dicts are equal, pickles are not" issue could also occur:
When the two dicts were constructed with the same keys and values, but the order in which the keys were inserted differed (modern Python uses insertion-ordered dicts; comparisons between them ignore ordering, but pickle would be serializing them in whatever order they iterate in naturally).
When there are objects which compare equal but have different types (e.g. set vs. frozenset, int vs. float); pickle would treat them separately, but equality tests would not see a difference.
Neither of these is the issue here (both json and yaml appear to be constructing in the same order seen in the input, and they're parsing the ints as ints), but it's entirely possible for your test of equality to return True, while the pickled forms are unequal, even when all the objects involved are unique.
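The insertion-order case from the side-note is easy to demonstrate with a two-line sketch:

```python
import pickle

d1 = {"a": 1, "b": 2}
d2 = {"b": 2, "a": 1}  # same keys and values, inserted in the other order

print(d1 == d2)                              # True: dict comparison ignores order
print(pickle.dumps(d1) == pickle.dumps(d2))  # False: pickle walks iteration order
```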
I am hitting an endpoint like this example with Postman: https://my_server/rest/api/users/getUser/format/SERIALIZE/:user_id/123
and getting back the following json as a response:
a: 4 {s: 6 "status";N;s: 7: "results";a: 2: {i: 0;a: 15: {s: 6: "userId";s: 3 "456";s :8: "username";s: 6: "Tester" ... .. .. .}}}
Is there a way I can get rid of these funny characters, like a:4, ;N;s:7:, etc., and have my response look like a clean JSON array of objects?
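Those "funny characters" are not broken JSON; they are PHP's serialize() format (a: array, s: string with a length prefix, i: integer, b: boolean, N; null), which matches the format/SERIALIZE segment of the URL. If the API offers a JSON format, requesting that is the clean fix; real code that must consume this format should use a library such as phpserialize. Purely to show what the tags mean, here is a minimal, hypothetical parser sketch for a small, well-formed subset (PHP string lengths are byte counts, which this sketch ignores):

```python
# Minimal sketch of a parser for a subset of PHP's serialize() format.
# Returns (value, next_position) so nested values can be walked recursively.
def parse(data, pos=0):
    tag = data[pos]
    if tag == 'N':                      # null: "N;"
        return None, pos + 2
    if tag == 'b':                      # bool: "b:0;" or "b:1;"
        return data[pos + 2] == '1', pos + 4
    if tag == 'i':                      # int: "i:123;"
        end = data.index(';', pos)
        return int(data[pos + 2:end]), end + 1
    if tag == 's':                      # string: 's:3:"foo";'
        colon = data.index(':', pos + 2)
        length = int(data[pos + 2:colon])
        start = colon + 2               # skip ':"'
        return data[start:start + length], start + length + 2
    if tag == 'a':                      # array: 'a:1:{<key><value>}'
        colon = data.index(':', pos + 2)
        count = int(data[pos + 2:colon])
        pos = colon + 2                 # skip ':{'
        result = {}
        for _ in range(count):
            key, pos = parse(data, pos)
            val, pos = parse(data, pos)
            result[key] = val
        return result, pos + 1          # skip '}'
    raise ValueError(f"unsupported tag {tag!r} at {pos}")

sample = 'a:2:{s:6:"status";b:1;s:6:"userId";s:3:"456";}'
obj, _ = parse(sample)
print(obj)  # {'status': True, 'userId': '456'}
```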
I am trying to withdraw ETH and keep getting this issue.
Here is my code:
CoinRPC[withdraw.currency].personal_unlockAccount(withdraw.channel.currency_obj.base_account, "", "0x30")
Here is the error log:
I, [2022-08-29T07:25:10.189241 #29938] INFO -- : ETH {"jsonrpc":"2.0","method":"personal_unlockAccount","params":[17958023718327819853794372076217808889400833786,"","0x30"],"id":"1"}
I, [2022-08-29T07:25:10.191302 #29938] INFO -- : {"jsonrpc"=>"2.0", "id"=>"1", "error"=>{"code"=>-32602, "message"=>"invalid argument 0: json: cannot unmarshal non-string into Go value of type common.Address"}}
I, [2022-08-29T07:25:10.191448 #29938] INFO -- : [error]: {"code"=>-32602, "message"=>"invalid argument 0: json: cannot unmarshal non-string into Go value of type common.Address"}
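The error message itself points at the mismatch: Geth expects params[0] of personal_unlockAccount to be a 0x-prefixed hex string (a common.Address), but the log shows the account was serialized as a decimal integer. A minimal sketch of the conversion, in Python for illustration (the integer is taken from the log above; the other params are left as in the original call):

```python
# A common.Address is 20 bytes, i.e. a 40-hex-digit, 0x-prefixed string.
addr_int = 17958023718327819853794372076217808889400833786  # from the log
addr_hex = "0x" + format(addr_int, "040x")  # zero-padded to 20 bytes

payload = {
    "jsonrpc": "2.0",
    "method": "personal_unlockAccount",
    "params": [addr_hex, "", "0x30"],  # address as a string this time
    "id": "1",
}
print(len(addr_hex))  # 42: "0x" plus 40 hex digits
```

In the Ruby code the equivalent fix is to make sure base_account is passed as that hex string rather than as a number.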
I am attempting to export my scss to JSON with the npm package sass-export.
I can successfully export variables and maps one level deep, but the problem is that I have a colour map (shown is just the first portion; there are many colours in it) that is two levels deep. The error thrown is about an unclosed parenthesis. Everything appears good to me; I have followed the docs to export maps, but I can't find much more info on the subject. Thanks for any insight you can provide.
Successfully exported
$z-index: (
"below-base": -1,
"base":0,
"xs": 100,
"sm": 200,
"md": 300,
"lg": 400,
"xl": 500
);
Console Output
mapValue:Array(7)
0: {name: "below-base", value: "-1", compiledValue: "-1"}
1: {name: "base", value: "0", compiledValue: "0"}
2: {name: "xs", value: "100", compiledValue: "100"}
3: {name: "sm", value: "200", compiledValue: "200"}
4: {name: "md", value: "300", compiledValue: "300"}
5: {name: "lg", value: "400", compiledValue: "400"}
6: {name: "xl", value: "500", compiledValue: "500"}
Unsuccessful
$colour-palette: (
gray: (
0: #0D0D0D,
1: #1A1A1A,
2: #262626,
3: #333333,
4: #5C5C5C,
5: #858585,
6: #ADADAD,
7: #D4D6DB
),
);
Terminal Error Output
{ Error: unclosed parenthesis
at Object.module.exports.renderSync (C:\Users\tbilcke\Documents\repos\node_modules\node-sass\lib\index.js:439:16)
status: 1,
file: 'stdin',
line: 193,
column: 34,
message: 'unclosed parenthesis',
formatted: 'Error: unclosed parenthesis\n on line 193 of stdin\n>> #sass-export-id.gray{content:"#{(0: #0D0D0D}";}\n ---------------------------------^\n' }
Console Output
colour-palette: Array(1)
0:
compiledValue:"(gray: (0: #0D0D0D, 1: #1A1A1A, 2: #262626, 3: #333333, 4: #5C5C5C, 5: #858585, 6: #ADADAD, 7: #D4D6DB))"
mapValue: Array(1)
0: {name: "gray", value: "(0: #0D0D0D", compiledValue: ""}
length:1
__proto__: Array(0)
name: "$colour-palette"
value :
"(gray: (0: #0D0D0D,1: #1A1A1A,2: #262626,3: $gray-base,4: #5C5C5C,5: #858585,6: #ADADAD,7: #D4D6DB),)"
Sass Export - Working
// imports assumed by this snippet
const path = require('path')
const { exporter } = require('sass-export')

let __root = path.join(__dirname, '../')
let __src = path.join(__dirname, '../src')
let exportPath = path.join(__src, 'scss/_test_cars.scss')
let importPath = path.join(__src, 'scss/')
let options = {
inputFiles: [exportPath],
includePaths: [importPath]
}
let asObject = exporter(options).getStructured()
process.env.styles = JSON.stringify(asObject)
I recently encountered the same issue and managed to resolve it with a relatively painless workaround.
If you take your child maps and separate them out like so:
$colour-palette-gray: (
0: #0D0D0D,
1: #1A1A1A,
2: #262626,
3: #333333,
4: #5C5C5C,
5: #858585,
6: #ADADAD,
7: #D4D6DB
);
and then, in a separate scss file (one you don't process with sass-export), load them into the parent map:
$colour-palette: (
'grey': $colour-palette-gray,
'blue': $colour-palette-blue,
'pink': $colour-palette-pink
);
You can also use the special comment syntax above each child map declaration to ensure that the nodes in the JSON are labelled correctly:
/**
* #sass-export-section="brand-colors"
*/
I am using the gulp implementation of the module in this way and it works as desired.
Trying to unlock an account from the console in geth; I'm getting the following responses:
personal.unlockAccount(eth.accounts[0], "Password", 1000)
Error: invalid argument 2: cannot unmarshal non-string as hex data
personal.unlockAccount(eth.accounts[0], "Password")
true
personal.unlockAccount(eth.accounts[0], "Password", "0x1000")
unlock duration must be a number
personal.unlockAccount(eth.accounts[0], "Password", 0x1000)
Error: invalid argument 2: cannot unmarshal non-string as hex data
personal.unlockAccount(eth.accounts[0], "Password", 5)
Error: invalid argument 2: cannot unmarshal non-string as hex data
Never mind; the Go-Ethereum developers broke their own code again.
As of Jan 10, 2017.