pysnmp: problem with resolving OCTET-STRING table index - exception

I'm trying to snmpwalk a table which uses MAC addresses as row indexes with pysnmp. The nextCmd command fails with the following exception. It properly recognizes the OID suffix but then tries to resolve it against a third-party MIB, which results in an error. Can I do something differently? I have already tried adding the lexicographicMode=False and lookupMib=False options.
The MacAddress used in the MIB is defined in SNMPv2-TC (RFC 2579):
MacAddress ::= TEXTUAL-CONVENTION
    DISPLAY-HINT "1x:"
    STATUS       current
    DESCRIPTION
            "Represents an 802 MAC address represented in the
            `canonical' order defined by IEEE 802.1a, i.e., as if it
            were transmitted least significant bit first, even though
            802.5 (in contrast to other 802.x protocols) requires MAC
            addresses to be transmitted most significant bit first."
    SYNTAX       OCTET STRING (SIZE (6))
2023-02-13 10:20:28,103 pysnmp: resolving 1.3.6.1.4.1.x.6.255.255.255.255.255.255 as OID or label
2023-02-13 10:20:28,104 pysnmp: getNodeNameByOid: resolved :1.3.6.1.4.1.x.6.255.255.255.255.255.255 -> ('iso', 'org', 'dod', 'internet', 'private', 'enterprises').6.255.255.255.255.255.255
2023-02-13 10:20:28,104 pysnmp: resolved (<ObjectName value object, tagSet <TagSet object, tags 0:0:6>, payload [1.3.6.1.4.1.x....255.255.255.255]>,) into prefix <ObjectName value object, tagSet <TagSet object, tags 0:0:6>, payload [1.3.6.1.4.1.x]> and suffix <ObjectName value object, tagSet <TagSet object, tags 0:0:6>, payload [6.255.255.255.255.255.255]>
2023-02-13 10:20:28,104 pysnmp: getNodeNameByOid: resolved :1.3.6.1.4.1.x -> ('iso', 'org', 'dod', 'internet', 'private', 'enterprises').()
2023-02-13 10:20:28,104 pysnmp: resolved prefix <ObjectName value object, tagSet <TagSet object, tags 0:0:6>, payload [1.3.6.1.4.1.x]> into MIB node MibTableColumn((1, 3, 6, 1, 4, 1, x), <IpAddress schema object, tagSet <TagSet object, tags 64:0:0>, subtypeSpec <ConstraintsIntersection object, consts <ValueSizeConstraint object, consts 0, 65535>, <ValueSizeConstraint object, consts 4, 4>>, encoding iso-8859-1>)
2023-02-13 10:20:28,104 pysnmp: getNodeNameByOid: resolved :(1, 3, 6, 1, 4, 1, x) -> ('iso', 'org', 'dod', 'internet', 'private', 'enterprises').()
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/project/build/venv/lib/python3.8/site-packages/pysnmp/hlapi/asyncore/sync/cmdgen.py", line 357, in nextCmd
cmdgen.nextCmd(snmpEngine, authData, transportTarget, contextData,
File "/home/project/build/venv/lib/python3.8/site-packages/pysnmp/hlapi/asyncore/cmdgen.py", line 354, in nextCmd
vbProcessor.makeVarBinds(snmpEngine, varBinds),
File "/home/project/build/venv/lib/python3.8/site-packages/pysnmp/hlapi/varbinds.py", line 39, in makeVarBinds
__varBinds.append(varBind.resolveWithMib(mibViewController, ignoreErrors=False))
File "/home/project/build/venv/lib/python3.8/site-packages/pysnmp/smi/rfc1902.py", line 854, in resolveWithMib
self.__args[0].resolveWithMib(mibViewController)
File "/home/project/build/venv/lib/python3.8/site-packages/pysnmp/smi/rfc1902.py", line 451, in resolveWithMib
self.__indices = rowNode.getIndicesFromInstId(suffix)
File "/home/project/build/venv/lib/python3.8/site-packages/pysnmp/smi/mibs/SNMPv2-SMI.py", line 1257, in getIndicesFromInstId
raise error.SmiError(
pysnmp.smi.error.SmiError: Excessive instance identifier sub-OIDs left at MibTableRow((1, 3, 6, 1, 4, x), None): 255
UPDATE: I found out that there is no exception if I use the OID in numeric form. However, the ObjectIdentity object must not be fully resolved (isFullyResolved() returns zero).
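Judging from the log, the third-party MIB resolves the column to an IpAddress with a 4-octet size constraint, which would not match a 6-octet MacAddress index; such a mismatch would be consistent with the SmiError about leftover instance sub-OIDs. For reference, a minimal sketch of the workaround from the update, passing the table column as a plain numeric OID so pysnmp never decodes the index (the community string and target are placeholders, and 'x' stands for the redacted enterprise sub-OID):
from pysnmp.hlapi import (CommunityData, ContextData, ObjectIdentity,
                          ObjectType, SnmpEngine, UdpTransportTarget, nextCmd)

# Placeholders: replace community, target and the redacted 'x' with real values.
for (errorIndication, errorStatus,
     errorIndex, varBinds) in nextCmd(SnmpEngine(),
                                      CommunityData('public'),
                                      UdpTransportTarget(('192.0.2.1', 161)),
                                      ContextData(),
                                      # numeric form; must not be fully resolved
                                      ObjectType(ObjectIdentity('1.3.6.1.4.1.x')),
                                      lexicographicMode=False):
    if errorIndication or errorStatus:
        print(errorIndication or errorStatus.prettyPrint())
        break
    for varBind in varBinds:
        print(varBind)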

Related

parse json data to pandas column with missing quotes

I have a pandas dataframe "df". The dataframe has a field in it, "json_field", that contains JSON data; I think it's actually YAML format. I'm trying to parse the values into their own columns. I have example data below. When I run the code below, I'm getting an error when it hits the 'name' field; the code and the error are below. It seems like maybe it's having trouble because the value associated with the name field isn't quoted, or has spaces. Does anyone see what the issue might be, or suggest how to fix it?
example:
print(df['json_field'][0])
output:
- start:"test"
name: Price Unit
field: priceunit
type: number
operator: between
value:
- '40'
- '60'
code:
import yaml
pd.io.json.json_normalize(yaml.load(df['json_field'][0]), 'name','field','type','operator','value').head()
error:
An error was encountered:
mapping values are not allowed here
in "<unicode string>", line 3, column 7:
name: Price Unit
^
Traceback (most recent call last):
File "/usr/local/lib64/python3.6/site-packages/yaml/__init__.py", line 94, in safe_load
return load(stream, SafeLoader)
File "/usr/local/lib64/python3.6/site-packages/yaml/__init__.py", line 72, in load
return loader.get_single_data()
File "/usr/local/lib64/python3.6/site-packages/yaml/constructor.py", line 35, in get_single_data
node = self.get_single_node()
File "/usr/local/lib64/python3.6/site-packages/yaml/composer.py", line 36, in get_single_node
document = self.compose_document()
File "/usr/local/lib64/python3.6/site-packages/yaml/composer.py", line 55, in compose_document
node = self.compose_node(None, None)
File "/usr/local/lib64/python3.6/site-packages/yaml/composer.py", line 82, in compose_node
node = self.compose_sequence_node(anchor)
File "/usr/local/lib64/python3.6/site-packages/yaml/composer.py", line 110, in compose_sequence_node
while not self.check_event(SequenceEndEvent):
File "/usr/local/lib64/python3.6/site-packages/yaml/parser.py", line 98, in check_event
self.current_event = self.state()
File "/usr/local/lib64/python3.6/site-packages/yaml/parser.py", line 382, in parse_block_sequence_entry
if self.check_token(BlockEntryToken):
File "/usr/local/lib64/python3.6/site-packages/yaml/scanner.py", line 116, in check_token
self.fetch_more_tokens()
File "/usr/local/lib64/python3.6/site-packages/yaml/scanner.py", line 220, in fetch_more_tokens
return self.fetch_value()
File "/usr/local/lib64/python3.6/site-packages/yaml/scanner.py", line 580, in fetch_value
self.get_mark())
yaml.scanner.ScannerError: mapping values are not allowed here
in "<unicode string>", line 3, column 7:
name: Price Unit
desired output:
name        field      type    operator  value
Price Unit  priceunit  number  between   - '40' - '60'
Update:
I tried the suggestion below
import yaml
df['json_field']=df['json_field'].str.replace('"test"', "")
pd.io.json.json_normalize(yaml.safe_load(df['json_field'][0].lstrip()), 'name','field','type','operator','value').head()
and got the output below:
  operator0  typefield
0         P  priceunit
1         r  priceunit
2         i  priceunit
3         c  priceunit
4         e  priceunit
Seems to me that it's having an issue because your yaml is improperly formatted. That "test" should not be there at all if you want a dictionary named "start" with 5 mappings inside of it.
import yaml

a = """
- start:
    name: Price Unit
    field: priceunit
    type: number
    operator: between
    value:
    - '40'
    - '60'
""".lstrip()

# Single dictionary named start, with 5 entries
yaml.safe_load(a)[0]
{'start':
    {'name': 'Price Unit',
     'field': 'priceunit',
     'type': 'number',
     'operator': 'between',
     'value': ['40', '60']}
}
To do this with your data, try:
data = yaml.safe_load(df['json_field'][0].replace('"test"', ""))
df = pd.json_normalize(data[0]["start"])
print(df)
         name      field    type operator     value
0  Price Unit  priceunit  number  between  [40, 60]
It seems that your real problem is one line earlier. Note that your YAML fragment starts with start:"test". One of the key principles of YAML is that a key is separated from its value by a colon and a space, whereas your sample does not contain this (mandatory) space.
Maybe you should manually edit your input, i.e. add the missing space.
Another solution is to write a "specialized pre-parser" which adds such missing spaces and feeds its result into the YAML parser. This can be an option if the number of such cases is big; a sketch follows below.
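A minimal sketch of such a pre-parser, assuming the only defect is a missing space after a key's colon. The regex is naive: it would also split legitimate colons (e.g. in URLs or times), so tighten it for your real data.
import re
import yaml

def fix_missing_spaces(text):
    # Insert a space after any colon that is immediately followed by a
    # non-space character, e.g. start:"test" -> start: "test".
    return re.sub(r':(?=\S)', ': ', text)

raw = '- start:"test"\n  name: Price Unit\n'
print(yaml.safe_load(fix_missing_spaces(raw)))
# [{'start': 'test', 'name': 'Price Unit'}]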

How to debug my HTTPie or Postman so Akamai FastPurge API can make successful calls?

I wanted to use HTTPie or Postman to piece together a request for the Akamai FastPurge API, so I could see the structure of the HTTP request as a whole, with the aim of building a Java module that constructs the same request.
I've tried several tutorials provided by Akamai* and nothing seemed to work.
The credentials for the FastPurge API are in their own edgegrid file, and I edited the config.json of HTTPie as suggested by Akamai.
It doesn't matter whether CP codes or URLs are used; the result is the same.
The API does have read-write authorisation, and the credentials are correctly read from the file.
I also tried using Postman, but Postman doesn't integrate the edgegrid authorization, so I would have to do it by hand without knowing what it looks like. There is a tutorial on how to set up an Akamai API in Postman, but it lacks the critical part: the pre-request script.**
*: https://developer.akamai.com/akamai-101-basics-purging , https://community.akamai.com/customers/s/article/Exploring-Akamai-OPEN-APIs-from-the-command-line-using-HTTPie-and-jq?language=en_US , https://developer.akamai.com/api/core_features/fast_purge/v3.html , https://api.ccu.akamai.com/ccu/v2/docs/index.html
**: https://community.akamai.com/customers/s/article/Using-Postman-to-access-Akamai-OPEN-APIs?language=en_US
The errors I receive in HTTPie are either this:
C:\python>http --auth-type=edgegrid -a default: :/ccu/v3/invalidate/url/production objects:='["https://www.example.com","http://www.example.com"]' --debug
HTTPie 1.0.3-dev
Requests 2.21.0
Pygments 2.3.1
Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 22:20:52) [MSC v.1916 32 bit (Intel)]
c:\python\python.exe
Windows 10
<Environment {
"colors": 256,
"config": {
"__meta__": {
"about": "HTTPie configuration file",
"help": "https://httpie.org/doc#config",
"httpie": "1.0.3-dev"
},
"default_options": "['--verbose', '--traceback', '--auth-type=edgegrid', '--print=Hhb', '--timeout=300', '--style=autumn', '-adefault:']"
},
"config_dir": "%home",
"is_windows": true,
"stderr": "<colorama.ansitowin32.StreamWrapper object at 0x066D1A70>",
"stderr_isatty": true,
"stdin": "<_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>",
"stdin_encoding": "utf-8",
"stdin_isatty": true,
"stdout": "<colorama.ansitowin32.StreamWrapper object at 0x066D19D0>",
"stdout_encoding": "utf-8",
"stdout_isatty": true
}>
Traceback (most recent call last):
File "c:\python\lib\site-packages\httpie\input.py", line 737, in parse_items
value = load_json_preserve_order(value)
File "c:\python\lib\site-packages\httpie\utils.py", line 7, in load_json_preserve_order
return json.loads(s, object_pairs_hook=OrderedDict)
File "c:\python\lib\json\__init__.py", line 361, in loads
return cls(**kw).decode(s)
File "c:\python\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "c:\python\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python\Scripts\http-script.py", line 11, in <module>
load_entry_point('httpie==1.0.3.dev0', 'console_scripts', 'http')()
File "c:\python\lib\site-packages\httpie\__main__.py", line 11, in main
sys.exit(main())
File "c:\python\lib\site-packages\httpie\core.py", line 210, in main
parsed_args = parser.parse_args(args=args, env=env)
File "c:\python\lib\site-packages\httpie\input.py", line 152, in parse_args
self._parse_items()
File "c:\python\lib\site-packages\httpie\input.py", line 358, in _parse_items
data_class=ParamsDict if self.args.form else OrderedDict
File "c:\python\lib\site-packages\httpie\input.py", line 739, in parse_items
raise ParseError('"%s": %s' % (item.orig, e))
httpie.input.ParseError: "objects:='[https://www.example.com,http://www.example.com]'": Expecting value: line 1 column 1 (char 0)
or that:
C:\python>http POST --auth-type=edgegrid -a default: :/ccu/v3/invalidate/cpcode/production < file.json
usage: http [--json] [--form] [--pretty {all,colors,format,none}]
[--style STYLE] [--print WHAT] [--headers] [--body] [--verbose]
[--all] [--history-print WHAT] [--stream] [--output FILE]
[--download] [--continue]
[--session SESSION_NAME_OR_PATH | --session-read-only SESSION_NAME_OR_PATH]
[--auth USER[:PASS]] [--auth-type {basic,digest,edgegrid}]
[--proxy PROTOCOL:PROXY_URL] [--follow]
[--max-redirects MAX_REDIRECTS] [--timeout SECONDS]
[--check-status] [--verify VERIFY]
[--ssl {ssl2.3,tls1,tls1.1,tls1.2}] [--cert CERT]
[--cert-key CERT_KEY] [--ignore-stdin] [--help] [--version]
[--traceback] [--default-scheme DEFAULT_SCHEME] [--debug]
[METHOD] URL [REQUEST_ITEM [REQUEST_ITEM ...]]
http: error: unrecognized arguments: :/ccu/v3/invalidate/cpcode/production
A typical successful response looks like this:
{
    "httpStatus" : 201,
    "detail" : "Request accepted.",
    "estimatedSeconds" : 420,
    "purgeId" : "95b5a092-043f-4af0-843f-aaf0043faaf0",
    "progressUri" : "/ccu/v2/purges/95b5a092-043f-4af0-843f-aaf0043faaf0",
    "pingAfterSeconds" : 420,
    "supportId" : "17PY1321286429616716-211907680"
}
I would really appreciate it if somebody could either help me correct my setup in HTTPie or Postman, or give me the complete structure of an Akamai FastPurge HTTP request so I can skip the hassle with HTTPie or Postman.
Thank you very much, your help is much appreciated!
Postman now has an Akamai edgeGrid authorization type. It takes the contents of your .edgerc file.
I set it on the collection and let all of the calls inherit the authorization.
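Postman aside, if the goal is just to see a working FastPurge request end to end, here is a minimal Python sketch, assuming the edgegrid-python package and a ~/.edgerc file with a [default] section (the host comes from the .edgerc file; the object URLs are the example URLs from the question):
import requests
from urllib.parse import urljoin

from akamai.edgegrid import EdgeGridAuth, EdgeRc

# Assumes an .edgerc file with a [default] section; adjust path/section as needed.
edgerc = EdgeRc('~/.edgerc')
section = 'default'
baseurl = 'https://%s' % edgerc.get(section, 'host')

session = requests.Session()
session.auth = EdgeGridAuth.from_edgerc(edgerc, section)

# FastPurge v3: invalidate by URL on the production network.
response = session.post(
    urljoin(baseurl, '/ccu/v3/invalidate/url/production'),
    json={'objects': ['https://www.example.com', 'http://www.example.com']})
print(response.status_code)
print(response.json())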

Jmeter : Read data (numbers) from CSV file in Jmeter

I'm trying to design a load test for some APIs. I need to read the data from a CSV file. Here is my CSV file:
amount,description
100,"100 Added"
-150,"-150 removed"
20, "20 added"
The amount is a number.
My CSV data config looks like this:
[image: CSV Data Set Config]
When I put the body data like this :
[image: request body data]
I have this error :
{"timestamp":1529427563867,"status":400,"error":"Bad Request","message":"JSON parse error: Unrecognized token 'amount': was expecting ('true', 'false' or 'null'); nested exception is com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'amount': was expecting ('true', 'false' or 'null')\n
and when I change it to this:
[image: modified body data]
I have this error:
"timestamp":1529427739395,"status":400,"error":"Bad Request","message":"JSON parse error: Cannot deserialize value of type `java.math.BigDecimal` from String \"amount\": not a valid representation; nested exception is com.fasterxml.jackson.databind.exc.InvalidFormatException: Cannot deserialize value of type `java.math.BigDecimal` from String \"amount\": not a valid representation\n at [Source: (PushbackInputStream); line: 2, column: 13]
How should I pass the parameters to be able to read from the CSV file?
P.S. It works fine when I do all of this without the CSV.
In your config, you have "Ignore first line" set to False.
Options:
1: Set "Ignore first line" to True, for source files with header data included.
2: Leave the config setting as is, but remove the header row from the CSV source file.
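For completeness, once the header row is skipped, a request body that reads those CSV variables could look something like this (a sketch: the screenshots are not visible here, and the variable names amount and description are assumed from the CSV header). Note the number stays unquoted so Jackson can bind it to BigDecimal:
{
    "amount": ${amount},
    "description": "${description}"
}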

Find Duplicate JSON Keys in Sublime Text 3

I have a JSON file that, for now, is validated by hand prior to being placed into production. Ideally, this is an automated process, but for now this is the constraint.
One thing I found helpful in Eclipse were the JSON tools that would highlight duplicate keys in JSON files. Is there similar functionality in Sublime Text or through a plugin?
The following JSON, for example, could produce a warning about duplicate keys.
{
    "a": 1,
    "b": 2,
    "c": 3,
    "a": 4,
    "d": 5
}
Thanks!
There are plenty of JSON validators available online; I just tried one and it picked out the duplicate key right away. The problem with using Sublime-based JSON linters like JSONLint is that they use Python's json module, which does not error on duplicate keys:
import json

json_str = """
{
    "a": 1,
    "b": 2,
    "c": 3,
    "a": 4,
    "d": 5
}"""

py_data = json.loads(json_str)  # changes JSON into a Python dict, which is unordered
print(py_data)
yields
{'c': 3, 'b': 2, 'a': 4, 'd': 5}
showing that the first a key is overwritten by the second. So, you'll need another, non-Python-based, tool.
Even the Python documentation says that:
The RFC specifies that the names within a JSON object should be unique, but does not mandate how repeated names in JSON objects should be handled. By default, this module does not raise an exception; instead, it ignores all but the last name-value pair for a given name:
>>> weird_json = '{"x": 1, "x": 2, "x": 3}'
>>> json.loads(weird_json)
{'x': 3}
The object_pairs_hook parameter can be used to alter this behavior.
So, as pointed out in the docs, you can use object_pairs_hook to detect duplicates yourself:
class JsonUniqueKeysChecker:
    def check(self, pairs):
        # object_pairs_hook is called once per JSON object with that
        # object's (key, value) pairs, so duplicates are tracked per
        # object rather than across the whole document.
        seen = set()
        for key, _value in pairs:
            if key in seen:
                raise ValueError("Non-unique JSON key: '%s'" % key)
            seen.add(key)
        return dict(pairs)
And then:
c = JsonUniqueKeysChecker()
print(json.loads(json_str, object_pairs_hook=c.check))  # raises ValueError
JSON is a very simple format and not very strict, so things like that can be painful. Detecting duplicated keys is easy, but I bet it's quite a lot of work to forge a plugin out of that.

Creating JSON data from string and using json.dumps

I am trying to create JSON data to pass to InfluxDB. I create it using strings, but I get errors. What am I doing wrong? I am using json.dumps, as has been suggested in various posts.
Here is basic Python code:
json_body = "[{'points':["
json_body += "['appx', 1, 10, 0]"
json_body += "], 'name': 'WS1', 'columns': ['RName', 'RIn', 'SIn', 'OIn']}]"
print("Write points: {0}".format(json_body))
client.write_points(json.dumps(json_body))
The output I get is
Write points: [{'points':[['appx', 1, 10, 0]], 'name': 'WS1', 'columns': ['RName', 'RIn', 'SIn', 'OIn']}]
Traceback (most recent call last):
line 127, in main
client.write_points(json.dumps(json_body))
File "/usr/local/lib/python2.7/dist-packages/influxdb/client.py", line 173, in write_points
return self.write_points_with_precision(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/influxdb/client.py", line 197, in write_points_with_precision
status_code=200
File "/usr/local/lib/python2.7/dist-packages/influxdb/client.py", line 127, in request
raise error
influxdb.client.InfluxDBClientError
I have tried with double quotes too but get the same error. This is stub code (kept minimal for the question); I realize that in the example the points list contains just one list object, but in reality it contains multiple. I am generating the JSON by reading through the outputs of various API calls.
json_body = '[{\"points\":['
json_body += '[\"appx\", 1, 10, 0]'
json_body += '], \"name\": \"WS1\", \"columns\": [\"RName\", \"RIn\", \"SIn\", \"OIn\"]}]'
print("Write points: {0}".format(json_body))
client.write_points(json.dumps(json_body))
I understand if I used the below things would work:
json_body = [{ "points": [["appx", 1, 10, 0]], "name": "WS1", "columns": ["Rname", "RIn", "SIn", "OIn"]}]
You don't need to create JSON manually. Just pass an appropriate Python structure into the write_points function. Try something like this:
data = [{'points': [['appx', 1, 10, 0]],
         'name': 'WS1',
         'columns': ['RName', 'RIn', 'SIn', 'OIn']}]
client.write_points(data)
Please visit JSON.org for proper JSON structure. I can see several errors with your self-generated JSON:
The outer-most item can be an unordered object, enclosed by curly braces {}, or an ordered array, enclosed by brackets []. Don't use both. Since your data is structured like a dict, the curly braces are appropriate.
All strings need to be enclosed in double quotes, not single. "This is valid JSON". 'This is not valid'.
Your 'points' value array is surrounded by double brackets, which is unnecessary. Only use a single set.
Please check out the documentation of the json module for details on how to use it. Basically, you can feed json.dumps() your Python data structure, and it will output it as valid JSON.
In [1]: my_data = {'points': ["appx", 1, 10, 0], 'name': "WS1", 'columns': ["RName", "RIn", "SIn", "OIn"]}
In [2]: my_data
Out[2]: {'points': ['appx', 1, 10, 0], 'name': 'WS1', 'columns': ['RName', 'RIn', 'SIn', 'OIn']}
In [3]: import json
In [4]: json.dumps(my_data)
Out[4]: '{"points": ["appx", 1, 10, 0], "name": "WS1", "columns": ["RName", "RIn", "SIn", "OIn"]}'
You'll notice the value of using a Python data structure first: because it's Python, you don't need to worry about single vs. double quotes, json.dumps() will automatically convert them. However, building a string with embedded single quotes leads to this:
In [5]: op_json = "[{'points':[['appx', 1, 10, 0]], 'name': 'WS1', 'columns': ['RName', 'RIn', 'SIn', 'OIn']}]"
In [6]: json.dumps(op_json)
Out[6]: '"[{\'points\':[[\'appx\', 1, 10, 0]], \'name\': \'WS1\', \'columns\': [\'RName\', \'RIn\', \'SIn\', \'OIn\']}]"'
since you fed the string to json.dumps(), not the data structure.
So next time, don't attempt to build JSON yourself; rely on the dedicated module to do it.