I was trying to create a Python program to convert ETH to BTC. I was using the command:
client.transfer_money("ETH Account ID", to="BTC Account ID", amount="0.1", currency="ETH")
I had obtained the account IDs using the command:
client.get_accounts()
And copy-pasted the IDs into my transfer command. However, I get this error:
~/opt/anaconda3/envs/Coinbase/lib/python3.8/site-packages/coinbase/wallet/client.py in transfer_money(self, account_id, **params)
338 params['type'] = 'transfer'
339 response = self._post('v2', 'accounts', account_id, 'transactions', data=params)
--> 340 return self._make_api_object(response, Transaction)
341
342 def request_money(self, account_id, **params):
~/opt/anaconda3/envs/Coinbase/lib/python3.8/site-packages/coinbase/wallet/client.py in _make_api_object(self, response, model_type)
143 # All valid responses have a "data" key.
144 if data is None:
--> 145 raise build_api_error(response, blob)
146 # Warn the user about each warning that was returned.
147 warnings_data = blob.get('warnings', None)
APIError: APIError(id=):
Would someone be able to isolate what this error is?
The official library is deprecated.
There were three errors introduced recently (which were not handled, since the library is deprecated). I have handled those errors in a fork of the library and published it on PyPI. You can try using that; if the issue still persists, submit an issue on the GitHub repo.
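For reference, a minimal sketch of the transfer_money call syntax the official client expects, with placeholder credentials and account IDs (whether the API accepts a cross-currency transfer is a separate question from the syntax):
from coinbase.wallet.client import Client

# Placeholder credentials; substitute your own API key and secret.
client = Client('API_KEY', 'API_SECRET')

tx = client.transfer_money(
    'ETH_ACCOUNT_ID',     # source account ID, taken from client.get_accounts()
    to='BTC_ACCOUNT_ID',  # destination account ID (placeholder)
    amount='0.1',
    currency='ETH',
)
print(tx)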
I have a Django application. Sometimes in production I get an error when uploading data that one of the values is too long. It would be very helpful for debugging if I could see which value was the one that went over the limit. Can I configure this somehow? I'm using MySQL.
It would also be nice if I could enable/disable this on a per-model or column basis so that I don't leak user data to error logs.
When creating model instances from outside sources, one must take care to validate the input or have other guarantees that this data cannot violate constraints.
If you call save() directly without at least calling full_clean() on the model, you bypass Django's validators and will only be alerted to the problem by the database driver, at which point it's harder to obtain diagnostics:
import json

from django.core.exceptions import ValidationError
from django.db import models


class JsonImportManager(models.Manager):
    def import_json(self, json_string: str) -> int:  # "import" is a reserved word
        data_list = json.loads(json_string)  # list of objects => list of dicts
        failed = 0
        for data in data_list:
            obj = self.model(**data)
            try:
                obj.full_clean()
            except ValidationError as e:
                print(e.message_dict)  # or use a better formatting function
                failed += 1
            else:
                obj.save()
        return failed
This is of course very simple, but it's a good boilerplate to get started with.
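For example, the manager could be attached to a hypothetical model like this (the model and its field are made up for illustration):
class Widget(models.Model):
    name = models.CharField(max_length=50)

    objects = JsonImportManager()

# One valid object and one whose name exceeds max_length=50:
payload = '[{"name": "ok"}, {"name": "' + 'x' * 60 + '"}]'
failed = Widget.objects.import_json(payload)  # prints a message_dict, returns 1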
I'm using Postman and newman to perform automated tests, and I do a JUnit export in order to use the results in TFS.
However, when I open my .xml report, failures are indicated as follows:
<failure type="AssertionFailure">
  <![CDATA[Failed 1 times.]]>
</failure>
I would like to know if it is possible to customize the "Failed 1 times." information in order to pass more relevant data about the failure (i.e. the JSON body error and description).
Thank you
Alexandre
Well, I finally found out how to proceed (not a clean way, but sufficient for my purpose so far):
I modify the file C:\Users\<myself>\AppData\Roaming\npm\node_modules\newman\lib\reporters\junit\index.js
The request's data and response can be recovered from the 'executions' object:
stringExecutions = JSON.stringify(executions); // dumps the contents of the "executions" object
From this I can get general information by JSON-parsing this element and extracting what I want:
jsonExecutions = JSON.parse(stringExecutions)
jsonExecutions[0].response._details.code // gives me the http return code,
jsonExecutions[0].response._details.name // gives me the status,
jsonExecutions[0].response._details.detail //gives a bit more details
Error data (at the test case/testsuite level) can be recovered from the 'err.error' object:
stringData = JSON.stringify(err.error); jsonData = JSON.parse(stringData);
From that I extract the data I need, i.e.:
jsonData.name // the error type
jsonData.message // the error detail
jsonData.stacktrace // the error stack
By the way, in the original file the stack cannot be displayed, because there is no 'stack' property on 'err.error' (it is named 'stacktrace').
Finally, failure data (at the test step/testcase level) can be recovered from the 'failures' object:
stringFailure = JSON.stringify(failures); jsonFailure = JSON.parse(stringFailure);
From this I extract:
jsonFailure[0].name // the failure type
jsonFailure[0].stack // the failure stack
For my purpose, I add the response details from jsonExecutions to my testsuite error data, which makes the XML report much more verbose than previously.
If there is a cleaner/smarter way to perform this, do not hesitate to tell me; I'll be grateful.
Next step : do it clean by creating a custom reporter. :)
Alexandre
I'm writing a script that makes sure that all of our EC2 instances have DNS entries, and all of the DNS entries point to valid EC2 instances.
My approach is to try and get all of the resource records for our domain so that I can iterate through the list when checking for an instance's name.
However, getting all of the resource records doesn't seem to be a very straightforward thing to do! The documentation for GET ListResourceRecordSets seems to suggest it might do what I want and the boto equivalent seems to be get_all_rrsets ... but it doesn't seem to work as I would expect.
For example, if I go:
r53 = boto.connect_route53()
zones = r53.get_zones()
fooA = r53.get_all_rrsets(zones[0].id, name="a")
then I get 100 results. If I then go:
fooB = r53.get_all_rrsets(zones[0].id, name="b")
I get the same 100 results. Have I misunderstood and get_all_rrsets does not map onto ListResourceRecordSets?
Any suggestions on how I can get all of the records out of Route 53?
Update: cli53 (https://github.com/barnybug/cli53/blob/master/cli53/client.py) is able to do this through its feature to export a Route 53 zone in BIND format (cmd_export). However, my Python skills aren't strong enough to allow me to understand how that code works!
Thanks.
get_all_rrsets returns a ResourceRecordSets object, which derives from Python's list class. By default, 100 records are returned, so if you use the result as a list it will only have 100 records. What you instead want to do is something like this:
r53records = r53.get_all_rrsets(zones[0].id)
for record in r53records:
    print(record.name, record.type)  # do something with each record here
Alternatively, if you want all of the records in a list:
records = [r for r in r53.get_all_rrsets(zones[0].id)]
When iterating, either with a for loop or a list comprehension, boto will fetch additional records as needed, up to 100 records at a time.
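For example, a sketch reusing the names from the question that collects every record name in the first zone into a set for the EC2 cross-check:
import boto

r53 = boto.connect_route53()
zones = r53.get_zones()

# Iterating triggers the paged 100-at-a-time fetches behind the scenes.
record_names = {record.name for record in r53.get_all_rrsets(zones[0].id)}
print(len(record_names), 'distinct record names')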
This blog post from 2018 has a script which allows exporting in BIND format:
#!/usr/bin/env python3
# https://blog.jverkamp.com/2018/03/12/generating-zone-files-from-route53/
import boto3
import sys

route53 = boto3.client('route53')

paginate_hosted_zones = route53.get_paginator('list_hosted_zones')
paginate_resource_record_sets = route53.get_paginator('list_resource_record_sets')

domains = [domain.lower().rstrip('.') for domain in sys.argv[1:]]

for zone_page in paginate_hosted_zones.paginate():
    for zone in zone_page['HostedZones']:
        if domains and zone['Name'].lower().rstrip('.') not in domains:
            continue
        for record_page in paginate_resource_record_sets.paginate(HostedZoneId=zone['Id']):
            for record in record_page['ResourceRecordSets']:
                if record.get('ResourceRecords'):
                    for target in record['ResourceRecords']:
                        print(record['Name'], record['TTL'], 'IN', record['Type'], target['Value'], sep='\t')
                elif record.get('AliasTarget'):
                    print(record['Name'], 300, 'IN', record['Type'], record['AliasTarget']['DNSName'], '; ALIAS', sep='\t')
                else:
                    raise Exception('Unknown record type: {}'.format(record))
usage example:
./export-zone.py mydnszone.aws
mydnszone.aws. 300 IN A server.mydnszone.aws. ; ALIAS
mydnszone.aws. 86400 IN CAA 0 iodef "mailto:hostmaster@mydnszone.aws"
mydnszone.aws. 86400 IN CAA 128 issue "letsencrypt.org"
mydnszone.aws. 86400 IN MX 10 server.mydnszone.aws.
The output can be saved to a file and/or copied to the clipboard; the Import zone file page then allows you to paste the data.
At the time of this writing, the script was working fine using Python 3.9.
I am new to CouchDB. I need to get 60 or more JSON files per minute from a server.
I have to upload these JSON files to CouchDB individually, as soon as I receive them.
I installed CouchDB on my Linux machine.
I hope someone can help me with my requirement.
If possible, could someone help me with pseudocode?
My idea:
Write a Python script for uploading all JSON files to CouchDB.
Each JSON file must become its own document, and the data present in the
JSON must be inserted into CouchDB unchanged
(the specified format with values in a file).
Note:
These JSON files are transactional: one file is generated every second,
so I need to read each file and upload it in the same format into CouchDB;
on successful upload, archive the file to a different folder on the local system.
A Python program to parse the JSON and insert it into CouchDB:
import errno
import glob
import json
from pprint import pprint

import couchdb

couch = couchdb.Server('http://localhost:5984/')  # assuming localhost:5984
#couch.resource.credentials = (USERNAME, PASSWORD)
db = couch['mydb']

path = 'C:/Users/Desktop/CouchDB_Python/Json_files/*.json'

for name in glob.glob(path):  # 'file' is a builtin type; 'name' is less ambiguous
    try:
        with open(name) as f:  # no need to specify 'r': it is the default
            data = json.load(f)
        db.save(data)  # each JSON file becomes its own document
        pprint(data)
    except IOError as exc:
        if exc.errno != errno.EISDIR:  # do not fail if a directory is found, just ignore it
            raise  # propagate other kinds of IOError
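The question also mentions archiving each file after a successful upload; here is a minimal sketch of that step (the archive folder name is an assumption):
import os
import shutil

archive_dir = 'C:/Users/Desktop/CouchDB_Python/Archived_files'  # assumed folder
os.makedirs(archive_dir, exist_ok=True)

# ... inside the loop, right after db.save(data):
shutil.move(name, os.path.join(archive_dir, os.path.basename(name)))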
I would use the CouchDB bulk API, even though you have specified that you need to send the files to the DB one by one. For example, implementing a simple queue that gets flushed every, say, 5-10 seconds via a bulk docs call will greatly increase the performance of your application.
There is obviously a quirk in that: you need to know the IDs of the docs that you want to get from the DB. But for the PUTs it is perfect. (That is not entirely true either: you can get ranges of docs using a bulk operation if the IDs you use for your docs can be sorted nicely.)
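A minimal sketch of that queued bulk-upload idea, assuming a CouchDB at http://localhost:5984 and a database named mydb (both assumptions):
import threading

import requests

queue = []
lock = threading.Lock()

def enqueue(doc):
    with lock:
        queue.append(doc)

def flush():
    with lock:
        batch = queue[:]
        queue.clear()
    if batch:
        # _bulk_docs takes {"docs": [...]} and saves the whole batch in one request
        requests.post('http://localhost:5984/mydb/_bulk_docs',
                      json={'docs': batch}).raise_for_status()
    threading.Timer(5.0, flush).start()  # flush again in ~5 seconds

flush()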
From my experience working with CouchDB, I have a hunch that you are dealing with transactional documents in order to compile them into some sort of aggregate result and act on that data accordingly (maybe creating the next transactional doc in the series). For that you can rely on CouchDB by using 'reduce' functions on the views you create. It takes a little practice to get a reduce function working properly, and it is highly dependent on what you actually want to achieve and what data you are prepared to emit from the view, so I can't really provide you with more detail on that.
So in the end the app logic would go something like this:
get _design/someDesign/_view/yourReducedView
calculate new transaction
add transaction to queue
onTimeout
send all in transaction queue
If I got the first part wrong (why you are using transactional docs), all that would really change is the step in the app logic above where you get those transactional docs.
Also, before writing your own 'reduce' function, have a look at the built-in ones (they are a lot faster than anything outside of the DB engine can do).
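For illustration, a sketch of a design document that uses the built-in _sum reduce, uploaded with requests; the view name matches the pseudo-logic above, and the emitted fields are assumptions:
import requests

design_doc = {
    'views': {
        'yourReducedView': {
            # doc.account and doc.amount are made-up fields for illustration
            'map': "function (doc) { emit(doc.account, doc.amount); }",
            'reduce': '_sum',  # built-in reduce, runs inside the DB engine
        }
    },
}
requests.put('http://localhost:5984/mydb/_design/someDesign',
             json=design_doc).raise_for_status()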
http://wiki.apache.org/couchdb/HTTP_Bulk_Document_API
EDIT:
Since you are starting out, I strongly recommend having a look at the CouchDB Definitive Guide.
NOTE FOR LATER:
Here is one hidden stone (well, maybe not so much a hidden stone as a thing that is not obvious for a newcomer to look out for). When you write a reduce function, make sure it does not produce too much output for a query without boundaries. This will extremely slow down the entire view, even when you pass reduce=false when getting stuff from it.
So you need to get JSON documents from a server and send them to CouchDB as you receive them. A Python script would work fine. Here is some pseudo-code:
loop (until no more docs)
get new JSON doc from server
send JSON doc to CouchDB
end loop
In Python, you could use requests to send the documents to CouchDB, and probably to get the documents from the server as well (if it is using an HTTP API).
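A minimal sketch of that loop with requests; the source endpoint is hypothetical, and CouchDB is assumed to be local with a database named mydb:
import requests

SOURCE_URL = 'http://example.com/next-doc'  # hypothetical source endpoint
COUCH_URL = 'http://localhost:5984/mydb'    # assumed local CouchDB database

while True:
    resp = requests.get(SOURCE_URL)
    if resp.status_code == 404:  # assume the source signals "no more docs" with 404
        break
    # POSTing to the database URL stores the doc with a server-generated ID
    requests.post(COUCH_URL, json=resp.json()).raise_for_status()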
You might want to check out the pycouchdb module for Python 3. I've used it myself to upload lots of JSON objects into a CouchDB instance. My project does pretty much the same as you describe, so you can take a look at my project Pyro on GitHub for details.
My class looks like this:
import sys

import pycouchdb


class MyCouch:
    """ COMMUNICATES WITH COUCHDB SERVER """

    def __init__(self, server, port, user, password, database):
        # ESTABLISHING CONNECTION
        self.server = pycouchdb.Server("http://" + user + ":" + password +
                                       "@" + server + ":" + port + "/")
        self.db = self.server.database(database)

    def check_doc_rev(self, doc_id):
        # CHECKS REVISION OF SUPPLIED DOCUMENT
        try:
            rev = self.db.get(doc_id)
            return rev["_rev"]
        except Exception:
            return -1

    def update(self, all_computers):
        # UPDATES DATABASE WITH JSON STRING
        try:
            result = self.db.save_bulk(all_computers, transaction=False)
            sys.stdout.write(" Updating database")
            sys.stdout.flush()
            return result
        except Exception as ex:
            sys.stdout.write("Updating database")
            sys.stdout.write("Exception: ")
            print(ex)
            sys.stdout.flush()
            return None
Let me know in case of any questions - I will be more than glad to help if you find some of my code usable.
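Hypothetical usage of the class above, with made-up credentials and documents, assuming a CouchDB instance on localhost:
couch = MyCouch('localhost', '5984', 'admin', 'secret', 'mydb')  # made-up credentials
docs = [{'_id': 'host-1', 'status': 'ok'}, {'_id': 'host-2', 'status': 'ok'}]
result = couch.update(docs)  # bulk-saves both documents in one call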
Last week I published my first Windows 8.1 app on the Windows Store. So far everything works fine, but now two users have reported that the app crashes immediately when being launched.
Additionally, I discovered that there is a CrashDump listed in the Reports/Quality section of the Dashboard. I downloaded the CrashDump and tried to find the source of the problem using WinDbg by following this instruction: http://blogs.msdn.com/b/ntdebugging/archive/2014/01/13/debugging-a-windows-8-1-store-app-crash-dump.aspx
I was able to follow the instruction almost to the end, but then the sos library is not found:
0:006> .sympath SRV*C:\Symbols*http://msdl.microsoft.com/download/symbols
...
0:006> .exr -1
ExceptionAddress: 769eb1d7 (combase+0x000fb1d7)
ExceptionCode: c000027b
ExceptionFlags: 00000001
NumberParameters: 2
Parameter[0]: 03f3f32c
Parameter[1]: 00000001
0:006> !error c000027b
Error code: (NTSTATUS) 0xc000027b (3221226107) - Application-internal exception.
0:006> .ecxr
eax=03f3f030 ebx=00000000 ecx=00000000 edx=00000000
esi=03f3f360 edi=03f3f030 eip=769eb01f esp=03f3f314
ebp=03f3f3bc iopl=0 nv up ei pl nz ac po nc cs=001b
ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000212
combase+0xfb01f: 769eb01f 6a03 push 3
0:006> knL
*** Stack trace for last set context - .thread/.cxr resets it
...
0:006> dt 03f3f32c combase!_STOWED_EXCEPTION_INFORMATION_HEADER*
0x05f182e4
+0x000 Size : 0x28
+0x004 Signature : 0x53453032
0:006> .formats 0x53453032
Evaluate expression:
Hex: 53453032
Decimal: 1397043250
Octal: 12321230062
Binary: 01010011 01000101 00110000 00110010
Chars: SE02
Time: Wed Apr 09 13:34:10 2014
Float: low 8.46917e+011 high 0
Double: 6.90231e-315
0:006> dt -a1 03f3f32c combase!_STOWED_EXCEPTION_INFORMATION_V2*
[0] @ 03f3f32
---------------------------------------------
0x05f182e4
+0x000 Header : _STOWED_EXCEPTION_INFORMATION_HEADER
+0x008 ResultCode : 80131500
+0x00c ExceptionForm : 0y01
+0x00c ThreadId : 0y000000000000000000010001100101 (0x465)
+0x010 ExceptionAddress : 0x76943bff Void
+0x014 StackTraceWordSize : 4
+0x018 StackTraceWords : 0xa
+0x01c StackTrace : 0x04c6c010 Void
+0x010 ErrorText : 0x76943bff "趍ﯰ???" +0x
+0x020 NestedExceptionType : 0x314f454c
+0x024 NestedException : 0x05f1be44 Void
0:006> !error 80131500
Error code: (HRESULT) 0x80131500 (2148734208) - <Unable to get error code text>
0:006> dpS 0x04c6c010 La
7697a9f1 combase!RoOriginateLanguageException+0x3b [d:\blue_gdr\com\combase\winrt\error\error.cpp @ 1083]
63da3bc6 mscorlib_ni+0x9b3bc6
63e41976 mscorlib_ni+0xa51976
63e415c1 mscorlib_ni+0xa515c1
5b72f9df System_Runtime_WindowsRuntime_ni+0x1f9df
5b72f965 System_Runtime_WindowsRuntime_ni+0x1f965
6372de66 mscorlib_ni+0x33de66
5b72f934 System_Runtime_WindowsRuntime_ni+0x1f934
5b6bff16 Windows_UI_ni+0x9ff16
64492a36 clr!COMToCLRDispatchHelper+0x28
0:006> !sos.pe
The call to LoadLibrary(sos) failed, Win32 error 0n2
"The system cannot find the file specified."
Please check your debugger configuration and/or network access.
0:006> .loadby sos clr
The call to LoadLibrary(c:\symbols\clr.dll\52E0B78469b000\sos) failed, Win32 error 0n126
"The system cannot find the file specified."
Please check your debugger configuration and/or network access.
I do not have experience with this kind of debugging, and without the instruction I would not have known any of the commands that I had to use in WinDbg.
Does anyone have an idea how to go on from here?
I have uploaded the CrashDump to my OneDrive. It would be great if anyone with more experience could have a look at it:
http://1drv.ms/1gZzrRK
Is it somehow possible to get additional information from the users who reported the crash to support? Can they extract a crash dump / error report from their systems?
Is there any possibility of supplying a changed version to these users to check whether those changes influence the problem?
Thank you very much!
In order to get more insight into what your app is doing when it's running on users' devices, you could add logging of some kind. Local logging to a file is an option, but a poor one, since you have to let your users dig into hidden files to retrieve your application log. It might be acceptable though, depending on your scenario. In that case you might want to look at this logging sample for Windows Store apps that uses ETW to log.
An alternative would be to use some kind of logging framework. I haven't got any experience with any of those, so I couldn't tell you which one to use.
As for getting a changed version to your users: simply build an app package locally, put it on OneDrive and send them a link to it. They can sideload the app themselves (although it requires running a PowerShell script and logging in with a Live account). I've used this approach with several clients and it's workable.