Python Amazon Product API raises UnknownLocale error for 'locale': 'in' - amazon-product-api

I am new to the Amazon Product API. I have installed the Python Amazon Product API and am trying to initialise the API with my access key, secret key, associate tag, and locale. When I use 'locale': 'in', it raises an UnknownLocale error; with other locales like "us" or "uk" it works fine. Can someone please help me out?
My code:
import amazonproduct

config = {
    'access_key': '***************',
    'secret_key': '**************************',
    'associate_tag': '************',
    'locale': 'in'
}
api = amazonproduct.API(cfg=config)
The error is:
UnknownLocale                             Traceback (most recent call last)
<ipython-input-56-6af6386efe00> in <module>()
5 'locale': 'in'
6 }
----> 7 api = amazonproduct.API(cfg=config)
/Users/niharsuryawanshi/anaconda/lib/python2.7/site-packages/amazonproduct/api.pyc in __init__(self, access_key_id, secret_access_key, locale, associate_tag, processor, cfg)
153 self.host = HOSTS[self.locale]
154 except KeyError:
--> 155 raise UnknownLocale(locale)
156
157 # GAE does not allow timeouts to be specified manually
UnknownLocale: None

This is still being worked on:
https://bitbucket.org/basti/python-amazon-product-api/issues/58/dose-this-lib-support-indian-amazon-also
While the fix is trivial, a patch won't be officially accepted until the library's tests have been adjusted as well.
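In the meantime, a minimal workaround sketch: the traceback shows the locale being resolved through a HOSTS mapping in amazonproduct.api, so you can register the Indian endpoint yourself before instantiating the API. That webservices.amazon.in is the right hostname is an assumption here, not something the library ships with:

import amazonproduct
from amazonproduct import api as amazon_api

# Assumption: the 'in' locale is simply missing from the library's host
# table; webservices.amazon.in is Amazon's Indian Product Advertising endpoint.
amazon_api.HOSTS['in'] = 'webservices.amazon.in'

config = {
    'access_key': '***************',
    'secret_key': '**************************',
    'associate_tag': '************',
    'locale': 'in',
}
api = amazonproduct.API(cfg=config)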

Why can't Cloud Function access metadata server in Vertex AI example?

I followed this Vertex AI tutorial. However, at the last step, when the Cloud Function calls the prediction endpoint, it gets the failure below.
This means it could not even reach the metadata server; i.e., it is not a permissions failure (though I did check that the myproject#appspot.gserviceaccount.com service account does have the Project Editor role, as specified). It is also an error strictly in Cloud Functions and IAM, not in Vertex AI or other ML systems.
What is going wrong here?
AuthMetadataPluginCallback "<google.auth.transport.grpc.AuthMetadataPlugin object at 0x3e0961671dd0>" raised exception!
Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/credentials.py", line 113, in refresh
    request, service_account=self._service_account_email, scopes=scopes
  File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/_metadata.py", line 263, in get_service_account_token
    token_json = get(request, path, params=params)
  File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/_metadata.py", line 187, in get
    response,
google.auth.exceptions.TransportError: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/myproject#appspot.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform from the Google Compute Enginemetadata service. Status: 500 Response:\nb'Could not fetch URI /computeMetadata/v1/instance/service-accounts/myproject#appspot.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform\\n'", <google.auth.transport.requests._Response object at 0x3e095a9f4c50>)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/grpc/_plugin_wrapping.py", line 78, in __call__
    context, _AuthMetadataPluginCallback(callback_state, callback))
  File "/env/local/lib/python3.7/site-packages/google/auth/transport/grpc.py", line 101, in __call__
    callback(self._get_authorization_headers(context), None)
  File "/env/local/lib/python3.7/site-packages/google/auth/transport/grpc.py", line 88, in _get_authorization_headers
    self._request, context.method_name, context.service_url, headers
  File "/env/local/lib/python3.7/site-packages/google/auth/credentials.py", line 133, in before_request
    self.refresh(request)
  File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/credentials.py", line 117, in refresh
    six.raise_from(new_exc, caught_exc)
  File "<string>", line 3, in raise_from
google.auth.exceptions.RefreshError: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/myproject#appspot.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform from the Google Compute Enginemetadata service. Status: 500 Response:\nb'Could not fetch URI /computeMetadata/v1/instance/service-accounts/myproject#appspot.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform\\n'", <google.auth.transport.requests._Response object at 0x3e095a9f4c50>)
Prediction request failed: <class 'google.api_core.exceptions.ServiceUnavailable'>: 503 Getting metadata from plugin failed with error: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/myproject#appspot.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform from the Google Compute Enginemetadata service. Status: 500 Response:\nb'Could not fetch URI /computeMetadata/v1/instance/service-accounts/myproject#appspot.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform\\n'", <google.auth.transport.requests._Response object at 0x3e095a9f4c50>)
Function execution took 673 ms, finished with status code: 500
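One way to double-check the diagnosis (the metadata server itself being unreachable, rather than the token being rejected) is to probe it directly from inside the function. A minimal sketch using the standard GCE metadata URL and Metadata-Flavor header; the function name and the use of the requests package are illustrative:

import requests

def probe_metadata(request):
    # Ask the metadata server for the default service account's token.
    # A 200 means metadata access works and the problem lies elsewhere;
    # a 500 like the one in the logs reproduces the failure in isolation.
    resp = requests.get(
        'http://metadata.google.internal/computeMetadata/v1/'
        'instance/service-accounts/default/token',
        headers={'Metadata-Flavor': 'Google'},
        timeout=5,
    )
    return 'status={} body={}'.format(resp.status_code, resp.text[:200])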

Database connection error while celery worker remains idle for 24 hours

I have a Django-based web application where I am using Kafka to process some orders. I use Celery workers to assign a Kafka consumer to each topic; each consumer is wrapped in a Celery task. However, after a day or so, when I submit a task I get the following error:
_mysql.connection.query(self, query)
_mysql_exceptions.OperationalError: (2006, 'MySQL server has gone away')
The above exception was the direct cause of the following exception:
Below is what my tasks.py file looks like:
import logging
from threading import Thread

from celery import shared_task
from confluent_kafka import Consumer, KafkaException
from django.core import serializers

logger = logging.getLogger(__name__)

@shared_task
def init_kafka_consumer(topic):
    try:
        if topic is None:
            raise Exception("Topic is none, unable to initialize kafka consumer")
        logger.info("Spawning new task to subscribe to topic")
        params = []
        params.append(topic)
        background_thread = Thread(target=subscribe_consumer, args=params)
        background_thread.start()
    except Exception:
        logger.exception("An exception occurred while reading message from kafka")

def subscribe_consumer(topic):
    try:
        if topic is None:
            raise Exception("Topic is none, unable to initialize kafka consumer")
        conf = {'bootstrap.servers': "localhost:9092", 'group.id': 'test',
                'session.timeout.ms': 6000, 'auto.offset.reset': 'earliest'}
        c = Consumer(conf)
        logger.info("Subscribing consumer to topic " + str(topic[0]))
        c.subscribe(topic)
        # Read messages from Kafka
        try:
            while True:
                msg = c.poll(timeout=1.0)
                if msg is None:
                    continue
                if msg.error():
                    raise KafkaException(msg.error())
                else:
                    try:
                        objs = serializers.deserialize("json", msg.value())
                        for obj in objs:
                            order = obj.object
                            order = BuyOrder.objects.get(id=order.id)  # Getting an error while accessing DB
                            if order.is_pushed_to_kafka:
                                return
                            order.is_pushed_to_kafka = True
                            order.save()
                            from web3 import HTTPProvider, Web3, exceptions
                            w3 = Web3(HTTPProvider(INFURA_MAIN_NET_ETH_URL))
                            processBuyerPayout(order, w3)
                    except Exception:
                        logger.exception("An exception occurred while de-serializing message")
        except Exception:
            logger.exception("An exception occurred while reading message from kafka")
        finally:
            c.close()
    except Exception:
        logger.exception("An exception occurred while reading message from kafka")
Is there any way to check whether the database connection is alive as soon as a task is received and, if not, re-establish it?
According to https://github.com/celery/django-celery-results/issues/58#issuecomment-418413369
and the comments above it, putting this code inside your task should help; it closes the old (timed-out) connection so Django opens a new one:
from django.db import close_old_connections
close_old_connections()
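A minimal sketch of where the call could go in the tasks.py above (placing it at the top of the task body is an assumption, but any point before the first ORM query works):

from django.db import close_old_connections
from celery import shared_task

@shared_task
def init_kafka_consumer(topic):
    # Discard connections the MySQL server has already closed on its side
    # (wait_timeout), so the ORM opens a fresh one on the next query.
    close_old_connections()
    ...  # rest of the task as shown above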

Create entity in a service using IDAS and ContextBroker

So I'm having some problems connecting virtual devices to the ContextBroker, and I think it's because of the Fiware-Service. I don't want to use the OpenIoT service (even though that didn't work for me either). I didn't manage to find any documentation about service creation, so maybe I'm creating it wrong.
I ran python CreateService bus_auto 4jggokgpepnvsb2uv4s40d59ov and I'm not sure it returned 201. I updated the config.ini file to work on my service, but when I send the observations, the value of the entity on the ContextBroker doesn't change.
I'm now running it locally. My config.ini file:
[user]
# Please, configure here your username at FIWARE Cloud and a valid Oauth2.0 TOKEN for your user (you can use get_token.py to obtain a valid TOKEN).
username=
token=NULL
[contextbroker]
host=127.0.0.1
port=1026
OAuth=no
# Here you need to specify the ContextBroker database you are querying.
# Leave it blank if you want the general database or the IDAS service if you are looking for IoT devices connected by you.
fiware_service=bus_auto
[idas]
host=130.206.80.40
adminport=5371
ul20port=5371
OAuth=no
# Here you need to configure the IDAS service your devices will be sending data to.
# By default the OpenIoT service is provided.
fiware-service=bus_auto
fiware-service-path=/
apikey=4jggokgpepnvsb2uv4s40d59ov
[local]
#Choose here your System type. Examples: RaspberryPI, MACOSX, Linux, ...
host_type=CentOS
# Here please add a unique identifier for you. Suggestion: the 3 lower hexa bytes of your Ethernet MAC. E.g. 79:ed:af
# Also you may use your e-mail address.
host_id=db:00:ff
I'm using the python script GetEntity.py:
python2.7 GetEntity.py bus_auto_2
I also tried using a Python script that I created:
import json
import urllib
import urllib2

BASE_URL = 'http://127.0.0.1:1026'
QUERY_URL = BASE_URL + '/v1/queryContext'

HEADERS = {
    'Content-Type': 'application/json',
    'Accept': 'application/json'
}

QUERY_EXAMPLE = {
    "entities": [
        {
            "type": "bus_auto_2",
            "isPattern": "false",
            "id": "Room1"
        }
    ]
}

def post(url, data):
    """"""
    req = urllib2.Request(url, data, HEADERS)
    f = urllib2.urlopen(req)
    result = json.loads(f.read())
    f.close()
    return result

if __name__ == "__main__":
    print post(UPDATE_URL, json.dumps(UPDATE_EXAMPLE))
    print post(QUERY_URL, json.dumps(QUERY_EXAMPLE))
I see the service is well created, and I can actually see one device defined within it.
I have even successfully sent an observation (t|23) from the bus_auto_2 device.
Later, I checked this entity in the ContextBroker: "thing:bus_auto_2", and I can see the latest observation I sent.
Did you update the FIWARE service in both the ContextBroker and IDAS sections of the config.ini file?
Cheers,
Looking at your script, it seems you are not including the Fiware-Service header in your queryContext request. Thus, the query is resolved in the "default service" and not in the bus_auto service.
Changing the HEADERS map in the following way would probably solve the issue:
HEADERS = {
    'Content-Type': 'application/json',
    'Accept': 'application/json',
    'Fiware-Service': 'bus_auto'
}
EDIT: In addition to the above change, note that BASE_URL is pointing to a local Orion instance, not the one connected with IDAS (which runs on the same machine as IDAS). Thus, I think you also need to modify BASE_URL in the following way:
BASE_URL = 'http://130.206.80.40:1026'
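Putting both fixes together, a minimal sketch of the query script (the entity id thing:bus_auto_2 and type thing follow the naming mentioned in the first answer and are assumptions; adjust them to whatever IDAS registered for your device):

import json
import urllib2

# Orion instance co-located with IDAS, plus the multi-tenant service header.
BASE_URL = 'http://130.206.80.40:1026'
QUERY_URL = BASE_URL + '/v1/queryContext'
HEADERS = {
    'Content-Type': 'application/json',
    'Accept': 'application/json',
    'Fiware-Service': 'bus_auto'
}
QUERY_EXAMPLE = {
    "entities": [
        {"type": "thing", "isPattern": "false", "id": "thing:bus_auto_2"}
    ]
}

req = urllib2.Request(QUERY_URL, json.dumps(QUERY_EXAMPLE), HEADERS)
print json.loads(urllib2.urlopen(req).read())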

GAE Python AssertionError: write() argument must be string

I am using Sublime Text 2 as my editor and creating a new Google App Engine project.
EDIT: I am running this code through localhost. I get this error when viewing the app on appspot:
Status: 500 Internal Server Error Content-Type: text/plain Content-Length: 59 A server error occurred. Please contact the administrator.
I have this code:
import webapp2 as webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class IndexPage(webapp.RequestHandler):
    def get(self):
        self.response.out.write('Hello, World!')

app = webapp.WSGIApplication([('/.*', IndexPage)], debug=True)

def main():
    run_wsgi_app(app)

if __name__ == '__main__':
    main()
It causes an AssertionError:
File "C:\Python27\lib\wsgiref\handlers.py", line 202, in write
assert type(data) is StringType,"write() argument must be string"
AssertionError: write() argument must be string
What does the error mean and what could be causing it?
GAE was not recognizing my app.yaml file properly. Once I fixed that, it worked. Thanks
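For anyone hitting the same thing: a minimal app.yaml sketch that matches the handler above (the application id and the main.py filename are illustrative assumptions). Note that with runtime: python27 and threadsafe: true, the script entry points at the WSGI app object directly, so the run_wsgi_app / main() boilerplate is not needed:

# Assumes the handler code above lives in main.py and exposes `app`.
application: your-app-id
version: 1
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: main.app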

OpenShift domain status failing

So I created an account at OpenShift, created an app, and installed the command-line tool. When I run the command rhc domain status, it fails:
Loaded suite /usr/bin/rhc-chk
Started
.E
===============================================================================
Error: test_connectivity(Test1_Connectivity)
ArgumentError: too few arguments
/Library/Ruby/Gems/1.8/gems/rhc-0.94.8/bin/rhc-chk:204:in `sprintf'
201: message = sprintf(get_message(:errors,name),*(args.shift || ''))
202: solution = get_message(:solutions,name)
203: if solution
=> 204: message << "\n" << sprintf(solution,*(args.shift || ''))
205: end
206: message
207: end
/Library/Ruby/Gems/1.8/gems/rhc-0.94.8/bin/rhc-chk:204:in `error_for'
/Library/Ruby/Gems/1.8/gems/rhc-0.94.8/bin/rhc-chk:270:in `test_connectivity'
===============================================================================
F
===============================================================================
Failure:
You need to be able to connect to the server in order to test authentication.
<false> is not true.
test_authentication(Test2_Authentication)
/Library/Ruby/Gems/1.8/gems/rhc-0.94.8/bin/rhc-chk:280:in `test_authentication'
277: # Checking Authentication
278: #
279: def test_authentication
=> 280: assert $connectivity, error_for(:cant_connect)
281:
282: data = {'rhlogin' => $rhlogin}
283: response = fetch_url_json("/broker/userinfo", data)
===============================================================================
..F
===============================================================================
Failure: You must have an account on the server in order to test: whether you have a valid key loaded in your agent.
test_03_remote_ssh_keys(Test3_SSH)
/Library/Ruby/Gems/1.8/gems/rhc-0.94.8/bin/rhc-chk:317:in `require_login'
314: end
315:
316: def require_login(test)
=> 317: flunk(error_for(:no_account,test)) if $user_info.nil?
318: end
319:
320: def require_remote_keys(test)
/Library/Ruby/Gems/1.8/gems/rhc-0.94.8/bin/rhc-chk:321:in `require_remote_keys'
/Library/Ruby/Gems/1.8/gems/rhc-0.94.8/bin/rhc-chk:376:in `test_03_remote_ssh_keys'
===============================================================================
F
===============================================================================
Failure: You must have an account on the server in order to test: connecting to your applications.
test_04_ssh_connect(Test3_SSH)
/Library/Ruby/Gems/1.8/gems/rhc-0.94.8/bin/rhc-chk:317:in `require_login'
314: end
315:
316: def require_login(test)
=> 317: flunk(error_for(:no_account,test)) if $user_info.nil?
318: end
319:
320: def require_remote_keys(test)
/Library/Ruby/Gems/1.8/gems/rhc-0.94.8/bin/rhc-chk:383:in `test_04_ssh_connect'
===============================================================================
Finished in 2.403595 seconds.
7 tests, 8 assertions, 3 failures, 1 errors, 0 pendings, 0 omissions, 0 notifications
42.8571% passed
I don't really understand why it's not able to connect; I was able to use rhc domain show with no problems.
Does anyone have any suggestions on how to fix this?
It's a bug that should get fixed in the upcoming release. Even though you see this error, it shouldn't affect any other behaviour.