boto.sqs: read and write SQS messages

I would like to read messages from one queue and write them to another queue. However, the message class is a custom format and I am not sure how to write a message class and import it.
That is, the structure is as follows:
import boto.sqs

# read messages from one queue
conn = boto.sqs.connect_to_region("regionName")
q = conn.get_queue('queueName')
res = q.get_messages()
m = res[0].get_body()  # This is the message I read

# Now, I want to write the message into another queue
r = conn.get_queue('DifferentqueueName')
r.write(m)
Here, the code breaks and I get the following error message:
224 new_msg = self.connection.send_message(self,
--> 225 message.get_body_encoded(), delay_seconds=delay_seconds,
226 message_attributes=message.message_attributes)
227 message.id = new_msg.id
AttributeError: 'unicode' object has no attribute 'get_body_encoded'
How can I define a custom message class and use it to write to another queue? Or, if I could just inherit the class used when reading messages and reuse it for writing, that would be even easier. Can I do either of these?
Thank you.

The reason you are getting the error is that you are attempting to write a raw string to the queue rather than a Message object. Try this instead:
import boto.sqs

# read messages from one queue
conn = boto.sqs.connect_to_region("regionName")
q1 = conn.get_queue('queueName')
q2 = conn.get_queue('DifferentqueueName')
messages = q1.get_messages()
for message in messages:
    msg_body = message.get_body()  # This is the message I read
    # Now, I want to write the message into another queue
    new_msg = q2.new_message(msg_body)
    q2.write(new_msg)
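If you really do need a custom format, boto also lets you tell a queue which message class to use. A minimal sketch, using RawMessage as a stand-in for a custom class (set_message_class is part of boto's Queue API):

from boto.sqs.message import RawMessage

# Sketch: make both queues read and write RawMessage instances, so
# get_messages() returns RawMessage objects and new_message() builds one.
q1.set_message_class(RawMessage)
q2.set_message_class(RawMessage)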

garnaat's code was problematic for me (I'm reading the queue with the Java SDK; maybe that is the reason), so I used a slight variation on it:
import boto.sqs
from boto.sqs.message import RawMessage

conn = boto.sqs.connect_to_region("availability_zone")
q1 = conn.get_queue('original_queue')
q2 = conn.get_queue('new_queue')
for i in range(1, 400):
    messages = q1.get_messages(10)
    for message in messages:
        msg_body = message.get_body()
        new_msg = RawMessage()
        new_msg.set_body(msg_body)
        q2.write(new_msg)
        q1.delete_message(message)
    print("{}/400".format(i))

Related

trying to combine multiple tables for a report in Odoo, but it gives me a blank error

I am new around here, and I need some help.
I am trying to make a report in Odoo with Base report CSV. In the table I have two relational fields, and I don't know how to combine those tables. I tried combining the function from the module Base report CSV as shown below, but it gives an error, a blank error, which only confuses me. Does anyone have any idea how I could do this?
from odoo import models
import csv

class csvreport(models.AbstractModel):
    _name = 'report.hr_timesheet.report'
    _inherit = 'report.report_csv.abstract'

    def generate_csv_report(self, writer, data, partners):
        writer.writeheader()
        for obj in partners:
            employee = self.env.cr.execute("""select hr_employee.name where hr_employee.id = %s;""", (obj.employee_id))
            task = self.env.cr.execute("""select project_task.name where project_task.id = %s;""", (obj.project_id))
            writer.writerow({
                'name': obj.name,
                'date': obj.date,
                'unit_amount': obj.unit_amount,
                'responsible': employee.fetchall(),
                'task': task.fetchall(),
            })

    def csv_report_options(self):
        res = super().csv_report_options()
        res['fieldnames'].append('name')
        res['fieldnames'].append('date')
        res['fieldnames'].append('unit_amount')
        res['fieldnames'].append('responsible')
        res['fieldnames'].append('task')
        res['delimiter'] = ';'
        res['quoting'] = csv.QUOTE_ALL
        return res
The error:
Since I can't post a picture, I'll just post a gdrive link.
You should see the following error message in the error log:
ValueError: SQL query parameters should be a tuple, list or dict; ...
To fix that error, pass the query arguments in a tuple:
employee = self.env.cr.execute(query_str, (obj.employee_id.id, ))
You can't pass obj.employee_id (a record), because psycopg2 can't adapt type hr.employee.
To get the employee name just use dot notation:
employee_name = obj.employee_id.name
The FROM clause is also missing from both queries. And you can't call fetchall on employee or task, because self.env.cr.execute will return None; to fetch the result, use the cursor's fetch*() methods:
self.env.cr.fetchall()
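Putting those fixes together, a minimal sketch of the corrected method, assuming obj.employee_id and obj.project_id are the relational fields mentioned in the question (dot notation follows the relation, so the raw SQL is not needed at all):

def generate_csv_report(self, writer, data, partners):
    writer.writeheader()
    for obj in partners:
        writer.writerow({
            'name': obj.name,
            'date': obj.date,
            'unit_amount': obj.unit_amount,
            # dot notation resolves the related records directly
            'responsible': obj.employee_id.name,
            'task': obj.project_id.name,
        })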

Using past in gpt2

I am trying to run a script example from the huggingface documentation:
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained('gpt2')
generated = tokenizer.encode("The Manhattan bridge")
context = torch.tensor([generated])
past = None
for i in range(100):
    print(i)
    output, past = model(context, past=past)
    token = torch.argmax(output[..., -1, :])
    generated += [token.tolist()]
    context = token.unsqueeze(0)
sequence = tokenizer.decode(generated)
print(sequence)
But I have an error:
TypeError: forward() got an unexpected keyword argument 'past'
What should I change to use 'past'?
Try updating past to past_key_values. I believe the documentation has been changed.
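A minimal sketch of the updated loop body, assuming a recent transformers release where forward() takes past_key_values and returns an output object rather than a tuple:

# past is threaded through as past_key_values; logits live on the output object
output = model(context, past_key_values=past)
past = output.past_key_values
token = torch.argmax(output.logits[..., -1, :])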

Extracting Text From Asynchronous Requests Using grequests

I am trying to extract the text part from the requests that I made through the grequests library, but I am unable to figure out how to do so.
If I were using the requests library, I would do:
r = requests.get('www.google.com')
htmls.append(r.text)
Now, if I am using grequests, I can only get a list of response objects and not the text.
rs = (grequests.get(u) for u in urls)
result = grequests.map(rs)
What I've tried
result = grequests.map(rs.text)
Using the above piece of code, I get an error: AttributeError: 'generator' object has no attribute 'text'
My desired output is a list of HTML text where the response code is 200; otherwise the value should be None.
How can I achieve that?
Desired Output:
response_code = [<Response [200]>,<Response [404]>,<Response [200]>]
htmls = ['html1', None, 'html2']
You can use something like below
rs = (grequests.get(u) for u in urls)
responses = grequests.map(rs)
text = list(map(lambda d : d.text if d else None, responses))
print(text)
What you get back after you call grequests.map is a list of response objects (or None where a request failed). You can then process this data using the native map function.
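Note that the lambda returns the text of any completed request; to match the desired output exactly (None for anything other than HTTP 200), you can also check the status code:

htmls = [r.text if r is not None and r.status_code == 200 else None
         for r in responses]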

Repeatedly getting 500 error on CGI script - frontend HTML works but not CGI [MySQL]

What I want is for a MySQL query that I wrote a few weeks ago to return values from the database based on an entered search term.
Here is the HTML script:
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
</head>
<body>
<p>
Search by gene product name
</p>
<form name="search" method="post" action="XXXX-CGISCRIPT-XXXX">
<input id="search_entry" type="text" name="search_entry">
<input type="submit" value="Search">
</form>
</body></html>
and here is the CGI script:
#!/usr/bin/env python3

import mysql.connector
import cgi
import jinja2
import re
from biocode import utils
import cgitb

def main():
    cgitb.enable()
    loadTemp = jinja2.FileSystemLoader(searchpath="./templates")
    environs = jinja2.Environment(loader=loadTemp)
    template = environs.get_template('search_template.html')
    form = cgi.FieldStorage()
    acc = form.getvalue('search_entry')
    conn = mysql.connector.connect(user='XXX', password='XXX',
                                   host='localhost', database='XXX_chado')
    # MySQL query
    qry = """SELECT f.uniquename, product.value, assem.uniquename, assem_type.name, g.fmin, g.fmax, g.strand
             FROM feature f
             JOIN cvterm polypeptide ON f.type_id=polypeptide.cvterm_id
             JOIN featureprop product ON f.feature_id=product.feature_id
             JOIN cvterm productprop ON product.type_id=productprop.cvterm_id
             JOIN featureloc g ON f.feature_id=g.feature_id
             JOIN feature assem ON g.srcfeature_id=assem.feature_id
             JOIN cvterm assem_type ON assem.type_id=assem_type.cvterm_id
             WHERE polypeptide.name = 'polypeptide'
             AND productprop.name = 'gene_product_name'
             AND product.value LIKE %s;"""
    curs = conn.cursor()
    curs.execute(qry, ('%' + acc + '%',))
    for (f.uniquename, productprop.name) in curs:
        print(template.render(curs=curs, accs=acc))
    conn.close()
    curs.close()
I keep getting a 500 Internal Server Error and I'm not quite sure why. My understanding is that search_entry on the HTML side (which works) is what the user enters into the search box, to be handled by the CGI script. The search term is then retrieved by
form = cgi.FieldStorage()
acc = form.getvalue('search_entry')
and then later passed by
curs = conn.cursor()
curs.execute(qry, ('%' + acc + '%',))
to be used in the preceding MySQL query. The last few lines send variables over to the template (whose bridge I will cross when I get there) and close the connection.
My problem is that I have trouble understanding why I keep getting the '500 Internal Server Error' message. I have tried making several changes and caught many mistakes, as well as altering the formatting, but to no avail. If it were a template error, I don't think I would be getting this server error, would I? I think it's a CGI error, and
import cgitb
...
cgitb.enable()
doesn't work, so I can't get a more detailed error message, regardless of where I place cgitb.enable() or whether I have it log to a file on the server.
Does anyone notice any obvious code errors that I could correct to at least stop getting the error? In the interest of full disclosure, this is for a school assignment.
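For anyone hitting the same symptom, one common cause of a bare 500 in CGI is output (or an exception) reaching the server before a Content-Type header. A hedged sketch of the top of such a script, printing the header first and pointing cgitb at a log directory so the traceback survives even when the page errors out (the /tmp path is only an example):

#!/usr/bin/env python3
import cgitb
# Write tracebacks to a directory the web server user can write to,
# instead of relying on them reaching the browser.
cgitb.enable(display=0, logdir="/tmp/cgitb")

# The CGI header must be sent before any other output.
print("Content-Type: text/html")
print()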

how to chunk a csv (dict)reader object in python 3.2?

I am trying to use Pool from the multiprocessing module to speed up reading large csv files. For this, I adapted an example (from py2k), but it seems like the csv.DictReader object has no length. Does that mean I can only iterate over it? Is there a way to chunk it anyway?
These questions seemed relevant, but did not really answer my question:
Number of lines in csv.DictReader,
How to chunk a list in Python 3?
My code tried to do this:
source = open('/scratch/data.txt','r')

def csv2nodes(r):
    strptime = time.strptime
    mktime = time.mktime
    l = []
    ppl = set()
    for row in r:
        cell = int(row['cell'])
        id = int(row['seq_ei'])
        st = mktime(strptime(row['dat_deb_occupation'],'%d/%m/%Y'))
        ed = mktime(strptime(row['dat_fin_occupation'],'%d/%m/%Y'))
        # collect list
        l.append([(id,cell,{1:st,2: ed})])
        # collect separate sets
        ppl.add(id)
    return (l,ppl)

def csv2graph(source):
    r = csv.DictReader(source,delimiter=',')
    MG = nx.MultiGraph()
    l = []
    ppl = set()
    # Remember that I use integers for edge attributes, to save space! Dict above.
    # start: 1
    # end: 2
    p = Pool(processes=4)
    node_divisor = len(p._pool)*4
    node_chunks = list(chunks(r,int(len(r)/int(node_divisor))))
    num_chunks = len(node_chunks)
    pedgelists = p.map(csv2nodes, zip(node_chunks))
    ll = []
    for l in pedgelists:
        ll.append(l[0])
        ppl.update(l[1])
    MG.add_edges_from(ll)
    return (MG,ppl)
From the csv.DictReader documentation (and the underlying csv.reader class it wraps), the class returns an iterator. The code should have thrown a TypeError when you called len().
You can still chunk the data, but you'll have to read it entirely into memory. If you're concerned about memory, you can switch from csv.DictReader to csv.reader and skip the overhead of the dictionaries csv.DictReader creates. To improve readability in csv2nodes(), you can assign constants to address each field's index:
CELL = 0
SEQ_EI = 1
DAT_DEB_OCCUPATION = 4
DAT_FIN_OCCUPATION = 5
I also recommend using a variable name other than id, since that shadows a built-in function.
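A minimal sketch of that approach, assuming a chunks() helper like the one from the linked question and the column positions guessed above (adjust the indices to the real file layout):

import csv
from multiprocessing import Pool

def chunks(lst, size):
    # yield successive slices of at most `size` rows
    for i in range(0, len(lst), size):
        yield lst[i:i + size]

with open('/scratch/data.txt', 'r') as source:
    rows = list(csv.reader(source, delimiter=','))  # materialize the iterator

p = Pool(processes=4)
node_divisor = len(p._pool) * 4
node_chunks = list(chunks(rows, max(1, len(rows) // node_divisor)))
pedgelists = p.map(csv2nodes, node_chunks)  # csv2nodes now indexes row[CELL], etc.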