In my Django project I have this function:

import json
import requests

def mesaj_yolla():
    fbid = "my_facebook_id"
    post_message_url = 'https://graph.facebook.com/v2.6/me/messages?access_token=<my_access_token>'
    response_msg = json.dumps({"recipient": {"user_id": fbid}, "message": {"text": "hello"}})
    status = requests.post(post_message_url, headers={"Content-Type": "application/json"}, data=response_msg)
    print(status)

It returns: <Response [400]>
What is wrong with this code? I just want to send a message to a user.
According to the API documentation, you should use recipient.id instead of recipient.user_id:
def mesaj_yolla():
    fbid = "my_facebook_id"
    post_message_url = 'https://graph.facebook.com/v2.6/me/messages?access_token=<my_access_token>'
    response_msg = json.dumps({"recipient": {"id": fbid}, "message": {"text": "hello"}})
    status = requests.post(post_message_url, headers={"Content-Type": "application/json"}, data=response_msg)
    print(status)
That explains the HTTP 400 code (Bad request).
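If it still returns 400, print the response body as well; the Graph API puts the exact error details in the JSON payload. A minimal check using the same requests call as above:

status = requests.post(post_message_url,
                       headers={"Content-Type": "application/json"},
                       data=response_msg)
print(status.status_code)  # 400 means the payload or token was rejected
print(status.json())       # the "error" object explains exactly what was wrong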
I am using smtplib to send email to people in my company from a company mail server. The code here works fine. I just wonder how it works without needing to provide the port in smtplib.SMTP(). Additionally, I can set any sender address within the company domain, and I am curious how that works as well. I'd appreciate it if someone could explain this to me or share a link so I can learn about it.
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def send_email(subject, sender, receiver):
    msg = MIMEMultipart("alternative")
    msg['Subject'] = subject
    msg['From'] = sender
    if type(receiver) is list:
        msg['To'] = ", ".join(receiver)
    else:
        msg['To'] = receiver
    # html message here
    content = MIMEText(html, "html")
    msg.attach(content)
    with smtplib.SMTP('mailserver.companydomain.com') as smtp:
        smtp.sendmail(sender, receiver, msg.as_string())
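On the port question: when no port is passed, smtplib falls back to the standard SMTP port 25 (the module constant smtplib.SMTP_PORT), so the call above is equivalent to this sketch (same hypothetical host name as in the question):

import smtplib

print(smtplib.SMTP_PORT)  # 25, used whenever no explicit port is given

# Equivalent to smtplib.SMTP('mailserver.companydomain.com'):
with smtplib.SMTP('mailserver.companydomain.com', 25) as smtp:
    smtp.noop()  # no-op command, just proves the connection works

As for the arbitrary sender address: SMTP itself does not verify the envelope sender; whether a given From address is accepted is a policy decision of the company mail server (relay rules, SPF/DKIM checks), not something smtplib enforces.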
I've been trying to build a tool that needs to fetch the URLs of all files in a GitHub code search result. For example, when you go here and search for uber.com api_key, you'll see that there are 381 code results, and I want to get the URLs of all 381 files.
In order to do that I learned how to use the GitHub API v3 and wrote the following function:
def fetchItems(search, GITHUB_API):
    items = set()
    response = {"items": [1]}
    pageNumber = 1
    while response["items"]:
        sleep(3)  # trying to avoid the rate limit, not successful though :(
        url = "https://api.github.com/search/code"
        params = {
            "q": search,
            "per_page": 30,  # default value, it can be increased to 100
            "page": pageNumber
        }
        headers = {
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {GITHUB_API}"
        }
        r = requests.get(url=url, headers=headers, params=params, verify=False)
        if r.status_code == 403:  # if we exceed the rate limit, sleep until the rate limit gets reset
            epochReset = int(r.headers["X-Ratelimit-Reset"])
            epochNow = time()
            if epochNow < epochReset:
                sleep((epochReset - epochNow) + 1)
            sleep(1)
            continue
        response = json.loads(r.text)
        for file in response["items"]:
            items.add(file["html_url"])
        pageNumber += 1
    return items
The per_page parameter indicates the number of items returned in each page, and page is the page number :). By increasing the page number with every request, you should be able to get all items, as far as I understand.
However, when I opened my database and checked the items that had been written, I saw that there were only 377 files, so 4 of the files are missing.
Because of my reputation I can't post images, so click here.
I checked the db writer function and I'm sure there is nothing wrong with it. Does the GitHub API return missing items in the JSON, or am I doing something wrong?
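One quick way to see whether the API itself is short-changing you (rather than the db writer) is to compare the total_count field that the search endpoint returns with the number of URLs actually collected. A minimal sketch reusing fetchItems from above (GITHUB_API is the same token):

items = fetchItems("uber.com api_key", GITHUB_API)

r = requests.get("https://api.github.com/search/code",
                 params={"q": "uber.com api_key", "per_page": 1},
                 headers={"Accept": "application/vnd.github+json",
                          "Authorization": f"Bearer {GITHUB_API}"})
print(r.json()["total_count"])  # how many results GitHub reports
print(len(items))               # how many URLs were actually collected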
I'm using the following code to get Drive log data from the Reports API:

SCOPES = ['https://www.googleapis.com/auth/admin.reports.audit.readonly']
DELEGATION_ACCT = 'xxx'

creds = service_account.Credentials.from_service_account_file(
    self._config['googletools']['credentials-file'],
    scopes=SCOPES)
delegate_creds = creds.with_subject(DELEGATION_ACCT)
service = build('admin', 'reports_v1', credentials=delegate_creds)
drive_logs = service.activities().list(userKey='all', applicationName='drive').execute()

And that returns me a list of drive logs.
I would like to get the same result from a stream, so that I can receive the logs in a continuous way.
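The Reports API does not hand back an open-ended stream directly, but one common approximation is to poll activities.list with a startTime filter and follow nextPageToken. A rough sketch under that assumption (the polling interval is arbitrary, and events near a window boundary may be delivered twice, so deduplicate by id if that matters):

import time
from datetime import datetime, timezone

def poll_drive_logs(service, interval_seconds=60):
    # Only ask for events newer than the last poll.
    last_seen = datetime.now(timezone.utc).isoformat()
    while True:
        page_token = None
        while True:
            resp = service.activities().list(
                userKey='all',
                applicationName='drive',
                startTime=last_seen,
                pageToken=page_token).execute()
            for activity in resp.get('items', []):
                yield activity  # hand each log entry to the caller as it arrives
            page_token = resp.get('nextPageToken')
            if not page_token:
                break
        last_seen = datetime.now(timezone.utc).isoformat()
        time.sleep(interval_seconds)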
I have a Pyramid web application that I am trying to unit-test.
In my tests file I have this snippet of code:

anyparam = {"isApple": "True"}

@parameterized.expand([
    ("ParamA", anyparam, 'success')])
def test_(self, name, params, expected):
    request = testing.DummyRequest(params=params)
    request.session['AI'] = ''
    response = dothejob(request)
    self.assertEqual(response['status'], expected,
                     "expected response['status']={0} but response={1}".format(expected, response))
Whereas in my views:
@view_config(route_name='function', renderer='json')
def dothejob(request):
    params = json.loads(request.body)
    value = params.get('isApple')  # true or false
However, when I'm trying to unit-test it, I am getting this error:
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Yet when I make the same POST request via the web browser, it works perfectly fine.
By doing testing.DummyRequest(params=params) you are only populating request.params, not request.body.
You probably want to do something like:
request = testing.DummyRequest(json_body=params)
Also, you may want to use request.json_body directly in your code instead of json.loads(request.body).
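Putting both suggestions together (dothejob, test_ and the 'AI' session key are names from the question):

# In the test: populate the JSON body rather than request.params.
request = testing.DummyRequest(json_body={"isApple": "True"})
request.session['AI'] = ''
response = dothejob(request)

# In the view: use the already-parsed body.
@view_config(route_name='function', renderer='json')
def dothejob(request):
    params = request.json_body   # equivalent to json.loads(request.body)
    value = params.get('isApple')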
We are using sunspot-rails to connect to websolr. I am trying to find a way to add HTTP headers to the outgoing request. Samples exist only for rsolr, not for sunspot-rails (https://github.com/onemorecloud/websolr-demo-advanced-auth).
The purpose is to use the headers for authentication. Is there a way to add/modify HTTP headers from sunspot-rails for both indexing and querying calls?
I think I found the answer to this:
https://groups.google.com/forum/#!searchin/ruby-sunspot/authentication/ruby-sunspot/-FtTQdg4czs/mvOuB7g8yCgJ
The example quoted by outoftime in that thread shows how to retrieve the underlying http object.
class SolrConnectionFactoryWithTimeout
  def initialize(timeout = 60)
    @timeout = timeout
  end

  def connect(opts = {})
    client = RSolr.connect(opts)
    solr_connection = client.connection
    http = solr_connection.connection
    http.read_timeout = @timeout
    client
  end
end

Sunspot::Session.connection_class =
  SolrConnectionFactoryWithTimeout.new(timeout.to_f)
Then use it in combination with
http://ruby-doc.org/stdlib-2.0/libdoc/net/http/rdoc/Net/HTTP.html#label-Setting+Headers
req = Net::HTTP::Get.new(uri)
req['If-Modified-Since'] = file.mtime.rfc2822