OneNote API: Set Rule Lines options

Is it possible to set the Rule Lines options, found under the View tab, using the OneNote API?
I've had a look at the page and content endpoints but can't see anything that suggests itself.

It is possible to achieve this when creating a new page via the API. If you have a template page that is ruled and copy it with the page copyToSection action, the resulting page will also be ruled:
POST https://graph.microsoft.com/beta/me/notes/pages/<id>/copyToSection
Content-Type: application/json
Content-Length: 52

{
  "id": "id-value",
  "groupId": "groupId-value"
}
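For reference, here is a minimal Python sketch of that call using the requests library, against the beta endpoint shown above; the bearer token, template page id and target section id are placeholders you would substitute with real values:

import requests

# Placeholder values for illustration only.
TOKEN = "access-token"
TEMPLATE_PAGE_ID = "template-page-id"
TARGET_SECTION_ID = "target-section-id"

url = ("https://graph.microsoft.com/beta/me/notes/pages/"
       f"{TEMPLATE_PAGE_ID}/copyToSection")

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {TOKEN}"},
    # groupId is only needed when the target section is in a group notebook.
    json={"id": TARGET_SECTION_ID},
)
resp.raise_for_status()  # the copy runs asynchronously, so expect 202 Accepted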

Posting Yanir's comment as an answer:
This option is currently not supported through the API. I would encourage you to submit your idea to: onenote.uservoice.com/forums/245490-onenote-developer-apis, so we can explore working on it.

Related

Scrapy can't find form on page

I'm trying to write a spider that will automatically log in to this website. However, when I try using scrapy.FormRequest.from_response in the shell I get the error:
No <form> element found in <200 https://www.athletic.net/account/login/?ReturnUrl=%2Fdefault.aspx>
I can definitely see the form when I use Inspect Element on the site, but it did not show up when I searched for it with response.xpath() in Scrapy either. Is it possible for the form content to be hidden from my spider somehow? If so, how do I fix it?
The form is created with JavaScript; it is not part of the static HTML source code. Scrapy does not execute JavaScript, so the form cannot be found in the downloaded response.
The relevant part of the static HTML (where they inject the form using Javascript) is:
<div ng-controller="AppCtrl as appC" class="m-auto pt-3 pb-5 container" style="max-width: 425px;">
  <section ui-view></section>
</div>
To find issues like this, I would either:
compare the source code from "View Source Code" and "Inspect" with each other, or
browse the web page in a browser without JavaScript (when I develop scrapers I usually keep one browser with JavaScript for research and documentation, and another one for checking web pages without JavaScript)
In this case, you have to manually create your FormRequest for this web page. I was not able to spot any form of CSRF protection on their form, so it might be as simple as:
FormRequest(url='https://www.athletic.net/account/auth.ashx',
            formdata={"e": "foo@example.com", "pw": "secret"})
However, I suspect you cannot use formdata here, because the endpoint expects you to send JSON instead. I am not sure FormRequest can handle that; you may just want to use a standard Request.
Since they use JavaScript heavily on their front end, you cannot use the source code of the page to find these parameters either. Instead, I used the developer console of my browser and checked the request/response that happened when I tried to log in with invalid credentials.
This gave me:
General:
Request URL: https://www.athletic.net/account/auth.ashx
[...]
Request Payload:
{e: "foo#example.com", pw: "secret"}
Scrapy has a JsonRequest class to help with posting JSON; see the request/response documentation [https://docs.scrapy.org/en/latest/topics/request-response.html].
So something like the below should work:
data = {"password": "pword", "username": "user"}
# JSON POST to API login URL
return JsonRequest(
url=url,
callback=self.after_login,
data=data,
)
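Putting the pieces together, a minimal runnable sketch of such a spider might look like the following; the spider name and credentials are placeholders, and the payload field names "e" and "pw" are taken from the request observed in the developer console above:

import scrapy
from scrapy.http import JsonRequest

class LoginSpider(scrapy.Spider):
    name = "athletic_login"  # hypothetical spider name

    def start_requests(self):
        # Field names "e" and "pw" match the payload seen in the
        # browser's developer console; the credentials are placeholders.
        payload = {"e": "foo@example.com", "pw": "secret"}
        yield JsonRequest(
            url="https://www.athletic.net/account/auth.ashx",
            data=payload,
            callback=self.after_login,
        )

    def after_login(self, response):
        # Check the response to confirm the login succeeded before
        # crawling any authenticated pages.
        self.logger.info("Login response status: %s", response.status)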

Google Chrome show ajax response

I am using Google Chrome Developer Tools to try to see the response of some AJAX URLs.
The problem is that when I click on the Network tab, then on the request, then on Response, I see this text: "This request has no response data available".
I have been using Firebug and I am 100% sure there is a response from that page.
Can somebody help with this?
Thank you!
You can check manually whether there is a response or not.
Generally, when dealing with AJAX we mostly use POST, so you can create a page with the same structure that handles the same input and response but uses the GET method and prints the output data as normal.
This way you can very easily see whether there are any responses or errors from your script.
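As a minimal sketch of that idea in Python with Flask (the question itself is language-agnostic, and the /api/echo endpoint is made up), one handler can accept both methods, so the same logic can be tested directly in the browser via GET:

from flask import Flask, jsonify, request

app = Flask(__name__)

# POST serves the real AJAX call; GET lets you open the same URL in a
# browser and read the output (or any error page) directly.
@app.route("/api/echo", methods=["GET", "POST"])
def echo():
    data = request.values.to_dict()  # merges query-string and form parameters
    return jsonify(received=data)

if __name__ == "__main__":
    app.run(debug=True)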

Custom Twitter, G+, Facebook buttons

Google, Twitter and Facebook all seem to have their own styles, colors and, worst of all, sizes. Is there any way to use custom images while also retaining the share counts? A simple API or workaround for all 3 services?
These APIs usually render markup into your page which can be targeted and styled through CSS, so implementing your own CSS that targets the rendered markup may be the simplest and cleanest approach.
Another option is to create your own HTML, CSS and JavaScript that acts as a facade between the underlying APIs (Facebook, G+, Twitter) and the client.
These endpoints return simple JSON responses with the retweet and like counts for a specific URL:
http://urls.api.twitter.com/1/urls/count.json?url=SOME_URL_HERE
http://graph.facebook.com/SOME_URL_HERE
Examples:
TWITTER API CALL:
http://urls.api.twitter.com/1/urls/count.json?url=http://stackoverflow.com
:: JSON RESPONSE:
{"count":4504,"url":"http://stackoverflow.com/"}
FACEBOOK GRAPH CALL:
http://graph.facebook.com/http://stackoverflow.com
:: JSON RESPONSE:
{
  "id": "http://stackoverflow.com",
  "shares": 7004,
  "comments": 3
}
Currently there is no public API method for Google...
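As an illustration, here is a small Python sketch that queries both endpoints above for a given URL; the endpoints are quoted from this answer and may since have been retired, so treat it as a sketch rather than a supported integration:

import json
from urllib.parse import quote
from urllib.request import urlopen

def share_counts(url):
    # Endpoints as quoted above; both are historical and may no longer be live.
    encoded = quote(url, safe="")
    tweets = json.load(urlopen(
        "http://urls.api.twitter.com/1/urls/count.json?url=" + encoded))
    fb = json.load(urlopen("http://graph.facebook.com/" + encoded))
    return {"tweets": tweets.get("count"), "shares": fb.get("shares")}

print(share_counts("http://stackoverflow.com"))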

Opinions on using HTTP request headers to switch between website (HTML) and api (JSON)

We have an ecommerce website that displays groups of products by category using a URL format that maps almost exactly to the REST URL format we would like to use for our forthcoming API.
e.g. example.com/products/latest or example.com/products/hats
Is it a valid pattern to use the same URL for visible (HTML) and invisible (JSON) results, and to use the Accept HTTP request header to determine what should be returned?
I.e. if you call example.com/products/latest with Accept: application/json you get just the product data, but with text/html you get the full HTML page (header, footer, site chrome, etc.).
And if so, is this a good idea - will we run into problems if, for instance, the website needs to change, but the API needs to be stable?
UPDATE: some helpful resources - here is an article[1] by Peter Williams discussing the use of the HTTP Accept header to version APIs, and I have also referenced an SO question[2] that reveals some of the problems of using this approach. Probably better to use a custom HTTP header?
[1] Making the case for using Accept: http://barelyenough.org/blog/2008/05/versioning-rest-web-services/
[2] Problems with jQuery (& IE): Cannot properly set the Accept HTTP header with jQuery
[3] Making the case for using Accept: http://blog.steveklabnik.com/2011/07/03/nobody-understands-rest-or-http.html
[4] Sitting on the fence: http://www.informit.com/articles/article.aspx?p=1566460
Using HTTP headers is generally becoming the accepted way of determining this.
In ASP.NET MVC, for example, there is an IsAjaxRequest method that checks for the X-Requested-With header; if it is equal to "XMLHttpRequest", the request is deemed to be an AJAX request.
Last time I tried to do that (and this was a few years ago) I found I could not override the Accept header of an XMLHttpRequest object in Opera. If that isn't a worry for you, then go for it; that is how HTTP was designed to work.
I recommend giving your HTML response a higher q value than your JSON response though, since some browsers send Accept: */*.
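As a minimal sketch of that negotiation in Python with Flask (the route and product data are made up), best_match honours the client's q values, and listing text/html first makes it win ties such as Accept: */*:

from flask import Flask, jsonify, render_template_string, request

app = Flask(__name__)

PRODUCTS = [{"name": "fedora"}, {"name": "trilby"}]  # placeholder data

@app.route("/products/hats")
def hats():
    # best_match respects q values; list order breaks ties, so HTML
    # wins for browsers that send Accept: */*.
    best = request.accept_mimetypes.best_match(
        ["text/html", "application/json"])
    if best == "application/json":
        return jsonify(PRODUCTS)
    return render_template_string(
        "<ul>{% for p in products %}<li>{{ p.name }}</li>{% endfor %}</ul>",
        products=PRODUCTS)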
I have no experience with this, but RESTful Web Services recommends that you version your API via the URL (e.g. api.example.com/v1/products/hats); I'm not sure that would fit with using the same URLs for the website and the API.

Chrome extension, replace HTML in response code before browser displays it

I wonder if there is some way to do something like this:
If I'm on a specific site, I want some of its JavaScript files to be loaded directly from my computer (e.g. file:///c:/test.js), not from the server.
For that I was wondering whether it is possible to make an extension which could change the HTML code in a response right before the browser displays it. The whole process should look like this:
request is made
browser gets response from server
#response is changed# - this is the part where the extension comes in
browser parses the changed response and displays the page with that new response.
It doesn't even have to be a Chrome extension. It should just do the job described above. It could block the original file and serve another one (DNS/proxy?), or filter all the HTTP traffic on my computer and replace specific code in a matched response.
You can use the webRequest API to achieve that. For example, you can add an onBeforeRequest listener and redirect some requests:
chrome.webRequest.onBeforeRequest.addListener(function(details) {
    var responseData = "<div>Some text</div>";
    return {redirectUrl: "data:text/html," + encodeURIComponent(responseData)};
}, {urls: ["https://www.google.com/"]}, ["blocking"]);
This will display a <div> element with the text "Some text" instead of the Google homepage. Note that you can only redirect to URLs that the web server itself is allowed to redirect to. This means that redirecting to file:/// URLs is not possible, and you can only redirect to files inside your extension if these are web accessible. data: and http: URLs work fine, however.
On Windows you can use the Proxomitron (proxomitron.info), a local proxy that can intercept any page or file being loaded into your browser and change it however you want using regular expressions (no DOM parsing), before it is rendered by the browser.