Potential problems with passing JSON in URL - json

I have a web page that is basically a form the user fills out.
I developed a C# VSTO Outlook 2010 Add-In that can create a JSON object based on the details of an appointment on a user's calendar. The JSON is then passed via the URL (in the Query String) to my web page. The object that is passed is used to automatically fill in the details on the web form. The web page is ASP.NET, although I don't think that's relevant.
This is the first time I've ever passed a JSON object in a URL. Are there any potential pitfalls I should watch out for? Anything that could go wrong? I saw in this question that someone said you can pass JSON objects via URL with "no problems" but the question there was "can I do it?" rather than "what problems might I run into?"

The most realistic problem (if you are not concerned with security) is the GET request size limitation. IE, for example, has a limitation of 2Kb for GET requests. So you could get into a situation where your request will be trimmed.
And for the security issues mentioned, GET requests are stored completely in browser history and thus can potentially be exposed to third parties.
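The two pitfalls raised in this answer, handled on both ends, as an added example that is not part of the original posts. The field names, the `data` parameter and the 2000-character budget are assumptions; only allow-listed fields go into the URL (and so into browser history), and the receiver treats the parameter as untrusted input.

```python
import json
from urllib.parse import parse_qs, urlencode, urlsplit

# Assumed values: a budget under the ~2 KB limit cited above, hypothetical form fields.
MAX_URL_LEN = 2000
ALLOWED = {"subject": str, "start": str, "end": str, "location": str}


def build_prefill_url(base, appointment):
    """Sender: serialize only allow-listed fields and percent-encode the JSON."""
    payload = {k: appointment[k] for k in ALLOWED if k in appointment}
    query = urlencode({"data": json.dumps(payload, separators=(",", ":"))})
    url = f"{base}?{query}"
    if len(url) > MAX_URL_LEN:
        # Fail loudly instead of letting a browser or proxy trim the request.
        raise ValueError(f"URL is {len(url)} characters; it may be truncated")
    return url


def parse_prefill(url):
    """Receiver: reject truncated, malformed or unexpected input."""
    values = parse_qs(urlsplit(url).query).get("data", [])
    if len(values) != 1:
        raise ValueError("expected exactly one 'data' parameter")
    try:
        obj = json.loads(values[0])
    except json.JSONDecodeError as exc:  # a trimmed URL ends mid-JSON
        raise ValueError("malformed or truncated JSON") from exc
    if not isinstance(obj, dict):
        raise ValueError("expected a JSON object")
    clean = {}
    for key, value in obj.items():
        if key not in ALLOWED or not isinstance(value, ALLOWED[key]):
            raise ValueError(f"unexpected field or type: {key!r}")
        clean[key] = value
    return clean
```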

Related

looking for deeper understanding of the html form action attribute

I am looking for a deeper explanation of the html form action attribute than is usually given. What is really happening when the user hits 'submit' in the browser? I assume that the browser sends some kind of message to the web server software. So the browser is communicating with for instance Nginx.
But the way people talk about the action attribute makes it sound as if the browser is really sending the data to some arbitrary URL. Like to a php script located at that URL, but that doesn't really make sense to me. Is the form data really being sent to the web server and then the web server parses the action attribute and attempts to somehow submit the parameters and values to a script located there? In that case the URL specified by the action attribute would really be more like a suggestion to the web server.
Can someone explain to me what is really going on? I find the idea of the form data being sent to a 'where' or anything other than to the web server quite confusing and I have not been able to find a deeper explanation anywhere. All paths seem to lead to the concept of the form data being sent to some URL as if that actually made sense.

LinkedIn API v2 integration

I'm trying to integrate LinkedIn API v2 to the app I'm developing for my client and I need help with it. Basically, I need to allow users to fetch some of their LinkedIn profile data and save it to the platform. As I understood, the first version of the API will no longer be supported. https://developer.linkedin.com/docs
So, the problem is that the default field set I was able to retrieve is extremely limited. And it seems like I should apply for the Developer Program here to gain additional API access
https://business.linkedin.com/marketing-solutions/marketing-partners/become-a-partner/marketing-developer-program
I already submitted the application but haven't yet received any response. The frustrating part is that I'm not even sure if this is what I should do to get access.
Here's what I already discovered
Here it's said that the partner's program isn't available
https://www.linkedin.com/help/linkedin/answer/97491
Here it sends me to the partner program
https://developer.linkedin.com/support/faq
Should I choose marketing? https://developer.linkedin.com/partner-programs
I suppose so because other options seem to be irrelevant. So I already applied here
https://business.linkedin.com/marketing-solutions/marketing-partners/become-a-partner/marketing-developer-program
But still no answer
Here are the developers facing the same issues with no answer as well
https://www.linkedin.com/help/linkedin/forum/question/712591
https://www.linkedin.com/help/linkedin/forum/question/711176
https://www.linkedin.com/help/linkedin/forum/question/711027
Here seems to be the answer to a similar question but still, no specific link or steps to apply for a partner's program
LinkedIn API V2 - Can't get summary, skills and headline
Here they also tell about some partner's program but again without specifics
Linkedin oauth2 r_liteprofile not being returned from api
Here in the official doc, it's also said that I should apply to the program (which I did)
https://learn.microsoft.com/en-us/linkedin/consumer/integrations/self-serve/migration-faq?context=linkedin/consumer/context
I applied on the 23rd of January and I'm still waiting for the approval without even knowing if this program will give me the API access I need
So I need to know one of the following
If my application will be approved it'll give me the extended access to the API v2 (r_fullprofile permission)
If the application I submitted isn't enough what else should I do in order to get the extended access to the API v2 (r_fullprofile permission)
It feels to be a simple process and I don't really understand why it has to involve the Marketing Developer Program when I only need to access some of the fields. I'm sure there is a reason for that. Could anybody from support provide some steps that I or my client should take in order to get the API access?
I already created the app as a developer here and successfully tested it
https://www.linkedin.com/developers/apps
So, just to be clear, the problem is not in something not working technically. It's just that I receive a very limited set of fields of a user's profile and I need to expand it

How to make basic REST API calls using a browser

I am trying to get started with REST API calls by seeing how to format the API calls using a browser. Most examples I have found online use SDKs or just return all fields for a request.
For example, I am trying to use the Soundcloud API to view track information.
To start, I've made a simple request in the browser as follows http://api.soundcloud.com/tracks/13158665.json?client_id=31a9f4a3314c219bd5c79393a8a569ec which returns a bunch of info about the track in JSON format
(e.g. {"kind":"track","id":13158665,"created_at":"2011/04/06 15:37:43 ...})
Is it possible to get only the "created_at" value returned using the browser? I apologize if this question is basic, but I don't know what keywords to search online. Links to basic guides would be nice, although I would prefer to stay out of using a specific SDK for the time being.
In fact, it's really hard to answer such a question since it depends on the Web APIs. I mean, if the API supports returning only a subset of fields, you could, but if not, you will receive all the content. From what I saw in the documentation, it's not possible. The filters only allow you to get a subset of elements and not control the list of returned fields within elements.
Notice that you have a great application to execute HTTP requests (and also REST) in Chrome: Postman. This allows you to execute all HTTP methods and not only GET ones, to control the headers and sent content, and also to see what is received back.
If you use Firefox, Firebug provides a similar thing.
To finish, you could have a look at this link to find out hints about the way Web APIs work and are designed: https://templth.wordpress.com/2014/12/15/designing-a-web-api/.
Hope it helps you and I answered your question,
Thierry
Straight from the browser bar you can utilize REST endpoints that respond to a GET message. That is what you are doing when you hit that URI, you are sending an HTTP GET message to that server and it is sending back a JSON.
You are not always guaranteed a JSON, or anything when hitting a known REST endpoint. What each endpoint returns when hit with a GET is specific to how it was built. In that case, it is built to return a JSON, but some may return an HTML page. In my personal experience, most endpoints that utilize JSON returns expect you to process that object in a computer fashion and don't give you a lot of options to get a specific field of the JSON. Here is a good link on how to process JSON utilizing JavaScript.
You can utilize REST clients (such as the Advanced REST Client for Chrome) to craft HTTP POST and PUT if a specific REST endpoint has the functionality built in to receive data and do something with it. For example, a lot of wiki style REST endpoints will allow you to create a page with a specifically crafted HTTP POST with either specific header information, URI parameters or a JSON as part of it.
You can install the DHC client app in Chrome and send requests like PUT or GET.

Finding out what http requests were made by the user

I am working on a project involving finding out what http requests were made by the user.
I have all the http request and response headers (but not the data), and I need to find out what content was requested by the user and what content was automatically sent (e.g. ads pages, streaming on the background, and all sorts of irrelevant content).
When recording the net traffic (even for a short period) a lot of content gets generated, and most of it is not relevant.
Since I'm no expert in HTTP, I'd like some help with directions as to which headers I can safely use (assuming most web pages send them), and which headers might be omitted and so will not be safe to rely on.
My current idea involves:
Find all the HTML files, and check what the main HTML files were (no referrer or search engine referrer), and then recursively mark all the files called by these HTML files onward as relevant, and discard the rest.
The problem with this is that I've been told that I can't trust the referrer header, and I have no idea how to identify which HTML files were clicked by the user.
Every kind of help will be appreciated, sorry if the post is not formatted well, this is my first question here.
EDIT:
I've been told the question isn't clear enough, so all I'm asking is for some way to determine which requests were triggered by the user and which requests were automatically made
To determine which request was sent by the user themselves, you should look at the first request sent through the connection and look at its response body.
All external files referenced in this first body which then consecutively get sent to the user are most likely to be sent automatically without the user's interaction.
Time passing between requests could also be a factor worth looking at.
Another thing you already mentioned yourself would be looking at the Referer header. As far as the RFC 2616 14.36 goes it can be trusted, as the Referer header must not be sent if the Request URI comes from user input. Although there could be automatically sent content which does not have the Referer header set, as it's optional.
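A header-only heuristic combining the ideas in this thread (root HTML pages, the Referer chain and the time between requests), as an added example that is not part of the original posts. The record layout, the `SEARCH_HOSTS` list and the `CLICK_GAP` threshold are assumptions, and the sample records are constructed.

```python
from urllib.parse import urlsplit

SEARCH_HOSTS = {"www.google.com", "www.bing.com"}  # assumed search-engine referrers
CLICK_GAP = 2.0  # assumed: seconds after which a referred HTML load counts as a click


def classify(requests):
    """Label header-only records 'user' or 'auto' and attribute each to a root page.

    Each record: {'ts': float, 'url': str, 'referer': str or None, 'ctype': str}.
    Returns a list of (url, label, root_page) in time order.
    """
    seen = {}          # url -> (timestamp, root page it belongs to)
    last_root = None   # most recent user-initiated page
    out = []
    for r in sorted(requests, key=lambda rec: rec["ts"]):
        ref = r["referer"]
        is_html = r["ctype"].startswith("text/html")
        parent = seen.get(ref) if ref else None
        if is_html and (ref is None or urlsplit(ref).hostname in SEARCH_HOSTS):
            label, root = "user", r["url"]    # typed URL, bookmark or search result
        elif is_html and parent and r["ts"] - parent[0] > CLICK_GAP:
            label, root = "user", r["url"]    # link followed after a human-scale pause
        elif parent:
            label, root = "auto", parent[1]   # sub-resource or iframe of a known page
        else:
            label, root = "auto", last_root   # Referer absent: attribute to latest page
        if label == "user":
            last_root = root
        seen[r["url"]] = (r["ts"], root)
        out.append((r["url"], label, root))
    return out


# Constructed sample: one page load with an ad iframe, then a click 41 s later.
SAMPLE = [
    {"ts": 0.0, "url": "http://news.example/", "referer": None, "ctype": "text/html"},
    {"ts": 0.3, "url": "http://news.example/site.css", "referer": "http://news.example/", "ctype": "text/css"},
    {"ts": 0.4, "url": "http://ads.example/frame.html", "referer": "http://news.example/", "ctype": "text/html"},
    {"ts": 0.6, "url": "http://ads.example/banner.gif", "referer": "http://ads.example/frame.html", "ctype": "image/gif"},
    {"ts": 0.9, "url": "http://cdn.example/beacon", "referer": None, "ctype": "image/gif"},
    {"ts": 41.0, "url": "http://news.example/story.html", "referer": "http://news.example/", "ctype": "text/html"},
]
```

Because the Referer header is optional, records without it fall into the last branch and are attributed by recency only, so those labels deserve the least confidence.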

Web crawler dealing with "Sign up or log in to read full content"

Given a page like this, I am trying to extract all the answer text with a ruby web crawler.
I am using nokogiri and search('div[@class="answer_content"]').inner_text to access the answers, but I can't seem to access all the text, even when in fact I am logged in. About 200 words down, I'll get the message "sign up or log in to read full content."
Also, is this div class the correct one to use?
It seems to me that you need to authenticate yourself from the crawler. I did it a few weeks ago. I used a Firefox extension called Tamper Data which allowed me to see the requests made between the browser and the server. In my case, the authentication was handled by a session id; I just had to get it back and pass it to each request I made to the server.
But in your case, the authentication might be done in a different way; you'll have to see for yourself. Anyway, I can give more detail if it's not clear enough.