Serving JSON from WCF Service with no extension in IIS - json

I have a WCF service set up to serve data through multiple endpoints (SOAP, JSON, and XML). The SOAP and XML endpoints are working perfectly, but when I try to view the JSON I get a prompt to download a file containing the JSON results instead of seeing the results in the browser. This probably won't matter, since the client will most likely be consuming the data from some sort of .NET environment that can handle the response natively, but I wanted to see if there was a way to display the JSON results in the browser just like the XML results.
An example of the url I am using to get the results:
http://localhost/api/Service.svc/json/GetResults?name=Test&test=test
This prompts me to download a file named "GetResults" with no extension; the file type is application/json.

If your goal is simply to view the content of the JSON response in the browser, change your settings in Firefox or use another browser. I tried something similar with IE and it displayed the JSON content in the browser without any changes; I'm not sure what Chrome will do.
I ran into a similar situation with a REST call that I POST a request to. I was building a mock REST service in Grails and noticed that when I hit the live server or my mock server with Firefox it kept asking me to download the file, but IE did not. The problem I'm dealing with now is that when I hit my mock endpoint with SoapUI it also asks me to download the file, while hitting the live server with SoapUI does not.
I'm still trying to figure this out.

This is exactly the desired behavior. The response's content type is "application/json". Most browsers cannot display content with this content type inline (unless manually configured), so they prompt you to download the file.
If you save the file and open it with, say, Notepad, you will see that it contains the plain JSON response.
The browser forcing this download is almost never an issue in practice, however. The typical consumers of a JSON endpoint like this are ASP.NET AJAX-powered web pages (which make these requests and parse the responses automatically), scripting environments such as Python or Perl (which likewise fetch and parse the responses), or custom JavaScript frameworks.
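For example, a scripting client can fetch and parse the JSON response directly. Here is a minimal sketch using Python's requests library against the URL from the question; the query-string values are just the example ones shown above:

```python
import requests

# JSON endpoint from the question; "name" and "test" are the example parameters.
url = "http://localhost/api/Service.svc/json/GetResults"
params = {"name": "Test", "test": "test"}

response = requests.get(url, params=params)
response.raise_for_status()

# The body is plain JSON even though a browser offers it as a file download.
data = response.json()
print(data)
```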
Hope this helps!

Related

get a json file that a webpage is made of

I'm not familiar with web development, but I believe the text content of this web page
https://almath123.github.io/semstyle_examples/
is built from two JSON files mentioned in it (semstyle_results.json and semstyle_results.json), and that the JSON files are held entirely in RAM (if that is the correct term for it), because when I disconnect from the internet I can still browse the page and see the text content.
I want to download the semstyle_results.json file. Is that possible? How can I do that?
Technically, if you visit a website you are already "downloading" the content. Your browser sends a request for information and a server responds by sending you that information, which you then view locally. Dynamic sites poll or make further requests as you browse to keep the data updated and relevant, but it is all sent to you.
If you want to easily download any of the content from the website, a simple way is to open the developer tools (Ctrl + Shift + I on Windows for Firefox and Chrome), go to a source file, and click Save As. The Network tab shows the requests that were made, which include not just files such as JSON but also the details of each request.
Here is a screenshot locating one of the JSON files in a Chromium-based browser (Brave).
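If you would rather script the download once you have spotted the file in the Network tab, a minimal sketch with Python's requests library follows; the exact URL of semstyle_results.json is a guess based on the page address, so confirm it in the Network tab first:

```python
import requests

# Guessed location of the JSON file, based on the page URL from the question;
# check the browser's Network tab for the actual path.
json_url = "https://almath123.github.io/semstyle_examples/semstyle_results.json"

response = requests.get(json_url)
response.raise_for_status()

# Save the raw JSON next to the script.
with open("semstyle_results.json", "w", encoding="utf-8") as f:
    f.write(response.text)
```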
Web pages do not always advertise that their data is also available as JSON or XML. For example, if you inspect this SEC EDGAR database page using the method described above, it shows no JSON link, but if you append index.json to the end of the URL it returns the same data in JSON format (or XML, if you prefer).
i.e.: the same website, but with a JSON endpoint
So it is always a good idea to check whether the website publishes developer information. For example, SEC EDGAR provides developer tools which mention that the directory structure can be accessed via HTML, XML, or JSON.
SEC developer information
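As a rough illustration of the index.json trick, here is a short Python sketch. The EDGAR directory URL and the User-Agent value are placeholders rather than values taken from the question, and the exact shape of the JSON listing may vary:

```python
import requests

# Hypothetical EDGAR directory URL; appending index.json returns the
# directory listing as JSON instead of HTML.
directory_url = "https://www.sec.gov/Archives/edgar/data/320193/"
listing_url = directory_url + "index.json"

# EDGAR asks clients to identify themselves; this value is a placeholder.
headers = {"User-Agent": "example-script contact@example.com"}

response = requests.get(listing_url, headers=headers)
response.raise_for_status()

listing = response.json()
# EDGAR listings typically nest entries under "directory" -> "item";
# fall back to an empty list if the structure differs.
for item in listing.get("directory", {}).get("item", []):
    print(item.get("name"))
```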

Is there a way to scrape data from a website that is not available in the page's source?

What are the things I will have to include in my code to point me in the right direction?
For example, this website:
Open your browser's debugger on the Network tab and observe the requests made while the site loads its dynamic content (i.e. when you click). You'll see that it gets all the data from an API, for example: https://www.bestfightodds.com/api?f=ggd&b=3&m=16001&p=2
You can download all the data by changing the parameters in this URL.
Usually that is enough, but here it is trickier: the data returned by the server is encoded in some way and not easily readable. You would have to debug the site's JavaScript to find the function that decodes this data before you can parse it.
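As a sketch of the first step in Python with requests, using the API URL quoted above; the parameter values are simply the ones observed in that example, and their meanings are undocumented assumptions:

```python
import requests

# API endpoint observed in the browser's Network tab (from the answer above).
url = "https://www.bestfightodds.com/api"
params = {"f": "ggd", "b": 3, "m": 16001, "p": 2}

response = requests.get(url, params=params)
response.raise_for_status()

# The payload comes back in an encoded form, so it cannot simply be parsed
# as JSON; inspect response.text and the site's JavaScript to work out the
# decoding step before extracting values.
print(response.text[:200])
```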

how to get dynamic table data in HTML response in JMeter

I am using JMeter 3.0. In my project I visit pages where dynamic table contents are displayed in the response.
The tabular layout is shown, but the data is not, and I need that data so I can extract values from it.
Can someone help me out here?
If you don't see the full response data it may mean that the data is being populated by secondary JavaScript requests, i.e. AJAX calls.
As per the JMeter project's main page:
JMeter is not a browser, it works at protocol level. As far as web-services and remote services are concerned, JMeter looks like a browser (or rather, multiple browsers); however JMeter does not perform all the actions supported by browsers. In particular, JMeter does not execute the Javascript found in HTML pages. Nor does it render the HTML pages as a browser does (it's possible to view the response as HTML etc., but the timings are not included in any samples, and only one sample in one thread is ever displayed at a time).
so if your table is populated via AJAX requests you need to simulate these requests and get the data from their responses. AJAX requests can be recorded using the HTTP(S) Test Script Recorder, but when it comes to replaying them you need to handle them a little differently from "normal" sequential HTTP requests; see the How to Load Test AJAX/XHR Enabled Sites With JMeter article to learn how AJAX requests can be handled in JMeter tests.

Where is the Data stored on Website

I am at this website -
http://www.zoominfo.com/s/#!search/company/1.64.eyJjb21wYW55TmFtZSI6xIB2YWx1xIw6ImEiLCJpc1VzZWTEjXRyxJN9fQ%3D%3D
Notice the company name - Agilent Technologies Inc.
It is neither in the page source nor in any JSON format that I can find.
But it does show up in the DOM in Chrome Developer Tools.
I have looked at and analysed almost every request it sends, but I still couldn't find where this data comes from.
By "where the data is saved" I mean: where can I scrape that data from,
if I am using python-requests and BeautifulSoup?
I do see an XMLHttpRequest being made; I'm not sure what that means, or whether it is the clue to my answer.
I am still learning Python, and it would be very useful if someone could help me with this.
Thanks in advance.
After the HTML is loaded, JavaScript requests the data through an XMLHttpRequest, and the result is inserted as soon as the response reaches your client. That's why you see the DOM element when using the element inspector.
You didn't mention what goal you want to achieve or what tool you are using, so please be specific in your question. If you aren't familiar with this kind of pattern, google AngularJS and look at some examples.
I do see an XMLHttpRequest being made; I'm not sure what that means, or whether it is the clue to my answer.
It means that JavaScript embedded in the page is sending an extra HTTP request to the web server. It is likely that the "Agilent Technologies Inc." text is being returned in the server's response to that request, and the JavaScript in the page is then injecting the text into the DOM in the appropriate place.
Where is the Data stored on Website
That is a completely different question ...
(You have already noted that the data (e.g. the company name) gets injected into the page displayed by your browser.)
On the server side, the data could be stored in the web server (or its back-end systems) in a variety of ways. Or it might not be stored at all. There is no way of knowing without looking at the server-side code and configuration.
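Since python-requests and BeautifulSoup were mentioned, here is a minimal sketch of the usual approach: find the XHR endpoint in the browser's Network tab and call it directly. The URL and headers below are placeholders, not the actual ZoomInfo endpoint:

```python
import requests

# Placeholder URL: substitute the XHR endpoint you see in the Network tab
# (filter by XHR) when the company data loads.
xhr_url = "https://www.zoominfo.com/some/xhr/endpoint"

# Some endpoints reject requests that lack browser-like headers.
headers = {"User-Agent": "Mozilla/5.0", "Accept": "application/json"}

response = requests.get(xhr_url, headers=headers)
response.raise_for_status()

# If the endpoint returns JSON, parse it directly; BeautifulSoup is only
# needed when the response body is HTML.
data = response.json()
print(data)
```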

Groovy: CyberNeko | User Agents | Browser Version

I'm currently using CyberNeko in an attempt to grab information I want from a website. However, I believe the website checks the user agent / browser version to stop people from simply grabbing the URL content.
I am aware that HtmlUnit can change the browser version, but I'm not sure whether I can do the same with CyberNeko.
Does anyone know if this is possible?
I've never used CyberNeko, but I thought it was just an HTML parser, i.e. I didn't think you could use it to issue HTTP requests and actually download the web page.
It could be that the HTTP request being issued is missing various headers, such as the User-Agent header. An easy way to ensure that the request looks like one sent by a browser is to use HttpClient, rather than CyberNeko, to download the web page. There's some example code available here.
Once you've successfully downloaded the page, use CyberNeko to parse out the bits you're interested in.
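The answer above refers to Java's HttpClient; as an illustration of the same idea in Python (send a browser-like User-Agent, then parse the downloaded HTML), here is a small sketch with placeholder URL and header values:

```python
import requests
from bs4 import BeautifulSoup

# Placeholder target URL.
url = "https://example.com/page"

# A browser-like User-Agent; many sites respond differently (or refuse)
# when this header is missing or looks like a bot.
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}

response = requests.get(url, headers=headers)
response.raise_for_status()

# Parse out the bits you're interested in (this is the role CyberNeko
# plays on the Java side).
soup = BeautifulSoup(response.text, "html.parser")
print(soup.title.get_text() if soup.title else "no <title> found")
```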