Downloading a lot of slippery data

I have access to a web interface for a large amount of data. This data is usually accessed by people who only want a handful of items. The company that I work for wants me to download the whole set. Unfortunately, the interface only allows you to see fifty elements (of tens of thousands) at a time, and segregates the data into different folders.
To make matters worse, all of the data lives at the same URL, which updates itself dynamically through AJAX calls to an ASPX interface. Writing a simple curl script to grab the data is difficult because of this and because of the authentication required.
How can I write a script that navigates around a page, triggers ajax requests, waits for the page to update, and then scrapes the data? Has this problem been solved before? Can anyone point me towards a toolkit?
Any language is fine, I have a good working knowledge of most web and scripting languages.
Thanks!

I usually use a program like Fiddler or Live HTTP Headers and simply watch what's happening behind the scenes. 99.9% of the time you'll see that there's a querystring or REST call with a very simple pattern that you can emulate.
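For example, once you've spotted the pattern, replaying it is usually a short loop. Here's a minimal sketch in Python (using the requests library); the endpoint, parameter names, and auth scheme are placeholders for whatever Fiddler actually shows you:

    import requests

    BASE_URL = "https://example.com/data.aspx"   # hypothetical endpoint from Fiddler
    PAGE_SIZE = 50                               # the interface shows 50 items at a time

    session = requests.Session()
    session.auth = ("username", "password")      # or whatever auth the site uses

    all_rows = []
    offset = 0
    while True:
        # Mimic the querystring pattern observed in Fiddler.
        resp = session.get(BASE_URL, params={"start": offset, "count": PAGE_SIZE})
        resp.raise_for_status()
        rows = resp.json()                       # assuming the call returns JSON
        if not rows:
            break                                # no more pages
        all_rows.extend(rows)
        offset += PAGE_SIZE

    print(f"Downloaded {len(all_rows)} items")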

If you need to directly control a browser
Have you thought of using a tool like WatiN? It's actually meant for UI testing, but I suppose you could use it to programmatically make requests anywhere and act upon the responses.
If you just need to get the data
But since you can do whatever you please, you can simply make ordinary web requests from a desktop application and parse the results, customizing it to your own needs. You can also simulate AJAX requests at will by setting certain request headers.
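As a sketch of what "simulating an AJAX request" looks like in practice, here's an illustrative Python snippet; the endpoint and payload are made up, but X-Requested-With is the header most AJAX frameworks send:

    import requests

    # Many AJAX endpoints check this header to decide whether to return
    # a partial (data-only) response instead of a full page.
    headers = {
        "X-Requested-With": "XMLHttpRequest",
        "Referer": "https://example.com/data.aspx",   # hypothetical page URL
    }
    resp = requests.post(
        "https://example.com/data.aspx/GetItems",     # hypothetical AJAX endpoint
        json={"folder": "A", "page": 1},              # made-up payload
        headers=headers,
    )
    print(resp.status_code, resp.text[:200])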

Maybe this?
Website scraping using jquery and ajax
http://www.kelvinluck.com/2009/02/data-scraping-with-yql-and-jquery/

Related

SQL drop down selections in HTML

I'm new to HTML and coding, period. I've created a basic HTML page. In that page I want to create dropdown selections that produce output from my SQL database (MSSQL, not MySQL).
For example: if I select a table or a column from dropdown one and then input a keyword into selection box two, I want it to produce a table that shows the information in that table/column matching that keyword.
If I select a medical name from the dropdown, I want it to show only medical names that are equal to Diabetes, and then show me those rows from my database in a table. How would I do that in HTML, from connecting to the database, to creating the dropdown selection linked to the database, to selecting the criteria for what I want displayed, and then showing that in a table or list format?
Thank you in advance
OK, Facu Carbonel's answer is a bit... chaotic, so since this question (surprisingly) isn't closed yet, I'll write one myself and try to do better.
First of all - this is a VERY BROAD topic which I cannot answer directly. I could give a pile of code, but walking through it all would take pages of text and in the end you'd just have a solution for this one particular problem and could start from scratch with the next one.
So instead I'll take the same path that Facu Carbonel took and try to show some directions. I'll put keywords in bold that you can look up and research. They're all pieces of the puzzle. You don't need to understand each of them completely and thoroughly from the beginning, but be aware of what they are and what they do, so that you can google the finer details when you need them.
First of all, you need to understand the roles of the "server side" and "client side".
The client side is the browser (Chrome, Firefox, Internet Explorer, what have you). When you type an address in the address bar (or click a link or whatever), the browser parses the whole thing and extracts the domain name. For example, the link to this question is https://stackoverflow.com/questions/59903087/sql-drop-down-selections-in-html?noredirect=1#comment105933697_59903087 and the domain part of that is stackoverflow.com. The rest of this long gibberish (it's called a "URL", by the way) is also relevant, but later.
With the domain in hand, the browser then uses the DNS system to convert that pretty name into an IP address. Then it connects via the network to the computer (aka "server") designated by that IP address and issues an HTTP request (HTTP, not HTML - don't mix these up, they're not the same thing).
HTTP, by the way, is the protocol that is used on the web to communicate between the server and the browser. It's like a language that they both understand, so that the browser can tell the server: "hey, give me the page /questions/59903087/sql-drop-down-selections-in-html". And the server then returns the HTML for that page.
This, by the way, is another important point to understand about HTTP. First the browser makes its request, and the server listens. Then the server returns its response, and the browser listens. And then the connection is closed. There's no chit-chat back and forth. The browser can do another request immediately after that, but it will be a new request.
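To make that concrete, here's one complete request/response exchange, sketched in Python purely for illustration:

    import requests

    # One request goes out, one response comes back, then the exchange is over.
    response = requests.get("https://stackoverflow.com/questions/59903087")
    print(response.status_code)                  # e.g. 200
    print(response.headers["Content-Type"])      # e.g. text/html; charset=utf-8
    print(response.text[:100])                   # the start of the returned HTML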
Now, the browser is actually pretty limited in what it can do. Through these HTTP requests it gets from the server the HTML code, the CSS code and the Javascript code. It also can get pictures, videos and sound files. And then it can display them according to the HTML and CSS. And Javascript code runs inside the browser and can manipulate the HTML and CSS as needed, to respond to the user's actions. But that's all.
It might seem that the Javascript code that runs inside the browser is all-powerful, but that is only an illusion. It's actually quite limited, and on purpose. In order to prevent bad webpages from doing bad things, the Javascript in each page is essentially limited to that page only.
Note a few things that it CANNOT do:
It cannot connect to something that doesn't use HTTP. Like an SQL server.
It can make HTTP requests, but only to the same domain as the page (you can get around this via CORS, but that's advanced stuff you don't need to worry about)
It cannot access your hard drive (well, it can if the user explicitly selects a file, but that's it)
It cannot affect other open browser tabs
It cannot access anything in your computer outside the browser
This, by the way, is called "sandboxing" - like, the Javascript code in the browser is only allowed to play in its sandbox, which is the page in which it was loaded.
OK, so here we can see that accessing your SQL server directly from HTML/CSS/Javascript is impossible.
Fortunately, there is still the other side of the equation to talk about: the web server, which responded to the browser's requests and gave it the HTML to display.
It used to be, far back in the early days of the internet, that web servers only returned static files. Those days are long gone. Now we can make the webserver return -- whatever we want. We can write a program that inspects the incoming request from the browser, and then generates the HTML on the fly. Or Javascript. Or CSS. Or images. Or whatever. The good thing about the server side is - we have FULL CONTROL over it. There are no sandboxes, no limits, your program can do anything.
Of course, it can't affect anything directly in the browser - it can only respond to the browser's requests. So to make a useful application, you actually need to coordinate both sides. There's one program running in the browser and one program running on the web server. They talk through HTTP requests and together they accomplish what they need to do. The browser program makes sure to give the user a nice UI, and the server program talks to all the databases and whatnot.
Now, while in the browser you're basically limited to just Javascript and the features the browser offers you, on the server side you can choose what web server software and what programming language you use. You can use the same Javascript, or you can go for something like PHP, Java (not the same as Javascript!), C#, Ruby, Python, and thousands of others. Each language is different and does things its own way, but at the end of the day what it will do is receive the incoming requests from the browser and generate some sort of output that the browser expects.
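Just to give a taste of what the server side looks like, here's a deliberately small sketch using Python with Flask and pyodbc (one possible stack out of many; the table and column names are invented for illustration, and a real app would need proper error handling):

    from flask import Flask, request, render_template_string
    import pyodbc

    app = Flask(__name__)

    # Connection string for MSSQL - adjust driver, server and credentials.
    CONN_STR = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost;DATABASE=medical;UID=user;PWD=password"
    )

    PAGE = """
    <form method="GET" action="/">
      <select name="term">
        <option>Diabetes</option>
        <option>Asthma</option>
      </select>
      <input type="submit" value="Search">
    </form>
    {% if rows %}
    <table>
    {% for row in rows %}<tr><td>{{ row.name }}</td><td>{{ row.code }}</td></tr>
    {% endfor %}</table>
    {% endif %}
    """

    @app.route("/")
    def search():
        rows = []
        term = request.args.get("term")
        if term:
            conn = pyodbc.connect(CONN_STR)
            cur = conn.cursor()
            # Parameterized query - never paste user input into SQL directly.
            cur.execute("SELECT name, code FROM medical_names WHERE name = ?", term)
            rows = cur.fetchall()
            conn.close()
        return render_template_string(PAGE, rows=rows)

The browser only ever sees the generated HTML; all the database talk happens on the server.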
So, I hope that this at least gives you some starting point and outlines where to go from here.
First of all, there is something that you need to know to do this, and that is the difference between a front-end and a back-end.
HTML is a front-end technology. It's called that because it's what is shown to the user, while the back-end is all the machinery that runs under the hood.
The thing is, in your front-end you can't do back-end things, like running queries against a database or managing sessions.
For that you need a back-end running behind it, like PHP, Ruby, Node.js or some technology like that.
From the HTML you can only call functions on the server using things like <form action="/log" method="POST">; this would call the action /log that you should have already programmed on your back-end. Don't get confused by this; there are plenty of ways to send requests to your back-end, and this is just one of them.
For your specific case I recommend you look up AJAX, so you can run the query on your database without the browser needing to refresh after the query is made.
Some topics you need to know to understand this are:
-what front-end and back-end are, and their differences
-what client-server architecture is
-ajax
-http requests
-how to work with a back-end: doing queries to the database, making routes, etc.
-and lastly, while your server is not yet open to the world under your own domain name, what localhost is and how to use it
I hope this clarifies things a bit. It's no easy thing, but with a bit of research and practice you will get there!

Game Maker Studio HTML5 localStorage issue

I'm embedding a GMStudio game in the browser using . I need to send some data to the game from the site's frontend as JSON, and to receive some data from the game back in the frontend to trigger consequent actions.
So my idea was to save the data in cookies/localStorage and to get it in the game somehow, using HTTP functionality or DLLs. I'd also like to emit messages from the game using window.parent.postMessage and receive them correctly in the frontend.
Alas, I did not find a way to implement this. I hope there's some consistent approach to this problem that I simply don't know about.
The backup plan is to use Game Maker's http_post_string and web sockets to get the user's data before the game starts and to make the frontend do something after the game ends. It's clumsy and insecure, however.
The standard approach is to make a JavaScript extension.
That is done by creating a blank extension, adding a blank JS file to it, defining the functions via the context menu on it, and then adding the implementations into the JS file. Then you'll be able to call them from GML side as per usual.
This way you can access localStorage/cookies, transmit/receive data from JS backends, and overall mess with the runtime as you please (with various degrees of understanding required to access internal data).

http push django comet

I want to make a Django server that refreshes content as it arrives in the database. The idea is to first let the user see the current contents of the database, and as new content arrives, have it placed above the previous content without reloading the page. In another part of the site, the current content should be swapped for the new content as it reaches the database.
EvServer is my choice so far, but I really don't know how to proceed, or what would be the simplest and most efficient approach.
I think you should avoid HTTP Polling. Here's why:
The frequency of the setInterval combined with the number of users on your web app is going to lead to a big resource drain. If you go through slides 9 to 19 in this presentation you'll see some quite dramatic figures in favour of using Push (note: that example uses a hosted service, but hosting your own realtime server and using Push has similar benefits).
Between setInterval calls, the data displayed in your app is potentially out of date. Using a Push technology means the instant that new data is available it can be pushed and displayed in your app. You don't want users looking at an app and thinking they are seeing correct information when they are not.
You should take a look at the following StackOverflow questions:
Django / Comet (Push): Least of all evils?
Need help understanding Comet in Python (with Django)
For Python/Comet see:
Python Comet Server
The latest recommendation for Comet in Python?
I'd recommend you also start considering "WebSockets" as well as "Comet". Most Comet servers now prefer to use a WebSocket connection when possible.
If you'd prefer to avoid installing and managing your own Comet/WebSocket solution, then you could use a realtime hosted service which will let you push data through them using a REST API; your clients can receive events by embedding a JavaScript library and writing a small amount of code to subscribe and receive the events.
The steps are quite straightforward:
Write a model to store the data in the DB.
Write a view that will return JSON-serialized data upon a POST request.
Write a template that contains JavaScript with setInterval() that makes AJAX requests to the view and renders the received data. (I'd suggest using jQuery, as it's well documented and widespread.)
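A minimal sketch of the first two steps, assuming a hypothetical Django app called feed (all names here are invented for illustration):

    # feed/models.py
    from django.db import models

    class Entry(models.Model):
        text = models.TextField()
        created = models.DateTimeField(auto_now_add=True)

    # feed/views.py
    from django.http import JsonResponse
    from .models import Entry

    def latest_entries(request):
        # Return anything newer than the last id the client saw
        # (CSRF handling is omitted in this sketch).
        since = int(request.POST.get("since", 0))
        entries = Entry.objects.filter(id__gt=since).order_by("id")
        return JsonResponse({
            "entries": [{"id": e.id, "text": e.text} for e in entries]
        })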

Getting same information firebug can get?

This all goes back to some of my original questions about trying to "index" a webpage. I was originally trying to do it specifically in Java, but now I'm opening it up to any language.
Before, I tried using HtmlUnit and other methods in Java to get the information I needed, but wasn't successful.
The information I need to get from a webpage I can very easily find with Firebug, and I was wondering if there was any way to duplicate what Firebug is doing, specifically for my needs. When I open up Firebug I go to the NET tab, then to the XHR tab, and it shows a constantly updating page with the information the server is updating. Then when I click on a request and look at the response, it has the information I need - and all of this without ever refreshing the webpage, which is what I am trying to do (not to mention the variables it outputs do not show up in the HTML of the webpage).
So can anyone point me in the right direction of how they would go about this?
(I will be putting this information into a MySQL database, which is why I added it as a tag; I still don't know what language would be best to use, though.)
Edit: These requests on the server are somewhat random, and although it shows the URL that they come from, when I try to visit the URL in Firefox it comes up trying to open something called application/jos
Jon, I am fairly certain that you are confusing several technologies here, and the simple answer is that it doesn't work like that. Firebug works specifically because it runs as part of the browser, and (as far as I am aware) runs under a more permissive set of instructions than a JavaScript script embedded in a page.
JavaScript is, for the record, different from Java.
If you are trying to log AJAX calls, your best bet is for the serverside application to log the invoking IP, useragent, cookies, and complete URI to your database on receipt. It will be far better than any clientside solution.
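If you did control the server side, that logging hook is only a few lines. A sketch in Python with Flask and MySQL (you mentioned MySQL); the table, database, and credentials are illustrative:

    import mysql.connector
    from flask import Flask, request

    app = Flask(__name__)

    db = mysql.connector.connect(
        host="localhost", user="logger", password="secret", database="request_log"
    )

    @app.before_request
    def log_request():
        # Record who asked for what, before the real handler runs.
        cur = db.cursor()
        cur.execute(
            "INSERT INTO ajax_log (ip, user_agent, cookies, uri)"
            " VALUES (%s, %s, %s, %s)",
            (
                request.remote_addr,
                request.headers.get("User-Agent", ""),
                str(request.cookies),
                request.full_path,
            ),
        )
        db.commit()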
On a note more related to your question, it is not good practice to assume that everyone has read other questions you have posted. Generally speaking, "we" have not. "We" is in quotes because, well, you know. :) It also wouldn't hurt for you to go back and accept a few answers to questions you've asked.
So, the problem is:
With someone else's web-page, hosted on someone else's server, you want to extract select information?
Using cURL, Python, Java, etc. is too painful because the data is continually updating via AJAX (requires a JS interpreter)?
Plain jQuery or iFrame intercepts will not work because of XSS security.
Ditto, a bookmarklet -- which has the added disadvantage of needing to be manually triggered every time.
If that's all correct, then there are 3 other approaches:
Develop a browser plugin... More difficult, but has the power to do everything in one package.
Develop a userscript. This is much easier to do and technologies such as Greasemonkey deal with the XSS problem.
Use a browser macro technology such as Chickenfoot. These all have pluses and minuses -- which I won't get into.
Using Greasemonkey:
Depending on the site, this can be quite easy. The big drawback, if you want to record data, is that you need your own web-server and web-application. But this server can be locally hosted on an XAMPP stack, or whatever web-application technology you're comfortable with.
Sample code that intercepts a page's AJAX data is at: Using Greasemonkey and jQuery to intercept JSON/AJAX data from a page, and process it.
Note that if the target page does NOT use jQuery, the library in use (if any) usually has similar intercept capabilities. Or, listening for DOMSubtreeModified always works, too.
If you're using a library such as jQuery, you may have an option such as the jQuery ajaxSend and ajaxComplete callbacks. These could post requests to your server to log these events (being careful not to end up in an infinite loop).

how to browse a web site with a script to get information

I need to write a script that goes to a web site, logs in, navigates to a page and downloads (and after that parses) the HTML of that page.
What I want is a standalone script, not a script that controls Firefox. I don't need any javascript support in that just simple html navigation.
If nothing exists that makes this easy... well, then something that acts through a web browser would do (Firefox or Safari; I'm on a Mac).
thanks
I've no knowledge of pre-built general purpose scrapers, but you may be able to find one via Google.
Writing a web scraper is definitely doable. In my very limited experience (I've written only a couple), I did not need to deal with login/security issues, but in Googling around I saw some examples that dealt with them - afraid I don't remember the URLs for those pages. I did need to know some specifics about the pages I was scraping; having that made it easier to write the scraper, but, of course, the scrapers were limited to use on those pages. However, if you're just grabbing the entire page, you may only need the URL(s) of the page(s) in question.
Without knowing what language(s) would be acceptable to you, it is difficult to help much more. FWIW, I've done scrapers in PHP and Python. As Ben G. said, PHP has cURL to help with this; maybe there are more options, but I don't know PHP very well. Python has several modules you might choose from, including lxml, BeautifulSoup, and HTMLParser.
Edit: If you're on Unix/Linux (or, I presume, Cygwin) you may be able to achieve what you want with wget.
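For a flavour of the Python route, a minimal sketch with requests and BeautifulSoup (the URL is a placeholder; add auth or cookies if the site requires login):

    import requests
    from bs4 import BeautifulSoup

    # Fetch the page (plain GET; no login handled in this sketch).
    html = requests.get("https://example.com/page").text

    soup = BeautifulSoup(html, "html.parser")
    for link in soup.find_all("a"):
        # Print each link's target and visible text.
        print(link.get("href"), link.get_text(strip=True))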
If you wanted to use PHP, you could use the cURL functions to build your own simple web page scraper.
For an idea of how to get started, see: http://us2.php.net/manual/en/curl.examples-basic.php
This is PROBABLY a dumb question, since I have no knowledge of Macs, but what language are we talking about here? And is this a website that you have control over, or something like a spider bot that Google might use when checking page content? I know that in C# you can load in objects from other sites using an HttpWebRequest and a stream reader... In JavaScript (this would only really work if you know what is SUPPOSED to be there) you could open the web page as the source of an iframe and, using JavaScript, traverse the contents of all the elements on the page... or better yet, use jQuery.
I need to write a script that goes to a web site, logs in, navigates to a page and downloads (and after that parses) the HTML of that page.
To me this just sounds like a POST or GET request to the URL of the login page could do the job. With the proper username and password parameters set in the request (depending on the form input names used on the page), the result will be the HTML of the page, which you can then parse as you please.
This can be done with virtually any language. What language do you want to use?
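In Python, for instance, a hedged sketch would look like this (the URLs and form field names are placeholders; inspect the real login form to find the actual ones):

    import requests

    session = requests.Session()  # keeps cookies across requests

    # Log in: the field names must match the login form's input names.
    session.post(
        "https://example.com/login",                     # hypothetical login URL
        data={"username": "me", "password": "secret"},
    )

    # The session now carries the auth cookies, so protected pages work.
    page = session.get("https://example.com/members/data")
    print(page.text[:500])                               # the HTML, ready to parse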
I recently did exactly what you're asking for in a C# project. If login is required, your first request is likely to be a POST and include credentials. The response will usually include cookies which persist the identity across subsequent requests. Use Fiddler to look at what form data (field names and values) is being posted to the server when you log on normally with your browser. Once you have this you can construct an HttpWebRequest with the form data and store the cookies from the response in a CookieContainer.
The next step is to make the request for the content you actually want. This will be another HttpWebRequest with the CookieContainer attached. The response can be read by a StreamReader, which you can then read and convert to a string.
Each time I’ve done this it has usually been a pretty laborious process to identify all the relevant form data and recreate the requests manually. Use Fiddler extensively and compare the requests your browser is making when using the site normally with the requests coming from your script. You may also need to manipulate the request headers; again, use Fiddler to construct these by hand, get them submitting correctly and the response as you expect then code it. Good luck!