Would delaying the Ajax GET function help in this case? - html

I am using the following to save the HTML of a certain website in a string:
function loadajax(dname) {
    $.ajaxSetup({async: false});
    $.get('https://www.example/?param=param1', function(response) {
        var logfile = response;
        //alert(logfile);
    });
}
The problem is that the HTML contains placeholders like {{sample}} which, it seems, have not been rendered yet when the Ajax call fetches the code. When I perform the operation manually in the browser, I can clearly see HTML content instead of the {{ }} placeholders.
I have already tried {async: false}...
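For what it's worth, here is a minimal sketch of actually keeping the response string around by hoisting the variable out of the callback. Note this fetches the raw HTML exactly as the server sends it, so any placeholders that the page's own scripts fill in later will still show up as {{ }}:
function loadajax(dname) {
    var logfile = '';
    // async: false is deprecated, but it does make $.get block
    // so that logfile is populated before the function returns
    $.ajaxSetup({async: false});
    $.get('https://www.example/?param=param1', function(response) {
        logfile = response;
    });
    return logfile;
}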

Related

Change html page with jqm (1.4.0), passing parameter

I am building several apps and want to be able to reuse some code as separate HTML pages by passing parameters to them.
I would really like to pass parameters via ajax with one of these:
Alt1
$.mobile.pageContainer.pagecontainer("change", "../Photo/Photo.html", { reload: true, parameter: "dummyParameter"});
$.mobile.changePage("../Photo/Photo.html", { reloadPage: true, parameter: "dummyParameter"});
The problem is that the page won't reload.
If I use the link below, the page is loaded/reloaded, but I can't seem to find the passed parameter.
Alt2
Or through a basic link
(I would prefer not to generate the URL in JavaScript as in Alt2, but if that's what it takes...)
I use this code to try to retrieve the parameters:
$(document).on("pagebeforechange", function (e, data) {
if (data.toPage[0].id == "Photo") {
//var parameters = $(this).data("url").split("?")[1];
//var parameter = parameters.replace("paremeter=", "");
var stuff = data.options.stuff;
//showStuff("#p2", stuff);
}
});
While I'm at it, for anyone using TypeScript: Visual Studio complains that this call signature isn't correct:
$(document).on("pagebeforechange", function (e, data)
It expects the handler to take one argument (the event), not the data parameter. The plugin generates correct JavaScript, but the IDE complains.
Thanks!

How to write and immediately read a file in Node.js

I have to obtain a JSON object that is embedded inside a script tag on a certain page... so I can't use regular scraping techniques, like cheerio.
The easy way out: write the file (download the page) to the server, then read it and use string manipulation to extract the JSON objects (there are several), work on them, and happily save them to my DB.
The thing is that I'm too new to Node.js and can't get the code to work. I think I'm trying to read the file before it is fully written, and if I read it too soon I get [object Object]...
Here's what I have so far...
var http = require('http');
var fs = require('fs');
var request = require('request');

var localFile = 'tmp/scraped_site_.html';
var url = "siteToBeScraped.com/?searchTerm=foobar"

// writing
var file = fs.createWriteStream(localFile);
var request = http.get(url, function(response) {
    response.pipe(file);
});

// reading
var readedInfo = fs.readFileSync(localFile, function (err, content) {
    callback(url, localFile);
    console.log("READING: " + localFile);
    console.log(err);
});
So first of all I think you should understand what went wrong.
The HTTP request operation is asynchronous. This means that the callback code in http.get() will run at some point in the future, but fs.readFileSync, due to its synchronous nature, will execute and complete even before the HTTP request has actually been handed to the background thread that will execute it, since they are both invoked in what is commonly known as the (same) tick. Also, fs.readFileSync returns a value and does not take a callback.
Even if you replace fs.readFileSync with fs.readFile, the code still might not work properly, since the readFile operation might execute before the HTTP response has been fully read from the socket and written to disk.
I strongly suggest reading this Stack Overflow question and/or Understanding the node.js event loop.
The correct place to invoke the file read is when the response stream has finished writing to the file, which would look something like this:
var request = http.get(url, function(response) {
    response.pipe(file);
    file.once('finish', function () {
        fs.readFile(localFile, 'utf8' /* or whichever encoding fits */, function(err, data) {
            // do something with the data if there is no error
        });
    });
});
Of course, this is a very raw and not recommended way to write asynchronous code, but that is another discussion altogether.
Having said that, if you download a file, write it to disk and then read it all back into memory for manipulation, you might as well forgo the file part and just read the response into a string right away. Your code would then look something like this (it can be implemented in several ways):
var request = http.get(url, function(response) {
    var data = '';
    function read() {
        var chunk;
        while ( chunk = response.read() ) {
            data += chunk;
        }
    }
    response.on('readable', read);
    response.on('end', function () {
        console.log('[%s]', data);
    });
});
What you really should do, IMO, is create a transform stream that extracts just the data you need from the response while not consuming too much memory, yielding this more elegant-looking code:
var request = http.get(url, function(response) {
    response.pipe(yourTransformStream).pipe(file);
});
Implementing this transform stream, however, might prove slightly more complex. So if you're a node beginner and you don't plan on downloading big files or lots of small files, then maybe loading the whole thing into memory and doing string manipulation on it is simpler.
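To give a rough idea, a minimal skeleton of such a transform stream could look like the following. The class name is made up, and the _transform body is a plain pass-through; your actual extraction logic would replace the comment:
var stream = require('stream');
var util = require('util');

function ExtractStream(options) {
    stream.Transform.call(this, options);
}
util.inherits(ExtractStream, stream.Transform);

ExtractStream.prototype._transform = function(chunk, encoding, done) {
    // inspect or trim each chunk here before passing it along
    this.push(chunk);
    done();
};

var yourTransformStream = new ExtractStream();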
For further information about transformation streams:
node.js stream api
this wonderful guide by substack
this post from strongloop
Lastly, see if you can use any of the million node.js crawlers already out there :-) take a look at these search results on npm
According to the http module docs, 'get' does not return the response body.
This is modified from the request example on the same page.
What you need to do is process the response within the callback (function) passed into http.request, so it runs when the response is ready (async):
var http = require('http')
var fs = require('fs')

var localFile = 'tmp/scraped_site_.html'
var file = fs.createWriteStream(localFile)

var req = http.request('http://www.google.com.au', function(res) {
    res.pipe(file)
    res.on('end', function(){
        file.end()
        fs.readFile(localFile, function(err, buf){
            console.log(buf.toString())
        })
    })
})

req.on('error', function(e) {
    console.log('problem with request: ' + e.message)
})

req.end();
EDIT
I updated the example to read the file after it is created. This works by attaching a callback to the response's end event, which closes the file so it can then be reopened for reading. Alternatively, you can use
res.on('data', function(chunk){...})
to process the data as it arrives without putting it into a temporary file
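Fleshed out, that alternative might look something like this sketch (accumulating into a string instead of a temp file, reusing the same example URL):
var http = require('http')

var req = http.request('http://www.google.com.au', function(res) {
    var body = ''
    res.on('data', function(chunk) {
        body += chunk // handle each chunk as it arrives
    })
    res.on('end', function() {
        console.log(body.length + ' characters received')
    })
})

req.on('error', function(e) {
    console.log('problem with request: ' + e.message)
})

req.end();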
My impression is that you're trying to parse a JSON object out of a stream that's downloading a file containing HTML. This is doable, yet hard. It's difficult to know when your search expression has been found, because if you parse the chunks as they come in, you never know whether you received only part of the surrounding context; what you're looking for could be split into two or many parts that are never analyzed as a whole.
You could try something like this:
var req = http.request('u/r/l', function(res){
    res.on('data', function(chunk){
        //parse each chunk as it comes in
    });
});
req.end();
This allows you to read the data as it comes in. You can save it to disk or a DB as it arrives, or even parse it, provided you accumulate the contents of the script tags into a single string and then parse the objects out of that.
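As a rough sketch of that accumulate-then-parse idea (the regular expression is a naive illustration that assumes the JSON sits in a script tag of a known shape; a real page may need something sturdier):
var http = require('http');

var req = http.request('u/r/l', function(res) {
    var body = '';
    res.on('data', function(chunk) {
        body += chunk; // buffer the whole page first
    });
    res.on('end', function() {
        // pull a {...} blob out of a script tag and parse it
        var match = body.match(/<script[^>]*>[\s\S]*?(\{[\s\S]*\})[\s\S]*?<\/script>/);
        if (match) {
            var obj = JSON.parse(match[1]);
            console.log(obj);
        }
    });
});
req.end();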

Calling a PHP script on button press with Sencha Architect

I've been looking at the documentation and tutorials for Sencha Architect, and I can't figure it out. What I want is to have a button press post a value to a PHP script on a server, and then retrieve the result from a PHP session variable. From what I've seen, I'm not sure if I can get it to call PHP at all, much less read a session variable.
I realize there may be a few questions in here (connecting the button to a controller/store, calling the script, reading the result), but I don't know enough about Architect to know if they're the correct ones.
EDIT: I think I've got the button connected to a controller, but I'm still not sure how to get it to call the PHP script.
EDIT 2:
I added a BasicFunction to the button, but I can't get it to work. Here's the code:
// Look up the items stack and get a reference to the first form it finds
var form = this.up('formpanel');
var values = form.getValues().getValues()[0];
Ext.Msg.alert('Working', 'Loading...', Ext.emptyFn);
Ext.Ajax.request({
    url: 'http://wereani.ml/shorten-app.php',
    method: 'POST',
    params: {
        url: values
    },
    success: function(response) {
        Ext.Msg.alert('Link Shortened', Ext.JSON.decode(response.responseText).toString(), function() {
            form.reset();
        });
    },
    failure: function(response) {
        Ext.Msg.alert('Error', Ext.JSON.decode(response.responseText).toString(), function() {
            form.reset();
        });
    }
});
Also, is that the correct way to get the value from the field (itemID:url)? I couldn't find anything in the documentation for Touch about that.
Use an Ext.Ajax request in the listener for the button. docs.sencha.com/touch/2.2.1/?mobile=/api/Ext.Ajax.
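For instance, a minimal sketch of such a listener (the endpoint and params below are placeholders, not your actual app):
Ext.Ajax.request({
    url: 'shorten-app.php', // your PHP script
    method: 'POST',
    params: {
        url: 'http://example.com/some-long-link'
    },
    success: function(response) {
        // whatever the PHP script echoes arrives as a string in responseText
        console.log(response.responseText);
    },
    failure: function(response) {
        console.log('server returned status ' + response.status);
    }
});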
The documentation there is pretty straightforward. If you have trouble please post some specifics and I'll try to write you an example.
Good luck, Brad

jQuery: How to replace content with JSON response

I am having difficulty replacing the content of an HTML element with a JSON object property. Here's my code:
url = '/blah/blah-blah';
data = $.getJSON(url);
$(this).parent('.status').replaceWith(data.content);
Now, I know that the correct JSON object is being returned and that it includes a properly formatted property called 'content'. (I am displaying it in the console). Secondly, I know that I am selecting the correct element to replace. (If I replace data.content with 'bingo!' I see the text displayed on screen.)
When I run the code above, however, I see the content of my element replaced with nothing. What am I doing wrong?
Note that I tried replacing data.content with data.responseJSON.content, but that didn't help.
Thanks!
You need to use a callback:
url = '/blah/blah-blah';
$.getJSON(url, function(data) {
    $("some selector").parent('.status').replaceWith(data.content);
});
In your example, $.getJSON doesn't return your data; it returns a jqXHR object, and the response hasn't arrived yet at the point where you use it. Meanwhile, it fires off your request. When getJSON succeeds, the result is passed to a handling function which does things with it. If you don't provide a callback, nothing will happen when you get a response back from the server.
Or, if you don't want to use a new selector, you can save $(this):
url = '/blah/blah-blah';
var item = $(this);
$.getJSON(url, function(data) {
    item.parent('.status').replaceWith(data.content);
});
The AJAX call is asynchronous, so the content hasn't arrived yet when you try to use it. When you inspect it in the console, you simply aren't looking fast enough to notice that the response doesn't arrive immediately.
Use a callback in the getJSON call to handle the data when it arrives:
url = '/blah/blah-blah';
var item = $(this); // save a reference: this no longer points at the element inside the callback
$.getJSON(url, function(data) {
    item.parent('.status').replaceWith(data.content);
});
Your code executes before the .getJSON(url) call completes. Try specifying a success handler, like so:
var item = $(this); // capture the element; this is rebound inside the callback
$.getJSON(url, function(data) {
    item.parent('.status').replaceWith(data.content);
});

HTML5 offline JSON doesn't work

I have a small HTML5 web app (using jQuery Mobile) that caches its files to use them offline; however, some parts don't seem to work once it's offline.
The files are cached OK (I can see them in the web inspector) but when I try to visit a page that uses jQuery to load a JSON file it doesn't load.
I tried creating an empty function to load the JSON files (when the index page is loaded) to see if that would help but it doesn't seem to make a difference.
Here's the function that doesn't want to work offline.
My question is: should it work offline or am I missing something?
// events page listing start
function listEvents(data){
    $.getJSON('/files/events.json', {type: "json"}, function (data) {
        var output = '';
        for (i in data)
        {
            var headline = data[i].headline;
            var excerpt = data[i].rawtext;
            output += '<div id="eventsList">';
            output += '<h3>'+headline+'</h3>';
            output += '<p>'+ excerpt +'</p>';
            output += '</div>';
        }
        $("#eventsPageList").html(output).trigger("create");
    });
}
I'm not really sure if I'm right about this, but I think an AJAX request will always fail when you are offline; it won't use the locally cached file. What you should try is caching the data in localStorage, and falling back to localStorage when the AJAX request fails.
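A minimal sketch of that fallback pattern, using jQuery's done/fail handlers (the function name and storage key here are just illustrative):
function loadEvents(render) {
    $.getJSON('/files/events.json')
        .done(function(data) {
            localStorage['events'] = JSON.stringify(data); // refresh the cache while online
            render(data);
        })
        .fail(function() {
            var cached = localStorage.getItem('events');
            if (cached) {
                render(JSON.parse(cached)); // offline: use the cached copy
            }
        });
}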
OK, here's a version which seems to work: I read the JSON file and place it in localStorage, then use the localStorage copy in the listEvents function.
When the page loads, I call this function to add the JSON to localStorage:
function cacheJson(data){
    $.getJSON('/files/events.json',
        {type: "json", cache: true}, function (data) {
            localStorage['events'] = JSON.stringify(data);
        });
}
Then this function outputs the JSON (from localStorage) to the page, with an if/else in case localStorage doesn't contain the JSON:
function listEvents(data){
    if (localStorage.getItem("events") === null) {
        var output = '';
        output += 'Sorry we have an error';
        $("#eventsPageList").html(output).trigger("create");
    }
    else {
        data = JSON.parse(localStorage['events']);
        var output = '';
        for (i in data)
        {
            var headline = data[i].headline;
            var excerpt = data[i].rawtext;
            output += '<div id="eventsList">';
            output += '<h3>'+headline+'</h3>';
            output += '<p>'+ excerpt +'</p>';
            output += '</div>';
        }
        $("#eventsPageList").html(output).trigger("create");
    }
}
It seems to work OK, but am I missing something that could cause issues?
Is there a more efficient way of doing this?