I have been working with Famo.us for just a short while, but now I need to consume some JSON. In jQuery I would use the getJSON method to make the JSON call and get the data back in an object. Is there a way to do this in pure Famo.us? I ask because I have only found examples where jQuery is added to the app to make that JSON call. I am not sure that this is best practice, so I figured maybe someone could point me in the right direction.
$.getJSON('data/data.json', function(json) {
    $.each(json, function(key, data) {
        seriesArr.push({
            name: data.name,
            y: data.Count,
            drilldown: data.name
        });
    });
});
There is a Utility function in Famo.us for loading a URL: Utility.loadURL (https://famo.us/docs/utilities/Utility).
var Utility = require('famous/utilities/Utility');

Utility.loadURL('http://example.com', function (content) {
    // Check response
    if (!content) {
        return;
    }
    // Consume response
    var parsedContent = JSON.parse(content);
    ...
});
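Applied to the question, the jQuery snippet could be rewritten along these lines (a sketch that assumes the endpoint returns a JSON array and that seriesArr is defined as in the question):

var Utility = require('famous/utilities/Utility');

Utility.loadURL('data/data.json', function(content) {
    if (!content) {
        return;
    }
    // Assumes the response is a JSON array, as $.each iterated it above
    JSON.parse(content).forEach(function(data) {
        seriesArr.push({
            name: data.name,
            y: data.Count,
            drilldown: data.name
        });
    });
});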
You can certainly use jQuery for making requests in Famo.us. Famo.us is designed as the presentation layer of the application. It does not care how the data gets in or out.
Just some things to keep in mind: when making requests, try to time them so that all animation is complete. A request, no matter the library, can cause stuttering.
For instance, using the setTransform callback of a StateModifier:
state.setTransform(transform, transition, function() {
    // Make request
});
So to sum things up, you are on the right path. With vanilla Famo.us you are free to make requests with whichever library you wish. Just do so in a timely manner!
Good Luck!
I am trying to load a GeoJSON file and to draw some graphics using it as a basis with D3 v5.
The problem is that the browser is skipping over everything included inside the d3.json() call. I tried inserting breakpoints to test but the browser skips over them and I cannot figure out why.
Code snippet below.
d3.json("/trip_animate/tripData.geojson", function(data) {
console.log("It just works"); // This never logs to console.
//...all the rest
}
The code continues on from the initial console.log(), but I omitted all of it since I suspect the issue is with the d3.json call itself.
The signature of d3.json has changed from D3 v4 to v5. It has been moved from the now-deprecated module d3-request to the new d3-fetch module. As of v5, D3 uses the Fetch API in place of the older XMLHttpRequest and has in turn adopted Promises to handle those asynchronous requests.
The second argument to d3.json() is no longer the callback handling the request but an optional RequestInit object. d3.json() now returns a Promise, which you can handle in its .then() method.
Your code thus becomes:
d3.json("/trip_animate/tripData.geojson")
.then(function(data){
// Code from your callback goes here...
});
Error handling of the call has also changed with the introduction of the Fetch API. Versions prior to v5 used the first parameter of the callback passed to d3.json() to handle errors:
d3.json(url, function(error, data) {
    if (error) throw error;
    // Normal handling beyond this point.
});
Since D3 v5 the promise returned by d3.json() will be rejected if an error is encountered. Hence, vanilla JS methods of handling those rejections can be applied:
1. Pass a rejection handler as the second argument to .then(onFulfilled, onRejected).
2. Use .catch(onRejected) to add a rejection handler to the promise.
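The first option would look like this (standard Promise semantics, nothing d3-specific):

d3.json("/trip_animate/tripData.geojson")
    .then(function(data) {
        // Code from your callback goes here...
    }, function(error) {
        // Do some error handling.
    });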
Applying the second solution, your code thus becomes:
d3.json("/trip_animate/tripData.geojson")
.then(function(data) {
// Code from your callback goes here...
})
.catch(function(error) {
// Do some error handling.
});
Since none of the answers helped, I had to find a working solution on my own. I am using v4 and have to stick with it. The problem was (in my case) that d3.json worked the first time but did not work the second or third time (with an HTML dropdown).
The idea is to keep the initial function and then, in a second function, simply to use
let data = await d3.json("URL");
instead of
d3.json("URL", function(data) {
Therefore, the general pattern becomes:
async function drawWordcloudGraph() {
    let data = await d3.json("URL");
    ...
}

function initialFunction() {
    d3.json("URL", function (data) {
        ...
    });
}
initialFunction();
I have tried several approaches, and only this one worked. I am not sure if it can be simplified; please test it on your own.
I've just started using VueJS and I'm really liking it! :) I would like to save the values in the query string to a VueJS variable - this is something super simple in Handlebars + Express, but it seems more difficult in Vue.
Essentially I am looking for something similar to -
http://localhost:8080/?url=http%3A%2F%2Fwww.fake.co.uk&device=all
const app = new Vue({
    ...
    data: {
        url: req.body.url,
        device: req.body.device
    }
    ...
});
Google seemed to point me to vue-router, but I'm not sure if that's really what I need/how to use it. I'm currently using express to handle my backend logic/routes.
Thanks,
Ollie
You can either put all your parameters in the hash of the URL, e.g.:
window.location.hash = 'your data here, which you will have to parse yourself';
and it will change your URL - the part after the #.
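Reading it back out is then up to you; a minimal sketch, assuming a plain string was stored:

// Strip the leading "#" and decode whatever was stored there
var stored = decodeURIComponent(window.location.hash.slice(1));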
Or, if you insist on putting them as query parameters (what goes after the ?), use one of the solutions from Change URL parameters.
You can use URLSearchParams and this polyfill to ensure that it will work on most web browsers.
// Assuming "?post=1234&action=edit"
var urlParams = new URLSearchParams(window.location.search);
console.log(urlParams.has('post'));      // true
console.log(urlParams.get('action'));    // "edit"
console.log(urlParams.getAll('action')); // ["edit"]
console.log(urlParams.toString());       // "post=1234&action=edit" (no leading "?")
urlParams.append('active', '1');         // modifies urlParams in place
console.log(urlParams.toString());       // "post=1234&action=edit&active=1"
Source:
https://davidwalsh.name/query-string-javascript
URLSearchParams
https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams
https://github.com/WebReflection/url-search-params/blob/master/build/url-search-params.js
See also:
https://stackoverflow.com/a/12151322/194717
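Tying this back to the question, a minimal sketch (assuming the url and device parameters from the example URL) could look like:

var params = new URLSearchParams(window.location.search);

const app = new Vue({
    data: {
        url: params.get('url'),       // e.g. "http://www.fake.co.uk"
        device: params.get('device')  // e.g. "all"
    }
});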
I'm trying to pull images into a React List module using a JSON file and can't figure out what I'm doing wrong.
This FIDDLE is supposed to grab two images from my server.
Code:
var Playlist = React.createClass({
    render() {
        var playlistImages = [];
        $.getJSON('http://k1r.com/json/playlist_tn.json', function(data) {
            playlistImages = data;
        });
        return (
            <List list={playlistImages.images} />
        );
    }
});
UPDATED FIDDLE
I'm not sure you can use modules directly in JSFiddle, but apart from that the main issue is that you are fetching some asynchronous data directly in your render method and React isn't going to wait on that to finish before rendering your List.
The suggested approach (via the docs: https://facebook.github.io/react/tips/initial-ajax.html) is to make your data request inside the componentDidMount or componentWillMount lifecycle methods, then use setState() to trigger a re-render when the data has been received, which should then correctly render your List.
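A minimal sketch of that pattern, reusing the jQuery call and endpoint from the question (and binding this so setState works inside the callback):

var Playlist = React.createClass({
    getInitialState: function() {
        return { images: [] };
    },
    componentDidMount: function() {
        $.getJSON('http://k1r.com/json/playlist_tn.json', function(data) {
            // Triggers a re-render once the data has arrived
            this.setState({ images: data.images });
        }.bind(this));
    },
    render: function() {
        return (
            <List list={this.state.images} />
        );
    }
});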
I have to obtain a JSON object that is embedded inside a script tag on a certain page, so I can't use regular scraping techniques like cheerio.
The easy way out: write the file (download the page) to the server and then read it, using string manipulation to extract the JSON objects (there are several), work on them, and save them to my DB happily.
The thing is that I'm too new to Node.js and can't get the code to work. I think I'm trying to read the file before it is fully written, and if I read it too soon I get [Object Object]...
Here's what I have so far...
var http = require('http');
var fs = require('fs');
var request = require('request');

var localFile = 'tmp/scraped_site_.html';
var url = "siteToBeScraped.com/?searchTerm=foobar";

// writing
var file = fs.createWriteStream(localFile);

var request = http.get(url, function(response) {
    response.pipe(file);
});

// reading
var readedInfo = fs.readFileSync(localFile, function (err, content) {
    callback(url, localFile);
    console.log("READING: " + localFile);
    console.log(err);
});
So first of all I think you should understand what went wrong.
The HTTP request operation is asynchronous. This means that the callback code in http.get() will run sometime in the future, but fs.readFileSync, due to its synchronous nature, will execute and complete before the HTTP request has even actually been sent, since both are invoked in what is commonly known as the same tick. Also, fs.readFileSync returns a value and does not take a callback.
Even if you replace fs.readFileSync with fs.readFile, the code still might not work properly, since the readFile operation might execute before the HTTP response has been fully read from the socket and written to disk.
I strongly suggest reading this Stack Overflow question and/or Understanding the node.js event loop.
The correct place to invoke the file read is when the response stream has finished writing to the file, which would look something like this:
var request = http.get(url, function(response) {
    response.pipe(file);

    file.once('finish', function () {
        fs.readFile(localFile, /* fill encoding here */, function(err, data) {
            // do something with the data if there is no error
        });
    });
});
Of course this is a very raw and not recommended way to write asynchronous code but that is another discussion altogether.
Having said that, if you download a file, write it to disk, and then read it all back into memory for manipulation, you may as well forgo the file part and just read the response into a string right away. Your code would then look something like this (it can be implemented in several ways):
var request = http.get(url, function(response) {
    var data = '';

    function read() {
        var chunk;
        while (chunk = response.read()) {
            data += chunk;
        }
    }

    response.on('readable', read);
    response.on('end', function () {
        console.log('[%s]', data);
    });
});
What you really should do, IMO, is create a transform stream that strips away just the data you need from the response while not consuming too much memory, yielding this more elegant-looking code:
var request = http.get(url, function(response) {
    response.pipe(yourTransformStream).pipe(file);
});
Implementing this transform stream, however, might prove slightly more complex. So if you're a node beginner and you don't plan on downloading big files or lots of small files, then maybe loading the whole thing into memory and doing string manipulations on it might be simpler.
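For illustration only, here is a bare-bones transform stream (assuming a reasonably recent Node version with the simplified stream constructors); a real one would need to buffer across chunk boundaries while searching for the script tags:

var Transform = require('stream').Transform;

var yourTransformStream = new Transform({
    transform: function(chunk, encoding, callback) {
        // Pass the chunk through untouched; the real filtering logic goes here
        this.push(chunk);
        callback();
    }
});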
For further information about transformation streams:
node.js stream api
this wonderful guide by substack
this post from strongloop
Lastly, see if you can use any of the million node.js crawlers already out there :-) Take a look at these search results on npm.
According to the http module documentation, 'get' does not return the response body directly.
This is modified from the request example on the same page
What you need to do is process the response within the callback (function) passed into http.request, so it can be called when it is ready (async).
var http = require('http');
var fs = require('fs');

var localFile = 'tmp/scraped_site_.html';
var file = fs.createWriteStream(localFile);

var req = http.request('http://www.google.com.au', function(res) {
    res.pipe(file);
    res.on('end', function() {
        file.end();
        fs.readFile(localFile, function(err, buf) {
            console.log(buf.toString());
        });
    });
});

req.on('error', function(e) {
    console.log('problem with request: ' + e.message);
});

req.end();
EDIT
I updated the example to read the file after it is created. This works by having a callback on the end event of the response, which closes the pipe, after which the file can be reopened for reading. Alternatively you can use
res.on('data', function(chunk){ ... })
to process the data as it arrives, without putting it into a temporary file.
My impression is that you are serializing a JS object into JSON by reading it from a stream that's downloading a file containing HTML. This is doable, yet hard. It's difficult to know when your search expression has been found, because if you parse the chunks as they come in, you never know whether you received only part of the match: what you're looking for could be split into two or more chunks that are never analyzed as a whole.
You could try something like this:
http.request('u/r/l', function(res) {
    res.on('data', function(data) {
        // parse data as it comes in
    });
}).end();
This allows you to read the data as it comes in. You can handle it to save to disk or a DB, or even parse it, if you accumulate the contents of the script tags into a single string and then parse the objects out of that.
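A rough sketch of that accumulate-then-parse idea (the URL and the regular expression here are placeholder assumptions about how the JSON is embedded in the page):

var http = require('http');

http.request('http://example.com/page', function(res) {
    var html = '';
    res.on('data', function(chunk) {
        html += chunk; // accumulate the whole page first
    });
    res.on('end', function() {
        // Naive assumption: the JSON sits alone inside one script tag
        var match = html.match(/<script[^>]*>\s*(\{[\s\S]*?\})\s*<\/script>/i);
        if (match) {
            console.log(JSON.parse(match[1]));
        }
    });
}).end();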
I've been looking at the documentation and tutorials for Sencha Architect, and I can't figure this out. What I want is to have a button press post a value to a PHP script on a server, and then retrieve the result from a PHP session variable. From what I've seen, I'm not sure if I can get it to call PHP at all, much less read a session variable.
I realize there may be a few questions in here (connecting the button to a controller/store, calling the script, reading the result), but I don't know enough about Architect to know if they're the correct ones.
EDIT: I think I've got the button connected to a controller, but I'm still not sure how to get it to call the PHP script.
EDIT 2:
I added a BasicFunction to the button, but I can't get it to work. Here's the code:
// Look up the items stack and get a reference to the first form it finds
var form = this.up('formpanel');
var values = form.getValues().getValues()[0];

Ext.Msg.alert('Working', 'Loading...', Ext.emptyfn);

Ext.Ajax.request({
    url: 'http://wereani.ml/shorten-app.php',
    method: 'POST',
    params: {
        url: values
    },
    success: function(response) {
        Ext.Msg.alert('Link Shortened', Ext.JSON.decode(response).toString(), function() {
            form.reset();
        });
    },
    failure: function(response) {
        Ext.Msg.alert('Error', Ext.JSON.decode(response).toString(), function() {
            form.reset();
        });
    }
});
Also, is that the correct way to get the value from the field (itemID:url)? I couldn't find anything in the documentation for Touch about that.
Use an Ext.Ajax request in the listener for the button. docs.sencha.com/touch/2.2.1/?mobile=/api/Ext.Ajax.
The documentation there is pretty straightforward. If you have trouble please post some specifics and I'll try to write you an example.
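For what it's worth, a minimal sketch of such a request (note that Ext.Ajax hands your callbacks a response object, so the body to decode is response.responseText rather than the response itself, which the question's code decodes):

Ext.Ajax.request({
    url: 'http://wereani.ml/shorten-app.php',
    method: 'POST',
    params: {
        url: 'http://example.com' // value to shorten; read from your form in practice
    },
    success: function(response) {
        var data = Ext.JSON.decode(response.responseText);
        console.log(data);
    },
    failure: function(response) {
        console.log('Server error: ' + response.status);
    }
});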
Good luck, Brad