Here is a snippet from my Realurl configuration:
'postVarSets' => array(
    '_DEFAULT' => array(
        'package' => array(
            '0' => array(
                'GETvar' => 'packageid',
            ),
        ),
    ),
),
What does this code do? Does it retrieve a POST variable called package? And is packageid a variable inside the array?
I'm looking for a path element called package that redirects to a certain page, but I don't quite know how this works.
The whole RealURL configuration is about telling RealURL how to encode/decode URLs. postVarSets is one of the configuration options; it uses a keyword to identify a part of the URL.
In your case it tells RealURL that if the keyword package appears in the URL, the segment that follows it should be set as the GET variable packageid. So the URL...
http://www.example.com/page-uid-1/package/123
...should be equivalent to...
http://www.example.com/index.php?id=1&packageid=123
When running the following script, I can see through the Chrome DevTools console that the API returns several sections such as object, orbit, phys_param, signature, and proto. I want to extract the data in the object, orbit, and phys_param sections into an HTML table containing a subset of specific fields from those sections. For example, I would like a table with the fullname field from the object section, the mean motion field from the orbit section, and the diameter field from the phys_param section. I also noticed that some of the data is inside a key-value list which is itself an element of another list. How can I access these specific pieces of data within the returned lists?
fetch('https://ssd-api.jpl.nasa.gov/sbdb.api?spk=2000001&phys-par=1')
  .then(response => response.json())
  .then(data => { console.log(data) })
If you're using jQuery, you can try the jqPropertyGrid plugin. I believe this will do close to what you're looking for.
You can use the "prettify" version of JSON.stringify() as in
JSON.stringify(d,null,2)
to create a human readable version of the data. Then you can build the table like this:
fetch('https://ssd-api.jpl.nasa.gov/sbdb.api?spk=2000001&phys-par=1')
  .then(response => response.json())
  .then(d => {
    d.d = {}; // add a property `d` as a sub-object with addressable names:
    d.phys_par.concat(d.orbit.elements).forEach(e =>
      d.d[e.title.replaceAll(" ", "")] = [e.title, e.value, e.sigma, e.units]);
    document.querySelector("#out").innerHTML =
        "<h2>" + d.object.fullname + "</h2>"
      + "<table><tr><th></th><th>value</th><th>sigma</th><th>units</th></tr>"
      + "<tr><td>" + d.d.diameter.join("</td><td>") + "</td></tr>"
      + "<tr><td>" + d.d.meanmotion.join("</td><td>") + "</td></tr>"
      + "</table>"
      // + JSON.stringify(d, null, 2) // uncomment this line to show the whole data structure
  });
#out {white-space:pre}
<div id="out"></div>
I'm going to build a CakePHP application that works with a mobile application. The mobile app will query the CakePHP app to check whether certain vouchers are valid. I need to change some of my views so that their output is in JSON format, so it can be parsed easily by the mobile application.
To be specific, if the mobile app calls example.com/vouchers/check/1234, Cake should return something like {"validity":"valid"} or {"validity":"invalid"} as the response, which is the result of checking the validity of the voucher with id 1234.
Basically, you should use extensions when you expect non-HTML responses (JSON in this case).
So request
/vouchers/check/1234.json
and use the JsonView as per docs and as per ajax-and-cakephp tutorial.
To sum it up:
Use this to enable the json extension:
Router::parseExtensions();
Router::setExtensions(array('json', ...));
Don't forget to include the RequestHandler component in the controller's $components list.
Add this to your action:
$data = array(
'validity' => ...,
);
$this->set(compact('data')); // Pass $data to the view
$this->set('_serialize', 'data'); // Let the JsonView class know what variable to use
I'm trying to output some JSON strings to be parsed by the RestKit iPhone library, but CakePHP is producing an incompatible string. For example, the string below is what it currently outputs:
1. {"Question":{"id":"1","content":"Test","player_id":"1","points":"0","votes":"0","created":"0000-00-00 00:00:00"},"Player":{"id":"1","username":"player_test"}}
I need to have something like:
2. {"Question":{"id":"1","content":"Test","player_id":"1","points":"0","votes":"0","created":"0000-00-00 00:00:00","Player":{"id":"1","username":"player_test"}}}
Note that the Player response should be part of Question.
The way the models are set up in Cake, 'Question' belongsTo 'Player', and 'Player' hasMany 'Question'.
I am looking for the proper way of telling Cake to output something like the response #2 above. Any suggestions?
You can use the afterFind() callback of your Question model to nest the Player record inside the Question record, or modify the results array as required after fetching. The various functions of the Hash class might help you reformat the array.
You can add a custom method to your Question model that returns the result in the desired format. This will keep your code clean and keep the data-processing/formatting logic in your Model (where it should be in most cases).
For example, inside your Question model:
public function getQuestionsForPlayer($playerId)
{
    $results = $this->find('all', array(
        'fields' => array(/* fields */),
        'conditions' => array(/* ..... */),
        /* etc */
    ));

    // Process your result to be in the right format.
    // Hash::extract() and other Hash:: methods
    // may be helpful here, as ADmad mentioned.
    return $processedResult;
}
As ADmad mentioned, the Hash utility may be helpful. Documentation is located here:
http://book.cakephp.org/2.0/en/core-utility-libraries/hash.html
I'm trying to grab the href value in <a> HTML tags using Nokogiri.
I want to identify whether they are a path, file, URL, or even a <div> id.
My current work is:
hrefvalue = []
html.css('a').each do |atag|
hrefvalue << atag['href']
end
The possible values in a href might be:
somefile.html
http://www.someurl.com/somepath/somepath
/some/path/here
#previous
Is there a mechanism to identify whether the value is a valid full URL, or file, or path or others?
Try URI:
require 'uri'
URI.parse('somefile.html').path
=> "somefile.html"
URI.parse('http://www.someurl.com/somepath/somepath').path
=> "/somepath/somepath"
URI.parse('/some/path/here').path
=> "/some/path/here"
URI.parse('#previous').path
=> ""
Nokogiri is often used with ruby's URI or open-uri, so if that's the case in your situation you'll have access to its methods. You can use that to attempt to parse the URI (using URI.parse). You can also generally use URI.join(base_uri, retrieved_href) to construct the full url, provided you've stored the base_uri.
(Edit/side-note: further details on using URI.join are available here: https://stackoverflow.com/a/4864170/624590; note that URI.join takes strings as parameters, not URI objects, so coerce where necessary.)
Basically, to answer your question
Is there a mechanism to identify whether the value is a valid full
url, or file, or path or others?
If the retrieved_href and the base_uri are well formed, and retrieved_href == the joined pair, then it's an absolute path. Otherwise it's relative (again, assuming well formed inputs).
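For example, a small sketch of that check (the base URL here is a made-up example, matching the sample strings from the question):

```ruby
require 'uri'

base = 'http://www.someurl.com/index.html'

# An absolute URL survives the join unchanged...
href = 'http://www.someurl.com/somepath/somepath'
URI.join(base, href).to_s == href                # => true

# ...while relative hrefs get resolved against the base.
URI.join(base, 'somefile.html').to_s             # => "http://www.someurl.com/somefile.html"
URI.join(base, '/some/path/here').to_s           # => "http://www.someurl.com/some/path/here"
```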
If you use URI to parse the href values, then apply some heuristics to the results, you can figure out what you want to know. This is basically what a browser has to do when it's about to send a request for a page or a resource.
Using your sample strings:
%w[
somefile.html
http://www.someurl.com/somepath/somepath
/some/path/here
#previous
].each do |u|
puts URI.parse(u).class
end
Results in:
URI::Generic
URI::HTTP
URI::Generic
URI::Generic
The only one that URI recognizes as a true HTTP URI is "http://www.someurl.com/somepath/somepath". All the others are missing the scheme "http://". (There are many more schemes you could encounter. See the specification for more information.)
Of the generic URIs, you can use some rules to sort through them so you'd know how to react if you have to open them.
If you gathered the HREF strings by scraping a page, you can assume it's safe to use the same scheme and host if the URI in question doesn't supply one. So, if you initially loaded "http://www.someurl.com/index.html", you could use "http://www.someurl.com/" as your basis for further requests.
From there, look inside the strings to determine whether they are anchors, absolute or relative paths. If the string:
Starts with # it's an anchor and would be applied to the current page without any need to reload it.
Doesn't contain a path delimiter /, it's a filename and would be added to the currently retrieved URL, substituting the file name, and retrieved. A nice way to do the substitution is to use File.dirname, File.basename, and File.join against the string.
Begins with a path delimiter it's an absolute path and is used to replace the path in the original URL. URI::split and URI::join are your friends here.
Doesn't begin with a path delimiter, it's a relative path and is added to the current URI similarly to #2.
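The rules above can be sketched as a small helper. (classify_href is a made-up name, and the checks are just the simple heuristics described here, not a full RFC 3986 resolver.)

```ruby
require 'uri'

def classify_href(href)
  return :anchor        if href.start_with?('#')   # rule 1: same-page anchor
  return :absolute_url  if URI.parse(href).scheme  # carries "http://" etc.
  return :absolute_path if href.start_with?('/')   # rule 3: replaces the path
  href.include?('/') ? :relative_path : :filename  # rules 2 and 4
end

classify_href('somefile.html')                             # => :filename
classify_href('http://www.someurl.com/somepath/somepath')  # => :absolute_url
classify_href('/some/path/here')                           # => :absolute_path
classify_href('#previous')                                 # => :anchor
```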
Regarding:
hrefvalue = []
html.css('a').each do |atag|
hrefvalue << atag['href']
end
I'd use this instead:
hrefvalue = html.search('a').map { |a| a['href'] }
But that's just me.
A final note: URI has some problems with age and needs an update. It's a useful library but, for heavy-duty URI rippin' apart, I highly recommend looking into using Addressable/URI.
I am building an RSS feed discovery service by scraping a page URL and finding the <link> tags in the page header. The problem is that some URLs take a really long time to serve the page source, so my code often gets stuck at file_get_contents($url).
Is there a way to do this with a predefined timeout, for example if 10 seconds have passed and there is still no content served then simply drop that URL and move to the next one?
I was thinking of using the maxLen parameter to get only part of the source (<head>..</head>), but I'm not sure whether this would really stop after the received bytes are reached or would still require the full page load. The other issue with this is that I have no idea what value to set here, because every page has different content in the head tag, so sizes vary.
I've just been reading about this, so this is theory only right now.. but..
This is the function definition, notice the resource context part:
string file_get_contents ( string $filename [, bool $use_include_path = false [, resource $context [, int $offset = -1 [, int $maxlen ]]]] )
If you specify the result of a stream_context_create() function and pass it the timeout value in its options array, it just might work:
$opts = array('http' => array('timeout' => 10.0)); // timeout in seconds
$context = stream_context_create($opts);
Or you could create the stream and set its timeout directly:
http://www.php.net/manual/en/function.stream-set-timeout.php
Hope you have some success with it.
Use the 'context' parameter. You can create a stream context by using the 'stream_context_create' function, and specifying in the http context the desired timeout.
$context = stream_context_create(array(
    'http' => array(
        'timeout' => YOUR_TIMEOUT,
    )
));
$content = file_get_contents(SOME_FILE, false, $context);