I wrote a very simple function to pull data from the ESPN API and display it in default/index. However, default/index is a blank page.
At this point I'm not even trying to parse the JSON - I just want to see something in my browser.
default.py:
import urllib2
import json

# espn_uri being pulled from models/db.py
def index():
    r = urllib2.Request(espn_uri)
    opener = urllib2.build_opener()
    f = opener.open(r)
    status = json.load(f)
    return dict(status)
default/index.html:
{{status}}
Thank you!
Try: return dict(status=status)
return dict(status) works because status is itself a dict, and dict(status) just copies it. But it probably has no key named status, or at least nothing interesting.
And yes, you need =.
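For illustration, here is the difference in plain Python (the sample value is made up):
status = {"sports": ["football"]}   # pretend this is what json.load() returned
dict(status)            # just a copy: {'sports': ['football']} - still no 'status' key
dict(status=status)     # {'status': {'sports': ['football']}} - the view can now reference 'status'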
As JLundell advises, first return paired data via the dictionary:
return dict(my_status=status)
Second, as you've worked out, use the following in index.html to access the returned value rather than the local variable. Make sure you use the equals sign here, or nothing will display:
{{=my_status}}
When it comes to JSON, you can return the data using
return my_status.json()
Several other options are available to return data as a list, or to return HTML.
Finally, I recommend that you make use of jQuery and AJAX ($.ajax), so that the AJAX return value can be easily assigned to a JS object. This will also allow you to handle success or errors in the form of JS functions.
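Putting the first two points together, a minimal sketch of the corrected controller would look like this (it assumes, as in the question, that espn_uri is defined in models/db.py):
import urllib2
import json

def index():
    r = urllib2.Request(espn_uri)        # espn_uri comes from models/db.py
    f = urllib2.build_opener().open(r)
    status = json.load(f)
    return dict(my_status=status)        # paired key=value, so the view can see it
together with {{=my_status}} in default/index.html.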
I have an object in my database following a file upload that looks like this:
a:1:{s:4:"file";a:3:{s:7:"success";b:1;s:8:"file_url";a:2:{i:0;s:75:"http://landlordsplaces.com/wp-content/uploads/2021/01/23192643-threepersons.jpg";i:1;s:103:"http://landlordsplaces.com/wp-content/uploads/2021/01/364223-two-female-stick-figures.jpg";}s:9:"file_path";a:2:{i:0;s:93:"/var/www/vhosts/landlordsplaces.com/httpdocs/wp-content/uploads/2021/01/23192643-threepersons.jpg";i:1;s:121:"/var/www/vhosts/landlordsangel.com/httpdocs/wp-content/uploads/2021/01/364223-two-female-stick-figures.jpg";}}}
I am trying, with no success, to parse/extract the two jpg URLs programmatically from the object so I can show the images on the site. I tried assigning parse(object), but that isn't helping. I just need to get the URLs out.
Thank you in anticipation of any general direction.
What you're looking at is not a JSON string. It is a serialized PHP object. If this database entry was created by Forminator, you should use the Forminator API to retrieve the needed form entry. The aforementioned link points to the get_entry method, which I suspect is what you're looking for (I have never used Forminator), but in any case, you should look for a method that will return that database entry as a PHP object containing your needed URLs.
In case it is ever of any help to anyone: the answer to the question was based on John's input. The API has the classes to handle this without needing to understand the data structure.
Forminator_API::initialize();
$form_id = 1449; // ID of a form
$entry_id = 3; // ID of an entry
$entry = Forminator_API::get_entry( $form_id, $entry_id );
$file_url = $entry->meta_data['upload-1']['value']['file']['file_url'];
$file_path = $entry->meta_data['upload-1']['value']['file']['file_path'];
var_dump($entry); //contains paths and urls
Hope someone benefits.
For web2py there are generic views, e.g. for JSON.
I could not find a sample for CSV.
Looking at the web2py manual, chapters 10.1.2 and 10.1.6, it's written:
'.. define a "generic.csv" file, but one would have to specify the name of the object to be serialized ("animals" in the example)'
Looking at the generic.pdf view:
{{
import os
from gluon.contrib.generics import pdf_from_html
filename = '%s/%s.html' % (request.controller,request.function)
if os.path.exists(os.path.join(request.folder,'views',filename)):
    html=response.render(filename)
else:
    html=BODY(BEAUTIFY(response._vars))
    pass
=pdf_from_html(html)
}}
and also the CSV code specified in the manual (chapter 10.1.6):
{{
import cStringIO
stream=cStringIO.StringIO()
animals.export_to_csv_file(stream)
response.headers['Content-Type']='application/vnd.ms-excel'
response.write(stream.getvalue(), escape=False)
}}
Massimo writes: 'web2py does not provide a "generic.csv";'
He is not fully against it, though.
So let's try to build one and deactivate it when necessary.
The generic view should look similar to the following non-working attempt (better called pseudocode, as it does not run):
{{
import os
from gluon.contrib.generics export export_to_csv_file(stream)
filename = '%s/%s' % (request.controller,request.function)
if os.path.exists(os.path.join(request.folder,'views',filename)):
    csv=response.render(filename)
else:
    csv=BODY(BEAUTIFY(response._vars))
    pass
= export_to_csv_file(stream)
}}
What's wrong?
Or is there a sample?
Is there a reason not to have a generic.csv?
Adapting the generic.pdf code as literally as in the attempt above would not work for CSV output, because generic.pdf first executes the standard HTML template and then simply converts the generated HTML to a PDF. That approach does not make sense for CSV, which requires data of a particular structure.
As stated in the documentation:
Notice that one could also define a "generic.csv" file, but one would
have to specify the name of the object to be serialized ("animals" in
the example). This is why we do not provide a "generic.csv" file.
The execution of a view is triggered by a controller action returning a dictionary. The keys of the dictionary become available as variables in the view execution environment (the entire dictionary is also available as response._vars). If you want to create a generic.csv view, you therefore need to establish some conventions about what variables are in the returned dictionary as well as the possible structure(s) of the returned data.
For example, the controller could return something like dict(data=mydata). The code in generic.csv would then access the data variable and could convert it to CSV. In that case, there would have to be some convention about the structure of data -- perhaps it could be required to be a list of dictionaries or a DAL Rows object (or optionally either one).
Another possible convention is for the controller to return something like dict(columns=mycolumns, rows=myrows), where columns is a list of column names and rows is a list of lists containing the data for each row.
The point is, there is no universal convention for what the controller might return and how that can be converted into CSV, so you first need to decide on some conventions and then write generic.csv accordingly.
For example, here is a very simple generic.csv that would work only if the controller returns dict(rows=myrows), where myrows is a DAL Rows object:
{{
import cStringIO
stream=cStringIO.StringIO()
rows.export_to_csv_file(stream)
response.headers['Content-Type']='application/vnd.ms-excel'
response.write(stream.getvalue(), escape=False)
}}
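For completeness, a controller action satisfying that convention could look roughly like this (a sketch; the animals table is taken from the manual's example and is purely illustrative):
def animals_csv():
    rows = db(db.animals).select()   # a DAL Rows object, not .as_list()
    return dict(rows=rows)           # key name 'rows' matches the generic.csv above
Requesting /app/default/animals_csv.csv would then be rendered by generic.csv, provided generic views are enabled for the request via response.generic_patterns.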
I tried:
# Sample from Web2Py manual 10.1.1, page 464
def count():
    session.counter = (session.counter or 0) + 1
    return dict(counter=session.counter, now=request.now)

# and my own creation from a SQL table (if possible, used for both json and csv):
def csv_rt_bat_c_x():
    battdat = db().select(db.csv_rt_bat_c.rec_time, db.csv_rt_bat_c.cellnr,
                          db.csv_rt_bat_c.volt_act, db.csv_rt_bat_c.id).as_list()
    return dict(battdat=battdat)
Both times I get an error when trying CSV. It works for /default/count.json but not for /default/count.csv.
I suppose the requirement:
dict(rows=myrows)
"where myrows is a DAL Rows object" is not met.
I have a list of about 13,000 URLs that I want to extract info from; however, not every URL actually exists. In fact, the majority don't. I have just tried passing all 13,000 URLs through html(), but it takes a long time. I am trying to work out how to check whether the URLs actually exist before passing them to html(). I have tried using httr's GET() function, as well as RCurl's url.exists() function. For some reason url.exists() always returns FALSE even when the URL does exist, and the way I am using GET() always returns a success; I think this is because the page is being redirected.
The following URLs represent the type of pages I am parsing, the first does not exist
urls <- data.frame('site' = 1:3, 'urls' = c('https://www.deakin.edu.au/current-students/unitguides/UnitGuide.php?year=2015&semester=TRI-1&unit=SLE010',
'https://www.deakin.edu.au/current-students/unitguides/UnitGuide.php?year=2015&semester=TRI-2&unit=HMM202',
'https://www.deakin.edu.au/current-students/unitguides/UnitGuide.php?year=2015&semester=TRI-2&unit=SLE339'))
urls$urls <- as.character(urls$urls)
For GET(), the problem is that the second URL doesn't actually exist but it is redirected and therefore returns a "success".
urls$urlExists <- sapply(1:length(urls[,1]),
function(x) ifelse(http_status(GET(urls[x, 'urls']))[[1]] == "success", 1, 0))
For url.exists(), I get three FALSE values returned even though the first and third URLs do exist.
urls$urlExists2 <- sapply(1:length(urls[,1]), function(x) url.exists(urls[x, 'urls']))
I checked these two posts 1, 2, but I would prefer not to use a user agent, simply because I am not sure how to find mine or whether it would change for different people using this code on other computers, which would make the code harder for others to pick up and use. Both posts' answers suggest using GET() in httr. It seems that GET() is probably the preferred method, but I would need to figure out how to deal with the redirection issue.
Can anyone suggest a good way in R to test the existence of a URL before passing it to html()? I would also be happy with any other suggested workaround for this issue.
UPDATE:
After looking into the returned value from GET() I figured out a work around, see answers for details.
With httr, use url_success() and redirect following turned off:
library(httr)
urls <- c(
'https://www.deakin.edu.au/current-students/unitguides/UnitGuide.php?year=2015&semester=TRI-1&unit=SLE010',
'https://www.deakin.edu.au/current-students/unitguides/UnitGuide.php?year=2015&semester=TRI-2&unit=HMM202',
'https://www.deakin.edu.au/current-students/unitguides/UnitGuide.php?year=2015&semester=TRI-2&unit=SLE339'
)
sapply(urls, url_success, config(followlocation = 0L), USE.NAMES = FALSE)
url_success(x) is deprecated; please use !http_error(x) instead.
So, updating the solution from hadley:
library(httr)

urls <- c(
  'https://www.deakin.edu.au/current-students/unitguides/UnitGuide.php?year=2015&semester=TRI-1&unit=SLE010',
  'https://www.deakin.edu.au/current-students/unitguides/UnitGuide.php?year=2015&semester=TRI-2&unit=HMM202',
  'https://www.deakin.edu.au/current-students/unitguides/UnitGuide.php?year=2015&semester=TRI-2&unit=SLE339'
)

!sapply(urls, http_error)
After a suggestion from @TimBiegeleisen I looked at what was returned from the function GET(). It seems that if the URL exists, GET() will return this URL as a value, but if it is redirected, a different URL is returned. I just changed the code to check whether the URL returned by GET() matched the one I submitted.
urls$urlExists <- sapply(1:length(urls[,1]), function(x) ifelse(GET(urls[x, 'urls'])[[1]] == urls[x,'urls'], 1, 0))
I would be interested in learning about any better methods that people use for the same thing.
I'm creating a small REST API in Bottle.
I only want to return JSON via the response, and although I could use the error decorator @error(status_code) for every HTTP status code there is so that it outputs JSON, I find this quite lengthy and not very practical.
Does anybody know a better way to do this?
OK, so I think I have managed to find a way to do this, although I don't know how efficient it is in terms of memory or program design:
import bottle
import json

app = bottle.default_app()  # or bottle.app()

for http_code in bottle.HTTP_CODES:
    @app.error(http_code)
    def json_error(error):
        bottle.response.default_content_type = "application/json"
        return json.dumps(dict(error=error.status_code, message=error.status_line))
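As a quick sanity check, here is a minimal usage sketch (the route name is made up): any abort() or unmatched route should now come back as JSON.
from bottle import abort, route, run

@route("/boom")              # hypothetical route, only there to trigger an error
def boom():
    abort(404, "Not found")  # handled by the JSON error handler registered above

# run(app, host="localhost", port=8080)
# GET /boom should then return something like {"error": 404, "message": "404 Not Found"}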
So I'm trying to make a call to a Bible verse API in my Meteor application. I made a template with name="display", with a simple {{checkitout}} in the template.
Then, for the template, I tried to make the call in its corresponding helper. It looks like this (in CoffeeScript, but JavaScript readers should be able to follow as well):
Template.display.helpers
  checkitout: ->
    result = Meteor.http.call("GET", "http://labs.bible.org/api/passage=john%203:2&type=json")
    console.log(result)
The URL returns a JSON version of a Bible verse, but the problem is that Meteor.http.call requires a third argument, a "callback" (because this is in the client folder). I read some documentation and examples and have no idea what it means.
Also, if I call it like this, is result exactly the JSON file, or do I need to fit it within a new hash? And what does a callback mean? Can someone give me an example?
As helpers are synchronous and API calls are not, you need to store the call result in a reactive variable and return it from the helper:
verse = "Loading..."
verseLoaded = false
verseDep = new Deps.Dependency()
Template.Display.checkItOut = ->
verseDep.depend()
unless verseLoaded
verseLoaded = true
Meteor.http.get "...", (error, result) ->
verse = "..."
verseDep.changed()
verse
On the client, the callback is required as you said. So this is something you could do to query an API and display the JSON result:
Template.Display.helpers
  checkItOut: ->
    Meteor.http.get 'https://graph.facebook.com/facebook', (error, result) ->
      if not error
        console.log result # display the open graph result
Note 1: To use these functions, you need to add the HTTP package to your project with $ meteor add http. You can find further information in the documentation.
Note 2: In your situation, you cannot make an API call client-side due to the Access-Control-Allow-Origin Policy. So, the solution would be to use a method and make the call server-side.
# Client-side
Template.Display.helpers
  checkItOut: ->
    Meteor.call 'getBibleText', (error, result) ->
      if not error
        console.log result

# Server-side (server directory)
Meteor.methods
  'getBibleText': ->
    result = HTTP.get 'http://labs.bible.org/api/?passage=john%203:2&type=xml'
    return result