I have a full XML dump of wikitravel.org. Now I want to get the image URLs from markup like this:
[[Image:Iwamotoji PilgrimGirl.JPG|thumb|print=full|Pilgrim traveling on foot, [[Kubokawa]]]]
There used to be an API for this, but it seems it has been disabled.
While doing some research, I found that the URLs do not follow one obvious pattern. For example:
- Chicago Main image
- Chicago city Bus
Can you please let me know how I can get the actual URLs (thumbnail as well as full size) of the images in an article?
Pretty much all the content of Wikitravel has been forked over to Wikivoyage, which does have a functional API. So you could just query the Wikivoyage API instead.
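Wikivoyage runs standard MediaWiki, so the usual query API should work; as a sketch (the page title and thumbnail width here are placeholders):
https://en.wikivoyage.org/w/api.php?action=query&titles=Chicago&prop=images&format=json
https://en.wikivoyage.org/w/api.php?action=query&titles=File:Iwamotoji%20PilgrimGirl.JPG&prop=imageinfo&iiprop=url&iiurlwidth=200&format=json
The first call lists the files used on a page; the second resolves a file to its full URL plus a 200px-wide thumbnail URL (returned as thumburl in the imageinfo block).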
After spending some more time on research, I found the following algorithm:
$base_url = "http://wikitravel.org/upload/shared/";
$image_name = "XYZ 123.JPG";
$image_name = str_replace(" ","_",$image_name);
$md5 = md5($image_name); // MD5 hash of the image
$dir = substr($md5, 0,1).'/'.substr($md5, 0,2);
$image_url = $base_url . $dir . $image_name;
Source: "What are the strangely named components in file paths?" from the Wikimedia Commons FAQ.
I hope this helps others.
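To feed that algorithm, you first need the image names out of the dump's wiki markup. A minimal sketch (a hypothetical helper; the regex only covers the simple [[Image:...]] form shown in the question):
// Collect image file names from [[Image:...|...]] markup in $wikitext
preg_match_all('/\[\[Image:([^|\]]+)/', $wikitext, $matches);
$image_names = array_map('trim', $matches[1]); // e.g. ["Iwamotoji PilgrimGirl.JPG"]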
I want to replicate the following image, which shows per pixel (see the legend) how many images are available in an ee.ImageCollection.
https://i.stack.imgur.com/BgoMR.png [1]
Thanks in advance for any help!
References:
[1] Mahdianpari, M., Salehi, B., Mohammadimanesh, F., Brisco, B., Homayouni, S., Gill, E., DeLancey, E. R., & Bourgeau-Chavez, L. (2020). Big Data for a Big Country: The First Generation of Canadian Wetland Inventory Map at a Spatial Resolution of 10-m Using Sentinel-1 and Sentinel-2 Data on the Google Earth Engine Cloud Computing Platform. Canadian Journal of Remote Sensing, 46(1), 15-33. DOI: 10.1080/07038992.2019.1711366
If you use the geemap Python package, you can use the geemap.image_count function, and the Map.add_colorbar method to add a colorbar. If you use the JavaScript Code Editor, you can use the code below, which is derived and modified from the geemap repo; a colorbar sketch follows after the code:
// Acquire the Sentinel-2 surface reflectance image collection
var collection = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED');
// Get one band's name; counting a single band keeps one count per pixel
var band = collection.first().bandNames().get(0);
// Generate the desired image, where each pixel value is the number of
// images in the collection covering that pixel
// (geometry is assumed to be defined, e.g. drawn in the Code Editor)
var image = collection.filterBounds(geometry)
    .filterDate('2017-01-01', '2020-04-15')
    .filter(ee.Filter.listContains('system:band_names', band))
    .select([band])
    .reduce(ee.Reducer.count())
    .clip(geometry);
var vis = {min: 0, max: 900, palette: ['00FFFF', '0000FF']};
Map.addLayer(image, vis, 'Image count');
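For the legend, here is a minimal colorbar sketch for the Code Editor using the common ui.Thumbnail gradient pattern (an assumption-laden sketch, not the geemap colorbar itself):
// Render a horizontal gradient by stretching longitude over [0, 1]
var colorBar = ui.Thumbnail({
  image: ee.Image.pixelLonLat().select(0),
  params: {bbox: '0,0,1,0.1', dimensions: '100x10', format: 'png',
           min: 0, max: 1, palette: vis.palette},
  style: {stretch: 'horizontal', margin: '0px 8px'}
});
// Min/max labels under the bar
var labels = ui.Panel({
  widgets: [
    ui.Label(String(vis.min), {margin: '4px 8px'}),
    ui.Label(String(vis.max), {margin: '4px 8px', textAlign: 'right', stretch: 'horizontal'})
  ],
  layout: ui.Panel.Layout.flow('horizontal')
});
var legend = ui.Panel([ui.Label('Image count'), colorBar, labels], null, {position: 'bottom-left'});
Map.add(legend);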
Following a file upload, I have an object in my database that looks like this:
a:1:{s:4:"file";a:3:{s:7:"success";b:1;s:8:"file_url";a:2:{i:0;s:75:"http://landlordsplaces.com/wp-content/uploads/2021/01/23192643-threepersons.jpg";i:1;s:103:"http://landlordsplaces.com/wp-content/uploads/2021/01/364223-two-female-stick-figures.jpg";}s:9:"file_path";a:2:{i:0;s:93:"/var/www/vhosts/landlordsplaces.com/httpdocs/wp-content/uploads/2021/01/23192643-threepersons.jpg";i:1;s:121:"/var/www/vhosts/landlordsangel.com/httpdocs/wp-content/uploads/2021/01/364223-two-female-stick-figures.jpg";}}}
I am trying, with no success, to extract the two JPG URLs programmatically from the object so I can show the images on the site. I tried parse(object), but that isn't helping. I just need to get the URLs out.
Thank you in advance for any general direction.
What you're looking at is not JSON; it is a serialized PHP value (an array, in this case). If this database entry was created by Forminator, you should use the Forminator API to retrieve the needed form entry. The aforementioned link points to the get_entry method, which I suspect is what you're looking for (I have never used Forminator), but in any case, you should look for a method that returns that database entry as a PHP structure containing your needed URLs.
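If you only need the URLs and are reading the raw row yourself, a minimal sketch using PHP's own unserialize (assuming $raw holds the exact string shown in the question):
$data = unserialize($raw); // serialized PHP value, not JSON
$urls = $data['file']['file_url']; // numeric array holding the two upload URLs
foreach ($urls as $url) {
    echo $url, "\n";
}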
In case it is ever of any help to anyone: the answer to the question was based on John's input. The API has the classes to handle that without needing to understand the data structure.
Forminator_API::initialize();
$form_id = 1449; // ID of a form
$entry_id = 3; // ID of an entry
$entry = Forminator_API::get_entry( $form_id, $entry_id );
$file_url = $entry->meta_data['upload-1']['value']['file']['file_url'];
$file_path = $entry->meta_data['upload-1']['value']['file']['file_path'];
var_dump($entry); // contains paths and URLs
Hope someone benefits.
I am working on a page which is going to present 20 products. I would like to avoid using any DB (the page is going to be simple), so I am thinking about storing the products' data in a [globals] array. The catch is that each product description is quite long, between 500 and 1000 words, and formatted, which makes this complicated. I am wondering if it is possible to manage such long texts in the Fat-Free Framework with something similar to PHP's nowdoc syntax (http://www.php.net/manual/en/language.types.string.php#language.types.string.syntax.nowdoc).
Do you have any other ideas for storing long texts in arrays in F3?
Thanks in advance
Macrin
The user guide has an example of a very long string:
[globals]
str="this is a \
very long \
string"
Personally, I would keep each product's description (with any other info, like photo URL or price) in a separate text file in a dedicated directory (let's say products). Then, in index.php or any other route handler, I would scan this directory and load the descriptions:
$productsDir = __DIR__ . '/products'; // DirectoryIterator expects a path string
$productsInfo = [];
foreach (new DirectoryIterator($productsDir) as $fileInfo) {
    if ($fileInfo->isDot()) continue;
    $productsInfo[] = file_get_contents($fileInfo->getPathname());
}
var_dump($productsInfo);
You can use the Jig database and its data mapper.
https://fatfreeframework.com/3.6/jig-mapper
It can store your product items in plain .json files and you also get some basic CRUD and search functionality. You can also hook in Cortex later, if you ever want to upgrade to a real DB.
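A minimal Jig sketch (the file names and fields here are made up for illustration):
$db = new \DB\Jig('data/', \DB\Jig::FORMAT_JSON); // stores plain .json files under data/
$product = new \DB\Jig\Mapper($db, 'products');
$product->name = 'Sample product';
$product->description = 'A long, formatted description...';
$product->save();

$all = $product->find(); // basic CRUD/search comes with the mapper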
G'day Everyone,
I am looking for a raster layer of human population/habitation in Australia. I have tried finding some free datasets online but couldn't really find anything in a useful format. I thought it might be interesting to try to scrape population data from Wikipedia and make my own raster layer. To this end I have tried getting the info from Wikipedia, but not knowing anything about HTML has not helped me.
The idea is to supply a list of all the towns in Australia that have wiki pages and extract the appropriate data into a data.frame.
I can get the webpage source data into R, but I am stuck on how to extract the particular data that I want. The code below shows where I am stuck; any help, or some hints in the right direction, would be really appreciated.
I thought I might be able to use readHTMLTable() because, in the normal webpage, the info I want is off to the right in a nice table. But when I use this function I get an error (below). Is there any way I can specify this table when I am getting the source info?
Sorry if this question doesn't make much sense, I don't have any idea what I am doing when it comes to searching HTML files.
Thanks for your help, it is greatly appreciated!
Cheers,
Adam
require(XML) # htmlParse() and readHTMLTable() come from the XML package, not RJSONIO

loc.names <- data.frame(town = c('Sale', 'Bendigo'), state = c('Victoria', 'Victoria'))
u <- paste0('http://en.wikipedia.org/wiki/', loc.names$town, ',_', loc.names$state)
res <- lapply(u, htmlParse)
Error when I use readHTMLTable:
tabs <- readHTMLTable(res[1])
Error in (function (classes, fdef, mtable) :
unable to find an inherited method for function ‘readHTMLTable’ for signature ‘"list"’
For instance, some of the data I need looks like this in the HTML source. My question is: how do I pick these locations out of the HTML I have?
/ <span class="geo">-38.100; 147.067
title="Victoria (Australia)">Victoria</a>. It has a population (2011) of 13,186
res is a list, so you need to use res[[1]] rather than res[1] to access its elements.
Using readHTMLTable on these elements will give you all of the tables. The geo info is contained in a table with class="infobox vcard", so you can extract those tables separately and then pass them to readHTMLTable:
require(XML)
lapply(sapply(res, getNodeSet, path = '//*[@class="infobox vcard"]'),
       readHTMLTable)
If you are not familiar with XPath, the selectr package allows you to use CSS selectors, which may be easier:
require(selectr)
> querySelectorAll(res[[1]], "table span .geo")
[[1]]
<span class="geo">-38.100; 147.067</span>
[[2]]
<span class="geo">-38.100; 147.067</span>
I have an entity where one of its members is the actual city and country where it was "created". The user only gives me lat/long coordinates, and I need to use the Google API to reverse geocode the coordinates into a city and country.
My question: what is the best way to fetch that information? Is it safe enough to use a third-party web service in the middle of a record-creation process? What are the common ways to do that?
It's certainly safe enough. The only issue is response time (as you will be calling the remote web service synchronously), but if you're doing this only once per insert, I don't think that would be much of a problem.
The Google Geocoding API will return results in XML format, so you just need to call the web service URL and pull the information you need from the response.
Here's an example reverse geocoding result:
http://maps.googleapis.com/maps/api/geocode/xml?latlng=40.714224,-73.961452&sensor=false
You don't say which language you're using, but I'm assuming PHP. Here's a very basic example of parsing the response and displaying the addresses using SimpleXML:
$lat = 40.714224;
$lng = -73.961452;

$xml = simplexml_load_file("http://maps.googleapis.com/maps/api/geocode/xml?latlng=" . $lat . "," . $lng . "&sensor=false");

foreach ($xml->result as $result) {
    foreach ($result->address_component as $addresscomponent) {
        echo $addresscomponent->long_name . ' ';
    }
    echo '<br />';
}
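Since you specifically want city and country, here is a sketch that filters the first result's address components by type (locality and country are standard type values in the Geocoding API response):
$city = $country = null;
foreach ($xml->result[0]->address_component as $component) {
    foreach ($component->type as $type) {
        if ((string) $type === 'locality') {
            $city = (string) $component->long_name;
        }
        if ((string) $type === 'country') {
            $country = (string) $component->long_name;
        }
    }
}
echo $city . ', ' . $country;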