gis:find-features
I haven't understood how I'm supposed to use this primitive. I have read that the syntax is:
gis:find-features VectorDataset property-name specified-value
I don't know what "property-name" and "specified-value" are. Do I define them in GIS?
I only have the .shp file with the map. I need to use "gis:find-features" because I found it in code similar to the simulation I need to build.
Thanks a lot for your help!!!
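In the NetLogo GIS extension, the property names come from the shapefile's attribute table (the .dbf file that ships alongside the .shp): property-name is one of its column names, and specified-value is the value to match in that column. A minimal sketch, assuming a hypothetical roads.shp with a NAME column:
extensions [ gis ]
globals [ roads-dataset ]

to setup
  ;; "roads.shp" and the "NAME" property are hypothetical examples;
  ;; gis:property-names lists the columns your .dbf actually contains
  set roads-dataset gis:load-dataset "roads.shp"
  show gis:property-names roads-dataset
  ;; reports all VectorFeatures whose NAME property matches "Main Street"
  show gis:find-features roads-dataset "NAME" "Main Street"
end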
I am trying to display a diagram created with the draw.io editor and saved as an XML file. To parse and render the diagram's XML I use the mxGraph library, which displays it correctly except for the encoded images. I mean this part of the XML:
<mxCell style="vsdxID=65;fillColor=none;gradientColor=none;image;aspect=fixed;image=data:image/jpg,/9j/4AAQSkZJRgABAQAAAQABAA...
All other aspects are handled fine (shapes, colors, lines, ...), but the data:image/jpeg part is simply ignored. I don't get any errors in the console, and no broken img tags or anything similar are generated.
What/where could be the problem?
Marco
OK, I got it by myself... thanks to Colin's tip!
The solution is to change
figure;
to
shape=figure;
in the cell's style, and the picture is rendered as expected :-)
It is probably a bug in the draw.io import/export functionality.
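For illustration, assuming the affected cell's style carried the bare figure; token (that part isn't shown in the snippet above), the change looks roughly like this:
<!-- before: the shape name appears as a bare token -->
<mxCell style="figure;fillColor=none;aspect=fixed;image=data:image/jpg,..." />
<!-- after: prefixed with the shape= key, the picture renders -->
<mxCell style="shape=figure;fillColor=none;aspect=fixed;image=data:image/jpg,..." />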
I'm trying to extract a specific link from a web page using the Windows command line and related tools. I think Xidel can do what I want.
In the page, the link is used like this:
file: 'http://link.link/index.txt'
Note: there's only one line like this. Now if I can set something like
file: '{%link}'
then I'll be able to extract the link. Also, if I want to change the word index.txt to something like root.txt and then use aria2 to download the link as http://link.link/root.txt , what do I need to do?
(I don't have any experience with any of these tools or command-line scripts; I just wanted to make something that does this and only this. Some alternatives are already available, but I want to do it myself. So I searched around and have an idea of how to do it, but extracting the exact URL seems to be the hardest part, since I couldn't find anything that might help me in Xidel's docs.)
Xidel is meant to extract data from HTML/XML/JSON files, but it can also extract from CSVs and TXT files if you know how to use the $raw variable and Xidel/XQuery functions like extract(), tokenize(), and replace().
Post the URL or the source (or part thereof) of the webpage and I'll see how I can help you.
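Since the actual page was never posted, here is only a rough sketch of the same idea scripted in Python instead of a Xidel one-liner (the page URL is a placeholder, it assumes the file: '...' line appears verbatim in the raw source, and that aria2c is on the PATH):
import re
import subprocess
import urllib.request

# placeholder URL; the real page was never posted in the thread
page = urllib.request.urlopen("http://link.link/page").read().decode("utf-8", "replace")

# grab whatever sits between the quotes of the single  file: '...'  line
match = re.search(r"file: '(.+?)'", page)
if match:
    url = match.group(1).replace("index.txt", "root.txt")  # rewrite the filename
    subprocess.run(["aria2c", url])  # hand the rewritten link to aria2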
Does anyone know where to get the file format spec for the USGS IMG files that they are using now for NED (DEM) data? I would like to write code to read them directly.
It turns out the "GridFloat" format is much more straightforward to use - I recommend it for "roll your own" terrain data coders!
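For anyone following the same route, a minimal sketch of a GridFloat reader (assuming the usual .hdr/.flt pair; the file names are hypothetical):
import numpy as np

def read_gridfloat(basename):
    # the .hdr side is plain text: "ncols 10812", "nrows 10812",
    # "xllcorner ...", "cellsize ...", "NODATA_value -9999", "byteorder LSBFIRST"
    hdr = {}
    with open(basename + ".hdr") as f:
        for line in f:
            key, value = line.split()
            hdr[key.lower()] = value
    # the .flt side is a bare row-major array of 32-bit floats
    dtype = "<f4" if hdr.get("byteorder", "LSBFIRST").upper().startswith("LSB") else ">f4"
    grid = np.fromfile(basename + ".flt", dtype=dtype)
    return grid.reshape(int(hdr["nrows"]), int(hdr["ncols"])), hdr

# elevations, hdr = read_gridfloat("ned_tile")  # hypothetical tile name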
I want to convert HTML Transitional code into an "MS-Word readable" format... PDF would also do the job.
The converter should be a standalone program which I can invoke from the console...
P.S.: The input is created by TinyMCE and afterwards stored in an Oracle DB
P.P.S.: It should be able to understand CSS for div-positioning
P.P.P.S: It should be Open Source :)
Thank you :)
Looks like you are looking for something like wkhtmltopdf.
Here is a guy who blogged about his integration of that tool. It can convert HTML to PDF with CSS support: http://beebole.com/en/blog/general/convert-html-to-pdf-with-full-css-support-an-opensource-alternative-based-on-webkit/
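For the record, the basic console invocation just takes an input and an output path (the file names here are made up):
wkhtmltopdf page.html page.pdf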
I'm trying to find links in the following format:
http://subdomain.subdomain.domain.tld/subfolder/randomstring.html
Basically, I need a regex that looks for http:// and stops looking when it finds .html. Everything in between shouldn't matter, i.e., more or fewer subdomains, a variable TLD, and a variable folder.
Is this possible? What I've got so far (not functional) is this:
((http://)?=(.html))
I'm really not familiar with the look-ahead assertion, so I might be on the wrong track.
Anyway, any help will be greatly appreciated!
Look-ahead? You only need a non-greedy "match everything":
/http:\/\/.*?\.html/
I would use something like: /http:\/\/[^<>\s]+?\.html/
It can be enhanced, but at least it won't match stuff like:
http://something.com/ has a lot of .html files
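To see the difference between the two suggestions, a quick check in Python (the sample text is invented):
import re

text = ("see http://a.b.example.com/folder/x1y2z3.html and "
        "http://something.com/ has a lot of .html files")

# non-greedy match-everything: stops at the first .html after each http://
loose = re.findall(r"http://.*?\.html", text)
# excluding whitespace and angle brackets keeps the match inside one URL
strict = re.findall(r"http://[^<>\s]+?\.html", text)

print(loose)   # second match wrongly swallows the prose up to " .html"
print(strict)  # ['http://a.b.example.com/folder/x1y2z3.html']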