Pagination in ArcGIS WFS request

Is it possible to send a WFS request with pagination? I tried with STARTINDEX but it's not working. I want to fetch features within a certain limit.
E.g.: http://example.com/ArcGIS/services/<mapping service name>/MapServer/WFSServer?VERSION=1.1.0&SERVICE=WFS&REQUEST=GetFeature&TYPENAME=<type name>&STARTINDEX=10&MAXFEATURE=10
Or: how can I fetch only the objectid/featureid with a WFS request, so that I can send a filter with the request?

Well, to answer your last question first: to request an object by featureID, use something like:
http://example.com/geoserver/wfs?
service=wfs&
version=2.0.0&
request=GetFeature&
typeName=namespace:featuretype&
featureID=feature
To do the pagination you use something like:
http://example.com/geoserver/wfs?
service=wfs&
version=2.0.0&
request=GetFeature&
typeName=namespace:featuretype&
startindex=0&
maxFeatures=10
Some notes on this, however. I have only tested against GeoServer, and I know that in version 2.1.x of GeoServer the startindex parameter has no effect on the results; I know it does work in version 2.3.x. If you are using a particular version of GeoServer, I would suggest you look it up. I am not sure what is supported in other applications that serve out WFS; you will need to check with them, but what I have given above is in accordance with the WFS spec.
One final thing: you may want to add some sorting when doing this, e.g.
http://example.com/geoserver/wfs?
service=wfs&
version=2.0.0&
request=GetFeature&
typeName=namespace:featuretype&
startindex=0&
maxFeatures=10&
sortBy=namespace:field
The reason for this is that, without sorting, the data may change between requests as it is updated, meaning you may miss results between pages. That is still possible even with sorting, particularly if you are not sorting on an ID or not using an incremental-style ID field. However, sorting on an ID field will normally ensure consistency in paging.
As you are using ArcGIS, some of that may not be relevant to you. I am pretty sure, however, that the latest versions of ArcGIS support the WFS spec for the startindex field.
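To make the paging loop concrete, here is a minimal sketch of driving those requests from code. It assumes a GeoServer-style WFS 2.0.0 endpoint that can return GeoJSON via outputFormat=application/json; the URL, type name and sort field are placeholders, not values from the question.

// Minimal paging sketch against an assumed GeoServer-style WFS 2.0.0 endpoint.
// The URL, type name and sort field below are placeholders.
const WFS_URL = "http://example.com/geoserver/wfs";

async function fetchPage(startIndex: number, count: number): Promise<any[]> {
  const params = new URLSearchParams({
    service: "wfs",
    version: "2.0.0",
    request: "GetFeature",
    typeNames: "namespace:featuretype",
    startIndex: String(startIndex),
    count: String(count),               // WFS 2.0.0 keyword; 1.1.0 uses maxFeatures
    sortBy: "namespace:field",          // stable sort keeps pages consistent between requests
    outputFormat: "application/json",   // GeoServer extension, not guaranteed on other servers
  });
  const response = await fetch(`${WFS_URL}?${params}`);
  if (!response.ok) throw new Error(`WFS request failed: ${response.status}`);
  const collection = await response.json();
  return collection.features ?? [];
}

// Walk through all pages, 10 features at a time.
async function fetchAll(): Promise<any[]> {
  const all: any[] = [];
  for (let start = 0; ; start += 10) {
    const page = await fetchPage(start, 10);
    all.push(...page);
    if (page.length < 10) break;        // last (partial or empty) page reached
  }
  return all;
}

The stop condition is simply "the server returned fewer features than requested", which works regardless of whether the server reports a total count.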

Related

WMS GetFeatureInfo for feature layers

I'm looking for a way to use the browser to query all WMS features (with all the attributes the features have) by just defining the layer parameter. What parameters do I need to add to get the desired result? I want to request all features the WMS is serving; the output format must be txt, GML or XML.
Something like this...
wms?request=GetFeatureInfo&QUERY_LAYERS=my_layer&info_format=application/vnd.ogc.gml&select_all_features.
It's not possible: a WMS GetFeatureInfo operation queries a single point location (pixel coordinate) in a map image (created through a GetMap operation).
Also, a WMS doesn't serve out features; it serves out images (or videos) that are representations of some input, often, but not always, a vector data set.
What you need is a WFS (or WCS); these are 'download' services that let you get at the actual data.
It is not possible using WMS, but using WFS you can get all the feature info and also apply a limited set of conditions to get the desired results.
For this you need to use the WFS REQUEST=GetFeature operation.
The documentation for WFS GetFeature describes the various options in more detail.
A simple example would be:
GET
http://example.com/geoserver/wfs?
service=wfs&
version=2.0.0&
request=GetFeature&
typeNames=namespace:featuretype&
count=N
GET
http://example.com/geoserver/wfs?
service=wfs&
version=1.1.0&
request=GetFeature&
typeName=namespace:featuretype&
maxFeatures=N
If you want to filter by BBOX, you can use the following, as mentioned in the documentation:
POST
http://example.com/geoserver/wfs?
service=wfs&
version=2.0.0&
request=GetFeature&
typeNames=namespace:featuretype&
srsName=CRS&
bbox=a1,b1,a2,b2
For me it worked with GET too.
Note: if bbox=a1,b1,a2,b2 does not work, try bbox=a1,b1,a2,b2,CRS
For me, even with srsName=CRS&bbox=a1,b1,a2,b2 I got "features": []; with srsName=CRS&bbox=a1,b1,a2,b2,CRS I got results.
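If you prefer to issue the request from code, here is a small sketch of the same BBOX-filtered GetFeature call. It assumes a GeoServer-style endpoint and GeoJSON output; the URL, type name, CRS and coordinates are all placeholders.

// Hedged sketch: GetFeature with a BBOX filter against an assumed GeoServer-style WFS.
// All concrete values (URL, type name, CRS, coordinates) are placeholders.
const params = new URLSearchParams({
  service: "wfs",
  version: "2.0.0",
  request: "GetFeature",
  typeNames: "namespace:featuretype",
  srsName: "EPSG:4326",
  // Some servers want the CRS appended as a fifth bbox component,
  // i.e. "a1,b1,a2,b2,EPSG:4326" -- try that form if the plain one returns no features.
  bbox: "-1.0,50.0,1.0,52.0,EPSG:4326",
  count: "50",
  outputFormat: "application/json",    // GeoServer extension, not part of the WFS spec
});

fetch(`http://example.com/geoserver/wfs?${params}`)
  .then((r) => r.json())
  .then((fc) => console.log(`got ${fc.features?.length ?? 0} features`))
  .catch(console.error);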

How to find how many JSON endpoints an API has

I’m in the middle of making an Express app. It’s just a learning project.
I’m getting some info from an anime API called jikan.me, which provides info about different anime series, like a picture URL and a synopsis.
For example one is at https://api.jikan.me/anime/16 .
Now, the jikan api might have a json endpoint at anime/1 but there's nothing at anime/2.
I want to find a list of all the numbers (https://api.jikan.me/anime/[numbers]) that actually contain endpoints.
I've tried simply going to https://api.jikan.me/anime but it returns error: No ID/Path Given.
I'm expecting there is likely no absolute answer to this problem but that I might learn something about server-side code along the way.
Where would I begin to look to find this info?
This is a bit late, but Jikan is an unofficial REST API for MyAnimeList. The IDs correspond to the IDs on MAL. For example, https://myanimelist.net/anime/1 can be parsed through https://api.jikan.moe/anime/1, but ID 2 does not exist on MAL. It's a 404, hence that error.
To initially get some IDs, you can try the search endpoint.
Furthermore, I'll be releasing REST 2.2 quite soon (this month) which will give you the ability to parse from pages like these and thus you'll get another endpoint that provides a handful of IDs to get their data from.
Source: I'm the developer of Jikan
If it's not in the documentation, it's probably information that isn't available to you. A REST API needs to be specifically configured to offer certain endpoints; that number at the end might just be an ID that's looked up in an internal database, and there's no way for the application to know in advance whether anything will be there. All it can do is return an error message for you to handle, as is the case here.
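To illustrate the "handle the error" point, here is a rough sketch (an assumed setup, not code from the question) of an Express route that proxies a single ID and treats an upstream 404 as "no anime with this ID". It assumes Node 18+ so that fetch is available globally; the route name is made up.

// Rough sketch: check whether an ID exists on the upstream API by requesting it
// and treating a 404 as "not present". Base URL and route are assumptions.
import express from "express";

const app = express();
const JIKAN_BASE = "https://api.jikan.me/anime";   // as used in the question

app.get("/anime/:id", async (req, res) => {
  const upstream = await fetch(`${JIKAN_BASE}/${req.params.id}`);
  if (upstream.status === 404) {
    // The upstream has no entry for this ID; report that instead of crashing.
    res.status(404).json({ error: `No anime with id ${req.params.id}` });
    return;
  }
  res.json(await upstream.json());
});

app.listen(3000);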

Query Chrome inspector network tab logs?

I'm working on an app that makes many Ajax calls and produces a huge network log in the Chrome inspector. I know that there are ways to query by things like MIME type, but I'm looking for a more fully-featured query capability.
For example, I'd like to be able to only see the pub/sub polling calls with a query like:
request_url:match(/pubnub.com/)
Or just see the GETs with:
request_method:GET
Is there a tool that makes queries like this possible?
You have several options available. There are various pre-defined filter modes, as mentioned in the Network Analysis Reference (as you referenced in your question).
You can use the method filter with method:GET or method:POST in the input to only show requests of a particular method type. If you place a - beforehand, the filter is negated, e.g. -method:GET will show all requests other than GETs.
There's also a filter type called domain, which is useful for only showing requests that match a particular domain. The options are limited though:
domain:stackoverflow.com would show all requests for the StackOverflow domain.
domain:*.google.co.uk would show all requests that are sub-domains of Google UK.
Filtering request path (Method 1)
There's a better approach to filtering particular request paths. You can simply put pubnub.com in the filter input and it will match exactly what you put. You can also negate it with - beforehand, so entering -pubnub.com will show all requests that don't contain that in the path.
Filtering request path (Method 2)
You can also use regex in the filter input, but not for the special filter modes (e.g. method, domain, etc.). So, in your case you could also use /pubnub.com/ as your filter. You can do more complex regular expressions; for instance, /^((?!pubnub.com).)*$/ would do the equivalent of -pubnub.com via a negative lookahead.
The reason I highlight Method 2 is because I fixed the feature a while ago in DevTools, as a result of another similar question that ended up being a bug in Method 1. Both bugs are completely fixed now though. See this for history of the problem if you're interested.

Does DSE Search support JSON queries through the HTTP API (and if not, can it be enabled)?

In newer versions of Solr, you can pass in JSON queries through the HTTP API, either by just passing the JSON in as the request data or by using "query?json={json doc here}".
We noticed that the DSE version uses "select" as the handler and not "query" (not sure if they are different), but attempting to pass in select?json={"q":"*:*"} or select?json={"query":"*:*"} always yields no results, and a curl just passing those also yields no results. It looks like it's a supported feature based on http://docs.datastax.com/en/latest-dse/datastax_enterprise/srch/srchJSON.html, but I'm not sure if it includes JSON facets like http://yonik.com/json-facet-api/.
The main reason we want to use it is the advanced sub-faceting (we need pivot facets with date range groupings), and we'd prefer to do it over the http api for a variety of reasons.
Any help is appreciated. Thanks!
The JSON Facet API was introduced in Solr 5.1.
https://issues.apache.org/jira/browse/SOLR-7214
The latest DSE as of today ships Solr 4.10. This API is not yet supported.
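For what it's worth, once you are on a Solr version that includes it (5.1 or later), a JSON Facet request with a nested date-range sub-facet looks roughly like the sketch below. This is illustrative only: the collection name, field names and date range are made up, and DSE on Solr 4.10 will reject it.

// Illustrative sketch: JSON Facet API request (Solr 5.1+). Collection, field names
// and the date range are invented placeholders, not values from the question.
const body = {
  query: "*:*",
  facet: {
    by_category: {
      type: "terms",
      field: "category",
      facet: {
        // nested sub-facet: date-range buckets inside each category bucket
        by_month: {
          type: "range",
          field: "created_dt",
          start: "2015-01-01T00:00:00Z",
          end: "2016-01-01T00:00:00Z",
          gap: "+1MONTH",
        },
      },
    },
  },
};

fetch("http://localhost:8983/solr/mycollection/query", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(body),
})
  .then((r) => r.json())
  .then((json) => console.log(json.facets))
  .catch(console.error);

The nested facet block is what gives you the pivot-with-date-ranges behaviour you describe, which the older flat facet API can't express as cleanly.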

Skipping precaching: Cannot read property 'concat' of null

Here's my question: How might I try to get rid of the 'skipping precaching' and cache everything that comes in from https://laoadventist.info/beta/r as the precache list?
Also, is it correct for me to set precache="https://laoadventist.info/beta/r" or should I be setting that to a function that grabs the data and returns it instead?
Skipping precaching: Cannot read property 'concat' of null
comes out on the console when using My Polymer App
<platinum-sw-cache default-cache-strategy="fastest" cache-config-file="cache-config.json" precache="https://laoadventist.info/beta/r">
Am I correct in assuming I can precache a URL like this?
I am trying to load a JSON result from Laravel 5.1 to set what my precache should be... I know it's not the most elegant, but I'm new to Polymer, caching, service workers, etc., and I'm using this app as a learning opportunity. It'll be a bit different at the end of the day, but for now I just want to load everything. :)
I want to precache all of the data so that a user can fully utilize this app when offline (though later I'll set it up so that they don't have to precache loads and loads of json requests, only the ones they want, like per book - but that's for later).
If you have an array of resource URLs that you want precached, the cleanest way to specify them is to use the cacheConfigFile option and to point to a JSON file that contains your array as its precache property. Take a look at the example in the docs for cacheConfigFile: https://elements.polymer-project.org/elements/platinum-sw?active=platinum-sw-cache
You shouldn't have to use the precache attribute on the element if you're using cacheConfigFile.
It sounds like you're using Polymer Starter Kit, and that will create the JSON config file and populate it for you with an array corresponding to your local resources. But if you'd like to specify additional resources to be precached, you can modify the build process to append those URLs to the auto-generated list.
The reason you're seeing that error is that you're pointing to a JSON config file that is effectively empty and is just meant for the development environment.
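For illustration, a cache-config.json along those lines might look like the sketch below; the listed URLs are placeholders, not the real output of the /beta/r endpoint.

{
  "precache": [
    "/beta/r",
    "/beta/r/books/1",
    "/beta/r/books/2"
  ]
}

With a file like that in place, the element only needs cache-config-file="cache-config.json" and no precache attribute.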