OpenLayers (GeoServer) spatial search not returning any features

I created a generic layer in GeoServer for OpenLayers (OL3) and added a few features to that layer. I'm trying to do a spatial search, but the call is not returning any features.
Here is the POST call I'm making to the WFS (GeoServer) URL http://xyz:5002/geoserver/wfs
In the form data we are sending these query parameters:
cql_filter: INTERSECTS(geometry,POLYGON((13.222566632788059 78.15759658813475,13.201844057837434 78.15141677856444,13.211202857957346 78.17956924438474,13.222900853445154 78.17888259887692,13.222566632788059 78.15759658813475)))
service: WFS
request: GetFeature
version: 1.1.0
typename: layerName
outputFormat: json
srsname: EPSG:4326
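For reference, the same request expressed as a minimal JavaScript fetch sketch (the host, layer name, and geometry column are the placeholders from above):
const params = new URLSearchParams({
    service: 'WFS',
    request: 'GetFeature',
    version: '1.1.0',
    typename: 'layerName',
    outputFormat: 'json',
    srsname: 'EPSG:4326',
    cql_filter: 'INTERSECTS(geometry,POLYGON((13.222566632788059 78.15759658813475,' +
        '13.201844057837434 78.15141677856444,13.211202857957346 78.17956924438474,' +
        '13.222900853445154 78.17888259887692,13.222566632788059 78.15759658813475)))'
});

// POST the form-encoded body to the WFS endpoint and log the FeatureCollection.
fetch('http://xyz:5002/geoserver/wfs', {
    method: 'POST',
    headers: {'Content-Type': 'application/x-www-form-urlencoded'},
    body: params.toString()
})
    .then(r => r.json())
    .then(console.log);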
This is the result I'm getting:
{
    "type": "FeatureCollection",
    "features": [],
    "totalFeatures": 0,
    "numberMatched": 0,
    "numberReturned": 0,
    "timeStamp": "2022-02-03T09:23:51.415Z",
    "crs": null
}
The GeoServer version I'm using is 2.20.2.
Can anyone please let me know what mistake I'm making in the spatial search query?

Related

How can I find Revit document derivative URNs in BIM360 Docs using the Forge Data Management API?

I have some Revit files stored in a BIM360 project. I am trying to visualize those files inside the Forge Viewer. Now, the Forge Viewer won't work directly with Revit files/documents; it requires the 'urn' of a translated file in 'svf' format.
I could transform my Revit file into an 'svf' file using the Forge Model Derivative API, but that consumes some credits, and I shouldn't have to do it, because the translation already happens when a Revit file is uploaded into BIM360.
I was wondering, then: how do I find the 'urn' of the underlying 'svf' file for my Revit document?
I found a few resources suggesting that, when browsing the content of my BIM360 folder or checking the versions of my Revit document using the Forge Data Management API, I should be able to access a derivatives object in the response which represents the derived model that can be used by the Forge Viewer.
https://forums.autodesk.com/t5/bim-360-api-forum/connecting-forge-viewer-with-bim-360/td-p/6742779
However, for me there is no derivatives object in the API response. See below a sample of the response (I have obfuscated some data for security purposes):
{
    "type": "versions",
    "id": "urn:adsk.wipprod:fs.file:vf.XXXXXXXXXXXXXXXXXXXX?version=1",
    "attributes": "#{name=139200.33_Amenities Building_R21.rvt; displayName=139200.33_Amenities Building_R21.rvt; createTime=2021-09-03T04:24:18.0000000Z; createUserId=XXXXXXXXXX; createUserName=Holmes Consulting; lastModifiedTime=2021-09-03T04:28:02.0000000Z; lastModifiedUserId=XXXXXXXXXXXX; lastModifiedUserName=XXXXXXXXXX; versionNumber=1; storageSize=19808256; fileType=rvt; extension=}",
    "links": "#{self=; webView=}",
    "relationships": "#{item=; links=; refs=; downloadFormats=; derivatives=; thumbnails=; storage=}"
},
I am using the API call from the link I provided above: https://developer.api.autodesk.com/data/v1/projects/:project_id/folders/:folder_id/contents
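In code, the call looks roughly like this (a minimal sketch; accessToken is a placeholder and token acquisition is omitted):
// List the folder contents; the 'included' array carries the version objects.
fetch('https://developer.api.autodesk.com/data/v1/projects/:project_id/folders/:folder_id/contents', {
    headers: { Authorization: 'Bearer ' + accessToken }
})
    .then(r => r.json())
    .then(console.log);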
Why is it that my response contains so little data?
First, thanks to Eason for contributing.
Since my derivatives object was empty, I tried to directly use the 'urn' of my object version.
When listing all my folder documents using the folder get-contents API method mentioned in my issue, I get all the documents in the 'data' item array and all their versions in the 'included' versions array. We need to use the document version id to build the urn. See my sample below:
"included": [
{
"type": "versions",
"id": "urn:adsk.wipprod:fs.file:vf.l9pc9re6QOmeEVHvTCTlIQ?version=1",
"attributes": "#{name=139200.33_Amenities Building_R21.rvt; displayName=139200.33_Amenities Building_R21.rvt; createTime=2021-09-03T04:24:18.0000000Z; createUserId=XXXXXX; createUserName=XXXXXXXX; lastModifiedTime=2021-09-03T04:28:02.0000000Z; lastModifiedUserId=XXXXXXXXXXXX; lastModifiedUserName=XXXXXXXXXXXX; versionNumber=1; storageSize=19808256; fileType=rvt; extension=}",
"links": "#{self=; webView=}",
"relationships": "#{item=; links=; refs=; downloadFormats=; derivatives=; thumbnails=; storage=}"
},
Now the id has to be base64-encoded. I am using https://www.freeformatter.com/base64-encoder.html to encode the id urn:adsk.wipprod:fs.file:vf.l9pc9re6QOmeEVHvTCTlIQ?version=1. Beware: the result will be dXJuOmFkc2sud2lwcHJvZDpmcy5maWxlOnZmLmw5cGM5cmU2UU9tZUVWSHZUQ1RsSVE/dmVyc2lvbj0x, which is not valid in my JS code to load the document in the Forge Viewer because of the /. It needs to be replaced with a _. So eventually the bit of JS that loads my document into the Forge Viewer looks like this:
var documentId = 'urn:dXJuOmFkc2sud2lwcHJvZDpmcy5maWxlOnZmLmw5cGM5cmU2UU9tZUVWSHZUQ1RsSVE_dmVyc2lvbj0x'; //139200.33_Amenities Building_R21.rvt
Autodesk.Viewing.Initializer(options, function() {
    var htmlDiv = document.getElementById('forgeViewer');
    viewer = new Autodesk.Viewing.GuiViewer3D(htmlDiv);
    viewer.start();

    Autodesk.Viewing.Document.load(documentId, onDocumentLoadSuccess, onDocumentLoadFailure);

    function onDocumentLoadSuccess(viewerDocument) {
        // Choose the default viewable - most likely a 3D model, rather than a 2D sheet.
        var defaultModel = viewerDocument.getRoot().getDefaultGeometry();
        viewer.loadDocumentNode(viewerDocument, defaultModel);
    }
    function onDocumentLoadFailure() {
        console.error('Failed fetching Forge manifest');
    }
});
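For completeness, the manual encode-and-replace step described above can also be done in code. A minimal Node.js sketch; the helper name toViewerUrn is mine, not a Forge API:
// Hypothetical helper (not part of the Forge SDK): base64-encode a version id
// and make it URL-safe for Autodesk.Viewing.Document.load.
function toViewerUrn(versionId) {
    return 'urn:' + Buffer.from(versionId).toString('base64')
        .replace(/\+/g, '-')   // URL-safe alphabet
        .replace(/\//g, '_')
        .replace(/=+$/, '');   // strip padding
}

var documentId = toViewerUrn('urn:adsk.wipprod:fs.file:vf.l9pc9re6QOmeEVHvTCTlIQ?version=1');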
You can find it in the id value of relationships.data.derivatives. For example:
"derivatives": {
"data": {
"type": "derivatives",
"id": "dXJuOmFkc2sud2lwcHJvZDpmcy5maWxlOnZmLkVueWtrU3FjU0lPVTVYMGhRdy1mQUM_dmVyc2lvbj0x"
},
// ...
},
Or check this line: https://github.com/Autodesk-Forge/learn.forge.viewhubmodels/blob/nodejs/routes/datamanagement.js#L155
const viewerUrn = (version.relationships != null && version.relationships.derivatives != null ? version.relationships.derivatives.data.id : null);

ADF - Data Flow - JSON expression for property name

I have a requirement to convert JSON into CSV (or a SQL table, or any other flat structure) using Data Flow in Azure Data Factory. I need to take the property names at one level of the hierarchy and the values of the child properties at a lower level from the source JSON, and add them both as column/row values in the CSV or other flat structure.
Source data rules/constraints:
Parent-level property names change dynamically (e.g. ABCDataPoints, CementUse, CoalUse, ABCUseIndicators are dynamic).
The hierarchy always remains the same as in the sample JSON below.
I need some help defining a JSON path/expression to get the names ABCDataPoints, CementUse, CoalUse, ABCUseIndicators, etc. I have already figured out how to retrieve the values of the properties Value, ValueDate, ValueScore, and AsReported.
Source data structure:
{
    "ABCDataPoints": {
        "CementUse": {
            "Value": null,
            "ValueDate": null,
            "ValueScore": null,
            "AsReported": [],
            "Sources": []
        },
        "CoalUse": {
            "Value": null,
            "ValueDate": null,
            "AsReported": [],
            "Sources": []
        }
    },
    "ABCUseIndicators": {
        "EnvironmentalControversies": {
            "Value": false,
            "ValueDate": "2021-03-06T23:22:49.870Z"
        },
        "RenewableEnergyUseRatio": {
            "Value": null,
            "ValueDate": null,
            "ValueScore": null
        }
    },
    "XYZDataPoints": {
        "AccountingControversiesCount": {
            "Value": null,
            "ValueDate": null,
            "AsReported": [],
            "Sources": []
        },
        "AdvanceNotices": {
            "Value": null,
            "ValueDate": null,
            "Sources": []
        }
    },
    "XYXIndicators": {
        "AccountingControversies": {
            "Value": false,
            "ValueDate": "2021-03-06T23:22:49.870Z"
        },
        "AntiTakeoverDevicesAboveTwo": {
            "Value": 4,
            "ValueDate": "2021-03-06T23:22:49.870Z",
            "ValueScore": "0.8351945854483925"
        }
    }
}
Expected flattened structure, for example (one row per child property):
Parent | Child | Value | ValueDate | ValueScore | AsReported
ABCDataPoints | CementUse | null | null | null | []
ABCUseIndicators | EnvironmentalControversies | false | 2021-03-06T23:22:49.870Z | |
Background:
After multiple calls with ADF experts at Microsoft (our workplace has a Microsoft/Azure partnership), they concluded this is not possible with the out-of-the-box activities provided by ADF as-is, neither by Data Flow (we don't have to use Data Flow, though) nor by the Flatten feature. The reason is that Data Flow/Flatten only unrolls array objects and there are no mapping functions available to pick the property names; custom expressions are in internal beta testing and will be in PA in the near future.
Conclusion/Solution:
Based on the calls with Microsoft employees, we agreed to pursue two approaches, but both need custom code; without custom code this is not possible using out-of-the-box activities.
Solution 1: Use some code to flatten as per the requirement, using an ADF Custom Activity. The downside is that you need external compute (VM/Batch), and the supported options are not on-demand, so it is a little expensive, but it works best if you have continuous streaming workloads. With this approach you also have to continuously monitor whether the input sources are of different sizes, because the compute needs to be elastic in this case or you will get out-of-memory exceptions.
Solution 2: Still needs custom code, but in a Function App (see the sketch after these steps).
Create a Copy Activity with the files containing the JSON content as source (preferably in a storage account).
Use the REST endpoint of the function as the target (not a Function activity, because that has a 90-second timeout when called from an ADF activity).
The function app takes JSON lines as input, then parses and flattens them.
This way you can scale the number of lines sent in each request to the function, and also scale the parallel requests.
The function does the flattening as required into one file or multiple files and stores them in blob storage.
The pipeline continues from there as needed.
One problem with this approach: if any of the ranges fails, the Copy Activity will retry, but it will run the whole process again.
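As a rough illustration of what such a function could do, here is a minimal Node.js sketch, assuming the fixed two-level hierarchy shown in the source JSON above (the function and column names are mine, not part of ADF):
// Illustrative only: flatten {Parent: {Child: {Value, ValueDate, ...}}} into rows.
function flatten(doc) {
    const rows = [];
    for (const [parent, children] of Object.entries(doc)) {
        for (const [child, props] of Object.entries(children)) {
            rows.push({
                Parent: parent,                    // e.g. "ABCDataPoints"
                Child: child,                      // e.g. "CementUse"
                Value: props.Value ?? null,
                ValueDate: props.ValueDate ?? null,
                ValueScore: props.ValueScore ?? null,
                AsReported: JSON.stringify(props.AsReported ?? [])
            });
        }
    }
    return rows;
}
Each row can then be serialized as a CSV line and written to blob storage.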
I'm trying something very similar; is there any other/native solution to address this?
As mentioned in the response above, has this gone GA yet? If yes, any reference documentation/samples would be of great help!
Custom expressions are in internal beta testing and will be in PA in the near future.

How to parse a JSON array in Scala and write it to a DataFrame?

Using my Scala HTTP client I retrieved a response in JSON format from an API GET call.
My end goal is to write this JSON content to an AWS S3 bucket in order to make it available as a table on Redshift by running a simple AWS Glue crawler.
My thinking is to parse this JSON message and somehow convert it into a Spark DataFrame, so later on I can save it to my preferred S3 location as .csv, .parquet, or whatever.
The JSON file looks like this:
{
    "response": {
        "status": "OK",
        "start_element": 0,
        "num_elements": 100,
        "categories": [
            {
                "id": 1,
                "name": "Airlines",
                "is_sensitive": false,
                "last_modified": "2010-03-19 17:48:36",
                "requires_whitelist_on_external": false,
                "requires_whitelist_on_managed": false,
                "is_brand_eligible": true,
                "requires_whitelist": false,
                "whitelist": {
                    "geos": [],
                    "countries_and_brands": []
                }
            },
            {
                "id": 2,
                "name": "Apparel",
                "is_sensitive": false,
                "last_modified": "2010-03-19 17:48:36",
                "requires_whitelist_on_external": false,
                "requires_whitelist_on_managed": false,
                "is_brand_eligible": true,
                "requires_whitelist": false,
                "whitelist": {
                    "geos": [],
                    "countries_and_brands": []
                }
            }
        ],
        "count": 148,
        "dbg_info": {
            "warnings": [],
            "version": "1.18.1621",
            "output_term": "categories"
        }
    }
}
The content I would like to map to a DataFrame is the one contained in the "categories" JSON array.
I have managed to parse the message using the json4s.JsonMethods parse method this way:
val parsedJson = parse(request) \\ "categories"
Obtaining the following:
output: org.json4s.JValue = JArray(List(JObject(List((id,JInt(1)), (name,JString(Airlines)), (is_sensitive,JBool(false)), (last_modified,JString(2010-03-19 17:48:36)), (requires_whitelist_on_external,JBool(false)), (requires_whitelist_on_managed,JBool(false)), (is_brand_eligible,JBool(true)), (requires_whitelist,JBool(false)), (whitelist,JObject(List((geos,JArray(List())), (countries_and_brands,JArray(List()))))))), JObject(List((id,JInt(2)), (name,JString(Apparel)), (is_sensitive,JBool(false)), (last_modified,JString(2010-03-19 17:48:36)), (requires_whitelist_on_external,JBool(false)), (requires_whitelist_on_managed,JBool(false)), (is_brand_eligible,JBool(true)), (requires_whitelist,JBool(false)), (whitelist,JObject(List((geos,JArray(List())), (countries_and_brands,JArray(List()))))))))
However, I am completely lost on how to proceed. I have even tried another Scala library called uJson:
val json = ujson.read(request)
val tuples = json("response")("categories").arr /* <-- categories is an array */ .map { item =>
  (item("id"), item("name"), item("is_sensitive"), item("last_modified"))
}
This time I only parsed a few fields for testing, but this shouldn't change much. I obtained the following structure:
tuples: scala.collection.mutable.ArrayBuffer[(ujson.Value, ujson.Value, ujson.Value, ujson.Value)] = ArrayBuffer((1,"Airlines",false,"2010-03-19 17:48:36"), (2,"Apparel",false,"2010-03-19 17:48:36"))
However, this time too I do not know how to move forward, and everything I try results in errors, mostly related to format incompatibility.
Please feel free to propose any other approach to achieve my goal, even if it completely changes my workflow. I'd rather learn something properly. Thanks.
You can use the following code to convert a JSON string to a Spark DataFrame/Dataset:
import spark.implicits._ // required for Seq(...).toDS()

val df00 =
  spark.read.option("multiline", "true").json(Seq(JSON_OUTPUT).toDS())

How can I get a common response format for Lambda APIs?

I am using Node.js for AWS Lambda + API Gateway APIs.
I have multiple Lambda functions, and each returns a different response format since they integrate multiple third-party SDKs like Stripe and DynamoDB.
Is there any way to get a common response from all the functions, like below?
{
    "success": true,
    "data": { RESPONSEFROMLAMBDA },
    "messages": null,
    "code": 200,
    "description": "OK"
}
The third-party services your Lambda functions are using shouldn't have any bearing on the response format. You just need to update all the API Gateway endpoints to use a mapping template with this format.
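If you would rather shape the envelope in code instead of (or in addition to) a mapping template, here is a minimal Node.js sketch; the helper name withEnvelope is hypothetical, not an AWS API:
// Hypothetical helper: wraps any async handler so every response
// follows the common envelope from the question.
const withEnvelope = (handler) => async (event, context) => {
    try {
        const data = await handler(event, context);
        return {
            statusCode: 200,
            body: JSON.stringify({ success: true, data, messages: null, code: 200, description: 'OK' })
        };
    } catch (err) {
        return {
            statusCode: 500,
            body: JSON.stringify({ success: false, data: null, messages: [err.message], code: 500, description: 'Internal Server Error' })
        };
    }
};

// Usage: exports.handler = withEnvelope(async (event) => ({ hello: 'world' }));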

Convert JSON to proper GeoJSON with geojson.min.js

So I have JSON data with a geo_coordinates column. I use the Leaflet library to display GeoJSON; unfortunately, Leaflet cannot read geo_coordinates because it doesn't know that it is a Point type with coordinates.
I was looking for the best solution and I thought that I should change geo_coordinates into "type": "Point" plus "coordinates".
I found geojson.min.js, which looks simple to use, but I am doing something wrong; maybe somebody can help.
var map;

// set up the map
map = new L.Map('map');

// create the tile layer with correct attribution
var kanomapUrl = 'http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png';
var kanomap = new L.TileLayer(kanomapUrl, {minZoom: 1, maxZoom: 18});

map.setView(new L.LatLng(8.783268, 11.95733), 1);
map.addLayer(kanomap);

$.getJSON("/api/facility/", function(data) {
    var geojson = L.geoJson(data);
    GeoJSON.parse(geojson, {Point: ['geo_coordinates']});
    geojson.addTo(map);
});
My JSON:
[{"id": 10, "name": "Berlin", "country": "Germany", "geo_coordinates": "1.153757,11.634342"}, ......]
L.geoJson is working: I checked it with some data from Google and it displayed correctly, and the same happens when I type some GeoJSON manually; then everything works. So I am sure I am doing something wrong with the geojson library when converting my JSON.
Thanks.
I assume you are speaking of https://www.npmjs.com/package/geojson ...
Your geo_coordinates field is too complex ...
The library requires two distinct fields from your data structure to define a Point (one for latitude, one for longitude).
Why do you want to convert to GeoJSON? Can't you process the data yourself?
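For illustration, a minimal sketch under that assumption, using the npm geojson package with two separate numeric fields (the field names lat/lng are mine):
// Two separate numeric fields, as the library expects for a Point.
var data = [
    { id: 10, name: 'Berlin', country: 'Germany', lat: 1.153757, lng: 11.634342 }
];
var result = GeoJSON.parse(data, { Point: ['lat', 'lng'] });
// result is a FeatureCollection whose features carry the remaining fields as properties.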
You just need to tweak a few things:
1) As #FranceImage mentioned, you have to set up your coordinates differently in your source data, for example as an array:
{"id": 10, "name": "Berlin", "country": "Germany", "geo_coordinates": [1.153757, 11.634342]}
2) In Leaflet, it seems that if you add raw GeoJSON data, you have to use the addData function:
var result = GeoJSON.parse(data, {Point: 'geo_coordinates'}); // parse the raw data, not a Leaflet layer
var geojsonLayer = L.geoJson().addData(result);
See here for a demo: http://jsfiddle.net/n82d4s91/5/
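If you cannot change the API output itself, a hedged alternative is to convert the "lat,lng" string on the client before parsing; a minimal sketch, assuming the string order is latitude,longitude as in the answer above:
$.getJSON("/api/facility/", function(data) {
    // Convert the "lat,lng" string into a numeric array the geojson library understands.
    data.forEach(function(item) {
        var parts = item.geo_coordinates.split(',');
        item.geo_coordinates = [parseFloat(parts[0]), parseFloat(parts[1])];
    });
    var result = GeoJSON.parse(data, {Point: 'geo_coordinates'});
    L.geoJson().addData(result).addTo(map);
});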