MapServer SOS (Sensor Observation Service) Configuration

I tried to set up MapServer SOS, but I ran into a problem: the SOS doesn't return anything. The map file I created is shown below:
MAP
  NAME "SOS_DEMO"
  STATUS ON
  SIZE 400 300
  EXTENT -180 -90 180 90
  UNITS METERS
  SHAPEPATH "C:\ms4w\apps\tutorial\data"
  IMAGECOLOR 255 255 255
  WEB
    IMAGEPATH "C:\ms4w\apps\tutorial\templates"
    IMAGEURL "C:\ms4w\apps\tutorial\images"
    METADATA
      "sos_onlineresource" "http://127.0.0.1:8282/cgi-bin/mapserv.exe?map=c:/ms4w/mysos.map?"
      "sos_title" "My SOS Demo Server"
      "sos_srs" "EPSG:4326"
      "sos_enable_request" "*"
    END
  END
  PROJECTION
    "init=epsg:4326"
  END
  LAYER
    NAME "sos_point"
    METADATA
      "sos_procedure" "ifgi-sensor-1"
      "sos_offering_id" "WQ1289"
      "sos_observedproperty_id" "Water Quality"
      "sos_describesensor_url" "http://127.0.0.1:8181/DescribeSensor.xml"
    END
    TYPE POINT
    STATUS ON
    DATA 'sospoint'
    PROJECTION
      "init=epsg:4326"
    END
    CLASS
      NAME 'sospoint'
      STYLE
        COLOR 255 128 128
      END
    END
  END
END
As you can see, I tried to retrieve sensor data from a shapefile. The message returned by the SOS is:
<om:ObservationCollection xmlns:gml="http://www.opengis.net/gml" xmlns:ows="http://www.opengis.net/ows/1.1" xmlns:swe="http://www.opengis.net/swe/1.0.1" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:sos="http://www.opengis.net/sos/1.0" xmlns:ms="http://mapserver.gis.umn.edu/mapserver" xmlns:om="http://www.opengis.net/om/1.0" gml:id="WQ1289" xsi:schemaLocation="http://www.opengis.net/om/1.0 http://schemas.opengis.net/om/1.0.0/om.xsd http://mapserver.gis.umn.edu/mapserver http://127.0.0.1:8282/cgi-bin/mapserv.exe?map=c:/ms4w/mysos.map?service=WFS&version=1.1.0&request=DescribeFeatureType&typename=urban">
  <om:member>
    <om:Observation>
      <om:procedure xlink:href="urn:ogc:def:procedure:ifgi-sensor-1"/>
      <om:observedProperty>
        <swe:CompositePhenomenon gml:id="Water Quality" dimension="3">
          <swe:component xlink:href="urn:ogc:def:property:OGC-SWE:1:Id"/>
          <swe:component xlink:href="urn:ogc:def:property:OGC-SWE:1:sensor_nam"/>
          <swe:component xlink:href="urn:ogc:def:property:OGC-SWE:1:sensor_val"/>
        </swe:CompositePhenomenon>
      </om:observedProperty>
      <om:resultDefinition>
        <swe:DataBlockDefinition>
          <swe:components>
            <swe:DataRecord/>
          </swe:components>
          <swe:encoding>
            <swe:TextBlock tokenSeparator="," blockSeparator=" " decimalSeparator="."/>
          </swe:encoding>
        </swe:DataBlockDefinition>
      </om:resultDefinition>
      <om:result></om:result>
    </om:Observation>
  </om:member>
</om:ObservationCollection>
Although I put 6 observations into the shapefile, the SOS doesn't return any. Could you please let me know what I should do to resolve the problem?
Thanks,
Ebrahim

Perhaps better to ask here: https://gis.stackexchange.com/

Related

Load custom package model to get model vocabulary in AllenNLP python interface

I'm trying to get the vocabulary from some publicly available pre-trained models (that aren't mine) through the Python interface of AllenNLP, via self.vocab. However, I'm running into problems trying to load the model. I'm looking to get the vocabulary from the dygiepp models, using the following code:
from allennlp.models.model import Model
scierc_model = Model.from_archive('https://s3-us-west-2.amazonaws.com/ai2-s2-research/dygiepp/master/scierc.tar.gz')
However, I get the following error:
---------------------------------------------------------------------------
ConfigurationError Traceback (most recent call last)
/tmp/local/63381207/ipykernel_7616/3549263982.py in <module>
----> 1 scierc_model = Model.from_archive('https://s3-us-west-2.amazonaws.com/ai2-s2-research/dygiepp/master/scierc.tar.gz')
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/models/model.py in from_archive(cls, archive_file, vocab)
480 from allennlp.models.archival import load_archive # here to avoid circular imports
481
--> 482 model = load_archive(archive_file).model
483 if vocab:
484 model.vocab.extend_from_vocab(vocab)
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/models/archival.py in load_archive(archive_file, cuda_device, overrides, weights_file)
231 # Instantiate model and dataset readers. Use a duplicate of the config, as it will get consumed.
232 dataset_reader, validation_dataset_reader = _load_dataset_readers(
--> 233 config.duplicate(), serialization_dir
234 )
235 model = _load_model(config.duplicate(), weights_path, serialization_dir, cuda_device)
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/models/archival.py in _load_dataset_readers(config, serialization_dir)
267
268 dataset_reader = DatasetReader.from_params(
--> 269 dataset_reader_params, serialization_dir=serialization_dir
270 )
271 validation_dataset_reader = DatasetReader.from_params(
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/common/from_params.py in from_params(cls, params, constructor_to_call, constructor_to_inspect, **extras)
586 "type",
587 choices=as_registrable.list_available(),
--> 588 default_to_first_choice=default_to_first_choice,
589 )
590 subclass, constructor_name = as_registrable.resolve_class_name(choice)
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/common/params.py in pop_choice(self, key, choices, default_to_first_choice, allow_class_names)
322 """{"model": "my_module.models.MyModel"} to have it imported automatically."""
323 )
--> 324 raise ConfigurationError(message)
325 return value
326
ConfigurationError: dygie not in acceptable choices for dataset_reader.type: ['babi', 'conll2003', 'interleaving', 'multitask', 'multitask_shim', 'sequence_tagging', 'sharded', 'text_classification_json']. You should either use the --include-package flag to make sure the correct module is loaded, or use a fully qualified class name in your config file like {"model": "my_module.models.MyModel"} to have it imported automatically.
The error message describes how to fix the problem from the command line, but not in the Python interface. I additionally tried adding the line import dygie to my code to import the missing package, but that didn't solve the problem.
Does anyone know how to get around this?
To run this model, you'll need to have the code from this repo: https://github.com/dwadden/dygiepp.
In particular, you need to import the DyGIE dataset reader from here: https://github.com/dwadden/dygiepp/blob/master/dygie/data/dataset_readers/dygie.py#L29
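A minimal sketch of how that can look in the Python interface (the clone path is hypothetical; importing the reader module is what runs AllenNLP's @DatasetReader.register("dygie") decorator and makes the "dygie" type resolvable):
import sys
sys.path.append('/path/to/dygiepp')  # hypothetical: wherever you cloned the repo

# Importing this submodule executes @DatasetReader.register("dygie"); a bare
# "import dygie" is not enough, since it doesn't import the submodule.
import dygie.data.dataset_readers.dygie  # noqa: F401
# The archive's model type is registered the same way; if loading still fails,
# you may also need (assumption based on the repo layout):
# import dygie.models.dygie

from allennlp.models.model import Model

scierc_model = Model.from_archive('https://s3-us-west-2.amazonaws.com/ai2-s2-research/dygiepp/master/scierc.tar.gz')
vocab = scierc_model.vocab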

Token indices sequence length is longer than the specified maximum sequence length for this model (651 > 512) with Hugging face sentiment classifier

I'm trying to get the sentiments for comments with the help of a Hugging Face sentiment-analysis pretrained model. It returns an error: Token indices sequence length is longer than the specified maximum sequence length for this model (651 > 512).
Below is the code; please take a look:
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
import transformers
import pandas as pd
model = AutoModelForSequenceClassification.from_pretrained('/content/drive/MyDrive/Huggingface-Sentiment-Pipeline')
token = AutoTokenizer.from_pretrained('/content/drive/MyDrive/Huggingface-Sentiment-Pipeline')
classifier = pipeline(task='sentiment-analysis', model=model, tokenizer=token)
data = pd.read_csv('/content/drive/MyDrive/DisneylandReviews.csv', encoding='latin-1')
data.head()
Output is
Review
0 If you've ever been to Disneyland anywhere you...
1 Its been a while since d last time we visit HK...
2 Thanks God it wasn t too hot or too humid wh...
3 HK Disneyland is a great compact park. Unfortu...
4 the location is not in the city, took around 1...
Followed by
classifier("My name is mark")
Output is
[{'label': 'POSITIVE', 'score': 0.9953688383102417}]
Followed by this code, where value holds the classifier output from above:
basic_sentiment = [i['label'] for i in value if 'label' in i]
basic_sentiment
Output is
['POSITIVE']
Appending all the rows to an empty list:
text = []
for index, row in data.iterrows():
    text.append(row['Review'])
I'm trying to get the sentiment for all the rows:
sent = []
for i in range(len(data)):
    sentiment = classifier(data.iloc[i, 0])
    sent.append(sentiment)
The error is:
Token indices sequence length is longer than the specified maximum sequence length for this model (651 > 512). Running this sequence through the model will result in indexing errors
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-19-4bb136563e7c> in <module>()
2
3 for i in range(len(data)):
----> 4 sentiment = classifier(data.iloc[i,0])
5 sent.append(sentiment)
11 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1914 # remove once script supports set_grad_enabled
1915 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1916 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1917
1918
IndexError: index out of range in self
Some of the sentences in your Review column of the data frame are too long. When these sentences are converted to tokens and sent into the model, they exceed the model's 512-token sequence-length limit; the embedding of the model used in the sentiment-analysis task was trained on 512-token inputs.
To fix this issue you can filter out the long sentences and keep only shorter ones (with token length < 512), or you can truncate the sentences with truncation=True:
sentiment = classifier(data.iloc[i,0], truncation=True)
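In context, the failing loop from the question then becomes (a minimal sketch; only truncation=True is new):
sent = []
for i in range(len(data)):
    # truncation=True caps each review at the model's maximum input length
    sentiment = classifier(data.iloc[i, 0], truncation=True)
    sent.append(sentiment)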
If you're tokenizing separately from your classification step, this warning can be output during tokenization itself (as opposed to classification).
In my case, I am using a BERT model, so I have MAX_TOKENS=510 (leaving room for the sequence-start and sequence-end tokens).
MAX_TOKENS = 510  # 512 minus the sequence-start and sequence-end tokens

token = AutoTokenizer.from_pretrained("your model")
tokens = token.tokenize(text, max_length=MAX_TOKENS, truncation=True)
Now, when you run your classifier, the tokens are guaranteed not to exceed the maximum length.

Hyperparameter tuning using tensorboard.plugins.hparams api with custom loss function

I am building a neural network with my own custom loss function (pretty long and complicated). My network is unsupervised, so my input and expected output are identical; at the moment I am also using one single input (just trying to optimize the loss for that one input).
I am trying to use the tensorboard.plugins.hparams API for hyperparameter tuning and don't know how to incorporate my custom loss function there. I'm trying to follow the code suggested on the TensorFlow 2.0 website.
This is what the website suggests:
HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
HP_DROPOUT = hp.HParam('dropout', hp.RealInterval(0.1, 0.2))
HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd']))
METRIC_ACCURACY = 'accuracy'
with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
    hp.hparams_config(
        hparams=[HP_NUM_UNITS, HP_DROPOUT, HP_OPTIMIZER],
        metrics=[hp.Metric(METRIC_ACCURACY, display_name='Accuracy')],
    )
I need to change that, as I don't want to use a dropout layer, so I can just delete that. As for METRIC_ACCURACY, I don't want to use accuracy, since it has no use in my model; instead I want to use my custom loss function. If I were doing a regular model fit, it would look like this:
model.compile(optimizer=adam,loss=dl_tf_loss, metrics=[dl_tf_loss])
So I tried to change the suggested code into the following, but I get an error and am wondering how I should change it so that it suits my needs. Here is what I tried:
HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd']))
#METRIC_LOSS = dl_tf_loss
with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
    hp.hparams_config(
        hparams=[HP_NUM_UNITS, HP_OPTIMIZER],
        metrics=[hp.Metric(dl_tf_loss, display_name='Loss')],
    )
It gives me the following error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-26-27d079c6be49> in <module>()
5
6 with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
----> 7 hp.hparams_config(hparams=[HP_NUM_UNITS, HP_OPTIMIZER],metrics=[hp.Metric(dl_tf_loss, display_name='Loss')])
8
3 frames
/usr/local/lib/python3.6/dist-packages/tensorboard/plugins/hparams/summary_v2.py in hparams_config(hparams, metrics, time_created_secs)
127 hparams=hparams,
128 metrics=metrics,
--> 129 time_created_secs=time_created_secs,
130 )
131 return _write_summary("hparams_config", pb)
/usr/local/lib/python3.6/dist-packages/tensorboard/plugins/hparams/summary_v2.py in hparams_config_pb(hparams, metrics, time_created_secs)
161 domain.update_hparam_info(info)
162 hparam_infos.append(info)
--> 163 metric_infos = [metric.as_proto() for metric in metrics]
164 experiment = api_pb2.Experiment(
165 hparam_infos=hparam_infos,
/usr/local/lib/python3.6/dist-packages/tensorboard/plugins/hparams/summary_v2.py in <listcomp>(.0)
161 domain.update_hparam_info(info)
162 hparam_infos.append(info)
--> 163 metric_infos = [metric.as_proto() for metric in metrics]
164 experiment = api_pb2.Experiment(
165 hparam_infos=hparam_infos,
/usr/local/lib/python3.6/dist-packages/tensorboard/plugins/hparams/summary_v2.py in as_proto(self)
532 name=api_pb2.MetricName(
533 group=self._group,
--> 534 tag=self._tag,
535 ),
536 display_name=self._display_name,
TypeError: <tensorflow.python.eager.def_function.Function object at 0x7f9f3a78e5c0> has type Function, but expected one of: bytes, unicode
I also tried running the following code:
with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
    hp.hparams_config(
        hparams=[HP_NUM_UNITS, HP_OPTIMIZER],
        metrics=[dl_tf_loss],
    )
but got the following error:
AttributeError Traceback (most recent call last)
<ipython-input-28-6778bdf7f1b1> in <module>()
8
9 with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
---> 10 hp.hparams_config(hparams=[HP_NUM_UNITS, HP_OPTIMIZER],metrics=[dl_tf_loss])
2 frames
/usr/local/lib/python3.6/dist-packages/tensorboard/plugins/hparams/summary_v2.py in <listcomp>(.0)
161 domain.update_hparam_info(info)
162 hparam_infos.append(info)
--> 163 metric_infos = [metric.as_proto() for metric in metrics]
164 experiment = api_pb2.Experiment(
165 hparam_infos=hparam_infos,
AttributeError: 'Function' object has no attribute 'as_proto'
Would greatly appreciate any help.
Thanks in advance!
I figured it out.
The original METRIC_ACCURACY, which I changed to METRIC_LOSS, is apparently just a name: I needed to pass 'dl_tf_loss' as a string, not the function itself.
In the subsequent parts of the tuning I need to write my fit command anyway, and there I insert the actual loss function, as in my example of the regular fit function above.
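A minimal sketch of the fix, assuming the loss function is named dl_tf_loss as above (the metric tag is just a string that must match the name under which the loss is later logged):
METRIC_LOSS = 'dl_tf_loss'  # a string tag, not the function object

with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
    hp.hparams_config(
        hparams=[HP_NUM_UNITS, HP_OPTIMIZER],
        metrics=[hp.Metric(METRIC_LOSS, display_name='Loss')],
    )

# later, per trial, the actual function goes into compile():
# model.compile(optimizer=adam, loss=dl_tf_loss, metrics=[dl_tf_loss])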
Highly recommend this as a way of tuning the hyperparameters.
You might be interested in this demo. Compiling the model with dl_tf_loss in the metrics will waste time. It is possible to let hp.Metric know about other recorded summaries in different directories using the group argument.
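For example (a sketch; the 'epoch_loss' tag and 'validation' group are assumptions that depend on what your training run actually logs):
# point the hparams plugin at an already-recorded summary instead of
# re-computing dl_tf_loss as a compiled metric
hp.Metric('epoch_loss', group='validation', display_name='validation loss')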

GetMap request return white image

I have a problem trying to display my map with a GetMap request using MapServer: it returns a white image. I searched but didn't find an answer.
My map file:
MAP
  IMAGETYPE PNG
  EXTENT -21 20 1 36
  SIZE 700 400
  IMAGECOLOR 255 255 255
  PROJECTION
    "init=epsg:4326"
  END
  OUTPUTFORMAT
    NAME png
    MIMETYPE image/png
    DRIVER GD/PNG
    EXTENSION png
    IMAGEMODE PC256
    TRANSPARENT FALSE
  END
  WEB
    METADATA
      "wms_title" "Dans Layers and Stuff"
      "wms_onlineresource" "http://localhost:81/cgi-bin/mapserv.exe?"
      "wms_enable_request" "*"
      "wms_srs" "EPSG:4326"
      "wms_feature_info_mime_type" "text/html"
      "wms_format" "image/png"
    END
  END
  LAYER
    NAME map1
    TYPE polygon
    STATUS default
    CONNECTIONTYPE postgis
    CONNECTION "dbname=postgres host=localhost port=5432 user=postgres"
    DATA "geom from comgeo"
    PROJECTION
      "init=epsg:4326"
    END
    METADATA
      "wms_title" "map1"
    END
    PROCESSING "SCALE=AUTO"
    CLASS
      STYLE
        COLOR 232 232 232
        OUTLINECOLOR 32 32 32
      END
    END
  END
END
And this is the link I used for my request:
http://localhost:81/cgi-bin/mapserv.exe?map=/wamp64/www/wordpress/map1.map&version=1.3.0&request=GetMap&CRS=EPSG:4326&bbox=-21,20,1,36&width=760&height=360&layers=map1&styles=&FORMAT=image/png&TRANSPARENT=TRUE
The BBOX values are correct. Thank you.
You appear to be missing the SERVICE=WMS parameter in your URL.
I solved the problem by replacing EPSG:4326 with CRS:84 in the URL:
http://localhost:81/cgi-bin/mapserv.exe?map=/wamp64/www/wordpress/map1.map&request=GetMap&SERVICE=WMS&version=1.3.0&CRS=CRS:84&bbox=-21,20,1,36&width=700&height=400&layers=map1&styles=&FORMAT=image/png&TRANSPARENT=TRUE
WMS 1.1.1 and WMS 1.3.0 use different request parameters for the coordinate system: SRS=EPSG:4326 for 1.1.1 and CRS=CRS:84 for 1.3.0. (In WMS 1.3.0, EPSG:4326 is interpreted with latitude/longitude axis order, while CRS:84 keeps longitude/latitude, which is why the same BBOX works with CRS:84.)
See the MapServer WMS documentation.

How do I feed in my own data into PyAlgoTrade?

I'm trying to use PyAlgoTrade's event profiler.
However, I don't want to use data from Yahoo! Finance; I want to use my own, but I can't figure out how to parse in the CSV. It is in the format:
Timestamp Low Open Close High BTC_vol USD_vol [8] [9]
2013-11-23 00 800 860 847.666666 886.876543 853.833333 6195.334452 5248330 0
2013-11-24 00 745 847.5 815.01 860 831.255 10785.94131 8680720 0
The complete CSV is here
I want to do something like:
def main(plot):
    instruments = ["AA", "AES", "AIG"]
    feed = yahoofinance.build_feed(instruments, 2008, 2009, ".")
Then replace yahoofinance.build_feed(instruments, 2008, 2009, ".") with my CSV
I tried:
import csv

with open('FinexBTCDaily.csv', 'rb') as csvfile:
    data = csv.reader(csvfile)

def main(plot):
    feed = data
But it throws an attribute error. Any ideas how to do this?
I suggest creating your own RowParser and Feed, which is much easier than it sounds; have a look here: yahoofeed.
This also allows you to work with intraday data and to clean up the data if needed, like your timestamp.
Another possibility, of course, is to parse your file and save it so that it looks like a Yahoo! feed. In your case, you would have to adapt the columns and the timestamp; a sketch of that follows.
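A minimal sketch of that pre-conversion, assuming the file is comma-separated with the columns shown in the question (the output file name and the choice of BTC_vol as Volume are assumptions):
import pandas as pd

df = pd.read_csv('FinexBTCDaily.csv',
                 names=['Timestamp', 'Low', 'Open', 'Close', 'High',
                        'BTC_vol', 'USD_vol', 'col8', 'col9'],
                 header=0)

out = pd.DataFrame({
    'Date Time': pd.to_datetime(df['Timestamp'], format='%Y-%m-%d %H'),
    'Open':      df['Open'],
    'High':      df['High'],
    'Low':       df['Low'],
    'Close':     df['Close'],
    'Volume':    df['BTC_vol'],  # assumption: BTC volume is the relevant volume
    'Adj Close': df['Close'],    # no adjusted close in the source; reuse Close
})
out.to_csv('FinexBTCDaily_generic.csv', index=False)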
Step A: follow PyAlgoTrade doc on GenericBarFeed class
On this link see the addBarsFromCSV() in CSV section of the BarFeed class in v0.16
On this link see the addBarsFromCSV() in CSV section of the BarFeed class in v0.17
Note
- The CSV file must have the column names in the first row.
- It is ok if the Adj Close column is empty.
- When working with multiple instruments:
--- If all the instruments loaded are in the same timezone, then the timezone parameter may not be specified.
--- If any of the instruments loaded are in different timezones, then the timezone parameter should be set.
addBarsFromCSV( instrument, path, timezone = None )
Loads bars for a given instrument from a CSV formatted file. The instrument gets registered in the bar feed.
Parameters:
(string) instrument – Instrument identifier.
(string) path – The path to the CSV file.
(pytz) timezone – The timezone to use to localize bars.Check pyalgotrade.marketsession.
Next:
A BarFeed loads bars from CSV files that have the following format:
Date Time, Open, High, Low, Close, Volume, Adj Close
2013-01-01 13:59:00,13.51001,13.56,13.51,13.56789,273.88014126,13.51001
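Putting Step A together, loading a file in that format could look like this (a sketch; the instrument name and the file name produced above are assumptions):
from pyalgotrade.bar import Frequency
from pyalgotrade.barfeed import csvfeed

feed = csvfeed.GenericBarFeed(Frequency.DAY)
# expects the documented header: Date Time, Open, High, Low, Close, Volume, Adj Close
feed.addBarsFromCSV('BTC', 'FinexBTCDaily_generic.csv')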
Step B: implement a documented CSV-file pre-formatting
Your CSV data will need a bit of sanitizing before it can be used in PyAlgoTrade methods; however, it is doable, and you can create an easy transformer either by hand or with the powerful lambda-based converters facility of numpy.genfromtxt().
This sample code is intended for illustration purposes, to show the power of converters for your own transformations, as CSV structures differ.
import datetime

import numpy
from matplotlib import dates as mPlotDATEs  # assumption: date2num is matplotlib.dates.date2num

with open( getCsvFileNAME( ... ), "r" ) as aFH:   # getCsvFileNAME(): your own path helper
    numpy.genfromtxt( aFH,
                      skip_header = 1,            # Ref. pyalgotrade
                      delimiter   = ",",
                      #      v     v     v       v       v       v
                      # 2011.08.30,12:00,1791.20,1792.60,1787.60,1789.60,835
                      # 2011.08.30,13:00,1789.70,1794.30,1788.70,1792.60,550
                      # 2011.08.30,14:00,1792.70,1816.70,1790.20,1812.10,1222
                      # 2011.08.30,15:00,1812.20,1831.50,1811.90,1824.70,2373
                      # 2011.08.30,16:00,1824.80,1828.10,1813.70,1817.90,2215
                      converters  = { # date "YYYY.MM.DD" -> matplotlib float days ( 1.0, +++ )
                                      0: lambda aString: mPlotDATEs.date2num( datetime.datetime.strptime( aString, "%Y.%m.%d" ) ),
                                      # time "HH:MM" -> fraction of a day, e.g. ( 15*60 + 00 ) / 60. / 24., in < 0.0, 1.0 )
                                      1: lambda aString: ( int( aString[0:2] ) * 60 + int( aString[3:] ) ) / 60. / 24.
                                      }
                      )
You can use the bar feed's addBarsFromSequence with a list comprehension (or an index map, as below) to feed in data from a CSV row by row / bar by bar. Basically, you create a bar from each row, passing OHLCV as init parameters and the extra columns with additional data in a dictionary. You can try something like this (with all the required imports):
import numpy as np
import pandas as pd
from pyalgotrade.bar import BasicBar, Frequency
from pyalgotrade.barfeed import yahoofeed

data = pd.DataFrame(index=pd.date_range(start='2021-11-01', end='2021-11-05'),
                    columns=['Open', 'High', 'Low', 'Close', 'Adj Close', 'Volume',
                             'ExtraCol1', 'ExtraCol3', 'ExtraCol4', 'ExtraCol5'],
                    data=np.random.rand(5, 10))

feed = yahoofeed.Feed()
feed.addBarsFromSequence('instrumentID', data.index.map(lambda i:
    BasicBar(
        i,
        data.loc[i, 'Open'],
        data.loc[i, 'High'],
        data.loc[i, 'Low'],
        data.loc[i, 'Close'],
        data.loc[i, 'Volume'],
        data.loc[i, 'Adj Close'],
        Frequency.DAY,
        data.loc[i, 'ExtraCol1':].to_dict())
    ).values)
The input data frame was created with random values to make this example easier to reproduce, but the part where the bars are added to the feed works the same for data frames read from CSVs, given that valid column names are used.