I'm writing a GIS application using DotSpatial 2.0. I have data from the town hall, and
Shapefile sf = Shapefile.OpenFile(Path);
throws
DotSpatial.Projections.ProjectionException
'Projection Not Found'
ProjectionInfo proj = ProjectionInfo.Open(Path);
throws:
DotSpatial.Projections.InvalidEsriFormatException
Could anyone tell me what is wrong with this file, and how to handle this case?
The file content is:
PROJCS["ETRS89 / Poland CS2000 zone 6",
GEOGCS["ETRS89",
DATUM["European Terrestrial Reference System 1989",
SPHEROID["GRS 1980", 6378137.0, 298.257222101,
AUTHORITY["EPSG","7019"]],
TOWGS84[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
AUTHORITY["EPSG","6258"]],
PRIMEM["Greenwich", 0.0, AUTHORITY["EPSG","8901"]],
UNIT["degree", 0.017453292519943295],
AXIS["Geodetic longitude", EAST],
AXIS["Geodetic latitude", NORTH],
AUTHORITY["EPSG","4258"]],
PROJECTION["Transverse_Mercator", AUTHORITY["EPSG","9807"]],
PARAMETER["central_meridian", 18.0],
PARAMETER["latitude_of_origin", 0.0],
PARAMETER["scale_factor", 0.999923],
PARAMETER["false_easting", 6500000.0],
PARAMETER["false_northing", 0.0],
UNIT["m", 1.0],
AXIS["Easting", EAST],
AXIS["Northing", NORTH],
AUTHORITY["EPSG","2177"]]
Other applications like SpatialManager and ArcMap read it correctly.
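For what it's worth, the content is OGC-style WKT for EPSG:2177 (note the AXIS and AUTHORITY nodes), which may be why DotSpatial's ESRI-oriented parser rejects it while other tools accept it. A quick cross-check outside DotSpatial, sketched here with pyproj ('data.prj' is a placeholder path, not from the original post), shows the string itself parses fine:
from pyproj import CRS

# read the .prj text and parse it as WKT; 'data.prj' is a placeholder path
with open('data.prj') as f:
    crs = CRS.from_wkt(f.read())

print(crs.name)       # ETRS89 / Poland CS2000 zone 6
print(crs.to_epsg())  # 2177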
I want to animate a bar chart in manim and it works just fine. However, the bar_names are long and have to be displayed rather small. Is there a way to rotate them so they can be displayed bigger?
CONFIG = {
    "max_value": 100,
    "bar_names": ["Fleisch von Wiederkäuern", "Anderes Fleisch, Fisch", "Milchprodukte", "Früchte", "Snacks, etc.", "Gemüse", "Pflanzliche Öle", "Getreideprodukte", "Pflanzliche Proteine"],
    "bar_label_scale_val": 0.2,
    "bar_stroke_width": 0,
    "width": 10,
    "height": 6,
    "label_y_axis": False,
}

def construct(self):
    composition = [96.350861, 18.5706488, 14.7071608, 8.25588773, 7.33856028, 4.24083463, 1.65574964, 1.36437485, 1]
    chart = BarChart(values=composition, **self.CONFIG)
    self.play(Write(chart), run_time=2)
Maybe something like this?
(I just made new labels, so delete the old ones, or set their size to 0 as I do here with bar_label_scale_val.)
(I also modified some of the names, because my LaTeX crashed on some characters.)
CONFIG = {
    "height": 4,
    "width": 10,
    "n_ticks": 4,
    "tick_width": 0.2,
    "label_y_axis": False,
    "y_axis_label_height": 0.25,
    "max_value": 100,
    "bar_colors": [BLUE, YELLOW],
    "bar_fill_opacity": 0.8,
    "bar_stroke_width": 0,
    "bar_names": ["Fleisch von Wiederkuern", "Anderes Fleisch, Fisch", "Milchprodukte", "Frchte", "Snacks, etc.", "Gemse", "Pflanzliche le", "Getreideprodukte", "Pflanzliche Proteine"],
    "bar_label_scale_val": 0  # hide the chart's own labels; we draw our own below
}

def construct(self):
    bar_names = ["Fleisch von Wiederkuern", "Anderes Fleisch, Fisch", "Milchprodukte", "Frchte", "Snacks, etc.", "Gemse", "Pflanzliche le", "Getreideprodukte", "Pflanzliche Proteine"]
    Lsize = 0.55        # label scale
    Lseparation = 1.1   # horizontal distance between labels
    Lpositionx = -5.4   # x position of the first label
    Lpositiony = 2      # vertical offset below the chart
    bar_labels = VGroup()
    for i in range(len(bar_names)):
        label = TexMobject(bar_names[i])
        label.scale(Lsize)
        label.move_to(DOWN * Lpositiony + (i * Lseparation + Lpositionx) * RIGHT)
        label.rotate(np.pi * (1.5 / 6))  # rotate() takes radians; this is 45 degrees
        bar_labels.add(label)
    composition = [96.350861, 18.5706488, 14.7071608, 8.25588773, 7.33856028, 4.24083463, 1.65574964, 1.36437485, 1]
    chart = BarChart(values=composition, **self.CONFIG)
    chart.shift(UP)
    self.play(Write(chart), Write(bar_labels), run_time=2)
# Manim Community Version 0.7.0 in Google Colab
%%manim -qm -v WARNING BarChartExample2
from manim import *
import numpy as np
mobject.probability.np = np  # make numpy available inside manim's probability module

class BarChartExample2(Scene):
    CONFIG = {
        "height": 4,
        "width": 10,
        "n_ticks": 4,
        "tick_width": 0.2,
        "label_y_axis": False,
        "y_axis_label_height": 0.25,
        "max_value": 100,
        "bar_colors": [BLUE, YELLOW],
        "bar_fill_opacity": 0.8,
        "bar_stroke_width": 0,
        "bar_names": ["Fleisch von Wiederkuern", "Anderes Fleisch, Fisch", "Milchprodukte",
                      "Frchte", "Snacks, etc.", "Gemse", "Pflanzliche le", "Getreideprodukte", "Pflanzliche Proteine"],
        "bar_label_scale_val": 0
    }

    def construct(self):
        bar_names = ["Fleisch von Wiederkuern", "Anderes Fleisch, Fisch", "Milchprodukte",
                     "Frchte", "Snacks, etc.", "Gemse", "Pflanzliche le", "Getreideprodukte", "Pflanzliche Proteine"]
        Lsize = 0.55
        Lseparation = 1.1
        Lpositionx = -5.4
        Lpositiony = 2
        bar_labels = VGroup()
        for i in range(len(bar_names)):
            # label = TexMobject(bar_names[i])  # old-style manim name
            label = MathTex(bar_names[i])       # community-edition replacement
            label.scale(Lsize)
            label.move_to(DOWN * Lpositiony + (i * Lseparation + Lpositionx) * RIGHT)
            label.rotate(np.pi * (1.5 / 6))     # 45 degrees
            bar_labels.add(label)
        composition = [96.350861, 18.5706488, 14.7071608, 8.25588773, 7.33856028, 4.24083463, 1.65574964, 1.36437485, 1]
        chart = BarChart(values=composition, **self.CONFIG)
        chart.shift(UP)
        self.play(Write(chart), Write(bar_labels), run_time=2)
I am trying to train a model using YOLOv5.
I am getting a "Dataset not found" issue.
I have train, test, and valid folders that contain all the image and label files.
I have tested the files on Google Colab and it does work. However, on my local machine it shows the issue of Exception: Dataset not found.
(Yolo_5) D:\\YOLO_V_5\Yolo_V5\yolov5>python train.py --img 416 --batch 8 --epochs 100 --data /data.yaml --cfg models/yolov5s.yaml --weights '' --name yolov5s_results --cache
Using torch 1.7.0 CUDA:0 (GeForce GTX 1080, 8192MB)
Namespace(adam=False, batch_size=8, bucket='', cache_images=True, cfg='models/yolov5s.yaml', data='.\\data.yaml', device='', epochs=100, evolve=False, exist_ok=False, global_rank=-1, hyp='data/hyp.scratch.yaml', image_weights=False, img_size=[416, 416], local_rank=-1, log_imgs=16, multi_scale=False, name='yolov5s_results', noautoanchor=False, nosave=False, notest=False, project='runs/train', rect=False, resume=False, save_dir='runs\\train\\yolov5s_results55', single_cls=False, sync_bn=False, total_batch_size=8, weights="''", workers=16, world_size=1)
Start Tensorboard with "tensorboard --logdir runs/train", view at http://localhost:6006/
Hyperparameters {'lr0': 0.01, 'lrf': 0.2, 'momentum': 0.937, 'weight_decay': 0.0005, 'warmup_epochs': 3.0, 'warmup_momentum': 0.8, 'warmup_bias_lr': 0.1, 'box': 0.05, 'cls': 0.5, 'cls_pw': 1.0, 'obj': 1.0, 'obj_pw': 1.0, 'iou_t': 0.2, 'anchor_t': 4.0, 'anchors': 3, 'fl_gamma': 0.0, 'hsv_h': 0.015, 'hsv_s': 0.7, 'hsv_v': 0.4, 'degrees': 0.0, 'translate': 0.1, 'scale': 0.5, 'shear': 0.0, 'perspective': 0.0, 'flipud': 0.0, 'fliplr': 0.5, 'mosaic': 1.0, 'mixup': 0.0}
WARNING: Dataset not found, nonexistent paths: ['D:\\me1eye\\Yolo_V5\\valid\\images']
Traceback (most recent call last):
File "train.py", line 501, in <module>
train(hyp, opt, device, tb_writer, wandb)
File "train.py", line 78, in train
check_dataset(data_dict) # check
File "D:\me1eye\YOLO_V_5\Yolo_V5\yolov5\utils\general.py", line 92, in check_dataset
raise Exception('Dataset not found.')
Exception: Dataset not found.
Internal process exited
(Olive_Yolo_5) D:\me1eye\YOLO_V_5\Yolo_V5\yolov5>
There is a much simpler solution: just go into data.yaml, wherever you saved it, and change the relative paths to absolute ones, i.e. write out the whole path, e.g.
train: C:\hazlab\BCCD\train\images
val: C:\hazlab\BCCD\valid\images
nc: 3
names: ['Platelets', 'RBC', 'WBC']
Job done. Note that, as you are on Windows, there is a known issue in the invocation of train.py: do not use quotes around the file names in the CLI, e.g.
!python train.py --img 416 --batch 16 --epochs 100 --data C:\hazlab\BCCD\data.yaml --cfg ./models/custom_yolov5s.yaml --weights '' --name yolov5s_results --cache
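If you have several of these files to fix, a small Python sketch can do the rewrite. This is a hedged example only, assuming PyYAML is installed and reusing the C:\hazlab\BCCD location from above; point it at your own data.yaml:
import yaml
from pathlib import Path

p = Path(r'C:\hazlab\BCCD\data.yaml')  # location from the example above; change to yours
cfg = yaml.safe_load(p.read_text())
for key in ('train', 'val', 'test'):
    if key in cfg:
        # resolve each entry relative to the yaml's folder and write it back as absolute
        cfg[key] = str((p.parent / cfg[key]).resolve())
p.write_text(yaml.safe_dump(cfg))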
Well! I have also encountered this problem, and now I have fixed it.
All you have to do is keep the train, test, and valid folders (the three folders containing images and labels) and the yolov5 folder (the one cloned from GitHub) in the same directory; see the layout sketch after the command below. Also, the data.yaml file has to be inside the yolov5 folder.
The command to train the model would then look like this:
!python train.py --img 416 --batch 16 --epochs 10 --data ./data.yaml --cfg ./models/yolov5m.yaml --weights '' --name yolov5m_results
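For illustration, the layout described above would look something like this (folder names other than yolov5 and data.yaml are placeholders):
your_project/
    train/    (images/ and labels/)
    test/     (images/ and labels/)
    valid/    (images/ and labels/)
    yolov5/   (cloned from GitHub)
        data.yaml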
The issue is caused by the actual dataset path not being found. I hit the same issue when I trained a YOLOv5 model on a custom dataset using Google Colab, and I did the following to resolve it:
Make sure you provide the correct path to the dataset's data.yaml.
Make sure the dataset paths inside data.yaml are correct.
The train, test, and valid keys should contain paths relative to the main dataset path.
An example data.yaml file is given below, followed by a small script to check the paths.
path: /content/drive/MyDrive/car-detection-dataset
train: train/images
val: valid/images
test: test/images
nc: 1
names: ['car']
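To verify that the entries resolve before launching train.py, here is a minimal check script. It is an assumption-laden sketch (PyYAML installed, data.yaml in the current directory), not part of YOLOv5 itself:
import yaml
from pathlib import Path

cfg = yaml.safe_load(Path('data.yaml').read_text())
root = Path(cfg.get('path', '.'))  # optional dataset root, as in the example above
for key in ('train', 'val', 'test'):
    if key in cfg:
        p = root / cfg[key]
        print(f"{key}: {p} -> {'OK' if p.exists() else 'MISSING'}")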
I have created a CSV file which contains the label and the word frequencies,
e.g.
0, 4.0, 0.0, 0.0, 1.0, 0.0
0, 0.0, 1.0, 2.0, 0.0, 0.0
1, 1.0, 0.0, 0.0, 0.0, 3.0
where index zero represents the label (0 or 1).
My question is: how do I import this kind of CSV file into MALLET to generate an instance list, and how do I pass this file to the Naive Bayes classifier?
I found the answer to my own question.
In MALLET, there are pipes which convert CSV input to a feature vector:
import cc.mallet.pipe.*;    // Pipe, Csv2Array, Target2Label, Array2FeatureVector
import java.util.ArrayList;

ArrayList<Pipe> pipeList = new ArrayList<Pipe>();
pipeList.add(new Csv2Array());           // comma-separated values -> array of doubles
pipeList.add(new Target2Label());        // target field -> Label in the label alphabet
pipeList.add(new Array2FeatureVector()); // array -> FeatureVector
Output for the example above:
0 and 1 are taken as target names.
For the first line:
1(1)=4.0
2(2)=0.0
3(3)=0.0
4(4)=1.0
5(5)=0.0
and likewise for the other two lines.
I'm new to programming and am trying to parse some data returned from Yelp's API. From this data, how could I return just the phone number (display_phone) and the address? Thank you.
Result for business "little-miss-bbq-phoenix-2" found:
{ u'categories': [[u'Barbeque', u'bbq']],
u'display_phone': u'+1-602-437-1177',
u'id': u'little-miss-bbq-phoenix-2',
u'image_url': u'http://s3-media2.fl.yelpcdn.com/bphoto/4Rcm0IIbRhdo-4Z4KPvuXQ/ms.jpg',
u'is_claimed': True,
u'is_closed': False,
u'location': { u'address': [u'4301 E University Dr'],
u'city': u'Phoenix',
u'coordinate': { u'latitude': 33.421587,
u'longitude': -111.989088},
u'country_code': u'US',
u'display_address': [ u'4301 E University Dr',
u'Phoenix, AZ 85034'],
u'geo_accuracy': 9.5,
u'postal_code': u'85034',
u'state_code': u'AZ'},
u'mobile_url': u'http://m.yelp.com/biz/little-miss-bbq-phoenix-2',
u'name': u'Little Miss BBQ',
u'phone': u'6024371177',
u'rating': 5.0,
u'rating_img_url': u'http://s3-media1.fl.yelpcdn.com/assets/2/www/img/f1def11e4e79/ico/stars/v1/stars_5.png',
u'rating_img_url_large': u'http://s3-media3.fl.yelpcdn.com/assets/2/www/img/22affc4e6c38/ico/stars/v1/stars_large_5.png',
u'rating_img_url_small': u'http://s3-media1.fl.yelpcdn.com/assets/2/www/img/c7623205d5cd/ico/stars/v1/stars_small_5.png',
u'review_count': 403,
u'reviews': [ { u'excerpt': u"I saw that this place had almost 400 reviews and that they have a perfect 5 star rating. It sounded too good to be true BUT it's worth every star and...",
u'id': u'-9poa0ycpVnOveVlqbYE9Q',
u'rating': 5,
u'rating_image_large_url': u'http://s3-media3.fl.yelpcdn.com/assets/2/www/img/22affc4e6c38/ico/stars/v1/stars_large_5.png',
u'rating_image_small_url': u'http://s3-media1.fl.yelpcdn.com/assets/2/www/img/c7623205d5cd/ico/stars/v1/stars_small_5.png',
u'rating_image_url': u'http://s3-media1.fl.yelpcdn.com/assets/2/www/img/f1def11e4e79/ico/stars/v1/stars_5.png',
u'time_created': 1431095420,
u'user': { u'id': u'd43iQ50HjWIl4vN4rBgoVQ',
u'image_url': u'http://s3-media4.fl.yelpcdn.com/photo/sn17KBbRjXOEELnjSir1tg/ms.jpg',
u'name': u'Jason J.'}}],
u'snippet_image_url': u'http://s3-media4.fl.yelpcdn.com/photo/sn17KBbRjXOEELnjSir1tg/ms.jpg',
u'snippet_text': u"I saw that this place had almost 400 reviews and that they have a perfect 5 star rating. It sounded too good to be true BUT it's worth every star and...",
u'url': u'http://www.yelp.com/biz/little-miss-bbq-phoenix-2'}
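The result shown above is just a nested Python dictionary (pretty-printed), so you can index into it directly. A minimal sketch; the business dict below is a trimmed copy of the output, kept small so the snippet runs on its own:
# trimmed copy of the Yelp response above, just enough for the two lookups
business = {
    'display_phone': '+1-602-437-1177',
    'location': {
        'display_address': ['4301 E University Dr', 'Phoenix, AZ 85034'],
    },
}

print(business['display_phone'])                           # +1-602-437-1177
print(', '.join(business['location']['display_address']))  # 4301 E University Dr, Phoenix, AZ 85034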
I used the fastgreedy algorithm in igraph for community detection in a weighted, undirected graph. Afterwards I wanted to have a look at the modularity, and I got different values from different methods; I am wondering why. Here is a short example which demonstrates my problem:
library(igraph)
d<-matrix(c(1, 0.2, 0.3, 0.9, 0.9,
0.2, 1, 0.6, 0.4, 0.5,
0.3, 0.6, 1, 0.1, 0.8,
0.9, 0.4, 0.1, 1, 0.5,
0.9, 0.5, 0.8, 0.5, 1), byrow=T, nrow=5)
g<-graph.adjacency(d, weighted=T, mode="lower",diag=FALSE, add.colnames=NA)
fc<-fastgreedy.community(g)
fc$modularity[3]
#[1] -0.05011095
modularity(g,membership=cutat(fc,steps=2),weights=get.adjacency(g,attr="weight"))
#[1] 0.07193047
I would expect both of the values to be identical and if I try the same with an unweighted graph, I get the same values.
d2<-round(d,digits=0)
g2<- graph.adjacency(d2, weighted=NULL, mode="lower",diag=FALSE, add.colnames=NA)
fc2<-fastgreedy.community(g2)
plot(fc2,g2)
fc2$modularity[3]
#[1] 0.15625
modularity(g2,membership=cutat(fc2,steps=2))
#[1] 0.15625
Another user had a similar problem, but I have the current version of igraph, so that should not be the problem. Can someone explain to me why there is a difference, or is there a problem with my code that I don't see?
The line
modularity(g,membership=cutat(fc,steps=2),weights=get.adjacency(g,attr="weight"))
is wrong: the weights argument of modularity() expects a vector with one weight per edge, in the same order as E(g), not an adjacency matrix, so passing the result of get.adjacency() silently produces a wrong value. If you want to pass the edge weights to modularity(), do it with E(g)$weight:
modularity(g, membership = cutat(fc, steps = 2), weights = E(g)$weight)
# [1] -0.05011095
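For what it's worth, the same pitfall exists in python-igraph. This is only an illustrative sketch in another language, not part of the original R answer: the weights argument wants one value per edge, most easily taken from the weight edge attribute.
import igraph as ig

# same 5x5 weighted matrix as in the question; lower triangle, no self-loops
d = [[1, 0.2, 0.3, 0.9, 0.9],
     [0.2, 1, 0.6, 0.4, 0.5],
     [0.3, 0.6, 1, 0.1, 0.8],
     [0.9, 0.4, 0.1, 1, 0.5],
     [0.9, 0.5, 0.8, 0.5, 1]]
g = ig.Graph.Weighted_Adjacency(d, mode="lower", attr="weight", loops=False)

dend = g.community_fastgreedy(weights=g.es["weight"])    # weighted clustering
membership = dend.as_clustering(3).membership            # cut the dendrogram
print(g.modularity(membership, weights=g.es["weight"]))  # pass edge weights, not a matrix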