NetLogo GIS extension exception: invalid cell size on line 5

How can I solve a NetLogo error like
Extension exception: invalid cell size on line 5
when I try to load an AsciiGrid (.asc) raster with:
set slope gis:load-dataset "data_carto/DTMBanyulsEPSG2154/small_slope.asc"
I have found the extension code on GitHub (line 88), but I don't really understand how it works.
Thanks.
Update:
The header of my .asc file:
ncols 346
nrows 270
xllcorner 3.087906007412
yllcorner 42.451833343014
dx 0.000106344549
dy 0.000106459930
0 27.467638015747070312 31.712091445922851562 35.38886260986328125 36.1437835693359375 36.798412322998046875 36.798412322998046875 36.37$
0 26.552234649658203125 31.561212539672851562 35.23743438720703125 35.762996673583984375 35.20586395263671875 35.20586395263671875 34.34$
0 27.206226348876953125 29.196367263793945312 30.581308364868164062 29.855892181396484375 29.219537734985351562 29.219537734985351562 29$
Is there something wrong?

The GIS extension is expecting line 5 of your .asc file to start with "CELLSIZE" (the value of the CELL_SIZE constant here), in either upper or lower case. If line 5 doesn't start with that value, the extension reports an error as you're seeing. If your .asc file doesn't have cellsize on line 5, you may need to re-arrange the lines of the .asc file.
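For reference, a standard ESRI ASCII grid header carries a single cellsize value on line 5, something like this (the numbers below just reuse values from your header to show the layout; the NODATA_value line is optional):
ncols 346
nrows 270
xllcorner 3.087906007412
yllcorner 42.451833343014
cellsize 0.000106344549
NODATA_value -9999
Your file has separate dx and dy lines instead of a single cellsize line, so line 5 starts with "dx" rather than "cellsize" and the check fails.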

Finally I have found where my error came from ... :-)
@Eric Russell was of course right!
My error came from the GDAL conversion of my .tif file to a .asc file.
After GDAL version 1.9 (I believe) you need to add a special option to the gdal_translate command: -co FORCE_CELLSIZE=TRUE.
With:
gdal_translate -of "AAIGrid" -b 1 -co FORCE_CELLSIZE=TRUE DTMBanyulsEPSG2154/small_slope.tif DTMBanyulsEPSG2154/small_slope.asc
It works, and the header is:
ncols 321
nrows 250
xllcorner 3.087906007412
yllcorner 42.451920815321
cellsize 0.000114626835

I had a similar problem while rasterizing a shapefile. It was solved by reprojecting the rasterized file (.asc) to my target CRS in QGIS; a command-line equivalent is sketched below. I hope this helps ;)
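If you prefer to stay on the command line, GDAL can do the same reprojection. The lines below are only an outline: EPSG:2154 is a guess at the target CRS (taken from the folder name in the question) and the file names are placeholders, so substitute your own. As far as I know the ASCII grid driver cannot be written directly by gdalwarp, so warping to GeoTIFF first and then translating is the safer route:
gdalwarp -t_srs EPSG:2154 rasterized.asc reprojected.tif
gdal_translate -of "AAIGrid" -co FORCE_CELLSIZE=TRUE reprojected.tif reprojected.asc
The FORCE_CELLSIZE option again keeps the header with a single cellsize line, as above.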

Related

Failed to generate Tesseract traineddata

I'm using Tesseract v5.0.1.20220118 on Windows 10, training a font that only has the letters "P" and "Q".
When I get to the step
mftraining -F font_properties.txt -U unicharset -O normal.unicharset pq.normal.exp0.tr
The pffmtable file is not generated.
And when I run cntraining pq.normal.exp0.tr
It shows me
Reading pq.normal.exp0.tr ...
Clustering ...
N == sizeof(Cluster->Mean):Error:Assert failed:in file ../../../src/classify/cluster.cpp, line 2526
Why does it go wrong? How can I fix it?
Only inttemp and shapetable are generated, but the tutorial says there will be four files, including shapetable, inttemp, pffmtable and normproto. I wonder if that is because the font only has the letters "P" and "Q", but I have no idea how to solve it.
Please read the docs:
https://tesseract-ocr.github.io/tessdoc/#training-for-tesseract-5
Use the right tools:
https://github.com/tesseract-ocr/tesstrain
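To expand on that: mftraining and cntraining belong to the legacy (Tesseract 3.x) engine, while Tesseract 4/5 models are LSTM-based and are trained with the tesstrain Makefile instead. A rough sketch of a tesstrain run is below; the variable names follow the tesstrain README, but the model name, paths and iteration count are placeholders you would need to adapt:
git clone https://github.com/tesseract-ocr/tesstrain
cd tesstrain
make training MODEL_NAME=pq START_MODEL=eng TESSDATA=/path/to/tessdata_best MAX_ITERATIONS=10000
Ground-truth line images and their transcriptions go under data/MODEL_NAME-ground-truth by default, I believe; see the repository README for the full list of options.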

I am getting an error when training yolov5

I am trying to train a yolov5 model, but I'm getting an exception error when I try to execute the training module. The error occurs after the model is loaded and when it tries to read the training images. Below is my code and an excerpt of the error. Any help would be appreciated.
!python train.py --img 640 --batch 16 --epochs 150 --data pollen_data.yaml --weights yolov5x.pt
Model summary: 567 layers, 86217814 parameters, 86217814 gradients, 204.2 GFLOPs
Transferred 739/745 items from yolov5x.pt
Scaled weight_decay = 0.0005
optimizer: SGD with parameter groups 123 weight (no decay), 126 weight, 126 bias
albumentations: version 1.0.3 required by YOLOv5, but version 0.1.12 is currently installed
Traceback (most recent call last):
File "/content/yolov5/utils/datasets.py", line 405, in __init__
t = t.read().strip().splitlines()
File "/usr/lib/python3.7/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "train.py", line 643, in <module>
main(opt)
File "train.py", line 539, in main
train(opt.hyp, opt, device, callbacks)
File "train.py", line 227, in train
prefix=colorstr('train: '), shuffle=True)
File "/content/yolov5/utils/datasets.py", line 110, in create_dataloader
prefix=prefix)
File "/content/yolov5/utils/datasets.py", line 415, in __init__
raise Exception(f'{prefix}Error loading data from {path}: {e}\nSee {HELP_URL}')
Exception: train: Error loading data from /content/datasets/images/training/im0.jpg: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
The training images I have (im0.jpg and im1.jpg) are two large files. The first has dimensions of 9058 x 11185, and the second file is 13385 x 12832. I realize they are not square but I'm assuming that the train.py module will make them square, so it's okay. Is that right?
Or could the non-square dimensions be causing the choke?
Also, what is the meaning of the exception "error loading data from /content/datasets/images/training/im0.jpg: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte"?
Thank you.
I've been using YOLOv5 for the past month. I must say your error is weird.
Also, you can't train your model with an image size of 12000. By default it should be 640. In your case it might change based on your dataset, but I'm quite sure it won't be 12000.
There is also a mistake in your data path.
--data /content/datasets/annotations/dataset.yaml.txt
The data file won't have a '.txt' extension. It should be a '.yaml' file. So change that to
--data /content/datasets/annotations/dataset.yaml
It should start training after these changes. If not, close this question, add more information, and ask a new one.
The error
'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
is raised when you use an image in a format that is not in the default list.
IMG_FORMATS = 'bmp', 'dng', 'jpeg', 'jpg', 'mpo', 'png', 'tif', 'tiff', 'webp' # include image suffixes
But you have mentioned that it is a jpg, so I'm confused. Also, if it helps, please try the solution provided in this issue: link
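One more hint on the meaning of the exception: the traceback shows datasets.py opening the path and reading it as text (t = t.read().strip().splitlines()), and a byte 0xff at position 0 is exactly what the start of a JPEG looks like (the JPEG start-of-image marker is 0xFF 0xD8). So the loader is most likely being handed the image itself where it expects a directory of images or a .txt file listing image paths; check the train: entry in your .yaml. A small self-contained check (the path is copied from your exception message) is:
path = "/content/datasets/images/training/im0.jpg"  # path from the exception
with open(path, "rb") as f:
    head = f.read(2)
# 0xFF 0xD8 is the JPEG start-of-image marker
if head == b"\xff\xd8":
    print("This really is a JPEG image, not a text list of image paths.")
else:
    print("First bytes:", head)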

GNU Radio + HackRF: RuntimeError: firdes check failed: 0 < fa <= sampling_freq / 2

I just started using GNU Radio; I must say I am quite a noob, but I have some background in RF-related stuff.
Here's the thing:
I recorded a file that I now want to repeat through my HackRF and GNU Radio.
These are the exact settings for the filter:
The settings you see are arbitrary (since I cannot get it working, I started testing with random values).
This is the error I get:
Executing: /usr/bin/python3 -u /home/scare/LAB/RadioFrequencies/GNU Radio/reply_433.py
gr-osmosdr 0.2.0.0 (0.2.0) gnuradio 3.8.2.0
built-in sink types: uhd hackrf bladerf soapy redpitaya file
[INFO] [UHD] linux; GNU C++ version 11.1.0; Boost_107600; UHD_4.0.0.0-0-unknown
Using HackRF One with firmware 2017.02.1
Traceback (most recent call last):
File "/home/scare/LAB/RadioFrequencies/GNU Radio/reply_433.py", line 211, in <module>
main()
File "/home/scare/LAB/RadioFrequencies/GNU Radio/reply_433.py", line 187, in main
tb = top_block_cls()
File "/home/scare/LAB/RadioFrequencies/GNU Radio/reply_433.py", line 137, in __init__
firdes.high_pass(
File "/usr/lib/python3.9/site-packages/gnuradio/filter/filter_swig.py", line 124, in high_pass
return _filter_swig.firdes_high_pass(*args, **kwargs)
RuntimeError: firdes check failed: 0 < fa <= sampling_freq / 2
Done (return code 1)
Where obviously the interesting part is the RuntimeError: firdes check failed: 0 < fa <= sampling_freq / 2
Unfortunately, I don't get what that 'fa' stands for.
Any idea?
Cheers
I just got done solving this same error. The error is caused by a filter's cut-off and transition parameters being set incorrectly (in my case far too large). GNU Radio handles the variable 'samp_rate' differently for each block, and filters seem to interpret it as a point to center the filter on (that's my take on it, so don't quote me).
I also looked in the source code and can't find anything helpful on 'fa'.
So try adjusting your cutoff to be something below samp_rate / 2 and make your transition width something to the tune of 250e3. I used GUI sliders to set the filter how I liked, and I will make these permanent in the final version.
(Screenshots: filter settings, and slider settings for both sliders.)
Mike Ossmann's "SDR with HackRF One, Lesson 10 - Filters" helped me out here. It's also just a great SDR lecture series for GNU Radio if you haven't come across it yet (just make sure to use the QT GUI).
I hope this helped. I am pretty new to GNU Radio, so sorry if the explanation is a little half-baked.
fa is the cutoff frequency in the function that is throwing the error message. The cutoff frequency has to be greater than 0 and no more than the Nyquist limit (sampling_freq / 2). There are some functions called sanity_check_xxx (xxx indicating whether there is one cutoff or two, i.e. bandpass, and optionally c for complex) around line 750 in gr-filter/lib/firdes.cc in the GNU Radio repository on GitHub.
In the question, the samp_rate would need to be at least 800 MHz to support a high-pass cutoff of 400 MHz. As far as I can tell, sample rate is used the same way in these filter functions as anywhere else in GNU Radio.
I ran into the same error message because I used firdes.band_pass instead of firdes.complex_band_pass and the low cutoff was negative, which it should be for the complex band-pass filter.
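To make the constraint concrete, here is a minimal sketch in GNU Radio 3.8-style Python; the numbers are arbitrary examples, not values from the question:
from gnuradio.filter import firdes

samp_rate = 2e6      # 2 MS/s, so the Nyquist limit is 1 MHz
cutoff = 100e3       # must satisfy 0 < cutoff <= samp_rate / 2
transition = 25e3    # transition width in Hz

# firdes.high_pass(gain, sampling_freq, cutoff_freq, transition_width)
taps = firdes.high_pass(1.0, samp_rate, cutoff, transition)
print(len(taps), "taps")
With cutoff set to anything above samp_rate / 2 (or to 0 or below), the same "firdes check failed: 0 < fa <= sampling_freq / 2" error is raised.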

Netlogo GIS extension: raster won't patch NetLogo world

I'm trying to import a raster grid into NetLogo but am encountering many issues. My raster file is only 57x41 pixels (I want each pixel to represent a patch) and the world envelope is [-382875 -381135 700185 701445]. I am also trying to match my raster-dataset values to the patch variable fuel-code from a .csv file. However, when I run the code (below) I encounter errors. I'm not using a set coordinate projection in NetLogo since my original raster is not in a projection type NetLogo accepts (I removed the .prj file associated with the raster when importing the .asc file). Below is my code, with error messages included as comments on the lines I tried to edit:
extensions [ csv table gis ]
globals [ fuel-type-40 fuel-code setrial1 ]
to dictionary-file ;put in the setup procedure
  ca
  ;load the ascii file
  set setrial1 gis:load-dataset "setrial_ascii.asc"
  ;match dimensions of raster to the dimensions of the NetLogo world
  ;I've tried each of the lines below independently, not together
  resize-world 0 gis:width-of setrial1 0 gis:height-of setrial1 ;ERROR: Java heap space error
  gis:set-world-envelope gis:envelope-of setrial1 ;ERROR: can't modify a patch's coordinates
  ;below are the width and height of setrial1
  print gis:height-of setrial1 ;41
  print gis:width-of setrial1 ;57
  print gis:envelope-of setrial1 ;[-382875 -381135 700185 701445]
  ; Load the csv
  set fuel-type-40 but-first csv:from-file "fuel-type-40.csv"
  ;print fuel-type-40
  ; Pull first value (fuel-code)
  set fuel-code map first fuel-type-40
  ;print fuel-code
  ask patches [
    ; Randomly set patch 'land cover' for this example. change for raster
    gis:apply-raster setrial1 fuel-code
  ]
end
You might report which errors you are having. Remember to always read the Stack Overflow guide to asking questions.
I'd suggest you focus on using a raster file with all your data in it (ESRI shapefiles come with a .dbf file that holds attribute data), and thus avoid using both extensions. With a raster file of a defined resolution (e.g. 100 x 100 m), resize-world should work smoothly. Try following my answer to this question; a minimal sketch of the idea is below.
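Assuming the raster cell values are themselves the fuel codes (so the csv lookup isn't needed), something like this should be close:
extensions [ gis ]
globals [ setrial1 ]
patches-own [ fuel-code ]

to setup
  ca
  set setrial1 gis:load-dataset "setrial_ascii.asc"
  ; one patch per raster cell: a 57 x 41 raster maps to pxcor 0..56 and pycor 0..40
  resize-world 0 (gis:width-of setrial1 - 1) 0 (gis:height-of setrial1 - 1)
  gis:set-world-envelope gis:envelope-of setrial1
  ; apply-raster is called once from the observer and writes into a patches-own variable
  gis:apply-raster setrial1 fuel-code
end
Note that fuel-code has to be a patches-own variable (not a global) for gis:apply-raster to write into it, and resize-world should come before gis:set-world-envelope.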

IDL: read ascii header of binary file

I'm having an enforced introduction to IDL while trying to debug some old code.
I have a binary image file that has an ASCII header (it's a THEMIS IR BTR image of Mars, if that is of interest). The code opens the file as unit 1 using OPENR, then reads the first 256 bytes of it using ASSOC(1,BYTARR(256)). The return from that is 256 ASCII character decimal values, but they are mostly high or low numbers that do not correspond to alphanumeric characters, and are not related to the header that I know is on the file.
One thing that may help with diagnostics: the original file is a gzipped version of the file. If I try to open it directly (using less, for example) it lets me read the header. But if I unzip it first (gzip -c filename.IMG.gz > filename.IMG) and then try to read it again, I get binary gobbledegook. (less gives me a warning before opening: "filename.IMG may be a binary file. See it anyway?")
Any suggestions?
Here's the IDL code:
CLOSE,1
OPENR,1,FILENAME
A = ASSOC(1,BYTARR(256))
B = A[0]
print,'B - ',B
H = STRING(B)
print,'H - ',H
And this is what it gives me:
B - 31 139 8 8 7 17 238 79 0 3 ... (and on for 256 characters)
H - [Some weird symbol]
I've tried it on a purely ascii test file and it works as expected.
31 139 8 is the beginning of a GZIP header for a "deflated" file.
http://www.gzip.org/zlib/rfc-gzip.html#file-format
So yes, the file looks like it needs to be decompressed first.
Try decompressing the file with gunzip and check the header again. If it is 31 139 8 ... again, it looks like it has been compressed twice.
Otherwise, whatever it is, it has likely been fully decompressed, and it remains to be seen why the uncompressed file isn't being decoded.
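A quick way to check from the shell (standard tools, nothing IDL-specific) is to look at the magic bytes after decompressing:
file filename.IMG.gz
gunzip -c filename.IMG.gz > filename.IMG
head -c 4 filename.IMG | od -An -tu1
If the od output still starts with 31 139 8, the data inside the .gz container is itself gzip-compressed and needs a second pass through gunzip.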
Try the COMPRESS keyword to OPENR:
openr, 1, filename, /compress
The COMPRESS keyword tells IDL that the file is gzip-compressed; it works for both reading and writing compressed files.
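A minimal sketch of the whole read against the compressed file, assuming the header really is the first 256 bytes (this uses a sequential READU rather than ASSOC, since gzip streams don't support the random access ASSOC relies on):
CLOSE, 1
OPENR, 1, FILENAME, /COMPRESS
B = BYTARR(256)
READU, 1, B          ; first 256 bytes of the decompressed stream
PRINT, 'H - ', STRING(B)
CLOSE, 1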