Why is the spatial resolution of an image from WorldView-3 like this?

The output of gdalinfo 20141030-wv03.tif for this image is:
Driver: GTiff/GeoTIFF
Files: 20141030-wv03.tif
Size is 16484, 15253
Coordinate System is:
GEOGCRS["WGS 84",
DATUM["World Geodetic System 1984",
ELLIPSOID["WGS 84",6378137,298.257223563,
LENGTHUNIT["metre",1]]],
PRIMEM["Greenwich",0,
ANGLEUNIT["degree",0.0174532925199433]],
CS[ellipsoidal,2],
AXIS["geodetic latitude (Lat)",north,
ORDER[1],
ANGLEUNIT["degree",0.0174532925199433]],
AXIS["geodetic longitude (Lon)",east,
ORDER[2],
ANGLEUNIT["degree",0.0174532925199433]],
ID["EPSG",4326]]
Data axis to CRS axis mapping: 2,1
Origin = (113.959353776485997,23.091020758099145)
Pixel Size = (0.000002966620901,-0.000002966620901)
Metadata:
AREA_OR_POINT=Area
DataType=Generic
Image Structure Metadata:
COMPRESSION=LZW
INTERLEAVE=PIXEL
Corner Coordinates:
Upper Left ( 113.9593538, 23.0910208) (113d57'33.67"E, 23d 5'27.67"N)
Lower Left ( 113.9593538, 23.0457709) (113d57'33.67"E, 23d 2'44.78"N)
Upper Right ( 114.0082556, 23.0910208) (114d 0'29.72"E, 23d 5'27.67"N)
Lower Right ( 114.0082556, 23.0457709) (114d 0'29.72"E, 23d 2'44.78"N)
Center ( 113.9838047, 23.0683958) (113d59' 1.70"E, 23d 4' 6.22"N)
Band 1 Block=128x128 Type=Byte, ColorInterp=Red
NoData Value=256
Band 2 Block=128x128 Type=Byte, ColorInterp=Green
NoData Value=256
Band 3 Block=128x128 Type=Byte, ColorInterp=Blue
NoData Value=256
The spatial resolution is (0.000002966620901, -0.000002966620901). How should I understand this value?
I also checked another image, from WorldView-2; the output is:
Driver: GTiff/GeoTIFF
Files: 20150708.tif
Size is 9984, 10132
Coordinate System is:
PROJCRS["WGS 84 / UTM zone 50N",
BASEGEOGCRS["WGS 84",
DATUM["World Geodetic System 1984",
ELLIPSOID["WGS 84",6378137,298.257223563,
LENGTHUNIT["metre",1]]],
PRIMEM["Greenwich",0,
ANGLEUNIT["degree",0.0174532925199433]],
ID["EPSG",4326]],
CONVERSION["UTM zone 50N",
METHOD["Transverse Mercator",
ID["EPSG",9807]],
PARAMETER["Latitude of natural origin",0,
ANGLEUNIT["degree",0.0174532925199433],
ID["EPSG",8801]],
PARAMETER["Longitude of natural origin",117,
ANGLEUNIT["degree",0.0174532925199433],
ID["EPSG",8802]],
PARAMETER["Scale factor at natural origin",0.9996,
SCALEUNIT["unity",1],
ID["EPSG",8805]],
PARAMETER["False easting",500000,
LENGTHUNIT["metre",1],
ID["EPSG",8806]],
PARAMETER["False northing",0,
LENGTHUNIT["metre",1],
ID["EPSG",8807]]],
CS[Cartesian,2],
AXIS["(E)",east,
ORDER[1],
LENGTHUNIT["metre",1]],
AXIS["(N)",north,
ORDER[2],
LENGTHUNIT["metre",1]],
USAGE[
SCOPE["unknown"],
AREA["World - N hemisphere - 114°E to 120°E - by country"],
BBOX[0,114,84,120]],
ID["EPSG",32650]]
Data axis to CRS axis mapping: 1,2
Origin = (291153.100000000034925,2705938.760000000242144)
Pixel Size = (0.510000000000000,-0.510000000000000)
Metadata:
AREA_OR_POINT=Area
TIFFTAG_XRESOLUTION=1
TIFFTAG_YRESOLUTION=1
Image Structure Metadata:
INTERLEAVE=BAND
Corner Coordinates:
Upper Left ( 291153.100, 2705938.760) (114d56'22.85"E, 24d27'10.88"N)
Lower Left ( 291153.100, 2700771.440) (114d56'25.58"E, 24d24'22.98"N)
Upper Right ( 296244.940, 2705938.760) (114d59'23.60"E, 24d27'13.32"N)
Lower Right ( 296244.940, 2700771.440) (114d59'26.26"E, 24d24'25.41"N)
Center ( 293699.020, 2703355.100) (114d57'54.57"E, 24d25'48.15"N)
Band 1 Block=9984x1 Type=Byte, ColorInterp=Red
Band 2 Block=9984x1 Type=Byte, ColorInterp=Green
Band 3 Block=9984x1 Type=Byte, ColorInterp=Blue
The spatial resolution is (0.510000000000000, -0.510000000000000). How do I understand the difference between them? Thanks.

Your images are in two different coordinate systems.
Your second file, 20150708.tif, is in a UTM projection (UTM zone 50N, to be exact), which has map units in meters - that's why the pixel resolution is in meters (0.51 m).
Your first file, 20141030-wv03.tif, is in a geographic coordinate system, the widely used World Geodetic System 1984 (WGS84), which has map units in degrees, so the pixel resolution is also given in (decimal) degrees. At the equator, 0.00001 degrees is around 1.11 meters, so a pixel size of ~0.000003 degrees works out to roughly 0.33 m; both images therefore have comparable sub-meter resolution (a rough conversion is sketched below).
For more info on WGS84 vs UTM, this post on GIS stackexchange might be interesting.
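As a back-of-the-envelope check, here is a minimal sketch (assuming a spherical Earth of radius ~6,371 km; the pixel size and latitude are taken from the gdalinfo output above) that converts the degree pixel size to meters at the scene's latitude:
import math

# Pixel size and approximate centre latitude from the gdalinfo output above
pixel_deg = 0.000002966620901
lat = 23.068

meters_per_deg_lat = 6371000 * math.pi / 180                      # ~111,195 m per degree
meters_per_deg_lon = meters_per_deg_lat * math.cos(math.radians(lat))

print(pixel_deg * meters_per_deg_lat)   # ~0.33 m north-south
print(pixel_deg * meters_per_deg_lon)   # ~0.30 m east-west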

Related

What are the units of distance_col and max_distance in geopandas sjoin_nearest?

geopandas.sjoin_nearest takes parameters max_distance and distance_col. What are the units of the distances / how do I interpret them? Are they degrees?
https://geopandas.org/en/stable/docs/reference/api/geopandas.sjoin_nearest.html#geopandas.sjoin_nearest
While geopandas provides utilities for converting between coordinate systems (e.g. to_crs), most operations in geopandas ignore the projection information. Spatial operations such as distance, area, buffer, etc. are done in whatever units the geometries are in. If your geometries are in meters, these will be in meters. If they're in degrees, they'll be in degrees.
For example, let's take a look at the natural earth dataset. You can see that the geometry column is in lat/lon coordinates by just looking at the values:
In [1]: import geopandas as gpd
In [2]: gdf = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
In [3]: gdf
Out[3]:
pop_est continent name iso_a3 gdp_md_est geometry
0 920938 Oceania Fiji FJI 8374.0 MULTIPOLYGON (((180.00000 -16.06713, 180.00000...
1 53950935 Africa Tanzania TZA 150600.0 POLYGON ((33.90371 -0.95000, 34.07262 -1.05982...
2 603253 Africa W. Sahara ESH 906.5 POLYGON ((-8.66559 27.65643, -8.66512 27.58948...
3 35623680 North America Canada CAN 1674000.0 MULTIPOLYGON (((-122.84000 49.00000, -122.9742...
4 326625791 North America United States of America USA 18560000.0 MULTIPOLYGON (((-122.84000 49.00000, -120.0000...
.. ... ... ... ... ... ...
172 7111024 Europe Serbia SRB 101800.0 POLYGON ((18.82982 45.90887, 18.82984 45.90888...
173 642550 Europe Montenegro MNE 10610.0 POLYGON ((20.07070 42.58863, 19.80161 42.50009...
174 1895250 Europe Kosovo -99 18490.0 POLYGON ((20.59025 41.85541, 20.52295 42.21787...
175 1218208 North America Trinidad and Tobago TTO 43570.0 POLYGON ((-61.68000 10.76000, -61.10500 10.890...
176 13026129 Africa S. Sudan SSD 20880.0 POLYGON ((30.83385 3.50917, 29.95350 4.17370, ...
[177 rows x 6 columns]
Specifically, it's in WGS84 (aka EPSG:4326). The units are degrees:
In [4]: gdf.crs
Out[4]:
<Geographic 2D CRS: EPSG:4326>
Name: WGS 84
Axis Info [ellipsoidal]:
- Lat[north]: Geodetic latitude (degree)
- Lon[east]: Geodetic longitude (degree)
Area of Use:
- name: World.
- bounds: (-180.0, -90.0, 180.0, 90.0)
Datum: World Geodetic System 1984 ensemble
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
If we call the area property, geopandas will issue a warning, but it will still calculate the area for us. The total area of the earth comes out to 21,497 degrees^2, which is roughly 1/3 of 180*360:
In [6]: gdf.area.sum()
<ipython-input-6-10238de14784>:1: UserWarning: Geometry is in a geographic CRS. Results from 'area' are likely incorrect. Use 'GeoSeries.to_crs()' to re-project geometries to a projected CRS before this operation.
gdf.area.sum()
Out[6]: 21496.990987992736
If we instead use an equal-area projection, the areas are computed in m^2, and the total (converted below to millions of km^2) comes out much closer to the actual land area of the earth:
In [10]: gdf.to_crs('+proj=cea').area.sum() / 1e3 / 1e3 / 1e6
Out[10]: 147.36326937311017
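To tie this back to sjoin_nearest specifically, here is a minimal sketch (the two points are made up for illustration): distance_col and max_distance are simply in the units of the layers' CRS, so reproject to a projected CRS first if you want meters:
import geopandas as gpd
from shapely.geometry import Point

pts_a = gpd.GeoDataFrame(geometry=[Point(0, 0)], crs="EPSG:4326")
pts_b = gpd.GeoDataFrame(geometry=[Point(0.001, 0)], crs="EPSG:4326")

# In EPSG:4326 the reported distance is in degrees (~0.001 here)
deg = gpd.sjoin_nearest(pts_a, pts_b, distance_col="dist")

# Reproject both layers to a projected CRS (e.g. Web Mercator) to get meters instead
m = gpd.sjoin_nearest(pts_a.to_crs(3857), pts_b.to_crs(3857), distance_col="dist")
print(deg["dist"].iloc[0], m["dist"].iloc[0])   # ~0.001 degrees vs ~111 meters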

Geopandas geoseries area - units?

I have a GeoPandas DataFrame. I apply a couple of functions to the GeoDataFrame.
Buffer - https://geopandas.org/en/stable/docs/reference/api/geopandas.GeoSeries.buffer.html
This creates a buffered polygon around each coordinate / geometry.
import geopandas
from shapely.geometry import Point, LineString, Polygon
s = geopandas.GeoSeries(
[
Polygon([(0, 0), (1, 1), (0, 1)]),
Polygon([(10, 0), (10, 5), (0, 0)]),
Polygon([(0, 0), (2, 2), (2, 0)]),
LineString([(0, 0), (1, 1), (0, 1)]),
Point(0, 1)
]
)
s.buffer(0.2)
Now, I'd like to measure the area of the buffered geometry using: https://geopandas.org/en/stable/docs/reference/api/geopandas.GeoSeries.area.html
s.area
The crs info of the DataFrame is:
<Derived Projected CRS: ESRI:102003>
Name: USA_Contiguous_Albers_Equal_Area_Conic
Axis Info [cartesian]:
- E[east]: Easting (metre)
- N[north]: Northing (metre)
Area of Use:
- name: United States (USA) - CONUS onshore - Alabama; Arizona; Arkansas; California; Colorado; Connecticut; Delaware; Florida; Georgia; Idaho; Illinois; Indiana; Iowa; Kansas; Kentucky; Louisiana; Maine; Maryland; Massachusetts; Michigan; Minnesota; Mississippi; Missouri; Montana; Nebraska; Nevada; New Hampshire; New Jersey; New Mexico; New York; North Carolina; North Dakota; Ohio; Oklahoma; Oregon; Pennsylvania; Rhode Island; South Carolina; South Dakota; Tennessee; Texas; Utah; Vermont; Virginia; Washington; West Virginia; Wisconsin; Wyoming.
- bounds: (-124.79, 24.41, -66.91, 49.38)
Coordinate Operation:
- name: USA_Contiguous_Albers_Equal_Area_Conic
- method: Albers Equal Area
Datum: North American Datum 1983
- Ellipsoid: GRS 1980
- Prime Meridian: Greenwich
My question is: what are the units of the calculated area?
The units of area would be metre^2. The units are reported in the CRS.
Presumably, the actual shapefile has very large coordinate values (e.g. in the ~1e6 range). See e.g. this question: Interpreting GeoPandas Polygon coordinates
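As a minimal sketch (the 1 km square and the 200 m buffer are made up for illustration), assuming the GeoSeries really is in ESRI:102003 as shown above, you can read the linear unit off the CRS and see the area come out in that unit squared:
import geopandas as gpd
from shapely.geometry import Polygon

# A 1 km x 1 km square in a metric CRS (ESRI:102003 uses metres)
s = gpd.GeoSeries([Polygon([(0, 0), (1000, 0), (1000, 1000), (0, 1000)])], crs="ESRI:102003")

print(s.crs.axis_info[0].unit_name)   # 'metre' -> areas are in square metres
print(s.buffer(200).area)             # ~1.93 million m^2 for the buffered square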

Contrast emmeans: post-hoc t-test as the average differences of the differences between baseline and treatment periods

I am using the lme4 package in R to fit linear mixed-effects models (LMMs). Essentially, all participants received two interventions (an active treatment and a placebo control), separated by a washout period. However, the order (sequence) in which they received the interventions differed.
An interaction term of intervention and visit was included in the LMM, giving eight levels covering all combinations of intervention (2 levels: control and intervention) and visit (4 levels: visit 1 = baseline 1, visit 2, visit 3 = post-randomization baseline 2, visit 4).
My question is: how do I determine the intervention effect with a post-hoc t-test as the average of the differences of the differences between interventions, i.e. between visits 1 and 2 and between visits 3 and 4? I also want to determine the effects of the intervention and the control compared to baseline.
Please see code below:
model1<- lmer(X ~ treatment_type:visit_code + (1|SID) + (1|SID:period), na.action= na.omit, data = data.x)
emm <- emmeans(model1 , ~treatment_type:visit_code)
My results of model 1 is:
emm
treatment_type visit_code emmean SE df lower.CL upper.CL
Control T0 -0.2915 0.167 26.0 -0.635 0.0520
Intervention T0 -0.1424 0.167 26.0 -0.486 0.2011
Control T1 -0.2335 0.167 26.0 -0.577 0.1100
Intervention T1 0.0884 0.167 26.0 -0.255 0.4319
Control T2 0.0441 0.167 26.0 -0.299 0.3876
Intervention T2 -0.2708 0.168 26.8 -0.616 0.0748
Control T3 0.1272 0.167 26.0 -0.216 0.4708
Intervention T3 0.0530 0.168 26.8 -0.293 0.3987
Degrees-of-freedom method: kenward-roger
Confidence level used: 0.95
I first created the contrast vectors:
#name vectors
Control.B1<- c(1,0,0,0,0,0,0,0) #control baseline 1 (visit 1)
Intervention.B1<- c(0,1,0,0,0,0,0,0) #intervention baseline 1 (visit 1)
Control.A2<- c(0,0,1,0,0,0,0,0) #post control 1 (visit 2)
Intervention.A2<- c(0,0,0,1,0,0,0,0) #post intervention 1 (visit 2)
Control.B3<- c(0,0,0,0,1,0,0,0) #control baseline 2 (visit 3)
Intervention.B3<- c(0,0,0,0,0,1,0,0) #intervention baseline 2 (visit 3)
Control.A4<- c(0,0,0,0,0,0,1,0) #post control 2 (visit 4)
Intervention.A4<- c(0,0,0,0,0,0,0,1) #post intervention 2 (visit 4)
Contbaseline = (Control.B1 + Control.B3)/2 # average of control baseline visits
Intbaseline = (Intervention.B1 + Intervention.B3)/2 # average of intervention baseline visits
ControlAfter= (Control.A2 + Control.A4)/2 # average of after control visits
IntervAfter= (Intervention.A2 + Intervention.A4)/2 # average of after intervention visits
Control.vs.Baseline = (ControlAfter-Contbaseline)
Intervention.vs.Baseline = (IntervAfter-Intbaseline)
Control.vs.Intervention = ((Control.vs.Baseline)-(Intervention.vs.Baseline))
The output of these is as follows:
> Control.vs.Baseline
[1] -0.5 0.0 0.5 0.0 -0.5 0.0 0.5 0.0
> Intervention.vs.Baseline
[1] 0.0 -0.5 0.0 0.5 0.0 -0.5 0.0 0.5
> Control.vs.Intervention
[1] -0.5 0.5 0.5 -0.5 -0.5 0.5 0.5 -0.5
Is this the correct way to get the average of the differences of the differences between the baseline and treatment periods?
Many thanks in advance!
A two-period crossover is the same as a repeated 2x2 Latin square. My suggestion for future such experiments is to structure the data accordingly, using variables for sequence (rows), period (columns), and treatment (assigned in the pattern (A, B) for the first sequence and (B, A) for the second). The subjects are randomized to which sequence they are in.
So with your data, you would need to add a variable sequence that has the level AB for those subjects who receive the treatment sequence A, A, B, B, and level BA for those who receive B, B, A, A (though I guess the 1st and 3rd are really baseline for everybody).
Since there are 4 visits, it helps keep things sorted if you recode that as two factors trial and period, as follows:
visit trial period
1 base 1
2 test 1
3 base 2
4 test 2
Then fit the model with formula
model2 <- lmer(X ~ (sequence + period + treatment_type) * trial +
(1|SID:sequence), ...etc...)
The parenthesized part is the standard model for a Latin square. Then the analysis can be done without custom contrasts as follows:
RG <- ref_grid(model2) # same really as emmeans() for all 4 factors
CHG <- contrast(RG, "consec", simple = "trial")
CHG <- update(CHG, by = NULL, infer = c(TRUE, FALSE))
CHG contains the differences from baseline (trial differences) for each combination of the other three factors. The update() step removes the by variables saved from contrast(). Now, we can get the marginal means and comparisons for each factor:
emmeans(CHG, consec ~ treatment_type)
emmeans(CHG, consec ~ period)
emmeans(CHG, consec ~ sequence)
These will be the same results you got the other way via custom contrasts. The one that was a difference of differences before is now handled by sequence. This works because in a 2x2 Latin square, the main effect of each factor is confounded with the two-way interaction of the other two factors.
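If it helps to see why, here is a tiny sanity check (sketched in Python/numpy purely as an illustration; the +/-1 coding is my own, not part of the model above) showing that in a 2x2 crossover the sequence contrast is identical to the period-by-treatment interaction contrast:
import numpy as np

# The four (sequence, period, treatment) cells of a 2x2 crossover
cells = [("AB", 1, "A"), ("AB", 2, "B"), ("BA", 1, "B"), ("BA", 2, "A")]

seq = np.array([+1 if s == "AB" else -1 for s, p, t in cells])
per = np.array([+1 if p == 1    else -1 for s, p, t in cells])
trt = np.array([+1 if t == "A"  else -1 for s, p, t in cells])

# The sequence main-effect contrast equals the period x treatment interaction contrast
print(np.array_equal(seq, per * trt))   # True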

I have 40x TIFF images and I want to use them to build 20x and 10x levels. How can I do that?

My problem is that I have TIFF images with multiple levels (40x, 20x, 10x, 5x), but some only have the 40x level and I need all of them. My question is whether there is any way to derive the 20x or other levels from the 40x-level images. I read about a library called vips, but I don't understand how to apply it to my specific problem.
Yes, libvips can compute the missing levels for you. For example:
$ vips copy k2.tif x.tif[pyramid]
The [pyramid] is an option to the libvips TIFF writer to enable pyramid output. You can check the result with tiffinfo:
$ tiffinfo x.tif
TIFF Directory at offset 0x9437192 (900008)
Image Width: 1450 Image Length: 2048
Tile Width: 128 Tile Length: 128
Resolution: 72.009, 72.009 pixels/inch
Bits/Sample: 8
Sample Format: unsigned integer
Compression Scheme: None
Photometric Interpretation: RGB color
Orientation: row 0 top, col 0 lhs
Samples/Pixel: 3
Planar Configuration: single image plane
TIFF Directory at offset 0x11797866 (b4056a)
Subfile Type: reduced-resolution image (1 = 0x1)
Image Width: 725 Image Length: 1024
Tile Width: 128 Tile Length: 128
Resolution: 72.009, 72.009 pixels/inch
Bits/Sample: 8
Compression Scheme: None
Photometric Interpretation: RGB color
Orientation: row 0 top, col 0 lhs
Samples/Pixel: 3
Planar Configuration: single image plane
TIFF Directory at offset 0x12388198 (bd0766)
Subfile Type: reduced-resolution image (1 = 0x1)
Image Width: 362 Image Length: 512
Tile Width: 128 Tile Length: 128
Resolution: 72.009, 72.009 pixels/inch
Bits/Sample: 8
Compression Scheme: None
Photometric Interpretation: RGB color
Orientation: row 0 top, col 0 lhs
Samples/Pixel: 3
Planar Configuration: single image plane
TIFF Directory at offset 0x12585098 (c0088a)
Subfile Type: reduced-resolution image (1 = 0x1)
Image Width: 181 Image Length: 256
Tile Width: 128 Tile Length: 128
Resolution: 72.009, 72.009 pixels/inch
Bits/Sample: 8
Compression Scheme: None
Photometric Interpretation: RGB color
Orientation: row 0 top, col 0 lhs
Samples/Pixel: 3
Planar Configuration: single image plane
TIFF Directory at offset 0x12634494 (c0c97e)
Subfile Type: reduced-resolution image (1 = 0x1)
Image Width: 90 Image Length: 128
Tile Width: 128 Tile Length: 128
Resolution: 72.009, 72.009 pixels/inch
Bits/Sample: 8
Compression Scheme: None
Photometric Interpretation: RGB color
Orientation: row 0 top, col 0 lhs
Samples/Pixel: 3
Planar Configuration: single image plane
You can see it has written a five-level pyramid, with each level being a tiled TIFF with 128 x 128 pixel tiles. That might or might not be right for your application; you'd need to give a lot more information. For example:
$ vips copy k2.tif x.tif[pyramid,compression=jpeg,Q=85,tile-width=256,tile-height=256]
Might be better. Check the docs.
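If you would rather drive this from Python, the same writer options are available through pyvips, the libvips Python binding (a sketch, assuming pyvips is installed; the option names mirror the bracketed string used on the command line above):
import pyvips

image = pyvips.Image.new_from_file("k2.tif")
image.tiffsave("x.tif", pyramid=True, tile=True,
               compression="jpeg", Q=85,
               tile_width=256, tile_height=256)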

NetLogo: how to hatch a turtle at a certain distance on GIS layers

My patches contain attributes such as elevation:
set mnt gis:load-dataset "F:/StageM2/Modelisation/Modele/mnt.asc"
gis:apply-raster mnt alt
gis:set-transformation (list 567887.504252 573503.504252 6183200.86463 6187628.86463) (list min-pxcor max-pxcor min-pycor max-pycor)
gis:set-world-envelope gis:envelope-of mnt
and turtles are created from a forest raster:
to import-foret93
set foret-93 gis:load-dataset "F:/StageM2/Modelisation/Modele/foret76_93.asc"
gis:apply-raster foret-93 f93
ask patches with [f93 = 1]
[
set pcolor black
set foret93? true
;ask n-of 2813 patches with [foret93? = true] [ hatch 2813 ]
sprout-arbres 1 [set color pink
set size 4]
]
end
The layers have the same spatial reference, RGF 1993, so units are in meters.
Now, I want to create new turtles from existing turtles, placed randomly within a radius of 150 m of the parent turtle (the new turtle could be hatched at 1 m or at 130 m). For instance, I ask just one turtle to hatch a turtle at a distance given by an input box in the interface named dispersal-dist.
to disp-graines
ask turtle 2918
[
hatch-arbres 1
[
let seedX xcor
let seedY ycor
let ran-bear random 360
lt ran-bear
move-to one-of patches in-radius dispersal-dist
set color magenta
set size 15
]
]
end
But the created turtle goes farther than the dispersal distance given in meters.
Did I forget something to transform the NetLogo scale into meters? Or is it another problem?
Thank you in advance for your help!
While your raster dataset is expressed in meters, your patches don't have a real-world scale. Assuming your raster has square pixels, you can calculate a patch scale with something like:
let patch-scale (item 1 gis:world-envelope - item 0 gis:world-envelope ) / world-width
You could then use it in your existing code:
move-to one-of patches in-radius (dispersal-dist / patch-scale)
If the pixels in your raster have different real-world height and width, you will have to compute the patch scale for the horizontal and vertical dimensions separately.
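To make the scaling concrete, here is a small worked example (sketched in Python just for the arithmetic; the 561-patch world width is an assumption, not something given in the question), using the x-extent of the mnt envelope above:
# x-extent of the mnt envelope from the question, in meters
envelope_width_m = 573503.504252 - 567887.504252     # 5616 m across the raster

world_width = 561                                     # assumed NetLogo world width, in patches
patch_scale = envelope_width_m / world_width          # ~10 m of real-world distance per patch

dispersal_dist_m = 150
print(dispersal_dist_m / patch_scale)                 # ~15 patches -> the radius to pass to in-radius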