Trouble identifying patches of habitat in ArcMap

I’m having some trouble identifying patches of habitat. I’ve had some help with this issue before, but I cannot get anything to work.
I downloaded the ArcGrid (zipped) from this website: http://www.kew.org/gis/projects/mad_veg/datasets_gis.html. I’ve managed to open the data in ArcMap 10.1 and it displays all the habitat types in Madagascar. I wanted to determine how much of a specific habitat (humid forest) there was, and I found that just from the number of pixels. But that is only the total area; in reality the habitat is broken and fragmented into thousands of little forests. I need a way to determine how many patches of forest there are and what size they are.
If it helps, I’ll explain what it is I’m trying to do. I am studying a species of lemur, and a community needs a minimum of 4 km² of forest. I’m trying to work out how much viable habitat is left in Madagascar. The overall area doesn’t tell me that, because it could be made up of patches too small to support a community. I need a way to find out how much littoral forest is left in patches over a certain size.
I’m no expert in GIS, and someone suggested I run a Python script such as:
import arcpy
from arcpy import env
from arcpy.sa import *
env.workspace = r"Q:\Veggrid"      # raw string so the backslash is not treated as an escape
inZoneData = "vegetation"          # the Kew vegetation grid
zoneField = "Value"
outTable = "zonalgeomout02.dbf"
processingCellSize = 29
arcpy.CheckOutExtension("Spatial")
# ZonalGeometryAsTable takes the output table as its third argument and writes
# AREA (and the other geometry measures) for each zone into it
outZonalGeometryAsTable = ZonalGeometryAsTable(inZoneData, zoneField, outTable, processingCellSize)
However, each time I run this code ArcMap loads for a while and then just crashes. I tried making the cell size smaller, but it didn’t make a difference. As I said, I’m no expert and I’m not sure what to do. People have suggested downloading various packages, but it’s a university computer and it doesn’t seem to allow that.
Any help / advice would be greatly appreciated

This code works fine.
Try increasing processingCellSize (for example, to 1000).
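For the underlying goal of counting patches and their sizes rather than total area, here is a hedged sketch (not from the question or the answer above): isolate one habitat class, label contiguous cells as patches with RegionGroup, then reuse ZonalGeometryAsTable with the patch IDs as zones so the output table has one AREA row per patch. The class value 5, the output names, and the 29 m cell size are placeholders to replace with the real values from the grid.

import arcpy
from arcpy import env
from arcpy.sa import *

env.workspace = r"Q:\Veggrid"
arcpy.CheckOutExtension("Spatial")

forest = Con(Raster("vegetation") == 5, 1)        # keep only the habitat class of interest (5 is a placeholder)
patches = RegionGroup(forest, "EIGHT", "WITHIN")  # give every contiguous patch its own ID
patches.save("forpatch")                          # short name because ESRI GRID names are limited in length

# One row per patch, with its AREA, ends up in the table; patches of at least
# 4 km2 (4,000,000 m2) can then be selected from that table.
ZonalGeometryAsTable("forpatch", "Value", "patch_areas.dbf", 29)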

Related

Problem loading "decomposable-attention-elmo" with `Predictor.from_path`

I'm trying to load the decomposable attention model proposed in this paper ("The decomposable attention model (Parikh et al, 2017) combined with ELMo embeddings trained on SNLI"), and I used the code suggested on the demo website:
from allennlp.predictors.predictor import Predictor  # needed for Predictor.from_path

predictor = Predictor.from_path("https://storage.googleapis.com/allennlp-public-models/decomposable-attention-elmo-2020.04.09.tar.gz", "textual_entailment")
predictor.predict(
    hypothesis="Two women are sitting on a blanket near some rocks talking about politics.",
    premise="Two women are wandering along the shore drinking iced tea."
)
I found this from log:
Did not use initialization regex that was passed: .*token_embedder_tokens\._projection.*weight
and the prediction was also different from what I got on the demo website (which is the output I wanted to reproduce). Did I miss anything here?
Also, I tried the two other versions of the pretrained model, decomposable-attention-elmo-2018.02.19.tar.gz and decomposable-attention-elmo-2020.02.10.tar.gz. Neither of them works and I got this error:
ConfigurationError: key "token_embedders" is required at location "model.text_field_embedder."
What do I need to do to get the exact output as presented in the demo website?
ELMo is a bit difficult in this respect, in that it keeps state, so you don't get the same output if you call it twice: the result depends on what you processed beforehand. In general, ELMo should be warmed up with a few queries before using it seriously.
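A minimal sketch of that warm-up idea (my own illustration, not the answerer's code): make a few throwaway calls first, then run the prediction you actually care about. The number of warm-up calls and the "label_probs" key are assumptions.

from allennlp.predictors.predictor import Predictor

predictor = Predictor.from_path("https://storage.googleapis.com/allennlp-public-models/decomposable-attention-elmo-2020.04.09.tar.gz", "textual_entailment")

# a few throwaway calls so ELMo's internal state settles (the count is an arbitrary choice)
for _ in range(3):
    predictor.predict(
        hypothesis="Two women are sitting on a blanket near some rocks talking about politics.",
        premise="Two women are wandering along the shore drinking iced tea."
    )

# the call whose output you actually care about
result = predictor.predict(
    hypothesis="Two women are sitting on a blanket near some rocks talking about politics.",
    premise="Two women are wandering along the shore drinking iced tea."
)
print(result["label_probs"])  # assuming the entailment predictor returns a "label_probs" entry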
If you're still seeing large discrepancies in the output, let us know and we'll look into it.
The old versions of the model don't work with the new code. That's why we published the new model versions.

Converting CSV into GRIB2 data for mapping in Leaflet

TL;DR: I'm looking for some resources on generating GRIB2 data sets on the fly, ideally using in-house-generated wind data in a CSV format.
We have a bunch of data from a series of localized weather stations monitoring wind around our city. They report in at ~2-3 minute intervals (far more frequently than standard weather data), and from their reports we have lat, lon, wind speed, and wind direction. Someone went and told the boss about these really slick visualizations, like this one, which can display wind speed and direction, and it's my job to make it happen.
The above plug-in for Leaflet, GitHub here, as well as several others, all use GRIB2 data, which from my research involves a left/right set of data and an up/down set of data for a series of points plotted out across a region.
The problem I'm having is that I've only found a handful of tools that interact with GRIB2 data; most of them only decode GRIB2 datasets, and only one tool, written in Fortran, seems to exist that assembles GRIB2 data.
So, is there any way to generate GRIB2 data on the fly using proprietary data at 2-3 minute intervals?
I've gone through this resource on NOAA's website, which is where I found a few tools.
I know how frustrating it can be to work with GRIB and some of the other science/weather related formats. This may not be the best answer, but it might be your only answer as I find these types of questions to only gather dust because of the general lack of knowledge with the formats and tools.
From what I remember, CDO tools (link here) can do some magical things, but I am not that experienced with them. I do use CDO for converting satellite data to plain text and it's been an absolute lifesaver! So I will explain:
My suggestion was to first convert the CSV to netCDF. I had a link saved for this a long time ago, but never really came to need it (discussion here). Essentially, some Python code should be able to do the conversion for you, as in the sketch below. There may be several ways to do this, but I have never looked into it beyond initial research.
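A minimal sketch of that CSV-to-netCDF step (my own illustration, not from the answer; the file and column names are assumptions), using pandas and xarray:

import pandas as pd

df = pd.read_csv("wind_stations.csv")           # columns assumed: lat, lon, speed, direction
ds = df.set_index(["lat", "lon"]).to_xarray()   # pivot the point records into an xarray Dataset
ds.to_netcdf("wind_stations.nc")                # netCDF file, ready for the CDO step below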
Next, you should be able to convert .nc to .grib using CDO. I know it can do quite a lot. Here is a discussion regarding this, so it must be possible.
I also see at this link where someone converts GRIB to netCDF, but you should be able to do it in reverse as well; I just don't know the exact commands. From the link:
As an example of use of CDO, converting from GRIB to netCDF can be as simple as
cdo -f nc copy file.grb file.nc
I would suspect it's just the reverse, probably something like:
cdo -f grb copy file.nc file.grb
Hopefully you can put things together for it to work without being too hack-y.
You can do this in a simple Python script using pandas, xarray and cfgrib:
import pandas as pd
import cfgrib

data = pd.read_csv('your_csv_data.csv')     # load the station records
xarray_data = data.to_xarray()              # convert the DataFrame to an xarray Dataset
cfgrib.to_grib(xarray_data, 'out2.grib')    # write the Dataset out as GRIB
Please note that you have to define the GRIB specifications first, before you store the data as GRIB.
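The left/right and up/down sets of data mentioned in the question are conventionally the U (east-west) and V (north-south) wind components, so the speed/direction columns need to be converted before gridding. A hedged sketch of that step (my own illustration; column names are assumptions, and it assumes the usual meteorological convention that direction is the bearing the wind blows from):

import numpy as np
import pandas as pd

df = pd.read_csv("your_csv_data.csv")       # columns assumed: lat, lon, speed, direction
theta = np.deg2rad(df["direction"])         # direction in degrees clockwise from north

df["u"] = -df["speed"] * np.sin(theta)      # east-west (left/right) component
df["v"] = -df["speed"] * np.cos(theta)      # north-south (up/down) component

ds = df.set_index(["lat", "lon"])[["u", "v"]].to_xarray()   # gridded Dataset to hand to cfgrib
print(ds)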

QGIS/GRASS Watershed Analysis output cell size issues

Good day,
As my first post I offer a humble apology if this is such a basic question as to make any of you cringe. I feel like I have exhausted my search skills and read through the QGIS documentation frequently and thoroughly.
My problem is with any of the hydrology analyses in GRASS through QGIS. I have a GRASS mapset that uses a 30 m resolution DEM for my area. The extent of the mapset is set to include only the DEM (i.e. no whitespace or no-data areas), and yet when I run r.fill.dir, r.basin or r.watershed I get back an image with a resolution of nearly 73xx metres. I am most used to ArcGIS, where in the hydrology tools the user can define the working environment and set the output resolution to match the input resolution. Is there any way to set that in GRASS, or am I missing a basic step somewhere? I feel like I set up GRASS correctly, as other non-analytical map work is working fine.
Any help would be much appreciated.
Sincere regards,
Grant McGee
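As a hedged pointer (this is general GRASS behaviour, not something stated in the thread): GRASS raster modules write their output at the resolution of the current computational region, not at the input DEM's resolution, so the region usually has to be aligned to the DEM first, for example with g.region. A minimal sketch using the GRASS Python scripting library from within a GRASS session, with the DEM name as a placeholder:

import grass.script as gs

# align the computational region (extent and cell size) to the 30 m DEM;
# the hydrology modules then inherit that resolution
gs.run_command("g.region", raster="dem30m", flags="p")
gs.run_command("r.watershed", elevation="dem30m", accumulation="flow_accum", drainage="flow_dir")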

How can I analyze live data from webcam?

I am going to be working on a self-chosen project for my college networking class and I just have a couple of questions to help get me started in the right direction.
My project will involve creating a new "physical" link over which data, in the form of text, will be transmitted from one computer to another. This link will involve one computer with a webcam that reads a series of flashing colors (black/white) as binary and converts it to text. Each series of flashes will simulate a packet of data. I will be using OS X and the integrated webcam in a MacBook; the flashing computer will be either Windows or OS X.
So my questions are: which programming languages or APIs would be best for reading live webcam data and analyzing the color of a certain area, as well as for programming and timing the flashes? Also, would I need to worry about matching the flash rate of the "writing" computer and the frame capture rate of the "reading" computer?
Thank you for any help you might be able to provide.
Regarding the frame capture rate, the Shannon sampling theorem says that "perfect reconstruction of a signal is possible when the sampling frequency is greater than twice the maximum frequency of the signal being sampled". In other words, if your flashing light switches 10 times per second, you need a camera of more than 20 fps to properly capture it. So basically check your camera specs, divide by 2, lower the result a little, and you have your maximum flashing rate.
Whatever can get the frames will work. If the light conditions the camera works in are going to be stable, and the position of the light in the images is going to be static, then it is going to be very easy: just check the average pixel values of a certain area.
If you need additional image processing you should probably also find out about OpenCV (it has bindings to every programming language).
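A minimal sketch of that average-pixel-value check using OpenCV (my own illustration; the ROI coordinates, the brightness threshold, and reading one bit per frame are all simplifying assumptions you would calibrate for your setup):

import cv2

cap = cv2.VideoCapture(0)                      # the integrated webcam
bits = []
while len(bits) < 64:                          # collect an arbitrary number of bits
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:200, 100:200]              # area of the image where the light appears
    brightness = roi.mean()                    # average over all pixels and channels
    bits.append(1 if brightness > 128 else 0)  # white frame -> 1, black frame -> 0
cap.release()
print(bits)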
To answer your question about language choice, I would recommend Java. The Java Media Framework is great and easy to use; I have used it for capturing video from webcams in the past. Be warned, however, that everyone you ask will recommend a different language - everyone has their preferences!
What are you using as the flashing device? What kind of distance are you trying to achieve? Something worth thinking about is how you are going to get the receiver to recognise where within the captured image to look for the flashes. Some kind of fiducial marker might be necessary. Longer ranges will make this problem harder to resolve.
If you're thinking about shorter ranges, have you considered using a two-dimensional transmitter (given that you're using a two-dimensional receiver, it makes sense), perhaps one that shows a sequence of QR codes (or similar encodings) on a monitor?
You will have to consider some kind of error-correction encoding, such as a Hamming code. While the encoding increases the data footprint, it might give you better overall bandwidth, because you can crank up the speed much higher without having to worry about the odd corrupt bit.
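To make the error-correction idea concrete, here is a toy Hamming(7,4) encoder (my own illustration, not part of the original answer): every 4 data bits are sent as 7 bits, and the receiver can correct any single flipped bit in each codeword.

def hamming74_encode(d1, d2, d3, d4):
    # parity bits, each covering a different subset of the data bits
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    # standard Hamming(7,4) ordering: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d1, p3, d2, d3, d4]

print(hamming74_encode(1, 0, 1, 1))  # 4 data bits -> 7 transmitted bits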
Some 'evaluation'-type material might include discussing the obvious security risk in using such a channel - anyone with line of sight to the transmitter can eavesdrop! You could suggest in your write-up using some kind of encryption; a block cipher in CBC mode would do, but it would require a key exchange prior to transmission, so you could also think about public-key encryption.

Solar system computer model

I'm interested in building a 3D model of our solar system for web use (probably with AS3 and Papervision) and have been looking into how I would go about encoding the planetary positions. My idea was to download the already-calculated positions from NASA, as calculating the positions myself seems a bit overcomplicated. I'm not sure, though, whether I should use a heliocentric or a geocentric encoding.
I wanted to know if anyone has any experience with this. Which approach would be better? The NASA JPL website seems to give the positions of all the major bodies in our solar system as Earth-centric. I can see this becoming a problem later on, though, when adding Voyager and Mars lander missions to the model.
Any feedback, comments and links are very welcome.
EDIT: I have a rough model running that uses heliocentric coordinates, but I haven't been able to find the coordinates for all planets in this format.
UPDATE:
I don't have a lot of detail to provide for now, because I really don't know what I'm doing (from the space point of view). I wanted to get a handle on 3D programming, and am interested in space. The idea was that I would make a rough solar system simulator with, at first, all the planets and their orbiters (maybe excluding satellites to begin with). Perhaps include a news aggregator and some links to news/resources and so on. The general idea would be to allow people to click around and get super excited about going to the Moon and Mars (for a start).
In the long run I hopefully would be able to add in satellites and the moon missions (scroll back in time to the 70's and see the moon missions).
So, to answer Arrieta's question: the idea was not to calculate eclipses but to build an easy-to-approach, interactive space exploratorium, and learn some 3D and space-related stuff on the way.
Glad you want to build your own simulator, but depending on what you want to do it may be far from an easy task. The simplest approach is as follows:
Download the JPL-DE405 ephemerides and the subroutines for retrieving the planetary positions (wrt Solar System Barycenter).
For the requested timespan, compute the positions and display them on the screen in a visually appealing manner
Done
Now, why would you want to do this? If you want to view the planets' orbits, that's it; you are done. If you want to compute geometric events (like eclipses, line-of-sight, or illumination), then you are in a whole different ball game. That's astronautics, and it is not simple.
Please be more specific. The distinction you make between "geocentric" and "heliocentric" coordinates really has no major difficulty involved. If you have all the states in the heliocentric frame, you can compute the geocentric frame by simple vector subtraction. That's not the problem! The problems are a thousand others, but you need to be specific so we can provide more guidance.
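As a one-line illustration of that vector subtraction (my own sketch; the position values are made up purely to show the shape of the operation):

import numpy as np

r_mars_helio = np.array([2.1e8, 1.2e7, -5.0e6])   # heliocentric position of a body, km (made-up numbers)
r_earth_helio = np.array([1.5e8, 2.5e7, 0.0])     # heliocentric position of the Earth, km (made-up numbers)

r_mars_geo = r_mars_helio - r_earth_helio         # geocentric state = body state minus Earth's state
print(r_mars_geo)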
JPL has provided high quality ephemerides for decades now, and we have a full team of brilliant people working on it. It is one of the most difficult things to get right!
Again, provide more details or check out other sources of information.
Please google "Solar System Simulator" (done here, at JPL) and see if it fulfills your needs.
Cheers.
It may be worth you checking out the ASCOM Platform (we also have a stack exchange site called ASCOM Answers).
The ASCOM Platform has several useful libraries for doing this sort of thing.
USNO NOVAS (Naval Observatory Vector Astrometry)
Kepler orbit engine
The USNO/NOVAS stuff was originally written in C and we've wrapped it up in .NET for ease of use from C# and VB.
As an added bonus (actually it's the raison d’être for ASCOM), the Platform makes it easy for you to control things like telescopes; it's used by Microsoft's World Wide Telescope for exactly that purpose. It might be a fun extension to your model to be able to point a telescope at things.
I'd probably start (well, I did a while back) with heliocentric coordinates and get a few of the planets up and running. But sooner or later you'll want to write a heliocentric-to-geocentric coordinate conversion routine, and its inverse. For some bodies, such as artificial satellites, the geocentric coordinates will be easier to deal with.
You can use the astro-phys api to get a JSON formatted state vector for all the planets. It calculates them using JPL's de406 so it's pretty accurate and uses the solar system barycenter.
Although, if you know where the Sun is relative to the Earth and you're in a geocentric model, you can subtract the position of the Sun from all of the bodies (including the Earth) to get heliocentric coordinates.