I am using the LibTiff.NET library to load GeoTiff data in C# (inside Unity).
NOTE: I looked at GDAL as well, but faced similar issues as outlined below, and would much prefer to use LibTiff if possible.
I would ultimately like to be able to take a lat/long value and have a function that returns a chunk of pixel data for a 50m area around that point, streamed from a GeoTiff image on disk (not storing whole image in RAM).
I have a test file that is representative of what my software will be given in production.
I am trying to figure out how to read or compute the lat/long extents of the test file image, as I can't find a good tutorial or sample online which contains this functionality.
I can read the width and height of the file from the TIFF tags, but many other values that seem critical for computing the extents, such as the X and Y resolutions, are not present.
It also appears like the lat/long extents (or a bounding box) are not present in the tags.
At this point I believe there may be more tags or header data that I am not familiar with, because when I load the test file into Caris EasyView I can see a number of properties that I would like to read or compute from the file:
Is it possible to obtain this data using LibTiff?
Or is there a better system I should use? (wrapped GDAL maybe?)
NOTE: I cannot link the test file due to an NDA; it's also enormous.
This is for a 32-bit geotiff:
int width = tiff.GetField(TiffTag.IMAGEWIDTH)[0].ToInt();
int height = tiff.GetField(TiffTag.IMAGELENGTH)[0].ToInt();
int samplesPerPixel = tiff.GetField(TiffTag.SAMPLESPERPIXEL)[0].ToInt();
int bitsPerSample = tiff.GetField(TiffTag.BITSPERSAMPLE)[0].ToInt();
int bytesPerSample = bitsPerSample / 8;
byte[] scanline = new byte[tiff.ScanlineSize()];
float[] scanline32Bit = new float[tiff.ScanlineSize() / 4]; // 4 bytes per 32-bit float sample
// ModelTiepointTag holds six doubles (I, J, K, X, Y, Z): raster point (I, J, K)
// maps to model-space point (X, Y, Z). X sits at byte offset 24, Y at offset 32.
FieldValue[] modelTiePointTag = tiff.GetField(TiffTag.GEOTIFF_MODELTIEPOINTTAG);
byte[] modelTiePoint = modelTiePointTag[1].GetBytes();
double originLon = BitConverter.ToDouble(modelTiePoint, 24);
double originLat = BitConverter.ToDouble(modelTiePoint, 32);
// ModelPixelScaleTag holds three doubles (ScaleX, ScaleY, ScaleZ): the size of one
// pixel in model units (degrees here). Y is negated because latitude decreases as
// the row index increases in a north-up image.
FieldValue[] modelPixelScaleTag = tiff.GetField(TiffTag.GEOTIFF_MODELPIXELSCALETAG);
byte[] modelPixelScale = modelPixelScaleTag[1].GetBytes();
double degreesPerPixelX = BitConverter.ToDouble(modelPixelScale, 0);
double degreesPerPixelY = BitConverter.ToDouble(modelPixelScale, 8) * -1;
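With the tie point origin and the pixel-scale values in hand, the extents are simple arithmetic. A sketch in Python (function and variable names are mine; it assumes the common case of a north-up image whose tie point is the top-left pixel):

```python
def extents(origin_lon, origin_lat, scale_x, scale_y, width, height):
    """Compute a lat/long bounding box from a GeoTiff tie point and pixel scale.

    Assumes the tie point is raster position (0, 0) and the image is north-up,
    so latitude decreases as the row index increases.
    """
    min_lon = origin_lon
    max_lon = origin_lon + width * scale_x
    max_lat = origin_lat
    min_lat = origin_lat - height * scale_y
    return min_lon, min_lat, max_lon, max_lat

# e.g. a 3600x3600 tile of 1-arc-second (1/3600 degree) pixels anchored at (7E, 51N)
print(extents(7.0, 51.0, 1 / 3600, 1 / 3600, 3600, 3600))
```

From there, converting a lat/long back to a pixel index (for the 50 m chunk) is the inverse: column = (lon - origin_lon) / scale_x, row = (origin_lat - lat) / scale_y.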
I'm using AS3, but general programming wisdom unspecific to AS3 is great too!
I am creating my first game, a top-down dungeon crawler with tile-based navigation, and I am deciding how to store my maps. I need to be able to access a specific tile at any point in time. My only thought so far is to use nested Vectors or Arrays with the first level being the row and the second being the column, something like this:
private var map:Array = new Array(Array(0,1,0,0,1,1,0),Array(0,1,0,1,0,1,0));
private var row2col3:uint = map[1][2];
/*map would display as such:*/
#|##||#
#|#|#|#
Ultimately, the idea is to build a Map class that will be easily extensible and, again, allow free access to any specific tile. I am looking for help in determining an effective/efficient design architecture for that Map class.
Thanks!
As stated in the comments, I said I would upload the source code for my 12-hour challenge project to create a tile-based level editor. The source code can be found at: GitHub BKYeates
This level editor focuses on textures being a power of 2, and uses blitting for drawing on all the textures. It can read, write, and store partial tiles. There is also some functionality to erase and draw on collision boxes.
Now in regards to how the storage should be setup, it is really up to you. If you are going to be storing lots of information I recommend using Vectors. Vectors perform faster than most other container types except for ByteArray (if used correctly). In my level editor I used a Vector with a particular setup.
The Vector I used is named _map, in a class called tilemodel. tilemodel is responsible for updating all the storage information when a change is made. The _map variable is set up like so:
_map = new Vector.<Vector.<Vector.<Object>>>();
This is a pretty heavily nested Vector, and in the end it stores, can you believe it, an Object! Which admittedly undercuts the performance gains you get from using Vector when you are indexing the innermost elements.
But ignore that because the indexing gain from this setup is really key. The reason it is setup this way is because I can reference a layer, a row, and a column to grab a specific tile object. For example, I have a tile on layer 2 in row 12 column 13 that I want to access:
var tileObject:Object = _map[2][12][13];
That works perfectly for pretty much any scenario I could use in my tile-based game, and the speed is comparatively better than that of an Object or Dictionary when it is accessed repeatedly (i.e. in a loop, which happens often).
The level editor is designed to use blitting for everything and leave storage to my management classes. The speed gain from doing this is very high, and as currently set up the tilemodel can store partial bitmaps, making it slightly more flexible than your standard, rigid power-of-2 texture reader.
Feel free to look through the source code. But here is a summary of what some of the classes do:
tilecontroller - Issues state changes and updates to tilemanager and tilemodel
tilemanager - Responsible for texture drawing and removal.
tilemodel - Stores and updates the current map on state changes.
r_loader - Loads all assets from assetList.txt (the paths to the images are set there).
hudcontroller - The last thing I was working on; lets you draw collision boxes, which are stored in a separate file alongside the map.
g_global & g_keys - Global constants and static methods used ubiquitously.
LevelEditor - Main class, also designed as the "View" class (see the MVC pattern).
Also as I've mentioned it can read back all the storage. The class used for that I did not upload to GitHub, but figured I would show the important method here:
// @param assets needs to be the list of loaded bitmap images
public function generateMap( assets:* ):void {
    var bmd:BitmapData = new BitmapData( g_global.stageWidth, g_global.stageHeight, true, 0 );
    _canvas = new Bitmap( bmd, "auto", true );
    _mapLayer.addChild( _canvas );
    // Lock while drawing so the screen is only updated once, when we unlock below.
    _canvas.bitmapData.lock();
    g_global.echo( "generating map" );
    var i:int, j:int, m:int;
    for ( m = 0; m < _tiles.length; m++ ) {                // layer
        for ( i = 0; i < _tiles[m].length; i++ ) {         // row
            for ( j = 0; j < _tiles[m][i].length; j++ ) {  // column
                // Why the cast in this test? The level editor stores tiles larger than
                // the grid size at indices whose value is a fraction of the tile size,
                // so int( tile.tile ) == tile.tile picks out only the whole tiles.
                var tile:Object = _tiles[m][i][j];
                if ( tile != null && int( tile.tile ) == tile.tile ) {
                    addTile( g_global.GRIDSIZE * tile.column, g_global.GRIDSIZE * tile.row,
                             { index:tile.tile, bitmap:assets[ tile.tile ] }, tile.rotation );
                }
            }
        }
    }
    _canvas.bitmapData.unlock();
}
Anyway I hope this information finds you well. Good luck!
I asked a similar question a while back: https://gamedev.stackexchange.com/questions/60433/is-it-more-efficient-to-store-my-tile-grid-as-a-dictionary-or-an-array. I'm not sure that it would really matter whether it's an Array or a Vector (the differences in efficiency seem to vary between Flash Player versions, etc.).
But, yeah, you probably want to use one or the other (not a Dictionary or anything), and you probably want to index it like [y * width + x], not [x][y]. Reasons: Efficiency and not having overly complicated data structures.
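To make the suggested scheme concrete, here is a minimal sketch in Python (class and method names are mine); the same index arithmetic carries straight over to an AS3 Vector:

```python
class TileMap:
    """Flat 1-D tile storage, indexed row-major as y * width + x."""

    def __init__(self, width, height, fill=0):
        self.width = width
        self.height = height
        self.tiles = [fill] * (width * height)

    def get(self, x, y):
        return self.tiles[y * self.width + x]

    def set(self, x, y, value):
        self.tiles[y * self.width + x] = value

# A 7x2 map like the one in the question: row 2, column 3 is map.get(2, 1)
m = TileMap(7, 2)
m.set(2, 1, 1)
print(m.get(2, 1))
```

One flat container means one allocation and one level of indirection per lookup, instead of one per row.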
Also, if you need to regularly access the Array or Vector outside of that class, just make the variable internal or public or whatever; making it private and wrapping it with accessor functions, while more prim-and-proper class design, would still be overkill.
One method I am using right now for my own project is storing my tiles in a black-and-white pixel bitmap (with a wrapper class around it). I'm not sure how efficient this is overall, as I've never benchmarked it and just wrote it quickly to create a map for testing purposes. But I find it does offer an advantage: I can draw my maps in an image editor and view them easily, while still allowing random pixel/tile access.
Looking at your sample code, I'm guessing you have only two types of tiles right now, so you could just use black and white pixels as well if you want to try it.
I've done the 2d array method as well (using it still actually for other parts) which works fine too, but perhaps can be harder to visualise at larger sizes. Looking forward to Bennett's answer.
I have an application that uses the Google Map Javascript API v3 and a UIWebView to display a map onscreen. While on this map I can use the app to collect multiple points of GPS data to represent a line.
After collecting 1460-1480 points the app quits unexpectedly (pinch zooming on the map makes the app quit before the 1400+ threshold is reached). It appears to be a memory issue (my app is the blue wedge of the pie chart).
The map screen does receive multiple memory warnings that are handled in an overridden DidReceiveMemoryWarning method in this screen. There is some prior code that called NSUrlCache.SharedCache.RemoveAllCachedResponses.
public override void DidReceiveMemoryWarning()
{
    // BEFORE
    uint diskUsage = NSUrlCache.SharedCache.CurrentDiskUsage;
    uint memUsage = NSUrlCache.SharedCache.CurrentMemoryUsage;
    int points = _currentEntityManager.GeometryPointCount;
    Console.WriteLine(string.Format("BEFORE - diskUsage = {0}, memUsage = {1}, points = {2}", diskUsage, memUsage, points));

    NSUrlCache.SharedCache.RemoveAllCachedResponses();

    // AFTER
    diskUsage = NSUrlCache.SharedCache.CurrentDiskUsage;
    memUsage = NSUrlCache.SharedCache.CurrentMemoryUsage;
    points = _currentEntityManager.GeometryPointCount;
    Console.WriteLine(string.Format("AFTER - diskUsage = {0}, memUsage = {1}, points = {2}", diskUsage, memUsage, points));

    base.DidReceiveMemoryWarning();
}
I added the BEFORE and AFTER sections so I could track cache contents before and after RemoveAllCachedResponses is called.
The shared cache is configured when the application starts (prior to my working on this issue it was not being configured at all).
uint cacheSizeMemory = 1024 * 1024 * 4;
uint cacheSizeDisk = 1024 * 1024 * 32;
NSUrlCache sharedCache = new NSUrlCache(cacheSizeMemory, cacheSizeDisk, "");
NSUrlCache.SharedCache = sharedCache;
When we're on this screen collecting point data and we receive a low memory warning, RemoveAllCachedResponses is called and the Before/After statistics are printed to the console. Here are the numbers for the first low memory warning we receive.
BEFORE - diskUsage = 2258864, memUsage = 605032, points = 1174
AFTER - diskUsage = 1531904, memUsage = 0, points = 1174
Which is what I would expect to happen - flushing the cache reduces disk and memory usage (though I would expect the disk usage number to also go to zero).
All subsequent calls to RemoveAllCachedResponses display these statistics (this Before/After is immediately prior to the app crashing).
BEFORE - diskUsage = 1531904, memUsage = 0, points = 1471
AFTER - diskUsage = 1531904, memUsage = 0, points = 1471
This leads me to believe one of two things: either (1) RemoveAllCachedResponses is not working (unlikely), or (2) there's something in the disk cache that can't be removed because it's currently in use, something like the current set of map tiles.
Regarding #2, I'd like to believe this, figuring the reduction in disk usage on the first call represents a set of tiles that were no longer being used because of a pinch zoom in, but no pinch zooming or panning at all was done on this map, i.e. only one set of initial tiles should have been downloaded and cached.
Also, we are loading the Google Map Javascript API file as local HTML, so it could be that this file is what's remaining resident in the cache. But the file is only 18,192 bytes, which doesn't jibe with the 1,531,904 bytes remaining in the disk cache.
I should also mention that the Android version of this app (written with Xamarin.Android) has no such memory issue on its map screen - it is possible to collect 5500+ points without incident.
So why does the disk cache not go to zero when cleared?
Thanks in advance.
I am confused about this too. You can take a look at the question I asked:
removeAllCachedResponses can not clear sharedURLCache?
One answerer said "it has some additional overhead in the cache and it's not related to real network data."
I have a C++ Qt application that is sending a ByteArray of ARGB values via a TCP socket. The image is 100 x 100. The problem is that when I read off the ByteArray I can't seem to get the image to display right. I've tried lots of different ways, and here is the AS3 code that's got me the closest.
bitmapData.lock();
for (var x:int = 0; x < 100; x++) {
    for (var y:int = 0; y < 100; y++) {
        alphaValue = byteArray.readUnsignedByte();
        redValue = byteArray.readUnsignedByte();
        greenValue = byteArray.readUnsignedByte();
        blueValue = byteArray.readUnsignedByte();
        color = redValue << 16 | greenValue << 8 | blueValue;
        bitmapData.setPixel(y, x, color);
    }
}
bitmapData.unlock();
I'm just not using the alpha value at the moment because I'm not sure what to do with it. Qt says "The image is stored using a 32-bit RGB format (0xffRRGGBB)." I tried using readUnsignedInt() too, but that didn't work either.
Here is what I'm trying to send and what I see on screen.
Sent : Received in Flash
It's close, but not just right. Does anyone have any idea what I may be doing wrong?
Much easier would be to use readUnsignedInt instead of multiple calls to readUnsignedByte. Then use bitmapData.setPixel32 instead of setPixel. That way you don't have to deconstruct and reconstruct the values.
You might also have to switch the endian-ness (e.g. byteArray.endian = Endian.LITTLE_ENDIAN) depending on how it's encoded. I just can't remember off-hand which way around it is, so you can try that if it doesn't work first time.
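To see why the endianness matters, here is a small Python sketch (the sample bytes are invented) showing the same four wire bytes decoded both ways:

```python
import struct

# One pixel sent over the wire as the bytes A, R, G, B in that order.
raw = bytes([0xFF, 0x12, 0x34, 0x56])

big = struct.unpack(">I", raw)[0]     # decode as big-endian
little = struct.unpack("<I", raw)[0]  # decode as little-endian

print(hex(big))     # 0xff123456 -- the intended 0xAARRGGBB value
print(hex(little))  # 0x563412ff -- channels scrambled when the endianness is wrong
```

A wrong endian setting shuffles the channels exactly like this, which produces an image that is "close, but not just right".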
I'm having some trouble understanding how to use KissFFT (1.2.9) correctly. All I am trying to achieve for now is to perform an FFT and then immediately perform an iFFT to reconstruct the original signal again. The code snippet below demonstrates what I'm doing:
void test(short* timeDomainData, int length)
{
    // Create the configurations for FFT and iFFT...
    kiss_fftr_cfg fftConfiguration = kiss_fftr_alloc( length, 0, NULL, NULL );
    kiss_fftr_cfg ifftConfiguration = kiss_fftr_alloc( length, 1, NULL, NULL );

    // Allocate space for the FFT results (frequency bins)...
    kiss_fft_cpx* fftBins = new kiss_fft_cpx[ length / 2 + 1 ];

    // FFT...
    kiss_fftr( fftConfiguration, timeDomainData, fftBins );

    // iFFT...
    kiss_fftri( ifftConfiguration, fftBins, timeDomainData );

    // Clean up (configs from kiss_fftr_alloc are released with free)...
    delete[] fftBins;
    free( fftConfiguration );
    free( ifftConfiguration );
}
What I found is that this actually crashes at run-time. I found that by dividing the size by 2 when creating the KissFFT configurations stopped the crashing:
kiss_fftr_cfg fftConfiguration = kiss_fftr_alloc( length / 2, 0, NULL, NULL );
kiss_fftr_cfg ifftConfiguration = kiss_fftr_alloc( length / 2, 1, NULL, NULL );
However, when I play the reconstructed audio data it's mostly silent with the odd crackle.
Can anyone point me in the right direction?
Many thanks,
P
Edit 1: This is how I include the KissFFT header file and define the FIXED_POINT variable:
#define FIXED_POINT 16
#include "kiss_fftr.h"
This ensures that the typedef'd 'kiss_fft_scalar' type is forced to int16_t (short).
Edit 2: The target platform is Android, so I have also added the following to my Android.mk file:
LOCAL_CPPFLAGS += -DFIXED_POINT
I noticed you are sending in shorts. Are you sure you've compiled everything to use int16_t as the DATATYPE? Sometimes a mismatch of preprocessor environments can cause a problem.
Also, the fixed-point version scales downward in both directions (forward and inverse). So if you expect to reconstruct your signal, you'll want to multiply by a total factor of nfft.
I'd recommend multiplying with saturation in two stages.
e.g. if you're doing an FFT+IFFT of size 1024, then multiply by 32 after the FFT, then again by 32 after the IFFT.
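The factor falls out of the DFT identity x = (1/N) * IDFT(DFT(x)): a transform pair that never applies the 1/N (or, as in fixed-point KissFFT, sheds it progressively) must be compensated by a total factor of N. A small pure-Python round trip with a naive, unscaled DFT (helper names are mine) illustrates the principle:

```python
import cmath

def dft(x):
    """Naive forward DFT, no scaling."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft_unscaled(X):
    """Naive inverse DFT with the 1/N factor deliberately omitted."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N))
            for n in range(N)]

x = [0.0, 1.0, 0.0, -1.0, 0.5, 0.0, -0.5, 0.0]   # toy "audio" block, N = 8
y = idft_unscaled(dft(x))                         # round trip comes back N times too big
restored = [v.real / len(x) for v in y]           # compensate by N, like 32 * 32 = 1024
print([round(v, 6) for v in restored])
```

Splitting the factor into two sqrt(N) stages, as suggested above, keeps intermediate values inside the 16-bit range at each step.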
I'm not sure about the silence, but if you're getting lots of crackles then it may be because you're processing adjacent blocks independently rather than using overlap-add, where you effectively cross-fade between blocks to get a smoother result.
I'm struggling to do the same thing in Android, haven't got it yet (see here!), but I can see a problem in your code: "fftBins" needs to be "length" size. The reason is that it is the raw transform, not the frequency magnitude/phases... I think? Or have I got it wrong?
Please see the class I have created at http://textsnip.com/see/WAVinAS3 for parsing a WAVE file in ActionScript 3.0.
This class is correctly pulling apart info from the file header & fmt chunks, isolating the data chunk, and creating a new ByteArray to store the data chunk. It takes in an uncompressed WAVE file with a format tag of 1. The WAVE file is embedded into my SWF with the following Flex embed tag:
[Embed(source="some_sound.wav", mimeType="application/octet-stream")]
public var sound_class:Class;
public var wave:WaveFile = new WaveFile(new sound_class());
After the data chunk is separated, the class attempts to make a Sound object that can stream the samples from the data chunk. I'm having issues with the streaming process, probably because I'm not good at math and don't really know what's happening with the bits/bytes, etc.
Here are the two documents I'm using as a reference for the WAVE file format:
http://www.lightlink.com/tjweber/StripWav/Canon.html
https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
Right now, the file IS playing back! In real time, even! But...the sound is really distorted. What's going on?
The problem is in the onSampleData handler.
In your wav file, the amplitudes are stored as signed shorts, that is, 16-bit integers. You are reading them as 32-bit signed floats. Integers and floats are represented differently in binary, so that will never work right.
Now, the player expects floats. Why did they use floats? Don't know for sure, but one good reason is that it allows the player to accept a normalized value for each sample. That way you don't have to care or know what bit depth the player is using: the max value is 1, the min value is -1, and that's it.
So, your problem is that you have to convert your signed short to a normalized signed float. A short takes 16 bits, so it can store 2 ^ 16 (or 65,536) different values. Since it's signed and the sign takes up one bit, the maximum magnitude is 2 ^ 15. So, you know your input is in the range -32,768 ... 32,767.
The sample value is normalized and must be in the range -1 ... 1, on the other hand.
So, you have to normalize your input. It's quite easy. Just take the read value and divide it by the max value, and you have your input amplitude converted to the range -1 ... 1.
Something like this:
private function onSampleData(evt:SampleDataEvent):void
{
    var amplitude:int = 0;
    var maxAmplitude:int = 1 << (bitsPerSample - 1); // or Math.pow(2, bitsPerSample - 1);
    var sample:Number = 0;
    var actualSamples:int = 8192;
    var samplesPerChannel:int = actualSamples / channels;
    for ( var c:int = 0; c < samplesPerChannel; c++ ) {
        var i:int = 0;
        while ( i < channels && data.bytesAvailable >= 2 ) {
            amplitude = data.readShort();
            sample = amplitude / maxAmplitude;
            evt.data.writeFloat(sample);
            i++;
        }
    }
}
A couple of things to note:
- maxAmplitude could (and probably should) be calculated when you read the bit depth. I'm doing it in the method just so you can see it in the pasted code.
- Although maxAmplitude is calculated based on the read bit depth and thus will be correct for any bit depth, I'm reading shorts in the loop, so if your wav file happens to use a different bit depth, this function will not work correctly. You could add a switch and read the necessary amount of data (i.e., readInt if the bit depth is 32). However, 16 bits is such a widely used standard that I doubt this is practically needed.
- This function will work for stereo wavs. If you want it to work for mono, rewrite it to write the same sample twice. That is, for each read, you do two writes (your input is mono, but the player expects 2 samples).
- I removed the EOF catch, as you can know whether you have enough data to read from your buffer by checking bytesAvailable. Reaching the end of the stream is not exceptional in any way, IMO, so I'd rather handle that case without an exception handler, but this is just a personal preference.
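The mono case described above (one read, two writes, after normalizing by 2 ^ (bits - 1)) can be sketched outside AS3; here is a small Python illustration (the helper name and byte layout are my own assumptions):

```python
import struct

def mono_shorts_to_stereo_floats(raw):
    """Normalize little-endian signed 16-bit mono samples to -1 ... 1 floats,
    duplicating each sample into left/right as the note above describes."""
    out = []
    max_amplitude = 1 << 15                 # 32768, same as 1 << (bitsPerSample - 1)
    for (amplitude,) in struct.iter_unpack("<h", raw):
        sample = amplitude / max_amplitude  # normalize to the -1 ... 1 range
        out.append(sample)                  # left channel
        out.append(sample)                  # right channel
    return out

# Three mono samples: full positive, full negative, silence.
raw = struct.pack("<3h", 32767, -32768, 0)
print(mono_shorts_to_stereo_floats(raw))
```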