Can I modify the total number of pages in a multi-page TIFF?

I am receiving data from a camera and saving each image as a page in a multi-page TIFF. I can set each file to have e.g. 100 pages, and I am calling:
TIFFSetField(out, TIFFTAG_PAGENUMBER, page_number, total_pages);
However, if I am unable to write data to disk fast enough, I will stop the acquisition. At that point I may have written only 50 out of 100 pages into the multi-page TIFF. The file then reports a total of 100 pages, but only 50 have actually been written; some applications will report 100 pages, with pages 51-100 containing no data and appearing black.
Therefore, I need to update the total_pages value at the moment I stop writing to disk, setting it to the number of the last page actually written.
Can this be done at all? Is the total_pages value written once into a common header that I could update to fix the file, or is it written into each page, which would mean editing every page already on disk? Or is there a better approach to handling this?

Actually, the solution is quite simple.
Once your image stream has ended and before you close the file, you have to iterate through all the directories (images in the multi-page tiff) and update the TIFFTAG_PAGENUMBER to the total pages written.
The catch is that you have to do it before you close the TIFF by calling TIFFClose. Once a TIFF is closed, its tags can no longer be edited (see http://www.libtiff.org/libtiff.html):
Note that unlike the stdio library TIFF image files may not be opened for both reading and writing; there is no support for altering the contents of a TIFF file.
if (pagesTotal - pagesWritten > 0)
{
    // Revisit each directory that was actually written and update
    // its page count to the true total before closing the file.
    for (int i = 0; i < pagesWritten; i++)
    {
        int retVal = TIFFSetDirectory(out, i);
        retVal = TIFFSetField(out, TIFFTAG_PAGENUMBER, i, pagesWritten);
        retVal = TIFFWriteDirectory(out);
    }
}
TIFFClose(out);
pagesTotal is the number of pages we intended to write into this multi-page file.
pagesWritten is the number of pages we actually wrote into the file.


how to make Ghidra use a function's complete/original stackframe for decompiled code

I have a case where some function allocates/uses a 404 bytes temporary structure on the stack for its internal calculations (the function is self-contained and shuffles data around within that data structure). Conceptually the respective structure seems to consist of some 32-bit counters followed by an int[15] and a byte[80] array, and then an area that might or might not actually be used. Some of the generated data in the tables seems to represent offsets that are again used by the function to navigate within the temporary structure.
Unfortunately, Ghidra's decompiler makes a total mess while trying to make sense of the function: in particular, it creates separate "local_.." int vars (and then uses a pointer to that var) for what should correctly be a pointer into the function's original data structure (e.g. pointing into one of the arrays).
undefined4 local_17f;
...
dest = &local_17f;
for (i = 0xf; i != 0; i = i + -1) {
    *dest = 0;
    dest = dest + 1;
}
Ghidra does not seem to understand that an array-based data access is actually being used at that point. Ghidra's decompiler then also generates a local auStack316[316] variable which unfortunately covers only part of the local data structure used by the original ASM code (at least Ghidra did notice that a temporary memory buffer is used). As a result, the decompiled code basically uses two overlapping (and broken) shadow data structures that should correctly be the same single block of memory.
Is there some way to make Ghidra's decompiler use the complete 404 bytes block allocated by the function as an auStack404 thus bypassing Ghidra's flawed interpretation logic and actually preserve the original functionality of the ASM code?
I think I found something. In the "Listing" view, the local-variable layout is shown as a comment under the function's header. It seems that by right-clicking a local-var line in that comment, "set data type" can be applied to the respective local variable. Ah, and then there is what I've been looking for, under "Function > Edit Stack Frame" :-)

Octave script showing old result every time

I have an Octave script that reads a file containing binary data, using the fopen, fseek, and fread functions.
First, I read the file in a loop like this:
fid = fopen('myfile.txt', 'rb');
fseek(fid, 0);
for i = 1:5
    data = fread(fid, 1000);
    ...
    ...<operations I want to do>
    ...
endfor
It reads 1000 values in each iteration and computes the results for each block of 1000.
Then I read the file from different position by changing the fseek line like below:
fseek(fid, 1000);
But it still gives the same result as it did for the first slot when I read the file from the beginning, even though I am no longer reading that slot.
Then I tried the same thing on my other computer, and there it worked the first time; on the second attempt, though, it showed the same behavior. At first I thought there might be a problem with my script or the generated file, but since it worked once on the other computer, I now think there is some kind of problem with Octave. Maybe I need to clear the memory or something.
Has anyone ever faced this type of problem?

ImageJ Macro for converting multichannel Tiffs to Tiffs with only specified channels

I have a quite simple programming question I was hoping somebody could help me with.
I'm working with Tiff files with several channels (all contained in a .lif file, which is the Leica format). I want a way to easily convert all my Tiffs to Tiffs containing only a few channels that I specify. Right now I'm doing it manually for each image, and it is tedious. I have no experience writing macros, and some help or a starting point would be much appreciated. I'm sure it's not a complicated macro to write.
As of now I'm using the following manual routine and commands after I have opened all my Tiffs:
Image > Stacks > Stack to Images - separates the stacked image into individual images.
Close the images I don't want in the stack.
Image > Stacks > Images to Stack - returns the remaining images to a stack and renames it.
Image > Hyperstacks > Stack to Hyperstack - here I change it so that the image has 3 channels.
Save the new Tiff with the desired channels and name.
Close the Tiff and repeat for all Tiffs.
What I want is a macro that loops the above steps for all open Tiffs, letting the user specify the channels to keep (e.g. channels 2, 3, and 5). I know it's a very simple programming task, but I could really use some help getting it done.
Thanks!
Johannes
There are several less complex possibilities to create a stack with only a subset of channels:
Image > Stacks > Tools > Make Substack..., which lets you specify the channels/slices, and gets recorded as:
run("Make Substack...", "channels=1,3-5");
Image > Duplicate..., where you can select a continuous range of channels, such as:
run("Duplicate...", "duplicate channels=1-5");
To apply this procedure to all images in a folder, have a look at the Process Folder template in the Script Editor (Templates > IJ1 Macro > Process Folder) and at the documentation on the Fiji wiki:
Scripting toolbox
How to apply a common operation to a complete directory
Thanks for the help, Jan Eglinger. Back from vacation, I managed to write the macro, which was simple with your help :) Based on the template it looks like this (I just gave the output files incremental names, which is fine for my purpose, but it could be made more general, I guess):
/*
 * Macro for converting multichannel Tiffs to Tiffs with only specified channels; processes multiple images in a folder.
 */

// Globals must be declared with "var" so the functions below can see them.
var output;
var suffix = ".tif";
var count = 0; // used to give the output files incremental names

input = getDirectory("Input directory");
output = getDirectory("Output directory");
Dialog.create("File type");
Dialog.addString("File suffix: ", ".tif", 5);
Dialog.show();
suffix = Dialog.getString();
processFolder(input);

function processFolder(input) {
    list = getFileList(input);
    for (i = 0; i < list.length; i++) {
        if (File.isDirectory(input + list[i]))
            processFolder("" + input + list[i]);
        if (endsWith(list[i], suffix))
            processFile(input, output, list[i]);
    }
}

function processFile(input, output, file) {
    open(input + file);
    print("Processing: " + input + file);
    run("Make Substack...", "channels=1,2,4"); // specify which channels should be kept in the final tif
    print("Saving to: " + output);
    count++;
    saveAs("Tiff", output + count);
    close("*");
}

how to organize more than 40000 mp3 files in wp8?

I want to place about 45,000 mp3 files in my WP8 app.
All of them are sound effects of less than 5 KB each; however, the total size is more than 150 MB.
I think there are two ways to store them:
store each mp3 as a separate file, which needs more than 150 MB logically and actually more than 220 MB on disk;
save all of them in one binary file, maybe with a structure like below:
first 4 bytes: length of the mp3 file name;
then byte[]: the mp3 file name;
next 4 bytes: length of the mp3 data;
next byte[]: the actual content of the mp3;
and repeat this record to append all of them into one file.
That needs only 150 MB; however, I have to seek to the position of each mp3 file.
Which one do you think is better? I prefer the second solution, but I can't find any API that can seek from offset 0 to offset 150*1024*1024, and maybe this will raise performance issues.
I'd use the second option, with an index and a BinaryReader. You can seek to a position in the file, something like this:
byte[] mp3File;
// Get the file.
var file = await dataFolder.OpenStreamForReadAsync("combined_mp3s.bin");
using (var binReader = new BinaryReader(file))
{
    binReader.BaseStream.Seek(offset, SeekOrigin.Begin); // using your math ...
    var fileName = binReader.ReadString(); // length-prefixed string read
    // or you could read the length yourself and then read the characters
    var mp3FileLen = binReader.ReadInt32();
    mp3File = binReader.ReadBytes(mp3FileLen);
}
I would suggest you have a hash/dictionary of the file names and starting position of the data for each file stored separately so your application can quickly locate the contents of a file without doing a scan.
Downloading
You may also want to consider breaking the huge file into several smaller files for a better download experience. You could append to the big file as the contents of each group or individual file become available on the phone.
The problem with your second alternative is that by avoiding the filesystem, you lose a great deal of convenience that the filesystem would afford you.
Perhaps most importantly, with your proposed data structure you can no longer retrieve a file with a particular name without scanning (potentially) the entire file.
The extra space in the filesystem is probably well used.
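To make the record layout from the question concrete, here is a small Node.js sketch (not WP8/C#; packFiles, readFile, and the field names are invented for the example). It packs length-prefixed records into one buffer and keeps a separate index of start offsets, as the first answer suggests:

```javascript
// Pack [4-byte name length][name][4-byte data length][data] records
// into one buffer, remembering each record's start offset in an index.
function packFiles(files) {
  var chunks = [], index = {}, offset = 0;
  files.forEach(function (f) {
    var name = Buffer.from(f.name, 'utf8');
    var nameHeader = Buffer.alloc(4);
    nameHeader.writeInt32LE(name.length, 0);
    var dataHeader = Buffer.alloc(4);
    dataHeader.writeInt32LE(f.data.length, 0);
    index[f.name] = offset;
    chunks.push(nameHeader, name, dataHeader, f.data);
    offset += 4 + name.length + 4 + f.data.length;
  });
  return { blob: Buffer.concat(chunks), index: index };
}

// Read one record back, given its start offset from the index,
// without scanning the rest of the blob.
function readFile(blob, offset) {
  var nameLen = blob.readInt32LE(offset);
  var name = blob.toString('utf8', offset + 4, offset + 4 + nameLen);
  var dataLen = blob.readInt32LE(offset + 4 + nameLen);
  var start = offset + 4 + nameLen + 4;
  return { name: name, data: blob.slice(start, start + dataLen) };
}

var packed = packFiles([
  { name: 'a.mp3', data: Buffer.from([1, 2, 3]) },
  { name: 'b.mp3', data: Buffer.from([4, 5]) }
]);
var rec = readFile(packed.blob, packed.index['b.mp3']);
console.log(rec.name); // b.mp3
```

The index (name -> offset) is what makes random access cheap: one seek and two small reads per file, instead of a scan.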

Display data that has already loaded in AngularJS

I have a cshtml page using AngularJS that contains a list of objects from a database,
but it takes a long time to load the data.
How can I load just 10 objects and display them, and then continue loading
and showing the rest of the data in AngularJS?
It sounds like you have a lot of data that you want to load gradually into the front end so the user doesn't have to wait. The only method I can think of for adding data periodically would be a setInterval.
The key would be to create a new variable
$scope.displayObjects = []
that you could continuously append to, like so:
for (var x = currentIndex; x < currentIndex + step && x < $scope.objects.length; x++) {
    $scope.displayObjects.push($scope.objects[x]);
}
Then just set up an interval to call that continuously. You will also need to make use of
$scope.$apply()
to tell Angular to re-apply itself (http://docs.angularjs.org/api/ng.$rootScope.Scope#$apply).
http://jsfiddle.net/A9uND/20/ - you can adjust how much is loaded at each step via the step variable.
Next time, include a jsfiddle (or something similar) so it's easier to assist you.
While the method outlined above will work, I would suggest tracking how much is visible and only loading the relevant items, adding more as you scroll.
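For what it's worth, the chunk-append loop from the answer can be sketched as a plain JavaScript function, independent of Angular (appendChunk is an illustrative name; the setInterval and $scope.$apply wiring is omitted):

```javascript
// Copy up to `step` items from `source` into `display`, starting at
// `currentIndex`; return the new index, clamped to the array length.
function appendChunk(source, display, currentIndex, step) {
  for (var x = currentIndex; x < currentIndex + step && x < source.length; x++) {
    display.push(source[x]);
  }
  return Math.min(currentIndex + step, source.length);
}

// Simulate the interval ticks: load 3 items per tick until done.
var objects = [1, 2, 3, 4, 5, 6, 7];
var displayObjects = [];
var idx = 0;
while (idx < objects.length) {
  idx = appendChunk(objects, displayObjects, idx, 3);
  // In the Angular version, each tick would end with $scope.$apply().
}
console.log(displayObjects.length); // 7
```

Binding the template to displayObjects instead of the full objects array keeps the initial render small while the rest streams in.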