I have created a .tmx file in the Tiled editor.
It is saved in an uncompressed format, but now I want it in zlib-compressed format.
I changed the setting under Edit > Preferences, but after switching to the compressed format the change is not reflected in the saved file.
Thanks
I remember experiencing this as well.
I think it's a bug. Try saving the file under a different name, then close and reopen Tiled. After that the setting should apply and the file will be saved with the correct compression.
http://code.google.com/p/libgdx/wiki/GraphicsTileMaps says that certain libgdx backends (GWT) do not support all compressions. So be careful with that.
A table containing almost four thousand records includes a mediumblob field for each record that contains the record's associated PDF report. Under both MySQL Workbench and phpMyAdmin the relevant DOCUMENT column displays the data as a BLOB button or link. In the case of phpMyAdmin the link also indicates the size of the data the Blob contains.
The issue is that when the Blob button/link is clicked, MySQL Workbench's SQL Editor only displays the raw Blob data, and in phpMyAdmin the link only allows the Blob data to be saved as a .bin file instead of displaying or saving the data as a viewable PDF file. All previous attempts to retrieve the original PDFs using PHP have failed - see the related earlier thread: Extract Pdf from MySql Dump Saved as Text.
The filename field in the table shows that all the stored files are PDF files. Further research and tests indicate that the mediumblob data has been stored as application/octet-streams.
My question is how can the original PDFs be retrieved as readable PDFs? Is it possible for a .bin file saved from the database to be converted or used to recover the original PDF file?
Any assistance would be greatly appreciated.
In line with my assumption and Isaac's suggestion, the only solution was to speak to one of the software developers. It transpires that the documents were zipped using a third-party library, and the header was removed, before being stored in the database.
The third-party library used is version 2.0.50727 of Chilkat, available from www.chilkatsoft.com. That version no longer appears to be available, but hopefully at least one of the later versions may do the job.
Thanks again for everyone's input and assistance.
Based on the discussion in the comments, it sounds like you'll need to either refer to the original source code or consult with the original developer to determine exactly how the data was stored.
Using phpMyAdmin to download the mediumblob data as a file will produce a .bin file in many cases. I don't recall exactly how it determines the content type (for instance, a PNG file will download with a .png extension, but most other binary files, PDFs included, simply download as .bin when phpMyAdmin isn't sure what the extension should be). So the behavior you're seeing from phpMyAdmin is expected and correct, but since the .bin file doesn't work when it's renamed to .pdf, something has probably gone wrong with the import and upload.
BLOB data is often stored in a pretty standardized way, but it seems your data doesn't follow that method.
Without seeing the code directly, we can't tell exactly what happened when the data was stored and would only be guessing.
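One thing that can be checked without the source code is the first few bytes of one of the .bin files saved from phpMyAdmin: a plain PDF starts with %PDF, zip data starts with PK, and zlib-compressed data typically starts with 0x78. A rough sketch in Node.js (the filename below is a placeholder):

    const fs = require('fs');

    // Read one of the .bin files saved from phpMyAdmin (placeholder name)
    // and look at its first bytes; known signatures hint at how the data
    // was transformed before it was stored.
    const buf = fs.readFileSync('document.bin');
    console.log('first bytes:', buf.slice(0, 4).toString('hex'));

    if (buf.slice(0, 4).toString('ascii') === '%PDF') {
      console.log('looks like a plain PDF');
    } else if (buf[0] === 0x50 && buf[1] === 0x4b) {   // "PK" - zip archive
      console.log('looks like zip data');
    } else if (buf[0] === 0x78) {                      // common zlib header byte
      console.log('may be zlib-compressed data');
    } else {
      console.log('no recognised signature; the data was probably transformed before storage');
    }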
I'm kind of a programming newbie, but here it goes:
I opened an image file with the program binaryviewer (http://www.proxoft.com/BinaryViewer.aspx) to see its binary code.
Then I used its copy function to copy the binary data first as a .txt file, then as a .jpeg file. The resulting files are much smaller than the original image file and are not readable as images at all.
Why are the resulting files so much smaller? What kind of data is getting lost in this process, and are there ways to prevent that?
Are there specific ways to recreate the image from a file containing only the 0s and 1s of an original image file?
Whatever binary viewer you are using, it just looks at the raw bytes as stored in the file on the disk.
1) When saving 'as text', the viewer itself determines in which format it writes the binary information to a text file. You should look that up in its documentation.
2) It is very unlikely that it has any knowledge of the structure of JPEG files. So again, when you save to a .jpg file, the viewer itself chooses how to output the bytes and dumps them into a file named .jpg, but that file does not have the on-disk structure of a JPEG. For any image viewer trying to read it, it's just garbage.
But as I said in my comments, without knowing what 'binary viewer' you are talking about it's not possible to be more specific.
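To answer the last question directly: if you really have the exact bytes of the original file, writing them back out unchanged gives you a working image again; nothing has to be converted. A minimal sketch in Node.js (the filenames are placeholders):

    const fs = require('fs');

    // Read the original image as raw bytes and write an exact copy.
    // Every byte is preserved, so copy.jpg opens just like the original.
    const bytes = fs.readFileSync('original.jpg');
    fs.writeFileSync('copy.jpg', bytes);

    // By contrast, writing a textual rendering of those bytes (hex digits,
    // '0'/'1' characters, and so on) produces a file with a completely
    // different byte layout, which no image viewer can interpret.
    fs.writeFileSync('as-text.txt', bytes.toString('hex'));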
I'm able to use the standard HTML5 File API and IndexedDB to store large binary files in the browser.
However, when offline, I need to be able to open these files. Using data URLs works great for small files, but no browser supports opening a 10 MB file through a data URL. Is there any other solution, apart from the non-standard window.webkitRequestFileSystem?
I've actually found an answer here: https://developer.mozilla.org/en/docs/Web/API/Blob.
It is possible to save the result of FileReader.readAsArrayBuffer in IndexedDB. When offline, you can create a Blob from this array buffer and then create an object URL to pass to window.open. Works with large files!
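A minimal sketch of that approach, assuming the ArrayBuffer was previously stored in IndexedDB (the database name, object store name, key, and MIME type below are placeholders):

    // Open the database and read back the stored ArrayBuffer.
    const request = indexedDB.open('myDatabase');
    request.onsuccess = () => {
      const db = request.result;
      const getReq = db.transaction('files', 'readonly')
                       .objectStore('files')
                       .get('someKey');
      getReq.onsuccess = () => {
        // Wrap the ArrayBuffer in a Blob with the right MIME type and open it
        // through an object URL, which avoids the size limits of data URLs.
        const blob = new Blob([getReq.result], { type: 'application/pdf' });
        window.open(URL.createObjectURL(blob));
      };
    };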
Is it possible to store HTML5 local storage data in some other file, such as a .txt, .doc or Excel file?
I want to back up the local storage data to another file.
If your question is asking whether it is possible to tell the browser to translate local storage to a .txt or .doc file, the answer is no. Local storage is implemented in the browser and stays in the browser (as defined by W3C).
If you want to have some mechanism that converts local storage data to a file system file, you probably want to use the File API instead.
You can create a client-side file using HTML5. This link shows how to do that. Be careful though: not all browsers support this feature.
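For example, one common approach (a sketch only; the filename is arbitrary) is to serialize the local storage contents to JSON and offer them as a .txt download via a Blob:

    // Collect all localStorage entries into a plain object.
    const data = {};
    for (let i = 0; i < localStorage.length; i++) {
      const key = localStorage.key(i);
      data[key] = localStorage.getItem(key);
    }

    // Wrap the JSON text in a Blob and trigger a download with a temporary link.
    const blob = new Blob([JSON.stringify(data, null, 2)], { type: 'text/plain' });
    const link = document.createElement('a');
    link.href = URL.createObjectURL(blob);
    link.download = 'localstorage-backup.txt';   // arbitrary filename
    document.body.appendChild(link);
    link.click();
    document.body.removeChild(link);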
Hi, does anyone know why MS Office files such as .doc, .docx and .xls can no longer be viewed when retrieved from a MySQL db where they are stored as a Blob?
The .doc and .docx files used to download and open without any problem, but now the file format is no longer recognised.
I'd like to ditto your problem. Images and plain text files upload to and download from a MySQL blob field without issue, but .doc and .docx files seem to be corrupted. I've read somewhere a rumor of MySQL truncating the last 4 bits, but I can't verify that.
I have used XVI32 (a hex editor) to compare local originals of files with versions downloaded from BLOB/LONGBLOB fields. It seems that extra bytes, which I think represent a CRLF, are appended - as far as I can tell, by Windows when the file is written. This doesn't seem to be a problem for some graphic formats, which are fault-tolerant to some extent, but the Office XML-format files are corrupted by this extra data.
I have tried using ob_clean() and ob_flush() [that is, in PHP] before printing/echoing the file contents, but the files are still corrupted as far as Office is concerned.
I know this is an old thread but I would appreciate any solutions anyone might have found since it was last updated.
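In case it helps anyone, the hex-editor comparison can also be scripted. A rough sketch in Node.js, assuming you have both the original file and the downloaded copy locally (the filenames are placeholders):

    const fs = require('fs');

    const original = fs.readFileSync('original.docx');     // placeholder names
    const downloaded = fs.readFileSync('downloaded.docx');

    console.log('original:', original.length, 'bytes, downloaded:', downloaded.length, 'bytes');

    // Find the first offset at which the two files differ.
    const len = Math.min(original.length, downloaded.length);
    let diff = -1;
    for (let i = 0; i < len; i++) {
      if (original[i] !== downloaded[i]) { diff = i; break; }
    }

    if (diff === -1 && original.length !== downloaded.length) {
      // Identical prefix: the extra bytes were simply appended at the end.
      console.log('files differ only in length; extra bytes start at offset', len);
    } else if (diff === -1) {
      console.log('files are identical');
    } else {
      console.log('first difference at offset', diff);
    }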
Did you try with a short .txt file instead of a .doc and see if the contents are different from what you expected?