Unreadable bytecode database tar.gz on windows (Maxmind) - mysql

Following my previous question (Maxmind world cities database issue (MySql)), for which I did not receive any solution; it was just closed with a couple of comments (thanks for those, anyway).
So I am reposting my question another way: how can somebody import a database contained in a txt file in bytecode form, compressed in a tar.gz file (maybe twice), into MySQL on Windows?
Here is the file : http://www.maxmind.com/app/worldcities
Thanks in advance,

This is a problem which seems to be affecting a number of people, me included. It is currently being discussed on the MaxMind forums. You may find it helpful to look there; hopefully it can be resolved soon.
[EDIT] It's been solved! The file WAS compressed twice, as you said. See the link for details.

I found the solution with a_horse's help: as he said, the file is zipped twice (tar.gz), but in the wrong way.
So here is the process: gunzip the tar.gz file. You will get a worldcitiespop.txt. Rename this file to tar.gz. Gunzip (force it if required) this file. You will obtain a worldcitiespop.tar file. Rename this file to txt, and there it is!
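The double-compression situation described above can be reproduced with a self-contained sketch; the file contents and names here are made up for the demo, not taken from the actual MaxMind download:

```shell
# Demo of the double-gzip problem: create a file, gzip it twice,
# then recover the original with two decompression passes.
printf 'city,lat,lon\n' > demo.txt
gzip -c demo.txt | gzip -c > demo.txt.gz   # double-compressed, misleading name
gunzip -c demo.txt.gz > inner.gz           # first pass: output is still gzip data
gunzip -c inner.gz > recovered.txt         # second pass: the real plain text
cat recovered.txt
```

Using gunzip -c (decompress to stdout) sidesteps gunzip's complaints about file extensions, so no renaming is actually needed.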

When you have malformed files of this sort, the first thing to try is a program like file. file looks at the first few bytes of a file for magic numbers that identify its format, ignoring the potentially misleading extension. Using this tool, you could have determined the file type, changed the extension to the appropriate one, and continued extracting until you had the plain text you were after.
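For example (a made-up file, just to show the idea of trusting the magic bytes rather than the extension):

```shell
# Sketch: make a gzip file, give it a misleading .txt name, and ask
# 'file' what it really is before renaming and extracting it.
printf 'hello\n' > notes.orig
gzip -c notes.orig > notes.txt        # gzip data wearing a .txt extension
kind=$(file -b notes.txt)             # e.g. "gzip compressed data, ..."
echo "$kind"
mv notes.txt notes.gz && gunzip -f notes.gz   # fix the name, then extract
cat notes                             # the recovered plain text
```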
I hope you'll pardon the broad answer, especially after you've already found a solution to your specific problem, but future visitors to the site are more likely to have the general problem of "unable to open a file which has the wrong extension" than your specific issue.

How and where does dconf/GSettings store configuration data?

Yesterday I tried updating from MATE 1.4 to MATE 1.6. I didn't like some things about it, and I decided to switch back, at least for now. One of the changes was a switch from the mateconf configuration system to GNOME 3's GSettings. As I understand it, this is a frontend to a system called dconf (or connected to it in some other way).
This rendered many of my settings void. I figured I could try to migrate them, but unlike gconf and mateconf, which created convenient folders in my home directory and filled them with XML I could edit or copy, I wasn't able to find any trace of dconf's settings storage.
A new Control Center is provided (and mandatory to install) but I don't want to be clicking through dozens of dialogs just to restore settings I already have. The Configuration Editor utility might be okay, but it only works with mateconf.
So what I want to know is where I can find the files created by dconf and how I can modify them directly, without relying on special tools.
I almost forgot that I asked this, until abo-abo commented on it. I now see that this is a Super User question, but for some reason I can't flag it. I would if I were able to.
The best solution I found was to install dconf-tools, which is like the old conf-editors.
As for the actual location of the data on disk, it seems to be stored in /var/etc/dconf as Gzipped text files, but I'm not entirely sure because I'm not using Mate 1.6 right now. I wouldn't advise editing them directly.
I've been having another issue with dconf, and I checked the folder that I mentioned above. It doesn't even exist. There now seems to be a single configuration file at ~/.config/dconf/[USERNAME]. It isn't in text format, so special tools are required to edit it.
This might be the result of an update to dconf.
I had a similar problem (I was trying to back up custom keyboard shortcuts). The commands for that were:
dconf dump /org/gnome/desktop/wm/keybindings/ > wm-keybindings.dconf.bak
dconf dump /org/gnome/settings-daemon/plugins/media-keys/ > media-keys-keybindings.dconf.bak
Thanks to redionb's answer on Reddit for this.

Read all CSV files in a directory into an internal table

I have a parameter and, on F4, the user can choose a directory. I'm trying to figure out how to choose a folder and read the contents of all the files in it (the files are .CSV) into an internal table. I think I have to use the TMP_GUI_DIRECTORY_LIST_FILES function module. I hope I'm explaining myself clearly. Thank you.
You'll have to do this manually: first read the list of files, then go through each file and process its contents. There may be some odd function modules to read CSV files, but be aware that many of them are broken; for example, they just clip lines that exceed a certain length. Therefore I won't recommend any of them: personally, I'd implement the CSV import part myself.
If you have access to the transaction KCLJ in your system you could analyze the coding behind it. This tool has an option to interpret CSV files so you might find interesting function modules that might help you with your tasks.
EDIT: I looked at it very quickly, and the piece of coding you could reuse is reconvert_format from include RKCDFILEINCFOR. An example of how to call it starts at line 128 of the same include.

How difficult would it be to add a message on 1000+ html files?

I have over 1000 HTML files that I need to edit in exactly the same way. I need to:
Add a simple JavaScript snippet at the top of each file.
Put some kind of message at the top (it can be anything, as long as it displays the message I want).
I was wondering, do I have to edit each file manually to do this? Aren't there .htaccess hacks or anything like that?
Any suggestions/help would be appreciated.
If you are using Linux, or have installed Cygwin on Windows, then sed may be the quickest way to edit the files.
Combined with find, it can be used to add to (or indeed edit) many files very quickly.
For example, the following command will replace all instances of the word 'old' with 'new' in all .html files:
find . -name "*.html" -exec sed -i "s/old/new/g" '{}' \;
There are many other examples online.
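Applied to your case, a hedged sketch: the script name, the banner text, and the assumption that every file contains a literal <body> tag are all mine, not from your question. Note that the -i flag shown is GNU sed; BSD/macOS sed needs sed -i ''.

```shell
# Create a sample file, then insert a script tag and a banner message
# right after <body> in every .html file under the directory.
mkdir -p site
printf '<html><body><p>page</p></body></html>\n' > site/a.html
find site -name '*.html' -exec sed -i \
  's#<body>#<body><script src="notice.js"></script><div>Site notice</div>#' '{}' \;
cat site/a.html
```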
You can use .htaccess to autoprepend some code, but to be honest, a global find/replace would be a better idea in many ways.
I don't know what OS you use, but as a Mac developer I find http://www.hexmonkeysoftware.com/ a neat little tool that does find and replace over loads of files.
Otherwise, a quick python script would be easy to write to do this.
If there is any common structure to the files, and their content is valuable and going to be used further in some way, then I would consider going the opposite route and extracting all that information, storing it in a database (or something) and presenting it like normal. This would provide more flexibility in presentation, and could even make the data useful/usable in other ways.

mv() while reading

On a Linux ext3 filesystem, what happens if mv() is called on a file while the file is being read (via an open file descriptor)? It is actually an exam question, and I can only say something like:
CPU traps OS for interrupt handling
etc, etc.
I would appreciate if OS guys out there can help me out, please :D
The Linux rename man page explains most of the details of this:
If one or more processes have the file open when the last link is removed,
the link shall be removed before rename() returns, but the removal of the
file contents shall be postponed until all references to the file are closed.
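You can watch this behaviour from the shell: a rename only rewires directory entries, so an already-open file descriptor keeps working. A small demo (not ext3-specific):

```shell
# Open a file on fd 3, rename it while the descriptor is open, and show
# that reading through the descriptor is unaffected by the rename.
printf 'line1\nline2\n' > original.txt
exec 3< original.txt            # the kernel now holds a reference to the inode
mv original.txt renamed.txt     # rename while the descriptor is open
read -r first <&3               # the open descriptor still reads the same data
echo "$first"
exec 3<&-                       # close the descriptor
```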

Decipher binary format of file

I have a binary file to which I'm trying to write, but I don't have the file format specification, nor have I found it using Google. I've been looking at the file with a hex editor, but so far that has only given me a headache. Is there a better way to decipher the format of the file so that I can append data to it?
File carving tools such as scalpel won't really help here. They're made for extracting files with known header and/or footer signatures from a memory dump or some larger, composite file.
For your scenario, I would recommend a hex editor with templating capability, like the 010 Editor. This will allow you to name and annotate "fields" in the binary as you learn more about what each part of the file does. Unfortunately, the process of finding out what each field does is mostly manual. As a methodology, just start playing with it. Change some values in your current binary and see what happens. Expect to spend significant time on it, but also enjoy the process!
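Even before reaching for a templating editor, a plain hex dump of the leading bytes is a useful first step; comparing them against published magic numbers often identifies the container format. A sketch with od (the PNG signature here is just a stand-in for whatever magic your file has):

```shell
# Fabricate a file that starts with the 8-byte PNG signature, then dump
# its leading bytes in hex for comparison against known magic numbers.
printf '\211PNG\r\n\032\n' > mystery.bin
od -A x -t x1 -N 8 mystery.bin    # 89 50 4e 47 0d 0a 1a 0a
```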
You may want to search it with an open-source forensic application like foremost or scalpel. They will do most of the grunt work for you; you just likely won't learn anything.