Edit CSV tables in Emacs 24

How can I edit CSV tables with the same comfort that org-mode tables offer?
I tried csv-mode, but it has been unmaintained since August 2004, and the code says
This package is intended for use with GNU Emacs 21 (only).
Which package best supports editing CSV files in Emacs 24?
One solution could be to use org-mode, but I have not managed to set the column separator to , yet.
With org-mode running I can convert the table to the |-separated layout with C-c |. But I think it should be possible to make this easier. Something like this (not a working example):
tea, price
fruit, 3.45
earl grey, 2.42
ginger, 1.63
# eval: (setq org-mode-separator ",")
# eval: (org-mode)
# End:

Work on csv-mode has been restarted. See https://sites.google.com/site/fjwcentaur/emacs. You can install the latest version from ELPA with M-x package-install RET csv-mode. It works perfectly with Emacs 24.

This is a bit quick and dirty, but it might work for you: temporarily replace all commas with pipes (using M-%) and then turn on the org-table minor mode with:
M-x orgtbl-mode
When you're done, replace all pipes with commas again. Just make sure that there aren't any pipe characters or quoted commas in your cell fields before you begin.
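The same comma-to-pipe round trip can also be scripted outside Emacs. Here is a sketch using sed (the file names are invented for the example); note that, as with the interactive approach, it breaks if fields contain literal pipes or quoted commas:

```shell
# create a small CSV, swap commas for pipes, edit in orgtbl-mode, swap back
printf 'tea,price\nearl grey,2.42\n' > prices.csv
sed 's/,/|/g' prices.csv > prices.org      # open this file with M-x orgtbl-mode
sed 's/|/,/g' prices.org > prices-out.csv  # convert back when done editing
cat prices-out.csv
```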

Related

Do line endings matter when moving mysql database between windows and linux?

I am exporting a MySQL database on a windows machine (running XAMPP) to then import into a Linux server (using cmdline or phpMyAdmin IMPORT "filename.sql")
The dbdump file has mixed LF/CRLF line endings, and I know Linux uses LF for line endings.
Will this cause a problem?
Thanks
I anticipate that the mysql program on each platform would expect "its" line-ending style. I honestly don't know if, on each platform, it is "smart enough" to know what to do with each kind of file. (Come to think of it, maybe it does ...) Well, there's one sure way to find out ...
You say that the file has mixed(?!) line-endings? That's very atypical ...
However, there are ready-made [Unix/Linux] utilities, dos2unix and unix2dos, which can handle this problem – and I'm quite sure that Windows has them too. Simply run your dump-file through the appropriate one before using it.
You can import a database from Windows to Linux without problems.
MySQL's SQL tokenizer skips all whitespace characters, according to ctype.h.
https://github.com/mysql/mysql-server/blob/8.0/mysys/sql_chars.cc#L94-L95
else if (my_isspace(cs, i))
state_map[i] = MY_LEX_SKIP;
The my_isspace() function tests that in character set cs, the character i is whitespace. For example in ASCII, this includes:
space
tab
newline (\n)
carriage return (\r)
vertical tab (\v)
form feed (\f)
All of the whitespace characters are considered the same for this purpose. So there's no problem using CRLF or LF for line endings in SQL code.
But if your data (i.e. string values) contain different line endings, those line endings will not be converted.
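A quick way to see (and normalise) the difference is to strip the carriage returns yourself; this is essentially what dos2unix does (file names here are invented for the demo). Be aware that tr -d '\r' removes every CR, including ones inside string values, so a real dos2unix pass is safer for data-bearing dumps:

```shell
# make a CRLF-terminated dump, then normalise it to LF-only line endings
printf 'SELECT 1;\r\nSELECT 2;\r\n' > dump_crlf.sql
tr -d '\r' < dump_crlf.sql > dump_lf.sql
cat dump_lf.sql
```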

Fuzzing command line arguments [argv]

I have a binary I've been trying to fuzz with AFL. The only problem is that AFL only fuzzes stdin and file inputs, while this binary takes input through its arguments: pass_read [input1] [input2]. I was wondering if there are any methods/fuzzers that allow fuzzing in this manner?
I don't have the source code, so making a harness is not really applicable.
Michal Zalewski, the creator of AFL, states in this post:
AFL doesn't support argv fuzzing, because TBH, it's just not horribly useful in
practice. There is an example in experimental/argv_fuzzing/ showing how to do it
in a general case if you really want to.
Link to the mentioned example on GitHub: https://github.com/google/AFL/tree/master/experimental/argv_fuzzing
There are some instructions in the file argv-fuzz-inl.h (haven't tried myself).
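Outside of AFL, you can also drive an argv-taking binary directly from a corpus file with xargs, which turns lines of input into command-line arguments. A minimal sketch (echo stands in here for the real pass_read binary):

```shell
# corpus.txt holds one candidate input per line; xargs groups them in pairs
printf 'AAAA\nBBBB\nCCCC\nDDDD\n' > corpus.txt
xargs -n 2 echo < corpus.txt   # replace `echo` with ./pass_read
```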
Bash only Solution
As an example, let's generate 10 random strings and store them in a file:
tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 10 | head -n 10 > string-file.txt
Next, let's read two lines at a time from string-file.txt and pass them to our application:
exec 3< string-file.txt
while read -r string1 <&3 ; do
    read -r string2 <&3
    ./pass_read "$string1" "$string2" >> crash_file.txt
done
exec 3<&-
We then have any crashes stored within crash_file.txt for further analysis.
This may not be the most elegant solution, but perhaps it gives you an idea of other possibilities if no existing tool fulfills your requirements.
I looked at the AFLplusplus repo on GitHub. Inside AFLplusplus/utils/argv_fuzzing/, there is a Makefile. If you run it, you will get a .so file (a shared library) that you can use to do argv fuzzing, even if you only have the binary. Obviously, you must use AFL_PRELOAD. You can read more in the README.

astyle: problems excluding files and directories using the "--exclude" option

I recently hit a usage problem with astyle that I have been unable to figure out. I am not sure if this is a bug, or I am simply using the astyle tool incorrectly.
I am attempting to use the "--exclude" option to omit files and directories from processing, but continue to get an "unmatched" exclude error and astyle terminates:
bwallace$ ls -l foo.c
-rw-r--r-- 1 bwallace 1767304860 22 Aug 1 21:36 foo.c
bwallace$ astyle ./foo.c --exclude=./foo.c -v
Artistic Style 2.04 08/03/2014
Exclude (unmatched) ./foo.c
Artistic Style has terminated
When I pass the "-i" (ignore exclude errors) astyle processes the file as expected. Hence, it seems to be a problem with the "exclude" statement.
bwallace$ astyle ./foo.c --exclude=./foo.c -v -i
Artistic Style 2.04 08/03/2014
Exclude (unmatched) ./foo.c
Unchanged ./foo.c
0 formatted 1 unchanged 0.00 seconds 2 lines
Is this a bug? Am I using astyle incorrectly? Any help would be appreciated.
Excluding a directory is done using simple substring matching rather than matching actual directories. I've been having the same issue and figured it out by looking at the source.
Adding a lot of options is a bit tedious. I've found it's easiest to create an options file. There are instructions on the astyle website about where to put it.
To exclude multiple files or directories you need to have multiple "--exclude" options in the file:
--exclude=dir/subdir1
--exclude=dir/subdir2
Try this: astyle "*.c" --exclude=foo.c - that should do the trick.
The . in your exclude statement is one of the issues. Using a wildcard for Astyle's input ("*.c") also seems to be required.
This is definitely weird behaviour on Astyle's side.
An unmatched exclude flag results in an "exclude error" and AStyle terminates.
When you add --ignore-exclude-errors, AStyle continues despite this error. I usually add this flag to my options files.
For the record - I'm using AStyle 3.1, so it could be that this improved in the meantime.

Use LibreOffice to convert HTML to PDF from Mac command in terminal?

I'm trying to convert an HTML file to a PDF by using the Mac terminal.
I found a similar post and I did use the code they provided. But I kept getting nothing. I did not find the output file anywhere when I issued this command:
./soffice --headless --convert-to pdf --outdir /home/user ~/Downloads/*.odt
I'm using Mac OS X 10.8.5.
Can someone show me a terminal command line that I can use to convert HTML to PDF?
I'm trying to convert a HTML file to a PDF by using the Mac terminal.
OK, here is an alternative way to convert (X)HTML to PDF on the Mac command line. It does not use LibreOffice at all and should work on all Macs.
This method (ab)uses a filter from the Mac's print subsystem, called xhtmltopdf. This filter is usually not meant to be used by end-users but only by the CUPS printing system.
However, if you know about it, know where to find it and know how to run it, there is no problem with doing so:
The first thing to know is that it is not in any desktop user's $PATH. It is in /usr/libexec/cups/filter/xhtmltopdf.
The second thing to know is that it requires a specific syntax and order of parameters to run. Calling it with no parameters at all (or with the wrong number) makes it emit a short usage hint:
$ /usr/libexec/cups/filter/xhtmltopdf
Usage: xhtmltopdf job-id user title copies options [file]
Most of these parameter names show that the tool is clearly related to printing. The command requires at least 5 parameters, plus an optional 6th. If only 5 parameters are given, it reads its input from <stdin>; otherwise it reads from the 6th parameter, a file name. It always writes its output to <stdout>.
The only CLI parameters interesting to us are number 5 (the "options") and the optional number 6 (the input file name).
When we run it on the command line, we have to supply 5 dummy or empty parameters before the input file's name, and we have to redirect the output to a PDF file.
So, let's try it:
/usr/libexec/cups/filter/xhtmltopdf "" "" "" "" "" my.html > my.pdf
Or, alternatively (this is faster to type and easier to check for completeness, using 5 dummy parameters instead of 5 empty ones):
/usr/libexec/cups/filter/xhtmltopdf 1 2 3 4 5 my.html > my.pdf
While we are at it, we could try to apply other CUPS print-subsystem filters to the output: /usr/libexec/cups/filter/cgpdftopdf looks like an interesting one. This additional filter expects the same number and order of parameters as all CUPS filters.
So this should work:
/usr/libexec/cups/filter/xhtmltopdf 1 2 3 4 5 my.html \
| /usr/libexec/cups/filter/cgpdftopdf 1 2 3 4 "" \
> my.pdf
However, piping the output of xhtmltopdf into cgpdftopdf is only interesting if we also apply some "print options", i.e. if we come up with settings in parameter no. 5 which achieve something.
Looking up the CUPS command line options on the CUPS web page suggests a few candidates:
-o number-up=4
-o page-border=double-thick
-o number-up-layout=tblr
do look like they could be applied while doing a PDF-to-PDF transformation. Let's try:
/usr/libexec/cups/filter/xhtmltopdf 1 2 3 4 5 my.html \
| /usr/libexec/cups/filter/cgpdftopdf 1 2 3 4 5 \
"number-up=4 page-border=double-thick number-up-layout=tblr" \
> my.pdf
The results I achieved with this method used as input two HTML files that were identical apart from one line: the line referencing the CSS file used to render the HTML.
The resulting PDFs show that the xhtmltopdf filter is able to (at least partially) take CSS settings into account when it converts its input to PDF.
Starting with LibreOffice 3.6.0.1, you need unoconv on the system to convert documents.
Using unoconv with MacOS X
LibreOffice 3.6.0.1 or later is required to use unoconv under MacOS X. This is the first version distributed with an internal python script that works. No version of OpenOffice for MacOS X (3.4 is the current version) works because the necessary internal files are not included inside the application.
I just had the same problem, but I found this LibreOffice help post. It seems that headless mode won't work if you've got LibreOffice (the usual GUI version) running too. The fix is to add an -env option, e.g.
libreoffice "-env:UserInstallation=file:///tmp/LibO_Conversion" \
--headless \
--invisible \
--convert-to csv file.xls

Similar language features to compare with Perl and Ruby __END__

Background
Perl and Ruby have the __END__ and __DATA__ tokens that allow embedding of arbitrary data directly inside a source code file.
Although this practice may not be well-advised for general-purpose programming use, it is pretty useful for "one-off" quick scripts for routine tasks.
Question:
What other programming languages support this same or similar feature, and how do they do it?
Perl supports the __DATA__ marker, which you can access the contents of as though it were a regular file handle.
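For comparison, POSIX shell gets a similar effect with a heredoc: the data sits inline at the end of the block, inside the script itself (the names and values here are invented):

```shell
# read embedded records until the DATA delimiter is reached
while read -r name age; do
  printf '%s is %s\n' "$name" "$age"
done <<'DATA'
Jim 14
Sarah 11
DATA
```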
Fortran has a DATA statement that sounds like what you're looking for.
BASIC on the VIC-20 and C64 had a DATA statement that worked something like this:
100 DATA 1,2,3
110 DATA 4,5,6
The data could be read back via a READ statement.
I no longer have a C64 to test my code on.
SAS has the datalines construct, which is used for embedding an external data file inside the source program. In the following program there are 5 data lines (the terminator is a semicolon on a line by itself):
data output;
input name $ age;
datalines;
Jim 14
Sarah 11
Hannah 9
Ben 9
Timothy 4
;
run;