SQLite extension binaries - json

sqlite.org provides Windows binaries for the core library. Are there any pre-built DLLs for the various standard extensions: full-text search, virtual tables, and JSON in particular? I notice that the command-line shell as distributed does not support the table-valued JSON functions.
This seems a very obvious request, given the ready availability of binaries for SQLite in other respects, but I can't find anywhere online hosting pre-built extension libraries.

The command-line shell, as distributed, does support the table-valued JSON functions:
sqlite> select * from json_tree('["hello",["world"]]');
key         value                type        atom        id          parent      fullkey     path
----------  -------------------  ----------  ----------  ----------  ----------  ----------  ----------
            ["hello",["world"]]  array                   0                       $           $
0           hello                text        hello       1           0           $[0]        $
1           ["world"]            array                   2           0           $[1]        $
0           world                text        world       3           2           $[1][0]     $[1]
Anyway, the SQLite library is meant to be embedded into your application, i.e., the sqlite3.c file (and any needed extensions not already included in the amalgamation) is to be directly compiled together with your other sources.
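For illustration, here is a hedged sketch of both routes, assuming a Unix-like toolchain and the standard amalgamation files from sqlite.org (myapp.c is a hypothetical application source; ext/misc/csv.c is the CSV virtual-table example shipped in the SQLite source tree):
# Compile the amalgamation straight into the application. JSON is built in
# since SQLite 3.38; older versions also needed -DSQLITE_ENABLE_JSON1.
$ cc -O2 -DSQLITE_ENABLE_FTS5 myapp.c sqlite3.c -o myapp -lpthread -ldl -lm
# Or build a run-time loadable extension and load it from the shell:
$ gcc -fPIC -shared ext/misc/csv.c -o csv.so
sqlite> .load ./csv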

Related

What's the unique id of a Cubin (cuda binary)

For common ELF binaries, one can use readelf -n <binary file> to get the unique build ID, as follows:
$ readelf -n matrix

Displaying notes found in: .note.gnu.build-id
  Owner                 Data size       Description
  GNU                   0x00000014      NT_GNU_BUILD_ID (unique build ID bitstring)
    Build ID: 30935f4bf2ea2d76a53538b28ed7ffbc51437e3c

Displaying notes found in: .note.ABI-tag
  Owner                 Data size       Description
  GNU                   0x00000010      NT_GNU_ABI_TAG (ABI version tag)
    OS: Linux, ABI: 3.2.0
Yet for CUDA binaries, aka cubins, readelf -n does not work. So how does one get the unique ID of a cubin? Or, what is the unique ID of a cubin?
For common ELF binaries, one can use readelf -n to get the unique build ID
That isn't strictly true. What you are referring to as the "build id" is the .note.gnu.build-id section of the ELF file, which is populated by a specific feature of the GNU linker ld: when enabled, it creates some identifying bits for that notes field from the contents of the ELF sections. The resulting ID value isn't unique, in that two files containing identical code will hash to the same ID, as far as I am aware. If you use another linker, that field isn't populated (the LLD linker clang uses certainly didn't, last time I checked).
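To illustrate (GNU toolchain assumed; --build-id is a standard GNU ld option), the note is only present when the linker is asked to emit it:
# Link with an explicit build id, then read it back:
$ gcc -Wl,--build-id=sha1 -o matrix matrix.c
$ readelf -n matrix | grep 'Build ID'
# Link without one, and the .note.gnu.build-id section is simply absent:
$ gcc -Wl,--build-id=none -o matrix matrix.c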
So how does one get the unique ID of a cubin? Or, what is the unique ID of a cubin?
There isn't one. The NVIDIA ELF files don't contain a .note section, and they are not linked with the GNU linker; they are linked with nvlink, which doesn't have the same ID functionality, to the best of my knowledge.
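If all you need is a stable identifier for a given cubin file, a content hash gives you the same practical property as a build ID. This is a workaround, not an NVIDIA-provided mechanism (kernel.cubin is a placeholder name):
# Any cryptographic hash of the file contents identifies that exact build output:
$ sha256sum kernel.cubin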

Octave parallel package: User-defined function call issues

I have trouble calling a user-defined function with pararrayfun (likewise for parcellfun). When I execute the following code:
pkg load parallel
function retval = mul(x,y)
  retval = x*y;
endfunction
vector_x = 1:2^3;
vector_y = 1:2^3;
vector_z = pararrayfun(nproc, @(x,y) mul(x,y), vector_x, vector_y)
vector_z = pararrayfun(nproc, @(x,y) x*y, vector_x, vector_y)
I get the following output:
vector_z =

  -1  -1  -1  -1  -1  -1  -1  -1

vector_z =

   1    4    9   16   25   36   49   64
That is, the call to the user-defined function does not seem to work, whereas the same as an anonymous function is working.
The machine is x86_64 with Debian bullseye and 5.10.0-1-amd64 kernel. Octave's version is 6.1.1~hg.2020.12.27-1. The pkg list command gives me:
Package Name  | Version | Installation directory
--------------+---------+-----------------------
   dataframe  |   1.2.0 | /usr/share/octave/packages/dataframe-1.2.0
    parallel *|   4.0.0 | /usr/share/octave/packages/parallel-4.0.0
      struct *|  1.0.16 | /usr/share/octave/packages/struct-1.0.16
The funny thing is that the same code works flawlessly on armv7l with Debian buster and a 4.14.150-odroidxu4 kernel. That is, the call to the user-defined function and the anonymous function both produce the output:
parcellfun: 8/8 jobs done
vector_z =

   1    4    9   16   25   36   49   64

parcellfun: 8/8 jobs done
vector_z =

   1    4    9   16   25   36   49   64
On that machine Octave's version is 4.4.1 and pkg list gives:
Package Name  | Version | Installation directory
--------------+---------+-----------------------
   dataframe  |   1.2.0 | /usr/share/octave/packages/dataframe-1.2.0
    parallel *|   3.1.3 | /usr/share/octave/packages/parallel-3.1.3
      struct *|  1.0.15 | /usr/share/octave/packages/struct-1.0.15
What is wrong and how can I fix this behavior?
This is probably a bug, but do note that the new version of parallel has introduced a few limitations as per its documentation (also see the latest release news) which may relate to what's happening here.
Having said that, I want to clarify this sentence:
the call to the user-defined function does not seem to work, whereas the same as an anonymous function is working.
That's not what's happening. You're passing an anonymous function in both cases. It's just that the first calls mul inside, and the second calls mtimes.
As for your error (bug?), this may have something to do with mul being a 'command-line' function. It's not clear from the documentation whether command-line functions are an intended limitation and this is simply an oversight in the docs, or whether the mistreatment of command-line functions is a genuine bug. I think if you put it in its own file it should work fine. (And equally, if you do, it's worth passing it as a handle directly rather than wrapping it inside another anonymous function.)
Having said that, I think the -1's you see are basically "error returns" from inside pararrayfun's guts. The reason for this is the following: if instead of creating mul as a command-line function, you make it an anonymous function:
mul = @(x,y) x * y
Observe what the three calls below return:
x = pararrayfun( nproc, @(x,y) mul(x,y), vector_x, vector_y )  # now it works as expected
x = pararrayfun( nproc, mul, vector_x, vector_y )              # same: mul is a valid handle expecting two inputs
x = pararrayfun( nproc, @mul, vector_x, vector_y )             # x = -1 -1 -1 -1 -1 -1 -1 -1
If you had tried the last command using the normal arrayfun, you would have seen an error relating to the fact that you accidentally passed @mul instead of mul, when mul is already a proper handle. In pararrayfun, it just does the calculation, and presumably -1 was the return value from an internal error.
I don't know exactly why a command-line function fails, but presumably it has something to do with the fact that pararrayfun creates separate Octave instances under the hood, which need access to all function definitions, and perhaps command-line functions cannot be transferred / compiled in the new instances as easily as in the parent instance, because of the way they are created / compiled in the current session.
In any case, I think you'll solve your problem if instead of a command-line function definition, you create an external function or (if dealing with simple enough functions) a handle to an anonymous function.
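As a hedged sketch of the external-function route (names match the question; the heredoc is just one convenient way to create the file):
$ cat > mul.m <<'EOF'
function retval = mul (x, y)
  retval = x * y;
endfunction
EOF
$ octave -q --eval 'pkg load parallel;
    vector_x = 1:2^3; vector_y = 1:2^3;
    vector_z = pararrayfun (nproc, @mul, vector_x, vector_y)'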
However, I would still submit a bug report to the Octave bug tracker to help the project :)

Getting full binary control flow graph from Radare2

I want to get a full control flow graph of a binary (malware) using radare2.
I followed this post from another question on SO. I wanted to ask whether, instead of ag, there is another command that gives the control flow graph of the whole binary and not only the graph of one function.
First of all, make sure to install radare2 from the git repository so that you're using the newest version:
$ git clone https://github.com/radare/radare2.git
$ cd radare2
$ ./sys/install.sh
After you've downloaded and installed radare2, open your binary and perform analysis on it using the aaa command:
$ r2 /bin/ls
-- We fix bugs while you sleep.
[0x004049a0]> aaa
[x] Analyze all flags starting with sym. and entry0 (aa)
[x] Analyze function calls (aac)
[x] Analyze len bytes of instructions for references (aar)
[x] Check for objc references
[x] Check for vtables
[x] Type matching analysis for all functions (aaft)
[x] Propagate noreturn information
[x] Use -AA or aaaa to perform additional experimental analysis.
Appending ? to almost any command in radare2 will print its subcommands. For example, you know that the ag command and its subcommands help you output visual graphs, so by adding ? to ag you can discover its subcommands:
[0x00000000]> ag?
Usage: ag<graphtype><format> [addr]
Graph commands:
| aga[format]             Data references graph
| agA[format]             Global data references graph
| agc[format]             Function callgraph
| agC[format]             Global callgraph
| agd[format] [fcn addr]  Diff graph
... <truncated> ...
Output formats:
| <blank>   Ascii art
| *         r2 commands
| d         Graphviz dot
| g         Graph Modelling Language (gml)
| j         json ('J' for formatted disassembly)
| k         SDB key-value
| t         Tiny ascii art
| v         Interactive ascii art
| w [path]  Write to path or display graph image (see graph.gv.format and graph.web)
You're searching for the agCd command, which will output a full call graph of the program in dot format.
[0x004049a0]> agCd > output.dot
The dot utility is part of the Graphviz software, which can be installed using sudo apt-get install graphviz.
You can view the output in any offline dot viewer, paste it into an online Graphviz viewer, or even convert the dot file to a PNG:
$ r2 /bin/ls
[0x004049a0]> aa
[x] Analyze all flags starting with sym. and entry0 (aa)
[0x004049a0]> agCd > output.dot
[0x004049a0]> !!dot -Tpng -o callgraph.png output.dot
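If you prefer a vector format, the same pipeline works with a different -T flag (dot also supports svg, pdf, and other renderers):
$ dot -Tsvg -o callgraph.svg output.dot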

get_num_processes() takes no keyword arguments (CSV <-> CASSANDRA)

I want to export a Cassandra DB to a CSV file, but:
cqlsh:marvel> SELECT * FROM personajes ;
 name       | skills
------------+--------
   Iron Man |   Tech
 Spider Man |    Lab
cqlsh:marvel> COPY personajes (name, skills) TO 'temp.csv';
get_num_processes() takes no keyword arguments
Tested in:
[cqlsh 5.0.1 | Cassandra 2.1.14 | CQL spec 3.2.1 | Native protocol v3]
[cqlsh 5.0.1 | Cassandra 3.0.5 | CQL spec 3.4.0 | Native protocol v4]
Thank you
Delete pylib/cqlshlib/copyutil.so and copyutil.c (if you have it - I didn't).
The exact path depends on your OS I guess. On Ubuntu 14.04 copyutil.so would be a symlink inside /usr/lib/pymodules/python2.7/cqlshlib.
Just delete or rename it and you should be good to go. Worked for me at least.
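For example, on the Ubuntu layout mentioned above, something along these lines should work (paths vary by OS and packaging, so locate the file first):
# Find the compiled copyutil module, then move it out of the way:
$ find /usr/lib -name 'copyutil.*' -path '*cqlshlib*' 2>/dev/null
$ sudo mv /usr/lib/pymodules/python2.7/cqlshlib/copyutil.so \
          /usr/lib/pymodules/python2.7/cqlshlib/copyutil.so.bak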
For reference: this is indeed a bug, and the same bug as https://issues.apache.org/jira/browse/CASSANDRA-11574, which I learned after opening https://issues.apache.org/jira/browse/CASSANDRA-11816. It turned out the fix version in the first ticket was wrong for Cassandra 2.2.

Use LibreOffice to convert HTML to PDF from Mac command in terminal?

I'm trying to convert an HTML file to a PDF by using the Mac terminal.
I found a similar post and used the code it provided, but I kept getting nothing: I did not find the output file anywhere when I issued this command:
./soffice --headless --convert-to pdf --outdir /home/user ~/Downloads/*.odt
I'm using Mac OS X 10.8.5.
Can someone show me a terminal command line that I can use to convert HTML to PDF?
I'm trying to convert an HTML file to a PDF by using the Mac terminal.
OK, here is an alternative way to convert (X)HTML to PDF on a Mac command line. It does not use LibreOffice at all and should work on all Macs.
This method (ab)uses a filter from the Mac's print subsystem, called xhtmltopdf. This filter is usually not meant to be used by end-users but only by the CUPS printing system.
However, if you know about it, know where to find it and know how to run it, there is no problem with doing so:
The first thing to know is that it is not in any desktop user's $PATH. It is in /usr/libexec/cups/filter/xhtmltopdf.
The second thing to know is that it requires a specific syntax and order of parameters to run, otherwise it won't. Called with no parameters at all (or with the wrong number of parameters), it will emit a small usage hint:
$ /usr/libexec/cups/filter/xhtmltopdf
Usage: xhtmltopdf job-id user title copies options [file]
Most of these parameter names show that the tool is clearly related to printing. The command requires at least 5 parameters in total, plus an optional 6th. If only 5 parameters are given, it reads its input from <stdin>; otherwise it reads from the file named by the 6th parameter. It always emits its output to <stdout>.
The only CLI params which are interesting to us are number 5 (the "options") and the (optional) number 6 (the input file name).
When we run it on the command line, we have to supply 5 dummy or empty parameters first, before we can put the input file's name. We also have to redirect the output to a PDF file.
So, let's try it:
/usr/libexec/cups/filter/xhtmltopdf "" "" "" "" "" my.html > my.pdf
Or, alternatively (this is faster to type and easier to check for completeness, using 5 dummy parameters instead of 5 empty ones):
/usr/libexec/cups/filter/xhtmltopdf 1 2 3 4 5 my.html > my.pdf
While we are at it, we could try to apply some other CUPS print subsystem filters to the output: /usr/libexec/cups/filter/cgpdftopdf looks like one that could be interesting. This additional filter expects the same number and order of parameters, like all CUPS filters.
So this should work:
/usr/libexec/cups/filter/xhtmltopdf 1 2 3 4 5 my.html \
| /usr/libexec/cups/filter/cgpdftopdf 1 2 3 4 "" \
> my.pdf
However, piping the output of xhtmltopdf into cgpdftopdf is only interesting if we try to apply some "print options". That is, we need to come up with some settings in parameter no. 5 which achieve something.
Looking up the CUPS command line options on the CUPS web page suggests a few candidates:
-o number-up=4
-o page-border=double-thick
-o number-up-layout=tblr
do look like they could be applied while doing a PDF-to-PDF transformation. Let's try:
/usr/libexec/cups/filter/xhtmltopdf 1 2 3 4 5 my.html \
| /usr/libexec/cups/filter/cgpdftopdf 1 2 3 4 \
  "number-up=4 page-border=double-thick number-up-layout=tblr" \
> my.pdf
I achieved good results with this method using as input two HTML files which were identical apart from one line: the line referencing the CSS file to be used for rendering the HTML.
The resulting PDFs showed that the xhtmltopdf filter is able to (at least partially) take CSS settings into account when it converts its input to PDF.
Starting with LibreOffice 3.6.0.1, you would need unoconv on the system to convert documents.
Using unoconv with MacOS X
LibreOffice 3.6.0.1 or later is required to use unoconv under Mac OS X. This is the first version distributed with an internal Python script that works. No version of OpenOffice for Mac OS X (3.4 is the current version) works, because the necessary internal files are not included inside the application.
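A typical invocation, assuming unoconv and a suitable LibreOffice are installed, would then be:
# Convert an HTML file to PDF; the output lands next to the input by default:
$ unoconv -f pdf my.html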
I just had the same problem, but I found this LibreOffice help post. It seems that headless mode won't work if you've got LibreOffice (the usual GUI version) running too. The fix is to add an -env option, e.g.:
libreoffice "-env:UserInstallation=file:///tmp/LibO_Conversion" \
--headless \
--invisible \
--convert-to csv file.xls
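The same pattern, adapted to the original HTML-to-PDF task, would look like this (an untested sketch; soffice must be on your PATH or invoked via its full path inside LibreOffice.app):
$ soffice "-env:UserInstallation=file:///tmp/LibO_Conversion" \
    --headless --convert-to pdf --outdir ~/Downloads ~/Downloads/my.html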