get_num_processes() takes no keyword arguments (CSV <-> CASSANDRA)

I want to export a Cassandra table to a CSV file, but the COPY command fails:

cqlsh:marvel> SELECT * FROM personajes;

 name       | skills
------------+--------
 Iron Man   | Tech
 Spider Man | Lab

cqlsh:marvel> COPY personajes (name, skills) TO 'temp.csv';
get_num_processes() takes no keyword arguments
Tested in:
[cqlsh 5.0.1 | Cassandra 2.1.14 | CQL spec 3.2.1 | Native protocol v3]
[cqlsh 5.0.1 | Cassandra 3.0.5 | CQL spec 3.4.0 | Native protocol v4]
Thank you

Delete pylib/cqlshlib/copyutil.so and copyutil.c (if you have it - I didn't have the .c file).
The exact path depends on your OS; on Ubuntu 14.04, copyutil.so is a symlink inside /usr/lib/pymodules/python2.7/cqlshlib.
Just delete or rename it and you should be good to go. It worked for me, at least.
For reference: this is indeed a bug, and the same one as https://issues.apache.org/jira/browse/CASSANDRA-11574, which I learned after opening https://issues.apache.org/jira/browse/CASSANDRA-11816. It turned out the fix version in the first ticket was wrong for Cassandra 2.2.
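A minimal shell sketch of that workaround. The helper name is mine, and renaming (rather than deleting) keeps the files restorable; the path shown is the Ubuntu 14.04 one mentioned above, other OSes will differ:

```shell
# Rename any compiled copyutil artifacts found under the given cqlshlib
# directories, so cqlsh can fall back to the pure-Python copyutil module.
disable_copyutil() {
  for d in "$@"; do
    [ -d "$d" ] || continue
    find "$d" \( -name 'copyutil.so' -o -name 'copyutil.c' \) |
    while IFS= read -r f; do
      mv "$f" "$f.bak"   # rename instead of delete, so it can be restored
    done
  done
}

# Ubuntu 14.04 default location; adjust for your install
disable_copyutil /usr/lib/pymodules/python2.7/cqlshlib
```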

Insert Object Array or CSV file content into Kusto Table

Unable to insert data from an object array or a CSV file into a Kusto table.
My goal is to build a pipeline in Azure DevOps which reads data using PowerShell and writes it into a Kusto table.
I was able to write the data I read from PowerShell to an object array or a CSV file, but I cannot figure out how to insert this data into a Kusto table.
Could anyone suggest the best way to write the data into Kusto?
One option would be to write your CSV payload to blob storage, then ingest that blob into your target table, by:
using a "queued ingestion" client in one of the client libraries: https://learn.microsoft.com/en-us/azure/kusto/api/
(note that the .NET ingestion client library also provides IngestFromStream and IngestFromDataReader methods, which handle writing the data to intermediate blob storage so that you don't have to)
or by
issuing an .ingest command: https://learn.microsoft.com/en-us/azure/kusto/management/data-ingestion/ingest-from-storage (though "direct ingestion" is less recommended for production volumes).
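For example, a sketch of ingesting a blob with an .ingest command; the storage account, container, file name, and SAS token here are placeholders, and sample_table is the table created in the inline example below:

.ingest into table sample_table (
    h'https://mystorageaccount.blob.core.windows.net/mycontainer/data.csv?<SAS-token>'
) with (format='csv')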
Another option (also not recommended for production volumes) would be the .ingest inline (AKA "ingest push") command: https://learn.microsoft.com/en-us/azure/kusto/management/data-ingestion/ingest-inline
for example:
.create table sample_table (a:string, b:int, c:datetime)
.ingest inline into table sample_table <|
hello,17,2019-08-16 00:52:07
world,71,2019-08-16 00:52:08
"isn't, this neat?",-13,2019-08-16 00:52:09
which will append the above records to the table:
| a | b | c |
|-------------------|------|-----------------------------|
| hello | 17 | 2019-08-16 00:52:07.0000000 |
| world | 71 | 2019-08-16 00:52:08.0000000 |
| isn't, this neat? | -13 | 2019-08-16 00:52:09.0000000 |

Getting full binary control flow graph from Radare2

I want to get a full control flow graph of a binary (malware) using radare2.
I followed this post from another question on SO. I wanted to ask whether, instead of ag, there is a command that gives the control flow graph of the whole binary rather than only the graph of a single function.
First of all, make sure to install radare2 from git repository and use the newest version:
$ git clone https://github.com/radare/radare2.git
$ cd radare2
$ ./sys/install.sh
After you've downloaded and installed radare2, open your binary and perform analysis on it using the aaa command:
$ r2 /bin/ls
-- We fix bugs while you sleep.
[0x004049a0]> aaa
[x] Analyze all flags starting with sym. and entry0 (aa)
[x] Analyze function calls (aac)
[x] Analyze len bytes of instructions for references (aar)
[x] Check for objc references
[x] Check for vtables
[x] Type matching analysis for all functions (aaft)
[x] Propagate noreturn information
[x] Use -AA or aaaa to perform additional experimental analysis.
Adding ? after almost any command in radare2 prints its subcommands. For example, you know that the ag command and its subcommands produce the visual graphs, so by appending ? to ag you can discover its subcommands:
[0x00000000]> ag?
Usage: ag<graphtype><format> [addr]
Graph commands:
| aga[format] Data references graph
| agA[format] Global data references graph
| agc[format] Function callgraph
| agC[format] Global callgraph
| agd[format] [fcn addr] Diff graph
... <truncated> ...
Output formats:
| <blank> Ascii art
| * r2 commands
| d Graphviz dot
| g Graph Modelling Language (gml)
| j json ('J' for formatted disassembly)
| k SDB key-value
| t Tiny ascii art
| v Interactive ascii art
| w [path] Write to path or display graph image (see graph.gv.format and graph.web)
You're searching for the agCd command which will output a full call-graph of the program in dot format.
[0x004049a0]> agCd > output.dot
The dot utility is part of the Graphviz software, which can be installed with sudo apt-get install graphviz.
You can view the output in any offline dot viewer, paste it into an online Graphviz viewer, or even convert the dot file to a PNG:
$ r2 /bin/ls
[0x004049a0]> aa
[x] Analyze all flags starting with sym. and entry0 (aa)
[0x004049a0]> agCd > output.dot
[0x004049a0]> !!dot -Tpng -o callgraph.png output.dot

SQLite extension binaries

sqlite.org provides windows binaries for the core functions. Are there any pre-built DLLs for the various standard extensions - free text search, virtual tables and JSON in particular? I notice that the command shell as distributed does not support the table-valued JSON functions.
This seems a very obvious request, given the ready availability of binaries for SQLite in other respects, but I can't find anywhere online hosting pre-built extension libraries.
The command-line shell, as distributed, does support the table-valued JSON functions:
sqlite> select * from json_tree('["hello",["world"]]');
key         value                type        atom        id          parent      fullkey     path
----------  -------------------  ----------  ----------  ----------  ----------  ----------  ----------
            ["hello",["world"]]  array                   0                       $           $
0           hello                text        hello       1           0           $[0]        $
1           ["world"]            array                   2           0           $[1]        $
0           world                text        world       3           2           $[1][0]     $[1]
Anyway, the SQLite library is meant to be embedded into your application, i.e., the sqlite3.c file (and any needed extensions not already included in the amalgamation) is compiled directly together with your other sources.
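As a sketch, assuming you have downloaded the amalgamation sources (sqlite3.c, shell.c) from sqlite.org, building your own shell with extra extensions enabled looks roughly like this (the flags are the documented SQLITE_ENABLE_* compile-time options):

# build the sqlite3 shell from the amalgamation with FTS5, JSON1
# and R*Tree enabled at compile time
gcc -O2 \
    -DSQLITE_ENABLE_FTS5 \
    -DSQLITE_ENABLE_JSON1 \
    -DSQLITE_ENABLE_RTREE \
    shell.c sqlite3.c -o sqlite3 -lpthread -ldl -lm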

How to set up system properties per user?

We're upgrading to Play 2.3.5, and it's the first time I've used the Activator.
If I run the Activator headless, I can still pass in a bunch of command-line flags, but with the new UI I don't know how to pass in overrides for my developer setup (which differs from that of other developers). I don't see a way to set unique Java properties in a meta Activator config that we would exclude from version control:
-Dlogger.file=./conf/my-special-logger.xml -Dprop1=special -Dconfig.file=./conf/my-special-file.conf
I can symlink my-special-file.conf to application.conf and get most of what I want. It's not an ideal solution, though, and if I leave the symlink in place during bundling, the packager blows up:
[error] (*:stage) Duplicate mappings:
[error] ./my-project/target/universal/stage/conf/my-special-file.conf
[error] from
[error] ./my-project/conf/application.conf
[error] ./my-project/conf/my-special-file.conf
Typesafe Activator uses ~/.activator/activatorconfig.txt as a means of setting Java system properties.
With the following ~/.activator/activatorconfig.txt:
-Dhello=world
I could query for the hello property in the shell:
[play-new-app] $ eval sys.props("hello")
[info] ans: String = world
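For example, per-user overrides can be written into that file from the shell; this sketch reuses the flags from the question, whose values are just examples:

```shell
# write per-user JVM system-property overrides into the Activator
# config file, which Activator reads at startup
mkdir -p ~/.activator
cat > ~/.activator/activatorconfig.txt <<'EOF'
-Dlogger.file=./conf/my-special-logger.xml
-Dprop1=special
-Dconfig.file=./conf/my-special-file.conf
EOF
```

Keeping this file out of version control (it lives in the home directory, not the project) is what makes it suitable for developer-specific setups.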
As a reference - this is for Play 2.3.5:
[play-new-app] $ dependencies
...
+------------------------------------------------------------+------------------------------------------------------------+--------------------------------------------+
| Module | Required by | Note |
+------------------------------------------------------------+------------------------------------------------------------+--------------------------------------------+
...
+------------------------------------------------------------+------------------------------------------------------------+--------------------------------------------+
| com.typesafe.play:play_2.11:2.3.5 | com.typesafe.play:play-ws_2.11:2.3.5 | As play_2.11-2.3.5.jar |
| | com.typesafe.play:play-jdbc_2.11:2.3.5 | |
| | play-new-app:play-new-app_2.11:1.0-SNAPSHOT | |
| | com.typesafe.play:play-cache_2.11:2.3.5 | |
+------------------------------------------------------------+------------------------------------------------------------+--------------------------------------------+

Is there something like csv or json but more graphical and better to read for humans?

For example, CSV and JSON are human- and machine-readable text formats.
Now I am looking for something similar but more graphical for representing tabular data.
Instead of:
1,"machines",14.91
3,"mammals",1.92
50,"fruit",4.239
789,"funghi",29.3
which is CSV style or
[
[1,"machines",14.91],
[3,"mammals",1.92],
[50,"fruit",4.239],
[789,"funghi",29.3]
]
which is JSON style (I am not going to give an XML example). Something like the following is what I have in mind:
1 | "machines"| 14.91
3 | "mammals" | 1.92
50 | "fruit" | 4.239
789 | "funghi" | 29.3
There should be reader and writer libraries for it in several languages, and it should be some kind of standard. Of course I could roll my own, but if a standard exists I'd rather go with that.
I have seen similar things as part of wiki or markup languages, but it should serve as a data-definition format that humans can easily edit and that software libraries can read and write.
That's not exactly what markup and wiki languages are for. What I am looking for belongs more to the CSV, JSON and XML family.
I would check out Textile. It has a table syntax almost exactly like what you described.
For example, the table in your example would be constructed like this:
| 1 | machines | 14.91 |
| 3 | mammals | 1.92 |
| 50 | fruit | 4.239 |
| 789 | funghi | 29.3 |
An alternative (albeit not optimized for tabular data) is YAML, which is nice for JSON-ish data.
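For instance, the records from the question could be written as a YAML sequence of sequences; this is a sketch using YAML flow style to stay compact:

- [1, machines, 14.91]
- [3, mammals, 1.92]
- [50, fruit, 4.239]
- [789, funghi, 29.3]

YAML parsers exist for most languages, which covers the reader/writer-library requirement, though the column alignment is up to whoever edits the file.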
Alternatively, you could also look at CSV editors, e.g.:
CsvEd
CsvEasy
ReCsvEditor
Their whole purpose is to display and update CSV data in a more readable format. The ReCsvEditor will display both XML and CSV files in a similar format.
Google "CSV editor" and you will find plenty.