Update automatic attributes in Opscode Chef (serialized_object) - json

I had a couple of nodes on my Chef server that hit a problem while bootstrapping and were missing the FQDN and domain automatic attributes, because of which they were not indexed by Solr and not searchable with knife. I could not re-bootstrap these machines, but I wanted to fix this, and it took me a while to work out how. I am posting the procedure here in the hope that it saves others some time.

Automatic attributes are stored by Chef in its database and are not editable with knife (see Chef Attributes Overview). They live in a column named serialized_object in the nodes table, stored as hex, and the decoded value is in fact a gzipped JSON string.
To obtain the JSON string:
Use a PostgreSQL client to connect to the Chef PostgreSQL database (you can find the credentials on the Chef server in /etc/chef-server/chef-server-secrets.json)
Save the contents of serialized_object for your node to a file, say serialized_object.hex (it should look something like '\x1f8b08000...')
Run: xxd -p -r serialized_object.hex > serialized_object.gz
Run: gunzip serialized_object.gz
Now the file serialized_object contains the attributes in JSON format, which you can edit. After editing, you can store its contents back in the Chef server as follows:
Run: gzip serialized_object
Run: xxd -p serialized_object.gz > serialized_object.hex
Now use the PostgreSQL client to insert the hex data (be sure to remove the leading backslash and x from the hex string) with the following query:
update nodes set serialized_object = decode('1f8b08000...','hex') where name = '<node name>';
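Putting the steps together, here is a minimal end-to-end sketch. It assumes the default opscode_chef database and a node named node1.example.com — adjust both to your setup:

# 1. Dump the serialized_object as plain hex (encode() avoids the \x prefix)
psql -U opscode_chef -c "copy (select encode(serialized_object, 'hex') from nodes where name = 'node1.example.com') to stdout" > serialized_object.hex

# 2. Decode and decompress, then edit the JSON in the file serialized_object
xxd -p -r serialized_object.hex > serialized_object.gz
gunzip serialized_object.gz

# 3. Recompress and re-encode as a single hex line
gzip serialized_object
xxd -p serialized_object.gz | tr -d '\n' > serialized_object.hex

# 4. Write it back
psql -U opscode_chef -c "update nodes set serialized_object = decode('$(cat serialized_object.hex)', 'hex') where name = 'node1.example.com'"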
Hope this helps someone :)

Related

How to run a cypher script file from Terminal with the cypher-shell neo4j command?

I have a cypher script file and I would like to run it directly.
All the answers I could find on SO, to the best of my knowledge, use the command neo4j-shell, which in my version (Neo4j server 3.5.5) seems to be deprecated and replaced by the command cypher-shell.
Running sudo ./neo4j-community-3.5.5/bin/cypher-shell --help gave me the following usage information.
usage: cypher-shell [-h] [-a ADDRESS] [-u USERNAME] [-p PASSWORD]
[--encryption {true,false}]
[--format {auto,verbose,plain}] [--debug] [--non-interactive] [--sample-rows SAMPLE-ROWS]
[--wrap {true,false}] [-v] [--driver-version] [--fail-fast | --fail-at-end] [cypher]
A command line shell where you can execute Cypher against an
instance of Neo4j. By default the shell is interactive but you can
use it for scripting by passing cypher directly on the command
line or by piping a file with cypher statements (requires Powershell
on Windows).
My file, which comes from the book "Graph Algorithms" and tries to create a graph from CSV files, is the following:
WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data" AS base
WITH base + "transport-nodes.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MERGE (place:Place {id:row.id})
SET place.latitude = toFloat(row.latitude),
place.longitude = toFloat(row.latitude),
place.population = toInteger(row.population)
WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data/" AS base
WITH base + "transport-relationships.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MATCH (origin:Place {id: row.src})
MATCH (destination:Place {id: row.dst})
MERGE (origin)-[:EROAD {distance: toInteger(row.cost)}]->(destination)
When I try to pass the file directly with the command:
sudo ./neo4j-community-3.5.5/bin/cypher-shell neo_4.cypher
it first asks for a username and password, but after typing the correct password (a wrong password results in the error "The client is unauthorized due to authentication failure.") I get the error:
Invalid input 'n': expected <init> (line 1, column 1 (offset: 0))
"neo_4.cypher"
^
When I try piping with the command:
sudo cat neo_4.cypher | sudo ./neo4j-community-3.5.5/bin/cypher-shell -u usr -p 'pwd'
no output is generated and no graph either.
How to run a cypher script file with the neo4j command cypher-shell?
Use cypher-shell -f yourscriptname. Check with --help for more description.
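For example, assuming your cypher-shell build supports the -f flag (the help output quoted above does not list it, so a newer release may be needed):

sudo ./neo4j-community-3.5.5/bin/cypher-shell -u usr -p 'pwd' -f neo_4.cypher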
I think the key is here:
cypher-shell --help
... Stuff deleted
positional arguments:
  cypher    an optional string of cypher to execute and then exit
This means that the parameter is actual Cypher code, not a file name. Thus, this works:
GMc#linux-ihon:~> cypher-shell "match(n) return n;"
username: neo4j
password: ****
+-----------------------------+
| n |
+-----------------------------+
| (:Job {jobName: "Job01"}) |
| (:Job {jobName: "Job02"}) |
But this doesn't, because the text "neo_4.cypher" isn't a valid Cypher query:
cypher-shell neo_4.cypher
The help also says:
example of piping a file:
cat some-cypher.txt | cypher-shell
So:
cat neo_4.cypher | cypher-shell
should work. Possibly your problem is all of the sudos, specifically the cat ... | sudo cypher-shell. It is possible that sudo is shielding cypher-shell from arbitrary input (although it doesn't seem to do so on my system).
If you really need to use sudo to run cypher, try using the following:
sudo cypher-shell arguments_as_needed < neo_4.cypher
Oh, also, your script doesn't have a return, so it probably won't display any data, but you should still see the summary reports of records loaded.
Perhaps try something simpler first such as a simple match ... return ... query in your script.
Oh, and don't forget to terminate the cypher query with a semi-colon!
The problem is in the cypher file: each statement should end with a semicolon (;). I still need sudo to run the program.
The file taken from the book actually seems to contain other errors as well.
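For reference, a corrected version of the script would look like the following: each statement is terminated with a semicolon, the first base URL gets its missing trailing slash, and place.longitude presumably should be read from row.longitude rather than row.latitude:

WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data/" AS base
WITH base + "transport-nodes.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MERGE (place:Place {id: row.id})
SET place.latitude = toFloat(row.latitude),
    place.longitude = toFloat(row.longitude),
    place.population = toInteger(row.population);

WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data/" AS base
WITH base + "transport-relationships.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MATCH (origin:Place {id: row.src})
MATCH (destination:Place {id: row.dst})
MERGE (origin)-[:EROAD {distance: toInteger(row.cost)}]->(destination);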

How to import/load/run mysql file using golang?

I'm trying to run/load an SQL file into a MySQL database using this Go statement, but it is not working:
exec.Command("mysql", "-u", "{username}", "-p{db password}", "{db name}", "<", file abs path )
But when I use the following command in the Windows command prompt, it works perfectly.
mysql -u {username} -p{db password} {db name} < {file abs path}
So what is the problem?
As others have answered, you can't use the < redirection operator because exec doesn't use the shell.
But you don't have to redirect input to read an SQL file. You can pass arguments to the MySQL client to use its source command.
exec.Command("mysql", "-u", "{username}", "-p{db password}", "{db name}",
"-e", "source {file abs path}" )
The source command is a builtin of the MySQL client. See https://dev.mysql.com/doc/refman/5.7/en/mysql-commands.html
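In shell terms, that invocation is equivalent to the following (same placeholders as above):

mysql -u {username} -p{db password} {db name} -e "source {file abs path}"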
Go's exec.Command runs the first argument as a program with the rest of the arguments as parameters. The '<' is interpreted as a literal argument.
e.g. exec.Command("cat", "<", "abc") is the following command in bash: cat \< abc.
To do what you want, you have two options.
Run (ba)sh with the command as an argument: exec.Command("bash", "-c", "mysql ... < full/path")
Pipe the content of the file in manually; see https://stackoverflow.com/a/36383984/8751302 for details, and the sketch below.
The problem with the bash version is that it is not portable across operating systems; it won't work on Windows.
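A minimal sketch of the second option, attaching the file directly to the command's stdin (the path and credentials are placeholders):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Open the SQL file; wiring it to Stdin replaces the shell's "<" redirection.
	f, err := os.Open("/abs/path/to/file.sql")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	cmd := exec.Command("mysql", "-u", "username", "-ppassword", "dbname")
	cmd.Stdin = f

	// CombinedOutput captures anything mysql prints, which helps with debugging.
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("mysql failed: %v\n%s", err, out)
	}
}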
Go's os.exec package does not use the shell and does not support redirection:
Unlike the "system" library call from C and other languages, the os/exec package intentionally does not invoke the system shell and does not expand any glob patterns or handle other expansions, pipelines, or redirections typically done by shells.
You can call the shell explicitly to pass arguments to it:
cmd := exec.Command("/bin/sh", "-c", yourBashCommand)
Depending on what you're doing, it may be helpful to write a short bash script and call it from Go.

How to attach volume to pod's post start life cycle hook?

The use case I am trying out is a way to initialize the Postgres database after it starts up. I saw the post-start hooks in the OpenShift pod lifecycle. I can't pass the SQL statements using a here-document or on the command line (the Docker command fails due to a maximum-length issue).
So I am looking for an option to save the SQL statements in a file via a ConfigMap and attach it to the post-hook container before it starts, so that the psql command can execute it. I couldn't see a way to attach the volume from the DeploymentConfig in the official document. Is there any way I can do it?
Document I referred to: openshift-doc
I found a workaround to pass the long SQL statements to the post lifecycle pods.
Set the SQL statements in a DeploymentConfig ENV variable. These ENV variables are also accessible inside the lifecycle pods, so we can simply run the command below:
post:
  failurePolicy: Abort
  execNewPod:
    command:
      - /bin/bash
      - '-c'
      - >-
        echo $INIT_SQL_STATEMENTS | psql "sslmode=allow
        host=postgres user=postgres password=postgres"
    containerName: postgres
.....
env:
  - name: POSTGRESQL_ADMIN_PASSWORD
    value: postgres
  - name: INIT_SQL_STATEMENTS
    value: >-
      create user haridas with encrypted password 'haridas';...
Another option, which I've employed in the past, is to pass the SQL statements in a parameters file. This allows you to manage the SQL commands more easily under configuration control (e.g. check them into git) and declutter your DeploymentConfig (DC). Here is what I did:
Move your post hook to the DC portion of the template file. Let me know if you need steps on how to export, modify, and re-import a template file, but I didn't want to over-complicate this procedure unnecessarily.
Add a parameter to the template file called SQL_COMMANDS like this:
parameters:
  - description: The SQL commands to run.
    displayName: SQL commands
    name: SQL_COMMANDS
    required: true
In the post hook code of the template (DC section) run the SQL_COMMANDS like this:
execNewPod:
  command:
    - /bin/sh
    - -c
    - echo "${SQL_COMMANDS}" | psql -h ${DATABASE_SERVICE_NAME} -U ${POSTGRESQL_USER} -d ${POSTGRESQL_DATABASE};
Note the other variables in the command are also passed in as parameters.
Create a parameters file similar to this:
POSTGRESQL_USER=postgres
POSTGRESQL_PASSWORD=somepassword
POSTGRESQL_DATABASE=myDatabase
SQL_COMMANDS="CREATE TABLE Configuration(CONFIGURATION_ID character varying(255)
NOT NULL, description character varying(255)
NOT NULL, key character varying(255) NOT NULL, value text NOT NULL,
PRIMARY KEY (CONFIGURATION_ID) ); INSERT INTO Configuration
(CONFIGURATION_ID, description, key, value) VALUES ('10', ... etc."
Deploy your app using the template and pass in the parameters from the file:
oc new-app <template name> --param-file=ParametersFile.txt

DatabaseError: 1 (HY000): Can't create/write to file '2015-04-06 20:48:33.418000'.csv (Errcode: 13 - Permission denied)

I am designing an application in Python and trying to write to a CSV file, but I am getting this error:
DatabaseError: 1 (HY000): Can't create/write to file '2015-04-06 20:48:33.418000'.csv (Errcode: 13 - Permission denied)
The Code:
def generate_report(self):
    conn = mysql.connector.connect(user='root', password='', host='localhost', database='mydatabase')
    exe2 = conn.cursor()
    exe2.execute("""SELECT tbl_site.Site_name, State_Code, Country_Code, Street_Address,
                           instrum_start_date, instrum_end_date, Comment
                    INTO OUTFILE %s
                    FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\\\'
                    LINES TERMINATED BY '\\n'
                    FROM tbl_site
                    JOIN tbl_site_monit_invent ON site_id = tbl_Site_site_id""",
                 (str(datetime.datetime.now()),))
I can run this code without any errors on a Mac, but I need it to work on Windows.
How can I resolve this error?
Simple really. A colon character is not a valid character in a filename on Windows. It's not allowed.
Reference: https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247%28v=vs.85%29.aspx
The colon character is in the list of "reserved characters", along with several others. (NOTE: One use of the colon character is as a separator for an Alternate Data Stream on NTFS. Ref: http://blogs.technet.com/b/askcore/archive/2013/03/24/alternate-data-streams-in-ntfs.aspx)
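For illustration, a minimal sketch of building a Windows-safe timestamp before interpolating it into the filename (the target directory is a placeholder):

import datetime

# Windows forbids ':' in filenames, so format the timestamp without colons.
stamp = datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
outfile = 'C:/reports/report_{}.csv'.format(stamp)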
Followup
The question has been significantly edited since my previous answer was provided. Some notes:
I'm not very familiar with running MySQL on Windows OS. Most of my work with MySQL server is on Linux.
The SELECT ... INTO OUTFILE statement will cause the MySQL server to attempt to write a file on the server host.
The MySQL user (the user logged in to MySQL) must have the FILE privilege in order to use the SELECT ... INTO OUTFILE statement.
Also, the OS account that is running the MySQL server must have OS permissions to write a file to the specified directory, and the file to be written must not already exist. The filename must also conform to the naming rules of the OS filesystem.
Ref: https://dev.mysql.com/doc/refman/5.5/en/select-into.html
For debugging this type of issue, I strongly recommend you echo out the actual SQL text that is going to be sent to the MySQL server. And then take that SQL text and run it from a different client, like the mysql command line client.
For debugging a privileges issue, you can use a much simpler statement. Test writing a file to a directory that is known to exist, that the MySQL server is known to have permission to write files to, and with a filename that does not already exist and that conforms to the rules of the OS and filesystem.
For example, on a normal Linux box, we could test with something like this:
mysql> SELECT 'bar' AS foo INTO OUTFILE '/tmp/mysql_foo.csv';
Before we run that, we can easily verify that the /tmp directory exists, that it is writable by the OS account that is running the mysql server, and that the filename conforms to the rules for the filesystem, and that the filename doesn't exist, e.g.
$ su - mysql
$ ls -l /tmp/mysql_foo.csv
$ echo "foo" >/tmp/mysql_foo.csv
$ cat /tmp/mysql_foo.csv
$ rm /tmp/mysql_foo.csv
$ ls -l /tmp/mysql_foo.csv
Once we get over that hurdle, we can move on to testing writing a file to a different directory, with a more complex filename. Once we get that plumbing working, we can work on getting actual data out in a usable CSV format.
The original question seems to indicate that the MySQL server is running on Windows, and that the filename being written contains colon characters. Windows does not allow a colon as part of a filename.
It was simply a permission error.

.db file and MySQL

I am having real issues with a .db file. It's around 20 GB in size, with three tables and the rest data.
I am on a Mac, so I am having to use some crappy apps, but it won't open in Access.
Does anyone know what software will produce a .db file, and what software will allow me to open it and export it as a CSV or MySQL file?
Also, if the connection was interrupted during transit, could this affect the file?
Since the Mac is BSD-based now, try opening a terminal and executing the command file /path/to/large/db -- it should at least tell you what file type the DB is, and from there you can determine what program to use to open it. It might be MySQL, might be PostgreSQL, might be SQLite -- file will tell you.
Example:
$ file a.db
a.db: SQLite 3.x database
$ file ~/.kde/share/apps/amarok/mysqle/amarok/tracks.{frm,MYD,MYI}
~/.kde/share/apps/amarok/mysqle/amarok/tracks.frm: MySQL table definition file Version 10
~/.kde/share/apps/amarok/mysqle/amarok/tracks.MYD: data
~/.kde/share/apps/amarok/mysqle/amarok/tracks.MYI: MySQL MISAM compressed data file Version 1
So it's SQLite v3? Then try
sqlite3 /path/to/db
and you can run pretty much standard SQL from the CLI. At the CLI, you can type .tables to list all the tables in that DB. Or, if you prefer a GUI, there are a few options listed in this question; the accepted answer was SQLite Manager for Firefox.
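A quick interactive session might look like this (the table names will differ for your file):

$ sqlite3 /path/to/large/db
sqlite> .tables
t
sqlite> SELECT COUNT(*) FROM t;
3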
Then you could drop tables or delete as you see fit.
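And since the question also mentions MySQL: sqlite3's .dump command writes the whole database out as SQL statements, which you can then adapt to MySQL's dialect if needed:

$ sqlite3 a.db .dump > a.sql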
Here's an example of dumping a csv to stdout:
$ sqlite3 -separator ',' -list a.db "SELECT * FROM t"
3,4
3,5
100,200
And to store it to a file -- the > operator redirects output to a file you name:
$ sqlite3 -separator ',' -list a.db "SELECT * FROM t" > a.csv
$ cat a.csv # puts the contents of a.csv on stdout
3,4
3,5
100,200
-separator ',' indicates that fields should be delimited by a comma; -list means to put row data on the same line, using the delimiter; a.db indicates which db to use; and "SELECT * FROM t" is just the SQL command to execute.
I'm not a Mac user, but if it's a SQLite file, I've heard great things about Base.