Import MySQL DB Schema - mysql

I have successfully used mysqldump to export a db schema without data. I now want to import this db schema.
I have tried a couple of methods but keep running into an error related to the < character.
Any ideas?

As the error shows, input redirection in PowerShell does not work with the < sign. Instead, use the Get-Content cmdlet and pipe its output into the command:
Get-Content v:\mcsdb.sql | mysql -u root --sql --recreate-schema mcs
If you are interested in more in-depth information, refer to the documentation.
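Alternatively, if you want to keep the classic < redirection syntax, you can run the command through cmd.exe, which does support it; a sketch with a plain mysql invocation (adjust the options to your setup):
cmd /c "mysql -u root mcs < v:\mcsdb.sql"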

Related

mongoimport fails due to invalid character in massive file, possibly an issue with the character encoding

When I run the following command:
mongoimport -v -d ntsb -c data xml_results.json --jsonArray
I get this error:
2020-07-15T22:51:41.267-0400 using write concern: &{majority false 0}
2020-07-15T22:51:41.270-0400 filesize: 68564556 bytes
2020-07-15T22:51:41.270-0400 using fields:
2020-07-15T22:51:41.270-0400 connected to: mongodb://localhost/
2020-07-15T22:51:41.270-0400 ns: ntsb.data
2020-07-15T22:51:41.271-0400 connected to node type: standalone
2020-07-15T22:51:41.271-0400 Failed: error processing document #1: invalid character '}' looking for beginning of object key string
2020-07-15T22:51:41.271-0400 0 document(s) imported successfully. 0 document(s) failed to import.
I have tried all the suggested solutions and nothing worked. My JSON file is around 60 MB in size, so it would be really hard to go through it by hand and find the bracket issue. Could it be a problem with the UTF-8 encoding? I take an XML file I downloaded from the internet and convert it into JSON with a Python script. When I try the --jsonArray flag, it gives the same error. Any ideas? Thanks!
It turns out that within this massive file there were a few unnecessary commas. I was able to use Python's built-in JSON parsing to jump to the lines with errors and remove them manually. As far as I can tell, the invalid character had nothing to do with the } itself; the preceding comma caused the parser to expect another value before the closing bracket.
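A minimal sketch of that approach (the file name is taken from the question) might look like this, using only the standard library:

import json

# Attempt to parse the file; on failure json raises JSONDecodeError,
# which carries the line and column of the first offending character.
with open("xml_results.json", encoding="utf-8") as f:
    try:
        json.load(f)
        print("File parsed cleanly.")
    except json.JSONDecodeError as err:
        print(f"Parse error at line {err.lineno}, column {err.colno}: {err.msg}")

Re-running it after each manual fix jumps you to the next bad spot.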
After solving this, I was still unable to import successfully because the file was too large. The workaround was to surround all the JSON objects with array brackets [] and use the following command:
mongoimport -v -d ntsb -c data xml_results.json --batchSize 1 --jsonArray
After a few seconds the data imported successfully into Mongo.

How to run a cypher script file from Terminal with the cypher-shell neo4j command?

I have a cypher script file and I would like to run it directly.
All the answers I could find on SO use the command neo4j-shell, which in my version (Neo4j server 3.5.5) seems to be deprecated and replaced by the command cypher-shell.
Using the command sudo ./neo4j-community-3.5.5/bin/cypher-shell --help I got the following instructions:
usage: cypher-shell [-h] [-a ADDRESS] [-u USERNAME] [-p PASSWORD]
[--encryption {true,false}]
[--format {auto,verbose,plain}] [--debug] [--non-interactive] [--sample-rows SAMPLE-ROWS]
[--wrap {true,false}] [-v] [--driver-version] [--fail-fast | --fail-at-end] [cypher]
A command line shell where you can execute Cypher against an
instance of Neo4j. By default the shell is interactive but you can
use it for scripting by passing cypher directly on the command
line or by piping a file with cypher statements (requires Powershell
on Windows).
My file, which comes from the book "Graph Algorithms", tries to create a graph from CSV files:
WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data" AS base
WITH base + "transport-nodes.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MERGE (place:Place {id:row.id})
SET place.latitude = toFloat(row.latitude),
place.longitude = toFloat(row.latitude),
place.population = toInteger(row.population)
WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data/" AS base
WITH base + "transport-relationships.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MATCH (origin:Place {id: row.src})
MATCH (destination:Place {id: row.dst})
MERGE (origin)-[:EROAD {distance: toInteger(row.cost)}]->(destination)
When I try to pass the file directly with the command:
sudo ./neo4j-community-3.5.5/bin/cypher-shell neo_4.cypher
it first asks for username and password, but after typing the correct password (a wrong password results in the error The client is unauthorized due to authentication failure.) I get the error:
Invalid input 'n': expected <init> (line 1, column 1 (offset: 0))
"neo_4.cypher"
^
When I try piping with the command:
sudo cat neo_4.cypher | sudo ./neo4j-community-3.5.5/bin/cypher-shell -u usr -p 'pwd'
no output is generated and no graph either.
How to run a cypher script file with the neo4j command cypher-shell?
Use cypher-shell -f yourscriptname. Check with --help for more description.
I think the key is here:
cypher-shell --help
... Stuff deleted
positional arguments:
cypher an optional string of cypher to execute and then exit
This means that the parameter is an actual Cypher query, not a file name. Thus, this works:
GMc@linux-ihon:~> cypher-shell "match(n) return n;"
username: neo4j
password: ****
+-----------------------------+
| n |
+-----------------------------+
| (:Job {jobName: "Job01"}) |
| (:Job {jobName: "Job02"}) |
But this doesn't, because the text "neo_4.cypher" isn't a valid Cypher query:
cypher-shell neo_4.cypher
The help also says:
example of piping a file:
cat some-cypher.txt | cypher-shell
So:
cat neo_4.cypher | cypher-shell
should work. Possibly your problem is all of the sudos, specifically the cat ... | sudo cypher-shell. It is possible that sudo is shielding cypher-shell from arbitrary piped input (although it doesn't seem to do so on my system).
If you really need to use sudo to run cypher, try using the following:
sudo cypher-shell arguments_as_needed < neo_4.cypher
Oh, also, your script doesn't have a RETURN clause, so it probably won't display any data, but you should still see the summary reports of the records loaded.
Perhaps try something simpler first, such as a simple match ... return ... query in your script.
Oh, and don't forget to terminate the Cypher query with a semicolon!
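For instance, a one-line script (test.cypher is just an example name) containing:
MATCH (n) RETURN count(n);
piped in with cat test.cypher | cypher-shell -u usr -p 'pwd' should print a single count and confirm that the pipeline itself works.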
The problem was in the Cypher file: each statement should end with a semicolon (;). I still need sudo to run the program.
The file taken from the book actually seems to contain other errors as well.
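For reference, here is the script with those fixes applied: a semicolon terminating each statement, the missing slash restored at the end of the first base URL, and the longitude line reading row.longitude instead of row.latitude:

WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data/" AS base
WITH base + "transport-nodes.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MERGE (place:Place {id:row.id})
SET place.latitude = toFloat(row.latitude),
place.longitude = toFloat(row.longitude),
place.population = toInteger(row.population);

WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data/" AS base
WITH base + "transport-relationships.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MATCH (origin:Place {id: row.src})
MATCH (destination:Place {id: row.dst})
MERGE (origin)-[:EROAD {distance: toInteger(row.cost)}]->(destination);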

Using Apache Drill

I am trying to use Apache Drill. The instructions at https://drill.apache.org/docs/drill-in-10-minutes/ seem to be very straightforward but after following them I get the following error:
show files;
Error: VALIDATION ERROR: SHOW FILES is supported in workspace type schema only. Schema [] is not a workspace schema.
Am I maybe missing some configuration for the path to the files?
Looks like you are issuing this command without being connected to any schema. You can issue it after switching to a particular schema with 'use <schema>'. Issue 'show schemas' to list the available schemas.
If you are using sqlline, you may specify the schema while connecting, as below (to connect to the schema 'dfs'):
sqlline -u "jdbc:drill:schema=dfs;zk=<zk node>:<zk port>"
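For example, once connected you can switch to a workspace schema and then list its files (this assumes the default dfs storage plugin is enabled):
use dfs;
show files;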

cloudControl mysqls.free import database

I am trying to import an SQL file into my mysqls.free add-on on cloudControl, but it is not working. The documentation says:
To import an sql file into a MySQL database use the following command.
$ mysql -u MYSQLS_USER -p --host=MYSQLS_SERVER --ssl-ca=PATH_TO_CERTIFICATE/mysql-ssl-ca-cert.pem MYSQLS_DATABASE < MYSQLS_HOSTNAME.sql
I was able to connect to the SQL server, but the credentials there list MYSQLS_HOSTNAME rather than MYSQLS_SERVER, and MYSQLS_USERNAME rather than MYSQLS_USER.
Do I need to enter different credentials?
Thanks!
Thank you for the hint about the documentation. As you mentioned, the MYSQLS_USER and MYSQLS_SERVER placeholders were wrong. It is fixed now: https://www.cloudcontrol.com/dev-center/Add-on%20Documentation/Data%20Storage/MySQLs
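With the corrected placeholders, the import command from the documentation then reads (the dump file name is just an example):
$ mysql -u MYSQLS_USERNAME -p --host=MYSQLS_HOSTNAME --ssl-ca=PATH_TO_CERTIFICATE/mysql-ssl-ca-cert.pem MYSQLS_DATABASE < dump.sql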

Update automatic attributes in Opscode Chef (serialized_object)

I had a couple of nodes on my Chef server that hit a problem while bootstrapping and were missing the FQDN and domain automatic attributes, because of which they were not indexed by Solr and not searchable by knife. I could not re-bootstrap these machines but wanted to fix this, and it took me a while to work out how, so I am posting this hoping that it will save others some time.
Automatic attributes are stored by Chef in the database and are not editable by knife (see Chef Attributes Overview). They live in a column named serialized_object in the nodes table of Chef's database; the value is hex-encoded and is in fact a gzipped JSON string.
To obtain the JSON string:
Use a PostgreSQL client to connect to the Chef PostgreSQL database (you can find the credentials on the Chef server in /etc/chef-server/chef-server-secrets.json)
Save the contents of the serialized_object column to a file, say serialized_object.hex (it should look something like '\x1f8b08000...')
Run: xxd -p -r serialized_object.hex > serialized_object.gz
Run: gunzip serialized_object.gz
Now the file serialized_object contains the attributes in JSON format, which you can edit. After editing, you can store its contents back in the Chef server as follows:
Run: gzip serialized_object
Run: xxd -p serialized_object.gz > serialized_object.hex
Now use the PostgreSQL client to write the hex data back (be sure to remove the leading \x prefix from the hex string) with the following query:
update nodes set serialized_object = decode('1f8b08000...','hex') where name = '<node name>'
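If you prefer to do the whole unpack/edit/repack round trip in one place instead of using xxd and gunzip, a small Python sketch could look like the following (it assumes the hex string, without the leading \x, was saved to serialized_object.hex; the fqdn value is only a hypothetical example):

import gzip
import json

# Decode the hex-encoded column value and decompress it into the JSON attributes.
hex_string = open("serialized_object.hex").read().strip()
attributes = json.loads(gzip.decompress(bytes.fromhex(hex_string)))

# Edit the attributes as needed, e.g. restore the missing automatic values.
attributes["automatic"]["fqdn"] = "node01.example.com"  # hypothetical value

# Re-compress and hex-encode the result for the update query above.
print(gzip.compress(json.dumps(attributes).encode("utf-8")).hex())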
Hope this helps someone :)