Mitaka nova-manage api_db sync 'error: No sql_connection parameter is established' - openstack-nova

I am trying to set up a Mitaka OpenStack cloud. But when I try to execute:
# /usr/bin/nova-manage --debug api_db sync
And then I get the error message:
error: No sql_connection parameter is established
Yet I am able to access the nova database from the mysql command line using the same credentials I have in /etc/nova/nova.conf:
[database]
connection=mysql://nova:nova@svl-os:3306/nova

In the Mitaka release they added a new DB schema, nova_api. So I needed to add ...
[api_database]
connection=mysql://nova_api_db_user:password@mydbhost:3306/nova_api
... to my /etc/nova/nova.conf file.
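For completeness, a minimal sketch of the full flow, assuming the nova_api schema and its database user still need to be created; the user, password and host below are placeholders for your own values:
# create the nova_api schema and a user that can reach it (MySQL 5.x syntax)
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS nova_api;"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova_api_db_user'@'%' IDENTIFIED BY 'password';"
# then populate the new schema
/usr/bin/nova-manage --debug api_db sync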

Related

Value for 'configPath' when running checkForServerUpgrade on AWS RDS

To prepare my upgrade from mysql 5.7 to mysql 8, I want to run the upgrade utility checker. Here's what I did so far:
1. installed mysqlsh on my machine
2. started mysqlsh
3. executed util.checkForServerUpgrade targeting the server that I want to upgrade
Here's the exact command that I used in step 3:
util.checkForServerUpgrade('root@my-remote-host:3306', { "password":"my-password" })
This runs fine but some checks are not executed because I don't provide the configPath parameter. For example, here's a warning that I get:
14) Removed system variables for error logging to the system log configuration
To run this check requires full path to MySQL server configuration file to be specified at 'configPath' key of options dictionary
More information:
https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-13.html#mysqld-8-0-13-logging
Does anybody know what value I should provide for the configPath parameter?
I've tried to do the same using the util.checkForServerUpgrade command with configPath defined, without success. I then tried to run the same check directly from outside the mysqlsh shell, with success:
mysqlsh -- util check-for-server-upgrade root@localhost --target-version=8.0.13 --output-format=JSON --config-path=/etc/mysql/my.cnf
and it worked. Note that when I tried to run, from a mysqlsh session connected as root@localhost, the command:
util.checkForServerUpgrade({"configPath":"/etc/mysql/my.cnf"})
mysqlsh replied with:
"Util.checkForServerUpgrade: Argument #1: Invalid values in connection options: configPath (ArgumentError)"
Try putting the connection string in as well, for example:
util.checkForServerUpgrade('root@localhost', {'configPath': '/etc/my.cnf'})
This worked for me; without the connection string it does not.
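For the original remote call, a minimal sketch would be to keep the password option and add configPath to the same options dictionary (host, password and path are placeholders):
util.checkForServerUpgrade('root@my-remote-host:3306', { "password": "my-password", "configPath": "/etc/mysql/my.cnf" })
Since mysqlsh reads that file itself, configPath has to point at a copy of the server's my.cnf that is accessible from the machine running mysqlsh.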

AWS EB undefined RDS_HOSTNAME with Database hosts array empty

I'm currently working on a Laravel project using AWS EB with RDS.
When I run
php artisan migrate --seed
then I get
PHP Notice: Undefined index: RDS_HOSTNAME in /var/app/current/config/database.php on line 5
PHP Notice: Undefined index: RDS_USERNAME in /var/app/current/config/database.php on line 6
PHP Notice: Undefined index: RDS_PASSWORD in /var/app/current/config/database.php on line 7
PHP Notice: Undefined index: RDS_DB_NAME in /var/app/current/config/database.php on line 8
and
Database hosts array is empty. (SQL: select * from
information_schema.tables where table_schema = ? and table_name =
migrations and table_type = 'BASE TABLE')
I'm not using a .env file; these variables are defined in the EB configuration and read in my ./config/database.php file.
I tested changing the variables' prefix to RDS_ instead of DB_ in EB, but that didn't solve the problem.
Per the Elastic Beanstalk documentation here:
Note: Environment properties aren't automatically exported to the
shell, even though they are present in the instance. Instead,
environment properties are made available to the application through
the stack that it runs in, based on which platform you're using.
So Elastic Beanstalk is going to pass those environment variables to the Apache HTTP server process, but not the Linux shell where you are running that php command.
Per the documentation here, you need to use the get-config script to pull those environment variable values into your shell.
So you'll need to do this for each variable:
/opt/elasticbeanstalk/bin/get-config environment -k RDS_USERNAME
which will print the value of RDS_USERNAME. Then export it so other commands can use it:
export RDS_USERNAME="value"
Do that for all four - RDS_HOSTNAME, RDS_USERNAME, RDS_PASSWORD and RDS_DB_NAME. Then if you run
export
you should see RDS_HOSTNAME, RDS_USERNAME, RDS_PASSWORD and RDS_DB_NAME listed with their respective values.
Once that's done, run
php artisan migrate --seed
and you'll get the expected result.
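As a convenience, here is a minimal sketch that exports all four in one go, assuming get-config is at its default location on the instance:
# pull each RDS_* environment property into the current shell
for VAR in RDS_HOSTNAME RDS_USERNAME RDS_PASSWORD RDS_DB_NAME; do
    export "$VAR"="$(/opt/elasticbeanstalk/bin/get-config environment -k "$VAR")"
done
php artisan migrate --seed
Paste it into the shell (or source it from a file) so the exported values stay in your current session.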

Cannot access instance using compute ssh: "ERROR: [....putty.exe] exited with return code [1]"

Here's my problem :
I would like to connect to a gcp instance. When I run the Google Cloud SDK shell as an administrator with the command :
gcloud compute ssh my_instance --zone=europe-west1-b -- -L=8081:localhost:8081
..I get this error : ERROR (gcloud.compute.ssh) [..../putty.exe] exited with return code [1]
My instance is running with the enable-oslogin metadata set to TRUE, as is the project.
Do you have any idea what the problem is?
When using -- in the command, you are passing SSH flags after the dashes and not gcloud command flags. To explain, gcloud compute ssh is a thin wrapper around the ssh(1) command that takes care of authentication and the translation of the instance name into an IP address.
In this case, -- is equivalent to --ssh-flag as per this SDK reference. It seems that putty is outputting an error that is not passed into the command line (SDK shell). The actual error should be visible in the dialog window before putty exits.
I have tried the command myself on Windows and the exact error was unknown option "L=8081:localhost:8081". The SSH flag is not accepted as you have an = sign there (typo).
According to the linuxcommand.org manual, the flag should be in this format:
-L [bind_address:]port:host:hostport
Hence, you should run the command like this:
gcloud compute ssh my_instance --zone=europe-west1-b -- -L 8081:localhost:8081
Note also that you may have to create a firewall rule to allow Ingress to the instance on port 8081.
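If that does turn out to be needed, a minimal sketch of such a rule follows; the rule name is a placeholder and the 0.0.0.0/0 source range is only for illustration, so restrict it for real use:
gcloud compute firewall-rules create allow-ingress-8081 \
    --network=default --direction=INGRESS --allow=tcp:8081 \
    --source-ranges=0.0.0.0/0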

Using Apache Drill

I am trying to use Apache Drill. The instructions at https://drill.apache.org/docs/drill-in-10-minutes/ seem to be very straightforward but after following them I get the following error:
show files;
Error: VALIDATION ERROR: SHOW FILES is supported in workspace type schema only. Schema [] is not a workspace schema.
Is there maybe a missing config for the path to the files?
It looks like you are issuing this command without being connected to any schema. You can issue it after switching to a particular schema with 'use <schema>'. Issue 'show schemas' to list the available schemas.
If you are using sqlline, you may specify the schema while connecting, as below (to connect to schema 'dfs'):
sqlline -u "jdbc:drill:schema=dfs;zk=<zk node>:<zk port>"
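Alternatively, a minimal sketch of the same thing from inside an already open sqlline session; dfs.tmp is one of the workspace schemas defined in the default dfs storage plugin configuration, so substitute whatever 'show schemas' reports on your installation:
show schemas;
use dfs.tmp;
show files;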

Importing CSV to MySQL table returns an error #1148

I am trying to import with DirectAdmin. When I selected CSV without using LOAD DATA, I got the error "Invalid field count in CSV input on line 1."
When I tried with LOAD DATA, I got the following error: "#1148 - The used command is not allowed with this MySQL version."
The CSV was created in MS Access from MS Access database.
Here are the first 2 rows:
"product_id","vendor_id"," product_parent_id","product_sku","product_s_desc ","product_desc","product_thumb_image ","product_full_image","product_publish","product_weight","product_weight_uom ","product_length ","product_width","product_height ","product_lwh_uom ","disp_order","price","sale","product_url ","product_in_stock","product_available_date","product_availability ","product_special ","product_discount_id ","ship_code_id ","cdate ","mdate ","product_name ","product_sales ","attribute ","custom_attribute ","product_tax_id ","product_unit ","product_packaging ","child_options ","quantity_options ","child_option_ids ","product_order_levels "
41,2,0,1,,,"resized/Krug-Rose-Champagne-lg.jpg","Krug-Rose-Champagne-lg.jpg","Y","750.0000","grams","4.0000","4.0000",14,,14,3516,0,,,1296518400,,"N",0,"NULL ",1296574622,1297953843,"קרוג רוזה",0,,,2,"piece ",65537,"N,N,N,N,N,Y,20%,10%, ","none,0,0,1 ",,"0,0 "
From the mysql command line, pass the following parameter:
mysql -u username -p dbname --local-infile
Instead of using LOAD DATA INFILE, use LOAD DATA LOCAL INFILE and it should perform the import.
By default, mysql does not enable LOAD DATA LOCAL because of the security concerns described here:
http://dev.mysql.com/doc/refman/5.0/en/load-data-local.html
If LOAD DATA LOCAL is disabled, either in the server or the client, a
client that attempts to issue such a statement receives the following
error message:
ERROR 1148: The used command is not allowed with this MySQL version
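Putting it together, a minimal sketch of the import; the file path, table name and terminators are assumptions based on the comma-separated, double-quoted sample above:
mysql -u username -p --local-infile dbname
-- then, at the mysql> prompt:
LOAD DATA LOCAL INFILE '/path/to/products.csv'
INTO TABLE products
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;
If the statement is still rejected, local_infile may also have to be enabled on the server side (SET GLOBAL local_infile = 1, given the privilege to do so).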