I'm trying to connect nova-compute on Hyper-V to devstack on Ubuntu 12.04, but this error appears:
CRITICAL nova [-] (OperationalError) (1054, "Unknown column
'instances.server_name' in 'field list'") 'SELECT instances.created_at
AS instances_created_at, instances.updated_at AS instances_updated_at,
instances.deleted_at AS instances_deleted_at, instances.deleted AS
instances_deleted, instances.id AS instances_id, instances.user_id AS
instances_user_id, instances.project_id AS instances_project_id,
instances.image_ref AS instances_image_ref, instances.kernel_id AS
instances_kernel_id, instances.ramdisk_id AS instances_ramdisk_id,
instances.server_name AS instances_server_name, instances.launch_index
AS instances_launch_index, instances.key_name AS instances_key_name,
instances.key_data AS instances_key_data, instances.power_state AS
instances_power_state, instances.vm_state AS instances_vm_state,
instances.task_state AS instances_task_state, instances.memory_mb AS
instances_memory_mb, instances.vcpus AS instances_vcpus,
instances.root_gb AS instances_root_gb, instances.ephemeral_gb AS
instances_ephemeral_gb, instances.hostname AS instances_hostname,
instances.host AS instances_host, instances.instance_type_id AS
instances_instance_type_id, instances.user_data AS
instances_user_data, instances.reservation_id AS
instances_reservation_id, instances.scheduled_at AS
instances_scheduled_at, instances.launched_at AS
instances_launched_at, instances.terminated_at AS
instances_terminated_at, instances.availability_zone AS
instances_availability_zone, instances.display_name AS
instances_display_name, instances.display_description AS
instances_display_description, instances.launched_on AS
instances_launched_on, instances.locked AS instances_locked,
instances.os_type AS instances_os_type, instances.architecture AS
instances_architecture, instances.vm_mode AS instances_vm_mode,
instances.uuid AS instances_uuid, instances.root_device_name AS
instances_root_device_name, instances.default_ephemeral_device AS
instances_default_ephemeral_device, instances.default_swap_device AS
instances_default_swap_device, instances.config_drive AS
instances_config_drive, instances.access_ip_v4 AS
instances_access_ip_v4, instances.access_ip_v6 AS
instances_access_ip_v6, instances.auto_disk_config AS
instances_auto_disk_config, instances.progress AS instances_progress,
instances.shutdown_terminate AS instances_shutdown_terminate,
instances.disable_terminate AS instances_disable_terminate,
instance_types_1.created_at AS instance_types_1_created_at,
instance_types_1.updated_at AS instance_types_1_updated_at,
instance_types_1.deleted_at AS instance_types_1_deleted_at,
instance_types_1.deleted AS instance_types_1_deleted,
instance_types_1.id AS instance_types_1_id, instance_types_1.name AS
instance_types_1_name, instance_types_1.memory_mb AS
instance_types_1_memory_mb, instance_types_1.vcpus AS
instance_types_1_vcpus, instance_types_1.root_gb AS
instance_types_1_root_gb, instance_types_1.ephemeral_gb AS
instance_types_1_ephemeral_gb, instance_types_1.flavorid AS
instance_types_1_flavorid, instance_types_1.swap AS
instance_types_1_swap, instance_types_1.rxtx_factor AS
instance_types_1_rxtx_factor, instance_types_1.vcpu_weight AS
instance_types_1_vcpu_weight, instance_types_1.disabled AS
instance_types_1_disabled, instance_types_1.is_public AS
instance_types_1_is_public, instance_info_caches_1.created_at AS
instance_info_caches_1_created_at, instance_info_caches_1.updated_at
AS instance_info_caches_1_updated_at,
instance_info_caches_1.deleted_at AS
instance_info_caches_1_deleted_at, instance_info_caches_1.deleted AS
instance_info_caches_1_deleted, instance_info_caches_1.id AS
instance_info_caches_1_id, instance_info_caches_1.network_info AS
instance_info_caches_1_network_info,
instance_info_caches_1.instance_uuid AS
instance_info_caches_1_instance_uuid, security_groups_1.created_at AS
security_groups_1_created_at, security_groups_1.updated_at AS
security_groups_1_updated_at, security_groups_1.deleted_at AS
security_groups_1_deleted_at, security_groups_1.deleted AS
security_groups_1_deleted, security_groups_1.id AS
security_groups_1_id, security_groups_1.name AS
security_groups_1_name, security_groups_1.description AS
security_groups_1_description, security_groups_1.user_id AS
security_groups_1_user_id, security_groups_1.project_id AS
security_groups_1_project_id, instance_metadata_1.created_at AS
instance_metadata_1_created_at, instance_metadata_1.updated_at AS
instance_metadata_1_updated_at, instance_metadata_1.deleted_at AS
instance_metadata_1_deleted_at, instance_metadata_1.deleted AS
instance_metadata_1_deleted, instance_metadata_1.id AS
instance_metadata_1_id, instance_metadata_1.key AS
instance_metadata_1_key, instance_metadata_1.value AS
instance_metadata_1_value, instance_metadata_1.instance_uuid AS
instance_metadata_1_instance_uuid \nFROM instances LEFT OUTER JOIN
instance_types AS instance_types_1 ON instances.instance_type_id =
instance_types_1.id LEFT OUTER JOIN instance_info_caches AS
instance_info_caches_1 ON instance_info_caches_1.instance_uuid =
instances.uuid LEFT OUTER JOIN security_group_instance_association AS
security_group_instance_association_1 ON
security_group_instance_association_1.instance_uuid = instances.uuid
AND instances.deleted = %s LEFT OUTER JOIN security_groups AS
security_groups_1 ON security_groups_1.id =
security_group_instance_association_1.security_group_id AND
security_group_instance_association_1.deleted = %s AND
security_groups_1.deleted = %s LEFT OUTER JOIN instance_metadata AS
instance_metadata_1 ON instance_metadata_1.instance_uuid =
instances.uuid AND instance_metadata_1.deleted = %s \nWHERE
instances.deleted = %s AND instances.host = %s' (0, 0, 0, 0, 0,
'WIN-NVR4BLPKAS1')
There's a column missing from the instances table in the DB.
The schema may be out of date with the code; this command will update it:
nova-manage db sync
If the schema is very far behind the code, that command may not work cleanly. In that case you may be able to sync backwards to an early version, then forwards again:
First find the earliest version with:
ls nova/db/sqlalchemy/migrate_repo/versions | sort -n | head
Then sync to that version:
nova-manage db sync 112 # if 112 is the earliest in the list
Then sync forward to the end again:
nova-manage db sync
If that doesn't work, you may be able to go into mysql and add the column manually:
alter table instances add column server_name varchar(255); -- MySQL requires a length for varchar; 255 is a guess matching nova's typical String(255) columns
Finally, you could drop the DB, re-create it, and then do the sync (in mysql):
drop database nova;
create database nova;
grant all on nova.* to 'openstack'@'localhost' identified by 'openstack';
That assumes your nova DB username and password are both openstack. (You can get them from the devstack localrc.)
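For example, the complete reset from a shell (a sketch, assuming the openstack credentials above and a root MySQL account):
# recreate the nova database, then rebuild the schema at the current code version
mysql -u root -p -e "drop database nova; create database nova; grant all on nova.* to 'openstack'@'localhost' identified by 'openstack';"
nova-manage db sync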
I had the exact same issue while expanding to more compute hosts. While troubleshooting, I found that my nova services were at different versions. I added the proper repositories, removed the current OpenStack compute and network services, then re-installed and re-configured them, and everything went OK.
I was using Havana and added the repositories below. (I'm using CentOS on all machines.)
yum install http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-6.noarch.rpm
yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
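After adding the repositories, the re-install step was along these lines (openstack-nova-compute and openstack-nova-network are the usual RDO package names; adjust to match your deployment):
yum remove openstack-nova-compute openstack-nova-network
yum install openstack-nova-compute openstack-nova-network
# re-apply your nova.conf settings, then restart the services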
Related
I am using Python to pipe data from one MySQL database to another. Here is a lightly abstracted version of the code, which I have been using for several months and which has worked fairly well:
import shlex
import subprocess

def copy_table(mytable):
    # Dump the table from the source host and pipe it straight into the destination
    raw_mysqldump = "mysqldump -h source_host -u source_user --password='secret' --lock-tables=FALSE myschema {}".format(mytable)
    raw_mysql = "mysql -h destination_host -u destination_user --password='secret' myschema"
    mysqldump = shlex.split(raw_mysqldump)
    mysql = shlex.split(raw_mysql)
    ps = subprocess.Popen(mysqldump, stdout=subprocess.PIPE)
    subprocess.check_output(mysql, stdin=ps.stdout)
    ps.stdout.close()
    retcode = ps.wait()
    if retcode == 0:
        return mytable, 1
    else:
        return mytable, 0
The size of the data has grown, and it currently takes about an hour to copy something like 30 tables. In order to speed things along, I would like to use multiprocessing. I am attempting to execute the following code on an Ubuntu server, which is a t2.micro (AWS EC2).
import multiprocessing

def copy_tables(tables):
    with multiprocessing.Pool(processes=4) as pool:
        # copy_table takes a single argument, so pass 1-tuples to starmap
        params = [(table,) for table in sorted(tables)]
        results = pool.starmap(copy_table, params)
    failed_tables = [table for table, success in results if success == 0]
    all_tables_processed = not failed_tables
    return all_tables_processed
The problem is: almost all of the tables will copy, but there are always a couple of child processes left over that will not complete - they just hang, and I can see from monitoring the database that no data is being transferred. It feels like they have somehow disconnected from the parent process, or that data is not being returned properly.
This is my first question, and I've tried to be both specific and concise - thanks in advance for any help, and please let me know if I can provide more information.
I think the following code
ps = subprocess.Popen(mysqldump, stdout=subprocess.PIPE)
subprocess.check_output(mysql, stdin=ps.stdout)
ps.stdout.close()
retcode = ps.wait()
should be
ps = subprocess.Popen(mysqldump, stdout=subprocess.PIPE)
sps = subprocess.Popen(mysql, stdin=ps.stdout)
retcode = ps.wait()
ps.stdout.close()
sps.wait()
You should not close the pipe until the mysqldump process has finished. Also, check_output is blocking: it will hang until its stdin reaches EOF.
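For completeness, the pipeline example in the Python subprocess documentation orders things slightly differently: the parent closes its copy of ps.stdout as soon as the second process owns it, so that mysqldump can receive SIGPIPE if mysql exits early. A minimal sketch of that variant (the command strings are placeholders from the question):
import shlex
import subprocess

dump_cmd = shlex.split("mysqldump -h source_host -u source_user --password='secret' --lock-tables=FALSE myschema mytable")
load_cmd = shlex.split("mysql -h destination_host -u destination_user --password='secret' myschema")

ps = subprocess.Popen(dump_cmd, stdout=subprocess.PIPE)
sps = subprocess.Popen(load_cmd, stdin=ps.stdout)
ps.stdout.close()  # parent drops its handle so mysqldump sees SIGPIPE if mysql exits early
sps.wait()
retcode = ps.wait()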
I have a data frame containing columns 'Quarter' having values like "16/17 Q1", "16/17 Q2"... and 'Vendor' having values like "a", "b"... .
I am trying to write this data frame into database using
query <- paste("INSERT INTO cc_demo (Quarter,Vendor) VALUES(dd$FY_QUARTER,dd$VENDOR.x)")
but it throws this error:
Error in .local(conn, statement, ...) :
could not run statement: Unknown column 'dd$FY_QUARTER' in 'field list'
I am new to RMySQL. How can I write the entire data frame to the database?
To write a data frame to a MySQL DB you need to:
Create a connection to your database, specifying:
MySQL connection
User
Password
Host
Database name
library("RMySQL")
connection <- dbConnect(MySQL(), user = 'root', password = 'password', host = 'localhost', dbname = 'TheDB')
Using the connection, create a table and export the data frame to the database:
dbWriteTable(connection, "testTable", testTable)
You can overwrite an existing table like this:
dbWriteTable(connection, "testTable", testTable_2, overwrite=TRUE)
I would advise against hand-writing SQL when you can use very handy functions such as dbWriteTable from the RMySQL package. But for the sake of practice, below is an example of how to build a SQL query that does multiple inserts for a MySQL database:
# Set up a data.frame
dd <- data.frame(Quarter = c("16/17 Q1", "16/17 Q2"), Vendors = c("a","b"))
# Begin the query
sql_qry <- "insert into cc_demo (Quarter,Vendor) VALUES"
# Finish it with
sql_qry <- paste0(sql_qry, paste(sprintf("('%s', '%s')", dd$Quarter, dd$Vendors), collapse = ","))
You should get:
"insert into cc_demo (Quarter,Vendor) VALUES('16/17 Q1', 'a'),('16/17 Q2', 'b')"
You can provide this query to your database connection in order to run it.
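For example, with dbSendQuery and a connection like the one from the previous answer (the name connection is just an example):
dbSendQuery(connection, sql_qry)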
I hope this helps.
I'm trying to get data from MySQL DB into Rstudio-server. My actions are like
mydb = dbConnect(MySQL(), user='user', password='password', dbname='dbname', host='localhost')
query <- stri_paste('select sellings.updated_at AS Up_Date, concat(item_parameters.title, " ", ad_attributes.int_value) AS Class, CONCAT(geos.name, " ", geos.kind) AS place, geos.lon, geos.lat, sellings.price AS price, ((geo_routes.distance*2/1000 + 100)) AS delivery_cost FROM sellings, users, item_parameters, ad_attributes, geos, geo_routes WHERE users.encrypted_password!="" && item_parameters.title="Класс" && sellings.price IS NOT NULL && ad_attributes.int_value IS NOT NULL AND users.id=sellings.user_id AND item_parameters.id=ad_attributes.item_parameter_id AND sellings.id = ad_attributes.ad_id AND sellings.geo_guid = geos.guid AND geos.routable_guid = geo_routes.src_guid AND geo_routes.distance = (SELECT geo_routes.distance FROM geo_routes, geos WHERE geos.guid = sellings.geo_guid AND geo_routes.src_guid = geos.routable_guid AND geo_routes.dst_guid = (SELECT geos.routable_guid FROM geos WHERE geos.name = "Воронеж" && geos.kind = "г")) ORDER BY Up_Date;')
rs = dbGetQuery(mydb, query)
And I get an empty data frame. But when I do the same with my local DB, everything is OK: the query takes a pretty long time, about 3 minutes, but it works properly. Moreover, the same query works fine from the MySQL command line on the server, where it takes about 4 seconds. The server OS is Debian 7; the local machine runs Windows 8. Any ideas?
Sometimes when querying from the command line, the default schema has been set by a previous command. That setting doesn't carry over to R, so the exact same query that works from the command line might not work in an R session. Maybe check the dbname.
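A quick way to verify (a sketch using the mydb connection from the question): ask the server which schema the R session is actually in.
dbGetQuery(mydb, "SELECT DATABASE();")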
Insert the statements below in your SQL query:
SET NOCOUNT ON
SET ANSI_WARNINGS OFF
It worked for me
I am trying to connect to a MySQL server using LuaSQL via mysql-proxy. I am trying to execute a simple program (db.lua):
require("luasql.mysql")
local _sqlEnv = assert(luasql.mysql())
local _con = nil
function read_auth(auth)
local host, port = string.match(proxy.backends[1].address, "(.*):(.*)")
_con = assert(_sqlEnv:connect( "db_name", "username", "password", "hostname", "3306"))
end
function disconnect_client()
assert(_con:close())
end
function read_query(packet)
local cur = con:execute("select * from t1")
myTable = {}
row = cur:fetch(myTable, "a")
print(myTable.id,myTable.user)
end
This code runs fine when I execute it without mysql-proxy. When I connect through mysql-proxy, the error log displays these errors:
mysql.lua:8: bad argument #1 to 'insert' (table expected, got nil)
db.lua:1: loop or previous error loading module 'luasql.mysql'
mysql.lua is a file that ships with LuaSQL:
---------------------------------------------------------------------
-- MySQL specific tests and configurations.
-- $Id: mysql.lua,v 1.4 2006/01/25 20:28:30 tomas Exp $
---------------------------------------------------------------------
QUERYING_STRING_TYPE_NAME = "binary(65535)"
table.insert (CUR_METHODS, "numrows")
table.insert (EXTENSIONS, numrows)
---------------------------------------------------------------------
-- Build SQL command to create the test table.
---------------------------------------------------------------------
local _define_table = define_table
function define_table (n)
    return _define_table(n) .. " TYPE = InnoDB;"
end
---------------------------------------------------------------------
-- MySQL versions 4.0.x do not implement rollback.
---------------------------------------------------------------------
local _rollback = rollback
function rollback ()
    if luasql._MYSQLVERSION and string.sub(luasql._MYSQLVERSION, 1, 3) == "4.0" then
        io.write("skipping rollback test (mysql version 4.0.x)")
        return
    else
        _rollback ()
    end
end
As stated in my previous comment, the error indicates that table.insert(CUR_METHODS, ...) is getting nil as its first argument. Since the first argument is CUR_METHODS, it means this object has not been defined yet. Since this happens near the top of the luasql.mysql module, my guess is that the luasql initialization was incomplete, maybe because the MySQL DLL was not found. Most likely LUA_CPATH does not find the MySQL DLL for luasql, but I'm surprised that you wouldn't get a package error, so something odd is going on. You'll have to dig into the luasql module and its C file to figure out why it is not being created.
Update: alternatively, update your post to show the output of the two prints below from your mysql-proxy script, and also show the path of the folder where luasql is installed and the contents of that folder.
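print("LUA path: ", package.path)
print("LUA cpath:", package.cpath)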
I just downloaded mysql-proxy and created this Lua script (found in the MySQL docs):
function read_query(packet)
    if string.byte(packet) == proxy.COM_QUERY then
        print("QUERY: " .. string.sub(packet, 2))
    end
end
This is the command-line I'm using:
mysql-proxy -P localhost:1234 -b localhost:3306 --proxy-lua-script=profile.lua --plugins=proxy
When I run a simple query (like "select * from table1"), this error is reported: "failed: .\lua-scope.c:241: stat(C:...\profile.lua) failed: No error (0)"
Note: if I run mysql-proxy without a Lua script, no error occurs.
Do I need to install something to get mysql-proxy and query tracing working?
My environment is Windows 7 Professional x64.
Sorry for the bad English.
The error you're getting is caused by --proxy-lua-script pointing to a file that mysql-proxy can't find. Either you've typed the name in wrong, you've typed the path in wrong, or you are expecting it in your CWD and it's not there. Or actually, looking at the entire error a little more closely, it seems possible that mysql-proxy itself sees the file in the CWD OK, but one of the underlying modules doesn't like it (possibly because mysql-proxy changes the CWD somehow?).
Try saving profile.lua to the root of your C: drive and trying different versions of the option like so:
--proxy-lua-script=c:\profile.lua
--proxy-lua-script=\profile.lua
--proxy-lua-script=/profile.lua
One of those will probably work.
simple query log lua script:
require("mysql.tokenizer")
local fh = io.open("/var/log/mysql/proxy.query.log", "a+")
fh:setvbuf('line',4096)
local the_query = "";
local seqno = 0;
function read_query( packet )
    if string.byte(packet) == proxy.COM_QUERY then
        seqno = seqno + 1
        -- collapse runs of whitespace and trim the query before logging it
        the_query = (string.gsub(string.gsub(string.sub(packet, 2), "%s%s*", ' '), "^%s*(.-)%s*$", "%1"))
        fh:write(string.format("%s %09d %09d : %s (%s) -- %s\n",
            os.date('%Y-%m-%d %H:%M:%S'),
            proxy.connection.server.thread_id,
            seqno,
            proxy.connection.client.username,
            proxy.connection.client.default_db,
            the_query))
        fh:flush()
        return proxy.PROXY_SEND_QUERY
    else
        the_query = ""
    end
end
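To use it, point mysql-proxy at the script the same way as in the question above (the file name querylog.lua is only an example):
mysql-proxy -P localhost:1234 -b localhost:3306 --plugins=proxy --proxy-lua-script=querylog.lua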