Can't create tables in MySQL Workbench - mysql

I've just installed MySQL Workbench on my computer and have imported an old database into the system, which seems to be working fine. All the data and tables are there, and I can do selects, inserts, updates, etc.
However, if I expand a database, I see Tables, Views and Routines. If I right-click on Tables and click Create Table..., nothing happens. However, if I manually type in the CREATE TABLE SQL command, it creates a table just fine.
The old laptop has:
OS: Ubuntu 10.04.3
MySQL: 5.1.41
MySQL Workbench: 5.2.33
The new laptop has:
OS: Ubuntu 10.04.3
MySQL: 5.1.41
MySQL Workbench: 5.2.37
I have also tried starting MySQL Workbench using sudo mysql-workbench, and I get the same problem.
However, it does give the following output if I start it from the command line on the new laptop:
oshirowanen@laptop:~$ mysql-workbench
Ready.
** Message: query.save_edits built-in command is being overwritten
** Message: query.discard_edits built-in command is being overwritten
** (mysql-workbench-bin:2737): CRITICAL **: murrine_style_draw_box: assertion `height >= -1' failed
(mysql-workbench-bin:2737): glibmm-CRITICAL **:
unhandled exception (type Glib::Error) in signal handler:
domain: gtk-builder-error-quark
code : 6
what : Unknown internal child: selection
(mysql-workbench-bin:2737): glibmm-CRITICAL **:
unhandled exception (type Glib::Error) in signal handler:
domain: gtk-builder-error-quark
code : 6
what : Unknown internal child: selection
oshirowanen@laptop:~$
On the old laptop I get:
oshirowanen@laptop:~$ mysql-workbench
Log levels '0111000'
disabling log level 0
enabling log level 1
enabling log level 2
enabling log level 3
disabling log level 4
disabling log level 5
disabling log level 6
Ready.
Any idea why I can't create tables using the mouse?

This is a known issue with Ubuntu 10.04. Go to:
/usr/share/mysql-workbench/modules/data/editor_mysql_table_live.glade
and delete all the nodes that look like this:
<child internal-child="selection">
<object class="GtkTreeSelection" id="treeview-selection5"/>
</child>
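One way to strip those nodes in bulk is with sed (a sketch; it assumes each node spans exactly the three lines shown above, from the opening <child internal-child="selection"> to the next </child>, so back up the file first):
cd /usr/share/mysql-workbench/modules/data/
sudo cp editor_mysql_table_live.glade editor_mysql_table_live.glade.bak
sudo sed -i '/<child internal-child="selection">/,/<\/child>/d' editor_mysql_table_live.glade
Restart MySQL Workbench afterwards so it re-reads the .glade file.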

Related

MyDumper / MyLoader unable to import large tables into Azure MySQL database

I'm in the process of migrating a large MySQL database to an "Azure Database for MySQL Flexible Server".
The database has a few tables that are larger than 1GB, the largest one being 200GB. All tables are InnoDB tables.
Because of the size of the tables, a normal mysqldump didn't work, so, as suggested here, I resorted to MyDumper/MyLoader: https://learn.microsoft.com/en-us/azure/mysql/concepts-migrate-mydumper-myloader
I dumped one of the large tables (a 31GB table) with the following command:
mydumper --database mySchema \
--tables-list my_large_table \
--host database \
--user root \
--ask-password \
--compress-protocol \
--chunk-filesize 500 \
--verbose 3 \
--compress \
--statement-size 104857600
I then copied the files over to a VM in the same region/zone as the Azure database and started the import with the following command:
myloader --directory mydumpdir \
--host dbname.mysql.database.azure.com \
--user my_admin \
--queries-per-transaction 100 \
--ask-password \
--verbose 3 \
--enable-binlog \
--threads 4 \
--overwrite-tables \
--compress-protocol
MyLoader seems to start loading and produced the following output:
** Message: 08:37:56.624: Server version reported as: 5.7.32-log
** Message: 08:37:56.674: Thread 1 restoring create database on `mySchema` from mySchema-schema-create.sql.gz
** Message: 08:37:56.711: Thread 2 restoring table `mySchema`.`my_large_table` from export-20220217-073020/mySchema.my_large_table-schema.sql.gz
** Message: 08:37:56.711: Dropping table or view (if exists) `mySchema`.`my_large_table`
** Message: 08:37:56.979: Creating table `mySchema`.`my_large_table` from export-20220217-073020/mySchema.my_large_table-schema.sql.gz
** Message: 08:37:57.348: Thread 2 restoring `mySchema`.`my_large_table` part 3 of 0 from mySchema.my_large_table.00003.sql.gz. Progress 1 of 26 .
** Message: 08:37:57.349: Thread 1 restoring `mySchema`.`my_large_table` part 0 of 0 from mySchema.my_large_table.00000.sql.gz. Progress 2 of 26 .
** Message: 08:37:57.349: Thread 4 restoring `mySchema`.`my_large_table` part 1 of 0 from mySchema.my_large_table.00001.sql.gz. Progress 3 of 26 .
** Message: 08:37:57.349: Thread 3 restoring `mySchema`.`my_large_table` part 2 of 0 from mySchema.my_large_table.00002.sql.gz. Progress 4 of 26 .
When I execute a "show full processlist" command on the Azure database, I see the 4 connected threads, but they are all sleeping; it seems like nothing is happening.
If I don't kill the command, it errors out after a long time:
** (myloader:31323): CRITICAL **: 17:07:27.642: Error occours between lines: 6 and 1888321 on file mySchema.my_large_table.00002.sql.gz: Lost connection to MySQL server during query
** (myloader:31323): CRITICAL **: 17:07:27.642: Error occours between lines: 6 and 1888161 on file mySchema.my_large_table.00001.sql.gz: MySQL server has gone away
** (myloader:31323): CRITICAL **: 17:07:27.642: Error occours between lines: 6 and 1888353 on file mySchema.my_large_table.00003.sql.gz: Lost connection to MySQL server during query
** (myloader:31323): CRITICAL **: 17:07:27.642: Error occours between lines: 6 and 1888284 on file mySchema.my_large_table.00000.sql.gz: MySQL server has gone away
After these errors, the table is still empty.
I tried a few different settings when dumping/loading, but to no avail:
start only 1 thread
make smaller chunks (100MB)
remove --compress-protocol
I also tried importing a smaller table (400MB, in chunks of 100MB) with exactly the same settings, and that did actually work.
I tried to import the tables into a MySQL database on my local machine, and there I experienced exactly the same problem: the large-table (31GB) import created 4 sleeping threads and didn't do anything, while the smaller-table import (400MB in chunks of 100MB) did work.
So the problem doesn't seem to be related to the Azure database.
I now have no clue what the problem is. Any ideas?
I had a similar problem; for me, it ended up being that the instance I was restoring into was too small, so the server kept running out of memory. Try temporarily increasing the instance size to a much larger one, and once the data is imported, shrink the instance back down.
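For an Azure Database for MySQL Flexible Server, the compute tier can be scaled up and back down with the Azure CLI (a sketch; the resource group, server name and SKU names below are placeholders, not values from this question):
az mysql flexible-server update --resource-group my-rg --name dbname --sku-name Standard_D8ds_v4
# ... run the myloader import, then scale back down ...
az mysql flexible-server update --resource-group my-rg --name dbname --sku-name Standard_D2ds_v4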

Gnome Boxes on Fedora 33 fails to open

I attempt to load gnome-boxes from the terminal (I'm running Fedora 33) and get the following error:
$ gnome-boxes
(gnome-boxes:3194): Gtk-WARNING **: 12:34:57.343: GtkFlowBox with a model will ignore sort and filter functions
(gnome-boxes:3194): Gtk-WARNING **: 12:34:57.344: GtkListBox with a model will ignore sort and filter functions
(gnome-boxes:3194): Boxes-WARNING **: 12:34:57.904: libvirt-machine.vala:83: Failed to disable 3D Acceleration
(gnome-boxes:3194): Boxes-WARNING **: 12:34:57.913: libvirt-broker.vala:70: Failed to update domain 'fedora33-wor-2': Failed to set domain configuration: XML error: Invalid PCI address 0000:04:00.0. slot must be >= 1
(gnome-boxes:3194): Boxes-CRITICAL **: 12:34:57.916: boxes_vm_importer_get_source_media: assertion 'self != NULL' failed
Segmentation fault (core dumped)
My system:
$ uname -a
Linux localhost.localdomain 5.9.16-200.fc33.x86_64 #1 SMP Mon Dec 21 14:08:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
I don't know whether it's related, but I recently updated from kernel 5.9.11 directly to 5.9.16 (I hadn't used the PC in question for some weeks), and gnome-boxes was working as normal before.
Please advise how I can restore gnome-boxes - I have some virtual machines that I need to access...
I faced this issue when I force-stopped GNOME Boxes while cloning a VM.
Deleting the conflicting VM will resolve your issue (in your case, 'fedora33-wor-2').
To delete the VM in Fedora, install "libvirt-client", which provides "virsh", using the command:
dnf install libvirt-client
then double-check the available VMs using:
virsh list --all
Delete the VM using the command:
virsh undefine VM_Name
@channel-fun's answer solved the problem of starting up gnome-boxes.
But the real problem is in the cloning procedure: the XML describing the new machine is malformed.
virt-clone --original fedora33-ser --auto-clone
works properly.
I know this is an old thread, but I had the same problem recently.
I shut down GNOME Boxes whilst it was cloning a VM, and then shut down the machine.
I then couldn't open Boxes, as it would just crash.
I was able to delete the VM itself, and then delete the XML file associated with it.
To delete the VM itself, go to:
$HOME/.var/app/org.gnome.Boxes/data/gnome-boxes/images (which in my case is a symbolic link to a data drive)
and delete the VM with the name that you were cloning to (or, safer, just move it somewhere).
To delete the XML file associated with it:
$HOME/.var/app/org.gnome.Boxes/config/libvirt/qemu/
and delete (or, safer, move) the file named VM_NAME.xml.
Then Boxes should open OK; at least it worked for me.
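Putting both steps together (a sketch for the Flatpak install of Boxes; VM_NAME stands for the clone's name, and moving to /tmp is just one safe-keeping choice):
mv "$HOME/.var/app/org.gnome.Boxes/data/gnome-boxes/images/VM_NAME" /tmp/
mv "$HOME/.var/app/org.gnome.Boxes/config/libvirt/qemu/VM_NAME.xml" /tmp/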
Extending Channel Fun's answer: in the Ubuntu repos the package is libvirt-clients (note the plural s):
sudo apt install libvirt-clients
Check the available VMs using:
virsh list --all
Delete the VM using:
virsh undefine VM_Name
If you receive the error:
error: Refusing to undefine while domain managed save image exists
Then you can explicitly remove that also using the --managed-save flag:
virsh undefine VM_Name --managed-save

Bugzilla installation error with MySQL version 8.0

I'm getting an error when creating the database with the latest MySQL version 8.0 for a Bugzilla installation.
I'm setting up a new server for Bugzilla with the below configuration.
Bugzilla version : 5.0.6
Strawberry PERL version : 5.28.2.1
MySQL version : 8.0
My current setup works fine with the older MySQL 5.7.27, but I have to migrate to the newer MySQL 8.0, and with it I'm getting an error while creating the table entries in the database.
From googling, what I found is that it is something related to 'GROUPS': this keyword is reserved in MySQL 8.0, and Bugzilla is trying to use it. I don't have knowledge of MySQL, so I couldn't figure out this problem.
Below is the output of checksetup.pl
....
....
Checking for DBD-mysql (v4.001) ok: found v4.050
Checking for MySQL (v5.0.15) ok: found v8.0.17
Adding new table bz_schema...
Initializing bz_schema...
Creating tables...
Converting attach_data maximum size to 100G...
Setting up choices for standard drop-down fields:
priority op_sys resolution bug_status rep_platform bug_severity
Creating ./data directory...
Creating ./data/assets directory...
Creating ./data/attachments directory...
Creating ./data/db directory...
Creating ./data/extensions directory...
Creating ./data/mining directory...
Creating ./data/webdot directory...
Creating ./graphs directory...
Creating ./skins/custom directory...
Creating ./data/extensions/additional...
Creating ./data/mailer.testfile...
Creating ./Bugzilla/.htaccess...
Creating ./data/.htaccess...
Creating ./data/assets/.htaccess...
Creating ./data/attachments/.htaccess...
Creating ./data/webdot/.htaccess...
Creating ./graphs/.htaccess...
Creating ./lib/.htaccess...
Creating ./template/.htaccess...
Creating contrib/.htaccess...
Creating t/.htaccess...
Creating xt/.htaccess...
Precompiling templates...done.
DBD::mysql::db selectrow_array failed: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'groups where name = ''' at line 1 [for Statement "SELECT id FROM groups where name = ''"] at Bugzilla/Install/DB.pm line 2497.
Bugzilla::Install::DB::_fix_group_with_empty_name() called at Bugzilla/Install/DB.pm line 358
Bugzilla::Install::DB::update_table_definitions(HASH(0x34e8cb8)) called at checksetup.pl line 175
Starting from MySQL 8.0, GROUPS is a reserved word; consequently, groups cannot be used unquoted as a table name. You can fix it by putting backticks around the table name groups in the Bugzilla code.
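For example, the statement that fails in the log above works once the identifier is quoted (illustrative only; the same change is needed in each statement that references the table):
SELECT id FROM `groups` WHERE name = '';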
groups can still be used; however, you need to add the database name in front of every reference to the "GROUPS" table. In the case of a standard Bugzilla install, the database name is 'bugs'. Example from line 1643 in DB.pm:
$dbh->do("ALTER TABLE groups DROP PRIMARY KEY");
Changed to:
$dbh->do("ALTER TABLE bugs.groups DROP PRIMARY KEY");
There are about a dozen SQL statements that make a call to that table in DB.pm. You will have to add it to all of them.
Try this:
SELECT id FROM database_name.groups where name = ''

karaf + pax-jdbc the connection pool had reached the limit

I have a problem with the pax-jdbc connection pool in Karaf. I'm trying to inject a MySQL DataSource (DS) through blueprint.xml into my project. To test it, I have built a Karaf command that injects the DS into the command class and executes a query with that connection. That part is OK, but the problem is that when I execute the command many times, a new connection is taken for each execution, and eventually the pool cannot open new connections to MySQL because it has reached its limit.
I have uploaded my code to GitHub at this link: https://github.com/christmo/karaf-pax-jdbc . You can send a pull request if you find an error in this project.
To test this project you can:
1. Download Karaf 4.0.4 or apache-karaf-4.1.0-SNAPSHOT.
2. Copy the file karaf-pax-jdbc/etc/org.ops4j.datasource-my-ds.cfg to ${karaf}/etc; this file has the MySQL configuration, so change it to match your MySQL configuration data.
3. Start the MySQL database engine.
4. Start Karaf: cd ${karaf}/bin/; ./karaf
5. Add the repo of this project with this Karaf command: feature:repo-add mvn:pax/features/1.0-SNAPSHOT/xml/features
6. Install the feature created for this project: feature:install mysql-test
7. Execute the command that tests this problem: mysql-connection. This command only executes "Select 1" in MySQL.
If you execute this "mysql-connection" command 9 times, it will freeze the Karaf prompt, and if you interrupt the execution you can get this exception:
java.sql.SQLException: Cannot get a connection, general error
    at org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:146)
    at com.twim.OrmCommand.execute(OrmCommand.java:53)
    at org.apache.karaf.shell.impl.action.command.ActionCommand.execute(ActionCommand.java:83)
    at org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:67)
    at org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:87)
    at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:480)
    at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:406)
    at org.apache.felix.gogo.runtime.Pipe.run(Pipe.java:108)
    at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:182)
    at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:119)
    at org.apache.felix.gogo.runtime.CommandSessionImpl.execute(CommandSessionImpl.java:94)
    at org.apache.karaf.shell.impl.console.ConsoleSessionImpl.run(ConsoleSessionImpl.java:270)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
    at org.apache.commons.pool2.impl.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:583)
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:442)
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
    at org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:134)
    ... 12 more
The problem in your code is in the line System.out.println("--DS--: " + ds.getConnection());.
There you create a connection but never close it. So with every call you drain the pool.
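A minimal sketch of the fix (assuming ds is the injected javax.sql.DataSource field in OrmCommand, as the stack trace suggests): use try-with-resources so the connection is always returned to the pool.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

// Inside the command's execute() method, where 'ds' is the injected DataSource:
try (Connection con = ds.getConnection();
     Statement st = con.createStatement();
     ResultSet rs = st.executeQuery("SELECT 1")) {
    // The three resources are closed automatically in reverse order,
    // even if the query throws, returning the connection to the dbcp2 pool.
    while (rs.next()) {
        System.out.println("--DS--: " + rs.getLong(1));
    }
}
With this pattern, each command invocation borrows a connection and gives it back, so repeated executions never exhaust the pool.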

SLES crash dump

I would like to test whether my server creates a crash dump upon an OS crash. I can see that the /etc/sysconfig/kdump config file is configured.
So I issued the kernel-panic command echo c > /proc/sysrq-trigger; it crashed the server, but it never created a dump file for some reason. This is an HP BL460g7 blade with ASR disabled.
When I trigger the kernel panic, the server crashes and then sits for about 10 minutes (it looks like it's trying to save a crash dump), but it never does. I checked the message logs but cannot see a reason why it's not dumping. The main problem is finding out why it's not dumping a crash file; are there any logs I can check to see what has really gone wrong?
I'm using SUSE Linux Enterprise Server 11 (x86_64) SP 1.
Did you follow the steps explained here?
SUSE Support - Configure kernel core dump capture
The most important tasks should be:
install kdump, kexec-tools and makedumpfile
add crashkernel=... to the kernel command line (GRUB; see the sketch after this list)
chkconfig boot.kdump on
ensure that you have enough free space in /var/crash (default dir)
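On SLES 11 the boot entry lives in /boot/grub/menu.lst (legacy GRUB). A sketch of what it might look like; the paths and the 256M reservation are illustrative values, not taken from this question, and the right size depends on your RAM:
title SUSE Linux Enterprise Server 11 SP1
    root (hd0,0)
    kernel /boot/vmlinuz root=/dev/sda1 crashkernel=256M
    initrd /boot/initrd
After editing, reboot so the crash-kernel memory is actually reserved; you can verify it shows up with cat /proc/cmdline.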
Then please reboot your system and run:
sync; echo c >/proc/sysrq-trigger
After another boot please check for new files in /var/crash. If this doesn't work for you, please show us the content of /etc/sysconfig/kdump and at least the output of
cat /proc/cmdline
chkconfig boot.kdump
Do you have a display connected to the machine?