Why opening "Setting > Plugins" I see loading mark only? - phpstorm

In PhpStorm 2021.1.4, when I try to add more plugins by opening "Settings > Plugins", I only see a loading spinner:
Is this a misconfiguration of my options?
UPDATE #1:
I reinstalled my Ubuntu 20 system, installed PhpStorm, and restored all settings from a settings.zip I had made before.
In PhpStorm I open Settings => Plugins => "Marketplace", and when I enter a search value I see that it hangs.
I tried to restart network-manager
sudo service network-manager restart
and checked some network properties with commands I found on the net:
master@master-at-home:/mnt/_work_sdb8/wwwroot/lar/BiCurrencies$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 44:8a:5b:ee:2a:dd brd ff:ff:ff:ff:ff:ff
inet 213.109.234.130/23 brd 213.109.235.255 scope global dynamic noprefixroute enp4s0
valid_lft 1759sec preferred_lft 1759sec
inet6 fe80::bf86:555c:7dfa:146f/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: wlp5s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 40:e2:30:0d:7b:9f brd ff:ff:ff:ff:ff:ff
master@master-at-home:/mnt/_work_sdb8/wwwroot/lar/BiCurrencies$ ip route show
default via 213.109.234.1 dev enp4s0 proto dhcp metric 100
169.254.0.0/16 dev enp4s0 scope link metric 1000
213.109.234.0/23 dev enp4s0 proto kernel scope link src 213.109.234.130 metric 100
master@master-at-home:/mnt/_work_sdb8/wwwroot/lar/BiCurrencies$ ip l show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 44:8a:5b:ee:2a:dd brd ff:ff:ff:ff:ff:ff
3: wlp5s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DORMANT group default qlen 1000
link/ether 40:e2:30:0d:7b:9f brd ff:ff:ff:ff:ff:ff
master@master-at-home:/mnt/_work_sdb8/wwwroot/lar/BiCurrencies$
master@master-at-home:/mnt/_work_sdb8/wwwroot/lar/BiCurrencies$ ip r | grep default
default via 213.109.234.1 dev enp4s0 proto dhcp metric 100
master@master-at-home:/mnt/_work_sdb8/wwwroot/lar/BiCurrencies$ ip l show | grep -A1 eth0
master@master-at-home:/mnt/_work_sdb8/wwwroot/lar/BiCurrencies$ ip l show | egrep -A1 'eth0|lo'
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
master@master-at-home:/mnt/_work_sdb8/wwwroot/lar/BiCurrencies$
$ ifconfig
enp4s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 213.109.234.130 netmask 255.255.254.0 broadcast 213.109.235.255
inet6 fe80::bf86:555c:7dfa:146f prefixlen 64 scopeid 0x20<link>
ether 44:8a:5b:ee:2a:dd txqueuelen 1000 (Ethernet)
RX packets 736979 bytes 1004566933 (1.0 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 424950 bytes 34798999 (34.7 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 19
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 68775 bytes 24816821 (24.8 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 68775 bytes 24816821 (24.8 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlp5s0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 40:e2:30:0d:7b:9f txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Not sure if any of this is useful. I suppose there may be some misconfiguration in settings.zip?
It seems to me that the last thing I changed in PhpStorm was my attempt to set up Xdebug,
with debug ports 9000 and 9003...
Any ideas what could cause this problem?
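A quick way to separate IDE misconfiguration from a network problem is to test reachability of the JetBrains plugin repository from the same machine. A minimal sketch, assuming plugins.jetbrains.com is the Marketplace endpoint this IDE version uses:
# Test DNS resolution and HTTPS reachability of the plugin repository
curl -v -o /dev/null https://plugins.jetbrains.com/
# If PhpStorm is configured with an HTTP proxy (Settings > Appearance & Behavior >
# System Settings > HTTP Proxy), test through the same proxy too, for example:
# curl -v -o /dev/null --proxy http://your.proxy.host:8080 https://plugins.jetbrains.com/
If curl succeeds while the Marketplace tab still hangs, the problem is more likely in the restored settings (for example, a stale proxy configuration) than in the network.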

Related

2 interfaces inside a pod, midhaul_edk@midhaul_ker and midhaul_ker@midhaul_edk, is this a kind of veth pair loop?

Below is part of the output of "ip a" in a pod with SR-IOV and DPDK. I have trouble understanding the midhaul_ker and midhaul_edk connection. Are they a veth pair connected to each other? If yes, what would the traffic flow be?
4: midhaul_edk@midhaul_ker: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether f2:2b:96:c5:20:83 brd ff:ff:ff:ff:ff:ff
inet6 fe80::f02b:96ff:fec5:2083/64 scope link
valid_lft forever preferred_lft forever
5: midhaul_ker@midhaul_edk: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ae:70:3b:ac:7d:66 brd ff:ff:ff:ff:ff:ff
inet 10.97.62.18/27 brd 10.97.62.31 scope global midhaul_ker
valid_lft forever preferred_lft forever
inet6 fe80::40c8:23ff:fe1e:13c2/64 scope link
valid_lft forever preferred_lft forever
6: backhaul_edk@backhaul_ker: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 92:50:df:52:1e:b2 brd ff:ff:ff:ff:ff:ff
inet6 fe80::9050:dfff:fe52:1eb2/64 scope link
valid_lft forever preferred_lft forever
7: backhaul_ker@backhaul_edk: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ba:1c:4c:1b:18:88 brd ff:ff:ff:ff:ff:ff
inet 10.97.62.113/27 brd 10.97.62.127 scope global backhaul_ker
valid_lft forever preferred_lft forever
inet6 fe80::ecce:33ff:fec7:fa05/64 scope link
valid_lft forever preferred_lft forever
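One way to check whether two interfaces really form a veth pair is to compare their peer ifindex values. A sketch, assuming the interface names from the output above and that iproute2 and ethtool are available inside the pod:
# Show detailed link info; for a veth interface the type line reads "veth"
ip -d link show midhaul_ker
# The veth driver exposes the peer's ifindex as a statistic; if this number
# matches the index of midhaul_edk (here 4), the two ends are a connected pair
ethtool -S midhaul_ker | grep peer_ifindex
If they are a pair, whatever is written to one end comes out of the other, so the kernel-side interface (midhaul_ker, which holds the IP address) would exchange packets with the DPDK-side end (midhaul_edk).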

MySQL connect host changes every time in the Docker container, why?

In the Docker container I try to log in to the host MySQL server. On the first attempt the host IP appears changed, which confused me, but the second login succeeds. Can anyone explain this strange behavior?
The IP I log in with is 192.168.100.164, but the error info shows IP 172.18.0.4, which is the container's own address.
More info:
root@b67c39311dbb:~/project# netstat -nr
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 172.18.0.1 0.0.0.0 UG 0 0 0 eth0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
root@b67c39311dbb:~/project# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.4 netmask 255.255.0.0 broadcast 172.18.255.255
ether 02:42:ac:12:00:04 txqueuelen 0 (Ethernet)
RX packets 2099 bytes 2414555 (2.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1752 bytes 132863 (132.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 35 bytes 3216 (3.2 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 35 bytes 3216 (3.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Try adding --add-host="localhost:192.168.100.164" when launching docker run. But this is not good practice, to my mind. You should move your MySQL database to another container and create a network between them.
That is true: when we start a Docker container, it gets its own IP. You need to map a host port to the Docker container; then, when you try to connect to the host port, it redirects to the MySQL Docker container. Please look at https://docs.docker.com/config/containers/container-networking/
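As an illustration of that port mapping, a minimal sketch with assumed names and credentials (mysql-server, secret):
# Publish the container's MySQL port 3306 on the host
docker run -d --name mysql-server -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret mysql:8
# Clients (including other containers) now connect via the host's IP and the published port
mysql -h 192.168.100.164 -P 3306 -u root -p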
I'd suggest you create a Docker bridge network and then create your container using --add-host as suggested by Alexey.
In a simple script:
DOCKER_NET_NAME='the_docker_network_name'
DOCKER_GATEWAY='172.100.0.1'
docker network create \
--driver bridge \
--subnet=172.100.0.0/16 \
--gateway=$DOCKER_GATEWAY \
$DOCKER_NET_NAME
docker run -d \
--add-host db.onthehost:$DOCKER_GATEWAY \
--restart=always \
--network=$DOCKER_NET_NAME \
--name=magicContainerName \
yourImage:latest
EDIT: creating a network will also simplify communication among containers (if you plan to have more in the future) since you'll be able to use the container names instead of their IPs.
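For example, on a user-defined network Docker provides DNS by container name, so with assumed container names db and app the connection no longer needs an IP at all:
docker run -d --network=$DOCKER_NET_NAME --name=db mysql:8
docker run -d --network=$DOCKER_NET_NAME --name=app yourImage:latest
# inside "app", the hostname "db" resolves to the db container's address
mysql -h db -u youruser -p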

How to assign multiple outgoing IPs addresses to a single instance on GCE?

How does one assign multiple ephemeral external IP addresses to the same machine on Google Compute Engine? The web interface only discusses the primary IP addresses, but I see no mention of adding more addresses.
I found a related question over at https://stackoverflow.com/a/39963576/14731 but it focuses on routing multiple incoming IPs to the same instance.
My application is a web client that needs to make multiple outgoing connections from multiple source IPs.
Yes, it's possible, with some steps:
Create as many VPCs (networks) as you need interfaces
Create a subnet inside each VPC and make sure they do not overlap
Add a firewall rule in the first VPC to allow SSH from your location
Create an instance with multiple interfaces (one in each VPC) and assign an external address to each one
SSH to your instance via the address located on the first VPC
Configure a separate routing table for each network interface (see the sketch after the notes below)
Things you have to know:
You can add interfaces only at instance creation
I got an error while configuring the routing table, but it worked anyway (RTNETLINK answers: File exists)
Routing tables of secondary interfaces are not persisted; you have to manage that yourself
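A minimal sketch of that per-interface routing setup for eth1, assuming the 192.168.1.0/24 subnet shown in the results below with the usual .1 gateway, and a made-up table name eth1rt; adapt the addresses to your own subnets:
# Register an extra routing table (once)
echo "100 eth1rt" | sudo tee -a /etc/iproute2/rt_tables
# Default route for that table via eth1's subnet gateway
sudo ip route add default via 192.168.1.1 dev eth1 table eth1rt
# Policy rule: packets sourced from eth1's address use table eth1rt
sudo ip rule add from 192.168.1.2/32 table eth1rt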
Results
yann@test-multiple-ip:~$ ip a
[...]
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
link/ether 42:01:c0:a8:00:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.2/32 brd 192.168.0.2 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::4001:c0ff:fea8:2/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
link/ether 42:01:c0:a8:01:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.2/32 brd 192.168.1.2 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::4001:c0ff:fea8:102/64 scope link
valid_lft forever preferred_lft forever
yann@test-multiple-ip:~$ curl --interface eth0 ifconfig.co
35.241.195.172
yann@test-multiple-ip:~$ curl --interface eth1 ifconfig.co
35.241.253.41

Cannot import migration data into a MariaDB database

I'm trying to import some exported migration data into a MariaDB database.
I could successfully do the import into an H2 database.
But when trying to import into the MariaDB one, it creates 87 tables in the database instead of 91, and also ends in an error:
2018-04-22 14:13:33,275 INFO [org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider] (ServerService Thread Pool -- 58) Initializing database schema. Using changelog META-INF/jpa-changelog-master.xml
2018-04-22 14:18:22,393 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0348: Timeout after [300] seconds waiting for service container stability. Operation will roll back. Step that first updated the service container was 'add' at address '[
("core-service" => "management"),
("management-interface" => "http-interface")
]'
This log chunk shows the step takes almost 5 minutes, which is way too long.
More from the stacktrace:
16:16:55,690 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 58) MSC000001: Failed to start service jboss.undertow.deployment.default-server.default-host./auth: org.jboss.msc.service.StartException in service jboss.undertow.deployment.default-server.default-host./auth: java.lang.RuntimeException: RESTEASY003325: Failed to construct public org.keycloak.services.resources.KeycloakApplication(javax.servlet.ServletContext,org.jboss.resteasy.core.Dispatcher)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:84)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at org.jboss.threads.JBossThread.run(JBossThread.java:320)
Caused by: java.lang.RuntimeException: RESTEASY003325: Failed to construct public org.keycloak.services.resources.KeycloakApplication(javax.servlet.ServletContext,org.jboss.resteasy.core.Dispatcher)
at org.jboss.resteasy.core.ConstructorInjectorImpl.construct(ConstructorInjectorImpl.java:162)
at org.jboss.resteasy.spi.ResteasyProviderFactory.createProviderInstance(ResteasyProviderFactory.java:2298)
at org.jboss.resteasy.spi.ResteasyDeployment.createApplication(ResteasyDeployment.java:340)
at org.jboss.resteasy.spi.ResteasyDeployment.start(ResteasyDeployment.java:253)
at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.init(ServletContainerDispatcher.java:120)
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.init(HttpServletDispatcher.java:36)
at io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:117)
at org.wildfly.extension.undertow.security.RunAsLifecycleInterceptor.init(RunAsLifecycleInterceptor.java:78)
at io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:103)
at io.undertow.servlet.core.ManagedServlet$DefaultInstanceStrategy.start(ManagedServlet.java:250)
at io.undertow.servlet.core.ManagedServlet.createServlet(ManagedServlet.java:133)
at io.undertow.servlet.core.DeploymentManagerImpl$2.call(DeploymentManagerImpl.java:565)
at io.undertow.servlet.core.DeploymentManagerImpl$2.call(DeploymentManagerImpl.java:536)
at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:42)
at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
at org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at io.undertow.servlet.core.DeploymentManagerImpl.start(DeploymentManagerImpl.java:578)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentService.startContext(UndertowDeploymentService.java:100)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:81)
... 6 more
Caused by: java.lang.RuntimeException: Failed to update database
at org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider.update(LiquibaseJpaUpdaterProvider.java:102)
at org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider.update(LiquibaseJpaUpdaterProvider.java:67)
at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.update(DefaultJpaConnectionProviderFactory.java:322)
at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.migration(DefaultJpaConnectionProviderFactory.java:292)
at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.lambda$lazyInit$0(DefaultJpaConnectionProviderFactory.java:179)
at org.keycloak.models.utils.KeycloakModelUtils.suspendJtaTransaction(KeycloakModelUtils.java:544)
at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.lazyInit(DefaultJpaConnectionProviderFactory.java:130)
at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.create(DefaultJpaConnectionProviderFactory.java:78)
at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.create(DefaultJpaConnectionProviderFactory.java:56)
at org.keycloak.services.DefaultKeycloakSession.getProvider(DefaultKeycloakSession.java:163)
at org.keycloak.models.jpa.JpaRealmProviderFactory.create(JpaRealmProviderFactory.java:51)
at org.keycloak.models.jpa.JpaRealmProviderFactory.create(JpaRealmProviderFactory.java:33)
at org.keycloak.services.DefaultKeycloakSession.getProvider(DefaultKeycloakSession.java:163)
at org.keycloak.models.cache.infinispan.RealmCacheSession.getDelegate(RealmCacheSession.java:144)
at org.keycloak.models.cache.infinispan.RealmCacheSession.getMigrationModel(RealmCacheSession.java:137)
at org.keycloak.migration.MigrationModelManager.migrate(MigrationModelManager.java:76)
at org.keycloak.services.resources.KeycloakApplication.migrateModel(KeycloakApplication.java:246)
at org.keycloak.services.resources.KeycloakApplication.migrateAndBootstrap(KeycloakApplication.java:187)
at org.keycloak.services.resources.KeycloakApplication$1.run(KeycloakApplication.java:146)
at org.keycloak.models.utils.KeycloakModelUtils.runJobInTransaction(KeycloakModelUtils.java:227)
at org.keycloak.services.resources.KeycloakApplication.<init>(KeycloakApplication.java:137)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.jboss.resteasy.core.ConstructorInjectorImpl.construct(ConstructorInjectorImpl.java:150)
... 28 more
Caused by: liquibase.exception.MigrationFailedException: Migration failed for change set META-INF/jpa-changelog-2.1.0.xml::2.1.0::bburke@redhat.com:
Reason: liquibase.exception.UnexpectedLiquibaseException: java.sql.SQLException: IJ031040: Connection is not associated with a managed connection: org.jboss.jca.adapters.jdbc.jdk8.WrappedConnectionJDK8@55194ba1
at liquibase.changelog.ChangeSet.execute(ChangeSet.java:573)
at liquibase.changelog.visitor.UpdateVisitor.visit(UpdateVisitor.java:51)
at liquibase.changelog.ChangeLogIterator.run(ChangeLogIterator.java:73)
at liquibase.Liquibase.update(Liquibase.java:210)
at liquibase.Liquibase.update(Liquibase.java:190)
at liquibase.Liquibase.update(Liquibase.java:186)
at org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider.updateChangeSet(LiquibaseJpaUpdaterProvider.java:135)
at org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider.update(LiquibaseJpaUpdaterProvider.java:88)
... 53 more
Caused by: liquibase.exception.UnexpectedLiquibaseException: java.sql.SQLException: IJ031040: Connection is not associated with a managed connection: org.jboss.jca.adapters.jdbc.jdk8.WrappedConnectionJDK8@55194ba1
at liquibase.database.jvm.JdbcConnection.getURL(JdbcConnection.java:79)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:62)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:122)
at liquibase.database.AbstractJdbcDatabase.execute(AbstractJdbcDatabase.java:1247)
at liquibase.database.AbstractJdbcDatabase.executeStatements(AbstractJdbcDatabase.java:1230)
at liquibase.changelog.ChangeSet.execute(ChangeSet.java:548)
... 60 more
Caused by: java.sql.SQLException: IJ031040: Connection is not associated with a managed connection: org.jboss.jca.adapters.jdbc.jdk8.WrappedConnectionJDK8@55194ba1
at org.jboss.jca.adapters.jdbc.WrappedConnection.lock(WrappedConnection.java:164)
at org.jboss.jca.adapters.jdbc.WrappedConnection.getMetaData(WrappedConnection.java:913)
at liquibase.database.jvm.JdbcConnection.getURL(JdbcConnection.java:77)
... 65 more
The export command was:
$KEYCLOAK_HOME/bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=exported_realms -Dkeycloak.migration.strategy=OVERWRITE_EXISTING
The import command that fails is:
$KEYCLOAK_HOME/bin/standalone.sh -Dkeycloak.migration.action=import -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=exported_realms -Dkeycloak.migration.strategy=OVERWRITE_EXISTING
Here is the data source used in the standalone/configuration/standalone.xml file:
<datasource jndi-name="java:/jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true">
<connection-url>jdbc:mysql://localhost:3306/keycloak?useSSL=false&amp;characterEncoding=UTF-8</connection-url>
<driver>mysql</driver>
<pool>
<min-pool-size>5</min-pool-size>
<max-pool-size>15</max-pool-size>
</pool>
<security>
<user-name>keycloak</user-name>
<password>xxxxxx</password>
</security>
<validation>
<valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker"/>
<validate-on-match>true</validate-on-match>
<exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter"/>
</validation>
</datasource>
I'm using keycloak-3.4.1.Final and mariadb-10.1.24 on Java 1.8.0_60.
Running the ./mysqltuner.pl utility shows:
-------- InnoDB Metrics ----------------------------------------------------------------------------
[--] InnoDB is enabled.
[--] InnoDB Thread Concurrency: 0
[OK] InnoDB File per table is activated
[OK] InnoDB buffer pool / data size: 2.0G/222.6M
[OK] Ratio InnoDB log file size / InnoDB Buffer pool size: 256.0M * 2/2.0G should be equal 25%
[OK] InnoDB buffer pool instances: 2
[--] InnoDB Buffer Pool Chunk Size not used or defined in your version
[!!] InnoDB Read buffer efficiency: 63.85% (802 hits/ 1256 total)
[!!] InnoDB Write Log efficiency: 0% (1 hits/ 0 total)
[OK] InnoDB log waits: 0.00% (0 waits / 1 writes)
General recommendations:
Control warning line(s) into /home/stephane/programs/install/mariadb/mariadb.error.log file
1 CVE(s) found for your MySQL release. Consider upgrading your version !
MySQL started within last 24 hours - recommendations may be inaccurate
Dedicate this server to your database for highest performance.
Reduce or eliminate unclosed connections and network issues
Consider installing Sys schema from https://github.com/mysql/mysql-sys
Variables to adjust:
query_cache_size (=0)
query_cache_type (=0)
query_cache_limit (> 1M, or use smaller result sets)
After installing the MySQLTuner utility:
wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl
wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/basic_passwords.txt -O basic_passwords.txt
wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/vulnerabilities.csv -O vulnerabilities.csv
chmod +x mysqltuner.pl
./mysqltuner.pl
I understood my database server was poorly configured and writes were too slow, resulting in a timeout of the import operation.
I then configured the my.cnf file with the directives:
skip-name-resolve = 1
performance_schema = 1
innodb_log_file_size = 256M
innodb_buffer_pool_size = 2G
innodb_buffer_pool_instances = 2
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
thread_cache_size = 4
The one directive that allowed the import to complete successfully was:
innodb_flush_log_at_trx_commit = 2
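For what it's worth, this variable is dynamic in MariaDB/MySQL, so it can be toggled at runtime for the duration of an import without editing my.cnf (a sketch using the mysql client; credentials assumed):
mysql -u root -p -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"
# verify the current value
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';"
The trade-off of value 2 is weaker durability: the log is flushed to disk only about once per second, so up to a second of transactions can be lost on an OS crash.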
UPDATE: I commented out the innodb_flush_log_at_trx_commit = 2 directive so as to trigger the error again. I could then collect the additional information requested by the comment below.
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 14761
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 14761
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
$ free -m
total used free shared buff/cache available
Mem: 3743 2751 116 115 875 700
Swap: 4450 74 4376
The complete my.cnf file:
[mysqld]
sql_mode = NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION # This is strict mode: NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
socket = /home/stephane/programs/install/mariadb/tmp/mariadb.sock
user = stephane
basedir = /home/stephane/programs/install/mariadb
datadir = /home/stephane/programs/install/mariadb/data
log-bin = /home/stephane/programs/install/mariadb/mariadb.bin.log
log-error = /home/stephane/programs/install/mariadb/mariadb.error.log
general-log-file = /home/stephane/programs/install/mariadb/mariadb.log
slow-query-log-file = /home/stephane/programs/install/mariadb/mariadb.slow.queries.log
long_query_time = 1
log-queries-not-using-indexes = 1
innodb_file_per_table = 1
sync_binlog = 1
character-set-client-handshake = FALSE
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
wait_timeout = 28800 # amount of seconds during inactivity that MySQL will wait before it will close a connection on a non-interactive connection
interactive_timeout = 28800 # same, but for interactive sessions
max_allowed_packet = 128M
net_write_timeout = 180
skip-name-resolve = 1
thread_cache_size = 4
#skip-networking
#skip-host-cache
#bulk_insert_buffer_size = 1G
performance_schema = 1
innodb_log_file_size = 128M
innodb_buffer_pool_size = 1G
innodb_buffer_pool_instances = 2
#innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
[client]
socket = /home/stephane/programs/install/mariadb/tmp/mariadb.sock
default-character-set = utf8mb4
[mysql]
default-character-set = utf8mb4
The environment status and variables
The other commands output:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 1,9G 0 1,9G 0% /dev
tmpfs 375M 6,1M 369M 2% /run
/dev/sda1 17G 7,6G 8,4G 48% /
tmpfs 1,9G 21M 1,9G 2% /dev/shm
tmpfs 5,0M 4,0K 5,0M 1% /run/lock
tmpfs 1,9G 0 1,9G 0% /sys/fs/cgroup
/dev/sda5 438G 51G 365G 13% /home
tmpfs 375M 16K 375M 1% /run/user/1000
$ top - 19:22:22 up 1:13, 1 user, load average: 2,04, 1,27, 1,17
Tasks: 223 total, 1 running, 222 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2,9 us, 0,6 sy, 0,0 ni, 73,3 id, 23,1 wa, 0,0 hi, 0,1 si, 0,0 st
KiB Mem : 3833232 total, 196116 free, 2611360 used, 1025756 buff/cache
KiB Swap: 4557820 total, 4488188 free, 69632 used. 985000 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
10399 stephane 20 0 4171240 448816 25724 S 3,6 11,7 0:39.78 java
8110 stephane 20 0 1217152 111392 40800 S 2,3 2,9 0:54.32 chrome
8290 stephane 20 0 1276140 148024 41360 S 2,0 3,9 0:43.52 chrome
1272 root 20 0 373844 45632 28108 S 1,0 1,2 1:37.31 Xorg
3172 stephane 20 0 729100 37060 22116 S 1,0 1,0 0:14.50 gnome-terminal-
11433 stephane 20 0 3163040 324680 9288 S 1,0 8,5 0:05.09 mysqld
8260 stephane 20 0 1242104 142028 42292 S 0,7 3,7 0:31.93 chrome
8358 stephane 20 0 1252060 99884 40876 S 0,7 2,6 0:34.06 chrome
12580 root 20 0 1095296 78872 36456 S 0,7 2,1 0:10.29 dockerd
14 root rt 0 0 0 0 S 0,3 0,0 0:00.01 watchdog/1
2461 stephane 20 0 1232332 203156 74752 S 0,3 5,3 4:29.75 chrome
7437 stephane 20 0 3509576 199780 46004 S 0,3 5,2 0:20.66 skypeforlinux
8079 stephane 20 0 1243784 130948 38848 S 0,3 3,4 0:23.82 chrome
8191 stephane 20 0 1146672 72848 37536 S 0,3 1,9 0:12.41 chrome
8501 root 20 0 0 0 0 S 0,3 0,0 0:00.80 kworker/0:1
9331 stephane 20 0 46468 4164 3380 R 0,3 0,1 0:01.38 top
1 root 20 0 220368 8492 6404 S 0,0 0,2 0:02.26 systemd
2 root 20 0 0 0 0 S 0,0 0,0 0:00.00 kthreadd
4 root 0 -20 0 0 0 S 0,0 0,0 0:00.00 kworker/0:0H
6 root 0 -20 0 0 0 S 0,0 0,0 0:00.00 mm_percpu_wq
$ iostat -x
Linux 4.13.0-39-generic (stephane-ThinkPad-X201) 22/05/2018 _x86_64_ (4 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
8,65 0,92 2,17 8,73 0,00 79,53
Device r/s w/s rkB/s wkB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
sda 42,68 23,49 816,72 905,13 8,40 36,83 16,45 61,06 17,18 52,96 1,98 19,14 38,53 4,31 28,53
$ free -m
total used free shared buff/cache available
Mem: 3743 2571 137 107 1034 924
Swap: 4450 68 4382
The top, iostat and free commands were executed during the import migration script execution.
The full MySQL Tuner output
The root cause of this error is that the Linux server needs to increase its number of open files.
Really, you first need to tune your database, because when it is very slow, that causes a timeout.
Check the "open files" limit with the command:
ulimit -n
In my case I use 200000.
Please use this example:
# maximum capability of system
user@ubuntu:~$ cat /proc/sys/fs/file-max
708444
# available limit
user@ubuntu:~$ ulimit -n
1024
# To increase the available limit to say 200000
user@ubuntu:~$ sudo vim /etc/sysctl.conf
# add the following line to it
fs.file-max = 200000
# run this to refresh with new config
user@ubuntu:~$ sudo sysctl -p
# edit the following file
user@ubuntu:~$ sudo vim /etc/security/limits.conf
# add following lines to it
* soft nofile 200000
* hard nofile 200000
www-data soft nofile 200000
www-data hard nofile 200000
root soft nofile 200000
root hard nofile 200000
# edit the following file
user@ubuntu:~$ sudo vim /etc/pam.d/common-session
# add this line to it
session required pam_limits.so
# logout and login and try the following command
user@ubuntu:~$ ulimit -n
200000
# now you can increase no.of.connections per Nginx worker
# in Nginx main config /etc/nginx/nginx.conf
worker_connections 200000;
worker_rlimit_nofile 200000;
Your ulimit -a report indicates 'open files' are at 1024
ulimit -n 10000 would expand your capacity to better accommodate MySQL.
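For a quick test before making the change persistent, the limit can be raised for the current shell session only (it resets at logout; a sketch, run in the shell that starts mysqld):
ulimit -n 10000   # raise the per-process open-files limit for this session
ulimit -n         # verify; should now print 10000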
In the 1,318 seconds reported in SHOW GLOBAL STATUS, there were
33 com_rollback items counted and
1 handler_rollback, possibly all the result of the Java failure documented above.
Suggestions to consider for your my.cnf-ini [mysqld] section to possibly speed up the processing.
Suggestions by Back To Basics, Inc. 05/24/2018 for this IMPORT processing
max_connect_errors=10 # why tolerate 100 hacker/cracker attempts?
thread_cache_size=30 # from 4 to ensure threads ready to go
innodb_io_capacity_max=10000 # from 2000 default, for SSD vs HDD
innodb_io_capacity=5000 # from 200 default, for SSD vs HDD
have_symlink=NO # to protect server from RANSOMWARE crowd
innodb_flush_neighbors=0 # from 1, no need when SSD - no rotational delay
innodb_lru_scan_depth=512 # from 1024 to conserve CPU see v8 refman
innodb_print_all_deadlocks=ON # from OFF in error log for proactive correction
innodb_purge_threads=4 # from 1 to speed purge processing
log_bin=OFF # from ON unless you need to invest the resources during import
log_warnings=2 # from 1 for addl info on aborted_connection in error log
max_join_size=1000000000 # from upper limit of 4 Billion rows
max_seeks_for_key=32 # rather than allowing optimizer to search 4 Billion ndx's.
max_write_lock_count=16 # to allow RD after nn lcks rather than 4 Billion
performance_schema=OFF # from ON for this IMPORT processing speed
log_queries_not_using_indexes=0 # not likely to look at these, for import
Copy and paste at the END of [mysqld] for a quick test; weed out duplicate
variable names from the top of [mysqld] when time permits.
These will not resolve the error documented but should speed up the processing.
Please provide feedback when time permits.

zabbix_sender: send value error: ZBX_TCP_READ() timed out

I have some problems sending my test data to the Zabbix server.
My configuration is set up; I even have the Zabbix agent installed and working correctly (it sends monitoring data for CPU, memory, ...).
This is the situation: I have Zabbix installed on a Debian VM, and configured a host with the correct IP, port, and item (Zabbix trapper).
I want to send a value just for testing from my Windows 10 PC using "zabbix_sender"; later I want to find a way to get data from a .txt file for monitoring.
Used command from my cmd:
zabbix_sender -vv -z XXX.XXX.X.X -p XXXX -s "IT-CONS-PC4" -k trap -o "test"
Error:
zabbix_sender [8688]: DEBUG: send value error: ZBX_TCP_READ() timed out
Has someone else had this issue?
This fails at the network level:
check that the local firewall on the Zabbix server allows incoming connections on the server port (10051 by default)
check that the VM network connectivity is correct
As a simple test, you can telnet from the box with zabbix_sender to the Zabbix server on port 10051. If that fails, you have a basic network issue.
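A minimal sketch of that test, run from the machine with zabbix_sender (keep the placeholder address from the question; 10051 is the default server/trapper port):
telnet XXX.XXX.X.X 10051
# "Connected to ..." means the port is reachable; a timeout points to a firewall or routing issue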
after many an hour, this is what fixed it.
Active Agent Checks were failing with following error or similar:
active check configuration update from [zabbix.verticalcomputers.com:10051] started to fail (ZBX_TCP_READ() timed out)
For whatever reason, active agents won't be able to connect properly (active checks won't work, only passive) if the server is behind a firewall and NAT'd, and you don't have the following in your ifcfg-eth0 (or whatever NIC) file. It will work if you bypass the firewall and put a public IP right on the Zabbix server.
NM_CONTROLLED=no
BOOTPROTO=static
If you use the CentOS 7 wizard, or nmtui to config your NIC, instead of manually, those lines don't get added.
I noticed this because when running "ip add", I'd get the following:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:15:5d:07:82:07 brd ff:ff:ff:ff:ff:ff
inet 10.32.2.25/24 brd 10.32.2.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
Notice the "noprefixroute". That was unsettling, so I dug for a long time online, with no leads. After adding the two lines to the NIC config mentioned above, and restarting the network, now looks like this:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:15:5d:07:82:07 brd ff:ff:ff:ff:ff:ff
inet 10.32.2.25/24 brd 10.32.2.255 scope global eth0
valid_lft forever preferred_lft forever
Late, but I hope this helps you.
Check the last sections of the scripts docs
Be sure that your Zabbix server has a connection with your Zabbix agent (problems are normally caused by firewalls)
1.1 via port 10050 for passive checks
1.2 via port 10051 for active checks
# you can do it with telnet from your zabbix server
> telnet <agent ip> <10050 or 10051>
Trying <agent ip>...
Connected to <agent ip>.
Escape character is '^]'.
You can modify your server/agent config file to increase the Timeout directive. By default it is 3, and you can set it up to 30 seconds. If you do this, be sure to modify it on both server and agent.
2.1 Don't forget to restart the services: service zabbix-agent restart and service zabbix-server restart
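A sketch of that change, assuming a Debian-style install with configs in /etc/zabbix (the directive has the same name in both files):
# in /etc/zabbix/zabbix_server.conf and /etc/zabbix/zabbix_agentd.conf
Timeout=30
# then restart both services
service zabbix-server restart
service zabbix-agent restart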