I have Puppet code that is supposed to create one Galera cluster containing two nodes, but instead it is creating two clusters with one node each.
The names of the two nodes are testbox1 and testbox2.
The following is my ./hiera/role/testbox.yaml:
---
classes:
  - '::galera'
selinux::mode: 'permissive'
yum::repos::enabled:
  - percona-x86_64
yum::repos:
  contrail-3.2.1-mitaka:
    enabled: 0
packages:
  - 'Percona-XtraDB-Cluster-shared-compat-57'
  - 'Percona-Server-selinux-56'
galera::configure_repo: false
galera::package_ensure: 'present'
galera::galera_package_ensure: 'absent'
galera::galera_package_name: 'Percona-XtraDB-Cluster-galera-3'
galera::client_package_name: 'Percona-XtraDB-Cluster-client-57'
galera::mysql_package_name: 'Percona-XtraDB-Cluster-server-57'
galera::bootstrap_command: 'systemctl start mysql@bootstrap.service'
galera::mysql_service_name: 'mysql'
mysql::server_service_name: 'mysql'
galera::service_enabled: true
galera::mysql_restart: true
galera::configure_firewall: false
mysql::server::purge_conf_dir: true
galera::purge_conf_dir: true
galera::grep_binary: '/bin/grep'
galera::mysql_binary: '/usr/bin/mysql'
galera::rundir: '/var/run/mysqld'
galera::socket: '/var/lib/mysql/mysql.sock'
galera::create_root_user: true
galera::create_root_my_cnf: true
galera::create_status_user: true
galera::status_check: true
galera::galera_servers: ['testbox-1', 'testbox-2']
galera::galera_master: 'testbox-1'
galera::status_password: 'bla'
galera::bind_address: '0.0.0.0'
galera::override_options:
  mysqld:
    pxc_strict_mode: 'ENFORCING'
    wsrep_provider: '/usr/lib64/galera3/libgalera_smm.so'
    wsrep_slave_threads: 8
    wsrep_sst_method: 'rsync'
    wsrep_cluster_name: 'grafana-galera-cluster'
    wsrep_node_address: "%{ipaddress}"
    wsrep_node_name: "%{hostname}"
    wsrep_sst_auth: "sstuser:%{hiera('galera::sstuser_password')}"
    binlog_format: 'ROW'
    default_storage_engine: 'InnoDB'
    innodb_locks_unsafe_for_binlog: 1
    innodb_autoinc_lock_mode: 2
    innodb_buffer_pool_size: '40000M'
    innodb_log_file_size: '100M'
    query_cache_size: 0
    query_cache_type: 0
    datadir: '/var/lib/mysql'
    socket: '/var/lib/mysql/mysql.sock'
    log-error: '/var/log/mysqld.log'
    pid-file: '/var/run/mysql/mysql.pid'
    max_connections: '10000'
    max_connect_errors: '10000000'
  mysqld_safe:
    log-error: '/var/log/mysqld.log'
galera::status_user: 'clustercheck'
galera::status_allow: '%'
galera::status_available_when_donor: 0
galera::status_available_when_readonly: -1
galera::status_host: 'localhost'
galera::status_log_on_success: ''
galera::status_log_on_success_operator: '='
galera::status_port: 9200
galera::validate::action: 'select count(1);'
galera::validate::catch: 'ERROR'
galera::validate::delay: 3
galera::validate::inv_catch: undef
galera::validate::retries: 20
I am using the fraenki/galera module.
The problem with this code is that I end up with testbox1 in one cluster and testbox2 in another cluster, instead of having both of them in the same cluster. After troubleshooting, I found that my issue is related to jira.percona.com/browse/PXC-2258: the Puppet code creates a wsrep.cnf which has no value for wsrep_cluster_address, and this overrides /etc/my.cnf.d/server.cnf, which has the right value. I know how to fix this manually by deleting wsrep.cnf, but I would like Puppet to do this for me rather than fixing things by hand, and I do not know how.
Puppet version: 3.8.7 (open source; I cannot upgrade it).
mysql@bootstrap needs to be executed on only one node. The other node does a normal start, and then it will SST off the first node.
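For illustration, using the Percona systemd units referenced by bootstrap_command in the question:

# on the chosen first node only: bootstrap the new cluster
systemctl start mysql@bootstrap.service

# on the other node: a normal start; it will then SST off the first node
systemctl start mysql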
With two nodes you will have trouble getting a quorum, and it's unworkable as an HA system.
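For the wsrep.cnf problem specifically, one option is to let Puppet remove the stray file with an explicit file resource. A minimal sketch, assuming the file ends up at /etc/my.cnf.d/wsrep.cnf (adjust the path to wherever it appears on your nodes):

# Hypothetical cleanup: remove the wsrep.cnf that shadows server.cnf.
# Add require/notify metaparameters to order this against your MySQL
# package and service resources if needed.
file { '/etc/my.cnf.d/wsrep.cnf':
  ensure => absent,
}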
Any help is appreciated. Please let me know where I am going wrong!
I am getting the errors shown in the following image. I am running Loki and Grafana as two different AWS ECS Fargate tasks, but my Loki container keeps failing and restarting itself:
My loki-config.yaml:
auth_enabled: true
server:
  http_listen_port: 3100
ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 1h       # Any chunk not receiving new logs in this time will be flushed
  max_chunk_age: 1h           # All chunks will be flushed when they hit this age, default is 1h
  chunk_target_size: 1048576  # Loki will attempt to build chunks up to 1.5MB, flushing first if chunk_idle_period or max_chunk_age is reached first
  chunk_retain_period: 30s    # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)
  max_transfer_retries: 0     # Chunk transfers disabled
schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: aws
      schema: v11
      index:
        prefix: index_
        period: 24h
storage_config:
  aws:
    s3: s3://XXXXX:YYYY@eu-west-1/logs-loki-test
  boltdb_shipper:
    active_index_directory: /loki/boltdb-shipper-active
    cache_location: /loki/boltdb-shipper-cache
    cache_ttl: 24h  # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: s3
compactor:
  working_directory: /loki/boltdb-shipper-compactor
  shared_store: aws
limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h
chunk_store_config:
  max_look_back_period: 0s
table_manager:
  retention_deletes_enabled: false
  retention_period: 0s
ruler:
  storage:
    type: local
    local:
      directory: /loki/rules
  rule_path: /loki/rules-temp
  alertmanager_url: http://localhost:9093
  ring:
    kvstore:
      store: inmemory
  enable_api: true
In the compactor block, replace aws with s3 in the shared_store line and try it out.
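That is, the compactor block becomes:

compactor:
  working_directory: /loki/boltdb-shipper-compactor
  shared_store: s3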
Summary:
I already have a "static Jenkins server" type Jenkins X setup running on an OpenShift 3.11 provider. The cluster crashed, and I want to reinstall Jenkins X in my cluster, but there is no support for "static Jenkins server" now.
So I am trying to install Jenkins X via "jx boot", but the installation fails with the "tekton pipeline controller" pod in "CrashLoopBackOff" state.
Steps to reproduce the behavior:
jx-requirements.yml:
autoUpdate:
  enabled: false
  schedule: ""
bootConfigURL: https://github.com/jenkins-x/jenkins-x-boot-config.git
cluster:
  clusterName: cic-60
  devEnvApprovers:
    - automation
  environmentGitOwner: cic-60
  gitKind: bitbucketserver
  gitName: bs
  gitServer: http://rtx-swtl-git.fnc.net.local
  namespace: jx
  provider: openshift
  registry: docker-registry.default.svc:5000
environments:
  - ingress:
      domain: 172.29.35.81.nip.io
      externalDNS: false
      namespaceSubDomain: -jx.
      tls:
        email: ""
        enabled: false
        production: false
    key: dev
    repository: environment-cic-60-dev
  - ingress:
      domain: ""
      externalDNS: false
      namespaceSubDomain: ""
      tls:
        email: ""
        enabled: false
        production: false
    key: staging
    repository: environment-cic-60-staging
  - ingress:
      domain: ""
      externalDNS: false
      namespaceSubDomain: ""
      tls:
        email: ""
        enabled: false
        production: false
    key: production
    repository: environment-cic-60-production
gitops: true
ingress:
  domain: 172.29.35.81.nip.io
  externalDNS: false
  namespaceSubDomain: -jx.
  tls:
    email: ""
    enabled: false
    production: false
kaniko: true
repository: nexus
secretStorage: local
storage:
  backup:
    enabled: false
    url: ""
  logs:
    enabled: false
    url: ""
  reports:
    enabled: false
    url: ""
  repository:
    enabled: false
    url: ""
vault: {}
velero:
  schedule: ""
  ttl: ""
versionStream:
  ref: v1.0.562
  url: https://github.com/jenkins-x/jenkins-x-versions.git
webhook: lighthouse
Expected behavior:
All the pods under the jx namespace should be up and running, and Jenkins X should be installed properly.
Actual behavior:
The Tekton pipeline controller pod is in "CrashLoopBackOff" state with an error:
Pods with status in "jx" namespace:
NAME                                           READY   STATUS             RESTARTS   AGE
jenkins-x-chartmuseum-5687695d57-pp994         1/1     Running            0          1d
jenkins-x-controllerbuild-78b4b56695-mg2vs     1/1     Running            0          1d
jenkins-x-controllerrole-765cf99bdb-swshp      1/1     Running            0          1d
jenkins-x-docker-registry-5bcd587565-rhd7q     1/1     Running            0          1d
jenkins-x-gcactivities-1598421600-jtgm6        0/1     Completed          0          1h
jenkins-x-gcactivities-1598423400-4rd76        0/1     Completed          0          43m
jenkins-x-gcactivities-1598425200-sd7xm        0/1     Completed          0          13m
jenkins-x-gcpods-1598421600-z7s4w              0/1     Completed          0          1h
jenkins-x-gcpods-1598423400-vzb6p              0/1     Completed          0          43m
jenkins-x-gcpods-1598425200-56zdp              0/1     Completed          0          13m
jenkins-x-gcpreviews-1598421600-5k4vf          0/1     Completed          0          1h
jenkins-x-nexus-c7dcb47c7-fh7kx                1/1     Running            0          1d
lighthouse-foghorn-654c868bc8-d5w57            1/1     Running            0          1d
lighthouse-gc-jobs-1598421600-bmsq8            0/1     Completed          0          1h
lighthouse-gc-jobs-1598423400-zskt5            0/1     Completed          0          43m
lighthouse-gc-jobs-1598425200-m9gtd            0/1     Completed          0          13m
lighthouse-jx-controller-6c9b8994bd-qt6tc      1/1     Running            0          1d
lighthouse-keeper-7c6fd9466f-gdjjt             1/1     Running            0          1d
lighthouse-webhooks-56668dc58b-4c52j           1/1     Running            0          1d
lighthouse-webhooks-56668dc58b-8dh27           1/1     Running            0          1d
tekton-pipelines-controller-76c8c8dd78-llj4c   0/1     CrashLoopBackOff   436        1d
tiller-7ddfd45c57-rwtt9                        1/1     Running            0          1d
Error log:
2020/08/24 18:38:00 Registering 4 clients
2020/08/24 18:38:00 Registering 3 informer factories
2020/08/24 18:38:00 Registering 8 informers
2020/08/24 18:38:00 Registering 2 controllers
{"level":"info","caller":"logging/config.go:108","msg":"Successfully created the logger."}
{"level":"info","caller":"logging/config.go:109","msg":"Logging level set to info"}
{"level":"fatal","logger":"tekton","caller":"sharedmain/main.go:149","msg":"Version check failed","commit":"821ac4d","error":"kubernetes version \"v1.11.0\" is not compatible, need at least \"v1.14.0\" (this can be overridden with the env var \"KUBERNETES_MIN_VERSION\")","stacktrace":"github.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain.MainWithConfig\n\tgithub.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain/main.go:149\ngithub.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain.MainWithContext\n\tgithub.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain/main.go:114\nmain.main\n\tgithub.com/tektoncd/pipeline/cmd/controller/main.go:72\nruntime.main\n\truntime/proc.go:203"}
After downgrading the Tekton image from "0.11.0" to "0.9.0", the Tekton pipeline controller pod is in Running state. A new Tekton pipeline webhook pod then got created, and it is in "CrashLoopBackOff".
Jx version:
Version 2.1.127
Commit 4bc05a9
Build date 2020-08-05T20:34:57Z
Go version 1.13.8
Git tree state clean
Diagnostic information:
The output of jx diagnose version is:
Running in namespace: jx
Version 2.1.127
Commit 4bc05a9
Build date 2020-08-05T20:34:57Z
Go version 1.13.8
Git tree state clean
NAME                            VERSION
Kubernetes cluster              v1.11.0+d4cacc0
kubectl (installed in JX_BIN)   v1.16.6-beta.0
helm client                     2.16.9
git                             2.24.1
Operating System                "CentOS Linux release 7.8.2003 (Core)"
Please visit https://jenkins-x.io/faq/issues/ for any known issues.
Finished printing diagnostic information
Kubernetes cluster: openshift - 3.11
Kubectl version:
Client Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.0+d4cacc0", GitCommit:"d4cacc0", GitTreeState:"clean", BuildDate:"2018-10-15T09:45:30Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Operating system / Environment:
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
I need to install Jenkins X via "jx boot" on OpenShift 3.11, which uses the default Kubernetes version 1.11.0, but "jx boot" requires at least 1.14.0. Please suggest if there is any workaround to get Jenkins X on OpenShift 3.11.
As the error message in the crash loop shows, kubernetes version "v1.11.0" is not compatible, need at least "v1.14.0", which makes it not installable on OpenShift 3 (as it ships with Kubernetes 1.11.0). It seems Jenkins X comes with Tekton Pipelines v0.14.2, which requires at least Kubernetes 1.14.0 (and later releases, like Tekton Pipelines v0.15.0, require Kubernetes 1.16.0).
{"level":"fatal","logger":"tekton","caller":"sharedmain/main.go:149","msg":"Version check failed","commit":"821ac4d","error":"kubernetes version \"v1.11.0\" is not compatible, need at least \"v1.14.0\" (this can be overridden with the env var \"KUBERNETES_MIN_VERSION\")","stacktrace":"github.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain.MainWithConfig\n\tgithub.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain/main.go:149\ngithub.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain.MainWithContext\n\tgithub.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain/main.go:114\nmain.main\n\tgithub.com/tektoncd/pipeline/cmd/controller/main.go:72\nruntime.main\n\truntime/proc.go:203"}
Theoretically, setting KUBERNETES_MIN_VERSION in the controller deployment might make it work, but this is not tested, and most likely the controller won't behave correctly, as it uses features that are not available in 1.11.0. Other than this, there is no workaround that I know of.
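If you want to try it anyway, the override is just an environment variable on the controller deployment. A sketch, with the deployment name inferred from the pod list above (and, again, untested on 1.11.0):

kubectl set env deployment/tekton-pipelines-controller -n jx KUBERNETES_MIN_VERSION=v1.11.0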
I have a problem with connectivity in Docker. I use an official MySQL 5.7 image and a Prisma server. When I start it via the Prisma CLI, which uses Docker Compose underneath (described here), everything works.
But I need to start these containers programmatically via the Docker API, and in this case connections from the app are dropped with [Note] Aborted connection 8 to db: 'unconnected' user: 'root' host: '164.20.10.2' (Got an error reading communication packets).
So what I do:
Creating a bridge network:
const network = await docker.network.create({
  Name: manifest.name + '_network',
  IPAM: {
    "Driver": "default",
    "Config": [
      {
        "Subnet": "164.20.0.0/16",
        "IPRange": "164.20.10.0/24"
      }
    ]
  }
});
Creating the MySQL container and attaching it to the network:
const mysql = await docker.container.create({
  Image: 'mysql:5.7',
  Hostname: manifest.name + '-mysql',
  Names: ['/' + manifest.name + '-mysql'],
  NetworkingConfig: {
    EndpointsConfig: {
      [manifest.name + '_network']: {
        Aliases: [manifest.name + '-mysql']
      }
    }
  },
  Restart: 'always',
  Args: [
    "mysqld",
    "--max-connections=1000",
    "--sql-mode=ALLOW_INVALID_DATES,ANSI_QUOTES,ERROR_FOR_DIVISION_BY_ZERO,HIGH_NOT_PRECEDENCE,IGNORE_SPACE,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_BACKSLASH_ESCAPES,NO_DIR_IN_CREATE,NO_ENGINE_SUBSTITUTION,NO_FIELD_OPTIONS,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_UNSIGNED_SUBTRACTION,NO_ZERO_DATE,NO_ZERO_IN_DATE,ONLY_FULL_GROUP_BY,PIPES_AS_CONCAT,REAL_AS_FLOAT,STRICT_ALL_TABLES,STRICT_TRANS_TABLES,ANSI,DB2,MAXDB,MSSQL,MYSQL323,MYSQL40,ORACLE,POSTGRESQL,TRADITIONAL"
  ],
  Env: [
    'MYSQL_ROOT_PASSWORD=secret'
  ]
});

await network.connect({
  Container: mysql.id
});
await mysql.start();
Then I wait for MySQL to boot, create the needed databases and the needed Prisma containers from prismagraphql/prisma:1.1, and start them. The app server resolves the mysql host correctly, but connections are dropped by MySQL.
Telnet from the app container to the mysql container on port 3306 responds correctly:
J
5.7.21U;uH Kem']#45T]2mysql_native_password
What am I doing wrong?
Check the below:
max_allowed_packet
wait_timeout
net_read_timeout
Also monitor the MySQL process list during the issue to identify timeouts.
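For example, while the connection drops are happening (standard MySQL statements):

-- who is connected, and how long each thread has been idle
SHOW FULL PROCESSLIST;

-- the timeout and packet settings currently in effect
SHOW GLOBAL VARIABLES LIKE '%timeout%';
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';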
Can you try adding some wait? It is possible that the application tries to connect to the MySQL server before it is ready to accept connections. To test this, add some wait on startup, or run MySQL followed by the application as different deployments.
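A sketch of such a wait using only Node's built-in net module; a successful TCP connect is only a rough proxy for MySQL being fully ready, and the host alias below is an assumption based on the network aliases above (it must be resolvable from wherever this code runs):

const net = require('net');

// Poll the MySQL port until it accepts a TCP connection or retries run out.
function waitForPort(host, port, retries = 30, delayMs = 1000) {
  return new Promise((resolve, reject) => {
    const attempt = (left) => {
      const socket = net.connect(port, host, () => {
        socket.end();
        resolve();
      });
      socket.on('error', () => {
        socket.destroy();
        if (left <= 0) {
          reject(new Error('MySQL not reachable'));
        } else {
          setTimeout(() => attempt(left - 1), delayMs);
        }
      });
    };
    attempt(retries);
  });
}

// e.g. await waitForPort(manifest.name + '-mysql', 3306); before starting the app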
The fix is to add --wait-timeout=28800 (or a higher number) to the MySQL arguments:
Args: [
  "mysqld",
  "--max-connections=1000",
  "--sql-mode=ALLOW_INVALID_DATES,ANSI_QUOTES,ERROR_FOR_DIVISION_BY_ZERO,HIGH_NOT_PRECEDENCE,IGNORE_SPACE,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_BACKSLASH_ESCAPES,NO_DIR_IN_CREATE,NO_ENGINE_SUBSTITUTION,NO_FIELD_OPTIONS,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_UNSIGNED_SUBTRACTION,NO_ZERO_DATE,NO_ZERO_IN_DATE,ONLY_FULL_GROUP_BY,PIPES_AS_CONCAT,REAL_AS_FLOAT,STRICT_ALL_TABLES,STRICT_TRANS_TABLES,ANSI,DB2,MAXDB,MSSQL,MYSQL323,MYSQL40,ORACLE,POSTGRESQL,TRADITIONAL",
  "--wait-timeout=28800" // 28800 sec = 8 hours
],
Reference: https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_wait_timeout
But maybe it's wiser to find out the root cause of the idle connections.
For the last 4 days we have been facing frequent database crashes with the MySQL Infobright engine; there are no recent changes on the production environment and no updates.
Currently we are using version 5.1.40.
Find the dump below; can anyone help figure out the issue?
170520 21:12:08 - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.
key_buffer_size=1677721600
read_buffer_size=1048576
max_used_connections=75
max_threads=1000
threads_connected=54
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 3696548 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
thd: 0xc2a4bd000
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7fc0d0bede58 thread_stack 0x80000
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0xaef849]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x412e13]
/lib64/libpthread.so.0(+0xf7e0) [0x7fc0d48c77e0]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0xb10635]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0xb1f123]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x9a9693]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x76ae0c]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x76b594]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x767ab3]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x7694ea]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x72902b]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x422325]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x427573]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x42b38c]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x42c227]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x42cb05]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x41f06d]
/lib64/libpthread.so.0(+0x7aa1) [0x7fc0d48bfaa1]
/lib64/libc.so.6(clone+0x6d) [0x7fc0d460caad]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0xc2da5e410 = SELECT DATE_FORMAT(DATETIME,'%Y%m%d') AS YEAR_MONTH_DAY_SK ,HOUR(DATETIME) AS HOUR_SK, IFNULL(DESTINATION,'0') AS DESTINATION, IFNULL(DATETIME,'1970-01-01 00:00:00') AS DATETIME, IFNULL(CLIENTID,'0') AS CLIENTID, IFNULL(GROUPID,'0') AS GROUPID, IFNULL(TEAMID,'0') AS TEAMID, IFNULL(SERVICEID,'0') AS SERVICEID, IFNULL(CHANNELID,'0') AS CHANNELID, IFNULL(STATUSID,'0') AS STATUSID, CASE REASONCODE WHEN '' THEN NULL WHEN NULL THEN NULL ELSE REASONCODE END AS REASONCODE, CASE REASONDESC WHEN '' THEN NULL WHEN NULL THEN NULL ELSE REASONDESC END AS REASONDESC, IFNULL(ACTIONTYPE1ID,'0') AS ACTIONTYPE1ID, CASE ACTIONTYPE1DESC WHEN '' THEN NULL WHEN NULL THEN NULL ELSE ACTIONTYPE1DESC END AS ACTIONTYPE1DESC, IFNULL(ACTIONTYPE2ID,'0') AS ACTIONTYPE2ID, CASE ACTIONTYPE2DESC WHEN '' THEN NULL WHEN NULL THEN NULL ELSE ACTIONTYPE2DESC END AS ACTIONTYPE2DESC, IFNULL(ATTACHMENT,'0') AS ATTACHMENT, CASE MIMETYPE WHEN '' THEN NULL WHEN NULL THEN NULL ELSE MIMETYPE END AS MIMETYPE, CASE VOICEFLOWNAME WH
thd->thread_id=35918
thd->killed=NOT_KILLED
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
170520 21:12:08 mysqld_safe Number of processes running now: 0
170520 21:12:08 mysqld_safe mysqld restarted
tcmalloc: large alloc 1365172224 bytes == 0x4518000 #
Loading configuration for Infobright instance ...
Option: AllowMySQLQueryPath, value: 1.
Option: AutoConfigure, value: 0.
Option: CacheFolder, value: /usr/local/infobright-4.7.1-x86_64/cache.
Option: ControlMessages, value: 0.
Option: IBEngineRevision, value: IEE_4.7.1_r30553_31737.
Option: InternalMessages, value: 0.
Option: InternalMessagesFlushPeriod, value: 60.
Option: KNFolder, value: BH_RSI_Repository.
Option: KNLevel, value: 99.
Option: LicenseCheckInterval, value: 0.
Option: LicenseExpireWarningDays, value: 0.
Option: LicenseFile, value: <unknown>.
Option: LicenseServerIPAddr, value: .
Option: LicenseServerType, value: .
Option: LicenseServerWarningNumber, value: .
Option: LoaderMainHeapSize, value: 800.
Option: PushDown, value: 1.
Option: ServerMainHeapSize, value: 48000.
Option: UseMySQLImportExportDefaults, value: 0.
Option: bherrLogLevel, value: 1.
Infobright instance configuration loaded.
tcmalloc: large alloc 40265318400 bytes == 0x687c8000 #
tcmalloc: large alloc 10066329600 bytes == 0x9cff48000 #
170520 21:12:09 [ERROR] Can't start server: Bind on TCP/IP port: Address already in use
170520 21:12:09 [ERROR] Do you already have another mysqld server running on port: 5029 ?
170520 21:12:09 [ERROR] Aborting
170520 21:12:09 [Note] /usr/local/infobright-4.7.1-x86_64/bin/mysqld: Shutdown complete
170520 21:12:09 mysqld_safe mysqld from pid file /data/infobright/data/SH-UMP-CINFBRT2.pid ended
I tried to install mysql-proxy on a development machine and I got the following error.
/etc/init.d/mysql-proxyd start
Starting mysql-proxy: 2011-02-26 15:51:45: (critical) admin-plugin.c:569: --admin-username needs to be set
2011-02-26 15:51:45: (critical) mainloop.c:267: applying config of plugin admin failed
2011-02-26 15:51:45: (critical) mysql-proxy-cli.c:596: Failure from chassis_mainloop. Shutting down.
[ OK ]
Since this is only a test machine, I do not want the security features of the proxy. How do I avoid the above error?
Either upgrade your version of mysql-proxy to 0.8.2 or greater, or explicitly specify that you don't need the admin plugin by using mysql-proxy --plugins=proxy.
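For example (backend address reused from the config below):

mysql-proxy --plugins=proxy --proxy-backend-addresses=192.168.2.1:3306

If you do keep the admin plugin, it needs the admin options set, as in this config: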
[mysql-proxy]
daemon = true
user = mysql
proxy-skip-profiling = true
keepalive = true
max-open-files = 2048
event-threads = 50
pid-file = /var/run/mysql-proxy.pid
log-file = /var/log/mysql-proxy.log
log-level = debug
admin-address=:4401
admin-username=1
admin-password=1
admin-lua-script=/usr/local/lib/mysql-proxy/lua/admin.lua
proxy-address = 0.0.0.0:3307
proxy-backend-addresses = 192.168.2.1:3306
proxy-read-only-backend-addresses=192.168.6.2:3306, 192.168.6.1:3306
proxy-lua-script=/usr/lib/mysql-proxy/lua/proxy/balance.lua