I have a kubeadmin account for OpenShift 4.2 and am able to successfully login via oc login -u kubeadmin.
I exposed the built-in docker registry through DefaultRoute as documented in https://docs.openshift.com/container-platform/4.2/registry/securing-exposing-registry.html
My docker client runs on macOS and is configured to trust the default self-signed certificate of the registry:
openssl s_client -showcerts -connect $(oc registry info) </dev/null 2>/dev/null|openssl x509 -outform PEM > tls.pem
security add-trusted-cert -d -r trustRoot -k ~/Library/Keychains/login.keychain tls.pem
Now when I try to log in to the built-in registry, I get the following error:
docker login $(oc registry info) -u $(oc whoami) -p $(oc whoami -t)
Error response from daemon: Get https://my-openshift-registry.com/v2/: unauthorized: authentication required
The registry logs report the following errors:
error authorizing context: authorization header required
invalid token: Unauthorized
And more specifically:
oc logs -f -n openshift-image-registry deployments/image-registry
time="2019-11-29T18:03:25.581914855Z" level=warning msg="error authorizing context: authorization header required" go.version=go1.11.13 http.request.host=my-openshift-registry.com http.request.id=aa41909a-4aa0-42a5-9568-91aa77c7f7ab http.request.method=GET http.request.remoteaddr=10.16.7.10 http.request.uri=/v2/ http.request.useragent="docker/19.03.5 go/go1.12.12 git-commit/633a0ea kernel/4.9.184-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.5 \\(darwin\\))"
time="2019-11-29T18:03:25.581958296Z" level=info msg=response go.version=go1.11.13 http.request.host=my-openshift-registry.com http.request.id=d2216e3a-0e12-4e77-b3cb-fd47b6f9a804 http.request.method=GET http.request.remoteaddr=10.16.7.10 http.request.uri=/v2/ http.request.useragent="docker/19.03.5 go/go1.12.12 git-commit/633a0ea kernel/4.9.184-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.5 \\(darwin\\))" http.response.contenttype="application/json; charset=utf-8" http.response.duration="923.654µs" http.response.status=401 http.response.written=87
time="2019-11-29T18:03:26.187770058Z" level=error msg="invalid token: Unauthorized" go.version=go1.11.13 http.request.host=my-openshift-registry.com http.request.id=638fc003-1d4a-433c-950e-f9eb9d5328c4 http.request.method=GET http.request.remoteaddr=10.16.7.10 http.request.uri="/openshift/token?account=kube%3Aadmin&client_id=docker&offline_token=true" http.request.useragent="docker/19.03.5 go/go1.12.12 git-commit/633a0ea kernel/4.9.184-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.5 \\(darwin\\))"
time="2019-11-29T18:03:26.187818779Z" level=info msg=response go.version=go1.11.13 http.request.host=my-openshift-registry.com http.request.id=5486d94a-f756-401b-859d-0676e2a28465 http.request.method=GET http.request.remoteaddr=10.16.7.10 http.request.uri="/openshift/token?account=kube%3Aadmin&client_id=docker&offline_token=true" http.request.useragent="docker/19.03.5 go/go1.12.12 git-commit/633a0ea kernel/4.9.184-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.5 \\(darwin\\))" http.response.contenttype=application/json http.response.duration=6.97799ms http.response.status=401 http.response.written=0
My oc client is
oc version
Client Version: version.Info{Major:"4", Minor:"1+", GitVersion:"v4.1.0+b4261e0", GitCommit:"b4261e07ed", GitTreeState:"clean", BuildDate:"2019-07-06T03:16:01Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.6+2e5ed54", GitCommit:"2e5ed54", GitTreeState:"clean", BuildDate:"2019-10-10T22:04:13Z", GoVersion:"go1.12.8", Compiler:"gc", Platform:"linux/amd64"}
My docker info is
docker info
Client:
Debug Mode: false
Server:
Containers: 7
Running: 0
Paused: 0
Stopped: 7
Images: 179
Server Version: 19.03.5
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.184-linuxkit
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 6
Total Memory: 5.818GiB
Name: docker-desktop
ID: JRNE:4IBW:MUMK:CGKT:SMWT:27MW:D6OO:YFE5:3KVX:AEWI:QC7M:IBN4
Docker Root Dir: /var/lib/docker
Debug Mode: true
File Descriptors: 29
Goroutines: 44
System Time: 2019-11-29T21:12:21.3565037Z
EventsListeners: 2
HTTP Proxy: gateway.docker.internal:3128
HTTPS Proxy: gateway.docker.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
I have tried adding the registry-viewer role to kubeadmin, but this did not make any difference
oc policy add-role-to-user registry-viewer kubeadmin
oc policy add-role-to-user registry-viewer kube:admin
Is there any suggestion as to what I could try or how to diagnose the problem further? I am able to access the registry from within the cluster, however, I need to access it externally through docker login.
As silly as it sounds, the problem was that $(oc whoami) evaluated to kube:admin instead of kubeadmin, and only the latter works as the registry username. For example, in order to log in successfully I had to replace
docker login $(oc registry info) -u $(oc whoami) -p $(oc whoami -t)
with
docker login $(oc registry info) -u kubeadmin -p $(oc whoami -t)
The relevant role is registry-viewer; however, I believe the kubeadmin user has it pre-configured already:
oc policy add-role-to-user registry-viewer kubeadmin
oc adm policy add-cluster-role-to-user registry-viewer kubeadmin
To add the registry-viewer role, the command is:
oc adm policy add-cluster-role-to-user registry-viewer kubeadmin
You can refer to their documentation to work with the internal registry.
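The kube:admin-to-kubeadmin substitution can also be scripted so $(oc whoami) keeps working for regular users too. A minimal sketch; the registry_user helper is my own naming, not part of oc:

```shell
# Translate the OAuth identity reported by `oc whoami` into the username the
# registry accepts: "kube:admin" must become "kubeadmin"; anything else is
# passed through unchanged.
registry_user() {
  sed 's/^kube:admin$/kubeadmin/'
}

# Usage (assumes oc and docker are installed and you are logged in):
#   docker login "$(oc registry info)" \
#     -u "$(oc whoami | registry_user)" -p "$(oc whoami -t)"
```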
What did I do:
1. docker run --net minha-rede --name mysql01 -e MYSQL_ROOT_PASSWORD=Password1234 -d mysql
2. docker run --net minha-rede --name wordpress01 --link mysql01 -p 8080:80 -e WORDPRESS_DB_HOST=mysql01:3306 -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_PASSWORD=Password1234 -e WORDPRESS_DB_NAME=wordpress -e WORDPRESS_TABLE_PREFIX=wp_ -d wordpress
3. docker exec -it mysql01 bash
4. mysql -u root -p
5. CREATE USER 'luckerman'@'localhost' IDENTIFIED BY 'onboard' WITH MAX_USER_CONNECTIONS 3;
6. GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'luckerman'@'localhost';
7. exit
8. exit
9. docker run -d \
-p 9104:9104 --name mysqlexp01\
--network minha-rede \
-e DATA_SOURCE_NAME="luckerman:onboard@(minha-rede:3306)/" \
prom/mysqld-exporter
But when I open http://localhost:9104/metrics, I see the message:
# TYPE mysql_exporter_last_scrape_error gauge
mysql_exporter_last_scrape_error 1
and when I run docker logs mysqlexp01 it shows me this:
time="2018-09-11T20:57:40Z" level=info msg="Starting mysqld_exporter (version=0.11.0, branch=HEAD, revision=5d7179615695a61ecc3b5bf90a2a7c76a9592cdd)" source="mysqld_exporter.go:206"
time="2018-09-11T20:57:40Z" level=info msg="Build context (go=go1.10.3, user=root@3d3ff666b0e4, date=20180629-15:00:35)" source="mysqld_exporter.go:207"
time="2018-09-11T20:57:40Z" level=info msg="Enabled scrapers:" source="mysqld_exporter.go:218"
time="2018-09-11T20:57:40Z" level=info msg=" --collect.info_schema.tables" source="mysqld_exporter.go:222"
time="2018-09-11T20:57:40Z" level=info msg=" --collect.global_status" source="mysqld_exporter.go:222"
time="2018-09-11T20:57:40Z" level=info msg=" --collect.global_variables" source="mysqld_exporter.go:222"
time="2018-09-11T20:57:40Z" level=info msg=" --collect.slave_status" source="mysqld_exporter.go:222"
time="2018-09-11T20:57:40Z" level=info msg="Listening on :9104" source="mysqld_exporter.go:232"
time="2018-09-11T20:57:44Z" level=error msg="Error pinging mysqld: dial tcp 127.0.0.1:3306: connect: connection refused" source="exporter.go:119"
What did I do wrong? I have tried many forums, sites, etc...
It worked! I did this:
docker network inspect minha-rede
Then I found the IP of MySQL (in my case 172.23.0.2).
Then I connected to MySQL and ran these commands:
CREATE USER 'luckerman'@'172.23.0.2' IDENTIFIED BY 'onboard' WITH MAX_USER_CONNECTIONS 3;
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'luckerman'@'172.23.0.2';
Thank you @alex-karshin!
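For what it's worth, two details of the original commands seem relevant: the DATA_SOURCE_NAME pointed at the network's name (minha-rede) rather than the MySQL container's name, and the grant was tied to one container IP. A sketch of an alternative, assuming Docker's embedded DNS resolves container names on a user-defined network, and using a wildcard host ('%') so the grant survives IP changes (the wildcard is my assumption, not part of either post):

```shell
# Create the exporter user for any host instead of a single container IP.
docker exec -i mysql01 mysql -uroot -pPassword1234 <<'SQL'
CREATE USER 'luckerman'@'%' IDENTIFIED BY 'onboard' WITH MAX_USER_CONNECTIONS 3;
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'luckerman'@'%';
SQL

# Point the DSN at the MySQL container's name, not the network's name.
docker run -d \
  -p 9104:9104 --name mysqlexp01 \
  --network minha-rede \
  -e DATA_SOURCE_NAME="luckerman:onboard@(mysql01:3306)/" \
  prom/mysqld-exporter
```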
Node v8.11
NPM v5.6
Whenever I try to run polymer serve, an error occurs saying the server failed to start because there are no available ports, which is wrong: most of those ports are free.
$ polymer serve
ERROR: Server failed to start: Error: No available ports. Ports tried: [8081,8000,8001,8003,8031,2000,2001,2020,2109,2222,2310,3000,3001,3030,3210,3333,4000,4001,4040,4321,4502,4503,4567,5000,5001,5050,5432,6000,6001,6060,6666,6543,7000,7070,7774,7777,8765,8777,8888,9000,9001,9080,9090,9876,9877,9999,49221,55001]
at /Users/nabed/.config/yarn/global/node_modules/polyserve/lib/start_server.js:380:15
at Generator.next (<anonymous>)
at fulfilled (/Users/nabed/.config/yarn/global/node_modules/polyserve/lib/start_server.js:17:58)
at <anonymous>
error: cli runtime exception: Error: Error: No available ports. Ports tried: [8081,8000,8001,8003,8031,2000,2001,2020,2109,2222,2310,3000,3001,3030,3210,3333,4000,4001,4040,4321,4502,4503,4567,5000,5001,5050,5432,6000,6001,6060,6666,6543,7000,7070,7774,7777,8765,8777,8888,9000,9001,9080,9090,9876,9877,9999,49221,55001]
error: Error: Error: No available ports. Ports tried: [8081,8000,8001,8003,8031,2000,2001,2020,2109,2222,2310,3000,3001,3030,3210,3333,4000,4001,4040,4321,4502,4503,4567,5000,5001,5050,5432,6000,6001,6060,6666,6543,7000,7070,7774,7777,8765,8777,8888,9000,9001,9080,9090,9876,9877,9999,49221,55001]
at /Users/nabed/.config/yarn/global/node_modules/polyserve/lib/start_server.js:91:19
at Generator.throw (<anonymous>)
at rejected (/Users/nabed/.config/yarn/global/node_modules/polyserve/lib/start_server.js:18:65)
at <anonymous>
here is a --verbose err log text http://freetexthost.com/2sjgr45yx5
I am on macOS; I installed Node via the package installer from their website.
As @synk said in the comments:
polymer serve --hostname 0.0.0.0 or replace 0.0.0.0 with an IP
that is available on the machine
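Putting the comment's workaround together with an explicit port looks like this; the nc port check is my addition (assumes nc is installed), not part of the original answer:

```shell
# Sanity-check that the first port polyserve tries is actually free.
nc -z localhost 8081 && echo "8081 is in use" || echo "8081 is free"

# Bind to all interfaces instead of the default hostname, which sidesteps
# failures when "localhost" does not resolve cleanly on the machine.
polymer serve --hostname 0.0.0.0 --port 8081
```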
I tried to use the simple example from the mysql-events package, but when I ran it I got this error:
Error: ER_NO_BINARY_LOGGING: You are not using binary logging
So I changed my.cnf:
#
# The MySQL database server configuration file.
#
# You can copy this to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "~/.my.cnf" to set user-specific options.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
# What I added:
log_bin = "/home/erfan/salone-entezar/server/"
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/
But when I tried to restart MySQL (using sudo service mysql restart), this error happened:
Job for mysql.service failed because the control process exited with error code.
See "systemctl status mysql.service" and "journalctl -xe" for details.
And this is the output of systemctl status mysql.service:
● mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
Active: activating (start-post) (Result: exit-code) since Fri 2016-11-18 20:16:48 IRST; 3s ago
Process: 8838 ExecStart=/usr/sbin/mysqld (code=exited, status=1/FAILURE)
Process: 8831 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
Main PID: 8838 (code=exited, status=1/FAILURE); Control PID: 8839 (mysql-systemd-s)
Tasks: 2 (limit: 4915)
CGroup: /system.slice/mysql.service
└─control
├─8839 /bin/bash /usr/share/mysql/mysql-systemd-start post
└─8850 sleep 1
Nov 18 20:16:48 erfan-m systemd[1]: Starting MySQL Community Server...
Nov 18 20:16:48 erfan-m mysql-systemd-start[8831]: my_print_defaults: [ERROR] Found option without preceding group in config file /etc/mysql/my.cnf at line 19!
Nov 18 20:16:48 erfan-m mysql-systemd-start[8831]: my_print_defaults: [ERROR] Fatal error in defaults handling. Program aborted!
Nov 18 20:16:48 erfan-m systemd[1]: mysql.service: Main process exited, code=exited, status=1/FAILURE
What is my problem, and what should I do now?
For Ubuntu
I had to add socketPath: '/var/run/mysqld/mysqld.sock' to the dsn variable in my code, and also correct /etc/mysql/my.cnf as below:
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/
[mysqld]  # grouping config options under a section header is important
# Must be unique integer from 1-2^32
server-id = 1
# Row format required for ZongJi
binlog_format = row
# Directory must exist. This path works for Linux. Other OS may require
# different path.
log_bin = /var/log/mysql/mysql-bin.log
And finally restart it with sudo service mysql restart.
For CentOS
I had to add socketPath: '/var/lib/mysql/mysql.sock' to the dsn variable in my code, and also correct /etc/my.cnf as below:
[mysqld]
# Must be unique integer from 1-2^32
server-id = 1
# Row format required for ZongJi
binlog_format = row
# Directory must exist. This path works for Linux. Other OS may require
# different path.
log_bin = /var/log/mariadb/mariadb-bin.log
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd
[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
And finally restart it with systemctl restart mariadb.
NOTE: CentOS 7 has replaced MySQL with MariaDB, so the log_bin path differs between Ubuntu and CentOS.
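After either restart, it is worth confirming that binary logging is actually on before re-running mysql-events. A minimal check, assuming the mysql client and root credentials are available:

```shell
# Expect log_bin = ON; if it is OFF, the [mysqld] section was not picked up.
mysql -u root -p -e "SHOW VARIABLES LIKE 'log_bin';"

# Also confirm the server is writing to the expected binlog file.
mysql -u root -p -e "SHOW MASTER STATUS;"
```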
I am attempting to limit access to a directory using basic authentication, with the user:password pairs stored in a MySQL database. Upon starting the Apache service with mod_authn_dbd enabled, it creates about 60 to 70 MySQL processes, all with a command of "sleep". These errors appear throughout the Apache log, however, and as a result the authentication intermittently fails:
[Mon Aug 19 21:38:15 2013] [error] (20014)Internal error: DBD: failed to initialise
[Mon Aug 19 21:38:15 2013] [crit] (20014)Internal error: DBD: child init failed!
[Mon Aug 19 21:38:15 2013] [error] (20014)Internal error: DBD: Can't connect to mysql
I have tried adjusting MySQL connection limits and the DBD Parameters to fix this, without success.
This is my current configuration, with sensitive info removed:
<IfModule mod_authn_dbd.c>
DBDriver mysql
DBDParams "host=localhost port=3306 dbname=SITE_USERS user=DBUSER pass=DBPASS"
DBDExptime 300
DBDMin 1
DBDMax 10
</IfModule>
<Directory "/home/mysite/public_html/protected">
AuthCookieName CookieAuth
AuthCookieBase64 On
AuthType Basic
AuthName "Registered User"
AuthBasicProvider dbd
AuthDBDUserPWQuery "SELECT password FROM users WHERE username = %s"
Require valid-user
AllowOverride None
Order allow,deny
Allow from all
</Directory>
It seems like you're running into Bug #45995 mod_authn_dbd conflict with php+mysql.
As described in a related post, this is caused by a conflict between the Apache apr-util MySQL driver and the PHP MySQL driver. You can either uninstall php-mysql if you are not using it, or, if you are, downgrade apr and apr-util to version 1.3 or below.
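Before downgrading anything, it may help to confirm the conflict on your machine. A diagnostic sketch; the apr-util driver path varies by distro, so the paths below are assumptions:

```shell
# Is mod_authn_dbd (and mod_dbd) actually loaded?
apachectl -M 2>/dev/null | grep -i dbd

# Which MySQL client library does the apr-util DBD driver link against?
ls /usr/lib*/apr-util-1/apr_dbd_mysql* 2>/dev/null
ldd /usr/lib*/apr-util-1/apr_dbd_mysql*.so 2>/dev/null | grep -i mysql
```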