Hawkular agent: index out of range - configuration

I have a problem setting up hawkular-agent.
I have set up the agent in an OpenShift pod, and I get this error log when the agent tries to call the Jolokia endpoint of another application running in a different pod:
I0130 13:07:05.161073 1 metrics_collector_manager.go:94] START collecting metrics from [https://x.x.x.x:8778/jolokia/] every [10]s
panic: runtime error: index out of range
goroutine 11 [running]:
panic(0x11c3ac0, 0xc420016060)
/home/mazz/bin/go-install/go/src/runtime/panic.go:500 +0x1a1
github.com/hawkular/hawkular-openshift-agent/k8s.K8SEndpoint.GetUrl(0xc4202a9cb8, 0x7, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/home/mazz/source/go/hawkular-openshift-agent/src/github.com/hawkular/hawkular-openshift-agent/k8s/configmap_entry.go:68 +0x26b
github.com/hawkular/hawkular-openshift-agent/k8s.(*NodeEventConsumer).startCollecting(0xc42045d2c0, 0xc42016c500)
/home/mazz/source/go/hawkular-openshift-agent/src/github.com/hawkular/hawkular-openshift-agent/k8s/node_event_consumer.go:163 +0x10c
github.com/hawkular/hawkular-openshift-agent/k8s.(*NodeEventConsumer).consumeNodeEvents(0xc42045d2c0)
/home/mazz/source/go/hawkular-openshift-agent/src/github.com/hawkular/hawkular-openshift-agent/k8s/node_event_consumer.go:112 +0x26d
created by github.com/hawkular/hawkular-openshift-agent/k8s.(*NodeEventConsumer).Start
/home/mazz/source/go/hawkular-openshift-agent/src/github.com/hawkular/hawkular-openshift-agent/k8s/node_event_consumer.go:82 +0x5c5
Has anyone had a similar problem? The agent version is Hawkular OpenShift Agent 0.1.0.

I managed to fix the issue by using the openshift-infra project instead of openshift-infra1.
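For reference, a rough sketch of redeploying the agent into openshift-infra, assuming the standard template and configmap files from the hawkular-openshift-agent repository (the file and role names here are assumptions, not taken from this setup):

# Create the agent's configmap and deployment in the openshift-infra project
oc create -f hawkular-openshift-agent-configmap.yaml -n openshift-infra
oc process -f hawkular-openshift-agent.yaml | oc create -n openshift-infra -f -
# Grant the agent's service account the cluster role it needs
oc adm policy add-cluster-role-to-user hawkular-openshift-agent system:serviceaccount:openshift-infra:hawkular-openshift-agent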

ESPHome not working for MH-Z19 with Lolin D1 mini

I connected the MH-Z19 sensor to the D1 mini and want to flash it via ESPHome.
I followed this guide:
https://esphome.io/components/sensor/mhz19.html
I used the following configuration:
esphome:
  name: co2-sensor

esp8266:
  board: esp01_1m

# Enable logging
#logger:

# Enable Home Assistant API
api:

ota:
  password: "xxxxxx"

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

  # Enable fallback hotspot (captive portal) in case wifi connection fails
  ap:
    ssid: "Co2-Sensor Fallback Hotspot"
    password: "xxxxx"

captive_portal:

uart:
  rx_pin: GPIO3
  tx_pin: GPIO1
  baud_rate: 9600

sensor:
  - platform: mhz19
    co2:
      name: "CO2 Value"
    temperature:
      name: "MH-Z19 Temperature"
    update_interval: 60s
    automatic_baseline_calibration: false
But I cannot flash it; I get the following error:
======================== [SUCCESS] Took 305.85 seconds ========================
INFO Successfully compiled program.
esptool.py v3.2
Serial port /dev/ttyUSB0
Connecting......................................
A fatal error occurred: Failed to connect to ESP8266: No serial data received.
For troubleshooting steps visit: https://github.com/espressif/esptool#troubleshooting
INFO Upload with baud rate 460800 failed. Trying again with baud rate 115200.
esptool.py v3.2
Serial port /dev/ttyUSB0
Connecting......................................
A fatal error occurred: Failed to connect to ESP8266: No serial data received.
For troubleshooting steps visit: https://github.com/espressif/esptool#troubleshooting
I can, however, flash it and it comes online if I disconnect the sensor; of course it then publishes no data. So I assume it is something to do with the UART. I also tried disabling the logging, which did nothing.
It turned out the UART pins are used by default for debug logging, even though I thought I had disabled the logging option. I switched to pins 4 and 5 and it worked.
https://esphome.io/components/uart.html#uart
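For illustration, a minimal sketch of the relevant sections, assuming GPIO4/GPIO5 on the D1 mini (which pin ends up as RX and which as TX depends on the wiring, so treat the assignment below as an assumption and swap if needed):

uart:
  rx_pin: GPIO5   # D1 on the D1 mini (assumed wiring)
  tx_pin: GPIO4   # D2 on the D1 mini (assumed wiring)
  baud_rate: 9600

# Alternatively, keep the sensor on the hardware UART and stop the logger
# from using it by setting the logger's UART baud rate to 0:
logger:
  baud_rate: 0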
Note that the value I got was 5000 ppm at first, when it was plugged into the Pi on which I'm running HA. When connected to another PSU I got normal-looking values. I assume it simply did not get enough power from the Pi.

Rancher Desktop unable to pull an image from Docker Hub

I have downloaded and set up Rancher Desktop with nerdctl, but I am unable to pull any public image from Docker Hub. I am receiving this error:
INFO[0011] trying next host error="failed to do request: Head
"https://registry-1.docker.io/v2/maildev/maildev/manifests/latest":
dial tcp: lookup registry-1.docker.io on 192.168.47.23:53: read udp
192.168.47.23:37689->192.168.47.23:53: i/o timeout" host=registry-1.docker.io
FATA[0011] failed to resolve reference
"docker.io/maildev/maildev:latest": failed to do request: Head
"https://registry-1.docker.io/v2/maildev/maildev/manifests/latest":
dial tcp: lookup registry-1.docker.io on 192.168.47.23:53: read udp
192.168.47.23:37689->192.168.47.23:53: i/o timeout
Thanks for your input.
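For what it's worth, the log shows the VM's DNS resolver (192.168.47.23:53) timing out while looking up registry-1.docker.io, so the pull fails before it ever reaches the registry. A minimal way to check name resolution from inside the Rancher Desktop VM, assuming the rdctl CLI that ships with Rancher Desktop:

# Open a shell inside the Rancher Desktop VM and test name resolution there
rdctl shell
cat /etc/resolv.conf
nslookup registry-1.docker.io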

MSDTC WS-AT, HTTP could not register URL https://+:2372/WsatService/. Your process does not have access rights to this namespace

On a Windows Server 2012 machine, I have a local DTC and a clustered DTC; the clustered DTC shows up as a role in the Failover Cluster Manager.
I have enabled WS-AT with the following command on the clustered DTC:
wsatconfig -network:enable -endpointCert:7c6361568413852afb471d5f8b92604cdde530dd -accountsCerts:3bcf068b0b984d2af9d2efa03e8a489c8483ba11 -virtualServer:ftsappdev -restart
For endpointCert, I gave the thumbprint of the certificate for ftsappdev (the cluster role), and for accountsCerts, I gave the thumbprint of the certificate of a JBoss server.
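(For reference, a hedged sketch of how such thumbprints can be looked up on the machine; this is not part of the original steps:)

rem List the certificates in the local machine's personal store, including their SHA-1 hashes (thumbprints)
certutil -store My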
I have also configured WS-AT for the local DTC through the WS-AT tab in Component Services.
In Failover Cluster Manager, when I take the clustered DTC resource offline and then online, I get the following entry in the Event Viewer (Application log):
The MSDTC WS-AT protocol failed at the beginning of recovery. As a result, WS-AT functionality will be disabled.
Protocol ID: c05b9cad-ab24-4bb3-9440-3548fa7b4b1b
Protocol Name: WS-AtomicTransaction 1.1
Exception: Microsoft.Transactions.Bridge.PluggableProtocolException: A channel factory could not be opened. ---> Microsoft.Transactions.Wsat.Messaging.MessagingInitializationException: A channel factory could not be opened. ---> System.ServiceModel.AddressAccessDeniedException: HTTP could not register URL https://+:2372/WsatService/. Your process does not have access rights to this namespace (see http://go.microsoft.com/fwlink/?LinkId=70353 for details). ---> System.Net.HttpListenerException: Access is denied
at System.Net.HttpListener.AddAllPrefixes()
at System.Net.HttpListener.Start()
at System.ServiceModel.Channels.SharedHttpTransportManager.OnOpen()
--- End of inner exception stack trace ---
at System.ServiceModel.Channels.SharedHttpTransportManager.OnOpen()
at System.ServiceModel.Channels.TransportManager.Open(TransportChannelListener channelListener)
at System.ServiceModel.Channels.TransportManagerContainer.Open(SelectTransportManagersCallback selectTransportManagerCallback)
at System.ServiceModel.Channels.TransportChannelListener.OnOpen(TimeSpan timeout)
at System.ServiceModel.Channels.HttpChannelListener`1.OnOpen(TimeSpan timeout)
at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
at System.ServiceModel.Channels.LayeredChannelListener`1.OnOpen(TimeSpan timeout)
at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
at System.ServiceModel.Channels.DatagramChannelDemuxer`2.OnOuterListenerOpen(ChannelDemuxerFilter filter, IChannelListener listener, TimeSpan timeout)
at System.ServiceModel.Channels.SingletonChannelListener`3.OnOpen(TimeSpan timeout)
at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
at System.ServiceModel.Channels.InternalDuplexChannelFactory.OnOpen(TimeSpan timeout)
at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannelFactory.TypedServiceChannelFactory`1.OnOpen(TimeSpan timeout)
at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
at System.ServiceModel.ChannelFactory.OnOpen(TimeSpan timeout)
at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
at Microsoft.Transactions.Wsat.Messaging.CoordinationService.OpenChannelFactory[T](ChannelFactory`1 cf)
--- End of inner exception stack trace ---
at Microsoft.Transactions.Wsat.Messaging.CoordinationService.OpenChannelFactory[T](ChannelFactory`1 cf)
at Microsoft.Transactions.Wsat.Messaging.CoordinationService.Initialize(CoordinationServiceConfiguration config)
at Microsoft.Transactions.Wsat.Messaging.CoordinationService..ctor(CoordinationServiceConfiguration config, ProtocolVersion protocolVersion)
at Microsoft.Transactions.Wsat.Protocol.ProtocolState.RecoveryBeginning()
--- End of inner exception stack trace ---
at Microsoft.Transactions.Wsat.Protocol.ProtocolState.RecoveryBeginning()
at Microsoft.Transactions.Wsat.InputOutput.TransactionManagerReceive.RecoveryBeginning()
Process Name: msdtc
Process ID: 12248
In Component Services, when I restart the local DTC, I get the following entry in the Event Viewer (Application log):
The WS-AT protocol service successfully completed startup and recovery.
Protocol ID: cc228cf4-a9c8-43fc-8281-8565eb5889f2
Protocol Name: WS-AtomicTransaction 1.0
Process Name: msdtc
Process ID: 7744
Both DTCs run under the Network Service user.
Why does the clustered DTC not have access rights to this namespace, while the local DTC does? Both run under the same user.
How can I make the clustered DTC register the URL https://+:2372/WsatService/ successfully?
I finally used port 8444. I had to reserve it with the command:
netsh http add urlacl url=https://+:8444/ user=Everyone
and then I ran wsatconfig, specifying port 8444:
wsatconfig -network:enable -port:8444 -accounts:Everyone -endpointcert:7c6361568413852afb471d5f8b92604cdde530dd -accountsCerts:7c6361568413852afb471d5f8b92604cdde530dd,83112f9b598c4341b3975aba413bf04eb71eb679 -traceLevel:ALL -restart
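As a sanity check (not part of the original steps), netsh can list the HTTP URL ACLs to confirm the new reservation exists and to inspect what, if anything, is reserved for the old URL:

rem Show the reservation that was just added for port 8444
netsh http show urlacl url=https://+:8444/

rem Inspect the URL that WS-AT previously failed to register
netsh http show urlacl url=https://+:2372/WsatService/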
Another time, it helped to disable and re-enable Network DTC Access in the properties of the Local DTC and the Cluster DTC: disable it on the Local DTC (Apply, OK), enable it again (Apply, OK), then disable and re-enable it the same way on the Cluster DTC.

Ejabberd Disco Items not coming

Version: 17.11
Platform: Ubuntu 16.04
With the mod_muc configuration below, sometimes disco items do not load at all.
Here is the configuration I have used for disco items, and below it a crash log I found when it crashes.
mod_muc:
  db_type: sql
  default_room_options:
    - allow_subscription: true
    - mam: true
  access_admin:
    - allow: admin
  access_create: muc_create
  access_persistent: muc_create
  history_size: 100
  max_rooms_discoitems: 1000
  max_user_conferences: 50
  max_users_presence: 50
Also, joining the same MUC that was available earlier no longer connects. If I restart the server, things work well again, and after a certain time the MUCs stop coming up again.
Error Log:
Stopping MUC room x#conference.host.com
2018-07-27 12:57:39.972 [error] <0.32056.26> gen_fsm <0.32056.26> in state normal_state terminated with reason: bad return value: ok
2018-07-27 12:57:39.972 [error] <0.32056.26>#p1_fsm:terminate:760 CRASH REPORT Process <0.32056.26> with 0 neighbours exited with reason: bad return value: ok in p1_fsm:terminate/8 line 760
2018-07-30 05:12:12 =ERROR REPORT====
** State machine <0.9190.27> terminating
** Last event in was {route,<<>>,{iq,<<"qM1F3-119">>,set,<<"en">>,{jid,<<"usr_name">>,<<"x.y.com">>,<<"1140">>,<<"usr_name">>,<<"x.y.com">>,<<"1140">>},{jid,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>},[{xmlel,<<"query">>,[{<<"xmlns">>,<<"urn:xmpp:mam:2">>}],[{xmlel,<<"set">>,[{<<"xmlns">>,<<"http://jabber.org/protocol/rsm">>}],[{xmlel,<<"max">>,[],[{xmlcdata,<<"30">>}]},{xmlel,<<"after">>,[],[]}]},{xmlel,<<"x">>,[{<<"xmlns">>,<<"jabber:x:data">>},{<<"type">>,<<"submit">>}],[{xmlel,<<"field">>,[{<<"var">>,<<"FORM_TYPE">>},{<<"type">>,<<"hidden">>}],[{xmlel,<<"value">>,[],[{xmlcdata,<<"urn:xmpp:mam:2">>}]}]}]}]}],#{ip => {0,0,0,0,0,65535,46291,27829}}}}
** When State == normal_state
** Data == {state,<<"planet_discovery1532511384">>,
<<"conference.x.y.com">>,<<"x.y.com">>,{all,muc_create,[{allow,
[{acl,admin}]}],muc_create},{jid,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>},{config,<<"Planet Discovery">>,<<>>,true,true,true,anyone,true,true,false,true,true,true,false,true,true,true,true,false,<<>>,true,[moderator,participant,visitor],true,1800,200,false,<<>>,{0,nil},true},{dict,1,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[[{<<"usr_name">>,<<"x.y.com">>,<<"1140">>}|{x.y.com,{jid,<<"usr_name">>,<<"x.y.com">>,<<"1140">>,<<"usr_name">>,<<"x.y.com">>,<<"1140">>},<<"usr_name#x.y.com/1140">>,moderator,{presence,<<"qM1F3-116">>,available,<<"en">>,{jid,<<"usr_name">>,<<"x.y.com">>,<<"1140">>,<<"usr_name">>,<<"x.y.com">>,<<"1140">>},{jid,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<"usr_name#x.y.com/1140">>,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<"usr_name#x.y.com/1140">>},undefined,[],undefined,[{xmlel,<<"c">>,[{<<"xmlns">>,<<"http://jabber.org/protocol/caps">>},{<<"hash">>,<<"sha-1">>},{<<"node">>,<<"http://www.igniterealtime.org/projects/smack">>},{<<"ver">>,<<"p801v5l0jeGbLCy09wmWvQCQ7Ok=">>}],[]},{vcard_xupdate,{<<>>,<<>>},undefined}],#{ip => {0,0,0,0,0,65535,46291,27829}}}}]],[],[],[],[],[],[],[],[],[],[]}}},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},nil,{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,1,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[[<<"usr_name#x.y.com/1140">>,{<<"usr_name">>,<<"x.y.com">>,<<"1140">>}]],[],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,3,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[[{<<"usr_name">>,<<"x.y.com">>,<<>>}|{owner,<<>>}]],[],[],[[{<<"miga8747b6">>,<<"x.y.com">>,<<>>}|{owner,<<>>}]],[],[],[],[[{<<"ruba32cc6e">>,<<"x.y.com">>,<<>>}|{owner,<<>>}]],[]}}},{lqueue,{{[],[]},0,unlimited},1000},[],<<>>,false,nil,none,undefined}
** Reason for termination =
** {bad_return_value,ok}
2018-07-30 05:12:12 =CRASH REPORT====
crasher:
initial call: mod_muc_room:init/1
pid: <0.9190.27>
registered_name: []
exception exit: {{bad_return_value,ok},[{p1_fsm,terminate,8,[{file,"src/p1_fsm.erl"},{line,760}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}
ancestors: ['mod_muc_x.y.com',ejabberd_gen_mod_sup,ejabberd_sup,<0.32330.26>]
messages: []
links: []
dictionary: [{'$internal_queue_len',0}]
trap_exit: true
status: running
heap_size: 6772
stack_size: 27
reductions: 3310
neighbours:
2018-07-30 12:41:56 =ERROR REPORT====
What ejabberd version, and how did you install it?
The syntax of your default_room_options is wrong; did you really use that config like that?
And what changes did you make from a stock installation? I mean: did you set up a cluster of several nodes, did you enable other modules that may interfere with mod_muc...?
And most importantly: you have set max_rooms_discoitems to 10000. How many rooms does the service have? That option should be set to a small value, because requesting disco items for 10,000 rooms means requesting information from every single room, and that means 10,000 queries, which can have unknown consequences. Does your problem reproduce if you set a low value, like 100?
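For illustration, a hedged sketch of how those two points might look in the config, assuming the map-style default_room_options syntax shown in the ejabberd documentation; the remaining options are carried over unchanged from the original config:

mod_muc:
  db_type: sql
  default_room_options:
    allow_subscription: true
    mam: true
  access_admin:
    - allow: admin
  access_create: muc_create
  access_persistent: muc_create
  history_size: 100
  max_rooms_discoitems: 100   # keep this small, as suggested above
  max_user_conferences: 50
  max_users_presence: 50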

getaddrinfo: start Temporary failure in name resolution Error opening specified endpoint "start" Server Exiting with code 1

While starting snmpd, I am getting this error in /var/snmpd.log:
getaddrinfo: start Temporary failure in name resolution
Error opening specified endpoint "start"
Server Exiting with code 1
For your info, I am using Fedora 14 and net-snmp 5.7.1.
Thanks in advance for any help.
Error opening specified endpoint "start" Server Exiting with code 1
means some process is already using port 161.
For example, try netstat -anp | grep 161, then stop that process and start snmpd again.
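A minimal sketch of that check (the service name is the Fedora default and an assumption here):

# See which process, if any, is bound to the SNMP port
netstat -anp | grep ':161 '

# Stop the conflicting process (or the stale snmpd instance), then start snmpd again
service snmpd stop
service snmpd start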