I am using FIWARE IdM version 6.2 and I have issues with the Keystone server (running on port 5000).
Keystone works fine until the server is left unused for a while (around 1 hour). After that, the first call that arrives (in my case from the PEP proxy checking an auth token) simply gets no response; Keystone does not send anything back. When I cancel the request and send it again, it starts working normally.
I would like to know if there is something on my part that i missed or failed to check.
I am using Docker to run the FIWARE IdM environment.
Picture of logs
You are using old versions of both Keyrock and Wilma; currently, both of them are at version 7.5.1. Please take a look at Docker Hub (https://hub.docker.com/u/fiware). Nevertheless, the issue that you mention is due to security management: the admin tokens expire after 1 hour, so you need to obtain a new one to continue working with the IdM.
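A minimal sketch of how a fresh token can be requested from Keystone's v3 API, assuming the standard password flow on port 5000; the host name, user name and password below are placeholders, not values from the question:

    // Hypothetical sketch: request a fresh admin token from Keystone's v3 API.
    // "idm-host", "idm_admin" and "idm_admin_pass" are placeholders, not real values.
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class KeystoneTokenRefresh {
        public static void main(String[] args) throws Exception {
            String body = "{\"auth\":{\"identity\":{\"methods\":[\"password\"],"
                    + "\"password\":{\"user\":{\"name\":\"idm_admin\","
                    + "\"domain\":{\"id\":\"default\"},\"password\":\"idm_admin_pass\"}}}}}";

            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://idm-host:5000/v3/auth/tokens").openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream os = conn.getOutputStream()) {
                os.write(body.getBytes(StandardCharsets.UTF_8));
            }

            // Keystone returns the new token in the X-Subject-Token response header.
            System.out.println("HTTP " + conn.getResponseCode());
            System.out.println("X-Subject-Token: " + conn.getHeaderField("X-Subject-Token"));
        }
    }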
Related
I am trying to integrate our Mattermost with Zabbix to receive notifications on alerts. I've followed the instructions in this link. We are using Zabbix 4.4 with Mattermost 5.19.
After enabling the integration, no alerts are being posted to Mattermost. I tried testing the media type under Administration > Media Types > Mattermost > Test.
I added the following parameters, but it throws the error: Connection timeout of 3 seconds exceeded when connecting to Zabbix server "localhost".
bot_token : {Token generated for the bot in Mattermost}
mattermost_url : {https://mattermost.our-company.com}
send_mode : alarm
I tried changing {ZABBIX_URL} to both http://127.0.0.1 and http://zabbix.our-company.com (the DNS name resolves only internally, but our Mattermost is available on the public network), but neither of them works.
I checked the logs inside /var/log/zabbix but found no errors or anything else. I even switched the Zabbix logs to debug mode, but no luck in any case; the only debug log I got is the following:
2063:20200216:090224.146 trapper got '{"request":"alert.send","sid":"74095b240dd6783618571516f029187a","data":{"parameters":{"zabbix_url":"{$ZABBIX.URL}","send_mode":"alarm","send_to":"{ALERT.SENDTO}","event_tags":"{EVENT.TAGS}","event_name":"{EVENT.NAME}","event_nseverity":"{EVENT.NSEVERITY}","event_ack_status":"{EVENT.ACK.STATUS}","event_value":"{EVENT.VALUE}","event_update_status":"{EVENT.UPDATE.STATUS}","event_date":"{EVENT.DATE}","event_time":"{EVENT.TIME}","event_severity":"{EVENT.SEVERITY}","event_opdata":"{EVENT.OPDATA}","event_id":"{EVENT.ID}","event_update_message":"{EVENT.UPDATE.MESSAGE}","trigger_id":"{TRIGGER.ID}","trigger_description":"{TRIGGER.DESCRIPTION}","host_name":"{HOST.NAME}","host_ip":"{HOST.IP}","event_update_date":"{EVENT.UPDATE.DATE}","event_update_time":"{EVENT.UPDATE.TIME}","event_recovery_date":"{EVENT.RECOVERY.DATE}","event_recovery_time":"{EVENT.RECOVERY.TIME}","bot_token":"qs3rkqdappy6i8gs3a8871phxc","mattermost_url":"https:\/\/mattermost.our-company.com"},"mediatypeid":"7"}}'
What can the issue be? Is there a way to "debug" this and find the root cause of the error? Any help is appreciated! Note that we currently have Slack integrated with Zabbix and it works fine, but we are moving to Mattermost and therefore need to migrate the integrations as well.
We found the issue together with our network admin. The problem was that our Zabbix server was trying to resolve the Mattermost name through a local network route (i.e. 192.168.x.x) and kept failing, so no SSL connection could be initiated.
It seems that the error messages of Zabbix's integration tests are quite generic and sometimes misleading; a thorough investigation is needed to find the root cause.
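For anyone hitting the same wall: a quick way to verify this kind of root cause is to check, from the Zabbix server itself, which address the Mattermost name resolves to and whether an HTTPS connection can be opened at all. A minimal sketch; the hostname is the one from the question, so adjust it to your own:

    // Hypothetical diagnostic: run this on the Zabbix server host to see which
    // address the Mattermost name resolves to and whether HTTPS is reachable.
    import java.net.HttpURLConnection;
    import java.net.InetAddress;
    import java.net.URL;

    public class MattermostReachability {
        public static void main(String[] args) throws Exception {
            String host = "mattermost.our-company.com"; // hostname from the question

            // 1. Which address does this machine resolve the name to?
            for (InetAddress addr : InetAddress.getAllByName(host)) {
                System.out.println(host + " -> " + addr.getHostAddress());
            }

            // 2. Can an HTTPS connection be opened within a short timeout?
            HttpURLConnection conn =
                    (HttpURLConnection) new URL("https://" + host).openConnection();
            conn.setConnectTimeout(3000);
            conn.setReadTimeout(3000);
            System.out.println("HTTPS status: " + conn.getResponseCode());
        }
    }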
I am integrating WireCloud and FIWARE IdM. I installed both through Docker successfully. However, after installing FIWARE IdM, I am not able to log in as admin (username admin@test.com, password 1234).
Every time, it redirects me to "ip:3000/auth/login". Do I have to make any other configuration in WireCloud or FIWARE IdM?
Also, even after entering wrong credentials, it redirects me to /auth/login and does not display any error message.
My WireCloud, FIWARE IdM and MySQL database are in different containers. Could this be the issue?
The IdM should be deployed as in production to be used by WireCloud. That is, you should configure the IdM service using public domain names, HTTPS, and so on... It seems you are creating a local installation, so you will need some workarounds. That said, some of those requirements are not enforced by WireCloud, so it should be enough to ensure you use a domain name for accessing the IdM.
You can simulate having the IdM server configured with a public domain by adding the proper value to /etc/hosts (see this link if you are running Windows); the correct value depends on how you configured the IdM service. The idea is to ensure that the domain used for accessing the IdM resolves to the correct IP address both inside the WireCloud container and from your local computer. We can provide more detailed steps if you give us more details about how you are launching the different containers.
I'm writing an application which delivers data from remote devices over an HTTP API. These devices are on a mobile data connection and have limited resources.
I wish to receive custom monitoring data over the HTTP API, relying on the security model designed in the application, and push that data to Zabbix directly (or indirectly) from node.js. I do not wish to use Zabbix Agent on the remote devices.
I see that I can use zabbix_sender to send data to a Zabbix server containing a pre-configured host. This works great. I intend to deliver monitoring data over my custom API and, when it is received, pass it to zabbix_sender inside the server network.
The problem is there are many devices in the field and more are being added all the time.
TL;DR:
When zabbix_sender is given a custom hostname which doesn't already exist in Zabbix, it fails.
I would like to auto-add discovered hosts, based upon new hostnames from zabbix_sender. How would I do this?
Also, extra respect if anyone can give examples of how to avoid zabbix_sender and send data directly from node.js to the Zabbix server. I mean: suggest an NPM package that you have experience using. (Update: Found working node.js package here: https://www.npmjs.com/package/node-zabbix-sender)
Zabbix configuration: I'm learning on Zabbix 2.4 installed in Docker, with no custom configuration, from this Docker Hub image: https://hub.docker.com/r/zabbix/zabbix-2.4/
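For reference, the wire format that zabbix_sender (and the node-zabbix-sender package) speaks is simple enough to implement directly. A minimal sketch, shown here in Java rather than node.js; the server name, host name and item key are placeholders, and the host and trapper item are assumed to already exist in Zabbix:

    // Minimal sketch of the Zabbix "sender"/trapper protocol, which is what
    // zabbix_sender and the node-zabbix-sender package speak under the hood.
    // "zabbix-server", "device-001" and "custom.metric" are placeholders.
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.charset.StandardCharsets;

    public class ZabbixSenderSketch {
        public static void main(String[] args) throws Exception {
            String json = "{\"request\":\"sender data\",\"data\":["
                    + "{\"host\":\"device-001\",\"key\":\"custom.metric\",\"value\":\"42\"}]}";
            byte[] payload = json.getBytes(StandardCharsets.UTF_8);

            // Frame: "ZBXD" + 0x01 + 8-byte little-endian payload length + payload.
            ByteBuffer frame = ByteBuffer.allocate(13 + payload.length).order(ByteOrder.LITTLE_ENDIAN);
            frame.put("ZBXD".getBytes(StandardCharsets.US_ASCII)).put((byte) 1);
            frame.putLong(payload.length).put(payload);

            try (Socket socket = new Socket("zabbix-server", 10051)) {
                OutputStream out = socket.getOutputStream();
                out.write(frame.array());
                out.flush();

                // The reply uses the same framing; skip the 13 header bytes and print the JSON.
                InputStream in = socket.getInputStream();
                byte[] reply = new byte[1024];
                int read = in.read(reply);
                System.out.println(new String(reply, 13, Math.max(0, read - 13), StandardCharsets.UTF_8));
            }
        }
    }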
Probably the best approach would be to use the Zabbix API to create the hosts directly (see the sketch at the end of this answer).
Alternatively, you could set up an action and emulate an active agent connection, which would make Zabbix create the host via active agent auto-registration.
You could also use low-level discovery (LLD) to send in JSON, which would result in hosts/items being created based on prototypes.
In all of these cases you have to wait up to one minute (by default) for the hosts to appear in the Zabbix cache; only then can you send the data.
Also note that Zabbix 2.4 is not supported anymore and will receive no fixes - it is not a "long-term support" release.
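A minimal sketch of the first option mentioned above, creating the host through the Zabbix JSON-RPC API before sending data for it; the frontend URL, the auth token (obtained beforehand via user.login), the group id and the interface details are all placeholders:

    // Hypothetical sketch: create a host through the Zabbix JSON-RPC API.
    // URL, auth token, group id and interface values are placeholders.
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Scanner;

    public class ZabbixHostCreate {
        public static void main(String[] args) throws Exception {
            String request = "{\"jsonrpc\":\"2.0\",\"method\":\"host.create\",\"params\":{"
                    + "\"host\":\"device-001\","
                    + "\"interfaces\":[{\"type\":1,\"main\":1,\"useip\":1,\"ip\":\"127.0.0.1\",\"dns\":\"\",\"port\":\"10050\"}],"
                    + "\"groups\":[{\"groupid\":\"2\"}]},"
                    + "\"auth\":\"PUT-YOUR-API-TOKEN-HERE\",\"id\":1}";

            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://zabbix.example.com/api_jsonrpc.php").openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json-rpc");
            conn.setDoOutput(true);
            try (OutputStream os = conn.getOutputStream()) {
                os.write(request.getBytes(StandardCharsets.UTF_8));
            }

            // Expected shape on success: {"jsonrpc":"2.0","result":{"hostids":["..."]},"id":1}
            try (Scanner sc = new Scanner(conn.getInputStream(), StandardCharsets.UTF_8.name())) {
                System.out.println(sc.useDelimiter("\\A").next());
            }
        }
    }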
We are experiencing an issue when trying to connect to the cluster after updating the version of the Java SDK.
The setup of the system is as follows:
We have a web application that uses the Java SDK and a Couchbase cluster. In between we have a VIP (virtual IP address). We realise that isn't ideal, but we're not able to change it immediately, since the VIP was mandated by Tech Ops. The VIP is basically only there to reroute the initial request on application startup. That way we can make modifications on the cluster and ensure that, when the application starts, it can find the cluster regardless of the actual nodes in the cluster and their IPs.
Prior to the issue we used Java SDK version 1.4.4. Our application would start and the Java SDK would initiate a request on port 8091 to the VIP. Please note that port 8091 is the only port open on the VIP. The VIP would reroute the request to one of the cluster nodes currently in use and the cluster would respond to the Java SDK. At that point the Java SDK would discover all the nodes in the cluster and the application would run fine. During uptime, if we added or removed a node from the cluster, the Java SDK would update automatically and everything would keep running without issues.
In the last sprint we updated the Java SDK to version 2.1.3. Now our application starts and the Java SDK initiates a request on port 11210 to the VIP. Since this port is not open, the request fails and the Java SDK throws an exception:
Caused by: java.lang.RuntimeException: java.util.concurrent.TimeoutException
at com.couchbase.client.java.util.Blocking.blockForSingle(Blocking.java:93)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:108)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:99)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:89)
No further requests are made on any port.
It appears the order in which ports are used has changed between versions. Could somebody please confirm, or dispute, that the order in which ports are used for cluster discovery has changed between versions? Also, could somebody please provide some advice on how we could resolve the issue? We are trying to understand the client's behaviour: if we opened all those ports on the VIP, would the client then function correctly and at full performance?
The issue is happening in our production environment, which we cannot use for testing out potential solutions, since that would interfere with our products.
In v2.x of the Java SDK, it defaults to port 11210 to get the cluster map used to bootstrap the application. This is actually a huge improvement, as the map now comes from the managed cache and not the cluster manager (8091). The SDK should fall back to 8091 if it cannot get the map on 11210, though. Regardless, you really want to get that map from 11210, trust me; it cleans up a lot of problems.
To resolve this long term and follow Couchbase best practices, upgrade to the Java 2.2.x SDK, get rid of the VIP entirely and go with a DNS SRV record instead. That gives you one DNS entry for the SDK connection object, and you just manage the node list in DNS. It works great. I say SDK 2.2 because the DNS SRV record solution is fully supported there; in 2.1 it is experimental. VIPs are specifically recommended against by Couchbase these days. In older versions of the SDKs it was fine to do this, and it helped with limiting the number of connections from the app to the DB nodes, but that is no longer necessary and can actually be a bad thing.
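A minimal sketch of that DNS SRV approach with the 2.2.x Java SDK; "cb.example.com" and "default" are placeholders, and SRV records are assumed to be published as _couchbase._tcp.cb.example.com pointing at the cluster nodes:

    // Sketch of DNS SRV bootstrap with the 2.2.x Java SDK; names are placeholders.
    import com.couchbase.client.java.Bucket;
    import com.couchbase.client.java.CouchbaseCluster;
    import com.couchbase.client.java.env.CouchbaseEnvironment;
    import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

    public class DnsSrvBootstrap {
        public static void main(String[] args) {
            CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
                    .dnsSrvEnabled(true)   // resolve the node list from the DNS SRV record
                    .build();

            // Only the single SRV name is passed in; the SDK expands it to the node list.
            CouchbaseCluster cluster = CouchbaseCluster.create(env, "cb.example.com");
            Bucket bucket = cluster.openBucket("default");
            System.out.println("Connected to bucket: " + bucket.name());
        }
    }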
In addition to Kirk's long-term answer (which I also advise you to follow), a shorter-term solution may be to deactivate the 11210 bootstrapping (carrier bootstrap) through the CouchbaseEnvironment by calling bootstrapCarrierEnabled(false) on the builder.
I can't guarantee that it will work with a VIP even after that, but it may be worth a try if you're in a hurry.
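A minimal sketch of that shorter-term option; "vip.example.com" stands in for the VIP's name or address and "default" for your bucket:

    // Sketch: disable carrier (port 11210) bootstrap so the 2.x SDK bootstraps
    // over HTTP on port 8091 behind the VIP instead. Names are placeholders.
    import com.couchbase.client.java.CouchbaseCluster;
    import com.couchbase.client.java.env.CouchbaseEnvironment;
    import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

    public class HttpBootstrapOnly {
        public static void main(String[] args) {
            CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
                    .bootstrapCarrierEnabled(false)   // skip 11210, bootstrap over 8091
                    .build();

            CouchbaseCluster cluster = CouchbaseCluster.create(env, "vip.example.com");
            cluster.openBucket("default");
        }
    }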
I want to send email with Exchange by using telnet to port 25. Until two weeks ago I was able to, but now a "security fix" from Microsoft has removed this possibility.
When I try, I get this message:
421 4.3.2 Service not available, closing transmission channel
What can I do?
I use a service (Message Labs (ML)) to filter out all the spam. We got a new internet connection, and in the process of re-configuring ML's inbound/outbound services to the new IP, I got an error. So I tested from outside by telnetting to the IP on port 25 and got the "421 4.3.2 Service not available, closing transmission channel" error. What I didn't realize at first was that it failed because I had set a specific grouping of IPs on the 2007 Edge server receive connector (for the ML servers). So I added my LAN network and, additionally, another IP for the external host I was testing from, and lo and behold, I could connect from both.
What I figured was happening with ML was that the server they use for testing connectivity was on an address that was excluded by the Edge server.
So I removed my testing IPs and created a new, temporary receive connector on the Edge server, accepting connections from all addresses (0.0.0.0 - 255.255.255.255). I then submitted the change to ML again and, guess what, this time they accepted it. Now I'll simply remove the test receive connector and everything should be golden.
SMTP is the protocol used to receive email from the rest of the world, so I doubt that Microsoft has dropped it. There must be some other misconfiguration on your server.
Try double-checking your relay settings and the event log on your Exchange server.
I found the answer at this website:
http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=2900802&SiteID=17
Thanks for your help!
Basically, this functionality was removed by default, and it can be restored by means of an ad hoc configuration - but with no guarantee that further "updates" won't break the system again. Thanks, Microsoft.
After more than 5 years of flawless operation, the 2010 Edge server suddenly stopped accepting mail with "421 4.3.2 Service not available". The SmtpReceive log (Get-TransportServer | select ReceiveProtocolLogPath) confirmed that it was indeed the Edge server generating this error.
The Edge server had two IP addresses on a single NIC. After the following steps everything worked fine again:
remove one IP address from the NIC on the Edge server
update the static entry in DNS to point to the second IP address
on the Default internal receive connector, allow receiving mail on all available IPv4 addresses
Note: this setup is not a security best practice for a DMZ. It is better to use two NICs, each with a leg in a different zone.