My Proton instance fails with a java.lang.NullPointerException whenever an event is sent by Orion
This is the Proton log:
proton_1 | 01-Jul-2016 09:46:03.117 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom started event message body reader
proton_1 | 01-Jul-2016 09:46:03.125 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom Event: ApeContextUpdate
proton_1 | 01-Jul-2016 09:46:03.126 SEVERE [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom Could not parse XML NGSI event java.lang.NullPointerException, reason: null
proton_1 | last attribute name: null last value: null
proton_1 | 01-Jul-2016 09:46:03.130 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom finished event message body reader
proton_1 | 01-Jul-2016 09:46:03.131 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.resources.EventResource.submitNewEvent starting submitNewEvent
proton_1 | 01-Jul-2016 09:46:03.132 SEVERE [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.resources.EventResource.submitNewEvent Could not send event, reason: java.lang.NullPointerException, message: null
I've read the appendix of the user guide and double-checked the event name and the attribute list.
This is the XML sent by Orion:
POST /ProtonOnWebServer/rest/events HTTP/1.1
User-Agent: orion/0.28.0 libcurl/7.19.7
Host: localhost:8080
Accept: application/xml, application/json
Content-length: 772
Content-type: application/xml
<notifyContextRequest>
<subscriptionId>57762eb9982959644644f9ee</subscriptionId>
<originator>localhost</originator>
<contextResponseList>
<contextElementResponse>
<contextElement>
<entityId type="Ape" isPattern="false">
<id>u1</id>
</entityId>
<contextAttributeList>
<contextAttribute>
<name>carsharing</name>
<type>urn:x-ogc:def:trs:IDAS:1.0:ISO8601</type>
<contextValue>2016-07-01T11:01:06</contextValue>
</contextAttribute>
</contextAttributeList>
</contextElement>
<statusCode>
<code>200</code>
<reasonPhrase>OK</reasonPhrase>
</statusCode>
</contextElementResponse>
</contextResponseList>
</notifyContextRequest>
This is the definition of the Proton project (by the way, this is the project copied from the server filesystem, because the REST API also fails with a NullPointerException):
{
"epn": {
"events": [
{
"name": "ApeContextUpdate",
"createdDate": "Fri Jul 01 2016",
"attributes": [
{
"name": "entityId",
"type": "String",
"dimension": "0"
},
{
"name": "entityType",
"type": "String",
"dimension": "0"
},
{
"name": "carsharing",
"type": "Date",
"dimension": "0"
}
]
}
],
"epas": [],
"contexts": {
"temporal": [],
"segmentation": [],
"composite": []
},
"consumers": [],
"producers": [],
"name": "t0"
}
}
And this is my docker-compose file:
mongo:
image: mongo:2.6
command: --smallfiles --quiet
proton:
image: fiware/proactivetechnologyonline
ports:
- "8080:8080"
orion:
image: fiware/orion:0.28
links:
- mongo
- proton
command: -dbhost mongo --silent
ports:
- "1026:1026"
I'm using Orion 0.28 (the last one that supports XML notifications) and the latest Proton.
UPDATE 1 - catalina.log
07-Jul-2016 07:52:39.914 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom started event message body reader
07-Jul-2016 07:52:39.924 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom Event: ApeContextUpdate
07-Jul-2016 07:52:39.924 SEVERE [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom Could not parse XML NGSI event java.lang.NullPointerException, reason: null
last attribute name: null last value: null
07-Jul-2016 07:52:39.928 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom finished event message body reader
07-Jul-2016 07:52:39.929 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.resources.EventResource.submitNewEvent starting submitNewEvent
07-Jul-2016 07:52:39.929 SEVERE [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.resources.EventResource.submitNewEvent Could not send event, reason: java.lang.NullPointerException, message: null
The problem seems to be that your Proton instance is not actually configured with your project's JSON definition file; therefore, whatever POST you send, you will always get a NullPointerException, since no such event can be found in Proton's metadata.
Please try to configure your instance's admin interface, as described here:
http://proactive-technology-online.readthedocs.io/en/latest/Proton-InstallationAndAdminGuide/index.html (Setup Apache Tomcat for management part)
And then run the following query:
GET http://<ip of the machine running Proton>:8080/ProtonOnWebServerAdmin/resources/definitions
This should return all the project definitions this instance has.
Then, if you see your project in the list, you can retrieve its specific definition by running:
GET http://<ip of the machine running Proton>:8080/ProtonOnWebServerAdmin/resources/definitions/{definition_name}
I think this will either return nothing or be empty.
You can update the definitions by using the RESTful interface, as described here: http://forge.fiware.org/plugins/mediawiki/wiki/fiware/index.php/Complex_Event_Processing_Open_RESTful_API_Specification (under the Managing Definitions Repository part).
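For example, a hedged sketch with curl (the host is a placeholder, and the exact upload path and payload should be verified against the "Managing Definitions Repository" section linked above):
# List all project definitions known to this instance
curl http://<proton-host>:8080/ProtonOnWebServerAdmin/resources/definitions
# If your project is missing or empty, upload the project JSON shown above (saved locally as t0.json)
curl -X POST http://<proton-host>:8080/ProtonOnWebServerAdmin/resources/definitions -H "Content-Type: application/json" -d @t0.json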
We have enabled ACLs and TLS for our Consul cluster, and we have disabled the UI as well. But I can still access URLs such as http://<consul_agent>:8500/v1/coordinate/datacenters. How can I disable URLs like this?
I tested adding the following to consulConfig.json:
"ports":{
"http": -1
}
This did not solve the problem.
Apart from the suggestion to use "http_config": { "block_endpoints": ... }, I am trying to use an ACL policy to see if that can solve it.
I enabled ACLs first.
I created a policy using the command: consul acl policy create -name "urlblock" -description "Url Block Policy" -rules @service_block.hcl -token <tokenvalue>
Contents of service_block.hcl: service_prefix "/v1/status/leader" { policy = "deny" }
I created an agent token for this using the command: consul acl token create -description "Block Policy Token" -policy-name "urlblock" -token <tokenvalue>
I copied the agent token from the output of the above command and pasted it into the consul_config.json file, in the acl -> tokens section, as "tokens": { "agent": "<agenttokenvalue>" } (see the sketch below).
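The relevant acl section of consul_config.json now looks roughly like this (a sketch only; the token value is a placeholder):
"acl": {
  "enabled": true,
  "tokens": {
    "agent": "<agenttokenvalue>"
  }
}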
I restarted the Consul agents (and did the same on the Consul client as well).
I am still able to access the endpoint /v1/status/leader. Any ideas as to what is wrong with this approach?
That configuration should properly disable the HTTP server. I was able to validate this works using the following config with Consul 1.9.5.
Disabling Consul's HTTP server
Create config.json in the agent's configuration directory which completely disables the HTTP API port.
config.json
{
"ports": {
"http": -1
}
}
Start the Consul agent
$ consul agent -dev -config-file=config.json
==> Starting Consul agent...
Version: '1.9.5'
Node ID: 'ed7f0050-8191-999c-a53f-9ac48fd03f7e'
Node name: 'b1000.local'
Datacenter: 'dc1' (Segment: '<all>')
Server: true (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: -1, HTTPS: -1, gRPC: 8502, DNS: 8600)
Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false
==> Log data will now stream in as it occurs:
...
Note the HTTP port is set to "-1" on the Client Addr line. The port is now inaccessible.
Test connectivity to HTTP API
$ curl localhost:8500
curl: (7) Failed to connect to localhost port 8500: Connection refused
Blocking access to specific API endpoints
Alternatively you can block access to specific API endpoints, without completely disabling the HTTP API, by using the http_config.block_endpoints configuration option.
For example:
Create a config named block-endpoints.json
{
"http_config": {
"block_endpoints": [
"/v1/catalog/datacenters",
"/v1/coordinate/datacenters",
"/v1/status/leader",
"/v1/status/peers"
]
}
}
Start Consul with this config
$ consul agent -dev -config-file=block-endpoints.json
==> Starting Consul agent...
Version: '1.9.5'
Node ID: '8ff15668-8624-47b5-6e83-7a8bfd715a56'
Node name: 'b1000.local'
Datacenter: 'dc1' (Segment: '<all>')
Server: true (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: 8502, DNS: 8600)
Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false
==> Log data will now stream in as it occurs:
...
In this example, the HTTP API is enabled and listening on port 8500.
Test connectivity to HTTP API
If you issue a request to one of the blocked endpoints, the following error will be returned.
$ curl localhost:8500/v1/status/peers
Endpoint is blocked by agent configuration
However, access to other endpoints is still permitted.
$ curl localhost:8500/v1/agent/members
[
{
"Name": "b1000.local",
"Addr": "127.0.0.1",
"Port": 8301,
"Tags": {
"acls": "0",
"build": "1.9.5:3c1c2267",
"dc": "dc1",
"ft_fs": "1",
"ft_si": "1",
"id": "6d157a1b-c893-3903-9037-2e2bd0e6f973",
"port": "8300",
"raft_vsn": "3",
"role": "consul",
"segment": "",
"vsn": "2",
"vsn_max": "3",
"vsn_min": "2",
"wan_join_port": "8302"
},
"Status": 1,
"ProtocolMin": 1,
"ProtocolMax": 5,
"ProtocolCur": 2,
"DelegateMin": 2,
"DelegateMax": 5,
"DelegateCur": 4
}
]
Our origin-node.service on the master node fails with:
root@master> systemctl start origin-node.service
Job for origin-node.service failed because the control process exited with error code. See "systemctl status origin-node.service" and "journalctl -xe" for details.
root@master> systemctl status origin-node.service -l
[...]
May 05 07:17:47 master origin-node[44066]: bootstrap.go:195] Part of the existing bootstrap client certificate is expired: 2020-02-20 13:14:27 +0000 UTC
May 05 07:17:47 master origin-node[44066]: bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
May 05 07:17:47 master origin-node[44066]: certificate_store.go:131] Loading cert/key pair from "/etc/origin/node/certificates/kubelet-client-current.pem".
May 05 07:17:47 master origin-node[44066]: server.go:262] failed to run Kubelet: cannot create certificate signing request: Post https://lb.openshift-cluster.mydomain.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: EOF
So it seems that kubelet-client-current.pem and/or kubelet-server-current.pem contains an expired certificate and the service tries to create a CSR using an endpoint which is probably not yet available (because the master is down). We tried redeploying the certificates according to the OpenShift documentation Redeploying Certificates, but this fails while detecting an expired certificate:
root@master> ansible-playbook -i /etc/ansible/hosts openshift-master/redeploy-openshift-ca.yml
[...]
TASK [openshift_certificate_expiry : Fail when certs are near or already expired] *******************************************************************************************************************************************
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"changed": false, "msg": "Cluster certificates found to be expired or within 60 days of expiring. You may view the report at /root/cert-expiry-report.20200505T042754.html or /root/cert-expiry-report.20200505T042754.json.\n"}
[...]
root@master> cat /root/cert-expiry-report.20200505T042754.json
[...]
"kubeconfigs": [
{
"cert_cn": "O:system:cluster-admins, CN:system:admin",
"days_remaining": -75,
"expiry": "2020-02-20 13:14:27",
"health": "expired",
"issuer": "CN=openshift-signer#1519045219 ",
"path": "/etc/origin/node/node.kubeconfig",
"serial": 27,
"serial_hex": "0x1b"
},
{
"cert_cn": "O:system:cluster-admins, CN:system:admin",
"days_remaining": -75,
"expiry": "2020-02-20 13:14:27",
"health": "expired",
"issuer": "CN=openshift-signer#1519045219 ",
"path": "/etc/origin/node/node.kubeconfig",
"serial": 27,
"serial_hex": "0x1b"
},
[...]
"summary": {
"expired": 2,
"ok": 22,
"total": 24,
"warning": 0
}
}
There is a guide for OpenShift 4.4 for Recovering from expired control plane certificates, but that does not apply for 3.11 and we did not find such a guide for our version.
Is it possible to recreate the expired certificates without a running master node for 3.11? Thanks for any help.
OpenShift Ansible: https://github.com/openshift/openshift-ansible/releases/tag/openshift-ansible-3.11.153-2
Update 2020-05-06: I also executed redeploy-certificates.yml, but it fails at the same TASK:
root@master> ansible-playbook -i /etc/ansible/hosts playbooks/redeploy-certificates.yml
[...]
TASK [openshift_certificate_expiry : Fail when certs are near or already expired] ******************************************************************************
Wednesday 06 May 2020 04:07:06 -0400 (0:00:00.909) 0:01:07.582 *********
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"changed": false, "msg": "Cluster certificates found to be expired or within 60 days of expiring. You may view the report at /root/cert-expiry-report.20200506T040603.html or /root/cert-expiry-report.20200506T040603.json.\n"}
Update 2020-05-11: Running with -e openshift_certificate_expiry_fail_on_warn=False results in:
root@master> ansible-playbook -i /etc/ansible/hosts -e openshift_certificate_expiry_fail_on_warn=False playbooks/redeploy-certificates.yml
[...]
TASK [Wait for master API to come back online] *****************************************************************************************************************
Monday 11 May 2020 03:48:56 -0400 (0:00:00.111) 0:02:25.186 ************
skipping: [master.openshift-cluster.mydomain.com]
TASK [openshift_control_plane : restart master] ****************************************************************************************************************
Monday 11 May 2020 03:48:56 -0400 (0:00:00.257) 0:02:25.444 ************
changed: [master.openshift-cluster.mydomain.com] => (item=api)
changed: [master.openshift-cluster.mydomain.com] => (item=controllers)
RUNNING HANDLER [openshift_control_plane : verify API server] **************************************************************************************************
Monday 11 May 2020 03:48:57 -0400 (0:00:00.945) 0:02:26.389 ************
FAILED - RETRYING: verify API server (120 retries left).
FAILED - RETRYING: verify API server (119 retries left).
[...]
FAILED - RETRYING: verify API server (1 retries left).
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"attempts": 120, "changed": false, "cmd": ["curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://lb.openshift-cluster.mydomain.com:8443/healthz/ready"], "delta": "0:00:00.182367", "end": "2020-05-11 03:51:52.245644", "msg": "non-zero return code", "rc": 35, "start": "2020-05-11 03:51:52.063277", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
root@master> systemctl status origin-node.service -l
[...]
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: E0511 04:23:28.077964 109972 bootstrap.go:195] Part of the existing bootstrap client certificate is expired: 2020-02-20 13:14:27 +0000 UTC
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: I0511 04:23:28.078001 109972 bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: I0511 04:23:28.080555 109972 certificate_store.go:131] Loading cert/key pair from "/etc/origin/node/certificates/kubelet-client-current.pem".
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: F0511 04:23:28.130968 109972 server.go:262] failed to run Kubelet: cannot create certificate signing request: Post https://lb.openshift-cluster.mydomain.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: EOF
[...]
I had this same case in a customer environment. The error occurs because the certificate had expired; I "cheated" by changing the OS date to a point before the expiry date, and the origin-node service started on my masters:
systemctl status origin-node
● origin-node.service - OpenShift Node
Loaded: loaded (/etc/systemd/system/origin-node.service; enabled; vendor preset: disabled)
Active: active (running) since Sáb 2021-02-20 20:22:21 -02; 6min ago
Docs: https://github.com/openshift/origin
Main PID: 37230 (hyperkube)
Memory: 79.0M
CGroup: /system.slice/origin-node.service
└─37230 /usr/bin/hyperkube kubelet --v=2 --address=0.0.0.0 --allow-privileged=true --anonymous-auth=true --authentication-token-webhook=true --authentication-token-webhook-cache-ttl=5m --authorization-mode=Webhook --authorization-webhook-c...
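Roughly, the "cheat" was something like this (a sketch only, not the exact commands; pick a date before the expiry shown above, and restore the correct date and time synchronization afterwards):
timedatectl set-ntp false             # stop clock synchronization so the manual change sticks
date --set "2020-02-19 12:00:00"      # any date before the certificate expiry (2020-02-20)
systemctl start origin-node.service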
The openshift_certificate_expiry role uses the openshift_certificate_expiry_fail_on_warn variable to determine if the playbook should fail when the days left are less than openshift_certificate_expiry_warning_days.
So try running the redeploy-certificates.yml with this additional variable set to "False":
ansible-playbook -i /etc/ansible/hosts -e openshift_certificate_expiry_fail_on_warn=False playbooks/redeploy-certificates.yml
I have an Artifactory Pro license, and I called the REST API as described in the following page:
https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API#ArtifactoryRESTAPI-CreateRepository
I have verified that all other APIs, such as repository listing and account creation and listing, work normally, but I have confirmed that the repository creation API fails with a 400 error.
I wanted to see the cause of the error by changing the log level, but even at the trace log level there was no information about why a 400 error was returned.
Below are related logs:
2018-06-15 10:31:34,028 [http-nio-8081-exec-15] [TRACE] (o.a.a.d.r.DockerV2AuthenticationFilter:84) - DockerV2AuthenticationFilter path: /api/repositories/newrepo
2018-06-15 10:31:34,028 [http-nio-8081-exec-15] [DEBUG] (o.a.w.s.a.AuthenticationFilterUtils:105) - Entering ArtifactorySsoAuthenticationFilter.getRemoteUserName
2018-06-15 10:31:34,028 [http-nio-8081-exec-15] [DEBUG] (o.a.w.s.AccessFilter:299) - Cached key has been found for request: '/artifactory/api/repositories/newrepo' with method: 'PUT'
2018-06-15 10:31:34,028 [http-nio-8081-exec-15] [TRACE] (o.a.s.PasswordDecryptingManager:95) - Received authentication request for org.artifactory.security.props.auth.PropsAuthenticationToken#3dc5bccf: Principal: null; Credentials: [PROTECTED]; Authenticated: false; Details: org.springframework.security.web.authentication.WebAuthenticationDetails#b364: RemoteIpAddress: {IP}; SessionId: null; Not granted any authorities
2018-06-15 10:31:34,029 [http-nio-8081-exec-15] [DEBUG] (o.j.a.c.h.AccessHttpClient:109) - Executing : GET http://localhost:8040/access/api/v1/users/?cd=apiKey_shash%3DGprGDe&exactKeyMatch=false
2018-06-15 10:31:34,035 [http-nio-8081-exec-15] [DEBUG] (o.a.w.s.AccessFilter:305) - Header authentication org.artifactory.security.props.auth.PropsAuthenticationToken#c20ca8df: Principal: admin; Credentials: [PROTECTED]; Authenticated: true; Details: org.springframework.security.web.authentication.WebAuthenticationDetails#b364: RemoteIpAddress: {IP}; SessionId: null; Granted Authorities: admin, user found in cache.
2018-06-15 10:31:34,035 [http-nio-8081-exec-15] [DEBUG] (o.a.w.s.RepoFilter :100) - Entering request PUT (10.191.128.129) /api/repositories/newrepo.
2018-06-15 10:31:34,038 [http-nio-8081-exec-15] [DEBUG] (o.a.w.s.RepoFilter :188) - Exiting request PUT (10.191.128.129) /api/repositories/newrepo
Updated
My Artifactory Version: 6.0.2
Response message from Artifactory:
{
"errors" : [ {
"status" : 400,
"message" : "No valid type of repository found.\n"
} ]
}
Repository create JSON message:
{
"key": "newrepo",
"rclass: "local",
"packageType": "docker",
"dockerApiVersion": "V2",
"includesPattern": "**/*",
"excludesPattern": "",
"repoLayoutRef": "simple-default",
"description": "",
"checksumPolicyType": "client-checksums",
"blackedOut": false,
"propertySets": ["artifactory"]
}
The error in this block is on purpose, and the code highlighting finds it quite nicely, but when this post was originally made, highlighting was not available on SO.
In your JSON you are missing " after the rclass.
You wrote ' "rclass: ' and it should be ' "rclass": '
Once you fix this, the command should work properly.
Good luck :)
curl -iuadmin:password -X PUT http://localhost:8081/artifactory/api/repositories/newrepo -H "Content-type:application/vnd.org.jfrog.artifactory.repositories.LocalRepositoryConfiguration+json" -T repo_temp.json
HTTP/1.1 100 Continue
HTTP/1.1 200 OK
Server: Artifactory/5.11.0
X-Artifactory-Id: bea9f3f68aa06e62:4db81752:1643a9cff9e:-8000
Content-Type: text/plain
Transfer-Encoding: chunked
Date: Tue, 26 Jun 2018 06:57:24 GMT
Successfully created repository 'newrepo'
repo_temp.json:
{
"key": "newrepo",
"rclass": "local",
"packageType": "docker",
"dockerApiVersion": "V2",
"includesPattern": "**/*",
"excludesPattern": "",
"repoLayoutRef": "simple-default",
"description": "",
"checksumPolicyType": "client-checksums",
"blackedOut": false,
"propertySets": ["artifactory"]
}
This error is (somehow) returned by Artifactory if the content-type header contains the charset, for example: Content-Type: application/json; charset=UTF-8
Try with simply Content-Type: application/json
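For example, the same PUT as in the earlier answer, with a plain JSON content type and no charset parameter (host, credentials and file name are placeholders):
curl -iuadmin:password -X PUT http://localhost:8081/artifactory/api/repositories/newrepo -H "Content-Type: application/json" -T repo_temp.json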
I installed a KMS container on my server and downloaded the Kurento hello world Java application onto the server, but when I go to my Java web application using my server IP address, I have no remote stream and get the following error (in Firefox):
ICE failed, see about:webrtc for more details
In about:webrtc it tells me that there is no STUN and no TURN server specified (plus a lot of further output that is not very clear to me). The problem is that I specified a STUN server in WebRtcEndpoint.conf.ini.
Here is my docker-compose.yml file:
kurento:
image: fiware/stream-oriented-kurento:latest
volumes:
- ./kurento.conf.json:/etc/kurento/kurento.conf.json:ro
- ./defaultCertificate.pem:/etc/kurento/defaultCertificate.pem:ro
- ./WebRtcEndpoint.conf.ini:/etc/kurento/modules/kurento/WebRtcEndpoint$
ports:
- "8888:8888"
- "8433:8433"
Here is my kurento.conf.json file:
{
"mediaServer" : {
"resources": {
// //Resources usage limit for raising an exception when an object creatio$
// "exceptionLimit": "0.8",
// // Resources usage limit for restarting the server when no objects are $
// "killLimit": "0.7",
// Garbage collector period in seconds
"garbageCollectorPeriod": 240
},
"net" : {
"websocket": {
"port": 8888,
"secure": {
"port": 8433,
"certificate": "defaultCertificate.pem",
"password": ""
},
//"registrar": {
// "address": "ws://localhost:9090",
// "localAddress": "localhost"
//},
"path": "kurento",
"threads": 10
}
}
}
}
And my WebRtcEndpoint.conf.ini:
; Only IP address are supported, not domain names for addresses
; You have to find a valid stun server. You can check if it works
; using this tool:
; http://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/
stunServerAddress=62.71.2.168
stunServerPort=3478
; turnURL gives the necessary info to configure TURN for WebRTC.
; 'address' must be an IP (not a domain).
; 'transport' is optional (UDP by default).
; turnURL=user:password@address:port(?transport=[udp|tcp|tls])
;pemCertificate is deprecated. Please use pemCertificateRSA instead
;pemCertificate=<path>
;pemCertificateRSA=<path>
;pemCertificateECDSA=<path>
And the certificate has been generated with:
certtool --generate-privkey --outfile defaultCertificate.pem
echo 'organization = your organization name' > certtool.tmpl
certtool --generate-self-signed --load-privkey defaultCertificate.pem \
--template certtool.tmpl >> defaultCertificate.pem
sudo chown kurento defaultCertificate.pem
And I went to https://localhost:8433/kurento to validate the certificate.
When I start the Kurento container with docker-compose up, I can see in the logs that my configuration files have been loaded:
kurento_1 | "websocket":
kurento_1 | {
kurento_1 | "port": "8888",
kurento_1 | "secure":
kurento_1 | {
kurento_1 | "port": "8433",
kurento_1 | "certificate":
"defaultCertificate.pem",
kurento_1 | "password": ""
kurento_1 | },
kurento_1 | "path": "kurento",
kurento_1 | "threads": "10"
kurento_1 | }
.....
kurento_1 | "WebRtcEndpoint":
kurento_1 | {
kurento_1 | "stunServerAddress": "62.71.2.168",
kurento_1 | "stunServerPort": "3478",
kurento_1 | "configPath":
"\/etc\/kurento\/modules\/kurento"
kurento_1 | },
And I start the hello world example with:
sudo mvn compile exec:java -Dkms.url=wss://localhost:8433/kurento
At this point everything seems to work OK, with no error output.
When I try to access my web application from a client at https://<server-ip>:8443, the web page loads correctly and I can start the stream. But I have no remote stream and get the error I printed at the beginning.
UPDATE 1
I changed the version of the kurento image in docker-compose.yml from
image: fiware/stream-oriented-kurento:latest
to:
image: fiware/stream-oriented-kurento:6.6.0
And now it is working sometimes. I get the same error (ICE failed, see about:webrtc for more details), but if I reload the page multiple times, it ends up working after some reloads. Any suggestion about what I am doing wrong?
UPDATE 2
I realized that once the web application starts working (after multiple reloads), it will always work the next time I access it, until I restart the KMS. Then I have to reload the page multiple times again to get the remote stream.
Now that I have realized that, I tried again with image: fiware/stream-oriented-kurento:latest and it has the exact same behavior: I have to reload the page multiple times to make it work. I have no clue why that is. Any idea?
Looking into your problem, I feel the ICE candidates have not been created properly on both sides.
Have you configured STUN/TURN in the web application (JS)? If you haven't modified the example, I believe they are not configured by default.
Check the options.configuration field. Example:
var options = {
    // ... some options here ...
    configuration: {
        iceServers: [{
            "url": "turn:xxx.xxx.xxx:port",
            "username": "xxxxxx",
            "credential": "xxxxxx"
        }]
    }
};
webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, <callback-here>);
Can you provide logs for both the Firefox ICE candidates generation and KMS ICE generation?
In addition, is KMS up and running on the same machine as the tutorial?
More information about kurento_utils used in the hello-world: http://doc-kurento.readthedocs.io/en/stable/mastering/kurento_utils_js.html#using-data-channels
A general example of configuring STUN for WebRTC on the web client side:
https://www.w3.org/TR/webrtc/#simple-peer-to-peer-example
I tried to start BOSH on ejabberd. My ejabberd.cfg snippet is below:
{5280, ejabberd_http, [
    {request_handlers, [
        {["xmpp-httpbind"], mod_http_bind}
    ]},
    captcha,
    http_bind,
    http_poll,
    web_admin
]}
http://localhost:5280/http-bind fails to open any page.
And my client is getting this response from the server:
Sent XML:
<iq to='localhost' id='uid:50502b03:00004823' type='get' xmlns='jabber:client'><query xmlns='jabber:iq:auth'><username>anurag</username></query></iq>
Received XML:
<iq xmlns='jabber:client' from='localhost' id='uid:505029df:00004823' type='error'><error code='503' type='cancel'><service-unavailable xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/></error></iq>
Sent XML: </stream:stream>
auth failed. reason: 0
ce: 18
I am using the gloox library to create the client.
Did you add {mod_http_bind, []} to your modules section?
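For example, a minimal sketch of that modules section, assuming the same ejabberd 2.x config syntax used in the snippet above (keep your other module entries; only mod_http_bind is shown here):
{modules, [
  {mod_http_bind, []}
  %% ... your other modules stay here ...
]}.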