Pull replication (or sync) fails with remote Sync Gateway - Couchbase

I am currently working on a PoC to understand the Couchbase Lite sync functionality (with a Java application). Pull and push replication worked absolutely fine while Sync Gateway was running locally on my machine.
Now that I have moved Sync Gateway to a remote machine, the pull replicator no longer works, although push still works fine. From the following client logs it seems the pull replicator is hitting an error:
JavaSQLiteStorageEngine
PULL replication event. Source: com.couchbase.lite.replicator.Replication#29fc9a67 Transition: INITIAL -> RUNNING Total changes: 0 Completed changes: 0
Mar 25, 2016 6:04:53 PM com.couchbase.lite.util.SystemLogger e
SEVERE: ChangeTracker: com.couchbase.lite.replicator.ChangeTracker#2ca40c2c: Change tracker got error 404
Mar 25, 2016 6:04:53 PM com.couchbase.lite.util.SystemLogger e
SEVERE: Sync: Change tracker stopped during continuous replication
PULL replication event. Source: com.couchbase.lite.replicator.Replication#29fc9a67 Transition: RUNNING -> IDLE Total changes: 0 Completed changes: 0
Mar 25, 2016 6:05:04 PM com.couchbase.lite.util.SystemLogger e
SEVERE: ChangeTracker: com.couchbase.lite.replicator.ChangeTracker#1b3ae860: Change tracker got error 404
Mar 25, 2016 6:05:04 PM com.couchbase.lite.util.SystemLogger e
SEVERE: Sync: Change tracker stopped during continuous replication
The following are my Sync Gateway logs:
2016-03-25T18:19:03.210Z HTTP: #267: GET /test/_local/fc25dac22b1cec1454f09c3ea41f763bc4a46b20 (as mehtab.syed)
2016-03-25T18:19:03.210Z HTTP: #267: --> 404 missing (0.2 ms)
2016-03-25T18:19:03.214Z HTTP: #268: GET /test/_local/31d8e5f89b0db31be77ea73f950068c2d5fe11f8 (as mehtab.syed)
2016-03-25T18:19:03.214Z HTTP: #268: --> 404 missing (0.2 ms)
2016-03-25T18:19:03.340Z HTTP: #269: GET /test/_changes%3ffeed=normal&heartbeat=300000&style=all_docs&active_only=true?feed=normal&heartbeat=300000&style=all_docs&active_only=true
2016-03-25T18:19:03.340Z HTTP: #269: --> 404 unknown URL (0.2 ms)
2016-03-25T18:19:13.475Z HTTP: #270: GET /test/_changes%3ffeed=normal&heartbeat=300000&style=all_docs&active_only=true?feed=normal&heartbeat=300000&style=all_docs&active_only=true
2016-03-25T18:19:13.475Z HTTP: #270: --> 404 unknown URL (0.2 ms)
2016-03-25T18:19:23.601Z HTTP: #271: GET /test/_changes%3ffeed=normal&heartbeat=300000&style=all_docs&active_only=true?feed=normal&heartbeat=300000&style=all_docs&active_only=true
2016-03-25T18:19:23.601Z HTTP: #271: --> 404 unknown URL (0.2 ms)
Further, I am using cookie authentication. The following is my Sync Gateway configuration:
{
  "interface": "127.0.0.1:4988",
  "adminInterface": "127.0.0.1:4989",
  "log": ["CRUD", "REST+", "Access"],
  "databases": {
    "test": {
      "server": "walrus:",
      "users": {
        "GUEST": {"disabled": true, "admin_channels": ["*"]}
      },
      "sync": `function sync(doc, oldDoc) {
        if (doc.type == "user") {
          channel("u-" + doc._id);
          access(doc.owner, "u-" + doc._id);
        } else if (doc.type == "expense") {
          channel("e-" + doc.owner);
          access(doc.owner, "e-" + doc.owner);
          access(doc.approver, "e-" + doc.owner);
        } else {
          channel(doc.channels);
        }
      }`
    }
  }
}
Does anyone have any idea what might be wrong?
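For reference, the failing request can be reproduced from the client machine with curl. This is only a hedged sketch: it assumes the gateway's public interface (port 4988 in the config above) is what the client actually connects to, that the default SyncGatewaySession cookie name is in use, and <remote-host> and <session-id> are placeholders.
# plain, unencoded query string - this is what the change tracker is supposed to request
curl -i --cookie "SyncGatewaySession=<session-id>" \
  "http://<remote-host>:4988/test/_changes?feed=normal&heartbeat=300000&style=all_docs&active_only=true"
# the variant in the gateway log above, with the '?' percent-encoded as %3f, is the one that gets "404 unknown URL"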

Related

C# Revit add-in error: Upload failed. Reason = Response status code does not indicate success: 504 (GATEWAY_TIMEOUT)

We are using a C# Revit add-in in our project.
Design Automation requests are failing with the error "status": "failedUpload".
Here is the report file with more details:
[09/08/2022 17:30:21] Finished running. Process will return: Success
[09/08/2022 17:30:21] ====== Revit finished running: revitcoreconsole ======
[09/08/2022 17:30:25] End Revit Core Engine standard output dump.
[09/08/2022 17:30:25] End script phase.
[09/08/2022 17:30:25] Start upload phase.
[09/08/2022 17:30:25] Uploading 'T:\Aces\Jobs\ad78879202894efba8c5145367e8275d\result.rvt': verb - 'PUT', url - 'https://developer.api.autodesk.com/oss/v2/buckets/generatedmodels/objects/20220908164214_1305 - Interior purged.rvt'
[09/08/2022 17:31:27] Error: Retrying on GatewayTimeout. Request is 'PUT' 'https://developer.api.autodesk.com/oss/v2/buckets/generatedmodels/objects/20220908164214_1305 - Interior purged.rvt'
[09/08/2022 17:32:34] Error: Retrying on GatewayTimeout. Request is 'PUT' 'https://developer.api.autodesk.com/oss/v2/buckets/generatedmodels/objects/20220908164214_1305 - Interior purged.rvt'
[09/08/2022 17:33:51] Error: Retrying on GatewayTimeout. Request is 'PUT' 'https://developer.api.autodesk.com/oss/v2/buckets/generatedmodels/objects/20220908164214_1305 - Interior purged.rvt'
[09/08/2022 17:35:23] Error: Retrying on GatewayTimeout. Request is 'PUT' 'https://developer.api.autodesk.com/oss/v2/buckets/generatedmodels/objects/20220908164214_1305 - Interior purged.rvt'
[09/08/2022 17:35:23] Error: Failed - 504 (GATEWAY_TIMEOUT)
Request: PUT https://developer.api.autodesk.com/oss/v2/buckets/generatedmodels/objects/20220908164214_1305 - Interior purged.rvt
Request Headers:
Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlU3c0dGRldUTzlBekNhSzBqZURRM2dQZXBURVdWN2VhIn0.eyJzY29wZSI6WyJidWNrZXQ6Y3JlYXRlIiwiYnVja2V0OnJlYWQiLCJidWNrZXQ6ZGVsZXRlIiwiZGF0YTpyZWFkIiwiZGF0YTp3cml0ZSIsImRhdGE6Y3JlYXRlIiwiY29kZTphbGwiXSwiY2xpZW50X2lkIjoidFBvVGQ0dENuTEhrajlZMEtRYWRyVFdBT0pLSUxzN1AiLCJhdWQiOiJodHRwczovL2F1dG9kZXNrLmNvbS9hdWQvYWp3dGV4cDYwIiwianRpIjoiU0pnc1N2TjdKTjhFSkhNeFRpdUhmMko2Y3ZTTXhZVW9UZXphcmRJcFNocHZjOWFDM1pzdVVPeDhESjNRc0toViIsImV4cCI6MTY2MjY1ODc0OH0.PJPp7LroxjWkFD7i3_ErLGyM_wS_D0ir1Qr-w9TfHazaEpmZSrwQ6QsRKcJ9ibXS5z9RY_5WGtzojDPyNTF4kP9TISpgJlyJivbLTnxv7oqW_acd0FvQYlvsaozrx_HIfRJIJLLuF_k1gGwpeArK9yQrKtWYSY1_5c3t1QQSEAs4i5HVyWlPPT8eEsQDtY_EYj32QQoeIMnfI3XWWQBkhD1LnbI9yIzLJ0D8ZWzXbbzD78wAhYudzLsW_0ay3YQRd6fTerUVLHaQ0UgyjvFfTVOV5mFZimERqtpyKynIEnF4JBKZGzhzxv-OlEVNe31o5CLr4oy1QBj_E53q5ZX4Ug
Request Content Headers:
Content-Length: 818163712
Response Headers:
Connection: keep-alive
Response Content Headers:
Content-Length: 0
Response Body:
[09/08/2022 17:35:23] Error: Upload failed. Reason = Response status code does not indicate success: 504 (GATEWAY_TIMEOUT).
[09/08/2022 17:35:23] Error: An unexpected error happened during phase Publishing of job.
[09/08/2022 17:35:24] Job finished with result FailedUpload
[09/08/2022 17:35:24] Job Status:
{
"status": "failedUpload",
"reportUrl": "https://dasprod-store.s3.amazonaws.com/workItem/tPoTd4tCnLHkj9Y0KQadrTWAOJKILs7P/ad78879202894efba8c5145367e8275d/report.txt?X-Amz-Expires=48600&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEJj%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJHMEUCIQCgs4MB0vOFtfkJkyouuTSkaG5tzGqnIwaHN41D2ec5IgIgO8%2BddPJXEyQV0NP3R6iKxG7LGccF%2FJtCOZfmDc6oANsqgAMIMBADGgwyMjA0NzMxNTIzMTAiDMhdkXkWdvEoT%2BJc%2FSrdAk0REkrcSRlPtnlJTvPShrpoDhptZ%2FsoZAkNW%2BSWnSY%2FZom%2BNsa2qpmWopbOljmbOcBcmhh2K4JG9dN1AbErLjwd73f3cEId3gQoaE7O1CuBNhzl27K7MQ4tCva%2FxdkYNFV6r0Z%2FdJSylvSYrFyrrTS1jK5da2bL6tXBaJ1GByvRiOlGlDrXFyvNSnp%2FTJT1tJhX6wUGgpUaydqSHGfBsqXR%2Bwzskf8iKLZ7Z75oqCkZwmC9azQARwRT0PoSLTcR4RotaqDDdV8xYqCDUE0Us1ihVMYHKW%2FIrgMiDt5Vb3Cx5YJ68SNbNUAPG5JTmTyG%2BlWQiVXaV3IrH4SWktb494A5CJNJo6CkRmusDEPrvnEfiebWxAUwepcJl%2FU%2FJ8jj7Et1oROuReOS9obYtNHlodcEIOuaG62bsgw53awHzz095z9FV9ZNbStdmqb032uJmiSBWfaPr%2F8Zz7oSiykwpZDomAY6ngF5QHnzayjkxc04F41tGu31CD6DyAhmHq6XVZIUa3hwer76cIjHaZFSM1hWlrCwUUfGesfXXVI81KJrqaqyXkUkUshGBD2tEtvvyYb1CnYvnVssn28Rc0LNNl8WgFwPlfrPutbxT5yRsYTD1aAD2B80wHwBJ7vNLNUnonRMWUzY8QW%2BTO9F8pzb%2FGjLqQTlj3Ne4SM4Fpz8EXIng4gmNA%3D%3D&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIATGVJZKM3BTQV7AHO/20220908/us-east-1/s3/aws4_request&X-Amz-Date=20220908T164217Z&X-Amz-SignedHeaders=host&X-Amz-Signature=f60e89e4fbd1527ba46c42480e64268483c15c5efdcb80ed2623144fbd222317",
"stats": {
"timeQueued": "2022-09-08T16:42:17.9517893Z",
"timeDownloadStarted": "2022-09-08T16:42:18.2481411Z",
"timeInstructionsStarted": "2022-09-08T16:43:20.1483203Z",
"timeInstructionsEnded": "2022-09-08T17:30:25.5436836Z",
"timeUploadEnded": "2022-09-08T17:35:23.6506512Z",
"bytesDownloaded": 1574150920
},
"id": "ad78879202894efba8c5145367e8275d"
}
Please help me figure out a solution to this problem.
You need to finalize the upload, i.e., after Design Automation puts the file into your S3 bucket you need to call buckets/:bucketKey/objects/:objectKey/signeds3upload.
This can be done in the OnCallback event; see https://github.com/Autodesk-Forge/learn.forge.designautomation/blob/7d57b39014de5fccb0f082a9dd50a039bdcb2569/forgesample/Controllers/DesignAutomationController.cs#L456
See also https://forge.autodesk.com/en/docs/design-automation/v3/tutorials/revit/step6-prepare-cloud-storage/#step-4-complete-the-upload
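For example, the completion call is a POST to that same signeds3upload endpoint. A rough curl sketch (hedged: <access_token>, <bucketKey>, <objectKey> and <uploadKey> are placeholders; the uploadKey is the one returned by the initial signeds3upload request that produced the signed URL):
curl -X POST "https://developer.api.autodesk.com/oss/v2/buckets/<bucketKey>/objects/<objectKey>/signeds3upload" \
  -H "Authorization: Bearer <access_token>" \
  -H "Content-Type: application/json" \
  -d '{ "uploadKey": "<uploadKey>" }'
The step-by-step details are in the "complete the upload" link above.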

How to manually recreate the bootstrap client certificate for OpenShift 3.11 master?

Our origin-node.service on the master node fails with:
root@master> systemctl start origin-node.service
Job for origin-node.service failed because the control process exited with error code. See "systemctl status origin-node.service" and "journalctl -xe" for details.
root@master> systemctl status origin-node.service -l
[...]
May 05 07:17:47 master origin-node[44066]: bootstrap.go:195] Part of the existing bootstrap client certificate is expired: 2020-02-20 13:14:27 +0000 UTC
May 05 07:17:47 master origin-node[44066]: bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
May 05 07:17:47 master origin-node[44066]: certificate_store.go:131] Loading cert/key pair from "/etc/origin/node/certificates/kubelet-client-current.pem".
May 05 07:17:47 master origin-node[44066]: server.go:262] failed to run Kubelet: cannot create certificate signing request: Post https://lb.openshift-cluster.mydomain.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: EOF
So it seems that kubelet-client-current.pem and/or kubelet-server-current.pem contain an expired certificate, and the service tries to create a CSR against an endpoint that is not yet available (because the master is down). We tried redeploying the certificates according to the OpenShift documentation on Redeploying Certificates, but this fails when it detects an expired certificate:
root@master> ansible-playbook -i /etc/ansible/hosts openshift-master/redeploy-openshift-ca.yml
[...]
TASK [openshift_certificate_expiry : Fail when certs are near or already expired] *******************************************************************************************************************************************
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"changed": false, "msg": "Cluster certificates found to be expired or within 60 days of expiring. You may view the report at /root/cert-expiry-report.20200505T042754.html or /root/cert-expiry-report.20200505T042754.json.\n"}
[...]
root@master> cat /root/cert-expiry-report.20200505T042754.json
[...]
"kubeconfigs": [
{
"cert_cn": "O:system:cluster-admins, CN:system:admin",
"days_remaining": -75,
"expiry": "2020-02-20 13:14:27",
"health": "expired",
"issuer": "CN=openshift-signer#1519045219 ",
"path": "/etc/origin/node/node.kubeconfig",
"serial": 27,
"serial_hex": "0x1b"
},
{
"cert_cn": "O:system:cluster-admins, CN:system:admin",
"days_remaining": -75,
"expiry": "2020-02-20 13:14:27",
"health": "expired",
"issuer": "CN=openshift-signer#1519045219 ",
"path": "/etc/origin/node/node.kubeconfig",
"serial": 27,
"serial_hex": "0x1b"
},
[...]
"summary": {
"expired": 2,
"ok": 22,
"total": 24,
"warning": 0
}
}
There is a guide for OpenShift 4.4 on Recovering from expired control plane certificates, but it does not apply to 3.11, and we did not find such a guide for our version.
Is it possible to recreate the expired certificates without a running master node on 3.11? Thanks for any help.
OpenShift Ansible: https://github.com/openshift/openshift-ansible/releases/tag/openshift-ansible-3.11.153-2
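For reference, the expiry of the bootstrap client certificate from the journal output can be checked directly on the node (a simple sketch; the path is the one logged above):
openssl x509 -noout -enddate -in /etc/origin/node/certificates/kubelet-client-current.pem
# prints e.g. notAfter=Feb 20 13:14:27 2020 GMT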
Update 2020-05-06: I also executed redeploy-certificates.yml, but it fails at the same TASK:
root@master> ansible-playbook -i /etc/ansible/hosts playbooks/redeploy-certificates.yml
[...]
TASK [openshift_certificate_expiry : Fail when certs are near or already expired] ******************************************************************************
Wednesday 06 May 2020 04:07:06 -0400 (0:00:00.909) 0:01:07.582 *********
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"changed": false, "msg": "Cluster certificates found to be expired or within 60 days of expiring. You may view the report at /root/cert-expiry-report.20200506T040603.html or /root/cert-expiry-report.20200506T040603.json.\n"}
Update 2020-05-11: Running with -e openshift_certificate_expiry_fail_on_warn=False results in:
root@master> ansible-playbook -i /etc/ansible/hosts -e openshift_certificate_expiry_fail_on_warn=False playbooks/redeploy-certificates.yml
[...]
TASK [Wait for master API to come back online] *****************************************************************************************************************
Monday 11 May 2020 03:48:56 -0400 (0:00:00.111) 0:02:25.186 ************
skipping: [master.openshift-cluster.mydomain.com]
TASK [openshift_control_plane : restart master] ****************************************************************************************************************
Monday 11 May 2020 03:48:56 -0400 (0:00:00.257) 0:02:25.444 ************
changed: [master.openshift-cluster.mydomain.com] => (item=api)
changed: [master.openshift-cluster.mydomain.com] => (item=controllers)
RUNNING HANDLER [openshift_control_plane : verify API server] **************************************************************************************************
Monday 11 May 2020 03:48:57 -0400 (0:00:00.945) 0:02:26.389 ************
FAILED - RETRYING: verify API server (120 retries left).
FAILED - RETRYING: verify API server (119 retries left).
[...]
FAILED - RETRYING: verify API server (1 retries left).
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"attempts": 120, "changed": false, "cmd": ["curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://lb.openshift-cluster.mydomain.com:8443/healthz/ready"], "delta": "0:00:00.182367", "end": "2020-05-11 03:51:52.245644", "msg": "non-zero return code", "rc": 35, "start": "2020-05-11 03:51:52.063277", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
root@master> systemctl status origin-node.service -l
[...]
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: E0511 04:23:28.077964 109972 bootstrap.go:195] Part of the existing bootstrap client certificate is expired: 2020-02-20 13:14:27 +0000 UTC
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: I0511 04:23:28.078001 109972 bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: I0511 04:23:28.080555 109972 certificate_store.go:131] Loading cert/key pair from "/etc/origin/node/certificates/kubelet-client-current.pem".
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: F0511 04:23:28.130968 109972 server.go:262] failed to run Kubelet: cannot create certificate signing request: Post https://lb.openshift-cluster.mydomain.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: EOF
[...]
I had this same case in a customer environment. The error occurs because the certificate had expired; I "cheated" by changing the OS date to a point before the expiry date, and the origin-node service started on my masters:
systemctl status origin-node
● origin-node.service - OpenShift Node
Loaded: loaded (/etc/systemd/system/origin-node.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2021-02-20 20:22:21 -02; 6min ago
Docs: https://github.com/openshift/origin
Main PID: 37230 (hyperkube)
Memory: 79.0M
CGroup: /system.slice/origin-node.service
└─37230 /usr/bin/hyperkube kubelet --v=2 --address=0.0.0.0 --allow-privileged=true --anonymous-auth=true --authentication-token-webhook=true --authentication-token-webhook-cache-ttl=5m --authorization-mode=Webhook --authorization-webhook-c...
You have mail in /var/spool/mail/okd
The openshift_certificate_expiry role uses the openshift_certificate_expiry_fail_on_warn variable to determine if the playbook should fail when the days left are less than openshift_certificate_expiry_warning_days.
So try running the redeploy-certificates.yml with this additional variable set to "False":
ansible-playbook -i /etc/ansible/hosts -e openshift_certificate_expiry_fail_on_warn=False playbooks/redeploy-certificates.yml

Server denies request due to wrong domain coming from Fritzbox

I am trying to reach my local server via IPv6, which fails due to certificate issues.
For example, the Nextcloud client gives the following error:
$nextcloudcmd --trust --logdebug Nextcloud https://nextcloud.domain.de
10-20 12:47:43:798 [ info nextcloud.sync.accessmanager ]: 2 "" "https://nextcloud.domain.de/ocs/v1.php/cloud/capabilities?format=json" has X-Request-ID "19a2a694-1912-4813-b3f5-2d4d5720fa80"
10-20 12:47:43:799 [ info nextcloud.sync.networkjob ]: OCC::JsonApiJob created for "https://nextcloud.domain.de" + "ocs/v1.php/cloud/capabilities" ""
10-20 12:47:43:955 [ info nextcloud.sync.account ]: "SSL-Errors happened for url \"https://nextcloud.domain.de/ocs/v1.php/cloud/capabilities?format=json\" \tError in QSslCertificate(\"3\", \"f9:8e:0f:4f:bd:4b:a3:5f\", \"hkXxG7tBu+SGaRSBZ9gRyw==\", \"<hostname>.domain.de\", \"<hostname>.domain.de\", QMap((1, \"www.fritz.nas\")(1, \"fritz.nas\")(1, \"<WiFi-Name>\")(1, \"www.myfritz.box\")(1, \"myfritz.box\")(1, \"www.fritz.box\")(1, \"fritz.box\")(1, \"<hostname>.domain.de\")), QDateTime(2019-10-19 12:32:25.000 UTC Qt::UTC), QDateTime(2038-01-15 12:32:25.000 UTC Qt::UTC)) : \"The host name did not match any of the valid hosts for this certificate\" ( \"The host name did not match any of the valid hosts for this certificate\" ) \n \tError in QSslCertificate(\"3\", \"f9:8e:0f:4f:bd:4b:a3:5f\", \"hkXxG7tBu+SGaRSBZ9gRyw==\", \"<hostname>.domain.de\", \"<hostname>.domain.de\", QMap((1, \"www.fritz.nas\")(1, \"fritz.nas\")(1, \"<WiFi-Name>\")(1, \"www.myfritz.box\")(1, \"myfritz.box\")(1, \"www.fritz.box\")(1, \"fritz.box\")(1, \"<hostname>.domain.de\")), QDateTime(2019-10-19 12:32:25.000 UTC Qt::UTC), QDateTime(2038-01-15
12:32:25.000 UTC Qt::UTC)) : \"The certificate is self-signed, and untrusted\" ( \"The certificate is self-signed, and untrusted\" ) \n " Certs are known and trusted! This is not an actual error.
10-20 12:47:43:964 [ warning nextcloud.sync.networkjob ]: QNetworkReply::ProtocolInvalidOperationError "Server replied \"400 Bad Request\" to \"GET https://nextcloud.domain.de/ocs/v1.php/cloud/capabilities?format=json\"" QVariant(int, 400)
10-20 12:47:43:964 [ info nextcloud.sync.networkjob.jsonapi ]: JsonApiJob of QUrl("https://nextcloud.domain.de/ocs/v1.php/cloud/capabilities?format=json") FINISHED WITH STATUS "ProtocolInvalidOperationError Server replied \"400 Bad Request\" to \"GET https://nextcloud.domain.de/ocs/v1.php/cloud/capabilities?format=json\""
10-20 12:47:43:964 [ warning nextcloud.sync.networkjob.jsonapi ]: Network error: "ocs/v1.php/cloud/capabilities" "Server replied \"400 Bad Request\" to \"GET https://nextcloud.domain.de/ocs/v1.php/cloud/capabilities?format=json\"" QVariant(int, 400)
10-20 12:47:43:964 [ debug default ] [ main(int, char**)::<lambda ]: Server capabilities QJsonObject()
Error connecting to server
I wonder why the Fritzbox handles the request via .domain.de instead of nextcloud.domain.de.
Can anyone point me in the right direction?
Okay, I got information from the AVM site (German: https://avm.de/service/fritzbox/fritzbox-7580/wissensdatenbank/publication/show/3525_Zugriff-auf-HTTPS-Server-im-Heimnetz-nicht-moglich#zd) which led me to the following conclusion.
Since there is no NAT for IPv6 addresses and the Fritzbox cannot provide it either, the IPv6 address has to come from the server itself. One solution I found is ddclient: installed on the GNU/Linux server, it updates the IPv6 address at your DynDNS provider (see the sketch below).
One thing is still open, though: I cannot get both IPv4 and IPv6 updated.
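A minimal sketch of the ddclient route, assuming a Debian/Ubuntu server (package names and options may differ per distribution and DynDNS provider):
# install the client and configure /etc/ddclient.conf for your DynDNS provider
sudo apt-get install ddclient
# run a single update in the foreground with verbose output to check that the new address is accepted
sudo ddclient -daemon=0 -debug -verbose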

Artifactory Create Repository REST API does not work

I have an Artifactory Pro license, and I called the REST API as described on the following page:
https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API#ArtifactoryRESTAPI-CreateRepository
I have verified that all other APIs, such as repository listing and account creation/listing, work normally, but the repository creation API fails with a 400 error.
I wanted to see the cause by changing the log level, but even at trace level there was no information about why the 400 was returned.
Below are the related logs:
2018-06-15 10:31:34,028 [http-nio-8081-exec-15] [TRACE] (o.a.a.d.r.DockerV2AuthenticationFilter:84) - DockerV2AuthenticationFilter path: /api/repositories/newrepo
2018-06-15 10:31:34,028 [http-nio-8081-exec-15] [DEBUG] (o.a.w.s.a.AuthenticationFilterUtils:105) - Entering ArtifactorySsoAuthenticationFilter.getRemoteUserName
2018-06-15 10:31:34,028 [http-nio-8081-exec-15] [DEBUG] (o.a.w.s.AccessFilter:299) - Cached key has been found for request: '/artifactory/api/repositories/newrepo' with method: 'PUT'
2018-06-15 10:31:34,028 [http-nio-8081-exec-15] [TRACE] (o.a.s.PasswordDecryptingManager:95) - Received authentication request for org.artifactory.security.props.auth.PropsAuthenticationToken#3dc5bccf: Principal: null; Credentials: [PROTECTED]; Authenticated: false; Details: org.springframework.security.web.authentication.WebAuthenticationDetails#b364: RemoteIpAddress: {IP}; SessionId: null; Not granted any authorities
2018-06-15 10:31:34,029 [http-nio-8081-exec-15] [DEBUG] (o.j.a.c.h.AccessHttpClient:109) - Executing : GET http://localhost:8040/access/api/v1/users/?cd=apiKey_shash%3DGprGDe&exactKeyMatch=false
2018-06-15 10:31:34,035 [http-nio-8081-exec-15] [DEBUG] (o.a.w.s.AccessFilter:305) - Header authentication org.artifactory.security.props.auth.PropsAuthenticationToken#c20ca8df: Principal: admin; Credentials: [PROTECTED]; Authenticated: true; Details: org.springframework.security.web.authentication.WebAuthenticationDetails#b364: RemoteIpAddress: {IP}; SessionId: null; Granted Authorities: admin, user found in cache.
2018-06-15 10:31:34,035 [http-nio-8081-exec-15] [DEBUG] (o.a.w.s.RepoFilter :100) - Entering request PUT (10.191.128.129) /api/repositories/newrepo.
2018-06-15 10:31:34,038 [http-nio-8081-exec-15] [DEBUG] (o.a.w.s.RepoFilter :188) - Exiting request PUT (10.191.128.129) /api/repositories/newrepo
Update
My Artifactory version: 6.0.2
Response message from Artifactory:
{
  "errors" : [ {
    "status" : 400,
    "message" : "No valid type of repository found.\n"
  } ]
}
Repository Create JSON Message*:
{
  "key": "newrepo",
  "rclass: "local",
  "packageType": "docker",
  "dockerApiVersion": "V2",
  "includesPattern": "**/*",
  "excludesPattern": "",
  "repoLayoutRef": "simple-default",
  "description": "",
  "checksumPolicyType": "client-checksums",
  "blackedOut": false,
  "propertySets": ["artifactory"]
}
(The error in this block is on purpose; it is what the answer below points out.)
In your JSON you are missing a " after rclass.
You wrote "rclass: where it should be "rclass":.
Once you fix this, the command should work properly.
Good luck :)
curl -iuadmin:password -X PUT http://localhost:8081/artifactory/api/repositories/newrepo -H "Content-type:application/vnd.org.jfrog.artifactory.repositories.LocalRepositoryConfiguration+json" -T repo_temp.json
HTTP/1.1 100 Continue
HTTP/1.1 200 OK
Server: Artifactory/5.11.0
X-Artifactory-Id: bea9f3f68aa06e62:4db81752:1643a9cff9e:-8000
Content-Type: text/plain
Transfer-Encoding: chunked
Date: Tue, 26 Jun 2018 06:57:24 GMT
Successfully created repository 'newrepo'
repo_temp.json:
{
  "key": "newrepo",
  "rclass": "local",
  "packageType": "docker",
  "dockerApiVersion": "V2",
  "includesPattern": "**/*",
  "excludesPattern": "",
  "repoLayoutRef": "simple-default",
  "description": "",
  "checksumPolicyType": "client-checksums",
  "blackedOut": false,
  "propertySets": ["artifactory"]
}
This error is (somehow) also returned by Artifactory if the Content-Type header contains a charset, for example: Content-Type: application/json; charset=UTF-8.
Try with simply Content-Type: application/json.
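For example, taking the working curl call from the other answer but with a plain JSON content type (no charset parameter), something like this should be accepted:
curl -iuadmin:password -X PUT http://localhost:8081/artifactory/api/repositories/newrepo \
  -H "Content-Type: application/json" \
  -T repo_temp.json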

Cannot access Chrome headless debugger

I am running Angular 5 unit tests on a headless server with Karma and Jasmine, using Chrome headless to run the tests.
I am not able to access Chrome's debug mode when launching it with --remote-debugging-port=9223. I tried http://35.1.28.84:9223 as the remote Chrome URL.
I made sure Karma listens on all interfaces with host: '0.0.0.0', and that the port is open.
Why am I not able to access Chrome's debugger remotely?
START:
29 03 2018 15:38:05.480:INFO [karma]: Karma v2.0.0 server started at http://0.0.0.0:9876/
29 03 2018 15:38:05.482:INFO [launcher]: Launching browser MyHeadlessChrome with unlimited concurrency
29 03 2018 15:38:05.497:INFO [launcher]: Starting browser ChromeHeadless
29 03 2018 15:38:18.487:INFO [HeadlessChrome 0.0.0 (Linux 0.0.0)]: Connected on socket pfKmImL3pGU9ibL7AAAA with id 10485493
headless-karma.conf.js
module.exports = function(config) {
  config.set({
    host: '0.0.0.0',
    basePath: '',
    frameworks: ['jasmine', '@angular/cli'],
    plugins: [
      require('karma-jasmine'),
      require('karma-mocha-reporter'),
      require('karma-chrome-launcher'),
      require('karma-jasmine-html-reporter'),
      require('@angular/cli/plugins/karma')
    ],
    reporters: ['mocha'],
    port: 9876, // karma web server port
    colors: true,
    angularCli: {
      environment: 'dev'
    },
    browsers: ['MyHeadlessChrome'],
    customLaunchers: {
      MyHeadlessChrome: {
        base: 'ChromeHeadless',
        flags: [
          '--disable-translate',
          '--disable-extensions',
          '--no-first-run',
          '--disable-background-networking',
          '--remote-debugging-port=9223',
        ]
      }
    },
    autoWatch: false,
    singleRun: true,
    concurrency: Infinity
  });
};
one@work:~/github/MCTS.UI (dh/headless-unittests)
$ google-chrome --version
Google Chrome 64.0.3282.167
one@work:~/github/MCTS.UI (dh/headless-unittests)
$ google-chrome-stable --version
Google Chrome 64.0.3282.167
There is another parameter you need to supply to Chrome:
--remote-debugging-address=0.0.0.0
It tells Chrome to use the given address instead of the default loopback for accepting remote debugging connections, and it should be used together with --remote-debugging-port. Note that the remote debugging protocol does not perform any authentication, so exposing it too widely can be a security risk.
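In this setup that means adding the flag next to --remote-debugging-port=9223 in the flags array of the MyHeadlessChrome launcher in headless-karma.conf.js. To verify that DevTools is actually reachable once Chrome is running with both flags, a quick check might look like this (hedged: the IP and port are the ones from the question; /json/version is the DevTools HTTP discovery endpoint):
curl http://localhost:9223/json/version     # on the headless server itself
curl http://35.1.28.84:9223/json/version    # from the remote machine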