How to deploy Docker Hub or Red Hat Container Catalog (external registries) images in Minishift - OpenShift

I'm studying OpenShift from an administration perspective, and it is taking me a lot of effort to understand. One thing I want to do is deploy well-known container images into it, but all I can do with Minishift is deploy the examples it provides or push a locally built image, which is not helpful for my purpose. I can't find a how-to for linking Minishift / CDK (the one from Red Hat) to external registries like Docker Hub or registry.access.redhat.com.
Any help will be really appreciated.
What I did was:
Connect to the Minishift Docker service:
$ eval $(minishift docker-env)
Tried to pull from the Red Hat Container Catalog:
$ docker pull registry.access.redhat.com/openshift3/jenkins-2-rhel7:3.11.98-6
Trying to pull repository registry.access.redhat.com/openshift3/jenkins-2-rhel7 ...
error parsing HTTP 404 response body: invalid character 'F' looking for beginning of value: "File not found.\""
$ docker pull registry.redhat.io/openshift3/jenkins-2-rhel7:3.11.98-6
Trying to pull repository registry.redhat.io/openshift3/jenkins-2-rhel7 ...
unable to retrieve auth token: 401 unauthorized
$ docker login registry.redhat.io
Username: ********
Password:
WARNING! Your password will be stored unencrypted in /home/cesar.cabral/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
$ docker pull registry.redhat.io/openshift3/jenkins-2-rhel7:3.11.98-6
Trying to pull repository registry.redhat.io/openshift3/jenkins-2-rhel7 ...
error parsing HTTP 404 response body: invalid character 'F' looking for beginning of value: "File not found.\""
Tried the same with Docker Hub; no luck either:
$ docker pull hub.docker.com/nginx:1.17.0
Trying to pull repository hub.docker.com/nginx ...
Pulling repository hub.docker.com/nginx
invalid character '<' looking for beginning of value
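A note on the failing pulls above: hub.docker.com is the website host, not the registry host, which is why the daemon receives HTML (the leading '<' in the error); Docker Hub pulls go through docker.io. A hedged sketch of references that should resolve (the Red Hat tag is a placeholder to look up in the catalog, since a 404 "File not found" usually means the repository/tag path does not exist on that registry):
# Docker Hub: use the bare image name or the fully qualified docker.io form
$ docker pull nginx:1.17.0
$ docker pull docker.io/library/nginx:1.17.0
# Red Hat: authenticate first, then pull a tag actually listed in the catalog
# (<tag> is a placeholder, not a verified tag)
$ docker login registry.redhat.io
$ docker pull registry.redhat.io/openshift3/jenkins-2-rhel7:<tag>
Inside OpenShift itself you would typically let the cluster pull for you, e.g. oc new-app docker.io/library/nginx:1.17.0, rather than pulling into the Minishift Docker daemon by hand.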

Related

Hyperledger Composer CLI Ping to a Business Network returns AccessException

I'm trying to learn Hyperledger Composer, but it seems to be a relatively new technology; I mean there are few tutorials and few solutions to a lot of questions. The tutorial does not mention possible error cases when following the commands, which means there is also no solution for those errors.
I have joined the Composer channel in their community chat (looks like it's running on Discord or something) and asked the same question without a response; I have had a better experience here on SO.
This is the problem: I have deployed my business network, installed it, started it, created my network admin card and imported it. Then, to test that everything is OK, I ran composer network ping --card NAME-OF-MY-ADMIN-CARD
And this error comes:
juan@JuanDeDios:~/proyectos/inovacion/a3-poliza-microservice$ composer network ping --card admin@a3-policy-microservice
Error: transaction returned with failure: AccessException: Participant 'org.hyperledger.composer.system.NetworkAdmin#admin' does not have 'READ' access to resource 'org.hyperledger.composer.system.Network#a3-policy-microservice@0.0.1'
Command failed
I think it has something to do with the permissions.acl file, so I gave everyone permission to everything so there would not be any restrictions on anyone, and tried again, but it failed.
So I thought I had to uninstall my business network and create it again. I also deleted my .bna and my network card files so everything would be created again, but the same error resulted.
My other attempt was to update the business network, but that didn't work: the same error happened, and I'm sure I didn't miss any step from the tutorial. I also followed the Playground tutorial. What I have not done is create another app with Yeoman, but I will if I don't find a solution to this problem that doesn't require me to create another app.
These were my steps:
1-. Created my app with Yeoman
yo hyperledger-composer:businessnetwork
2-. Selected Apache-2.0 for my license
3-. Created a3-policy-microservice as the name of the business network
4-. Created org.microservice.policy (yeah, I switched the names, but I'm totally aware of it)
5-. Generated my app with a template selecting the NO option
6-. Created my assets, participants and transactions
7-. Changed the permission rules to my own
8-. I generated the .bna file
composer archive create -t dir -n .
9-. Then installed my bna file
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-microservice@0.0.1.bna
10-. Then started my network and created my networkadmin card
composer network start --networkName a3-policy-network --networkVersion 0.0.1 --networkAdmin admin --networkAdminEnrollSecret adminpw --card PeerAdmin@hlfv1 --file networkadmin.card
11-. Imported my card
composer card import --file networkadmin.card
12-. Tried to ping my network
composer network ping --card admin@a3-poliza-microservice
And the error happens
Later I tried to create everything again, shutting down my Fabric, starting it again, and creating the network from the first step.
My other attempt was to change the permissions and upgrade my .bna network, but it failed too. I'm running out of options.
I hope this description is not too long to be ignored. Thanks in advance.
Thanks for the question!
The first possibility is that your network name is a3-policy-network but you're pinging a network called a3-poliza-microservice - keep that mismatch in mind once you do get the correct ACLs in place (currently, the ACL error is the one you're trying to resolve).
The procedure for the upgrade would normally be as follows:
After your step 12 (where you can't ping the business network due to restrictive ACL conditions, assuming you are using the right network name) you would:
Make the changes to permissions.acl to include your system ACLs this time, e.g.:
/**
 * Sample access control list.
 */
rule SystemACL {
    description: "System ACL to permit all access"
    participant: "org.hyperledger.composer.system.Participant"
    operation: ALL
    resource: "org.hyperledger.composer.system.**"
    action: ALLOW
}

rule NetworkAdminUser {
    description: "Grant business network administrators full access to user resources"
    participant: "org.hyperledger.composer.system.NetworkAdmin"
    operation: ALL
    resource: "**"
    action: ALLOW
}

rule NetworkAdminSystem {
    description: "Grant business network administrators full access to system resources"
    participant: "org.hyperledger.composer.system.NetworkAdmin"
    operation: ALL
    resource: "org.hyperledger.composer.system.**"
    action: ALLOW
}
Update the "version" field in your existing package.json in your Business Network project directory (ie need to change it next increment - eg. update the version property from 0.0.1 to 0.0.2.)
From the same directory, run the following command:
composer archive create --sourceType dir --sourceName . -a a3-policy-network@0.0.2.bna
Now, first install the new business network code:
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-network@0.0.2.bna
Then perform the requisite upgrade step (single '-' for short form of the parameter):
composer network upgrade -c PeerAdmin@hlfv1 -n a3-policy-network -V 0.0.2
After a few seconds, ping the network again to see that the ACL changes are now in effect:
composer network ping -c admin@a3-policy-network

go-ethereum - geth - puppeth - ethstat remote server : docker: command not found

I'm trying to set up a private Ethereum test network using Puppeth (as Péter Szilágyi demoed at Ethereum Devcon3 in 2017). I'm running it on a MacBook Pro (macOS Sierra).
When I try to set up the ethstats network component I get a "docker configured incorrectly: bash: docker: command not found" error. I have Docker running and I can use it fine in the terminal, e.g. docker ps.
Here are the steps I took:
What would you like to do? (default = stats)
1. Show network stats
2. Manage existing genesis
3. Track new remote server
4. Deploy network components
> 4
What would you like to deploy? (recommended order)
1. Ethstats - Network monitoring tool
2. Bootnode - Entry point of the network
3. Sealer - Full node minting new blocks
4. Wallet - Browser wallet for quick sends (todo)
5. Faucet - Crypto faucet to give away funds
6. Dashboard - Website listing above web-services
> 1
Which server do you want to interact with?
1. Connect another server
> 1
Please enter remote server's address:
> localhost
DEBUG[11-15|22:46:49] Attempting to establish SSH connection server=localhost
WARN [11-15|22:46:49] Bad SSH key, falling back to passwords path=/Users/xxx/.ssh/id_rsa err="ssh: cannot decode encrypted private keys"
The authenticity of host 'localhost:22 ([::1]:22)' can't be established.
SSH key fingerprint is xxx [MD5]
Are you sure you want to continue connecting (yes/no)? yes
What's the login password for xxx at localhost:22? (won't be echoed)
>
DEBUG[11-15|22:47:11] Verifying if docker is available server=localhost
ERROR[11-15|22:47:11] Server not ready for puppeth err="docker configured incorrectly: bash: docker: command not found\n"
Here are my questions:
Is there any documentation / tutorial describing how to set up this remote server properly, or just on Puppeth in general?
Can I not use localhost as the "remote server's address"?
Any ideas on why the docker command is not found (it is installed and running and I can use it fine in the terminal)?
Here is what I did.
For Docker you have to use the docker-compose binary. You can find it here.
Furthermore, you have to be sure that an SSH server is running on your localhost and that keys have been generated.
I didn't find any documentation for Puppeth whatsoever.
I think I found the root cause of this problem. The SSH daemon is compiled with a default PATH. If you ssh to a machine with a specific command (other than a shell), you get that default PATH. This does not include /usr/local/bin, for example, which is where docker lives in my case.
I found the solution here: https://serverfault.com/a/585075:
edit /etc/ssh/sshd_config and make sure it contains PermitUserEnvironment yes (you need to edit this with sudo)
create a file ~/.ssh/environment with the PATH that you want, in my case:
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
When you now run ssh localhost env you should see a PATH that matches whatever you put in ~/.ssh/environment.
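A minimal shell sketch of that sequence, assuming macOS (BSD sed and launchctl; on Linux you would restart sshd with systemctl instead) and assuming the PermitUserEnvironment line already exists in sshd_config (append it manually otherwise):
# Allow sshd to read per-user environment files
$ sudo sed -i '' 's/^#*PermitUserEnvironment.*/PermitUserEnvironment yes/' /etc/ssh/sshd_config
# Give non-interactive SSH sessions a PATH that includes /usr/local/bin, where docker lives
$ echo 'PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin' > ~/.ssh/environment
# Restart the SSH daemon so the config change takes effect
$ sudo launchctl kickstart -k system/com.openssh.sshd
# Verify: the remote PATH should now find docker
$ ssh localhost 'echo $PATH; command -v docker'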

Openshift: Error pulling image from remote, secure docker registry using certificates

I use the all-in-one VM of OpenShift Origin.
I am trying to pull images from a private, secure registry using an Image Stream. This is the ImageStream definition:
apiVersion: v1
kind: ImageStream
metadata:
  name: my-image-stream
  annotations:
    description: Keeps track of changes in the application image
  name: my-image
spec:
  dockerImageRepository: "my.registry.net/myproject/my-image"
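For context, a hedged sketch of how such a definition is loaded and the import triggered (the file name is hypothetical):
$ oc create -f my-image-stream.yaml
$ oc import-image my-image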
The repository is secured with a certificate. On my local machine, I have the certificates in /etc/docker/certs.d/my.registry.net and I can log in with docker login my.registry.net.
When I run oc import-image, however, I get the following error:
The import completed with errors.
Name: my-image
Namespace: myproject
Created: About an hour ago
Labels: <none>
Description: Keeps track of changes in the application image
Annotations: openshift.io/image.dockerRepositoryCheck=2017-01-27T08:09:49Z
Docker Pull Spec: 172.30.53.244:5000/myproject/my-image
Unique Images: 0
Tags: 1
latest
tagged from my.registry.net/myproject/my-image
! error: Import failed (InternalError): Internal error occurred: Get https://my.registry.net/v2/: remote error: handshake failure
About an hour ago
I have copied the certificates to the vagrant machine and restarted the docker daemon, but the problem remains. I have not found any documentation on how to properly add the certificates, so I just put them in the usual docker folder.
What is the appropriate way to make this work?
Update in response to rezie's answer:
There is no file /etc/origin/master/ca-bundle.crt on my Vagrant box. I found the following ca-bundle.crt files:
$ find / -iname ca-bundle.crt
/etc/pki/tls/certs/ca-bundle.crt
##multiple lines like
/var/lib/docker/devicemapper/mnt/something-hash-like/rootfs/etc/pki/tls/certs/ca-bundle.crt
/var/lib/origin/openshift.local.config/master/ca-bundle.crt
I appended the root certificate to /etc/pki/tls/certs/ca-bundle.crt and to /var/lib/origin/openshift.local.config/master/ca-bundle.crt, but that did not change anything.
Please note, however, that I do not need to have this root certificate in /etc/docker/certs.d/... in order to log in directly using docker login my.registry.net
I cannot comment due to low karma, so I'll write an answer saying almost the same as rezie.
The error:
! error: Import failed (InternalError): Internal error occurred: Get https://my.registry.net/v2/: remote error: handshake failure
comes from OpenShift, not from Docker; therefore adding the certificate to /etc/docker/certs.d/my.registry.net doesn't prevent the error from happening.
You should add the CA certificate at the OS level; my guess is the previous steps failed for some reason, so do it this way:
openssl s_client -connect my.registry.net:443 </dev/null |
sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' \
> /etc/pki/ca-trust/source/anchors/my.registry.net.crt &&
update-ca-trust check && update-ca-trust extract
Finally, test if it worked by running:
curl https://my.registry.net/v2
If it doesn't give you a certificate error and you still can't do the oc import, restart the atomic-openshift-master-api service.
Try appending your CA (the same one you said was used in the my.registry.net directory) into OpenShift's CA bundle (e.g. /etc/origin/master/ca-bundle.crt). Then restart the service and reattempt import-image (making sure that you do not include the --insecure flag).
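A minimal sketch of that suggestion, assuming a hypothetical CA file name my-registry-root-ca.crt and the default master bundle path:
# Append the registry's root CA to the master's bundle (file name is a placeholder)
$ cat my-registry-root-ca.crt | sudo tee -a /etc/origin/master/ca-bundle.crt
# Restart the master API so the new trust anchor is picked up
$ sudo systemctl restart atomic-openshift-master-api
# Re-run the import without --insecure
$ oc import-image my-image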
For reference, check out this issue from the Origin project. As you've mentioned, there's currently no way to supply certificates along with the dockercfg secret, and the suggestion from that issue is to add the CA as a trusted root CA across all the hosts.

Problems accessing IBM Containers at UK Data Center

Note: This is a question related to the Bluemix Container service; it is not generic to Docker.
I have a Linux environment with the cf and ice tools installed and working correctly with the US_SOUTH data center. I changed the login parameters to the UK data center and now, although it logs in correctly to the Container service, it fails with a 404 when executing any command.
Command failed with container cloud service
404 Not Found: Requested route ('api-ice.eu-gb.bluemix.net') does not exist.
I did the login following the documentation:
ice login -a https://api.eu-gb.bluemix.net -H https://api-ice.eu-gb.bluemix.net/v2/containers -R registry.eu-gb.bluemix.net
And as I said, the login is successful.
Try this for London:
ice login -H containers-api.eu-gb.bluemix.net -R registry.eu-gb.bluemix.net -a api.eu-gb.bluemix.net
For US South:
ice login -H containers-api.ng.bluemix.net -R registry.ng.bluemix.net -a api.ng.bluemix.net
api-ice.eu-gb.bluemix.net should throw a 404. When we closed our public beta we changed our API server to use the containers-api.{domain} pattern (while temporarily leaving api-ice.ng.bluemix.net available for folks needing to migrate from the beta).
We are currently updating the docs. Thanks for pointing this out.
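As a quick sanity check after logging in with the commands above, listing containers and images should now succeed without a 404; this assumes the ice CLI's ps and images subcommands, which (as far as I recall from the docs of that era) mirror docker ps and docker images:
$ ice ps
$ ice images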

TC7 (20939) : upgrade : mercurial : http auth : Test Connection Succeeds... but build checks fail (http auth)

Have been using EAP 7 for a couple of months; this is the 2nd upgrade.
Upgraded to build 20939 today and now get errors when builds try to check Mercurial for changes (VCS problem: FOO Edit this VCS root>>). If I edit the VCS root and click Test Connection, it succeeds. How do I go about debugging this issue?
I have tried re-saving the VCS root. I also deleted and recreated the VCS root on one project and get the same result.
The recent entries in the teamcity-vcs log don't have domain\user:password; should they?
I now have both the TeamCity and build agent services running under my AD account. I don't remember what account the TeamCity service was using before the upgrade (is that logged somewhere?).
If the VCS root is configured with an 'https://' URL and has a user/password, why don't I see the credentials in the log message (see above post)?
My user directory contains mercurial.ini / an SSL cert (and was working pre-upgrade).
TeamCity hosted on Windows 2k8, Mercurial repo, using Active Directory credentials for authentication.
The TeamCity service is running as Local System.
The build agent is running as an AD account (for builds that deploy to other machines).
newest errors:
[2012-01-11 17:12:39,578] WARN [cutor 4 {id=29}] - jetbrains.buildServer.VCS - Error while loading changes for root mercurial: https://mycompany.com/myproject {instance id=29, parent id=8}, cause: 'cmd /c hg pull https://mycompany.com/MyProject' command failed.
stderr: abort: http authorization required
older errors:
[2012-01-10 16:38:02,791] INFO [TeamCity Agent ] - jetbrains.buildServer.VCS - Patch applied for agent=computer {id=1, host=127.0.0.1:9090}, buildType=Project :: MVC3 {id=bt12}, root=mercurial: https://mycompany/myproject {instance id=12, parent id=1}, version=3775:7fc0ae5029e6
[2012-01-11 10:30:36,277] INFO [_Server_StartUp] - jetbrains.buildServer.VCS - Server-wide hg path is not set, will use path from the VCS root settings
The problem persisted after a complete uninstall/re-install.
In the VCS root definition I left the user/password fields blank and encoded the user:password into the 'Pull changes from' string (just like you'd do on the command line):
https://domain\user:password@hg.mycompany.com/Repo
To sort of clean up the plaintext password, I created a project-level property 'MyPassword' (type password) and used it in the connection string like this:
https://domain\user:%MyPassword%@hg.mycompany.com/Repo
Still not great, but I'm up and running and the password is not viewable by casual users.
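An alternative that keeps credentials out of the URL entirely is Mercurial's [auth] configuration, placed in the mercurial.ini of the account the TeamCity service runs as; a sketch with placeholder host and account names:
[auth]
mycompany.prefix = hg.mycompany.com
mycompany.schemes = https
mycompany.username = domain\user
mycompany.password = secret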