I'm having trouble with my Kubernetes cluster (hosted on AWS), where I'm trying to let two pods communicate through Services. One pod belongs to a Deployment based on Node.js, the other to a Deployment based on MySQL. This is my YAML configuration file for the Deployments and the Services (all in one):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-deployment-products
  namespace: namespace-private
  labels:
    app: productsdb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productsdb
  template:
    metadata:
      labels:
        app: productsdb
    spec:
      containers:
        - name: productsdb
          image: training-registry.com/library/productsdb:latest
          env:
            - name: DB_HOST
              value: "productsdb-service.namespace-private.svc.cluster.local"
            - name: DB_NAME
              value: "products_db"
            - name: DB_USER
              value: "root"
            - name: DB_PWD
              value: "productsPWD"
            - name: MYSQL_DATABASE
              value: "products_db"
            - name: MYSQL_ROOT_USER
              value: "root"
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: productsdb-secret
                  key: MYSQL_ROOT_PASSWORD
            - name: DB_DIALECT
              value: "mysql"
            - name: LOG_LEVEL
              value: "debug"
            - name: ES_LOG_LEVEL
              value: "debug"
            - name: ES_CLIENT
              value: "http://elasticsearch:9200"
            - name: ES_INDEX
              value: "demo-uniroma3-products"
            - name: ES_USER
              value: "elastic"
            - name: ES_PWD
              value: "elastic"
            - name: LOGGER_SERVICE
              value: "products-service"
            - name: DB_PORT
              value: "3306"
            - name: SERVER_PORT
              value: "5000"
          ports:
            - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: productsdb-service
  namespace: namespace-private
spec:
  selector:
    app: productsdb
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: products-service-metaname
  namespace: namespace-private
  labels:
    app: products-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: products-service
  template:
    metadata:
      labels:
        app: products-service
    spec:
      containers:
        - name: products-service
          image: training-registry.com/library/products-service:latest
          env:
            - name: DB_HOST
              value: "productsdb-service.namespace-private.svc.cluster.local"
            - name: DB_NAME
              value: "products_db"
            - name: DB_USER
              value: "root"
            - name: DB_PWD
              value: "productsPWD"
            - name: MYSQL_DATABASE
              value: "products_db"
            - name: MYSQL_ROOT_USER
              value: "root"
            - name: MYSQL_ROOT_PASSWORD
              value: "productsPWD"
            - name: DB_DIALECT
              value: "mysql"
            - name: ES_USER
              value: "elastic"
            - name: ES_PWD
              value: "elastic"
            - name: LOGGER_SERVICE
              value: "products-service"
            - name: DB_PORT
              value: "3306"
            - name: SERVER_PORT
              value: "5000"
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: products-service-service
  namespace: namespace-private
spec:
  selector:
    app: products-service
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
      nodePort: 30001
As you can see, I created the two Services and used the fully qualified name of the DB Service as the DB_HOST variable, but when I test the connection with port-forward on the address localhost:5000/products, the browser tells me:
{"success":false,"reason":{"name":"SequelizeConnectionError","parent":{"errno":-3001,"code":"EAI_AGAIN","syscall":"getaddrinfo","hostname":"productsdb-service.namespace-private.svc.cluster.local","fatal":true},"original":{"errno":-3001,"code":"EAI_AGAIN","syscall":"getaddrinfo","hostname":"productsdb-service.namespace-private.svc.cluster.local","fatal":true}}}
I tried changing the DB_HOST env variable to the short name of the Service, and also to the Service IP, but nothing seems to work. Do you know why, and how I can resolve this? Thank you in advance.
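EAI_AGAIN is a DNS lookup failure rather than a refused connection, so before touching the manifests it is worth confirming that cluster DNS can resolve the Service name at all from inside the namespace. A throwaway pod along these lines (the pod name and busybox tag are my choices, not from the question) makes that check easy:

```yaml
# dns-debug.yaml -- disposable pod that resolves the Service name once, then exits
apiVersion: v1
kind: Pod
metadata:
  name: dns-debug
  namespace: namespace-private
spec:
  restartPolicy: Never
  containers:
    - name: dns-debug
      image: busybox:1.36
      # Look up the Service's cluster DNS name exactly as the app does.
      command: ["nslookup", "productsdb-service.namespace-private.svc.cluster.local"]
```

Apply it with `kubectl apply -f dns-debug.yaml` and read the result with `kubectl -n namespace-private logs dns-debug`; if the lookup fails there too, the problem is in cluster DNS (CoreDNS/kube-dns or the node's network setup) rather than in these Service definitions.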
So I had EventStore 5.0.7 installed as a 3-node cluster, working just fine.
I tried to upgrade to EventStore 21.10.1. The configuration has changed substantially with the move from 5.x to 20.x and 21.x, and despite multiple readings of all kinds of documentation, I'm still doing something wrong.
What we see is six nodes appearing (each server twice), gossip failing, and nothing working, i.e., we cannot insert events.
What am I doing wrong?
EventStore 5.0.7
EventStore 21.10.1
Config for EventStore 21.10.1
---
# Paths
Db: /var/lib/eventstore
Index: /var/lib/eventstore/index
Log: /var/log/eventstore
# Run in insecure mode
Insecure: true
DisableInternalTcpTls: true
DisableExternalTcpTls: true
# Network configuration
IntIp: 172.31.47.243
ExtIp: 0.0.0.0
HttpPort: 2113
IntTcpPort: 1112
ExtTcpPort: 1113
EnableExternalTcp: true
EnableAtomPubOverHTTP: false
# Projections configuration
RunProjections: System
ClusterSize: 3
LogLevel: Verbose
LogHttpRequests: true
LogFailedAuthenticationAttempts: true
LogConfig: /etc/eventstore/logconfig.json
HttpPortAdvertiseAs: 2114
ExtHostAdvertiseAs: 54.209.234.141
IntTcpHeartbeatTimeout: 2000
ExtTcpHeartbeatTimeout: 2000
IntTcpHeartbeatInterval: 5000
ExtTcpHeartbeatInterval: 5000
GossipTimeoutMs: 5000
GossipIntervalMs: 2000
StatsPeriodSec: 900
DiscoverViaDns: false
GossipSeed: 172.31.45.192:2113,172.31.41.141:2113
Config for EventStore 21.10.1 (as seen at startup)
MODIFIED OPTIONS:
STATS PERIOD SEC: 900 (Yaml)
LOG HTTP REQUESTS: true (Yaml)
LOG FAILED AUTHENTICATION ATTEMPTS: true (Yaml)
INSECURE: true (Yaml)
LOG: /var/log/eventstore (Yaml)
LOG CONFIG: /etc/eventstore/logconfig.json (Yaml)
LOG LEVEL: Verbose (Yaml)
CLUSTER SIZE: 3 (Yaml)
DISCOVER VIA DNS: false (Yaml)
GOSSIP SEED: 172.31.46.96:2113,172.31.40.110:2113 (Yaml)
GOSSIP INTERVAL MS: 2000 (Yaml)
GOSSIP TIMEOUT MS: 5000 (Yaml)
DB: /var/lib/eventstore (Yaml)
INDEX: /var/lib/eventstore/index (Yaml)
INT IP: 172.31.35.133 (Yaml)
EXT IP: 0.0.0.0 (Yaml)
HTTP PORT: 2113 (Yaml)
ENABLE EXTERNAL TCP: true (Yaml)
INT TCP PORT: 1112 (Yaml)
EXT TCP PORT: 1113 (Yaml)
EXT HOST ADVERTISE AS: 3.82.200.231 (Yaml)
HTTP PORT ADVERTISE AS: 2114 (Yaml)
INT TCP HEARTBEAT TIMEOUT: 2000 (Yaml)
EXT TCP HEARTBEAT TIMEOUT: 2000 (Yaml)
INT TCP HEARTBEAT INTERVAL: 5000 (Yaml)
EXT TCP HEARTBEAT INTERVAL: 5000 (Yaml)
DISABLE INTERNAL TCP TLS: true (Yaml)
DISABLE EXTERNAL TCP TLS: true (Yaml)
ENABLE ATOM PUB OVER HTTP: false (Yaml)
RUN PROJECTIONS: System (Yaml)
DEFAULT OPTIONS:
HELP: False (<DEFAULT>)
VERSION: False (<DEFAULT>)
CONFIG: /etc/eventstore/eventstore.conf (<DEFAULT>)
WHAT IF: False (<DEFAULT>)
START STANDARD PROJECTIONS: False (<DEFAULT>)
DISABLE HTTP CACHING: False (<DEFAULT>)
WORKER THREADS: 0 (<DEFAULT>)
ENABLE HISTOGRAMS: False (<DEFAULT>)
SKIP INDEX SCAN ON READS: False (<DEFAULT>)
MAX APPEND SIZE: 1048576 (<DEFAULT>)
LOG CONSOLE FORMAT: Plain (<DEFAULT>)
LOG FILE SIZE: 1073741824 (<DEFAULT>)
LOG FILE INTERVAL: Day (<DEFAULT>)
LOG FILE RETENTION COUNT: 31 (<DEFAULT>)
DISABLE LOG FILE: False (<DEFAULT>)
AUTHORIZATION TYPE: internal (<DEFAULT>)
AUTHORIZATION CONFIG: <empty> (<DEFAULT>)
AUTHENTICATION TYPE: internal (<DEFAULT>)
AUTHENTICATION CONFIG: <empty> (<DEFAULT>)
DISABLE FIRST LEVEL HTTP AUTHORIZATION: False (<DEFAULT>)
TRUSTED ROOT CERTIFICATES PATH: <empty> (<DEFAULT>)
CERTIFICATE RESERVED NODE COMMON NAME: eventstoredb-node (<DEFAULT>)
CERTIFICATE FILE: <empty> (<DEFAULT>)
CERTIFICATE PRIVATE KEY FILE: <empty> (<DEFAULT>)
CERTIFICATE PASSWORD: <empty> (<DEFAULT>)
CERTIFICATE STORE LOCATION: <empty> (<DEFAULT>)
CERTIFICATE STORE NAME: <empty> (<DEFAULT>)
CERTIFICATE SUBJECT NAME: <empty> (<DEFAULT>)
CERTIFICATE THUMBPRINT: <empty> (<DEFAULT>)
STREAM INFO CACHE CAPACITY: 0 (<DEFAULT>)
NODE PRIORITY: 0 (<DEFAULT>)
COMMIT COUNT: -1 (<DEFAULT>)
PREPARE COUNT: -1 (<DEFAULT>)
CLUSTER DNS: fake.dns (<DEFAULT>)
CLUSTER GOSSIP PORT: 2113 (<DEFAULT>)
GOSSIP ALLOWED DIFFERENCE MS: 60000 (<DEFAULT>)
READ ONLY REPLICA: False (<DEFAULT>)
UNSAFE ALLOW SURPLUS NODES: False (<DEFAULT>)
DEAD MEMBER REMOVAL PERIOD SEC: 1800 (<DEFAULT>)
LEADER ELECTION TIMEOUT MS: 1000 (<DEFAULT>)
QUORUM SIZE: 1 (<DEFAULT>)
PREPARE ACK COUNT: 1 (<DEFAULT>)
COMMIT ACK COUNT: 1 (<DEFAULT>)
MIN FLUSH DELAY MS: 2 (<DEFAULT>)
DISABLE SCAVENGE MERGING: False (<DEFAULT>)
SCAVENGE HISTORY MAX AGE: 30 (<DEFAULT>)
CACHED CHUNKS: -1 (<DEFAULT>)
CHUNKS CACHE SIZE: 536871424 (<DEFAULT>)
MAX MEM TABLE SIZE: 1000000 (<DEFAULT>)
HASH COLLISION READ LIMIT: 100 (<DEFAULT>)
MEM DB: False (<DEFAULT>)
USE INDEX BLOOM FILTERS: True (<DEFAULT>)
INDEX CACHE SIZE: 0 (<DEFAULT>)
SKIP DB VERIFY: False (<DEFAULT>)
WRITE THROUGH: False (<DEFAULT>)
UNBUFFERED: False (<DEFAULT>)
CHUNK INITIAL READER COUNT: 5 (<DEFAULT>)
PREPARE TIMEOUT MS: 2000 (<DEFAULT>)
COMMIT TIMEOUT MS: 2000 (<DEFAULT>)
WRITE TIMEOUT MS: 2000 (<DEFAULT>)
UNSAFE DISABLE FLUSH TO DISK: False (<DEFAULT>)
UNSAFE IGNORE HARD DELETE: False (<DEFAULT>)
SKIP INDEX VERIFY: False (<DEFAULT>)
INDEX CACHE DEPTH: 16 (<DEFAULT>)
OPTIMIZE INDEX MERGE: False (<DEFAULT>)
ALWAYS KEEP SCAVENGED: False (<DEFAULT>)
REDUCE FILE CACHE PRESSURE: False (<DEFAULT>)
INITIALIZATION THREADS: 1 (<DEFAULT>)
READER THREADS COUNT: 0 (<DEFAULT>)
MAX AUTO MERGE INDEX LEVEL: 2147483647 (<DEFAULT>)
WRITE STATS TO DB: False (<DEFAULT>)
MAX TRUNCATION: 268435456 (<DEFAULT>)
CHUNK SIZE: 268435456 (<DEFAULT>)
STATS STORAGE: File (<DEFAULT>)
DB LOG FORMAT: V2 (<DEFAULT>)
STREAM EXISTENCE FILTER SIZE: 256000000 (<DEFAULT>)
KEEP ALIVE INTERVAL: 10000 (<DEFAULT>)
KEEP ALIVE TIMEOUT: 10000 (<DEFAULT>)
INT HOST ADVERTISE AS: <empty> (<DEFAULT>)
ADVERTISE HOST TO CLIENT AS: <empty> (<DEFAULT>)
ADVERTISE HTTP PORT TO CLIENT AS: 0 (<DEFAULT>)
ADVERTISE TCP PORT TO CLIENT AS: 0 (<DEFAULT>)
EXT TCP PORT ADVERTISE AS: 0 (<DEFAULT>)
INT TCP PORT ADVERTISE AS: 0 (<DEFAULT>)
GOSSIP ON SINGLE NODE: <empty> (<DEFAULT>)
CONNECTION PENDING SEND BYTES THRESHOLD: 10485760 (<DEFAULT>)
CONNECTION QUEUE SIZE THRESHOLD: 50000 (<DEFAULT>)
DISABLE ADMIN UI: False (<DEFAULT>)
DISABLE STATS ON HTTP: False (<DEFAULT>)
DISABLE GOSSIP ON HTTP: False (<DEFAULT>)
ENABLE TRUSTED AUTH: False (<DEFAULT>)
PROJECTION THREADS: 3 (<DEFAULT>)
PROJECTIONS QUERY EXPIRY: 5 (<DEFAULT>)
FAULT OUT OF ORDER PROJECTIONS: False (<DEFAULT>)
PROJECTION COMPILATION TIMEOUT: 500 (<DEFAULT>)
PROJECTION EXECUTION TIMEOUT: 250 (<DEFAULT>)
Gossip for EventStore 21.10.1
{
  "members": [
    {
      "instanceId": "ed2ee047-eb59-4b11-86fd-a5b366edd0ce",
      "timeStamp": "2022-01-12T23:17:42.539034Z",
      "state": "Unknown",
      "isAlive": true,
      "internalTcpIp": "172.31.46.231",
      "internalTcpPort": 1112,
      "internalSecureTcpPort": 0,
      "externalTcpIp": "52.91.48.59",
      "externalTcpPort": 1113,
      "externalSecureTcpPort": 0,
      "httpEndPointIp": "52.91.48.59",
      "httpEndPointPort": 2114,
      "lastCommitPosition": -1,
      "writerCheckpoint": 0,
      "chaserCheckpoint": 0,
      "epochPosition": -1,
      "epochNumber": -1,
      "epochId": "00000000-0000-0000-0000-000000000000",
      "nodePriority": 0,
      "isReadOnlyReplica": false
    },
    {
      "instanceId": "dfcc4139-2966-454c-8cee-71261cedafba",
      "timeStamp": "2022-01-12T23:17:40.0803168Z",
      "state": "Unknown",
      "isAlive": false,
      "internalTcpIp": "172.31.46.43",
      "internalTcpPort": 1112,
      "internalSecureTcpPort": 0,
      "externalTcpIp": "44.201.237.180",
      "externalTcpPort": 1113,
      "externalSecureTcpPort": 0,
      "httpEndPointIp": "44.201.237.180",
      "httpEndPointPort": 2114,
      "lastCommitPosition": -1,
      "writerCheckpoint": 0,
      "chaserCheckpoint": 0,
      "epochPosition": -1,
      "epochNumber": -1,
      "epochId": "00000000-0000-0000-0000-000000000000",
      "nodePriority": 0,
      "isReadOnlyReplica": false
    },
    {
      "instanceId": "2a47929c-afd6-496f-b87b-d85904eeed18",
      "timeStamp": "2022-01-12T23:17:40.539795Z",
      "state": "Unknown",
      "isAlive": true,
      "internalTcpIp": "172.31.38.246",
      "internalTcpPort": 1112,
      "internalSecureTcpPort": 0,
      "externalTcpIp": "3.93.17.39",
      "externalTcpPort": 1113,
      "externalSecureTcpPort": 0,
      "httpEndPointIp": "3.93.17.39",
      "httpEndPointPort": 2114,
      "lastCommitPosition": -1,
      "writerCheckpoint": 0,
      "chaserCheckpoint": 0,
      "epochPosition": -1,
      "epochNumber": -1,
      "epochId": "00000000-0000-0000-0000-000000000000",
      "nodePriority": 0,
      "isReadOnlyReplica": false
    },
    {
      "instanceId": "00000000-0000-0000-0000-000000000000",
      "timeStamp": "2022-01-12T22:39:46.4047071Z",
      "state": "Manager",
      "isAlive": true,
      "internalTcpIp": "172.31.46.43",
      "internalTcpPort": 2113,
      "internalSecureTcpPort": 0,
      "externalTcpIp": "172.31.46.43",
      "externalTcpPort": 2113,
      "externalSecureTcpPort": 0,
      "httpEndPointIp": "172.31.46.43",
      "httpEndPointPort": 2113,
      "lastCommitPosition": -1,
      "writerCheckpoint": -1,
      "chaserCheckpoint": -1,
      "epochPosition": -1,
      "epochNumber": -1,
      "epochId": "00000000-0000-0000-0000-000000000000",
      "nodePriority": 0,
      "isReadOnlyReplica": false
    },
    {
      "instanceId": "00000000-0000-0000-0000-000000000000",
      "timeStamp": "2022-01-12T22:53:47.9621597Z",
      "state": "Manager",
      "isAlive": true,
      "internalTcpIp": "172.31.46.231",
      "internalTcpPort": 2113,
      "internalSecureTcpPort": 0,
      "externalTcpIp": "172.31.46.231",
      "externalTcpPort": 2113,
      "externalSecureTcpPort": 0,
      "httpEndPointIp": "172.31.46.231",
      "httpEndPointPort": 2113,
      "lastCommitPosition": -1,
      "writerCheckpoint": -1,
      "chaserCheckpoint": -1,
      "epochPosition": -1,
      "epochNumber": -1,
      "epochId": "00000000-0000-0000-0000-000000000000",
      "nodePriority": 0,
      "isReadOnlyReplica": false
    },
    {
      "instanceId": "00000000-0000-0000-0000-000000000000",
      "timeStamp": "2022-01-12T22:53:47.9621597Z",
      "state": "Manager",
      "isAlive": true,
      "internalTcpIp": "172.31.38.246",
      "internalTcpPort": 2113,
      "internalSecureTcpPort": 0,
      "externalTcpIp": "172.31.38.246",
      "externalTcpPort": 2113,
      "externalSecureTcpPort": 0,
      "httpEndPointIp": "172.31.38.246",
      "httpEndPointPort": 2113,
      "lastCommitPosition": -1,
      "writerCheckpoint": -1,
      "chaserCheckpoint": -1,
      "epochPosition": -1,
      "epochNumber": -1,
      "epochId": "00000000-0000-0000-0000-000000000000",
      "nodePriority": 0,
      "isReadOnlyReplica": false
    }
  ],
  "serverIp": "52.91.48.59",
  "serverPort": 2114
}
This online tool, https://configurator.eventstore.com/, should help you set up the configuration correctly.
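For what it's worth, one mismatch stands out in the settings shown above (a hunch, not a confirmed diagnosis): the GossipSeed entries point at port 2113, but HttpPortAdvertiseAs: 2114 together with ExtHostAdvertiseAs makes each node advertise a different HTTP endpoint than the one its peers were seeded with, which is consistent with every server appearing twice in the gossip. A minimal cluster section that keeps the advertised endpoint and the seed entries identical would look roughly like this (IPs taken from the question; adjust per node):

```yaml
# Sketch: advertise the same endpoint that the other nodes' GossipSeed lists use.
Insecure: true
ClusterSize: 3
DiscoverViaDns: false
IntIp: 172.31.47.243
HttpPort: 2113
# Omit HttpPortAdvertiseAs (or set it equal to HttpPort) unless NAT genuinely
# requires advertising a translated port -- and then the seeds must use it too.
GossipSeed: 172.31.45.192:2113,172.31.41.141:2113
```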
I would like to use the Shared Views feature of Vault/AutoCAD/ACADE/Inventor to push models (in particular, larger Inventor assemblies) up to the cloud, where I assume they land in an OSS bucket. I believe they are only valid up there for 30 days, but that length of time is fine for my purpose. What I would like to know is: is there any way I can get the ID of the bucket, and then use it to view or pull that SVF file into an app back inside my firewall for further use? I have some extensions planned in a viewer app back on premises, but I don't have a good way to get the models into the cloud and translated in the first place because of the references in the model; I would have to check out the model and all its references, zip them up, and then send them to the Model Derivative service. I am hoping that since the Shared Views feature already does that part, I can just leverage it to start my process.
MORE INFO:
So, using Fiddler, I can see several interesting calls related to this. I went into Vault, created a shared view for a DWG file, and then just watched the traffic. I see calls in a pattern that could perhaps be leveraged.
Call 1:
GET https://360.autodesk.com/Viewer/GetViewerTranslationById?viewerId=dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6YTM2MHZpZXdlci90NjM3MzA1OTg3ODExMjQxMTQ4XzQxYjE3M2M1LTVmZjctNDQ2ZC1iNTFkLTgwZTg1NjQ3NjU3ZC5jb2xsYWJvcmF0aW9u
Call 2:
GET https://360.autodesk.com/Viewer/GetAccessToken
Call 2 returns a token:
token_type: "Bearer"
expires_in: "3599"
expires_at: "2020-07-17T17:02:15.1187374+00:00"
access_token: "eyJhbGciOiJIUzI1NiIsImtpZCI6Imp3dF9zeW1tZXRyaWNfa2V5In0.eyJzY29wZSI6WyJkYXRhOnJlYWQiLCJkYXRhOndyaXRlIiwiZGF0YTpzZWFyY2giXSwiY2xpZW50X2lkIjoiVmZNQ3U1NDg2U3hLQXVRaGRVMU9aYTJuTHdqR1VXcEciLCJhdWQiOiJodHRwczovL2F1dG9kZXNrLmNvbS9hdWQvand0ZXhwNjAiLCJqdGkiOiI5NVFQYnhGWjlmZ1p0YWxzbXg2OW1oUWczck9WbWRKeDRtZjdtdUY0Z0xBME01TmNqZjJCT1hoa3RiRHdlVm04IiwiZXhwIjoxNTk1MDA1MzQ2fQ.tRQYz7PP-_NIDrZFrWbXvxiP4NfooBHAIC89eQuelkw"
bucket: "a360viewer"
Then Call 3 to the derivatives service:
GET https://developer.api.autodesk.com/derivativeservice/v2/manifest/dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6YTM2MHZpZXdlci90NjM3MzA1OTg3ODExMjQxMTQ4XzQxYjE3M2M1LTVmZjctNDQ2ZC1iNTFkLTgwZTg1NjQ3NjU3ZC5jb2xsYWJvcmF0aW9u
.....
Authorization: Bearer eyJhbGciOiJIUzI1NiIsImtpZCI6Imp3dF9zeW1tZXRyaWNfa2V5In0.eyJzY29wZSI6WyJkYXRhOnJlYWQiLCJkYXRhOndyaXRlIiwiZGF0YTpzZWFyY2giXSwiY2xpZW50X2lkIjoiVmZNQ3U1NDg2U3hLQXVRaGRVMU9aYTJuTHdqR1VXcEciLCJhdWQiOiJodHRwczovL2F1dG9kZXNrLmNvbS9hdWQvand0ZXhwNjAiLCJqdGkiOiI5NVFQYnhGWjlmZ1p0YWxzbXg2OW1oUWczck9WbWRKeDRtZjdtdUY0Z0xBME01TmNqZjJCT1hoa3RiRHdlVm04IiwiZXhwIjoxNTk1MDA1MzQ2fQ.tRQYz7PP-_NIDrZFrWbXvxiP4NfooBHAIC89eQuelkw
....
Call 3 returns lots of information:
guid: "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6YTM2MHZpZXdlci90NjM3MzA1OTg3ODExMjQxMTQ4XzQxYjE3M2M1LTVmZjctNDQ2ZC1iNTFkLTgwZTg1NjQ3NjU3ZC5jb2xsYWJvcmF0aW9u"
owner: "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6YTM2MHZpZXdlci90NjM3MzA1OTg3ODExMjQxMTQ4XzQxYjE3M2M1LTVmZjctNDQ2ZC1iNTFkLTgwZTg1NjQ3NjU3ZC5jb2xsYWJvcmF0aW9u"
hasThumbnail: "true"
startedAt: "Fri Jul 17 16:06:29 UTC 2020"
type: "design"
urn: "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6YTM2MHZpZXdlci90NjM3MzA1OTg3ODExMjQxMTQ4XzQxYjE3M2M1LTVmZjctNDQ2ZC1iNTFkLTgwZTg1NjQ3NjU3ZC5jb2xsYWJvcmF0aW9u"
success: "100%"
progress: "complete"
region: "US"
status: "success"
children:
0:
guid: "aa85aad6-c480-4a35-9cbf-4cf5994a25ba"
hasThumbnail: "true"
role: "viewable"
progress: "complete"
type: "folder"
status: "success"
version: "2.0"
urn: "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6YTM2MHZpZXdlci90NjM3MzA1OTg3ODExMjQxMTQ4XzQxYjE3M2M1LTVmZjctNDQ2ZC1iNTFkLTgwZTg1NjQ3NjU3ZC5jb2xsYWJvcmF0aW9u"
inputFileSize: 1059328
inputFileType: "collaboration"
name: "C-424305-036.dwg.dwf"
success: "100%"
children:
0:
guid: "87CBE465-EAB2-43C0-BA0A-D148B3418FF3_Sheets"
type: "folder"
name: "Sheets"
hasThumbnail: "true"
status: "success"
progress: "complete"
success: "100%"
children:
0:
guid: "933b32f8-830d-4861-6e81-294f5d07d4fc"
type: "geometry"
role: "2d"
name: "C-424305-036-Model"
status: "success"
size: 585718
hasThumbnail: "true"
progress: "complete"
success: "100%"
viewableID: "com.autodesk.dwf.ePlot_87CBE466-EAB2-43C0-BA0A-D148B3418FF3"
order: 0
children:
0:
urn: "urn:adsk.viewing:fs.file:dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6YTM2MHZpZXdlci90NjM3MzA1OTg3ODExMjQxMTQ4XzQxYjE3M2M1LTVmZjctNDQ2ZC1iNTFkLTgwZTg1NjQ3NjU3ZC5jb2xsYWJvcmF0aW9u/output/933b32f8-830d-4861-6e81-294f5d07d4fc_f2d/primaryGraphics.f2d"
role: "graphics"
size: 503288
mime: "application/autodesk-f2d"
guid: "c7624064-cb61-933b-787c-e02bef313ea4"
type: "resource"
status: "success"
1:
urn: "urn:adsk.viewing:fs.file:dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6YTM2MHZpZXdlci90NjM3MzA1OTg3ODExMjQxMTQ4XzQxYjE3M2M1LTVmZjctNDQ2ZC1iNTFkLTgwZTg1NjQ3NjU3ZC5jb2xsYWJvcmF0aW9u/output/933b32f8-830d-4861-6e81-294f5d07d4fc_f2d/section_properties.db"
role: "Autodesk.CloudPlatform.PropertyDatabase"
size: 24576
mime: "application/autodesk-db"
guid: "20392091-d0da-d74d-741f-904b078b9c0a"
type: "resource"
status: "success"
2:
guid: "a3a9ad7f-b9aa-451e-8bbc-da2972137d82"
type: "view"
role: "2d"
name: "INITIAL"
viewbox:
0: 0.67
1: 0.575
2: 16.330833
3: 10.423333
1:
guid: "d6e600db-d158-d84f-3101-0a654aa29df3"
type: "geometry"
role: "2d"
name: "C-424305-036-C-424305-036-00"
status: "success"
size: 617457
hasThumbnail: "true"
progress: "complete"
success: "100%"
viewableID: "com.autodesk.dwf.ePlot_87CBE467-EAB2-43C0-BA0A-D148B3418FF3"
order: 1
children:
0:
urn: "urn:adsk.viewing:fs.file:dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6YTM2MHZpZXdlci90NjM3MzA1OTg3ODExMjQxMTQ4XzQxYjE3M2M1LTVmZjctNDQ2ZC1iNTFkLTgwZTg1NjQ3NjU3ZC5jb2xsYWJvcmF0aW9u/output/d6e600db-d158-d84f-3101-0a654aa29df3_f2d/primaryGraphics.f2d"
role: "graphics"
size: 535065
mime: "application/autodesk-f2d"
guid: "441471cc-a249-1b4e-36dd-7ea4bff584cd"
type: "resource"
status: "success"
1:
urn: "urn:adsk.viewing:fs.file:dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6YTM2MHZpZXdlci90NjM3MzA1OTg3ODExMjQxMTQ4XzQxYjE3M2M1LTVmZjctNDQ2ZC1iNTFkLTgwZTg1NjQ3NjU3ZC5jb2xsYWJvcmF0aW9u/output/d6e600db-d158-d84f-3101-0a654aa29df3_f2d/section_properties.db"
role: "Autodesk.CloudPlatform.PropertyDatabase"
size: 24576
mime: "application/autodesk-db"
guid: "03334fbb-15ce-446d-ff80-111dc8616642"
type: "resource"
status: "success"
2:
guid: "69f50589-0774-4cf3-9b21-8164c1e71027"
type: "view"
role: "2d"
name: "INITIAL"
viewbox:
0: 3.958333
1: 0.4325
2: 16.330833
3: 10.565
Further research reveals that the ID used in these calls is Base64-encoded; decoding it reveals this:
urn:adsk.objects:os.object:a360viewer/t637305987811241148_41b173c5-5ff7-446d-b51d-80e85647657d.collaboration
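That decoding step can be reproduced with a one-liner; the sketch below just confirms that the viewerId from Call 1 is the Base64 form of the OSS object URN above:

```shell
# viewerId captured from the GetViewerTranslationById call above
viewer_id='dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6YTM2MHZpZXdlci90NjM3MzA1OTg3ODExMjQxMTQ4XzQxYjE3M2M1LTVmZjctNDQ2ZC1iNTFkLTgwZTg1NjQ3NjU3ZC5jb2xsYWJvcmF0aW9u'
# printf (not echo) avoids appending a newline to the Base64 input
decoded=$(printf '%s' "$viewer_id" | base64 -d)
echo "$decoded"
```

The bucket name (`a360viewer`) and the object key are both visible in the decoded URN, which lines up with the `bucket` field returned by GetAccessToken.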
I am using Hyperledger Fabric 1.0.1, OpenShift v3.4.1.44, and Kubernetes v1.4.0.
In my deployment I have 2 organizations, 4 peers, 1 orderer, and 2 CAs.
I am deploying the following YAML on OpenShift to create the pods and Services.
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: ca0
    name: ca0
  spec:
    ports:
    - name: "7054"
      port: 7054
      targetPort: 7054
    selector:
      io.kompose.service: ca0
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: ca1
    name: ca1
  spec:
    ports:
    - name: "8054"
      port: 8054
      targetPort: 7054
    selector:
      io.kompose.service: ca1
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: orderer
    name: orderer
  spec:
    ports:
    - name: "7050"
      port: 7050
      targetPort: 7050
    selector:
      io.kompose.service: orderer
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: peer01
    name: peer01
  spec:
    ports:
    - name: "7051"
      port: 7051
      targetPort: 7051
    - name: "7053"
      port: 7053
      targetPort: 7053
    selector:
      io.kompose.service: peer01
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: peer02
    name: peer02
  spec:
    ports:
    - name: "9051"
      port: 9051
      targetPort: 7051
    - name: "9053"
      port: 9053
      targetPort: 7053
    selector:
      io.kompose.service: peer02
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: peer11
    name: peer11
  spec:
    ports:
    - name: "8051"
      port: 8051
      targetPort: 7051
    - name: "8053"
      port: 8053
      targetPort: 7053
    selector:
      io.kompose.service: peer11
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: peer12
    name: peer12
  spec:
    ports:
    - name: "10051"
      port: 10051
      targetPort: 7051
    - name: "10053"
      port: 10053
      targetPort: 7053
    selector:
      io.kompose.service: peer12
  status:
    loadBalancer: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: ca0
    name: ca0
  spec:
    replicas: 1
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: ca0
      spec:
        containers:
        - args:
          - sh
          - -c
          - fabric-ca-server start --ca.certfile /var/code/peerOrganizations/org1.example.com/ca/ca.org1.example.com-cert.pem
            --ca.keyfile /var/code/peerOrganizations/org1.example.com/ca/PK-KEY
            -b admin:adminpw -d
          env:
          - name: FABRIC_CA_HOME
            value: /etc/hyperledger/fabric-ca-server
          - name: FABRIC_CA_SERVER_CA_NAME
            value: ca-org1
          - name: FABRIC_CA_SERVER_TLS_CERTFILE
            value: /var/code/peerOrganizations/org1.example.com/ca/ca.org1.example.com-cert.pem
          - name: FABRIC_CA_SERVER_TLS_ENABLED
            value: "false"
          - name: FABRIC_CA_SERVER_TLS_KEYFILE
            value: /var/code/peerOrganizations/org1.example.com/ca/PK-KEY
          image: hyperledger/fabric-ca:x86_64-1.0.1
          name: ca-peerorg1
          ports:
          - containerPort: 7054
          resources: {}
          volumeMounts:
          - mountPath: /etc/hyperledger
            name: ca0-claim0
          - mountPath: /var/fabricdeploy
            name: common-claim
        restartPolicy: Always
        volumes:
        - name: ca0-claim0
          persistentVolumeClaim:
            claimName: ca0-pvc
        - name: common-claim
          persistentVolumeClaim:
            claimName: fabric-deploy
  status: {}
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: ca0-pvc
    name: ca0-pvc
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 100Mi
  status: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: ca1
    name: ca1
  spec:
    replicas: 1
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: ca1
      spec:
        containers:
        - args:
          - sh
          - -c
          - fabric-ca-server start --ca.certfile /var/code/peerOrganizations/org2.example.com/ca/ca.org2.example.com-cert.pem
            --ca.keyfile /var/code/peerOrganizations/org2.example.com/ca/PK-KEY
            -b admin:adminpw -d
          env:
          - name: FABRIC_CA_HOME
            value: /etc/hyperledger/fabric-ca-server
          - name: FABRIC_CA_SERVER_CA_NAME
            value: ca-org2
          - name: FABRIC_CA_SERVER_TLS_CERTFILE
            value: /var/code/peerOrganizations/org2.example.com/ca/ca.org2.example.com-cert.pem
          - name: FABRIC_CA_SERVER_TLS_ENABLED
            value: "false"
          - name: FABRIC_CA_SERVER_TLS_KEYFILE
            value: /var/code/peerOrganizations/org2.example.com/ca/PK-KEY
          image: hyperledger/fabric-ca:x86_64-1.0.1
          name: ca-peerorg2
          ports:
          - containerPort: 7054
          resources: {}
          volumeMounts:
          - mountPath: /etc/hyperledger
            name: ca1-claim0
          - mountPath: /var/fabricdeploy
            name: common-claim
        restartPolicy: Always
        volumes:
        - name: ca1-claim0
          persistentVolumeClaim:
            claimName: ca1-pvc
        - name: common-claim
          persistentVolumeClaim:
            claimName: fabric-deploy
  status: {}
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: ca1-pvc
    name: ca1-pvc
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 100Mi
  status: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: orderer
    name: orderer
  spec:
    replicas: 1
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: orderer
      spec:
        containers:
        - args:
          - orderer
          env:
          - name: ORDERER_GENERAL_GENESISFILE
            value: /var/fabricdeploy/fabric-samples/first-network/channel-artifacts/genesis.block
          - name: ORDERER_GENERAL_GENESISMETHOD
            value: file
          - name: ORDERER_GENERAL_LISTENADDRESS
            value: 0.0.0.0
          - name: ORDERER_GENERAL_LOCALMSPDIR
            value: /var/code/ordererOrganizations/example.com/orderers/orderer.example.com/msp
          - name: ORDERER_GENERAL_LOCALMSPID
            value: OrdererMSP
          - name: ORDERER_GENERAL_LOGLEVEL
            value: debug
          - name: ORDERER_GENERAL_TLS_CERTIFICATE
            value: /var/code/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
          - name: ORDERER_GENERAL_TLS_ENABLED
            value: "false"
          - name: ORDERER_GENERAL_TLS_PRIVATEKEY
            value: /var/code/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.key
          - name: ORDERER_GENERAL_TLS_ROOTCAS
            value: '[/var/code/ordererOrganizations/example.com/orderers/orderer.example.com/tls/ca.crt]'
          image: hyperledger/fabric-orderer:x86_64-1.0.1
          name: orderer
          ports:
          - containerPort: 7050
          resources: {}
          volumeMounts:
          - mountPath: /var/fabricdeploy
            name: common-claim
          - mountPath: /var
            name: ordererclaim1
          workingDir: /opt/gopath/src/github.com/hyperledger/fabric
        restartPolicy: Always
        volumes:
        - name: common-claim
          persistentVolumeClaim:
            claimName: fabric-deploy
        - name: ordererclaim1
          persistentVolumeClaim:
            claimName: orderer-pvc
  status: {}
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: orderer-pvc
    name: orderer-pvc
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 100Mi
  status: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: peer01
    name: peer01
  spec:
    replicas: 1
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: peer01
      spec:
        containers:
        - args:
          - peer
          - node
          - start
          env:
          - name: CORE_LOGGING_LEVEL
            value: DEBUG
          - name: CORE_PEER_ADDRESS
            value: peer01.first-network.svc.cluster.local:7051
          - name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
            value: peer01.first-network.svc.cluster.local:7051
          - name: CORE_PEER_GOSSIP_ORGLEADER
            value: "false"
          - name: CORE_PEER_GOSSIP_USELEADERELECTION
            value: "true"
          - name: CORE_PEER_ID
            value: peer0.org1.example.com
          - name: CORE_PEER_LOCALMSPID
            value: Org1MSP
          - name: CORE_PEER_PROFILE_ENABLED
            value: "true"
          - name: CORE_PEER_TLS_CERT_FILE
            value: /var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
          - name: CORE_PEER_TLS_ENABLED
            value: "false"
          - name: CORE_PEER_TLS_KEY_FILE
            value: /var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
          - name: CORE_PEER_TLS_ROOTCERT_FILE
            value: /var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
          - name: CORE_PEER_MSPCONFIGPATH
            value: /var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp
          image: hyperledger/fabric-peer:x86_64-1.0.1
          name: peer01
          ports:
          - containerPort: 7051
          - containerPort: 7053
          resources: {}
          volumeMounts:
          - mountPath: /var
            name: peer01claim0
          - mountPath: /var/fabricdeploy
            name: common-claim
          workingDir: /opt/gopath/src/github.com/hyperledger/fabric/peer
        restartPolicy: Always
        volumes:
        - name: peer01claim0
          persistentVolumeClaim:
            claimName: peer01-pvc
        - name: common-claim
          persistentVolumeClaim:
            claimName: fabric-deploy
  status: {}
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: peer01-pvc
    name: peer01-pvc
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 100Mi
  status: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: peer02
    name: peer02
  spec:
    replicas: 1
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: peer02
      spec:
        containers:
        - args:
          - peer
          - node
          - start
          env:
          - name: CORE_LOGGING_LEVEL
            value: DEBUG
          - name: CORE_PEER_ADDRESS
            value: peer02.first-network.svc.cluster.local:7051
          - name: CORE_PEER_GOSSIP_BOOTSTRAP
            value: peer02.first-network.svc.cluster.local:7051
          - name: CORE_PEER_GOSSIP_ORGLEADER
            value: "false"
          - name: CORE_PEER_GOSSIP_USELEADERELECTION
            value: "true"
          - name: CORE_PEER_ID
            value: peer0.org2.example.com
          - name: CORE_PEER_LOCALMSPID
            value: Org2MSP
          - name: CORE_PEER_PROFILE_ENABLED
            value: "true"
          - name: CORE_PEER_TLS_CERT_FILE
            value: /var/code/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/server.crt
          - name: CORE_PEER_TLS_ENABLED
            value: "false"
          - name: CORE_PEER_TLS_KEY_FILE
            value: /var/code/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/server.key
          - name: CORE_PEER_TLS_ROOTCERT_FILE
            value: /var/code/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
          - name: CORE_PEER_MSPCONFIGPATH
            value: /var/code/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp
          image: hyperledger/fabric-peer:x86_64-1.0.1
          name: peer02
          ports:
          - containerPort: 7051
          - containerPort: 7053
          resources: {}
          volumeMounts:
          - mountPath: /var
            name: peer02claim0
          - mountPath: /var/fabricdeploy
            name: common-claim
          workingDir: /opt/gopath/src/github.com/hyperledger/fabric/peer
        restartPolicy: Always
        volumes:
        - name: peer02claim0
          persistentVolumeClaim:
            claimName: peer02-pvc
        - name: common-claim
          persistentVolumeClaim:
            claimName: fabric-deploy
  status: {}
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: peer02-pvc
    name: peer02-pvc
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 100Mi
  status: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: peer11
    name: peer11
  spec:
    replicas: 1
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: peer11
      spec:
        containers:
        - args:
          - peer
          - node
          - start
          env:
          - name: CORE_LOGGING_LEVEL
            value: DEBUG
          - name: CORE_PEER_ADDRESS
            value: peer11.first-network.svc.cluster.local:7051
          - name: CORE_PEER_GOSSIP_BOOTSTRAP
            value: peer01.first-network.svc.cluster.local:7051
          - name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
            value: peer11.first-network.svc.cluster.local:7051
          - name: CORE_PEER_GOSSIP_ORGLEADER
            value: "false"
          - name: CORE_PEER_GOSSIP_USELEADERELECTION
            value: "true"
          - name: CORE_PEER_ID
            value: peer1.org1.example.com
          - name: CORE_PEER_LOCALMSPID
            value: Org1MSP
          - name: CORE_PEER_PROFILE_ENABLED
            value: "true"
          - name: CORE_PEER_TLS_CERT_FILE
            value: /var/code/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/server.crt
          - name: CORE_PEER_TLS_ENABLED
            value: "false"
          - name: CORE_PEER_TLS_KEY_FILE
            value: /var/code/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/server.key
          - name: CORE_PEER_TLS_ROOTCERT_FILE
            value: /var/code/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt
          - name: CORE_PEER_MSPCONFIGPATH
            value: /var/code/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp
          image: hyperledger/fabric-peer:x86_64-1.0.1
          name: peer11
          ports:
          - containerPort: 7051
          - containerPort: 7053
          resources: {}
          volumeMounts:
          - mountPath: /var
            name: peer11claim0
          - mountPath: /var/fabricdeploy
            name: peer11claim1
          workingDir: /opt/gopath/src/github.com/hyperledger/fabric/peer
        restartPolicy: Always
        volumes:
        - name: peer11claim0
          persistentVolumeClaim:
            claimName: peer11-pvc
        - name: peer11claim1
          persistentVolumeClaim:
            claimName: fabric-deploy
  status: {}
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: peer11-pvc
    name: peer11-pvc
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 100Mi
  status: {}
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer12
name: peer12
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer12
spec:
containers:
- args:
- peer
- node
- start
env:
- name: CORE_LOGGING_LEVEL
value: DEBUG
- name: CORE_PEER_ADDRESS
value: peer12.first-network.svc.cluster.local:7051
- name: CORE_PEER_GOSSIP_BOOTSTRAP
value: peer12.first-network.svc.cluster.local:7051
- name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
value: peer12.first-network.svc.cluster.local:7051
- name: CORE_PEER_GOSSIP_ORGLEADER
value: "false"
- name: CORE_PEER_GOSSIP_USELEADERELECTION
value: "true"
- name: CORE_PEER_ID
value: peer1.org2.example.com
- name: CORE_PEER_LOCALMSPID
value: Org2MSP
- name: CORE_PEER_PROFILE_ENABLED
value: "true"
- name: CORE_PEER_TLS_CERT_FILE
value: /var/code/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/server.crt
- name: CORE_PEER_TLS_ENABLED
value: "false"
- name: CORE_PEER_TLS_KEY_FILE
value: /var/code/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/server.key
- name: CORE_PEER_TLS_ROOTCERT_FILE
value: /var/code/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt
- name: CORE_PEER_MSPCONFIGPATH
value: /var/code/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/msp
image: hyperledger/fabric-peer:x86_64-1.0.1
name: peer12
ports:
- containerPort: 7051
- containerPort: 7053
resources: {}
volumeMounts:
- mountPath: /var
name: peer12claim0
- mountPath: /var/fabricdeploy
name: peer12claim1
workingDir: /opt/gopath/src/github.com/hyperledger/fabric/peer
restartPolicy: Always
volumes:
- name: peer12claim0
persistentVolumeClaim:
claimName: peer12-pvc
- name: peer12claim1
persistentVolumeClaim:
claimName: fabric-deploy
status: {}
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer12-pvc
name: peer12-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Mi
status: {}
kind: List
metadata: {}
When I tried to execute the steps of script.sh https://github.com/hyperledger/fabric-samples/tree/release/first-network/scripts (Hyperledger Fabric, Building Your First Network) to build the network, I got an error at the installChaincode step.
:/var/fabricdeploy/fabric-samples/first-network/scripts$ ./script.sh
Build your first network (BYFN) end-to-end test
Channel name : mychannel
Creating channel...
CORE_PEER_TLS_ROOTCERT_FILE=/var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
CORE_PEER_TLS_KEY_FILE=/var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
.
.
.
2017-08-31 13:56:02.520 UTC [main] main -> INFO 021 Exiting.....
===================== Channel "mychannel" is created successfully =====================
Having all peers join the channel...
CORE_PEER_TLS_ROOTCERT_FILE=/var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
.
.
.
2017-08-31 13:56:02.565 UTC [msp/identity] Sign -> DEBU 005 Sign: digest: F98AD2F3EFC2B7B6916C149E819B7F322C29595623D48A90AB14899C0E2DDD51
2017-08-31 13:56:02.591 UTC [channelCmd] executeJoin -> INFO 006 Peer joined the channel!
2017-08-31 13:56:02.591 UTC [main] main -> INFO 007 Exiting.....
===================== PEER0 joined on the channel "mychannel" =====================
CORE_PEER_TLS_ROOTCERT_FILE=/var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
.
.
.
2017-08-31 13:56:04.669 UTC [channelCmd] executeJoin -> INFO 006 Peer joined the channel!
2017-08-31 13:56:04.669 UTC [main] main -> INFO 007 Exiting.....
===================== PEER1 joined on the channel "mychannel" =====================
CORE_PEER_TLS_ROOTCERT_FILE=/var/code/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
.
.
.
2017-08-31 13:56:06.760 UTC [channelCmd] executeJoin -> INFO 006 Peer joined the channel!
2017-08-31 13:56:06.760 UTC [main] main -> INFO 007 Exiting.....
===================== PEER2 joined on the channel "mychannel" =====================
CORE_PEER_TLS_ROOTCERT_FILE=/var/code/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
.
.
.
2017-08-31 13:56:08.844 UTC [channelCmd] executeJoin -> INFO 006 Peer joined the channel!
2017-08-31 13:56:08.844 UTC [main] main -> INFO 007 Exiting.....
===================== PEER3 joined on the channel "mychannel" =====================
Updating anchor peers for org1...
CORE_PEER_TLS_ROOTCERT_FILE=/var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
.
.
.
2017-08-31 13:56:10.934 UTC [main] main -> INFO 010 Exiting.....
===================== Anchor peers for org "Org1MSP" on "mychannel" is updated successfully =====================
Updating anchor peers for org2...
CORE_PEER_TLS_ROOTCERT_FILE=/var/code/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
.
.
.
2017-08-31 13:56:11.006 UTC [main] main -> INFO 010 Exiting.....
===================== Anchor peers for org "Org2MSP" on "mychannel" is updated successfully =====================
Installing chaincode on org1/peer0...
CORE_PEER_TLS_ROOTCERT_FILE=/var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
CORE_PEER_TLS_KEY_FILE=/var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
CORE_PEER_LOCALMSPID=Org1MSP
CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
CORE_PEER_TLS_CERT_FILE=/var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
CORE_PEER_TLS_ENABLED=false
CORE_PEER_MSPCONFIGPATH=/var/code/peerOrganizations/org1.example.com/users/Admin#org1.example.com/msp
CORE_PEER_ID=cli
CORE_LOGGING_LEVEL=DEBUG
CORE_PEER_ADDRESS=peer01.first-network.svc.cluster.local:7051
2017-08-/opt/go/src/runtime/panic.go:566 +0x95EBU 001 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 172.30.217.53:7051: getsockopt: connection refused";
runtime.sigpanic()peer01.first-network.svc.cluster.local:7051 <nil>}
fatal er/opt/go/src/runtime/sigpanic_unix.go:12 +0x2ccn
[signal SIGSEGV: segmentation violation code=0x1 addr=0x47 pc=0x7fb7242db259]
goroutine 20 [syscall, locked to thread]:
runtime.cgocall(0xb08d50, 0xc4200265f8, 0xc400000000)
runtime./opt/go/src/runtime/cgocall.go:131 +0x110 fp=0xc4200265b0 sp=0xc420026570
net._C2f??:0 +0x68 fp=0xc4200265f8 sp=0xc4200265b0018d6e0, 0xc42013c158, 0x0, 0x0, 0x0)
net.cgoL/opt/go/src/net/cgo_unix.go:146 +0x37c fp=0xc420026718 sp=0xc4200265f8
net.cgoI/opt/go/src/net/cgo_unix.go:198 +0x4d fp=0xc4200267a8 sp=0xc420026718
runtime./opt/go/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc4200267b0 sp=0xc4200267a8
created /opt/go/src/net/cgo_unix.go:208 +0xb4
/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/clientconn.go:434 +0x856
github.com/hyperledger/fabric/vendor/google.golang.org/grpc.Dial(0xc420018092, 0x2b, 0xc420357300, 0x4, 0x4, 0xc420357300, 0x2, 0x4)
github.c/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/clientconn.go:319 +0x960018092, 0x2b, 0xc420357300, 0x4, 0x4, 0x0, 0x0, 0x0)
github.c/opt/gopath/src/github.com/hyperledger/fabric/core/comm/connection.go:191 +0x2a9b, 0x490001, 0x0, 0x0, 0xc, 0xc420018092, 0x2b)
github.c/opt/gopath/src/github.com/hyperledger/fabric/core/peer/peer.go:500 +0xbe018092, 0x2b, 0xc420018092, 0x2b, 0xc4201a5988)
github.c/opt/gopath/src/github.com/hyperledger/fabric/core/peer/peer.go:475 +0x4e4201a59c0, 0x0)
github.c/opt/gopath/src/github.com/hyperledger/fabric/peer/common/common.go:114 +0x29 0x0, 0xc4200001a0)
github.c/opt/gopath/src/github.com/hyperledger/fabric/peer/chaincode/common.go:240 +0x77a
github.c/opt/gopath/src/github.com/hyperledger/fabric/peer/chaincode/install.go:166 +0x5a8 0xd9d943, 0x5)
github.c/opt/gopath/src/github.com/hyperledger/fabric/peer/chaincode/install.go:54 +0x54, 0x0, 0x6, 0x0, 0x0)
!!!!!!!!!!!!!!! Chaincode installation on remote peer PEER0 has Failed !!!!!!!!!!!!!!!!
========= ERROR !!! FAILED to execute End-2-End Scenario ===========
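The stack trace above is interleaved, but the root cause is visible near the top: the CLI's gRPC dial to peer01.first-network.svc.cluster.local:7051 was refused, and the panic then occurs inside cgo DNS resolution. Before rerunning the script, it may help to confirm that the Service actually has ready endpoints and that port 7051 is reachable from inside the cluster. A diagnostic sketch (the service and namespace names are taken from the log above; the throwaway busybox pod is an assumption):

```shell
# Names taken from the error log; adjust if yours differ.
NS=first-network
SVC=peer01

# Does the Service exist, and does it have ready endpoints?
# An empty ENDPOINTS column means the selector matches no running pod.
kubectl get svc "$SVC" -n "$NS"
kubectl get endpoints "$SVC" -n "$NS"

# Resolve the cluster DNS name and probe port 7051 from inside the cluster
# (temporary busybox pod, removed automatically on exit).
kubectl run netcheck -n "$NS" --rm -it --image=busybox --restart=Never -- \
  sh -c "nslookup $SVC.$NS.svc.cluster.local && nc -zv $SVC.$NS.svc.cluster.local 7051"
```

If the endpoints list is empty, the Service selector does not match the peer pod's labels, or the pod is not ready; "connection refused" with a populated endpoints list usually means the peer process inside the pod is not listening on 7051 yet.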
I was trying to generate a template from my existing setup with
oc export dc,svc,bc --selector="microservice=imagesvc" -o yaml --as-template=imagesvc
The problem is that the template points the container image at my registry. I would like to modify the template so that the build configuration builds the container from source and then attaches the result to the DeploymentConfig. How can I achieve something like that?
This is the config I currently have. When I apply it I get various errors; for example, in Builds I get "Invalid output reference".
Any help with this would be greatly appreciated.
apiVersion: v1
kind: Template
metadata:
creationTimestamp: null
name: imagesvc
objects:
- apiVersion: v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
generation: 1
labels:
app: gcsimageupload
microservice: imagesvc
name: gcsimageupload
spec:
replicas: 1
selector:
deploymentconfig: gcsimageupload
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
creationTimestamp: null
labels:
app: gcsimageupload
deploymentconfig: gcsimageupload
microservice: imagesvc
spec:
containers:
- imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
name: gcsimageupload
ports:
- containerPort: 8080
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /secret
name: gcsimageupload-secret
readOnly: true
dnsPolicy: ClusterFirst
restartPolicy: Always
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: gcsimageupload-secret
secret:
defaultMode: 420
secretName: gcsimageupload-secret
test: false
triggers:
- imageChangeParams:
automatic: true
containerNames:
- gcsimageupload
from:
kind: ImageStreamTag
name: gcsimageupload:latest
namespace: web
type: ImageChange
- type: ConfigChange
status:
availableReplicas: 0
latestVersion: 0
observedGeneration: 0
replicas: 0
unavailableReplicas: 0
updatedReplicas: 0
- apiVersion: v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
generation: 1
labels:
app: imagesvc
microservice: imagesvc
name: imagesvc
spec:
replicas: 1
selector:
deploymentconfig: imagesvc
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
creationTimestamp: null
labels:
app: imagesvc
deploymentconfig: imagesvc
microservice: imagesvc
spec:
containers:
- imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
name: imagesvc
ports:
- containerPort: 8080
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
resources: {}
terminationMessagePath: /dev/termination-log
dnsPolicy: ClusterFirst
restartPolicy: Always
securityContext: {}
terminationGracePeriodSeconds: 30
test: false
triggers:
- imageChangeParams:
automatic: true
containerNames:
- imagesvc
from:
kind: ImageStreamTag
name: imagesvc:latest
namespace: web
type: ImageChange
- type: ConfigChange
status:
availableReplicas: 0
latestVersion: 0
observedGeneration: 0
replicas: 0
unavailableReplicas: 0
updatedReplicas: 0
- apiVersion: v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
generation: 1
labels:
app: imaginary
microservice: imagesvc
name: imaginary
spec:
replicas: 1
selector:
app: imaginary
deploymentconfig: imaginary
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
labels:
app: imaginary
deploymentconfig: imaginary
microservice: imagesvc
spec:
containers:
- image: h2non/imaginary
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /health
port: 9000
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: imaginary
ports:
- containerPort: 9000
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /health
port: 9000
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
terminationMessagePath: /dev/termination-log
dnsPolicy: ClusterFirst
restartPolicy: Always
securityContext: {}
terminationGracePeriodSeconds: 30
test: false
triggers:
- type: ConfigChange
- imageChangeParams:
automatic: true
containerNames:
- imaginary
from:
kind: ImageStreamTag
name: imaginary:latest
namespace: web
type: ImageChange
status:
availableReplicas: 0
latestVersion: 0
observedGeneration: 0
replicas: 0
unavailableReplicas: 0
updatedReplicas: 0
- apiVersion: v1
kind: Service
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
labels:
app: gcsimageupload
microservice: imagesvc
name: gcsimageupload
spec:
ports:
- name: 8080-tcp
port: 8080
protocol: TCP
targetPort: 8080
selector:
deploymentconfig: gcsimageupload
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
- apiVersion: v1
kind: Service
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
service.alpha.openshift.io/dependencies: '[{"name":"gcsimageupload","namespace":"","kind":"Service"},{"name":"imaginary","namespace":"","kind":"Service"}]'
creationTimestamp: null
labels:
app: imagesvc
microservice: imagesvc
name: imagesvc
spec:
ports:
- name: 8080-tcp
port: 8080
protocol: TCP
targetPort: 8080
selector:
deploymentconfig: imagesvc
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
- apiVersion: v1
kind: Service
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
labels:
app: imaginary
microservice: imagesvc
name: imaginary
spec:
ports:
- name: 9000-tcp
port: 9000
protocol: TCP
targetPort: 9000
selector:
deploymentconfig: imaginary
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
- apiVersion: v1
kind: BuildConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
labels:
app: gcsimageupload
microservice: imagesvc
name: gcsimageupload
spec:
nodeSelector: null
output:
to:
kind: ImageStreamTag
name: gcsimageupload:latest
postCommit: {}
resources: {}
runPolicy: Serial
source:
git:
ref: master
uri: https://github.com/un1x86/openshift-ms-gcsFileUpload.git
type: Git
strategy:
sourceStrategy:
env:
- name: GCS_PROJECT
value: ${GCS_PROJECT_ID}
- name: GCS_KEY_FILENAME
value: ${GCS_KEY_FILENAME}
- name: GCS_BUCKET
value: ${GCS_BUCKET}
from:
kind: ImageStreamTag
name: nodejs:4
namespace: openshift
type: Source
triggers:
- github:
secret: f9928132855c5a30
type: GitHub
- generic:
secret: 77ece14f810caa3f
type: Generic
- imageChange: {}
type: ImageChange
- type: ConfigChange
status:
lastVersion: 0
- apiVersion: v1
kind: BuildConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
labels:
app: imagesvc
microservice: imagesvc
name: imagesvc
spec:
nodeSelector: null
output:
to:
kind: ImageStreamTag
name: imagesvc:latest
postCommit: {}
resources: {}
runPolicy: Serial
source:
git:
ref: master
uri: https://github.com/un1x86/openshift-ms-imagesvc.git
type: Git
strategy:
sourceStrategy:
env:
- name: IMAGINARY_APPLICATION_DOMAIN
value: http://imaginary:9000
- name: GCSIMAGEUPLOAD_APPLICATION_DOMAIN
value: http://gcsimageupload:8080
from:
kind: ImageStreamTag
name: nodejs:4
namespace: openshift
type: Source
triggers:
- generic:
secret: 945da12357ef35cf
type: Generic
- github:
secret: 18106312cfa8e2d1
type: GitHub
- imageChange: {}
type: ImageChange
- type: ConfigChange
status:
lastVersion: 0
parameters:
- description: "GCS Project ID"
name: GCS_PROJECT_ID
value: ""
required: true
- description: "GCS Key Filename"
name: GCS_KEY_FILENAME
value: /secret/keyfile.json
required: true
- description: "GCS Bucket name"
name: GCS_BUCKET
value: ""
required: true
You will need to create two ImageStreams named "imagesvc" and "gcsimageupload", since the BuildConfigs output to those ImageStreamTags and the DeploymentConfig triggers watch them. You can create them from the CLI with "oc create is <name>", or by adding them to the template:
- kind: ImageStream
apiVersion: v1
metadata:
name: <name>
spec:
lookupPolicy:
local: false
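With the ImageStreams added, the template should process without the "Invalid output reference" error: each BuildConfig pushes to its ImageStreamTag, and the ImageChange triggers on the DeploymentConfigs roll out the new image when a build completes. A usage sketch (the file name imagesvc-template.yaml and the parameter values are assumptions):

```shell
# Process the template file directly, supplying the required parameters,
# and create the resulting objects in the current project.
oc process -f imagesvc-template.yaml \
  -p GCS_PROJECT_ID=my-project \
  -p GCS_BUCKET=my-bucket \
  | oc create -f -

# Kick off the first builds; the ImageChange triggers redeploy on completion.
oc start-build gcsimageupload
oc start-build imagesvc
```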