We are using Hudson for our builds and have found that if we use the master to do the build, the Perforce view is not loaded. There are no errors in the console, except that for a new project the Perforce plugin seems to detect the wrong version and tries to load changelist 0. Other than that, running the exact same commands shown in the console on the master as the hudson user works correctly. Here is the console output for a new project building on the Linux master (note this doesn't do anything other than load the view; checking the directory afterwards shows nothing was loaded):
Started by user anonymous
Building on master
Clearing workspace...
Cleared workspace.
Using master perforce client: hudson_alec_test
[workspace] $ /usr/local/bin/p4 workspace -o hudson_alec_test
Last sync'd change: 0
[workspace] $ /usr/local/bin/p4 counter change
[workspace] $ /usr/local/bin/p4 -s changes //hudson_alec_test/...#1,#5561
Sync'ing workspace to changelist 0 (forcing sync of unchanged files).
[workspace] $ /usr/local/bin/p4 sync -f //hudson_alec_test/...#0
Sync complete, took 259 ms
Finished: SUCCESS
Here's the output from this project once I bind it to our Mac slave (no change in project configuration other than binding it to the slave; note that Perforce now correctly detects the changes and correctly loads the view):
Started by user anonymous
Building remotely on xxx.xxx.xxx.xxx
Clearing workspace...
Cleared workspace.
Using remote perforce client: hudson_alec_test--yyy
[alec_test] $ /usr/local/bin/p4 workspace -o hudson_alec_test--yyy
[alec_test] $ /usr/local/bin/p4 login -p
[alec_test] $ /usr/local/bin/p4 -P xxx workspace -o hudson_alec_test--yyy
Changing P4 Client Root to: /Users/hudson/hudson_builds/workspace/alec_test/
Changing P4 Client View from:
//depot/... //hudson_alec_test--yyy/...
Changing P4 Client View to:
//depot/webservices/dev/projects/parents/... //hudson_alec_test-yyy/webservices/dev/projects/parents/...
Saving new client hudson_alec_test--yyy
[alec_test] $ /usr/local/bin/p4 -P xxx -s client -i
Last sync'd change: 0
[alec_test] $ /usr/local/bin/p4 -P xxx counter change
[alec_test] $ /usr/local/bin/p4 -P xxx -s changes //hudson_alec_test--yyy/...#1,#5561
[alec_test] $ /usr/local/bin/p4 -P xxx describe -s 5554
[alec_test] $ /usr/local/bin/p4 -P xxx describe -s 5552
[alec_test] $ /usr/local/bin/p4 -P xxx describe -s 5551
[alec_test] $ /usr/local/bin/p4 -P xxx describe -s 5550
[alec_test] $ /usr/local/bin/p4 -P xxx describe -s 5213
[alec_test] $ /usr/local/bin/p4 -P xxx describe -s 5211
Sync'ing workspace to changelist 5554 (forcing sync of unchanged files).
[alec_test] $ /usr/local/bin/p4 -P xxx sync -f //hudson_alec_test--yyy/...#5554
Sync complete, took 409 ms
Finished: SUCCESS
And now, changing it back to the master, the console output is as follows (again, no change in project configuration other than binding it to the master instead of the slave, but now no files are loaded in the workspace on the master):
Started by user anonymous
Building on master
Clearing workspace...
Cleared workspace.
Using master perforce client: hudson_alec_test
[workspace] $ /usr/local/bin/p4 workspace -o hudson_alec_test
Last sync'd change: 5554
[workspace] $ /usr/local/bin/p4 counter change
[workspace] $ /usr/local/bin/p4 -s changes //hudson_alec_test/...#5555,#5561
Sync'ing workspace to changelist 5554 (forcing sync of unchanged files).
[workspace] $ /usr/local/bin/p4 sync -f //hudson_alec_test/...#5554
Sync complete, took 229 ms
Finished: SUCCESS
I have seen other questions reference a perforce-hudson log, but I can't find any Perforce logging other than what's shown in the console. Any suggestions on how to debug this further would be greatly appreciated.
As I was writing up this question I noticed that on the master the Perforce plugin doesn't seem to set the client view in the log the way it does on the slave. I don't know if this has anything to do with the issue, but it is suspicious.
You're using two different clients. Your slave client looks to be managed by the plugin, maybe? So although you say nothing is different, your clients are different. Note the [workspace] $ /usr/local/bin/p4 -s changes //hudson_alec_test/...#1,#5561, after which it syncs to #0; that means it didn't find any changes in that depot. You see it again in your second test: [workspace] $ /usr/local/bin/p4 -s changes //hudson_alec_test/...#5555,#5561, and then it syncs to #5554. There were no changes in your depot between #5555 and #5561. Most likely it's because the depot path differs between the two clients.
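A quick way to confirm this, assuming the client names from the console output above, is to dump both client specs and compare their View fields:
p4 workspace -o hudson_alec_test        # client used on the master
p4 workspace -o hudson_alec_test--yyy   # client the plugin manages on the slave
If the master client's View still maps a different depot path than the slave client's, then p4 changes //hudson_alec_test/... will legitimately find nothing new, and the sync to changelist 0 is expected.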
The project is here: deep_pix_bis_pad.icb2019
The paper: Deep Pixel-wise Binary Supervision for Face Presentation Attack Detection
I followed the installation instructions exactly as given:
$ cd bob.paper.deep_pix_bis_pad.icb2019
$ conda env create -f environment.yml
$ source activate bob.paper.deep_pix_bis_pad.icb2019 # activate the environment
$ buildout
$ bin/bob_dbmanage.py all download --missing
$ bin/train_pixbis.py --help # test the installation
When I try to run buildout, I get an error message like this:
mr.developer: Cloned 'bob.bio.base' with git from 'git@gitlab.idiap.ch:bob/bob.bio.base'.
mr.developer: git cloning of 'bob.bio.base' failed.
mr.developer: Connection closed by 192.33.221.117 port 22
mr.developer: fatal: Could not read from remote repository.
I can connect to git@gitlab.com, but I am rejected by git@gitlab.idiap.ch:
$ ssh -T git@gitlab.com
Welcome to GitLab, @buzuyun!
$ ssh -T git@gitlab.idiap.ch
Connection closed by 192.33.221.xxx port 22
I guess it's because this is a server belonging to the Idiap Research Institute.
But I cannot log in or register an account, so I can never pass the SSH authentication, as I cannot add my SSH key without an account!
Has anyone reproduced this paper?
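In the meantime, a workaround I'm considering (assuming the Idiap repositories are also exposed over HTTPS, which I have not been able to verify) is to have git rewrite the SSH URLs that mr.developer uses to HTTPS before running buildout:
# assumption: the bob repositories can be cloned over HTTPS without an account
git config --global url."https://gitlab.idiap.ch/".insteadOf "git@gitlab.idiap.ch:"
buildout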
I'm new to OpenShift and Kubernetes. I have installed an OpenShift v3.11.0+bf985b1-463 cluster on my CentOS 7 machine.
Running prerequisites.yml and deploy_cluster.yml completed successfully, and I have updated htpasswd and granted the cluster-admin role to my user.
htpasswd -b ${HTPASSWD_PATH}/htpasswd $OKD_USERNAME ${OKD_PASSWORD}
oc adm policy add-cluster-role-to-user cluster-admin $OKD_USERNAME
And I have also created the user and identity with the commands below.
oc create user bob
oc create identity ldap_provider:bob
oc create useridentitymapping ldap_provider:bob bob
When I try to log in with oc login -u bob -p password, it says:
Login failed (401 Unauthorized)
Verify you have provided correct credentials.
But I am able to log in with oc login -u system:admin.
For your information: the OKD deploy_cluster.yml ran successfully, but the pod below is in an error state.
Could that be causing the problem? (cmd: oc get pods)
Please suggest how I can fix this issue. Thank you.
UPDATE:
I ran deploy_cluster.yml once again and the login issue is solved; I am able to log in. But it now fails with the error below.
This phase can be restarted by running: playbooks/openshift-logging/config.yml
Node logging-es-data-master-ioblern6 in cluster logging-es was unable to rollout. Please see documentation regarding recovering during a rolling cluster restart
In the OpenShift console the logging pod has the event below.
But all the servers have enough memory; more than 65% is free.
And the Ansible version is 2.6.5
1 Master node config:
4CPU, 16GB RAM, 50GB HDD
2 Slave and 1 infra node config:
4CPU, 16GB RAM, 20GB HDD
To create a new user try to follow these steps:
1 Create the password entry in the htpasswd file on each master node:
$ htpasswd -b </path/to/htpasswd> <user_name> <password>
$ htpasswd -b /etc/origin/master/htpasswd myUser myPassword
2 Restart the master API and master controllers on each master node:
$ master-restart controllers && master-restart api
or
$ /usr/local/bin/master-restart api && /usr/local/bin/master-restart controllers
3 Apply the needed roles:
$ oc adm policy add-cluster-role-to-user cluster-admin myUser
4 Log in as myUser:
$ oc login -u myUser -p myPassword
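As a quick sanity check after step 4 (a sketch, assuming the user name myUser from the steps above):
$ oc whoami                                         # should print myUser
$ oc get user myUser                                # user object created on first login
$ oc get clusterrolebindings | grep cluster-admin   # confirm the role binding exists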
Running deploy_cluster.yml again after configuring the htpasswd file forces a restart of the master controllers and API, which is why you were then able to log in as your new user.
About the other problem (the registry-console and logging-es-data-master pods not running): it happens because you cannot run deploy_cluster.yml again when your cluster is already up and running, so you have to uninstall OKD and then run the playbook again. This happens because the SDN is already working and all your nodes already own all the needed certificates.
$ ansible-playbook -i path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
and then again
$ ansible-playbook -i path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
More detailed information is available here.
If, after all this procedure, the logging-es-data-master pod still does not run, uninstall the logging component with:
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=False -e openshift_logging_purge_logging=true
and then uninstall the whole OKD installation and install it again.
If your cluster is already working and you cannot perform the installation again, try to uninstall and reinstall only the logging component:
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=False -e openshift_logging_purge_logging=true
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=True
Detailed RH instructions are here.
I downloaded the deb package from https://www.couchbase.com/downloads and installed it using:
sudo dpkg -i couchbaseXXX.deb
It installed successfully, but when I try to execute:
couchbase-cli bucket-create -c localhost:8091 -u Administrator ****
it returns:
couchbase-cli: command not found
What is the issue, and how do I fix it?
First you have to set up the Couchbase cluster with the same couchbase-cli tool before creating the bucket. An example is below; --services can be index, data, or query.
/opt/couchbase/bin/couchbase-cli cluster-init -c 127.0.0.1:8091 -u Administrator -p Public123 --cluster-username=Administrator --cluster-password=Public123 --cluster-port=8091 --cluster-ramsize=49971 --cluster-index-ramsize=2000 --services=data
You have to go into the directory where couchbase-cli is installed and run the command from there.
Below are the steps I followed.
cd /opt/couchbase/bin
./couchbase-cli bucket-create -c localhost:8091 -u Administrator -p password --bucket test-data --bucket-type couchbase --bucket-ramsize 100
Once I ran the above command, I got a success message and the bucket was created.
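Alternatively, instead of changing into /opt/couchbase/bin every time, you can add that directory to your PATH (a sketch; adjust the profile file to your shell):
echo 'export PATH="$PATH:/opt/couchbase/bin"' >> ~/.bashrc
source ~/.bashrc
couchbase-cli bucket-list -c localhost:8091 -u Administrator -p password   # should now resolve from any directory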
I've recently started using Google Compute Engine for some of my projects, and the problem is that my startup script doesn't seem to work. The VM has the startup-script metadata set, and the script works fine when I run it manually with:
sudo google_metadata_script_runner --script-type startup
Here is what I am trying to run on startup:
#!/bin/bash
sudo apt-get update
sudo rm -f Eve.jar
sudo rm -f GameServerStatus.jar
wget <URL>/Eve.jar
wget <URL>/GameServerStatus.jar
sudo chmod 7777 Eve.jar
sudo chmod 7777 GameServerStatus.jar
screen -dmS Eve sh Eve.sh
screen -dmS PWISS sh GameServerStatus.sh
There are no errors in the log either; it just seems to stop at the chmod or screen commands. Any ideas?
Thanks!
To add to kangbu's answer:
Checking the logs in container-optimized OS by
sudo journalctl -u google-startup-scripts.service
showed that the script could not find the user. After a long time of debugging, I finally added a delay before the sudo, and now it works. It seems the user is not yet registered when the script runs.
#! /bin/bash
sleep 10 # wait...
cut -d: -f1 /etc/passwd > /home/user/users.txt # make sure the user exists
cd /home/user/project # cd does not work after sudo, do it before
sudo -u user bash -c '\
source /home/user/.bashrc && \
<your-task> && \
date > /home/user/startup.log'
I have the same problem @Brina mentioned. I set up the metadata key startup-script with a value like:
touch a
ls -al > test.txt
When I ran the script above with sudo google_metadata_script_runner --script-type startup, it worked perfectly. However, if I reset my VM instance, the startup script didn't work. So I checked the startup script logs:
...
Jul 3 04:30:37 kbot-6 ntpd[1514]: Listen normally on 5 eth0 fe80::4001:aff:fe8c:7 UDP 123
Jul 3 04:30:37 kbot-6 ntpd[1514]: peers refreshed
Jul 3 04:30:37 kbot-6 ntpd[1514]: Listening on routing socket on fd #22 for interface updates
Jul 3 04:30:38 kbot-6 startup-script: INFO Starting startup scripts.
Jul 3 04:30:38 kbot-6 startup-script: INFO Found startup-script in metadata.
Jul 3 04:30:38 kbot-6 startup-script: INFO startup-script: Return code 0.
Jul 3 04:30:38 kbot-6 startup-script: INFO Finished running startup scripts.
Yes, they found the startup-script and ran it. I guessed it had been executed as another user. I changed my script like this:
pwd > /tmp/pwd.txt
whoami > /tmp/whoami.txt
The result is:
myuserid#kbot-6:/tmp$ cat pwd.txt whoami.txt
/
root
Yes, it was executed in the / directory as the root user. Finally, I changed my script to sudo -u myuserid bash -c ..., which runs it as the specified user.
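For reference, a minimal sketch of that change (myuserid, the project path, and start.sh are placeholders for my setup):
#! /bin/bash
# run the real work as myuserid instead of as root in /
sudo -u myuserid bash -c 'cd /home/myuserid/project && ./start.sh >> /home/myuserid/startup.log 2>&1'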
Go to the VM instances page.
Click on the instance for which you want to add a startup script.
Click the Edit button at the top of the page.
Under Custom metadata, click Add item.
Add your startup script using one of the following keys:
startup-script: Supply the startup script contents directly with this key.
startup-script-url: Supply a Google Cloud Storage URL to the startup script file with this key.
It is working. The documentation for both new and existing instances is shown in GCE Start Up Script.
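If you prefer the command line over the console, the same metadata can be set with gcloud (a sketch; INSTANCE_NAME, BUCKET, and startup.sh are placeholders):
# supply the script contents directly from a local file
gcloud compute instances add-metadata INSTANCE_NAME --metadata-from-file startup-script=startup.sh
# or point the instance at a script stored in Cloud Storage
gcloud compute instances add-metadata INSTANCE_NAME --metadata startup-script-url=gs://BUCKET/startup.sh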
Startup script output is written to the following log files:
CentOS and RHEL: /var/log/messages
Debian: /var/log/daemon.log
Ubuntu 14.04, 16.04, and 16.10: /var/log/syslog
On Ubuntu 12.04, SLES 11 and 12, and all images older than v20160606, rerun startup scripts with:
sudo /usr/share/google/run-startup-scripts
I think that you do not need sudo; also, chmod 7777 should be chmod 777.
A cd (or at least a pwd) at the beginning might be useful as well.
Finally, log to a text file in order to know where the script may fail.
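A minimal way to do that logging, as a sketch (the log file name is just an example):
#!/bin/bash
exec >> /var/log/my-startup.log 2>&1   # capture stdout and stderr of everything below
set -x                                 # print each command as it runs
cd /root                               # be explicit about the working directory
# ...rest of the original script...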
I get an error while executing the Ansible ping module:
bash ~ ansible webservers -i inventory -m ping -k -u root -vvvv
SSH password:
<~> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO ~
<my-lnx> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO my-lnx
~ | FAILED => FAILED: [Errno 8] nodename nor servname provided, or not known
<my-lnx> REMOTE_MODULE ping
<my-lnx> EXEC /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1423302966.66-77716810353582 && echo $HOME/.ansible/tmp/ansible-tmp-1423302966.66-77716810353582'
<my-lnx> PUT /var/folders/8n/fftvnbbs51q834y16vfvb1q00000gn/T/tmpP6zwZj TO /root/.ansible/tmp/ansible-tmp-1423302966.66-77716810353582/ping
<my-lnx> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1423302966.66-77716810353582/ping; rm -rf /root/.ansible/tmp/ansible-tmp-1423302966.66-77716810353582/ >/dev/null 2>&1'
my-lnx | FAILED >> {
"failed": true,
"msg": "Error: ansible requires a json module, none found!",
"parsed": false
}
This is my inventory file:
bash ~ cat inventory
[webservers]
my-lnx ansible_ssh_host=my-lnx ansible_ssh_port=22
I have also installed the simplejson module on the client as well as on the remote machine:
bash ~ pip list | grep json
simple-json (1.1)
simplejson (3.6.5)
I think you need to install the python-simplejson module.
Try to run this command first and then your desired commands:
ansible webservers -i inventory -m raw -a "sudo yum install -y python-simplejson" -k -u root -vvvv
I am assuming that it's an old Red Hat/CentOS system.
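Before installing anything, you can also confirm what is missing on the remote host with the raw module (a sketch using the same inventory):
# fails on very old RHEL/CentOS pythons, which is exactly what Ansible is complaining about
ansible webservers -i inventory -m raw -a "python -c 'import json'" -k -u root
# if that fails, check for the simplejson fallback that Ansible looks for
ansible webservers -i inventory -m raw -a "python -c 'import simplejson'" -k -u root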
If you don't want to or can't install the python-simplejson module on the remote servers, you can simply request the raw output instead:
> ansible webservers -i inventory -m ping -m raw
Or like I did, added it to my ~/.bash_profile
alias ansible="ansible -m raw"
# And then simply running:
> ansible webservers -i inventory -m ping
In CentOS 5.*, there is no python-simplejson package available in the repo to download and install. You can simply use the method below.
Make sure both the source and destination can be accessed without a password, and that the source can also reach the destination without a password.
Use ssh-keygen -t rsa to generate a key, then copy it with:
ssh-copy-id user@host_ip
"---
- hosts: (ansible host)
become: yes
remote_user: root
gather_facts: false
tasks:
- name: copying copying temps
shell: ssh (source) && rsync -parv /root/temp/* root#(Destination):/root/temp/"
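Assuming the playbook above is saved as copy_temp.yml (a hypothetical name), you would run it against the same inventory like this:
ansible-playbook -i inventory copy_temp.yml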