I have OpenShift Enterprise 2.0 running in a multi-node setup. I am running a simple JBoss scaled app (3 gears, so HAProxy and 2 JBoss gears). I have used a pre_start_jbossews script in .openshift/action_hooks to configure verbose GC logging (with just gc.log as the file name). However, I can't figure out how to get the gc.log files from the gears running JBoss.
[Interestingly enough, there is an empty gc.log file in the head/parent gear (running HAProxy). It looks like a Java process gets started there too, which might be a bug.]
I tried to run
rhc scp <appname> download . jbossews/gc.log --gears
hoping that it would be implemented like the ssh --gears option, but it just tells me 'invalid option'. So my question is, how can I actually download logs from child gears?
I don't think you can use RHC directly to get what you want.
That may require a Request for Enhancement to be made against the RHC SCP command.
File that here: https://github.com/openshift/rhc/issues
However, you can use the following to find all of your gears:
rhc app show APP_NAME --gears | awk '{print $5}' | tail -n +3
From this list you can list all the logs for each gear that are part of that application.
for url in $(rhc app show APP_NAME --gears | awk '{print $5}' | tail -n +3); do for dir in $(ssh "$url" "ls -R | grep -i 'log.*:'"); do echo "$url:${dir%?}"; done; done
With that, you can use simple scp commands to get the files you need from all of the gears:
for file_dir in $(for url in $(rhc app show APP_NAME --gears | awk '{print $5}' | tail -n +3); do for dir in $(ssh "$url" "ls -R | grep -i 'log.*:'"); do echo "$url:${dir%?}"; done; done); do scp "$file_dir/*" .; done
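For readability, the same logic can be written as a short script (a sketch based on the one-liners above; APP_NAME is a placeholder for your application name):
#!/bin/bash
# fetch_gear_logs.sh - download every "log" directory from each gear of an app (sketch)
APP_NAME=myapp

# SSH URLs are in the 5th column of "rhc app show --gears"; skip the header lines.
for url in $(rhc app show "$APP_NAME" --gears | awk '{print $5}' | tail -n +3); do
    # "ls -R" prints each directory as "path:", so grep for names containing "log".
    for dir in $(ssh "$url" "ls -R | grep -i 'log.*:'"); do
        scp "$url:${dir%?}/*" .   # ${dir%?} strips the trailing ":"
    done
done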
Alternatively, if you only need to download a few files, you can use an SFTP client such as FileZilla to copy them from the server.
I know it's been a while since the original question was posted, but I just bumped into the same issue today and found that you can use the scp command directly if you know the gear SSH URL:
scp local_file user@gear_ssh:remote_file
to upload a file to the gear, or
scp user@gear_ssh:remote_file local_file
to download from the gear.
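For the gc.log files in the original question, the download might look like this (the gear UUID and host are placeholders; use the SSH URLs printed by rhc app show --gears):
scp 5xxxxxxxxxxxxxxxxxxxxxxx@myapp-mydomain.rhcloud.com:jbossews/gc.log .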
I've got a server with an SMB address, smb://files.cluster.ins.localnet/
Is it possible to send files there quickly via the command line, in a way similar to scp or rsync?
For example,
scp_to_samba folder_to_copy smb://files.cluster.ins.localnet/copied_content_folder/
I haven't found a way to get either rsync or scp to play nicely with Samba servers. Try using smbclient -c as described in this answer:
smbclient //files.cluster.ins.localnet -c 'prompt OFF; recurse ON; lcd folder_to_copy; mkdir copied_content_folder; cd copied_content_folder; mput *'
If you're planning on communicating with the same server frequently and want something command-like, you could wrap it in a small bash script:
#!/bin/bash
# scp_to_samba.sh - recursively upload a local folder ($1) into a new
# folder ($2) on the Samba share
smbclient //files.cluster.ins.localnet -W domain -U username \
    -c "prompt OFF; recurse ON; lcd $1; mkdir $2; cd $2; mput *"
where domain and username are whatever credentials you need to log on to your server. Usage would then be:
./scp_to_samba.sh folder_to_copy copied_content_folder
To copy back from the server, you'd need to swap a few things around in that command/script and use mget instead of mput, as sketched below.
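For example, a sketch of the reverse script (same placeholder server and credentials; note that the local destination directory must already exist):
#!/bin/bash
# scp_from_samba.sh - recursively download a remote folder ($1) into a local folder ($2)
smbclient //files.cluster.ins.localnet -W domain -U username \
    -c "prompt OFF; recurse ON; cd $1; lcd $2; mget *"
Usage would then be:
./scp_from_samba.sh copied_content_folder local_destination_folder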
Is this 'fast'? I don't know. But it is pretty straightforward and has worked pretty well for me so far.
See the smbclient man page for more details.
I'm trying to install Tungsten Replicator 3.0.0-524 GA from MySQL to MongoDB, but when I run cookbook/validate_cluster, this error keeps showing up:
There is already another Tungsten installation script running
(InstallationScriptCheck)
The configuration I'm using for the cluster is:
./tools/tpm configure mysql2mongodb \
--enable-heterogenous-master=true \
--topology=master-slave \
--master=mysql \
--replication-user=boahub_boahub \
--replication-password=***** \
--slaves=tracking-mongo \
--home-directory=/opt/mysql \
--svc-extractor-filters=replicate \
--property=replicator.filter.replicate.do=boahub_boahub.urls,boahub_boahub.media_campaigns \
--start-and-report
./tools/tpm configure mysql2mongodb \
--hosts=tracking-mongo \
--datasource-type=mongodb \
--replication-port=27017
./tools/tpm -v install --install-directory=/opt/tungsten
I've configured both "mysql" and "tracking-mongo" hosts in the /etc/hosts file.
So far I've tried to:
1. Reboot the system
2. Clear my /opt/tungsten installation directory
3. Delete the deploy.cfg
The verbose output of tools/tpm -v install shows that SSH between the two machines succeeded, and the command it uses to check for other Tungsten scripts is:
ps ax 2>/dev/null | grep configure.rb | grep -v firewall | grep -v grep | awk '{print $1}'
When I execute this command it comes up with nothing.
What can I do? Is there any way to ignore this check?
Thanks!
You can skip any check using the --skip-validation-check option (it requires an argument). You can use this option multiple times without a problem.
The option takes as its argument the name of the check, which can be found in the error message.
In your case you can add the following option to your command:
--skip-validation-check InstallationScriptCheck
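Applied to the install command from the question, that becomes:
./tools/tpm -v install --install-directory=/opt/tungsten --skip-validation-check InstallationScriptCheck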
I have a feeling this may help you get through.
Have you tried installing your master and slave separately? Do a
./tools/tpm install
after configuring and installing the master, then clear the configuration with
./tools/tpm configure defaults --reset
Then apply your slave settings and do the other tpm install.
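Putting that together with the commands from the question, the sequence would look roughly like this (a sketch; "..." stands for the same master options listed above):
./tools/tpm configure mysql2mongodb ...   # master options from the question
./tools/tpm install --install-directory=/opt/tungsten
./tools/tpm configure defaults --reset
./tools/tpm configure mysql2mongodb --hosts=tracking-mongo --datasource-type=mongodb --replication-port=27017
./tools/tpm install --install-directory=/opt/tungsten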
A few weeks ago I ran into some similar trouble (maybe; I can't recall it clearly). The phrase "another script" in your post brought back some memory of that. Hope it works.
Good Luck!
I am trying to create a custom CD/DVD to deploy RHEL 7 with kickstart file. Here is what I did:
Edited isolinux.cfg (in the isolinux folder) and the grub.cfg file (in the EFI/BOOT folder).
Created ISO using mkisofs.
But it is not working. Am I using correct files/method?
Edit the ISO image and add the ks.cfg file that you have created.
Preferably, put the ks.cfg file inside a ks directory.
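To actually point the installer at the kickstart, the kernel line needs an inst.ks= argument. A hedged sketch of the relevant stanza in isolinux.cfg (the volume label must match the -V value used when mastering the ISO, as discussed in the answers below):
label kickstart
  menu label ^Install RHEL 7 with kickstart
  kernel vmlinuz
  append initrd=initrd.img inst.stage2=hd:LABEL=RHEL-7.1\x20Server.x86_64 inst.ks=cdrom:/ks/ks.cfg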
You need to rebuild the ISO with genisoimage. Here is an example of what will work:
Add the kickstart file to your downloaded and exploded ISO.
Run this command against the directory holding the exploded ISO and kickstart, pointing the output (-o) at a new ISO file:
genisoimage -r -v -V "OEL6 with KS for OVM Manager" -cache-inodes -J -l -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o OEL6U6_OVM_Manager.iso /var/www/html/Template/ISO/
I found the way to create a custom DVD on the RHEL 7 documentation page:
Mount the downloaded image
mount -t iso9660 -o loop path/to/image.iso /mnt/iso
Create a working directory - a directory where you want to place the contents of the ISO image.
mkdir /tmp/ISO
Copy all contents of the mounted image to your new working directory. Make sure to use the -p option to preserve file and directory permissions and ownership.
cp -pRf /mnt/iso /tmp/ISO
Unmount the image.
umount /mnt/iso
Make sure your current working directory is the top-level directory of the extracted ISO image - e.g. /tmp/ISO/iso. Create the new ISO image using genisoimage:
genisoimage -U -r -v -T -J -joliet-long -V "RHEL-7.1 Server.x86_64" -Volset "RHEL-7.1 Server.x86_64" -A "RHEL-7.1 Server.x86_64" -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot -o ../NEWISO.iso .
Hope the answer is helpful.
I am editing my answer due to the comments posted. Here is a more comprehensive solution:
(A) You need to create the ISO properly. I found helpful information in this URL.
Here is the line that I actually ended up with, for my MBR/UEFI ISO creation:
mkisofs -U -A "<Volume Header>" -V "RHEL-7.1 x86_64" -volset "RHEL-7.1 x86_64" -J -joliet-long -r -v -T -x ./lost+found -o ${OUTPUT}/${HOST}.iso -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot -boot-load-size 18755 /dir/where/sources/for/ISO/are/located
Be careful with the -V parameter, as it has to match what the kernel command line defines for inst.stage2. In the default grub.conf included on the boot disk, that is configured as "hd:LABEL=RHEL-7.1\x20x86_64", which matches the settings above.
(B) You need the correct EFI setup for RHEL 7. For some reason this has changed from RHEL 6, where you could just use /EFI/BOOT/BOOTX64.conf; RHEL 7 uses /EFI/BOOT/grub.cfg instead. Common wisdom from the Red Hat manuals is to add the inst.ks= parameter to the kernel line. The grub.cfg that comes in the /EFI/BOOT directory of the RHEL 7 boot ISO actually has a linuxefi parameter instead of a kernel one; I would guess they work the same. If you are including the KS file on the CD, this should get you there.
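For illustration, a menu entry in /EFI/BOOT/grub.cfg might look roughly like this (paths and label follow the stock RHEL 7.1 boot ISO; treat it as a sketch rather than a verbatim copy):
menuentry 'Install Red Hat Enterprise Linux 7.1' {
    linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-7.1\x20x86_64 inst.ks=cdrom:/ks/ks.cfg quiet
    initrdefi /images/pxeboot/initrd.img
}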
Good Luck!
How do you tail openshift log files? I issued the following command:
rhc tail myapp
It seems to show the first error line and then stops, but doesn't exit. If I press Ctrl+C it asks whether to terminate the batch job or not. How can I display the last few errors and maybe browse page by page? Are there page down/page up shortcuts?
The 'rhc tail' command reads the last few lines of each of your log files and continues to feed subsequent log messages to your console. To view the entire log file, please review:
https://www.openshift.com/faq/how-to-troubleshoot-application-issues-using-logs
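rhc tail itself has no paging shortcuts. To browse a log page by page, one option is to SSH into the gear and open the file with less (the log path here is only an example; it matches the Node.js example further down):
rhc ssh myapp
less app-root/logs/nodejs.log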
You can see the logs by running:
rhc tail -a yourappname -l youremail -p yourpassword
Adding the -a option fixed this issue for me.
rhc tail -a {app_name}
OpenShift places logs in different files, so if you want the logs from a specific file you can add -f followed by the file's path and name.
Example :
rhc tail -f app-root/logs/nodejs.log -a myAppName
You can also ask for a specific number of lines by adding -o "-n 40" to the command; the command below will get the last 40 lines.
Example :
rhc tail -f app-root/logs/nodejs.log -o "-n 40" -a myAppName
You can also download them:
$ scp SHA@APP-DOMAIN.rhcloud.com:/var/lib/openshift/SHA/app-root/\
logs/APP.log ~/upstream.jbossas.log
This also works on Windows, directly in Git Bash.
I am using a GeForce 8400M GS on Ubuntu 10.04 and I am learning CUDA programming. I am writing and running a few basic programs. I was using cudaMalloc, and it kept giving me an error until I ran the code as root. However, I had to run the code as root only once; after that, even if I run the code as a normal user, I do not get an error on cudaMalloc. What's going on?
This is probably due to your GPU not being properly initialized at boot. I've come across this problem when using Ubuntu Server and other installations where an X server isn't being started automatically. Try the following to fix it:
Create a directory for a script to initialize your GPUs. I usually use /root/bin. In this directory, create a file called cudainit.sh with the following code in it (this script came from the Nvidia forums).
#!/bin/bash

/sbin/modprobe nvidia

if [ "$?" -eq 0 ]; then
    # Count the number of NVIDIA controllers found.
    N3D=`/usr/bin/lspci | grep -i NVIDIA | grep "3D controller" | wc -l`
    NVGA=`/usr/bin/lspci | grep -i NVIDIA | grep "VGA compatible controller" | wc -l`
    N=`expr $N3D + $NVGA - 1`

    # Create a device node for each GPU, plus the control device.
    for i in `seq 0 $N`; do
        mknod -m 666 /dev/nvidia$i c 195 $i
    done
    mknod -m 666 /dev/nvidiactl c 195 255
else
    exit 1
fi
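Make the script executable, or rc.local will not be able to run it:
chmod 700 /root/bin/cudainit.sh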
Now we need to make this script run automatically at boot. Edit /etc/rc.local to look like the following.
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
#
# Init CUDA for all users
#
/root/bin/cudainit.sh
exit 0
Reboot your computer and try to run your CUDA program as a regular user. If I'm right about what the problem is, then it should be fixed.
To make this work on Ubuntu 14.04, I followed https://devtalk.nvidia.com/default/topic/699610/linux/334-21-driver-returns-999-on-cuinit-cuda-/ to add nvidia-uvm to /etc/modules, and to add a custom udev rule. Create /etc/udev/rules.d/70-nvidia-uvm.rules with this line:
KERNEL=="nvidia_uvm", RUN+="/bin/bash -c '/bin/mknod -m 666 /dev/nvidia-uvm c $(grep nvidia-uvm /proc/devices | cut -d \ -f 1) 0;'"
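After saving the rule, you can make udev pick it up without a reboot (a standard udevadm invocation); the rule then fires the next time the nvidia-uvm module is loaded:
sudo udevadm control --reload-rules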
I don't understand why sudo modprobe nvidia-uvm creates a proper /dev/nvidia-uvm (as does running the CUDA program with sudo), yet the /etc/modules listing still requires the udev rule.