Hortonworks Data Platform (Hadoop) single-node cluster installation on Ubuntu 12.04 64-bit - hadoop2

I am following the manual installation guide (http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.0/bk_installing_manually_book/content/rpm-chap1.html) provided on the Hortonworks website. I am facing an issue while configuring the remote repositories (http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.0/bk_installing_manually_book/content/rpm-chap1-3.html). When I run the command "sudo wget http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.x/hdp.list -O /etc/apt/sources.list.d/hdp.list" in the Ubuntu 12.04 terminal, it fails with a "404 Not Found" error.
Below is the error:
--2015-04-13 12:59:10--  http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.x/hdp.list
Resolving public-repo-1.hortonworks.com (public-repo-1.hortonworks.com)... 54.192.174.35, 54.230.174.43, 54.230.174.121, ...
Connecting to public-repo-1.hortonworks.com (public-repo-1.hortonworks.com)|54.192.174.35|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2015-04-13 12:59:11 ERROR 404: Not Found.
Please help me solve this issue.

Can you disable your firewall and check if you still get this problem?
Some corporate firewalls block Hortonworks (as a source of unauthorized software).
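If the 404 persists with the firewall out of the picture, it is worth checking whether a proxy is answering instead of the Hortonworks CDN, and whether the documented path has simply moved. A minimal sketch of that check from the same terminal:
# show any proxy settings wget would pick up
$ env | grep -i proxy
# fetch only the headers to see who actually answers (origin CDN vs. corporate proxy)
$ curl -I http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.x/hdp.list
If the response really comes from the Hortonworks CDN, the path in that version of the documentation is likely stale, and the repository URL should be taken from the current Hortonworks docs rather than guessed.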

Related

Error while running "podman run"; error adding pod to CNI network "podman": unexpected end of JSON input

I'm new to podman, and I am just trying to run containers with it.
(podman version 3.4.0, installed by brew, intel Core MAC)
However, when I try to run "podman run {image-name}", the errors below are thrown.
$ podman run -ti -d --name web httpd 125
Error: error configuring network namespace for container b0e70d672cb66005833c0a300c8661b88eab49e942c240d69d17587e0b75c47b: error adding pod web_2_web_2 to CNI network "podman": unexpected end of JSON input
$ podman run centos:7
Error: error preparing container a6d0bc1ad217cd8207935561dc8ff7bd33672da3fa513917f9965cb39520c449 for attach: error configuring network namespace for container a6d0bc1ad217cd8207935561dc8ff7bd33672da3fa513917f9965cb39520c449: error adding pod quirky_snyder_quirky_snyder to CNI network "podman": unexpected end of JSON input
By reading https://issueexplorer.com/issue/containers/podman/11452, I removed ~/.docker/, but the solution doesn't work in my case.
Of course, the error message says there was "unexpected end of JSON input", but I don't know how to fix it. Could anyone guess why podman doesn't work even when running these base images, or how to debug it?
Thanks in advance.
On macOS, the current machine version 3.3.1 has this problem. I had this problem with server version 3.3.1 and I do not encounter it with server version 3.4.0. You can check the server version with podman version.
Try removing the current machine and installing a newer one:
podman machine stop
podman machine rm
podman machine init --image-path next
podman machine start
Check server version again with podman version.
Try running your image again.
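If the error still shows up after recreating the machine, a rough way to gather more detail (purely a debugging sketch, not a fix) is to raise podman's log level and re-run one of the failing commands:
# confirm the client and server (VM) versions now match
$ podman version
# verbose logging shows which network configuration podman is trying to parse
$ podman --log-level=debug run centos:7
The debug output should point at the network (CNI) configuration being loaded; an empty or truncated JSON file there would explain the "unexpected end of JSON input" message.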

Moodle: Installation failed - Coding Error Detected

I have tried to install Moodle 3.5.1 on my development server (Apache 2.4.29 and MySQL v.5.7).
The installation process went smoothly: the MySQL database was set up, all required PHP packages are installed, the system installed successfully, and all file permissions are correct.
After the installation I only get the following error:
"Coding error detected, it must be fixed by a programmer: PHP catchable fatal error"
There are no further error messages, and nothing appears in the Apache error log or the PHP log files. In the PHP ini file, the display of error messages is enabled.
So I cannot figure out what did not work or how to fix it.
Put error_reporting(E_ALL); and ini_set('display_errors', 1); at the beginning of the script you see the problem in for more debug info.
Moodle 3.5 requires PHP 7
https://docs.moodle.org/35/en/PHP
Could that be your problem?
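A quick, hedged way to check the PHP version point from the command line (the CLI interpreter and the module Apache loads can differ, so check both; the module name below is only an example of what to look for):
# PHP version used on the command line
$ php -v
# PHP module actually loaded by Apache
$ apachectl -M | grep -i php
If either reports PHP 5.x, the version requirement mentioned above is the most likely culprit.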

OpenShift: Running Handler [openshift_master : verify API Server] FAILED - RETRYING Verify API Server

I am trying to install OpenShift 3.9.1 on CentOS 7.5. The whole process runs, but the failure below keeps occurring. If you have any suggestions about it, please share them.
( Running Handler [ openshift_master : verify API Server ])
FAILED - RETRYING Verify API Server (120 retries left).
The problem occurs during the install configuration step.
If there is a proper step-by-step guide, please point me to it. Thanks.
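One way to narrow this down while the handler is retrying is to query the API server health endpoint directly on the master node. This is only a sketch and assumes the default API port 8443; adjust it if your inventory uses a different port:
# ask the API server for its health status
$ curl -k https://localhost:8443/healthz
# check whether anything is listening on the API port at all
$ sudo ss -tlnp | grep 8443
If nothing answers on the port, the API server process itself is failing to start, and its logs (rather than the Ansible retries) will show why.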

libvirt error when trying to 'hot' attach-disk on guest with "Channel qemu-ga"

I have a KVM virtual machine running CentOS 7 as the guest OS. I'm trying to attach an additional disk to it on the fly (without shutting it down) using this command:
$ sudo virsh attach-disk centos --source /var/lib/libvirt/images/newdisk.img --target sdb --persistent
But receive an error:
error: Failed to attach disk
error: internal error: cannot update AppArmor profile 'libvirt-d2e7bbb8-c7b3-44ec-b0ea-27539e0df732'
If I do the same with a Debian guest, everything is OK.
What is the difference, and how do I solve this?
UPDATE:
I compared the two VMs' XML and saw that the CentOS guest has a QEMU guest agent channel in its configuration:
<channel type="unix">
<source mode="bind" path="/var/lib/libvirt/qemu/channel/target/centos_auto.org.qemu.guest_agent.0"></source>
<target name="org.qemu.guest_agent.0" type="virtio"></target>
<address bus="0" controller="0" port="1" type="virtio-serial"></address>
</channel>
Then I removed the qemu-ga channel, restarted the VM, and checked the hot add feature. It worked.
I tested this on other VMs (CentOS, Fedora, Debian) and saw the same behaviour.
As a result:
If I enable the qemu agent, I cannot use hot plug.
If I use hot plug, I must forget about the agent.
Is this a mistake in my configuration, or can these features not work together?
Host-OS: Ubuntu 15.10
QEMU emulator: now 2.4.92 (tested 2.3 and 2.4.1)
VMM: 1.3.0
This is a clear bug in the AppArmor security driver for libvirt. The existence of the QEMU guest agent config in the XML should have no impact on the ability to hotplug disks to a guest. This bug should be reported to the libvirt upstream or the Ubuntu bug tracker.
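Until the bug is fixed, a commonly used diagnostic step (my suggestion, not part of the bug report) is to confirm AppArmor really is the security driver in use and, purely as a test, take it out of the equation; note that this lowers guest isolation:
# confirm which security model libvirt is applying
$ sudo virsh capabilities | grep -A2 secmodel
# as a test only: set security_driver = "none" in /etc/libvirt/qemu.conf, then restart
# libvirt so it rereads the config (the service is named libvirt-bin on Ubuntu 15.10)
$ sudo systemctl restart libvirt-bin
If hotplug works with the security driver disabled, that confirms the AppArmor profile update is the failing piece, which matches the bug described above.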

Apache Tomcat 6 and Cpanel Errors

Using cPanel EasyApache I installed Apache Tomcat 5.5.x and attempted to upgrade it to Tomcat 6.x.x:
Downloaded and expanded Tomcat 6 in /usr/local/jakarta
Changed group owner to Nobody/Tomcat with 0755
Changed tomcat symlink to /usr/local/jakarta/apache-tomcat-6.x.x
Extracted and built the native daemon jsvc
Uncommented and changed /usr/local/jakarta/tomcat/conf/tomcat-users.xml
<role rolename="manager-gui"/>
<user username="tomcat" password="secret" roles="manager-gui"/>
</tomcat-users>
Copied from 5.5.x to 6.x.x
/usr/local/jakarta/apache-tomcat-6.0.33/conf/workers.properties
/usr/local/jakarta/apache-tomcat-6.0.33/conf/httpd-jk.conf
Installed Servlet from cPanel to my domain:
Main -> Account Functions -> Install Servlets
Restarted Tomcat using
/scripts/restartsrv_tomcat
I can browse to the URL, i.e. www.tomcat.com:8080, and I see the correct version number.
So in my public_html folder I created a test.jsp. When I visit it I get this error:
500 Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator and inform them of the time the error occurred, and anything you might have done that may have caused the error.
More information about this error may be available in the server error log.
Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.
Looking in the Apache logs I find no errors, but there is an error in the mod_jk log:
[Tue Jan 25 18:51:40 2012] [21925:47707893800752] [info] jk_handler::mod_jk.c (2686): Could not find a worker for worker name=ajp13
So checking workers.properties I see it contains:
worker.list=wlb,jkstatus
worker.list=ajp13
worker.ajp13.type=ajp13
worker.ajp13.host=localhost
worker.ajp13.port=8009
worker.wlb.type=lb
worker.wlb.balance_workers=ajp13w
I'm stumped as to what else is missing that is causing the error I am seeing in the browser; any and all hints and help are greatly appreciated.
Try getting rid of your duplicate worker.list lines; also, you don't have a worker called ajp13w, which you are referencing as a balance_worker. Something like this should work:
worker.list=wlb,jkstatus,ajp13
worker.ajp13.type=ajp13
worker.ajp13.host=localhost
worker.ajp13.port=8009
worker.wlb.type=lb
worker.wlb.balance_workers=ajp13
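After fixing workers.properties it is also worth confirming that Tomcat actually exposes an AJP connector on port 8009, since that is where the ajp13 worker points. A quick check, assuming the same paths used above:
# make sure the AJP connector is enabled in Tomcat's server.xml
$ grep -n 'protocol="AJP/1.3"' /usr/local/jakarta/tomcat/conf/server.xml
# after restarting Tomcat, verify something is listening on 8009
$ netstat -tlnp | grep 8009
If nothing is listening there, mod_jk will keep failing to find a usable worker even with a correct workers.properties.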