clogin errors in RANCID configuration

I have configured an HP switch in RANCID. I have no problem using clogin and hlogin to log into the switch manually, but when I run RANCID and check my logs I get the following output. I also need to check my config files; where do I find those? Kindly help me with this.
This is the output in my log file:
Trying to get all of the configs.
unknown router manufacturer for 10.2.0.13: hp1
unknown router manufacturer for 10.2.0.200: ibm
10.2.0.13 clogin error: Error: TIMEOUT reached
10.2.0.13: missed cmd(s): show tech transceivers,show module,show config status,show system-information,show system information,show stack,show version,show flash,write term,show config files
10.2.0.13: End of run not found
;
=====================================
Getting missed routers: round 1.
unknown router manufacturer for 10.2.0.200: ibm
unknown router manufacturer for 10.2.0.13: hp1
10.2.0.13 clogin error: Error: TIMEOUT reached
10.2.0.13: missed cmd(s): show tech transceivers,show module,show config status,show system-information,show system information,show stack,show version,show flash,write term,show config files
10.2.0.13: End of run not found
;
=====================================
[rounds 2 through 5 repeat the same output as round 1]
cvs diff: Diffing .
cvs diff: Diffing configs
cvs commit: Examining .
cvs commit: Examining configs
/usr/local/rancid/bin/control_rancid: 503: /usr/local/rancid/bin/control_rancid: sendmail: not found
ending: Mon Feb 3 14:32:35 EST 2014
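The "unknown router manufacturer" lines usually mean the device-type field in router.db is not one this RANCID install recognizes: "hp1" and "ibm" are not stock types, while "hp" (HP ProCurve, collected via hlogin) is. A hedged sketch of a corrected entry, assuming the classic colon-separated router.db format (newer RANCID releases use semicolons) and the default install layout:

# /usr/local/rancid/var/<group>/router.db
# format: address:type:state
10.2.0.13:hp:up

For 10.2.0.200 you would need a device type your RANCID version actually supports (see the router.db man page) or a custom collection script. Once a run succeeds, the fetched configs land in the group's configs directory, typically /usr/local/rancid/var/<group>/configs/. The final "sendmail: not found" message is a separate problem: control_rancid mails its diff report, so a sendmail-compatible MTA (sendmail, Postfix, Exim) must be installed and on the PATH.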

Related

Cannot start Minishift on macOS Catalina

I tried a couple of times to install, uninstall, and reinstall OpenShift on two Macs running macOS Catalina 10.15.7, but it never starts.
I read "Minishift cannot start in macOS" and set the variable as described, but I still get the error below. Did anybody manage to install it on Catalina and resolve these errors?
Kind regards
Markus
-- Starting Minishift VM ........ FAIL E0423 15:57:02.814314 21785 start.go:499] Error starting the VM: Error creating the VM. Error creating machine: Error in driver during machine creation: Error setting up host only network on machine start: /usr/local/bin/VBoxManage hostonlyif ipconfig vboxnet1 --ip 192.168.99.1 --netmask 255.255.255.0 failed:
VBoxManage: error: Code E_ACCESSDENIED (0x80070005) - Access denied (extended info not available)
VBoxManage: error: Context: "EnableStaticIPConfig(Bstr(pszIp).raw(), Bstr(pszNetmask).raw())" at line 242 of file VBoxManageHostonly.cpp
. Retrying.
Error starting the VM: Error creating the VM. Error creating machine: Error in driver during machine creation: Error setting up host only network on machine start: /usr/local/bin/VBoxManage hostonlyif ipconfig vboxnet1 --ip 192.168.99.1 --netmask 255.255.255.0 failed:
VBoxManage: error: Code E_ACCESSDENIED (0x80070005) - Access denied (extended info not available)
VBoxManage: error: Context: "EnableStaticIPConfig(Bstr(pszIp).raw(), Bstr(pszNetmask).raw())" at line 242 of file VBoxManageHostonly.cpp
This looks like a VirtualBox configuration error: access to the host-only network range is being denied. Try this:
Open or create the file /etc/vbox/networks.conf (or wherever your VirtualBox configuration lives).
Change its content to: * 0.0.0.0/0 ::/0
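Some background: since VirtualBox 6.1.28, host-only interfaces on macOS and Linux are restricted to the 192.168.56.0/21 range unless /etc/vbox/networks.conf allows more, which is why assigning 192.168.99.1 fails with E_ACCESSDENIED. A minimal shell sketch of the fix described above:

# Allow all IPv4 and IPv6 ranges for host-only networks (VirtualBox >= 6.1.28)
sudo mkdir -p /etc/vbox
echo "* 0.0.0.0/0 ::/0" | sudo tee /etc/vbox/networks.conf

Then re-run minishift start.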

Pods stuck in ContainerCreating

Previously my MySQL pod was stuck in Terminating status, so I tried to force-delete it with a command like this:
kubectl delete pods <pod> --grace-period=0 --force
Later I ran helm upgrade again; now the pod is stuck in ContainerCreating status, with these events from the pod:
17s Warning FailedMount pod/db-mysql-primary-0 MountVolume.SetUp failed for volume "pvc-f32a6f84-d897-4e35-9595-680302771c54" : kubernetes.io/csi: mounter.SetUpAt failed to check for STAGE_UNSTAGE_VOLUME capability: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/lib/kubelet/plugins/dobs.csi.digitalocean.com/csi.sock: connect: no such file or directory"
17s Warning FailedMount pod/db-mysql-secondary-0 MountVolume.SetUp failed for volume "pvc-61fc6eda-97fa-455f-ac2c-df8ebcb90f1c" : kubernetes.io/csi: mounter.SetUpAt failed to check for STAGE_UNSTAGE_VOLUME capability: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/lib/kubelet/plugins/dobs.csi.digitalocean.com/csi.sock: connect: no such file or directory"
Can anyone please help me resolve this issue? Thanks a lot.
When you run the command
kubectl delete pods <pod> --grace-period=0 --force
you ask Kubernetes to forget the Pod, not to delete it cleanly. You have to be careful with this command: make sure the Pod's containers are actually no longer running on the host, especially when they mount a PVC. Most likely a container is still running and still attached to the PVC.
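A hedged way to check whether the old attachment is still around (the volume name below is taken from the events above):

# List VolumeAttachment objects; the PVC's volume may still be attached to a node
kubectl get volumeattachment | grep pvc-f32a6f84-d897-4e35-9595-680302771c54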
pool-product-8jd40 0
spec:
  drivers: null
On some of my pools the CSI driver is not ready (null); it is supposed to be 1 (ready).
*Sorry, I can't attach an image yet.
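A CSINode whose spec.drivers is null means kubelet on that node has not registered the CSI plugin, which matches the missing csi.sock in the mount errors above. A hedged sketch for inspecting and nudging it (the DaemonSet label and namespace below are assumptions; check your cluster):

# Show how many CSI drivers each node has registered
kubectl get csinode

# Inspect the affected node; spec.drivers should list dobs.csi.digitalocean.com
kubectl get csinode pool-product-8jd40 -o yaml

# Restart the CSI node-plugin pod on that node so it re-registers with kubelet
# (label/namespace are assumptions)
kubectl -n kube-system delete pod -l app=csi-do-node \
  --field-selector spec.nodeName=pool-product-8jd40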

Sqoop export failed: Export job failed

Hi, I am trying to run the sqoop export command. There is an empty table called employee in MySQL, under the username newuser, in the database db. I created a CSV file matching the data types of the employee table, put that file in the HDFS directory /sqoop/export/emp.csv, and checked that it is present there. Then I ran the export command as:
sqoop export --connect jdbc:mysql://localhost:3306/db --username newuser --table employee --export-dir /sqoop/export/emp.csv --driver com.mysql.jdbc.Driver
The map phase reaches 100% and then the job fails with "Export job failed":
20/06/26 15:18:37 INFO mapreduce.Job: map 100% reduce 0%
20/06/26 15:18:38 INFO mapreduce.Job: Job job_1593163228066_0003 failed with state FAILED due to: Task failed task_1593163228066_0003_m_000003
Job failed as tasks failed. failedMaps:1 failedReduces:0
20/06/26 15:18:38 INFO mapreduce.Job: Counters: 12
    Job Counters
        Failed map tasks=1
        Killed map tasks=3
        Launched map tasks=4
        Data-local map tasks=4
        Total time spent by all maps in occupied slots (ms)=23002
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=23002
        Total vcore-milliseconds taken by all map tasks=23002
        Total megabyte-milliseconds taken by all map tasks=23554048
    Map-Reduce Framework
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
20/06/26 15:18:38 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
20/06/26 15:18:38 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 22.6316 seconds (0 bytes/sec)
20/06/26 15:18:38 INFO mapreduce.ExportJobBase: Exported 0 records.
20/06/26 15:18:38 ERROR mapreduce.ExportJobBase: Export job failed!
20/06/26 15:18:38 ERROR tool.ExportTool: Error during export:
Export job failed!
at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:445)
at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
So what might the problem be, and how can I fix it? Thanks.
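The console output only says a map task failed; the actual cause (often a row that does not parse into the table's column types, or a wrong field delimiter) is in the YARN container logs. A hedged sketch for digging it out, using the job ID from the output above:

# Fetch the container logs for the failed job; the Java exception near the end
# usually names the offending row or column
yarn logs -applicationId application_1593163228066_0003 | less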

MySQL Cluster data node cannot connect to the management node

Every config looks OK, but when I run
./ndbd --initial
on the data node, it outputs the logs below.
ndb_mgm:
Forced node shutdown completed. Occured during startphase 0. Caused by error 2350: 'Invalid configuration received from Management Server(Configuration error). Permanent error, external action needed'.
ndbd:
2018-05-10 13:54:43 [ndbd] INFO -- Angel pid: 14533 started child: 14534
2018-05-10 13:54:43 [ndbd] INFO -- Initial start of data node, ignoring any info on disk
2018-05-10 13:54:43 [ndbd] INFO -- Configuration fetched from '172.19.16.170:1186', generation: 1
2018-05-10 13:54:43 [ndbd] INFO -- Changing directory to '/var/lib/mysql-cluster'
2018-05-10 13:54:43 [ndbd] INFO -- Invalid configuration fetched
2018-05-10 13:54:43 [ndbd] INFO -- ConfigParam: 113 not found
2018-05-10 13:54:43 [ndbd] INFO -- Error handler shutting down system
2018-05-10 13:54:43 [ndbd] INFO -- Error handler shutdown completed - exiting
2018-05-10 13:54:43 [ndbd] ALERT -- Node 2: Forced node shutdown completed. Occured during startphase 0. Caused by error 2350: 'Invalid configuration received from Management Server(Configuration error). Permanent error, external action needed'.
The ndbd log tells me it failed to fetch a valid configuration from the management server, but I don't know where the error is.
This is my management node configuration:
[ndbd default]
NoOfReplicas= 1
[mysqld default]
[ndb_mgmd default]
[tcp default]
[ndb_mgmd]
HostName= 172.19.16.170
[ndbd]
NodeId=2
HostName= 172.19.16.166
DataDir= /var/lib/mysql-cluster
[ndbd]
NodeId=3
HostName= 172.19.16.167
DataDir= /var/lib/mysql-cluster
[mysqld]
[mysqld]
[mysqld]
This is an upgrade issue. The error only occurs in ndbd up to version 7.5, but only the ndb_mgmd from version 7.6 can set IndexMemory to 0. ConfigParam 113 is IndexMemory, which is deprecated in 7.6. So you are probably running an ndb_mgmd from version 7.6 against an ndbd from version 7.5.
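A quick, hedged way to confirm that mismatch from the shell (both binaries accept --version):

# On the management node (172.19.16.170)
ndb_mgmd --version

# On each data node (172.19.16.166 / 172.19.16.167)
ndbd --version

If they report different release series (7.5 vs 7.6), installing matching NDB packages on all nodes should clear the "ConfigParam: 113 not found" error.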

Facing an issue while executing Sqoop import command in Hadoop 2.6.0

I am using macOS for my Hadoop stack, with MySQL as its database. I am trying to execute a Sqoop import command:
sqoop import --connect jdbc:mysql://127.0.0.1/emp --table employee --username root --password reality --m 1 --target-dir /sqoop_import
But I am facing the issue below when executing it. In /etc/hosts, localhost is mapped to 127.0.0.1 (host file screenshot). Pinging localhost works, but the "Host is down" error still prevails. Please help.
2016-02-06 17:42:38,267 INFO [main] mapreduce.Job: Job job_1454152643692_0010 failed with state FAILED due to: Application application_1454152643692_0010 failed 2 times due to Error launching appattempt_1454152643692_0010_000002. Got exception: java.io.IOException: Failed on local exception: java.net.SocketException: Host is down; Host Details : local host is: "Mohits-MacBook-Pro.local/192.168.0.103"; destination host is: "192.168.0.105":38183;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy32.startContainers(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Host is down
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
at org.apache.hadoop.ipc.Client.call(Client.java:1438)
... 9 more
. Failing the application.
2016-02-06 17:42:38,304 INFO [main] mapreduce.Job: Counters: 0
2016-02-06 17:42:38,321 WARN [main] mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2016-02-06 17:42:38,326 INFO [main] mapreduce.ImportJobBase: Transferred 0 bytes in 125.7138 seconds (0 bytes/sec)
2016-02-06 17:42:38,349 WARN [main] mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-02-06 17:42:38,350 INFO [main] mapreduce.ImportJobBase: Retrieved 0 records.
2016-02-06 17:42:38,351 ERROR [main] tool.ImportTool: Error during import: Import job failed!
I see that you are having network issues. The log above says that the local host resolves to Mohits-MacBook-Pro.local/192.168.0.103 on your machine, and the destination it is trying to connect to is "192.168.0.105":38183. Go to your system, open the /etc/hosts file, and make sure localhost maps to 127.0.0.1.
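A hedged sketch of the relevant /etc/hosts entries (the hostname and the 192.168.0.x addresses are taken from the log above; the worker name is a hypothetical placeholder, so adjust it to your machines):

# /etc/hosts
127.0.0.1       localhost
192.168.0.103   Mohits-MacBook-Pro.local
# if 192.168.0.105 is another cluster node, give it a stable name too
192.168.0.105   hadoop-worker.local

After editing, verify with ping -c 1 Mohits-MacBook-Pro.local and restart the YARN daemons so they pick up the new resolution.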