Failed to connect: 0 - stream-socket-client

I got this error while sending a push notification:
Warning: stream_socket_client() [function.stream-socket-client]: SSL operation failed with code 1. OpenSSL Error messages: error:14094438:SSL routines:SSL3_READ_BYTES:tlsv1 alert internal error in /nfs/c04/h03/mnt/181638/domains/s.aumenta.do/html/merckerbitux/push/push_ios_admin.php on line 50
Warning: stream_socket_client() [function.stream-socket-client]: Failed to enable crypto in /nfs/c04/h03/mnt/181638/domains/s.aumenta.do/html/merckerbitux/push/push_ios_admin.php on line 50
Warning: stream_socket_client() [function.stream-socket-client]: unable to connect to ssl://gateway.push.apple.com:2195 (Unknown error) in /nfs/c04/h03/mnt/181638/domains/s.aumenta.do/html/merckerbitux/push/push_ios_admin.php on line 50
Failed to connect: 0
Please suggest a solution.

The issue is solved.
The problem was a corrupted .pem file, so I regenerated it and now it works. Please also check the passphrase and the path for the .pem file.
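For reference, a minimal connection sketch (the certificate file name and passphrase below are placeholders, not the original code); if either the path or the passphrase is wrong, stream_socket_client() fails exactly as shown above:

<?php
// Sketch only: 'ck.pem' and 'your-passphrase' are placeholders for your own values.
$ctx = stream_context_create();
stream_context_set_option($ctx, 'ssl', 'local_cert', 'ck.pem');           // path to the .pem file
stream_context_set_option($ctx, 'ssl', 'passphrase', 'your-passphrase');  // passphrase used when exporting the certificate

$fp = stream_socket_client(
    'ssl://gateway.push.apple.com:2195',
    $errno,
    $errstr,
    60,
    STREAM_CLIENT_CONNECT | STREAM_CLIENT_PERSISTENT,
    $ctx
);

if (!$fp) {
    exit("Failed to connect: $errno $errstr" . PHP_EOL);
}
echo 'Connected to APNs' . PHP_EOL;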

Related

Unable to start Hive metastore due to a Java exception

I installed Apache Hive 3, Apache Hadoop 3, Trino, and MySQL (mysql Ver 14.14 Distrib 5.7.20, for Linux (x86_64) using EditLine wrapper).
When I start the Hive metastore I get an exception that blocks the metastore from starting:
Exception:
Exception in thread "main" java.lang.NoSuchMethodError: com/google/common/base/Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V (loaded from file:/root/trino_poc/trino-server-351/etc/catalog/apache-hive-3.1.2-bin/lib/guava-19.0.jar by jdk.internal.loader.ClassLoaders$AppClassLoader@8753f89b) called from class org.apache.hadoop.conf.Configuration (loaded from file:/root/trino_poc/hadoop-3.2.2/share/hadoop/common/hadoop-common-3.2.2.jar by jdk.internal.loader.ClassLoaders$AppClassLoader@8753f89b).
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
My solution (which I found on GitHub) was to replace guava-19.0.jar with guava-22.0.jar.
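A sketch of the swap, using the lib directory from the stack trace above (guava-22.0.jar can be downloaded from Maven Central first; the exact paths are assumptions based on that trace):

cd /root/trino_poc/trino-server-351/etc/catalog/apache-hive-3.1.2-bin/lib
mv guava-19.0.jar guava-19.0.jar.bak   # keep the old jar as a backup
cp /path/to/guava-22.0.jar .           # copy in the newer Guava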
After that the metastore started on localhost:9083.
But the port is not listening: netstat -na | grep :9083 ==> nothing.
In the Hive log I have this error:
Unable to open a test connection to the given database. JDBC url = jdbc:mysql://localhost:3306/metastore_dbs?useSSL=false;createDatabaseIfNotExist=true, username = root. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------^M
java.sql.SQLException: The connection property 'useSSL' only accepts values of the form: 'true', 'false', 'yes' or 'no'. The value 'false;createDatabaseIfNotExist=true' is not in this set
So the issue is here: 'false;createDatabaseIfNotExist=true'. The ';' is not accepted as a parameter separator by the MySQL JDBC driver, and because this URL lives in hive-site.xml I need to properly escape the characters (see the sketch below the summary).
I used '?' instead; the Hive metastore started, but port 9083 is still not listening, and I got almost the same exception, this time with 'false?createDatabaseIfNotExist=true'.
I used '&' ==> the Hive metastore does not start (it fails) and I get this exception:
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.common.base.internal.Finalizer (file:/root/trino_poc/trino-server-351/etc/catalog/apache-hive-3.1.2-bin/lib/guava-22.0.jar) to field java.lang.Thread.inheritableThreadLocals
WARNING: Please consider reporting this to the maintainers of com.google.common.base.internal.Finalizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
MetaException(message:org.jets3t.service.security.ProviderCredentials)
To sum up:
To resolve the exception, so I could initialize the database and start the Hive metastore, I replaced guava-19.0.jar with guava-22.0.jar, but port 9083 is not listening.
Then, to resolve the port-listening issue:
I used '?' instead of ';' ==> the Hive metastore still starts well, but port 9083 is not listening.
I tried '&' instead of '?' ==> the Hive metastore does not start (it fails) and I get the exception above related to guava-22.0.jar.
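For reference, in XML the '&' character has to be written as the entity '&amp;'. A sketch of what the escaped connection URL could look like in hive-site.xml (the property name javax.jdo.option.ConnectionURL is assumed from a standard Hive/MySQL metastore setup):

<!-- sketch only: property name assumed from a standard Hive/MySQL setup -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <!-- '&' must be escaped as '&amp;' inside an XML value -->
  <value>jdbc:mysql://localhost:3306/metastore_dbs?useSSL=false&amp;createDatabaseIfNotExist=true</value>
</property>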
I hope that was clear. I really need your help, because I have spent many hours trying to resolve this issue.
Thanks in advance

Unable to analyse MySQL error logs in OSSEC

I am trying to analyze MySQL error logs that are generated on my OSSEC agent and raise alerts on the OSSEC server.
Here is the block added to /var/ossec/etc/ossec.conf on the agent side to read the MySQL error log:
<localfile>
  <log_format>mysql_log</log_format>
  <location>/var/log/mysql/error.log</location>
</localfile>
After doing so I restarted the agent and the server, but I am unable to test any of the error logs that are generated on the agent side, such as:
2020-09-15T04:09:24.164859Z 12 [Note] Access denied for user 'root'@'localhost' (using password: YES)
As per the docs (https://ossec-docs.readthedocs.io/en/latest/docs/programs/ossec-logtest.html), under Caveats, we need to prepend 'MySQL log: ' to the log line that we feed to ossec-logtest.
This prefix is added automatically when the agent sends these logs to the OSSEC server for analysis.
[screenshot: ossec-logtest result for the MySQL error log]
ossec-logtest works fine after adding 'MySQL log: ' to the beginning of the line, as in the sketch below, but the alerts are not raised in real time.
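A sketch of that manual test, assuming ossec-logtest is at its default path (the sample line is the one above):

echo "MySQL log: 2020-09-15T04:09:24.164859Z 12 [Note] Access denied for user 'root'@'localhost' (using password: YES)" | /var/ossec/bin/ossec-logtest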
Can anyone please help me with this problem?
The fact that ossec-logtest triggers an alert means that the MySQL decoder and rules are working fine.
Check on the agent:
MySQL is running: systemctl status mysqld.service
MySQL configuration (log level and output file) allows logging that kind of event; a sketch of the relevant my.cnf settings follows this list. From the MySQL documentation: "If the value is greater than 1, aborted connections are written to the error log, and access-denied errors for new connection attempts are written."
MySQL is actually logging 'Access denied' entries: grep "Access denied" /var/log/mysql/error.log
OSSEC and its processes are running OK: /var/ossec/bin/ossec-control status
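A sketch of the my.cnf settings that enable that kind of logging (variable names are for a MySQL 5.7-style server; adjust to your version):

# /etc/mysql/my.cnf (sketch)
[mysqld]
log_error    = /var/log/mysql/error.log
log_warnings = 2   # values greater than 1 also log aborted connections and access-denied errors
# on MySQL 5.7.2+ the equivalent setting is log_error_verbosity = 3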
Check on the manager:
The log_alert_level field in /var/ossec/etc/ossec.conf is lower than or equal to 9 (the level shown by your ossec-logtest output).

Download from the command line in Ubuntu

I'm trying to download a file from the MySQL website from the command line on Ubuntu:
wget "http://dev.mysql.com/get/mysql-apt-config_0.8.3-1_all.deb"
But I'm getting this error:
--2017-10-09 15:22:23-- http://dev.mysql.com/get/mysql-apt-config_0.8.3-1_all.deb
Resolving dev.mysql.com (dev.mysql.com)... 137.254.60.11
Connecting to dev.mysql.com (dev.mysql.com)|137.254.60.11|:80... failed: Connection refused.
What is the reason for this connection refused error?
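A few checks that can help narrow this down (a sketch; the https URL assumes the same path is also served over TLS):

# try port 443 instead of 80
wget "https://dev.mysql.com/get/mysql-apt-config_0.8.3-1_all.deb"
# check whether your environment expects an HTTP proxy
env | grep -i proxy
# show the connection details verbosely
curl -v "http://dev.mysql.com/get/mysql-apt-config_0.8.3-1_all.deb" -o mysql-apt-config_0.8.3-1_all.deb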

Unable to start MySQL in Redmine

Hi, I'm unable to start MySQL and Subversion using root as the user. When I try to start them with ./ctlscript.sh start mysql, it reports that MySQL could not be started. The same happens with Subversion. The logs show:
error: Can't connect to local MySQL server through socket '/data/redmine/mysql/tmp/mysql.sock' (2) (Mysql2::Error)
error: [ pid=6349 thr=139977144923904 file=ext/common/agents/HelperAgent/RequestHandler.h:1731 time=2014-09-29 12:54:58.7821 ]: [Client 23] Cannot checkout session. An error occured while starting up the preloader.
Can anyone help?
Please check whether your root user has permission to start MySQL and the other services.
If not, grant the permissions using chmod.
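A sketch of what that check could look like (the socket path is taken from the error above; the location of ctlscript.sh is whatever your installation uses):

# check permissions on the control script and the MySQL socket directory
ls -l ./ctlscript.sh
ls -ld /data/redmine/mysql/tmp
# make the script executable if it is not, then retry
chmod +x ./ctlscript.sh
./ctlscript.sh start mysql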

Hive metastore configuration

I configured Hive with MySQL as the metastore repository. When I start the Hive server using my standard user (infa_hadoop), it gives me the error "can't connect to metastore using the URI provided".
But if I log in as root and start the Hive server, it starts fine.
Command used:
hive --service hiveserver
But when I try to execute the ETL job (Informatica), it gives me an access control exception.
Error:
Function [INFASQLExecute] failed in adapter [/u01/app/informatica/plugins/dynamic/hiveruntime/libhive.so] with error code [-1].
FnName: INFASQLExecute -- execute(). SQLException: Query returned non-zero code: 12, cause: FAILED: Hive Internal Error: java.lang.RuntimeException(org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="scratchdir":Infa_Linux:supergroup:rwxr-xr-x)
Function [INFASQLGetDiagRecW] failed in adapter [/u01/app/informatica/plugins/dynamic/hiveruntime/libhive.so] with error code [100].
FnName: INFASQLExecute -- execute(). SQLException: Query returned non-zero code: 12, cause: FAILED: Hive Internal Error: java.lang.RuntimeException(org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="scratchdir":Infa_Linux:supergroup:rwxr-xr-x)
Function [INFASQLGetDiagRecW] failed in adapter [/u01/app/informatica/plugins/dynamic/hiveruntime/libhive.so] with error code [100].].
But Hive works fine from the command prompt. Any suggestions?
First, please check whether your Hive Thrift server is up and running.
It is advised to use the following command to start the Hive Thrift server:
hive --service hiveserver -p 10001
Telnet in and check whether the server is listening on port 10001. If yes, I suppose your issue would be resolved.
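A quick sketch of that check, assuming the server runs on the same host:

# start the Thrift server on port 10001
hive --service hiveserver -p 10001 &
# verify that something is listening on that port
telnet localhost 10001
netstat -na | grep :10001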