We are facing connectivity issues while trying to connect via sqlplus from our Solaris server to Oracle Cloud.
Error: ORA-29106: Cannot import PKCS #12 wallet
ORA-29106: Cannot import PKCS #12 wallet
If you are using a client older than version 12, you may be hitting Bug 23335686 -> update your client
ORA-12518: TNS:listener could not hand off client connection
Check the sqlnet.ora and tnsnames.ora files on your client server:
sqlnet.ora: check WALLET_LOCATION -> DIRECTORY
tnsnames.ora: check TNSNAME
For me, the following helped:
# new lines in listener.ora
SSL_CLIENT_AUTHENTICATION = FALSE
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /opt/oracle/somewhere/hidden/wallets)
    )
  )
ADR_BASE_LISTENER = /opt/oracle
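If you want to sanity-check the wallet and the TNS alias outside of sqlplus, a small python-oracledb probe can help separate a wallet problem (ORA-29106) from a listener hand-off problem (ORA-12518). This is only a sketch: the alias, paths and credentials below are placeholders, not values from this setup.
# Minimal connectivity probe with python-oracledb (thin mode).
# All values below are placeholders -- substitute your own alias, wallet paths and credentials.
import oracledb

WALLET_DIR = "/opt/oracle/somewhere/hidden/wallets"   # directory holding tnsnames.ora and the wallet files
TNS_ALIAS = "mydb_high"                               # placeholder alias from tnsnames.ora

try:
    conn = oracledb.connect(
        user="admin",                   # placeholder
        password="secret",              # placeholder
        dsn=TNS_ALIAS,
        config_dir=WALLET_DIR,          # where tnsnames.ora is read from
        wallet_location=WALLET_DIR,     # where the wallet is read from
        wallet_password="wallet_pwd",   # only needed if the wallet is password protected
    )
    print("Connected, server version:", conn.version)
    conn.close()
except oracledb.DatabaseError as exc:
    # A failure here with ORA-29106 points at the wallet/client, ORA-12518 at the listener.
    print("Connection failed:", exc)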
I have an architecture where a Lambda function runs when an IRS data file is put on the S3 bucket. I can easily connect to my RDS instance from my local machine, but for some very weird reason the Lambda is not able to access it and gives this error:
"errorMessage": "2022-11-15T22:22:51.919Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Task timed out after 60.06 seconds"
2022-11-15T22:21:53.402Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 URI updated to: https://irs-data.s3.amazonaws.com/?prefix=index&encoding-type=url
[DEBUG] 2022-11-15T22:21:53.402Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Calculating signature using v4 auth.
[DEBUG] 2022-11-15T22:21:53.402Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 CanonicalRequest:
GET
/
encoding-type=url&prefix=index
host:irs-data.s3.amazonaws.com
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20221115T222153Z
x-amz-security-token:IQoJb3JpZ2luX2VjEP///////////wEaCXVzLWVhc3QtMSJHMEUCIQC9x2awzo/kQIantnRem2kmylKVHw5fBV+ylz/PQeP0DwIgQHovdX5Jv9/cpe/PAaWDTBZGcc3TxXGUALQRJCh1XMsq7QII9///////////ARAAGgw5NzU4MjIzMjkxNDIiDDegGWv5Wxk3ihIEdCrBAlqSbCaW/e4tIn2SK5gAcePArZf5Ij7o1qhoqEyG2boXivxftDkd7vM3RGg9lK2YaMEx9ku3mCBFpS03T5zlbr2EnaQjRuvEZzdHBKY79qqbUOCqcITmYkQQK+GSCoAyfnckjbjY0yORD41/7OS6wRa9pRKzu0ib8V/aE8Uln5Eem9ylYSn7LdyNWanD2I0CNfYNMV+Xx0bduAhVyXP6HjXikjTG5e2gqlA61xQmq4NMXyRixxINUk47R1FWBqPnYVqQWOIPW1HKcbj26qlW+JJyh530ML1RK3qqkssnH7c0LGu8rJz9Ag9wldHcRlODljZcaOmX7OlErdwIImGoeb99ngcVKVrCc+QnegTQolsoAhU3AG68LrZrmY/zRborttAslMzeUpiZ4fkA86QKJJDdpJEL/sZc/ZXzBMCj2x/ZozD+odCbBjqeAVPiKRQMCuBUqK8LlnALW2ki6RwMyS8WmGFpSoDjUYcyFDhMkHSa8TnTa+0gdertafyc4c4NPfsWFBYTLavdkgmACCkug75ENt3LWAgpGvBMxp6f2hiZKjJzqQnOE6VofIUXU8PLycB+L9uaJuYplLuMoRmjURtHFj5whMZrGclS0+V9/eH2ep8x9SAiFIJ1yOimmox6FTw2DhvpuE8U
host;x-amz-content-sha256;x-amz-date;x-amz-security-token
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
[DEBUG] 2022-11-15T22:21:53.439Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 StringToSign:
AWS4-HMAC-SHA256
20221115T222153Z
20221115/us-east-1/s3/aws4_request
5d61ffe01d9d6dd6aee4b1faeecbf21721efb8696f94f969389c93b05579847c
[DEBUG] 2022-11-15T22:21:53.439Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Signature:
d6e70d2c6350adfa7231bd7b2a63e5ac2fd83583f5dde1dfada2b08854d493d2
[DEBUG] 2022-11-15T22:21:53.439Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Sending http request: <AWSPreparedRequest stream_output=False, method=GET, url=https://irs-data.s3.amazonaws.com/?prefix=index&encoding-type=url, headers={'User-Agent': b'Boto3/1.19.10 Python/3.9.13 Linux/4.14.255-285-225.501.amzn2.x86_64 exec-env/AWS_Lambda_python3.9 Botocore/1.22.12 Resource', 'X-Amz-Date': b'20221115T222153Z', 'X-Amz-Security-Token': b'IQoJb3JpZ2luX2VjEP///////////wEaCXVzLWVhc3QtMSJHMEUCIQC9x2awzo/kQIantnRem2kmylKVHw5fBV+ylz/PQeP0DwIgQHovdX5Jv9/cpe/PAaWDTBZGcc3TxXGUALQRJCh1XMsq7QII9///////////ARAAGgw5NzU4MjIzMjkxNDIiDDegGWv5Wxk3ihIEdCrBAlqSbCaW/e4tIn2SK5gAcePArZf5Ij7o1qhoqEyG2boXivxftDkd7vM3RGg9lK2YaMEx9ku3mCBFpS03T5zlbr2EnaQjRuvEZzdHBKY79qqbUOCqcITmYkQQK+GSCoAyfnckjbjY0yORD41/7OS6wRa9pRKzu0ib8V/aE8Uln5Eem9ylYSn7LdyNWanD2I0CNfYNMV+Xx0bduAhVyXP6HjXikjTG5e2gqlA61xQmq4NMXyRixxINUk47R1FWBqPnYVqQWOIPW1HKcbj26qlW+JJyh530ML1RK3qqkssnH7c0LGu8rJz9Ag9wldHcRlODljZcaOmX7OlErdwIImGoeb99ngcVKVrCc+QnegTQolsoAhU3AG68LrZrmY/zRborttAslMzeUpiZ4fkA86QKJJDdpJEL/sZc/ZXzBMCj2x/ZozD+odCbBjqeAVPiKRQMCuBUqK8LlnALW2ki6RwMyS8WmGFpSoDjUYcyFDhMkHSa8TnTa+0gdertafyc4c4NPfsWFBYTLavdkgmACCkug75ENt3LWAgpGvBMxp6f2hiZKjJzqQnOE6VofIUXU8PLycB+L9uaJuYplLuMoRmjURtHFj5whMZrGclS0+V9/eH2ep8x9SAiFIJ1yOimmox6FTw2DhvpuE8U', 'X-Amz-Content-SHA256': b'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': b'AWS4-HMAC-SHA256 Credential=ASIA6GM4LCU3GISECH6S/20221115/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date;x-amz-security-token, Signature=d6e70d2c6350adfa7231bd7b2a63e5ac2fd83583f5dde1dfada2b08854d493d2'}>
[DEBUG] 2022-11-15T22:21:53.459Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Certificate path: /var/task/botocore/cacert.pem
[DEBUG] 2022-11-15T22:21:53.459Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Starting new HTTPS connection (1): irs-data.s3.amazonaws.com:443
2022-11-15T22:22:51.919Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Task timed out after 60.06 seconds
END RequestId: 9f20c035-5a47-4c6f-be9f-407b4a43aee6
REPORT RequestId: 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Duration: 60061.89 ms Billed Duration: 60000 ms Memory Size: 128 MB Max Memory Used: 116 MB Init Duration: 1040.64 ms
Lambda Code with S3 data:
import pandas as pd
import boto3
import os
from dotenv import load_dotenv
import logging
import sys
import time
import datetime as dt
import io
import pymysql

####### LOADING ENVIRONMENT VARIABLES #######
load_dotenv()

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

BUCKET = os.getenv('BUCKET')
BUCKET_PREFIX = os.getenv('BUCKET_PREFIX')

# Credentials to database connection
hostname = os.getenv('HOSTNAME')
dbname = os.getenv('DATABASE')
uname = os.getenv('USERNAME')
pwd = os.getenv('PASSWORD')

def lambda_handler(event, context):
    try:
        logger.info("TEST")
        logger.info(BUCKET)
        s3 = boto3.resource('s3')
        # assigning the bucket:
        my_bucket = s3.Bucket(BUCKET)
        data_list = []
        for my_bucket_object in my_bucket.objects.filter(Prefix=BUCKET_PREFIX):
            if my_bucket_object.key.endswith(".csv"):
                key = my_bucket_object.key
                body = my_bucket_object.get()['Body'].read()
                temp_data = pd.read_csv(io.BytesIO(body))
                data_list.append(temp_data)
        # concatenating all the files together:
        df = pd.concat(data_list)
        # Connect to MySQL Database
        connection = pymysql.connect(host=hostname, user=uname, password=pwd, database=dbname)
        cursor = connection.cursor()
        # Truncate the table everytime before an ETL:
        sql_trunc = "TRUNCATE TABLE `irs990`"
        cursor.execute(sql_trunc)
        # commit the results
        connection.commit()
        # creating columns from the dataframe:
        cols = "`,`".join([str(i) for i in df.columns.tolist()])
        # adding dataframe to mysql RDS
        for i, row in df.iterrows():
            sql = "INSERT INTO `irs990` (`" + cols + "`) VALUES (" + "%s," * (len(row) - 1) + "%s)"
            cursor.execute(sql, tuple(row))
            connection.commit()
        # checking if data was successfully written:
        sql = "SELECT * FROM `irs990`"
        cursor.execute(sql)
        result = cursor.fetchall()
        for i in result:
            print(i)
        # closing MySQL connection:
        connection.close()
    except Exception as e:
        logging.error(e)
My Lambda VPC details: [screenshot not included]
My RDS details: [screenshots not included]
Can somebody please help me figure out what to do? I am assigning the Lambda the same VPC as the RDS instance, I tried using the same security group as well, and I made sure the outbound IP address of the Lambda is in the inbound rules for RDS. But nothing :(
The proper security configuration should be:
A Security Group on the AWS Lambda function (Lambda-SG) that permits all outbound traffic (which is the default configuration)
A Security Group on the Amazon RDS database (DB-SG) that permits inbound connections on port 3306 from Lambda-SG
That is, DB-SG should specifically reference Lambda-SG. This will then permit the incoming connection from the Lambda function.
Merely putting the Lambda function and the RDS database "in the same Security Group" is insufficient because security groups apply to each resource individually. Unless the security group allows a connection from 'itself', this will not permit the desired access. Much better to use two security groups as described above.
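As a rough sketch of what that looks like in code (the security group IDs below are placeholders, not values from the question), the DB-SG ingress rule that references Lambda-SG can be created with boto3 like this:
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs -- substitute the real Lambda-SG and DB-SG group IDs.
LAMBDA_SG = "sg-0123456789abcdef0"
DB_SG = "sg-0fedcba9876543210"

# Allow MySQL (port 3306) into DB-SG only from resources that carry Lambda-SG.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": LAMBDA_SG}],
    }],
)
The same rule can also be added in the console by choosing Lambda-SG as the source of the inbound rule instead of a CIDR range.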
I created an RDS DB Instance using a mocked boto3 rds client. Here's how I set it up in my conftest.py:
import os

import boto3
import pytest
from moto import mock_rds

@pytest.fixture
def aws_credentials():
    """Mocked AWS Credentials for moto."""
    os.environ["AWS_ACCESS_KEY_ID"] = "testing"
    os.environ["AWS_SECRET_ACCESS_KEY"] = "testing"
    os.environ["AWS_SECURITY_TOKEN"] = "testing"
    os.environ["AWS_SESSION_TOKEN"] = "testing"

@pytest.fixture
def rds_client(aws_credentials, aws_region):  # aws_region is another fixture defined elsewhere
    with mock_rds():
        client = boto3.client("rds", region_name=aws_region)
        yield client
Following the example here (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.Python.html), I set up my MySQL connector like this:
db_instance = rds_client.create_db_instance(
    DBInstanceIdentifier="TestDBInstanceIdentifier",
    DBInstanceClass="db.m4.large",
    Engine="mysql",
    MasterUsername="root",
    DBName="TestDBName",
)
print("RDS Instance-----------------------------------------------------")
print(db_instance)

host = db_instance['DBInstance']['Endpoint']['Address']
port = db_instance['DBInstance']['Endpoint']['Port']
user = db_instance['DBInstance']['MasterUsername']
dbname = db_instance['DBInstance']['DBName']

print("Starting connection")
token = rds_client.generate_db_auth_token(DBHostname=host, Port=port, DBUsername=user, Region=aws_region)
mydb = mysql.connector.connect(host=host, database=dbname, user=user, passwd=token, port=port)
However, the connector can't find the DB:
FAILED test_read_rds_db - mysql.connector.errors.DatabaseError: 2005 (HY000): Unknown MySQL server host 'TestDBInstanceIdentifier.aaaaaaaaaa.us-east-1.rds.amazonaws.com' (8)
Has someone been able to set this up before?
Moto does not offer this functionality. It mocks the AWS API, but does not expose any RDBMS functionality.
You could look into Localstack instead. It uses Moto in the background to mock calls to AWS, but offers features on top of that such as the ability to connect to an RDS instance.
See the docs here: https://docs.localstack.cloud/aws/rds/
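For illustration only, a LocalStack-based test could look roughly like the sketch below. It assumes LocalStack is running locally with RDS support enabled (a Pro feature) on the default edge port 4566; the instance identifier, credentials and engine settings are placeholders, and the exact behaviour depends on your LocalStack version.
import boto3
import mysql.connector

# Point the RDS client at the local LocalStack edge endpoint instead of AWS.
rds = boto3.client(
    "rds",
    region_name="us-east-1",
    endpoint_url="http://localhost:4566",    # LocalStack default edge port
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

rds.create_db_instance(
    DBInstanceIdentifier="test-db",           # placeholder identifier
    DBInstanceClass="db.t3.micro",
    Engine="mysql",
    MasterUsername="root",
    MasterUserPassword="password",            # placeholder password
    DBName="TestDBName",
    AllocatedStorage=20,
)

# Wait until the instance is reported as available, then read its endpoint.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="test-db")
db = rds.describe_db_instances(DBInstanceIdentifier="test-db")["DBInstances"][0]

# Unlike moto's fake hostname, this endpoint should actually be reachable locally.
conn = mysql.connector.connect(
    host=db["Endpoint"]["Address"],
    port=db["Endpoint"]["Port"],
    user="root",
    password="password",
    database="TestDBName",
)
print(conn.is_connected())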
I'm trying to use a MySQL cursor to interact with a remote database:
from flask import Flask
from flask_mysqldb import MySQL

app = Flask(__name__)

app.config['MYSQL_USER'] = 'sql7368254'  # it's a testing database. Nothing to exploit, really.
app.config['MYSQL_PASSWORD'] = 'YnCZ8j4jbi'
app.config['MYSQL_HOST'] = 'sql7.freemysqlhosting.net'
app.config['MYSQL_DB'] = 'sql7368254'
app.config['MYSQL_CURSORCLASS'] = 'DictCursor'

db = MySQL(app)

@app.route('/')
def index():
    cur = db.connection.cursor()
    cur.execute('''CREATE TABLE users (id INTEGER, email VARCHAR(30), password VARCHAR(255))''')
    return 'Done'

if __name__ == '__main__':
    app.run(debug=True)
When I spin this up I get:
* Serving Flask app "server.py"
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
Segmentation fault (core dumped)
"Segmentation fault" isn't telling me what is wrong. What might be an issue ?
The import was the problem:
from flask_mysqldb import MySQL
As this topic describes, Flask-MySQLdb doesn't work nicely with Python 3. Using the Python MySQL connector (mysql-connector-python) instead is advised.
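For reference, a minimal rewrite of the route above using mysql-connector-python could look like the sketch below. It reuses the placeholder test credentials from the question and omits error handling and connection pooling for brevity.
from flask import Flask
import mysql.connector

app = Flask(__name__)

# Same throwaway test credentials as in the question.
DB_CONFIG = {
    "user": "sql7368254",
    "password": "YnCZ8j4jbi",
    "host": "sql7.freemysqlhosting.net",
    "database": "sql7368254",
}

@app.route('/')
def index():
    # Open a short-lived connection per request; mysql-connector-python works fine on Python 3.
    conn = mysql.connector.connect(**DB_CONFIG)
    cur = conn.cursor(dictionary=True)
    cur.execute(
        "CREATE TABLE IF NOT EXISTS users "
        "(id INTEGER, email VARCHAR(30), password VARCHAR(255))"
    )
    conn.commit()
    cur.close()
    conn.close()
    return 'Done'

if __name__ == '__main__':
    app.run(debug=True)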
I'm trying to set up Heterogeneous Services (HS) from a 12c database that will eventually send data across to a remote MySQL server.
I have installed the ODBC driver:
root ~ # rpm -ivh mysql-connector-odbc-5.3.4-1.el6.x86_64.rpm
This is my listener.ora file:
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = ap-ora-records-test.ap.local)(PORT = 1521))
    )
  )
This is my tnsnames.ora file:
RECORDSDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ap-ora-records-test.ap.local)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = recordsdb.ap.local)
    )
  )

MYSQL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = ap-ora-records-test.ap.local)(PORT = 1521))
    (CONNECT_DATA = (SID = MYSQL))
    (HS = OK)
  )
I connected to the database and created the database link:
SQL> CREATE DATABASE LINK MYSQL
2 CONNECT TO "root" IDENTIFIED BY "removed"
3 USING 'mysql';
Then I restarted the listener with lsnrctl:
./bin/lsnrctl reload
./bin/lsnrctl stop
./bin/lsnrctl start
./bin/lsnrctl status
./bin/lsnrctl status
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 08-SEP-2014 10:54:42
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date 08-SEP-2014 10:42:05
Uptime 0 days 0 hr. 12 min. 37 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora
Listener Log File /u01/app/oracle/product/12.1.0/dbhome_1/log/diag/tnslsnr/ap-ora-records-test/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ap-ora-records-test.ap.local)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ap-ora-records-test.ap.local)(PORT=8080))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "recordsdb.ap.local" has 1 instance(s).
Instance "recordsdb", status READY, has 1 handler(s) for this service...
Service "recordsdbXDB.ap.local" has 1 instance(s).
Instance "recordsdb", status READY, has 1 handler(s) for this service...
The command completed successfully
As well as running tnsping:
oracle /u01/app/oracle/product/12.1.0/dbhome_1 $ ./bin/tnsping ap-ora-records-test
TNS Ping Utility for Linux: Version 12.1.0.1.0 - Production on 08-SEP-2014 10:42:44
Copyright (c) 1997, 2013, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/12.1.0/dbhome_1/network/admin/sqlnet.ora
Used EZCONNECT adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=::1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=1521)))
OK (0 msec)
I then tried getting a piece of data - SQL> SELECT * FROM MYSQL.users@mysql; - which gave the following result:
ERROR at line 1:
ORA-28545: error diagnosed by Net8 when connecting to an agent
Unable to retrieve text of NETWORK/NCR message 65535
ORA-02063: preceding 2 lines from MYSQL
Where am I going wrong?
Edit 1:
This is my creation code:
CREATE SHARED PUBLIC DATABASE LINK mysql_remote_shared
CONNECT TO root IDENTIFIED BY password
AUTHENTICATED BY root IDENTIFIED BY password
USING 'mysql';
I'm having a little problem. I'm using Debian with Asterisk 1.8, and I want to use CDR along with MySQL.
In Asterisk 1.8 you apparently have to use the adaptive CDR module (cdr_adaptive_odbc) instead of the regular one. That is just what I did. Now I get one error when I run "module reload cdr_adaptive_odbc.so" and I can't solve it:
WARNING[23172]: cdr_adaptive_odbc.c:123 load_config: No such connection 'MySQL-asterisk' in the 'adaptive-connection' section of cdr_adaptive_odbc.conf. Check res_odbc.conf.
Here are all the related files; I can't understand what is wrong.
/etc/odbc.ini:
[MySQL]
Description = MySQL ODBC MyODBC Driver
Driver = /usr/lib/libmyodbc3.so
FileUsage = 1
[Text]
Description = ODBC for Text Files
Driver = /usr/lib/libodbctxt.so
Setup = /usr/lib/libodbctxtS.so
FileUsage = 1
CPTimeout =
CPReuse =
[PostgreSQL]
Description = PostgreSQL driver for Linux & Win32
Driver = /usr/lib/libodbcpsql.so
Setup = /usr/lib/libodbcpsqlS.so
FileUsage = 1
[DB2]
Description = DB2 Driver
Driver = /opt/IBM/db2/V8.1/lib64/libdb2.so
FileUsage = 1
DontDLClose = 1
DMEnvAttr = SQL_ATTR_UNIXODBC_ENVATTR={DB2INSTANCE=db2inst1}
[MySQL-asterisk]
Description = MySQL asterisk database
Driver = MySQL
Socket = /var/run/mysqld/mysqld.sock
Server = localhost
User = root
Password = XXXXX
Database = ics
Option = 3
/etc/asterisk/cdr_adaptive_odbc.conf:
[adaptive-connection]
connection = MySQL-asterisk
table = cdr
alias start => calldate
/etc/asterisk/res_odbc.conf:
[Asterisk]
enabled => yes
dsn => MySQL-asterisk
username => root
password => XXX
;pooling => no
;limit => 0
pre-connect => yes
This is what I get when I check the CDR status:
Call Detail Record (CDR) settings
----------------------------------
Logging: Enabled
Mode: Simple
Log unanswered calls: No
* Registered Backends
-------------------
Adaptive ODBC
cdr-custom
ODBC
csv
radius
res_config_sqlite
And this is what I get when I check the ODBC status:
ODBC DSN Settings
-----------------
Name: Asterisk
DSN: MySQL-asterisk
I can't figure out what's wrong. Does anyone have an idea?
I assume you've fixed this by now. I ran into a similar issue, and the problem stemmed from the documentation: it refers to connection= in cdr_adaptive_odbc.conf as being the DSN name, but it is actually the res_odbc.conf context name you want, not the DSN. So in your case:
connection = Asterisk
Then at the command line do a
CLI> module reload cdr_adaptive_odbc.so
and you should see a screenful of output as Asterisk finds the tables and applies any mappings you have specified.