How can I adjust query timeouts - Skyve

I'm getting a timeout exception in my local dev environment for a Persistence SQL query.
Is the timeout configurable in Skyve?

There are two global timeouts specified in the JSON config, e.g.
// Datastore definitions
"dataStores": {
    // Skyve data store
    "skyve": {
        // JNDI name
        "jndi": "java:/H2Demo",
        // Dialect
        "dialect": "org.skyve.impl.persistence.hibernate.dialect.H2SpatialDialect",
        // Timeout for data store connections employed in general UI/forms processing - 0 indicates no timeout
        "oltpConnectionTimeoutInSeconds": 30,
        // Timeout for data store connections employed when running jobs and background tasks - 0 indicates no timeout
        "asyncConnectionTimeoutInSeconds": 300
    }
},
There are also timeouts you can specify against each metadata query, e.g.
<query name="qUsers" documentName="User" timeoutInSeconds="23">
All programmatic queries can set a timeout, e.g.
Persistence p = CORE.getPersistence();
SQL sql = p.newSQL("select 1 from ADM_Contact");
sql.setTimeoutInSeconds(200);
sql.scalarIterable(Number.class);
p.newSQL("select 1 from ADM_Contact").noTimeout().scalarResults(Number.class);

Related

My Lambda is not able to access MySQL RDS. What to do?

I have an architecture where the Lambda runs when an IRS data file is put on the S3 bucket. I can easily connect to my RDS from my local machine, but for some very weird reason the Lambda is not able to access it and gives this error:
"errorMessage": "2022-11-15T22:22:51.919Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Task timed out after 60.06 seconds"
2022-11-15T22:21:53.402Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 URI updated to: https://irs-data.s3.amazonaws.com/?prefix=index&encoding-type=url
[DEBUG] 2022-11-15T22:21:53.402Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Calculating signature using v4 auth.
[DEBUG] 2022-11-15T22:21:53.402Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 CanonicalRequest:
GET
/
encoding-type=url&prefix=index
host:irs-data.s3.amazonaws.com
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20221115T222153Z
x-amz-security-token:IQoJb3JpZ2luX2VjEP///////////wEaCXVzLWVhc3QtMSJHMEUCIQC9x2awzo/kQIantnRem2kmylKVHw5fBV+ylz/PQeP0DwIgQHovdX5Jv9/cpe/PAaWDTBZGcc3TxXGUALQRJCh1XMsq7QII9///////////ARAAGgw5NzU4MjIzMjkxNDIiDDegGWv5Wxk3ihIEdCrBAlqSbCaW/e4tIn2SK5gAcePArZf5Ij7o1qhoqEyG2boXivxftDkd7vM3RGg9lK2YaMEx9ku3mCBFpS03T5zlbr2EnaQjRuvEZzdHBKY79qqbUOCqcITmYkQQK+GSCoAyfnckjbjY0yORD41/7OS6wRa9pRKzu0ib8V/aE8Uln5Eem9ylYSn7LdyNWanD2I0CNfYNMV+Xx0bduAhVyXP6HjXikjTG5e2gqlA61xQmq4NMXyRixxINUk47R1FWBqPnYVqQWOIPW1HKcbj26qlW+JJyh530ML1RK3qqkssnH7c0LGu8rJz9Ag9wldHcRlODljZcaOmX7OlErdwIImGoeb99ngcVKVrCc+QnegTQolsoAhU3AG68LrZrmY/zRborttAslMzeUpiZ4fkA86QKJJDdpJEL/sZc/ZXzBMCj2x/ZozD+odCbBjqeAVPiKRQMCuBUqK8LlnALW2ki6RwMyS8WmGFpSoDjUYcyFDhMkHSa8TnTa+0gdertafyc4c4NPfsWFBYTLavdkgmACCkug75ENt3LWAgpGvBMxp6f2hiZKjJzqQnOE6VofIUXU8PLycB+L9uaJuYplLuMoRmjURtHFj5whMZrGclS0+V9/eH2ep8x9SAiFIJ1yOimmox6FTw2DhvpuE8U
host;x-amz-content-sha256;x-amz-date;x-amz-security-token
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
[DEBUG] 2022-11-15T22:21:53.439Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 StringToSign:
AWS4-HMAC-SHA256
20221115T222153Z
20221115/us-east-1/s3/aws4_request
5d61ffe01d9d6dd6aee4b1faeecbf21721efb8696f94f969389c93b05579847c
[DEBUG] 2022-11-15T22:21:53.439Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Signature:
d6e70d2c6350adfa7231bd7b2a63e5ac2fd83583f5dde1dfada2b08854d493d2
[DEBUG] 2022-11-15T22:21:53.439Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Sending http request: <AWSPreparedRequest stream_output=False, method=GET, url=https://irs-data.s3.amazonaws.com/?prefix=index&encoding-type=url, headers={'User-Agent': b'Boto3/1.19.10 Python/3.9.13 Linux/4.14.255-285-225.501.amzn2.x86_64 exec-env/AWS_Lambda_python3.9 Botocore/1.22.12 Resource', 'X-Amz-Date': b'20221115T222153Z', 'X-Amz-Security-Token': b'IQoJb3JpZ2luX2VjEP///////////wEaCXVzLWVhc3QtMSJHMEUCIQC9x2awzo/kQIantnRem2kmylKVHw5fBV+ylz/PQeP0DwIgQHovdX5Jv9/cpe/PAaWDTBZGcc3TxXGUALQRJCh1XMsq7QII9///////////ARAAGgw5NzU4MjIzMjkxNDIiDDegGWv5Wxk3ihIEdCrBAlqSbCaW/e4tIn2SK5gAcePArZf5Ij7o1qhoqEyG2boXivxftDkd7vM3RGg9lK2YaMEx9ku3mCBFpS03T5zlbr2EnaQjRuvEZzdHBKY79qqbUOCqcITmYkQQK+GSCoAyfnckjbjY0yORD41/7OS6wRa9pRKzu0ib8V/aE8Uln5Eem9ylYSn7LdyNWanD2I0CNfYNMV+Xx0bduAhVyXP6HjXikjTG5e2gqlA61xQmq4NMXyRixxINUk47R1FWBqPnYVqQWOIPW1HKcbj26qlW+JJyh530ML1RK3qqkssnH7c0LGu8rJz9Ag9wldHcRlODljZcaOmX7OlErdwIImGoeb99ngcVKVrCc+QnegTQolsoAhU3AG68LrZrmY/zRborttAslMzeUpiZ4fkA86QKJJDdpJEL/sZc/ZXzBMCj2x/ZozD+odCbBjqeAVPiKRQMCuBUqK8LlnALW2ki6RwMyS8WmGFpSoDjUYcyFDhMkHSa8TnTa+0gdertafyc4c4NPfsWFBYTLavdkgmACCkug75ENt3LWAgpGvBMxp6f2hiZKjJzqQnOE6VofIUXU8PLycB+L9uaJuYplLuMoRmjURtHFj5whMZrGclS0+V9/eH2ep8x9SAiFIJ1yOimmox6FTw2DhvpuE8U', 'X-Amz-Content-SHA256': b'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': b'AWS4-HMAC-SHA256 Credential=ASIA6GM4LCU3GISECH6S/20221115/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date;x-amz-security-token, Signature=d6e70d2c6350adfa7231bd7b2a63e5ac2fd83583f5dde1dfada2b08854d493d2'}>
[DEBUG] 2022-11-15T22:21:53.459Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Certificate path: /var/task/botocore/cacert.pem
[DEBUG] 2022-11-15T22:21:53.459Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Starting new HTTPS connection (1): irs-data.s3.amazonaws.com:443
2022-11-15T22:22:51.919Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Task timed out after 60.06 seconds
END RequestId: 9f20c035-5a47-4c6f-be9f-407b4a43aee6
REPORT RequestId: 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Duration: 60061.89 ms Billed Duration: 60000 ms Memory Size: 128 MB Max Memory Used: 116 MB Init Duration: 1040.64 ms
Lambda Code with S3 data:
import pandas as pd
import boto3
import os
from dotenv import load_dotenv
import logging
import sys
import time
import datetime as dt
import io
import pymysql

####### LOADING ENVIRONMENT VARIABLES #######
load_dotenv()
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

BUCKET = os.getenv('BUCKET')
BUCKET_PREFIX = os.getenv('BUCKET_PREFIX')

# Credentials for the database connection
hostname = os.getenv('HOSTNAME')
dbname = os.getenv('DATABASE')
uname = os.getenv('USERNAME')
pwd = os.getenv('PASSWORD')

def lambda_handler(event, context):
    try:
        logger.info("TEST")
        logger.info(BUCKET)
        s3 = boto3.resource('s3')
        # assigning the bucket:
        my_bucket = s3.Bucket(BUCKET)
        data_list = []
        for my_bucket_object in my_bucket.objects.filter(Prefix=BUCKET_PREFIX):
            if my_bucket_object.key.endswith(".csv"):
                key = my_bucket_object.key
                body = my_bucket_object.get()['Body'].read()
                temp_data = pd.read_csv(io.BytesIO(body))
                data_list.append(temp_data)
        # concatenating all the files together:
        df = pd.concat(data_list)
        # Connect to MySQL Database
        connection = pymysql.connect(host=hostname, user=uname, password=pwd, database=dbname)
        cursor = connection.cursor()
        # Truncate the table every time before an ETL:
        sql_trunc = "TRUNCATE TABLE `irs990`"
        cursor.execute(sql_trunc)
        # commit the results
        connection.commit()
        # creating columns from the dataframe:
        cols = "`,`".join([str(i) for i in df.columns.tolist()])
        # adding dataframe to mysql RDS
        for i, row in df.iterrows():
            sql = "INSERT INTO `irs990` (`" + cols + "`) VALUES (" + "%s," * (len(row) - 1) + "%s)"
            cursor.execute(sql, tuple(row))
            connection.commit()
        # checking if data was successfully written:
        sql = "SELECT * FROM `irs990`"
        cursor.execute(sql)
        result = cursor.fetchall()
        for i in result:
            print(i)
        # closing MySQL connection:
        connection.close()
    except Exception as e:
        logging.error(e)
My Lambda VPC details:
[screenshot omitted]
My RDS details:
[screenshots omitted]
Can somebody please help me figure out what to do? I am assigning the Lambda the same VPC as the RDS; I tried using the same security group as well, and made sure the outbound IP address of the Lambda is in the inbound rules for the RDS. But nothing :(
The proper security configuration should be:
A Security Group on the AWS Lambda function (Lambda-SG) that permits All outbound access (which is the default configuration)
A Security Group on the Amazon RDS database (DB-SG) that permits inbound connections on port 3306 from Lambda-SG
That is, DB-SG should specifically reference Lambda-SG. This will then permit the incoming connection from the Lambda function.
Merely putting the Lambda function and the RDS database "in the same Security Group" is insufficient, because security groups apply to each resource individually. Unless the security group allows a connection from 'itself', this will not permit the desired access. It is much better to use two security groups as described above.
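If it helps, here is a minimal boto3 sketch of that ingress rule (the security-group IDs are placeholders; the same rule can of course be added in the console):
import boto3

ec2 = boto3.client("ec2")

LAMBDA_SG = "sg-0123456789abcdef0"  # placeholder: SG attached to the Lambda function
DB_SG = "sg-0fedcba9876543210"      # placeholder: SG attached to the RDS instance

# Permit MySQL (3306) into DB-SG *from* Lambda-SG (SG-to-SG reference, not a CIDR)
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": LAMBDA_SG}],
    }],
)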

Caused by: org.apache.ignite.IgniteCheckedException: Failed to validate cache configuration. Cache store factory is not serializable. Cache name:

I am trying to set up an Apache Ignite cache store using MySQL as external storage.
I have read all the official documentation about it and examined many other examples, but I can't make it run:
[2022-06-02 16:45:56:551] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Configured failure handler: [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]]]
[2022-06-02 16:45:56:874] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Successfully bound communication NIO server to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0, selectorsCnt=4, selectorSpins=0, pairedConn=false]
[2022-06-02 16:45:56:874] [WARN] - 55333 - org.apache.ignite.logger.java.JavaLogger.warning(JavaLogger.java:295) - Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[16:45:56] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[2022-06-02 16:45:56:898] [WARN] - 55333 - org.apache.ignite.logger.java.JavaLogger.warning(JavaLogger.java:295) - Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
[2022-06-02 16:45:56:926] [WARN] - 55333 - org.apache.ignite.logger.java.JavaLogger.warning(JavaLogger.java:295) - Collision resolution is disabled (all jobs will be activated upon arrival).
[16:45:56] Security status [authentication=off, sandbox=off, tls/ssl=off]
[2022-06-02 16:45:56:927] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Security status [authentication=off, sandbox=off, tls/ssl=off]
[2022-06-02 16:45:57:204] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Successfully bound to TCP port [port=47500, localHost=0.0.0.0/0.0.0.0, locNodeId=b397c114-d34d-4245-9645-f78c5d184888]
[2022-06-02 16:45:57:242] [WARN] - 55333 - org.apache.ignite.logger.java.JavaLogger.warning(JavaLogger.java:295) - DataRegionConfiguration.maxWalArchiveSize instead DataRegionConfiguration.walHistorySize would be used for removing old archive wal files
[2022-06-02 16:45:57:253] [INFO] - 55333 - org.apache.ignite.logger.java.JavaLogger.info(JavaLogger.java:285) - Configured data regions initialized successfully [total=4]
[2022-06-02 16:45:57:307] [ERROR] - 55333 - org.apache.ignite.logger.java.JavaLogger.error(JavaLogger.java:310) - Exception during start processors, node will be stopped and close connections
org.apache.ignite.IgniteCheckedException: Failed to start processor: GridProcessorAdapter []
at org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1989) ~[ignite-core-2.10.0.jar:2.10.0]
Caused by: org.apache.ignite.IgniteCheckedException: Failed to validate cache configuration. Cache store factory is not serializable. Cache name: StockConfigCache
Caused by: org.apache.ignite.IgniteCheckedException: Failed to serialize object: CacheJdbcPojoStoreFactory [batchSize=512, dataSrcBean=null, dialect=org.apache.ignite.cache.store.jdbc.dialect.MySQLDialect@14993306, maxPoolSize=8, maxWrtAttempts=2, parallelLoadCacheMinThreshold=512, hasher=org.apache.ignite.cache.store.jdbc.JdbcTypeDefaultHasher@73ae82da, transformer=org.apache.ignite.cache.store.jdbc.JdbcTypesDefaultTransformer@6866e740, dataSrc=null, dataSrcFactory=com.anyex.ex.memory.model.CacheConfig$$Lambda$310/1421763091@31183ee2, sqlEscapeAll=false]
Caused by: java.io.NotSerializableException: com.anyex.ex.database.DynamicDataSource
Any advice or idea would be appreciated, thank you!
public static CacheConfiguration cacheStockConfigCache(DataSource dataSource, Boolean writeBehind)
{
    CacheConfiguration ccfg = new CacheConfiguration();
    ccfg.setSqlSchema("public");
    ccfg.setName("StockConfigCache");
    ccfg.setCacheMode(CacheMode.REPLICATED);
    ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    ccfg.setIndexedTypes(Long.class, StockConfigMem.class);

    CacheJdbcPojoStoreFactory cacheStoreFactory = new CacheJdbcPojoStoreFactory();
    cacheStoreFactory.setDataSourceFactory((Factory<DataSource>) () -> dataSource);
    //cacheStoreFactory.setDialect(new OracleDialect());
    cacheStoreFactory.setDialect(new MySQLDialect());
    cacheStoreFactory.setTypes(JdbcTypes.jdbcTypeStockConfigMem(ccfg.getName(), "StockConfig"));
    ccfg.setCacheStoreFactory(cacheStoreFactory);

    ccfg.setReadFromBackup(false);
    ccfg.setCopyOnRead(true);
    if (writeBehind) {
        ccfg.setWriteThrough(true);
        ccfg.setWriteBehindEnabled(true);
    }
    return ccfg;
}

public static JdbcType jdbcTypeStockConfigMem(String cacheName, String tableName)
{
    JdbcType type = new JdbcType();
    type.setCacheName(cacheName);
    type.setKeyType(Long.class);
    type.setValueType(StockConfigMem.class);
    type.setDatabaseTable(tableName);
    type.setKeyFields(new JdbcTypeField(Types.NUMERIC, "id", Long.class, "id"));
    type.setValueFields(
        new JdbcTypeField(Types.NUMERIC, "id", Long.class, "id"),
        new JdbcTypeField(Types.NUMERIC, "stockinfoId", Long.class, "stockinfoId"),
        new JdbcTypeField(Types.VARCHAR, "remark", String.class, "remark"),
        new JdbcTypeField(Types.TIMESTAMP, "updateTime", Timestamp.class, "updateTime")
    );
    return type;
}

igniteConfiguration.setCacheConfiguration(
    CacheConfig.cacheStockConfigCache(dataSource, igniteProperties.getJdbc().getWriteBehind())
);

@Bean("igniteInstance")
@ConditionalOnProperty(value = "ignite.enable", havingValue = "true", matchIfMissing = true)
public Ignite ignite(IgniteConfiguration igniteConfiguration)
{
    log.info("igniteConfiguration info:{}", igniteConfiguration.toString());
    Ignite ignite = Ignition.start(igniteConfiguration);
    log.info("{} ignite started with discovery type {}", ignite.name(), igniteProperties.getType());
    return ignite;
}
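The last Caused by points at the lambda passed to setDataSourceFactory: it captures the non-serializable DynamicDataSource, so Ignite cannot serialize the factory itself. A hedged sketch of one common workaround, assuming the data source can be constructed on each node rather than captured (connection details are placeholders):
import java.io.Serializable;
import javax.cache.configuration.Factory;
import javax.sql.DataSource;
import com.mysql.cj.jdbc.MysqlDataSource;

// A serializable factory that creates the DataSource on demand on each node,
// instead of capturing an existing (non-serializable) instance.
public class MySqlDataSourceFactory implements Factory<DataSource>, Serializable {
    private static final long serialVersionUID = 1L;

    @Override
    public DataSource create() {
        MysqlDataSource ds = new MysqlDataSource(); // from mysql-connector-java
        ds.setURL("jdbc:mysql://localhost:3306/anyex"); // placeholder URL
        ds.setUser("user");         // placeholder
        ds.setPassword("password"); // placeholder
        return ds;
    }
}

// usage, replacing the capturing lambda:
// cacheStoreFactory.setDataSourceFactory(new MySqlDataSourceFactory());
Alternatively, CacheJdbcPojoStoreFactory.setDataSourceBean(String) lets each node look the data source up by bean name instead of serializing it.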

Cygnus 0.7.1 does not create tables (MySQL and HDFS)

I've installed Cygnus 0.7.1 from source, and after configuring it (MySQL and HDFS sinks) I can start it without problems. When I subscribe Cygnus to an Orion context, it receives the information OK, but there is a problem with MySQL and HDFS. This is the log:
15/03/17 13:58:52 INFO handlers.OrionRestHandler: Starting transaction (1426597123-891-0000000000)
15/03/17 13:58:52 INFO handlers.OrionRestHandler: Received data ({ "subscriptionId" : "5508250c1860a36e55240c84", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "type" : "ubk-temp", "isPattern" : "false", "id" : "ubk:temp:1", "attributes" : [ { "name" : "temperature", "type" : "float", "value" : "11" } ] }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
15/03/17 13:58:52 INFO handlers.OrionRestHandler: Event put in the channel (id=1549700267, ttl=10)
15/03/17 13:58:52 INFO sinks.OrionSink: Event got from the channel (id=1549700267, headers={fiware-servicepath=ubktemp, destination=ubk_temp_1_ubk-temp, content-type=application/json, fiware-service=ubikwa, ttl=10, transactionId=1426597123-891-0000000000, timestamp=1426597132511}, bodyLength=462)
15/03/17 13:58:52 INFO sinks.OrionSink: Event got from the channel (id=1549700267, headers={fiware-servicepath=ubktemp, destination=ubk_temp_1_ubk-temp, content-type=application/json, fiware-service=ubikwa, ttl=10, transactionId=1426597123-891-0000000000, timestamp=1426597132511}, bodyLength=462)
15/03/17 13:58:52 INFO sinks.OrionMySQLSink: [mysql-sink] Persisting data at OrionMySQLSink. Database: ubikwa, Table: ubktemp_ubk_temp_1_ubk-temp, Timestamp: 2015-03-17T13:58:52.511, Data (attrs): {temperature=11}, (metadata): {temperature_md=[]}
15/03/17 13:58:53 INFO sinks.OrionHDFSSink: [hdfs-sink] Persisting data at OrionHDFSSink. HDFS file (ubikwa/ubktemp/ubk_temp_1_ubk-temp/ubk_temp_1_ubk-temp.txt), Data ({"recvTime":"2015-03-17T13:58:52.511","temperature":"11", "temperature_md":[]})
15/03/17 13:58:53 WARN sinks.OrionSink: Bad context data (Table 'ubikwa.ubktemp_ubk_temp_1_ubk-temp' doesn't exist)
15/03/17 13:58:53 INFO sinks.OrionSink: Finishing transaction (1426597123-891-0000000000)
The MySQL sink does not raise any errors, but no tables are created. The HDFS sink also seems unable to create the files. I previously installed Cygnus 0.6 and it worked with the same configuration.
Edit:
Here is my configuration:
cygnusagent.sources = http-source
cygnusagent.sinks = hdfs-sink mysql-sink
cygnusagent.channels = hdfs-channel mysql-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = hdfs-channel mysql-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = es.tid.fiware.fiwareconnectors.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = ubikwa
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = ubktemp
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts de
# Timestamp interceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# Destination extractor interceptor, do not change
cygnusagent.sources.http-source.interceptors.de.type = es.tid.fiware.fiwareconnectors.cygnus.interceptors.DestinationExtractor$Builder
# Matching table for the destination extractor interceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.de.matching_table = /opt/cygnus/conf/matching_table.conf
# ============================================
# OrionHDFSSink configuration
# channel name from where to read notification events
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
# sink class, must not be changed
cygnusagent.sinks.hdfs-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink
# Comma-separated list of FQDN/IP address regarding the Cosmos Namenode endpoints
# If you are using Kerberos authentication, then the usage of FQDNs instead of IP addresses is mandatory
cygnusagent.sinks.hdfs-sink.cosmos_host = 130.206.80.46
# port of the Cosmos service listening for persistence operations; 14000 for httpfs, 50070 for webhdfs and free choice for infinity
cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
# default username allowed to write in HDFS
cygnusagent.sinks.hdfs-sink.cosmos_default_username = ***
# default password for the default username
cygnusagent.sinks.hdfs-sink.cosmos_default_password = ***
# HDFS backend type (webhdfs, httpfs or infinity)
cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs
# how the attributes are stored, either per row or per column (row, column)
cygnusagent.sinks.hdfs-sink.attr_persistence = column
# Hive FQDN/IP address of the Hive server
cygnusagent.sinks.hdfs-sink.hive_host = 130.206.80.46
# Hive port for Hive external table provisioning
cygnusagent.sinks.hdfs-sink.hive_port = 10000
# Kerberos-based authentication enabling
cygnusagent.sinks.hdfs-sink.krb5_auth = false
# Kerberos username
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_user = krb5_username
# Kerberos password
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_password = xxxxxxxxxxxxx
# Kerberos login file
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_login_conf_file = /usr/cygnus/conf/krb5_login.conf
# Kerberos configuration file
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_conf_file = /usr/cygnus/conf/krb5.conf
# ============================================
# OrionMySQLSink configuration
# channel name from where to read notification events
cygnusagent.sinks.mysql-sink.channel = mysql-channel
# sink class, must not be changed
cygnusagent.sinks.mysql-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionMySQLSink
# the FQDN/IP address where the MySQL server runs
cygnusagent.sinks.mysql-sink.mysql_host = 127.0.0.1
# the port where the MySQL server listens for incoming connections
cygnusagent.sinks.mysql-sink.mysql_port = 3306
# a valid user in the MySQL server
cygnusagent.sinks.mysql-sink.mysql_username = ***
# password for the user above
cygnusagent.sinks.mysql-sink.mysql_password = ***
# how the attributes are stored, either per row or per column (row, column)
cygnusagent.sinks.mysql-sink.attr_persistence = column
#=============================================
# hdfs-channel configuration
# channel type (must not be changed)
cygnusagent.channels.hdfs-channel.type = memory
# capacity of the channel
cygnusagent.channels.hdfs-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnusagent.channels.hdfs-channel.transactionCapacity = 100
#=============================================
# mysql-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mysql-channel.type = memory
# capacity of the channel
cygnusagent.channels.mysql-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnusagent.channels.mysql-channel.transactionCapacity = 100
Any hints?
Thanks
I believe it's because you are using the column value for the attr_persistence parameter in your OrionMySQLSink configuration:
# how the attributes are stored, either per row or per column (row, column)
cygnusagent.sinks.mysql-sink.attr_persistence = column
The documentation states that when using column, the database and tables must be created before starting Cygnus. When you use row, the fixed 8-field tables are created automatically before the first insert.
Within tables, we can find two options:
Fixed 8-field rows, as usual: recvTimeTs, recvTime, entityId, entityType, attrName, attrType, attrValue and attrMd. These tables (and the databases) are created at execution time if the table doesn't exist previously to the row insertion. Regarding attrValue, in its simplest form this value is just a string, but since Orion 0.11.0 it can be a JSON object or JSON array. Regarding attrMd, it contains a string serialization of the metadata array for the attribute in JSON (if the attribute has no metadata, an empty array [] is inserted).
Two columns per each entity's attribute (one for the value and the other for the metadata), plus an additional column for the reception time of the data (recv_time). This kind of table (and the databases) must be provisioned previously to the execution of Cygnus, because each entity may have a different number of attributes, and the notifications must ensure a value per each attribute is notified.
The behaviour of the connector regarding the internal representation of the data is governed through a configuration parameter, attr_persistence, whose values can be row or column.
As Petark suggests, "column mode" does not automatically create the tables; they must be provisioned in advance by you. Why? The reason is that, depending on the subscription you made to Orion CB in order to send notifications to Cygnus, such notifications may sometimes include a few attribute updates, sometimes a very different set of attributes, etc.
For instance, let's consider an entity called "car" with attributes "speed", "location" and "oil-level". You may say "notify Cygnus each time the speed changes, but send only the speed value". But at the same time you may say "notify Cygnus each time the oil level changes, and send all the attributes' values" as well. If the car starts moving, and only the speed and the location change, but not the oil level, then only the speed updates will be notified to Cygnus, which has no chance to know about the other attributes at any time.
Thus, if you want rows of data having all 3 attributes, then you have to provision the table by yourself. By the way, such subscriptions will lead to a lot of "speed-value,null,null" rows in your tables.
The difference with "row mode" is that, independently of the number of notified attributes, a new row is added for each notified attribute, all rows having the same format (entityId, entityType, attrName, attrType, attrValue, attrMd); this format can be automatically provisioned by Cygnus.
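For the notification shown in the log above (a single temperature attribute), provisioning the column-mode table by hand would look roughly like this (a sketch only: names are taken from the question's log, and the column types are assumptions based on the description above):
-- sketch: one value column and one metadata column per attribute, plus recv_time
CREATE DATABASE IF NOT EXISTS ubikwa;
CREATE TABLE IF NOT EXISTS ubikwa.`ubktemp_ubk_temp_1_ubk-temp` (
    recv_time TIMESTAMP,   -- reception time of the data
    temperature TEXT,      -- attribute value (assumed type)
    temperature_md TEXT    -- attribute metadata (assumed type)
);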

MySQL Error = Can't create UNIX socket (24)

I have got the following exception:
MySQL Caught Exception = Can't create UNIX socket (24).
I know that this is UNIX system error 24, i.e. "too many open files."
I referred to this question: OperationalError: (2001, "Can't create UNIX socket (24)")
But I need to understand the exact problem.
In my code I execute multiple SELECT queries and store the results, without calling free_result in between.
Could that be the cause of this error: Can't create UNIX socket (24)?
Here is my code:
pthread_mutex_lock(&mysqlMutex);
mysql = mysql_init(NULL);
my_bool reconnect = 1;
MYSQL *connection;
mysql_options(mysql, MYSQL_OPT_RECONNECT, &reconnect);
connection = mysql_real_connect(mysql, server, user, password, database_name, 0, NULL, 0);
if (connection == NULL)
{
    // Connection failed. Exception handling.
}
// Execute query: SELECT * FROM user;
mysql_query(mysql, getuser_query);
MYSQL_RES *mysql_res = mysql_store_result(mysql);
// Query # 2
// SELECT * FROM usergroup.
mysql_query(mysql, userGroup_query);
mysql_res = mysql_store_result(mysql);  // previous result set is overwritten without being freed
// Query # 3
// Query # 4
And at the end:
// free mysql memory (only the last result set)
mysql_free_result(mysql_res);
mysql_close(mysql);
pthread_mutex_unlock(&mysqlMutex);
You are perhaps opening too many simultaneous connections to the MySQL server. You need to increase the number of file descriptors available to the mysql process. You can usually do so with /etc/security/limits* in your Linux distro.
You can also look at this question: Mysql decrease opened files
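For example, a limits.conf entry along these lines would raise the file-descriptor ceiling for the mysql user (values are illustrative; your distro may use a drop-in file under /etc/security/limits.d/ instead):
# /etc/security/limits.conf -- illustrative values
mysql  soft  nofile  8192
mysql  hard  nofile  16384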

MySQL Connection Timeout Issue - Grails Application on Tomcat using Hibernate and ORM

I have a small Grails application running on Tomcat in Ubuntu on a VPS. I use MySQL as my datastore and everything works fine unless I leave the application for more than half a day (8 hours?). I did some searching, and apparently this is the default wait_timeout in mysql.cnf: after 8 hours the connection dies, but Tomcat won't know, so when the next user tries to view the site they see a connection failure error. Refreshing the page fixes this, but I want to get rid of the error altogether. For my version of MySQL (5.0.75) I have only my.cnf and it doesn't contain such a parameter; in any case, changing this parameter doesn't solve the problem.
This blog post seems to report a similar error, but I still don't fully understand what I need to configure to fix it, and I am hoping there is a simpler solution than another third-party library. The machine I'm running on has 256 MB of RAM and I'm trying to keep the number of programs/services running to a minimum.
Is there something I can configure in Grails / Tomcat / MySql to get this to go away?
Thanks in advance,
Gav
From my Catalina.out:
2010-04-29 21:26:25,946 [http-8080-2] ERROR util.JDBCExceptionReporter - The last packet successfully received from the server was 102,906,722 milliseconds$
2010-04-29 21:26:25,994 [http-8080-2] ERROR errors.GrailsExceptionResolver - Broken pipe
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
...
2010-04-29 21:26:26,016 [http-8080-2] ERROR util.JDBCExceptionReporter - Already closed.
2010-04-29 21:26:26,016 [http-8080-2] ERROR util.JDBCExceptionReporter - Already closed.
2010-04-29 21:26:26,017 [http-8080-2] ERROR servlet.GrailsDispatcherServlet - HandlerInterceptor.afterCompletion threw exception
org.hibernate.exception.GenericJDBCException: Cannot release connection
at java.lang.Thread.run(Thread.java:619)
Caused by: java.sql.SQLException: Already closed.
at org.apache.commons.dbcp.PoolableConnection.close(PoolableConnection.java:84)
at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.close(PoolingDataSource.java:181)
... 1 more
Referring to this article, you have stale connections in your DBCP connection pool that are silently dropped by the OS or firewall.
The solution is to define a validation query and do a sanity check of the connection before you actually use it in your application.
In Grails this is done by modifying grails-app/conf/spring/resources.groovy and adding the following:
beans = {
    dataSource(BasicDataSource) {
        // run the evictor every 30 minutes and evict any connections idle for more than 30 minutes
        minEvictableIdleTimeMillis = 1800000
        timeBetweenEvictionRunsMillis = 1800000
        numTestsPerEvictionRun = 3
        // test the connection while it's idle, before borrowing it and after returning it
        testOnBorrow = true
        testWhileIdle = true
        testOnReturn = true
        validationQuery = "SELECT 1"
    }
}
In Grails 1.3.x, you can modify the evictor values in DataSource.groovy to make sure pooled connections are exercised while idle. This will make sure the MySQL server does not time out the connections.
production {
    dataSource {
        pooled = true
        // Other database parameters..
        properties {
            maxActive = 50
            maxIdle = 25
            minIdle = 5
            initialSize = 5
            minEvictableIdleTimeMillis = 1800000
            timeBetweenEvictionRunsMillis = 1800000
            maxWait = 10000
        }
    }
}
A quick way to verify this works is to edit the [mysqld] section of the MySQL my.cnf configuration file and set the wait_timeout parameter to a low value.
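For instance (an illustrative snippet; 60 seconds is deliberately aggressive and only for testing):
[mysqld]
# time out idle connections quickly so the eviction/validation settings
# above can be verified without waiting 8 hours
wait_timeout = 60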
Try increasing the number of open MySQL connections by putting the following in your DataSource.groovy:
dataSource {
    driverClassName = "com.mysql.jdbc.Driver"
    pooled = true
    maxActive = 10
    initialSize = 5
    // Remaining connection params
}
If you want to go the whole hog, try implementing a connection pool; here is a useful link on this.
For Grails 1.3.x, I had to add the following code to BootStrap.groovy:
def init = { servletContext ->
    def ctx = servletContext.getAttribute(ApplicationAttributes.APPLICATION_CONTEXT)
    // implement test on borrow
    def dataSource = ctx.dataSource
    dataSource.targetDataSource.setMinEvictableIdleTimeMillis(1000 * 60 * 30)
    dataSource.targetDataSource.setTimeBetweenEvictionRunsMillis(1000 * 60 * 30)
    dataSource.targetDataSource.setNumTestsPerEvictionRun(3)
    dataSource.targetDataSource.setTestOnBorrow(true)
    dataSource.targetDataSource.setTestWhileIdle(true)
    dataSource.targetDataSource.setTestOnReturn(false)
    dataSource.targetDataSource.setValidationQuery("SELECT 1")
}
I also had to import org.codehaus.groovy.grails.commons.ApplicationAttributes
Add these parameters to dataSource
testOnBorrow = true
testWhileIdle = true
testOnReturn = true
See this article for more information
http://sacharya.com/grails-dbcp-stale-connections/
Starting from Grails 2.3.6, the default configuration already has options that prevent connections being closed by timeout.
These are the new defaults:
properties {
    // See http://grails.org/doc/latest/guide/conf.html#dataSource for documentation
    ....
    minIdle = 5
    maxIdle = 25
    maxWait = 10000
    maxAge = 10 * 60000
    timeBetweenEvictionRunsMillis = 5000
    minEvictableIdleTimeMillis = 60000
    validationQuery = "SELECT 1"
    validationQueryTimeout = 3
    validationInterval = 15000
    testOnBorrow = true
    testWhileIdle = true
    testOnReturn = false
    jdbcInterceptors = "ConnectionState;StatementCache(max=200)"
    defaultTransactionIsolation = java.sql.Connection.TRANSACTION_READ_COMMITTED
}
What does your JDBC connection string look like? You can set an autoReconneect param in your data source config, e.g.
jdbc:mysql://hostname/mydb?autoReconnect=true
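In a Grails app that string lives in DataSource.groovy; a quick sketch of where the parameter goes (host and database names are the question's placeholders):
production {
    dataSource {
        url = "jdbc:mysql://hostname/mydb?autoReconnect=true"
    }
}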