We connect FreeSWITCH to the database via an ODBC connection for device registration on the server.
The problem occurs when FreeSWITCH crashes, goes down, or restarts unexpectedly: the entry is not removed from the sip_registration table, and the user is then unable to make calls.
We then have to delete that entry from the database manually to get things working again.
OS: Debian 8
FS version: FreeSWITCH 1.6.6 (64-bit)
Can anybody help us resolve this issue?
There are multiple solutions:
Have Nagios/Icinga check your FreeSWITCH (send OPTIONS to port 5060). If this fails, your FreeSWITCH is down, and you can have Nagios execute a script that cleans up your database.
Have a simple server (I use Python) listen on the FreeSWITCH ESL (Event Socket Layer) and act on (re)start events. When your FreeSWITCH is started, this server does the cleanup in your database.
Make some changes to the FreeSWITCH startup script so it does this housekeeping on startup.
Have a cron job run every minute or so and delete all entries in the sip_registration table that are older than the uptime of the FreeSWITCH process (see the cleanup sketch after this list).
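A minimal sketch of such a cleanup, assuming the default Sofia ODBC schema (a sip_registrations table with hostname and expires columns) and a MySQL backend; adjust the names to your actual schema:
-- Hypothetical cleanup, run from cron or right after a FreeSWITCH (re)start.
-- Option 1: drop everything this instance had registered (clients will re-register):
DELETE FROM sip_registrations WHERE hostname = 'my-freeswitch-host';
-- Option 2: only drop rows whose registration has already expired:
DELETE FROM sip_registrations WHERE expires < UNIX_TIMESTAMP();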
But above all, you should focus on why FreeSWITCH crashed. That is your biggest problem; everything else is damage control.
I have a running instance of VerneMQ (a cluster of 2 nodes) on Google Kubernetes, using MySQL (CloudSQL) for auth. The server accepts connections over TLS.
It works fine, but after a few days I start seeing this message in the log:
can't authenticate client {[],<<"Client-id">>} from X.X.X.X:16609 due to plugin_chain_exhausted
The client app (Paho) complains that the server refused the connection as "not authorized" (code=5 in the Paho error).
After a few retries it finally connects, but every time it gets harder and harder, until it just won't connect anymore.
If I restart VerneMQ, everything goes back to normal.
I have at most 3 clients connected at the same time.
Clients that are already connected have no issues with pub/sub.
In my configuration I have (among other things):
log.console.level=debug
plugins.vmq_diversity=on
vmq_diversity.mysql.* = all of them set
allow_anonymous=off
vmq_diversity.auth_mysql.enabled=on
It's like the server degrades over time. The status web page reports no problems.
My VerneMQ server was built from the Git repository about a month ago and runs in a Docker container.
What could be the cause?
What else could I check to find possible causes? Maybe a vmq_diversity misconfiguration?
Thanks
To quickly explain the plugin_chain_exhausted log: with Verne you can run multiple authentication/authorization plugins, and they will be checked in a chain. If one plugin allows the client, it will be in. If no plugin allows the client, you'll see the log above.
This does not explain the behaviour you describe, though. I don't think I have seen that.
In any case, the first thing to check is whether you actually run multiple plugins. For instance: have you disabled the vmq.passwd and the vmq.acl plugins?
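For example, in vernemq.conf the file-based defaults can be switched off so that only the MySQL plugin is consulted; a sketch, assuming a standard VerneMQ configuration (the last two lines are already in your config):
plugins.vmq_passwd = off
plugins.vmq_acl = off
plugins.vmq_diversity = on
vmq_diversity.auth_mysql.enabled = on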
I can see Stop Server and Bring Offline in MySQL Workbench.
I assume both are used to stop the service, but I'm still confused about the difference between them: when should I use Stop Server, and when Bring Offline?
Stop Server just stops the MySQL server process.
Offline mode is a feature introduced in MySQL 5.7.5 that basically throws out all users except DBAs:
MySQL Server now supports an "offline mode" with these characteristics:
Connected client users who do not have the SUPER privilege are disconnected on the next request, with an appropriate error. Disconnection includes terminating running statements and releasing locks. Such clients also cannot initiate new connections, and receive an appropriate error.
Connected client users who have the SUPER privilege are not disconnected, and can initiate new connections to manage the server.
Replication slave threads are permitted to keep applying data to the server.
Only users who have the SUPER privilege can control offline mode. To put a server in offline mode, change the value of the new offline_mode system variable from OFF to ON. To resume normal operations, change offline_mode from ON to OFF. In offline mode, clients that are refused access receive an ER_SERVER_OFFLINE_MODE error.
Source: Changes in MySQL 5.7.5 (2014-09-25, Milestone 15)
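In practice, toggling it from an account with the SUPER privilege looks like this:
-- Put the server into offline mode (requires SUPER):
SET GLOBAL offline_mode = ON;
-- Resume normal operation:
SET GLOBAL offline_mode = OFF;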
Sorry if this post is a repeat, but I still could not quite figure it out from the other posts.
I am unable to log in to the MySQL localhost database; the server is down and shows mysql#localhost:3366 - Refusing Connections. In MySQL Workbench's start/shutdown view for the MySQL server, it says the database server instance is unknown (with the Start Server button grayed out), and refreshing the status doesn't help.
Also, the MySQL server should start automatically in the background whenever the PC is restarted, but it is not showing in the Services list now.
When I try to execute mysqld from cmd, it just shuts the server down and responds with:
-"The Innodb memory heap is disabled"
-"the system tablespace must be writeable"
-"InnoDB init function returned error"
-"InnoDB registration as a storage engine failed"
Does anybody have a solution to this? Thanks very much!
Install it. Re-install it if you're convinced it was already installed; the fact that it isn't even listed in the Services suggests otherwise, however. If it were installed but wouldn't start, it would still show up, just not as started.
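If it turns out MySQL is installed but the Windows service was simply never registered, something along these lines from an elevated command prompt usually brings the service back (a sketch; the service name and paths depend on your installation, and it will not by itself fix the InnoDB errors above):
mysqld --install MySQL
net start MySQL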
Scenario:
We have a scheduler which is using JDBC Job Store. Quartz version is 2.1.2.
The job being scheduled also updates a database.
The database is the same for both Quartz and the job itself and is hosted on MySQL Server. Both the application tables and the Quartz tables are stored in the same database.
The connection pools are separate for the application and for Quartz. In the application we use Spring for connection pooling, and Quartz is configured to use its own connection pool via quartz.properties.
Here is a snippet of quartz.properties:
org.quartz.dataSource.qzDS.driver = com.mysql.jdbc.Driver
org.quartz.dataSource.qzDS.URL = jdbc:mysql://localhost:3306/dbname?autoReconnect=true
org.quartz.dataSource.qzDS.user = dbuser
org.quartz.dataSource.qzDS.password =dbpassword
org.quartz.dataSource.qzDS.maxConnections = 30
org.quartz.datasource.qzDS.validationQuery = select 1
#org.quartz.datasource.qzDS.minEvictableIdleTimeMillis=21600000
#org.quartz.datasource.qzDS.timeBetweenEvictionRunsMillis=1800000
#org.quartz.datasource.qzDS.numTestsPerEviction=-1
#org.quartz.datasource.qzDS.testWhileIdle=true
org.quartz.datasource.qzDS.debugUnreturnedConnectionStackTraces=true
org.quartz.datasource.qzDS.unreturnedConnectionTimeout=120
org.quartz.datasource.qzDS.initialPoolSize=5
org.quartz.datasource.qzDS.minPoolSize=5
org.quartz.datasource.qzDS.maxPoolSize=30
org.quartz.datasource.qzDS.acquireIncrement=5
org.quartz.datasource.qzDS.maxIdleTime=120
org.quartz.datasource.qzDS.validateOnCheckout=true
The database is clustered with master-master replication on two servers, and it is accessed via a virtual IP everywhere in the application and in Quartz.
The scheduler, i.e. Quartz, is also clustered on the same two machines where MySQL is clustered.
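Clustering is enabled through quartz.properties roughly like this (illustrative values, not our exact configuration):
org.quartz.scheduler.instanceId = AUTO
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000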
The problem:
One of the servers (so far we have only hit the problem on the backup server machine) occasionally throws a database connection error while calling the notifyJobStoreJobComplete method. This causes the job to stay in the BLOCKED state: the job itself completes successfully, but Quartz is unable to update its status.
Questions:
What can be the cause of the problem?
How can we move the BLOCKED jobs into the WAITING state so that the jobs can at least run at their next scheduled time? Directly editing the QRTZ_SIMPLE_TRIGGERS table would not be a good solution, even if it works.
EDIT: To bump the question.
The error during notifyJobStoreJobComplete is: org.quartz.impl.jdbcjobstore.JobStoreTX - Failed to override connection auto commit/transaction isolation.
[java] com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 619,082,686 milliseconds ago. The last packet sent successfully to the server was 619,082,686 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
I think the main problem was the communications link failure from MySQL, which we solved by increasing wait_timeout to 14 days. Since our maintenance is scheduled every 15 days, we restart each MySQL server in our DB cluster (we have master-master replication in place). With this approach we have not seen any communications link failure since. In fact, sometimes we don't restart the servers every 15 days and still see no errors (touch wood). :)
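Setting that on the server looks like this (14 days = 1,209,600 seconds; put the same value under [mysqld] in my.cnf so it survives restarts):
SET GLOBAL wait_timeout = 1209600;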
As for the Quartz triggers getting locked in the BLOCKED state, we updated Quartz to 2.1.4, which possibly has a fix for almost the same problem. Since the Quartz update, we have seen triggers stuck in the BLOCKED state far less frequently.
We still have not found a way to get a trigger out of the BLOCKED state without directly modifying the Quartz tables. Whenever we hit this problem, we manually remove the entry for the BLOCKED trigger from the qrtz_fired_triggers table, and that solves it. I think the enterprise version of Quartz may offer this through some web UI.
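The manual cleanup is roughly the following, using table and column names from the standard Quartz 2.x MySQL schema; 'myTrigger' and 'myGroup' are placeholders, the UPDATE is the hypothetical direct way to flip the state if it stays BLOCKED after the delete, and you should pause the schedulers and take a backup before touching these tables:
-- Remove the stale fired-trigger row left behind by the failed status update
DELETE FROM qrtz_fired_triggers
WHERE trigger_name = 'myTrigger' AND trigger_group = 'myGroup';
-- Put the trigger back into WAITING so it fires at its next scheduled time
UPDATE qrtz_triggers
SET trigger_state = 'WAITING'
WHERE trigger_name = 'myTrigger' AND trigger_group = 'myGroup'
AND trigger_state = 'BLOCKED';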
I'm trying to execute the command in the Windows console:
C:\SphinxSearch\bin\indexer --all --config C:\SphinxSearch\sphinx.conf
But I get an error:
ERROR: index 'indexname': sql_connect: Can't create TCP/IP socket
(10093) (DSN=mysql://root:*#localhost:3306/test).
The data source is MySQL. Before the server restart everything worked fine.
How can I fix it?
I'm having the same error 10093. It's a Windows error code, by the way. In my case it occurs when trying to run the indexer through the SYSTEM account via a scheduled task. If I run it directly as administrator, there is no problem.
According to the site above:
Either your application hasn't called WSAStartup(), or WSAStartup() failed, or--possibly--you are accessing a socket which the current active task does not own (i.e. you're trying to share a socket between tasks).
In my case I'm thinking it might be the last one: some security problem due to the SYSTEM user being used in my scheduled task. I was able to solve it by using my admin user instead: in the scheduled task, I set it to use my local admin account with the options "Run whether user is logged on or not" and "Do not store password". I've also checked "Run with highest privileges". This seems to have done the trick, as my indexes are now rotating on schedule.
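For reference, the command the task runs is the same indexer call as above, plus --rotate so the running searchd picks up the rebuilt indexes (assuming searchd is running as a service):
C:\SphinxSearch\bin\indexer --all --rotate --config C:\SphinxSearch\sphinx.conf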