MySQL-proxy and basic failover (detect state)

I have just installed MySQL Proxy 0.8.2 and started playing with it. I am using it together with two MySQL 5.5 servers listening on 3306; the proxy is running on 4040. Oh, and the OS is Win 7 32-bit.
My problem is that the proxy doesn't seem to check the state of the servers the way it should.
I start up the script and it runs as it should. But when I shut down the primary server, the script doesn't seem to recognize that - it still tries to connect to it...
Version information
mysql-proxy 0.8.2
chassis: mysql-proxy 0.8.2
glib2: 2.16.6
libevent: 1.4.12-stable
LUA: Lua 5.1.2
package.path: C:\ProgramX86\dev\mysql-proxy\lib\mysql-proxy\lua\?.lua
package.cpath: C:\ProgramX86\dev\mysql-proxy\bin\lua-?.dll
-- modules
proxy: 0.8.2*
My config
[mysql-proxy]
proxy-address = :4040
proxy-backend-addresses = 10.3.0.9:3306,192.168.4.100:3306
proxy-lua-script = C:/ProgramX86/dev/mysql-proxy/failover3.lua
daemon = true
Failover Lua script
function connect_server()
    if proxy.global.backends[1].state == proxy.BACKEND_STATE_DOWN then
        proxy.connection.backend_ndx = 2
    else
        proxy.connection.backend_ndx = 1
    end
    print("s Connecting: " .. proxy.global.backends[proxy.connection.backend_ndx].dst.name)
end

function read_query(packet)
    if proxy.global.backends[1].state == proxy.BACKEND_STATE_DOWN then
        proxy.connection.backend_ndx = 2
    else
        proxy.connection.backend_ndx = 1
    end
    print("q Connecting: " .. proxy.global.backends[proxy.connection.backend_ndx].dst.name)
end

It's because proxy.global.backends[1].state is still proxy.BACKEND_STATE_UP when the primary server is shut down.
Someone said it can take up to 3 minutes for the proxy to mark the backend as down, because the state is only updated when a connection attempt fails, instead of watching the MySQL service all the time.
I am trying to find a better way to solve the problem.
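For what it's worth, one workaround while the proxy itself is slow to notice is an external watchdog that actively probes each backend port and reports (or acts on) a failure right away. A minimal Python sketch - the host list comes from the config above; the poll interval and the action taken are assumptions:

# watchdog.py - external prober for the two backends (sketch only).
import socket
import time

BACKENDS = [("10.3.0.9", 3306), ("192.168.4.100", 3306)]  # from the config above

def is_up(host, port, timeout=2):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    for host, port in BACKENDS:
        # Replace print with whatever failover action you need.
        print(host, port, "UP" if is_up(host, port) else "DOWN")
    time.sleep(5)  # assumed poll interval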

Related

MySQL login-path issues with clustercheck script used in xinetd

# default: on
# description: mysqlchk
# this is a config for xinetd, place it in /etc/xinetd.d/
service mysqlchk
{
        disable         = no
        flags           = REUSE
        socket_type     = stream
        type            = UNLISTED
        port            = 9200
        wait            = no
        user            = root
        server          = /usr/bin/mysqlclustercheck
        log_on_failure  += USERID
        only_from       = 0.0.0.0/0
        #
        # Passing arguments to clustercheck
        # <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>"
        # Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local"
        # Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local"
        # 55-to-56 upgrade: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.extra"
        #
        # recommended to put the IPs that need
        # to connect exclusively (security purposes)
        per_source      = UNLIMITED
}
It is kind of strange: the script works fine when run manually, but when it runs via /etc/xinetd.d/ it does not work as expected.
In the mysqlclustercheck script, instead of the --user= and --password= syntax, I am using the --login-path= syntax.
The script runs fine when I run it from the command line, but the status for xinetd was showing signal 13. After debugging, I have found that even a simple command like this is not working:
mysql_config_editor print --all >>/tmp/test.txt
We don't see any output generated when it is run via xinetd (mysqlclustercheck).
Have you tried the following instead of /usr/bin/mysqlclustercheck?
server = /usr/bin/clustercheck
I am wondering if you could test your binary location with the Linux which command.
It has been a long time since this question was asked, but it just came to my attention.
First of all, as mentioned, the Percona cluster check script is called clustercheck, so make sure you are using the correct name and correct path.
Secondly, since the server script runs fine from the command line, it seems to me that the path of the mysql client command is not known to xinetd when it runs the cluster check script.
Since the mysqlclustercheck script, as offered by Percona, uses only the binary name mysql without specifying the absolute path, I suggest you do the following:
Find where the mysql client command is located on your system:
ccloud#gal1:~> sudo -i
gal1:~ # which mysql
/usr/local/mysql/bin/mysql
gal1:~ #
Then edit the script /usr/bin/mysqlclustercheck, and in the following line:
MYSQL_CMDLINE="mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \
place the exact path of the mysql client command you found in the previous step.
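For example, with the location found above, the edited line would become:
MYSQL_CMDLINE="/usr/local/mysql/bin/mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \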
I also see that you are not using MySQL connection credentials for connecting to the MySQL server. The mysqlclustercheck script, as offered by Percona, uses a user/password pair in order to connect to the MySQL server.
So normally, you should execute the script on the command line like:
gal1:~ # /usr/sbin/clustercheck haproxy haproxyMySQLpass
HTTP/1.1 200 OK
Content-Type: text/plain
Where haproxy/haproxyMySQLpass is the MySQL connection user/pass for the HAProxy monitoring user.
Additionally, you should specify them in your script's xinetd settings like:
server = /usr/bin/mysqlclustercheck
server_args = haproxy haproxyMySQLpass
Last but not least, the signal 13 you are getting is because you are trying to write output from a script run by xinetd. If, for example, in your mysqlclustercheck you add a statement like
echo "debug message"
you are probably going to see the broken pipe signal (13 in POSIX).
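Signal 13 is easy to reproduce if you want to see the mechanism; a minimal, POSIX-only Python sketch that writes to a pipe whose reader has gone away, just as happens when xinetd's client disconnects:

import os
import signal

# Python ignores SIGPIPE by default; restore the default disposition so the
# failed write kills the process with signal 13, like a shell echo would.
signal.signal(signal.SIGPIPE, signal.SIG_DFL)

read_fd, write_fd = os.pipe()
os.close(read_fd)  # the reader disappears, like a disconnected client
os.write(write_fd, b"debug message\n")  # process dies with SIGPIPE (13)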
Finally, I had issues with this script on SLES 12.3, and I finally managed to run it not as 'nobody' but as 'root'.
Hope it helps

Debian Exim4 SMTP-AUTH stopped working

I have a strange problem that recently popped up on my Debian Squeeze server.
I've had Exim4 configured to use SMTP-AUTH with encryption setup and running on this box for a long time, but now it doesn't work.
At first I thought it was maybe my certificates expired, but that wasn't the case, they're good for several more years.
It appears that the server isn't listening on port 25 any longer.
If I try to telnet to port 25 it times out.
If I run netstat -tulpen on the server nothing is listening on port 25.
I'm using the splitconf for Exim4.
In conf.d/main I'm enabling MAIN_TLS_ENABLE=true
In conf.d/auth/30_exim4-config_examples I have the following
# Authenticate against local passwords using sasl2-bin
# Requires exim_uid to be a member of sasl group, see README.Debian.gz
plain_saslauthd_server:
  driver = plaintext
  public_name = PLAIN
  server_condition = ${if saslauthd{{$auth2}{$auth3}}{1}{0}}
  server_set_id = $auth2
  server_prompts = :
  .ifndef AUTH_SERVER_ALLOW_NOTLS_PASSWORDS
  server_advertise_condition = ${if eq{$tls_cipher}{}{}{*}}
  .endif
#
login_saslauthd_server:
  driver = plaintext
  public_name = LOGIN
  server_prompts = "Username:: : Password::"
  # don't send system passwords over unencrypted connections
  server_condition = ${if saslauthd{{$auth1}{$auth2}}{1}{0}}
  server_set_id = $auth1
  .ifndef AUTH_SERVER_ALLOW_NOTLS_PASSWORDS
  server_advertise_condition = ${if eq{$tls_cipher}{}{}{*}}
  .endif
On the server if I run this command:
swaks -a -tls -q HELO -s localhost -au A_USER_NAME -ap '<>'
I get this ...
=== Trying localhost:25...
* Error connecting 0.0.0.0 to localhost:25:
* IO::Socket::INET: connect: Connection refused
Can someone point me to some more advanced debugging techniques?
OK. I figured it out.
Comcast blocks port 25. I don't know why this is coming up now, unless they've recently started blocking it.
I had to change a line in /etc/default/exim4
From this
SMTPLISTENEROPTIONS='-oX 25 -oP /var/run/exim4/exim.pid'
To this
SMTPLISTENEROPTIONS='-oX 465:25 -oP /var/run/exim4/exim.pid'
I also added this to /etc/exim4/conf.d/main/03_exim4-config_tlsoptions
tls_on_connect_ports=465
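After restarting Exim you can confirm both listeners with the same netstat check as before (a rough example; the output format varies):
netstat -tulpen | grep exim
You should now see LISTEN entries for port 465 and, if nothing upstream blocks it, port 25.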
It's odd that this just popped up, unless a Debian package updated the /etc/default/exim4 file. It's confusing, but it's working. Hopefully this will be helpful to someone in the future.
Cheers.

Frequent worker timeout

I have set up gunicorn with 3 workers, 30 worker connections, and the eventlet worker class. It is set up behind Nginx. After every few requests, I see this in the logs:
[ERROR] gunicorn.error: WORKER TIMEOUT (pid:23475)
None
[INFO] gunicorn.error: Booting worker with pid: 23514
Why is this happening? How can I figure out what's going wrong?
We had the same problem using Django + nginx + gunicorn. Following the Gunicorn documentation, we configured the graceful-timeout, which made almost no difference.
After some testing, we found the solution; the parameter to configure is timeout (and not graceful timeout). It works like clockwork.
So, do:
1) Open the gunicorn configuration file
2) Set the TIMEOUT to whatever you need - the value is in seconds
NUM_WORKERS=3
TIMEOUT=120
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--timeout $TIMEOUT \
--log-level=debug \
--bind=127.0.0.1:9000 \
--pid=$PIDFILE
On Google Cloud
Just add --timeout 90 to entrypoint in app.yaml
entrypoint: gunicorn -b :$PORT main:app --timeout 90
Run Gunicorn with --log-level debug.
It should give you an app stack trace.
Is this endpoint taking too much time?
Maybe you are using Flask without asynchronous support, so every request will block the call. To add async support without much difficulty, use the gevent worker.
With gevent, a new call will spawn a new greenlet, and your app will be able to receive more requests:
pip install gevent
gunicorn .... --worker-class gevent
The official Microsoft Azure documentation for running Flask apps on Azure App Service (Linux) states setting the timeout to 600:
gunicorn --bind=0.0.0.0 --timeout 600 application:app
https://learn.microsoft.com/en-us/azure/app-service/configure-language-python#flask-app
WORKER TIMEOUT means your application cannot respond to the request in a defined amount of time. You can set this using gunicorn's timeout settings. Some applications need more time to respond than others.
Another thing that may affect this is the choice of worker type:
The default synchronous workers assume that your application is resource-bound in terms of CPU and network bandwidth. Generally this means that your application shouldn’t do anything that takes an undefined amount of time. An example of something that takes an undefined amount of time is a request to the internet. At some point the external network will fail in such a way that clients will pile up on your servers. So, in this sense, any web application which makes outgoing requests to APIs will benefit from an asynchronous worker.
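To make that concrete, here is a hypothetical Flask sketch of the kind of handler the quote warns about - an outbound call with no bound on its duration (the URL and endpoint are made up for illustration):

# A handler that can exceed the worker timeout under sync workers: the
# outbound request may hang indefinitely, pinning the worker until
# gunicorn kills it with WORKER TIMEOUT.
import requests
from flask import Flask

app = Flask(__name__)

@app.route("/slow")
def slow():
    # No timeout given: if the remote API stalls, so does this worker.
    resp = requests.get("https://api.example.com/report")
    return resp.text

Passing a timeout (requests.get(..., timeout=5)) or switching to an async worker class keeps a stalled upstream from pinning the worker.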
When I got the same problem as yours (I was trying to deploy my application using Docker Swarm), I tried increasing the timeout and using another worker class. But it all failed.
And then I suddenly realised I was limiting the resources too low for the service inside my compose file. This is the thing that slowed down the application in my case:
deploy:
  replicas: 5
  resources:
    limits:
      cpus: "0.1"
      memory: 50M
  restart_policy:
    condition: on-failure
So I suggest you first check what is slowing down your application.
Could it be this?
http://docs.gunicorn.org/en/latest/settings.html#timeout
Other possibilities could be your response is taking too long or is stuck waiting.
This worked for me:
gunicorn app:app -b :8080 --timeout 120 --workers=3 --threads=3 --worker-connections=1000
If you have eventlet add:
--worker-class=eventlet
If you have gevent add:
--worker-class=gevent
I've got the same problem in Docker.
In Docker I keep a trained LightGBM model + Flask serving requests. As the HTTP server I used gunicorn 19.9.0. When I ran my code locally on my Mac laptop everything worked just perfectly, but when I ran the app in Docker my POST JSON requests were freezing for some time, then the gunicorn worker was failing with a [CRITICAL] WORKER TIMEOUT exception.
I tried tons of different approaches, but the only one that solved my issue was adding worker_class = "gthread".
Here is my complete config:
import multiprocessing
workers = multiprocessing.cpu_count() * 2 + 1
accesslog = "-" # STDOUT
access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(q)s" "%(D)s"'
bind = "0.0.0.0:5000"
keepalive = 120
timeout = 120
worker_class = "gthread"
threads = 3
I had a very similar problem. I also tried using "runserver" to see if I could find anything, but all I had was a message Killed.
So I thought it could be a resource problem, and I went ahead and gave more RAM to the instance, and it worked.
You need to use another worker class, an async one like gevent or tornado. See the following for more explanation:
First explanation:
You may also want to install Eventlet or Gevent if you expect that your application code may need to pause for extended periods of time during request processing
Second one:
The default synchronous workers assume that your application is resource bound in terms of CPU and network bandwidth. Generally this means that your application shouldn't do anything that takes an undefined amount of time. An example of something that takes an undefined amount of time is a request to the internet. At some point the external network will fail in such a way that clients will pile up on your servers.
If you are using GCP then you have to set workers per instance type.
Link to GCP best practices https://cloud.google.com/appengine/docs/standard/python3/runtime
timeout is a key parameter to this problem.
However, it didn't suit my case.
I found there was no gunicorn timeout error when I set workers=1.
When I looked through my code, I found some socket connections (socket.send & socket.recv) in the server init.
socket.recv blocks the code, and that's why it always timed out when workers > 1.
Hope this gives some ideas to the people who have the same problem as me.
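A hypothetical sketch of that failure mode - blocking socket calls at import/init time, which every worker executes (the host and port are made up):

# Every gunicorn worker imports the app module, so each worker blocks here
# in recv() until the peer replies; with workers > 1 the stragglers exceed
# the timeout and get killed.
import socket

sock = socket.create_connection(("license.internal", 9999))  # made-up host
sock.send(b"hello")
banner = sock.recv(1024)  # blocks indefinitely if the peer never answers

Moving such connections out of module scope (for example into a lazy initializer) avoids the stall.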
For me, the solution was to add --timeout 90 to my entrypoint, but it wasn't working because I had TWO entrypoints defined, one in app.yaml, and another in my Dockerfile. I deleted the unused entrypoint and added --timeout 90 in the other.
For me, it was because I forgot to setup firewall rule on database server for my Django.
Frank's answer pointed me in the right direction. I have a Digital Ocean droplet accessing a managed Digital Ocean Postgresql database. All I needed to do was add my droplet to the database's "Trusted Sources".
(click on database in DO console, then click on settings. Edit Trusted Sources and select droplet name (click in editable area and it will be suggested to you)).
Check that your workers are not killed by a health check. A long request may block the health check request, and the worker gets killed by your platform because the platform thinks that the worker is unresponsive.
E.g. if you have a 25-second-long request, and a liveness check is configured to hit a different endpoint in the same service every 10 seconds, time out in 1 second, and retry 3 times, this gives 10+1*3 ~ 13 seconds, and you can see that it would trigger some times but not always.
The solution, if this is your case, is to reconfigure your liveness check (or whatever health check mechanism your platform uses) so it can wait until your typical request finishes. Or allow for more threads - something that makes sure that the health check is not blocked for long enough to trigger worker kill.
You can see that adding more workers may help with (or hide) the problem.
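A minimal sketch of the kind of health endpoint this implies - it does no real work, and with a threaded worker it can be answered while a slow request is in flight (names are illustrative):

# Run with threads so the check can be served alongside a long request, e.g.
#   gunicorn app:app --threads 4 --timeout 120
from flask import Flask

app = Flask(__name__)

@app.route("/healthz")
def healthz():
    # Answer fast and unconditionally; real work belongs elsewhere.
    return "ok", 200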
The easiest way that worked for me is to create a new config.py file in the same folder where your app.py exists, and put in it the timeout and all your desired special configuration:
timeout = 999
Then just run the server while pointing to this configuration file:
gunicorn -c config.py --bind 0.0.0.0:5000 wsgi:app
Note that for this statement to work you also need wsgi.py in the same directory, containing the following:
from myproject import app

if __name__ == "__main__":
    app.run()
Cheers!
Apart from the gunicorn timeout settings, which are already suggested, since you are using nginx in front you can check if these 2 parameters work: proxy_connect_timeout and proxy_read_timeout, which are by default 60 seconds. You can set them like this in your nginx configuration file:
proxy_connect_timeout 120s;
proxy_read_timeout 120s;
In my case I came across this issue when sending larger (10 MB) files to my server. My development server (app.run()) received them with no problem, but gunicorn could not handle them.
For people who come across the same problem I did: my solution was to send the file in chunks, like this:
# client side - stream the file in 512 KB chunks
# (url, location, server_file_name, name and key are defined elsewhere in the app)
import requests

def upload_to_server():
    upload_file_path = location

    def read_in_chunks(file_object, chunk_size=524288):
        """Lazy function (generator) to read a file piece by piece.
        Default chunk size: 512 KB."""
        while True:
            data = file_object.read(chunk_size)
            if not data:
                break
            yield data

    with open(upload_file_path, 'rb') as f:
        for piece in read_in_chunks(f):
            r = requests.post(
                url + '/api/set-doc/stream' + '/' + server_file_name,
                files={name: piece},
                headers={'key': key, 'allow_all': 'true'})
my flask server:
# server side - append each chunk to the file as it arrives
# (app, key and allowed_file are defined elsewhere in the app)
import os
from flask import request, flash
from markupsafe import escape
from werkzeug.utils import secure_filename

@app.route('/api/set-doc/stream/<name>', methods=['GET', 'POST'])
def api_set_file_streamed(name):
    folder = escape(name)  # secure_filename(escape(name))
    if 'key' in request.headers:
        if request.headers['key'] != key:
            return 'denied', 404
    else:
        return 'denied', 404
    for fn in request.files:
        file = request.files[fn]
        if fn == '':
            print('no file name')
            flash('No selected file')
            return 'fail'
        if file and allowed_file(file.filename):
            file_dir_path = os.path.join(app.config['UPLOAD_FOLDER'], folder)
            if not os.path.exists(file_dir_path):
                os.makedirs(file_dir_path)
            file_path = os.path.join(file_dir_path, secure_filename(file.filename))
            with open(file_path, 'ab') as f:
                f.write(file.read())
            return 'success'
    return 'not found', 404
In case you have changed the name of the Django project, you should also go to
cd /etc/systemd/system/
then
sudo nano gunicorn.service
and verify that at the end of the bind line the application name has been changed to the new application name.

Enable TCP/IP remote connections to an already-installed SQL Server Express database with code or script (query)

I am deploying SQL Express with my application. I would like the database engine to accept remote connections. I know how to configure that manually by launching the SQL Server Configuration Manager, enabling TCP/IP connections, specifying the ports, etc. I am wondering if it is possible to do the same thing from the command line.
Or maybe I would have to create a "SQL Server 2008 Server Project" in Visual Studio.
EDIT 1
I posted the same question here, but I would like to do the same thing on an instance of SQL Express that is already installed. Take a look at the question here.
EDIT 2
I found these links that claim to do something similar, but I still cannot make it work.
1) http://support.microsoft.com/kb/839980
2) http://social.msdn.microsoft.com/Forums/en-US/sqlexpress/thread/c7d3c3af-2b1e-4273-afe9-0669dcb7bd02/
3) http://www.sql-questions.com/microsoft/SQL-Server/34211977/can-not-connect-to-sql-2008-express-on-same-lan.aspx
4) http://datazulu.com/blog/post/Enable_sql_server_tcp_via_script.aspx
EDIT 3
As Krzysztof stated in his response, I need (plus other things that I know are required):
1 - Enable TCP/IP
I have managed to do this when installing a new instance of SQL Express by passing the parameter /TCPENABLED=1. When I install SQL Express like in this example, that instance of SQL Express will have TCP/IP enabled.
2 - Open the right ports in the firewall
(I have done this manually, but I believe I will be able to figure out how to do it with C#.)
So far I have been playing around with this console command:
netsh firewall set portopening protocol = TCP port = 1433 name = SQLPort mode = ENABLE scope = SUBNET profile = CURRENT
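On newer Windows versions, where the netsh firewall context is deprecated, the advfirewall equivalent appears to be (verify on your target OS):
netsh advfirewall firewall add rule name=SQLPort dir=in action=allow protocol=TCP localport=1433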
3 - Modify TCP/IP properties, enable an IP address
I have not been able to figure out how to enable an IP, change a port, etc. I think this will be the most complicated step to solve.
4 - Enable mixed mode authentication in SQL Server
I have managed to do this when installing SQL Express by passing the parameter /SECURITYMODE=SQL (refer to step 1's link).
SQL Server Express requires this authentication type to accept remote connections.
5 - Change the user (sa) default password
By default the sa account has a NULL password. In order to accept connections this user must have a password. I changed the default password of sa with the script:
ALTER LOGIN [sa] WITH PASSWORD='*****newPassword****'
6 - Finally
I will be able to connect, if all the previous steps are satisfied, with:
SQLCMD -U sa -P newPassword -S 192.168.0.120\SQLEXPRESS,1433
by typing that in the command line. The connection string in C# will be very similar; I will have to replace -U with user, -P with password and -S with data source. I don't recall the exact names.
I tested the code below with SQL Server 2008 R2 Express, and I believe we have a solution for all 6 steps you outlined. Let's take them on one by one:
1 - Enable TCP/IP
We can enable the TCP/IP protocol with WMI:
set wmiComputer = GetObject( _
    "winmgmts:" _
    & "\\.\root\Microsoft\SqlServer\ComputerManagement10")
set tcpProtocols = wmiComputer.ExecQuery( _
    "select * from ServerNetworkProtocol " _
    & "where InstanceName = 'SQLEXPRESS' and ProtocolName = 'Tcp'")

if tcpProtocols.Count = 1 then
    ' set tcpProtocol = tcpProtocols(0)
    ' I wish this worked, but unfortunately
    ' there's no int-indexed Item property in this type
    ' Doing this instead
    for each tcpProtocol in tcpProtocols
        dim setEnableResult
        setEnableResult = tcpProtocol.SetEnable()
        if setEnableResult <> 0 then
            Wscript.Echo "Failed!"
        end if
    next
end if
2 - Open the right ports in the firewall
I believe your solution will work; just make sure you specify the right port. I suggest we pick a different port than 1433 and make it a static port SQL Server Express will be listening on. I will be using 3456 in this post, but please pick a different number in the real implementation (I feel that we will see a lot of applications using 3456 soon :-)
3 - Modify TCP/IP properties, enable an IP address
We can use WMI again. Since we are using static port 3456, we just need to update two properties in the IPAll section: disable dynamic ports and set the listening port to 3456:
set wmiComputer = GetObject( _
    "winmgmts:" _
    & "\\.\root\Microsoft\SqlServer\ComputerManagement10")
set tcpProperties = wmiComputer.ExecQuery( _
    "select * from ServerNetworkProtocolProperty " _
    & "where InstanceName='SQLEXPRESS' and " _
    & "ProtocolName='Tcp' and IPAddressName='IPAll'")

for each tcpProperty in tcpProperties
    dim setValueResult, requestedValue
    if tcpProperty.PropertyName = "TcpPort" then
        requestedValue = "3456"
    elseif tcpProperty.PropertyName = "TcpDynamicPorts" then
        requestedValue = ""
    end if
    setValueResult = tcpProperty.SetStringValue(requestedValue)
    if setValueResult = 0 then
        Wscript.Echo "" & tcpProperty.PropertyName & " set."
    else
        Wscript.Echo "" & tcpProperty.PropertyName & " failed!"
    end if
next
Note that I didn't have to enable any of the individual addresses to make it work, but if it is required in your case, you should be able to extend this script easily to do so.
Just a reminder that when working with WMI, WBEMTest.exe is your best friend!
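One caveat: the protocol and port changes above only take effect once the instance restarts. From an elevated prompt something like this should do it (the service name assumes the default SQLEXPRESS instance):
net stop MSSQL$SQLEXPRESS
net start MSSQL$SQLEXPRESS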
4 - Enable mixed mode authentication in SQL Server
I wish we could use WMI again, but unfortunately this setting is not exposed through WMI. There are two other options:
Use the LoginMode property of the Microsoft.SqlServer.Management.Smo.Server class, as described here.
Use the LoginMode value in the SQL Server registry, as described in this post. Note that by default the SQL Server Express instance is named SQLEXPRESS, so for my SQL Server 2008 R2 Express instance the right registry key was
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQLServer.
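For example, a sketch of flipping that value from the command line (2 = mixed mode authentication; this edits the registry directly, so back up the key first):
reg add "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQLServer" /v LoginMode /t REG_DWORD /d 2 /f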
5 - Change user (sa) default password
You got this one covered.
6 - Finally (connect to the instance)
Since we are using a static port assigned to our SQL Server Express instance, there's no need to use the instance name in the server address anymore.
SQLCMD -U sa -P newPassword -S 192.168.0.120,3456
Please let me know if this works for you (fingers crossed!).
I recommend using SMO (Enable TCP/IP Network Protocol for SQL Server).
However, it was not available in my case.
So I rewrote the WMI commands from Krzysztof Kozielczyk in PowerShell.
# Enable TCP/IP
Get-CimInstance -Namespace root/Microsoft/SqlServer/ComputerManagement10 -ClassName ServerNetworkProtocol -Filter "InstanceName = 'SQLEXPRESS' and ProtocolName = 'Tcp'" |
    Invoke-CimMethod -Name SetEnable

# Open the right ports in the firewall
New-NetFirewallRule -DisplayName 'MSSQL$SQLEXPRESS' -Direction Inbound -Action Allow -Protocol TCP -LocalPort 1433

# Modify TCP/IP properties to enable an IP address
$properties = Get-CimInstance -Namespace root/Microsoft/SqlServer/ComputerManagement10 -ClassName ServerNetworkProtocolProperty -Filter "InstanceName='SQLEXPRESS' and ProtocolName = 'Tcp' and IPAddressName='IPAll'"
$properties | ? { $_.PropertyName -eq 'TcpPort' } | Invoke-CimMethod -Name SetStringValue -Arguments @{ StrValue = '1433' }
$properties | ? { $_.PropertyName -eq 'TcpDynamicPorts' } | Invoke-CimMethod -Name SetStringValue -Arguments @{ StrValue = '' }

# Restart SQL Server
Restart-Service 'MSSQL$SQLEXPRESS'

Ruby/Rails/Mysql

I am on Leopard. It comes with Ruby 1.8 & SQLite3 pre-installed. I have updated Ruby to 1.9.1 & added MySQL. Here's the problem: I cannot get the path to correctly point to Ruby 1.9.1. I tried to update the symlink to no avail. I am able to get into MySQL from the terminal, but I cannot connect to the server through Ruby because SQLite3 is the default. I changed the database in my app's config file but it still doesn't work. Something is really screwing this up. I want to uninstall every version of Ruby, Rails, all gems, MySQL, SQLite3, etc. & roll all of what I want on my own. Where can I find the commands to do this from the command line? Can I just send these files to the trash manually as I find them in /usr/local/...? I am really frustrated at this point! Please help.
Re-installing those packages will still not guarantee that it's going to work. I would recommend going through the logs to see if you pick up something obvious. There are lots of debugging techniques available for a Rails app; for starters, see here.
Here's a small Ruby snippet to check whether the connection to MySQL works fine. Give it a try: if you see the MySQL server version printed on your terminal, then you know the problem is somewhere else. Do not forget to change the credentials.
#!/usr/bin/ruby -w
require "mysql"

begin
  # connect to the MySQL server
  dbh = Mysql.real_connect("localhost", "testuser", "testpass", "test")
  # get server version string and display it
  puts "Server version: " + dbh.get_server_info
rescue Mysql::Error => e
  puts "Error code: #{e.errno}"
  puts "Error message: #{e.error}"
  puts "Error SQLSTATE: #{e.sqlstate}" if e.respond_to?("sqlstate")
ensure
  # disconnect from server
  dbh.close if dbh
end
Also, if possible, please provide some more details about the environment you are using, like:
Apache + Rails + Mongrel or
Apache + Rails + Passenger etc.
a snippet of your app/config/database.yml etc.
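For reference, a typical MySQL entry in config/database.yml looks roughly like this (names and credentials are placeholders):
development:
  adapter: mysql
  encoding: utf8
  database: myapp_development
  username: testuser
  password: testpass
  host: localhost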
If you are frustrated, take a break, relax, have a coffee :-) and then start over again... working in a frustrated state of mind is definitely not going to help solve problems.
HTH