I use Xcode 3.2.1 (I am on Snow Leopard for various reasons) with MySQL:
Server version: 5.6.15 MySQL Community Server (GPL), plus mysql-connector-c-6.1.3-osx10.6-x86_64.
I am passing a request to mysql_query() as follows:
// mysql request
request = [NSString stringWithFormat:@"UPDATE consult SET summary='%@', pheno='%@' WHERE idConsult=%@", sum, phe, idc];
if (mysql_query(mysqlCnx, [request UTF8String])) {
    db_finish_with_error(mysqlCnx);
}
When the request is larger than about 4 MB, I get a SIGPIPE with the following stack trace:
#0 0x7fff896d791e in sendto
#1 0x100065a92 in vio_write
#2 0x10004d2a2 in net_write_packet
#3 0x10004d3ac in net_write_buff
#4 0x10004d6e2 in net_write_command
#5 0x100048e3c in cli_advanced_command
#6 0x100046bdd in mysql_real_query
#7 0x1000093f9 in -[ConsultList mysqlUpdateResumePhenoFields:] at ConsultList.m:163
#8 0x10000a565 in -[ConsultList okConsult:] at ConsultList.m:367
Is this a known issue?
Here was the solution. In /etc/my.cnf:
[mysqld]
max_allowed_packet=120M
Then, as root:
chmod 644 /etc/my.cnf
and, in the MySQL client:
mysql> set global max_allowed_packet = 125829120;
Query OK, 0 rows affected (0,00 sec)
mysql> show variables like 'max_allowed_packet';
+--------------------+-----------+
| Variable_name | Value |
+--------------------+-----------+
| max_allowed_packet | 125829120 | << 120MB !
+--------------------+-----------+
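For what it's worth, the SIGPIPE itself can also be tamed on the client side: when the server drops the connection because a packet exceeds max_allowed_packet, the client's next write on the dead socket raises SIGPIPE and kills the process. Below is a minimal, hedged sketch in plain MySQL C API terms (written in C++ here for brevity; the same calls apply from Objective-C, and the connection parameters and run_query helper are illustrative, not from the original project) that ignores SIGPIPE so an oversized query surfaces as an error from mysql_query() instead of a crash:
#include <csignal>
#include <iostream>
#include <mysql.h>
// Illustrative helper: send a query and report failures instead of dying on SIGPIPE.
static int run_query(MYSQL *cnx, const char *sql)
{
    // With SIGPIPE ignored, writing to a connection the server has already
    // closed makes mysql_query() return non-zero rather than raise a signal.
    if (mysql_query(cnx, sql) != 0) {
        std::cerr << "query failed: " << mysql_error(cnx) << "\n";
        return -1;
    }
    return 0;
}
int main()
{
    std::signal(SIGPIPE, SIG_IGN);   // never let a broken pipe abort the client
    MYSQL *cnx = mysql_init(NULL);
    if (!mysql_real_connect(cnx, "127.0.0.1", "user", "password",
                            "testdb", 3306, NULL, 0)) {
        std::cerr << "connect failed: " << mysql_error(cnx) << "\n";
        return 1;
    }
    run_query(cnx, "UPDATE consult SET summary='...' WHERE idConsult=1");
    mysql_close(cnx);
    return 0;
}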
I'm administering a RHEL OpenShift cluster. I'm upgrading from 4.10.x -> 4.11.x -> 4.12.2.
There are 3 masters and 7 worker nodes.
All 3 masters have been updated.
3 of the 7 workers have been updated.
The upgrade is now stuck on worker0 with:
oc logs machine-config-daemon-4bs9x -n openshift-machine-config-operator
< snip >
I0216 21:00:08.555947 3136 daemon.go:1255] Current config: rendered-worker-8ebd95b2c00a22992daf1248ebc5640f
I0216 21:00:08.555986 3136 daemon.go:1256] Desired config: rendered-worker-263c6ea5fafb6f1da35a31749a1180d7
I0216 21:00:08.555992 3136 daemon.go:1258] state: Degraded
I0216 21:00:08.566365 3136 update.go:2089] Running: rpm-ostree cleanup -r
Deployments unchanged.
I0216 21:00:08.647332 3136 update.go:2104] Disk currentConfig rendered-worker-263c6ea5fafb6f1da35a31749a1180d7 overrides node's currentConfig annotation rendered-worker-8ebd95b2c00a22992daf1248ebc5640f
I0216 21:00:08.651201 3136 daemon.go:1564] Validating against pending config rendered-worker-263c6ea5fafb6f1da35a31749a1180d7
E0216 21:00:10.291740 3136 writer.go:200] Marking Degraded due to: unexpected on-disk state validating against rendered-worker-263c6ea5fafb6f1da35a31749a1180d7: expected target osImageURL "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916d3c75fb02ee", have "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17" ("b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f")
I've had this problem before and followed the Red Hat solution of running the following command, but it is now failing.
oc debug node/worker0.xx.com
sh-4.4# chroot /host
sh-4.4# rpm-ostree status
State: idle
Deployments:
* db83d20cf09a263777fcca78594b16da00af8acc245d29cc2a1344abc3f0dac2
Version: 412.86.202301311551-0 (2023-01-31T15:54:05Z)
sh-4.4#
sh-4.4# /run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee"
I0216 21:02:54.449270 3962714 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-821872843 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916d3c75fb02ee
I0216 21:03:48.349962 3962714 rpm-ostree.go:209] Previous pivot: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17
I0216 21:03:49.926169 3962714 rpm-ostree.go:246] No com.coreos.ostree-commit label found in metadata! Inspecting...
I0216 21:03:49.926234 3962714 rpm-ostree.go:412] Running captured: ostree refs --repo /run/mco-machine-os-content/os-content-821872843/srv/repo
error: error running ostree refs --repo /run/mco-machine-os-content/os-content-821872843/srv/repo: exit status 1
error: opening repo: opendir(/run/mco-machine-os-content/os-content-821872843/srv/repo): No such file or directory
sh-4.4#
After a reboot and retry, I'm now getting:
sh-4.4# /run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916375fb02ee"
I0217 19:10:06.928154 1443914 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-903744214 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee
error: "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee" is not a valid image reference: invalid checksum digest length
W0217 19:10:07.176459 1443914 run.go:45] nice failed: running nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-903744214 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee failed: error: "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee" is not a valid image reference: invalid checksum digest length
: exit status 1; retrying...
^C
I tried this:
/run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916375fb02ee"
expecting this result (from a previous upgrade problem):
sh-4.4# chroot /host
sh-4.4# /run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17"
I0208 21:50:00.408235 2962835 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-3432684387 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17
I0208 21:50:29.727695 2962835 rpm-ostree.go:353] Running captured: rpm-ostree status --json
I0208 21:50:29.780350 2962835 rpm-ostree.go:261] Previous pivot: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c252d64354d207cd7fb2a6e2404e611a29bf214f63a97345dee1846055c15d8
I0208 21:50:31.456928 2962835 rpm-ostree.go:293] Pivoting to: 411.86.202301242231-0 (b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f)
I0208 21:50:31.456966 2962835 rpm-ostree.go:325] Executing rebase from repo path /run/mco-machine-os-content/os-content-3432684387/srv/repo with customImageURL pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17 and checksum b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f
I0208 21:50:31.457048 2962835 update.go:1972] Running: rpm-ostree rebase --experimental /run/mco-machine-os-content/os-content-3432684387/srv/repo:b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f --custom-origin-url pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17 --custom-origin-description Managed by machine-config-operator
0 metadata, 0 content objects imported; 0 bytes content written
Staging deployment... done
Upgraded:
NetworkManager 1:1.30.0-16.el8_4 -> 1:1.36.0-12.el8_6
< snip>
zlib 1.2.11-18.el8_4 -> 1.2.11-19.el8_6
Removed:
ModemManager-glib-1.10.8-2.el8.x86_64
libmbim-1.20.2-1.el8.x86_64
libqmi-1.24.0-1.el8.x86_64
openvswitch2.16-2.16.0-108.el8fdp.x86_64
redhat-release-coreos-410.84-2.el8.x86_64
Added:
WALinuxAgent-udev-2.3.0.2-2.el8_6.3.noarch
glibc-gconv-extra-2.28-189.5.el8_6.x86_64
libbpf-0.4.0-3.el8.x86_64
openvswitch2.17-2.17.0-67.el8fdp.x86_64
redhat-release-8.6-0.1.el8.x86_64
redhat-release-eula-8.6-0.1.el8.x86_64
shadow-utils-subid-2:4.6-16.el8.x86_64
Run "systemctl reboot" to start a reboot
sh-4.4# systemctl reboot
My MariaDB server is timing out my C++ client (using libmariadb) after 600 seconds (10 minutes) of inactivity, and I'm not sure why, because I can't find any configured timeouts that specify that number.
Here's my code, where I execute a simple SELECT query, wait 11 minutes, then run that same query again and get a "server gone" error:
#include <iostream>
#include <unistd.h>
#include <errmsg.h>
#include <mysql.h>

int main(int, char**)
{
    // connect to the database
    MYSQL* connection = mysql_init(NULL);
    my_bool reconnect = 0;
    mysql_options(connection, MYSQL_OPT_RECONNECT, &reconnect); // don't implicitly reconnect
    mysql_real_connect(connection, "127.0.0.1", "testuser", "password",
                       "my_test_db", 3306, NULL, 0);

    // run a simple query
    mysql_query(connection, "select 5");
    mysql_free_result(mysql_store_result(connection));
    std::cout << "First query done...\n";

    // sleep for 11 minutes
    const unsigned int seconds = 660;
    sleep(seconds);

    // run the query again
    if(! mysql_query(connection, "select 5"))
    {
        std::cout << "Second query succeeded after " << seconds << " seconds\n";
        mysql_free_result(mysql_store_result(connection));
    }
    else
    {
        if(mysql_errno(connection) == CR_SERVER_GONE_ERROR)
        {
            // **** this happens every time ****
            std::cout << "Server went away after " << seconds << " seconds\n";
        }
    }

    // close the connection
    mysql_close(connection);
    connection = nullptr;
    return 0;
}
The stdout of the server process reports that it timed out my connection:
$ sudo journalctl -u mariadb
...
Jul 24 17:58:31 myhost mysqld[407]: 2018-07-24 17:58:31 139667452651264 [Warning] Aborted connection 222 to db: 'my_test_db' user: 'testuser' host: 'localhost' (Got timeout reading communication packets)
...
Looking at a tcpdump capture, I can also see the server sending the client a TCP FIN packet, which closes the connection.
The reason I'm stumped is that I haven't changed any of the default timeout values, and none of them is even 600 seconds:
MariaDB [(none)]> show variables like '%timeout%';
+-------------------------------------+----------+
| Variable_name | Value |
+-------------------------------------+----------+
| connect_timeout | 10 |
| deadlock_timeout_long | 50000000 |
| deadlock_timeout_short | 10000 |
| delayed_insert_timeout | 300 |
| innodb_flush_log_at_timeout | 1 |
| innodb_lock_wait_timeout | 50 |
| innodb_print_lock_wait_timeout_info | OFF |
| innodb_rollback_on_timeout | OFF |
| interactive_timeout | 28800 |
| lock_wait_timeout | 31536000 |
| net_read_timeout | 30 |
| net_write_timeout | 60 |
| slave_net_timeout | 3600 |
| thread_pool_idle_timeout | 60 |
| wait_timeout | 28800 |
+-------------------------------------+----------+
So why is the server timing out my connection? Based on the documentation, I would have expected it to be the wait_timeout server variable, but that is left at the default of 8 hours...
BTW I'm using MariaDB 10.0 and libmariadb 2.0 (from the Ubuntu Xenial Universe repo)
Edit: here's an image of a tcpdump capture catching the disconnect. My Wireshark filter is tcp.port == 55916, so I'm looking at traffic to/from this one client connection. The FIN packet that the server sends is packet 1199, exactly 600 seconds after the previous packet (884).
wait_timeout is tricky. From the same connection, run:
SHOW SESSION VARIABLES LIKE '%timeout%';
SHOW SESSION VARIABLES WHERE VALUE BETWEEN 500 AND 700;
You should be able to work around the issue by executing:
mysql_query(connection, "SET @@wait_timeout = 22222");
Are you connected as 'root' or not?
More connector details:
See: https://dev.mysql.com/doc/refman/5.5/en/mysql-options.html
CLIENT_INTERACTIVE: Permit interactive_timeout seconds of inactivity (rather than wait_timeout seconds) before closing the connection. The client's session wait_timeout variable is set to the value of the session interactive_timeout variable.
https://dev.mysql.com/doc/relnotes/connector-cpp/en/news-1-1-5.html (MySQL Connector/C++ 1.1.5)
It is also possible to get and set the statement execution-time limit using the MySQL_Statement::getQueryTimeout() and MySQL_Statement::setQueryTimeout() methods.
There may also be a TCP/IP timeout.
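If long idle periods are expected, one option suggested by the CLIENT_INTERACTIVE note above is to request that flag at connect time, so interactive_timeout governs the session instead of wait_timeout. A minimal sketch, assuming libmysqlclient/libmariadb and illustrative connection parameters (host, user and database are placeholders, not from the question's real setup):
#include <iostream>
#include <mysql.h>
int main()
{
    MYSQL* connection = mysql_init(NULL);
    // CLIENT_INTERACTIVE asks the server to apply interactive_timeout
    // (28800 s by default) to this session instead of wait_timeout.
    if (!mysql_real_connect(connection, "127.0.0.1", "testuser", "password",
                            "my_test_db", 3306, NULL, CLIENT_INTERACTIVE))
    {
        std::cerr << "connect failed: " << mysql_error(connection) << "\n";
        return 1;
    }
    // The effective value can be checked from this same connection with
    // SHOW SESSION VARIABLES LIKE 'wait_timeout';
    mysql_close(connection);
    return 0;
}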
I'm not sure about the exact reason, but I'm sure wait_timeout is not the only thing that has an effect here. According to the only error message included in your question, it seems there was a problem reading a packet:
Got timeout reading communication packets
I believe MariaDB had an issue reading the packet rather than a problem with the connection attempt itself. I also had a look at the MariaDB client library and found this block:
if (ma_net_write_command(net,(uchar) command,arg,
length ? length : (ulong) strlen(arg), 0))
{
if (net->last_errno == ER_NET_PACKET_TOO_LARGE)
{
my_set_error(mysql, CR_NET_PACKET_TOO_LARGE, SQLSTATE_UNKNOWN, 0);
goto end;
}
end_server(mysql);
if (mariadb_reconnect(mysql))
goto end;
if (ma_net_write_command(net,(uchar) command,arg,
length ? length : (ulong) strlen(arg), 0))
{
my_set_error(mysql, CR_SERVER_GONE_ERROR, SQLSTATE_UNKNOWN, 0);
goto end;
}
}
https://github.com/MariaDB/mariadb-connector-c/blob/master/libmariadb/mariadb_lib.c
So it seems it sets the error code to "server gone away" when it hits a packet size issue. I suggest you change the max_allowed_packet variable to some large value and see whether it has any effect.
SET @@global.max_allowed_packet = <some large value>;
https://mariadb.com/kb/en/library/server-system-variables/#max_allowed_packet
I hope this helps, or at least sets you on a path to solving the problem :) Finally, I think you should handle disconnects in your code rather than relying on the timeouts.
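As a hedged illustration of that last point, here is a minimal sketch of handling the disconnect in client code, assuming the same MySQL/MariaDB C API as the question; the query_with_retry helper and connection parameters are illustrative, not part of any existing codebase:
#include <iostream>
#include <errmsg.h>
#include <mysql.h>
// Illustrative helper: run a query, reconnecting once if the server went away.
static bool query_with_retry(MYSQL*& connection, const char* sql)
{
    if (mysql_query(connection, sql) == 0)
        return true;
    if (mysql_errno(connection) != CR_SERVER_GONE_ERROR &&
        mysql_errno(connection) != CR_SERVER_LOST)
        return false;                       // some other error: give up
    // Re-establish the connection and try the query one more time.
    mysql_close(connection);
    connection = mysql_init(NULL);
    if (!mysql_real_connect(connection, "127.0.0.1", "testuser", "password",
                            "my_test_db", 3306, NULL, 0))
        return false;
    return mysql_query(connection, sql) == 0;
}
int main()
{
    MYSQL* connection = mysql_init(NULL);
    if (!mysql_real_connect(connection, "127.0.0.1", "testuser", "password",
                            "my_test_db", 3306, NULL, 0))
        return 1;
    if (query_with_retry(connection, "select 5"))
        mysql_free_result(mysql_store_result(connection));
    else
        std::cerr << "query failed: " << mysql_error(connection) << "\n";
    mysql_close(connection);
    return 0;
}
Alternatively, calling mysql_ping() before each query (with MYSQL_OPT_RECONNECT enabled) gives a similar effect, at the cost of the implicit reconnect that the question's code deliberately disables.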
Galera cluster with HAProxy load balancing: change these timeout parameters in the HAProxy settings:
defaults
timeout connect 10s
timeout client 30s
timeout server 30s
For the last 4 days we have been facing frequent database crashes with the MySQL Infobright engine; there have been no recent changes to the production environment and no updates.
We are currently using version 5.1.40.
The dump is below; can anyone help figure out the issue?
170520 21:12:08 - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.
key_buffer_size=1677721600
read_buffer_size=1048576
max_used_connections=75
max_threads=1000
threads_connected=54
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 3696548 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
thd: 0xc2a4bd000
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7fc0d0bede58 thread_stack 0x80000
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0xaef849]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x412e13]
/lib64/libpthread.so.0(+0xf7e0) [0x7fc0d48c77e0]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0xb10635]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0xb1f123]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x9a9693]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x76ae0c]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x76b594]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x767ab3]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x7694ea]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x72902b]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x422325]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x427573]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x42b38c]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x42c227]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x42cb05]
/usr/local/infobright-4.7.1-x86_64/bin/mysqld() [0x41f06d]
/lib64/libpthread.so.0(+0x7aa1) [0x7fc0d48bfaa1]
/lib64/libc.so.6(clone+0x6d) [0x7fc0d460caad]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0xc2da5e410 = SELECT DATE_FORMAT(DATETIME,'%Y%m%d') AS YEAR_MONTH_DAY_SK ,HOUR(DATETIME) AS HOUR_SK, IFNULL(DESTINATION,'0') AS DESTINATION, IFNULL(DATETIME,'1970-01-01 00:00:00') AS DATETIME, IFNULL(CLIENTID,'0') AS CLIENTID, IFNULL(GROUPID,'0') AS GROUPID, IFNULL(TEAMID,'0') AS TEAMID, IFNULL(SERVICEID,'0') AS SERVICEID, IFNULL(CHANNELID,'0') AS CHANNELID, IFNULL(STATUSID,'0') AS STATUSID, CASE REASONCODE WHEN '' THEN NULL WHEN NULL THEN NULL ELSE REASONCODE END AS REASONCODE, CASE REASONDESC WHEN '' THEN NULL WHEN NULL THEN NULL ELSE REASONDESC END AS REASONDESC, IFNULL(ACTIONTYPE1ID,'0') AS ACTIONTYPE1ID, CASE ACTIONTYPE1DESC WHEN '' THEN NULL WHEN NULL THEN NULL ELSE ACTIONTYPE1DESC END AS ACTIONTYPE1DESC, IFNULL(ACTIONTYPE2ID,'0') AS ACTIONTYPE2ID, CASE ACTIONTYPE2DESC WHEN '' THEN NULL WHEN NULL THEN NULL ELSE ACTIONTYPE2DESC END AS ACTIONTYPE2DESC, IFNULL(ATTACHMENT,'0') AS ATTACHMENT, CASE MIMETYPE WHEN '' THEN NULL WHEN NULL THEN NULL ELSE MIMETYPE END AS MIMETYPE, CASE VOICEFLOWNAME WH
thd->thread_id=35918
thd->killed=NOT_KILLED
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
170520 21:12:08 mysqld_safe Number of processes running now: 0
170520 21:12:08 mysqld_safe mysqld restarted
tcmalloc: large alloc 1365172224 bytes == 0x4518000 #
Loading configuration for Infobright instance ...
Option: AllowMySQLQueryPath, value: 1.
Option: AutoConfigure, value: 0.
Option: CacheFolder, value: /usr/local/infobright-4.7.1-x86_64/cache.
Option: ControlMessages, value: 0.
Option: IBEngineRevision, value: IEE_4.7.1_r30553_31737.
Option: InternalMessages, value: 0.
Option: InternalMessagesFlushPeriod, value: 60.
Option: KNFolder, value: BH_RSI_Repository.
Option: KNLevel, value: 99.
Option: LicenseCheckInterval, value: 0.
Option: LicenseExpireWarningDays, value: 0.
Option: LicenseFile, value: <unknown>.
Option: LicenseServerIPAddr, value: .
Option: LicenseServerType, value: .
Option: LicenseServerWarningNumber, value: .
Option: LoaderMainHeapSize, value: 800.
Option: PushDown, value: 1.
Option: ServerMainHeapSize, value: 48000.
Option: UseMySQLImportExportDefaults, value: 0.
Option: bherrLogLevel, value: 1.
Infobright instance configuration loaded.
tcmalloc: large alloc 40265318400 bytes == 0x687c8000 #
tcmalloc: large alloc 10066329600 bytes == 0x9cff48000 #
170520 21:12:09 [ERROR] Can't start server: Bind on TCP/IP port: Address already in use
170520 21:12:09 [ERROR] Do you already have another mysqld server running on port: 5029 ?
170520 21:12:09 [ERROR] Aborting
170520 21:12:09 [Note] /usr/local/infobright-4.7.1-x86_64/bin/mysqld: Shutdown complete
170520 21:12:09 mysqld_safe mysqld from pid file /data/infobright/data/SH-UMP-CINFBRT2.pid ended
I'm using Salt to provision cloud servers, but I'm having problems getting MySQLdb to produce the correct permissions in MySQL. If I were executing the SQL directly, it would be:
GRANT ALL ON `install\_%`.* TO 'installer'@'localhost';
The sls file contains:
installer_local_install_grants:
mysql_grants.present:
- grant: all privileges
- database: install\_%.*
- user: installer
- host: localhost
- escape: False
Which produces this error:
Function: mysql_grants.present
Result: False
Comment: An exception occurred in this state: Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1560, in call
**cdata['kwargs'])
File "/usr/lib/python2.7/dist-packages/salt/states/mysql_grants.py", line 187, in present
grant, database, user, host, grant_option, escape, ssl_option, **connection_args
File "/usr/lib/python2.7/dist-packages/salt/modules/mysql.py", line 1666, in grant_add
_execute(cur, qry['qry'], qry['args'])
File "/usr/lib/python2.7/dist-packages/salt/modules/mysql.py", line 505, in _execute
return cur.execute(qry, args)
File "/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 159, in execute
query = query % db.literal(args)
TypeError: * wants int
With debug turned on in Salt, the relevant line prior to submitting to MySQLdb is:
Doing query: GRANT ALL PRIVILEGES ON install\_%.* TO %(user)s@%(host)s args: {'host': 'localhost', 'user': 'installer'}
So it seems Salt is outputting the correct SQL, but MySQLdb is not handling part of the query correctly: because MySQLdb interpolates the query with Python's % operator (the query = query % db.literal(args) line in the traceback), the literal %.* in the database name is parsed as a format specifier whose * precision expects an int argument, hence "TypeError: * wants int". The query is also missing the backticks, but I'm really not sure how to get those in.
With the escape removed or set to True the grants look like:
+-----------+-----------------+------------------+
| Host | Db | User |
+-----------+-----------------+------------------+
| localhost | install\\_\% | installer |
+-----------+-----------------+------------------+
When it should look like:
+-----------+-----------------+------------------+
| Host | Db | User |
+-----------+-----------------+------------------+
| localhost | install\_% | installer |
+-----------+-----------------+------------------+
OK, would you open an issue referencing this SO post? Thanks!
https://github.com/saltstack/salt/issues/new
I'm trying to set up an application using the Zend Framework, but as soon as I add the following lines to application.ini, the default home page created by the Zend tool throws a fatal error:
Fatal error: Uncaught exception 'Zend_Db_Adapter_Exception' with
message 'Configuration array must have a key for 'password' for login
credentials' in C:\xampp\php\PEAR\Zend\Db\Adapter\Abstract.php:295
Stack trace: #0 C:\xampp\php\PEAR\Zend\Db\Adapter\Abstract.php(183):
Zend_Db_Adapter_Abstract->_checkRequiredOptions(Array) #1
C:\xampp\php\PEAR\Zend\Db.php(270):
Zend_Db_Adapter_Abstract->__construct(Array) #2
C:\xampp\php\PEAR\Zend\Application\Resource\Db.php(142):
Zend_Db::factory('PDO_MYSQL', Array) #3
C:\xampp\php\PEAR\Zend\Application\Resource\Db.php(154):
Zend_Application_Resource_Db->getDbAdapter() #4
C:\xampp\php\PEAR\Zend\Application\Bootstrap\BootstrapAbstract.php(683):
Zend_Application_Resource_Db->init() #5
C:\xampp\php\PEAR\Zend\Application\Bootstrap\BootstrapAbstract.php(626):
Zend_Application_Bootstrap_BootstrapAbstract->_executeResource('db')
#6 C:\xampp\php\PEAR\Zend\Application\Bootstrap\BootstrapAbstract.php(586):
Zend_Application_Bootstrap_BootstrapAbstract->_bootstrap(NULL) #7
C:\xampp\php\PEAR\Zend\Ap in
C:\xampp\php\PEAR\Zend\Db\Adapter\Abstract.php on line 295
resources.db.adapter = PDO_MYSQL
resources.db.params.host = localhost
resources.db.params.dbname = codenamesnm
I'm using XAMPP on Windows 7. Any idea what is wrong?
Do not omit these lines:
resources.db.params.username = rob
resources.db.params.password = 123456
Set them to "root" or "" if necessary, but leave them in your application.ini.