I'm running Ejabberd 17.01 on Ubuntu 16.04, and I need to log all messages in a database. With the module mod_mam this should be a straightforward task. What I've done so far, according to the official documentation:
Creating a MySQL database
Importing the schema that comes with the Ejabberd installation (a sketch of this step follows the config snippet below)
Making the following changes in the ejabberd.yml config file
Code snippet:
auth_method: sql
default_db: sql
...
## MySQL server:
##
sql_type: mysql
sql_server: "localhost"
sql_database: "ejabberd"
sql_username: "someuser"
sql_password: "somepassword"
...
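For reference, step 2 (importing the schema) amounts to something like the following; the exact path to mysql.sql depends on how Ejabberd was installed, so treat this as a sketch:
# path to mysql.sql varies by installation; this is just a sketch
mysql -h localhost -u someuser -p ejabberd < mysql.sql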
In principle, this seems to work. When I create new users, or users create their own accounts, I can see the additions to the corresponding MySQL table. That means that the database is used and can be accessed, I presume.
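For example, a quick way to confirm this is to query the accounts table directly (assuming the stock MySQL schema, where accounts are stored in the users table):
-- list registered accounts; assumes the users table from the stock schema
SELECT username, created_at FROM users;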
But now I want to enable mod_mam. According to the documentation and other sources on the Web, this should look something like this:
modules:
  mod_mam:  # <-- I had "mom_mam" here. Didn't notice the typo in the error message.
    iqdisc: no_queue
    db_type: sql
    default: always
However, with this, Ejabberd fails to start, throwing the following error:
2017-01-27 08:44:50.242 [critical] <0.39.0>#gen_mod:start_module:162 Problem starting the module mom_mam for host <<"xxx.xxx.xxx.xxx">>
options: [{iqdisc,no_queue},{db_type,sql},{default,always}]
error: undef
[{mom_mam,start,
[<<"xxx.xxx.xxx.xxx">>,
[{iqdisc,no_queue},{db_type,sql},{default,always}]],
[]},
{gen_mod,start_module,3,[{file,"src/gen_mod.erl"},{line,154}]},
{lists,foreach,2,[{file,"lists.erl"},{line,1337}]},
{ejabberd_app,start,2,[{file,"src/ejabberd_app.erl"},{line,77}]},
{application_master,start_it_old,4,
[{file,"application_master.erl"},{line,273}]}]
2017-01-27 08:44:50.242 [critical] <0.39.0>#gen_mod:maybe_halt_ejabberd:170 ejabberd initialization was aborted because a module start failed.
I cannot find anything about this online; it usually just seems to work for others.
Related
In ejabberd 18.01-2, installed in an lxc container running Ubuntu 18.04 Bionic LTS using apt, I'm trying to set up mod_http_upload.
In the listen section, I have:
listen:
  -
    port: 5444
    module: ejabberd_http
    tls: true
    request_handlers:
      "/upload": mod_http_upload
In the configuration file, the commented-out port was 5444; however, in the current documentation it is 5443, so I am not sure which one is right.
In the modules section, I have
modules:
  mod_http_upload:
    host: "upload.ejabberd.forumanalogue.fr"
    max_size: infinity
    thumbnail: true
    put_url: "https://ejabberd.forumanalogue.fr:5444/upload"
    docroot: "/ejabberd/upload"
When I start the service, I can see an odd message in the logs:
2019-11-11 21:02:35.287 [warning] <0.367.0>#ejabberd_pkix:handle_call:255 No certificate found matching 'upload.ejabberd.forumanalogue.fr': strictly configured clients or servers will reject connections with this host; obtain a certificate for this (sub)domain from any trusted CA such as Let's Encrypt (www.letsencrypt.org)
It is strange because I have a signed wildcard certificate.
certfiles:
  - "/etc/letsencrypt/live/forumanalogue.fr/*.pem"
I can see the service with my client (Gajim), but when I try to send a file to another local account, I receive the error "Access denied by service policy"; see the complete log:
<iq xml:lang='en' to='foo@forumanalogue.fr/gajim.HCLJ4BZI' from='upload.ejabberd.forumanalogue.fr' type='error' id='1dd35274-90e9-4b3b-9608-0fab59afe34e'>
<request xmlns='urn:xmpp:http:upload'>
<filename>a.out</filename>
<size>27232</size>
<content-type>application/octet-stream</content-type>
</request>
<error code='403' type='auth'>
<forbidden xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>
<text xml:lang='en' xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'>Access denied by service policy</text>
</error>
</iq>
I had to enable debug logging in order to see something. It is quite verbose, but I think the relevant part, which is not redundant with the client message, is:
2019-11-11 20:53:08.329 [debug] <0.501.0>#mod_http_upload:process_slot_request:544 Denying HTTP upload slot request from foo@forumanalogue.fr/gajim.HCLJ4BZI
Thank you for your help.
I tried with ejabberd 18.01, a configuration similar to yours, and it works for me.
Looking at the source code, that "process_slot_request:544" error means that the account attempting to use the upload feature is not allowed by the "local" access rule in the vhost it sent the request to. Probably it's a remote account, remote to that upload service. In other words, the service upload.whatever can only be used by accounts like user12@whatever.
In your case, you are attempting to use upload.ejabberd.forumanalogue.fr from the account foo@forumanalogue.fr, which is not local to that upload service.
Several ideas, I hope one of them suits your specific setup:
A) Don't mess with vhosts. If it's forumanalogue.fr, keep it that way everywhere.
B) Use @HOST@ in the host and put_url options (see the sketch after this list).
C) Or, if you really want to mix hosts, add access rights so that accounts in that vhost are considered "local" to the upload service.
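Here is a minimal sketch of option B, deriving both the upload host and the URL from the vhost via the @HOST@ placeholder (port and docroot taken from your question):
modules:
  mod_http_upload:
    # @HOST@ expands to each served vhost, e.g. forumanalogue.fr
    host: "upload.@HOST@"
    put_url: "https://@HOST@:5444/upload"
    docroot: "/ejabberd/upload"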
I am trying to set up a brand new Ghost blog on a CentOS 7 server. I have Nginx, Node and Ghost installed and have written all of the necessary configuration files. It's pretty close to working, but I wanted to use MySQL instead of SQLite, so I created a new (blank) MySQL database called "ghost_db", set up a MySQL user called "ghost", gave the user permission for the database, and added these lines to config.js:
database: {
  client: 'mysql',
  connection: {
    host: 'localhost',
    user: 'ghost',
    password: 'mypassword',
    database: 'ghost_db',
    charset: 'utf8',
    filename: path.join(__dirname, '/content/data/ghost-dev.db')
  },
  debug: false
}, ...
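For reference, the blank database and user described above could have been created with statements like these (a sketch; user, password, and database name are taken from the question):
-- create the database and a dedicated user, then grant access
CREATE DATABASE ghost_db;
CREATE USER 'ghost'@'localhost' IDENTIFIED BY 'mypassword';
GRANT ALL PRIVILEGES ON ghost_db.* TO 'ghost'@'localhost';
FLUSH PRIVILEGES;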
When I try to start it, I get an error that suggests I use knex-migrator to initialize the database.
[john@a ghost]$ npm start
> ghost@1.18.4 start /var/www/ghost
> node index
[2017-12-10 00:08:00] ERROR
NAME: DatabaseIsNotOkError
CODE: MIGRATION_TABLE_IS_MISSING
MESSAGE: Please run knex-migrator init ...
However, some comments on Stack Exchange suggest that using knex-migrator may be unnecessary for this version of Ghost, and when I run knex-migrator, it also fails:
[john@a ghost]$ knex-migrator init
[2017-12-09 16:21:33] ERROR
NAME: RollbackError
CODE: SQLITE_ERROR
MESSAGE: delete from "migrations" where "name" = '2-create-fixtures.js' and "version" = 'init' and "currentVersion" = '1.18' - SQLITE_ERROR: no such table: migrations
...[omitted]
Error: SQLITE_ERROR: no such table: migrations
I think the problem may be that the "ghost_db" database I initially created is blank. The "ghost-dev.db" file that config.js points to seems to be for SQLite, but I get the same error message if I switch config.js back to using an SQLite database. I don't know what the "migrations" table is. I found the schema that I think Ghost expects at https://github.com/TryGhost/Ghost/blob/1.16.2/core/server/data/schema/schema.js, but I'm not sure how to use it to initialize the tables, etc., except by doing it very laboriously by hand. I'm stumped!
Knex-migrator is new in Ghost 1.0, which also uses a config.<env>.json file for configuration.
It sounds like you added your database config to a file called config.js, which was correct before 1.0; however, it seems you were installing Ghost 1.0, so your new connection details would have needed to live in config.production.json.
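A minimal sketch of what those connection details would look like in config.production.json, reusing the values from your config.js:
{
  "database": {
    "client": "mysql",
    "connection": {
      "host": "localhost",
      "user": "ghost",
      "password": "mypassword",
      "database": "ghost_db"
    }
  }
}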
You are correct that Ghost-CLI isn't intended for use on CentOS (it targets Ubuntu), but I'd be very surprised if it failed to install Ghost correctly. The issues with other OSs are mainly in the subtle differences in systemd, i.e. keeping Ghost running.
The answer for me was just to not create the database at all and let Ghost do it as part of ghost install.
I took an alternate approach which proved successful: installing Ghost as an NPM module. The official Ghost instructions label this as an "advanced" process, but it wasn't too difficult to follow the instructions in the excellent nehalist.io and Stickleback blogs. There was also some useful guidance in the HugeServer knowledgebase. I think ultimately the problem was that the Ghost command-line interface (ghost-cli) wasn't designed for CentOS 7.
# default: on
# description: mysqlchk
service mysqlchk
{
        # this is a config for xinetd, place it in /etc/xinetd.d/
        disable         = no
        flags           = REUSE
        socket_type     = stream
        type            = UNLISTED
        port            = 9200
        wait            = no
        user            = root
        server          = /usr/bin/mysqlclustercheck
        log_on_failure  += USERID
        only_from       = 0.0.0.0/0
        #
        # Passing arguments to clustercheck
        # <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>
        # Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local
        # Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local
        # 55-to-56 upgrade: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.extra
        #
        # recommended to put the IPs that need
        # to connect exclusively (security purposes)
        per_source      = UNLIMITED
}
It is kind of strange that the script works fine when run manually, but when it runs via /etc/xinetd.d/ it does not work as expected.
In the mysqlclustercheck script, instead of the --user= and --password= syntax, I am using the --login-path= syntax.
The script runs fine when I run it from the command line, but the xinetd status was showing signal 13. After debugging, I found that even a simple command like this is not working:
mysql_config_editor print --all >>/tmp/test.txt
We don't see any output generated when it is run via xinetd (mysqlclustercheck).
Have you tried the following instead of /usr/bin/mysqlclustercheck?
server = /usr/bin/clustercheck
I am wondering if you could test your binary location with the Linux which command.
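For example:
# check which of the two binaries actually exists on this system
which clustercheck
which mysqlclustercheck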
It has been a long time since this question was asked, but it just came to my attention.
First of all, as mentioned, Percona's cluster check script is called clustercheck, so make sure you are using the correct name and the correct path.
Secondly, since the script runs fine from the command line, it seems to me that the path of the mysql client command is not known to xinetd when it runs the cluster check script.
Since the mysqlclustercheck script, as offered by Percona, uses only the binary name mysql without specifying the absolute path, I suggest you do the following:
Find where mysql client command is located on your system:
ccloud@gal1:~> sudo -i
gal1:~ # which mysql
/usr/local/mysql/bin/mysql
gal1:~ #
Then edit the script /usr/bin/mysqlclustercheck, and in the following line:
MYSQL_CMDLINE="mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \
place the exact path of the mysql client command that you found in the previous step.
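For example, with the path found above, the edited line would look like this (a sketch; your path may differ):
# assuming mysql was found at /usr/local/mysql/bin/mysql, as in the example above
MYSQL_CMDLINE="/usr/local/mysql/bin/mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \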
I also see that you are not using MySQL connection credentials for connecting to the MySQL server. The mysqlclustercheck script, as offered by Percona, uses a user/password pair in order to connect to the MySQL server.
So normally, you should execute the script in the command line like:
gal1:~ # /usr/sbin/clustercheck haproxy haproxyMySQLpass
HTTP/1.1 200 OK
Content-Type: text/plain
where haproxy/haproxyMySQLpass is the MySQL connection user/pass of the HAProxy monitoring user.
Additionally, you should specify them to your script's xinetd settings like:
server = /usr/bin/mysqlclustercheck
server_args = haproxy haproxyMySQLpass
Last but not least, the signal 13 you are getting is because you are trying to write output from a script run by xinetd. If, for example, in your mysqlclustercheck you add a statement like
echo "debug message"
you are probably going to see the broken-pipe signal (SIGPIPE, 13 in POSIX).
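If you really need debug output from the script, appending it to a file instead of stdout avoids the broken pipe (the log path here is just an example):
# stdout can be a closed pipe under xinetd once the client disconnects, so log to a file
echo "debug message" >> /tmp/mysqlclustercheck.log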
Finally, I had issues with this script on SLES 12.3, and I eventually managed to run it not as 'nobody' but as 'root'.
Hope it helps.
I recently uploaded a Symfony2 project to GoDaddy and I'm having trouble accessing it because I get the message:
An exception occured in driver: SQLSTATE[HY000] [1045] Access denied for user 'root'@'127.0.0.1' (using password: NO)
Obviously the message is clear, so I checked and rechecked my parameters.yml, and the message doesn't even match what I have there, which I have changed several times trying to fix this. This is my parameters.yml:
parameters:
    database_driver: pdo_mysql
    database_host: localhost
    database_port: null
    database_name: database1
    database_user: database1user
    database_password: mytestpassword
    mailer_transport: smtp
    mailer_host: 127.0.0.1
    mailer_user: null
    mailer_password: null
    locale: en
    secret: RandomTokenThatWillBeChanged
    debug_toolbar: true
    debug_redirects: false
    use_assetic_controller: true
So, either the error message doesn't tell me what my real problem is, or the parameters are being loaded from some cached version that I haven't found yet. Any ideas of what else could cause this, or where a cached version of this data could be?
One of the best practice when developing a Symfony application is to
make it configurable via a parameters.yml file. It contains
information such as the database name, the mailer hostname, and custom
configuration parameters.
As those parameters can be different on your local machine, your
testing environment, your production servers, and even between
developers working on the same project, it is not recommended to store
it in the project repository. Instead, the repository should contain a
parameters.yml.dist file with sensible defaults that can be used as a
good starting point for everyone.
Then, whenever a developer starts working on the project, the
parameters.yml file must be created by using the parameters.yml.dist
as a template. That works quite well, but whenever a new value is
added to the template, you must remember to update the main parameter
file accordingly.
As of Symfony 2.3, the Standard Edition comes with a new bundle that
automates the tedious work. Whenever you run composer install, a
script creates the parameters.yml file if it does not exist and allows
you to customize all the values interactively. Moreover, if you use
the --no-interaction flag, it will silently fallback to the default
values.
http://symfony.com/blog/new-in-symfony-2-3-interactive-management-of-the-parameters-yml-file
So, is it not possible that your parameters.yml is overwritten by parameters.yml.dist?
You can also try to completely clear the cache:
In Dev:
php app/console cache:clear
In Production:
php app/console cache:clear --env=prod --no-debug
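If the console command itself fails, removing the cache directories by hand has the same effect (assuming the standard Symfony2 app/cache layout):
# equivalent manual cleanup of the dev and prod caches
rm -rf app/cache/dev/* app/cache/prod/*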
I have installed Sonar 3.5.1 on a brand new PostgreSQL 9.2 database. The server seems to run fine, but sonar-runner (v2.2) fails with the following error:
Caused by: org.sonar.core.persistence.BadDatabaseVersion: The current batch process and the configured remote server do not share the same DB configuration.
- Batch side: jdbc:postgresql://10.1.0.210/sonar (postgres / *****)
- Server side: check the configuration at http://sonar.kopitoto/system
I am pretty confident that there is no other concurrent installation of Sonar pointing to the same database, because:
This is the first Sonar installation in this organization, ever
The value of sonar.core.id in the DB matches the value returned by the Sonar server:
Getting the value from the DB:
sonar=# SELECT text_value FROM properties WHERE prop_key = 'sonar.core.id';
text_value
----------------
20130525192736
(1 row)
Getting the value from the server:
$ curl http://sonar.kopitoto/api/server
<?xml version="1.0" encoding="UTF-8"?>
<server>
<id>20130525192736</id>
<version>3.5.1</version>
<status>UP</status>
</server>
Sonar-runner's properties:
sonar.host.url: http://sonar.kopitoto
sonar.jdbc.driverClassName: org.postgresql.Driver
sonar.jdbc.password: *****
sonar.jdbc.schema: public
sonar.jdbc.url: jdbc:postgresql://10.1.0.210/sonar
sonar.jdbc.username: postgres
Of course, the password is not five stars, but I checked it twice. If I change it a little bit, the runner fails earlier with an authentication error, so a password mismatch is ruled out.
Server's sonar.properties:
sonar.jdbc.username: postgres
sonar.jdbc.password: *****
sonar.jdbc.url: jdbc:postgresql://10.1.0.210/sonar
sonar.jdbc.driverClassName: org.postgresql.Driver
sonar.jdbc.schema: public
Again, the password above is not five stars, but I am pretty sure it is correct. The server logs say nothing about errors, and they show the database schema being initialized when I stop the server, drop the database, create an empty one, and then start the Sonar server again.
Am I missing something?
At this point, I am thinking that this is a bug in Sonar (probably in sonar-runner). Unfortunately, Sonar's issue-tracking system is littered with such reports, all closed with a "Not a bug" resolution. I guess I will be dismissed similarly if I reopen one of those issues.
So I hope I am really missing something here.