Heroku FlyingSphinx Configuration needed for ThinkingSphinx::Configuration.instance.controller.running? - thinking-sphinx

We ported our Rails 5.2 app to Heroku and were able to get almost everything working with FlyingSphinx.
Search and indexing work well but as a convenience to our users, we try to let them know when the daemon is down for service or if we're re-indexing.
Previously, we were able to use
ThinkingSphinx::Configuration.instance.controller.running?
But this always returns false on Heroku even if the daemon is running.
Our thinking_sphinx.yml doesn't specify file locations or where the pid file lives, so I suspect this may be the issue. However, I can't find anything in the advanced configuration docs (https://freelancing-gods.com/thinking-sphinx/v3/advanced_config.html) that explains what to put in thinking_sphinx.yml for Heroku/Flying Sphinx, if anything is necessary at all.
Our thinking_sphinx.yml looks like this now:
common: &common
  mem_limit: 40M
  64bit_timestamps: true
development:
  <<: *common
test:
  <<: *common
  mysql41: 9307
  quiet_deltas: true
staging:
  <<: *common
  quiet_deltas: true
production:
  <<: *common
  version: '2.2.11'
  quiet_deltas: true
Suggestions?

Ah, I've not had this requested before, but it's definitely possible:
require "flying_sphinx/commands" if ENV["FLYING_SPHINX_IDENTIFIER"]
ThinkingSphinx::Commander.call(
:running,
ThinkingSphinx::Configuration.instance,
{}
)
When this is called locally, it'll check the daemon via the pid file, but when it's called on a Heroku app using Flying Sphinx, it talks to the Flying Sphinx API to get the running state. Hence, it's important to only run the require call for Heroku-hosted environments - there's no point having local/test envs calling the Flying Sphinx API.
Setting file/pid locations in Heroku/Flying Sphinx environments is mostly not going to do anything, because Flying Sphinx overwrites those settings anyway to match the standardised approach on their servers. The exceptions are stopfiles/exceptions/etc., and those corresponding files are uploaded to Flying Sphinx so the daemon there can be configured appropriately.
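If the status is shown to users on every page load, it may also be worth caching the result briefly rather than hitting the Flying Sphinx API on each request. A minimal sketch in plain Ruby (the StatusCache class and its TTL are our own invention, not part of Thinking Sphinx):

```ruby
# Caches the result of a block for a fixed number of seconds.
# Intended use: wrap the ThinkingSphinx::Commander.call(:running, ...)
# check so a status banner doesn't query the API on every request.
class StatusCache
  def initialize(ttl: 30)
    @ttl        = ttl # seconds to keep a cached answer
    @value      = nil
    @fetched_at = nil
  end

  # Returns the cached value if it is still fresh; otherwise runs
  # the block, stores its result, and returns it.
  def fetch
    now = Time.now
    if @fetched_at.nil? || (now - @fetched_at) > @ttl
      @value      = yield
      @fetched_at = now
    end
    @value
  end
end
```

Usage would then be something like `SPHINX_STATUS.fetch { ThinkingSphinx::Commander.call(:running, ThinkingSphinx::Configuration.instance, {}) }`, with SPHINX_STATUS created once at boot.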

Related

Initialize Ghost database on new install

I am trying to set up a brand new Ghost blog on a Centos 7 server. I have Nginx, Node and Ghost installed and have written all of the necessary configuration files. It's pretty close to working, but I wanted to use MySQL instead of SQLite, so I created a new (blank) MySQL database called "ghost_db", set up a MySQL user called "ghost", gave the user permission for the database, and added these lines to config.js:
database: {
    client: 'mysql',
    connection: {
        host: 'localhost',
        user: 'ghost',
        password: 'mypassword',
        database: 'ghost_db',
        charset: 'utf8',
        filename: path.join(__dirname, '/content/data/ghost-dev.db')
    },
    debug: false
}, ...
When I try to start it, I get an error that suggests I use knex-migrator to initialize the database.
[john@a ghost]$ npm start
> ghost@1.18.4 start /var/www/ghost
> node index
[2017-12-10 00:08:00] ERROR
NAME: DatabaseIsNotOkError
CODE: MIGRATION_TABLE_IS_MISSING
MESSAGE: Please run knex-migrator init ...
However, some comments on Stack Exchange suggest that running knex-migrator may be unnecessary for this version of Ghost, and when I run knex-migrator, it also fails:
[john@a ghost]$ knex-migrator init
[2017-12-09 16:21:33] ERROR
NAME: RollbackError
CODE: SQLITE_ERROR
MESSAGE: delete from "migrations" where "name" = '2-create-fixtures.js' and "version" = 'init' and "currentVersion" = '1.18' - SQLITE_ERROR: no such table: migrations
...[omitted]
Error: SQLITE_ERROR: no such table: migrations
I think the problem may be that the "ghost_db" database I initially created is blank. The "ghost-dev.db" file that is pointed to in the config.js seems to be for SQLite, but I get the same error message if I switch config.js back to using an SQLite database. I don't know what the "migrations" table is. I found the schema that I think Ghost expects at [https://github.com/TryGhost/Ghost/blob/1.16.2/core/server/data/schema/schema.js], but I'm not sure how to use that to initialize the tables, etc., except for doing it very laboriously by hand. I'm stumped!
Knex-migrator is new in Ghost 1.0, which also uses a config.<env>.json file for configuration.
It sounds like you added your database config to a file called config.js, which was correct for Ghost versions before 1.0; however, since you were installing Ghost 1.0, your new connection details needed to live in config.production.json.
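For reference, the same connection details moved into Ghost 1.0's config.production.json would look roughly like this (a sketch based on the values in the question's config.js; the filename key is dropped because it is SQLite-specific, and other required keys such as url and server are omitted):

```json
{
  "database": {
    "client": "mysql",
    "connection": {
      "host": "localhost",
      "user": "ghost",
      "password": "mypassword",
      "database": "ghost_db",
      "charset": "utf8"
    }
  }
}
```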
You are correct that Ghost-CLI isn't intended for use on CentOS (it's for Ubuntu), but I'd be very surprised if it failed to install Ghost correctly. The issues with other OSs are mainly in the subtle differences between systemd i.e. keeping Ghost running.
The answer for me was just to not create the database at all and let Ghost do it as part of ghost install.
I took an alternate approach which proved successful, which was to install Ghost as an NPM module. The official Ghost instructions label this as an "advanced" process, but it wasn't too difficult to follow the instructions in the excellent nehalist.io and Stickleback blogs. There was also some useful guidance on the HugeServer knowledgebase. I think ultimately the problem was that the Ghost commandline interface (ghost-cli) wasn't designed for Centos 7.

Setup environment variable for doctrine and elasticsearch

How do I set up environment variables for Symfony?
When I run my project, it should detect the environment and act accordingly, for example:
http://production.com -> prod environment
http://localhost:9200 -> dev environment (Elasticsearch)
http://localhost:8000 -> dev environment (Doctrine/MySQL)
So if I make a MySQL request locally, it should go to http://localhost:8000, and if I make an Elasticsearch request, it should go to http://localhost:9200.
If the project runs in the production environment, the requests should go to:
http://production.com:9200 (Elasticsearch)
http://production.com:8000 (Doctrine/MySQL)
I think this can be done in parameters.yml, but I really don't understand how.
Can someone help me solve this problem?
Thanks a lot in advance.
I'm not exactly sure what the problem is here, so I'll give you a more general answer.
Symfony has a really great way to configure your project for different situations (or environments). You should have a look at the official documentation which explains things in depth.
By default, Symfony comes with 3 configurations for different environments:
app/config/config_dev.yml for development
app/config/config_prod.yml for production
app/config/config_test.yml for (unit) testing
Each of these config files can override settings from the base configuration file which is app/config/config.yml. You would store your general/common settings there. Whenever you need to override something for a specific environment, you just go to the environment config and change it.
Let's say you have the following base configuration in app/config/config.yml:
# Doctrine Configuration
doctrine:
    dbal:
        driver: pdo_mysql
        host: "%prod_database_host%"
        port: "%prod_database_port%"
        dbname: "%prod_database_name%"
        user: "%prod_database_user%"
        password: "%prod_database_password%"
        charset: UTF8
Now let's say you have 3 different databases, one for each environment: prod, dev, and test. The way to do this is to override the configuration in the environment's config file (let's say app/config/config_dev.yml):
# Doctrine Configuration
doctrine:
    dbal:
        driver: pdo_mysql
        host: "%dev_database_host%"
        port: "%dev_database_port%"
        dbname: "%dev_database_name%"
        user: "%dev_database_user%"
        password: "%dev_database_password%"
        charset: UTF8
Add the necessary %dev_*% parameters to your app/config/parameters.yml.dist and app/config/parameters.yml. Now, whenever you open your application using the dev environment, it will connect to the specified database in your parameters (%dev_database...%).
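For completeness, the matching entries in app/config/parameters.yml would look something like this (all values below are placeholders, not taken from a real project); the same keys go into parameters.yml.dist with sensible defaults:

```yaml
# app/config/parameters.yml
parameters:
    dev_database_host: 127.0.0.1
    dev_database_port: null
    dev_database_name: myapp_dev
    dev_database_user: root
    dev_database_password: secret
```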
This is pretty much it. You can do the same for any configuration that needs to change in a specific environment. You should definitely have a look at the documentation; it's explained in a straightforward way, with examples.

Server hangs all requests after a while

Our Rails 4.0 application (Ruby 2.1.2) is running on Nginx with Puma 2.9.0.
I recently noticed that all requests to our application hang after a while (usually 1 or 2 days).
When checking the log, which is set to debug mode, I noticed entries like the following stack up:
I, [2014-10-11T00:02:31.727382 #23458] INFO -- : Started GET "/" for ...
This means requests actually hit the Rails app but somehow aren't processed further, whereas normally the log would read:
I, [2014-10-11T00:02:31.727382 #23458] INFO -- : Started GET "/" for ....
I, [2014-10-11T00:02:31.729393 #23458] INFO -- : Processing by HomeController#index as HTML
My puma config is the following:
threads 16,32
workers 4
Our application is only for internal usage for now, so the RPM is very low, and none of the requests take longer than 2s.
What is/are the reasons that could lead to this problem? (puma config, database connection, etc.)
Thank you in advance.
Update:
After installing the gem rack_timer to log the time spent in each middleware, I realized that our requests had been stuck at the ActiveRecord::QueryCache middleware when the hang occurred, with a huge amount of time spent there:
Rack Timer (incoming) -- ActiveRecord::QueryCache: 925626.7731189728 ms
I removed this middleware for now and it seems to be back to normal. However, I understand the purpose of this middleware is to increase the performance, so removing it is just a temporary solution. Please help me find out the possible cause of this issue.
FYI, we're using mysql (5.1.67) with adapter mysql2 (0.3.13)
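(For anyone who wants to reproduce this measurement without the rack_timer gem, per-middleware timing can be approximated with a tiny Rack middleware along these lines; the TimingMiddleware class and the timing.elapsed_ms env key are our own names, shown as a sketch only:)

```ruby
# A minimal Rack middleware that records how long the rest of the
# middleware stack (and the app) takes to handle a request.
class TimingMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    status, headers, body = @app.call(env)
    # Store the elapsed time (in milliseconds) where a logger can pick it up.
    env["timing.elapsed_ms"] =
      (Process.clock_gettime(Process::CLOCK_MONOTONIC) - started) * 1000.0
    [status, headers, body]
  end
end
```

Inserting one of these around each suspect middleware narrows down where the time goes, which is essentially what rack_timer automates.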
It could be a symptom of RAM starvation due to the statement cache getting too big. We saw this in one of our apps running on Heroku. The PostgreSQL adapter's prepared statement limit defaults to 1000; lowering the limit eased the RAM usage for us with no noticeable performance degradation:
database.yml:
default: &default
  adapter: postgresql
  pool: <%= ENV["DB_POOL"] || ENV['MAX_THREADS'] || 5 %>
  timeout: 5000
  port: 5432
  host: localhost
  statement_limit: <%= ENV["DB_STATEMENT_LIMIT"] || 200 %>
However, searching for "activerecord querycache slow" turns up other possible causes, such as outdated versions of Ruby, Puma, or rack-timeout: https://stackoverflow.com/a/44158724/126636
Or maybe too large a value for read_timeout: https://stackoverflow.com/a/30526430/126636

Symfony2 and GoDaddy error message with incorrect information

I recently uploaded a Symfony2 project to GoDaddy and I'm having trouble accessing it because I get this message:
An exception occurred in driver: SQLSTATE[HY000] [1045] Access denied for user 'root'@'127.0.0.1' (using password: NO)
Obviously the message is clear, so I checked and rechecked my parameters.yml, but the message doesn't even match what I have there, which I have changed several times trying to fix this. This is my parameters.yml:
parameters:
    database_driver: pdo_mysql
    database_host: localhost
    database_port: null
    database_name: database1
    database_user: database1user
    database_password: mytestpassword
    mailer_transport: smtp
    mailer_host: 127.0.0.1
    mailer_user: null
    mailer_password: null
    locale: en
    secret: RandomTokenThatWillBeChanged
    debug_toolbar: true
    debug_redirects: false
    use_assetic_controller: true
So, either the error message isn't telling me what my real problem is, or it is loading the parameters from some cached version I haven't found yet. Any ideas of what else could cause this, or where a cached version of this data could be?
One of the best practices when developing a Symfony application is to make it configurable via a parameters.yml file. It contains information such as the database name, the mailer hostname, and custom configuration parameters.
As those parameters can be different on your local machine, your testing environment, your production servers, and even between developers working on the same project, it is not recommended to store it in the project repository. Instead, the repository should contain a parameters.yml.dist file with sensible defaults that can be used as a good starting point for everyone.
Then, whenever a developer starts working on the project, the parameters.yml file must be created by using the parameters.yml.dist as a template. That works quite well, but whenever a new value is added to the template, you must remember to update the main parameter file accordingly.
As of Symfony 2.3, the Standard Edition comes with a new bundle that automates the tedious work. Whenever you run composer install, a script creates the parameters.yml file if it does not exist and allows you to customize all the values interactively. Moreover, if you use the --no-interaction flag, it will silently fall back to the default values.
http://symfony.com/blog/new-in-symfony-2-3-interactive-management-of-the-parameters-yml-file
So, is it not possible that your parameters.yml is being overwritten from parameters.yml.dist?
You can also try to completely clear the cache:
In Dev:
php app/console cache:clear
In Production:
php app/console cache:clear --env=prod --no-debug

redmine email notifications with my own postfix server

I can't get redmine's email notifications to work. I am running my own mailserver with postfix using some mysql backend for the accounts. I added an account for redmine and tested it successfully using thunderbird. It is configured on port 25 using STARTTLS.
This is my config/configuration.yml of redmine:
production:
  email_delivery:
    delivery_method: :smtp
    smtp_settings:
      tls: true
      address: www.mydomain.org
      port: 25
      authentication: :login
      domain: mydomain.org
      user_name: tracker@mydomain.org
      password: PASSWORD
As I said, the credentials work for sure. The port is 25 and the address is correct as well. Redmine is running on the same server, but using localhost as address doesn't work either.
The error message redmine is giving me reads
... (Connection timed out - connect(2)).
In the postfix log files, I can find nothing, not even an attempt to login or send an email. I am using Ruby 1.8.7 patchlevel 3xx and Rails 2.3.5. It seems like there is a problem with the connection in general, and not with my mailserver.
What can I do to find the source of the problem? I am not very familiar with how ruby works.
I figured it out... Below the commented-out suggested configuration blocks in the configuration.yml file there is another, uncommented email block, which reads:
default:
email_delivery: ...
Even though I thought that uncommenting the production: block would override these settings, it started working the moment I inserted the email settings into this default block. This is a bit weird, but anyway - it now works like a charm.
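In other words, the block that finally made delivery work presumably looked roughly like this (reconstructed from the description above, reusing the settings from the question; treat it as a sketch, not a verified config):

```yaml
default:
  email_delivery:
    delivery_method: :smtp
    smtp_settings:
      tls: true
      address: www.mydomain.org
      port: 25
      authentication: :login
      domain: mydomain.org
      user_name: tracker@mydomain.org
      password: PASSWORD
```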