Symfony2 and GoDaddy error message with incorrect information - mysql

I recently uploaded a Symfony2 project to GoDaddy and I'm having trouble accessing it because I get the message:
An exception occured in driver: SQLSTATE[HY000] [1045] Access denied for user 'root'@'127.0.0.1' (using password: NO)
Obviously the message is clear, so I checked and rechecked my parameters.yml, and the message doesn't even match what I have there, even though I have changed it several times trying to fix this. This is my parameters.yml:
parameters:
    database_driver: pdo_mysql
    database_host: localhost
    database_port: null
    database_name: database1
    database_user: database1user
    database_password: mytestpassword
    mailer_transport: smtp
    mailer_host: 127.0.0.1
    mailer_user: null
    mailer_password: null
    locale: en
    secret: RandomTokenThatWillBeChanged
    debug_toolbar: true
    debug_redirects: false
    use_assetic_controller: true
So, either the error message doesn't tell me what my real problem is, or it is loading the parameters from some cached version that I haven't found yet. Any ideas of what else could cause this, or where a cached version of this data could be?

One of the best practices when developing a Symfony application is to
make it configurable via a parameters.yml file. It contains
information such as the database name, the mailer hostname, and custom
configuration parameters.
As those parameters can be different on your local machine, your
testing environment, your production servers, and even between
developers working on the same project, it is not recommended to store
them in the project repository. Instead, the repository should contain a
parameters.yml.dist file with sensible defaults that can be used as a
good starting point for everyone.
Then, whenever a developer starts working on the project, the
parameters.yml file must be created by using the parameters.yml.dist
as a template. That works quite well, but whenever a new value is
added to the template, you must remember to update the main parameter
file accordingly.
As of Symfony 2.3, the Standard Edition comes with a new bundle that
automates the tedious work. Whenever you run composer install, a
script creates the parameters.yml file if it does not exist and allows
you to customize all the values interactively. Moreover, if you use
the --no-interaction flag, it will silently fallback to the default
values.
http://symfony.com/blog/new-in-symfony-2-3-interactive-management-of-the-parameters-yml-file
So, isn't it possible that your parameters.yml is being overwritten from parameters.yml.dist?
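You can check whether that mechanism is active in your project by looking at your composer.json; in the Standard Edition the relevant parts look roughly like this (a sketch, the configured file path may differ in your project):
"scripts": {
    "post-install-cmd": [
        "Incenteev\\ParameterHandler\\ScriptHandler::buildParameters"
    ]
},
"extra": {
    "incenteev-parameters": {
        "file": "app/config/parameters.yml"
    }
}
If that handler is configured, composer install creates parameters.yml from parameters.yml.dist when it is missing and merges in new keys, so it is worth ruling out as a source of unexpected values.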
You can also try to completely clear the cache
In Dev:
php app/console cache:clear
In Production:
php app/console cache:clear --env=prod --no-debug
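If the console command is not available on the GoDaddy host (a common limitation on shared hosting), removing the cache directories by hand has the same effect. A sketch assuming the standard Symfony 2.x layout, run from the project root:
# adjust the path if your app is deployed under a different document root
rm -rf app/cache/prod/* app/cache/dev/*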

Related

Environment variable not found: DATABASE_URL. Prisma and mysql

I've developed an API with Node.js, Express, Prisma and MySQL, locally at first. Once it worked, I deployed my API on Heroku and added the ClearDB add-on to have a MySQL DB on Heroku.
The deployment is okay: when I go to my root URI I get the "Cannot GET /" message, and when I connect to my ClearDB database with MySQL Workbench I can see my tables, columns, etc.
The main problem comes from Prisma.
When I go to the "Run console" of my Heroku project, the command npx prisma init works perfectly, BUT when I type npx prisma migrate deploy (or dev), or if I try npx prisma db push, I get this error:
Error: Get Config: Schema parsing - Error while interacting with query-engine-node-api library
Error code: P1012
error: Environment variable not found: DATABASE_URL.
--> schema.prisma:10
|
9 | provider = "mysql"
10 | url = env("DATABASE_URL")
|
All my code is in a GitHub repo, and I've configured my .env (which is in the root folder of my server) like this:
DATABASE_URL="mysql://<username>:<my-password>@eu-cdbr-west-30.cleardb.net/heroku_36d295ebb6686a2"
NODE_ENV="development"
APP_SECRET="jwtsecret12"
NODE_PATH="./src"
ACCESS_TOKEN_SECRET="651651651848754cdfce9fz8ef4ef54se8f4sef48s69ef84e"
I hope you have all the information you need to help me :)
PS: Locally my project works perfectly.
Waiting for your answers, thank you very much!
Your .env file is irrelevant. It should not be used on Heroku (and should not be tracked in your repository).
ClearDB provides an environment variable called CLEARDB_DATABASE_URL, not DATABASE_URL. You can either change your code to use this variable instead of DATABASE_URL, or you can set DATABASE_URL to the same value:
Retrieve your database URL by issuing the following command:
heroku config | grep CLEARDB_DATABASE_URL
CLEARDB_DATABASE_URL => mysql://adffdadf2341:adf4234@us-cdbr-east.cleardb.com/heroku_db?reconnect=true
Copy the value of the CLEARDB_DATABASE_URL config variable.
If you’re using Ruby on Rails and the mysql2 gem, you will need to change the mysql:// scheme in the CLEARDB_DATABASE_URL to mysql2://
heroku config:set DATABASE_URL='mysql://adffdadf2341:adf4234@us-cdbr-east.cleardb.com/heroku_db?reconnect=true'
Adding config vars:
DATABASE_URL => mysql2://adffd...b?reconnect=true
Restarting app... done, v61.
The connection information for Heroku Postgres can change at any time, but since the ClearDB documentation provides the preceding guidance I would hope that it does not do so.
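For the first option (changing your code to use ClearDB's variable directly), the datasource block in schema.prisma can read CLEARDB_DATABASE_URL instead; a minimal sketch, assuming the rest of your schema stays unchanged:
datasource db {
  provider = "mysql"
  // ClearDB sets CLEARDB_DATABASE_URL automatically when the add-on is attached
  url      = env("CLEARDB_DATABASE_URL")
}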

Hyperledger Composer CLI Ping to a Business Network returns AccessException

I'm trying to learn Hyperledger Composer, but it seems to be a relatively new technology; I mean there are few tutorials and few solutions to a lot of questions. The tutorial does not mention possible error cases when following the commands, which means there is also no solution for those errors.
I have joined the Composer channel in their community chat (it looks like it runs on Discord or something) and asked the same question without a response; I have had a better experience here on SO.
This is the problem: I have deployed my business network, installed it, started it, created my network admin card and imported it. Then, to test that everything is OK, I have to run composer network ping --card NAME-OF-MY-ADMIN-CARD
And this error comes:
juan@JuanDeDios:~/proyectos/inovacion/a3-poliza-microservice$ composer network ping --card admin@a3-policy-microservice
Error: transaction returned with failure: AccessException: Participant 'org.hyperledger.composer.system.NetworkAdmin#admin' does not have 'READ' access to resource 'org.hyperledger.composer.system.Network#a3-policy-microservice#0.0.1'
Command failed
I think it has something to do with the permissions.acl file, so I gave everyone permission to everything so there would be no restrictions on anyone, and tried again, but it failed.
So I thought I had to uninstall my business network and create it again. I also deleted my .bna and my network card files so everything would be created again, but the same error resulted.
My other attempt was to update the business network, but that didn't work either; the same error happened, and I'm sure I didn't miss any step from the tutorial. I also followed the Playground tutorial. What I have not done is create another app with Yeoman, but I will if I don't find a solution to this problem that doesn't require me to create another app.
These were my steps:
1-. Created my app with Yeoman
yo hyperledger-composer:businessnetwork
2-. Selected Apache-2.0 for my license
3-. Created a3-policy-microservice as the name of the business network
4-. Created org.microservice.policy (yeah, I switched the names, but I'm totally aware of it)
5-. Generated my app with a template selecting the NO option
6-. Created my assets, participants and transactions
7-. Changed my permission rules to mine
8-. I generated the .bna file
composer archive create -t dir -n .
9-. Then installed my bna file
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-microservice@0.0.1.bna
10-. Then started my network and created my networkadmin card
composer network start --networkName a3-policy-network --networkVersion 0.0.1 --networkAdmin admin --networkAdminEnrollSecret adminpw --card PeerAdmin@hlfv1 --file networkadmin.card
11-. Imported my card
composer card import --file networkadmin.card
12-. Tried to ping my network
composer network ping --card admin@a3-poliza-microservice
And the error happens
Later I tried to create everything again, shutting down my Fabric, starting it again, and creating the network from the first step.
My other attempt was to change the permissions and upgrade my .bna network, but it failed too. I'm running out of options.
I hope this description is not too long to be ignored. Thanks in advance.
Thanks for the question!
The first possibility is that your network name is a3-policy-network but you're pinging a network called a3-poliza-microservice, which will still fail once you do get the correct ACLs in place (currently, the ACL error is the one you're trying to resolve).
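That is, assuming you started the network with --networkName a3-policy-network and the admin identity (as in your step 10), the imported card, and therefore the ping, should target the matching name:
composer network ping --card admin@a3-policy-network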
The procedure for the upgrade would normally be as follows:
After your step 12 (where you can't ping the business network due to restrictive ACL conditions, assuming you are using the right network name), you would:
Make the changes to your permissions.acl file to include the system ACLs this time, e.g.:
/**
 * Sample access control list.
 */
rule SystemACL {
  description: "System ACL to permit all access"
  participant: "org.hyperledger.composer.system.Participant"
  operation: ALL
  resource: "org.hyperledger.composer.system.**"
  action: ALLOW
}
rule NetworkAdminUser {
  description: "Grant business network administrators full access to user resources"
  participant: "org.hyperledger.composer.system.NetworkAdmin"
  operation: ALL
  resource: "**"
  action: ALLOW
}
rule NetworkAdminSystem {
  description: "Grant business network administrators full access to system resources"
  participant: "org.hyperledger.composer.system.NetworkAdmin"
  operation: ALL
  resource: "org.hyperledger.composer.system.**"
  action: ALLOW
}
Update the "version" field in your existing package.json in your Business Network project directory (ie need to change it next increment - eg. update the version property from 0.0.1 to 0.0.2.)
From the same directory, run the following command:
composer archive create --sourceType dir --sourceName . -a a3-policy-network@0.0.2.bna
Now install the new business network code first:
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-network@0.0.2.bna
Then perform the requisite upgrade step (single '-' for short form of the parameter):
composer network upgrade -c PeerAdmin@hlfv1 -n a3-policy-network -V 0.0.2
After a few seconds, ping the network again to see that the ACL changes are now in effect:
composer network ping -c admin@a3-policy-network

Initialize Ghost database on new install

I am trying to set up a brand new Ghost blog on a CentOS 7 server. I have Nginx, Node and Ghost installed and have written all of the necessary configuration files. It's pretty close to working, but I wanted to use MySQL instead of SQLite, so I created a new (blank) MySQL database called "ghost_db", set up a MySQL user called "ghost", gave the user permission for the database, and added these lines to config.js:
database: {
    client: 'mysql',
    connection: {
        host: 'localhost',
        user: 'ghost',
        password: 'mypassword',
        database: 'ghost_db',
        charset: 'utf8',
        filename: path.join(__dirname, '/content/data/ghost-dev.db')
    },
    debug: false
}, ...
When I try to start it, I get an error that suggests I use knex-migrator to initialize the database.
[john@a ghost]$ npm start
> ghost@1.18.4 start /var/www/ghost
> node index
[2017-12-10 00:08:00] ERROR
NAME: DatabaseIsNotOkError
CODE: MIGRATION_TABLE_IS_MISSING
MESSAGE: Please run knex-migrator init ...
However, some comments on Stack Exchange suggest that using knex-migrator may be unnecessary for this version of Ghost, and when I run knex-migrator, it also fails:
[john@a ghost]$ knex-migrator init
[2017-12-09 16:21:33] ERROR
NAME: RollbackError
CODE: SQLITE_ERROR
MESSAGE: delete from "migrations" where "name" = '2-create-fixtures.js' and "version" = 'init' and "currentVersion" = '1.18' - SQLITE_ERROR: no such table: migrations
...[omitted]
Error: SQLITE_ERROR: no such table: migrations
I think the problem may be that the "ghost_db" database I initially created is blank. The "ghost-dev.db" file that is pointed to in config.js seems to be for SQLite, but I get the same error message if I switch config.js back to using an SQLite database. I don't know what the "migrations" table is. I found the schema that I think Ghost expects at https://github.com/TryGhost/Ghost/blob/1.16.2/core/server/data/schema/schema.js, but I'm not sure how to use that to initialize the tables, etc., except by doing it very laboriously by hand. I'm stumped!
Knex-migrator is new in Ghost 1.0, which also uses a config.<env>.json file for configuration.
It sounds like you added your database config to a file called config.js, which was correct for Ghost < 1.0; however, it seems you were installing Ghost 1.0, and therefore your new connection details would have needed to live in config.production.json.
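A minimal sketch of the database section of config.production.json, reusing the credentials from your config.js (adjust to your real values):
{
  "database": {
    "client": "mysql",
    "connection": {
      "host": "localhost",
      "user": "ghost",
      "password": "mypassword",
      "database": "ghost_db"
    }
  }
}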
You are correct that Ghost-CLI isn't intended for use on CentOS (it's for Ubuntu), but I'd be very surprised if it failed to install Ghost correctly. The issues with other OSs are mainly in the subtle differences between systemd i.e. keeping Ghost running.
The answer for me was just to not create the database at all and let Ghost do it as part of ghost install.
I took an alternate approach which proved successful, which was to install Ghost as an NPM module. The official Ghost instructions label this as an "advanced" process, but it wasn't too difficult to follow the instructions in the excellent nehalist.io and Stickleback blogs. There was also some useful guidance in the HugeServer knowledge base. I think ultimately the problem was that the Ghost command-line interface (ghost-cli) wasn't designed for CentOS 7.

go-ethereum - geth - puppeth - ethstat remote server : docker: command not found

I'm trying to set up a private Ethereum test network using Puppeth (as Péter Szilágyi demoed at Ethereum Devcon3 in 2017). I'm running it on a MacBook Pro (macOS Sierra).
When I try to set up the ethstats network component I get a "docker configured incorrectly: bash: docker: command not found" error. I have Docker running and I can use it fine in the terminal, e.g. docker ps.
Here are the steps I took:
What would you like to do? (default = stats)
1. Show network stats
2. Manage existing genesis
3. Track new remote server
4. Deploy network components
> 4
What would you like to deploy? (recommended order)
1. Ethstats - Network monitoring tool
2. Bootnode - Entry point of the network
3. Sealer - Full node minting new blocks
4. Wallet - Browser wallet for quick sends (todo)
5. Faucet - Crypto faucet to give away funds
6. Dashboard - Website listing above web-services
> 1
Which server do you want to interact with?
1. Connect another server
> 1
Please enter remote server's address:
> localhost
DEBUG[11-15|22:46:49] Attempting to establish SSH connection server=localhost
WARN [11-15|22:46:49] Bad SSH key, falling back to passwords path=/Users/xxx/.ssh/id_rsa err="ssh: cannot decode encrypted private keys"
The authenticity of host 'localhost:22 ([::1]:22)' can't be established.
SSH key fingerprint is xxx [MD5]
Are you sure you want to continue connecting (yes/no)? yes
What's the login password for xxx at localhost:22? (won't be echoed)
>
DEBUG[11-15|22:47:11] Verifying if docker is available server=localhost
ERROR[11-15|22:47:11] Server not ready for puppeth err="docker configured incorrectly: bash: docker: command not found\n"
Here are my questions:
Is there any documentation / tutorial describing how to set up this remote server properly, or on Puppeth in general?
Can I not use localhost as the "remote server address"?
Any ideas on why the docker command is not found (it is installed and running and I can use it fine in the terminal)?
Here is what I did.
For Docker, you have to use the docker-compose binary. You can find it here.
Furthermore, you have to be sure that an SSH server is running on your localhost and that keys have been generated.
I didn't find any documentation for Puppeth whatsoever.
I think I found the root cause of this problem. The SSH daemon is compiled with a default PATH. If you SSH to a machine with a specific command (rather than an interactive shell), you get that default PATH. This does not include /usr/local/bin, for example, which is where docker lives in my case.
I found the solution here: https://serverfault.com/a/585075:
edit /etc/ssh/sshd_config and make sure it contains PermitUserEnvironment yes (you need to edit this with sudo)
create a file ~/.ssh/environment with the path that you want, in my case:
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
When you now run ssh localhost env you should see a PATH that matches whatever you put in ~/.ssh/environment.
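For example, a quick way to check the PATH a non-interactive SSH session (the kind Puppeth opens) will see, assuming docker lives in /usr/local/bin as above:
# print the PATH seen by a non-interactive SSH session
ssh localhost env | grep '^PATH='
# should print /usr/local/bin/docker once the change has taken effect
ssh localhost which docker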

Using MySQL on Openshift with Symfony 2

I added MySQL and phpMyAdmin cartridges to my OpenShift PHP app.
After the MySQL cartridge was added, I saw a page which says:
Connection URL: mysql://$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/
but I have no idea what it means.
When I access the MySQL database through phpMyAdmin,
I see 127.8.111.1 as the DB host, so I configured my Symfony 2 app (parameters.yml):
parameters:
    database_driver: pdo_mysql
    database_host: 127.8.111.1
    database_port: 3306
    database_name: <some_database>
    database_user: admin
    database_password: <some_password>
Now when I access my web page it throws an error, which I believe is related to the MySQL connection. Can someone show me the proper way of doing the above?
EDIT: It seems the MySQL connection works fine, but somehow
Error 101 (net::ERR_CONNECTION_RESET): Unknown error
is thrown.
The code I use, which works very well to make my apps work both on localhost and OpenShift without changing database config parameters every time I move between them, is this:
<?php
# app/config/params.php
if (getEnv("OPENSHIFT_APP_NAME") != '') {
    $container->setParameter('database_host', getEnv("OPENSHIFT_MYSQL_DB_HOST"));
    $container->setParameter('database_port', getEnv("OPENSHIFT_MYSQL_DB_PORT"));
    $container->setParameter('database_name', getEnv("OPENSHIFT_APP_NAME"));
    $container->setParameter('database_user', getEnv("OPENSHIFT_MYSQL_DB_USERNAME"));
    $container->setParameter('database_password', getEnv("OPENSHIFT_MYSQL_DB_PASSWORD"));
}
This tells the app that, if it is running in the OpenShift environment, it needs to load a different username, host, database, etc.
Then you have to import this file (params.php) from your app/config/config.yml file:
imports:
    - { resource: parameters.yml }
    - { resource: security.yml }
    - { resource: params.php }
...
And that's it. You will never have to touch this file or parameters.yml when you move between OpenShift and localhost.
Connection URL: mysql://$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/
OpenShift exposes environment variables to your application containing the host and port information for your database. You should reference these environment variables in your configuration instead of hard-coding values. I am not a Symfony expert, but it looks to me like you would need to do the following in order to use this information in your app:
Create a pre-start hook for your application and export variables in Symfony's expected format. Add the following to the .openshift/action_hooks/pre_start_php-5.3 file in your application's git repo:
export SYMFONY__DATABASE__HOST=$OPENSHIFT_MYSQL_DB_HOST
export SYMFONY__DATABASE__PORT=$OPENSHIFT_MYSQL_DB_PORT
Symfony uses this pattern to identify external configuration in the environment, and will make this configuration available for use in your YAML configuration:
parameters:
    database_driver: pdo_mysql
    database_host: "%database.host%"
    database_port: "%database.port%"
EDIT:
Another option to expose this information for use in the YAML configuration is to import a php file in your app/config/config.yml:
imports:
    - { resource: parameters.php }
In app/config/parameters.php:
$container->setParameter('database.host', getEnv("OPENSHIFT_MYSQL_DB_HOST"));
$container->setParameter('database.port', getEnv("OPENSHIFT_MYSQL_DB_PORT"));